Software

Stacklet Applies Generative AI to Simplify Cloud Governance

Stacklet today provided early access to Jun0, a tool that leverages generative artificial intelligence (AI) to improve cloud governance and reduce costs. Stacklet CEO Travis Stanfield said the goal is to automatically surface recommendations and implement policies using a mix of large language models (LLMs) trained on data collected via the company's Stacklet AssetDB database. Accessed via a natural language interface, Jun0 makes it possible to declaratively govern cloud computing environments: Text-based queries generate policies that can be implemented as code, which eliminates the need for specialized programming expertise, he added. IT teams can use text to launch queries pertaining to any operations, cost, security or compliance issue and then visually test the resulting policies in a dry run before implementing them at scale. (A hypothetical sketch of what such a generated policy might compile down to appears at the end of this item.)

In effect, Jun0 substantially reduces the level of expertise required to successfully manage cloud computing environments by making it simpler to create governance policies, noted Stanfield. DevOps teams are generally tasked with making sure cloud computing environments are optimally managed using policies that are usually implemented as code within a DevOps workflow. Implementing policy-as-code, however, typically involves mastering a domain-specific programming language. Stacklet is now making a case for a higher level of abstraction that eliminates the need to master yet another programming language to govern cloud computing environments.

It's still early days for the adoption of generative AI within DevOps workflows, but it's already clear that implementing best practices is about to become substantially easier. In essence, DevOps practices are about to be democratized in a way that reduces the cognitive load required to implement them. In addition to increasing the number of application environments a DevOps team can effectively manage, generative AI will make DevOps accessible to a wider range of organizations that previously would not have been able to hire and retain software engineers. Many of those software engineers should also be able to spend more time addressing complex issues rather than, for example, writing scripts to ensure that only certain classes of workloads are allowed to run on a particular cloud service within a given period of time to keep costs down.

Unfortunately, DevOps teams are already playing catch-up when it comes to access to generative AI tools. Developers are already taking advantage of generative AI to create more code faster. As that code moves through DevOps pipelines, it's apparent the overall size of the codebase that DevOps teams are required to manage is only going to increase. Most organizations are not going to be able to hire a small army of software engineers to manage that codebase, so the tooling provided to existing DevOps teams will need to improve. The issue now is narrowing the gap between today and the day next-generation AI tools become generally available. One way or another, however, it's clear that the way DevOps is managed will never be the same again.
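The announcement doesn't show Jun0's generated policy format, so here is a hedged sketch, in Python with boto3, of the kind of guardrail a natural-language query like "stop running instances missing an owner tag" might compile down to. The tag key, region and dry-run flow are illustrative assumptions, not Jun0's actual output:

```python
# Hypothetical governance policy sketch (not Jun0's real output):
# find running EC2 instances missing a required tag, dry-run first.
import boto3

REQUIRED_TAG = "owner"  # assumed governance rule for illustration
ec2 = boto3.client("ec2", region_name="us-east-1")

def find_noncompliant_instances():
    """Return IDs of running instances that lack the required tag."""
    noncompliant = []
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    noncompliant.append(inst["InstanceId"])
    return noncompliant

def enforce(instance_ids, dry_run=True):
    """Dry run mirrors the 'visually test before rollout' workflow."""
    if not instance_ids:
        return
    if dry_run:
        print(f"[dry run] would stop: {instance_ids}")
    else:
        ec2.stop_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    enforce(find_noncompliant_instances(), dry_run=True)
```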

Read More

Survey Surfaces Benefits of Applying AI to FinOps

A survey of 200 enterprise IT decision-makers published this week found organizations that have infused artificial intelligence (AI) into financial operations (FinOps) workflows to reduce IT costs are 53% more likely to report cost savings of more than 20%. Conducted by the market research firm Foundry on behalf of Tangoe, a provider of tools for managing IT and telecommunications expenses, the survey found organizations that embraced FinOps without any AI capabilities averaged less than 10% in cost savings. The top three drivers for adopting FinOps/cloud cost management programs are the need to increase cloud resource production and performance (70%), budget reductions (60%) and rising costs (58%), and simpler overall program management (50%), the survey found. Major benefits included productivity savings (46%), cost savings (43%) and reduced security risks (43%). Nearly two-thirds of respondents cited service utilization and right-sizing of services as another reason to embrace FinOps. FinOps describes a methodology for embedding programmatic controls within DevOps workflows to reduce costs.

In the face of increased economic headwinds, IT leaders are looking to reduce cloud computing costs, but it's turning out to be more challenging than many of them anticipated. Cloud infrastructure is typically provisioned by developers using infrastructure-as-code (IaC) tools with little to no supervision. The reason is that developers have long argued that waiting for an IT team to provision cloud infrastructure takes too long, and that they are more productive if they provision cloud infrastructure themselves. However, after ten years of cloud computing, it's become apparent that a lot of cloud infrastructure resources are wasted. Developers who don't pay the monthly bills for cloud services tend to view available infrastructure resources as essentially infinite. It's usually not until someone from the finance department starts raising cost concerns that developers even become aware there might be an issue.

The challenge is that adopting FinOps best practices is not quite as easy as it might seem. In fact, more than half (54%) of survey respondents cited challenges in building the right process and human support systems for FinOps into workflows that have been in place for years. Chris Ortbals, chief product officer for Tangoe, said the simplest path to FinOps is to rely on a software-as-a-service (SaaS) platform designed from the ground up to leverage AI to help IT teams manage cloud computing and telecommunications expenses both before and after applications are deployed.

Each DevOps team will ultimately need to determine to what degree it will implement metrics to foster more efficient consumption of cloud computing resources; a simple example of surfacing those costs appears below. The more aware of those costs DevOps teams are, the more likely they are to make better decisions about what types of workloads should run where and, just as importantly in the age of the cloud, at what time, given all the pricing options available. Developers, of course, tend to jealously guard their prerogatives. Convincing them to give up their ability to provision cloud infrastructure on demand is going to be a challenge, at least until someone makes it plain how much all those cloud instances wind up costing the organization each and every month.
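As a concrete illustration of that cost-awareness point, here is a minimal sketch that surfaces last month's spend per team via the AWS Cost Explorer API in boto3. The "team" cost-allocation tag is an assumption for illustration; any tag an organization uses for chargeback would work the same way:

```python
# Minimal FinOps visibility sketch: last month's unblended cost,
# grouped by an assumed "team" cost-allocation tag.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint

end = date.today().replace(day=1)                   # first day of this month
start = (end - timedelta(days=1)).replace(day=1)    # first day of last month

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "team$payments"
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(amount):,.2f}")
```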

Read More

The Growing Impact of Generative AI on Low-Code/No-Code Development

No-code/low-code platforms, once a disruptor in the realm of software development, are now embracing the capabilities of generative AI to create even more dynamic experiences. This union of convenience and innovation redefines how users interact with their software. Imagine a scenario where crafting complex instructions like "Deploy endpoint protection to noncompliant devices" becomes as simple as conversing with your application. The fusion of generative AI and no-code/low-code platforms empowers users to shape their software's behavior without delving into intricate technicalities. Users can input prompts such as "Generate a code snippet for converting date formats" or "Create a workflow that automates inventory updates" (a sketch of what the first prompt might return appears below). By translating natural language into action, this approach streamlines development and fosters creativity.

An Amalgamation of Generative AI and No-Code/Low-Code

Beyond the buzzwords, the amalgamation of generative AI with no-code/low-code platforms offers tangible benefits. The efficiency gains that occur when users can sidestep manual configuration and directly communicate their intentions are both remarkable and unprecedented. Accessibility is enhanced, enabling non-technical individuals to actively participate in application development. Moreover, innovative use cases emerge, allowing organizations to streamline complex workflows with ease.

As with any transformative technology, challenges emerge alongside the benefits. Privacy concerns loom large when dealing with data input into generative AI models; striking a balance between providing valuable insights and safeguarding sensitive information becomes paramount. Additionally, the inherently non-deterministic nature of generative AI can lead to varying outcomes, requiring careful consideration of use cases to ensure reliable results.

As this collaboration matures, the landscape of software development is poised for significant change. Conversational interfaces that empower users to dictate software behaviors will continue to evolve, reducing implementation and configuration overhead. Imagine a future where complex workflows are summoned with a simple request or applications are custom-built from natural language blueprints. This shift will not only streamline development but also democratize technology, making it accessible to a broader audience.

The integration of generative AI with no-code/low-code platforms allows users to express their creativity more freely. By enabling natural language prompts like "Design an app to manage inventory with automatic restocking" or "Build a workflow that offboards a user across Google, Slack, and Salesforce," users can drive software behaviors without being constrained by technical jargon. This fusion redefines the efficiency of software interaction: Tasks that previously required meticulous configuration or coding can now be executed through simple prompts. Whether generating email templates, creating data transformation scripts or orchestrating multi-step workflows, the convenience of natural language input eliminates barriers and accelerates results.

A Democratic Approach

Looking forward, the integration of generative AI in no-code/low-code platforms points toward a more democratic approach to software development. This convergence will enable a broader range of individuals to participate actively, regardless of their coding expertise. By simplifying the process and making it more inclusive, we're shaping a future where software truly adapts to human intent.
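To make the "converting date formats" prompt above concrete, here is the kind of snippet such a request might return. It is a generic Python illustration, not the output of any particular platform:

```python
# The sort of snippet a "Generate a code snippet for converting date
# formats" prompt might produce; purely illustrative.
from datetime import datetime

def convert_date_format(date_str, src_fmt="%m/%d/%Y", dst_fmt="%Y-%m-%d"):
    """Convert a date string from one format to another."""
    return datetime.strptime(date_str, src_fmt).strftime(dst_fmt)

print(convert_date_format("10/05/2023"))  # -> 2023-10-05
```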
As businesses continue to harness the potential of generative AI and no-code/low-code platforms, adaptation and learning will be key. Embracing this transformation requires a shift in mindset and an understanding that software can be molded through conversations and prompts. As the technology matures, the barriers between user intent and software behavior will fade, ushering in an era where technological fluency is defined by our ability to communicate rather than code. Speculating on how this shift will impact the day-to-day […]

Read More

Microsoft kills Python 3.7 ¦ … and VBScript ¦ Exascaling ARM on Jupiter

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let's work out what really matters. This week: VS Code drops support for Python 3.7, Windows drops VBScript, and Europe plans the fastest ARM supercomputer.

1. Python Extension for Visual Studio Code Kills 3.7

First up this week: Microsoft deprecates Python 3.7 support in Visual Studio Code's Python extension. It'll probably continue to work for a while, though (emphasis on the "probably").

Analysis: Obsolete scripting language is obsolete

If you're still using 3.7, why? It's time to move on: 3.12 is the new hotness. Even 3.8 is living on borrowed time.

Priya Walia: Microsoft Bids Farewell To Python 3.7

"Growing influence of the Python language"
Python 3.7, despite reaching its end of life in June, remains a highly popular version among developers. … Microsoft expects the extension to continue functioning unofficially with Python 3.7 for the foreseeable future, but there are no guarantees that everything will work smoothly without the backing of official support. … Microsoft's recent launch of Python scripting within Excel underscores the growing influence of the Python language across various domains. The move opens up new avenues for Python developers to work with data within the popular spreadsheet software. However, it's not all smooth sailing, as recent security flaws in certain Python packages have posed challenges.

Python? Isn't that a toy language? This Anonymous Coward says otherwise:

Ha, tell that to Instagram, or Spotify, or Nextdoor, or Disqus, or BitBucket, or DropBox, or Pinterest, or YouTube. Or to the data science field, or mathematicians, or the Artificial Intelligence crowd. … Our current production is running 3.10 but we're looking forward to moving it to Python 3.11 (3.12 being a little too new) because [of] the speed increases of up to 60%. … If you're still somewhere pre-3.11, try to jump straight to 3.11.6. … The main improvements … are interpreter and compiler improvements to create faster bytecode for execution, sometimes new features to write code more efficiently, and the occasional fix to remove ambiguity. I've been running Python in production for four years now, migrating from 3.8 -> 3.9 -> 3.10 and soon to 3.11, and so far we have never had to make any changes to our codebase to work with a new update of the language.

And sodul says Python's reputation for breaking backward compatibility is old news:

Most … code that was written for Python 3.7 will run just fine in 3.12. … We upgrade once a year and most issues we have are related to third-party SDKs that are too opinionated about their own dependencies. We do have breaking changes, but mostly we find pre-existing bugs that get uncovered thanks to better type annotation, which is vital in larger Python projects.

2. Windows Kills VBScript

Microsoft is also deprecating VBScript in the Windows client. It'll probably continue to work for a while as an on-demand feature, though (emphasis on the "probably").

Analysis: Obsolete scripting language is obsolete

If you're still using VBScript, why? It's time to move on: PowerShell is the new hotness—it's even cross-platform.

Sergiu Gatlan: Microsoft to kill off VBScript in Windows

"Malware campaigns"
VBScript (also known as Visual Basic Script or Microsoft Visual Basic Scripting Edition) is a programming language similar to Visual Basic or Visual Basic for Applications (VBA) and […]
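As a practical footnote to the upgrade advice quoted in the Python item above, a team that wants to enforce a version floor can add a trivial interpreter guard to an entry point. The 3.11 floor below simply mirrors the commenter's advice; pick whatever floor your codebase actually supports:

```python
# Minimal interpreter-version guard: refuse to start on an older Python.
import sys

MIN_VERSION = (3, 11)  # assumed floor, per the "jump straight to 3.11" advice

if sys.version_info < MIN_VERSION:
    sys.exit(
        f"Python {'.'.join(map(str, MIN_VERSION))}+ required; "
        f"running {sys.version.split()[0]}"
    )
print("interpreter OK")
```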

Read More

How Event-Driven Architectures Drive Real-Time Operations

People, events, the human brain—in fact, the whole world—operate in real-time, but businesses have struggled to keep up. With the help of event-driven architecture (EDA) and the open API economy, businesses can now do the same. The power of an event-driven world means that, after years of geopolitical events affecting how businesses operate, many businesses are starting to uncover real value by truly being able to operate in real-time. Whether in retail, manufacturing, energy and resources or financial services, locating and responding to vital issues within a company's supply chains or product lines in real-time is key to success. Amazon CTO Dr. Werner Vogels said that "the world is event driven" in his keynote speech at AWS re:Invent in December 2022. Now, new IDC research reveals that nine out of 10 of the world's largest companies will deploy real-time intelligence driven by event-streaming technologies by 2025.

But What's the Secret Behind Such Success?

A recent IDC Infobrief, sponsored by Solace, surveyed over 300 enterprise IT professionals in North America, Asia and Europe, all of whom work for large companies implementing or considering EDA. The results are quite telling: An overwhelming 93% of respondents at companies that have deployed EDA across multiple use cases said EDA has either met or exceeded their expectations. In addition to the technical advantages of EDA, most businesses also see clear business benefits: A full 23% of respondents reported increased productivity, 22% cited better customer acquisition and 18% saw revenues increase as a result of EDA efforts.

1. Get Support From the Top to Ensure Alignment Throughout

Expanding the footprint of EDA across the enterprise is a journey, and every journey starts by assembling those who are critical to its overall success. Business sponsorship and engaging key stakeholders are vital, especially in the early days of EDA adoption: 56% of respondents in the early EDA stages cited this as a priority when ROI and business benefits may not be immediately clear. The impact of well-aligned C-suite, operational and technical teams is reflective of business-level digital maturity, too. As 35% of respondents at an advanced stage of EDA rollout felt C-level support was critical, it comes as no surprise that respondents with higher levels of EDA maturity also have higher levels of overall digital maturity, including digital strategy and change management support.

2. Tackle Complexities Head-On With the Backing of IT

As EDA becomes more pervasive across an organization, demands on IT become more sophisticated, requiring a deepening of EDA skills in the IT organization, notably among DevOps teams, developers and architects. More than one-third (36.1%) of respondents cited a lack of skills to execute EDA as a hurdle to adoption. Approaches to logging, governance and oversight (30.7%) can also become increasingly challenging and must be thought through carefully. This is where EDA providers themselves need to step up and provide adequate training and a certification path for architects, DevOps engineers and developers looking to gain the fundamental knowledge and skills to design and implement event-driven systems. This should include technical details such as understanding the various design patterns for EDA, microservices choreography versus orchestration, the saga pattern and RESTful microservices.
Education should also clearly define and demonstrate key concepts and tools for EDA success, such as the event portal, topic hierarchy best practices and the event mesh (a minimal sketch of the underlying publish/subscribe pattern appears below). […]
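To ground those concepts, here is a minimal, in-process publish/subscribe sketch in Python. It is a toy stand-in for a real event broker or event mesh (the class, topics and handlers are invented for illustration), but it shows the decoupling EDA builds on: producers emit events to topics without knowing who consumes them.

```python
# Toy in-process event bus illustrating publish/subscribe decoupling.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Producers don't know (or care) who consumes the event.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
# Hierarchical topic names echo topic-hierarchy best practices.
bus.subscribe("orders/created", lambda e: print("billing:", e["order_id"]))
bus.subscribe("orders/created", lambda e: print("shipping:", e["order_id"]))
bus.publish("orders/created", {"order_id": "A-1001"})
```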

Read More

Why AIOps is Critical for Networks

Speaker 1: This is Techstrong TV.

Mitch Ashley: I have the great pleasure of being joined by Andrew Colby. Andrew is VP of AIOps at Vitria. Welcome, Andrew.

Andrew Colby: Good afternoon, Mitch. And thank you.

Mitch Ashley: It's a great topic. I'm excited to talk with you about it. We could go down the share-war-stories-in-telco path, which really could be about 10 episodes of a different show, but today in the telco environment, or just in the business environment in general, the economic conditions, competitive pressures, looking for areas where we can get more for less, there are a lot of different parameters that have shifted or changed or maybe tightened that we're currently working within. I'd love to get your perspective on that.

Andrew Colby: Certainly, and thank you. Yeah, I'd say we see cautious optimism. Obviously, I'm based in the US, in the DC metro area in Maryland. And in the US, the government entities and quasi-governmental entities have been tightening the economic structure in order to tame inflation. Fortunately, that has not had the recessionary effect on our economy that was feared, but people are still cautious, businesses are still cautious. That said, it's hard to hire people and it's really hard to hire technical people. So a lot of companies are continuing to look at how to leverage technologies and automation to build efficiency so that they can do more with either the same number of people or re-task their people to higher-value purposes, and let the technology do some of the more menial and mundane tasks. And we can explore this a little bit, especially in these new, complex service delivery and network environments. It's very difficult for me to imagine how an engineer who's gone through anywhere from two to eight years of college education is going to really be happy spending their days collecting a lot of data across network, container management, VM and other infrastructure systems to figure out what's going on. I mean, really, that's where a lot of the automation provides a significant amount of value: Letting the engineers do the smart, difficult things that we want humans to do.

Mitch Ashley: And a lot of pressures around mean time to recovery, even looking at resiliency: How do we stand up under a stressful situation, whether it be a security attack that might be going on or some unobserved condition that our systems and networks have never been under?

Andrew Colby: Oh, there's so much of that. So much is changing. It's not just a person like you or me behind a smartphone who can actually report that there's a problem; it's sensors and equipment that won't necessarily report right away, so problems need to be detected. So that's a whole other dimension that service providers and large enterprise IT organizations are under, which is to be able to have this kind of real-time awareness of what's going on. Whether the service is real-time, like the video conference that we're on, or not, there really is a desire and expectation to have real-time awareness of the service delivery, to be able to detect what's going on, react to it, address it before the user, whoever that is, the customer, the employee […]

Read More

Senser Unveils AIOps Platform Using eBPF to Collect Data

Senser emerged from stealth this week to launch an artificial intelligence for IT operations (AIOps) platform that leverages the extended Berkeley Packet Filter (eBPF) running in the kernel of the Linux operating system to collect data from IT environments. Fresh from raising $9.5 million in funding, Senser CEO Amir Krayden said the company's namesake platform applies machine learning algorithms to that data to identify issues that could lead to outages. Those insights are surfaced using graph technology to make it simpler to both observe IT environments and triage issues at scale, because the AIOps platform runs its collection processes at the kernel level rather than in user space. The approach provides IT teams with a more efficient and holistic approach to observability at a level of scale legacy platforms can't achieve, said Krayden. The use of machine learning algorithms also reduces the cognitive load on DevOps teams because issues involving, for example, performance degradation are automatically surfaced, he added. In addition, the company is working toward adding generative AI capabilities to provide summaries that explain what IT events have occurred, noted Krayden. (A sketch of the kind of kernel-level data collection eBPF enables appears at the end of this item.)

In effect, eBPF changes the way software interacts with the operating system because it enables networking, storage and observability software to scale to much higher levels of throughput by running in the kernel rather than in user space. That's especially critical for observability and AIOps platforms that need to dynamically process massive amounts of data in near-real-time. As the number of organizations running the latest versions of Linux continues to increase, more hands-on experience with eBPF will be gained. IT teams may not need to concern themselves with what is occurring in the kernel of the operating system, but they do need to understand how eBPF ultimately reduces the total cost of running IT at scale.

AI and graph technology, in combination with eBPF, will fundamentally change how IT is implemented and managed. The current complexity of application environments already exceeds the ability of IT teams to cost-effectively manage them at scale, so the need for a different approach is apparent. Many IT environments are already too complex for IT personnel to manage without the help of some form of AI. It's not clear precisely how much of IT management AI will automate, but the need for humans to manage and supervise these environments is not likely to disappear any time soon. However, the level of scale at which an IT environment can be effectively managed is changing as AI makes it easier to identify issues and understand their impact. Too often today, there are simply too many dependencies within an IT environment to keep track of using legacy monitoring tools that only track a set of pre-defined metrics. It may be a while before AI is pervasively employed across IT environments, but it's now more a question of when rather than if. The issue now is determining where the interface between the humans and the machines that are jointly managing IT environments lies.
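As a hedged illustration of that kernel-level collection, here is a minimal eBPF sketch using the BCC toolkit: it counts execve() calls per process inside the kernel and reads the results from user space. It requires root and a recent Linux kernel with BCC installed, and it is emphatically a generic example, not Senser's code:

```python
# Minimal eBPF data-collection sketch via BCC (https://github.com/iovisor/bcc).
import time
from bcc import BPF

PROGRAM = r"""
BPF_HASH(counts, u32, u64);          // map: PID -> number of execve() calls

int trace_execve(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);           // runs in the kernel, not user space
    return 0;
}
"""

b = BPF(text=PROGRAM)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")

print("Tracing execve() for 10 seconds...")
time.sleep(10)

# Read the kernel-side map from user space and report per-PID counts.
for pid, count in b["counts"].items():
    print(f"pid={pid.value} execve_calls={count.value}")
```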

Read More

Raspberry Pi 5: Faster, Better, Stronger — Spendier

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let's work out what really matters. In a cheeky extra post this week: Everyone's favorite single-board ARM computer, the Raspberry Pi, has a new generation coming soon. Compared to the '4, RPi5 has double the performance, quadruple the base RAM and far more capable I/O.

Analysis: And you'll even be able to buy one

The pandemic completely messed up the Raspberry Pi Foundation's supply chains, meaning they had to focus on supplying companies who'd forward-bought the devices. This time, Eben Upton's crew are trying to get back to their roots, promising—for the first couple of months—to sell RPi5s only to individuals.

What's the story? Alaina Yee reports—"Raspberry Pi 5 just got announced":

"I can't wait"
Forget the holiday pie, this is what I want on my table for Thanksgiving. … It looks totally badass. … Not only does the Raspberry Pi 5 appear ready to deliver a sizable step up in performance compared to its 2019 predecessor, but its new silicon was designed in-house. … The Raspberry Pi 5 is leaning hard into high-octane mini-computing. … You can expect the Raspberry Pi 5 to be about two to three times faster. Memory bandwidth also doubles. … And … a new official first-party operating system will be launching … in mid-October. Called Raspberry Pi OS, it's based on the Linux Debian distro, as well as the Raspbian derivative that's existed for years. … I can't wait.

Speeds and feeds? Brad Linder's got 'em—"Raspberry Pi 5 offers 2X the performance":

"4x ARM Cortex-A76"
The new Raspberry Pi 5 is a single-board computer that's a major upgrade over the Raspberry Pi 4 … in just about every way. … At launch, there will be two configurations available: a model with 4GB of RAM that sells for $60 and an 8GB version priced at $80. That means the starting model has twice as much RAM as a $35 Raspberry Pi 4. … At the heart of the new computer is a new … 16nm chip featuring 4x ARM Cortex-A76 CPU cores @ 2.4 GHz, 512KB per-core L2 cache, 2MB L3 cache, and VideoCore VII graphics with support for dual 4K/60Hz HDMI displays. [It] also features 32-bit LPDDR4X 4267MT/s memory … 2x micro HDMI (4K/60Hz), 2x USB 3.0 Type-A, 2x USB 2.0 Type-A, 1x Gigabit Ethernet with PoE support, 1x USB-C power input, 1x microSD card reader. … There are also two 4-lane MIPI interfaces.

Horse's mouth? Eben Upton—"Introducing: Raspberry Pi 5!":

"We're incredibly grateful"
Virtually every aspect of the platform has been upgraded, delivering a no-compromises user experience. … And it's the first Raspberry Pi computer to feature silicon designed in-house here in Cambridge, UK. … Broadcom's VideoCore VII [is also] developed here. … Like all flagship Raspberry Pi products, [it's] built at the Sony UK Technology Centre in Pencoed, South Wales. We have been working with Sony since the launch of the first Raspberry Pi … in 2012, and we're firm believers in the benefits of manufacturing our products within a few hours' drive of our engineering design centre in Cambridge. … We expect the first units to ship by the end of October. … We're incredibly grateful to the community of makers and hackers who make Raspberry Pi what it is. [So,] we're going to ringfence all of the Raspberry Pi 5s we sell until at least the end of […]

Read More

Generative AI’s Impact on Developers

There is a growing belief in the developer community that by 2040 software development will be performed by machines rather than humans. Software development will undergo a radical change with the combination of machine learning, artificial intelligence, natural language processing and code generation that draws from large language models (LLMs). Most organizations believe that AI will deliver a 30-40% improvement in overall developer productivity. While these arguments have some merit, I do not believe developers will be fully replaced. Instead, I believe generative AI can augment developers to support faster development and higher-quality code. I'll address the impact of generative AI on the development community under three pillars:

Automation and productivity
Quality engineering and compliance
Ways of working

Automation and Productivity

There will be a focus on increased automation all the way from business planning to the operation of applications. LLMs can help provide better alignment of user stories to business requirements. In fact, one of the best use cases for generative AI during planning phases is to auto-generate user stories from business requirements documents (see the sketch at the end of this article). Since the ambiguity of requirements, and the guesswork it invites, is taken out of the equation, one can expect a clearer "definition of done" through the auto-creation of acceptance criteria. In a typical development cycle, 15%-20% of coding defects are attributed to improperly defined requirements; generative AI augmentation can significantly reduce those defects.

Generative AI augmentation can also help developers with better planning and estimation of work. Rather than relying on personal experience or home-grown estimation models, LLMs can better predict the complexity of work and can continually learn and adapt through multiple development sprints. AI-augmented code creation can allow developers to focus on solving complex business problems and creative thinking rather than worrying about repetitive code generation. Over the last decade or so, the perception of software development as a creative pursuit has been fading. With AI, I think more and more younger developers will be attracted to the field. AI will put the "fun" back in coding. AI-assisted DevOps and continuous integration will further accelerate deployments of code so developers can focus more on solving complex business problems. Deployment failures due to human error can be drastically reduced. Elaborating on the above, newer and less experienced developers can also generate higher-quality code with AI augmentation, leading to better overall consistency of code in large programs. Overall, from a development standpoint, I think AI augmentation will free up 30% of developers' time to work on enhancing user experience and other value-added tasks.

Quality Engineering and Compliance

In a hybrid cloud world, solutions will become more distributed than ever, making system architecture more complex. LLMs can assist in regulating design documents and architecture work products to conform to industry and corporate standards and guidelines. In essence, LLMs can act as virtual architecture review boards. In a typical development life cycle, architecture/design reviews and approvals make up 5%-8% of the work, and augmenting the process with generative AI capabilities can cut that time in half. Security compliance for cloud-based solutions is imperative.
LLMs can assist in ensuring such compliance very early in the development life cycle, leading to more predictable deployments and timely program delivery. Generative AI-augmented test case creation can optimize the number of test cases needed to support development while increasing the […]
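Here is the user-story generation sketch promised above, using the OpenAI Python client as a stand-in for whichever LLM a team adopts. The model name, prompts and requirement text are assumptions for illustration, not a prescribed toolchain:

```python
# Hedged sketch: turn a business requirement into a user story with
# acceptance criteria via an LLM. Model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = (
    "Customers must be able to reset their password via an emailed link "
    "that expires after 30 minutes."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": "Turn business requirements into agile user stories "
                       "with explicit acceptance criteria.",
        },
        {"role": "user", "content": requirement},
    ],
)
print(resp.choices[0].message.content)
```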

Read More

Live Chat With FreshChat: The Solution for Growing Sales, Customer Satisfaction and Loyalty in 2023

Live chat is an effective and interactive way to communicate online with the potential or existing customers of an online (e-commerce) business. If you are interested in a live chat solution that offers advanced features, easy integrations and measurable results, then this article is for you. In it, we present FreshChat, the live chat solution from Freshworks, a platform that lets you start real-time conversations with your e-shop's visitors, offer them support, make them offers and build their loyalty. You will learn how to choose the right edition for your business, how to set it up on your website, how to use its advanced chatbot and omnichannel communication features, how to monitor and analyze the performance of your customer service team, and how to optimize it to improve your conversations with customers. By the end of the article, you will understand how the Freshworks live chat solution can help you grow sales, customer satisfaction and customer loyalty.

Introduction: What Live Chat Is and Why It Matters for Your Business

Live chat is a way of communicating online with other people through an online chat system that enables real-time conversations. Live chat is important for an online business, especially one focused on e-commerce, because it can help the business reach its goals of growing its customer base and its order volume. Live chat platforms offer an effective and interactive way to communicate online with potential and existing customers, one that can bring significant benefits to your e-commerce business. Below, we present the Freshworks live chat solution, a complete and customizable platform that lets you create real-time conversations with your website's visitors.

Freshworks: A Complete and Customizable Live Chat Solution

FreshChat from Freshworks is a solution for automating conversational communication that helps your customer support team communicate more easily with customers across multiple channels, such as web chat, email, phone and social messaging platforms like WhatsApp, Instagram or iMessage. The platform can be used to build a variety of interaction scenarios with an e-shop's visitors. FreshChat from Freshworks lets you create personalized, relevant conversations with your customers that help increase conversion rates, customer satisfaction and loyalty.

Omnichannel Communication Options Offered by FreshChat

FreshChat lets you connect any of its supported communication channels to your unified inbox, so you can manage all customer conversations from a single place. FreshChat also offers advanced chatbot, co-browsing, video and voice features that let you automate and personalize conversations with customers.

Advanced Live Chat Features From Freshworks

Freddy is the intelligent chatbot from Freshworks that lets you automate and personalize conversations with your customers. With Freddy, you can offer customers 24/7 support, give them fast and efficient answers, solve their problems and fulfill their requests. If you want to learn more about Freddy, you can visit the Freddy chatbot by Freshworks page.
How to Monitor and Analyze the Performance of the Freshworks Live Chat Solution: Reports, Metrics and Feedback

The conversation analytics features offered […]

Read More