AWS

Best of 2023: Microservices Sucks — Amazon Goes Back to Basics

As we close out 2023, we at DevOps.com wanted to highlight the most popular articles of the year. Following is the latest in our series of the Best of 2023.

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let's work out what really matters. This week: Amazon Prime Video has ditched its use of microservices-cum-serverless, reverting to a traditional, monolithic architecture. The move vastly improved the workload's cost and scalability. I'm Shocked. Shocked.

Analysis: But it depends what you mean by "monolithic"

None of this is a surprise to us old-skool devs, although the team did need to clone the monolith a few times, splitting up the tasks so as to retain enough scaling headroom. But it shouldn't be at all shocking—unless you've drunk the µservices Kool-Aid.

What's the story? Joab Jackson reports—"Return of the Monolith":

"Hopelessly archaic"
The engineering team at Amazon Prime Video has been roiling the cloud native computing community with its explanation that … a monolithic architecture has produced superior performance over a microservices- and serverless-led approach. … Shocking!
…
In theory, the use of serverless would allow the team to scale each service independently. It turned out … they hit a hard scaling limit at only 5%. … Initially, the team tried to optimize individual components, but this did not bring about significant improvements. So, the team moved all the components into a single process, hosting them on … EC2 and … ECS.
…
The IT world is nothing but cyclical, where an architectural trend is derided as hopelessly archaic one year [and] the new hot thing the following year. Certainly, over the past decade when microservices ruled—and the decade before, when web services did—we've heard more than one joke … about "monoliths being the next big thing." Now it may actually come to pass.

Not just a scaling advantage? Rafal Gancarz also notes huge cost savings—"Prime Video Switched from Serverless to EC2 and ECS":

"Single application process"
Prime Video, Amazon's video streaming service, … achieved a 90% reduction in operational costs as a result. … The initial architecture of the solution was based on microservices … implemented on top of the serverless infrastructure stack. The microservices included splitting audio/video streams into video frames or decrypted audio buffers, as well as detecting various stream defects … using machine-learning algorithms.
…
The problem of high operational cost was caused by a high volume of reads/writes to the S3 bucket storing intermediate work items … and a large number of Step Functions state transitions. … In the end, the team decided to consolidate all of the business logic in a single application process. … The resulting architecture had the entire … process running [as] instances distributed across different ECS tasks to avoid hitting vertical scaling limits.

Horse's mouth? Marcin Kolny, a Prime Video dev—"The move from a distributed microservices architecture to a monolith application helped achieve higher scale, resilience, and reduce costs":

"Also simplified the orchestration"
We took a step back and revisited the architecture. … The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream.
…
We realized that [the] distributed approach wasn't bringing a lot of benefits in our specific use case, so … we moved all components into a single process to keep the data transfer within the process memory, which also simplified the orchestration logic. [Then] we cloned […]
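To make the shape of that change concrete, here is a minimal sketch (in Python, with hypothetical stage names and toy logic, not Prime Video's actual code) of what "moving all components into a single process" looks like: the former microservices become plain function calls, intermediate frames stay in process memory, and the orchestration reduces to ordinary control flow instead of S3 round trips and Step Functions state transitions.

# A minimal sketch, not Prime Video's code: stage names and the toy
# frame logic are hypothetical; only the architectural shape matters.

def split_into_frames(stream: bytes):
    """Formerly a separate service. In the distributed design, each
    frame it produced was written to an S3 bucket and drove a Step
    Functions state transition; in-process, frames never leave memory."""
    frame_size = 1024  # placeholder chunking, not a real codec
    for i in range(0, len(stream), frame_size):
        yield stream[i:i + frame_size]

def detect_defects(frame: bytes) -> bool:
    """Formerly an ML-backed defect-detection service; here it is a
    plain function call (a toy heuristic stands in for the models)."""
    return frame.count(0) > len(frame) // 2

def analyze(stream: bytes) -> list[bool]:
    # Orchestration is now ordinary control flow: no per-frame S3
    # reads/writes and no state-machine transitions to pay for.
    return [detect_defects(f) for f in split_into_frames(stream)]

print(analyze(bytes(4096)))  # demo: all-zero input flags every frame

Scaling then works the way Gancarz describes: rather than scaling each stage independently, the whole process is cloned across ECS tasks, with the workload partitioned between the clones.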

Read More

MongoDB Allies With AWS to Generate Code Using Generative AI

MongoDB and Amazon Web Services (AWS) announced today that they have extended their existing alliance to provide examples of curated code to train the Amazon CodeWhisperer generative artificial intelligence (AI) tool. Amazon CodeWhisperer is a free tool that generates code suggestions based on natural-language comments or existing code found in integrated development environments (IDEs).

Andrew Davidson, senior vice president of product for MongoDB, said developers who build applications on MongoDB databases will now receive suggestions that reflect MongoDB best practices. The overall goal is to increase the pace at which a Cambrian explosion of high-quality applications can be developed, he added.

Generative AI is already fundamentally changing the way applications are developed. Instead of requiring a developer to create a level of abstraction to communicate with a machine, it's now possible for machines to understand the language humans use to communicate with each other. Developers, via a natural-language interface, will soon be asking generative AI platforms not only to surface suggestions but also to test and debug applications.

The challenge developers are encountering is that generative AI platforms such as ChatGPT are based on large language models (LLMs) that were trained using code of varying quality collected from across the web. As a result, the code suggested can contain vulnerabilities or may simply not be especially efficient, resulting in increased costs because more infrastructure resources are required. In addition, the suggestions surfaced can vary widely from one query to the next.

As an alternative, AWS is looking to partner with organizations such as MongoDB that have curated code to establish best practices that can be used to ensure better outcomes. These optimizations are available for C#, Go, Java, JavaScript and Python, the five most common programming languages used to build MongoDB applications. In addition, Amazon CodeWhisperer enables built-in security scanning and a reference tracker that provides information about the origin of a code suggestion.

There's little doubt at this point that generative AI will improve developer productivity, especially for developers who have limited expertise. DevOps teams, however, may soon find themselves overwhelmed by the amount of code moving through their pipelines. The hope is that AI technologies will also one day help software engineers find ways to manage that volume of code. On the plus side, the quality of that code should improve thanks to recommendations from LLMs that, for example, will identify vulnerabilities long before an application is deployed in a production environment.

Like it or not, the generative AI genie is now out of the proverbial bottle. Just about every job function imaginable will be impacted to varying degrees. In the case of DevOps teams, the ultimate impact should involve less drudgery as many of the manual tasks that conspire to make managing DevOps workflows tedious are eliminated.

In the meantime, organizations should pay closer attention to which LLMs are being used to create code. After all, regardless of whether a human or a machine created it, that code still needs to be thoroughly tested before being deployed in production environments.
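To illustrate the workflow the article describes, here is a hedged sketch: the comment is the developer's natural-language prompt, and the function below it is ordinary PyMongo code of the kind a MongoDB-trained assistant might suggest. It is not captured CodeWhisperer output, and the connection string, database, collection and field names are all placeholders.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
orders = client["shop"]["orders"]  # placeholder database and collection

# Prompt: "Find the ten most recent orders for a customer, returning
# only the order id and total."
def recent_orders(customer_id: str) -> list[dict]:
    return list(
        orders.find(
            {"customer_id": customer_id},
            {"_id": 1, "total": 1},  # projection keeps the payload small
        )
        .sort("created_at", -1)  # newest first
        .limit(10)
    )

The projection plus sort/limit pattern is the sort of best practice MongoDB would want a trained model to surface; a suggestion like this would ideally be paired with a compound index on (customer_id, created_at) so the query avoids a collection scan.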

Read More