MongoDB Allies With AWS to Generate Code Using Generative AI
MongoDB and Amazon Web Services (AWS) announced today that they have extended their existing alliance to provide curated code examples for training the Amazon CodeWhisperer generative artificial intelligence (AI) tool.
Amazon CodeWhisperer is a free tool that generates code suggestions based on natural-language comments or existing code found in integrated development environments (IDEs).
Andrew Davidson, senior vice president of product for MongoDB, said developers who build applications on MongoDB databases will now receive suggestions that reflect MongoDB best practices. The overall goal is to accelerate a Cambrian explosion of high-quality applications, he added.
Generative AI is already fundamentally changing the way applications are developed. Instead of requiring a developer to create a level of abstraction to communicate with a machine, it’s now possible for machines to understand the language humans use to communicate with each other. Developers, via a natural language interface, will soon be asking generative AI platforms to not only surface suggestions but also test and debug applications.
The challenge developers are encountering is that generative AI platforms such as ChatGPT are based on large language models (LLMs) that were trained on code of varying quality collected from across the web. As a result, the code suggested can contain vulnerabilities or may simply be inefficient, increasing costs because more infrastructure resources are required. In addition, the suggestions surfaced can vary widely from one query to the next.
As an alternative, AWS is looking to partner with organizations such as MongoDB that have curated code embodying best practices that can be used to ensure better outcomes. These optimizations are available for C#, Go, Java, JavaScript and Python, the five most common programming languages used to build MongoDB applications. In addition, Amazon CodeWhisperer includes built-in security scanning and a reference tracker that provides information about the origin of a code suggestion.
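Neither company has published the training examples themselves, but as a rough sketch, the kind of best practice such curated suggestions might encode looks like the snippet below: shaping a MongoDB lookup as a narrow equality filter on a field that would typically carry an index, plus a projection so only the needed fields cross the wire. The collection and field names here are hypothetical, chosen purely for illustration.

```python
# Hypothetical illustration of a MongoDB best-practice query shape:
# filter on a (presumably indexed) field and project only needed fields,
# rather than pulling entire documents back to the application.
# All names (users, email, last_login) are invented for this sketch.

def build_user_query(email: str) -> tuple[dict, dict]:
    """Return (filter, projection) documents for a lookup by email."""
    # Equality filter on a field that would normally carry a unique index.
    query_filter = {"email": email}
    # Projection: include only what the caller needs; suppress _id.
    projection = {"_id": 0, "name": 1, "email": 1, "last_login": 1}
    return query_filter, projection

# With pymongo, the pair would be passed along the lines of:
#   users_collection.find_one(query_filter, projection)
f, p = build_user_query("ada@example.com")
print(f, p)
```

A suggestion trained on low-quality web code might instead fetch full documents and filter them client-side, which is exactly the kind of inefficiency the curated examples are meant to steer models away from.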
There’s little doubt at this point that generative AI will improve developer productivity, especially for developers with limited expertise. DevOps teams, however, may soon find themselves overwhelmed by the amount of code moving through their pipelines. The hope is that AI technologies will also one day help software engineers find ways to manage that volume of code. On the plus side, the quality of that code should improve thanks to recommendations from LLMs that, for example, will identify vulnerabilities long before an application is deployed in a production environment.
Like it or not, the generative AI genie is now out of the proverbial bottle. Just about every job function imaginable will be impacted to varying degrees. In the case of DevOps teams, the ultimate impact should involve less drudgery as many of the manual tasks that conspire to make managing DevOps workflows tedious are eliminated. In the meantime, organizations should pay closer attention to which LLMs are being used to create code. After all, regardless of whether a human or machine created it, that code still needs to be thoroughly tested before being deployed in production environments.