generative AI

GitHub Aims to Expand Copilot Scope and Reach in 2024

GitHub is gearing up to launch Copilot Workspace next year, a platform that will leverage generative artificial intelligence (AI) to automatically propose a plan for building an application based on natural language descriptions typed into the GitHub Issues project management software. Revealed at the GitHub Universe 2023 conference, Copilot Workspace will generate editable documents via a single click that can be used to create code that developers can then visually inspect, edit and validate. Any errors discovered by application developers or by the Copilot Workspace platform can also be automatically fixed. In addition, summaries of the project can automatically be created and shared across an application development team. GitHub CEO Thomas Dohmke told conference attendees this “revolutionary” approach will enable developers to employ AI as a “second brain.”

In the meantime, GitHub is making an enterprise edition of Copilot available that can be trained using code connected to a private repository to ensure intellectual property is protected. GitHub is also moving to integrate GitHub Copilot with third-party developer tools, online services and knowledge outside GitHub by collaborating with, for example, DataStax, LaunchDarkly, Postman, HashiCorp and Datadog.

GitHub is also moving to make the generative AI capabilities it provides accessible beyond text editors. Starting next month, Copilot Chat can be accessed via a mobile application to foster collaboration by explaining concepts, suggesting code based on a developer's open files and windows, detecting security vulnerabilities and finding and fixing code errors. Copilot Chat, based on GPT-4, will also be accessible across the GitHub website in addition to integrated development environments (IDEs) such as those from JetBrains and via a command line interface (CLI).

Generative AI is already having a massive impact on the rate at which applications are developed, but that code still needs to be reviewed. ChatGPT is based on a general-purpose large language model (LLM) that is trained by pulling in code of varying quality from all across the web. As a result, code generated by the platform might contain vulnerabilities or be inefficient. In many cases, professional developers still prefer to write their own code. Of course, not every programming task requires the same level of coding expertise. In many instances, ChatGPT will generate, for example, a script that can be reused with confidence across a DevOps workflow (a minimal sketch of that kind of script appears at the end of this piece).

There is no shortage of mediocre developers who are now writing better code thanks to tools such as GitHub Copilot, and soon, domain-specific LLMs will make it possible to consistently write better code based on validated examples of code. The one thing that is certain is that the volume of code written by machines is only going to increase. The challenge will be managing all the DevOps pipelines that will be needed to move increased volumes of code into a production environment. There is no doubt that AI will be applied to the management of DevOps pipelines, but for the moment, at least, the pace at which AI is being applied to writing code is already exceeding the ability of DevOps teams to manage it.
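To make that point concrete, here is the kind of small, reusable DevOps utility script a general-purpose LLM such as ChatGPT can generate with reasonable confidence today. It is a purely illustrative sketch: the service names and health-check URLs are hypothetical and not drawn from the article.

```python
# Illustrative example only: the sort of small DevOps utility script an LLM
# can generate with little risk. Service names and URLs are hypothetical.
import sys
import urllib.error
import urllib.request

# Hypothetical services to verify after a deployment.
SERVICES = {
    "api": "https://api.example.com/healthz",
    "web": "https://www.example.com/healthz",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        ok = False
    print(f"{name}: {'OK' if ok else 'FAILED'} ({url})")
    return ok

if __name__ == "__main__":
    # Exit non-zero so a CI/CD stage can fail the pipeline on an unhealthy service.
    sys.exit(0 if all(check(n, u) for n, u in SERVICES.items()) else 1)
```

The value of scripts like this is precisely that they are boilerplate: easy to review, easy to regenerate and low-risk if the model gets a detail wrong.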

Read More

MongoDB Allies With AWS to Generate Code Using Generative AI

MongoDB and Amazon Web Services (AWS) announced today that they have extended their existing alliance to provide examples of curated code to train the Amazon CodeWhisperer generative artificial intelligence (AI) tool. Amazon CodeWhisperer is a free tool that generates code suggestions based on natural-language comments or existing code found in integrated development environments (IDEs). Andrew Davidson, senior vice president of product for MongoDB, said developers who build applications on MongoDB databases will now receive suggestions that reflect MongoDB best practices. The overall goal is to increase the pace at which a Cambrian explosion of high-quality applications can be developed, he added.

Generative AI is already fundamentally changing the way applications are developed. Instead of requiring a developer to create a level of abstraction to communicate with a machine, it's now possible for machines to understand the language humans use to communicate with each other. Via a natural language interface, developers will soon be asking generative AI platforms not only to surface suggestions but also to test and debug applications.

The challenge developers are encountering is that generative AI platforms such as ChatGPT are based on large language models (LLMs) that were trained using code of varying quality collected from across the web. As a result, the code suggested can contain vulnerabilities or may simply not be especially efficient, resulting in increased costs because more infrastructure resources are required. In addition, the suggestions surfaced can vary widely from one query to the next. As an alternative, AWS is looking to partner with organizations such as MongoDB that have curated code to establish best practices that can be used to ensure better outcomes. These optimizations are available for C#, Go, Java, JavaScript and Python, the five most common programming languages used to build MongoDB applications. In addition, Amazon CodeWhisperer includes built-in security scanning and a reference tracker that provides information about the origin of a code suggestion.

There's little doubt at this point that generative AI will improve developer productivity, especially for developers who have limited expertise. DevOps teams, however, may soon find themselves overwhelmed by the amount of code moving through their pipelines. The hope is that AI technologies will also one day help software engineers find ways to manage that volume of code. On the plus side, the quality of that code should improve thanks to recommendations from LLMs that, for example, will identify vulnerabilities long before an application is deployed in a production environment.

Like it or not, the generative AI genie is now out of the proverbial bottle. Just about every job function imaginable will be impacted to varying degrees. In the case of DevOps teams, the ultimate impact should involve less drudgery as many of the manual tasks that conspire to make managing DevOps workflows tedious are eliminated. In the meantime, organizations should pay closer attention to which LLMs are being used to create code. After all, regardless of whether a human or a machine created it, that code still needs to be thoroughly tested before being deployed in production environments.
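For a sense of how comment-driven suggestions of this kind work in practice, the sketch below pairs a natural-language comment with the sort of PyMongo code an assistant tuned on MongoDB best practices might propose. It is a hypothetical illustration, not actual CodeWhisperer output; the connection string, database and collection names are placeholders.

```python
# Hypothetical illustration of comment-driven code suggestion for MongoDB.
# The connection string, database and collection names are placeholders, and
# the "suggested" query is an example of the style such a tool might produce.
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["storefront"]

# Supporting compound index, a common MongoDB best practice for this access pattern.
db.orders.create_index([("customer_id", ASCENDING), ("order_date", DESCENDING)])

# Prompt-style comment a developer might type in an IDE:
# "find the ten most recent orders for a customer, returning only id, total and date"
def recent_orders(customer_id: str, limit: int = 10):
    # Suggested completion: filter on an indexed field, project only the fields
    # needed, and sort/limit on the server instead of in application memory.
    return list(
        db.orders.find(
            {"customer_id": customer_id},
            {"_id": 1, "total": 1, "order_date": 1},
        )
        .sort("order_date", DESCENDING)
        .limit(limit)
    )
```

The point of curated training data is that the suggested pattern (indexed filter, projection, server-side sort and limit) reflects documented MongoDB guidance rather than whatever code happened to be scraped from the web.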

Read More

Atlassian Brings Generative AI to ITSM

Atlassian today added generative artificial intelligence (AI) capabilities to Jira Service Management, an IT service management (ITSM) platform built on top of the Jira project management software already used widely by DevOps teams. Generative AI is at the core of a virtual agent that analyzes and understands intent, sentiment, context and profile information to personalize interactions. Based on the same natural language processing (NLP) engine that Atlassian is embedding across its portfolio, the virtual agent dynamically generates answers from sources such as knowledge base articles, onboarding guides and frequently asked questions (FAQs) documents. In addition, it can facilitate conversations with human experts any time additional expertise is required to respond to more complex inquiries.

Atlassian is also extending the reach of Atlassian Intelligence, a generative AI solution launched earlier this year, to provide concise summaries of all conversations, knowledge base articles and other resolution paths recommended by previous agents that have handled similar issues. It will also help IT staff craft better responses and adjust their tone to be more professional or empathetic if needed. During setup, support teams can easily configure the virtual agent experience to match how they deliver service without writing a single line of code.

Edwin Wong, head of product for IT solutions at Atlassian, said these additions are part of a larger commitment Atlassian is making to unify the helpdesk experience. The company plans to leverage Atlassian Intelligence to coordinate routing of all employee requests to the right tools as it aggregates requests from multiple communications channels such as web portals, email, chat and third-party applications, he noted. The overall goal is to reduce the number of tickets generated by leveraging AI as much as possible to handle service requests in a way that costs less to implement and maintain, Wong said. In the longer term, Atlassian will also apply generative AI to further automate IT asset management, he added.

There is little doubt at this juncture that AI will be pervasively applied across both ITSM and DevOps workflows. As those advances are made, it should become easier to address issues as they arise, either programmatically or by generating a ticket for a service request that is then processed by an ITSM platform such as Jira Service Management. Each organization will need to decide how quickly to incorporate AI into ITSM, but hopefully, the level of burnout experienced by IT personnel will be sharply reduced as more tasks are automated.

Less clear is the impact AI will have on the size of the IT teams required to provide those services, but for the foreseeable future, there will always be a need for some level of human supervision. In the meantime, IT teams should take an inventory of the processes that are likely to be automated by AI today, with an eye toward restructuring teams as more tasks are automated. Ultimately, the goal should be to let machines handle the tasks they do best so humans can provide higher levels of service that deliver more value to the business.

Read More

Stacklet Applies Generative AI to Simplify Cloud Governance

Stacklet today provided early access to Jun0, a tool that leverages generative artificial intelligence (AI) to improve cloud governance and reduce costs. Stacklet CEO Travis Stanfield said the goal is to make it possible to automatically surface recommendations and implement policies using a mix of large language models (LLMs) trained using data collected via the company's Stacklet AssetDB database. Accessed via a natural language interface, Jun0 makes it possible to declaratively govern cloud computing environments via text-based queries that generate policies which can then be implemented as code; that eliminates the need for specialized programming expertise, he added.

IT teams can use text to launch queries pertaining to any operations, cost, security and compliance issues and then visually test the policies created as part of a dry run before implementing them at scale. In effect, Jun0 substantially reduces the level of expertise required to successfully manage cloud computing environments by making it simpler to create governance policies, noted Stanfield.

DevOps teams are generally tasked with making sure cloud computing environments are optimally managed using policies that are usually implemented as code within a DevOps workflow. Implementing policy-as-code, however, typically involves mastering a domain-specific programming language. Stacklet is now making a case for a higher level of abstraction that eliminates the need to master yet another programming language to govern cloud computing environments.

It's still early days as far as the adoption of generative AI within DevOps workflows is concerned, but it's already clear that implementing best practices is about to become substantially easier. In essence, DevOps practices are about to become democratized in a way that reduces the cognitive load required to implement them. In addition to increasing the number of application environments a DevOps team may be able to effectively manage, generative AI will make DevOps accessible to a wider range of organizations that previously would not have been able to hire and retain software engineers. Many of those software engineers should also be able to spend more time addressing complex issues rather than, for example, writing scripts to ensure that only certain classes of workloads are allowed to run on a particular cloud service within a given period of time in order to lower costs (a sketch of that kind of hand-written script follows this piece).

Unfortunately, DevOps teams are already playing catch-up when it comes to having access to generative AI tools. Developers are already taking advantage of generative AI tools to create more code faster. As that code moves through DevOps pipelines, it's apparent the overall size of the codebase that DevOps teams are being required to manage is only going to increase. Most organizations are not going to be able to hire a small army of software engineers to manage that codebase, so the tooling provided to existing DevOps teams will need to improve. The issue now is bridging the gap until those next-generation AI tools become generally available. One way or another, however, it's clear that the way DevOps is managed will never be the same again.
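As a concrete illustration of the kind of one-off governance script teams write by hand today, and that natural-language policy generation aims to replace, here is a minimal sketch in Python using boto3. It is an assumption-laden example: the region, the required tag and the choice of EC2 as the target service are hypothetical, and this is not Stacklet's policy format.

```python
# Minimal sketch of a hand-written governance script of the kind policy-as-code
# tooling aims to replace. Region, tag key and the EC2 focus are hypothetical.
import boto3

REQUIRED_TAG = "cost-center"   # hypothetical tagging policy
REGION = "us-east-1"           # hypothetical region

def stop_untagged_instances() -> list[str]:
    """Stop running EC2 instances that are missing the required cost tag."""
    ec2 = boto3.client("ec2", region_name=REGION)
    stopped = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    stopped.append(instance["InstanceId"])
    if stopped:
        ec2.stop_instances(InstanceIds=stopped)
    return stopped

if __name__ == "__main__":
    print("Stopped:", stop_untagged_instances())
```

A natural-language policy tool aims to replace dozens of such single-purpose scripts with declarative policies generated from a plain-text request, plus a dry run before enforcement.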

Read More

Generative AI’s Impact on Developers

There is a growing belief in the developer community that by 2040 software development will be performed by machines rather than humans. Software development will undergo a radical change with the combination of machine learning, artificial intelligence, natural language processing and code generation that draws from large language models (LLMs). Most organizations believe that AI will deliver a 30%-40% improvement in overall developer productivity. While these arguments have some merit, I do not believe developers will be fully replaced. Instead, I believe generative AI can augment developers to support faster development and higher-quality code. I'll address the impact of generative AI on the development community under three pillars:

Automation and productivity
Quality engineering and compliance
Ways of working

Automation and Productivity

There will be a focus on increased automation all the way from business planning to operations of applications. LLMs can help provide better alignment of user stories to business requirements. In fact, one of the best use cases for generative AI during planning phases is to auto-generate user stories from business requirements documents. Since ambiguity of requirements and guesswork are taken out of the equation, one can expect a clearer “definition of done” through the auto-creation of acceptance criteria. In a typical development cycle, 15%-20% of coding defects are attributed to improperly defined requirements; generative AI augmentation can result in a significant reduction of those defects.

Generative AI augmentation can also help developers with better planning and estimation of work. Rather than relying on personal experience or home-grown estimation models, LLMs can better predict the complexity of work and can continually learn and adapt through multiple development sprints.

AI-augmented code creation can allow developers to focus on solving complex business problems and creative thinking rather than worrying about repetitive code generation. Over the last decade or so, the perception of software development as a creative pursuit has been fading. With AI, I think more and more younger developers will be attracted to the field; AI will put the “fun” back in coding. AI-assisted DevOps and continuous integration will further accelerate deployments of code so developers can focus more on solving complex business problems, and deployment failures due to human errors can be drastically reduced. Newer and less experienced developers can also generate higher-quality code with AI augmentation, leading to better overall consistency of code in large programs. Overall, from a development standpoint, I think AI augmentation will free up 30% of developers' time to work on enhancing user experience and other value-added tasks.

Quality Engineering and Compliance

In a hybrid cloud world, solutions will become more distributed than ever, making system architecture more complex. LLMs can assist in regulating design documents and architecture work products to conform to industry and corporate standards and guidelines. In essence, LLMs can act as virtual architecture review boards. In a typical development life cycle, architecture and design reviews and approvals make up 5%-8% of the work, and augmenting the process with generative AI capabilities can cut that time in half. Security compliance for cloud-based solutions is imperative. LLMs can assist in ensuring such compliance very early on in the development life cycle, leading to more predictable deployments and timely program delivery. Generative AI-augmented test case creation can optimize the number of test cases needed to support the development while increasing the […]
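To ground the idea of auto-generating a user story and acceptance criteria from a business requirement, here is a minimal sketch assuming the OpenAI Python SDK (v1-style client). The model name, prompt wording and the requirement text are illustrative assumptions and are not taken from the piece above.

```python
# Minimal sketch: turning a business requirement into a draft user story with
# acceptance criteria via an LLM. Assumes the OpenAI Python SDK (v1-style
# client); the model name, prompt and requirement text are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = (
    "Registered customers must be able to reset their password via an emailed "
    "link that expires after 30 minutes."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": "You write user stories with Gherkin-style acceptance criteria.",
        },
        {
            "role": "user",
            "content": f"Write a user story and acceptance criteria for: {requirement}",
        },
    ],
)

print(response.choices[0].message.content)
```

In practice, the generated criteria would still be reviewed by a product owner, which is consistent with the augmentation-rather-than-replacement framing above.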

Read More

Will ChatGPT Replace Human Software Developers? Probably Not

Since the release of ChatGPT, there has been a great deal of hype around generative AI and how companies can leverage it to cut costs and democratize software and application development. Naturally, with discussions of cost-cutting and democratization come whispers about what will happen to software developers. This is a real and valid concern, but software developers' skills, expertise and creativity are still very much needed.

While generative AI and AI code generation tools like ChatGPT have shown some promise and potential benefits, they are still in their infancy, like many other innovative technological advancements. We also don't know what scenarios they may present down the road or their true abilities when the technology matures. For instance, how will these tools integrate with other technologies? We don't know what will happen when a ChatGPT-generated line of code breaks or needs to be changed. We don't know whether they can provide a novel solution to a unique problem or what security threats they will present. Given these unknowns, technology executives should think twice about replacing experienced and creative technology talent, such as software developers, with AI code generators. Will ChatGPT create novel code to solve a unique problem never encountered before? Probably not.

A Tale as Old as Time (Or at Least a Decade)

The technology industry has been searching for and developing new ways to make certain software development tasks easier and more streamlined for years. One example of this is low-code/no-code. The notion of simplifying application development and replacing software developers with laypeople (citizen developers) has been around for more than a decade now, as low-code and no-code solutions have grown more popular. These solutions have promised that companies don't need technical talent to get their software and app projects off the ground. However, if you look at the impact of these solutions today, their use can result in large amounts of technical debt and almost always requires the skill of experienced software developers. The reason? Building complex software and applications is extremely difficult; it's an art.

Low-code and no-code solutions have their rightful place and can make things easier if a company is looking to launch a simple app or static web page. These solutions can increase the pace of development and time-to-market and enable everyday people without any development skills to facilitate them. However, they are not a complete solution, and they often overlook aspects of development that a human software developer would typically address. Without a skilled expert involved, low-code/no-code platforms often can't solve a unique problem a company has. So, how does this relate to AI code generators like ChatGPT? Here's how.

A Similar Situation—With One Key Difference

When thinking about their place in the development process, AI code generators are not that different from low-code or no-code solutions. The thinking is that they will also enable non-technical individuals to create software and applications with ease. Yet, there is one key difference—they promise expertise, too. But is the expertise coming from the AI code generator or the person piloting it? The answer is simple: It is not from the code generator. There have been examples of companies and individuals that have tried using ChatGPT to build code, and they have appeared to be successful. However, without the input of the individuals using it, it never would […]

Read More

CloudBees CEO: State of Software Development is a Disaster

CloudBees CEO Anuj Kapur told attendees at a DevOps World event today that, with developers spending only 30% of their time writing code, the current state of software development in enterprise IT organizations is a disaster. After more than 14 years of effort, the promise of DevOps—in terms of being able to accelerate the rate at which applications are deployed—remains largely academic, said Kapur. In fact, the effort to shift more responsibility for application security further left toward developers has only increased the cognitive load and reduced the amount of time available to write code, he noted.

However, with the rise of generative artificial intelligence (AI), an inflection point that will dramatically increase the velocity at which applications are built and deployed has arrived, said Kapur. The challenge will be achieving that goal without increasing the cognitive load on developers. That cognitive overload results in 70% of developers' time not being productive within organizations that often hire thousands of developers, he noted. Despite all the DevOps issues that need to be addressed, advances in AI promise improvement. The overall DevOps market is still relatively young, given the current level of adoption, said Kapur. “We continue to believe the market is early,” he said.

Today, CloudBees took the wraps off the first major update to the open source Jenkins continuous integration/continuous delivery (CI/CD) platform to have been made in the past several years. At the same time, the company also unveiled a DevSecOps platform based on Kubernetes that is optimized for building and deploying cloud-native applications based on portable Tekton pipelines. That latter platform provides the foundation through which CloudBees will, in the months ahead, apply generative AI to software engineering to, for example, create unit tests on the fly and automate rollbacks. In addition, DevSecOps capabilities will be extended all the way out to integrated development environments (IDEs) to reduce the cognitive load on developers. The overall goal is to reduce the number of manual processes that create bottlenecks and make it challenging to manage DevOps at scale.

Criticism of the level of developer productivity enabled by DevOps compared to other development approaches needs to be tempered, said Tapabrata Pal, vice president of architecture for Fidelity Investments, because it still represents a significant advance. There is still clearly too much toil, but the issues that impede the pace at which developers can effectively write code tend to be more cultural than technical, he added. Organizations are not inclined to automatically trust the code created by developers, so consequently, there is still a lot of friction in the DevOps process, noted Pal. In theory, advances in AI should reduce that friction, but it's still early days in terms of the large language models (LLMs) that drive generative AI platforms and their ability to create reliable code, he added. That should improve as LLMs are specifically trained using high-quality code, but in the meantime, the pace at which substandard code might be generated could overwhelm DevOps processes until AI is applied there as well, said Pal.

Thomas Haver, master software engineer for M&T Bank, added that while assisted AI technologies will have a major impact, it's not reasonable to expect large organizations to absorb them overnight. Patience will be required to ensure advances are made in ways that […]
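As a small illustration of what "creating unit tests on the fly" might look like in practice, here is a hypothetical pytest example: the function under test and the generated-style test cases are invented for illustration and are not drawn from CloudBees' tooling.

```python
# Hypothetical illustration of an AI-generated unit test. The function under
# test and the test cases are invented; they are not output from CloudBees.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The kind of test suite an assistant might draft automatically from the
# function signature and docstring: happy path, boundaries and error handling.
@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 0, 100.0),    # no discount
        (100.0, 25, 75.0),    # typical case
        (19.99, 100, 0.0),    # full-discount boundary
    ],
)
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)
```

Tests of this shape are cheap to generate and cheap to review, which is why test generation is a common first target for AI-assisted DevOps tooling.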

Read More

JFrog swampUP: Addressing the Advent of AI

At JFrog SwampUp 2023, the buzz with all about AI, especially with JFrog’s announcement of Machine Learning (ML) Model Management capabilities. These conversations around AI and ML reflected these technologies’ growing influence and importance in the DevOps world. How much of the generative AI conversation is just hype, though? And what does that mean for AI and ML as a whole? Alan Shimel, CEO of Techstrong Group, and I sat down with Stephen Chin, VP of DevRel at JFrog, to find out. As far as Chin is concerned, even as more companies create and leverage AI models, these models must be managed like any other software component. Chin said JFrog Artifactory acts as a staging ground to operationalize models using DevSecOps best practices. Algorithms and models will continue to grow in size and complexity, and they will require robust processes around deployment and management – just like any other software artifact. The key, Chin said, is to think of ML as just another development language and leverage tools that standardize and streamline working with it. Compared to traditional enterprise applications, though, DevOps workflows for AI/ML are still relatively immature, but Chin said JFrog’s new model management capabilities aim to provide that missing automation and governance using DevSecOps best practices for governance, security, and licensing compliance. Additionally, Chin noted that AI/ML have become essential for development teams to keep up with the explosive demand for code. At this point, AI has become table stakes. In the AI arms race, the winners are those who understand AI has become a vital development tool to enhance productivity. In terms of job security, the losers are the ones who can’t keep up with the volume of code. According to Chin, you are out of the running if you don’t embrace AI. Looking ahead, AI will not make developers obsolete, though – quite the opposite. Given the quasi-unlimited appetite for new code, Chin emphasized that developers who embrace AI will have more work than ever. One way to think of it is that AI provides a new form of “outsourcing” to boost human productivity, much like previous waves of innovation in computer science. When it comes to security, there are still challenges that need to be addressed; code generated by today’s AI solutions still has significant drawbacks like potential data bias, lack of explainability and simple errors. In the long term, though, Chin believes AI itself will provide the solution to secure an exponentially growing codebase, given its superior scale. Just as AI will make individual developers more productive, it can also supercharge security teams – but it can also empower bad actors. The key will be continuing to democratize the benefits to even the playing field. AI promises to be a transformative technology on the scale of the Bronze Age or Quantum computing, Chin said, but the path forward will require humans and machines working together to ensure it’s used for good. It’s clear that the pace of innovation in AI and ML is rapidly accelerating. As these technologies become further democratized and integrated into developer workflows, they promise to transform how software is built and secured, Chin said. Companies must take advantage of this technology innovation by providing the pipelines and governance for this software revolution, he added. Chin believes the future will […]
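To make "managing models like any other software artifact" concrete, below is a minimal sketch that uploads a serialized model file to an Artifactory repository using its generic artifact deploy endpoint (an HTTP PUT to the repository path). The host, repository name, model path and token are hypothetical placeholders, and this is an assumption-level illustration rather than JFrog's prescribed ML model management workflow.

```python
# Minimal sketch: publishing a trained model file to an artifact repository so
# it can be versioned, scanned and promoted like any other build artifact.
# The Artifactory host, repository, file path and token are placeholders.
import os

import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # placeholder
REPO = "ml-models-local"                                          # placeholder
MODEL_FILE = "models/churn-classifier-1.2.0.onnx"                 # placeholder

def publish_model(local_path: str, version: str) -> None:
    """Deploy a model file to the repository via an HTTP PUT (generic deploy)."""
    target = (
        f"{ARTIFACTORY_URL}/{REPO}/churn-classifier/{version}/"
        f"{os.path.basename(local_path)}"
    )
    with open(local_path, "rb") as f:
        resp = requests.put(
            target,
            data=f,
            headers={"Authorization": f"Bearer {os.environ['ARTIFACTORY_TOKEN']}"},
            timeout=60,
        )
    resp.raise_for_status()
    print(f"Published {local_path} to {target}")

if __name__ == "__main__":
    publish_model(MODEL_FILE, "1.2.0")
```

Once a model lives in the same repository as other build artifacts, the same promotion, scanning and license-compliance workflows can be applied to it.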

Read More