GitHub Aims to Expand Copilot Scope and Reach in 2024

GitHub is gearing up to launch Copilot Workspace next year, a platform that will leverage generative artificial intelligence (AI) to automatically propose a plan for building an application based on natural language descriptions typed into the GitHub Issues project management software.

Revealed at the GitHub Universe 2023 conference, Copilot Workspace will generate, with a single click, editable documents that can be used to create code that developers can then visually inspect, edit and validate. Any errors discovered by application developers or by the Copilot Workspace platform itself can also be automatically fixed.

In addition, summaries of the project can automatically be created and shared across an application development team.

GitHub CEO Thomas Dohmke told conference attendees this “revolutionary” approach will enable developers to employ AI as a “second brain.”

In the meantime, GitHub is making an enterprise edition of Copilot available that can be trained using code connected to a private repository to ensure intellectual property is protected. GitHub is also moving to integrate GitHub Copilot with third-party developer tools, online services and knowledge outside GitHub by collaborating with, for example, DataStax, LaunchDarkly, Postman, HashiCorp and Datadog.

GitHub is also moving to make its generative AI capabilities accessible beyond text editors. Starting next month, Copilot Chat will be accessible via a mobile application to foster collaboration by explaining concepts, suggesting code based on open files and windows, detecting security vulnerabilities and finding and fixing code errors.

Copilot Chat, based on GPT-4, will also be accessible across the GitHub website, in integrated development environments (IDEs) such as those from JetBrains and via a command line interface (CLI).

Generative AI is already having a massive impact on the rate at which applications are developed, but that code still needs to be reviewed. ChatGPT is based on a general-purpose large language model (LLM) trained by pulling in code of varying quality from across the web. As a result, code generated by the platform might contain vulnerabilities or be inefficient. In many cases, professional developers still prefer to write their own code.
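As a minimal illustration of the kind of flaw reviewers watch for in generated code (the snippet and its table are hypothetical, not taken from any Copilot or ChatGPT output), a common pattern is building a database query through string interpolation, which opens the door to SQL injection; a parameterized query keeps user input as data only.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated code: the query is assembled by string
    # interpolation, so crafted input can change the SQL itself.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Reviewed version: a parameterized query treats the input as a value only.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")

    payload = "' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # injection leaks every row
    print(find_user_safe(conn, payload))    # returns an empty list
```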

Of course, not every programming task requires the same level of coding expertise. In many instances, ChatGPT will generate, for example, a script that can be reused with confidence across a DevOps workflow, as sketched below. There is no shortage of mediocre developers who are now writing better code thanks to tools such as GitHub Copilot, and soon, domain-specific LLMs will make it possible to consistently write better code based on validated code examples.
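For instance, a health-check gate like the following is the sort of self-contained script a tool such as ChatGPT can produce on request and that, once reviewed, can be reused across pipelines; the endpoint URL and retry defaults here are placeholders, not drawn from the article.

```python
"""Health-check gate for a deployment pipeline (illustrative sketch)."""
import sys
import time
import urllib.error
import urllib.request


def wait_for_healthy(url: str, attempts: int = 10, delay_seconds: float = 3.0) -> bool:
    """Poll `url` until it returns HTTP 200 or the attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    print(f"attempt {attempt}: service healthy")
                    return True
        except (urllib.error.URLError, OSError) as exc:
            print(f"attempt {attempt}: not ready ({exc})")
        time.sleep(delay_seconds)
    return False


if __name__ == "__main__":
    # Usage: python healthcheck.py https://staging.example.com/healthz
    target = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080/healthz"
    # A non-zero exit code fails the pipeline stage, stopping a bad deployment here.
    sys.exit(0 if wait_for_healthy(target) else 1)
```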

The one thing that is certain is the volume of code written by machines is only going to increase. The challenge will be managing all the DevOps pipelines that will be needed to move increased volumes of code into a production environment. There is no doubt that AI will be applied to the management of DevOps pipelines, but for the moment, at least, the pace at which AI is being applied to writing code is already exceeding the ability of DevOps teams to manage it.