
From Reaction to Robots: Riding the AI Wave in 2024

As we navigate another year of relentless zero-day breaches, legislative pivots, an explosion of AI tooling and threat actors growing bolder and more desperate, it’s safe to say that getting comfortable with change is a requirement for thriving in the technology industry. We occupy a notoriously unpredictable space, but that’s half the fun. Compared to many other verticals, technology, and cybersecurity in particular, is relatively young, and we should all look forward to a future in which the industry grows in sophistication alongside the technology we swear to protect.

So, what can we expect in the industry in 2024? We put our heads together, looked into our crystal ball, and these were the results:

Government Regulations Around AI Will Turn the Industry Upside Down

It was the talk of the conference circuit in 2023, with several high-profile presentations at Black Hat, DEF CON, Infosecurity Europe and many more warning of the explosive changes we can expect from AI implementation across every industry, especially cybersecurity. As tends to happen with transformative technology that has a low barrier to entry, adoption has outpaced any official regulation or mandates at the government level. With significant movement in general cybersecurity guidelines and benchmarks around the world, including CISA’s Secure-by-Design and -Default principles in the U.S. and similar initiatives from the UK and Australian governments, it is essentially a foregone conclusion that regulations around AI use will be announced sooner rather than later.

While much of the debate surrounding the mainstream use of AI tooling and LLMs has centered on copyright issues with training data, another perspective examines how AI is best used in cybersecurity practice. When it comes to coding, perhaps the most human quality of these tools is that they struggle with contextual security awareness just as developers do, and that is deeply concerning as more developers adopt AI coding assistants to build software. This has not gone unnoticed, and at a time of increased scrutiny of software vendors’ security practices, government-level intervention would not be a surprise.

… And Demand for AI/ML Coding Tools Will Create a Need for More Developers, Not Fewer!

Much has been written about the AI takeover, and for the better part of a year we have been subjected to a plethora of clickbait headlines spelling doom and destruction for just about every white-collar profession, and developers were not spared. After months of speculation and experimentation with LLMs in a coding context, we remain entirely unconvinced that development jobs are at collective risk.

There is no doubt that AI/ML coding tools represent a new era of powerful assistive technology for developers, but they are trained on human-created input and data, and that leaves their output far from perfect. Perhaps if every developer on the planet were a top-tier, security-minded engineer, we might see genuine cause for concern. However, just as the average adult driver vastly overestimates their ability (notice how everyone says they’re a great driver, and it’s always other people who lack skill? That’s a classic example of the Dunning-Kruger effect!), so does the development community, especially when it comes to security best practices. One Stanford study of developers using AI tooling suggests that unskilled developers leaning on this technology are likely to become dangerous.
The study claimed that participants who had access to AI assistants […]
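To make the contextual security awareness point concrete, here is a minimal sketch contrasting two ways of handling a database lookup. It is an illustrative example only, not code from the Stanford study, and the table and function names are hypothetical: the first pattern, which an assistant trained on plentiful insecure examples can easily reproduce, interpolates user input straight into the query and is open to SQL injection; the second uses a parameterized query.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Naive pattern: user input is concatenated into the SQL string,
    # so a crafted username can alter the query (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input as data,
    # not as executable SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Both functions return the same rows for benign input; only the second remains safe when the input is hostile, which is exactly the kind of context an assistant (or an unskilled developer accepting its suggestion) can miss.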

Read More

Will ChatGPT Replace Human Software Developers? Probably Not

Since the release of ChatGPT, there has been a great deal of hype around generative AI and how companies can leverage it to cut costs and democratize software and application development. Naturally, with discussions of cost-cutting and democratization come whispers about what will happen to software developers. This is a real and valid concern, but software developers’ skills, expertise and creativity are still very much needed.

While generative AI and AI code generation tools like ChatGPT have shown some promise and potential benefits, they are still in their infancy, like many other innovative technological advancements. We also don’t know what scenarios they may present down the road or what their true abilities will be when the technology matures. For instance, how will they integrate with other technologies? We don’t know what will happen when a ChatGPT-generated line of code breaks or needs to be changed. We don’t know whether they can provide a novel solution to a unique problem or what security threats they will present. Given these unknowns, technology executives should think twice about replacing experienced and creative technology talent, such as software developers, with AI code generators. Will ChatGPT create novel code to solve a unique problem never encountered before? Probably not.

A Tale as Old as Time (Or at Least a Decade)

The technology industry has been searching for and developing new ways to make certain software development tasks easier and more streamlined for years. One example is low-code/no-code. The notion of simplifying application development and replacing software developers with laypeople (citizen developers) has been around for more than a decade now, as low-code and no-code solutions have grown more popular. These solutions have promised that companies don’t need technical talent to get their software and app projects off the ground. However, if you look at their impact today, their use can result in large amounts of technical debt and almost always requires the skill of experienced software developers. The reason? Building complex software and applications is extremely difficult; it’s an art.

Low-code and no-code solutions have their rightful place and can make things easier when a company is looking to launch a simple app or static web page. They can increase the pace of development, shorten time-to-market and enable everyday people without any development skills to build such projects. However, they are not a complete solution and often overlook aspects of development that a human software developer would typically address. Without a skilled expert involved, low-code/no-code platforms often can’t solve a unique problem a company has. So, how does this relate to AI code generators like ChatGPT? Here’s how.

A Similar Situation, With One Key Difference

When thinking about their place in the development process, AI code generators are not that different from low-code or no-code solutions. The thinking is that they, too, will enable non-technical individuals to create software and applications with ease. Yet there is one key difference: they promise expertise as well. But is the expertise coming from the AI code generator or the person piloting it? The answer is simple; it is not from the code generator. There have been examples of companies and individuals that have tried using ChatGPT to build code, and they have appeared to be successful. However, without the input of the individuals using it, it never would […]

Read More