From Reaction to Robots: Riding the AI Wave in 2024
As we navigate another year of relentless zero-day breaches, legislative pivots, an explosion of AI tooling and threat actors growing bolder and more desperate, it’s safe to say that getting comfortable with change is a requirement for thriving in the technology industry.
We occupy a notoriously unpredictable space, but that’s half the fun. Compared to many other verticals, technology, and cybersecurity in particular, is relatively youthful, and we can all look forward to a future that blossoms in sophistication alongside the technology we swear to protect.
So, what can we expect in the industry in 2024? We put our heads together, looked into our crystal ball, and these were the results:
Government Regulations Around AI Will Turn the Industry Upside Down
It was the talk of the conference circuit in 2023, with several high-profile presentations at Black Hat, DEF CON, Infosecurity Europe and many more warning of the explosive changes we can expect from AI implementation across every industry, especially cybersecurity. As tends to happen with low barriers to entry for such transformative technology, adoption has outpaced any official regulation or mandates at the government level.
With significant movements in general cybersecurity guidelines and benchmarks around the world, including CISA’s Secure-by-Design and -Default principles in the U.S. and similar initiatives from the UK and Australian governments, it is essentially a foregone conclusion that regulations around AI use will be announced sooner rather than later.
While much of the debate surrounding the mainstream use of AI tooling and LLMs has centered on copyright issues with training data, another perspective examines how AI is best used in cybersecurity practices. When it comes to coding, perhaps AI’s most human quality is that it shares our struggle to display contextual security awareness, and this factor is deeply concerning as more developers adopt AI coding assistants to build software. This has not gone unnoticed, and at a time of increased scrutiny of software vendors’ security practices, government-level intervention would not be a surprise.
… And Demand for AI/ML Coding Tools Will Create a Need for More Developers, Not Fewer!
Much has been written about the AI takeover, and for the better part of a year, we have been subject to a plethora of clickbait headlines that spell doom and destruction for just about every white-collar profession out there, and developers were not spared.
After months of speculation and experimentation with LLMs in a coding context, we remain entirely unconvinced that development jobs are at collective risk. There is no doubt that AI/ML coding tools represent a new era of powerful assistive technology for developers, but they are trained on human-created input and data, and that has rendered the results far from perfect. Perhaps if every developer on the planet were a top-tier, security-minded engineer, we might see genuine cause for concern.
However, just as the average adult driver vastly overestimates their ability (notice how everyone says they’re a great driver, and it’s always other people who lack skill? That’s a classic example of the Dunning-Kruger effect!), so too does the development community, especially when it comes to security best practices. One Stanford study into developer use of AI tooling suggests that unskilled developers using this technology can become dangerous: participants who had access to AI assistants were more likely to introduce security vulnerabilities for the majority of programming tasks, yet also more likely to rate their insecure answers as secure. This poses a significant issue. Less-skilled developers will be enabled to introduce security issues faster, and if anything, this will only increase the need for security-skilled developers with the knowledge and expertise to code securely and use AI technology safely.
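A minimal sketch of the pattern that study describes, using a hypothetical user-lookup function (the names `find_user_insecure` and `find_user_secure` are ours, not from any real assistant): the concatenated query looks plausible and is exactly the kind of code a developer might rate as secure, while the parameterized version treats the same input as data.

```python
import sqlite3

# Illustrative only: a lookup an AI assistant might plausibly suggest.
def find_user_insecure(conn, username):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the driver binds the input as a value, not as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # 2: injection returned every row
print(len(find_user_secure(conn, payload)))    # 0: input matched no real name
```

Both functions pass a casual test with a well-behaved username, which is precisely why a developer without security training can believe the first one is fine.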
We Will See Consequences for Software Vendors Who Don’t Ship Secure Code
CISA director Jen Easterly has made it abundantly clear that software vendors should not be permitted to “pass the buck” when it comes to security within their products, highlighting that the current responsibility for software safety is largely passed to the consumer.
We agree with this view, and we believe it will take a shakeup of this magnitude for code-level security, not to mention appropriate developer education, to be taken seriously.
While CISA’s powers only extend so far (essentially, it can only enforce secure-by-design practices on vendors that sell to federal agencies), this still presents a new security benchmark for many large software vendors to hit. Colonial Pipeline, SUNBURST and, more recently, the MOVEit data breach were all large-scale cyberattacks that affected government-level systems at some point, and with these new guidelines in play, there is a real possibility that future highly visible breaches will receive greater scrutiny and reprimand.
Reactive Security Will Start to Be Seen as Old-School
As the goal of increased cyber resilience continues to dominate cyber strategies across multiple verticals, those who rely on reaction and incident response as the only core tenets of their plan will find themselves in a place of unacceptable exposure and risk.
Security professionals must act swiftly in the face of adversity and outright attack, but modern times call for modern solutions, and we simply cannot afford to take a less-than-holistic approach. “Shift left” needs to be more than a rapidly aging buzzword; code-level security should be prioritized, alongside upskilling and verifying the competence of the developers working on the software and critical digital infrastructure we take for granted. After all, who wants the security program equivalent of an Etch-a-Sketch when they could have an iPad Pro?
Now, more than ever, governments and enterprises alike must commit themselves to a preventative, high-awareness security program in which every member of staff is enabled to share responsibility. It’s not enough to cite the cybersecurity skills gap as a reason for falling behind; investment in security-aware developers and fostering collaboration between them and their AppSec counterparts should be a driving force in remaining as secure as possible, both as an organization and within the production of software.
Matias Madou, co-founder and CTO at Secure Code Warrior, contributed to this article.