software development

GitHub Aims to Expand Copilot Scope and Reach in 2024

GitHub is gearing up to launch Copilot Workspace next year, a platform that will leverage generative artificial intelligence (AI) to automatically propose a plan for building an application based on natural language descriptions typed into the GitHub Issues project management software. Revealed at the GitHub Universe 2023 conference, Copilot Workspace will generate editable documents via a single click that can be used to create code that developers can then visually inspect, edit and validate. Any errors discovered by application developers or the Copilot Workspace platform can also be automatically fixed. In addition, summaries of the project can automatically be created and shared across an application development team. GitHub CEO Thomas Dohmke told conference attendees this “revolutionary” approach will enable developers to employ AI as a “second brain.”

In the meantime, GitHub is making an enterprise edition of Copilot available that can be trained using code connected to a private repository to ensure intellectual property is protected. GitHub is also moving to integrate GitHub Copilot with third-party developer tools, online services and knowledge outside GitHub by collaborating with, for example, DataStax, LaunchDarkly, Postman, HashiCorp and Datadog.

GitHub is likewise moving to make the generative AI capabilities it provides accessible beyond text editors. Starting next month, Copilot Chat can be accessed via a mobile application to foster collaboration by explaining concepts, suggesting code based on your open files and windows, detecting security vulnerabilities and finding and fixing code errors. Copilot Chat, based on GPT-4, will also be accessible across the GitHub website in addition to integrated development environments (IDEs) such as JetBrains and via a command line interface (CLI).

Generative AI is already having a massive impact on the rate at which applications are developed, but that code still needs to be reviewed. ChatGPT is based on a general-purpose large language model (LLM) that is trained by pulling in code of varying quality from all across the web. As a result, code generated by the platform might contain vulnerabilities or be inefficient. In many cases, professional developers still prefer to write their own code. Of course, not every programming task requires the same level of coding expertise. In many instances, ChatGPT will generate, for example, a script that can be reused with confidence across a DevOps workflow. There is no shortage of mediocre developers who are now writing better code thanks to tools such as GitHub Copilot, and soon, domain-specific LLMs will make it possible to consistently write better code based on validated code examples.

The one thing that is certain is that the volume of code written by machines is only going to increase. The challenge will be managing all the DevOps pipelines that will be needed to move increased volumes of code into a production environment. There is no doubt that AI will be applied to the management of DevOps pipelines, but for the moment, at least, the pace at which AI is being applied to writing code is already exceeding the ability of DevOps teams to manage it.
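
To make the “reusable script” point above concrete, here is a minimal sketch of the kind of glue code a generative assistant might plausibly produce and a team might reuse across a DevOps workflow: a health check that gates a pipeline stage. The endpoint URLs are hypothetical placeholders, and nothing here is taken from GitHub’s announcement.

```python
#!/usr/bin/env python3
"""Check that a list of service endpoints respond before promoting a build.

Illustrative sketch only: the endpoint list and timeout are placeholders
to be replaced with the services in your own pipeline.
"""
import sys
import urllib.request

# Hypothetical endpoints -- swap in your own health/readiness URLs.
ENDPOINTS = [
    "https://example.com/healthz",
    "https://example.com/readyz",
]


def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # URLError and HTTPError both subclass OSError
        return False


def main() -> int:
    failures = [url for url in ENDPOINTS if not check(url)]
    for url in failures:
        print(f"FAILED: {url}", file=sys.stderr)
    return 1 if failures else 0  # non-zero exit halts the pipeline stage


if __name__ == "__main__":
    sys.exit(main())
```

A script like this is low-risk to accept from an AI assistant precisely because it is short, self-contained and easy to review, which is the distinction the paragraph above draws.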

Read More

Will ChatGPT Replace Human Software Developers? Probably Not

Since the release of ChatGPT, there has been a great deal of hype around generative AI and how companies can leverage it to cut costs and democratize software and application development. Naturally, with discussions of cost-cutting and democratization come whispers about what will happen to software developers. This is a real and valid concern, but software developers’ skills, expertise and creativity are still very much needed. While generative AI and AI code generation tools like ChatGPT have shown some promise and potential benefits, they are still in their infancy—like many other innovative technological advancements. We also don’t know what scenarios they may present down the road or their true abilities when the technology matures. For instance, how will these tools integrate with other technologies? We don’t know what will happen when a ChatGPT-generated line of code breaks or needs to be changed. We don’t know if it can provide a novel solution to a unique problem or what security threats it will present. Given these unknowns, technology executives should think twice about replacing experienced and creative technology talent, such as software developers, with AI code generators. Will ChatGPT create novel code to solve a unique problem never encountered before? Probably not.

A Tale as Old as Time (Or at Least a Decade)

The technology industry has been searching for and developing new ways to make certain software development tasks much easier and more streamlined for years. One example of this is low-code/no-code. The notion of simplifying application development and replacing software developers with laypeople (citizen developers) has been around for more than a decade now, as low-code and no-code solutions have grown more popular. These solutions have promised that companies don’t need technical talent to get their software and app projects off the ground. However, if you look at the impact of these solutions today, their use can result in large amounts of technical debt and almost always requires the skill of experienced software developers. The reason? Building complex software and applications is extremely difficult; it’s an art. Low-code and no-code solutions have their rightful place and can make things easier if a company is looking to launch a simple app or static web page. These solutions can increase the pace of development, shorten time-to-market and enable everyday people without any development skills to build simple projects themselves. However, they are not a complete solution and often overlook aspects of development that a human software developer would typically address. Without a skilled expert involved, low-code/no-code platforms often can’t solve a unique problem a company has. So, how does this relate to AI code generators like ChatGPT? Here’s how.

A Similar Situation—With One Key Difference

When thinking about their place in the development process, AI code generators are not that different from low-code or no-code solutions. The thinking is that they will also enable non-technical individuals to create software and applications with ease. Yet, there is one key difference—they promise expertise, too. But is the expertise coming from the AI code generator or the person piloting it? The answer is simple: It is not from the code generator. There have been examples of companies and individuals that have tried using ChatGPT to build code, and they have appeared to be successful. However, without the input of the individuals using it, it never would […]

Read More

CloudBees CEO: State of Software Development is a Disaster

CloudBees CEO Anuj Kapur told attendees at a DevOps World event today that, with developers spending only 30% of their time writing code, the current state of software development in enterprise IT organizations is a disaster. After more than 14 years of effort, the promise of DevOps—in terms of being able to accelerate the rate at which applications are being deployed—remains largely academic, said Kapur. In fact, the effort to shift more responsibility for application security further left toward developers has only increased the amount of cognitive load and reduced the amount of time available to write code, he noted.

However, with the rise of generative artificial intelligence (AI), an inflection point has arrived that will dramatically increase the velocity at which applications are built and deployed, said Kapur. The challenge will be achieving that goal without increasing the cognitive load on developers. That cognitive overload results in 70% of developers’ time not being productive within organizations that often hire thousands of developers, he noted. Despite all the DevOps issues that need to be addressed, AI advances promise improvement. The overall DevOps market is still relatively young, given the current level of adoption, said Kapur. “We continue to believe the market is early,” he said.

Today, CloudBees took the wraps off the first major update to the open source Jenkins continuous integration/continuous delivery (CI/CD) platform to have been made in the past several years. At the same time, the company also unveiled a DevSecOps platform based on Kubernetes that is optimized for building and deploying cloud-native applications based on portable Tekton pipelines. That latter platform provides the foundation through which CloudBees will, in the months ahead, apply generative AI to software engineering to, for example, create unit tests on the fly and automate rollbacks. In addition, DevSecOps capabilities will be extended all the way out to integrated development environments (IDEs) to reduce the cognitive load on developers. The overall goal is to reduce the number of manual processes that create bottlenecks that make it challenging to manage DevOps at scale.

Criticism of the level of developer productivity enabled by DevOps compared to other development approaches needs to be tempered, said Tapabrata Pal, vice president of architecture for Fidelity Investments, because it still represents a significant advance. There is still clearly too much toil, but the issues that impede the pace at which developers can effectively write code tend to be more cultural than technical, he added. Organizations are not inclined to automatically trust the code created by developers, so consequently, there is still a lot of friction in the DevOps process, noted Pal. In theory, advances in AI should reduce that friction, but it’s still early days in terms of the large language models (LLMs) that drive generative AI platforms and their ability to create reliable code, he added. That should improve as LLMs are specifically trained using high-quality code, but in the meantime, the pace at which substandard code might be generated could overwhelm DevOps processes until AI is applied there as well, said Pal.

Thomas Haver, master software engineer for M&T Bank, added that while assisted AI technologies will have a major impact, it’s not reasonable to expect large organizations to absorb them overnight. Patience will be required to ensure advances are made in ways that […]

Read More

JFrog swampUP: Addressing the Advent of AI

At JFrog swampUP 2023, the buzz was all about AI, especially with JFrog’s announcement of Machine Learning (ML) Model Management capabilities. These conversations around AI and ML reflected these technologies’ growing influence and importance in the DevOps world. How much of the generative AI conversation is just hype, though? And what does that mean for AI and ML as a whole? Alan Shimel, CEO of Techstrong Group, and I sat down with Stephen Chin, VP of DevRel at JFrog, to find out.

As far as Chin is concerned, even as more companies create and leverage AI models, these models must be managed like any other software component. Chin said JFrog Artifactory acts as a staging ground to operationalize models using DevSecOps best practices. Algorithms and models will continue to grow in size and complexity, and they will require robust processes around deployment and management – just like any other software artifact. The key, Chin said, is to think of ML as just another development language and leverage tools that standardize and streamline working with it. Compared to traditional enterprise applications, DevOps workflows for AI/ML are still relatively immature, but Chin said JFrog’s new model management capabilities aim to provide that missing automation, using DevSecOps best practices for governance, security and licensing compliance.

Additionally, Chin noted that AI/ML have become essential for development teams to keep up with the explosive demand for code. At this point, AI has become table stakes. In the AI arms race, the winners are those who understand AI has become a vital development tool to enhance productivity. In terms of job security, the losers are the ones who can’t keep up with the volume of code. According to Chin, you are out of the running if you don’t embrace AI. Looking ahead, AI will not make developers obsolete, though – quite the opposite. Given the quasi-unlimited appetite for new code, Chin emphasized that developers who embrace AI will have more work than ever. One way to think of it is that AI provides a new form of “outsourcing” to boost human productivity, much like previous waves of innovation in computer science.

When it comes to security, there are still challenges that need to be addressed; code generated by today’s AI solutions still has significant drawbacks, such as potential data bias, lack of explainability and simple errors. In the long term, though, Chin believes AI itself will provide the solution to securing an exponentially growing codebase, given its superior scale. Just as AI will make individual developers more productive, it can also supercharge security teams – but it can also empower bad actors. The key will be continuing to democratize the benefits to even the playing field. AI promises to be a transformative technology on the scale of the Bronze Age or quantum computing, Chin said, but the path forward will require humans and machines working together to ensure it’s used for good.

It’s clear that the pace of innovation in AI and ML is rapidly accelerating. As these technologies become further democratized and integrated into developer workflows, they promise to transform how software is built and secured, Chin said. Companies must take advantage of this technology innovation by providing the pipelines and governance for this software revolution, he added. Chin believes the future will […]
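
As a concrete illustration of the “models as artifacts” idea Chin describes, the sketch below pushes a trained model file into a generic Artifactory repository over the REST API and sends a checksum alongside it. This is not JFrog’s documented ML Model Management workflow; the host name, repository (“models-local”), model path, version and token are all hypothetical placeholders.

```python
"""Publish a trained model file as a versioned artifact.

Illustrative only: the host, repository name and token below are hypothetical
placeholders, not values from JFrog's announcement.
"""
import hashlib
import os

import requests  # third-party: pip install requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"
REPO = "models-local"        # a generic repository set aside for ML models
MODEL_PATH = "model.onnx"    # the trained model produced by the ML pipeline
VERSION = "1.4.2"            # version the model like any other component
TOKEN = os.environ["ARTIFACTORY_TOKEN"]


def sha256(path: str) -> str:
    """Checksum the file so the registry entry can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


target = f"{ARTIFACTORY_URL}/{REPO}/fraud-detector/{VERSION}/{os.path.basename(MODEL_PATH)}"

with open(MODEL_PATH, "rb") as f:
    resp = requests.put(
        target,
        data=f,  # streams the file body to the repository path
        headers={
            "Authorization": f"Bearer {TOKEN}",
            # Artifactory can check the deployed bytes against this digest.
            "X-Checksum-Sha256": sha256(MODEL_PATH),
        },
        timeout=60,
    )
resp.raise_for_status()
print(f"Deployed {MODEL_PATH} to {target}")
```

Once the model lives in the repository, it can move through promotion, scanning and license checks like any other binary, which is the point of treating ML as “just another development language.”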

Read More

The 2021 CISSP Exam and Application Security: What’s Changed?

Published July 1, 2021. Written by Michael Solomon. Michael G. Solomon, PhD, CISSP, PMP, CISM, PenTest+, is a security, privacy, blockchain, and data science author, consultant, educator and speaker who specializes in leading organizations toward achieving and maintaining compliant and secure IT environments.

The Certified Information Systems Security Professional (CISSP) certification, granted by the International Information System Security Certification Consortium Inc., or (ISC)2, is one of the most prestigious vendor-neutral information systems security leadership certifications. The CISSP certification is a credential that signifies its holder possesses professional experience and demonstrates a high level of knowledge across information systems security domains. (ISC)2 periodically updates the information systems security Common Body of Knowledge (CBK) to reflect the state of today’s organizations and environments. The latest version of the CISSP exam was released on May 1, 2021. This updated exam addresses the latest cybersecurity challenges. Some of the noticeable changes from the previous exam are in the software security domain. New CISSP exam takers must demonstrate a deeper knowledge of developing secure software than those who took previous editions of the exam. Software security has taken on a higher profile. Let’s look at how the 2021 CISSP exam changes add focus on developing secure software.

Why the CISSP certification is important

The CISSP certification is not the only cybersecurity certification, but it is one of the most respected certifications in the industry. Although criticized as an overly broad certification, its focus is on demonstrating a working knowledge of eight defined domains that cover most cybersecurity concerns. The CISSP exam focuses more on cybersecurity leadership and a grasp of pertinent concepts and topics, as opposed to the deep knowledge of a specialized practitioner. The certification tends to be more sought after by those either in or pursuing management and leadership positions. There are currently over 147,000 CISSPs worldwide, and the certification enjoys international recognition as a high-quality and difficult-to-attain certification. The CISSP was the first information security credential to meet the ISO/IEC 17024 standard requirements, which define criteria for certification-granting organizations. The CISSP is also approved by the Department of Defense to satisfy multiple DoDD 8570 Level III certification requirements. And in May 2020, the UK National Recognition Information Centre (UK NARIC) granted the CISSP a Level 7 ranking, which equates the certification with a master’s degree. The popularity of the CISSP certification, along with its longevity and demonstrated rigor, makes it an attractive goal for managers and executive leadership in information systems security roles. In short, there are many information systems security leaders who are CISSPs. Whatever (ISC)2 deems important in its CBK and exams will be considered important by its credential holders.

Changes to the 2021 CISSP exam related to application security

Domain 8 of the CISSP exam is Software Development Security, and it represents 11% of the questions test takers will encounter. The previous edition of the CISSP exam weighted Domain 8 at 10%. A single percentage-point increase in weight may not seem like very much, but some of the covered content has changed quite a bit. Previous coverage of Software Development Security was a bit generic and high-level, but the 2021 CISSP exam objectives are more granular, with some interesting additions. To give an overview of the CISSP exam objectives, here are the eight domains: Security and Risk Management, Asset Security […]

Read More

The State of Mobile App Security 2021

Published June 24, 2021. Written by Ed Tittel. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel.

The ever-increasing popularity and use of smartphones dwarfs that of more conventional computing devices, such as desktops, laptops, tablets and so forth. Here are some numbers to put things in perspective: According to Statista, the total number of mobile devices should reach 17.71B by 2024, up from just over 14B such devices in use in 2020. The same source puts the size of the installed base of PCs worldwide at 1.33B in 2019, with a slight decline over the period from 2013-2019. Interestingly, Microsoft recently claimed 1.3B “active Windows 10 users,” which tells us the overwhelming majority of PC users seem to favor its operating system.

Putting Mobile Devices Into Proportion

The real impact of this comparison, of course, is that mobile devices outnumber PCs by over an order of magnitude. In addition, that balance continues to swing ever more firmly in favor of mobile devices. Mobile devices run mobile apps. Indeed, this simple observation makes mobile app security crucial, simply because most of the human race (mobile devices currently outnumber humans by almost 2 to 1) uses such devices and the apps that go with them to communicate, access the internet and get on with the business of living.

The Continuing Sad State of Mobile App Security

Even as mobile apps keep proliferating, and more and more users rely on them to learn, work and play, the state of mobile app security can only be described as deplorable. On the one hand, App Annie reported that mobile app usage grew 40% year-over-year in Q2 2020. On the other hand, security firm Synopsys entitled its most recent survey Peril in a Pandemic: The State of Mobile App Security. The company found that significant causes for concern about the security of mobile apps were both abundant and alarming, primarily owing to three major factors:

Commonly used apps that displayed well-known open source vulnerabilities
Unsecured and unencrypted sensitive data in mobile application code that presents potential points for information leakage, unwanted access and disclosures
Frequent assignment of higher levels of access and permission to mobile apps than the “principle of least privilege” (PLP) would allow

All of these unsafe programming or administrative practices leave mobile apps overly open to attack and potential compromise. The report analyzed over 3,000 mobile apps and reported some scary statistics, namely:

63% of apps included known security vulnerabilities, with an average of 39 vulnerabilities per app, of which 44% were rated “high risk,” 94% had publicly documented fixes, and 73% had been reported two or more years ago.
Thousands of sensitive data items were exposed in the application code, including over 2K passwords, tokens and keys, over 10K email addresses, and nearly 400K IP addresses and URLs.
Use of overly powerful device permissions showed just over 33K instances of normal permissions, just over 15K sensitive permissions, and just over 10K permissions “not intended for third-party use.”

What Can (and Should) Mobile Developers Be Doing? […]
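
The least-privilege finding above is straightforward to check for in a build pipeline. Below is a minimal sketch, in Python, of an audit that compares the permissions declared in a decoded AndroidManifest.xml against an allow-list the team maintains; the allow-list contents and the idea of running it against apktool output are hypothetical examples for illustration, not material from the Synopsys report.

```python
"""Flag Android permissions that go beyond what the app actually needs.

Minimal sketch: the allow-list below is a hypothetical example of what a team
might declare for its own app. Run it against a decoded AndroidManifest.xml
(for example, one produced by apktool).
"""
import sys
import xml.etree.ElementTree as ET

# Attribute namespace used for android:name in the manifest.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Permissions the team has decided the app legitimately needs (example values).
ALLOWED = {
    "android.permission.INTERNET",
    "android.permission.ACCESS_NETWORK_STATE",
}


def excess_permissions(manifest_path: str) -> list[str]:
    """Return declared permissions that are not on the allow-list."""
    root = ET.parse(manifest_path).getroot()
    declared = {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }
    return sorted(p for p in declared if p and p not in ALLOWED)


if __name__ == "__main__":
    extras = excess_permissions(sys.argv[1])
    for perm in extras:
        print(f"Review: {perm} exceeds the declared least-privilege set")
    sys.exit(1 if extras else 0)
```

Wiring a check like this into CI makes permission creep a visible, reviewable event rather than a silent default.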

Read More

Understanding the Colonial Pipeline Ransomware Attack

Published June 17, 2021. Written by Ed Tittel. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel.

On or about May 7, 2021, Colonial Pipeline had to shut its pipelines down because of a ransomware attack. Colonial is a major fuel pipeline operator in the southern and eastern US. Its pipelines stretch from Texas to New Jersey, and reach into Louisiana, Mississippi, Alabama, Georgia, both Carolinas, Tennessee, Virginia, Maryland and Pennsylvania. After a week of downtime that saw gas shortages in many of the eastern states just mentioned, the company announced on May 12 that it was restarting pipeline operations. By May 15, those operations had more or less returned to normal. One burning question remains: What happened?

A Word from Joseph Blount, Colonial Pipeline’s CEO

In an interview with the Wall Street Journal, Blount recounted that he authorized a ransom payment of $4.4 million. He did so because company executives, in the words of the WSJ story, “were unsure how badly the cyberattack had breached its systems or how long it would take to bring the pipeline back.” According to the WSJ, “Colonial Pipeline provides roughly 45% of the fuel for the East Coast…” Essentially, Colonial Pipeline chose to disregard long-standing advice from the FBI and other law enforcement agencies not to pay ransom demands in such situations. Blount defended that decision and is quoted as saying he authorized payment because “…it was the right thing to do for the country.”

More About the Attack

Security experts are in agreement with US government officials who attribute the attack to a criminal gang based in eastern Europe named DarkSide. This shadowy organization builds malware to attack systems for extortion, and shares the proceeds obtained from its ransomware with affiliates who actually carry out the attacks that see its ransomware take over business and government systems all over the world. As reported in the WSJ story, Colonial worked with experts who had prior experience dealing with the organization behind the attack. That said, the company declined to share details on the negotiations involved in making the payment, or how much of its losses might (or might not) be covered by its cyber insurance. Once the attackers received payment, they provided a decryption tool to unlock affected systems. Underscoring the wisdom of law enforcement’s advice, Colonial also disclosed that the decryption key did not provide everything needed to restore its systems to normal operation.

According to CNN, and contrary to many other reports, the sponsoring DarkSide organization is not “believed to be state-backed.” Instead, Lior Div, CEO of cybersecurity firm Cybereason, describes DarkSide as a “private group that was established in 2020.” That said, consensus is emerging that DarkSide operates in Russia for two compelling reasons. According to CNN, “its online communications are in Russian, and it preys on non-Russian speaking countries.” Div is further quoted as saying “Russian law enforcement typically leaves groups operating within the country alone, if their targets are elsewhere.” DarkSide runs what CNN and others call a “ransomware-as-a-service” business. That is, it builds tools that it makes available to other criminals, who then use […]

Read More

Facebook Scraping Incident Leaks Info for a Half-Billion Users

Published June 10, 2021. Written by Ed Tittel. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel.

In early April, numerous sources disclosed the discovery of a pool of Facebook records including information on more than 530 million of its users. The leaked information, posted to a website for hackers, included users’ names, dates of birth and phone numbers. Business Insider’s (BI) April 3 story represented some of the first reporting on this breach, and focused on a database that security researcher Alon Gal of cybercrime intelligence firm Hudson Rock discovered in January 2021. BI reports further that it “reviewed a sample of the leaked data and verified several records by matching known Facebook users’ phone numbers with IDs listed in the data set.”

Facebook’s Response and Explanation

The BI story states that a “Facebook spokesperson told Insider that the data has been scraped because of a vulnerability that the company patched in 2019.” Scraping attacks involve downloading account pages from a website and parsing their contents to discover personal information amongst the data the underlying web markup contains. The vulnerability involved was based on the ability to import contact lists from users’ cellphones (with their permission) to extend friend lists and associated data. But while the vulnerability is no longer open to exploit, even PII (personally identifiable information) data from 2019 can serve as an entry point for various types of attack, including impersonation, identity theft, targeted phishing and potential fraud. According to numerous sources who’ve analyzed the database in question, users from 106 countries are included in its contents. Of the over 500 million users represented therein, US-based users number over 32 million, with 11 million more from the UK and an additional 6 million from India. For most users, their data includes Facebook IDs, phone numbers, full names, locations, dates of birth and self-descriptions (bios). For some users, email addresses are also disclosed.

How the Breach Was Identified

Mr. Gal found the leaked data in January when a hacking forum user advertised a bot that could provide phone numbers for hundreds of millions of Facebook users at a price. At around that same time, Joseph Cox at Motherboard reported the existence of this automated Telegram bot, along with a proof-of-function demo, with charges ranging from US$20 for information on a single user account up to US$5K for 10,000 users. Motherboard reports it tested the bot and confirmed that it provides a valid phone number for a Facebook user known to them who elected to keep that number private. The exploit was documented in 2019 for Instagram users (Instagram is a subsidiary of Facebook) and included this statement: “It would … enable automated scripts and bots to build user databases that could be searched, linking high-profile or highly-vulnerable users with their contact details.” This is apparently just what the database that Gal discovered contains. Since his initial findings in January, that database has been posted to a hacking forum at no charge. Thus, it’s available to anyone able to access the site. And indeed it could provide ample data to drive attacks even […]
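
The scraping mechanism described above relies on nothing more exotic than fetching pages and pattern-matching the markup. As a purely defensive illustration, the sketch below shows how a developer might scan their own rendered pages for phone-number-like strings before shipping; the URL list is a hypothetical placeholder for pages you control, and the regex is a deliberately rough approximation.

```python
"""Scan your own rendered pages for phone-number-like strings before release.

Defensive illustration only: the page list is a hypothetical placeholder for
pages you own and are allowed to test.
"""
import re
import urllib.request

# Pages you control and want to audit before release.
PAGES = ["https://example.com/profile/demo"]

# Rough pattern for common phone-number formats; tune for your locale.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

for url in PAGES:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    hits = PHONE_RE.findall(html)
    if hits:
        print(f"{url}: {len(hits)} phone-like string(s) found in markup; review before shipping")
```

A check like this does not prevent scraping, but it catches the simplest failure mode the article describes: PII sitting in plain view inside the markup a scraper downloads.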

Read More

Pandemic Legacy: Remote Work and Digital Transformation

Published June 3, 2021. Written by Michael Solomon. Michael G. Solomon, PhD, CISSP, PMP, CISM, PenTest+, is a security, privacy, blockchain, and data science author, consultant, educator and speaker who specializes in leading organizations toward achieving and maintaining compliant and secure IT environments.

The COVID-19 pandemic drove many companies to rapidly expand their support for remote work. This change was not simply to appease a changing workforce; it was necessary to survive. When most of the workforce was suddenly told to stay home, many organizations had to either adapt or cease to exist. The increased reliance on transforming previously manual or hybrid procedures to purely digital ones required updated (or completely new) applications, supporting software and infrastructure. Digital transformation was no longer an aspirational goal — it became a survival necessity. Let’s look at some fundamental changes the pandemic forced on companies and consumers, and how those changes affect all aspects of doing business today, including how software development organizations deliver secure applications in a decentralized world.

Digital transformation plans were accelerated

Prior to 2020, face-to-face interactions were not only the norm, but also the preferred way to communicate and carry out business. While a growing number of younger workers and consumers who preferred digital interaction were helping digital communication gain popularity, total adoption was a long way off. Digital transformation (DT) is the common term used to represent the process of replacing manual business processes or services with digital processes. The push for DT was underway in 2020, but only as it aligned with long-term strategy. A few existing companies and many startups relied on digital processes, but most companies approached DT conservatively. After all, the requirement to produce revenue today trumped the desire to innovate for the future.

Once the pandemic hit, companies of all types suddenly had to carry on unhindered without face-to-face interactions. Some companies were built on the concept of offices full of workers. Others depended on the ability to serve a steady flow of physical customers. Regardless of the business model, the disruption of face-to-face interaction required solutions where technology could provide the connection. One of the first shifts was to simulate the business meeting, customer interaction or even the classroom. Zoom went from a video conferencing tool to a generic term for an online meeting. The term can even be used as a verb, as in “I’ll Zoom you.” COVID-19 shifted DT from a long-term strategic goal to a survival requirement. Although not all companies could simply “go digital,” many could. Restaurants, airlines, hotels and a long list of other service-oriented companies had to undergo radical transformations. Other types of companies, such as insurance companies, software development organizations and banks, could continue operations, but had to find a different way. Reliance on face-to-face interactions had to defer to digital transactions. Customer service had to rise to the occasion and provide an acceptable level of service using remote workers and digital connections. Some companies, like Amazon, were up to the challenge. After all, they were already relying on a decentralized model for much of their business process.
They encountered challenges at their warehouses that relied on many human workers, but the rest of their organization had already embraced digitization and automation. Other organizations were not as fortunate and had to accelerate their digital […]

Read More

How NIST SP 800-53 Revision 5 Affects Application Security

Published May 27, 2021. Written by Michael Solomon. Michael G. Solomon, PhD, CISSP, PMP, CISM, PenTest+, is a security, privacy, blockchain, and data science author, consultant, educator and speaker who specializes in leading organizations toward achieving and maintaining compliant and secure IT environments.

The National Institute of Standards and Technology (NIST) is a non-regulatory agency of the U.S. Department of Commerce that, among other things, maintains physical science laboratories and produces guidance for assessing compliance with a wide range of standards. NIST has a long history of providing documents that help organizations and agencies assess compliance with cybersecurity standards and implement changes to strengthen compliance and security. The NIST Special Publication (SP) series provides “guidelines, technical specifications, recommendations and reference materials, comprising multiple sub-series.” One sub-series, SP 800, focuses on cybersecurity, specifically containing guidelines for complying with the Federal Information Security Modernization Act (FISMA). (If you are interested in digging further into NIST cybersecurity offerings, check out the relatively new SP 1800 cybersecurity practice guides as well.)

NIST SP 800-53, “Security and Privacy Controls for Information Systems and Organizations,” provides guidance for selecting the most effective security and privacy controls as part of a risk management framework. The latest revision of NIST SP 800-53, revision 5, was released on September 23, 2020. Revision 5 includes requirements for RASP (runtime application self-protection) and IAST (interactive application security testing). While these approaches to application security are not new, making them a required element of a security framework is a first. Let’s examine how NIST SP 800-53 revision 5 affects the secure application development process.

What NIST SP 800-53 contains

The initial version of SP 800-53 was released in 2005, titled “Recommended Security Controls for Federal Information Systems.” SP 800-53 always focused on federal information systems, at least up through revision 4. Then, revision 5 dropped the word “federal” from its title. That means SP 800-53 is now a more general guidance document that applies to commercial information systems as well as federal systems. This may seem to be a minor change, but it really means that NIST has expanded the scope of its recommendations. SP 800-53 is not a mandate for commercial organizations, but it does signal a strengthening of guidance from the federal government for non-federal environments.

SP 800-53 contains a catalog of security and privacy controls, organized into 20 control families. Chapter two of SP 800-53 “describes the fundamental concepts associated with security and privacy controls, including the structure of the controls, how the controls are organized in the consolidated catalog, control implementation approaches, the relationship between security and privacy controls, and trustworthiness and assurance.” The other major chapter of SP 800-53, chapter three, presents the catalog of security and privacy controls itself, each of which includes a discussion of that control’s purpose and how it fits into a layered security approach. The goal of SP 800-53 is to provide a consolidated guidance document that describes security and privacy controls, how they are related to one another, and how to best select, deploy and assess the controls required for specific use cases.

What revision 5 means to application security

Although SP 800-53 revision 5 provides general guidance for selecting security and privacy controls, a noticeable portion of changes since revision 4 focus on software. As […]
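
Revision 5’s RASP requirement is easier to picture with a toy example. The sketch below is not a real RASP product and is not drawn from SP 800-53 itself; it only illustrates the underlying idea that an application inspects its own inputs at runtime and refuses obviously malicious ones. The patterns and function names are hypothetical.

```python
"""Toy illustration of the RASP idea: the application guards itself at runtime.

Not a real RASP product; real solutions instrument the runtime far more
deeply. Patterns and names here are hypothetical examples.
"""
import functools
import re

# Crude signatures for demonstration only.
SUSPICIOUS = [
    re.compile(r"(?i)\bunion\s+select\b"),  # classic SQL injection probe
    re.compile(r"(?i)<script\b"),           # reflected XSS attempt
]


class BlockedInputError(ValueError):
    """Raised when a call argument matches a known-bad pattern."""


def self_protect(func):
    """Inspect string arguments at call time and block suspicious ones."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and any(p.search(value) for p in SUSPICIOUS):
                raise BlockedInputError(f"blocked suspicious input: {value!r}")
        return func(*args, **kwargs)
    return wrapper


@self_protect
def lookup_user(name: str) -> str:
    # Application logic would query a datastore here.
    return f"looking up {name}"


if __name__ == "__main__":
    print(lookup_user("alice"))                 # passes
    try:
        lookup_user("alice' UNION SELECT 1--")  # blocked at runtime
    except BlockedInputError as exc:
        print(exc)
```

IAST works from a similar vantage point, observing the application from the inside while tests exercise it, which is why both controls target behavior that static review alone tends to miss.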

Read More