Feds create software “Secure by Design, Secure by Default” guidelines
Technology for Lawyers
Published: May 19, 2023
Several federal agencies, with input from security agencies in seven other countries, have combined forces to create a promised broad set of guidelines to make software more secure from attacks, fulfilling one part of the current administration’s national cybersecurity strategy.
The “Principles and Approaches” document (see below) is not a proposal for a set of laws or regulations, at least not yet. But it lays out one part of the Biden administration’s approach to designing secure software from the ground up.
And it has the weight of many of the world’s security and intelligence agencies behind it, including CISA, the NSA, and the FBI in the U.S., joined by agencies from Australia, Canada, the UK, and New Zealand (together with the U.S., the Five Eyes), with input also from Germany and the Netherlands.
The basic outline of the guidelines includes having software manufacturers eliminate default passwords, write software in memory-safe programming languages, and establish vulnerability disclosure programs for reporting security flaws.
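To make the first of those recommendations concrete, here is a minimal sketch (in Python, with illustrative names) of what eliminating a shared default password looks like in practice: instead of shipping every unit with the same factory credential, the manufacturer generates a unique random password for each device at provisioning time.

```python
# Sketch: replacing a shared default password (e.g. "admin"/"admin")
# with a unique per-device credential generated at provisioning time.
# Function and variable names here are illustrative, not from the guidelines.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device_password(length: int = 20) -> str:
    """Generate a cryptographically random password for one device.

    Because every device gets its own credential, there is no single
    well-known default for attackers to try across an entire fleet.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Two devices rolling off the line get two different credentials.
pw_a = provision_device_password()
pw_b = provision_device_password()
```

The key design choice is using the `secrets` module (intended for security-sensitive randomness) rather than `random`, whose output is predictable.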
The essential thrust of these guidelines fits the cybersecurity strategy’s goal of shifting responsibility for computer security from the consumer to the manufacturer.
CISA Director Jen Easterly said in a statement that “ensuring that software manufacturers integrate security into the earliest phases of design for their products is critical to building a secure and resilient technology ecosystem.”
To that end, the guidelines recommend software that is secure-by-design and secure-by-default. The former means that security is a built-in feature in software design from the ground up, and the latter means that the software is secure for the end user out of the box.
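The secure-by-default idea can be illustrated with a small configuration sketch (in Python; the settings and names are hypothetical examples, not taken from the guidelines): the out-of-the-box values are the hardened ones, and weakening any of them requires a deliberate opt-out by the user.

```python
# Sketch of a secure-by-default configuration: the defaults a user gets
# without touching anything are already the safe choices. Field names
# are illustrative assumptions for this example.
from dataclasses import dataclass

@dataclass
class ServerConfig:
    require_tls: bool = True             # encrypted connections on by default
    enforce_mfa: bool = True             # multi-factor auth on by default
    admin_interface_public: bool = False # admin panel not exposed by default
    audit_logging: bool = True           # logging on by default

# A user who never opens the settings screen still gets the secure setup.
default_cfg = ServerConfig()

# Loosening security is possible, but only as an explicit decision.
opt_out_cfg = ServerConfig(enforce_mfa=False)
```

The point is the asymmetry: safety requires no action from the end user, while any weakening of it leaves an explicit trace in the configuration.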
The guidelines provide several recommended steps for designers to build into their programs, including using memory-safe programming languages (most modern languages qualify; C, C++, and assembly do not), conducting rigorous code reviews, and making products easy for end users to operate securely.
These guidelines are really just the start of a conversation that would need to take place among all of the stakeholders in the process (which are too numerous to list). And, of course, there will be delays and pushback from the industry side.
But this is at least a good start toward limiting the danger posed by the many threat actors out there.
And, of course, AI is more dangerous than any threat actor and operates from the inside. But enough of that.