Katie Bowen is vice president, public sector at Synack.
President Biden made his biggest move yet on artificial intelligence this week, issuing an executive order that trains the full scope of the administration’s authority on emerging risks posed by the technology.
The White House has billed the order as “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.” That may be true for the Oval Office, but the private sector – to include the defense industrial base – will need to take similarly sweeping action to successfully head off AI-related security breaches.
That means companies with a stake in our AI-driven future must kick their own cybersecurity efforts into high gear to keep pace with shifting security requirements.
Human-led, continuous security testing of AI technology is a great (and necessary) place to start. The White House recognizes this: the Biden administration is directing the National Institute of Standards and Technology to set “rigorous standards for extensive red-team testing” to ensure AI systems can be trusted before and after they are released. The Department of Homeland Security will apply those testing standards to critical infrastructure sectors like energy and financial services, according to a fact sheet accompanying the order. This directive is an underscore of the Joint Cyber Defense Collaborative’s 2023 Planning Agenda to deepen relationships with critical infrastructure, such as energy.
Additionally, the order establishes an “advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.” As the technology industry shifts to a “Secure By Design, Secure By Default” stance, AI will aid in uncovering and triaging vulnerabilities and will enable organizations to develop more comprehensive vulnerability management programs.
At Synack, we stand ready to lead and support Large Language Model (LLM) system testing, triage and on-demand patch verification by offering private and public sector program managers, AI developers and product leads access to 1,500+ researchers through our security testing platform. The breadth of talent in the Synack Red Team has uniquely positioned us to meet the security challenges posed by our customers’ use of fast-growing generative AI technology.
The need for human-led testing reflects a paradox of AI security: To clear the way for the next generation of AI technologies – including tools capable of automatically finding critical security vulnerabilities – human pentesters have to help. No AI tool has replaced the creativity and ingenuity of expert security researchers. In fact, AI technology has so far introduced whole new categories of security flaws for many organizations, as underscored by the first version of the OWASP Top 10 list of vulnerability types for Large Language Models (LLMs) Applications. In a symposium held for the cybersecurity community in Washington, D.C. earlier this month, we heard loud and clear about the excitement and challenges for industry and government in this area.
That’s not to say AI has no place in security testing. SRT members are already leveraging AI systems to detect certain types of vulnerabilities, like basic SQLi flaws, allowing strong pentesters to become even stronger. Synack is also deploying AI to automate parts of the vulnerability reporting process where appropriate, driving efficiencies in our platform. And there’s much more to come.
Maybe Just the Beginning for AI
Still, we may be years away from fully harnessing what the White House has described as “AI’s potentially game-changing cyber capabilities to make software and networks more secure.”
The Biden administration faces the unenviable task of putting guardrails on AI while not stifling that kind of “game-changing” innovation. The order covers a wide range of AI topics that extend well beyond the security testing space, and it remains to be seen how it will impact everything from civil rights to government agencies’ AI procurement. It’s clear that responsible AI initiatives, including those spearheaded by the Department of Defense’s Chief Digital and Artificial Intelligence Office, played a foundational role informing this presidential action.
In the meantime, this week’s order is a welcome step toward strengthening the privacy and security safeguards surrounding AI. It builds on the administration’s AI Cyber Challenge, a DARPA-led initiative unveiled earlier this year to drive automated and scalable AI software security solutions with some $20 million in prizes. It also comes on the heels of voluntary commitments from 15 leading AI companies to improve the security of their software products before releasing them publicly, among other steps.
To learn more about how Synack can protect your AI technologies and the critical systems they support, schedule a demo here.