Infosecurity Magazine recently published an article titled ‘ChatGPT Leveraged to Enhance Software Supply Chain Security.’
In the article, Neatsun Ziv, CEO and co-founder of OX Security, said that AI tools will give developers faster and more accurate data than other tools, allowing them to fix security issues far more easily. Harman Singh, managing director and consultant at Cyphere, said that he expects ChatGPT and other generative AI models to improve the accuracy, speed and quality of the vulnerability management process.
In my opinion, we really don’t need ChatGPT or other generative AI models writing code or integrated into vulnerability management processes. These tools are far too rudimentary and unreliable for such important tasks.
We need to train software developers in secure coding: on general standards such as the Building Security In Maturity Model (BSIMM), the Software Assurance Maturity Model (OpenSAMM) and the Open Web Application Security Project (OWASP), and on the specific frameworks they use, such as Angular, Laravel, Flutter, Ruby on Rails, .NET and others.
We need strong access controls over repos and over who can push updates to them. We need tooling that creates SBOMs, detects bugs and vulnerabilities in code, and analyses dependencies for vulnerabilities and excessive permissions, among other things. We need effective and repeatable security architecture, patch management and vulnerability management tools and processes. We need software developers who are competent in threat modelling as well as in security-by-design and privacy-by-design principles.
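To make the SBOM point concrete: SBOM tooling typically emits a standard format such as CycloneDX, which downstream scanners consume to match components against vulnerability databases. Here is a minimal sketch of a CycloneDX-style JSON document, assuming a hypothetical dependency list chosen purely for illustration:

```python
import json

def minimal_cyclonedx_sbom(components):
    """Build a minimal CycloneDX 1.4 JSON SBOM from (name, version) pairs."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "version": 1,
        "components": [
            # Each dependency becomes a component entry
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

# Hypothetical dependency list, for illustration only
sbom = minimal_cyclonedx_sbom([("left-pad", "1.3.0"), ("lodash", "4.17.21")])
print(json.dumps(sbom, indent=2))
```

Real-world generators add much more (licences, hashes, package URLs, and supplier metadata), which is exactly what makes automated dependency analysis repeatable.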
We DO NOT need generative AI meddling in our CI/CD pipeline and SSDLC (particularly right now)!