AI-generated code: what are the risks for businesses?


42% of developers use AI-generated code, but this technological revolution exposes businesses to new security risks. Are you ready to entrust your applications to artificial intelligence?

AI-generated code is transforming software development. According to a Cloudsmith study, nearly half of developers report that their projects rely heavily on code produced by artificial intelligence tools. This shift promises increased productivity, but also raises major questions about software security and reliability.

An asset for productivity

The benefits are numerous: automating repetitive tasks, accelerating development, and reducing time to market. Glenn Weinstein, CEO of Cloudsmith, says teams are delivering faster thanks to AI, which is taking over an increasing share of the code. This frees developers up to focus on innovation and solving complex problems.

However, this automation comes with a risk: a decline in vigilance over the quality and security of the generated code. The study reveals that only 67% of developers systematically review code before deployment. A third of teams could therefore publish code without human review, opening the door to vulnerabilities.

Trust in AI-generated code: an underestimated risk

20% of developers surveyed say they have “complete confidence” in AI-generated code. That confidence is driven by how quickly the code can be integrated, but it masks a real lack of control. More worryingly, 17% of companies have no policy governing the use of AI in software development.

While 59% of developers apply additional checks to AI-generated modules, only a third use tools that impose specific security policies. This lack of oversight creates potential gaps in the software security chain.

Attacks target AI-generated code

The rise of generative AI is accompanied by new threats, such as “slopsquatting”: cybercriminals register package names that AI assistants hallucinate in their suggestions, so developers who install those dependencies pull malicious code into their projects. Only 29% of professionals say they are “very confident” in their ability to detect these vulnerabilities, and open source remains a favorite hunting ground.
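
To make the defense concrete, here is a minimal sketch, assuming a Python project that lists its dependencies in a requirements.txt file (the file path and the script itself are illustrative, not taken from the study). It asks the public PyPI index whether each declared package actually exists, so a name hallucinated by an AI assistant is caught before a squatter can register it with malicious code.

import re
import sys
import urllib.error
import urllib.request

REQUIREMENTS = "requirements.txt"  # hypothetical path; adjust to your project

def package_exists(name: str) -> bool:
    """Return True if the package name is published on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:      # unknown name: possibly hallucinated by an AI assistant
            return False
        raise                    # any other error: stop, do not guess

def declared_packages(path: str) -> list[str]:
    """Extract bare package names from a requirements file."""
    names = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.split("#")[0].strip()   # drop comments and blank lines
            if not line:
                continue
            match = re.match(r"[A-Za-z0-9._-]+", line)
            if match:
                names.append(match.group(0))
    return names

if __name__ == "__main__":
    suspicious = [p for p in declared_packages(REQUIREMENTS) if not package_exists(p)]
    if suspicious:
        print("Dependencies not found on PyPI:", ", ".join(suspicious))
        sys.exit(1)   # fail the pipeline so a human looks before anything is installed
    print("All declared dependencies exist on PyPI.")

Run before anything is installed, a check like this turns a hallucinated dependency into a failed build rather than a compromised one.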

Tech giants and AI-generated code

Google announced in 2024 that 25% of its internal code is generated by AI, and that share is growing rapidly. Microsoft estimates that 20 to 30% of its code now comes from automated tools. Both companies stress the importance of human review and security checks before any deployment.

Why regulate the use of AI-generated code?

AI-generated code is here to stay. But neglecting security exposes companies to major risks: exploitable flaws, targeted attacks, and loss of control over the software supply chain. Without control policies and automated verification, cybercriminals have a much easier job.

To limit these risks, it is essential to:

  • Maintain a systematic human review of the generated code
  • Implement automated control tools dedicated to AI code (a minimal sketch follows this list)
  • Train teams on threats specific to generative AI
  • Adopt a strict security policy across the entire development chain
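
On the automated-control point, here is one possible shape such a tool could take: a small pre-merge gate, written with the Python standard library only, that scans the files touched by a change for patterns worth a human look. The pattern list, the file handling, and the exit policy are assumptions to adapt to your own security rules, not a ready-made product.

import re
import sys
from pathlib import Path

# Illustrative red flags that often deserve a human look in generated Python code;
# the list is an assumption to replace with your own security policy.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"\bexec\(": "use of exec()",
    r"subprocess\..*shell\s*=\s*True": "subprocess call with shell=True",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"(password|secret|api_key)\s*=\s*[\"'][^\"']+[\"']": "possible hardcoded secret",
}

def scan(paths):
    """Return one human-readable finding per risky pattern detected."""
    findings = []
    for path in paths:
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern, reason in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    findings.append(f"{path}:{lineno}: {reason}")
    return findings

if __name__ == "__main__":
    issues = scan(sys.argv[1:])            # e.g. the files changed in a pull request
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)           # a non-zero exit blocks the merge

In a CI pipeline it could run on the files changed in a pull request, for example python ai_code_gate.py $(git diff --name-only main -- '*.py'), with a non-zero exit code blocking the merge until someone has reviewed the findings.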

What future for AI-generated code?

This revolution is disrupting software development, but it also demands that security be rethought. The question is no longer whether AI will prevail, but how to keep its adoption from becoming a weakness for businesses.

And you, do you trust AI-generated code? What measures have you implemented to secure your applications? Share your experiences in the comments!
