AI programming copilots are worsening code security and leaking more secrets

Overlooked security controls: Ellen Benaim, CISO at enterprise content management firm Templafy, said AI coding assistants often fail to adhere to the robust secret management practices typically observed in traditional systems. “For example, they may insert sensitive information in plain text within source code or configuration files,” Benaim said. “Furthermore, because large portions of code are generated for early-stage products, best practices such as using secrets managers or implementing real-time password and token injection are frequently overlooked.” Benaim added: “There have already been instances where API keys or public keys from companies such as Anthropic or OpenAI were inadvertently left in the source code or uploaded in open-source projects, making them easily exploitable. Even in closed-source projects, if secrets are hard-coded or stored in plain text within binary files or local JavaScript, the risk remains significant, as the secrets are easy to extract.”
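
The contrast Benaim draws can be shown in a minimal Python sketch. The hardcoded line (left only as a comment) is the pattern coding assistants tend to produce; the alternative reads the credential from the environment at runtime, so a secrets manager or deploy-time injection can supply it instead of the source tree. The variable name OPENAI_API_KEY is illustrative, not taken from any specific project.

    import os

    # Anti-pattern often produced by coding assistants: a literal key in source,
    # e.g.  API_KEY = "<hard-coded secret here>"  -- once committed, it lives in
    # the repository history and in every build artifact derived from it.

    def load_api_key() -> str:
        """Read the credential injected at deploy time (env var, vault, etc.)."""
        key = os.environ.get("OPENAI_API_KEY")  # illustrative variable name
        if not key:
            raise RuntimeError("OPENAI_API_KEY not set; inject it at runtime")
        return key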

Establishing secure AI-assisted development practices: Chris Wood, principal application security SME at cybersecurity training firm Immersive Labs, described GitGuardian’s warning on the dangers of AI coding assistants as a “wake-up call.” “While AI offers incredible potential for boosting productivity, it’s crucial to remember that these tools are only as secure as their training data and the developers’ vigilance,” Wood said. CISOs and security leaders need to formulate comprehensive secrets management strategies as a first step. In addition, enterprises should establish clear policies around the use of AI coding assistants and provide developers with specific training on secure AI-assisted development practices. “We must equip developers with the knowledge and skills to identify and prevent these types of vulnerabilities, even when AI assists with code creation,” Wood said. “This includes a strong foundation in secure coding principles, understanding common secret leakage patterns, and knowing how to properly manage and store sensitive credentials.” “By empowering developers with the proper knowledge and fostering a security-first mindset, we can harness the benefits of AI while mitigating the potential for increased security vulnerabilities like secret leakage,” Wood concluded.

Proactive countermeasures: The more LLM-generated code is produced, the more developers will come to trust it, compounding the problem and creating a vicious cycle that needs to be nipped in the bud. “Without proper security testing, insecure AI-generated code will become the training data for future LLMs,” Veracode’s Smith warned. “Fundamentally, the way software is built is changing rapidly, and trust in AI should not come at the expense of security.” The development of AI will continue to outpace security controls unless enterprises take proactive steps to contain the problem rather than relying on reactive fixes. “CISOs must move fast to embed security guardrails, automating security checks and manual code reviews directly into agentic and developer workflows,” Smith advised. “Auditing third-party libraries ensures AI-generated code does not introduce vulnerabilities from unverified components.” Automated tools such as secret scanners should be integrated into the CI/CD pipeline, followed by a mandatory human developer review, to screen software developed with AI coding assistants. “All AI-generated code should be continuously monitored and sanitized, with a prompt incident response plan in place to address any discovered vulnerabilities,” Templafy’s Benaim advised.
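
As a rough illustration of wiring a secret scan into a pipeline stage ahead of human review, the Python sketch below checks changed files for a few well-known key formats and fails the build on a hit. The patterns are illustrative samples, not GitGuardian’s or GitHub Push Protection’s actual rulesets, and a real deployment would run a maintained scanner rather than this hand-rolled check.

    import pathlib
    import re
    import sys

    # Illustrative patterns only; production scanners ship far larger, tuned rulesets.
    PATTERNS = {
        "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
        "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
        "private_key_header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    }

    def scan(paths):
        """Return (path, rule) pairs for every file that matches a pattern."""
        findings = []
        for raw in paths:
            path = pathlib.Path(raw)
            if not path.is_file():
                continue
            text = path.read_text(errors="ignore")
            for rule, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((raw, rule))
        return findings

    if __name__ == "__main__":
        hits = scan(sys.argv[1:])  # e.g. the files touched in a pull request
        for raw, rule in hits:
            print(f"possible secret ({rule}) in {raw}")
        sys.exit(1 if hits else 0)  # a non-zero exit blocks the pipeline stage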

Enterprises continue to struggle with credential management: Credential management for API keys, passwords, and tokens is a long-established problem in application security that recent innovations in AI-powered code development are compounding rather than creating. GitGuardian’s State of Secrets Sprawl 2025 report revealed a 25% year-over-year increase in leaked secrets, with 23.8 million new credentials detected on public GitHub in 2024 alone. Hardcoded secrets are everywhere, but especially in security blind spots such as collaboration platforms (Slack and Jira) and container environments, where security controls are typically weaker, according to GitGuardian. Despite GitHub’s Push Protection helping developers detect known secret patterns, generic secrets, including hard-coded passwords, database credentials, and custom authentication tokens, now represent more than half of all detected leaks. That’s because, unlike API keys or OAuth tokens that follow recognizable patterns, these credentials lack a standardized format, making them nearly impossible to detect with conventional tools, GitGuardian warns. GitGuardian highlights the 2024 US Treasury Department breach as a warning: “A single leaked API key from BeyondTrust allowed attackers to infiltrate government systems,” according to GitGuardian CEO Eric Fourrier. “This wasn’t a sophisticated attack, it was a simple case of an exposed credential that bypassed millions in security investments.”
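
One common fallback for the generic secrets GitGuardian describes is an entropy heuristic: machine-generated tokens look statistically random, while human-chosen passwords do not. The sketch below is a simplified version of that idea; the examples are illustrative, and real tools combine entropy scores with context and verification to keep false positives down.

    import math
    from collections import Counter

    def shannon_entropy(s: str) -> float:
        """Average bits of information per character in the string."""
        counts = Counter(s)
        return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

    # A patterned key (this is AWS's documented example key ID) can be caught by a
    # simple regex, and its near-random characters also score relatively high here.
    print(shannon_entropy("AKIAIOSFODNN7EXAMPLE"))

    # A generic, human-chosen database password assigned to an arbitrary variable
    # matches no standard format and scores lower, so it tends to slip past both
    # pattern- and entropy-based checks.
    print(shannon_entropy("summer2024!"))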

Remediation lags: The study also found that 70% of leaked secrets remain active even two years after their first exposure. Delays tend to arise because remediation is complex, according to security experts. “Leaked API keys, passwords, and tokens are often overlooked because detecting them is only part of the solution; effective remediation is complex and frequently delayed,” said Mayur Upadhyaya, CEO of cybersecurity tools vendor APIContext. “The reliance on static keys, often embedded in code for convenience, continues to be a major weak point.” Upadhyaya added: “Best practices like rotating keys, implementing short-lived tokens, and enforcing least-privilege access are well understood but hard to sustain at scale.” Enterprises should put stronger guardrails in place, including automated scanning tools, proactive monitoring, and better developer support, to ensure secure practices are followed more consistently, Upadhyaya concluded.
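
A deliberately simplified sketch of the short-lived-token practice Upadhyaya mentions, using only the Python standard library. The signing key would itself come from a secrets manager, and a production system would use an established token format (such as JWTs issued by an identity provider) rather than hand-rolled signing; the point is that a credential which expires on its own limits the window a leak can be exploited.

    import base64
    import hashlib
    import hmac
    import json
    import time

    # The signing key should be fetched from a secrets manager at startup,
    # never hard-coded; the literal below is a placeholder.
    SIGNING_KEY = b"placeholder-key-from-secrets-manager"

    def issue_token(subject: str, ttl_seconds: int = 900) -> str:
        """Mint a token that stops working after ttl_seconds (15 minutes here)."""
        payload = json.dumps({"sub": subject, "exp": int(time.time()) + ttl_seconds})
        body = base64.urlsafe_b64encode(payload.encode()).decode()
        sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def verify_token(token: str) -> bool:
        """Accept only tokens with a valid signature that have not yet expired."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        claims = json.loads(base64.urlsafe_b64decode(body))
        return claims["exp"] > time.time()  # a leaked token goes stale on its own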

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3953927/ai-programming-copilots-are-worsening-code-security-and-leaking-more-secrets.html
