
Unveiling Security Vulnerabilities in the Global AI Race


Security lapses emerge amid the global AI race

AI Companies Overlook Basic Security Hygiene Practices, Leading to Leaked Secrets on GitHub

Recent reports from cybersecurity firm Wiz have shed light on a concerning trend among AI companies. According to Wiz, many leading AI firms are neglecting basic security hygiene practices, resulting in verified secrets leaking on GitHub. The leaked information includes API keys, tokens, and sensitive credentials that are often buried in code repositories, escaping detection by standard security tools.

Glyn Morgan, Country Manager for UK&I at Salt Security, has emphasized the gravity of this issue, describing it as a preventable and basic error. Morgan stated, “When AI firms accidentally expose their API keys, they lay bare a glaring avoidable security failure.” This oversight not only compromises the security of the companies involved but also poses significant risks to their systems, data, and models.

The implications of these security lapses go beyond individual companies, extending to the entire supply chain. As AI startups increasingly partner with enterprises, the security posture of these startups becomes a concern for the larger organizations. Wiz’s research has revealed that leaks could potentially expose organizational structures, training data, or private models, highlighting the far-reaching consequences of inadequate security measures.

The scale of potential exposure is substantial: the combined valuation of the analyzed companies exceeds $400 billion. The report focused on companies listed in the Forbes AI 50, providing specific examples of the risks faced by these organizations.

  • LangChain was found to have exposed multiple Langsmith API keys, some with permissions to manage the organization and list its members, valuable information for potential attackers.
  • An enterprise-tier API key for ElevenLabs was discovered sitting in a plaintext file, posing a significant security risk.
  • An unnamed AI 50 company had a HuggingFace token exposed in a deleted code fork, granting access to approximately 1,000 private models. Additionally, the same company leaked WeightsAndBiases keys, potentially compromising the training data for numerous private models.

The Wiz report underscores the inadequacy of traditional security scanning methods in detecting these vulnerabilities. Basic scans of main GitHub repositories are no longer sufficient, as they fail to uncover the most severe risks. To address this gap, Wiz researchers developed a three-dimensional scanning methodology known as “Depth, Perimeter, and Coverage.”

  • Depth: This deep scan delves into the full commit history, commit history on forks, deleted forks, workflow logs, and gists—areas often overlooked by conventional scanners.
  • Perimeter: The scan extends beyond the core company organization to include organization members and contributors, who may inadvertently expose company-related secrets in their public repositories. Identifying these adjacent accounts involves tracking code contributors, organization followers, and correlations in related networks.
  • Coverage: Researchers specifically search for new AI-related secret types that traditional scanners may miss, such as keys for platforms like WeightsAndBiases, Groq, and Perplexity.
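The "Coverage" idea can be sketched in a few lines: match blob contents against format patterns for newer AI-platform secrets, fed from whatever "Depth" sources are enumerated (full commit history, deleted forks, workflow logs, gists). This is a minimal illustration, not Wiz's actual tooling; the token prefixes and lengths below are assumptions for demonstration, and a real scanner would use vendor-verified formats and validate candidate hits against the live APIs.

```python
import re

# Illustrative secret-format patterns for AI platforms. The exact
# prefixes and lengths are assumptions for demonstration only.
SECRET_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "groq": re.compile(r"\bgsk_[A-Za-z0-9]{20,}\b"),
    "perplexity": re.compile(r"\bpplx-[A-Za-z0-9]{20,}\b"),
}

def scan_blobs(blobs):
    """Scan an iterable of (source, text) pairs -- e.g. blob contents
    pulled from full commit history, deleted forks, workflow logs,
    and gists -- and yield (source, provider, token) for each hit."""
    for source, text in blobs:
        for provider, pattern in SECRET_PATTERNS.items():
            for token in pattern.findall(text):
                yield (source, provider, token)
```

In practice the blobs would come from enumerating every reachable object (for example via `git rev-list --all` plus `git cat-file`), which is what separates a "Depth" scan from a default-branch-only scan.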

This expanded attack surface is particularly concerning given the lack of security maturity observed in many fast-moving companies. The report highlights the challenges faced when disclosing these leaks, as almost half of disclosures either failed to reach the target or received no response. Many firms lacked an official disclosure channel or failed to address the issue promptly upon notification.

Wiz’s findings serve as a cautionary tale for enterprise technology executives, outlining three crucial action items for managing internal and third-party security risks effectively.

  1. Security leaders should consider their employees as part of the company’s attack surface and implement a Version Control System (VCS) member policy to enforce best practices during onboarding.
  2. Internal secret scanning must evolve beyond basic repository checks, with public VCS secret scanning becoming a non-negotiable defense strategy. This scanning approach should adopt the “Depth, Perimeter, and Coverage” mindset to uncover hidden threats.
  3. Scrutiny should extend to the entire AI supply chain, with CISOs evaluating vendors’ secrets management and vulnerability disclosure practices. The report emphasizes the need for AI service providers to prioritize detection of their own secret types.
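One concrete way to enforce the first two action items during onboarding is a pre-commit check that rejects staged changes containing credential-shaped strings. The sketch below is a minimal illustration under assumed patterns, not a complete secret scanner; in a real hook the diff text would come from `git diff --cached`.

```python
import re

# Illustrative credential pattern: an assignment of a long opaque
# string to a key/secret/token-like name. An assumption for
# demonstration, not an exhaustive or vendor-published list.
CANDIDATE = re.compile(
    r"(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]",
    re.IGNORECASE,
)

def diff_has_secret(diff_text: str) -> bool:
    """Return True if any line ADDED by the diff matches the pattern.
    Removed lines (prefix '-') are ignored."""
    return any(
        line.startswith("+") and CANDIDATE.search(line)
        for line in diff_text.splitlines()
    )
```

A hook like this only catches secrets before they are pushed; it complements, rather than replaces, public VCS scanning with the "Depth, Perimeter, and Coverage" mindset.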

The overarching message for enterprises is clear: as technology advances rapidly, security governance must keep pace. Wiz stresses the importance of prioritizing security over speed, both for AI innovators and the enterprises that rely on their solutions.

Learn more about AI and big data from industry leaders at the AI & Big Data Expo, part of the TechEx event series. Explore upcoming events in Amsterdam, California, and London, co-located with leading technology events like the Cyber Security Expo.

AI News is powered by TechForge Media. Discover upcoming enterprise technology events and webinars to stay informed about the latest industry trends.

