Unverified AI Data Driving Adoption of Zero-Trust Models
The rise of unverified and low-quality data produced by artificial intelligence (AI) models, commonly referred to as “AI slop”, is prompting security leaders to shift towards zero-trust models for data governance. According to Gartner, 50% of organizations are expected to adopt zero-trust data governance policies by 2028.
Currently, large language models (LLMs) are trained on data sourced from various online platforms, books, research papers and code repositories, some of which already contain AI-generated content. The proliferation of AI-generated data poses a significant risk to the reliability of LLMs, with the potential for models to collapse under the accumulating weight of inaccuracies and hallucinations, a phenomenon researchers call model collapse.
A study by Gartner revealed that 84% of CIOs and tech executives plan to increase funding for generative AI (GenAI) in 2026. As the use of AI accelerates, the volume of AI-generated data will continue to grow, leading to a scenario where future LLMs are trained on outputs from existing models.
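The recursive dynamic is easier to see in miniature. The toy Python simulation below, an illustration rather than anything drawn from Gartner's research, repeatedly fits a simple statistical model to samples drawn from the previous generation's own outputs; with each generation the fitted distribution drifts further from the original data, mirroring how models trained on model-generated data can degrade.

```python
import random
import statistics

# Toy illustration of "model collapse": repeatedly fit a simple
# Gaussian "model" to samples drawn from the previous generation's
# model. The fitted parameters drift away from the original data
# with each generation. This is a stand-in for the far richer
# dynamics of LLM training, not a claim about any specific model.

random.seed(42)
human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # original "human" data

mu, sigma = statistics.mean(human_data), statistics.stdev(human_data)
for generation in range(1, 6):
    # Train the next generation only on the previous model's outputs.
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"gen {generation}: mean={mu:+.3f} stdev={sigma:.3f}")
```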
Gartner emphasized the need for organizations to adopt a zero-trust posture to authenticate and verify data, given the pervasive nature of AI-generated content. Wan Fui Chan, managing vice president at Gartner, highlighted the importance of implementing measures to safeguard business outcomes in the face of AI-generated data.
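In practice, a zero-trust posture means no incoming record is trusted by default. The Python sketch below illustrates one minimal approach using an HMAC provenance signature; the key store, record format and function names here are assumptions for illustration, not a prescribed Gartner control.

```python
import hashlib
import hmac

# Minimal sketch of a zero-trust ingestion check: every record must
# carry a provenance signature that verifies against a key held for
# that source; anything else is rejected. SOURCE_KEYS, sign_record()
# and verify_record() are hypothetical names for illustration.

SOURCE_KEYS = {"newsroom-feed": b"shared-secret-key"}  # hypothetical key store

def sign_record(source: str, payload: bytes) -> str:
    """Produce an HMAC-SHA256 provenance signature for a payload."""
    return hmac.new(SOURCE_KEYS[source], payload, hashlib.sha256).hexdigest()

def verify_record(source: str, payload: bytes, signature: str) -> bool:
    """Deny by default: accept only if the source is known and the
    signature matches (constant-time comparison)."""
    key = SOURCE_KEYS.get(source)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

payload = b"Quarterly figures, reviewed by a human editor."
sig = sign_record("newsroom-feed", payload)
print(verify_record("newsroom-feed", payload, sig))   # True: verified provenance
print(verify_record("unknown-blog", payload, sig))    # False: untrusted source
```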
Verifying ‘AI-free’ Data
Chan suggested that regulatory requirements for verifying “AI-free” data will likely become more stringent in various regions. Organizations will need tools and a skilled workforce for metadata management to identify and tag AI-generated data effectively.
Active metadata management practices are expected to play a crucial role in enabling organizations to analyze and automate decision-making across their data assets, providing real-time alerts and flagging stale or unreliable data.
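As a sketch of what such checks might look like, the Python example below tags each asset with whether it is AI-generated and when it was last verified, then flags anything stale or unverified for review. The field names and freshness thresholds are assumptions for illustration, not a Gartner specification.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Sketch of an active-metadata policy: each data asset carries tags
# recording whether it is AI-generated and when it was last verified,
# and a simple rule flags stale or unverified assets. The 90-day
# window and field names are illustrative assumptions.

STALE_AFTER = timedelta(days=90)  # assumed freshness window

@dataclass
class AssetMetadata:
    name: str
    ai_generated: bool
    last_verified: datetime | None  # None means never verified

def needs_review(meta: AssetMetadata, now: datetime) -> bool:
    """Flag assets that are unverified, stale, or AI-generated
    without a recent enough verification."""
    if meta.last_verified is None:
        return True
    if now - meta.last_verified > STALE_AFTER:
        return True
    # Hold AI-generated assets to a tighter (half-window) standard.
    return meta.ai_generated and now - meta.last_verified > STALE_AFTER / 2

now = datetime.now(timezone.utc)
catalog = [
    AssetMetadata("sales_2025.csv", ai_generated=False,
                  last_verified=now - timedelta(days=10)),
    AssetMetadata("market_summary.txt", ai_generated=True,
                  last_verified=now - timedelta(days=60)),
    AssetMetadata("legacy_dump.json", ai_generated=False, last_verified=None),
]
for meta in catalog:
    if needs_review(meta, now):
        print(f"ALERT: {meta.name} needs verification")
```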
Managing the Risks
Gartner outlined several strategies for organizations to mitigate the risks associated with untrustworthy AI data. These include establishing a dedicated AI governance leadership role, forming cross-functional teams to assess and address AI-generated data risks, and updating governance frameworks to strengthen security and ethics policies.