Netskope, a leader in modern security and networking, today published new research revealing a 30x increase in data sent to generative AI (genAI) apps by enterprise users over the last year. This includes sensitive data such as source code, regulated data, passwords and keys, and intellectual property, significantly increasing the risk of costly breaches, compliance violations, and intellectual property theft. The report also highlights how shadow AI has become the predominant shadow IT challenge facing organisations, with 72% of enterprise users accessing the genAI apps they use for work through personal accounts.
The 2025 Generative AI Cloud and Threat Report from Netskope Threat Labs details the ubiquity of genAI usage in the enterprise. At the time of writing, Netskope had visibility into 317 genAI apps, including ChatGPT, Google Gemini, and GitHub Copilot. A broader analysis across the enterprise found that 75% of enterprise users are accessing applications with genAI features, creating a bigger issue security teams must address: the unintentional insider threat.
“Despite earnest efforts by organisations to implement company-managed genAI tools, our research shows that shadow IT has turned into shadow AI, with nearly three-quarters of users still accessing genAI apps through personal accounts,” said James Robinson, CISO, Netskope. “This ongoing trend, combined with the sensitivity of the data being shared, underscores the need for advanced data security capabilities so that security and risk management teams can regain governance and visibility over genAI usage and enforce acceptable use within their organisations.”
GenAI Risk Reduction
Many organisations lack full, or even partial, visibility into how data is processed, stored, and leveraged within indirect genAI usage. Often, they apply a “block first and ask questions later” policy, explicitly allowing certain apps and blocking all others. Yet security leaders should pursue a safe enablement strategy, as employees seek the efficiency and productivity benefits these tools offer.
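To illustrate what such a default-deny allowlist policy looks like in practice, here is a minimal, hypothetical sketch. The domain list and helper names are illustrative assumptions, not part of Netskope's products or the report.

```python
# Minimal sketch of a default-deny ("block first") genAI app policy.
# The approved-domain set and function names are illustrative only.

from urllib.parse import urlparse

# Hypothetical set of company-approved genAI apps; everything else is blocked.
APPROVED_GENAI_DOMAINS = {
    "chat.openai.com",    # ChatGPT (managed tenant assumed)
    "gemini.google.com",  # Google Gemini
    "github.com",         # GitHub Copilot endpoints
}

def is_allowed(url: str) -> bool:
    """Return True only if the request targets an explicitly approved app."""
    host = urlparse(url).hostname or ""
    # Default deny: anything not on the allowlist is blocked.
    return host in APPROVED_GENAI_DOMAINS or any(
        host.endswith("." + d) for d in APPROVED_GENAI_DOMAINS
    )

print(is_allowed("https://chat.openai.com/c/123"))     # True  -> allowed
print(is_allowed("https://some-new-ai-tool.example"))  # False -> blocked
```

The default-deny shape is the point: the policy only ever names what is permitted, so newly appearing genAI apps are blocked until they are reviewed.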
“Our latest data shows genAI is no longer a niche technology; it’s everywhere,” said Ray Canzanese, Director of Netskope Threat Labs. “It is becoming increasingly integrated into everything from dedicated apps to backend integrations. This ubiquity presents a growing cybersecurity challenge, demanding organisations adopt a comprehensive approach to risk management or risk having their sensitive data exposed to third parties who may use it to train new AI models, creating opportunities for even more widespread data exposures.”
Over the past year, Netskope Threat Labs also observed a dramatic increase in the number of organisations running genAI infrastructure locally, from less than 1% to 54%, and this trend is expected to continue. While it reduces the risk of unwanted data exposure to third-party apps in the cloud, the shift to local infrastructure introduces new types of data security risk, from supply chains, data leakage, and improper data output handling to prompt injection, jailbreaks, and meta prompt extraction. As a result, many organisations are adding locally hosted genAI infrastructure on top of the cloud-based genAI apps already in use.
“AI isn’t just reshaping perimeter and platform security; it’s rewriting the rules,” said Ari Giguere, Vice President of Security and Intelligence Operations at Netskope. “As attackers craft threats with generative precision, defences must be equally generative, evolving in real time to counter the resulting ‘innovation inflation.’ Effectively combating a creative human adversary will always require a creative human defender, but on an AI-driven battlefield, only AI-fuelled security can keep pace.”
A CISO Perspective
As human defenders combat AI-driven threats, many are turning to security tools already in their technology stacks. Nearly 100% of organisations are working to reduce their AI risk with policies that let them block access to AI tools and/or control which users can access specific AI tools and what data can be shared with them. Netskope recommends that enterprises review, adapt, and tailor their risk frameworks specifically to AI or genAI to ensure adequate protection of data, users, and networks. Specific tactical steps to address genAI risk include:
- Assess your genAI landscape: Understand which genAI apps and locally hosted genAI infrastructure you are using, who is using them and how they are being used.
- Bolster your genAI app controls: Regularly review and benchmark your controls against best practices, such as allowing only approved apps, blocking unapproved apps, using DLP to prevent sensitive data from being shared with unauthorised apps, and leveraging real-time user coaching (a minimal DLP-style check is sketched after this list).
- Inventory your local controls: If you are running any genAI infrastructure locally, review relevant frameworks such as the OWASP Top 10 for Large Language Model Applications, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and MITRE ATLAS to ensure adequate protection of data, users, and networks.
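To make the DLP step above concrete, here is a minimal, hypothetical sketch of a pre-send prompt scan. The patterns cover only a few of the sensitive data classes the report names (passwords and keys), and the names are illustrative assumptions rather than any vendor's API.

```python
# Minimal sketch of a DLP-style check on a genAI prompt before it is sent.
# Patterns are illustrative; a production DLP engine is far broader.

import re

SENSITIVE_PATTERNS = {
    # AWS access key IDs follow the documented AKIA... format.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # PEM private key headers.
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    # Inline password assignments such as "password = hunter2".
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Debug this: password = hunter2, key AKIAABCDEFGHIJKLMNOP"
hits = scan_prompt(prompt)
if hits:
    # Block the upload and coach the user in real time, per the guidance above.
    print(f"Blocked: prompt contains {', '.join(hits)}")
```

A real deployment would add many more detectors (source code, regulated data identifiers) and wire the block into real-time user coaching, but the block-on-match flow before data leaves the organisation is the core idea.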