Cross-border GenAI misuse to cause 40% of AI data breaches by 2027: Gartner

THURSDAY, MARCH 06, 2025

Lack of global AI standards forces region-specific strategies, hindering scalability

 

By 2027, more than 40% of artificial intelligence (AI) related data breaches will stem from the improper use of generative AI (GenAI) across international borders, according to analysis by Gartner, Inc.

 

The rapid uptake of GenAI technologies has outpaced the development of robust data governance and security measures. This has raised significant concerns about data localisation, driven by the centralised computing power these technologies require.

 

"Unintended cross-border data transfers frequently occur due to insufficient oversight, particularly when GenAI is integrated into existing products without clear explanations or announcements," said Joerg Fritsch, VP Analyst at Gartner. "Organisations are observing changes in the content produced by employees utilising GenAI tools. While these tools can be deployed for approved business applications, they present security risks if sensitive prompts are sent to AI tools and application programming interfaces (APIs) hosted in unknown locations."

 

The absence of consistent global best practices and standards for AI and data governance exacerbates these challenges, resulting in market fragmentation and compelling enterprises to develop region-specific strategies. This, in turn, can restrict their ability to scale operations globally and capitalise on the benefits of AI products and services.
 

 

"The complexity of managing data flows and maintaining data quality due to localised AI policies can lead to operational inefficiencies," stated Fritsch. "Organisations must invest in advanced AI governance and security to safeguard sensitive data and ensure compliance. This necessity will likely drive growth in the AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes."

 

Gartner predicts that by 2027, AI governance will become a requirement in all sovereign AI laws and regulations worldwide.

 

"Organisations that fail to integrate the required governance models and controls may find themselves at a competitive disadvantage, particularly those lacking the resources to swiftly extend existing data governance frameworks," said Fritsch.
 

 

To mitigate the risks of AI data breaches, particularly those arising from cross-border GenAI misuse, and to ensure compliance, Gartner recommends the following strategic actions for enterprises:

 

Enhance data governance: Organisations must ensure adherence to international regulations and monitor unintended cross-border data transfers by extending data governance frameworks to incorporate guidelines for AI-processed data. This involves integrating data lineage and data transfer impact assessments within routine privacy impact assessments.
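
As an illustrative sketch only (the field names, region codes, and allow-list below are assumptions, not part of any specific governance framework), a simple data lineage check of this kind could flag transfers that warrant a data transfer impact assessment:

    # Hypothetical sketch: flag cross-border transfers for review.
    ALLOWED_TRANSFERS = {("EU", "EU"), ("US", "US"), ("EU", "UK")}  # assumed policy

    def flag_cross_border_transfers(lineage_records):
        """Return lineage records whose source/destination region pair is not explicitly allowed."""
        return [
            record for record in lineage_records
            if (record["source_region"], record["destination_region"]) not in ALLOWED_TRANSFERS
        ]

    if __name__ == "__main__":
        lineage = [
            {"dataset": "support_tickets", "source_region": "EU", "destination_region": "US"},
            {"dataset": "marketing_copy", "source_region": "EU", "destination_region": "EU"},
        ]
        for record in flag_cross_border_transfers(lineage):
            print(f"Review required: {record['dataset']} "
                  f"({record['source_region']} -> {record['destination_region']})")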

 

Establish governance committees: Form committees to bolster AI oversight and ensure transparent communication regarding AI deployments and data handling. These committees must be responsible for technical oversight, risk and compliance management, and communication and decision reporting.

 

Strengthen data security: Employ encryption, anonymisation, and other advanced technologies to protect sensitive data. For instance, verify that processing takes place in Trusted Execution Environments (TEEs) in the required geographic regions, and apply advanced anonymisation techniques, such as differential privacy, when data must leave those regions.
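
As a minimal sketch of the anonymisation step, assuming a simple numeric aggregate and an illustrative epsilon value, the Laplace mechanism below adds calibrated noise to a count before it leaves the region; it is not a substitute for a vetted differential privacy library:

    import numpy as np

    def dp_count(true_count, epsilon, sensitivity=1.0):
        """Return a differentially private count using the Laplace mechanism."""
        scale = sensitivity / epsilon  # smaller epsilon means stronger privacy and more noise
        return true_count + np.random.laplace(loc=0.0, scale=scale)

    # Share only the noised aggregate outside the region; epsilon here is purely illustrative.
    print(round(dp_count(true_count=1204, epsilon=0.5)))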

 

Invest in TRiSM products: Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This encompasses AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50% less inaccurate or illegitimate information, thereby reducing faulty decision-making.
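
As a simplified illustration of prompt filtering and redaction (the patterns and labels below are assumptions, not a recommended rule set), the sketch strips obvious sensitive tokens from a prompt before it is sent to an external GenAI API:

    import re

    # Illustrative redaction rules; a real deployment would rely on vetted PII and secret detection.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_prompt(prompt: str) -> str:
        """Replace obvious sensitive tokens before the prompt leaves the organisation."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact_prompt(
        "Summarise the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
    ))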