Generative AI (GenAI) is not just transforming businesses—it’s also creating new avenues for attackers. According to a new Gartner survey of 302 cybersecurity leaders across North America, EMEA, and Asia-Pacific, nearly a third of organizations (29%) reported an attack on their enterprise GenAI application infrastructure in the past year.
The study highlights a troubling trend: 62% of organizations were targeted by deepfake attacks, often used in social engineering schemes or to exploit automated processes. In addition, 32% experienced attacks on AI applications that manipulated prompts to trick large language models (LLMs) or multimodal models into producing biased or malicious outputs.
Chatbots and GenAI-driven assistants, now widely adopted across enterprises, have become prime targets. Attackers are exploiting adversarial prompting techniques, effectively “jailbreaking” systems to bypass safeguards and generate harmful outcomes.
“As adoption accelerates, attacks leveraging GenAI for phishing, deepfakes and social engineering have become mainstream, while other threats — such as attacks on GenAI application infrastructure and prompt-based manipulations — are emerging and gaining traction,” said Prashast Gupta, Director Analyst at Gartner.
While two-thirds (67%) of security leaders said these emerging risks require significant changes to cybersecurity strategies, Gartner cautions against overreaction. Instead, it recommends a measured approach: reinforce core security controls while introducing targeted defenses for each new GenAI threat category.
The findings underscore a growing tension for CISOs and IT leaders: how to harness GenAI for innovation and efficiency without opening the door to unprecedented cyber risks. As organizations double down on digital transformation, the security of GenAI ecosystems may become a defining battleground in enterprise resilience.