
With personal healthcare data so freely used, stored and shared on digital systems and AI chatbots, the security of that data is under ever-present threat. Recent research revealed that regulated data, including patient records and medical information, is particularly at risk, accounting for 89% of all data breaches involving generative AI, significantly higher than the cross-industry average of 31%.
Researchers at Netskope Threat Labs, who monitored the key cyber threats facing healthcare organizations and their employees over the past 13 months, released their annual healthcare report on Tuesday. The report, based on data collected between December 1, 2024 and December 31, 2025 with prior authorization, indicated that the deployment and use of internal AI tools, which require strict security guardrails, is already accelerating, and highlighted the risks involved.
As healthcare professionals adopt and use GenAI with greater frequency than ever before, the risk of sensitive patient data being leaked through prompts and documents shared online is very high. What makes the scenario worse is the use of personal GenAI accounts to handle such information.
Why does limiting this matter? Nearly 43% of healthcare workers still use personal accounts at work, making it impossible for security systems to detect leaks, the report said, adding that healthcare institutions are trying to change this behavior by getting staff to use approved proprietary software. As a result, the proportion of staff using organization-managed GenAI applications also increased over the same period, outpacing the equivalent trend across other industries.
Protective steps
The report claimed that in healthcare, nearly two in three organizations detect API (application programming interface) traffic to OpenAI and AssemblyAI (63% and 62% respectively), and more than a third (36%) to Anthropic. Over the past year, more than half of healthcare organizations (56%) that have implemented such policies have blocked users from uploading files to personal Google Drive accounts, illustrating the frequency of potential data exposure in popular personal cloud applications. Google Drive was followed by Gmail (39%) and OneDrive (30%). This is important because attackers continue to exploit employees' inherent trust in cloud applications and the files found within them. In healthcare, researchers have identified several platforms that attackers often exploit to distribute malware.
Ray Canzanese, director of Netskope Threat Labs, said: “While building defenses against external threats is critical for healthcare organizations, which have historically been a prime target for cybercriminals, addressing internal risks is equally important, especially in such a highly regulated industry and in the context of rapidly advancing cloud and AI adoption.” He added that deploying company-approved applications, along with appropriate security tools that offer full visibility and control over data usage and movement, should be a priority for healthcare organizations.
Published – March 3, 2026 9:46 PM IST