In terms of hype, nothing is hotter than AI right now; blockchain has some weak links, the metaverse isn't singing in this part of the multiverse, and even big data seems small. As the CEO of a leading cybersecurity company, I get asked daily about AI and what it means for data security.
Like most new technologies, generative AI presents both opportunities and risks. AI is already boosting productivity by acting as a virtual assistant for employees. From a risk perspective, however, there are two dimensions to consider—self-inflicted risk and external risk.
Self-inflicted risk arises when an organization's employees start using AI to suggest content, either through a query or in the context of what they're creating. Unless data is locked down, there's little to prevent AI from analyzing your data estate and revealing your secret road map, financial information or other precious data to all the wrong people.
To help mitigate this risk, Microsoft recommends securing sensitive data before rolling out its AI assistant, Copilot. One step it suggests taking is "[making] sure your organization has the right information access controls and policies in place."
Unfortunately, getting the right access controls and policies in place proves far more challenging than most organizations realize. It will likely only become more difficult as AI further increases the volume of data we create and must protect.
Without the right controls in place, AI won't know who should see what. Organizations will be exposed, just like they are when they activate enterprise search platforms before locking things down—only much worse. If this happens, employees won't even need to search for content they want to steal or sneak a peek at; AI will gladly expose it for them.
How attackers are leveraging AI
External risk will continue to increase as attackers learn to use AI. Unfortunately, they've already started. WormGPT and FraudGPT use large language models (LLMs) to help attackers craft convincing phishing emails and translate them into other languages.
Attackers are also creating fake data sets based on past breaches and other available data, then claiming to have stolen them from companies, either to bolster their reputations as capable attackers or to dupe those companies into paying a ransom. Generative AI could increase the volume and realism of these fakes, making it even harder to tell a real breach from a fabricated one.
Researchers have already used AI to craft malware as a proof of concept, and we should expect to see AI-generated malware in the wild. Unfortunately, the use of AI will continue to lower the barriers to entry for all kinds of cyber villainy.
These are just some of the risks AI presents—and at the pace this technology is advancing, there will be many more to come. Soon, generative AI may devise new cyber threats all on its own.
Cyber defenders will get an AI boost
Thankfully, AI also presents enormous opportunities for cybersecurity.
AI is excellent at recognizing patterns. By analyzing the right things, AI and machine learning can provide insights about vulnerabilities and unwanted behaviors. When coupled with automation, AI will be able to handle routine tasks, freeing humans to focus on the work that genuinely requires their attention.
When human intervention is required, AI will help cyber defenders work more efficiently by providing insights and speeding up investigations. These uses for AI are imminent, and many more are on the horizon. For example, generative AI could create troves of synthetic data to serve as bait for attackers. Decoy data makes it harder for the bad guys to know whether they've stolen anything valuable, while giving defenders and the technologies they rely on more opportunities to catch cyber crooks in the act.
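As a toy illustration of the synthetic-bait idea, decoy credentials (often called honeytokens) can be generated and planted alongside real data. The names, field layout, and record counts below are purely hypothetical, not any particular product's method; the point is only that the values map to nothing real, so any attempt to use them is inherently suspicious.

```python
import random
import string

def make_honeytoken(prefix="svc"):
    """Generate one fake credential record to plant as attacker bait.

    The values are random and correspond to no real account; any
    attempt to use them can, by definition, trigger an alert.
    """
    user = f"{prefix}-{''.join(random.choices(string.ascii_lowercase, k=6))}"
    secret = "".join(random.choices(string.ascii_letters + string.digits, k=24))
    return {"username": user, "password": secret}

# Plant a small batch of decoy records alongside real-looking data.
bait = [make_honeytoken() for _ in range(5)]
for record in bait:
    print(record["username"])
```

In a real deployment, the decoys would mimic the format of the surrounding data closely and be paired with monitoring that alerts the moment one is touched.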
Preparing organizations for AI
- Conduct a Data Risk Assessment to identify sensitive and overly accessible data before it's surfaced by "friendly" AI assistants or "unfriendly" attacker-run AI. Your data is what makes AI valuable, and it's what you need to protect. Most organizations don't know enough about where their important data is stored or who can—and does—use it.
- Lock your data down, especially your critical data. Once organizations can see their data risks during an assessment, they almost always find critical data that's far too accessible, stored in the wrong places, and used (or unused) in surprising ways. Your employees and partners should have only the information they need to do their jobs and nothing more.
- Watch your data. We don't know what new AI techniques attackers will use, but we do know what they'll be using them for—to steal your data. It's never been more important to monitor how humans and applications use data to look for unwanted activity. Credit card companies and banks have been monitoring financial transactions for years to detect financial crime, and everyone with valuable data should be monitoring their data transactions for data-related crimes.
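To make the lock-down step concrete, a minimal sketch of an overexposure audit might simply walk a file share and flag anything readable by everyone. This is a crude stand-in for a real access-control review (which would also cover group memberships, sharing links, and application permissions), and the directory path in the usage comment is hypothetical.

```python
import os
import stat

def find_open_files(root):
    """Walk a directory tree and list files readable by everyone.

    Checks only the POSIX world-readable bit, a rough proxy for
    "far too accessible" data on a shared file system.
    """
    exposed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IROTH:  # world-readable bit is set
                exposed.append(path)
    return exposed

# Example usage (path is illustrative):
# print(find_open_files("/shares/finance"))
```

Even a simple report like this tends to surprise teams with how much sensitive material is open to anyone on the network.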
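The monitoring idea above can be sketched as a simple baseline comparison: track each user's typical daily data access, then flag anyone whose activity spikes well beyond it. The usernames, counts, and threshold below are illustrative assumptions, and real detection systems use far richer signals than raw counts.

```python
from collections import Counter

def flag_anomalies(baseline, today, threshold=3.0):
    """Flag users whose data-access count today exceeds `threshold`
    times their historical daily average (or who have no history)."""
    alerts = []
    for user, count in today.items():
        avg = baseline.get(user, 0)
        if avg == 0 or count > threshold * avg:
            alerts.append(user)
    return sorted(alerts)

# Historical average daily file accesses per user (assumed data).
baseline = {"alice": 40, "bob": 25, "carol": 60}
# Today's observed accesses: bob is pulling far more files than usual.
today = Counter({"alice": 45, "bob": 400, "carol": 55})
print(flag_anomalies(baseline, today))  # prints ['bob']
```

The same pattern credit card companies use for transactions applies here: establish what normal looks like per account, then investigate the outliers.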
While some newer, trending technologies peak and slide into obsolescence, AI will almost certainly outlast the hype. If your data isn't locked down, AI (friendly or otherwise) could make a data breach more likely. As far as we know, not even AI can un-breach data, so protect your data first to ensure AI works for you rather than against you.
This article first appeared on Forbes.