Rewards and Risks: Why Generative AI Security is Essential

Learn the most significant gains of gen AI and what security risks you should be concerned with before deploying AI tools at your org.
Lexi Croisdale
7 min read
Last updated August 30, 2024

From countless news articles and social media posts to new tools built into your favorite software, artificial intelligence is everywhere.

Although the technology is not new, the buzz around generative AI started with the November 2022 release of ChatGPT, a chatbot built on a large language model (LLM) that generates content in response to users’ prompts.

Since then, similar gen AI tools — such as Copilot for Microsoft 365 and Salesforce Einstein Copilot — have debuted, and the use of AI to generate text, videos, photography, code, and more has spread like wildfire.

However, many security leaders hesitate to deploy these tools in their organizations due to the security concerns associated with gen AI.

AI security concerns

Cybercriminals have manipulated AI tools to exfiltrate data, turning to platforms like WormGPT, an AI model trained on malware-creation data and used to generate malicious code and other content for ill intent.

With tools like Microsoft Copilot, which can access everything a user has permission to see, both employees and threat actors can instantly search and compile data from across your org’s documents, presentations, emails, calendars, notes, and contacts.

Therein lies the problem for information security teams. Copilot can access all the sensitive data a user can access, which is often far too much. These broad permissions can land critical information in the wrong hands. 

With the exponential growth in gen AI tools, there is no better time to protect your organization. In 2024, the mean cost of a data breach reached nearly $5 million, driven by factors like lost IP, reputational damage, and steep regulatory fines.

In this blog, we’ll highlight the most significant gains of gen AI and share what security concerns to look out for.

Why generative AI security matters now

AI chatbots date back to the 1960s, when Joseph Weizenbaum developed the first one, ELIZA. So why is generative AI so popular now, more than 50 years later?

Most experts agree that the introduction of ChatGPT accelerated the development of gen AI and gave the world access to powerful technology.

“What ChatGPT has done is commoditized AI and made it available to more and more people; putting it on a search engine front-end just means that more people can use it without understanding the underlying technology,” said Thomas Cock, a Security Architect on the Varonis Incident Response Team, who presented a webinar on ChatGPT.

With many software providers developing their own AI programs, security teams may be caught off guard when the tools are released because they haven’t yet learned how to combat the risks they present.

How gen AI tools enhance productivity 

Despite the security risks associated with gen AI tools, this technology undoubtedly boosts an organization's efficiency. 

For example, Copilot for Microsoft 365 relies on your company’s data to perform tasks, such as “attending” your Teams meetings and taking notes in real time, helping triage and reply to emails in Outlook, and even analyzing raw data in Excel. 

Varonis CMO Rob Sobers and Mike Thompson, Director of Cloud and Security Architecture, presented a deep dive on generative AI, explaining how Copilot’s security model functions. They outlined the pros and cons of the new technology to help security teams understand the tools.

Copilot is being called the most powerful productivity tool on the planet, and if you've ever used gen AI tools, you probably can see why it's being called that.

Robert Sobers, Varonis Chief Marketing Officer

 

“Imagine having a little ChatGPT built into all your Office apps like Word, PowerPoint, Excel, and Microsoft Teams,” Rob added.

Strengthening security with generative AI tools 

Beyond these productivity gains, security teams can benefit from gen AI in several other ways, including enhanced cybersecurity operations, threat detection, and defense strategies.

Additional advantages of gen AI include: 

  • Blue team defenders: Just as a threat actor may use AI tools for harm, businesses can use them for good. Thomas shared how ChatGPT lets users check malicious code, detect specific vulnerabilities, and summarize outputs almost instantly (see the sketch after this list).
  • Malware analysis: Generative AI can produce variants of known malware samples, aiding cybersecurity professionals in creating more comprehensive malware detection and analysis systems. 
  • Deception and honeypots: Gen AI can create realistic decoy systems or honeypots to entice attackers. This allows security teams to monitor and analyze attack techniques, gather threat intelligence, and divert bad actors away from real assets. 
  • Automated response generation: When an attack is detected, gen AI tools can assist in generating automated responses to mitigate the threat. This can include creating firewall rules, deploying countermeasures, and isolating compromised systems. The tech can also save analysts time when responding to threats.
  • Adaptive security measures: Generative AI can help develop security mechanisms that adapt to evolving threats. By continuously learning from new attack techniques, these systems can evolve and improve their defense strategies over time.
  • Visualizing attacks: Gen AI can assist in visualizing complex attack patterns and behaviors, making it easier for security analysts to understand how attacks are executed and identify patterns that might not be obvious. 
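To make the blue-team idea concrete, here is a minimal sketch in Python of the kind of triage helper Thomas describes. It assumes the OpenAI Python SDK; the model name, system prompt, and truncated encoded command are illustrative placeholders, not Varonis tooling:

```python
# Hypothetical blue-team triage helper: send a suspicious command to an LLM
# for a plain-English risk summary. All names here are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SUSPICIOUS_SNIPPET = (
    "powershell -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoA..."
)

def triage_snippet(snippet: str) -> str:
    """Ask the model what a suspicious command likely does and how severe it is."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a SOC analyst. Explain what this code does, "
                    "flag likely malicious behavior, and rate its severity."
                ),
            },
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

print(triage_snippet(SUSPICIOUS_SNIPPET))
```

In practice, you would feed real detections into a workflow like this and keep a human analyst in the loop before acting on the model’s assessment.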
Get started with our free Copilot Security Scan.
Get your assessment

Combating the concerns and challenges of gen AI 

While gen AI offers many benefits, it's important to note that this developing technology also has risks that warrant attention. 

Prompt-hacking copilots 

One challenge generative AI poses for organizations is that these tools rapidly accelerate data growth, a trend that is already out of control. Security teams that are stretched thin are not ready for this force multiplier.

According to Forrester, security is a top concern for companies adopting AI, with 64% of respondents reporting they don’t know how to evaluate the security of generative AI tools. However, despite those concerns, organizations have yet to lock down gen AI use, with 62% of businesses having no strict access limitations and 54% lacking clear guidance on acceptable use.

One of the top Microsoft Copilot concerns is that its security model relies on existing permissions: Copilot can reach every file and piece of information a user can. Unfortunately, most users in an organization already have far more access than they need.

“One thing that every organization has in common is this huge spike in organization-wide access,” Mike said. “This is the biggest risk that we think goes unaddressed for most organizations because that's what Copilot is leveraging — the permissions are defined through SharePoint and OneDrive. It's your responsibility to enforce the least privilege model internally, but how many people are doing that effectively?” 
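To get a rough sense of the organization-wide exposure Mike describes, a first-pass audit might flag files that carry an organization-scoped sharing link. The sketch below uses the Microsoft Graph API; the token and drive ID are placeholders, it only inspects top-level items, and a real audit would recurse through folders and handle paging:

```python
# Illustrative audit sketch: list files in a drive shared with the whole org,
# which is exactly the access Copilot inherits. Placeholders are marked below.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"   # placeholder: acquire via MSAL with Files.Read.All
DRIVE_ID = "<drive-id>"    # placeholder: the SharePoint/OneDrive drive to audit
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def org_wide_items(drive_id: str):
    """Yield (file name, link type) for items shared organization-wide."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json().get("value", [])
        for perm in perms:
            link = perm.get("link") or {}
            if link.get("scope") == "organization":
                yield item["name"], link.get("type")

for name, link_type in org_wide_items(DRIVE_ID):
    print(f"org-wide {link_type} link: {name}")
```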

During one of Varonis’ Copilot Live Labs sessions, our host entered the prompt, “What documents can you find containing new employee data?” to highlight how easily sensitive data can be surfaced. When this prompt is used in an insecure environment, Copilot can share information like Social Security numbers, salary and payroll figures, and more.


Without proper training or proactive security measures, companies risk spreading crucial information throughout the organization and potentially across the entire internet.

Forgoing human fact-checking 

As the adoption of gen AI tools continues to grow, users may come to over-rely on the accuracy of AI outputs.

For example, an employee could ask Copilot for Microsoft 365 to generate a proposal using existing documents and meeting notes, eliminating hours of manual labor. However, without thoroughly reviewing the final draft, sensitive internal information from the original documentation could find its way into the external proposal.

Threat actors using AI

Bad actors can manipulate gen AI tools to write malicious code, locate vulnerabilities, launch large-scale attack campaigns, and generate fake data sets for extortion attempts. 

In 2024, KnowBe4 mistakenly hired a threat actor who had used AI tools to create false pictures and video footage during interviews.

“Attackers are going to get good at prompt engineering instead of learning PowerShell or Python,” Rob said. “If they know they can compromise a user, and they'll have access to an AI tool, why not get better at prompt engineering?” 

Other security concerns and risks associated with generative AI include: 

  • Cyberattack campaigns on demand: Attackers can harness generative AI to automate the creation of malware, phishing campaigns, or other cyber threats, making it easier to scale and launch attacks. In his presentation, Thomas shared an example of how ChatGPT can personalize an email to appeal to Elon Musk about investing in X, formerly known as Twitter. Feeding details about a target into the prompt, such as age, gender, education, and company information, helps threat actors write convincing messages that are more likely to get users to act.
  • Susceptibility to tool manipulation: AI technologies can be compromised, leading to incorrect or malicious outputs. While some AI tools have ethical standards in place to help combat improper use, threat actors have successfully circumvented these guardrails. 
  • Sensitive information leakage: Generative AI models often learn from large datasets, which might contain sensitive information. If not properly handled, generated outputs can inadvertently reveal confidential data. Some AI tools also retain what users submit, making that data accessible to anyone who gains access to your accounts on those tools.
  • Intellectual property theft: Generative models often pull in a massive amount of publicly available information, including exposed proprietary data. Gen AI tools can therefore infringe on others’ intellectual property rights and invite lawsuits. For example, image-based AI tools have reproduced Getty’s watermark on generated images because the models were trained on Getty’s vast library of publicly visible photos. Just as gen AI solutions infringe on the media company’s copyright, your org’s IP could wind up in future generated content if it’s not properly secured. 
  • Identity risk and deepfakes: Generative AI can be used to create convincing images, videos, and audio clips, leading to identity theft, impersonation, and the creation of deepfake content to spread misinformation. These tools can also make phishing campaigns seem more human to appeal to their target.

ChatGPT, in particular, is designed to mimic human interaction, making it the perfect tool for phishing campaigns. Threat actors have also used the LLM to package malware into fake applications, a popular attack in ChatGPT’s early days, before its maker, OpenAI, released an official iOS application.

“Even if you search the Chrome web store for ChatGPT and include the word ‘official,’ you still get over 1,000 results, and none of these are legitimate, first-party applications,” Thomas said. “Not all of them will be malicious, but you have to wonder why people are paying for you to use the API on their back end. What are they gaining from it? What information are they taking from you?”

Using generative AI in automation 

A robust Data Security Platform (DSP) can prevent employees from accessing sensitive data they shouldn’t. DSPs can help security teams automatically discover, classify, and label sensitive data, enforce least privilege, and continuously remediate data exposure and misconfigurations.
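As a toy illustration of the discover-and-classify step (a real DSP uses far more sophisticated detection than regular expressions), a first pass might look like the sketch below. The patterns, file types, and directory are assumptions:

```python
# Illustrative classification sketch, not Varonis code: scan text files for
# patterns that suggest sensitive content and label the matches.
import re
from pathlib import Path

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "payroll_term": re.compile(r"\b(salary|payroll|compensation)\b", re.I),
}

def classify(root: str):
    """Yield (path, label) pairs for files matching a sensitive pattern."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                yield path, label

for path, label in classify("./shared_drive"):  # assumed directory
    print(f"{label:>13}: {path}")
```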

In one instance, the security team at Tampa General Hospital knew they needed more insight into their organization's Microsoft 365 permissions before enabling Copilot.

With Varonis, they classified over 43,000 files in just seven days, allowing them to deploy Copilot safely in their organization.

"Varonis allowed us to actually deploy AI. Without it, I don't think I would have been able to safely greenlight and recommend that we use Copilot," said Jim Bowie, the hospital’s CISO."The only reason I'm prepared and feel safe in recommending our AI products through our AI Governance Committee is because we have Varonis." 

 

Varonis’ global Managed Data Detection and Response (MDDR) team also investigates abnormal activity on your behalf 24x7x365. If an employee is accessing information they shouldn’t, you’re alerted instantly. Our automation capabilities help reduce the time to detection, allowing us to respond and investigate quickly. 

“If you have good visibility of what people are doing, where the data sits, the sensitivity of that data, and where you have concentrations of sensitive data, it's much easier to reduce that blast radius and make sure only the right people have access,” Thomas said. 
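Varonis’ detection models are far richer than this, but a toy baseline-and-threshold sketch shows the basic shape of abnormal-access alerting. The event data and threshold are invented for illustration:

```python
# Toy anomaly sketch: flag users whose latest daily file-access count far
# exceeds their historical baseline. The events below are made up.
from collections import defaultdict
from statistics import mean, stdev

events = [  # (user, day, files_accessed) stands in for a real audit log
    ("alice", "2024-08-01", 42), ("alice", "2024-08-02", 38),
    ("alice", "2024-08-03", 45), ("alice", "2024-08-04", 910),  # spike
    ("bob", "2024-08-01", 12), ("bob", "2024-08-02", 15),
]

history = defaultdict(list)
for user, _, count in events:
    history[user].append(count)

for user, counts in history.items():
    if len(counts) < 3:
        continue  # not enough history to build a baseline
    baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
    latest = counts[-1]
    if latest > baseline + 3 * spread:  # simple z-score style threshold
        print(f"ALERT: {user} accessed {latest} files (baseline ~{baseline:.0f})")
```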

How to confidently navigate the AI landscape

Delaying the implementation of AI security controls until after a data breach takes place puts you at a disadvantage. 

And simply forgoing gen AI tools, such as Microsoft Copilot or native features in your favorite software, is not a viable option, as they continue to gain popularity across all industries. 

Leaders should instead educate their teams on acceptable sharing and usage practices. For example, employees may include customer data in ChatGPT prompts without realizing the implications, but this is precisely the action threat actors hope users take.

If just one employee visits a counterfeit ChatGPT website and inputs confidential data, it could jeopardize your organization. 

Reduce your risk without taking any.

Undoubtedly, AI has made a significant impact globally and will continue to advance in the future. Disregarding the proliferation of gen AI tools due to security worries may hinder your organization's progress.

To secure your organization’s sensitive data, you must recognize the advantages and potential issues with generative AI, educate employees on proper usage, and establish guidelines for acceptable sharing practices. 

If you’re interested in deploying AI copilots at your organization, start with our free Copilot Security Scan. This assessment provides you with a summary of your Copilot data security risks and delivers practical advice for an effective generative AI rollout. 

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

1. Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.

2. See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.

3. Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.

Try Varonis free.

Get a detailed data risk report based on your company’s data.
Deploys in minutes.

Keep reading

Varonis tackles hundreds of use cases, making it the ultimate platform to stop data breaches and ensure compliance.

Why Your Org Needs a Copilot Security Scan Before Deploying AI Tools
Assessing your security posture before deploying gen AI tools like Copilot for Microsoft 365 is a crucial first step.
2024 Cybersecurity Trends: What You Need to Know
Learn more about data security posture management, AI security risks, compliance changes, and more to prepare your 2024 cybersecurity strategy.
The Attacker’s Playbook: Security Tactics from the Front Lines
Understand a threat actor's mindset to strengthen your security posture with mitigation tips from Varonis' forensic experts.
What is a Data Risk Assessment and Why You Should Take One
Conducting a Data Risk Assessment can help your organization map its sensitive data and build out a comprehensive security strategy. Here's how to perform it.