Generative AI Security: Preparing for Salesforce Agentforce

This article was written in collaboration with Mike Smith, Distinguished Security Architect at Salesforce. It covers how the security model of Salesforce Agentforce (formerly Einstein Copilot) works and the risks you must mitigate to ensure a safe and secure rollout.
Collaborative Article
4 min read
Last updated December 13, 2024
Salesforce Agentforce

Salesforce Agentforce (formerly Einstein Copilot) is a platform that enables organizations to create and manage autonomous AI agents. These agents provide specialized, always-on support to employees and customers by executing tasks and answering questions based on their specific roles. 

Agentforce agents can understand natural language queries to answer questions, provide insights, and perform tasks across Salesforce to help streamline daily processes and increase productivity. They are designed to work autonomously, making decisions and taking actions without the need for constant human intervention.

This new AI capability promises great leaps in productivity and streamlined processes, but it also comes with risks you must take steps to mitigate.

In this blog, we will discuss:

  • Salesforce Agentforce use cases
  • How Salesforce Agentforce works
  • The Einstein Trust Layer
  • The shared responsibility model
  • Best practices to prepare your Salesforce Orgs for Agentforce

Salesforce Agentforce use cases

Some of the key use cases for Agentforce are:

  • Helping sales reps find leads, create opportunities, update records, schedule and summarize meetings
  • Enabling service agents to resolve cases faster, quickly access knowledge articles, and escalate issues
  • Assisting marketers in creating campaigns, writing emails, segmenting audiences, and analyzing results
  • Helping merchants optimize their online stores, create new Salesforce sites, manage inventory, process orders, and more
  • Providing users with the ability to analyze their data, create reports and dashboards, and discover trends and patterns

And all of this can be done with a simple prompt from the user in plain language. 

How Salesforce Agentforce works

Below is a simple overview of how Agentforce processes prompts:

  • A user inputs a prompt within Salesforce Marketing, Sales, or Service Cloud
  • Agentforce ingests the prompt and runs a similarity search against the connected data sources to identify relevant context
  • Both the prompt sent to the large language model (LLM) and the model's response are processed through the Einstein Trust Layer
  • Agentforce generates an answer and performs actions within Salesforce

Retrieval-augmented generation (RAG) with Salesforce: the Agentforce processing model (Source)
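The retrieval step above can be sketched in a few lines of Python. This is a toy model of the concept only: the scoring function, data sources, and prompt template are hypothetical, not Salesforce's actual implementation.

```python
# Illustrative sketch of the retrieval-augmented flow above. The data,
# scoring, and prompt template are hypothetical -- they model the concept,
# not Salesforce's internals.

def similarity(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words found in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve_context(query: str, sources: list[str], k: int = 2) -> list[str]:
    """Rank connected data sources by relevance and keep the top k."""
    ranked = sorted(sources, key=lambda d: similarity(query, d), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str, context: list[str]) -> str:
    """Combine the user prompt with retrieved context before the LLM call."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{ctx}\n\nUser question: {query}"

sources = [
    "Acme Corp renewal opportunity closes in Q2",
    "Knowledge article: how to reset a customer password",
    "Marketing campaign results for the spring promotion",
]
question = "When does the Acme Corp renewal close?"
prompt = build_grounded_prompt(question, retrieve_context(question, sources))
```

In a real deployment the similarity search runs over vector embeddings in Data Cloud rather than word overlap, but the shape of the flow (retrieve, ground, then generate) is the same.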

The Einstein Trust Layer

Salesforce is committed to securing the data that customers process through Agentforce. To do this, it has developed the Einstein Trust Layer.

Customer data flowing through Agentforce is encrypted within the Trust Layer, and none of that data is retained on the backend. Sensitive data such as personally identifiable information (PII), payment card information (PCI), and protected health information (PHI) is also masked.

The Einstein Trust Layer also uses toxic language detection to reduce biased, toxic, and unethical responses, lessening the burden on the end user.

Salesforce has stated that it will not use customer data to train the LLMs behind Agentforce and that customer data will not be sold to third parties.

The Einstein Trust Layer ensures your data is safe. (Source)
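The masking behavior described above can be illustrated with a short sketch. The regex patterns and placeholder tokens here are assumptions for the example, not the Einstein Trust Layer's actual detection logic:

```python
import re

# Hypothetical sketch of sensitive-data masking before a prompt reaches the
# LLM. The patterns and placeholder tokens are illustrative only.

MASK_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace detected PII/PCI values with typed placeholder tokens."""
    for label, pattern in MASK_PATTERNS.items():
        text = pattern.sub(f"[{label}_MASKED]", text)
    return text

prompt = "Email jane.doe@example.com about card 4111 1111 1111 1111"
masked = mask_sensitive(prompt)
```

Real detection relies on trained classifiers as well as patterns, but the principle is the same: the raw sensitive values never reach the external model.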

Protecting your Salesforce data — a shared responsibility

One of the key components of Salesforce security is its shared responsibility model. The shared responsibility model defines the roles and responsibilities of Salesforce and its customers regarding the secure use of data, AI, and the overall platform.

In this model, Salesforce is responsible for securing the infrastructure, platform, and services that enable AI (as shown by the Einstein Trust Layer) and the secure processing of customer data through Agentforce.

At the same time, customers are responsible for securing the applications and configurations that connect to the AI, including:

  • Permissions – Agentforce agents will surface all organizational data that an individual user can access
  • Data – Agents rely on up-to-date data to provide high-quality and accurate results
  • Usage – Customers must ensure agents are used properly and responsibly

This ensures both parties work together to form the highest level of security and trust.

The shared responsibility model between customers and cloud service providers (CSPs) like Salesforce (Source)

Best practices to prepare your Salesforce Orgs for Agentforce

Lock down permissions to sensitive data. 

Agentforce agents inherit the access and permissions of the Salesforce user, so it’s imperative to mitigate risk by locking down critical data, ensuring that each user (and thereby agents) can only access what they need to do their job.

To understand each user’s permissions, you’ll need to parse their:

  • Profile
  • Permission Sets
  • Permission Set Groups
  • Role/hierarchy
  • Muted permissions

However, Salesforce permissions are highly complex and require significant effort to analyze and understand — especially considering a large enterprise can have up to 1,000 Permission Sets with dozens of permissions in each one.

On top of that, security teams must rely on Salesforce teams to help complete this process, and because Salesforce admins already have their plates full keeping the business running, the effort can quickly become overwhelming.
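Conceptually, resolving a user's effective permissions means taking the union of grants from the profile, permission sets, and permission set groups, then subtracting anything muted within a group. A minimal sketch, with illustrative permission names:

```python
# Hypothetical model of Salesforce permission resolution: effective access is
# the union of grants from the profile, permission sets, and permission set
# groups, minus permissions muted within a group. Names are illustrative.

def effective_permissions(profile: set[str],
                          permission_sets: list[set[str]],
                          group_grants: set[str],
                          muted: set[str]) -> set[str]:
    """Union all grants, then remove group permissions muted by admins."""
    granted = set(profile)
    for ps in permission_sets:
        granted |= ps
    granted |= group_grants - muted  # muting applies within the group
    return granted

perms = effective_permissions(
    profile={"Read:Account"},
    permission_sets=[{"Edit:Opportunity"}, {"Read:Case"}],
    group_grants={"Read:Contact", "Export:Report"},
    muted={"Export:Report"},
)
```

Multiply this by hundreds of permission sets and thousands of users and the scale of the analysis problem becomes clear, which is why answering "what can this agent see?" is rarely a manual exercise.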

Update and purge old internal data and documentation.

Agentforce relies on your internal documentation and data to ground generative AI prompts with helpful context and provide accurate and relevant information.

As Salesforce says, “Good AI starts with great data.”

Agentforce pulls data from the Salesforce Data Cloud, which unifies multiple data sources, including your Salesforce environment and cloud storage (like AWS and Snowflake).

Data is the source of truth for generative AI, and to ensure the best Agentforce experience and reduce the risk of hallucination, your data needs to be:

  • Secure
  • Available
  • Clean
  • Timely

Along with ensuring your permissions are locked down and correct, you should also perform an initial record and documentation review across the data stores agents pull from and update or purge out-of-date, stale, and inaccurate information.

Then, you can set up a regular review process to keep your internal documentation clean and up to date. 

How Agentforce uses your data to build AI experiences in Salesforce (Source)
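A stale-data sweep like the one described above can be as simple as flagging records that haven't been modified within a chosen window. The record fields and the 365-day threshold below are assumptions for the sketch, not a Salesforce default:

```python
from datetime import date, timedelta

# Illustrative sweep for stale records. The record fields and 365-day
# threshold are assumptions for the sketch.

STALE_AFTER = timedelta(days=365)

def find_stale(records: list[dict], today: date) -> list[str]:
    """Return names of records not modified within the staleness window."""
    return [r["name"] for r in records
            if today - r["last_modified"] > STALE_AFTER]

records = [
    {"name": "Pricing FAQ 2021", "last_modified": date(2021, 3, 1)},
    {"name": "Current return policy", "last_modified": date(2024, 11, 5)},
]
stale = find_stale(records, today=date(2024, 12, 13))
```

Running a sweep like this on a schedule, and routing the flagged records to content owners for update or deletion, is one way to operationalize the review process.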

Identify sensitive data that AI shouldn't access.

There is bound to be data in your environment that you don't want agents to be trained on or surface answers from. With Salesforce, you can create zones that section off data you don't want agents to access; however, it is up to the customer to determine what that data is and where it lives. 
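One way to think about sectioning off data is as a filter over the retrieval corpus: records carrying restricted classification labels are simply excluded from what agents can see. The labels and records below are illustrative, not a Salesforce feature's actual schema:

```python
# Hypothetical sketch of sectioning off data agents must not access:
# records tagged with restricted classifications are excluded from the
# retrieval corpus. Labels and records are illustrative.

RESTRICTED = {"PHI", "PCI", "Legal-Hold"}

def agent_visible(records: list[dict]) -> list[dict]:
    """Keep only records whose classification labels are all agent-safe."""
    return [r for r in records if not (set(r["labels"]) & RESTRICTED)]

records = [
    {"id": "a1", "labels": ["Public"]},
    {"id": "a2", "labels": ["PHI"]},
    {"id": "a3", "labels": ["Internal", "Legal-Hold"]},
]
visible = agent_visible(records)
```

The hard part, as the paragraph above notes, is not the filtering itself but discovering and classifying the sensitive data in the first place.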

Ensure proper use.

Many departments — from support to marketing — will use Agentforce to generate customer and public-facing content. However, as we mentioned previously, the quality and accuracy of AI output often rely on the quality of the input. 

Salesforce's Prompt Builder helps ensure your users generate proper responses from the AI. This feature enables admins to set up guardrails for specific processes within a workflow (for example, customer support responses) to ensure appropriate, on-topic, quality AI output.

Agent Builder enables admins to customize agent actions and provide users with a template to feed prompts into, dynamically grounding each prompt with information such as customer names, accounts, context, and relevant articles that can further improve the AI's response.

Create prompt guardrails through the Einstein Trust Layer (Source).

This will also help you safeguard against prompt injection attacks, in which a malicious actor tries to provide instructions that trick the model into giving a response it shouldn’t. 
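Template-based grounding with a basic injection screen can be sketched as follows. The template fields and blocked phrases are assumptions for the example, not Prompt Builder's actual behavior (real injection defenses are far more sophisticated than phrase matching):

```python
# Illustrative sketch of template-based grounding with a simple injection
# screen. The template and blocked phrases are assumptions for the example.

BLOCKED_PHRASES = ("ignore previous instructions", "reveal your system prompt")

TEMPLATE = (
    "You are a support agent for {account}. Answer only from the article "
    "below and stay on topic.\n\nArticle: {article}\n\nCustomer: {question}"
)

def build_prompt(account: str, article: str, question: str) -> str:
    """Reject likely injection attempts, then fill the grounded template."""
    if any(phrase in question.lower() for phrase in BLOCKED_PHRASES):
        raise ValueError("Prompt rejected: possible injection attempt")
    return TEMPLATE.format(account=account, article=article, question=question)

safe = build_prompt("Acme Corp", "Resets take 5 minutes.",
                    "How long do resets take?")
```

Constraining what reaches the model (the template) matters as much as screening what the user types, since a fixed structure leaves an attacker less room to redirect the agent.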

Prepare your Salesforce Orgs for Agentforce with Varonis

Before you start your AI journey with Agentforce, it is essential you understand your Salesforce security posture and ensure that your data is prepared for a safe and smooth rollout.

The Varonis Data Security Platform helps organizations gain an overview of their Salesforce security posture by:

  • Greatly simplifying permissions analysis
  • Automatically discovering and classifying sensitive data
  • Surfacing stale data
  • Identifying critical misconfigurations
  • Managing third-party app risk
  • Continuously monitoring sensitive data activity and detecting risky behavior
  • Integrating with and enhancing Salesforce Shield

Varonis can help your organization prepare for a safe and smooth Agentforce rollout. 

Request a demo today and get started with a complimentary Salesforce risk assessment. Getting started is free and easy, and the results are yours to keep.

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

1. Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.

2. See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.

3. Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.

Try Varonis free.

Get a detailed data risk report based on your company’s data.
Deploys in minutes.

Keep reading

Varonis tackles hundreds of use cases, making it the ultimate platform to stop data breaches and ensure compliance.

Copilot Security: Ensuring a Secure Microsoft Copilot Rollout
This article describes how Microsoft 365 Copilot's security model works and the risks that must be considered to ensure a safe rollout.
Varonis in the Cloud: Building a Secure and Scalable Data Security Platform
How we built our cloud-native SaaS platform for scalability and security—without taking any shortcuts.
A Practical Guide to Safely Deploying Gen AI
Varonis and Jeff Pollard, Forrester Security and Risk Analyst, share insights into how to securely integrate generative AI into your organization.
Understanding and Applying the Shared Responsibility Model at Your Organization
To avoid significant security gaps and risks to sensitive data, organizations need to understand the shared responsibility model used by many SaaS providers.