Speed Data: Unpacking Gen AI With Yohan Kim

Yohan Kim, Distinguished Security Technical Architect for Salesforce, gives insight into AI functionality and customer sentiments.
Megan Garza
3 min read
Last updated September 9, 2024
Megan Garza and Yohan Kim

Welcome to Speed Data: Quick Conversations With Cybersecurity Leaders. Like speed dating, our goal is to capture the hearts of CISOs with intriguing, unique insight in a rapid format for security professionals pressed for time.

Generative AI is the golden child of technology these days, and in this week’s episode of Speed Data, Yohan Kim, Distinguished Security Technical Architect for Salesforce, gives insight into AI functionality and customer sentiments.

He joined host Megan Garza to discuss the trends he sees in AI use, what his customers are looking for in machine learning capabilities, and how an innocuous phone call could be an exploitation attempt in disguise.

What customers look for in a CRM

As a Salesforce Distinguished Security Technical Architect, Yohan Kim works hand-in-hand with his customers to ensure that users of the world’s largest CRM are equipped for success. Through his discussions, Yohan has gleaned that most Salesforce users prioritize value, security, and regulatory compliance.

“Customers want value from their CRM investment, and every dollar counts,” he said. “Along the same lines as getting value, customers want solutions that are secure and compliant, and they want to know their data isn’t being used to train AI models — especially ones their competitors have access to.”

The meteoric adoption of gen AI

Generative AI has been dubbed the next Industrial Revolution and appears in everything from news articles and blog posts to embedded features in your favorite apps. J.P. Morgan Research predicts the technology could result in a massive workforce productivity boom over the next one to three years, which could affect the shape of the economic cycle.

Gen AI is being adopted at a remarkable pace, faster than any other technology in history.

Yohan Kim, Salesforce Distinguished Security Technical Architect

 

“I see some of it sprinkled around without even looking for it,” Yohan said. “For instance, you click this little star in Slack, and it summarizes your conversation. I used it after taking a two-month leave of absence, and the summary of thousands of messages was pretty good, and I expect it to get better.”

Just as sales is unmanageable without Salesforce, I see a future where AI is essential for efficiency.

Yohan Kim, Salesforce Distinguished Security Technical Architect

Customer concerns about generative AI

Yohan said his customers are excited about AI's possibilities but, given the security concerns, understandably remain wary of handing over their sensitive information.

“Customers are optimistic about all the AI and machine learning capabilities, but they are apprehensive about sticking their data in the models, and for good reason,” Yohan said. “There’s a lack of clarity in how governments and companies will ensure that AI and all the systems built and deployed around it are safe and ethical.”

One such security concern is deepfakes: generative AI can produce convincing counterfeit images, videos, and audio recordings, opening the door to identity theft and impersonation.

“There’s something called ‘vishing,’ and it’s a technology that allows you to emulate a voice,” Yohan said. “Some of it has been used in the entertainment area, where you have Eminem or Tupac’s voice on a completely new track, and some of it sounds pretty good!”

“Vishing can also be used to gain access to places where you authenticate via voice,” Yohan said. This is particularly problematic because something as benign as answering a phone call could lead to impersonation.

“If someone calls you, I’ve heard not to say the word ‘yes’ because once they record that, they can use it to agree to a question being asked,” he said. “I know Amex uses voice verification, and it’s just the tip of the iceberg.”

Regulating gen AI

With the expanding application of generative AI, authorities worldwide recognize the need for oversight and are enacting measures to prevent the misuse of AI for harmful activities.

“There’s a new law as of May 2024 called the EU AI Act,” Yohan said. “It’s the world’s first comprehensive AI law regulating the use of AI in the EU.”

The new act aims to ensure the safety and reliability of AI systems, positioning the EU as a leader in AI governance. It sets transparency requirements for general-purpose AI and outlines obligations such as risk management, self-assessment, and mitigation of systemic risks.

“I see it as the GDPR of AI. I suspect we’ll see more of this in the same way GDPR spurred other privacy-related laws.”

I predict customers will try to get ahead of the curve, the same way they did with privacy and other regulations.

Yohan Kim, Salesforce Distinguished Security Technical Architect

 

Yohan said that to become adept at using generative AI, it's crucial to continuously study and evaluate the technology's advancements.

“You’ve got to keep learning and be prepared,” Yohan said. “The security landscape and tools are always changing, so digging deep in a specific domain and being known as a go-to person for that goes a long way.”

What should I do now?

Below are three ways you can continue your journey to reduce data risk at your company:

1. Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.

2. See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.

3. Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.


Keep reading

Varonis tackles hundreds of use cases, making it the ultimate platform to stop data breaches and ensure compliance.

Salesforce Einstein Copilot: Boosting Productivity With a Focus on Security
AI tools like Salesforce Einstein Copilot can improve efficiency, but also increase risk. Check out these tips on preparing for a Copilot rollout.
6 Prompts You Don't Want Employees Putting in Copilot
Discover what simple prompts could expose your company’s sensitive data in Microsoft Copilot.
Speed Data: The (Non)Malicious Insider With Rachel Beard
Salesforce's Rachel Beard discusses why insider threats may not always have ill intentions and why security in the CRM is crucial.
Why Your Org Needs a Copilot Security Scan Before Deploying AI Tools
Assessing your security posture before deploying gen AI tools like Copilot for Microsoft 365 is a crucial first step.