Earlier this year, the European Parliament and Council agreed to regulate the use of AI in the EU with the passage of the European Union Artificial Intelligence Act (EU AI Act), the world’s first comprehensive AI regulation.
Why is the EU AI Act important?
If your organization uses AI and operates within the EU, or is an external company doing business in the EU, it's important to understand the new regulations and ensure compliance.
Non-compliance with certain provisions can result in fines of up to €35 million (about $38 million USD) or up to 7% of your organization's worldwide annual turnover, whichever is higher.
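To make that cap concrete, here is a minimal sketch of how the "whichever is higher" rule plays out (illustrative only; the function name is our own and not part of the Act):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion in annual turnover faces up to EUR 70 million.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```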
The new act aims to ensure the safety and reliability of AI systems, positioning the EU as a leader in AI governance. It mandates transparency requirements for general-purpose AI applications and outlines obligations such as risk management, self-assessment, mitigation of systemic risks, and more.
Key dates to be aware of:
- February 2025: Prohibitions on unacceptable risk AI systems take effect
- August 2026: The EU AI Act becomes fully enforceable
- August 2027: Rules for AI systems embedded into regulated products come into force
Continue reading to learn the steps your organization needs to take to comply with the new requirements.
The four levels of classification
The EU AI Act classifies AI systems according to the risk they pose to users, and each risk level carries its own degree of regulation. Systems in the four levels (unacceptable, high, limited, and minimal/no risk) are treated differently based on their use cases.
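As a rough mental model (our sketch, not the Act's legal text), the four tiers can be thought of as an ordered classification, each mapped to its regulatory treatment:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least regulated
    (an illustrative summary, not legal language)."""
    UNACCEPTABLE = "banned, with narrow law-enforcement exceptions"
    HIGH = "assessed before market entry and throughout the lifecycle"
    LIMITED = "transparency obligations, e.g., disclosing AI-generated content"
    MINIMAL = "largely unregulated"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```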
The EU AI Act focuses on unacceptable and high risk AI systems. Let’s take a closer look.
Unacceptable risk AI systems
AI systems regarded as an unacceptable risk are considered a clear threat to people and run counter to EU values.
The EU’s examples of unacceptable risk AI systems include:
- Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children
- Social scoring: classifying people based on behavior, socio-economic status, or personal characteristics
- Biometric identification and categorization of people
- Real-time and remote biometric identification systems, such as facial recognition
Unacceptable risk AI systems will be banned within six months of the EU AI Act's entry into force, with some exceptions for law enforcement purposes.
High risk AI systems
AI systems regarded as high risk negatively affect safety or fundamental rights and are divided into two categories:
- AI systems used in products falling under the EU’s product safety legislation, including toys, aviation, cars, medical devices, and lifts
- AI systems falling into specific areas that will have to be registered in an EU database, including management and operation of critical infrastructure, education and vocational training, and migration, asylum, and border control management
High risk AI systems must be evaluated before entering the market and during their lifecycle. Citizens have the right to file complaints about these systems.
These requirements and obligations will become applicable 36 months after the EU AI Act's entry into force.
Generative AI systems
Gen AI tools like Microsoft 365 Copilot and ChatGPT are not classified as high risk but must comply with transparency requirements and EU copyright law. This includes:
- Clearly disclosing when content is generated by AI so end users are aware (see the sketch after this list)
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
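For the first obligation, here is a minimal sketch of what disclosure could look like in practice (the label format, function, and model name are hypothetical, not prescribed by the Act):

```python
def label_ai_output(text: str, model_name: str) -> str:
    """Prepend a plain disclosure so end users know the content
    is AI-generated (hypothetical format, not prescribed by the Act)."""
    return f"[AI-generated content produced by {model_name}]\n\n{text}"

print(label_ai_output("Here is a summary of the quarterly report...", "ExampleGPT"))
```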
High-impact general-purpose AI models must pass thorough evaluations, and serious incidents must be reported to the European Commission.
These requirements will apply 12 months after the EU AI Act's entry into force.
How Varonis can help
Complying with complex regulatory structures is challenging, and with the introduction of the EU AI Act, the necessity for transparency and data security becomes even greater.
Varonis simplifies compliance management and provides real-time visibility and control over the critical data used by AI to help you comply with the EU AI Act in four critical ways:
- Securing the private and sensitive data ingested and produced by generative AI
- Providing complete visibility into AI prompts and responses, including indicating when sensitive data is accessed (see the sketch after this list)
- Alerting on threats and anomalies, including activity and behaviors that indicate misuse
- Automatically securing data, including revoking excessive access, correcting labels, and fixing misconfigurations, to reduce exposure and risk
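As an illustration of the visibility idea (not Varonis' actual implementation; the patterns are simplified examples), flagging sensitive data in an AI prompt or response might look like this:

```python
import re

# Simplified example patterns; production classifiers cover many more data types.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data detected in an AI
    prompt or response, so the interaction can be logged or alerted on."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

print(flag_sensitive("Summarize the file containing SSN 123-45-6789"))  # ['us_ssn']
```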
We’ve also enabled thousands of enterprises to comply with regulations, including HIPAA, GDPR, CCPA, NIST, and ITAR.
"Honestly, I don't know how other agencies achieve compliance without a solution like Varonis. I think we're doing a lot better than other organizations in the space, and Varonis supports that."
Security Admin, Healthcare Organization
Accelerating AI adoption for security teams
As part of a holistic approach to generative AI security, Varonis also provides the industry’s only comprehensive offering for securing Microsoft 365 Copilot and Salesforce Einstein Copilot.
Our Data Security Platform’s wide range of security capabilities can accelerate your organization's AI adoption and deployment with complete visibility and control over tool permissions and workloads.
Reduce your risk without taking any.
If you're beginning your AI journey or unsure how to comply with the EU AI Act’s requirements, we recommend starting with our free Data Risk Assessment.
In less than 24 hours, you'll have a clear, risk-based view of your security posture and whether it meets compliance standards.
Get started with your free assessment today.
What should I do now?
Below are three ways you can continue your journey to reduce data risk at your company:
- Schedule a demo with us to see Varonis in action. We'll personalize the session to your org's data security needs and answer any questions.
- See a sample of our Data Risk Assessment and learn the risks that could be lingering in your environment. Varonis' DRA is completely free and offers a clear path to automated remediation.
- Follow us on LinkedIn, YouTube, and X (Twitter) for bite-sized insights on all things data security, including DSPM, threat detection, AI security, and more.