European regulation of artificial intelligence is here: what does the AI Act bring and how to prepare for it?

The European Union has approved the world's first comprehensive legal framework for artificial intelligence, known as the AI Act. This legislative milestone aims to strike a balance between encouraging innovation and protecting the public from the risks that some AI systems pose.

AI technology is becoming an integral part of the modern world - from healthcare to transportation to HR systems. While most AI solutions bring benefits, some can pose serious risks, such as discrimination, loss of privacy or threats to fundamental rights. The AI Act therefore:
  • introduces a classification of AI systems according to the level of risk - from minimal to unacceptable
  • sets stricter rules for high-risk systems
  • bans certain dangerous applications (e.g. real-time biometric surveillance in public spaces)

But what does this mean specifically for companies developing, using or importing AI into the European market? And why is it a good idea to start preparing now?

Why was the AI Act created and what is its purpose?
The AI Act introduces obligations for the development, deployment and operation of AI systems in the EU. Systems are categorised according to risk. The category of so-called high-risk systems includes, for example, AI systems used for:
  • cyber threat detection
  • user behaviour analysis (UEBA)
  • classification or prioritisation of incidents
  • decision-making in security operations (e.g. SOAR systems)
Specific measures will be required for these systems in terms of risk management, model testing, decision transparency and auditability.

Who is required to comply with the AI Act?
The regulation applies to any company that develops, supplies or uses AI in the EU, regardless of where it is based. Exemptions apply to use in defence, national security or for purely personal purposes.
It applies to the following types of entities:
  • companies developing their own AI tools for security
  • security solution and service providers that use AI as part of their detection and decision-making mechanisms
  • organisations in regulated sectors that use third-party AI tools and need to verify their compliance with legislation
  • entities covered by other European regulations such as NIS2, DORA, GDPR, which form an interdependent framework of requirements with the AI Act

The rules are different for providers and deployers
  • providers are companies developing and selling AI systems
  • deployers are companies and organisations that only use AI
 
What rules apply to AI providers?
Companies that develop AI systems - especially high-risk ones - will have to comply with a number of obligations. These include:
  • establishing a risk and quality management system
  • maintaining technical documentation and automated records
  • demonstrating compliance through certification (conformity assessment)
  • transparency with customers and authorities
  • CE marking for high-risk systems
  • reporting of incidents and deficiencies
 
What are the obligations of companies that "only" use AI?
Ordinary users of AI will not escape liability either. Especially with high-risk systems (e.g. scoring of job applicants or AI in critical infrastructure), companies using AI will have to:
  • ensure that the system is used as instructed
  • conduct a fundamental rights impact assessment (where necessary)
  • train employees in AI
  • inform affected persons that a decision has been made or influenced by AI
  • keep records and cooperate with supervisory authorities

For limited-risk systems, lighter transparency obligations apply: users of chatbots must be informed that they are interacting with a machine, and AI-generated content such as deepfake videos must be labelled as such.
 

A special chapter is devoted to general-purpose AI models (GPAI)

Models like GPT-4, Claude or Gemini will have their own set of rules:
  • the obligation to keep technical documentation
  • a description of the model's capabilities and limitations
  • the introduction of a copyright compliance policy
  • for "systemically important models", in addition, risk assessment and incident reporting
 

Key dates for AI Act implementation:
  • 1 August 2024 - the AI Act entered into force
  • 2 February 2025 - bans on prohibited (unacceptable-risk) practices and AI literacy obligations apply
  • 2 August 2025 - rules for general-purpose AI models and the governance framework apply
  • 2 August 2026 - most remaining obligations apply, including those for high-risk systems
  • 2 August 2027 - end of the transition period for high-risk AI embedded in products covered by other EU product legislation
 
Penalties: up to €35 million or 7% of turnover!
Non-compliance can become very expensive, so regulatory compliance is not something to leave until the last minute. Companies - whether in the provider or deployer category - should address it already when designing AI systems, not only because of the legislative requirements and security risks, but also as a source of competitive advantage.
 
How can you prepare today?
We present a set of recommendations that you should start implementing in advance of the AI Act coming into full effect:

Take an inventory of AI systems
  1. Identify all AI systems and models used in your security infrastructure - both internal ones and those from external vendors (a simple registry sketch follows this list).
  2. Don't limit yourself to just the visible tools - many SIEM, SOAR or endpoint platforms use ML/AI "under the hood."
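To illustrate, a minimal sketch of such an inventory kept in code (Python) is shown below. The field names, risk buckets and example entries are assumptions made for this illustration, not terminology prescribed by the AI Act; the point is that a structured, exportable registry can later be attached to GRC or ISMS documentation.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json


class RiskLevel(str, Enum):
    """Illustrative risk buckets mirroring the AI Act's categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    """One entry in the internal AI inventory (illustrative fields)."""
    name: str
    vendor: str            # "internal" for in-house models
    purpose: str           # what the system is used for
    uses_ml: bool          # flags ML/AI hidden "under the hood" of a platform
    risk_level: RiskLevel  # preliminary assessment, to be confirmed with legal
    owner: str             # accountable team or role


inventory = [
    AISystemRecord("SIEM anomaly detection", "internal", "cyber threat detection",
                   True, RiskLevel.HIGH, "SOC team"),
    AISystemRecord("HR screening assistant", "ExampleVendor", "scoring of job applicants",
                   True, RiskLevel.HIGH, "HR department"),
]

# Export the inventory so it can be attached to GRC/ISMS documentation.
print(json.dumps([asdict(r) for r in inventory], indent=2))
```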
Evaluate the level of risk
  1. For each AI system, assess whether it falls into the AI Act's "high-risk" category. Typically, this applies to systems that impact security decision-making or automated interventions (a simple screening sketch follows this list).
  2. Consider a combination of technical and legal assessments - including the legal department, risk management and IT security.
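As a purely illustrative aid - not a substitute for the legal assessment above - a first automated screening pass over such a list of purposes could look like the Python sketch below. The keyword list is an assumed, heavily simplified excerpt of high-risk use cases and would have to be maintained together with legal and risk-management experts.

```python
# Assumed, simplified excerpt of high-risk use-case keywords (not the AI Act's wording).
HIGH_RISK_KEYWORDS = [
    "critical infrastructure",
    "scoring of job applicants",
    "creditworthiness",
    "biometric identification",
    "law enforcement",
]


def needs_high_risk_review(purpose: str) -> bool:
    """Return True if the stated purpose matches a high-risk keyword."""
    text = purpose.lower()
    return any(keyword in text for keyword in HIGH_RISK_KEYWORDS)


for purpose in ["scoring of job applicants", "spam filtering", "AI in critical infrastructure"]:
    status = ("flag for combined legal + technical review"
              if needs_high_risk_review(purpose) else "likely lower risk")
    print(f"{purpose}: {status}")
```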
Put in place AI-specific risk management
  1. Create processes to manage model-specific risks - e.g. misclassification, bias in input data, or vulnerability to adversarial attacks (techniques designed to fool machine-learning models for image, voice or text recognition).
  2. Introduce control mechanisms such as limits on autonomous AI decision making, fallback scenarios or escalation mechanisms.
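One possible shape of such a control mechanism, sketched in Python: the AI is only allowed to act autonomously above a confidence threshold, otherwise the decision is escalated to a human analyst as a fallback. The names and the threshold value are assumptions for this example, not prescribed values.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    alert_id: str
    verdict: str       # e.g. "malicious" or "benign"
    confidence: float  # model confidence between 0.0 and 1.0


# Assumed threshold; in practice it is tuned, justified and documented
# as part of the AI-specific risk management process.
AUTONOMOUS_ACTION_THRESHOLD = 0.95


def handle_detection(detection: Detection) -> str:
    """Decide whether the AI may act autonomously or must escalate to a human."""
    if detection.confidence >= AUTONOMOUS_ACTION_THRESHOLD:
        return f"auto-contain {detection.alert_id} (confidence {detection.confidence:.2f})"
    # Fallback scenario: below the threshold, a human analyst takes the decision.
    return f"escalate {detection.alert_id} to a SOC analyst for review"


print(handle_detection(Detection("INC-1042", "malicious", 0.97)))
print(handle_detection(Detection("INC-1043", "malicious", 0.62)))
```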
Validate and document the robustness of models
  1. Set up processes for regular testing of AI systems, including adversarial testing, validation of training data, and verification of model behavior.
  2. Document all model changes, deployment decisions, and test results - ideally within the existing GRC or ISMS structure.
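A minimal illustration of the idea in Python using scikit-learn: the same inputs are re-scored after small random perturbations and the share of changed predictions is recorded as a simple robustness metric. The dataset, model and noise level below are placeholders; real adversarial testing uses targeted perturbations rather than random noise.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder data and model standing in for a real detection model.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Naive robustness probe: add small random noise and count how many predictions flip.
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.1, size=X.shape)
baseline = model.predict(X)
perturbed = model.predict(X + noise)
flip_rate = float(np.mean(baseline != perturbed))

# The result (together with model and test versions) belongs in the documentation/audit trail.
print(f"Prediction flip rate under random noise: {flip_rate:.2%}")
```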
Prepare for regulatory and audit oversight
  1. The AI Act emphasizes transparency and auditability. Prepare an "audit trail" for your AI systems.
  2. Develop a policy for the use of AI in cybersecurity - defining purposes, responsibilities, access rights and controls.
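A minimal sketch of what an audit-trail record for AI-influenced decisions might look like in Python. The field names are illustrative assumptions; the point is that every automated decision can later be reconstructed - which system and model version decided, on which input, and whether a human overrode it.

```python
import json
import logging
from datetime import datetime, timezone

# Structured, append-only audit log for AI-assisted decisions (illustrative fields).
audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_audit_trail.jsonl"))


def log_ai_decision(system: str, model_version: str, input_ref: str,
                    decision: str, human_override: bool) -> None:
    """Append one audit record per AI-influenced decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "input_ref": input_ref,  # reference to the input data, not the data itself
        "decision": decision,
        "human_override": human_override,
    }
    audit_logger.info(json.dumps(record))


log_ai_decision("SIEM anomaly detection", "v2.3.1", "alert:INC-1042",
                "auto-contained endpoint", human_override=False)
```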

Consider validation by an external, independent expert or security testing of AI systems, especially if the systems are deployed at clients or in highly regulated environments.


Author: Marek Kovalčík