
AI Features in Your SaaS? What the EU AI Act Means for You

By Team LEXR

Last Updated 20/03/2026

Executive Summary

If your SaaS product uses AI – whether it’s a chatbot, recommendation engine, or automated decision-making – the EU AI Act now applies to you. The good news: most SaaS companies aren’t building AI from scratch. They are integrating it. That makes them a “deployer” under the AI Act, which comes with a specific (and manageable) set of obligations.

This guide explains what the AI Act means for SaaS founders who use third-party AI services like ChatGPT, Claude, or Gemini. You’ll learn how to determine your role under the AI Act, what obligations apply right now, and how to update your SaaS contracts to stay compliant.

Why SaaS Founders Need to Pay Attention Now

Almost every SaaS product today has AI features. Maybe you’ve added a GPT-powered assistant to your customer support. Perhaps your platform uses machine learning for personalization or fraud detection. Or you’ve integrated an AI API to automate workflows for your users.

Here’s the problem: many founders assume the AI Act only affects companies building AI models. That’s not how it works.

The EU AI Act regulates everyone in the AI supply chain – from the companies that train foundation models to the businesses that use those models in their products. If you integrate AI into your SaaS solution, you have legal obligations. Some of these obligations are already in force.

The February 2025 deadline for prohibited AI practices has passed, and the AI literacy requirement is now mandatory. The rules for general-purpose AI models – the models behind services like ChatGPT or Claude – have applied since August 2025. Enforcement is also ramping up, with Germany’s Federal Network Agency (Bundesnetzagentur) acting as the competent authority.

The AI Act has extraterritorial reach. Even if your company is based outside the EU, the AI Act applies if:

  • You place AI systems on the EU market
  • You deploy AI systems in the EU
  • The output produced by your AI system is used in the EU

If you have EU customers, the AI Act likely applies to you.

Are You a “Provider” or “Deployer”? Why It Matters

The AI Act assigns different obligations depending on your role. For SaaS companies, the critical distinction is between providers and deployers.

Providers are companies that develop AI systems (or have them developed), then place them on the market or put them into service under their own name or trademark. Think OpenAI, Anthropic, Google, or a startup that has trained its own proprietary model.

Deployers are natural or legal persons that use AI systems under their own authority – excluding purely personal, non-professional use. For SaaS companies, this means: if you’re integrating AI into your product or business operations, you’re a deployer.

The practical test: Did you train the model yourself, or are you calling an API? If you’re calling an API from OpenAI, Anthropic, Google, or similar providers, you’re almost certainly a deployer.

When You Might Become a Provider

There’s an important exception. Under Article 25 of the AI Act, you may become a provider yourself if you:

  • Make a substantial modification to a high-risk AI system already on the market, or
  • Modify a general-purpose AI model in a way that integrates it into an AI system with a new intended purpose not covered by the original provider’s assessment

“Substantial modification” means a change not foreseen by the original provider that affects compliance with the AI Act or modifies the intended purpose. Standard fine-tuning, prompt engineering, or RAG (retrieval-augmented generation) implementations typically don’t cross this threshold – but significant retraining that fundamentally changes the model’s capabilities or risk profile could.

This matters because provider obligations are significantly more demanding: technical documentation, conformity assessments, quality management systems, and post-market monitoring. Before undertaking major model modifications, get legal advice on whether you’re crossing the line from deployer to provider.

What Deployers Must Do: Your Compliance Checklist

As a deployer using AI in your SaaS product, here’s what the AI Act requires:

1. AI Literacy (Mandatory Since February 2025)

Article 4 requires that your staff and others dealing with AI systems on your behalf have a sufficient level of AI literacy, appropriate to their role and the context of use. This means they should understand:

  • How the AI system works at a basic level
  • Its capabilities and limitations
  • The potential risks associated with its use
  • How to interpret its outputs appropriately

Practical step: Implement training for team members who work with AI features. Document this training. It doesn’t need to be a PhD program – but your customer support team using an AI chatbot should understand what it can and can’t do reliably.

2. Transparency to Users

If your AI system interacts directly with people, from August 2026 you will need to inform them they’re interacting with AI. This applies unless it’s obvious from the context.

Practical steps:

  • Add clear disclosures in your product. “This response was generated by AI” or “You’re chatting with an AI assistant” is sufficient.
  • Update your terms of service to reflect AI usage.
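A disclosure works best when it is built into the response itself rather than bolted on in the UI. The following is a minimal sketch of that idea; the function names, payload fields, and disclosure wording are illustrative assumptions, not language mandated by the AI Act:

```python
# Sketch: every AI chat response carries its own transparency disclosure,
# so the frontend cannot render an AI reply without one.
# All names here (render_chat_response, get_ai_reply) are hypothetical.

AI_DISCLOSURE = "You're chatting with an AI assistant."

def render_chat_response(user_message: str, get_ai_reply) -> dict:
    """Return a response payload that always includes an AI disclosure."""
    reply = get_ai_reply(user_message)
    return {
        "text": reply,
        "generated_by_ai": True,       # machine-readable flag for the frontend
        "disclosure": AI_DISCLOSURE,   # surfaced to the user in the UI
    }

# Example with a stubbed model call:
payload = render_chat_response("When do you ship?", lambda m: "Within 2 days.")
print(payload["disclosure"])
```

Keeping the disclosure in the payload (rather than hard-coding it in one screen) means new surfaces that reuse the API inherit the notice automatically.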

3. Human Oversight

For AI systems that make or support decisions affecting people, you need appropriate human oversight mechanisms. Someone competent must be able to review, intervene in, or override AI decisions when necessary.

Practical step: Don’t let AI make consequential decisions on autopilot. Build in review workflows, especially for decisions affecting users’ rights, access, or significant interests.
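One common pattern for such review workflows is a gate that routes consequential AI recommendations to a human queue instead of applying them automatically. This is a simplified sketch under assumed names; which decision types count as "consequential" is a judgment call for your product, not a list from the AI Act:

```python
# Sketch of a human-in-the-loop gate: AI recommendations for consequential
# decisions are queued for a competent reviewer rather than auto-applied.
# Decision types and class names are illustrative assumptions.

from dataclasses import dataclass, field

CONSEQUENTIAL = {"account_suspension", "credit_limit", "refund_denial"}

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, decision_type: str, ai_recommendation: str) -> str:
        if decision_type in CONSEQUENTIAL:
            # Never auto-apply: a human must approve, amend, or override.
            self.pending.append((decision_type, ai_recommendation))
            return "pending_human_review"
        return "auto_applied"  # low-stakes decisions may proceed directly

queue = ReviewQueue()
print(queue.submit("account_suspension", "suspend"))  # pending_human_review
print(queue.submit("ui_theme", "dark"))               # auto_applied
```

The design choice worth noting: the gate sits in the decision path itself, so oversight cannot be skipped by a caller that forgets to invoke a separate review step.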

4. Input Data Quality

This obligation applies to high-risk AI systems – but it looks different depending on your role. Providers must ensure that training, validation, and testing datasets meet quality criteria, including relevance, representativeness, and accuracy (Art. 10). Deployers, on the other hand, are responsible for the data they feed into the system during operation: to the extent you control the input data, it must be relevant and sufficiently representative for the intended purpose (Art. 26(4)). Note that even outside the AI Act, data quality requirements may arise from other regulations – most notably the GDPR (Art. 5(1)(d)).

Practical step: If you’re using customer data as input to AI features, ensure it’s accurate, up-to-date, and appropriate for the use case. The principle is straightforward: poor input data leads to poor outputs – and for high-risk systems, that’s now a compliance issue.
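In practice this can be as simple as a pre-flight check that rejects stale or incomplete records before they reach the AI feature. The sketch below illustrates the idea; the required fields and the 12-month freshness threshold are assumptions you would replace with criteria appropriate to your use case:

```python
# Sketch: pre-flight data-quality checks on records fed into an AI feature.
# REQUIRED_FIELDS and MAX_AGE are illustrative assumptions.

from datetime import date, timedelta

REQUIRED_FIELDS = {"customer_id", "country", "updated_at"}
MAX_AGE = timedelta(days=365)

def validate_input(record: dict, today: date) -> list:
    """Return a list of data-quality issues; an empty list means OK to use."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    updated = record.get("updated_at")
    if updated and today - updated > MAX_AGE:
        issues.append("stale record: older than 12 months")
    return issues

record = {"customer_id": "c1", "updated_at": date(2024, 1, 1)}
print(validate_input(record, date(2026, 3, 20)))
```

Logging the rejected records alongside the reason also gives you evidence of the check itself, which helps when you need to demonstrate compliance.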

5. Monitoring and Incident Reporting

You must monitor AI system operations and report serious incidents to your AI provider and, where required, to authorities.

Practical step: Set up logging and monitoring for your AI features. If something goes seriously wrong – discriminatory outputs, security breaches, significant harms – you need to know about it and report it.
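A minimal version of this is a wrapper that logs every AI call and flags outputs matching simple incident heuristics for human follow-up. The marker strings and function names below are illustrative assumptions; real incident detection would be considerably more sophisticated:

```python
# Sketch: log each AI call and flag possible serious incidents for
# follow-up (and, where required, reporting). Heuristics are illustrative.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_features")

INCIDENT_MARKERS = ("discriminat", "personal data leak", "security breach")

def call_ai_feature(feature: str, prompt: str, model_call) -> str:
    output = model_call(prompt)
    # Log metadata, not raw content, to keep the audit trail GDPR-friendly.
    log.info("feature=%s prompt_chars=%d output_chars=%d",
             feature, len(prompt), len(output))
    if any(m in output.lower() for m in INCIDENT_MARKERS):
        # A flagged output should trigger your incident process: human
        # review, notification to the provider, and reporting if required.
        log.error("possible serious incident in feature=%s", feature)
    return output
```

Usage is a matter of routing all model calls through the wrapper, e.g. `call_ai_feature("support_bot", user_msg, client_call)`, so that no AI feature runs unlogged.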

High-Risk AI: When the Stakes Get Higher

Most SaaS AI features fall into the “limited risk” category, where transparency obligations are the main concern. But some applications trigger “high-risk” classification, which brings substantially more demanding requirements.

Your AI system may be high-risk if it’s used for:

  • Employment and HR: Recruiting, screening candidates, making promotion or termination decisions, allocating tasks, monitoring performance
  • Credit and insurance: Assessing creditworthiness, setting insurance premiums, evaluating claims
  • Education: Determining access to education, assessing students, monitoring behavior during exams
  • Essential services: Evaluating eligibility for public benefits, emergency services prioritization

If your SaaS AI features fall into high-risk categories, the compliance requirements increase significantly. For most private SaaS companies, this means robust documentation, human oversight measures, and monitoring obligations.

Fundamental Rights Impact Assessments

Fundamental Rights Impact Assessments (FRIAs) under Article 27 are specifically required for:

  • Public bodies or private entities providing public services
  • Any deployer using AI for credit scoring (Annex III, point 5(b))
  • Any deployer using AI for life and health insurance risk assessment (Annex III, point 5(c))

Most private B2B SaaS companies are not required to conduct formal FRIAs. However, even if you’re not legally required to do so, performing a similar internal assessment is good practice for high-risk AI deployments – and may become a customer requirement for enterprise deals.

Practical step: Map your AI features against the high-risk categories in Annex III of the AI Act. If there’s any doubt about your classification, get a legal assessment before your next product release.

Prohibited AI: What You Cannot Do

Some AI applications are banned outright. Since February 2025, these prohibitions are enforceable. Make sure your product doesn’t include:

  • Social scoring: Evaluating people based on social behavior or personal characteristics in ways that lead to detrimental treatment disproportionate to the context
  • Biometric emotion recognition in workplaces or schools: AI systems that infer emotions of employees or students based on their biometric data (facial expressions, voice patterns, physiological signals). Exceptions exist for medical and safety purposes. Note: text-based sentiment analysis of customer feedback or support tickets is not covered by this prohibition.
  • Manipulative AI: Systems designed to use subliminal techniques or exploit vulnerabilities to materially distort behavior in ways that cause significant harm
  • Untargeted facial image scraping: Creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV

Practical step: Audit your AI features against the prohibited practices list. If you’re building HR tech, edtech, or consumer-facing products, pay particular attention. A “sentiment analysis” feature that analyzes employee facial expressions during video calls would be banned – but analyzing the text of customer reviews would not.

Don’t Forget GDPR

The AI Act doesn’t replace data protection requirements – it adds to them. If your AI features process personal data, you still need:

  • A valid legal basis under GDPR for the processing
  • Appropriate data processing agreements with your AI providers
  • Compliance with data subject rights (access, deletion, portability)
  • A lawful basis for any international data transfers

AI-generated decisions about individuals may also trigger Article 22 GDPR requirements on automated decision-making. If your AI makes decisions that significantly affect users without meaningful human involvement, you may need to provide information about the logic involved and allow users to contest decisions.

Practical step: Review your data processing agreements with AI providers. Ensure your privacy policy accurately describes AI-related data processing. Consider whether any AI features constitute “automated decision-making” under Article 22 GDPR.

Updating Your SaaS Contracts for AI Compliance

The AI Act doesn’t just affect your product – it affects your contracts. Here’s what needs attention:

Your Terms of Service

Update your customer-facing terms to:

  • Disclose AI usage: Clearly state which features use AI and how
  • Explain limitations: Set appropriate expectations about AI accuracy and reliability
  • Allocate responsibility: Clarify that AI outputs require human review for consequential decisions
  • Define permitted use: Specify what customers can and cannot use AI features for

Sample clause language:

“AI-generated outputs are probabilistic and may contain errors or inaccuracies. Customer is responsible for reviewing and verifying all AI outputs before relying on them for business decisions. Customer shall not use AI features for [high-risk decisions without human oversight / purposes prohibited under applicable law / critical safety decisions].”

Your Contracts with AI Providers

When you use OpenAI, Anthropic, or similar services, ensure your agreements address:

  • Compliance commitments: The provider confirms their model complies with AI Act provider obligations
  • Documentation access: You can obtain necessary technical documentation to fulfill your deployer obligations
  • Incident notification: The provider will notify you of relevant incidents, compliance issues, or material changes to the model
  • Data handling: Clear terms on whether customer data is used for model training, how it’s processed, and where it’s stored
  • Indemnification: Consider who bears liability if the underlying model causes damages

Liability Clauses Under German Law

Here’s where German law creates specific challenges. If you’re selling to German customers, your standard terms (AGB) face strict scrutiny under §§ 305-310 BGB.

You cannot exclude liability for:

  • Intent or gross negligence
  • Injury to life, body, or health
  • Breach of essential contractual obligations (“cardinal duties” – obligations fundamental to the contract’s purpose)

You can limit liability for:

  • Simple negligence (except for cardinal duties, where you can cap but not exclude)
  • Strict liability under certain circumstances

Critical warning: If you’re using a US-style contract with broad liability exclusions (“provided as is,” “no warranties,” “no liability for indirect damages”) and simply translating it for the German market, those exclusions are likely unenforceable. Under German AGB law, invalid clauses don’t simply disappear. Per § 306 (2) BGB, they’re replaced by statutory provisions – often leaving you with unlimited liability for matters you thought were capped. German courts will not rewrite invalid clauses to make them valid. This makes proper drafting essential from the start.

Practical step: Have your SaaS contracts reviewed by counsel familiar with both AI Act requirements and German AGB law. LEXR’s contract review services can help ensure your agreements are compliant and enforceable.

Your AI Act Compliance Roadmap

Here’s a prioritized action plan:

Immediate:

  1. Audit your AI features: List every AI component in your product and identify the underlying provider
  2. Check for prohibited practices: Ensure nothing in your product falls under the banned categories
  3. Implement AI literacy training: Document that relevant staff understand your AI systems

Short-Term (next 3 months):

  • Update user-facing disclosures: Add transparency notices where AI interacts with users
  • Review contracts: Update terms of service and vendor agreements for AI compliance
  • Assess high-risk exposure: Determine if any features trigger high-risk classification
  • GDPR alignment: Ensure AI data processing is covered in privacy policies and DPAs

Medium-Term (next year)

  • Build monitoring infrastructure: Implement logging and incident detection for AI features
  • Prepare for high-risk rules: Full high-risk obligations apply from August 2, 2026 (Annex III systems) and August 2, 2027 (Annex I systems – AI used as a safety component of products covered by EU harmonization legislation)

Key Takeaways

The EU AI Act is now a reality for SaaS companies. If you integrate AI into your product, you have compliance obligations – regardless of whether you built the AI yourself.

The good news: as a deployer, your obligations are manageable. Focus on:

  • AI literacy for your team
  • Transparency to users interacting with AI
  • Human oversight for consequential decisions
  • Contract updates for both customers and AI providers
  • GDPR alignment for AI-related data processing

Know your risk category. Avoid prohibited practices. And remember that the AI Act works alongside GDPR, not instead of it.

Companies that address compliance now will avoid scrambling later – and may find that “AI Act compliant” becomes a competitive advantage when selling to enterprise customers who increasingly require it.

Next Steps

Navigating the intersection of AI regulation and contract law requires specialized expertise. LEXR advises SaaS companies on AI Act compliance, contract updates, and the specific requirements of the German market.

Contact LEXR for a consultation to discuss your AI compliance strategy and ensure your SaaS contracts are ready for the new regulatory landscape.

Related Resources

  • EU AI Act: Comprehensive Analysis and Compliance Guide
  • AI Legal & Compliance Workshop
  • LEXR Contract Review Services
