What regulated businesses need to know before deploying AI

The EU AI Act's high-risk obligations become enforceable in August 2026. A practical guide to what financial institutions must do before the deadline and which architectural choices make compliance easier.

Jarek Glowka
Co-founder, Compliance & Operations

On 2 August 2026, the EU AI Act's obligations for high-risk AI systems become fully enforceable. For any financial institution using AI in credit scoring, fraud detection, AML risk profiling, insurance underwriting, or automated decisioning, that deadline is not a future concern — it is roughly three months away.

But the EU AI Act is not the only regulation that affects how financial institutions deploy AI. DORA's third-party oversight requirements, data residency obligations under GDPR, and existing sectoral rules from the EBA and national regulators all impose constraints that shape which AI architectures are viable and which are not.

Most financial institutions already use AI in some form. Many do not yet know whether their current deployments meet the requirements that are about to become enforceable. This article provides a practical overview of what the regulatory landscape requires, where the most common compliance gaps exist, and what architectural choices make the difference between a system that is defensible and one that is not.

What does the EU AI Act require for financial services?

The EU AI Act classifies AI systems into four risk tiers: unacceptable (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (unregulated). For financial services, the relevant category is almost always high-risk.

Which AI systems are classified as high-risk?

Annex III of the EU AI Act explicitly classifies the following financial services use cases as high-risk: AI systems used for creditworthiness assessment and credit scoring of natural persons, AI systems used for risk assessment and pricing in life and health insurance, and AI systems used to evaluate or classify the financial standing of individuals.

In practice, this covers most of the AI use cases that financial institutions care about. If the system influences a decision about whether to extend credit, approve a policy, price a product, or flag a customer for enhanced scrutiny, it is likely in scope.

There is a notable exception: AI systems used purely for detecting financial fraud are explicitly carved out from the high-risk classification. However, the boundaries of this exception are not yet fully tested — an AI system that combines fraud detection with risk profiling or customer scoring may still fall under the high-risk requirements.

What obligations apply?

High-risk AI systems must meet requirements under Articles 9 through 15 of the Act. In practical terms, this means six things:

Risk management. A documented, continuous process for identifying and mitigating risks across the AI system's entire lifecycle — development, deployment, monitoring, and decommissioning. A one-time risk assessment at deployment does not satisfy this requirement.

Data governance. Training, validation, and testing data must be relevant, representative, and free from errors to a degree appropriate to the system's intended purpose. Sources must be documented. Bias checks must be ongoing, not a one-off exercise at launch; a minimal sketch of what such a check could look like follows this list.

Transparency and logging. The system must produce automatic logs detailed enough to enable post-hoc review of how the system behaved and why. "The model decided" is not an audit trail. The logs must trace decisions from input to output.

Human oversight. Humans must meaningfully oversee AI decisions — not rubber-stamp outputs they do not understand. Analysts need to comprehend what the AI is doing and have genuine authority to override it. Perfunctory review does not satisfy the requirement.

Accuracy, robustness, and cybersecurity. The system must perform consistently within declared accuracy levels and be resilient to errors, faults, and attempts at manipulation.

Conformity assessment and registration. Before deployment, high-risk systems must undergo a conformity assessment, maintain technical documentation, and be registered in the EU database.
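
On the data governance point above, the Act does not prescribe a specific bias metric. As a minimal sketch of what an ongoing check could look like, the snippet below computes approval rates per group over a window of recent decisions and flags large disparities using the four-fifths rule as one possible heuristic. The column names and the 0.8 threshold are illustrative assumptions, not values taken from the Act.

```python
# Minimal sketch of a recurring bias check on recent decisions.
# Column names ("group", "approved") and the 0.8 threshold are
# illustrative assumptions, not values prescribed by the EU AI Act.
import pandas as pd

def selection_rate_report(decisions: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "approved",
                          threshold: float = 0.8) -> pd.DataFrame:
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's approval rate (the "four-fifths" heuristic)."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "approval_rate": rates,
        "ratio_to_best": ratios,
        "flagged": ratios < threshold,
    })

# Intended to run on a schedule (e.g. monthly) over recent decisions,
# with the output archived as part of the data governance evidence.
```

A check like this only covers one narrow slice of the data governance obligation; the point is that it runs repeatedly and its results are kept, rather than being produced once at launch.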

Who is responsible?

The Act distinguishes between providers (who develop AI systems) and deployers (who use them). Both have obligations. Critically, deployers cannot outsource their compliance to the vendor. If your institution purchases an AI system from a third party and that system operates as a black box with no explainability, the deployer bears the regulatory liability — not the vendor.

This has direct implications for vendor selection, contract negotiation, and architectural decisions.

What does DORA add to the picture?

The Digital Operational Resilience Act has been enforceable since January 2025 and imposes specific obligations on financial institutions regarding third-party ICT service providers — which includes AI API providers.

Third-party oversight

Every AI system accessed through an external API constitutes a third-party ICT dependency under DORA. The institution must continuously monitor the provider, maintain contractual governance arrangements, conduct regular resilience testing, and have exit strategies in place. For each API provider, this means legal review, contractual negotiation, audit rights, incident reporting protocols, and ongoing oversight.

The burden scales with the number of providers. An institution using three different AI APIs for three different use cases has three separate third-party dependencies to manage — each requiring its own contractual framework, resilience testing, and oversight process.

Concentration risk

DORA also requires institutions to assess and manage ICT concentration risk. If multiple critical functions depend on the same AI provider, the institution must evaluate and document the risk of that provider experiencing an outage, a pricing change, or a service disruption.

The architectural implication

AI systems running on the institution's own infrastructure — using open-weight models that the institution controls — eliminate the third-party dependency entirely. There is no external provider to monitor, no contractual governance to maintain, no concentration risk to assess. This does not make compliance free — the institution must still meet the EU AI Act's high-risk obligations — but it removes an entire layer of regulatory overhead.

Where are the most common compliance gaps?

Based on the regulatory requirements and the current state of AI deployment across financial services, several gaps appear consistently.

No AI system inventory

Many institutions do not have a complete, documented inventory of every AI system in production — including vendor tools, internal models, and hybrid systems. The first step in compliance is knowing what you have. Without an inventory mapped to the EU AI Act's risk classifications, the institution cannot assess its compliance posture.
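
There is no mandated format for such an inventory. As a rough sketch, each entry could capture at least the fields a risk-classification review needs; the field names below are assumptions, not a regulatory template.

```python
# Illustrative structure for one AI system inventory entry.
# Field names are assumptions; the EU AI Act does not mandate a format.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                      # e.g. "retail credit scoring model"
    business_unit: str             # unit accountable for the system
    role: str                      # "provider", "deployer", or both
    vendor: str | None             # None for internally developed systems
    annex_iii_match: str | None    # which Annex III use case applies, if any
    risk_tier: str                 # "high", "limited", "minimal"
    conformity_status: str         # e.g. "not started", "in progress", "complete"
    logging_in_place: bool
    human_oversight_owner: str
```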

Black-box vendor models

Institutions using third-party AI systems that provide outputs without explainability cannot meet Article 13's transparency requirements. If the institution cannot trace a decision from input to output and explain why the system reached a particular conclusion, it fails the transparency obligation — and the liability sits with the deployer, not the vendor.

This is not a theoretical concern. Many AI-powered fraud detection, credit scoring, and AML tools currently in production offer limited or no explainability. Renegotiating vendor contracts to require technical documentation, audit rights, and model change notification is a necessary step — and one that takes weeks or months.

Logging infrastructure not in place

Automated logging that captures system inputs, outputs, and decision logic must be in place before the deadline — not retrofitted after a regulatory inquiry. Building logging into deployment pipelines is an engineering task that requires planning and testing. Institutions that have not started this work are running out of runway.
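
What a log entry contains will vary by system. The sketch below shows one possible shape for a per-decision record that ties the input, the exact model version, the output, and the human reviewer together; the field names and the storage call are assumptions for illustration, not a schema the Act requires.

```python
# Sketch of a per-decision audit log entry written at inference time.
# Field names are illustrative; the Act requires traceability from
# input to output, not this specific schema.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(record_store, system_id: str, model_version: str,
                 features: dict, output: dict, reviewer: str | None) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,        # pin the exact version that was assessed
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "input": features,                     # or a reference, if the data is sensitive
        "output": output,                      # score, decision, key drivers
        "reviewer": reviewer,                  # analyst who reviewed or overrode the output
    }
    record_store.append(entry)                 # append-only store is assumed
    return entry
```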

Human oversight is perfunctory

The Act requires meaningful human oversight — which means the human overseeing the AI system understands what it is doing, has the ability to interpret its outputs, and has genuine authority to override its decisions. In many current deployments, human oversight amounts to an analyst clicking "approve" on a queue of AI-generated recommendations they do not fully understand. This does not satisfy the requirement.

What architectural choices make compliance easier?

The regulatory requirements are not technology-neutral in their practical implications. Some architectures are structurally easier to make compliant than others.

On-premise, open-weight models

An open-weight model running on your own infrastructure gives your institution full control over the AI system's behaviour, data processing, and audit trail. You can document exactly how the system works, what data it was trained on, how it reaches its conclusions, and how it behaves under different conditions. You can demonstrate this to regulators on your terms.

This contrasts with a proprietary API where the institution depends on the provider's willingness and ability to supply documentation, and where the model can be updated by the provider without notice — potentially changing the system's behaviour after the conformity assessment was completed.
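
One practical way to make "the model cannot change under you" concrete is to record a checksum of the weight files at conformity assessment time and verify it before the model is loaded into production. A minimal sketch, with the file layout and manifest format as assumptions:

```python
# Sketch: verify that locally hosted model weights match the exact
# files covered by the conformity assessment before loading them.
# Paths and the manifest format are illustrative assumptions.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(model_dir: str, manifest: dict[str, str]) -> None:
    """`manifest` maps file name -> expected SHA-256, recorded at assessment time."""
    for name, expected in manifest.items():
        actual = sha256_of(Path(model_dir) / name)
        if actual != expected:
            raise RuntimeError(f"{name} differs from the assessed version")
```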

RAG with curated document sources

Retrieval-augmented generation architectures are structurally well-suited to regulatory compliance because the system's knowledge base is explicit and auditable. The institution can demonstrate exactly which documents the system draws from, verify that those documents are current and approved, and trace any output back to its source material. This makes the transparency and logging requirements significantly easier to meet than with a model that generates answers from opaque training data.
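
The traceability point can be made concrete with a small sketch: alongside every generated answer, the pipeline records which approved documents were retrieved and passed to the model. The `retriever` and `generate` interfaces below are placeholders, not any particular library's API.

```python
# Sketch of a RAG call that keeps an audit trail of the source documents
# behind each answer. `retriever` and `generate` are placeholder
# interfaces, not a specific library's API.
from datetime import datetime, timezone

def answer_with_sources(question: str, retriever, generate, top_k: int = 5) -> dict:
    docs = retriever.search(question, k=top_k)          # curated, approved corpus only
    context = "\n\n".join(d["text"] for d in docs)
    answer = generate(question=question, context=context)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "sources": [                                     # what an auditor can trace back to
            {"doc_id": d["doc_id"], "version": d["version"]} for d in docs
        ],
    }
```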

Purpose-built models for bounded tasks

Deploying a specialised model for a specific task — rather than a general-purpose model for everything — simplifies the risk management and conformity assessment process. A model with a clearly defined scope, documented training data, and measurable accuracy on its specific task is easier to assess, test, and defend to regulators than a general model used across multiple functions with variable performance.

What should your institution do in the next 90 days?

The August 2026 deadline is close. The Digital Omnibus proposal could extend the timeline for some Annex III obligations to December 2027, but that extension is not confirmed, and no institution should plan on the basis of a postponement that may not materialise.

The practical steps are sequential and each one depends on the previous:

1. Build a complete AI system inventory across all business units.
2. Classify each system against Annex III criteria and document the classification decision.
3. Assess each high-risk system against the six obligation areas: risk management, data governance, transparency, human oversight, accuracy, and conformity.
4. Identify gaps and prioritise remediation based on regulatory risk.
5. Review and renegotiate vendor contracts where third-party AI systems lack the documentation, audit rights, or explainability required.
6. Implement automated logging and monitoring infrastructure for all high-risk systems.

For institutions that are still early in this process, the gap between where they are and where they need to be is real — but it is addressable if the work starts now.

If your institution needs help assessing its AI systems against the regulatory requirements and identifying the fastest path to compliance, we can help you scope the work.
