AI Act: What Changes for Accountants and Consultants
A practical guide to EU Regulation 2024/1689 for professional firms. Obligations, deadlines and how to prepare.
A practical checklist with the 15 key AI Act obligations for professional firms. From AI tool inventory to documented training, internal policy and client disclosure.
AI Act compliance cannot be improvised. Every week new tools arrive in the firm, new colleagues start using AI independently, new regulatory updates are issued. Without a structured framework, the risk is doing a one-off clean-up and finding yourself out of compliance three months later. This 15-point checklist, organised into five areas, is the starting point for a systematic assessment: not something to do once, but a document to review every six months.
Each point should only be ticked when the documentation exists and can be produced, not when the process has been discussed or is "planned". If a point is not fully satisfied, mark it as open and treat it as a priority.
The logic is that of an audit: imagine that a supervisory authority inspector knocked on the door of your firm tomorrow morning and asked to see evidence of each point. If you cannot produce it within 15 minutes, the point is not closed.
The inventory is the foundation of everything. You cannot govern what you do not know, and you cannot inform clients about tools that have not been catalogued.
**Point 1.1 — Complete register of AI tools in use**
Does an up-to-date document exist with: tool name, version, supplier, adoption date, activities for which it is used, data processed? Does the register include not only tools officially adopted by the firm, but also those used individually by colleagues (personal ChatGPT, Copilot, etc.)?

**Point 1.2 — Risk classification for each tool**
Has a risk classification been assigned to each tool in the register (minimal, limited, high) under the AI Act? Is the classification documented with reasoning? Have high-risk tools been identified and treated separately?

**Point 1.3 — GDPR verification of AI suppliers**
For each tool that processes clients' personal data, has a Data Processing Agreement (DPA) been signed with the supplier? Have the terms of use been verified to confirm that data will not be used to train models?
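The register in points 1.1 to 1.3 is easiest to keep honest when its required fields are checkable rather than implied. As an illustrative sketch only (the field names, tools, and suppliers below are hypothetical, not a prescribed format), a register kept as structured data can be scanned for incomplete entries:

```python
# Illustrative sketch: an AI tool register as structured data,
# with a check that flags entries missing required fields.
# Field names and example tools are hypothetical assumptions.

REQUIRED_FIELDS = [
    "name", "version", "supplier", "adoption_date",
    "activities", "data_processed", "risk_class", "dpa_signed",
]

register = [
    {
        "name": "ExampleChat",           # hypothetical, officially adopted tool
        "version": "4.0",
        "supplier": "Example Corp",
        "adoption_date": "2025-03-01",
        "activities": ["drafting correspondence"],
        "data_processed": "no client personal data",
        "risk_class": "limited",
        "dpa_signed": True,
    },
    {
        "name": "ExampleCopilot",        # adopted individually, not yet vetted
        "version": "1.2",
        "supplier": "Example Corp",
        "adoption_date": "2025-06-15",
        "activities": ["tax memo drafting"],
        # missing: data_processed, risk_class, dpa_signed
    },
]

def open_points(entry):
    """Return the required fields missing from a register entry."""
    return [f for f in REQUIRED_FIELDS if f not in entry or entry[f] in (None, "")]

for entry in register:
    missing = open_points(entry)
    if missing:
        print(f"{entry['name']}: open -> missing {', '.join(missing)}")
```

Any tool used individually by a colleague belongs in the same file: an entry with gaps is still more governable than a tool that never appears in the register at all.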
Art. 4 of the AI Act imposes an AI literacy obligation for all staff who use AI systems. This obligation has been in force since February 2, 2025: anyone who has not yet complied has an outstanding obligation to remedy.
**Point 2.1 — Training delivered to all staff**
Have all colleagues and partners who use AI tools received specific training? Did the training cover: what an AI system is, how it works, limitations and risks, professional responsibilities, legal obligations?

**Point 2.2 — Training documented**
Do participation certificates or attendance records exist for each training session? Does the documentation include: date, duration, content, participants, facilitator? Are these documents retained and accessible?

**Point 2.3 — Update plan**
Does a schedule exist for refresher training (at minimum annually)? Does the plan include onboarding training for new colleagues before they start using AI tools? Has someone been identified as responsible for updating training content?
Client transparency is the most visible obligation under national AI Act implementing legislation and the one most directly exposed to professional disputes.
**Point 3.1 — Contractual clause in the engagement letter**
Does the standard engagement letter template include an AI clause identifying the tools in use, the activities for which they are used and professional oversight? Is the clause consistent with the current list of tools in the register?

**Point 3.2 — Disclosure delivered to existing clients**
Have clients with ongoing engagements received the notice on the use of AI in the firm? Is there evidence of delivery (signed receipt, email with read receipt, certified email)?

**Point 3.3 — Documented opt-out procedure**
Has the firm defined and communicated to clients the procedure for requesting that specific activities be carried out without AI? Can this procedure be managed operationally without a serious impact on service delivery?
The internal AI policy is the document that governs how AI is used in the firm: who can use what, for which activities, with which data, with what oversight. It does not need to be an encyclopaedic manual — it must be readable and applicable by every colleague.
**Point 4.1 — AI policy drafted and signed**
Does a written AI policy approved by the partners exist? Has the policy been distributed to all staff, who have acknowledged it in writing? Is it dated and versioned?

**Point 4.2 — Authorised tools defined**
Does the policy contain an explicit list of AI tools authorised for professional use? Is it clear what happens if a colleague wants to use a tool not on the list (approval procedure)?

**Point 4.3 — New tool approval procedure**
Before adopting a new AI tool, is there a procedure that includes: risk assessment, GDPR check, specific training, updating the register and the policy? Who has the authority to approve?
Risk assessment is at the heart of the AI Act. Without it, all the other obligations lack a foundation: you do not know what to protect, with what priority, or with what level of control.
**Point 5.1 — Documented risk classification for each tool**
For each tool in the register, does a document exist explaining why it was classified at that risk level? Does the classification take into account the context of use (not just the technical characteristics of the tool)?

**Point 5.2 — High-risk systems identified**
Have tools potentially classifiable as "high risk" under the AI Act (e.g. systems used to assess creditworthiness, support decisions affecting people's rights) been identified? Has it been assessed whether they fall within Annex III of the AI Act?

**Point 5.3 — Human oversight documented**
For tools that produce output used directly in professional services (opinions, contracts, tax reports), does a documented human review procedure exist? Who reviews, how, and when — and is this tracked?
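Point 5.3 becomes auditable the moment every review leaves a record answering "who reviewed, what, when, with what outcome". A minimal sketch of such a log (the structure, field names and example values are assumptions for illustration, not a prescribed format):

```python
from datetime import date

# Illustrative sketch of a human-review log for AI-assisted output.
# Structure, field names and example values are hypothetical.

review_log = []

def record_review(document, tool, reviewer, outcome, notes=""):
    """Append one human-oversight entry and return it for filing."""
    entry = {
        "date": date.today().isoformat(),
        "document": document,
        "tool": tool,
        "reviewer": reviewer,
        "outcome": outcome,   # e.g. "approved", "corrected", "rejected"
        "notes": notes,
    }
    review_log.append(entry)
    return entry

record_review(
    document="VAT opinion draft (hypothetical)",
    tool="ExampleChat",
    reviewer="Senior partner",
    outcome="corrected",
    notes="Fixed an AI-generated citation before release to the client.",
)
```

Whether this lives in a spreadsheet, a practice-management system or a plain file matters less than that each entry can be produced on request, which is exactly the 15-minute test described above.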
| Area | Points | Key obligation |
|---|---|---|
| 1. AI Tools Inventory | 1.1 — 1.2 — 1.3 | Updated register + GDPR supplier checks |
| 2. AI Literacy Training | 2.1 — 2.2 — 2.3 | Documented training + update plan |
| 3. Client Transparency | 3.1 — 3.2 — 3.3 | Contractual clause + disclosure to existing clients |
| 4. Internal AI Policy | 4.1 — 4.2 — 4.3 | Signed policy + authorised tools + approval procedure |
| 5. Risk Assessment | 5.1 — 5.2 — 5.3 | Risk classification + documented oversight |
Not all 15 points have the same urgency. If you need to choose where to start, this is the recommended sequence:
Immediate (within 2 weeks)
The AI literacy obligation under Art. 4 has been in force since February 2, 2025. Anyone who has not yet delivered training is already in violation. The tool inventory (Area 1) is the prerequisite for everything else: without knowing what is in use, nothing else can be done sensibly.
Within 30 days
Client disclosure (Area 3) and internal policy (Area 4): these are the obligations most visible to clients and colleagues alike. They require drafting and review work, not just data gathering.
Within 60 days
Risk assessment (Area 5): requires more careful thought and, for more complex tools, possible external support. This is not a document to be drafted in a hurry.
Download the PDF version of this checklist with fillable fields from the Resources and Guides section. If you prefer a guided path, explore the AI Act Ready service designed to complete all 15 points in 10 working days. For a thorough grounding in the regulation behind each of these obligations, our AI Act guide covers the full regulatory framework — including the deadlines, risk categories and deployer obligations that give this checklist its structure.
**How long does it take to complete the checklist?**
It depends on the starting point. A firm with existing compliance awareness (e.g. already GDPR-compliant) can complete the essential steps in 10-15 working days. A firm starting from scratch may need 4-6 weeks. Our AI Act Ready service is calibrated for 10 days for an average-sized firm.
**Is external support necessary?**
Not necessarily. A very small firm (1-3 people, limited AI tool use) can self-assess with this checklist and fix the critical points independently. Larger firms or those with more complex AI systems benefit from external support, especially for risk assessment and drafting a robust policy.
**Who in the firm should be responsible for compliance?**
The law does not impose a specific role, but best practice is to identify an internal 'AI officer' — even informally — who coordinates the inventory, training and policy updates. In small firms this is often the senior partner. In firms with multiple offices a dedicated role may be necessary.
**What are the penalties for non-compliance?**
AI Act penalties are proportionate but significant: up to 3% of global turnover for violations of operator obligations (including lack of AI literacy). But beyond fines, the reputational risk with clients is equally relevant: a dispute based on undisclosed AI use in a professional opinion can be very costly.