Most AI implementations in SMEs do not reach their stated objectives. Here are the 3 mistakes we see repeated most often in professional firms, and how to avoid them.
Most AI implementations in small and medium-sized enterprises do not reach the objectives they set at the point of adoption. This is not a question of tool quality: the AI tools available today are extraordinarily capable. The problem is almost always in the method — or its absence. In professional firms, where client responsibility is direct and reputation is everything, method errors have amplified consequences. These are the three mistakes we see repeated most often.
The pattern is recognisable: someone in the firm hears about ChatGPT, Claude, or a new AI tool specific to the legal or tax sector. They try it, find it interesting. They convince colleagues to adopt it. The firm buys the subscription and starts using it.
Three months later, use is sporadic. Six months later, the subscription is forgotten or cancelled.
This is not the tool's fault. It is that nobody ever defined which problem it was supposed to solve.
"We use AI to be more efficient" is not a use case. It is a wish. A use case is: "Drafting an employment contract template takes our junior an average of 45 minutes. The objective is to bring that below 20 minutes while maintaining current quality." That sentence contains the specific problem, the quantified baseline and the measurable objective.
After a few weeks of initial enthusiasm, use becomes sporadic: nobody compared times before and after, nobody trained the team on practical examples from their own type of work, and above all nobody gave the tool a predefined problem to solve.
The solution is to invert the process:

1. Define the problem and quantify its baseline: how long the activity takes today, how often, at what cost.
2. Set a measurable objective before choosing anything.
3. Only then select the tool, and train the team on practical examples from their own work.
4. Compare the same measurements before and after.

The criterion is not "what can AI do"; it is "what is my most costly problem that AI could alleviate". This inversion changes everything: tool selection, training, measurement of results.
The second mistake is more serious than the first, because it exposes the firm to risks that are not just operational but legal.
The pattern: the firm decides to adopt AI. It buys subscriptions, trains the team on the technical use of the tool, and starts using it. No policy. No definition of which data can be entered. No procedure for reviewing outputs. No client disclosure.
The predictable result has a name: shadow AI. Even without an explicit policy, people start using the AI tools they prefer, with whatever data they have to hand, for the activities they consider appropriate. When the firm does not define the rules, everyone creates their own.
In practice that means clients' personal data on systems without a signed DPA, sensitive financial data on tools whose terms allow use for model training, and confidential documents on non-EU providers that nobody has evaluated.
The potential consequences: GDPR violation (personal and sensitive data of a third party transmitted to an AI system without a legal basis), AI Act violation (use without governance), professional liability to the client, possible obligation to notify the regulator.
There is no bad faith. There is an absence of consistently applied governance.
The solution is governance-first, not governance-after. The AI policy must be written before tools are adopted, not after problems emerge. A policy does not need to be long: it needs to be clear on four points:

1. Which tools are approved, and which are not.
2. Which data may be entered, and which may never be: clients' personal data without a signed DPA, sensitive or confidential material on providers that have not been evaluated.
3. How outputs are reviewed before they reach a client.
4. What is disclosed to clients about the firm's use of AI.
A two-page policy, signed by all colleagues, is worth more than a thousand hours of technical training without written rules.
The third mistake is the most subtle and the most common among firms that have done the first two things correctly.
After three months of adoption, someone asks: "Is it working?" The answer is almost always one of two: "Yes, we seem faster" or "You can't really tell, honestly." Both are useless.
"Seems faster" is not data. "You can't tell" is not an evaluation. Without a baseline measured before implementation, any result becomes anecdotal — and anecdotal means indefensible: to the team, to clients, and to yourselves when you have to decide whether to renew a €500/month subscription.
The problem is structural: almost nobody measures their processes before changing something. They change, then assess "by feel". AI is no exception — but the consequences of not measuring are more serious, because subscriptions cost money, training takes time, and the decision to scale or abandon AI is strategic.
The mechanism is recurring: after 90 days, someone asks "are we saving time?" and the answer is "it seems so, but we have no data". Perception prevails over facts. At 6 months, the tool is often discontinued — "results weren't sufficient" — when in reality no results had been measured. Results probably existed, but without a baseline it was impossible to prove them.
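One way to make that renewal decision defensible is simple break-even arithmetic on the €500/month subscription mentioned above. The billable rate here is a hypothetical assumption; substitute your own.

```python
# Break-even on the subscription: how many saved hours pay for it?
# ASSUMPTION: EUR 120/hour is a hypothetical billable rate.
subscription_eur_month = 500
billable_rate_eur_hour = 120

breakeven_hours = subscription_eur_month / billable_rate_eur_hour
print(f"Break-even: {breakeven_hours:.1f} saved hours/month")  # ~4.2 hours
```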
The solution is simple, but it must come first. Measure these 5 operational metrics before any AI implementation:
| Metric | How to measure | When to measure |
|---|---|---|
| Hours per key process | Time log on typical activity | 4 weeks pre-implementation |
| Error/revision rate | Number of revisions per typical document | 4 weeks pre-implementation |
| Client response time | From receiving request to sending response | 4 weeks pre-implementation |
| Revenue per billable hour | Revenue / billable hours | Monthly |
| Team satisfaction (1-5) | Anonymous survey | Pre and post, quarterly |
No sophisticated system is needed. A shared spreadsheet, updated weekly, with 5 columns. What matters is consistency: the same metrics, with the same methodology, before and after. Comparisons only make sense if measurements are comparable.
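As a sketch of what that spreadsheet comparison looks like once the post-implementation numbers come in, here is a minimal before/after delta in Python. The metric names mirror the table above; all figures are invented for illustration.

```python
# Minimal before/after comparison: same metrics, same methodology.
# All numbers are invented for illustration.
baseline = {
    "hours_per_key_process": 7.5,
    "revisions_per_document": 2.1,
    "client_response_days": 3.0,
}
at_90_days = {
    "hours_per_key_process": 5.0,
    "revisions_per_document": 1.8,
    "client_response_days": 2.2,
}

for metric, before in baseline.items():
    after = at_90_days[metric]
    change = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change:+.1f}%)")
```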
The three mistakes share a common root: AI adoption is treated as a technical upgrade — choose a tool, install it, use it — rather than as a process change.
Processes are changed with method:

1. Define the use case, with a quantified baseline and a measurable objective.
2. Write the policy before adopting any tool: approved tools, permitted data, output review, client disclosure.
3. Measure the 5 operational metrics for 4 weeks before implementation.
4. Adopt the tool and train the team on practical examples from their own work.
5. Remeasure the same metrics at 90 days and decide on data, not on feel, whether to scale or stop.

This is the sequence that makes the difference between the firms that do not reach their objectives and those that exceed them.
If you are starting AI adoption in your firm, or want to understand where you are making one of these three mistakes, the first step is a structured assessment. Explore the AIRA Method and how to apply it to your situation. Take the free AI Readiness Assessment to get a concrete picture of your current position across governance, tooling and compliance, or contact us directly if you would prefer to start with a conversation.
**Which processes should we start with?** Start from your most repetitive, lowest value-added processes: drafting templates, regulatory research, document formatting, standard client communications. The criterion is not "what can AI do" but "what is my most costly problem that AI could alleviate". An initial assessment helps identify the 2-3 use cases with the highest ROI.
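A rough way to rank candidates is sketched below: estimate what each process currently costs per year and start from the most expensive. Every figure here is hypothetical; replace them with your own time logs.

```python
# Rank candidate use cases by rough annual cost (hypothetical figures).
HOURLY_COST_EUR = 60  # assumed fully loaded internal cost per hour

candidates = {
    "contract template drafting": {"hours_each": 0.75, "per_month": 30},
    "regulatory research":        {"hours_each": 2.0,  "per_month": 10},
    "document formatting":        {"hours_each": 0.5,  "per_month": 40},
}

def annual_cost(c: dict) -> float:
    return c["hours_each"] * c["per_month"] * 12 * HOURLY_COST_EUR

for name, c in sorted(candidates.items(), key=lambda kv: -annual_cost(kv[1])):
    print(f"{name}: ~EUR {annual_cost(c):,.0f}/year")
```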
**What if the team resists?** Completely normal. Resistance is not irrational: people worry about their jobs, distrust the tools, or have already had bad experiences (wrong outputs, data sent to the wrong provider). The solution is not technical training but change management: involve people in defining how AI is used rather than imposing it from above.
**How do we know whether it is working?** You need a baseline measured before you start: hours spent on each process, number of errors, client response times, revenue per billable hour. Then, at 90 days, measure the same metrics. Without a baseline, any result is anecdotal and indefensible.