A Dutch SME spends an average of €35,000 on AI implementation and earns it back in roughly eleven months [14]. That's the short answer. The long answer, and the reason you're probably reading this article, is that this average hides a lot of variation: companies that break their journey into four clean phases (discovery, pilot, production, scaling) and work with an experienced partner succeed about three times more often than companies trying to do it all themselves [1]. The rest? They end up in the pilot statistic nobody paid for.
This article shows what you can realistically expect. Price ranges per phase. Timelines. What you get on paper and what you have to provide yourself. We've seen enough engagements over the past few years to know where things usually break, and it's almost never where people think they'll break.
One warning upfront: success depends less on the technology you choose than on the engagement itself. Those who respect the phases, deliver clean work on the input side, and agree upfront on data ownership and KPIs end up in the winning minority. Those who want "AI in production" right away with no discovery end up in the other statistic.
Why SMEs outsource AI instead of building it themselves
The numbers are pretty unambiguous.
MIT's State of AI in Business 2025 report shows that 95% of enterprise generative AI pilots produce no measurable ROI [1]. Sounds like a death sentence for the entire field. But the same study also shows something far more interesting, and this is the part most summaries skim over: companies that source AI through specialised partners succeed roughly 67% of the time. Internal builds succeed at one-third of that rate [1]. The partner approach works three times more often.
For SMEs, a fairly mundane problem compounds this. CBS data for 2025 show that just 13.7% of Dutch micro-businesses use AI. For companies with 10-20 employees, that rises to about 25% [12][13]. The biggest reason cited for not starting? According to 71.6% of non-adopters: lack of experience [13].
That's not a technology problem. It's a capacity problem. The average SME doesn't have a data engineer, ML engineer, MLOps engineer and domain architect walking around, and trying to hire one means a year before the team produces functional work. By that time, the market has moved on.
Outsourcing solves more than the staffing problem. An experienced partner brings patterns from previous engagements: which architecture works for your type of process, which data pitfalls to expect, which integrations turn out more expensive than they first appear (Exact, AFAS and older ERPs, to name three we see underestimated time and time again). Research from Coherent Solutions estimates that working with experienced suppliers can lower project costs by 30-40% versus internal builds [7].
It's also not an either/or choice, and that's said too rarely. The best engagements in 2026 are hybrid. An external partner runs the first two phases. Knowledge is gradually transferred to an internal owner who later manages further development. Works fine, as long as you know from day one that this is the plan.
The four phases of an AI implementation
A serious engagement almost always runs in four recognisable phases. Following this structure avoids most of the pitfalls that show up in Gartner and MIT data.
Phase 1: Discovery
This is the strategy phase. Here you establish which process, decision or bottleneck AI should solve. Equally important: how you measure success before writing a single line of code. No measurable KPI means no project. Period.
The output is a document with the prioritised use case, a data assessment, an architecture sketch and a substantiated business case with assumptions. Discovery is never expensive in absolute terms, but it's the most value-dense part of the entire engagement. Skipping discovery to "get started fast" produces pilots that nobody later dares to scale.
Phase 2: Pilot
In the pilot you build a working solution for one bounded use case. Real users. Real data. The pilot is not a technical test; that's what a PoC is for, and we'll get to that. The pilot must deliver a measurable business outcome: lower handling time, higher conversion, fewer errors in output.
A good pilot has three properties. A narrow scope (one process, not five). Real production-like data, not a test file from 2022. And an end user who touches the system daily. Without those three, you can't prove it works under realistic conditions. You've then built something technically functional that nobody dares lean on.
Phase 3: Production
In the production phase, the pilot is rebuilt into a robust, integrated part of your IT environment. This is where monitoring, security review, authorisations, audit logging, retraining procedures and a maintenance contract come in. This is usually the most expensive phase. Not because the AI gets more complex, but because the world around it has to mature.
Many companies underestimate this. McKinsey reports that 88% of organisations use AI in at least one business function, but only about 33% have scaled it across the entire organisation [4]. The difference is almost never in the AI model. It's in the production engineering around it. The boring layer, the one nobody wants to pitch to the board.
Phase 4: Scaling
Here you expand. To adjacent processes, to more departments, to more integrations. Forrester estimates that scaling costs three to five times the pilot budget [5]. You need to know this upfront, otherwise you'll hit a painful, avoidable surprise halfway through phase 3.
The winners plan phase 4 during phase 2. Architecture choices in the pilot need to be suitable for later scaling. Otherwise you build twice and go through the learning curve twice.
What outsourcing AI implementation costs per phase
The price ranges below are realistic for SMEs based on international benchmark research and the Dialogic / Dutch government report [14]. Amounts exclude third-party licences and VAT.
| Phase | Range | Time | What's included |
|---|---|---|---|
| Discovery | €5,000 - €15,000 | 2-6 weeks | Use case prioritisation, data assessment, architecture sketch, business case |
| Pilot | €20,000 - €80,000 | 3-4 months | Working system, integration, real users, first measured results |
| Production | €40,000 - €250,000 | 3-18 months | Hardening, security, monitoring, audit, maintenance contract |
| Scaling | 3-5x pilot budget | ongoing | Additional processes, departments, integrations, retraining cycle |
On top of these project costs, two items run continuously. Cloud infrastructure, typically €30,000-€80,000 per year for an SME with production workloads. And maintenance plus retraining, €5,000-€50,000 per year [5]. People forget the latter especially.
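To make the arithmetic above concrete, here's a minimal sketch that sums the phase ranges from the table with the recurring cloud and maintenance estimates into a first-year budget envelope. The ranges come from this article; the function itself is just illustrative bookkeeping.

```python
# Rough budget envelope from the ranges in the table above [5][14].
# Ranges are (low, high) in euros; scaling is excluded as it's ongoing.
PHASES = {
    "discovery":  (5_000, 15_000),
    "pilot":      (20_000, 80_000),
    "production": (40_000, 250_000),
}
RECURRING = {
    "cloud":       (30_000, 80_000),   # per year, production workloads
    "maintenance": (5_000, 50_000),    # per year, incl. retraining
}

def budget_envelope(items: dict[str, tuple[int, int]]) -> tuple[int, int]:
    """Sum the low and high ends of each (low, high) cost range."""
    lows, highs = zip(*items.values())
    return sum(lows), sum(highs)

project = budget_envelope(PHASES)
yearly = budget_envelope(RECURRING)
print(f"One-off project: €{project[0]:,} - €{project[1]:,}")  # €65,000 - €345,000
print(f"Recurring/year:  €{yearly[0]:,} - €{yearly[1]:,}")    # €35,000 - €130,000
```

The spread is deliberately wide: the high end assumes complex production hardening, the low end a single narrow use case.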
A frequently overlooked reality: software licences are only 30-50% of total implementation cost [5]. The rest goes to integration, data preparation, infrastructure and maintenance. Anyone looking only at licence prices underestimates the cost by a factor of two. We've seen this first-hand in proposal conversations: a prospect comparing €400/month in OpenAI tokens to a €50,000 implementation proposal as if they were the same category.
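That licence-share figure can be turned around: given a quoted licence price, you can back out the total budget it implies. A quick sketch, using only the 30-50% bounds from [5]; the €400/month example figure is hypothetical.

```python
# If licences are only 30-50% of total cost [5], a quoted licence price
# implies a 2-3x larger total budget. Everything else here is illustrative.

def implied_total_cost(annual_licence_cost: float,
                       licence_share_low: float = 0.30,
                       licence_share_high: float = 0.50) -> tuple[float, float]:
    """Return the (low, high) total cost implied by the licence spend."""
    # A smaller licence share implies a larger total, so the bounds flip.
    return (annual_licence_cost / licence_share_high,
            annual_licence_cost / licence_share_low)

# Example: €400/month in API tokens ≈ €4,800/year in "licences".
low, high = implied_total_cost(4_800)
print(f"Implied total: €{low:,.0f} - €{high:,.0f}")  # €9,600 - €16,000
```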
For a deeper breakdown of pricing per project type, see What Does AI Implementation Cost?. It also covers concrete chatbot, automation and custom-build budgets.
Realistic timeline per phase
The global benchmark for enterprise AI is grim. McKinsey reports an average of 17 months from project start to production [8]. That average is skewed by enterprise programmes with multi-source compliance requirements. For an SME with focused scope, much faster is possible.
Discovery: 2-6 weeks. An SME discovery typically lands at 2-4 weeks. More complex or multi-stakeholder situations can stretch to 6 weeks. Accelerators: one decision-maker with mandate, availability of process owners, and existing documentation of the process you want to automate.
Pilot: 3-4 months. This is the time to a measurable result on a specific pain point, and you rarely get it cleaner than that. Pilots that are "done" in four weeks usually never had real users, or nobody looked critically at the output.
Production: 3-6 months for narrow scope, 9-18 months for complex systems. The difference lies in the number of integrations, compliance requirements (GDPR, and from 2026 the EU AI Act in phased rollout), and the degree of workflow redesign. One process, one integration, one department = on the fast side. Four integrations with legacy systems, GDPR impact assessment, and multi-team rollout = on the long side.
A serious accelerator does exist. Mature partners can deliver a prototype in 2-3 weeks and value in production within 6-12 weeks for narrow scope [9]. Disciplined mid-market teams ship production models in 60 days [15]. This requires: one clear use case, clean data, one decision-maker, and willingness to make decisions fast. If one of those four is missing, forget the 60 days.
What you get: deliverables and ownership
Here lies a blind spot many SMEs discover too late: the contractual arrangements over what you actually hold in your hands at the end of the engagement. A good AI contract explicitly governs four things [16].
1. IP and code ownership. Who owns the code? For custom work, ownership standardly lies with the client, but for components the supplier reuses (general utilities, frameworks, prompt templates), a licence is often agreed. This needs to be in writing. Not "we'll figure it out", but in the contract appendix.
2. Client training data versus supplier training data. The contract must distinguish between data you provide and data the supplier uses to feed their own models. A prohibition on using your data for the supplier's generic model training is a hard requirement for most SME engagements. For anyone working in a regulated sector: a non-negotiable requirement.
3. Ownership of outputs. All outputs of the AI system (generated text, classifications, predictions) must contractually belong to you. This sounds obvious and isn't always.
4. Handling at termination. What happens to fine-tuned models and embeddings when the contract ends? A good clause requires safe destruction of fine-tuned models and the return of all data.
For every engagement, ask about the standard clauses the supplier uses on these points. An experienced partner has these in order without prompting. A less experienced partner will go look this up for you, and you'll see that reflected in the lead time. For a thorough checklist of questions to ask, see Choosing an AI Agency: 7 Questions.
What you have to provide as the client
Outsourcing doesn't mean you have to do nothing. On the contrary. The companies with the best results are precisely the ones where the client thinks along actively. Four things.
Access to data. Clean, structured, with the right permissions. Gartner expects 60% of AI projects to fail due to a lack of AI-ready data [3]. It's no coincidence that the first week of a good pilot is almost always a data audit. If your data is fragmented across five systems, that's not a blocker, it's just work that has to be done first, and a good partner maps it out during discovery.
One decision-maker with mandate. Someone who can decide within 48 hours on scope changes, budget deviations and priority. Engagements with a steering committee of six people meeting monthly often go wrong the same way: not because the solution doesn't work, but because nobody dares to say "yes" or "no" on time. The lead time doubles without any extra work being done.
Internal time from the business. Not just IT. Especially the process owners who'll use the system daily. Plan on 2-6 hours per week during the pilot. Sounds like a lot. It isn't. It's the presence that ensures the system fits how the work is actually done, not how someone thinks it ought to be done.
Willingness to redesign processes. This is the most important, and the least fun. McKinsey shows that AI high performers fundamentally redesign their workflows alongside the AI deployment 2.8 times more often than the rest, who tend to put AI on top of existing processes (55% versus 20%) [4]. Anyone who only wants AI to "take over the existing work" rarely captures more than 10-15% of the potential. Those who redesign the work around what AI does well capture the full benefit. Not everyone wants to. Being honest about that willingness at the start saves a lot of disappointment later.
For SMEs still deciding which process is the best fit, How to Apply AI in Your Business helps with that choice.
Pilot versus proof of concept: what do you choose?
There's confusion between these two terms, and that confusion costs money. The technical literature draws a clear line [10].
A proof of concept (PoC) tests technical feasibility in a controlled environment. Narrow scope. Mock data. Isolated sandbox. Focus on one question: does the algorithm work at all? A PoC costs less and runs shorter. It only proves the technology works, not that it works in your business.
A pilot tests whether it works under realistic conditions. Broader scope, real users, real data, integration with existing systems. A pilot is more expensive and riskier than a PoC, but delivers something far more valuable: proof that the business will adopt it and that ROI is achievable. The technical question has long been answered. The question of whether your people will use it has not.
For most SME engagements, a direct pilot makes more sense than a PoC. The technical feasibility of modern AI models is already proven for 90% of business use cases. The real question is no longer "can an LLM do this?", but "does this fit how we work, and will our team use it?". Only a pilot answers that.
A PoC remains useful in three scenarios. If you're considering a genuinely novel technical approach (your own predictive model on your own data, for example). If regulatory risk is high. Or if pilot investment would be irresponsibly large before technical feasibility is confirmed.
Fixed price versus time-and-materials
This is a recurring question, and the answer is more nuanced than "fixed price is safer".
Fixed price seems low-risk for the client. The reality differs. AI engagements rarely have fully clear scope at the start. What does the supplier do when it turns out more complex than estimated? Two options: protect the margin by trimming quality, or submit a change request [11]. In practice the risk shifts to scope and quality, not to the supplier. You then have a fixed price on something that in its new form is no longer what you originally ordered.
Time-and-materials is more flexible, suited to exploration and to agentic AI where requirements evolve along the way. The downside: without a cap, the budget can run away. You need to guard against that.
Best practice in 2026 is hybrid: T&M with an agreed cap for discovery and the first pilot iterations, then fixed price or capped T&M for the production phase once scope has crystallised [11]. This allocates risk where it belongs. The client pays for exploration where exploration is needed (discovery). The supplier carries delivery risk once scope is clear (production). Both understand what they're signing for.
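The capped T&M mechanic is simple enough to sketch in a few lines. The hourly rate and cap below are invented for illustration; the point is only that billing tracks hours up to the cap and no further.

```python
def capped_tm_invoice(hours_worked: float, hourly_rate: float,
                      cap: float) -> float:
    """Bill time-and-materials, but never above the agreed cap."""
    return min(hours_worked * hourly_rate, cap)

# Hypothetical discovery engagement: €150/hour with a €15,000 cap.
print(capped_tm_invoice(80, 150, 15_000))   # 12000.0 — under the cap
print(capped_tm_invoice(120, 150, 15_000))  # 15000 — cap kicks in at 100 hours
```

The client's worst case is known upfront; the supplier's incentive to pad hours disappears once the cap is in sight.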
For SME engagements: always agree on a maximum cap on the variable part. A professional partner has no problem with this. Anyone who does have a problem with a cap is telling you something important.
Red flags that signal an engagement going off the rails
An honest partner addresses these points proactively. With a less mature provider, you'll have to enforce them yourself.
- No measurable business KPI defined upfront. "We're going to implement AI" is not a project. "We want to lower customer service ticket handling time by 40%" is a project.
- No phased approach. A supplier who wants to build straight to production, with no discovery and no pilot, is steering straight into MIT's 95% statistic.
- No clear arrangements on data ownership and IP. If these clauses only appear in the second contract round, that's a sign the supplier hasn't done this often.
- A supplier who can only build technically. According to McKinsey, workflow redesign is the #1 predictor of EBIT impact [4]. A partner with no opinion on how your work processes change around AI is a builder. Not a partner.
- No exit or model destruction clause. What if the relationship ends? A good contract handles this upfront, not in an awkward conversation afterwards.
- Optimistic about data quality. A supplier who says without a data audit that "it'll all be fine" has either not done this often, or prefers selling to being honest. Neither is what you want.
Results companies are achieving now
That covers the expectations. What does it deliver on the other side?
Dutch SMEs that implement AI see an average investment of €35,000 with an average payback period of about eleven months [14]. Time savings on administrative tasks land between 30 and 50%. That's significantly more favourable economics than the global average, which leans more toward 18-30 months to break-even. Dutch companies have an infrastructure advantage (digital adoption is higher here) and tackle it more pragmatically than many international benchmarks.
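The €35,000 / eleven-month benchmark implies a monthly net benefit of roughly €3,200. A minimal payback sketch, assuming a constant monthly benefit (real benefits usually ramp up, so treat this as a floor):

```python
import math

def payback_months(investment: float, monthly_net_benefit: float) -> int:
    """Months until cumulative net benefit covers the upfront investment."""
    return math.ceil(investment / monthly_net_benefit)

# The €35,000 / ~11-month Dutch SME benchmark [14] implies roughly
# €3,200/month in net benefit; that figure is our assumption here.
print(payback_months(35_000, 3_200))  # 11
```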
McKinsey identifies a specific group of AI high performers, about 6% of organisations, attributing more than 5% of EBIT to AI [4]. The common denominator? 55% of them fundamentally redesign their workflows around the AI deployment, versus 20% for the rest. It's not a technology advantage. It's an implementation advantage.
In specific sectors the results are even more striking. EchoStar reports about 35,000 work hours saved per year and a minimum of 25% productivity gain through targeted AI applications on repetitive operational tasks. A documented marketing agency achieved 85% reduction in manual work, 200% growth in client capacity and 75% improvement in profit margin after deploying AI-supported content and workflow automation. A mid-market fintech that followed the phased discovery-pilot-production-scaling approach achieved positive ROI in month 21 and a five-year return of 320% [5].
The pattern is consistent. Companies that take the engagement seriously, bring in an experienced partner, and are willing to redesign their work processes capture substantial returns. Companies that go at it half-heartedly hit the 95% statistic. There's little in between.
For Dutch SMEs there's a tax advantage too often forgotten in proposal conversations. WBSO deduction covers 36% of qualifying R&D wage costs (50% for starters). The Innovation Box lowers profit tax on innovation from 25.8% to 9%. RVO subsidies can provide additional support. For projects with a real R&D component, these schemes substantially lower the effective net cost of an AI investment. Applications must be filed before project start, not afterwards. A good partner reminds you of this in the discovery phase, not in month eight when it's too late.
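A simplified sketch of what the WBSO rates mean for net cost. This treats the deduction as a direct cost reduction; in reality WBSO reduces payroll tax remittance and has brackets and conditions, so the split between qualifying R&D wages and other costs below is a hypothetical example, not tax advice.

```python
def wbso_deduction(rnd_wage_costs: float, starter: bool = False) -> float:
    """WBSO deduction: 36% of qualifying R&D wage costs, 50% for starters."""
    rate = 0.50 if starter else 0.36
    return rnd_wage_costs * rate

# Hypothetical project: €35,000 total, of which €20,000 qualifies as R&D wages.
deduction = wbso_deduction(20_000)
print(f"WBSO deduction:     €{deduction:,.0f}")           # €7,200
print(f"Effective net cost: €{35_000 - deduction:,.0f}")  # €27,800
```

Layer the Innovation Box (9% instead of 25.8% on qualifying profits) on top and the net economics shift further, which is exactly why the application has to be filed before the project starts.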
Ready to put AI to work in your business for real?
Nexaton guides SMEs through the full engagement — from discovery and pilot to production and scaling — with clear pricing, defined deliverables, and ownership of code and data staying with you. Book a no-obligation call →
Sources
[1] Fortune, "MIT report: 95% of generative AI pilots at companies are failing", https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
[2] Gartner, "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025", https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025
[3] Gartner, "Lack of AI-Ready Data Puts AI Projects at Risk", https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk
[4] McKinsey, "The State of AI 2025: Agents, innovation, and transformation", https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
[5] SmartDev, "True Cost of Generative AI for SMEs: 5-Year Breakdown", https://smartdev.com/gen-ai-implementation-cost-sme/
[6] Ardas, "How Much Does AI Implementation Cost in 2025? A Real-World Breakdown", https://ardas-it.com/how-much-does-ai-implementation-cost-in-2025
[7] Coherent Solutions, "AI Development Cost Estimation: Pricing Structure, Implementation ROI", https://www.coherentsolutions.com/insights/ai-development-cost-estimation-pricing-structure-roi
[8] AIDOLS, "AI Implementation Timeline: How Long Does It Actually Take?", https://aidolsgroup.com/en/blog/category/industry-insights/ai-implementation-timeline-guide/
[9] Helium42, "AI Implementation Roadmap: The 6-8 Week Framework That Actually Works", https://helium42.com/blog/ai-implementation-roadmap
[10] Agility at Scale, "AI Proof of Concept (PoC) and Pilot Projects: How to Validate and Scale", https://agility-at-scale.com/ai/strategy/pilot-projects-and-proof-of-concept/
[11] Faberwork, "Fixed Price vs T&M: A Guide for Enterprise AI and Software Projects", https://www.faberwork.com/latest-thinking/fixed-price-vs-t-m
[12] CBS, "Gebruik kunstmatige intelligentie (AI) door bedrijven neemt toe", https://www.cbs.nl/nl-nl/nieuws/2025/09/gebruik-kunstmatige-intelligentie--ai---door-bedrijven-neemt-toe
[13] CBS, "Gebruik van AI-technologie door Nederlandse microbedrijven", https://www.cbs.nl/nl-nl/longread/rapportages/2026/gebruik-van-ai-technologie-door-nederlandse-microbedrijven?onepage=true
[14] Dialogic / Rijksoverheid, "Onderzoek AI-gebruik in het MKB: Ambitie of aarzeling?", https://www.rijksoverheid.nl/documenten/rapporten/2025/09/29/onderzoek-ai-gebruik-in-het-mkb
[15] SwiftFlutter, "AI Roadmap 2025: Ship Production Models in 60 Days", https://swiftflutter.com/2025-ai-roadmap-how-mid-market-teams-ship-production-models-in-60-days
[16] Contract Nerds, "Key IP Contract Clauses for AI Deployment and Development", https://contractnerds.com/securing-innovation-key-ip-contract-clauses-for-ai-deployment-and-development/