Choosing an AI agency is not an IT procurement exercise. It's a partnership decision. And the consequences play out for years.
The right partner has something running in 6 to 8 weeks, helps you stack GDPR and the EU AI Act cleanly, and hands over your code, data, and models if you ever decide to walk away. The wrong one delivers an expensive pilot, vendor lock-in, and a report that disappears into a drawer. The seven questions below separate the two. Further down you'll also find a few red flags and a step-by-step plan that takes you from shortlist to signed pilot contract in four weeks.
First, the numbers. Because they make immediately clear why this conversation matters more than most SME owners think. Gartner estimates that organisations will abandon roughly 60% of their AI projects through the end of 2026 because the underlying data simply isn't AI-ready [1]. In the Netherlands, 23% of companies with 10 or more employees now use AI [4]. And 74.6% of Dutch SMEs that considered it but pulled back cite "lack of experience" as the main reason [5]. The partner you put next to you largely decides whether you end up in the first group or the second.
Why the choice of your AI agency is decisive
Technology is no longer the problem. Models are mature, integrations are standard, the building blocks are sitting on the shelf. The difference between an AI project that delivers and one that quietly dies is in the people.
IBM measured this rather sharply in October 2025. In EMEA, 72% of large enterprises reported significant productivity gains from AI; for SMEs, that figure was 55% [9]. A 17-point gap. And that gap doesn't come from technology. Both groups buy from the same vendors, use the same models, read the same white papers. It comes down to execution. And execution depends on who's standing next to you.
A good agency, then, doesn't deliver models. It delivers results in production, a team that thinks alongside you, contracts that protect you, and an approach that actively reduces the chance of failure. The questions below will tell you in ten minutes whether you're talking to that kind of partner — or to someone who mostly sells slides.
Question 1: Can you show me three customers in my sector who have made it to production?
No pilots. No proof-of-concepts. No "we have a few under NDA." Three projects, in production, with customers you're allowed to call. Preferably in your sector, or at least with comparable data complexity.
Why it matters: more than half of GenAI projects die after the PoC phase [10]. An agency that runs lots of pilots but few production systems knows how to build a demo. Not how to keep a system running for three years while the model drifts, the data shifts, and users keep asking different things. That's an entirely different craft.
Also explicitly ask about a project that failed. An honest agency always has one. The answer reveals whether they learn — or whether they'll repeat the same mistake on you.
Then call at least one reference. Ten minutes on the phone with someone who actually went through the project tells you more than ten pages of case studies. Don't ask "did it go well?" Ask "what went wrong, and how did they react when it did?"
Question 2: How do you handle data, GDPR, and where do the models run?
Since 2 February 2025, a first portion of the EU AI Act has been in force in the Netherlands, with the Dutch Data Protection Authority as supervisor [6]. For most SME applications: GDPR (and its Dutch implementation, the UAVG) remains the foundation, the AI Act stacks on top. An agency that can't explain this off the top of its head doesn't belong on your shortlist. Period.
Concretely, you want answers on:
- Will there be a data processing agreement, and does it explicitly state that your data is not used for model training?
- Where do the models run? Inside the EU, on AWS Frankfurt, on Azure West Europe, or via a US provider?
- Which sub-processors are in the chain?
- How is the application classified under the AI Act, and what does that mean for logging, human oversight, and explainability?
The urgency sits in an uncomfortable statistic. 92% of AI vendor contracts claim broad usage rights to your data, and only 17% commit to full regulatory compliance [7]. Translation: if you don't negotiate those clauses, nobody will do it for you. Not your vendor. Not your accountant. Not the regulator.
(A quick aside: I've seen these clauses sit on page 14 of real contracts, in fine print you have to read three times before the meaning lands. So actually read the whole document, even when the salesperson tells you it's "standard.")
Question 3: Who owns the code, models, and integrations if we stop?
67% of organisations actively try to avoid vendor lock-in with AI suppliers, and 45% say lock-in has already prevented them from switching to a better option [6]. Not a theoretical risk, then. It happens weekly.
What you want in the contract:
- Code in a repository in your name. Not "access during the term." Ownership.
- External accounts (OpenAI, Anthropic, Azure OpenAI, etc.) under your company name, with your billing address. The agency gets access via your tenant.
- Finetune data and any custom model weights transferable in a standard format.
- An exit clause with a 30 to 60 day handover period at pre-agreed rates.
Does an agency push back on this? It's telling you something important. Their business model depends on you not being able to leave. Walk away.
Question 4: What's your approach — fixed price, time & materials, or pilot-first?
The right answer is not "fixed price," however much business owners want to hear it.
AI projects carry real technical uncertainty at the start. Whether the data is good enough. Whether the model performs on your use case. Whether integration with your ERP or CRM is even feasible in the form you want. An agency that quotes a fixed price at that stage is either pricing in the uncertainty (read: too expensive), or cutting corners later (read: quality slips). Both are bad outcomes for you.
The approach that works in practice is hybrid. Time & materials or a light fixed price for a 6 to 8 week pilot, scoped to one sharply defined business question [11]. Fixed price only after that, for the production build, because by then you know what you're building and on what data. See also our overview of what AI implementation realistically costs for SMEs for the budget per phase.
The signal you're looking for: an agency that aligns its commercial model with the project risk, not with its own margin.
Question 5: How do we measure ROI, and when do we pull the plug?
PoCs with success criteria tied to an operational decision reach production 2.7x more often than PoCs measured only on model accuracy [8]. Translated into what to ask the agency: pin down, together, upfront, one measurable business decision this system has to influence. Time per quote. Error rate in purchase order processing. Share of customer queries handled without a human. Something concrete, something a CFO can work with.
On timeline, don't expect miracles. Deloitte NL benchmarks 2 to 4 years to full ROI; only 6% of companies see full payback within 12 months [12]. An agency that promises your investment back by the end of the quarter after three weeks of work is selling fiction.
So build a kill switch: pre-agreed metrics that decide, after the pilot, whether you continue or stop. Without that moment, every project drifts on: longer, more expensive, vaguer. Honestly: this is the question most SME owners skip in practice, and it's exactly the question that costs the most later. Not yet clear which process you actually want to tackle? Read how to apply AI deliberately in your business before you start shopping.
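The kill-switch idea is simple enough to write down before the pilot starts. A minimal sketch, assuming hypothetical metrics and thresholds (the criterion names and numbers below are illustrative examples, not benchmarks from the article): you fix the targets up front, and the continue/stop decision becomes mechanical.

```python
# Sketch of a pilot kill switch: thresholds are agreed before the pilot,
# so the decision afterwards is mechanical, not a negotiation.
# All metric names and numbers below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    target: float            # threshold agreed before the pilot starts
    achieved: float          # measured at the end of the pilot
    higher_is_better: bool = True

    def passed(self) -> bool:
        if self.higher_is_better:
            return self.achieved >= self.target
        return self.achieved <= self.target

def pilot_decision(criteria: list[Criterion]) -> str:
    """Continue only if every pre-agreed criterion is met."""
    failed = [c.name for c in criteria if not c.passed()]
    return "continue" if not failed else "stop (failed: " + ", ".join(failed) + ")"

criteria = [
    Criterion("minutes per quote", target=15, achieved=12, higher_is_better=False),
    Criterion("PO error rate (%)", target=2.0, achieved=1.4, higher_is_better=False),
    Criterion("queries handled without a human (%)", target=40, achieved=46),
]

print(pilot_decision(criteria))  # → continue
```

The point is not the code, it's the discipline: if the targets only get written down after the results are in, every pilot "succeeds."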
Question 6: Which part of the work do you do yourselves, and which do you outsource?
Many "AI agencies" are in reality a commercial layer on top of an offshore team or a freelancer pool. Nothing inherently wrong with external capacity. But you need to know about it. And it needs to be in your contract.
Ask concretely:
- Who writes the code? Full-time staff, freelancers, or an offshore partner in Poland, India, or Ukraine?
- Who handles data engineering and MLOps?
- How many people sit on your account, and what exactly are their roles?
- Who is your primary point of contact, and what's their escalation path when things go sideways in week six?
An agency that answers transparently can be trusted, even if they do work with external partners. An agency that dances around the question is probably going to hand off your project in pieces to people who have never met each other. You'll see it in the quality. Every time.
Question 7: What happens after go-live? Who maintains, retrains, and improves the system?
An AI system in production is not a classical web application. Models drift. Data shifts. Users suddenly start asking different questions. Regulation evolves. Only 45% of organisations with high AI maturity keep projects operational for three years or longer; for the rest, the plug gets pulled [3]. Maintenance is not an extra. It's the difference between an investment and a write-off.
What you want in the SLA:
- A fixed cadence for model evaluation and retraining (quarterly or monthly, depending on the use case).
- Drift monitoring with clear thresholds for re-intervention.
- Incident response times specified by severity, not "best effort."
- A monthly or quarterly improvement budget, with a lightweight process to ship small enhancements without negotiating a new contract every time.
An agency that only sells "support" delivers SaaS-grade service on top of an MLOps problem. That's underpowered, and you'll feel it after a year.
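What "drift monitoring with clear thresholds" can look like in practice: a minimal sketch using the Population Stability Index (PSI), one common way to compare live input data against the data the model was trained on. The 0.2 threshold is a widely used rule of thumb, not a universal standard, and the data here is synthetic for illustration.

```python
# Minimal drift-monitoring sketch: compare live production data against a
# reference sample using the Population Stability Index (PSI).
# The 0.2 re-intervention threshold is a common rule of thumb, not a standard.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins at a tiny probability to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 5000)   # data the model was trained on
live = rng.normal(1.0, 1.0, 5000)        # incoming production data, shifted

score = psi(reference, live)
if score > 0.2:                          # pre-agreed re-intervention threshold
    print(f"PSI {score:.2f}: drift detected, trigger model evaluation")
```

An SLA clause worth its name pins down exactly this: which metric, which threshold, and what happens when it trips.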
Red flags: signals to walk away
A few patterns I see repeatedly on projects that go off the rails:
- A "strategy phase" of three to six months before a single line of code gets written. Realistic pilots run in 6 to 8 weeks.
- Not a single reference in production. Only pilots and demos.
- Broad data clauses in the contract, where the agency is allowed to use "metadata" or "anonymised data" for "service improvement." That's almost always code for: your data is being used to train their product.
- No exit or handover clause. Or a handover clause so expensive it's effectively a lock-in.
- A fixed price before the pilot, without any substantive data check. That's gambling with your money.
- One person who runs all the conversations, answers all the technical questions, and writes the proposal. Often a freelancer with a logo.
- No clear answer on GDPR / AI Act questions, or a deflection to "our lawyers" the moment things get specific.
No single signal is automatically a dealbreaker. Two or more? Call the next agency.
Practical step plan: from shortlist to signed contract in 4 weeks
A workable cadence that prevents you from getting stuck in selection mode for months:
| Week | What you do |
|---|---|
| Week 1 | Define one business question sharply. Build a longlist of 6 to 8 agencies. Send a short RFI with the seven questions above. |
| Week 2 | Cut down to a shortlist of 3. First call per agency, technical and commercial. Call references. |
| Week 3 | Request concrete pilot proposals (scope, price, success criteria, timeline). Compare not just the number, but approach and risk distribution. |
| Week 4 | Negotiate contract clauses: data, ownership, exit, SLA. Sign the pilot contract. Kick off in two weeks. |
This rhythm forces everyone — you and the agencies — to make decisions. It prevents the most common pitfall on AI projects in the SME segment: open-ended conversations that drag on for months without anything concrete happening. Everyone nods, nobody signs.
If you want to be sharper upfront on what type of solution fits, see also the difference between an AI agency and other partners and how to compare different AI tools for businesses. That context makes your week 1 RFI a lot sharper.
What you walk away with if you do this right
The Dutch SMEs that are getting real results from AI in 2025 and 2026 have three things in common. They picked one sharply scoped problem. They started with a 6 to 8 week pilot with measurable success criteria. They signed a contract where data, code, and exit were watertight.
A mid-market fintech that followed this approach hit ROI in 21 months and a five-year return of 320% [13]. Not because of a spectacular model. Because of a disciplined selection and execution process. Boring, in other words.
The technology is the same for everyone. The difference is in who you have standing next to you, and in the conversation you have before you sign.
Looking for an AI partner who answers these seven questions head-on?
We build production-ready AI solutions for SMEs — with clear ownership clauses, exit terms, and pilots that prove within 6 to 8 weeks whether it works. Get in touch →
Sources
[1] Gartner, "Lack of AI-Ready Data Puts AI Projects at Risk", https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk
[2] Gartner, "AI Projects in I&O Stall Ahead of Meaningful ROI Returns", https://www.gartner.com/en/newsroom/press-releases/2026-04-07-gartner-says-artificial-intelligence-projects-in-infrastructure-and-operations-stall-ahead-of-meaningful-roi-returns
[3] Gartner, "Survey on AI Maturity (June 2025)", https://www.gartner.com/en/newsroom/press-releases/2025-06-30-gartner-survey-finds-forty-five-percent-of-organizations-with-high-artificial-intelligence-maturity-keep-artificial-intelligence-projects-operational-for-at-least-three-years
[4] CBS, "Gebruik kunstmatige intelligentie (AI) door bedrijven neemt toe", https://www.cbs.nl/nl-nl/nieuws/2025/09/gebruik-kunstmatige-intelligentie--ai---door-bedrijven-neemt-toe
[5] Dialogic, "Onderzoek AI-gebruik in het mkb (november 2025)", https://dialogic.nl/wp-content/uploads/2025/11/onderzoek-ai-gebruik-in-het-mkb.pdf
[6] Kai Waehner, "Enterprise Agentic AI Landscape 2026: Trust, Flexibility, and Vendor Lock-in", https://www.kai-waehner.de/blog/2026/04/06/enterprise-agentic-ai-landscape-2026-trust-flexibility-and-vendor-lock-in/
[7] Internet Lawyer Blog, "Drafting AI Vendor Contracts: The 10 Clauses That Protect Your Business", https://www.internetlawyer-blog.com/drafting-ai-vendor-contracts-the-10-clauses-that-protect-your-business/
[8] Imaginary Cloud, "AI Proof of Concept ROI: A Guide to De-Risk Your Investment", https://www.imaginarycloud.com/blog/ai-proof-of-concept-roi-guide
[9] IBM, "Two-thirds of EMEA enterprises report significant productivity gains from AI", https://newsroom.ibm.com/2025-10-28-Two-thirds-of-surveyed-enterprises-in-EMEA-report-significant-productivity-gains-from-AI-finds-new-IBM-study
[10] Addend Analytics, "AI Proof of Concept in 6 Weeks Framework", https://addendanalytics.com/blog/blog-ai-proof-of-concept-6-weeks-framework
[11] Coherent Solutions, "AI Development Cost Estimation: Pricing Structure, Implementation ROI", https://www.coherentsolutions.com/insights/ai-development-cost-estimation-pricing-structure-roi
[12] Deloitte NL, "AI ROI: The paradox of rising investment and elusive returns", https://www.deloitte.com/nl/en/issues/generative-ai/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html
[13] SmartDev, "GenAI implementation cost in SMEs", https://smartdev.com/gen-ai-implementation-cost-sme/