Most people selling AI start with a demo. A slick presentation. A long list of possibilities.
We start with a question: where does it actually hurt?
That might sound obvious. It usually isn't. Most AI projects get stuck not because the technology fails, but because they start in the wrong place: with "what can AI do?" instead of "what do you actually need right now?" One question opens a catalogue. The other opens a conversation.
So we flip the order. We spend time up front understanding the work, the workflows, and the people. We find the one use case where AI can make a meaningful difference quickly. Then we build fast, and that first real implementation does more for AI adoption inside an organization than any training session or awareness campaign ever could. People stop wondering what AI might do for them once they've seen it handle something they used to dread.
Below is a live snapshot of what we're working on. We update this page as new projects start, because we think real examples beat polished case studies every time.
IT service organization for education
The organization: An IT service organization supporting a group of schools, a childcare organization, and a central headquarters. Roughly 3,000 employees in total.
Where we started: Two workshops. Not to sell AI, but to map the landscape. What systems are in play? Where does work pile up? What's eating time that shouldn't be? Two tracks came out of it.
Track one: Ticket Creation Agent. The IT helpdesk had a bottleneck that wasn't where you'd expect it. Employees weren't sure how to submit a proper support ticket, so the IT team spent a significant chunk of their time reformatting requests, chasing missing information, and clarifying intent before they could even start solving the actual problem. We built an AI agent that guides employees through ticket creation: the right details, the right format, the first time. Phase 1 is live. The feedback from the team? Very positive.
Track two: AI adoption across the organization. Schools are all over the place when it comes to AI literacy and policy. Some have it figured out. Most are still finding their footing. We're helping design the governance framework, running adoption pilots, and comparing Microsoft 365 Copilot against free AI tools to find what actually fits the way these teams work. This track runs through 2026, with results measured in Q4.
Construction, infrastructure and industry
The organization: A company that produces mandatory safety and risk reports for projects in construction, infrastructure, and industry.
If you work in construction in the Netherlands, you know the paperwork. Every project comes with a legally required stack of risk assessments, safety protocols, and compliance documentation. Nobody enjoys it. In Dutch there's even a nickname for this kind of administrative burden: de Paarse Krokodil (the Purple Crocodile), a term for all the obligatory admin that's unavoidable, time-consuming, and rarely anyone's favorite part of the job.
This company was spending around two hours per report. Every report, every time.
The honest start: They'd been burned. A previous agency had promised AI magic and delivered very little. There's a lot of that going around right now, and we get it. So we didn't start with promises. We started with their process. How does a report get made? What information goes in? Where does the bottleneck sit?
After a proper analysis of the actual workflow, we built an AI agent that takes over the core of the reporting process. It asks the right questions, structures the input, and generates a first draft. Correct, complete, and ready for the consultant to review and refine.
The results:
- 50% time saving per report. From two hours to one.
- Higher quality at first draft. Fewer correction rounds needed.
- More time for the work that matters. The expert adds judgment, not admin.
- Better client feedback. Customers say the reports have improved compared to before.
Working MVP live in under six weeks. And now a second, larger engagement is underway: a more sophisticated Azure-based architecture with multiple specialized sub-agents, each handling a different part of the process.
Communications team at a large financial institution
The organization: The communications team at one of the Netherlands' larger financial institutions.
The ambition: Go AI-first. Not as a buzzword or a headline, but as a genuine operating model, where AI is embedded in how the team researches, writes, edits, and publishes, day to day.
The challenge: This is a heavily governed environment. You don't just deploy a tool and see what happens. Compliance isn't a checkbox here; it's a load-bearing wall. Any AI approach needs to be built on a clear picture of the current state, a structured view of where the real opportunities are, and a roadmap that can survive legal and governance scrutiny.
So we started at the beginning: an AI maturity scan combined with a structured use case prioritization. The contract was signed last week. The first interviews with the team started yesterday.
This is exactly where the work begins. Not with tools. Not with pilots. With the right questions.
What's next
These three projects are in different phases. One is already delivering measurable results and scaling up. One is in progress across two tracks. One just started.
That's how AI adoption actually works. It's not a single transformation moment. It's a series of honest conversations, smart bets, and fast builds that add up over time.
If any of this sounds familiar, and you recognize your organization in one of these stories, we'd love to talk. Not to pitch you, just to understand where it actually hurts.
