Hackathon: consultants build a three-way matching agent in an afternoon
On April 30, teams from BPM Company and Mendify built a working three-way matching agent for accounts payable in a single afternoon, hands-on in the AgentsLab platform with Claude Code as a sparring partner.

On April 30, together with BPM Company we hosted a hackathon for consultants inside Impacto Group: one afternoon to build a working AI coworker for accounts payable, with Claude Code as a sparring partner on the setup.
Not a demo, but actually building
The idea behind the hackathon was simple: instead of giving a presentation about what AI coworkers can do in theory, we let experienced consultants build one themselves. Hands-on, in pairs, in a real production environment of the AgentsLab platform.
The group was around ten System Architects and consultants from BPM Company and Mendify, all with strong technical backgrounds and a solid grasp of business processes. For a large part of the group it was their first serious encounter with Claude Code as a development tool.
The brief: the third control step in accounts payable
In an accounts payable team, incoming invoices from contractor consultants are normally checked through three-way matching. The three controls are:
- Invoice: what the supplier is charging us.
- Logged hours: what the consultant entered in the time-tracking system.
- Contract: the rate, term and hour cap agreed for the engagement.
When all three line up, the invoice can be posted without further review. When they diverge, finance has to judge whether it's a small clarification (a question to the supplier) or a real deviation (escalation to the cost-center owner).
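The routing decision described above can be sketched in a few lines of Python. This is our illustration, not the platform's actual logic; the names and the hour tolerance are assumptions.

```python
from dataclasses import dataclass

SMALL_HOURS_DELTA = 1.0  # assumed tolerance; a real team would agree this with finance


@dataclass
class Invoice:
    hours: float
    rate: float  # euros per hour


def three_way_match(invoice: Invoice, logged_hours: float, contract_rate: float) -> str:
    """Route an invoice: 'post', 'ask_supplier' or 'escalate'."""
    if invoice.hours == logged_hours and invoice.rate == contract_rate:
        return "post"  # all three sources agree: post without review
    if invoice.rate != contract_rate:
        return "escalate"  # rate deviates from the contract: cost-center owner decides
    if abs(invoice.hours - logged_hours) <= SMALL_HOURS_DELTA:
        return "ask_supplier"  # small hour discrepancy: clarify with the supplier
    return "escalate"  # large hour discrepancy: real deviation
```

The point is less the matching itself than that the function returns an explicit routing label a downstream step can act on.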
The brief for the teams was to build that third control step end-to-end in the AgentsLab platform. Concretely: build five new agents (four linear and one multiple-choice verdict agent), together forming the flow from invoice to final decision. The verdict agent then routes the outcome to one of three end stations: post in Exact, draft a question to the supplier, or escalate.
To make it concrete and testable, the teams got seven test fixtures, among them:
- A match between 176 invoiced and 176 logged hours that should route to posting.
- An invoice for 200 hours against 176 hours in the time-tracking system.
- An expired contract.
- A rate of 120 euros per hour against an agreed 100 euros per hour.
- A missing contract.
Each scenario had an expected outcome, so by the end of the afternoon it was measurable whether the flow actually worked.
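A fixture set like this can be encoded as plain data and run through whatever decision function the flow produces. The format below is our sketch, not the hackathon's actual harness; the expected route for the hour-mismatch case is an assumption.

```python
# Illustrative fixture format; the article's set had seven scenarios, two shown here.
FIXTURES = [
    # (name, invoice hours, logged hours, expected route)
    ("exact match",   176, 176, "post"),
    ("hour mismatch", 200, 176, "escalate"),  # expected route assumed
]


def run_fixtures(decide):
    """Run each fixture through a decision function and collect mismatches."""
    failures = []
    for name, invoice_hours, logged_hours, expected in FIXTURES:
        got = decide(invoice_hours, logged_hours)
        if got != expected:
            failures.append((name, expected, got))
    return failures


# A trivial stand-in decision function, just to show the harness running:
def naive_decide(invoice_hours, logged_hours):
    return "post" if invoice_hours == logged_hours else "escalate"
```

With expected outcomes encoded like this, "does the flow work" becomes a measurable yes or no rather than a demo impression.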
Claude Code as a sparring partner
What stood out during the build session was how quickly participants got into the groove once they put Claude Code to work on the workflow setup and the agent configuration. Instead of figuring out every pattern themselves by reading reference code, they let Claude Code think along about the structure, the right imports and the logic of the verdict agent.
For many consultants this was a different way of working than they were used to. AI as a help with writing an email or a piece of text is familiar by now. AI as a sparring partner that thinks through the architecture of a process, shows how an existing agent is put together and proposes a structure for the new one: that is another step. The difference is in the thinking help, not just the typing speed.
Where it got substantively interesting
The most instructive agent in the brief was the contract check. Not because the matching itself is complex, but because a contract check has to distinguish between different kinds of deviations: missing contract, expired contract, period mismatch, rate variance, hour cap exceeded. Each variant calls for a different next step. The teams that gave their contract agent a clean, structured result object got their verdict agent working almost for free afterwards.
That's exactly the kind of design choice we run into in our own customer projects: how do you make intermediate results explicit enough that a next agent can decide on them deterministically, without cramming all the decision logic into one big prompt.
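A structured contract-check result of the kind described might look like the sketch below. The type names, fields and routing choices are ours, not the platform's; the deviation kinds come from the list above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ContractDeviation(Enum):
    NONE = "none"
    MISSING_CONTRACT = "missing_contract"
    EXPIRED = "expired"
    PERIOD_MISMATCH = "period_mismatch"
    RATE_VARIANCE = "rate_variance"
    HOUR_CAP_EXCEEDED = "hour_cap_exceeded"


@dataclass
class ContractCheckResult:
    deviation: ContractDeviation
    detail: str = ""                     # human-readable explanation for the verdict agent
    agreed_rate: Optional[float] = None
    invoiced_rate: Optional[float] = None


# With an explicit deviation kind, the verdict agent can route deterministically
# on a lookup table instead of re-reasoning over raw text. Routing is illustrative.
ROUTE = {
    ContractDeviation.NONE: "post",
    ContractDeviation.RATE_VARIANCE: "ask_supplier",
    ContractDeviation.PERIOD_MISMATCH: "ask_supplier",
    ContractDeviation.MISSING_CONTRACT: "escalate",
    ContractDeviation.EXPIRED: "escalate",
    ContractDeviation.HOUR_CAP_EXCEEDED: "escalate",
}
```

This is the "clean, structured result object" pattern: the contract agent does the detection, and the verdict agent becomes a near-trivial mapping over it.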
What we take away from this
A few observations from the afternoon:
- The step from process consultant to agent builder is smaller than many people think. With a good platform and the right sparring partner, a working flow that holds up against realistic test scenarios can be built in an afternoon.
- The combination of platform and Claude Code works well. The platform supplies the building blocks and the production structure; Claude Code helps with the reasoning around setup and configuration. That's a different division of roles than "AI as a coding assistant", and it fits the way consultants think.
- A well-structured intermediate result is worth more than a clever prompt. The teams that delivered their contract check as a clean data structure had a simpler verdict agent and fewer edge cases as a result.
We'd like to thank BPM Company for the hospitality in Utrecht and all the participants from BPM Company and Mendify for their energy and willingness to build something new in one afternoon. The next edition is already on the calendar.
Curious what an AI coworker can do for your process?
Book a no-strings Quick Scan and explore the options.
Book a Quick Scan