GDPR for AI coworkers: where your data sits and why it matters
AI adoption often stalls on one question: where does our data go? For AI coworkers working in Dutch and European processes, the answer is clear: EU hosting, role-based access and case-level audit trails.
Why data residency is the first question
In every AI evaluation the same two people sit at the table: the process owner who's excited about it, and the security or compliance officer who wants to know exactly where the data goes. Rightly so. An AI coworker running invoice processing, order processing or candidate screening sees personal data, financial data and sometimes special-category data. GDPR isn't an annex here; it's the frame.
We've built the platform around that from day one. Not as an extra option, but as a default. That means design choices you don't see day to day, but that surface the moment your Data Protection Officer starts asking questions or you have to complete a DPIA (data protection impact assessment).
Where your data sits
All AgentsLab customer data is processed and stored inside the European Union. Our platform infrastructure runs on EU-based hosting providers with ISO 27001 certification. Model inference happens on EU endpoints of our model providers (Azure EU, AWS Bedrock EU, and where relevant on-prem or dedicated setups for customers with extra requirements).
In practice that means:
- No transfer of customer data outside the EU, unless explicitly agreed with the appropriate legal safeguards in place.
- No model training on your data. We don't fine-tune on customer data without explicit consent and separate data-processing agreements.
- Data retention follows your retention periods, not ours. We keep what's needed for the process and delete afterwards.
Roles, permissions and audit trail
An AI coworker operates with its own user in your ERP, ATS or mail server, with explicit authorizations. What that user isn't allowed to see or do doesn't happen. The audit trail lives at user level in your source systems (SAP change documents, AFAS audit log, Exact logbook, Odoo mail.message) and in our own case logs, where per case you can trace which steps were taken, which data was used and what the outcome was.
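A case-level trail like the one described above can be pictured as a list of structured steps per case. This is an illustrative sketch only; the field names and the `sap:doc/...` reference style are assumptions, not the platform's actual log schema:

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class CaseLogStep:
    """One traceable step in a case: what was done, on what data, with what result."""
    action: str       # e.g. "match_invoice_to_po" (hypothetical action name)
    data_refs: list   # references to source-system records, not copies of the data
    outcome: str      # e.g. "matched" or "escalated_to_human"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def export_case(case_id: str, steps: list) -> str:
    """Serialize a case trail as JSON for audit review."""
    return json.dumps(
        {"case_id": case_id, "steps": [asdict(s) for s in steps]}, indent=2
    )
```

Storing references to source records rather than the records themselves keeps the log useful for audits without turning it into a second copy of the personal data.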
For personal data we apply PII redaction in logs and internal monitoring. Your team sees what's happening in the dashboard without unnecessary personal data lying around there. For use cases involving special categories (HR, healthcare, legal) we have additional isolation patterns available.
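PII redaction in logs typically means replacing recognizable patterns with typed placeholders before a line is stored or shown in a dashboard. A minimal regex-based sketch (the patterns and labels are assumptions for illustration; production redaction would cover more categories):

```python
import re

# Ordered patterns: IBAN runs first so its digit run isn't
# partially consumed by the broader phone-number pattern.
PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{8,}\d"),
}

def redact(line: str) -> str:
    """Replace each PII match with a typed placeholder like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        line = pattern.sub(f"[{label}]", line)
    return line
```

Typed placeholders keep the log readable for operations ("an email address was sent here") without the personal data itself lying around.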
DPIA, data processing agreement and sub-processors
We provide a standard data processing agreement (DPA) and a DPIA template your DPO can use for the internal assessment. The list of sub-processors is transparent and maintained; changes are communicated before they take effect. For customers who want it, we work with dedicated infrastructure where the data stays inside your own cloud tenant (BYOC) or runs fully on-prem.
Practical questions we get often
- Can the AI coworker read personal data? Yes, insofar as your employee can: same roles, same purpose limitation.
- Is data used for model training? No, not without an explicit opt-in.
- What happens if we leave? Your data is wiped or returned within the agreed timeframe, with a deletion certificate.
- What about the AI Act? Our agents fall within managed risk categories and we document per use case.

Plan a Quick Scan and we'll walk through these questions for your specific situation.
Curious what an AI coworker can do for your process?
Book a no-obligation Quick Scan and explore the options.
Book a Quick Scan