ISITC TALKTIME: Can AI Act Compliance be automated by AI Agents?
A provocative conversation with ISITC Europe and Genbounty
As AI systems grow more powerful, a new question is emerging across financial services, technology, and regulatory circles: Can AI compliance be automated? In this compelling episode of ISITC Europe’s AI podcast series, Gary Wright sits down with Bob Morel, CEO of Genbounty, to explore whether AI agents can truly manage EU AI Act compliance—and what happens when machines start checking the work of other machines.
Bob explains the rapid rise of AI agents capable of research, decision‑making, and autonomous action. These agents can already test AI models, benchmark them against frameworks from NIST, the OECD, and OWASP, and even generate risk reports or propose fixes. In some development pipelines, AI is already reviewing code, flagging vulnerabilities, and halting deployments. But this innovation comes with a warning.
Bob highlights the fundamental flaw: AI validating AI directly contradicts the core principles of the EU AI Act, which demands transparency, explainability, and—critically—human oversight. Allowing AI to certify its own safety risks creating systemic blind spots, cascading errors, and the very “AI‑checks‑AI” loop regulators are trying to prevent.
The conversation dives into:
- Why automation is attractive—but dangerous—when applied to AI safety
- How human creativity and motivation still outperform AI in uncovering novel risks
- Why mandatory logging, explainability, and human‑in‑the‑loop controls are essential
- How Genbounty blends automation with human expertise to keep AI accountable
- Why financial‑sector regulations like MiFID, DORA, and consumer‑protection rules reinforce the need for human command
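The controls discussed above (automated checks, mandatory logging, and human‑in‑the‑loop release decisions) can be sketched as a simple deployment gate. This is an illustrative Python sketch only, not Genbounty's implementation; every class, field, and check name here is invented for the example. The point it demonstrates is the episode's central one: an AI agent may propose a verdict, but only a named human can release, and every decision is logged.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Finding:
    check: str       # e.g. a mapping to an OWASP or NIST check (hypothetical labels)
    severity: str    # "low" | "medium" | "high"
    detail: str

@dataclass
class DeploymentGate:
    """Hypothetical gate: the agent can only block or pass; release needs a human."""
    audit_log: list = field(default_factory=list)

    def agent_review(self, findings):
        # Automated step: the agent blocks on any high-severity finding,
        # but its "pass" is a recommendation, not a release.
        verdict = "block" if any(f.severity == "high" for f in findings) else "pass"
        self._log("agent", verdict, findings)
        return verdict

    def human_release(self, reviewer: str, approved: bool):
        # Human-in-the-loop: no deployment without an explicit,
        # logged decision attributed to a named person.
        verdict = "release" if approved else "block"
        self._log(reviewer, verdict, [])
        return verdict

    def _log(self, actor, verdict, findings):
        # Mandatory logging: every decision is timestamped and attributable.
        self.audit_log.append({
            "actor": actor,
            "verdict": verdict,
            "findings": [f.check for f in findings],
            "at": datetime.now(timezone.utc).isoformat(),
        })
```

For example, a high‑severity finding makes the agent block, and even a clean agent pass still requires a human to call `human_release` before anything ships, which is exactly the "AI proposes, human disposes" pattern the episode argues the EU AI Act demands.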
Gary and Bob make a powerful case: AI should enhance compliance, not replace human judgment. Firms must use AI wisely—not let AI use them.
Viewing time: 13 mins

Bob Morel is the CEO of Genbounty, an AI risk management platform designed to facilitate market access for AI-driven applications within the European Union. Specializing in the EU AI Act and enterprise architecture, Bob helps AI teams classified as manufacturers under the new regulation navigate complex compliance landscapes. Through Genbounty, he delivers end-to-end product risk management, offering services that range from litigation defense and consumer safety to accreditation for CE marking.
With a robust background in technical leadership, Bob previously served as the Head of Application Security at Centrica and the Application Security Lead at CoinFLEX, where he oversaw secure development lifecycles and ISO 27001 compliance. He is an active contributor to the cybersecurity community as an author for Infosec, creating learning paths on topics such as HTML5 security and the use of ChatGPT for offensive security. His expertise is supported by a B.Sc. in Computer Science, an ongoing MBA in Cybersecurity, and industry certifications including the (ISC)² CISSP, Security+, and SecAI+.