ISITC TALKTIME: Compliance Obligations for High Risk AI Systems


A must‑hear deep dive from ISITC Europe and Genbounty

As artificial intelligence becomes embedded in every corner of financial services, one area is rising to the top of every board agenda: high‑risk AI compliance. In this powerful episode of ISITC Europe’s AI podcast series, Gary Wright sits down with Bob Morel, CEO of Genbounty, to unpack what the EU AI Act really demands from firms deploying high‑risk AI systems—and why the stakes have never been higher.

Bob explains that for any organisation selling or deploying AI within the EU, high‑risk classification triggers a comprehensive set of obligations. These include risk management frameworks, data governance, continuous monitoring, technical documentation, logging, and—crucially—human oversight. The goal is clear: ensuring AI‑driven decisions remain safe, explainable, and accountable.

Listeners will gain practical insight into:

  • Why high‑risk AI requires a holistic compliance ecosystem, not just a one‑off assessment.
  • How data quality, model training, and third‑party vendor risk directly impact regulatory exposure.
  • Why firms must assess both their internal AI use and the AI embedded in their software suppliers’ products.
  • How the EU AI Act aligns with existing frameworks like ISO standards and data protection regulations—making it a powerful future‑proofing tool for UK and global firms.
  • Why continuous testing and monitoring are essential, not optional, for maintaining compliance over time.
  • How “human‑in‑the‑loop” oversight is becoming a core job function, not a theoretical concept.

Gary and Bob make one message unmistakably clear: if AI isn’t already part of your risk and compliance strategy, it needs to be—now.

Explore the full conversation and prepare your organisation for the new era of AI governance.


Viewing time: 14 mins

Bob Morel

Bob Morel is the CEO of Genbounty, an AI risk management platform designed to facilitate market access for AI-driven applications within the European Union. Specializing in the EU AI Act and enterprise architecture, Bob helps AI teams classified as manufacturers under the new regulations navigate complex compliance landscapes. Through Genbounty, he delivers end-to-end product risk management, offering services ranging from litigation defense and consumer safety to accreditation for CE marking.

With a robust background in technical leadership, Bob previously served as the Head of Application Security at Centrica and the Application Security Lead at CoinFLEX, where he oversaw secure development lifecycles and ISO 27001 compliance. He is an active contributor to the cybersecurity community as an author for Infosec, creating learning paths on topics such as HTML5 security and the use of ChatGPT for offensive security. His expertise is supported by a B.Sc. in Computer Science, an ongoing MBA in Cybersecurity, and industry certifications including the (ISC)² CISSP, Security+, and SecAI+.