ISITC TALKTIME: AI Safety vs AI Security
A powerful conversation with ISITC Europe and GenBounty
AI is transforming financial services at breakneck speed, but with innovation comes risk. In this eye-opening episode of ISITC Europe's AI podcast series, Director Gary Wright sits down with Bob Morel, CEO of GenBounty, to unpack one of the most misunderstood yet critical topics in the industry today: AI safety and AI security. Bob begins by drawing a clear line between the two.
- AI security focuses on protecting the infrastructure—authentication, API protection, data encryption, input validation, and defending against cyber‑attacks.
- AI safety, on the other hand, ensures the behaviour of the model is trustworthy: preventing drift, bias, hallucinations, unsafe decision‑making, and unintended access to underlying systems.
Together, these disciplines form the backbone of responsible AI deployment.
The discussion dives into how the EU AI Act strengthens this foundation by classifying AI systems by risk and enforcing safeguards that scale with their potential impact. From simple transparency requirements for limited‑risk tools to extensive logging, explainability, and oversight for high‑risk systems, the Act provides a structured path to safer AI.
Gary and Bob explore how this ties directly into broader regulatory frameworks such as DORA, consumer duty, and market resilience—highlighting why firms must stop treating AI as a standalone topic and instead weave it into their entire risk and compliance ecosystem.
The conversation also raises a stark warning: with class‑action litigation accelerating across Europe, firms that fail to assess and control their AI exposure could face significant legal and financial consequences. Every AI‑enabled application is now effectively a “product”—and that means liability.
Through their partnership, ISITC Europe and GenBounty are equipping firms with the tools, knowledge, and frameworks needed to stay compliant, resilient, and firmly in control of AI.
Explore the full discussion and take the next step toward safeguarding your organisation’s AI future.
Viewing time: 17 mins

Bob Morel is the CEO of GenBounty, an AI risk management platform designed to facilitate market access for AI-driven applications within the European Union. Specialising in the EU AI Act and enterprise architecture, Bob helps AI teams classified as manufacturers under the new regulations navigate complex compliance landscapes. Through GenBounty, he delivers end-to-end product risk management, offering services that range from litigation defence and consumer safety to accreditation for CE marking.
With a robust background in technical leadership, Bob previously served as Head of Application Security at Centrica and Application Security Lead at CoinFLEX, where he oversaw secure development lifecycles and ISO 27001 compliance. He is an active contributor to the cybersecurity community as an author for Infosec, creating learning paths on topics such as HTML5 security and the use of ChatGPT for offensive security. His expertise is supported by a B.Sc. in Computer Science, an ongoing MBA in Cybersecurity, and industry certifications including the (ISC)² CISSP, Security+, and SecAI+.