AI that acts demands governance that adapts

By Kyra Chacra, Carole Alsharabati, Jihad Bitar, Amer Maouad, Dany Mezher

29 Jul 2025 | Publication

An illustration of a humanoid face with glitch effect

Innovation without governance isn’t progress — it’s a gamble.

AI systems are no longer passive tools. They decide, they adapt and they sometimes act in ways their designers didn’t expect.

This paper explores the security and governance risks that come with increasingly autonomous AI systems, and what institutions can do to stay in control.

From model theft and data poisoning to strategic deception and goal drift, we outline a practical, adaptive toolkit to keep AI aligned, secure and trustworthy.

The risks are real

  • Welfare wrongly denied by automated decision systems

  • Legitimate transactions blocked by misinterpreted logic

  • Bias amplified through unchecked algorithms

  • Personal data leaked by systems acting beyond their scope

And these challenges are only growing as agentic AI systems enter mainstream deployment.

What this paper covers

  • Securing AI infrastructure and MLOps pipelines

  • Designing safe, human-in-the-loop agent behaviour (see the first code sketch after this list)

  • Detecting deception, drift and reward hacking (see the drift-check sketch below)

  • Embedding privacy, transparency and alignment into evolving systems

  • Real-world case studies from digital government and finance
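
To make the human-in-the-loop idea concrete, here is a minimal Python sketch of an approval gate that lets an agent execute low-risk actions on its own and routes everything else to a human reviewer. The AgentAction type, the risk levels and the console_reviewer are illustrative assumptions for this page, not the implementation described in the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    # Hypothetical action record for illustration; not the paper's schema.
    name: str        # e.g. "transfer_funds"
    risk_level: str  # "low", "medium" or "high"
    payload: dict

def human_in_the_loop_gate(action: AgentAction,
                           approve: Callable[[AgentAction], bool]) -> bool:
    """Auto-approve low-risk actions; route everything else to a human."""
    if action.risk_level == "low":
        return True             # within the agent's earned autonomy
    return approve(action)      # a person decides; blocked unless approved

def console_reviewer(action: AgentAction) -> bool:
    # A console prompt standing in for a real review queue or ticketing tool.
    answer = input(f"Approve '{action.name}' ({action.risk_level} risk)? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    action = AgentAction("transfer_funds", "high", {"amount": 5000})
    if human_in_the_loop_gate(action, console_reviewer):
        print("Action executed.")
    else:
        print("Action blocked pending review.")
```

The point of the pattern is that autonomy is scoped: the agent proposes, but a person signs off wherever the consequences of a mistake are non-trivial.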
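Drift monitoring can start equally small. The sketch below flags drift when the mean of a recent window of model scores moves too many standard errors away from a baseline window; the three-sigma threshold and the synthetic data are assumptions chosen for illustration, and the paper's toolkit goes well beyond a single statistic.

```python
import random
import statistics

def drifted(baseline: list[float], recent: list[float],
            z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean sits more than z_threshold standard
    errors from the baseline mean (baseline spread used as the yardstick)."""
    base_mean = statistics.fmean(baseline)
    base_sd = statistics.stdev(baseline)
    stderr = base_sd / len(recent) ** 0.5  # standard error of the recent mean
    return abs(statistics.fmean(recent) - base_mean) / stderr > z_threshold

# Synthetic example: reference scores vs. a live window whose mean has slipped.
random.seed(0)
baseline = [random.gauss(0.70, 0.05) for _ in range(1_000)]
recent = [random.gauss(0.55, 0.05) for _ in range(200)]
print("Drift detected:", drifted(baseline, recent))  # -> Drift detected: True
```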

The key message

Agentic AI brings immense power, but autonomy must be earned, not granted. Governance must be dynamic, proactive and aware that today’s safeguard may be tomorrow’s loophole.

👉 Download the full paper
