
Wrapped up Day 2 of Black Hat MEA by participating in a fireside chat with two amazing security leaders, Trina Ford and Priya Mouli.
The topic of our chat was “Agents Unleashed: Can We Control What We’ve Created?” We talked about the promise of agentic AI and the underlying risks that businesses and cyber professionals need to address.
This thought-provoking conversation explored areas such as:
- Output Gates: Ensuring that final action requests by agents are mediated by a security-controlled API or service layer that checks the output against strict, predetermined enterprise policies (a minimal sketch follows this list).
- Rate Limiting: Temporal controls that stop infinite loops, rapid escalation, or denial-of-service, so that a misaligned or hallucinating agent cannot cause immediate, high-volume harm.
- Reversibility: Autonomy is acceptable only when the agent’s actions can be immediately and easily undone without a system failure or data loss.
- Identity and Access Management: Why agents should have unique service identities and must be restricted by controls such as PAM, least privilege, and zero wildcard permissions.
- Governance: Subjecting agents to governance processes such as architecture reviews, threat modeling, risk classification, and incident response management (e.g., playbooks and tabletop exercises).
- Shadow AI: Leveraging policy frameworks, identity governance, and network/data layer monitoring to protect against unauthorized or unmanaged agents.
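To make the output-gate and rate-limiting ideas concrete, here is a minimal Python sketch of how such a mediation layer might work. It is illustrative only: the `ActionRequest` shape, the `ALLOWED_TOOLS` policy, and the `OutputGate` class are hypothetical assumptions for this post, not a reference to any specific product or framework.

```python
import time
from dataclasses import dataclass, field

# Hypothetical shape of a final action an agent wants to take.
@dataclass
class ActionRequest:
    agent_id: str
    tool: str      # e.g. "read_invoice", "delete_record"
    target: str    # resource the action touches

# Illustrative enterprise policy: an explicit per-agent allowlist, no wildcards.
ALLOWED_TOOLS = {
    "invoice-agent": {"read_invoice", "create_draft_payment"},
}

@dataclass
class OutputGate:
    """Security-controlled layer that mediates every final agent action."""
    max_actions_per_minute: int = 10
    _history: dict = field(default_factory=dict)  # agent_id -> recent timestamps

    def _within_rate_limit(self, agent_id: str) -> bool:
        # Temporal control: keep only actions from the last 60 seconds.
        now = time.monotonic()
        recent = [t for t in self._history.get(agent_id, []) if now - t < 60]
        self._history[agent_id] = recent
        return len(recent) < self.max_actions_per_minute

    def authorize(self, request: ActionRequest) -> bool:
        # 1. Policy check: the tool must be explicitly allowed for this agent.
        if request.tool not in ALLOWED_TOOLS.get(request.agent_id, set()):
            return False
        # 2. Rate limit: block runaway loops or high-volume misuse.
        if not self._within_rate_limit(request.agent_id):
            return False
        self._history.setdefault(request.agent_id, []).append(time.monotonic())
        return True

gate = OutputGate()
print(gate.authorize(ActionRequest("invoice-agent", "read_invoice", "INV-001")))   # True
print(gate.authorize(ActionRequest("invoice-agent", "delete_record", "INV-001")))  # False: not allowlisted
```

The point of the sketch is the placement of the control: the agent proposes, but a deterministic, policy-driven layer outside the model decides whether the action executes.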
Business leaders often view agents as highly efficient macros or bots. They fail to grasp that an agent's autonomy and emergent behavior (its ability to reason, adapt, and combine tools) create risks that are fundamentally different from those of traditional automation.
The deployment of agentic AI necessitates robust, layered security controls because it introduces unique, high-velocity risks that traditional perimeter and human-speed security models cannot handle.









