Securing Azure AI Workloads Against Prompt Injection and Unsafe Agentic Access
Event Title:
Securing Azure AI Workloads Against Prompt Injection and Unsafe Agentic Access
Speaker:
Waseem Awwad – Microsoft MVP, Security and Azure
Event Format:
Online technical community session
Event Overview:
As Azure AI workloads move from pilots into real enterprise environments, the security discussion must go beyond model behavior and basic AI safety. Many AI-enabled applications are now connected to internal documents, search indexes, APIs, automation flows, identities, and hybrid systems. This creates a wider security surface where prompt behavior, data access, tool permissions, network exposure, monitoring, and governance all need to be reviewed together.
This online technical community session will examine how to secure Azure AI workloads from an architecture and operating model perspective. The discussion will cover practical risks such as prompt injection, indirect prompt injection through retrieved content, unintended data exposure, excessive permissions, unsafe agentic access, weak retrieval boundaries, unmanaged API/tool access, and gaps in logging or operational ownership.
The session will also explain why Azure AI security should be treated as workload security, not only as model configuration. Participants will learn how controls such as managed identities, least privilege access, private connectivity, data classification, retrieval access trimming, monitoring, logging, human approval for sensitive actions, and governance review can reduce risk before AI workloads move deeper into production.
This session is designed for technical professionals who are building, reviewing, securing, or governing AI-enabled workloads on Azure.
Key Discussion Areas:
Prompt injection and indirect prompt injection risks
Retrieval boundaries and enterprise data exposure
Unsafe agentic access and tool/API permissions
Managed identities and least privilege access
Private connectivity and endpoint exposure
Logging, monitoring, and investigation readiness
Governance controls for AI-enabled workloads
Human review and approval for sensitive actions
Security review questions before production deployment
Target Audience:
Security professionals
Cloud architects
AI engineers
Developers
IT professionals
Infrastructure teams
Technical decision-makers
Event Summary:
An advanced online technical community session focused on securing Azure AI workloads through practical architecture controls, identity design, retrieval governance, safe tool access, monitoring, and operational security review.