Ariel Fogel
AI Security Researcher, Pillar Security
Ariel Fogel is a founding engineer and AI security researcher at Pillar Security, where he helps organisations understand and reduce the security risks introduced by agentic systems in production. His research focuses on supply chain attacks and on how AI deployments fail under adversarial pressure across model, template, and inference-layer attack surfaces; he translates those findings into concrete guidance for security, engineering, and risk leaders.

His work on inference-time backdoors, which reframes chat templates, rather than model weights, as a critical payload layer, was first presented at OWASP AppSec Global 2025 and BSides TLV 2025 and was subsequently accepted to ICLR 2026. The research has shaped how organisations think about supply chain and configuration risk in their AI stacks. Ariel contributes to a number of OWASP initiatives and co-leads the OWASP State of Agentic AI Security and Governance initiative, working with industry practitioners, standards bodies, and enterprise security teams to turn emerging threats in agentic systems into actionable controls and governance frameworks.

He holds a B.A. in Behavioural Economics from Muhlenberg College and an M.A. in Learning Sciences from the University of Wisconsin–Madison.