Gen AI Application Security & Risk Summit

Generative and agentic AI are transforming how applications are built, deployed and operated but they are also expanding the attack surface in ways traditional AppSec programs were never designed to handle. Autonomous workflows, dynamic tool invocation, AI-generated code and probabilistic reasoning systems demand a new security model.

As AI accelerates DevOps velocity and introduces non-deterministic behavior into production environments, organizations must move beyond reactive vulnerability management toward continuous validation, structured red teaming and lifecycle-wide governance.

This virtual summit, presented in partnership with the OWASP GenAi Security Project, brings together CISOs, security practitioners and builders to tackle one urgent question: How do we secure AI systems before risk outpaces adoption?

The program will explore how enterprises are embedding AI risk into governance frameworks, evolving adversarial testing for agentic systems and designing secure-by-default architectures for autonomous applications.

In this conference, we will explore:

  • The expanding attack surface introduced by GenAI and agentic applications — including prompt injection, tool misuse, memory manipulation, data leakage and AI supply chain risk.
  • Practical frameworks and lifecycle control mapping for securing AI across development, testing, deployment and governance. • The evolution of AI red teaming — from one-time assessments to continuous adversarial validation.
  • Executive strategies for AI risk ownership, vendor evaluation and regulatory alignment.
  • Developer and DevSecOps approaches for securing agentic applications without sacrificing speed and innovation.
  • Real-world lessons from organizations operationalizing AI security in production environments.

Attendees will learn:

  • Clear strategies for embedding AI risk into enterprise governance
  • Practical approaches to red teaming GenAI and agentic systems
  • Secure-by-design patterns for shipping autonomous applications safely
  • Insight into emerging attack techniques and AI supply chain exposure
  • Real-world lessons from organizations deploying AI in production

Key Issues Covered at The Official Cybersecurity Summit

Top 6 Reasons to Attend the Cybersecurity Summit

1
Learn

Learn from renowned experts from around the globe on how to protect & defend your business from cyber attacks during interactive panels & fast track discussions.

2
Evaluate Demonstrations

Evaluate and see demonstrations from dozens of cutting-edge cybersecurity solution providers that can best protect your enterprise from the latest threats.

3
Time, Travel & Money

Our mission is to bring the cyber summit to the decisionmakers in the nation’s top cities. Our events are limited to one day only and are produced within first-class hotels, not convention centers.

4
Engage, Network, Socialize & Share

Engage, network, socialize and share with hundreds of fellow business leaders, cybersecurity experts, C-Suite executives, and entrepreneurs. All attendees are pre-screened and approved in advance. On-site attendance is limited in order to maintain an intimate environment conducive to peer-to-peer interaction and learning.

5
CEUs / CPE Credits

By attending a full day at the Cybersecurity Summit, you will receive a certificate granting you Continuing Education Units (CEU) or Continuing Professional Education (CPE) credits. To earn these credits you must participate for the entire summit and confirm your attendance at the end of the day.

6
A Worthwhile Investment

By investing one day at the summit you may save your company millions of dollars, avoid stock devaluation, and potential litigation.

Questions

For any questions, please contact our
Registration Team.

Sponsor

To sponsor at an upcoming summit, please fill out the
Sponsor Form.

Admission to the Cybersecurity Summit is reserved exclusively for active cybersecurity, IT, Information security practitioners tasked with safeguarding their enterprises against cyber threats and managing cybersecurity solutions. All registrations are subject to review.
 
Students, interns, educators, individuals not currently employed in IT, and those in sales or marketing roles are not eligible to attend.
 
Additionally, if we are unable to verify your identity with the information you provided during registration, your attendance may be cancelled.
 
Please note these qualifications pertain to all attendees, including members of our partner organizations.

Agenda

The Official Cybersecurity Summit delivers high-impact sessions designed to help leaders strengthen resilience, protect critical infrastructure, and align security with business goals.

Attendees will gain actionable insights from expert panels, explore cutting-edge solutions, and connect directly with top industry innovators - making this a can’t-miss agenda for CISOs and security executives.

10:00-
Exhibit Hall Opens
12:15-12:45
AI Weaponization by Threat Actors

The Threat Intelligence Initiative presents a cyber threat intelligence briefing about the use of AI systems by threat actors, an analysis of advancement of offensive security capabilities of models and provides an overview of the way these impact the threat landscape today and in the near future. 


12:15-12:45
Inside the OWASP State of Agentic AI Security and Governance v2

Twelve months ago, agentic AI security was largely theoretical: research demos, controlled environments, and speculative threat models. By spring 2026, the picture has changed. Documented attacks now exist across nearly every category of the OWASP Top 10 for Agentic Applications, the agentic supply chain has moved from theory to active exploitation, and regulators have responded with incident reporting clocks measured in hours. This session releases v2 of the OWASP State of Agentic AI Security and Governance report and walks through its three central findings: the threats are operational now, deployment-layer safety converges with security for high-autonomy agents, and governance has to keep pace with deployment. Attendees will leave with the new Enterprise Adoption Maturity Model, the alignment to the OWASP Top 10 for Agentic Applications, and a current map of the regulatory landscape across 42 instruments and 10 jurisdictions.


12:15-12:45
Top10 Security Risks For Agentic AI For 2026

Join the initiative lead for an in-depth walkthrough of the OWASP Agentic Top 10, a forward-looking framework that identifies the most critical security risks in agent-based and autonomous AI systems. As organizations increasingly adopt agentic architectures—where AI systems act, plan, and execute tasks with varying levels of autonomy—understanding their unique threat landscape becomes essential.

This session will break down each of the Top 10 risks, explaining how they manifest in real-world agentic environments, from prompt injection and tool misuse to over-privileged agents and unintended autonomy escalation. Beyond identifying risks, the session will focus heavily on practical mitigation strategies, including architectural safeguards, policy controls, monitoring techniques, and secure design patterns.

Attendees will gain:

A clear understanding of the emerging risks specific to agentic AI systems
Real-world examples illustrating how these threats can be exploited
Actionable guidance on mitigating vulnerabilities across the agent lifecycle
Insights into building secure, resilient, and trustworthy agentic applications

Whether you're a security professional, AI engineer, or technical leader, this session will equip you with the knowledge needed to navigate and secure the next generation of AI systems.


12:45-1:00
Networking Break
1:00-1:30
FinBot Goes Live: Hands-On Agentic AI Security with OWASP’s CTF Platform

In this session, we’ll explore how FinBot transforms theory into practice—allowing participants to actively exploit and defend against vulnerabilities such as goal manipulation, tool misuse, and privilege escalation in agentic systems. Built as an “Agentic AI Juice Shop,” the platform provides realistic multi-agent workflows, security challenges, and automated scoring to help practitioners understand how attacks unfold in modern AI-driven applications.
Attendees will learn how the latest FinBot update enables:

Hands-on experimentation with real-world agentic AI attack scenarios

Deeper understanding of OWASP’s evolving threat taxonomy through live exercises

Practical application of detection, mitigation, and guardrail strategies

A continuous feedback loop between offensive testing and secure design

Whether you’re a builder, defender, or security leader, this session will show how FinBot bridges the gap between guidance and implementation—equipping you to secure the next generation of autonomous AI systems.
 


1:00-1:30
From Risk to Evidence: Validated AI Red Teaming for Regulated Environments

Most organisations treat AI red teaming as a one-off exercise and OWASP Top 10 as a separate checklist from regulatory risk. Both mistakes leave gaps. This session shows how to consolidate them into a single operationalised pipeline — and how to scale it. Drawing on 1,080 attack transcripts across NHS healthcare and regulated financial services, we present evidence that no single attacker or scorer model is universally best, that panels of diverse models outperform any individual "best" model, and that cascade evaluation achieves 97% of optimal accuracy at 30% of the cost. You'll see how to map OWASP Top 10 categories alongside domain-specific regulatory obligations into concrete multi-turn test objectives, and how tools like airt (built on UK AISI's Inspect framework), PyRIT, and Promptfoo enable structured crescendo attacks, majority-vote scoring, and ZDR-compliant execution at scale. Walk away with a framework for operationalising evidence-based AI red teaming in regulated environments — not another tutorial defaulting to GPT-4o-mini.


1:00-1:30
Operationalizing the OWASP Agentic Top 10: A Real-World Case Study

An in-depth case study on how Intuit successfully implemented the OWASP Agentic Top 10 as part of the Agentic Top 10 Adoption Challenge.

1:50-2:20
Evaluating AI Red Teaming Solutions/Vendors: New OWASP Criteria Guide

As demand for AI red teaming accelerates, organizations face a growing challenge: how to meaningfully evaluate the tools and vendors claiming to secure their generative and agentic AI systems. The OWASP GenAI Security Project addresses this gap with its newly published Evaluating AI Red Teaming Solutions/Vendors: Criteria Guide—a practical framework for assessing capability, coverage, and credibility in an increasingly crowded market.

This session introduces the guide and walks through its core evaluation dimensions, including depth of attack simulation, alignment with real-world threat models (such as the Agentic Top 10), transparency of methodologies, and support for continuous testing across the AI lifecycle. We’ll examine how to distinguish surface-level testing from rigorous adversarial assessment, and how to validate whether a solution can effectively uncover risks like prompt injection, tool abuse, and unsafe autonomy.


1:50-2:20
From Visibility to Action: Operationalizing AI Supply Chain Security with AIBOM

AI systems are no longer simple applications with a model attached. They are increasingly complex software ecosystems composed of models, datasets, prompts, agents, tools, APIs, plugins, vector databases, open-source packages, infrastructure services, and runtime integrations. This creates a new software supply chain problem: organizations cannot secure, govern, or respond to risks they cannot see.

This session explores how software supply chain transparency must evolve for AI systems, especially as organizations move from standalone LLM applications to agentic architectures that can reason, call tools, access memory, interact with external systems, and influence business decisions. Building on established SBOM practices, the session introduces the AI Bill of Materials, or AIBOM, as a practical mechanism to document and understand the components, dependencies, relationships, and runtime capabilities that shape AI system behavior.

Attendees will learn where traditional SBOMs remain essential, where they fall short, and what additional AI-specific elements are needed to support security, compliance, procurement, risk management, and incident response. The session will examine AI supply chain risks across the lifecycle, including model sourcing, training and fine-tuning data, system prompts, agent instructions, tool and plugin dependencies, third-party AI services, orchestration layers, and MCP-style integrations.

The session will connect these challenges to OWASP guidance for LLM and agentic applications, while also highlighting open-source tooling such as the OWASP AIBOM Generator, which helps translate AI transparency concepts into practical implementation. The goal is to help security leaders, architects, and practitioners treat AIBOM not as static documentation, but as an operational security capability for evidence-based governance of AI systems.


1:50-2:20
Governing the Data Your AI Doesn't Know It's Leaking: A Practical OWASP Framework for CISOs

Generative AI doesn't just process data — it aggregates, transforms, and exposes it in ways that break every assumption your existing data security controls were built on. The context window collapses multiple trust domains into a single flat namespace with no internal access control. A confidential HR record retrieved by a RAG pipeline sits alongside user input with equal trust weight. Your DLP solution was never designed for this.
This session, led by the lead of the OWASP GenAI Data Security Initiative, delivers a CISO-ready framework for governing data security across LLM, GenAI, and agentic AI systems. Drawing directly from three OWASP resources — the GenAI Security Framework Crosswalk, the GenAI Data Security Risks and Mitigations 2026, and the in-progress GenAI Data Security Best Practices v2 — attendees will walk away with a structured, actionable approach to understanding what is actually at risk and what to do about it.
The session addresses the 21 enumerated OWASP GenAI data security risks (DSGAI01–DSGAI21), spanning sensitive data leakage, shadow AI, data poisoning, agent credential exposure, vector store vulnerabilities, multimodal exfiltration, cross-context bleed, and regulatory non-compliance. It maps these risks against established frameworks — NIST AI RMF, MITRE ATLAS, ISO/IEC 42001, the EU AI Act, and others — through the OWASP GenAI Security Crosswalk, giving security leaders a governance bridge between what OWASP defines and the frameworks their boards and auditors already recognize.
Whether you are building your first AI security program or hardening a production deployment at scale, this session provides the governance vocabulary, risk taxonomy, and prioritized controls to brief your board, align your teams, and reduce your actual exposure.


2:20-3:05
Panel 1: From Experimentation to Adoption, Implementing AI Security Best Practices

As generative and agentic AI systems transition from experimentation to enterprise-scale deployment, executive leaders are redefining how security, governance and innovation intersect. This panel brings together leaders and senior practitioners for a candid discussion on adopting AI security best practices across complex organizations.

Panelists will share how they are aligning to emerging frameworks, integrating AI risk into enterprise risk management, evaluating AI vendors and balancing oversight with speed. The conversation will explore board-level accountability, regulatory pressures, cross-functional coordination and how to measure AI security maturity in a rapidly evolving landscape.


2:20-3:05
Panel 2: Red Teaming GenAI & Agentic System Evolving Tactics for a New Attack Surface

Agentic and generative AI systems introduce dynamic behaviors that challenge traditional security testing approaches. This practitioner-led panel explores how red teaming methodologies are evolving to address prompt injection, tool misuse, data leakage, memory manipulation and emergent agent behavior.

Panelists will debate structured vs. exploratory testing models, discuss how to define adversarial objectives and measurable outcomes, and examine how continuous validation differs from one-time assessments. The session will also address collaboration between red teams, developers and AI engineers to ensure findings translate into durable mitigations.


2:20-3:05
Panel 3: Securing Agentic Applications — Developer & DevSecOps Frontlines

Building agentic applications means orchestrating models, APIs, data pipelines and external tools — often with autonomous decision-making in the loop. This panel brings together developers and DevSecOps leaders to discuss the architectural and operational challenges of securing these complex systems.

 

Panelists will explore secure-by-design patterns, least-privilege tool invocation, runtime guardrails, AI supply chain transparency, monitoring and drift detection. The discussion will focus on real implementation trade-offs — how to maintain velocity while embedding meaningful security controls into CI/CD and production environments.


3:05-3:15
Networking Break
3:15-3:45
A Practical Incident-Response Framework for Generative AI Systems

This session will provide a deep dive into the GenAI-IRF, a practitioner-focused framework designed to solve the ""operational synthesis gap""—the distance between high-level AI governance and tactical incident execution.

Key Learning Objectives:

The Six Incident Archetypes: Understand why clustering incidents by ""response workflow similarity"" (e.g., merging Training Data Poisoning and Supply Chain vulnerabilities into a single ""Model/Data Poisoning"" runbook) is more effective for SOC teams than traditional vulnerability-centric models .

Standards Synthesis: See how the framework maps granular response actions directly to NIST AI 600-1 controls, MITRE ATLAS TTPs, and OWASP LLM risks, providing an end-to-end auditable trail .

Multi-Disciplinary Orchestration: Learn to use the framework’s RACI matrices and Swimlane diagrams to coordinate complex responses across technical (ML Engineers), legal (Compliance), and business (PR/Communications) teams.

The ""Echo Chamber"" Case Study: We will walk through a simulated evaluation of a sophisticated ""storytelling jailbreak"" (a multi-turn, context-poisoning attack) to demonstrate how the playbook provides immediate structure and actionable steps during a crisis .

The session concludes with actionable recommendations for organizations to integrate these playbooks into their existing SOAR (Security Orchestration, Automation, and Response) platforms for scalable AI security ."


3:15-3:45
Hardening AI Coding Agents with Hooks: Enforcing Least Privilege on Autonomous Developers

AI coding agents like Claude Code execute shell commands, read and write files, and make autonomous decisions -effectively acting as developers with broad access to your codebase and system. As adoption accelerates, securing these agents at runtime becomes critical. But how do you enforce least privilege on an agent that needs wide access to be useful?

This talk presents a practical, hook-based approach to securing AI coding agents. Using Claude Code's event-driven hook system, I'll demonstrate how PreToolUse and PostToolUse interception points can enforce security policies mapped directly to the OWASP Top 10 for LLM Applications: blocking dangerous commands before execution (LLM06 - Excessive Agency), detecting prompt injection patterns in tool calls (LLM01 - Prompt Injection), and generating full audit trails for every agent action (LLM09 - Overreliance).

The session includes a live walkthrough of open-source hook scripts -block-dangerous-commands, protect-secrets, and an auto-audit-logger -along with performance benchmarks comparing Node.js and Python hook implementations. Hooks run synchronously, meaning every millisecond counts; I'll share real-world data on keeping security controls under 100ms per invocation.

Attendees will leave with a ready-to-use, open-source hook toolkit, a framework for applying defense-in-depth and human-in-the-loop principles to any AI agent that executes code, and concrete patterns for building custom security hooks without degrading agent performance. No slides-only theory -everything demonstrated is running in production and available on GitHub.


3:15-3:45
Your AI Agent Is a People Pleaser — And That’s a Security Bug

AI agents are designed to be helpful, cooperative, and aligned with user intent. But this very alignment often becomes a security vulnerability. When an agent is overly eager to satisfy requests, it may bypass safeguards, misuse tools, or expose sensitive data — not out of malice, but because it is optimized to comply.
This talk explores how “helpfulness” in AI systems creates exploitable behavior. Based on real-world testing of AI applications and agents, we’ll examine how attackers can manipulate agents into performing unintended actions by leveraging social engineering patterns, ambiguous instructions, or indirect prompt injection techniques.
We’ll dive into the tension between alignment and security, showing how current guardrails fail when agents are given autonomy, memory, and access to external systems. Through concrete examples, we will demonstrate how agents can be coerced into privilege escalation, unsafe tool execution, and policy violations — even when safeguards appear to be in place.
The session will introduce a security-focused perspective on agent design, including strategies to constrain behavior without breaking usability. Topics include intent validation, capability scoping, and designing agents that can say “no” safely.
Attendees will gain practical insights into why alignment is not a substitute for security — and how to build AI systems that are resilient, not just helpful.


3:45-4:15
From Copilots to Autonomous Agents: A Reference Architecture for Securing Agentic GenAI Applications

The next generation in agentic generative AI systems is shifting to simple, one-turn copilots to the autonomous, multi-agent platforms able to read documents, call APIs, and organize workflows and make crucial decisions without much human intervention. Such change increases the AI attack surface. Model-centric and traditional prompt guardrails and checks are inadequate when the agents are able to chain tools, maintain long-term memory and can communicate with untrusted users, services and data. Based on the nascent OWASP advice on GenAI security and red-teaming, this session proposes a reference architecture and methodology of securing agentic GenAI applications throughout their lifecycle.

It provides a layered security model that discusses identity management, the use of tools, memory management, orchestration, and human-in-the-loop governance of single-agent as well as multi-agent systems. It maps the important failure modes, e.g. tool and goal misuse, memory/context pollution, supply-chain, cross-agent privilege escalation, to concrete controls and observability hooks that are more shaping the current application security and DevSecOps practices. The session is followed by an account of a research-informed evaluation plan that will be a combination of threat modeling, GenAI red-teaming, and antifragile experiments in chaos. This method constantly challenges the behavior of agents in unfavorable and degraded conditions, guided by the OWASP GenAI Red-Teams Initiative as well as the larger literature on cyber-red-teaming.

The participants will leave behind a clear mental model, test cases reusable architectural patterns, and research-based test cases, which they can customize to their own platforms. The meeting will focus on security-oriented engineers, DevSec teams, and researchers who would rather not have the next wave of agentic GenAI systems stuck to a set of guardrails, instead creating a systematic and measurable system resilience.


3:45-4:15
Gen AI Red Teaming Guidance and Best Practices

As artificial intelligence (AI) systems, encompassing both agentic (autonomous, goal-directed) and non-agentic models, become increasingly integrated into various sectors, ensuring their security and alignment with human values is paramount. This panel delves into the OWASP Generative AI Red Teaming Guide, offering insights into methodologies for identifying and mitigating vulnerabilities in these diverse AI applications.
 


3:45-4:15
Lethal by Design: How Extending AI Agents with MCP Servers and Skills Turns Capabilities into Weapons

When Security teams think about AI agent risk, they typically focus on scanning MCP servers, reviewing supply chains, and checking for known vulnerabilities. But this approach addresses about half the problem. The other half - Skills, the textual instruction sets that shape agent reasoning - remains almost entirely ungoverned.

This session presents findings from our research that analyzed hundreds of the most popular MCP servers and Skills in the wild, revealing a fundamental asymmetry in the AI agent capability stack. MCP servers expose deterministic, observable tool calls that can be logged and audited. Skills operate inside the model's reasoning context, where their influence on agent behavior is causally opaque - you can see what an agent did, but connecting that action back to the skill that caused it requires inference, not observation.

And the data is stark: we found that 76% of popular MCP servers carry at least one high-risk capability. One in four expose arbitrary code execution. And 62% of popular Skills carry at least one risky characteristic, leaving them largely invisible to current security tooling.

We then map these capabilities into toxic combinations: multi-tool attack chains grounded in real-world incidents including attacks against Cursor, Docker, Amazon Q, Salesforce Agentforce, and Replit, to show how individually legitimate capabilities come together to form catastrophic attack paths.

The No Excessive CAP framework is then introduced, which shifts agent governance from properties you cannot fully control (whether an agent will encounter malicious input) to amplifiers you can: Capabilities (what the agent can do), Autonomy (how freely it can act), and Permissions (what identity it runs under). These three dimensions interact multiplicatively, and we provide concrete guidance for assessing and controlling each.


4:15-4:45
Execution Governance: The Layer That Defines What Agents Cannot Do

In March 2026, a supply chain attack compromised LiteLLM — the universal proxy between AI agents and every major LLM API.
The attack never reached the agent's reasoning layer.
It operated in the dependency beneath it.

Every behavioral defense remained active.
Every defense was irrelevant.
This pattern repeats.

Attacks increasingly operate below the agent — in the execution environment, in trusted dependencies, in the composition of individually safe components.
The same month, Axios (100M weekly downloads) was backdoored via a compromised maintainer account.
Five projects compromised in 12 days. Each component passed individual verification. The chain was the attack.

This talk presents execution governance as the missing architectural layer.
The approach does not detect unsafe behavior. It defines a World Manifest — a compiled specification of what actions and components exist in the agent's executable world. At runtime, enforcement is deterministic: same input, same decision, always. No LLM on the critical enforcement path.
We demonstrate the gap through a controlled scenario: an agent configured with standard best practices executes a supply chain–style attack.
Then, under a governed execution environment — without modifying the agent — the same attack cannot execute. Not because it was blocked.
Because the action does not exist in the agent's world.

The takeaway is architectural: OWASP Agentic Top 10 classifies how agents fail.
Execution governance defines what cannot happen. These are complementary layers.
Currently, only one exists in standard practice."


4:15-4:45
Guardrails First: Building AI Agents That Won’t Leak Your Secrets

Generative and agentic AI are rapidly reshaping security workflows. But without proper guardrails, they can just as easily introduce new risks as they solve existing problems. As a Security Engineer and Pentester I clearly understand how important it is to keep up with the latest trends and do not limit yourself in using AI because in this case you will be several steps behind the malicious actors. AI agents bring new opportunities, extend our capabilities and more. But how many of those people who use or build them on a daily basis really think about data protection, privacy and security? We have a great example with OpenClaw when numerous insecure deployment led to the compromise and exposure of sensitive data.
When using AI agents for pentesting, security audits or bug bounty, you should keep in mind even more risks because the information that the agent has access to can cost someone reputation and customer trust. If the information about misconfigurations or exploitable issues is somehow leaked to third-party, it could turn into a huge pitfall. One more thing to consider, not always available on the market solutions can be a good fit for your current needs. That’s why more and more security professionals and entire teams building their own agents.
I have also decided to build several own agents due to the aforementioned reasons and it was a valuable experience for me that I would like to share with the community. As an OWASP member, I clearly understand how important it is to share knowledge with others in order to scale adoption of security practices. My talk will cover lessons learned during implementation of several security agents.
I will walk you through the importance of secure design and threat modeling practices at the early stages of planning your future agents, developing specifications. We will talk how to utilize existing AI-powered IDEs and frameworks for embedding security right from the start.
Next part will be dedicated to work with AI/LLM providers and how to ensure that you achieve the following:
- Do not leak sensitive data to third-party
- Cost-efficient usage of LLMs that assists in achieving your goals
- Importance of setting up the sandbox for agents that have opportunities to run commands, launch different tools
- Important tips to know when creating prompt and instructions for an agent
- How to test security of your agent using different tools
- How to ensure that every new piece of code does not introduce security issues and what tools can be helpful
During the presentation, I will also share how to use OWASP resources like Securing Agentic Applications Guide and OWASP Top 10 for Agentic Applications when developing your own agents and how they can help you to ensure that agents remain secure and resilient.

Of course, it is hard to imagine successful projects without pitfalls and issues and I am going to cover situations and problems that I have faced when started using agents in production. One of the questions that I am going to cover - Why even after several rounds of testing on sample data you may (and most probably will) face a lot of challenges when use your agent in real projects for the first time.

This presentation will be useful for a broad range of specialists, from Developers, to DevOps specialists, Security Engineers, Penetration Testers and everyone who plan to build or already working on the AI agents. """


4:45-4:55
Closing Keynote with Scott Clinton

Key takeaways from the day.


Speakers

Our speakers bring unmatched expertise and real-world experience in cybersecurity, risk management, and business strategy. Through engaging keynotes, panels, and discussions, they deliver actionable insights and practical solutions that help CISOs and security leaders stay ahead of evolving threats.

Karan Bansal
Karan Bansal
Global Head of AI and Security Innovation
ArmorCode
Scott Clinton
Scott Clinton
Co-chair, Board of Directors OWASP GenAI Security Project
OWASP
Ron Del Rosario
Ron Del Rosario
VP, Head of AI Security, ISBN
SAP
Ophir Dror
Ophir Dror
Co-Founder & CPO
Lasso Security
Ariel Fogel
Ariel Fogel
AI Security Researcher
Pillar Security
Joel Fulton
Joel Fulton
SVP, Engineering & AI and CISO
Cyderes
Ziad Ghalleb
Ziad Ghalleb
Senior Product Marketing Manager
Wiz
Emmanuel Guilherme Junior
Emmanuel Guilherme Junior
Data Security Initiative Lead
OWASP Gen AI Security Project
Rachel James
Rachel James
Ass. director Cyber threat intelligence
Novartis
Keren Katz
Keren Katz
Director of Threat Detection Research
Zenity
Tolgay Kizilelma
Tolgay Kizilelma
Director of the MS in Cybersecurity Program
Dominican University of California
Evgeniy Kokuykin
Evgeniy Kokuykin
Co-Lead of the Agentic Security Initiative
OWASP
Shan Kulkarni
Shan Kulkarni
Co-Founder & CEO
Nullify
Kayne McGladrey
Kayne McGladrey
Senior Member
IEEE
Mike McGuire
Mike McGuire
Senior Product Marketing Manager
Wiz
Venkata Sai Kishore Modalavalasa
Venkata Sai Kishore Modalavalasa
Chief Architect and Engineering Leader
Straiker
Gal Moyal
Gal Moyal
CTO Office
Noma Security
Bryan Nakayama
Bryan Nakayama
CTI Initiative Lead/Senior Threat Intelligence Analyst
OWASP GenAI Security Project
Helen Oakley
Helen Oakley
VP, Software & AI Security
SAP
Gal Pnini
Gal Pnini
AI Security Researcher
Noma Security
Dmitry Raidman
Dmitry Raidman
Co-Founder and CTO
Cybeats
Jason Ross
Jason Ross
Co-lead, AI Red Teaming Initiative, Product Security Principal
Salesforce
Ihor Sasovets
Ihor Sasovets
Lead Security Engineer (Penetration Tester)
TechMagic
Victoria Shutenko
Victoria Shutenko
Penetration Tester
TechMagic
John Sotiropoulos
John Sotiropoulos
Head of AI Security
Kainos
Derrisa Tuscano
Derrisa Tuscano
Graduate - MSc Cyber Security
University of Warwick
V. Venesulia Carr
V. Venesulia Carr
CEO
Vicar Group, LLC
Sergey Vlasov
Sergey Vlasov
Senior Software Engineer
Radware
Vidura Wijekoon
Vidura Wijekoon
Senior Software Engineer (AI/ML) - Line Manager
Virtusa
Steve Wilson
Steve Wilson
Chief Product Officer
Exabeam
Ross Young
Ross Young
CEO
Clear Capabilities Inc

Sponsors

The Official Cybersecurity Summit connects innovative solution providers with the cybersecurity leaders who evaluate and influence purchasing decisions. With a dynamic exhibition hall and a packed agenda of interactive panels and engaging sessions, this event offers unmatched opportunities to showcase solutions and build meaningful connections.

Partners

The Cybersecurity Summit is proud to partner with some of the industry’s most respected organizations in technology, information security, and business leadership.

Admission to the Cybersecurity Summit is reserved exclusively for active cybersecurity, IT, Information security practitioners tasked with safeguarding their enterprises against cyber threats and managing cybersecurity solutions. All registrations are subject to review.

Students, interns, educators, individuals not currently employed in IT, and those in sales or marketing roles are not eligible to attend.

Additionally, if we are unable to verify your identity with the information you provided during registration, your attendance may be cancelled.

Please note these qualifications pertain to all attendees, including members of our partner organizations.