Cybersecurity for AI
Scope
The Cybersecurity for AI Focus Group is dedicated to securing AI systems throughout their lifecycle—from design and development to deployment, operation, and continuous monitoring. The group addresses AI‑specific cyber risks that go beyond traditional application, cloud, or infrastructure security, including risks related to models, data, agents, supply chains, and autonomous decision‑making.
In addition to technical security, the Focus Group places strong emphasis on governance aspects such as risk management, accountability, transparency, compliance, and human oversight. This includes aligning AI security practices with regulatory requirements, ethical principles, and organisational governance frameworks.
This Focus Group explicitly concentrates on Security for AI, not AI for Cybersecurity. Its mission is to ensure that AI applications themselves are resilient, trustworthy, and securely governed, enabling their safe and responsible use in the face of evolving cyber and systemic risks.
Objectives
- Share practical implementation experience on securing AI systems in real‑world environments
- Identify and analyse AI‑specific threat vectors and attack scenarios
- Translate regulatory and policy requirements (e.g. EU AI Act, NIS2) into actionable technical and organisational controls
- Contribute to common guidance, good practices, and reference architectures for secure AI
- Strengthen cross‑sector collaboration between industry, public authorities, and academia on AI security challenges
Topics
The Focus Group covers, among other topics:
- AI‑specific threat landscape (e.g. data poisoning, model inversion, prompt injection, inference‑time attacks), with a focus on agentic AI security
- Secure AI lifecycle & Secure‑by‑Design principles
- Governance of models, agents, and AI tool execution
- Identity, access management, and privilege control for AI systems and agents
- Integrity, provenance, and protection of training and inference data
- Monitoring, logging, and incident response for AI systems
- Model risk classification and security assurance
- Interaction between AI security and EU regulatory frameworks (AI Act, NIS2, certification schemes)
These topics complement—without duplicating—the scope of Application Security, GRC, Cloud Security, and Enterprise Security Architecture focus groups.
Practices
- Quarterly meetings combining expert presentations and peer‑to‑peer discussions
- Use‑case driven sessions focused on lessons learned, not theory
- Optional output in the form of summary notes, guidance papers, or recommendations, when agreed by participants
- The Focus Group operates according to the Cyber Security Coalition’s established Focus Group code of conduct, which enables experience sharing under the Traffic Light Protocol (TLP) and the Chatham House Rule, ensuring a trusted and non‑commercial environment
How to join the group
This Focus Group is intended for professionals directly involved in the design, deployment, governance, or security of AI systems, including professionals from:
- Private sector organisations deploying or developing AI
- Public authorities and regulators dealing with AI oversight
- Academic and research institutions working on AI security topics
Active participation and experience sharing are strongly encouraged.
