Adversarial Robustness and Defense

    As AGI research accelerates and advanced AI models become deeply integrated into real-world decision-making systems, their vulnerability to adversarial attacks poses a critical challenge to safety, trust, and deployment at scale. At the Artificial General Intelligence Conferences, we spotlight this high-impact domain to foster groundbreaking dialogue among security researchers, AGI theorists, and deep learning pioneers.

    Adversarial Robustness addresses how intelligent systems can maintain reliable, correct behavior under hostile perturbations, while defense mechanisms aim to secure these systems throughout the training and inference lifecycle.
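    To make "hostile perturbations" concrete, here is a minimal, illustrative sketch of the Fast Gradient Sign Method (FGSM) attack against a logistic-regression model. The helper name `fgsm_perturb` and the toy weights are our own assumptions for illustration, not part of any conference material.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Craft an FGSM adversarial example for a logistic-regression model.

    The attack moves x by eps in the direction sign(d loss / d x),
    the worst-case step under an L-infinity budget of eps.
    """
    z = x @ w + b                    # model logit
    p = 1.0 / (1.0 + np.exp(-z))    # predicted probability of class 1
    grad_x = (p - y) * w            # analytic cross-entropy gradient w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy example: a confidently classified point nudged toward the boundary.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])   # clean logit: 0.5*1 + (-0.5)*(-2) = 1.5 (class 1)
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
# The adversarial logit (0.6) is strictly lower than the clean logit (1.5).
```

    A robust model is one whose predictions stay correct for every `x_adv` inside the perturbation budget, not just for the clean `x`.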

    We invite submissions and expert talks on:

    • Robust Training Techniques: Adversarial training, certified defenses, and perturbation-resilient learning.

    • Biologically Inspired Mechanisms: Defenses rooted in human perception and adaptive response systems.

    • Benchmarking & Evaluation: New frameworks for stress-testing AGI models under adversarial conditions.

    • Explainability-Robustness Intersections: Linking transparency with resilience to build inherently trustworthy AGI.

    • Deployment-Aware Threat Modeling: Robustness in federated, edge, and continual learning environments.

    • Protecting Foundation & AGI Models: Ensuring the security of large language models and evolving AGI systems.
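    As one concrete reference point for the first topic above, adversarial training fits a model to worst-case perturbed inputs rather than clean ones. The sketch below is a hedged, self-contained illustration using an FGSM-style inner maximizer on a logistic-regression model; the function name `adv_train_step` and the update scheme are our own simplified assumptions.

```python
import numpy as np

def adv_train_step(w, b, X, y, eps, lr):
    """One step of adversarial training for logistic regression.

    Inner maximization: each input is replaced by its FGSM perturbation
    within an L-infinity ball of radius eps.
    Outer minimization: a standard gradient-descent update on the
    perturbed batch, so the model learns to fit the worst case.
    """
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    X_adv = X + eps * np.sign((p - y)[:, None] * w)   # inner maximization
    z = X_adv @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_w = X_adv.T @ (p - y) / len(y)               # outer minimization
    grad_b = np.mean(p - y)
    return w - lr * grad_w, b - lr * grad_b
```

    Certified defenses strengthen this empirical min-max recipe by attaching a provable guarantee that no perturbation within the budget can change the prediction.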

    Whether you’re working on provable guarantees, innovative defense algorithms, or resilient AGI architectures, this session is your platform to present cutting-edge advances to a global community of scientists, technologists, and thought leaders.

    Join us at the Artificial General Intelligence Conferences 2026 to help shape an AI future that is not only intelligent, but also secure, robust, and trustworthy.