Secure, robust, and adversary-resilient AI systems are designed and implemented to function reliably, safely, and predictably under attack, uncertainty, and hostile operating conditions. Security here means protecting AI systems, models, and the data they use from unauthorized access, manipulation, theft, and misuse at every phase of the AI lifecycle, from data gathering through model training, deployment, and updates. Robustness ensures that a system continues to function despite incomplete, biased, or unexpected data and noisy real-world environments that may differ from its training settings. Adversary resilience goes a step further than tolerating a merely unfavourable environment: it accounts for deliberate attacks, including adversarial examples, data poisoning, model inversion, membership inference, and backdoor attacks, all of which can maliciously deceive, extract from, or corrupt the behaviour of an AI system.

Secure and resilient AI systems integrate several components: secure data pipelines, trusted model training, adversarial and robust optimization, explainable AI, runtime monitoring, and fail-safe mechanisms. Defensive layering adds cryptography, secure enclaves, access control, model watermarking, and continuous, adaptive threat-model validation. Most notably, these systems are engineered on the premise that attacks will occur; they are purpose-built for attack detection, graceful degradation, and recovery rather than outright failure. These are the attributes necessary for the AI to keep working properly. Reliable AI systems are essential in high-stakes applications such as autonomous systems, financial platforms, healthcare diagnostics, and national security, where a compromised system can cause calamitous and irreversible loss.
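To make the adversarial-example threat concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest evasion attacks. It uses PyTorch for illustration; `model` is assumed to be any classifier returning logits over inputs scaled to [0, 1], and `epsilon`, which bounds the perturbation size, is an illustrative parameter.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with FGSM.

    Each input value is nudged by +/- epsilon in the direction that
    increases the model's loss -- often enough to flip the prediction
    while remaining nearly indistinguishable from the original input.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()  # gradient of the loss with respect to the input
    # Step in the sign of the input gradient, then clamp to the valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A defender can use the same routine to probe a model's fragility: if small `epsilon` values reliably change predictions, the model is a soft target.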
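Adversarial and robust optimization typically means training on perturbed inputs so the model learns to resist them. The sketch below shows one training step that approximates the inner maximization of the adversarial-training objective with a single FGSM step, reusing the `fgsm_attack` helper above; `model`, `optimizer`, and `epsilon` are placeholders under the same assumptions as before.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One robust-optimization step: fit the model on adversarial inputs.

    Crafting the perturbation and then minimizing the loss on it is a
    one-step approximation of the min-max adversarial training objective.
    """
    model.train()
    x_adv = fgsm_attack(model, x, y, epsilon)  # inner maximization (one step)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x_adv), y)    # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stronger variants replace the single FGSM step with multi-step attacks such as projected gradient descent, trading training cost for robustness.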
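Runtime monitoring and graceful degradation can be as simple as refusing to act on low-confidence predictions. The following sketch assumes a single-input softmax classifier; `confidence_floor` and the deferred-result convention are illustrative choices, standing in for whatever fallback path (human review, a rule-based system, a safe default) the deployment actually uses.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def guarded_predict(model, x, confidence_floor=0.9):
    """Runtime guardrail: act only when the model is confident.

    Low-confidence inputs -- possibly out-of-distribution or adversarial --
    are deferred to a fallback path instead of failing silently. Assumes a
    single input example (batch size 1).
    """
    model.eval()
    probs = F.softmax(model(x), dim=-1)
    confidence, label = probs.max(dim=-1)
    if confidence.item() < confidence_floor:
        # Degrade gracefully: hand off rather than emit an unreliable answer.
        return {"status": "deferred", "confidence": confidence.item()}
    return {"status": "ok", "label": label.item(),
            "confidence": confidence.item()}
```

Softmax confidence is a weak signal on its own, so production systems usually layer it with dedicated out-of-distribution detectors, input sanitization, and logging for post-hoc forensics.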