Refers to techniques for attacking AI models by making deliberate, often imperceptible perturbations to input data that cause the model to produce incorrect outputs (adversarial examples), together with corresponding strategies, such as adversarial training, for defending against these attacks. Understanding both sides is crucial for building robust and secure AI systems.
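
A minimal sketch may make this concrete. One well-known attack is the Fast Gradient Sign Method (FGSM), which nudges each input feature in the direction that increases the model's loss; a common defense, adversarial training, trains the model on those perturbed inputs alongside clean ones. The sketch below assumes PyTorch; the model architecture, `epsilon` budget, and random data are illustrative placeholders, not part of any specific system.

```python
import torch
import torch.nn as nn


def fgsm_attack(model, x, y, epsilon=0.03):
    """FGSM: perturb x along the sign of the loss gradient,
    keeping each change within +/- epsilon and pixels in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One defense step: average the loss on clean and FGSM inputs."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left over from the attack
    loss = 0.5 * (nn.functional.cross_entropy(model(x), y)
                  + nn.functional.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy classifier and fake image batch, purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 1, 28, 28)      # fake images scaled to [0, 1]
    y = torch.randint(0, 10, (8,))    # fake class labels
    print("training loss:", adversarial_training_step(model, optimizer, x, y))
```

The key design point is that the attack and the defense share the same gradient machinery: FGSM uses the gradient with respect to the *input* to craft the perturbation, while adversarial training then folds those worst-case inputs back into the ordinary parameter-gradient update.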