This monograph presents a comprehensive exploration of Reverse Engineering of Deceptions (RED) in adversarial machine learning. It examines both machine-centric and human-centric attacks, providing a unified view of how adversarial strategies can be reverse-engineered to safeguard AI systems.

For machine-centric attacks, the monograph covers methods for reverse engineering pixel-level perturbations, inferring adversarial saliency maps, and recovering victim model information from adversarial examples. For human-centric attacks, the focus shifts to inferring generative model information and localizing manipulations in generated images.
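To make the machine-centric setting concrete, the sketch below illustrates one common formulation of pixel-level RED: a denoising network estimates the benign image underlying an adversarial example, and the perturbation is recovered as the residual. This is a minimal illustrative sketch under assumed components; the `Denoiser` architecture and the `reverse_engineer` helper are hypothetical stand-ins, not methods defined in the monograph.

```python
# Illustrative sketch of pixel-level RED: recover the benign image and the
# injected perturbation from an adversarial example via a denoising network.
# `Denoiser` is a toy placeholder standing in for a trained RED model.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Toy convolutional denoiser (hypothetical stand-in for a trained RED network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def reverse_engineer(denoiser: nn.Module, x_adv: torch.Tensor):
    """Estimate the benign image and the pixel-level perturbation it carries."""
    with torch.no_grad():
        x_benign = denoiser(x_adv)   # estimated clean image
        delta = x_adv - x_benign     # recovered perturbation (residual)
    return x_benign, delta

# Usage on a random stand-in batch of 3-channel, 32x32 images:
denoiser = Denoiser()
x_adv = torch.rand(4, 3, 32, 32)
x_benign, delta = reverse_engineer(denoiser, x_adv)
print(delta.abs().max())  # magnitude of the recovered perturbation
```

In practice the denoiser would be trained on pairs of benign and adversarial images so that the residual faithfully reflects the attack; the residual can then be analyzed further, for example to infer properties of the attack or the victim model.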

The work also presents a forward-looking perspective on the challenges and opportunities associated with RED, and offers foundational and practical insights for AI security and trustworthy computer vision.