How to defend against adversarial attacks on artificially intelligent systems
In an adversarial attack, legitimate inputs (e.g., images) are altered by adding small perturbations that are often imperceptible to humans. These perturbations increase the probability that a learned classifier makes a classification error.
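The idea can be sketched with a minimal FGSM-style perturbation against a toy linear classifier. The weights, input, and perturbation budget below are illustrative assumptions, not taken from the article; the point is only to show how a small coordinate-wise shift can flip a classifier's decision.

```python
import numpy as np

# Hypothetical linear classifier: values are illustrative only.
w = np.array([0.5, -0.3, 0.8])   # fixed weights of a toy linear classifier
x = np.array([1.0, 2.0, 1.5])    # a legitimate input
y = 1                            # true label in {+1, -1}

def predict(w, x):
    """Classify by the sign of the linear score w.x."""
    return 1 if np.dot(w, x) >= 0 else -1

def fgsm_perturb(w, x, y, eps):
    """Shift each coordinate of x by eps in the direction that
    increases the loss; for a linear score that is -y * sign(w)."""
    return x - eps * y * np.sign(w)

x_adv = fgsm_perturb(w, x, y, eps=0.75)
print(predict(w, x))      # clean input is classified +1
print(predict(w, x_adv))  # the perturbed input is misclassified as -1
```

Here the perturbation is bounded by 0.75 per coordinate, yet it is enough to cross the decision boundary; real attacks on image classifiers use far smaller pixel-level changes that humans cannot see.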
Mr. Shubham Malaviya