Attacking machine learning with adversarial examples

OpenAI Blog
February 24, 2017
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. In this post we'll show how adversarial examples work across different media, and discuss why securing systems against them can be difficult.
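As a concrete illustration of the idea, here is a minimal NumPy sketch of one common way to craft an adversarial example, the fast gradient sign method: nudge the input a small step in the direction that increases the model's loss. The classifier, weights, and step size below are hypothetical toy values, not anything from this post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier with fixed (hypothetical) weights.
w = np.array([1.0, -1.0])

def logit(x):
    return float(w @ x)

# Clean input: the classifier predicts class 1 (logit > 0).
x = np.array([0.3, 0.1])
y = 1.0  # true label

# Gradient of the cross-entropy loss with respect to the *input*:
#   dL/dx = (sigmoid(w.x) - y) * w
grad = (sigmoid(logit(x)) - y) * w

# Fast gradient sign method: step in the sign of the gradient.
epsilon = 0.2
x_adv = x + epsilon * np.sign(grad)

print(logit(x))      # positive: clean input classified as class 1
print(logit(x_adv))  # negative: the small perturbation flips it to class 0
```

A real attack works the same way, just in a much higher-dimensional input space (e.g. image pixels), where an equally small per-pixel step can flip the prediction while remaining nearly invisible to a human.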