Classic machine learning (especially as it is taught in classes) emphasizes a safe, static environment: you are given some unchanging data and asked to produce a predictive model once.
It is formally easier than causal inference or statistical inference, because being right often is enough, no matter what the reason. It lives in an overly idealized world that implicitly makes the following simplifying assumptions:
- The world does not know you are trying to model it (and so cannot take counter-measures)
- Your model has no effect on the world (positive or negative)
Adversarial machine learning is the formal name for the study of what happens when even one of these assumptions is replaced by a slightly more realistic alternative (harmlessly called “relaxing assumptions”).
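A toy sketch, with made-up weights and inputs, of what relaxing the first assumption looks like: once the world knows the model, a tiny, targeted nudge to an input can flip the prediction of a classifier that is perfectly accurate on honest data.

```python
def predict(w, b, x):
    """A simple linear classifier: returns +1 or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def adversarial_nudge(w, x, eps=0.3):
    """The counter-measure: step each feature slightly against the
    sign of the model's weights, the direction that most efficiently
    lowers the score. Requires knowing (or estimating) the model."""
    return [xi - eps * (1 if wi >= 0 else -1) for wi, xi in zip(w, x)]

# Hypothetical model and input, chosen only to illustrate the effect.
w, b = [1.0, 2.0], -0.5
x = [0.4, 0.3]                    # honest input: score 0.5, class +1
x_adv = adversarial_nudge(w, x)   # perturbed input: score -0.4

print(predict(w, b, x))      # +1 on the honest input
print(predict(w, b, x_adv))  # -1 after a small targeted perturbation
```

The same data point, moved by at most 0.3 per feature, now lands on the other side of the decision boundary; nothing about the static-world training procedure anticipated this.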