The dangers of inadvertently poisoned training sets in physics applications
ORAL
Abstract
An increasing number of attacks on online services that use machine learning techniques rely on training-set poisoning, in which an attacker manipulates a fraction of the training data to subvert the training process and, for example, overcome advanced spam filters. While these poisoning attacks are malicious in nature, inadvertent poisoning due to, e.g., poor quality of the input data can strongly influence the predictive outcome of machine learning approaches. Here, we illustrate the potential pitfalls of applying machine learning techniques to a poisoned training set, using spin-glass problems as an example, and highlight the dangers of using such techniques in condensed matter physics applications.
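As a minimal illustration of the mechanism described above, the sketch below (not the authors' code) flips a fraction of the labels in a synthetic "ordered vs. disordered" spin-configuration data set and trains a simple classifier on the corrupted labels. The toy data, the poisoning fractions, and the scikit-learn logistic-regression model are all illustrative assumptions, not the spin-glass instances or methods used in this work.

```python
# Minimal sketch of label-flip "poisoning" on a toy phase-classification task.
# The spin configurations are synthetic stand-ins, not real spin-glass data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_spins = 2000, 64

def make_configs(ordered):
    """Ordered phase: mostly aligned spins; disordered phase: random spins."""
    base = np.ones((n_samples // 2, n_spins))
    flip_prob = 0.1 if ordered else 0.5      # small thermal noise vs. fully random
    flips = rng.random(base.shape) < flip_prob
    return np.where(flips, -base, base)

# Build a labeled data set: 1 = ordered, 0 = disordered.
X = np.vstack([make_configs(True), make_configs(False)])
y = np.hstack([np.ones(n_samples // 2), np.zeros(n_samples // 2)])

# Shuffle and split into training and test sets.
idx = rng.permutation(n_samples)
X, y = X[idx], y[idx]
X_tr, X_te = X[:1500], X[1500:]
y_tr, y_te = y[:1500], y[1500:]

for poison_frac in [0.0, 0.1, 0.3, 0.45]:
    # "Inadvertent poisoning": mislabel a fraction of the training examples.
    y_poisoned = y_tr.copy()
    n_flip = int(poison_frac * len(y_tr))
    flip_idx = rng.choice(len(y_tr), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

    # Train on the corrupted labels, evaluate on clean test labels.
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"poisoned fraction {poison_frac:.2f} -> test accuracy {acc:.3f}")
```

Comparing the test accuracy across poisoning fractions makes the qualitative point of the abstract: the classifier's predictions depend directly on the quality of the labels it is trained on, whether the corruption is malicious or accidental.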
Presenters
-
Chao Fang
Physics, Texas A&M Univ
Authors
-
Chao Fang
Physics, Texas A&M Univ
-
Helmut Katzgraber
Physics and Astronomy, Texas A&M Univ