Poisoning Attacks

Poisoning attacks (also known as causative attacks) inject malicious training data into the classifier's training set. They aim to alter the decision boundary learned by the machine learning algorithm, either to increase the overall classification error, to raise the error rate of specific classes, or to cause misclassification of specific sets of data points.
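The simplest instance of this idea is a label-flipping attack, where the adversary relabels a fraction of training points to drag the decision boundary. The sketch below, which assumes scikit-learn and an illustrative synthetic dataset and flip fraction, flips 40% of one class's training labels and compares the test accuracy of a clean and a poisoned logistic regression model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary task; sizes and seeds are illustrative choices.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Clean baseline: train on the unmodified labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc_clean = clean.score(X_te, y_te)

# Label-flipping poisoning: relabel 40% of the class-1 training points
# as class 0, biasing the learned boundary toward predicting class 0
# and raising the error rate on class 1.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
ones = np.flatnonzero(y_tr == 1)
flip = rng.choice(ones, size=int(0.4 * len(ones)), replace=False)
y_poisoned[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
acc_poisoned = poisoned.score(X_te, y_te)

print(f"clean test accuracy:    {acc_clean:.3f}")
print(f"poisoned test accuracy: {acc_poisoned:.3f}")
```

Flipping labels of only one class, as here, realizes the targeted variant (raising the error rate of a specific class); flipping labels uniformly at random would instead target the overall error.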