Adversarial Learning and Active Learning


David J. Miller (PI) and George Kesidis (co-PI)


Supported by:

AFOSR DDDAS grant 2017-2021

Cisco Systems URP gift

AWS cloud-credits gift



Deep learning, like active learning, has become fundamental to complex Dynamic Data-Driven Applications Systems (DDDAS), a program of the Air Force Office of Scientific Research. Securing DDDAS requires considering threats to the (data-driven) learning mechanisms they employ. This project’s main focus is adversarial learning, including state-of-the-art defenses against test-time evasion attacks (detection of “adversarial samples”), reverse-engineering attacks, and data-poisoning attacks, the last including backdoor attacks (Trojans planted during learning). Our first work in this area was on adversarial active learning.


The backdoor problem, and our breakthrough defenses for it, are particularly important considering: the wide deployment of deep neural networks in applications where safety and security are important factors; the very large training sets deep learning requires; and the relative ease with which a small amount of poisoning (through insiders or a compromised outsourcing/supply-chain process) can implant a backdoor in a deep neural network. Our unsupervised defenses cover post-training scenarios in which the training set is unavailable, and thus solve the supervised IARPA TrojAI problem as a special case. By “unsupervised” we mean no unrealistic assumptions about what is known to the defense, e.g.: no tuned hyperparameters; no unreasonable assumptions regarding the (unknown) backdoor pattern; no unreasonable assumptions regarding the AI’s architecture and learning process; and no unreasonable assumptions regarding available AIs known to be clean or poisoned with the same backdoor one is trying to detect (as in IARPA TrojAI).
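To make the threat concrete, the basic poisoning step behind a backdoor attack can be sketched in a few lines: copies of clean training images are stamped with a small, fixed pattern and relabeled to the attacker's target class, so that the trained network learns to associate the pattern with that class. This is an illustrative toy, not any specific attack from our papers; `poison`, `pattern`, and `mask` are hypothetical names.

```python
import numpy as np

def poison(images, labels, target_class, pattern, mask):
    """Toy backdoor-poisoning sketch: stamp a fixed pattern onto copies
    of clean images (only where mask is True) and relabel them all to
    the attacker's target class. The poisoned copies would then be
    mixed into the training set."""
    poisoned = images.copy()
    poisoned[:, mask] = pattern[mask]  # overwrite masked pixels in every image
    poisoned_labels = np.full(len(images), target_class)
    return poisoned, poisoned_labels
```

In a real attack only a small fraction of the training set is poisoned, and the pattern is chosen to be imperceptible or scene-plausible; detecting such patterns without the training set is exactly what our post-training defenses address.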


We have also produced state-of-the-art detectors of test-time evasion attacks based on nulls of deep-layer activations, including ADA for low-confidence attacks and a GAN-based approach that is effective against both low- and high-confidence attacks.
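The general idea behind such detectors can be illustrated with a much-simplified stand-in (not ADA or the GAN-based detector themselves): fit a class-conditional density model to deep-layer activations of clean data, then flag test samples whose activations are atypical for the class the DNN predicts. The diagonal-Gaussian model and all names below are our simplifying assumptions.

```python
import numpy as np

class ActivationAnomalyDetector:
    """Toy class-conditional density model on deep-layer activations.
    Fits a diagonal Gaussian per class on clean penultimate-layer
    features; a test sample is flagged when its (unnormalized) negative
    log-density under the DNN-predicted class exceeds a threshold."""

    def fit(self, feats, labels):
        self.stats = {}
        for c in np.unique(labels):
            f = feats[labels == c]
            self.stats[c] = (f.mean(axis=0), f.var(axis=0) + 1e-6)
        return self

    def score(self, feat, pred_class):
        mu, var = self.stats[pred_class]
        # negative log-likelihood up to constants; higher = more anomalous
        return float(np.sum((feat - mu) ** 2 / var + np.log(var)))

    def is_attack(self, feat, pred_class, thresh):
        return self.score(feat, pred_class) > thresh
```

An evasion attack typically perturbs the input just enough to flip the predicted class, so its deep-layer activations tend to be atypical for that (wrong) class, which is the signal such detectors exploit.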


Finally, we have devised defenses against data-poisoning attacks that aim to degrade classifier accuracy (rather than plant a backdoor), as well as against reverse-engineering (probing) attacks.
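A crude data-sanitization baseline conveys the flavor of such defenses (this is not the BIC-based mixture-model method of the papers below, just an illustrative sketch under simplifying assumptions): per class, drop the training points farthest from the class mean, on the assumption that availability-poisoning points sit away from the clean class distribution.

```python
import numpy as np

def filter_poison(feats, labels, keep_frac=0.9):
    """Toy training-set sanitization: for each class, keep only the
    keep_frac fraction of points closest to the class mean in feature
    space, discarding the rest as suspected poisoning. Returns a
    boolean keep-mask over the training set."""
    keep = np.zeros(len(feats), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        dists = np.linalg.norm(feats[idx] - feats[idx].mean(axis=0), axis=1)
        n_keep = max(1, int(np.ceil(keep_frac * len(idx))))
        keep[idx[np.argsort(dists)[:n_keep]]] = True
    return keep
```

Mixture-model defenses refine this idea by modeling each class with multiple components and using a model-order criterion to decide which components (and hence which samples) are anomalous, rather than a single mean and a fixed keep fraction.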




         Papers and personnel participating in work supported in whole or in part by this grant:


1.     X. Li, G. Kesidis, D.J. Miller, M. Bergeron, R. Ferguson, V. Lucic. Robust and Active Learning for Deep Neural Network Regression.

2.     Z. Xiang, D.J. Miller and G. Kesidis. A Backdoor Attack against 3D Point Cloud Classifiers. In Proc. International Conference on Computer Vision (ICCV), Oct. 2021.

3.     X. Li, D.J. Miller, Z. Xiang, G. Kesidis. A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers. Submitted.

4.     H. Wang, Z. Xiang, D.J. Miller and G. Kesidis. Anomaly Detection of Test-Time Evasion Attacks Using Class-Conditional Generative Adversarial Networks. Submitted.

5.     Y. Tao, Z. Xiang, D.J. Miller and G. Kesidis. Anomaly-Detection Defense against Test-Time Evasion Attacks on Robust DNNs. To appear in Springer DDDAS book series Vol. II, F. Darema and E. Blasch (Eds.), 2021.

6.     Z. Xiang, D.J. Miller and G. Kesidis. Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing. Elsevier Computers & Security, 2021.

7.     Z. Xiang, D.J. Miller, G. Kesidis. L-RED: Efficient Post-Training Detection of Imperceptible Backdoor Attacks without Access to the Training Set. In Proc. IEEE ICASSP, 2021.

8.     Z. Xiang, D.J. Miller, and G. Kesidis. Detection of Backdoors in Trained Classifiers Without Access to the Training Set. IEEE TNNLS, Dec. 2020; shorter version in Proc. IEEE ICASSP, Barcelona, May 2020.

9.     Z. Xiang, D.J. Miller, H. Wang, and G. Kesidis. Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic. Neural Computation, Feb. 2021; shorter version in Proc. IEEE MLSP, Sept. 2020.

10.  X. Li, D.J. Miller, Z. Xiang, and G. Kesidis. A Scalable Mixture-Model Based Defense Against Data-Poisoning Attacks on Classifiers. In Proc. DDDAS Conference, Oct. 2020.

11.  D.J. Miller, Z. Xiang and G. Kesidis. Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks. Proceedings of the IEEE 108(3), March 2020.

12.  G. Kesidis, D.J. Miller, and Z. Xiang. Notes on Margin Training and Margin p-Values for Deep Neural Network Classifiers. Dec. 5, 2019.

13.  Z. Xiang, D.J. Miller and G. Kesidis. A Benchmark Study of Backdoor Data Poisoning Defenses for Deep Neural Network Classifiers and A Novel Defense. In Proc. IEEE MLSP, Pittsburgh, Sept. 2019.

14.  D.J. Miller, Y. Wang, G. Kesidis. Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time. Neural Computation 31(8), Aug. 2019; shorter version in Proc. IEEE MLSP, Aalborg, Denmark, Sept. 2018; preprint Dec. 2017, revised June 2018.

15.  Y. Wang, D.J. Miller, G. Kesidis. When Not to Classify: Detection of Reverse Engineering Attacks on DNN Image Classifiers. In Proc. IEEE ICASSP, Brighton, UK, May 2019.

16.  D.J. Miller, X. Hu, Z. Qiu and G. Kesidis. Adversarial Learning: A Critical Review and Active Learning Study. In Proc. IEEE MLSP, Tokyo, Sept. 25-28, 2017.

Some related papers:

1.     Z. Qiu, D.J. Miller and G. Kesidis. A Maximum Entropy Framework for Semisupervised and Active Learning with Unknown or Label-Scarce Categories. IEEE Trans. on Neural Networks and Learning Systems (TNNLS), Apr. 2017.

2.     Z. Qiu, D.J. Miller and G. Kesidis. Flow-Based Botnet Detection through Semi-supervised Active Learning. In Proc. IEEE ICASSP, New Orleans, March 2017.

3.     D.J. Miller, Z. Qiu and G. Kesidis. Parsimonious Cluster-based Anomaly Detection (PCAD). In Proc. IEEE MLSP, Aalborg, Denmark, Sept. 2018.


Some related links:


         github page

NROTC projects

Air Force Research Lab