Adversarial Training and Provable Defenses: Bridging the Gap

Mislav Balunovic and Martin Vechev

This research presents convex layerwise adversarial training (COLT), a new method for training neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model neural network training as a procedure that includes both a verifier and an adversary. In every iteration, the verifier aims to certify the network using a convex relaxation, while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that COLT achieves the best of both worlds: it produces a state-of-the-art neural network with 60.5% certified robustness and 78.4% accuracy on the challenging CIFAR-10 dataset under an L∞ perturbation of 2/255. This significantly improves over the best concurrent results of 54.0% certified robustness and 71.5% accuracy.
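
To make the interplay between verifier and adversary concrete, below is a minimal sketch of one COLT-style training step in PyTorch. It is an illustration under simplifying assumptions, not the authors' implementation: interval (box) bounds stand in for the tighter zonotope-based relaxation used in the paper, and the two-layer network, the split index k, and all hyperparameters are hypothetical. The verifier side propagates the input region through the first k layers; the adversary then searches inside the resulting latent region for a point that makes the remaining layers misclassify, and those layers are trained on that worst-case point.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward(layers, x):
    # Apply a list of layers sequentially.
    for layer in layers:
        x = layer(x)
    return x

def box_propagate(layers, lo, hi):
    # Verifier side (sketch): push an L-infinity box through linear and
    # ReLU layers. The paper uses a tighter convex relaxation; interval
    # bounds are an assumption made here for brevity.
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = F.linear(mid, layer.weight, layer.bias)
            rad = F.linear(rad, layer.weight.abs())
            lo, hi = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

def latent_attack(tail, lo, hi, y, steps=10, lr=0.1):
    # Adversary side: PGD inside the latent box [lo, hi] to find a point
    # that maximizes the loss of the remaining (tail) layers.
    z = ((lo + hi) / 2).clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(forward(tail, z), y)
        grad, = torch.autograd.grad(loss, z)
        z = (z + lr * grad.sign()).clamp(lo, hi).detach().requires_grad_(True)
    return z.detach()

def colt_step(layers, k, opt, x, y, eps=2 / 255):
    # One training iteration: relax the first k layers, attack inside the
    # relaxation, train the remaining layers on the worst-case point.
    lo, hi = box_propagate(layers[:k], x - eps, x + eps)
    z = latent_attack(layers[k:], lo, hi, y)
    opt.zero_grad()
    loss = F.cross_entropy(forward(layers[k:], z), y)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    net = [nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)]
    opt = torch.optim.SGD([p for l in net for p in l.parameters()], lr=0.01)
    x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
    print(colt_step(net, 2, opt, x, y))  # train the layers past the first ReLU
```

In the full method, training proceeds stagewise: once the layers up to index k are trained, they are frozen, the relaxation is propagated one layer deeper, and k advances, until the whole network has been covered.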