CS781: Tool-based Practice Problem Set 1
We have read about three different algorithms (DeepPoly,
αβ-CROWN and Reluplex) for verifying local robustness
properties of deep neural networks. In this exercise (to be completed
in two parts), we will try out three tools that implement
these algorithms (and more) to perform some local robustness
calculations.
In this first part of the exercise, we will use the ERAN toolset,
which includes an implementation of DeepPoly. Instructions for
installing ERAN have already been posted; please follow them to
install ERAN on your laptop.
We saw in the class demo how DeepPoly can be used to determine
whether a given neural network for classifying handwritten digits
(MNIST benchmarks) is locally robust to L_infinity perturbations
with respect to a given image.
In this simple exercise, we will use the following four small-to-moderate-sized
neural networks from ERAN's GitHub repository.
- MNIST, 5x100, fully-connected, ReLU activation, DiffAI training defence
- MNIST, 6x500, fully-connected, Sigmoid activation, no training defence
- MNIST, 6x500, fully-connected, Sigmoid activation, PGD ε=0.3 training defence
- MNIST, ConvMed, convolutional, Tanh activation, no training defence
We will also use 6 handwritten images of the digit "4" given in
this CSV file. Recall that each row of the CSV file represents
one image: the first column is the label and the remaining 784 columns
give the pixel intensities (on a scale of 0-255) of a 28×28
pixel matrix.
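To get a feel for this row format, the following sketch parses such a CSV file into (label, 28×28 image) pairs using only the standard library. The function name `load_mnist_rows` is our own, not part of ERAN.

```python
import csv

def load_mnist_rows(path):
    """Parse an MNIST CSV file: each row is a label followed by
    784 pixel values (0-255), flattened row-major from a 28x28 image."""
    images = []
    with open(path) as f:
        for row in csv.reader(f):
            label = int(row[0])
            pixels = [int(v) for v in row[1:]]
            # Reshape the flat 784-vector into a 28x28 matrix, scaled to [0, 1]
            image = [[pixels[r * 28 + c] / 255.0 for c in range(28)]
                     for r in range(28)]
            images.append((label, image))
    return images
```

Scaling to [0, 1] matches the convention that epsilon values passed to the tools are fractions of the full 0-255 pixel range.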
For each (network, image) pair, you are required to use DeepPoly to
find the maximum value of epsilon for which the network is verified
robust to L_infinity perturbations with respect to the given image.
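Since DeepPoly certifies robustness for a fixed epsilon, one way to find the maximum is bisection: if the network is certified at some epsilon, it is certified at every smaller epsilon, so the boundary can be bracketed. The sketch below assumes a hypothetical predicate `is_robust(eps)` that wraps a single DeepPoly run (e.g. invoking ERAN as a subprocess and parsing its verdict); the search bounds and tolerance are illustrative choices, not values from the handout.

```python
def max_robust_epsilon(is_robust, lo=0.0, hi=0.2, tol=1e-4):
    """Bisect for the largest epsilon (up to tol) at which is_robust(eps)
    holds, assuming monotonicity: certified at eps implies certified below."""
    if not is_robust(lo):
        return None  # not even certified at the lower bound
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_robust(mid):
            lo = mid  # certified: the true maximum lies at or above mid
        else:
            hi = mid  # not certified: the maximum lies below mid
    return lo
```

Note that DeepPoly is incomplete, so "not certified" does not prove the network is actually non-robust; this search reports the largest epsilon DeepPoly can certify, which is what the table asks for.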
Warning: the above experiments may take a fair bit of CPU time on your laptop, so please
keep your laptop connected to a power supply while running them.
You must report your results as a table (see format below) in a PDF file to be uploaded to
Moodle. You must also state any inferences you can draw from this set of
experimental results.
| Network No. | Benchmark No. | Max ε value | Time taken |
|-------------|---------------|-------------|------------|
| 1           | 1             |             |            |
| 1           | 2             |             |            |
To help you visualize the images and how they get distorted with perturbations,
you can use the following Python scripts:
- mnist_visualize.py: To run the script, type the following from your conda eran environment:
python3 mnist_visualize.py path_of_mnist_csv_file
This shows the image corresponding to each row of the CSV file in a
separate viewer window, along with the image's label. After showing
each image, the script waits for you to close the viewer window (type
Ctrl-W in the viewer window) before it shows the next image.
- mnist_linf_perturb.py: To run the script, type the following from your conda eran environment:
python3 mnist_linf_perturb.py epsilon_val_between_0_and_1 path_of_mnist_csv_file
This has the same functionality as mnist_visualize.py, except that each pixel is perturbed by a random value between -maxval and +maxval, where maxval = epsilon × 255 (recall that 255 is the maximum pixel value in the MNIST dataset).
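The perturbation that mnist_linf_perturb.py applies can be sketched as follows. This is our own illustrative reimplementation, not the script's actual code; in particular, clipping the perturbed values back into [0, 255] is an assumption on our part.

```python
import random

def linf_perturb(pixels, epsilon, seed=None):
    """Perturb each pixel (0-255) by a uniform random value in
    [-maxval, +maxval], where maxval = epsilon * 255, and clip the
    result back to the valid [0, 255] range (clipping is an assumption)."""
    rng = random.Random(seed)
    maxval = epsilon * 255
    return [min(255.0, max(0.0, p + rng.uniform(-maxval, maxval)))
            for p in pixels]
```

Viewing a few perturbed images at the epsilon values you certified is a good sanity check: certified epsilons should correspond to perturbations that leave the digit clearly recognizable.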
To help you visualize the different neural networks from .onnx files, you are encouraged to try out netron.app