Homework 5
This homework teaches you all about over-fitting and how to reduce it. You’ll build on the network you trained for homework 4. If your solution to homework 4 didn’t reach 90% validation accuracy, you might want to wait for the master solution.
In class we looked at how to prevent overfitting through data augmentation, ensembles, and early stopping. In this assignment you will take your convnet to 96+% validation accuracy.
Note: This assignment will take significantly longer to tweak and tune. Start early, and maybe find a good movie to watch while your networks train.
Data augmentation
Try multiple data augmentation techniques; not all of them will work well for supertux. A sketch combining a few of them follows the list below.
- RandomHorizontalFlip: horizontally flip the given image randomly with a given probability (default p=0.5).
- ColorJitter: Randomly change the brightness, contrast and saturation of an image.
- RandomSizedCrop (renamed RandomResizedCrop in newer torchvision): crop the image to a random size and aspect ratio, then resize it back.
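A minimal sketch of how these transforms might be combined with torchvision. The parameter values and the 64x64 target size are assumptions to tune, and the resulting transform would be passed to whatever loader the starter code uses:

```python
import torchvision.transforms as T

# Illustrative augmentation pipeline; every value here is a starting guess,
# and the 64x64 target size assumes the supertux images are that resolution.
train_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),              # mirror half the images
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.RandomResizedCrop(64, scale=(0.8, 1.0)),  # random crop, resized back to 64x64
    T.ToTensor(),                               # PIL image -> float tensor in [0, 1]
])
```

Apply augmentation only to the training set; keep the validation transform deterministic so accuracy numbers stay comparable across runs.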
Weight regularization
Add an L2 penalty to regularize each weight layer. In PyTorch, you can apply L2 regularization through the weight_decay parameter of the optimizer itself. Read more about this in the torch.optim documentation.
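For example, a minimal sketch (the model variable, learning rate, and decay strength are placeholders to tune):

```python
import torch

# L2 regularization via weight decay, applied by the optimizer itself.
# `model` stands in for your convnet from models.py.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)  # weight_decay = L2 strength
```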
Early stopping
This time around we will compute the validation accuracy as we train the model. Pick a good number of iterations to train for: too many iterations might lead to overfitting, while too few might not reach a good validation accuracy.
Don’t tune this number too aggressively, though; fitting it too closely to the validation set is itself a form of overfitting.
You will also need to modify the code to save model checkpoints at intermediate iterations, as sketched below. However, your final submission should contain only one model checkpoint, named convnet.th, which will be used for grading.
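A hypothetical sketch of checkpointing on the best validation accuracy; train_step, validation_accuracy, num_iterations, and eval_every are placeholders for whatever homework.train actually provides:

```python
import torch

# Keep only the checkpoint with the best validation accuracy so far.
best_acc = 0.0
for it in range(num_iterations):
    train_step(model)                     # placeholder: one optimizer step
    if it % eval_every == 0:
        acc = validation_accuracy(model)  # placeholder: evaluate on validation set
        if acc > best_acc:
            best_acc = acc
            torch.save(model.state_dict(), 'convnet.th')  # overwrite previous best
```

If the starter code provides its own save helper, prefer that, so the grader can load your checkpoint.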
Ensembles
You may train an ensemble of multiple models to boost your performance, but it should not be necessary.
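If you do try this, one common approach is to average the logits of several independently trained models; a minimal sketch, assuming models is a list of trained networks:

```python
import torch

# Average the per-class scores of several models, then pick the best class.
def ensemble_predict(models, images):
    with torch.no_grad():
        logits = torch.stack([m(images) for m in models])  # (n_models, batch, n_classes)
        return logits.mean(dim=0).argmax(dim=1)            # averaged scores -> labels
```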
Getting Started
We provide you with starter code that loads the image dataset and the corresponding labels from a training and validation set.
The code will measure classification accuracy as you train the model. We also provide an optional tensorboard interface.
- Define your model in models.py.
- Train your model: python3 -m homework.train
- Optionally, use tensorboard to visualize your training loss and accuracy: run python3 -m homework.train -l myRun and, in another terminal, tensorboard --logdir myRun, where myRun is the log directory. Pro-tip: you can run tensorboard on the parent directory of many logs to visualize them all.
- Test your model: python3 -m homework.test
- Evaluate your code against the grader: python3 -m grader homework
- Create the submission file: python3 -m homework.bundle
If your model trains slowly or does not reach the accuracy you’d like, you can increase the number of training iterations in homework.train by providing the -i argument with the desired number of iterations, e.g. -i 20000. While developing your model, use fewer iterations, e.g. -i 1000, so you can iterate quickly.
Input example: (an image from the dataset)
Output example: [-10.2, 4.3, 1.2, 8.7, -1.3, 2.8]
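Assuming the output vector holds one score per class, the predicted label is simply the index of the largest score:

```python
import torch

scores = torch.tensor([-10.2, 4.3, 1.2, 8.7, -1.3, 2.8])
print(scores.argmax().item())  # 3 -- the index of the largest score, 8.7
```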
Grading Policy
The grading will depend on your test accuracy in the following manner:
* Test accuracy between 92% and 94% is graded linearly; reaching 94% earns the full 100 points.
* Accuracy in the range [94.5%, 95%) earns 100 points + 10 extra credit.
* Accuracy in the range [95%, 95.5%) earns 100 points + 20 extra credit.
* Accuracy of 95.5% or higher earns 100 points + 30 extra credit.
* Please note that our test set is slightly harder than the validation set: expect your validation accuracy to run roughly 2-4% higher than your test accuracy.
There are no guarantees about test performance, though, if you tune your parameters too much or train on the validation set.