Congratulations to the winners!
I’m eager to learn how my fellow competitors approached this, so please share your solutions.
Since this was a computer vision task, pre-trained deep convolutional networks should work like a charm.
There are numerous pre-trained models; one of the best open-source ones is GoogLeNet from BVLC Caffe. So I took it, fine-tuned it with several tricks, and ensembled the resulting models. I’ll probably write a blog post with all the details of my solution.
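For anyone unfamiliar with fine-tuning: the idea is to start from pretrained weights and train mostly the later layers, keeping the early feature-extractor layers frozen (or on a much smaller learning rate). This is not the actual Caffe setup, just a pure-Python toy with made-up numbers to show the mechanics:

```python
# Toy illustration of fine-tuning: start from "pretrained" weights,
# freeze the early layer (lr = 0), and train only the new head.
# The model and data here are synthetic -- purely illustrative.

def forward(x, w_early, w_head):
    # two-"layer" linear model: hidden = w_early * x, output = w_head * hidden
    return w_head * (w_early * x)

# Pretend this weight came from pretraining on a large dataset.
w_early = 2.0          # pretrained feature extractor -- frozen
w_head = 0.1           # freshly initialized head -- trainable

# Tiny target-task dataset: y = 6 * x, so the ideal head weight is 3.0
data = [(1.0, 6.0), (2.0, 12.0), (3.0, 18.0)]

lr_head = 0.01         # head learns
lr_early = 0.0         # early layer frozen

for _ in range(200):
    for x, y in data:
        err = forward(x, w_early, w_head) - y
        # gradients of squared error w.r.t. each weight
        grad_head = 2 * err * (w_early * x)
        grad_early = 2 * err * (w_head * x)
        w_head -= lr_head * grad_head
        w_early -= lr_early * grad_early   # no-op: frozen

# w_head converges near 3.0 while w_early stays at its pretrained value
```

In a real Caffe workflow the same effect is achieved by loading the pretrained `.caffemodel` and setting per-layer learning-rate multipliers in the prototxt.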
How did you solve the challenge? And I wonder what AUC you reached without pre-trained models.
@L_V_S Thanks for starting this thread and definitely let us know if you end up writing up your approach in a blog post.
Looking forward to hearing what people did!
@L_V_S I wanted to try something like what you did, but due to a lack of experience all I got working was Overfeat feature extraction + an SVM (Private: 0.9205).
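For anyone curious, the pipeline is simple once you have the features. A hedged scikit-learn sketch, with random synthetic data standing in for the real Overfeat activations (the shapes and numbers are assumptions, not my actual setup):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)

# Stand-in for Overfeat features: in practice you run each image through
# the pretrained network and keep an intermediate layer's activations.
n_train, n_test, n_feat = 200, 100, 64
X_train = rng.randn(n_train, n_feat)
X_test = rng.randn(n_test, n_feat)
y_train = rng.randint(0, 2, n_train)
y_test = rng.randint(0, 2, n_test)

# Shift the positive class so the toy problem is actually separable
# and the AUC is meaningful.
X_train[y_train == 1] += 1.0
X_test[y_test == 1] += 1.0

clf = LinearSVC(C=1.0)
clf.fit(X_train, y_train)

# Use the signed distance to the hyperplane as a ranking score for AUC.
scores = clf.decision_function(X_test)
auc = roc_auc_score(y_test, scores)
```

AUC only needs a ranking, so `decision_function` works fine here without calibrating the SVM outputs into probabilities.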
Without a pretrained model I only managed 0.7686 (Private) using a rather questionable hand-crafted computer vision features approach.
Looking forward to your blog post!
First, I would like to congratulate @L_V_S on your impressive score, and thank you for sharing your method.
In this competition I avoided pre-trained networks, since my main goal was to learn how to build and train conv-nets. Using nolearn (+ Lasagne + Theano) in Python, I reached an AUC-ROC of 96.54% (10th place) with model averaging, and roughly 96% with a single net (VGG11 with maxout). Since the dataset was quite small, I relied heavily on data augmentation (horizontal flips, 10% zoom in/out, ±10-pixel translations).
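For others who want to try the same tricks: those augmentations are only a few lines of NumPy each. A minimal sketch for grayscale images (zoom is implemented here as crop + nearest-neighbour resize; this is an illustration, not my exact pipeline):

```python
import numpy as np

def flip_lr(img):
    # horizontal mirror
    return img[:, ::-1]

def translate(img, dy, dx):
    # shift by (dy, dx) pixels, filling the vacated border by edge replication
    h, w = img.shape
    pad = np.pad(img, ((abs(dy), abs(dy)), (abs(dx), abs(dx))), mode="edge")
    return pad[abs(dy) - dy: abs(dy) - dy + h,
               abs(dx) - dx: abs(dx) - dx + w]

def zoom_in(img, factor=0.10):
    # crop the central (1 - factor) region, then nearest-neighbour resize back
    h, w = img.shape
    ch, cw = int(h * (1 - factor)), int(w * (1 - factor))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    yi = np.arange(h) * ch // h
    xi = np.arange(w) * cw // w
    return crop[yi][:, xi]

img = np.arange(100, dtype=float).reshape(10, 10)
for aug in (flip_lr(img), translate(img, 1, -2), zoom_in(img)):
    assert aug.shape == img.shape   # augmentations preserve image size
```

Applying a random combination of these per epoch effectively multiplies the size of a small training set.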
I’d be very happy to read your blog post! In particular, I’d like to know the method you used to ensemble your models.
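Since ensembling came up: two common options (not necessarily what @L_V_S did) are plain probability averaging and rank averaging; the latter is robust when models output differently calibrated scores, because AUC depends only on the ordering. A quick sketch with made-up predictions:

```python
import numpy as np

def mean_average(preds):
    # preds: list of per-model score arrays for the same test examples
    return np.mean(preds, axis=0)

def rank_average(preds):
    # replace each model's scores by their ranks before averaging,
    # then rescale to [0, 1]; this discards calibration, keeps ordering
    n = len(preds[0])
    ranks = [np.argsort(np.argsort(p)) for p in preds]
    return np.mean(ranks, axis=0) / (n - 1)

# Hypothetical probabilities from two models on four test examples
model_a = np.array([0.10, 0.40, 0.35, 0.80])
model_b = np.array([0.20, 0.60, 0.30, 0.90])

blended_mean = mean_average([model_a, model_b])
blended_rank = rank_average([model_a, model_b])
```

The double `argsort` trick converts scores to ranks 0..n-1 (assuming no ties); for production use, `scipy.stats.rankdata` handles ties more carefully.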
I’ve written a blog post summarizing my solution; it is available here.