Deep learning has gained even more prominence recently with Facebook's hiring of Yann LeCun and Mark Zuckerberg's attendance at NIPS 2013. I seized the opportunity of the new year's holidays to get a better understanding of this subfield of artificial intelligence by studying the tutorial on Unsupervised Feature Learning and Deep Learning by Stanford professor Andrew Ng. The tutorial teaches basic concepts of deep learning, such as stacked neural networks, the backpropagation algorithm, autoencoders and sparsity, softmax regression, and convolution and pooling, using examples of classifying the MNIST database of handwritten digits and the STL-10 image database. Since I plan to apply the techniques learned in this tutorial to problems we face at Zemanta, I rewrote the examples in Python using numpy and scipy. I've put the code on GitHub so that people not versed in Matlab can also play with it and see for themselves how trivial intelligence becomes once you learn the right models. For example, this code
[code language="python"]
activation = data
for layer in stack:
    activation = sigmoid(activation.dot(layer.W.T) + layer.b)

h_data = exp(softmaxTheta.dot(activation.T))
h_data = h_data / sum(h_data, 0)
return argmax(h_data, axis=0)
[/code]
achieves 97.7% accuracy on the MNIST dataset (see stack_autoencoder.py for the complete example). I consider this an amazing feat given the simplicity of the model and the complexity of the task.
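For readers who want to see the forward pass in a self-contained form, here is a minimal sketch of the same prediction step. The `Layer` class, `predict` function, and the random weights are my own illustrative assumptions, not part of the tutorial code; with trained weights from the stacked autoencoder, this is exactly the computation the snippet above performs.

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic function
    return 1.0 / (1.0 + np.exp(-z))

class Layer:
    """Hypothetical container for one layer's parameters."""
    def __init__(self, W, b):
        self.W = W  # weights, shape (out_dim, in_dim)
        self.b = b  # biases, shape (out_dim,)

def predict(stack, softmax_theta, data):
    """Feed data through the stacked layers, then classify with softmax."""
    activation = data  # shape (n_samples, in_dim)
    for layer in stack:
        activation = sigmoid(activation.dot(layer.W.T) + layer.b)
    # Softmax scores, shape (n_classes, n_samples)
    scores = np.exp(softmax_theta.dot(activation.T))
    scores = scores / scores.sum(axis=0)
    # Predicted class label for each sample
    return np.argmax(scores, axis=0)

# Toy usage with random (untrained) weights; predictions are meaningless
# here, but the shapes match the real pipeline.
rng = np.random.RandomState(0)
stack = [Layer(rng.randn(5, 8), rng.randn(5)),   # 8 inputs -> 5 hidden
         Layer(rng.randn(4, 5), rng.randn(4))]   # 5 hidden -> 4 hidden
softmax_theta = rng.randn(3, 4)                  # 3 classes
data = rng.randn(10, 8)                          # 10 samples, 8 features
preds = predict(stack, softmax_theta, data)
print(preds.shape)  # (10,)
```

Each layer's output becomes the next layer's input, which is all "stacking" means at prediction time; the learning happens earlier, when each autoencoder and the softmax classifier are trained.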