The field of Artificial Intelligence is abuzz with excitement. After two decades of dirty tricks and feature engineering, we have finally made a major step towards strong AI. An enormous increase in computing power over the last decade and an even bigger increase in available data have propelled some old techniques into the spotlight under the new name of deep learning. The technique has already seen commercial application, with Google improving image search in Google+ and Microsoft presenting simultaneous translation from English to Mandarin. The essence of deep learning is a multilayer artificial neural network in which each layer models progressively more complex features. Instead of the hand-engineered features required by traditional machine learning techniques, the input to a deep learning system is a raw signal such as image pixels, characters of text, or sound recordings, and the first few layers of the system learn features on their own. If the training of a deep learning system is successful, the last few layers should correspond to the concepts we are interested in, e.g. cats.
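To make the architecture concrete, here is a minimal sketch of such a multilayer network in plain NumPy. The layer sizes, the 64-value "pixel" input, and the two output concepts are all made up for illustration; a real system would be far larger and, crucially, trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer sizes: 64 raw "pixels" in, two hidden layers,
# and 2 output scores standing in for learned concepts (e.g. cat / not cat).
sizes = [64, 32, 16, 2]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(pixels):
    """Pass a raw pixel vector through the stack of layers.

    Each hidden layer re-represents its input; in a trained network,
    later layers would correspond to progressively more abstract features.
    """
    h = pixels
    for w in weights[:-1]:
        h = relu(h @ w)
    return h @ weights[-1]  # last layer: scores for the target concepts

scores = forward(rng.random(64))
print(scores.shape)  # (2,)
```

The point of the sketch is only the shape of the computation: raw signal in at the bottom, a small vector of concept scores out at the top, with no hand-engineered features in between.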
The breakthrough of deep learning lies not so much in improved algorithms as in a new way of learning the connections in the system. Instead of learning the whole multilayer artificial neural network at once, a deep learning system proceeds layer by layer. Since such an approach requires an enormous amount of data, most of the learning is done on unlabeled data in an unsupervised fashion, while labeled data is used mostly to fine-tune and direct the model. There is some speculation that a similar process is responsible for the development of the human brain, which makes deep learning research even more exciting.
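The layer-by-layer idea can be sketched with stacked autoencoders: each layer is trained on unlabeled data to reconstruct its own input, and its learned code becomes the input to the next layer. This is a toy illustration under assumed settings (random stand-in data, made-up layer sizes, plain gradient descent), not a production recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(data, hidden, epochs=50, lr=0.1):
    """Train one autoencoder on unlabeled data; return its encoder weights."""
    n_in = data.shape[1]
    w_enc = rng.normal(0.0, 0.1, (n_in, hidden))
    w_dec = rng.normal(0.0, 0.1, (hidden, n_in))
    for _ in range(epochs):
        h = sigmoid(data @ w_enc)        # encode
        recon = h @ w_dec                # decode (linear reconstruction)
        err = recon - data               # reconstruction error
        # Gradient descent on mean squared reconstruction error.
        grad_dec = h.T @ err / len(data)
        grad_h = err @ w_dec.T * h * (1.0 - h)
        grad_enc = data.T @ grad_h / len(data)
        w_dec -= lr * grad_dec
        w_enc -= lr * grad_enc
    return w_enc

# Unlabeled "raw signal": 200 random 20-dimensional vectors (stand-in data).
unlabeled = rng.random((200, 20))

# Greedy layer-wise pretraining: train layer 1, feed its codes to layer 2.
layers = []
h = unlabeled
for hidden in (12, 6):
    w = pretrain_layer(h, hidden)
    layers.append(w)
    h = sigmoid(h @ w)  # this layer's representation feeds the next one

print([w.shape for w in layers])  # [(20, 12), (12, 6)]
```

After this unsupervised stage, the stacked encoder weights would initialize a full network, and the comparatively scarce labeled data would be used to fine-tune it, as the paragraph above describes.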