29.01.16

Deep learning and black magic

Neural networks and black magic, machines that can remember, artificial intelligence, and an auditorium full of young people discussing deep learning. Are you expecting the Terminator to appear on stage? Well... this is not a science fiction movie. This is the start of a new series of Geekstone meetups in 2016.

The first gathering of this kind was opened by Mihailo Isakov, a teaching assistant at the Faculty of Technical Sciences in Novi Sad, who shared "the black magic" of deep neural networks with the audience. In an engaging way, with plenty of examples from his own experience, this young lecturer brought the notion of machine learning closer to the attendees and probably awakened in some of them a wish for deeper learning.

But what does deep learning actually mean? There is a lot of discussion around the topic, mostly from people with no hands-on experience in deep learning, who therefore see something magical in it. There are plenty of TED talks that all arrive at the same conclusion: "It is a player that will change the game." According to Mihailo, such claims are made without any real framework; at its core, deep learning is nothing more than basic mathematics, and the field is still very young, which is exactly what makes it attractive to so many people. In his lecture, "Intro to deep learning", he explained the basic principles that have made it possible to solve many new classes of problems. As he said, most of these solutions are simple and do not require much extra machinery to give results. Attendees also had the opportunity to hear him talk about convolutional networks, auto-encoders and LSTMs.

In the past few years there have been many changes in the field of artificial intelligence (AI). It has spread beyond the academic world, with major players like Google, Microsoft and Facebook creating their own research teams and producing some impressive results. Some of this can be attributed to the abundance of raw data generated by social network users, as well as to the cheap computational power available via GPGPUs. A large part of these changes is driven by a new trend in AI, and in machine learning in particular, better known as "deep learning".

Deep learning belongs to a much wider family of machine learning methods based on learning representations of data. Deep learning architectures such as deep neural networks, convolutional neural networks, deep belief networks and recurrent neural networks are used in fields such as computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they produce state-of-the-art results on a variety of tasks.

As Mihailo said during the lecture, we have practically always been able to build networks that give good results on a limited training set; the real goal is regularization (and, with it, generalization). Why? Because through good regularization, networks stop memorizing and start to understand what is going on.
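For readers curious what that looks like in practice, here is a minimal sketch in PyTorch (not from the lecture; the layer sizes, dropout rate and weight-decay value are illustrative assumptions). Dropout and L2 weight decay are two common regularizers that push a network away from memorizing its training set:

    import torch
    import torch.nn as nn

    # A tiny classifier with dropout: during training, dropout randomly
    # zeroes activations, so the network cannot rely on any single unit
    # to memorize a training example.
    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Dropout(p=0.5),  # regularization: drop half the activations
        nn.Linear(256, 10),
    )

    # weight_decay adds an L2 penalty on the weights, another common
    # regularizer that discourages fitting noise in the training data.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

With these two regularizers in place, the network is penalized for leaning too heavily on any single weight or activation, which is one concrete way of nudging it from remembering toward understanding.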
Does this topic tease your imagination and make you think? For me, it was almost like watching a movie unfold in front of me. Mihailo Isakov has opened a new Geekstone chapter and announced a rush of great topics, but he has also set some homework for the other lecturers. He is about to publish a paper on using deep learning to recognize bot comments on media websites, so we can probably expect more interesting lectures from this young man. You can check his presentation at https://drive.google.com/a/vegaitsourcing.rs/file/d/0B-JezYn4rPEyMW5RZm1XM0p2OXc/view

Author: Andrea Jerinic