De Econometrist

De Econometrist takes a statistical look at the world.


Neural networks, the machine learning of the future

The human brain solves millions of problems daily. To do so, it first acquires the information needed to solve a given problem, then uses and modifies that information to form a clear picture of what the problem entails. Once it has done this, it can (in most cases) solve the problem, drawing on the knowledge it has already gathered from similar problems. Neural networks, named after the neurons in the human brain, work in a very similar way. They consist of multiple algorithms and are designed to solve problems using the knowledge extracted from the data on which they were trained. Just as a human learns about problems and their surroundings from the moment of birth, a neural network also learns. The main difference is that humans have somewhat more freedom: they can choose what to learn, whereas a neural network can only learn from the data it is fed. But what use do we have for neural networks, and why are people concerned that computers equipped with neural-network AI will take over the world?

 

The dawn of a new era in computer science 

The first step towards neural networks was taken in 1943 by Warren McCulloch and Walter Pitts, a neurophysiologist and a mathematician, respectively. Together they wrote a paper describing how one might build a neural network out of electrical circuits. In 1949, Donald Hebb described in his book ‘The Organization of Behavior’ how repeatedly using a neural pathway strengthens that connection. Combining the theories from these works, people started building a practical version of the model described. At the end of the 1950s, two researchers at Stanford accomplished the unthinkable: they managed to build a self-learning algorithm. Many more improvements followed in the years after.

 

The mechanism behind neural networks

The conventional way of visualizing a neural network is as a number of layers, each consisting of a number of nodes. Each node holds a piece of information represented by a number. Every node in a given layer is connected to all nodes in the layers directly before and after it. The system works from left to right, with the leftmost layer containing the input nodes and the rightmost layer the output nodes. The layers in between are called hidden layers, and they are the key to the system. Each node in a hidden layer takes the values from all nodes in the layer before it and transforms them using a certain formula (usually a weighted, linear one; more on these formulas in the next section). In this way the layers try to uncover certain details about the given data. The beauty of neural networks is that you can train them: you give the network an input and it produces an output, and you then supply the correct output as a comparison. From this comparison the network can make small tweaks to its parameters to improve the accuracy of its predictions.
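The layered structure described above can be sketched in a few lines of code. This is a minimal illustration, not a reference implementation: the layer sizes, the random weights, and the sigmoid nonlinearity are all arbitrary choices for demonstration.

```python
import numpy as np

# A tiny network: 2 input nodes -> 3 hidden nodes -> 1 output node.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)    # hidden-layer weights and biases
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)    # output-layer weights and biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)      # hidden layer: weighted sum, bias, nonlinearity
    return sigmoid(W2 @ h + b2)   # output layer does the same with the hidden values

x = np.array([0.5, -1.0])         # input nodes (leftmost layer)
y_pred = forward(x)               # the network's output (rightmost layer)
y_true = np.array([1.0])          # the correct output, supplied for comparison
error = y_true - y_pred           # training would tweak W1, b1, W2, b2 to shrink this
```

In a real training loop, the error would be propagated backwards through the layers to decide how each weight should be tweaked; here it is only computed to show what the network compares against.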

 

The transformation between the layers

The most common and simplest transformation is the linear one. Here, every link between nodes is given a weight, and these weights are used to calculate a weighted sum of all the values in the previous layer. This weighted sum can highlight certain aspects of the input and help determine what the correct output should be. A bias is then added to this sum; the bias determines from which point on the incoming data is significant enough to contribute to the prediction.
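For a single node, the computation amounts to one dot product and an addition. The numbers below are made up purely for illustration; the ReLU at the end is one common way of letting the bias act as a significance threshold.

```python
import numpy as np

values  = np.array([0.2, 0.8, -0.5])   # values of the nodes in the previous layer
weights = np.array([0.9, -0.3, 0.4])   # weight on each incoming link
bias    = -0.1                         # shifts the point where the node activates

z = weights @ values + bias            # the weighted sum plus bias: -0.36
activation = max(0.0, z)               # a ReLU: only sums above zero pass through
```

Because the weighted sum (-0.26) plus the bias lands below zero, this node stays silent for this input; with a different input or a larger bias it would fire.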

Another algorithm used to detect patterns is the discrete Fourier transform. With the Fourier transform, sinusoids can be detected: if, for example, the data behaves like a sine wave, the transform is able to pick up on that. A demonstration on GitHub shows how sinusoids are embedded in the rows of pixels of an image; a neural network can pick up on that and learn from it.
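A small sketch of this idea, using numpy's built-in FFT rather than anything specific to the demonstration mentioned above: we build a signal that is a pure sine wave and check that the transform finds the frequency it was built with.

```python
import numpy as np

n = 64
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)   # a sine wave completing 5 cycles over n samples

spectrum = np.abs(np.fft.rfft(signal))   # discrete Fourier transform for real input
dominant = int(np.argmax(spectrum))      # frequency bin holding the most energy
```

The dominant bin comes out at 5, matching the 5 cycles embedded in the signal; a row of image pixels could be analysed the same way.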

 

The danger to humanity that neural networks may pose

Since neural networks are modelled after the brain, it is possible that AIs become as intelligent as humans, or perhaps even more intelligent. For a good example of this, read the article by Berke about an AI that can complete articles from just a prompt text. As shown there, such systems can wreak havoc and cause a lot of problems in the wrong hands. That AI, however, still needs the assistance of a human. With deep learning, the human aspect could very well be cut out completely. In that case the AI would have free rein, meaning that it could, for example, take over the internet. To help prevent this, most people in the industry have followed at least some form of ethics training, and there are strict regulations on how much freedom you are allowed to grant your AI.

 

Conclusions

Neural networks are valuable in many fields, and industry will probably come to depend heavily on them for all kinds of data analytics. Neural networks are great at finding patterns and can be trained to give very good predictions for a given problem. However, we have to be careful about how much freedom we allow our neural networks. As Elon Musk has stated, we have to be very careful with AI; an AI that takes over the world is a scenario that might really happen. In the years to come, a lot of progress will most likely be made in this field. We will just have to see what becomes of it.


This article was written by David Anthonio
