Google and OpenAI reveal the decision-making of neural networks for deep AI analysis

Although machine learning is an artificial intelligence (AI) technique widely used in today’s technology industry to make predictions from specific sets of data, the lack of an explanation of how AI reaches its decisions has opened the door to negative consequences.

The most recent example was a study from the Massachusetts Institute of Technology (MIT) accusing Amazon’s facial recognition technology of racial bias and of discriminating against people on the basis of gender.

Researchers from Google and OpenAI have presented a technique that reveals the interactions between the elements that make up image-classification neural networks, in order to show openly how the AI’s internal systems work. They call the technique activation atlases and say it aims to illustrate how these interactions enable an AI model to make decisions.

In statements to VentureBeat, Chris Olah, a member of the OpenAI engineering team, said:

There is a long line of research in feature visualization, a [subfield] of this area that tries to understand what happens inside neural networks (…) What we tried to do with activation atlases is to step back and see the whole picture of the things that the neural network can represent.

The researchers explained that neural networks are built from functions modelled on biological neurons. These functions are organised in layers and linked by artificial “synapses” that transmit signals to neighbouring neurons. The signals migrate from one layer to the next, and training slowly tunes the network by adjusting the synaptic strength of each connection.

Over time, the network extracts features from the data set and identifies trends across the samples; eventually, the model learns to make accurate predictions. The researchers explained that neurons do not make predictions about situations or images in isolation, but collaborate in groups.
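As a rough illustration of this layered, weight-tuning process (not the specific models studied by Google and OpenAI), the sketch below trains a tiny two-layer network with plain NumPy. The XOR data, the hidden-layer size and the learning rate are arbitrary choices made for the example.

```python
import numpy as np

# Toy data: the XOR pattern, which no single neuron can represent on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
# "Synaptic strengths": weights (and biases) linking one layer of neurons to the next.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: signals migrate from the input layer through the hidden layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: slowly tune each connection to reduce the prediction error.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

print(np.round(output, 2))  # approaches [0, 1, 1, 0] as the weights are tuned
```

No single hidden unit separates the XOR pattern by itself; the prediction only emerges from the units acting together, which echoes the point that neurons collaborate in groups.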

Ultimately, it is these groups that capture patterns relating to people, objects, text or language. Shan Carter, a scientist at Google, compared AI decision-making to the Latin alphabet: its 26 letters form the basis of written English, yet on their own they say little about linguistic concepts. In this context, Carter explained that no one instructs neural networks how to make decisions. They just do.

However, this information does not reveal which combinations of neurons need to be studied in the network, nor what task each group of neurons performs. To address this, Google scientists had previously used a neural-network technique called “spatial activations”, which cannot capture activations that occur across more than one input sample.

This is where activation atlases come in: they record inputs and organise their activation values into vectors. They then display feature visualizations of the averaged activations, creating a global view of the concepts formed within the model’s layers.
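A heavily simplified version of that idea can be sketched in code: collect activation vectors for many inputs from one layer, project them into two dimensions, and average the activations that fall into each grid cell. The random “activations”, the plain PCA projection and the grid size below are placeholder assumptions for illustration; they are not the pipeline described by Google and OpenAI, which also renders a feature-visualization image for every cell.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for activations collected at one layer over many input images:
# each row is the activation vector that one image produces at that layer.
# (Random data here; the real work records activations from a trained model.)
activations = rng.normal(size=(5000, 128))

# Project the high-dimensional activations onto two dimensions.
# Plain PCA is used for simplicity; any 2-D layout illustrates the idea.
centered = activations - activations.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)
coords = centered @ components[:2].T          # (n_samples, 2) layout positions

# Divide the 2-D layout into a grid and average the activations in each cell.
grid_size = 20
spans = np.ptp(coords, axis=0) + 1e-9
cell = np.floor((coords - coords.min(axis=0)) / spans * grid_size)
cell = cell.astype(int).clip(0, grid_size - 1)

atlas = np.zeros((grid_size, grid_size, activations.shape[1]))
counts = np.zeros((grid_size, grid_size))
for (gx, gy), act in zip(cell, activations):
    atlas[gx, gy] += act
    counts[gx, gy] += 1
nonempty = counts > 0
atlas[nonempty] /= counts[nonempty][:, None]

# Each non-empty cell now holds an averaged activation vector; the real
# pipeline would render a feature-visualization image for every cell to
# assemble the collage-like atlas.
print("non-empty grid cells:", int(nonempty.sum()))
```

Each non-empty cell then stands for a recurring combination of activations, which is what gives the finished atlas its collage-like appearance.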

These activation atlases are best thought of as canvases or photo collages. The Google and OpenAI report found that the atlases reveal a large number of structures that were known to exist, but whose origins the researchers could not explain. In addition, they said that because neural networks are not perfect, activation atlases can help detect errors and their root causes.

