SciNet is able to predict the position of Mars relative to the Sun by using a heliocentric model, an idea that took humans centuries to arrive at. Although it does not reveal the exact equation it uses, it shows that the network needs only two variables to arrive at the solution.
The laws of physics, among mankind's greatest discoveries, have emerged over many centuries in a process often led by the leading thinkers of their time. This process, in turn, has shaped the development of science and gives the impression that some laws could not have been discovered without the knowledge of earlier epochs.
One example is quantum mechanics, which is built on classical mechanics and uses several mathematical ideas that were already known at the time.
But perhaps there is another way to discover the laws of physics, one that does not depend on the understanding we have already acquired about the universe.
Researchers at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), Raban Iten, Tony Metger and their colleagues, say they have developed such a method and used it to discover laws of physics in a completely new way. They point out that it may even be possible to find completely new formulations of physical laws with this method.
First, some context. The laws of physics are compact representations that can be queried to provide information about more complex scenarios. Imagine setting a pendulum in motion and asking where its bob will be at some point in the future. One possible answer is to measure the pendulum's position throughout its oscillation and use those records as a kind of lookup table to find the answer. But the laws of motion offer a much easier route: simply plug the values of the relevant variables into the corresponding equation, and out comes the right answer. The equation can therefore be regarded as a condensed representation of reality.
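As a concrete, deliberately simplified example of that contrast, the snippet below compares the two approaches for an idealised small-angle pendulum; the numbers and the undamped formula are illustrative assumptions, not values from the paper.

    # Illustration only: an idealised, undamped, small-angle pendulum.
    import math

    g, L, theta0 = 9.81, 1.0, 0.1            # gravity (m/s^2), length (m), initial angle (rad)
    omega = math.sqrt(g / L)                  # angular frequency implied by the law

    def angle_from_law(t):
        """The 'condensed representation': theta(t) = theta0 * cos(omega * t)."""
        return theta0 * math.cos(omega * t)

    # The lookup-table alternative: store the angle at many sampled times
    # (built here from the law itself, standing in for measured data).
    table = {round(k * 0.01, 2): angle_from_law(k * 0.01) for k in range(1000)}

    # Both answer the same question, but the equation does it in one line.
    print(table[2.5], angle_from_law(2.5))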
This gives us an idea of how neural networks could find such laws. Given a set of observations from an experiment, such as a swinging pendulum, the task is to find a simpler representation of that data.
The idea of Iten, Metger and their team is to feed this data into a machine so that it learns to make accurate predictions of the position. Once the machine has learned this, it can predict the position from any initial state. In other words, it has learned the applicable physical law.
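The network described in the paper is an encoder-decoder with a very small latent "bottleneck" between the two halves. The sketch below is a minimal PyTorch version of that general idea; the layer sizes, names and training snippet are illustrative assumptions, not the architecture or hyperparameters used by the authors.

    # Minimal sketch of a bottleneck encoder-decoder (illustrative sizes).
    import torch
    import torch.nn as nn

    class BottleneckNet(nn.Module):
        def __init__(self, n_obs=50, n_latent=2, n_question=1):
            super().__init__()
            # Encoder: compress a time series of observations into a few latent variables.
            self.encoder = nn.Sequential(
                nn.Linear(n_obs, 64), nn.ReLU(),
                nn.Linear(64, n_latent),
            )
            # Decoder: answer a question (e.g. "where is the pendulum at time t?")
            # using only the compressed representation plus the question itself.
            self.decoder = nn.Sequential(
                nn.Linear(n_latent + n_question, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, observations, question):
            latent = self.encoder(observations)
            return self.decoder(torch.cat([latent, question], dim=-1))

    # Training would simply minimise the prediction error, for example:
    net = BottleneckNet()
    obs = torch.randn(8, 50)       # 8 dummy observation series of 50 samples each
    t_query = torch.rand(8, 1)     # the times at which we ask for the position
    target = torch.zeros(8, 1)     # dummy targets; real ones come from the experiment
    loss = nn.functional.mse_loss(net(obs, t_query), target)
    loss.backward()                # gradients flow through encoder and decoder

The key design choice is the tiny latent layer: forcing every prediction through just a handful of numbers is what pushes the network towards a compact, law-like representation.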
To find out whether this works, the researchers fed data from an oscillating pendulum experiment into a neural network they call SciNet. They repeated the process for experiments involving the collision of two spheres, the results of quantum measurements on a qubit, and even the positions of the planets and the Sun in the night sky.
The results make for interesting reading. Given the pendulum data, SciNet predicts the future frequency of the pendulum with an error of less than 2 percent.
In addition, Iten, Metger and their team can query SciNet to see how it arrives at the answer. This does not reveal the exact equation, but it does show that the network uses only two variables to reach the solution, exactly the same number that appears in the corresponding laws of motion.
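One simple way to perform that kind of check (an illustration of the idea, not the authors' exact analysis) is to look at how much each latent neuron actually varies across inputs, continuing the sketch above:

    # Count the latent neurons that carry information: neurons whose activation
    # barely changes across inputs are effectively unused.
    import torch

    with torch.no_grad():
        latent = net.encoder(obs)                    # shape: (batch, n_latent)
        used = (latent.std(dim=0) > 1e-3).sum()      # neurons with non-trivial spread
    print(f"latent variables actually used: {int(used)}")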
But that’s not all. SciNet also makes accurate predictions of the angular momentum of two spheres after they collide, which is only possible using the conservation of angular momentum, a law that SciNet appears to have discovered for itself. And it predicts the probabilities of measurement outcomes when a qubit is interrogated, using a representation of the quantum world.
Perhaps most impressive is that the network learns to predict the future positions of Mars and the Sun from their positions as seen from Earth. This is only possible if the neural network uses a heliocentric model of the solar system, an idea that took humans centuries to arrive at.
And indeed, querying SciNet suggests that it has learned exactly such a heliocentric representation. “SciNet stores the angles of the Earth and Mars as seen from the Sun in the two latent neurons, i.e. it recovers the heliocentric model of the solar system,” the researchers explain.
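A check of that kind could, for instance, compare each latent neuron with the true Sun-centred angles. The snippet below sketches such a comparison, with placeholder arrays standing in for SciNet's actual activations and the real ephemeris data.

    # Illustration of the kind of check described above (not the authors' code):
    # correlate each latent neuron with the heliocentric angles of Earth and Mars.
    import numpy as np

    latent = np.random.rand(500, 2)    # placeholder: the two latent activations per sample
    helio = np.random.rand(500, 2)     # placeholder: true Sun-centred angles of Earth and Mars

    for i in range(2):
        for j in range(2):
            r = np.corrcoef(latent[:, i], helio[:, j])[0, 1]
            print(f"latent neuron {i} vs heliocentric angle {j}: r = {r:+.2f}")

    # A correlation close to +/-1 between a neuron and one of the angles would
    # indicate that the network is storing exactly that heliocentric quantity.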
It is impressive work, but it needs to be put into perspective. This may be the first demonstration that an artificial neural network can compress data in a way that makes aspects of physical laws visible. But it is not the first time an automated approach has derived such laws.
A few years ago, computer scientists at Cornell University in the US used a genetic algorithm to derive a series of physical laws from experimental data, including the laws of conservation of energy and momentum. Their system even spits out the equation itself, not just a hint of how the answer was computed, as SciNet does.
Evolutionary algorithms clearly have an advantage when it comes to discovering the laws of physics from raw experimental data (and given that evolution is the process that produced biological neural networks in the first place, it is arguable that it is the more powerful approach).
This raises an interesting question. It has taken humanity centuries to discover the laws of physics, often in a way that depends crucially on laws discovered earlier. Might there be better formulations of these laws that can be derived from experimental data without any prior knowledge of physics?
If so, an automated or evolutionary approach to learning may be exactly what is needed to find them.
Ref: arxiv.org/abs/1807.10300 : Discovering Physical Concepts with Neural Networks