In the second game of the man-machine match, AlphaGo stunned the world with move 37, played in an open area of the board. Lee Sedol left the room after seeing it, and Michael Redmond, commentating remotely, was equally shocked. The only Western-born nine-dan professional said, "I really don't know whether this is a good move or a bad move." Meanwhile Chris Garlock, the English-language commentator for the American Go Association's broadcast, said, "I think it's a mistake."
Lee Sedol spent about 20 minutes responding to this move, and four hours later he still lost. In this man-machine match, an AI program running on hundreds of machines linked to Google's data centers around the world defeated perhaps the best player at the most complex game humans have ever devised.
Fan Hui, also puzzled by move 37, is the European Go champion who lost to AlphaGo 5-0, and since that defeat he has become AlphaGo's sparring partner. In the five-plus months before the match with Lee Sedol, Fan Hui played hundreds of games against AlphaGo and watched it grow day by day. He lost more and more often, but he is the person who understands AlphaGo best. Looking at AlphaGo's move 37, he knew there must be something in it that ordinary players could not see. After ten seconds of thought, he said, "What a beautiful move."
Most people assume AlphaGo's victory was simply a crushing display of computing power. But move 37 proves that AlphaGo is not merely calculating: it shows a degree of understanding of Go, like a human player. That is why move 37 has historic significance; it marks the day machines and humans truly began to come together.
DeepMind founder Demis Hassabis was born in London in 1976. He began playing chess at age 4, and by 13 he was a chess master, at one point the world's second-ranked player under 14. AlphaGo team lead David Silver recalled, "I saw him show up in our town, win the tournament, and then leave." The two formally met as undergraduates at Cambridge. Both wanted to understand human thought and to find out whether machines could also become intelligent. Later, while studying for a doctorate in cognitive neuroscience at University College London (UCL), Hassabis focused on the hippocampus, the brain region responsible for navigation, memory, and imagination, laying groundwork for computers that work more like humans. His findings were named one of the top ten breakthroughs of 2007 by the journal Science.
When IBM's Deep Blue defeated the world chess champion in 1997, Hassabis happened to be studying computer science at Cambridge. It was there that he first encountered Go, a game with thousands of years of history, and having just learned it, he could not help but wonder: why had machines never cracked this puzzle? From that moment, Hassabis resolved to build a computer system that could play Go better than humans. In game-theoretic terms, Go, like chess and checkers, is a perfect-information game: no luck is involved, and all information is open on the board. In principle a computer should be able to master it, yet Go remained unconquered.
Hassabis said that in Go, neither humans nor machines can calculate the final outcome of every line of play. Top players rely on intuition rather than brute calculation; that is the nature of the game. "A good Go position is aesthetically pleasing; good shape looks beautiful."
In 1998, after graduating, the two founded a video game company; games are an excellent testbed for artificial intelligence. But in 2005 the company folded. Silver went to the University of Alberta to study an early form of machine intelligence, reinforcement learning, a technique in which a machine repeats a task many times and teaches itself by discovering which decisions perform best. Hassabis went to University College London for his doctorate in neuroscience. In 2010 they reunited: Hassabis founded an artificial intelligence company in London called DeepMind, and Silver joined him.
When Google co-founder Sergey Brin met Hassabis, Hassabis told him, "DeepMind may be able to defeat the world Go champion within a few years." Even the visionary Brin found it hard to believe, but they did it.
After the second game of the match, Silver entered AlphaGo's control room to check that it was running properly and to trace how its estimate of the game's outcome had shifted move by move. He pulled up AlphaGo's decision records from the game and examined what had happened inside AlphaGo just before move 37.
Before DeepMind and AlphaGo, machines relied on brute-force methods, that is, exhaustive search; IBM's Deep Blue used this approach. Deep Blue, too, occasionally played moves that surprised humans, but brute-force calculation cannot solve Go: the game has far too many variations for a computer to enumerate.
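The scale gap can be made concrete with a back-of-the-envelope game-tree estimate. The branching factors and game lengths below are commonly cited approximations, not exact counts:

```python
import math

# Rough game-tree size: (average legal moves per turn) ** (typical game length).
chess_tree = 35 ** 80     # chess: ~35 legal moves per position, ~80 plies per game
go_tree = 250 ** 150      # Go: ~250 legal moves per position, ~150 moves per game

print(f"chess ~10^{int(math.log10(chess_tree))}")   # roughly 10^123
print(f"go    ~10^{int(math.log10(go_tree))}")      # roughly 10^359
```

The difference of more than two hundred orders of magnitude is why an exhaustive search that was (barely) feasible for chess is hopeless for Go.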
So DeepMind had to find another way: machine learning.
The DeepMind team fed 30 million moves from human games into a deep neural network. Such a network loosely simulates the networks of neurons in the human brain, and the team hoped it could learn independently, the way a brain does. The same family of techniques powers Facebook's computer vision and Google's speech recognition: show it enough cats and it recognizes cats; feed it enough language data and it understands natural language; likewise, feed it enough games and it learns how to play. But creative association and rote imitation are two different things. Move 37, for example, was not among those 30 million moves, so where did it come from? In fact, AlphaGo itself estimated that a human professional would play that move with a probability of only about one in ten thousand, and it chose it anyway.
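The learned component described above is often called a policy network: it maps a board position to a probability for every legal move, estimating how likely a strong human would be to play there. A minimal sketch of that output stage, with random numbers standing in for the real network's scores (the real model is a deep convolutional network trained on those 30 million human positions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the network's raw scores over the 19x19 board.
logits = rng.normal(size=19 * 19)

# A softmax turns raw scores into "how likely would a human expert play here".
probs = np.exp(logits - logits.max())
probs /= probs.sum()

best = int(probs.argmax())
print(f"most human-like move: point {best}, probability {probs[best]:.4f}")
```

It is in exactly this sense that AlphaGo could report a number like "one in ten thousand" for move 37: the policy network assigns every point on the board a human-likelihood, including very unlikely ones.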
"It knows that the chances of a professional player are so low, but when it passes its own calculations, it can overturn the original reference to the game," Silva explained, in a sense, AlphaGo began to think for itself. The decisions it makes are not based on the rules that its creators have programmed in its digital DNA, but on its self-learning algorithms.
Once the network had learned to imitate human play, Silver set AlphaGo to play against itself, against slightly different versions of its own neural network. During this self-play training, AlphaGo recorded which moves led to wins; this is the reinforcement learning that Silver had studied.
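The core idea of that self-play loop, stripped to its smallest possible form, is: play, observe the outcome, and strengthen whatever you did when you won. The toy below is a one-move "game" with two options and made-up win rates, nothing like AlphaGo's actual network update, but it shows the reinforcement-learning mechanic the text describes:

```python
import random

random.seed(0)

# Toy self-play sketch. Move 0 wins 40% of the time, move 1 wins 60%.
# The policy starts uniform and reinforces whichever move it played
# whenever that game was won.
weights = [1.0, 1.0]
win_rate = [0.4, 0.6]

for _ in range(5000):
    total = sum(weights)
    move = 0 if random.random() < weights[0] / total else 1
    won = random.random() < win_rate[move]
    if won:
        weights[move] *= 1.01   # strengthen a winning choice
    else:
        weights[move] *= 0.99   # weaken a losing choice

print(f"learned preference for the stronger move: {weights[1] / sum(weights):.2f}")
```

No one tells the program which move is better; the preference emerges purely from recorded outcomes, which is exactly what made self-play training scale.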
Playing against yourself is an effective way to improve, but it is only part of the skill. Reading out the position logically is not enough; finding a good move also depends on intuition, a perceptual judgment based on the shape of the stones. So after the reinforcement-learning stage, Silver's team fed the moves from these self-play games into a second neural network, teaching it to evaluate positions and predict outcomes, the role Deep Blue's evaluation function played in chess. After absorbing the data collected from this vast number of games, AlphaGo could begin to predict how a Go game was likely to unfold. That is its intuition, and move 37 is the example. Even when Silver went back through the logs afterward, he had no way to see how AlphaGo had arrived at the result: a feel for the game had formed.
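That second network is usually called a value network: it takes a whole position and returns a single number, an estimated probability of winning, with no lookahead at all. A minimal sketch of the interface, with a trivial stand-in for the learned function (the real network was trained on positions generated by AlphaGo's self-play games):

```python
import numpy as np

rng = np.random.default_rng(1)

def value(position):
    """Hypothetical stand-in for the value network: board -> estimated
    win probability for black. A squashed sum replaces the learned model."""
    return float(1.0 / (1.0 + np.exp(-position.sum() / position.size)))

# A random board: 1 = black stone, -1 = white stone, 0 = empty point.
board = rng.choice([-1, 0, 1], size=(19, 19))

# "Intuition" as a single evaluation: one learned judgment of the whole
# position, analogous to a player's feel for shape, with no search.
print(f"estimated win probability for black: {value(board):.2f}")
```

Because the judgment lives in millions of learned weights rather than in explicit rules, there is no step-by-step trace to read back, which is why even its creators could not say how it reached a particular evaluation.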
AlphaGo is an important step in DeepMind's push into the AI field, but as for the fear that "AI will replace humans," Hassabis says there is no need to worry. In his view, AI is a tool, a constructed intelligence, a better instrument for humans. AlphaGo has these capabilities, but it does not necessarily know what it itself is doing. So, with such a tool in hand, how does Hassabis picture the AI world of the next five years? Google did not spend $650 million to buy the company just to play a board game.
With deep learning and self-directed learning, AlphaGo plays Go today and could learn design tomorrow. Deep learning and neural networks already support more than a dozen Google services, including its all-purpose search engine. AlphaGo's less-publicized weapon, reinforcement learning, is already teaching robots in the company's labs to pick up objects and move around.
But business is not the most important thing. Asked how he felt watching Lee Sedol lose, Hassabis pointed to his heart and said, "I felt very sad." He is proud of what he has created, yet human instinct made him sad. He hoped Lee Sedol would win the next game.