It was seven years ago, in a lounge of the Four Seasons hotel in Seoul, under the stunned gaze of the world, that artificial intelligence won its most celebrated victory. Lee Sedol, the “best player in the world” at go and the idol of a South Korea passionate about this millennia-old game, bowed to AlphaGo, a program developed by the London startup DeepMind.
Four more games followed, of which Lee Sedol won only one. AlphaGo stood out not only for the excellent quality of its play, but also for its ability to make unconventional moves. The South Korean champion would later describe this strange opponent as an “entity that cannot be defeated.”
To discover this strategy, Far AI – Kellin Pelrine’s research group – used another algorithm designed to probe KataGo’s weaknesses. The player then put it into practice himself, in a series of games in which he received no computer assistance.
The reason for the success of this encirclement tactic remains a matter of conjecture. The researchers put forward the idea that the rarity of the strategy may have deceived the AI’s vigilance: having never encountered it in its training data, the program would not have known how to spot it. This hypothesis seems to be supported by the analysis of the probability of victory, which KataGo computes in real time at every moment of the game. Indeed, the researchers observed the AI’s high confidence for most of the match (estimating nearly a 99% chance of victory), followed by a drastic drop, often only one move before its stones were captured by the encirclement.
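The telltale pattern the researchers describe – near-certain confidence followed by a sudden collapse – is easy to detect mechanically. The sketch below is purely illustrative (the probability sequence is invented, not taken from KataGo): it scans a game’s win-probability trace and reports the first move at which confidence falls off a cliff.

```python
def confidence_collapse(win_probs, drop=0.5):
    """Return the index of the first move where the model's estimated
    win probability falls by more than `drop` in a single step,
    or None if confidence never collapses."""
    for i in range(1, len(win_probs)):
        if win_probs[i - 1] - win_probs[i] > drop:
            return i
    return None

# Hypothetical trace: near 99% confidence, then a sudden crash
# one move before the encircled stones are captured.
trace = [0.99, 0.99, 0.98, 0.97, 0.12]
print(confidence_collapse(trace))  # 4
```

Such a detector says nothing about *why* the model was blindsided, but it shows how abruptly the evaluation can change when a position lies outside the training distribution.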
What lessons can we learn from this unexpected twist in the game of humans versus AI? First: that the game is far from settled. Human cunning and intuition still have a bright future ahead of them. Second: the emergence of a new kind of adversary pushes humans to question their own limits and to conquer new territory, sometimes with the help of intelligent tools. Finally, even if the strategy found by Kellin Pelrine could easily be rendered obsolete by incorporating it into KataGo’s training, this result raises questions about the AI’s fundamental understanding of the game of go. Doesn’t its performance rest on the phenomenal size of its training data rather than on genuine intelligence?
In any case, these methods, known as adversarial examples, exist in every domain touched by AI: image, sound, and language recognition. Their applications range from the protection of privacy to counter-espionage, by way of the crucial question of the interpretability of the decisions these algorithms make. The difficulty of countering them definitively sketches a different future for artificial intelligence: not an insurmountable victory of AI over humans, but a fragile balance between actors with divergent interests, each drawing on its own mixture of human faculties and intelligent tools to hold its position.
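The core mechanism behind adversarial examples can be sketched in a few lines. The toy example below (invented weights and input, plain NumPy – not the method used against KataGo) applies the well-known fast gradient sign technique: a small perturbation of the input, aligned with the gradient of the model’s loss, is enough to flip a classifier’s decision.

```python
import numpy as np

# Hypothetical linear classifier: class 1 if sigmoid(w.x + b) > 0.5.
# The weights are made up for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([2.0, 0.5, 1.0])  # an input the model places in class 1

# Gradient of the cross-entropy loss (true label 1) with respect to
# the input: for this linear model, dL/dx = (sigmoid(w.x + b) - 1) * w.
grad = (sigmoid(w @ x + b) - 1.0) * w

# Fast gradient sign method: nudge every coordinate by epsilon in the
# direction that increases the loss the most.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

print(predict(x))      # 1: the original input
print(predict(x_adv))  # 0: a small perturbation flips the decision
```

The same principle scales to deep networks, where perturbations imperceptible to a human can change an image classifier’s verdict entirely.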