The Guardian view on bridging human and machine learning: it’s all in the game | Editorial

Last week, an artificial intelligence called NooK defeated eight world champions at bridge. That algorithms can beat people may not seem newsworthy. IBM’s Deep Blue defeated the world chess champion Garry Kasparov in 1997. In 2016, Google’s AlphaGo defeated a Go grandmaster. A year later, the AI Libratus saw off four poker stars. Yet the real-world applications of such technologies have been limited. Stephen Muggleton, a computer scientist, suggests this is because they are “black boxes” that can learn better than people but cannot express, and communicate, that learning.

NooK, from the French startup NukkAI, is different. It won by formulating rules, not just by brute-force calculation. Bridge is not like chess or Go, which are two-player games based on a completely known set of facts. Bridge is a game for four players split into two teams, involving collaboration and competition with incomplete information. Each player sees only their own cards and has to gather information about the other players’ hands. Unlike poker, which also involves hidden information and bluffing, in bridge a player must disclose to their opponents the information they pass to their partner.

This last feature meant that NooK could explain how its game decisions were made, and why it represents a breakthrough for AI. When faced with a new game, people usually learn the rules and then improve through practice, perhaps by reading books. “Black box” AIs instead train through deep learning: playing billions of games to work out a winning algorithm. How such software arrives at its conclusions, or why it fails, is a mystery.

NooK nods to the work of the British AI pioneer Donald Michie, who argued that the ultimate goal of AI would be to produce new insights and teach them to people, thereby raising human performance to levels beyond what learning alone could achieve. Michie regarded machine learning as “weak” if it improved AI performance only by increasing the amount of data ingested.

His insight has been borne out as the limitations of deep learning are exposed. Self-driving cars remain a distant dream. Radiologists were not replaced by AI last year, despite predictions that they would be. Humans, unlike computers, often make short work of complex, lucrative jobs. Thankfully, human society is not under constant surveillance. But this often means there is not enough data for AI, and what data there is frequently contains hidden biases and is not socially acceptable to use. Environmental impact is also a growing concern, with computing expected to account for 20% of global electricity demand by 2030.

Technologies earn trust when they are understandable. There is always a risk that a black-box AI will solve a problem in the wrong way. And the more powerful a deep-learning system becomes, the more opaque it tends to be. The House of Lords justice committee said this week that such technologies have serious implications for human rights, and warned against people being convicted and imprisoned on the basis of AI they cannot understand or challenge. NooK could change the world of technology if it fulfils its promise of solving complex problems and explaining how it does so.
