A #frAIday talk by Thomas Hellström
The dangers of AI are currently debated both in the research world and at the political level. One frequently described long-term danger, inspired by science-fiction movies and books, is that AI and robots take control of the world. This scenario is most often linked to the concept of Artificial General Intelligence (AGI). AGI could lead to a so-called Singularity, where the intelligence of machines increases so fast that humans can no longer follow what is happening. In such a situation, the AGI may become self-aware and prioritize its own existence over people, who are seen as a threat because they may "pull the cable" and thereby "kill" the AGI. The AGI might then decide to fight back. The good news is that the development of AGI will probably take a long time.
In this talk I will discuss whether scenarios of this kind really require a general intelligence to become reality. Could it suffice to combine a few AI systems that, in one way or another, affect the physical world under a master AI system with the ability to detect and exploit causal relations in order to achieve its given goal? The extensive investments in "metaverses" by the big Internet companies may contribute to such a development, and there is therefore reason to debate and analyze the risks and opportunities that this entails.