Super Intelligence In AI

Anshul Jain

July 16, 2020

In the past few years, Artificial Intelligence (AI) has become an integral part of numerous industries and technologies. From acting as a catalyst for big data, robotics, IoT, and more, to powering machine intelligence in self-driving cars, virtual assistants, and intelligent machines, there is hardly anything left untouched by AI. Yet this journey of artificial intelligence is still in its infancy and has a long way to go. Leveraging its capabilities, this technology will continue to shape the future of virtually every industry as well as of human beings. As the AI oracle and venture capitalist Dr. Kai-Fu Lee stated in 2018, Artificial Intelligence "is going to change the world more than anything in the history of mankind. More than electricity".


Apart from acting as the main driver of emerging technologies, artificial intelligence has also laid the foundation for the most advanced, and at times most feared, form of machine intelligence: superintelligence. In this article, we will unravel the concepts behind this level of intelligence and understand both why it is seen as a threat and why it is considered the ultimate goal of AI.


So, let's begin by answering the most important question: what does superintelligence mean?

What is Superintelligence?

A technology that features prominently in sci-fi movies and shows such as The Matrix, I, Robot, and The Terminator, artificial superintelligence is one of the levels of AI, preceded by Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). It is considered the most advanced form of AI, offering machine intelligence that surpasses that of the brightest and most gifted humans, such as Stephen Hawking.


Superintelligence, therefore, refers to a hypothetical agent or AI capable of understanding and mimicking human cognition and behavior. This level of AI gives machines the ability to become self-aware and to exceed the capacity of human intelligence and ability. As the Oxford philosopher Nick Bostrom defines it, superintelligence is "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".


Currently, superintelligence remains largely theoretical, as the majority of development in computer science and artificial intelligence today is based on Artificial Narrow Intelligence. However, AI researchers and practitioners are working towards creating technologies and machines with artificial general intelligence, which is believed to pave the way for the development of superintelligence.

Characteristics of Superintelligence:

From being one of the most advanced forms of artificial intelligence to inaugurating a new era of technology, superintelligence, if accomplished, would trigger another industrial revolution, the likes of which have never been seen before. Other characteristics that would set artificial superintelligence apart from other technologies and forms of intelligence are:


  • It will be the last invention humans ever need to make.
  • It will accelerate technological progress in numerous fields.
  • It will lead to more advanced forms of superintelligence.
  • It will allow artificial minds to be easily copied.
  • It may or may not be associated with the technological singularity.

Important Design Considerations for Superintelligence:

Among the numerous considerations around superintelligence, the most important are the design considerations specified by Nick Bostrom in his book Superintelligence: Paths, Dangers, Strategies. He proposed three points that one should consider if/when designing a superintelligent machine. These are:


  • Coherent Extrapolated Volition: It should be instilled with the values upon which humans would ultimately converge.
  • Moral Rightness: It should be morally right. This will require the machine to have natural language processing and other human-like abilities that will allow it to assess what is morally acceptable and what is not.
  • Moral Permissibility: It should hold values that keep it within the bounds of moral permissibility.

Now that we understand the basics of superintelligence, it is time that we understand its feasibility and the possible threat that it bears to humanity.

Feasibility of Superintelligence:

Today, though intelligent machines use only artificial narrow intelligence, or weak AI, for problem-solving and other complex tasks, they are already able to surpass human performance in terms of speed. As Nick Bostrom states in his book Superintelligence: Paths, Dangers, Strategies (2014), "existing electronic processing cores can communicate optically at the speed of light". Considering this, it would not be a surprise if superintelligence proved far more capable than the human brain, achieving a level of intelligence far superior to that of humans.


Other factors that make this technology a dream worth achieving for technological researchers are:

  • It will lead to an intelligence explosion, in which intelligence has no limitations and can discover, invent, and create almost anything.
  • It will allow the creation of human-like reasoners that can think, act, and perform functions faster.
  • It will help increase the size and computational capacity of computers.
  • It can initiate the development of collective superintelligence.
  • It will help enhance human abilities.

Superintelligence: Potential Threat to Humanity

Alongside the numerous supporters of Artificial Intelligence, there are a number of theorists and technological researchers who adamantly oppose the idea of machines surpassing human intelligence and the human brain in general. They believe it could lead to a global catastrophe, as frequently depicted by the entertainment industry. Hence, they constantly raise the risks of superintelligence and its potential negative impact on human lives.

Even experts like Elon Musk and Bill Gates are apprehensive about the technology and consider venturing further into it risky. As Bill Gates said at the Human-Centered Artificial Intelligence Symposium at Stanford University (2019): "the world hasn't had that many technologies that are both promising and dangerous..."

Here are some of the potential threats of superintelligence:

  • Become unstoppable: One of the major threats of superintelligence, as viewed by theorists worldwide, is that it could use its power and ability to become unstoppable, take unforeseen actions, or out-compete humanity. At the extreme, it could lead to the annihilation of humankind.
  • Weaponization of AI: "The place that I think this [AI] is most concerning is in weapon systems," said Bill Gates at the 2019 Human-Centered Artificial Intelligence Symposium at Stanford University. Today, governments worldwide are using artificial intelligence to enhance their military power. The introduction of weaponized, conscious superintelligence into the field would transform warfare, and, if unregulated, it could even be catastrophic.
  • Competition: Though not as threatening as the other points in this list, the development of AGI and superintelligence can lead to unhealthy competition, which could result in cutting corners on safety and potentially in violent conflict.
  • Preemptive nuclear strike: Another potential threat of superintelligence is a preemptive nuclear attack. It is believed that a country approaching technological supremacy through AGI or superintelligence could be attacked by rivals with nuclear weapons, ultimately triggering a nuclear war.
  • Malevolent Superintelligence: Like any powerful technology, superintelligence can be exploited by governments, militaries, corporations, and even sociopaths who want to subjugate certain groups of people. Hence, in the wrong hands, it could be misused to increase human suffering.

Myths and Facts About Superintelligence:

While discussing superintelligence, it is imperative to address the numerous myths and facts associated with this technology, as they shape the way people in general perceive it.

MYTHS:

  • In the world of superintelligence, there is no role for human expertise and intelligence.
  • Superintelligence will develop on its own without any human knowledge.
  • Superintelligence can quickly solve our major unsolved problems.
  • Superintelligence poses a looming, existential threat to humanity.

FACTS:

  • Augmenting human intelligence with AI and AGI will lead to better results and solutions.
  • AI and Superintelligence will be highly dependent on human knowledge, as they rely on humans to develop and program their code.
  • Though super AI can accelerate processes, there is no guarantee that it can solve all unsolved problems.
  • Superintelligence is not yet a technical reality and is decades away from potential realization.

Conclusion:

Superintelligence, though still far from being achieved, intrigues people and researchers worldwide. Despite the numerous risks associated with it, researchers feel that achieving it would be humanity's ultimate accomplishment, allowing us to unravel the mysteries of the universe and beyond. Today, the future of Artificial Intelligence is extremely bright, yet uncertainty and fear remain because of its unpredictable nature and continuing growth. Hence, only the coming years, or even decades, will tell what superintelligence brings to the table and how dangerous it will be to humanity.
