Artificial intelligence has changed in recent years.
What began in the public eye as a burgeoning field of promising (but largely benign) applications has snowballed into a more than $100 billion industry in which the heavyweights – Microsoft, Google and OpenAI, to name a few – appear intent on outperforming each other.
The result is often increasingly sophisticated large language models released in a hurry, without adequate testing and oversight.
These models can do much of what a human can, and in many cases even better. They can beat us at advanced strategy games, generate incredible art, diagnose cancer and compose music.
There is no doubt that AI systems appear “intelligent” to a certain extent. But could they ever be as intelligent as humans?
There is a term for this: Artificial General Intelligence (AGI). Although a broad concept, for simplicity, AGI can be thought of as the point at which AI acquires human-like general cognitive abilities. In other words, it’s to the point where AI can handle any intellectual task a human can handle.
AGI isn’t here yet; current AI models are hampered by the lack of certain human traits, such as true creativity and emotional awareness.
We asked five experts if they think AI will ever reach AGI, and five out of five answered yes.
But there are subtle differences in the approach to the question. Further questions arise from their answers. When could we reach AGI? Will it surpass humans? And what constitutes “intelligence” anyway?
Here are their detailed answers:
Paul Formosa
AI and technology philosophy
AI has already reached and surpassed human intelligence in many tasks. It can beat us at many strategy games such as Go, chess, StarCraft and Diplomacy, outperform us on many language performance benchmarks, and write passable undergraduate university essays.
Of course, it can also make things up or “hallucinate” and get things wrong – but so can humans (though not in the same way).
Given a long enough timeframe, it seems likely that AI will achieve AGI, or “human-level intelligence.” That is, it will have achieved intelligence across enough of the interconnected domains of intelligence that humans possess. Still, some may fear that, despite AI’s achievements to date, AI won’t actually be “intelligent” because it doesn’t (or can’t) understand what it’s doing, since it has no consciousness.
However, the rise of AI suggests we can have intelligence without consciousness, because intelligence can be understood functionally. An intelligent entity can do intelligent things such as learning, reasoning, writing essays or using tools.
The AIs we create may never be conscious, but they are increasingly able to do intelligent things. In some cases, they are already doing so on a level beyond us, a trend that is likely to continue.
Christina Maher
Computational Neuroscience and Biomedical Engineering
AI will reach human-level intelligence, but maybe not anytime soon. Human-level intelligence enables us to reason, solve problems, and make decisions. It requires many cognitive skills, including adaptability, social intelligence, and learning from experience.
AI already meets many of these criteria. What is left is for AI models to learn inherently human traits like critical thinking, and to understand what emotions are and what events might trigger them.
As humans, we learn and experience these qualities from the moment we are born. Our first experience of “happiness” is too early to even remember. We also learn critical thinking and emotional regulation throughout childhood, developing a sense of our “emotions” as we interact with and experience the world around us. Importantly, it can take many years for the human brain to develop such intelligence.
AI has not acquired these skills yet. But if humans can learn these traits, AI can probably do the same—and maybe even faster. We are still discovering how AI models should be built, trained and interacted with to develop such properties in them. The big question really isn’t if AI will reach human-level intelligence, but when – and how.
Seyedali Mirjalili
AI and swarm intelligence
I believe that AI will surpass human intelligence. Why? The past offers insights we cannot ignore. Many people believed that tasks like playing computer games, recognizing images, and creating content (among others) could only be done by humans — but technological advances proved otherwise.
Today, the rapid advancement and adoption of AI algorithms coupled with a wealth of data and computing resources has led to previously unimaginable levels of intelligence and automation. If we follow the same path, a more general AI is no longer a possibility but a certainty of the future.
It’s only a matter of time. AI has made significant progress, but not yet for tasks that require intuition, empathy and creativity, for example. But breakthroughs in algorithms will make this possible.
Additionally, once AI systems achieve such human-like cognitive abilities, there will be a snowball effect and AI systems will be able to improve with minimal to no human involvement. This kind of “automation of intelligence” will fundamentally change the world.
Artificial general intelligence remains a significant challenge, and there are ethical and societal implications that need to be addressed very carefully as we continue to move towards it.
Dana Rezazadegan
AI and data science
Yes, AI will become as smart as humans in many ways – but how smart it becomes will depend largely on advances in quantum computing technology.
Human intelligence is not as simple as knowing facts. It has several aspects such as creativity, emotional intelligence and intuition that current AI models can emulate but cannot match. However, AI has evolved massively and this trend will continue.
Current models are limited by relatively small and biased training datasets and limited computational power. The emergence of quantum computing will change the capabilities of AI. With quantum-enabled AI, we will be able to feed AI models multiple massive datasets comparable to humans’ natural multimodal data collection, achieved through interaction with the world. These models will be able to sustain fast and accurate analysis.
An advanced version of continuous learning should lead to the development of sophisticated AI systems that can improve beyond a certain point without human intervention.
Therefore, AI algorithms running on stable quantum computers have a high chance of achieving something similar to generalized human intelligence – even if it doesn’t necessarily match every aspect of human intelligence as we know it.
Marcel Schart
Machine learning and AI alignment
I think it’s likely that AGI will become a reality one day, although the timeline remains highly uncertain. If AGI is developed, it seems inevitable that it will surpass human-level intelligence.
Humans themselves are proof that highly flexible and adaptable intelligence is permitted by the laws of physics. There is no fundamental reason we should believe that machines are inherently incapable of performing the calculations required to achieve human-like problem-solving abilities.
In addition, AI has clear advantages over humans, such as better speed and memory capacity, fewer physical limitations, and the potential for greater rationality and recursive self-improvement. As computing power increases, AI systems will eventually surpass the computing capacity of the human brain.
Our primary challenge then is to gain a better understanding of intelligence itself and knowledge of how to build AGI. Today’s AI systems have many limitations and are far from capable of mastering the various domains that would characterize AGI. The path to AGI will likely require unpredictable breakthroughs and innovations.
The median forecast date for AGI on Metaculus, a reputable forecasting platform, is 2032. That seems overly optimistic to me. A 2022 expert survey estimated a 50 percent chance that we will achieve human-level AI by 2059. I find that plausible.
Noor Gilani, technology editor, The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.