The 3 stages of artificial intelligence: which one we are in, and why many think the third could be fatal
Since it was launched at the end of November 2022, ChatGPT, the chatbot that uses artificial intelligence (AI) to answer questions or generate text at the request of users, has become the fastest-growing internet application in history.
In just two months it reached 100 million active users. It took the popular app TikTok nine months to hit that milestone, and Instagram two and a half years, according to data from the technology monitoring firm Sensor Tower.
"In the 20 years we've been following the internet, we can't remember a faster rise of a consumer internet application," said analysts at UBS, who reported the record in February.
The massive popularity of ChatGPT, developed by the company OpenAI with financial backing from Microsoft, has sparked all kinds of discussion and speculation about the impact that generative artificial intelligence is already having, and will have, on our near future.
This is the branch of AI that is dedicated to generating original content from existing data (usually taken from the Internet) in response to a user's instructions.
The texts (from essays, poems and jokes to computer code) and images (diagrams, photos, artwork in any style and much more) produced by generative AIs such as ChatGPT, DALL-E, Bard and AlphaCode, to name just a few of the best known, are in some cases so indistinguishable from human work that thousands of people are already using them to do their usual work.
From students who use them to do their homework, to politicians who entrust them with their speeches (Democratic congressman Jake Auchincloss debuted the resource in the US Congress), to photographers who invent snapshots of things that never happened, and even win awards for them, like Germany's Boris Eldagsen, who took first place at the latest Sony World Photography Awards with an image created by AI.
This very article could have been written by a machine and you probably wouldn't notice.
The phenomenon has led to a human resources revolution, with technology giant IBM, among other companies, announcing that it will stop hiring people for nearly 8,000 jobs that AI can handle.
A report from investment bank Goldman Sachs estimated in late March that AI could replace a quarter of all the jobs humans perform today, although it could also boost productivity and create new jobs.
If all these changes overwhelm you, prepare yourself for something that could be even more disconcerting.
And, for all its impact, what we are experiencing now is just the first stage in the development of AI.
According to experts, what could come soon - the second stage - will be much more revolutionary.
And the third and final stage, which could arrive very shortly after that, is so advanced that it would completely alter the world, perhaps even at the cost of human existence.
The three stages
AI technologies are classified by their ability to imitate human characteristics.
1. Artificial Narrow Intelligence (ANI)
The most basic category of AI is better known by its acronym: ANI, for Artificial Narrow Intelligence.
It is so called because it focuses narrowly on a single task, performing repetitive work within a range predefined by its creators.
ANI systems are usually trained using a large set of data (for example from the Internet) and can make decisions or take actions based on that training.
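To make the idea concrete, here is a minimal sketch in Python, using the scikit-learn library, of a narrow system: a spam filter trained on a handful of invented example messages. Real systems train on vastly larger datasets; the point is only that the model learns one decision from examples and can do nothing else.

# A toy "narrow AI": a spam filter that learns a single decision from
# a tiny, invented dataset. It cannot do anything beyond this one task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "claim your cash reward",
            "meeting moved to 3pm", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]   # hypothetical training data

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)               # "training" on the data

print(model.predict(["free cash prize, claim now"]))  # likely ['spam']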
An ANI can match or surpass human intelligence and efficiency, but only in the specific area in which it operates.
An example is chess programs that use AI: they are capable of beating the world chess champion, but they cannot perform other tasks.
That's why it is also known as "weak AI."
All programs and tools that use AI today, even the most advanced and complex ones, are forms of ANI. And these systems are everywhere.
Smartphones are full of apps that use this technology, from GPS maps that can locate you anywhere in the world, to weather forecasts, to music and video services that learn your tastes and make recommendations.
Virtual assistants like Siri and Alexa are also forms of ANI, as are the Google search engine and the robot vacuum that cleans your house.
The business world also makes heavy use of this technology: it runs in the onboard computers of cars, in the manufacturing of thousands of products, in finance and even in hospitals, where it helps make diagnoses.
Even more sophisticated systems, such as driverless cars (autonomous vehicles) and the popular ChatGPT, are forms of ANI, since they cannot operate outside the range predefined by their programmers and therefore cannot make decisions on their own.
They also have no self-awareness, another trait of human intelligence.
However, some experts believe that systems programmed to learn automatically (machine learning), such as ChatGPT or AutoGPT (an "autonomous agent" or "intelligent agent" that uses information from ChatGPT to perform certain subtasks autonomously, roughly as sketched below), could become the next stage of development.
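Very roughly, an "autonomous agent" of this kind can be pictured as a loop in which a language model is repeatedly asked to propose the next subtask and then carry it out. The Python sketch below is hypothetical: llm() is a placeholder for a call to a real model such as ChatGPT, not an actual API.

# Hypothetical sketch of an AutoGPT-style agent loop.
# llm() stands in for a real language-model call; it is not a real API.
def llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

def autonomous_agent(goal: str, max_steps: int = 5) -> list[str]:
    completed = []
    for _ in range(max_steps):
        # Ask the model to propose the next subtask toward the goal.
        subtask = llm(f"Goal: {goal}\nDone: {completed}\nNext subtask, or DONE?")
        if subtask.strip().upper() == "DONE":
            break
        # Ask the model to perform (or describe performing) the subtask.
        result = llm(f"Carry out this subtask and report the result: {subtask}")
        completed.append(f"{subtask} -> {result}")
    return completed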
2. Artificial General Intelligence (AGI)
This category, Artificial General Intelligence, is achieved when a machine acquires human-level cognitive capabilities.
That is, when it can perform any intellectual task that a person performs.
It is also known as "strong AI."
The belief that we are on the verge of reaching this level of development is such that last March more than 1,000 technology experts asked AI companies to stop training, for at least six months, programs more powerful than GPT-4, the most recent version of ChatGPT.
"AI systems with intelligence that competes with humans can pose profound risks to society and humanity," Apple co-founder Steve Wozniak and the owner of Tesla, SpaceX, and Neuralink, among others, warned in an open letter. and Twitter, Elon Musk (who was one of the co-founders of Open AI before resigning from the board due to disagreements with the company's leadership).
In the letter, published by the nonprofit Future of Life Institute, the experts said that if companies do not quickly agree to pause their projects, "governments should intervene and institute a moratorium" so that solid safety measures can be designed and implemented.
Although this has not happened so far, the United States government did convene the heads of the main AI companies (Alphabet, Anthropic, Microsoft and OpenAI) to agree on "new actions to promote responsible AI innovation".
"AI is one of the most powerful technologies of our time, but to take advantage of the opportunities it presents, we must first mitigate its risks," the White House declared in a statement on May 4.
The US Congress, for its part, summoned OpenAI CEO Sam Altman this Tuesday to answer questions about ChatGPT.
During the Senate hearing, Altman said it is "crucial" that his industry be regulated by the government as AI becomes "increasingly powerful."
Carlos Ignacio Gutiérrez, public policy researcher at the Future of Life Institute, explained to BBC Mundo that one of the great challenges that AI presents is that "there is no collegiate body of experts who decide how to regulate it, as happens, for example, with the Intergovernmental Panel on Climate Change (IPCC)".
In their letter, the experts laid out their main concerns.
"Should we develop non-human minds that could eventually outnumber us, outsmart us, make us obsolete and replace us?" they asked.
"Should we risk losing control of our civilization?"
3. Artificial Superintelligence (ASI)
The concern of these computer scientists has to do with a well-established theory which holds that, once we reach AGI, we will shortly thereafter arrive at the final stage in the development of this technology: Artificial Superintelligence (ASI), which occurs when synthetic intelligence surpasses human intelligence.
Oxford University philosopher and AI expert Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in virtually all fields, including scientific creativity, general wisdom, and social skills."
The theory is that when a machine achieves intelligence on a par with that of humans, its ability to multiply that intelligence exponentially through its own autonomous learning will allow it to vastly surpass us in a short time, reaching ASI.
"To be engineers, nurses or lawyers, we humans must study for a long time. The issue with AGI is that it is immediately scalable," says Gutiérrez.
This is thanks to a process called recursive self-improvement, which allows an AI application to "continuously improve itself in a timeframe in which we could not."
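As a back-of-the-envelope illustration of why that scalability worries researchers, consider a toy model, with entirely invented numbers, in which human skill grows by a fixed amount each year while a self-improving system multiplies its capability every cycle. Only the shape of the curves matters, not the figures.

# Toy model with invented numbers: linear human learning versus
# compounding, recursive self-improvement by a machine.
human, machine = 100.0, 1.0      # arbitrary starting "capability" units
for year in range(1, 21):
    human += 5.0                 # humans: slow, roughly linear gains
    machine *= 1.8               # machine: multiplies itself each cycle
    if machine > human:
        print(f"In this toy model the machine overtakes humans in year {year}")
        break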
While there is much debate about whether a machine can really acquire the kind of broad intelligence a human being has, especially when it comes to emotional intelligence, this is one of the things that most worries those who believe we are close to achieving AGI.
Recently Geoffrey Hinton, the so-called "godfather of artificial intelligence" and a pioneer in research on the neural networks and deep learning that allow machines to learn from experience, just as humans do, warned in an interview with the BBC that we could be close to that milestone.
"Right now (the machines) are not smarter than us, as far as I can see. But I think they could soon be," said the 75-year-old, who just retired from Google.
Extinction or immortality
Broadly speaking, there are two camps of thought regarding ASI: those who believe this superintelligence will be beneficial for humanity and those who believe just the opposite.
Among the latter was the famous British physicist Stephen Hawking, who believed that super-intelligent machines posed a threat to our existence.
"The development of full artificial intelligence could mean the end of the human race," he told the BBC in 2014, four years before he died.
A machine with this level of intelligence would "take off on its own and redesign itself at an ever-increasing rate," he said.
"Humans, who are limited by slow biological evolution, would not be able to compete and would be surpassed," he predicted.
On the opposite side of the debate, one of the biggest enthusiasts of ASI is the American inventor, futurist and author Ray Kurzweil, an AI researcher at Google and co-founder of Singularity University in Silicon Valley (the "singularity" being another name for the era in which machines become superintelligent).
Kurzweil believes that humans will be able to use super-intelligent AI to overcome our biological barriers, improving our lives and our world.
In 2015 he even predicted that by the year 2030 humans will be able to achieve immortality thanks to nanobots (tiny robots) acting inside our bodies, repairing and curing any damage or illness, including that caused by the passage of time.
In his testimony before Congress on Tuesday, OpenAI's Sam Altman was also optimistic about the potential of AI, noting that it could solve "humanity's biggest challenges, like climate change and curing cancer."
In the middle are people, like Hinton, who believe that AI has enormous potential for humanity, but consider the current pace of development, without clear regulations and limits, to be "concerning."
In a statement sent to The New York Times announcing his departure from Google, Hinton said that he now regretted the work he had done because he feared "bad actors" would use AI to do "bad things."
"He imagines, for example, that some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals."
Machines could eventually "create subgoals like, 'I need to get more power,'" which would pose an "existential risk," he said.
At the same time, the British-Canadian expert said that, in the short term, AI will provide many more benefits than risks, so "we must not stop developing it."
"The issue is: now that we've discovered that it works better than we expected a few years ago, what do we do to mitigate the long-term risks of things smarter than us taking control?"
Gutiérrez agrees that the key is to create a system of AI governance before an intelligence capable of making its own decisions is developed.
"If these entities are created that have their own motivation, what does it mean when we are no longer in control of those motivations?" he asks.
The expert points out that the danger is not only that an AGI or ASI, whether self-motivated or controlled by people with "bad objectives," starts a war or manipulates the financial, productive, energy, transportation or any other infrastructure system that is computerized today.
A superintelligence could dominate us in a much more subtle way, he warns.
"Imagine a future where an entity has so much information about every person on the planet and their habits (thanks to our internet searches) that it could control us in a way we wouldn't realize," he says.
"The worst scenario is not that there are wars between humans and robots. The worst thing is that we do not realize that we are being manipulated because we are sharing the planet with an entity that is much more intelligent than us."