Scientists and public figures call for ethical reflection on the development of artificial intelligence.
The progress made in artificial intelligence over the last five years will allow us to build robots capable of performing virtually all human tasks, threatening tens of millions of jobs over the next 30 years, scientists agree in predicting.
“We are approaching the point where machines will surpass humans in almost every task,” warned Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.
“Society must address this question now: if robots can do almost everything we do for work, what will we do?” he asked Saturday, alongside other experts, at the annual meeting of the American Association for the Advancement of Science (AAAS) in Washington.
Today there are more than 200,000 industrial robots in the United States, and the number continues to grow.
Research currently focuses on machines’ reasoning abilities, and progress over the last twenty years has been spectacular, according to this expert.
“There is every reason to believe that progress over the next 25 years will be equally impressive,” he added.
According to him, 10% of jobs in the US that involve driving a vehicle could disappear within twenty-five years because of automated driving.
Bart Selman, a professor of computer science at Cornell University, predicted for his part that “in the (coming) two or three years, autonomous machines (…) will enter society, including self-driving cars and trucks, but also surveillance drones.”
The expert explained that very significant progress has been made over the last five years, especially in artificial vision and hearing, enabling robots to see and hear like humans.
Professor Selman said that investments in artificial intelligence in the United States were by far the highest in 2015 since the birth of this field of research some fifty years ago, citing Google, Facebook, Microsoft and billionaire Elon Musk’s Tesla, and noting that the Pentagon has requested $19 billion to develop intelligent weapons systems.
What worries experts about this new software, they agree, is its ability to synthesize data and perform complex tasks.
“One may wonder what level of intelligence these robots can reach, and whether humans might not one day lose control,” Bart Selman pointed out.
The British astrophysicist Stephen Hawking has warned in particular against this danger, explaining that “humans are limited by slow biological evolution,” whereas “artificial intelligence could develop itself at a more rapid pace.”
These concerns have led scientists to consider establishing ethical rules to regulate the development of artificial intelligence, along with programs focused on safety.
In 2014, Elon Musk launched a $10 million initiative for this purpose, believing that artificial intelligence was “potentially more dangerous than nuclear weapons.”
In 2015, a group of high-profile figures, including Stephen Hawking, Elon Musk and Apple co-founder Steve Wozniak, published an open letter calling for “a ban on autonomous weapons.”
They explained that “if a major power were to develop weapons with autonomous artificial intelligence, it would set off a dangerous race in this type of weapon.”
For Wendell Wallach, an ethicist at Yale University, these dangers call for a mobilization of the international community.
The idea, he summarized Saturday, “is to ensure that the technology remains a good servant and does not become a dangerous master.” (AFP/nxp)