Elon Musk-supported group awards $7 million in grants to keep artificial intelligence beneficial
Elon Musk, CEO of Tesla Motors, has raised the alarm on the future of AI, warning that "with artificial intelligence we are summoning the demon."
Thirty-seven projects were selected out of a total of around 300 and will receive grants funded by the Open Philanthropy Project and Elon Musk. Winners included groups from Carnegie Mellon University, Stanford University, and a new joint AI research centre run by Oxford and Cambridge universities in the United Kingdom, the institute said.
The teams will research questions in computer science, economics, law, policy and other areas that relate to AI. To set up the centre, the FLI is giving Oxford philosopher Nick Bostrom a cool $1.5 million. At the time, he said he looked forward to backing research intended to keep AI beneficial for humanity.
The Future of Life Institute, a volunteer-run research and outreach organization, was founded in early 2014.
Fears of attacks by artificial intelligence usually fall within the domain of science fiction and conspiracy theorists. The aim of the grants is to preempt any possible disaster that could be caused by emerging developments in artificial intelligence, and the money will reward researchers who come up with ideas to prevent such eventualities. The funding is not being made with a view to any investment return.
Since then, there has been a push to see whether artificial intelligence can eventually exceed the intelligence of humans, an outcome that, Musk has warned, may not work out well for humanity.
This type of research needs to be developed soon, not out of fear of AI-controlled assaults on humans, but because the learning algorithms now being developed should include an integrated component that deals with issues of morality and practicality. According to Musk, this is a matter of protecting humanity from computers that become too powerful and begin making decisions themselves, decisions geared toward self-preservation rather than human preservation.
"We need to be super careful with AI," Musk has said. He has brought up Terminator in the past while discussing his concerns about the evolution of AI, and has stated that AI has the potential to be "more unsafe than nukes."
Max Tegmark, president of the Future of Life Institute, said that scientists should manage the growing power of technology.
Some projects are even more ambitious: one project's goal is to establish a framework for keeping AI-related weapons under meaningful and relevant human control. He believes the danger with such doomsday scenarios is not that the human race will be wiped out, but that "it distracts from the real issues posed by the technology."