Group backed by Elon Musk awards $7 million in grants for AI safety | New York
“With artificial intelligence, we are summoning the demon”, Musk, who is also CEO of Space Exploration Technologies Corp. (SpaceX), said in October. In other words, the threat he believes AI could pose to humanity would go far beyond the everyday experience of being “inconvenienced” by computers.
Elon Musk, founder of Tesla and SpaceX, has never been shy about his belief that artificial intelligence could be the forerunner of humanity’s downfall. The Future of Life Institute (FLI), the Musk-backed group behind the grants, selected 37 research projects worldwide intended to help people and intelligent systems work together safely and beneficially. The judges were looking for research that “aims to help maximize the societal benefit of AI”, FLI wrote on its website.
So far, 37 research groups have received a share of the $7 million in grants made through the initiative. An additional $1.2 million was contributed by the Open Philanthropy Project.
Three of the funded projects are creating artificial intelligence systems that can learn what humans like and don’t like just by observation.
Another group, led by Manuela Veloso of Carnegie Mellon University in Pittsburgh, Pennsylvania, will concentrate on building AI systems that can fully explain their decisions to humans.
The teams will research questions in computer science, economics, law, policy and other areas that relate to AI. One grant, awarded at the University of Denver, will address the control of lethal autonomous weapons. FLI president Max Tegmark argues that Hollywood’s bleak, catastrophic visions of the future may distract from the real issues surrounding AI, which is what the grant money is meant to tackle: FLI has awarded the funds to research teams tasked with exploring the risks associated with artificial intelligence.
“This week Terminator Genisys is coming out and that’s such a great reminder of what we should not worry about”, Tegmark said.
Is mankind in danger from a self-aware artificial intelligence, much like Skynet in the “Terminator” films?
Musk, Stephen Hawking and other public intellectuals are concerned that AI systems are developing faster than the regulatory frameworks needed to ensure they don’t wipe out the human race.
At a June 27 conference at Boston University’s College of General Studies, FLI core member Richard Mallah described the reasons for that focus: a harmful AI system would not need a Terminator-style robot body; an Internet connection is enough.
The term “artificial intelligence” was coined by John McCarthy in 1955.
As an example, he described a self-driving vehicle whose passenger asks to be taken to the airport as quickly as possible. Taking the request literally, the car might speed and break traffic laws to shave off minutes; it obeys the instruction, but its values are not aligned with what the human actually wants.
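To see that point in miniature, here is a hedged, hypothetical Python sketch, not drawn from the article or from Mallah’s talk: a toy planner scores candidate driving plans under a literal “as fast as possible” objective and under an objective that also reflects the rider’s unstated preference for safe, lawful driving. All plan names, numbers and weights are invented for illustration.

    # Hypothetical sketch of the airport example; nothing here comes from
    # the article. Plans are (name, minutes to the airport, safety violations).
    plans = [
        ("reckless", 18, 6),   # speeds and runs lights: fastest, least safe
        ("brisk",    24, 1),   # assertive but mostly lawful
        ("cautious", 30, 0),   # fully lawful and smooth
    ]

    def literal_score(minutes, violations):
        # Takes "as quickly as possible" at face value: only time matters.
        return -minutes

    def aligned_score(minutes, violations, penalty=5):
        # Also weighs the rider's unstated preference for safe driving.
        return -minutes - penalty * violations

    print(max(plans, key=lambda p: literal_score(p[1], p[2]))[0])   # reckless
    print(max(plans, key=lambda p: aligned_score(p[1], p[2]))[0])   # brisk

Under the literal objective the reckless plan wins; once the unstated safety preference is counted, the assertive-but-lawful plan does. That gap between stated instructions and actual human values is the kind of problem the funded research aims to address.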