Fortunately for the like-minded, Musk has invested millions in the Future of Life Institute (FLI), which works to head off harmful AI.
For years, researchers have pushed to discover whether artificial intelligence can eventually exceed human intelligence.
Elon Musk is not worried, however, that machines will be sent from the future to assassinate people or take over the world, a plot popular in Hollywood movies. So far, 37 research groups have received shares of the $7 million in grants, with an additional $1.2 million funded by the Open Philanthropy Project.
At the beginning of 2015, billionaire entrepreneur Elon Musk committed a substantial amount of money to the Boston-based Future of Life Institute.
The institute announced this week that it will issue grants to 37 different research teams, selected from a pool of around 300 applicants.
Another group, headed by Manuela Veloso of Carnegie Mellon University in Pittsburgh, Pennsylvania, will concentrate on programming AI systems that can fully explain their choices to human beings.
Under the program, these teams will undertake research spanning economics, law and computer science. When he made his donation, Musk said he looked forward to backing research intended to keep AI beneficial for humanity.
Fears of attacks by artificial intelligence usually fall within the domain of science fiction and conspiracy theory. Musk has stressed that his funding is not about seeking any investment return. There is also the threat of artificial intelligence committing illegal actions. “I think there is potentially a risky outcome there,” Mr Musk has said.
This type of research needs to be developed soon, not for fear of AI-controlled assaults on humans, but because the learning algorithms now being developed should have an integrated component that deals with issues of morality and practicality. According to Musk, this is a matter of protecting humanity from computers that become too powerful and begin making decisions themselves, geared toward self-preservation rather than human preservation. “So we need to be very careful,” he said.
Max Tegmark, president of the Future of Life Institute, said in a statement: “There is this race going on between the growing power of the technology and the growing wisdom with which we manage it.” Earlier this year, the Tesla and SpaceX founder described the dangers of AI run amok: “You could construct scenarios where recovery of human civilization does not occur,” he said.