How Can We Fulfill Asimov’s Dream of Peaceful Artificial Intelligence?

Elon Musk and experts want to ensure AI won’t be used against humanity.

NASA Administrator Charles Bolden, left, congratulates SpaceX CEO and Chief Designer Elon Musk.

OpenAI, a new non-profit Artificial Intelligence (AI) research company, wants to ensure that AI benefits humanity as a whole.

Co-chairs Elon Musk and Sam Altman hope the organization will accomplish this by being “unconstrained by a need to generate financial return” and by maintaining a firm commitment to research.

Researchers involved in the organization will be encouraged to publish their work, whether as whitepapers, blog posts, code or patents, to be shared publicly. This is fitting, as the inspiration for their research might come from three of the most publicly shared laws in science fiction, Asimov’s Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

“We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies,” reads OpenAI’s website.

This is good news to hear, considering the Terminator movie universe never had such a movement. The question is: can we safely avoid such a fate? After all, the 56th edition of Asimov’s Handbook of Robotics isn’t scheduled for release until 2058.

Major funders including Amazon Web Services (AWS), Infosys and YC Research have already committed $1 billion to OpenAI’s coffers. The organization states that it anticipates spending only a tiny fraction of this in the next few years, so it may be some time before we see any major plays from the non-profit.

In the meantime, let’s hope our world’s version of Skynet doesn’t send a skeletal tin-man back in time to stop Mr. Musk and co. before then.

Taking a strategic play from Asimov’s novel ‘Foundation,’ OpenAI is looking to attract the world’s greatest minds into one place so they can freely share their results with all of society. So far, applicants have come from universities including Stanford, UC Berkeley and NYU. OpenAI is also hiring research engineers and scientists as the group focuses on deep learning research.

Deep learning uses neural networks composed of many layers to process raw data and build up abstractions, such as recognizing a picture of a cat.

The process of applying deep learning to teach a machine to identify a specific object is similar to teaching a young child what a pineapple is: repeated exposure results in memorization and association. Google’s DeepDream software operates on these principles.
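To make the idea concrete, here is a minimal sketch (not OpenAI’s code) of that “repeated exposure” process, written with the PyTorch library: a small network built from a few stacked layers is shown the same labeled examples over and over, nudging its weights a little each time. The tiny random “images” and the cat/not-cat labels are illustrative placeholders, not a real dataset.

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# A deep network is just several layers stacked on top of one another;
# each layer turns the previous layer's output into a more abstract representation.
model = nn.Sequential(
    nn.Linear(64, 32),   # raw pixel values -> low-level features
    nn.ReLU(),
    nn.Linear(32, 16),   # low-level features -> higher-level abstractions
    nn.ReLU(),
    nn.Linear(16, 2),    # abstractions -> scores for "cat" vs. "not cat"
)

# Placeholder training data: 100 fake 8x8 "images" flattened to 64 values each,
# with made-up labels (0 = not cat, 1 = cat).
images = torch.randn(100, 64)
labels = torch.randint(0, 2, (100,))

loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# "Repeated exposure": show the same examples to the network many times,
# adjusting its weights slightly after each pass until the associations stick.
for epoch in range(20):
    optimizer.zero_grad()
    scores = model(images)
    loss = loss_fn(scores, labels)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```

Real systems, like the deep convolutional networks behind DeepDream, are far larger and are trained on millions of actual photographs, but the training loop follows the same basic pattern.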

OpenAI’s warnings are clear: “It’s hard to fathom how much human-level AI could benefit society and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

So will the company advocate for government regulation of AI? How will it act against the inevitable militarization of robotics? Will Ultron defeat the Avengers? These questions and more will hopefully be answered when OpenAI releases more information.

For more information, visit openai.com or follow the organization on Twitter at @open_ai.