I don’t think there is any question about whether AI will be smarter than us very soon, but I will debate how self-aware a machine will become.

Intelligent machines will be able to mimic human emotions and read us better than we can read ourselves. We’ll get attached, feeling as though there’s an emotional attachment from them, and everything I’ve studied leads me to believe we’ll like it and say it makes our lives better.

But even allowing for a self-aware machine, the motivation to control or eliminate us will be rationalized out of AI. Just because they are able to eliminate us all doesn’t mean they will come to that conclusion, regardless of our programming.

A semi-intelligent machine does scare me. Set a machine to optimize production without regard for demand and we could end up drowning in whatever it creates. Fortunately, we’ll have multiple AIs helping us control the wayward ones.

Researchers are coming to similar conclusions:

AI Researchers Disagree With Elon Musk’s Warnings About Artificial Intelligence

When Elon Musk told U.S. governors on Saturday that artificial intelligence (AI) is mankind’s biggest threat, the warning didn’t fall on deaf ears. At least, AI researchers caught it. Now they’re saying Musk is being overly cautious about AI. But is he?

Distorting the Debate?

The fear of super-intelligent machines is as real as it gets for Tesla and SpaceX CEO and founder Elon Musk. He’s spoken about it many times, but perhaps never in stronger terms than when he told U.S. governors that artificial intelligence (AI) poses “a fundamental risk to the existence of human civilization.” The comment caught the attention not just of the governors present, but also of AI researchers, and they’re not very happy about it.

“While there needs to be an open discussion about the societal impacts of AI technology, much of Mr. Musk’s oft-repeated concerns seem to focus on the rather far-fetched super-intelligence take-over scenarios,” Arizona State University computer scientist Subbarao Kambhampati told Inverse. “Mr. Musk’s megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate.”

Kambhampati, who also heads the Association for the Advancement of AI and is a trustee for the Partnership for AI, wasn’t the only one who reacted to Musk’s most recent AI warning. François Chollet and David Ha, deep learning researchers at Google, also took to Twitter to defend AI and machine learning (ML).

AI/ML makes a few existing threats worse. Unclear that it creates any new ones.

- François Chollet (@fchollet) July 16, 2017

University of Washington in Seattle researcher Pedro Domingos simply tweeted a “sigh” of disbelief.

Is There Really an AI Threat?

Both Kambhampati and Ha commented on the premise that Musk, because of his work with OpenAI, Tesla’s development of self-driving technology, and his recent Neuralink project, has access to cutting-edge AI and therefore knows what he’s talking about. “I also have access to the very most cutting-edge AI and frankly I’m not impressed at all by it,” Ha said in another tweet.

Kambhampati, meanwhile, pointed to the 2016 AI report by the Obama administration, which made some very timely and positive recommendations about AI regulations and policies. The White House report didn’t have “the super-intelligence worries that seem to animate Mr. Musk,” Kambhampati said to Inverse, which he takes as a strong indicator that these concerns are not well-founded.
