Hawking and others claim artificial intelligence could result in the downfall of mankind. Per the article:
Stephen Hawking has warned that artificial intelligence has the potential to be the downfall of mankind... Dismissing the implications of highly intelligent machines could be humankind's
"worst mistake in history", write astrophysicist Stephen Hawking, computer scientist Stuart Russell, and
physicists Max Tegmark and Frank Wilczek in the Independent... "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," they write. "Whereas the short-term impact of AI
depends on who controls it, the long-term impact depends on whether it can be
controlled at all."... And what are we humans doing to address these concerns, they ask. Nothing.
Hawking indeed warned about welcoming aliens. Per the article below:
"If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans," he said.
Isn't Hawking the one who warns about evil aliens, too?
Maybe you're thinking of Michio Kaku? He observed that we've been beaming electromagnetic signals out to the universe at a great rate - starting a few decades ago.
It's a one-time experiment with shouting out to the universe "Hey, we're here" and seeing if anything shouts (or comes) back ...
Computerized phone voice-recognition trees - artificial stupidity.
It often helps, and won't hurt, to keep on saying (shouting) "Customer service!" "Customer service!"
Your dilemmas cracked me up :)
I do remember Hawking worrying about evil aliens. I suppose others have too. I think it's possible, but my guess is that it's a lot more likely that they are benign, even helpful.
Artificial intelligences are a far more likely concern than aliens, and human-controlled technology is the scariest of all.
The time of Artificial Intelligence is not close at hand. Before it comes to that, human beings will prevent it.
I wouldn't bet the farm on that. Moore's Law continues to hold in predicting ever-smaller circuitry alongside ever-greater processor and storage capacity and density. How much further we have to go to make a neural network feasible is a question mark, but there are people out there pursuing it, trust me.
Interesting side note: When I was at The Unbelievers screening in Columbus, I think it was Lawrence Krauss who estimated that a silicon-based simulation of the human brain (a device requiring 10 watts of power) would need 10 TERAWATTS to accomplish the same result. That, as they say, is a fair amount of juice. Still:
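Just to make the scale of that quoted comparison concrete, here's a quick back-of-the-envelope check (both wattage figures are the commenter's recollection of Krauss's estimate, not independently verified):

```python
# Rough check of the quoted power figures: ~10 W for the human brain
# vs. an estimated 10 terawatts for a silicon-based simulation of it.
brain_watts = 10            # quoted figure for the human brain
simulation_watts = 10e12    # 10 TW for the silicon simulation

ratio = simulation_watts / brain_watts
print(f"The simulation would need {ratio:.0e} times the brain's power")
```

That's a factor of a trillion, which is why "a fair amount of juice" is an understatement.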
Everything is theoretically impossible, until it is done.
-- Robert A. Heinlein
There can be no doubt that if true AI were realized, Homo sapiens could - and I must emphasize - COULD be in deep sneakers. The issue would be one of implementation. What kind of network access would this AI have? What means would be at its disposal, if any? An AI computer that could learn from the internet but had limited or no outbound access to it would be safer, in that it could grow its own abilities while its capacity for manipulating external systems was frustrated, preferably by hardware whose functions could not be altered by the machine's efforts. As for prophylactic measures, something as simple as a hard-wired EMO (emergency machine off) switch on the computer's mains power, dependent on no other electronics, would be a highly effective deterrent to any chance of the machine usurping power from those who created it.
Treat the problem as you would handling a snake - recognize the danger while also recognizing that the means to control said danger do exist, so long as care is taken and deliberate thought is used. Forewarned is forearmed.
Is there a reason why AI would be motivated to usurp power?
We do have technology used to control people of course (like traffic lights and automated speed-monitoring devices). Not that the technology itself is motivated to control us, though :)
And we have technology that ends up controlling us, like SB's quirky car, and machines that pester you with reminder beeps, etc. etc.
It seems like the fear is that these effects would somehow morph into AI with a drive to power though.
How would this happen - why would AI systems be devised to want power? People struggle for control because we evolved that way, by natural selection. But the AI systems would be evolving by our selection. Unless we actually chose to design AI to have a drive to power, or we designed the AI to be subject to selection for a drive to power, it wouldn't develop a drive to power.
There are many sci-fi stories about machines taking over - but without explaining why they want to take over.
Never heard that question before, Luara. I'm going to think about it.
For now, I wonder if they wouldn't just get annoyed enough at our stupidity to want to eliminate us.
Same question applies there - why would they be designed to get annoyed at our stupidity?