There has been a TON of conversation about the risk of AI recently. Leading IT organizations such as Google, Cisco, and Microsoft have all shared concerns that AI could pose a threat similar to nuclear war. An easy-to-understand summary of where most leaders stand on this topic is the statement “with great power comes great responsibility”.
One of my favorite security thought leaders, Bruce Schneier, posted his thoughts on the topic HERE. He also agrees that the risk of AI belongs in the same room as nuclear war; however, it’s not like the Matrix movies, ending in human extinction. It’s more about the risks that will come from abusing the power of AI as the technology evolves, potentially leading humanity to terrible places. Here is one of Bruce’s thoughts on the topic.
“I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks are already prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.
Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.”
It’s worth checking out Bruce’s post as well as a replay of his RSA talk on this topic; the text from that talk can be found HERE.