The New York Times posted a very interesting article about the fear of AI and the need for security and controls. They claim many big companies are pumping the brakes on delivering AI without considering security and controls, but not because of the popular concerns. Misinformation, deepfake nudes, people losing their jobs, and students cheating, to name a few, are not the reasons. None of these, they point out, moves the mark enough to worry big companies. The big concern is robohacking, and it has boardrooms sweating.
The New York Times posted about robohacking as well HERE. I’ve also seen talks by Bruce Schneier arguing that the future SOC is AI vs AI (check out and bookmark his blog HERE). In summary, the threat of robohacking is this: threat actors used to spend tremendous amounts of time and resources finding vulnerabilities. It’s an extremely tedious task; however, there is an ocean of targets, as everything needs updates, and those updates can introduce net-new vulnerabilities. Many vulnerabilities require too much investment for the average hacker to spend the time weaponizing them … but what if that investment shrank to a simple AI chat? That is the fear. Robohacking allows the average threat actor to dramatically speed up vulnerability research and increase the number of targets. In an AI vs AI world, threat actors would have the advantage, as they only need one vulnerability to work while defenders have to deal with everything. Game over for defenders.
I can see how this threat translates to big companies: if you allow AI to become good enough to rapidly detect vulnerabilities, you will create an AI hacking spree leading to tons of PSIRT cases (product vulnerabilities), costing you and everybody else billions. Very interesting point, and one that can impact the boardroom.
Check out their post HERE. A very interesting point defending the need to control how AI is provided to the public.