In 2010, McAfee, the antivirus company, shipped a faulty update that crippled Windows XP machines around the world. McAfee’s Chief Technology Officer at the time was George Kurtz.
George didn’t let that infamous milestone end his career. He went on to found CrowdStrike, the company responsible for last month’s faulty update that crippled Microsoft Windows machines around the world. Who says you can’t teach old dogs to get struck by lightning twice . . . or something?
There are a lot of lessons to be learned from this event—both direct and general. Let’s start with the short version.
Jennifer Huddleston explains the problem, centering it on the “Brussels Effect”:
At least part of the blame for CrowdStrike’s catastrophic impact belongs to European regulations that require Microsoft to structure certain features for compliance purposes, opening up potential security liabilities.
While the CrowdStrike incident may be the most timely and significant example, it is just one of the many ways European regulation is increasingly impacting American consumers and providing them with less innovative and less secure products.
Why is that? Let’s start with the CrowdStrike situation. A 2009 agreement with European regulators required that Microsoft give other security services the same level of access to its Windows system that it has itself. The result is that when a flaw in a security system—like CrowdStrike’s faulty update—occurs, it can have far more devastating effects on the entire operating system and therefore a broader global impact.
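To make the access point concrete: the CrowdStrike sensor runs as a Windows kernel-mode driver, and the crash was reportedly triggered by an out-of-bounds memory read while parsing a malformed content update (a “channel file”). Below is a minimal sketch of that bug class in C; the names and logic are hypothetical illustrations, not CrowdStrike’s actual code.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical parser for a binary update file. The incident class
 * reduces to reading an offset the file claims without checking it
 * against the data actually present. */
static int read_field(const unsigned char *data, size_t len, size_t offset)
{
    if (offset >= len)      /* the bounds check that is fatal to omit */
        return -1;
    return data[offset];
}

int main(void)
{
    unsigned char channel_file[8] = {0};

    /* In a user-mode program, omitting the check above at worst
     * crashes this one process. In a kernel-mode driver, the same
     * out-of-bounds read faults the kernel itself, and Windows
     * responds with a bugcheck (blue screen) for the whole machine,
     * on every boot until the bad file is removed. */
    printf("field: %d\n", read_field(channel_file, sizeof channel_file, 3));
    return 0;
}
```

The same defect that would be a minor bug in user space becomes a machine-wide, reboot-looping outage at kernel level, which is why who gets that level of access, and on what terms, matters so much.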
Moving on to the bigger lessons, Zvi Mowshowitz gives one of his typical deep dives. The piece is VERY long, but it is equally important. He breaks down the event, the various misplaced criticisms and exploitative commentary (e.g., from Lina Khan), and ties it back to AI risk.
These questions can have very different answers for catastrophic or existential risk, versus mundane risk.
For mundane risk, you by default want your systems to fail at different times in distinct ways, but you need to worry about long dependency chains where you are only as strong as the weakest link. So if you are (for example) combining five different AI systems that each are the best at a particular subtask, and cannot easily swap them out in time, then you are vulnerable if any of them go haywire.
For existential or catastrophic risk, it depends on your threat model.
Any single rogue agent under current conditions, be it human or AI, could potentially have set off the CrowdStrike bug, or a version of it that was far worse. There are doubtless many such cases. So do you think that ‘various good guys with various AIs’ could then defend against that? Would ‘some people defend and some don’t’ be sufficient, or do you need to almost always (or actually always) successfully defend?
I am very skeptical of the ‘good guy with an AI’ proposal, even if such defenses are physically possible (and I am skeptical of that too). Why didn’t a ‘good guy with a test machine or a debugger’ stop the CrowdStrike update? Because even if there was a perfectly viable way to act responsibly, that does not mean we are going to do that if it is trivially inconvenient or is not robustly checked.
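His weakest-link point above is just arithmetic. Here is a back-of-the-envelope sketch; the five components and the 0.1% per-component failure rate are illustrative assumptions, not estimates of any real system, and independence is assumed throughout.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double p = 0.001;  /* assumed failure chance of any one component */
    const int    n = 5;      /* five specialized systems, per Zvi's example */

    /* Series (dependency chain): the pipeline fails if ANY link fails. */
    double series_fail = 1.0 - pow(1.0 - p, (double)n);

    /* Parallel (diversified redundancy): fails only if ALL copies fail. */
    double parallel_fail = pow(p, (double)n);

    printf("chain of %d:    %g%% chance of failure\n", n, 100.0 * series_fail);
    printf("redundant x%d:  %g%% chance of failure\n", n, 100.0 * parallel_fail);
    return 0;
}
```

Chaining turns a 0.1% component into a roughly 0.5% pipeline; redundancy turns the same component into a one-in-a-quadrillion event. Whether an architecture multiplies failures or absorbs them is most of the ballgame.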
There are many lessons and parallels here:
Reverse regulatory capture, whereby the regulator captures a willingly compliant company: the company complies out of fear of destruction at the regulator’s hands, with protection as the inducement. Note the mafia parallels.
How regulation becomes check-the-box compliance outsourced to poorly incentivized entities that suffer from insulated incompetence. There is an interesting analogue to the FDIC et al., which create moral hazard. The market is a strong and dynamic regulator. You neuter it at your own peril.
Diversification beats control, and resilience beats robustness.
Good rules and practices beat good actors. There is a great analogue here to why we cannot rely on electing the “right” people: “Once we have the right people in place…” is a game of Russian roulette. It takes only one failure to bring ruination.
As he concludes in passing, “This is also a damn good reason to not ban or eliminate physical cash, in general.”