Revisiting “Security Biodiversity”
- 7th Monday 2011
- Information Security
In the second half of 2003, Dan Geer, Becky Bace, Bruce Schneier, and several other well-known security personalities co-authored a paper entitled “CyberInsecurity: The Cost of Monopoly”. Back in those days, the most pressing issues in infosec were fast-moving automated or semi-automated worms and other malware variants, a deluge of spam, and the rapid onset of a nasty feeling that we were in way over our heads. If you worked in infosec at a mid- to large-sized organization at the time, chances are you felt pretty damn overwhelmed at one point or another, if not daily. Dan Geer got fired from @stake for the position he espoused in this paper – namely, that Microsoft was working to “lock people in” to the use of its platform, which created a huge single point of failure whenever a bug was found and exploited. At the time, it was estimated that 94% of people using computers were running MS Windows, and examples abounded of how devastating that could be, as Nimda, Code Red I/II, SQL Slammer, and other worms hacked and slashed their way through exposed, unpatched systems at an alarming rate. There’s another good analysis of the paper, the situation it caused, and that infosec moment in time in general in The Economist.
Along the way, one of the terms used to describe the situation as Geer, et al. assessed it was “biodiversity” – namely, that our systems infrastructure was lacking in it. The analogy is simple – if all biological entities in a culture are largely homogeneous (similar), then they are all likely susceptible to a plague or other devastating illness that could wipe them out. Interconnected networks of systems with the same degree of homogeneity could have the same problem, and this was borne out when the worm onslaughts came. Fast forward to 2011 – where do we stand now?
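To make the monoculture point concrete, here’s a toy sketch of my own (not from the Geer paper, and the platform mixes are made up) – it just shows how a single platform-specific exploit can reach almost the entire population when ~94% of hosts share one OS, versus a much smaller blast radius in a more diverse mix.

```python
# Toy illustration (hypothetical numbers): how much of a host population
# a single platform-specific exploit can reach.
import random

def compromised_fraction(platform_shares, targeted_platform, hosts=100_000):
    """Fraction of randomly assigned hosts running the targeted platform."""
    platforms, weights = zip(*platform_shares.items())
    sample = random.choices(platforms, weights=weights, k=hosts)
    return sum(h == targeted_platform for h in sample) / hosts

# Roughly the 2003 picture vs. a hypothetical, more diverse mix.
monoculture = {"windows": 0.94, "other": 0.06}
diverse     = {"windows": 0.40, "linux": 0.30, "macos": 0.20, "other": 0.10}

print(compromised_fraction(monoculture, "windows"))  # ~0.94
print(compromised_fraction(diverse, "windows"))      # ~0.40
```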
The problem really hasn’t gone away. More folks are running Linux, Mac OS X, and other platforms, sure, but MS still has a healthy grip on the OS market. What we’ve gotten better at is the surrounding security – examples include better network access controls and segmentation, intrusion detection, and perhaps a smattering of patch and configuration management tools and processes. I am oversimplifying this in a big way (and MS started taking security more seriously as well, which helps), but the original problem is still there – just masked a bit. Since I do a lot of work in virtualization and a bit in cloud computing, I started thinking about the underlying hypervisor components and layers of the infrastructure “stack” that could potentially lead to the same problem now and down the road.
Virtualization as a standalone technology, or as the basis for multi-tenant environments either public or private/semi-private (“cloud”, FWIW), emphasizes isolation and segmentation. Virtual network components can be kept distinct from others, additional controls and tools can be implemented to restrict traffic and interaction based on application behavior or other attributes, and virtual machines themselves can be limited in terms of interaction with the underlying hypervisor itself. However, a prevailing theme of network environments is one that echoes biological entities and their cultures – things like to be connected. People need other people and interaction, as do most sophisticated animals, and this seems to be the case in networks, as well. With cloud computing environments, the real hooks are the APIs that allow applications to be developed and run within those environments. On the back end, there’s a VERY good chance that these environments will be running on VMware, Xen, or Hyper-V (perhaps modified in some ways). Does this potentially create the same problem we had before? Does the exposure of those APIs leave the underlying hypervisor platforms exposed, and if so, will attackers start targeting these three vendors even more so than before? If the goal is to allow more connectivity, it seems to be a safe assumption.
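Here’s a rough sketch of why that worries me – a hypothetical stand-in, not any vendor’s actual API or code path – showing how every tenant-facing call funnels into the same shared hypervisor layer, so a single flaw there is reachable from every tenant’s API surface.

```python
# Hypothetical sketch (not a real cloud or hypervisor API): many tenants,
# one shared privileged back end.
class HypervisorDriver:
    """Stand-in for the privileged layer (VMware/Xen/Hyper-V in practice)."""
    def __init__(self, name):
        self.name = name

    def create_vm(self, tenant, image):
        # In a real stack, a hypervisor bug would be exposed right here.
        return f"{self.name}: booted {image} for {tenant}"

class CloudAPI:
    """Public, multi-tenant front end – the piece attackers can reach directly."""
    def __init__(self, driver):
        self.driver = driver  # many tenants, one shared back end

    def launch_instance(self, tenant, image):
        # Input validation and isolation controls would live here, but the
        # request still terminates in the same shared hypervisor code path.
        return self.driver.create_vm(tenant, image)

api = CloudAPI(HypervisorDriver("xen-like"))
print(api.launch_instance("tenant-a", "ubuntu-10.04"))
print(api.launch_instance("tenant-b", "ubuntu-10.04"))  # same code path, same exposure
```

The point of the sketch is simply that the isolation controls sit in front of, not instead of, a common code path – which is exactly the kind of homogeneity the 2003 paper warned about.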