Verizon-USSS 2011 Data Breach Investigations Report released – what do they know that we don’t?
The annual report based on breaches investigated by Verizon and the U.S. Secret Service (USSS) is out. On a first reading of the report and the available media coverage, the big headline seems to be that while the number of records lost is down significantly, the number of breaches is up significantly – and more small businesses are being attacked. Some of the key findings:
- 83% of victims were targets of opportunity (no change from previous year’s estimates)
- 92% of attacks were not highly difficult (+7%)
- 76% of all data was compromised from servers (-22%)
- 86% were discovered by a third party (+25%)
- 96% of breaches were avoidable through simple or intermediate controls (no change)
- 89% of victims subject to PCI-DSS had not achieved compliance (+10%)
- 92% stemmed from external agents (+22%)
- 17% implicated insiders (-31%)
- <1% resulted from business partners (-10%)
- 9% involved multiple parties (-18%)
- 50% utilized some form of hacking (+10%)
- 49% incorporated malware (+11%)
- 29% involved physical attacks (+14%)
- 17% resulted from privilege misuse (-31%)
- 11% employed social tactics (-17%)
Although much of the report makes sense to me and is fairly congruent with what I’ve been seeing as I compile reports for this blog, it strikes me as unfortunate that Verizon-USSS do not include any comparative analyses involving the many other breach reports out there, covering incidents they may not have investigated. They recorded 761 data breaches investigated globally for 2010. Possibly (if not likely), many of those are breaches that the USSS and/or Verizon investigated but that were never revealed in the media. At the same time, ITRC, which focuses on U.S. data breaches that may lead to ID theft, reported 662 incidents during 2010, and DataLossDB.org reported 405 U.S. breaches. How much overlap is there between Verizon’s 761 cases and ITRC’s and DataLossDB.org’s reports for U.S. incidents? And would inclusion of cases reported in the media or in these databases change anything in terms of our understanding of breach patterns?
All sources seem to agree that the number of records compromised was down in 2010. But what about patterns of breaches? Does the Verizon analysis agree with those generated by DataLossDB or ITRC? While Verizon reports that approximately half of their investigated breaches involved hacking, DataLossDB’s 2010 data indicate that only 63 of 464 incidents (13.6%) involved hacking, and ITRC’s analyses of their data indicated that hacking was involved in 17.1% of cases. Similarly, while Verizon reports that 49% of their 2010 cases involved malware, analysis of DataLossDB 2010 breaches indicates that only 15 of 464 incidents (3%) reported malware involvement. Those are fairly huge differences.
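Just how huge? A quick two-proportion z-test on the hacking figures gives a sense of it. A minimal sketch in Python, using the counts quoted above; note that treating Verizon’s “approximately half” of 761 cases as roughly 381 hacking incidents is my assumption for illustration, not a figure from the report:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: could these rates plausibly be equal?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided p-value
    return p1, p2, z, p_value

# Verizon: ~50% of 761 breaches involved hacking (~381 cases, my estimate);
# DataLossDB: 63 of 464 2010 incidents involved hacking (13.6%).
p1, p2, z, p_val = two_proportion_z(381, 761, 63, 464)
print(f"Verizon {p1:.1%} vs DataLossDB {p2:.1%}: z = {z:.1f}, p = {p_val:.2g}")
```

Even allowing for the rough estimate on the Verizon side, a z-statistic near 13 says these two datasets are not describing the same population of breaches by chance.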
Verizon’s report is clear about its methodology and inclusion criteria, and there is value in keeping those criteria consistent across years. There is also value in analyzing breaches that have undergone some degree of confirmation or validation. But consider the differences noted above together with the fact that Verizon reported/investigated only 141 incidents for 2009, when ITRC reported 498 U.S. incidents and DataLossDB.org reported 538 U.S. incidents and 618 incidents globally. This raises the possibility in my mind that some of what appears to be a huge increase in the number of incidents from 2009 to 2010 in the Verizon report may be an artifact of their 2009 figures being much lower than what was reported publicly. Indeed, as more entities report cases to law enforcement and/or more businesses arrange for Verizon to investigate breaches, their yearly numbers may climb somewhat artifactually.
I wish Verizon would attempt to reconcile their database with other available databases. The basic patterns uncovered may not change, but we won’t really know with any confidence until it’s done. Of course, if Verizon and the USSS would like to disclose all of the breaches we don’t already know about, we’d be happy to enter them in our databases to see if they change impressions based on what’s publicly available. But since I don’t think they’ll do that, I would simply encourage them to address this simple question with some statistical analysis:
Is there a qualitative difference or any significant quantitative difference in pattern between the cases we know about and the cases Verizon-USSS may know about that weren’t publicly disclosed?
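One straightforward way to answer that question would be a chi-square test comparing category counts (hacking, malware, insider misuse, etc.) between publicly disclosed cases and the cases only Verizon-USSS know about. A minimal sketch for a single 2x2 comparison; the counts here are made up purely for illustration, since no such breakdown has been published:

```python
from math import sqrt, erfc

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], df = 1."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p_value = erfc(sqrt(chi2 / 2))   # chi-square(1 df) tail via erfc
    return chi2, p_value

# Hypothetical, made-up counts for illustration only: suppose 120 of 400
# publicly disclosed cases involved hacking, versus 250 of 360 undisclosed.
chi2, p = chi2_2x2(120, 280, 250, 110)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```

If a test like this came back significant on the real counts, it would tell us that the publicly known breaches are a biased sample and that conclusions drawn from them alone need revisiting.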
I tweeted this query to Alex Hutton, who says he’ll address this question next week. I look forward to seeing what he has to say.