Oct 15, 2018

From HHS/OCR, this record-setting announcement:

Anthem, Inc. has agreed to pay $16 million to the U.S. Department of Health and Human Services, Office for Civil Rights (OCR) and take substantial corrective action to settle potential violations of the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules after a series of cyberattacks led to the largest U.S. health data breach in history and exposed the electronic protected health information of almost 79 million people.

The $16 million settlement eclipses the previous high of $5.55 million paid to OCR in 2016.

Anthem is an independent licensee of the Blue Cross and Blue Shield Association operating throughout the United States and is one of the nation’s largest health benefits companies, providing medical care coverage to one in eight Americans through its affiliated health plans.  This breach affected electronic protected health information (ePHI) that Anthem, Inc. maintained for its affiliated health plans and any other covered entity health plans.

On March 13, 2015, Anthem filed a breach report with the HHS Office for Civil Rights detailing that, on January 29, 2015, they discovered cyber-attackers had gained access to their IT system via an undetected continuous and targeted cyberattack for the apparent purpose of extracting data, otherwise known as an advanced persistent threat attack.  After filing their breach report, Anthem discovered cyber-attackers had infiltrated their system through spear phishing emails sent to an Anthem subsidiary after at least one employee responded to the malicious email and opened the door to further attacks. OCR’s investigation revealed that between December 2, 2014 and January 27, 2015, the cyber-attackers stole the ePHI of almost 79 million individuals, including names, social security numbers, medical identification numbers, addresses, dates of birth, email addresses, and employment information.

In addition to the impermissible disclosure of ePHI, OCR’s investigation revealed that Anthem failed to conduct an enterprise-wide risk analysis, had insufficient procedures to regularly review information system activity, failed to identify and respond to suspected or known security incidents, and failed to implement adequate minimum access controls to prevent the cyber-attackers from accessing sensitive ePHI, beginning as early as February 18, 2014.

In addition to the $16 million settlement, Anthem will undertake a robust corrective action plan to comply with the HIPAA Rules.  The resolution agreement and corrective action plan may be found on the OCR website at http://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/anthem/index.html.

Oct 12, 2018

Here’s what appears to be a serious breach involving Google Drive and syncing. Henrietta Cook reports:

Confidential files detailing high school students’ medical conditions, including anxiety issues and those at risk of suicide, have been found on a Melbourne schoolgirl’s iPad.

The document contains photos, names and medical and family details of years 7 to 12 students at Manor Lakes P-12 College in Wyndham Vale in Melbourne’s south-west.

[…]

The 14-year-old girl discovered the document on her iPad last month and said she had no idea how it got there.

Now read the following explanation from the Education Department carefully, because this looks very much like what some people reported in Springfield, Missouri Public Schools:

He said the private student information had been inadvertently shared with one student.

He said in May, the student borrowed a teacher’s laptop because she did not have her own device. The teacher sat next to the student while she completed an assignment on the borrowed computer, the spokesman said.

The student accessed her own Google documents on the machine.

The spokesman said that when the teacher later used her laptop the document they opened synced with the student’s account. This meant it turned up on the student’s own Google drive.

The spokesman said there was no evidence that private and personal school documents had been obtained by anyone other than the individual student.

But the girl’s father said that his daughter never used the teacher’s laptop.

“She doesn’t recall using a teacher’s device at all this year,” he said.

Read more on the Canberra Times. How did the teacher’s laptop sync with the student’s own Google Drive? What configuration hell led to this mess? What should the district have done to prevent this from ever happening? COULD the district have prevented it, or is there something in Google’s G Suite coding that pretty much makes this kind of nightmare not only predictable but inevitable?
I’ll be reporting more on the Springfield case in the near future, but it’s interesting – albeit frustrating – that the reporting on this Melbourne case does not do a deeper dive into how this happened and how it could have been prevented – if it could have been.
I know there are those whose immediate hypothesis will be poor password hygiene or poor browser hygiene on the part of the users (in this case, the teacher). But by now, Google has to know that poor password hygiene and poor browser hygiene exist. So why doesn’t its code take that into account? Or did it take it into account, but the district failed to follow directions? And how often do districts fail to configure Google products to be appropriately privacy-protective? Do Google’s coding and default settings take that into account?
Oct 11, 2018

HIPAA lawyer Matt Fisher has a thoughtful commentary inspired by an OCR investigation first reported on this site. Unlike the FTC, which has tended to demand 20-year monitoring plans as part of its settlements with entities that have had data security breaches, OCR tends to take a more educative approach, without monetary penalties or long-term monitoring, in responding to breaches. But is that enough to satisfy those whose PHI may have been compromised or who may have suffered because of a breach? Should there be more monetary penalties or public disclosure? Matt writes, in part:

While behind the scenes resolutions work very well for the entities involved, a different perspective should also be considered. Specifically, the perspective of the complainant if there is an alleged violation of a HIPAA requirement or the individuals whose protected health information is impacted in the event of a breach. In those instances, the aggrieved individuals may ask why more was not done to penalize the entity or impose some punishment given the harm to the individual that likely cannot be “remedied” in the individual’s eye. While retribution will not necessarily result in satisfaction, a very real human desire can arise to see it imposed regardless.

Given what should be a real consideration of not discounting the harm to individuals, should OCR pursue more enforcement actions that result in penalties or another form of public reprimand? The answer is not clear and not one subject to easy advocacy.

Read all of Matt’s commentary on Mirick O’Connell’s The Pulse.

Oct 11, 2018

Asia Times reports:

Chinese tech giant Alibaba warned users on Wednesday that they could be at risk from making cashless transactions or paying bills with its Alipay application on Apple gadgets, stressing that the security loophole was not the fault of its app but of the US firm.

An Apple security breach was blamed by Alibaba for a recent string of thefts after some disgruntled Alipay users complained that their accounts had been marauded.

Read more on Asia Times.

Oct 08, 2018

Douglas MacMillan and Robert McMillan report:

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal.

Read more on WSJ.

So Google is shutting down Google+ after finding a vulnerability that exposed the personal information of 500,000 users.

But that’s not the big story. The big story is that Google found the vulnerability, addressed it in March, and made a conscious decision NOT to disclose it back then, for fear of regulators.