
Password Protected

Data Privacy & Security News and Trends

UPDATE: Got Data? Actual Harm Not Required for FTC Enforcement Action for Lax Security Measures

Posted in Consumer Privacy/FTC, Data Security, FTC enforcement

As anticipated, things are getting even more exciting with the case previously covered in Password Protected. Specifically, LabMD is appealing the landmark data security case brought against it by the Federal Trade Commission (“FTC”), which examines an alleged data breach despite the absence of identifiable harm. The case is poised to become a major driver of data security practices because it reveals the FTC’s expectations regarding reasonable data security practices and, if upheld, would solidify the FTC’s authority to bring such enforcement actions.

Prior to the appeal, the FTC overturned the ALJ decision and found that an enforcement action was appropriate even though there was no evidence that any consumers were actually harmed. The decision was notable for two reasons: first, it illustrated the seriousness with which the FTC takes data security; second, it confirmed the FTC’s broad data security enforcement authority.

Unsurprisingly, LabMD has appealed the decision and asked the U.S. Court of Appeals for a stay of the FTC Final Order pending review of the substantive appeal. LabMD maintains there are several unresolved legal issues, including whether the FTC can enforce data security standards as it did in LabMD’s case, particularly in the absence of identifiable harm, and whether the FTC may exercise jurisdiction under Section 5 of the FTC Act over a HIPAA-covered entity. The FTC, in its Opposition to the Stay, reiterates that consumers continue to suffer harm until the Final Order is implemented.

The outcome of the appeal carries several future implications for data security practices. If the FTC wins, businesses will be expected to maintain extensive and robust security procedures. The appeal also sets precedent for the FTC to maintain its current level of enforcement in consumer protection data privacy cases. In other words, a win for the FTC paves the way for the agency to continue exercising its expansive enforcement authority over data security issues.

This case is far from over. In the meantime, the fact remains that when it comes to the FTC there is no excuse for lax data security – either protect your data now, or pay the price later.

Better Late Than Never? Yahoo Reveals 500 Million Affected From 2014 Hack

Posted in Cybersecurity, Data breach, Data Security, Identity Theft

Quick to blame a state-sponsored organization, Yahoo announced that at least 500 million of its account holders had their information stolen – in 2014.

A statement released on September 22, 2016, by Yahoo’s Chief Information Security Officer, Bob Lord, says that the hackers likely have “names, email addresses, telephone numbers, dates of birth, hashed passwords (the vast majority with bcrypt) and, in some cases, encrypted or unencrypted security questions and answers.” Yahoo says that the “on-going” investigation suggests no payment card data or bank account information was stolen. Nevertheless, it advises users to monitor their accounts for suspicious activity.
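
Yahoo’s note that most of the stolen passwords were hashed with bcrypt matters: a slow, salted hash limits how quickly attackers can turn a stolen database into working credentials. As a rough illustration of the principle (using Python’s standard-library scrypt in place of bcrypt itself, which requires a third-party package; the function names here are ours, not Yahoo’s):

```python
import hashlib
import hmac
import os

def hash_password(plaintext: str) -> tuple[bytes, bytes]:
    # A random per-password salt defeats precomputed "rainbow table"
    # attacks; scrypt's cost parameters make bulk brute-forcing expensive.
    salt = os.urandom(16)
    digest = hashlib.scrypt(plaintext.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(plaintext: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(plaintext.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

A database of hashes produced this way still forces an attacker to guess each password individually, which is why Yahoo singled out bcrypt coverage in its disclosure.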

At this point Yahoo has revealed very little about the investigation. But its statement did say that there is “no evidence that the state-sponsored actor is currently in Yahoo’s network.”

What the statement noticeably does not say is why it took Yahoo so long to disclose the hack.  In August, cybercriminal “Peace” claimed to have account information for over 200 million Yahoo users. At the time, Yahoo confirmed it was aware of the claim, but it was unclear if it was legitimate and Yahoo made no statement regarding the security of user information. This raises the question: when did Yahoo become aware of the hack?

As the investigation continues, Yahoo will be expected to answer that question, as well as several others. And while it has barely been 24 hours since the announcement, there are already takeaways from Yahoo’s breach.  First, any business with sensitive information must always think defensively.  Assume your network is constantly under attack and prepare accordingly. Otherwise, be ready to explain to shareholders and customers why your network was compromised.  Second, routinely monitor your network – just because you did not detect a breach does not mean one did not occur.  In other words, don’t wait for a cybercriminal on the dark web to start selling sensitive information stolen from your network before you secure it.

And lastly, do not become complacent with your security. From low end hackers to state-sponsored organizations, criminals are constantly crafting new ways to steal data so your network must be equipped to handle the attacks.  Because whether we like it or not, data breaches are here to stay – just ask Yahoo and about 500 million users.

New York Raises the Bar – Will Other States Follow?

Posted in Cybersecurity, Financial Services Information Management

On September 13, 2016, the New York Department of Financial Services (DFS) proposed new first-in-the-nation cybersecurity regulations (Regulations) that would require banks and other financial institutions to adopt minimum cybersecurity standards. In some ways the Regulations are consistent with existing Federal Financial Institutions Examination Council (FFIEC) cybersecurity guidelines and FFIEC’s Information Technology (IT) Examination Handbook (IT Handbook). However, the Regulations go beyond FFIEC standards in certain ways.

If adopted, New York would also be the first state in the nation to require a prescriptive cybersecurity program for licensed financial institutions. New York banks regulated by federal banking agencies will need to review existing FFIEC cybersecurity programs to confirm such programs comply, but many insurance companies and other financial institutions licensed and regulated by the DFS may be challenged to comply by the proposed January 1, 2017 effective date, even taking into account a 180-day compliance transition period under the Proposed Regulations.   The Proposed Regulations target an understandable concern, however, in light of the economic harm caused by cyberattacks, their increasing frequency and sophistication (click here for our post on the recent SWIFT hacks), and New York’s status as a financial center. The Proposed Regulations follow DFS’s February 2015 Report on Cybersecurity in the Insurance Sector which found that 23% of New York insurance companies had been the target of “phishing” or other email scams and DFS’s May 2014 Report on Cybersecurity in the Banking Sector which found that 21% of banks had experienced phishing attacks.

It is almost certain that other states will follow and require financial institutions to adopt cybersecurity programs. In the future, a patchwork of state law may apply, depending on how broadly each state’s standards reach financial institutions doing business there.  Firms should focus on proactively developing a comprehensive, robust cybersecurity program that can evolve appropriately, positioning them to comply in any other state that follows DFS’s lead.

Is my firm in-scope?

The Regulations apply to entities licensed, required to be licensed, or subject to other registration under New York banking, insurance or financial services laws (Covered Entities). The Regulations include an exemption that would apply only to a small subset of smaller institutions.

What do the Regulations require?

The Regulations prescribe written policies and procedures and require Covered Entities to adopt cybersecurity programs designed to ensure the safety and soundness of the institution by safeguarding customer “nonpublic information”. The Regulations’ definition of “nonpublic information” is broader than FFIEC’s, so Covered Entities already complying with FFIEC may find the new definition presents a gap that needs to be bridged.

  1. Establishment of a program

The institution would be required to adopt a formal cybersecurity program around six core functions, which are similar to FFIEC’s five cybersecurity preparedness functions, with the additional requirement to report to DFS specifically.

  2. Adoption of a cybersecurity policy

Federally regulated banks should have a written cybersecurity policy based on the Office of the Comptroller of the Currency (OCC) Part 30 “safety and soundness” standards, and FFIEC examination guidelines. However, Covered Entities must review cybersecurity policies to confirm that they address the issues required by the Regulations.

  3. Chief Information Security Officer

The FFIEC IT Handbook describes the role and responsibilities of the Chief Information Security Officer (CISO). The Regulations go beyond the FFIEC guidelines and require Covered Entities to formally designate a CISO. The CISO must report, at least bi-annually, to the board of directors in relation to specified topics.  Covered Entities may outsource the CISO function, but remain responsible for CISO requirements.

  4. Third party service providers

Covered Entities would be required to adopt policies and procedures to ensure the security of information systems and nonpublic information accessible by third parties. The Regulations expand upon the OCC’s October 2013 Third Party Risk Management Guidance and the Federal Reserve Board’s December 2013 Guidance on Managing Outsourcing Risk. Covered Entities must include preferred provisions in contracts with third party service providers. It is unclear whether the standards in the Regulations should be added to existing agreements.  If not already required, institutions should confirm that the applicable provisions are included in their policies, procedures and agreements with third party service providers.

  5. Additional requirements
  • Testing and assessments – The Proposed Regulations would require penetration tests at least annually and vulnerability assessments at least quarterly. FFIEC guidelines do not prescribe any specific frequency for penetration tests (so-called Pen Tests) or vulnerability assessments. This could present a compliance challenge for community banks and smaller financial institutions, many of which perform vulnerability assessments on an annual basis.
  • Audit trail – Track and maintain records, and all data relating to system access, for at least six years.
  • Access – Limit privileges to information systems that provide access to nonpublic information solely to those individuals who require such access.
  • Application security – Programs developed in-house must be covered by cybersecurity programs that ensure secure development and include written policies and procedures for assessing and testing application security, which must be reviewed annually by the CISO.
  • Risk assessment – Conduct a risk assessment annually and include criteria for identifying and assessing risks.
  • Personnel – Employ (or outsource) IT personnel sufficient to manage the institution’s cybersecurity risk.
  • Multi-factor authentication – Use multi-factor authentication for any individual accessing the institution’s internal systems or database servers. FFIEC encourages multi-factor authentication for mobile financial services, but does not require it for individuals accessing internal systems or servers.
  • Limitations of data retention – Implement policies and procedures for the “timely” destruction of nonpublic information that is no longer needed (except where such information is required to be retained). The Regulations do not define “timely”.
  • Training – Adopt policies and procedures designed to monitor authorized users’ activities, detect unauthorized use of information systems and require personnel to attend training.
  • Encryption – Encrypt all nonpublic information, both in transit and at rest, unless infeasible.
  • Incident response plan (IRP) – The Regulations require an IRP similar to FFIEC’s, except that the Regulations do not specifically address any requirements to file SARs or give notice to information sharing organizations; however, they require notification to DFS within 72 hours of becoming aware of a cybersecurity event and delivery of a certification to DFS of compliance with the relevant cybersecurity program annually by January 15.
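
Of the requirements above, the 72-hour DFS notification window is the most mechanical, and the kind of rule a compliance team can track programmatically. A minimal sketch (the function names and the use of UTC timestamps are our assumptions, not language from the Regulations):

```python
from datetime import datetime, timedelta, timezone

# The proposed Regulations start the clock when the institution becomes
# aware of the cybersecurity event, not when the event occurred.
DFS_NOTIFICATION_WINDOW = timedelta(hours=72)

def dfs_notification_deadline(aware_at: datetime) -> datetime:
    """Latest time by which DFS must be notified of a cybersecurity event."""
    return aware_at + DFS_NOTIFICATION_WINDOW

def is_notice_timely(aware_at: datetime, notified_at: datetime) -> bool:
    """Was DFS notified within the 72-hour window?"""
    return notified_at <= dfs_notification_deadline(aware_at)

aware = datetime(2017, 3, 1, 9, 0, tzinfo=timezone.utc)
print(dfs_notification_deadline(aware))  # 2017-03-04 09:00:00+00:00
```

In practice an institution would anchor this clock to a documented incident record, since the deadline runs from awareness rather than discovery of the full scope of the event.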

What is not required?

The Regulations require notice to DFS within 72 hours, but not necessarily a public announcement or notice to an institution’s customers. The Regulations do not require or recommend cybersecurity insurance coverage.  The omission of insurance is notable because in December 2014, DFS became the first regulator to include insurance as part of its examination procedures for New York chartered banks.

Start Hiring: 28,000 Data Protection Officers Needed by 2018

Posted in EU Data Protection, Legislation, Privacy

A study by the International Association of Privacy Professionals has found that 28,000 data protection officers (DPOs) will be needed in the next two years for companies to comply with the EU’s new General Data Protection Regulation (GDPR).  By the time the GDPR comes into force in 2018, in-scope entities must have a DPO in place. Competition for DPOs will likely be strong in light of the ongoing shortage of privacy professionals. With this in mind, businesses should start thinking now about how best to recruit, train and resource a DPO rather than wait for the GDPR to come into effect.

The GDPR requires data controllers and processors to appoint a DPO when processing is carried out by a public authority or when “core activities” require “regular and systematic monitoring of data subjects on a large scale” or consist of “processing on a large scale of special categories of data”. Even where not required, businesses may voluntarily appoint a DPO. This will not only include EU companies but also companies based in the U.S. and elsewhere who fall within the scope of the GDPR and the DPO requirements.

DPOs must possess “expert knowledge of data protection law and practices”, plus have an understanding of the company’s technical and organizational structure and its IT infrastructure and technology. Key tasks include ensuring regulatory compliance; training staff; coordinating with regulators and understanding applicable data processing risks.

Businesses can either assign this role to an existing or new employee provided that the employee’s other professional duties do not create a conflict with his or her new duties as DPO, or businesses can appoint an external candidate under a service contract. A corporate group may appoint a single DPO provided that the person is “easily accessible” for each entity. This means that the DPO must not only be able to speak the local language but also understand and address differences in data protection laws across the Member States in which the business operates.

DPOs must be independent in the performance of their tasks and are responsible not only for managing data privacy compliance, but also for reporting any non-compliance to the relevant data protection authority. The role, therefore, is one of internal policeman and whistleblower at the same time, which businesses may, at first, find challenging. Breach of the DPO provisions may lead to substantial administrative fines (up to the greater of EUR 10,000,000 or 2% of an organization’s total worldwide annual turnover for the preceding financial year).

Companies should take steps now to determine whether they are subject to the GDPR and if so, whether a DPO must be appointed. Given the significance of privacy compliance today and the potential new administrative fines, even if a business is not required to appoint a DPO, larger companies that regularly process data may wish to consider appointing one in any event in order to assist with GDPR preparations and demonstrate compliance when the new law comes into effect.

Cyber Risk “IRL”: Insurance Issues Arising from Cyber-Related Property Damage and Bodily Injury Claims

Posted in Cyber Insurance, Cybersecurity

Many an unhappy modern tale arises when a cyber predator suggests to his victim that they transition their dealings from the virtual world to a meeting “IRL” – “in real life.”  But the perils that arise when the internet meets the “real world” are not limited to vulnerable individuals:  advances in technology, coupled with the ingenuity of malefactors, create the real risk that acts taking place wholly within cyberspace can have substantial impacts “in real life” – in the outside world – that go well beyond the loss of data or computer functionality.  The best-known example is the STUXNET virus, which seized control of Iran’s nuclear centrifuges and caused them, in effect, to commit mechanical suicide.  Nearly as well-publicized was the 2014 cyber-attack on a German steel mill, which prevented a blast furnace from properly shutting down, reportedly causing massive damage.  Any commercial entity that relies on internet-connected systems to control the operation of physical assets (such as manufacturing companies or utilities), and any entity that manufactures or distributes internet-connected products, is potentially at risk.

The risks go beyond the threat of damage to one’s own property: malicious computer activity could cause damage to third-party property or, worse yet, bodily injury or death. Many readers will recall the 2015 event (staged by “white hat” hackers) showing that a motor vehicle could be remotely disabled while traveling on a highway.  It is not hard to imagine that similar vulnerabilities could provide an entrée for hackers to precipitate catastrophic accidents.  Imagine what would happen, for example, if hackers remotely caused cardiac pacemakers to speed up patients’ heart rates to dangerous levels (this was the mechanism used, fictionally, to dispatch a victim in a 2013 episode of the TV show “Elementary”).  As the “internet of things” becomes more prevalent, the risk grows commensurately.  And the consequences of even minor disruptions (for example, the remote manipulation of an Internet-connected refrigerator that causes food spoilage) can be substantial when aggregated across thousands of products (through class action lawsuits or otherwise).

Faced with these sorts of losses, businesses and individuals would justifiably look to their insurance for coverage. After all, what is insurance for if it is not to protect against unexpected risks of damage or injury?  Unfortunately, but not surprisingly, insurance coverage for these risks – both first-party property insurance to cover loss to one’s own property, and third-party liability insurance to cover one’s legal obligations to others – remains unclear.


FFIEC Provides Banks with Guidance Following the SWIFT Hacks

Posted in Cybersecurity, Financial Services Information Management

On June 7, 2016, the Federal Financial Institutions Examination Council (FFIEC) reminded banks of the cyber risks associated with interbank messaging and wholesale payment networks. FFIEC made its announcement after hackers allegedly used the Society for Worldwide Interbank Financial Telecommunication (SWIFT) messaging system to steal millions of dollars from banks around the world, including $81 million from the Bangladesh central bank.  According to FFIEC, the hackers may have used the SWIFT system to:

  • bypass a bank’s wholesale payment information security controls;
  • obtain operator credentials to create, approve and submit messages;
  • demonstrate a sophisticated understanding of funds transfer operations;
  • conceal and delay detection with customized malware to disable security logging and reporting; and
  • quickly transfer stolen funds across multiple jurisdictions to avoid recovery.

To mitigate interbank messaging and wholesale payment risks, banks should update their information security procedures to address risks posed by compromised credentials. When reviewing their procedures, banks should consult the FFIEC IT Examination Handbook, specifically the Information Security, Business Continuity Planning, Outsourcing Technology Services, and the Wholesale Payment Systems booklets.

Consistent with federal banking agency regulations and FFIEC guidance, financial institutions should take the following steps to improve cybersecurity controls:

  • conduct ongoing information security risk assessments and ensure that third party service providers also perform effective risk management and implement cybersecurity controls;
  • perform security monitoring, prevention and risk mitigation by confirming protection and detection systems, such as intrusion detection systems and antivirus protection, are up-to-date and firewall rules are configured properly and reviewed periodically;
  • protect against unauthorized access by limiting the number of credentials with elevated privileges across the institution, especially administrator accounts and credentials with the ability to assign elevated privileges to access critical systems;
  • implement and test controls around critical systems by adopting cybersecurity controls, such as access control, segregation of duties, audit, and fraud detection and monitoring systems;
  • manage business continuity risk by validating existing policies and procedures that support the bank’s ability to recover and maintain payment processing operations;
  • enhance information security awareness and training programs by conducting regular, mandatory education and employee training across the enterprise, including how to identify and prevent phishing attempts; and
  • participate in industry information-sharing forums including the Financial Services Information Sharing and Analysis Center (FS-ISAC) and the U.S. Computer Emergency Readiness Team (U.S.-CERT).

While FFIEC’s statement does not contain new regulatory expectations, the recent manipulation of the SWIFT system demonstrates the importance of regularly assessing the bank’s inherent risk profile and evaluating each of the five cybersecurity domains, particularly cybersecurity controls. FFIEC’s statement regarding the cybersecurity of interbank messaging and payment networks is available here and SWIFT’s customer communication on cybersecurity cooperation is available here.

Dude, Where’s My Bitcoin?

Posted in Cybersecurity, Data breach, Financial Services Information Management

Somewhere in a lavish Mediterranean villa a drug lord wearing an off-white suit had a heart attack. Elsewhere a tech whiz in Silicon Valley refreshed his browser multiple times as his heart sank further with each reloaded page.  And a banker in New York put a hold on an equity trade and cursed louder than he ever had before.  Like the beginning of a classic joke, the drug lord, the tech whiz and the banker had all been fooled.  Through each of their minds, the question raced:  “Dude, where’s my bitcoin?”


In early August, hackers stole almost 120,000 bitcoins (worth approximately $72 million at the time) from client accounts of Bitfinex, a high-profile Bitcoin exchange based in Hong Kong. The theft briefly sent Bitcoin prices plummeting and followed a similar attack in 2014 on Mt. Gox, then the world’s largest Bitcoin exchange (which subsequently went bankrupt).

This latest heist comes on the heels of Bitfinex CFO Giancarlo Devasini’s very forward-thinking proclamation, “With our BitGo wallet solution it becomes impossible for our users to lose their bitcoins due to us being hacked or stealing them.” Given such a bold statement, and with the benefit of hindsight, one must carefully ponder the future tenure of the CFO, the future of Bitfinex, or even that of Bitcoin itself.

The theft is obviously a problem for those customers whose precious cryptocoins were stolen, fans of digital currency generally, operators of Bitcoin exchanges and various Bitcoin “banks” or “wallets.” Bitfinex’s response to the hack is unlikely to resonate with its clients: the exchange indicated that losses would be spread across all customer accounts, amounting to an approximately 36% generalized loss.  Despite Bitfinex’s assurances that clients will be made whole at some point in the future, a potential investor might well pause at this juncture before backing any bitcoin venture.


Got Data? Actual Harm Not Required for FTC Enforcement Action for Lax Security Measures

Posted in Consumer Privacy/FTC, Data Security, FTC enforcement, Privacy

While much of Washington, D.C. is enjoying the slow and hazy days of summer, the Federal Trade Commission (FTC) is staying busy solidifying its presence as the go-to authority for data security. Most recently, on July 29, 2016, the FTC issued a unanimous Opinion and Final Order against LabMD, Inc., for its unreasonable data security practices, reversing an Administrative Law Judge (ALJ) Initial Decision that had dismissed FTC charges.

Between 2001 and 2014, LabMD collected and tested patient medical samples for physicians. The FTC’s decision found that from 2005 to 2010, LabMD failed to maintain basic security practices. Among other things, LabMD:

  • lacked file integrity monitoring and intrusion detection;
  • failed to monitor digital traffic;
  • failed to provide security training to its personnel;
  • lacked a strong password policy and allowed at least half a dozen employees to use the same, weak password, “labmd”;
  • failed to update its software to address known vulnerabilities;
  • granted employees administrative rights to their laptops, which allowed these employees to download any software they wanted;
  • allowed the downloading of peer-to-peer software (LimeWire), which enabled a file containing 1,718 pages of confidential information relating to approximately 9,300 customers to be downloaded through LimeWire; and
  • failed to respond to warnings about data vulnerability after being made aware of the issue with respect to LimeWire.
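
The shared “labmd” password is the kind of failure even a minimal password policy would catch. A hypothetical check is sketched below; the specific thresholds are illustrative only and are not drawn from the FTC’s order, and a real policy would also cover rotation, lockout and screening against breached-password lists:

```python
import re

def meets_minimum_policy(password: str, org_name: str = "") -> bool:
    """Illustrative minimum-standards check for a new password."""
    # Short passwords fall to brute force almost immediately.
    if len(password) < 12:
        return False
    # A password containing the organization's own name (e.g. "labmd"
    # at LabMD) is among the first guesses any attacker tries.
    if org_name and org_name.lower() in password.lower():
        return False
    # Require at least three of four character classes.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) >= 3
```

Under any check of this shape, “labmd” fails on every criterion at once – which is part of why the FTC treated the password practices as unreasonable rather than merely imperfect.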

The case was heard by an ALJ, who issued a decision in November 2015 (the “ALJ Decision”). The ALJ Decision dismissed the complaint for lack of evidence that LabMD’s data security practices either caused or were likely to cause substantial injury to its consumers. In its recent Opinion and Final Order, however, the FTC reversed the ALJ Decision and found that LabMD’s data security practices were unreasonable and caused, or were likely to cause, substantial injury to consumers.

What Are Unreasonable Data Security Practices?

The FTC’s thirty-seven-page Opinion and Final Order details what the FTC found to be insufficient data security standards that left consumers at risk.  In reaching its decision, the FTC repeatedly referenced the well-known data privacy and security standards in the Health Insurance Portability and Accountability Act (HIPAA).

Indeed, for all of the FTC’s concerns, there are corresponding HIPAA standards which provide important industry guidance with respect to data privacy and security. The FTC noted, however, that HIPAA does not itself determine the reasonableness of LabMD’s data security practice, since HIPAA is a multi-factored law that takes a “flexible approach” to Security Rule compliance. In fact, the FTC’s decision is separate from any specific HIPAA enforcement action that may result from the practices described above.  Nevertheless, the repeated references to HIPAA provide a helpful reference point for the FTC’s expectations with respect to data privacy and security—a reference that should be universally known in the healthcare services world.

While the FTC used HIPAA to identify reasonable data security practices, its analysis of substantial injury is not limited to the health care industry. Indeed, the FTC has made it clear that any industry in possession of sensitive consumer data (such as names, addresses, dates of birth, Social Security numbers, and insurance information) will be required to maintain reasonable data security practices, and that enforcement actions may result even if there has been no identifiable harm to the subjects of such data.

What is Substantial Injury?

Having determined that LabMD had insufficient data security practices in place, the FTC looked at what constitutes substantial injury. In its analysis, the ALJ Decision relied on the fact that there is “no evidence that any consumer has suffered any injury as a result of the 2008 exposure.” In the Matter of LabMD, Inc., A.L.J. Docket No. 9357. In fact, even the FTC Final Order notes that it is unclear if the exposure “resulted in actual identity theft, medical identity theft, or physical harm for any of the consumers whose information was disclosed.” In the Matter of LabMD, Inc., FTC Docket No. 9357, at 23.

Nevertheless, the FTC determined that the mere “disclosure of sensitive health information causes additional harms that are neither economic nor physical in nature but are nonetheless real and substantial” and therefore actionable. In other words, the FTC does not require consumers to show they have suffered “known harm” before it will enforce Section 5 of the FTC Act against unreasonable data security practices. In the Matter of LabMD, Inc., FTC Docket No. 9357. Rather, it is the timing of the data security practice that guides the FTC’s analysis of whether the consumer is subject to substantial injury. The FTC stated that when determining whether a data security practice will cause harm, it will do so “at the time the practice occurred, not on the basis of actual future outcomes.” In the Matter of LabMD, Inc., FTC Docket No. 9357 at 23.

Under that analysis, the Commission found that LabMD put consumers at risk of substantial injury and ordered, among other things, that it notify consumers of the risk and adopt a comprehensive compliance plan to address the identified security shortcomings. LabMD now has 60 days to file a petition for review with a U.S. Court of Appeals – which it seems quite likely to do.

In the meantime, companies can use this decision to help review their own data privacy and security practices, knowing that the FTC will undoubtedly continue to act as a leader in the data privacy and security field. And, for any HIPAA covered entity or business associate, this decision should be a wake-up call that non-compliance with HIPAA may create two-fold liability.

HIPAA Hat Trick: Security Violations Lead to Three Major Settlements

Posted in Data retention, Data Security, Health Information

Look no further than the last three weeks for proof that HIPAA enforcement is on the rise.

Failure to maintain the security of information systems containing patient information has cost healthcare providers over $10 million in recent settlements of alleged violations of the Health Insurance Portability and Accountability Act (HIPAA). The Department of Health and Human Services’ Office for Civil Rights (OCR) is making it clear that enforcement of HIPAA’s security requirements is a priority and not likely to slow down. Indeed, OCR recently announced three major settlements of alleged HIPAA security violations in as many weeks. The settlements all involve large health systems and include the largest ever settlement of HIPAA claims, at a record $5.55 million.

  1. On July 18, 2016, OCR announced that Oregon Health & Science University (“OHSU”) agreed to pay $2.7 million and enter into a three-year comprehensive corrective action plan as part of a settlement following OCR’s investigation of OHSU’s compliance with the HIPAA Security Rule.

OCR reports that OHSU submitted multiple reports of HIPAA breaches involving the unsecured protected health information (PHI) of thousands of individuals. Two of the breaches involved unencrypted laptops, and the third involved a stolen, unencrypted thumb drive. OCR’s investigation uncovered widespread security vulnerabilities and failure to comply with the HIPAA Security Rule. For example, OCR found that OHSU stored electronic PHI (ePHI) of more than 3,000 individuals on a cloud-based server, but OHSU did not have a business associate agreement in place with the vendor. OCR determined that this oversight put 1,361 individuals at significant risk of harm.

Although OHSU had performed security risk assessments periodically since 2003, the risk assessments did not cover all of the ePHI in OHSU's information systems, and OHSU did not address the vulnerabilities identified in the risk assessments. For example, although OHSU identified that its lack of encryption of ePHI stored on its workstations was a risk, it failed to implement encryption or an equivalent protection. OCR also found that OHSU lacked policies and procedures required by the Security Rule to prevent, detect, contain, and correct security violations.

  2. Just a week after the OHSU announcement, OCR announced a similar settlement with the University of Mississippi Medical Center (“UMMC”) for $2.75 million. Like OHSU, OCR investigated UMMC’s HIPAA compliance after UMMC reported a HIPAA breach involving a stolen laptop containing ePHI.

OCR’s investigation found that users of UMMC’s wireless network could use a generic username and password to access an active directory on UMMC’s network drive containing 67,000 files. OCR estimates that the directory included files containing the ePHI of 10,000 patients. OCR also found that UMMC violated the Security Rule by failing to implement appropriate security policies and procedures, restrict access on workstations that access ePHI to authorized users, and assign unique user names for identifying and tracking users of systems containing ePHI. Further, UMMC failed to notify each individual whose ePHI was reasonably believed to have been affected by the breach of the stolen laptop.

  3. Finally, in keeping with its once-a-week settlements, OCR announced on August 4, 2016 that it had entered into the largest ever settlement of HIPAA claims with Advocate Health Care Network (“Advocate”). Advocate agreed to pay $5.55 million, due in part to the extent and duration of Advocate’s alleged noncompliance and the large number of individuals whose PHI was affected.

OCR investigated Advocate’s HIPAA compliance after it reported three separate HIPAA breaches involving its subsidiary, Advocate Medical Group, affecting approximately 4 million individuals. OCR reports that Advocate failed to conduct accurate and thorough risk assessments, implement appropriate security policies and procedures, enter into written business associate agreements to protect ePHI, and reasonably safeguard an unencrypted laptop that was left in an unlocked car.


Aside from confirming that HIPAA enforcement has increased dramatically, these settlements highlight the importance of Security Rule compliance. Among other things, this means that covered entities (and business associates) must:

  1. have adequate security policies and procedures to prevent, detect, contain and correct security violations;
  2. have thorough risk assessments that assess all information systems containing ePHI;
  3. respond to all risks and vulnerabilities that they have identified in their risk assessments; and
  4. handle security breaches in accordance with the requirements of the Breach Notification and Security Rules — and be prepared for significant breaches to result in enforcement actions.

The Cost of Noncompliance: LifeLock Continues to Pay

Posted in Consumer Privacy/FTC, Cybersecurity, FTC enforcement, Identity Theft, Information Management

LifeLock, Inc. made headlines in December 2015 when it finalized a $100 million settlement with the Federal Trade Commission—the largest monetary award ever in an FTC order enforcement action. As reported by McGuireWoods’ Password Protected blog, the 2015 enforcement action stemmed from allegations that LifeLock breached a 2010 settlement with the FTC mandating, among other things, that LifeLock maintain a comprehensive data privacy and security program.  Though resolution of the 2015 action was a significant step towards clearing the slate with state and federal regulators, the story did not end there.

Lawsuits related to the FTC action continued and, in the wake of the FTC settlement, LifeLock announced a shake-up in company leadership. Effective March 1, 2016, LifeLock president and former Yahoo Americas executive Hilary Schneider succeeded founder Todd Davis as LifeLock’s CEO, while lead director Roy C. Guthrie ascended to Davis’ former role as chairman of the board.

On June 30, 2016, LifeLock took another step to eliminate liability relating to its data privacy and security practices by agreeing to settle a consolidated shareholder derivative lawsuit pending in Arizona. The lawsuit, captioned In re: LifeLock, Inc. Derivative Litigation, alleged in part that LifeLock’s directors breached their fiduciary duties by failing to ensure compliance with FTC regulations post-2010. The newly proposed settlement will release those claims and others in exchange for terms including LifeLock’s agreement to: (1) spend at least $4 million annually on information security in 2016 and 2017, (2) monitor and report on the effectiveness of its information security program, and (3) pay $6 million in attorneys’ fees to lead counsel for the plaintiffs.

Wall Street reacted favorably to the news. On July 11, 2016, shares of LifeLock (NYSE: LOCK) eclipsed $16.22 during trading for the first time since the FTC announced its 2015 enforcement action nearly a year earlier, and LifeLock’s stock hit a 52-week high of $16.89 during intraday trading on August 3, 2016.

Again, however, the story will not end here. LifeLock’s continuing obligations under recent settlements and the ever-looming threat of a third FTC enforcement action are sure to influence the deployment of company resources for years to come. Likewise, healthy skepticism about LifeLock’s ability to keep its data security-related promises could limit growth in the company’s market capitalization for the foreseeable future. The 2015 FTC settlement (as amended on January 4, 2016), for example, requires LifeLock to comply with its terms for five years.

These self-imposed restraints, along with significant financial and other consequences for LifeLock’s customers, investors, executives, and board members, serve as a reminder that—while the costs of implementing a comprehensive data privacy and security program can be high—the cost of not complying with industry best practices can be catastrophic. If data privacy and security is not yet a priority at your company, make the case before it is too late. Or you too, like LifeLock, may learn (not) to appreciate the FTC’s “pound of cure.”

The full text of the stipulation of settlement may be found here.