
Password Protected

Data Privacy & Security News and Trends

A Little Help From HIPAA

Posted in Data Security, Health Information

HIPAA’s Security Rule requires that Covered Entities perform “periodic” Security Risk Assessments. All too often, however, this regulatory obligation is ignored altogether, performed only sporadically, or treated as a regulatory hoop-jumping exercise to be completed as quickly as possible.  Aside from increasing the risk of HIPAA liability, treating the Security Rule Risk Assessment in these ways means missing an opportunity to explore and shore up the entity’s data security systems.

Despite what criticisms may exist for other parts of the HIPAA regulations, the Security Rule can be a remarkably helpful tool.  It was rolled out in 2013, and it has survived the test of time despite astonishing changes in technology.  Indeed, one of the reasons for this is that the Security Rule expressly incorporates a “flexibility of approach,” making it applicable to Covered Entities of all sizes and configurations.

At its core, the Security Rule aims to ensure the confidentiality, integrity, and availability of electronic PHI, and the elements of the rule are largely the same measures that would be expected of any responsible organization operating in the digital age.

When done properly, the Security Rule Risk Assessment helps entities to examine their operations to identify where and how their data is stored; reasonably anticipate and address the risks that may exist to their data; and identify the various ways in which the entity manages its operations with respect to a fairly logical set of required and addressable criteria.  This exercise can be critically important in helping in-house counsel and the compliance team to understand where the organization’s information “lives,” who is in charge of securing the data, and what areas of potential vulnerability require attention.

Lawyers do not often applaud regulations, but in the case of data security practices, the HIPAA Security Rule can be tremendously helpful, and all entities should take it very seriously.

FTC Provides Guidance on Data Security in Its “Stick With Security” Blog

Posted in Data Security, FTC enforcement

Building on the FTC’s “Start with Security” guide for businesses, the agency launched the “Stick with Security” blog on July 21, 2017. The blog provides additional guidance on each of the 10 fundamental principles of data security through hypotheticals based on FTC decisions, questions submitted, and FTC enforcement actions. Each week, the FTC publishes a post dedicated to one of the 10 data security principles.

The 10 fundamental “Start with Security” principles include:

  1. Start with security. The first principle urges companies to factor data security into all aspects of the business and to make conscious decisions about how, when, and whether to collect, retain and use personally identifiable information.
  2. Control access to data sensibly. The second principle recommends restricting access to personal data to employees who have a legitimate need to access the data. This recommendation includes restricting administrative access to the company’s systems to employees tasked with making system changes.
  3. Require secure passwords and authentication. According to the third principle, companies should require “complex and unique” passwords, store passwords securely, and test for common vulnerabilities to protect against unauthorized access to data.
  4. Store sensitive personal information securely and protect it during transmission. The fourth principle advises companies to encrypt data while in transit and when at rest throughout the data’s entire lifecycle. Companies should use industry-tested methods of securing data and ensure that the measures are implemented and configured appropriately.
  5. Segment your network and monitor who’s trying to get in and out. The fifth principle speaks to the design of a company’s network; it should be segmented and include intrusion detection and prevention tools.
  6. Secure remote access to your network. The sixth principle holds a company responsible not only for the security of its internal network, but also for examining the security of employees’ computers and the systems of others to whom the company grants remote access. In addition, companies should limit remote access to only the areas necessary to achieve the purpose.
  7. Apply sound security practices when developing new products. The seventh principle urges companies to use engineers trained in secure coding practices and to follow explicit platform guidelines designed to make new products more secure. This principle also indicates that companies are expected to ensure that their privacy and security features function properly and meet advertising claims.
  8. Make sure your service providers implement reasonable security measures. The eighth principle advises companies to choose providers with appropriate security measures and standards and to require providers to meet expectations by expressly including those obligations in provider contracts. Also, companies should contractually preserve the right to verify that the provider is meeting expectations on data security matters.
  9. Put procedures in place to keep your security current and address vulnerabilities that may arise. The ninth principle instructs companies to implement and maintain up-to-date security patches, heed warnings regarding known vulnerabilities, and establish a process for receiving and responding to security alerts.
  10. Secure paper, physical media, and devices. The tenth principle applies similar security lessons to non-electronic data, such as data on paper and other physical media. This principle recommends storing paper containing sensitive data in a secure area, using PINs and encryption to secure data housed on other physical media, establishing security policies for employees when traveling with media that contains sensitive data, and disposing of sensitive data on paper and other physical media securely.
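Several of the principles above lend themselves to concrete implementation. As a purely illustrative example of principle 3 (secure password storage), the following is a minimal Python sketch of salted, memory-hard password hashing using only the standard library; the scrypt cost parameters and 16-byte salt are assumptions for illustration, not values prescribed by the FTC.

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost settings; real deployments should tune these
# to current hardware and guidance.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, maxmem=2**26)

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); the plaintext password is never stored."""
    salt = salt if salt is not None else os.urandom(16)  # fresh random salt
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)
```

Storing only the salt and digest, and comparing with a constant-time function, guards against both exposure of plaintext passwords in a breach and timing attacks on verification.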

Since July 21st, the FTC has published seven helpful posts. Up next, the FTC will discuss the eighth principle: Make sure your service providers implement reasonable security measures.

Government Response to Increasing Cyber Threats

Posted in Cybersecurity, Legislation

Government agencies collect and hold massive amounts of personally identifiable information (PII), creating valuable targets for cybercrime. Recently proposed legislation would impose baseline standards for cyber hygiene on federal agencies. State and local governments, as well as private industry, should measure themselves against the same federal standards to protect against catastrophic loss of PII.

Security experts estimate that approximately 90% of successful cyberattacks are due to poor cyber hygiene and security management at the targets. The Promoting Good Cyber Hygiene Act of 2017 (the “Act”), introduced in the Senate, as well as comparable legislation introduced in the House, is designed to address potential shortcomings in federal agencies’ cyber hygiene practices. The Act would require the National Institute of Standards and Technology (NIST) to establish a list of best practices for effective and usable cyber hygiene for use by the Federal Government. The list also would be published as a standard for state and local government agencies, as well as the private sector.

Specifically, NIST must provide a list (1) of simple, basic controls that have the most impact in defending against common cybersecurity threats, (2) that utilizes commercial off-the-shelf technologies, based on international standards, and (3) that, if practicable, is based on and consistent with the Cybersecurity Framework contained in Executive Order 13636 (“Improving Critical Infrastructure Cybersecurity”). The Act also requires DHS, in coordination with the FTC and NIST, to conduct a study on cybersecurity threats relating to the Internet of Things (“IoT”). Relatedly, in August 2017, the Senate introduced the IoT Cybersecurity Improvement Act of 2017, which includes minimum security standards for IoT devices connecting to federal government systems.

The Act requires NIST to consider the benefits of emerging technologies and processes such as multi-factor authentication, data loss prevention, micro-segmentation, data encryption, cloud services, anonymization, software patching and maintenance, phishing education, and other standard cybersecurity measures. NIST, as well as federal and state governments, should also consider implementing the following security best practices:

  • Compartmentalize and segment data, and limit access to segmented data on a need-to-know basis. Collect only the data necessary to provide services.
  • Train all users (everyone with access to the organization’s systems, including contractors and subcontractors) on identifying and avoiding security threats.
  • Create comprehensive forensic evidence logs for data breaches to help identify and plug deficiencies in systems.
  • Keep all operating system versions and patches up to date, and ensure vendors do the same on their systems.
  • Monitor user activities and look for anomalies and discrepancies in access or usage patterns; track potentially suspicious activities.
  • Automate workflows and courses of action to reduce incident response times, and minimize the impact of a security breach.
  • Create, implement, and improve upon incident response and disaster recovery plans and risk mitigation strategies and best practices, both internally, as well as externally by requiring third party contractors to implement comparable practices.
  • Back up critical data on a continual basis to avoid susceptibility to ransomware demands.
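The monitoring practice above (watching for anomalies and discrepancies in access or usage patterns) can be sketched in a few lines. This is a hedged, minimal illustration, assuming access logs are available as (user, resource) event pairs; the standard-deviation threshold is a toy heuristic, not a recommended detection method.

```python
from collections import Counter
from statistics import mean, pstdev

def flag_anomalous_users(events, sigma=2.0):
    """Return users whose access volume is far above the population average.

    `events` is an iterable of (user, resource) pairs; `sigma` controls how
    many standard deviations above the mean counts as anomalous.
    """
    counts = Counter(user for user, _resource in events)
    if len(counts) < 2:
        return []  # not enough users to establish a baseline of "normal"
    avg = mean(counts.values())
    spread = pstdev(counts.values())
    threshold = avg + sigma * spread
    return [user for user, count in counts.items() if count > threshold]
```

In practice, agencies would feed richer features (time of day, geography, resource sensitivity) into purpose-built monitoring tools, but the basic pattern of baselining normal behavior and flagging outliers is the same.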

In addition to new standards contemplated by the Act, NIST standards currently are being implemented into federal procurements. Federal Acquisition Regulation (“FAR”) and Department of Defense FAR Supplement (DFARS) provisions incorporated into government contracts require contractors to safeguard systems and information in accordance with all or part of NIST Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.” These new mandatory contract clauses underscore the vulnerability of information that may not remain in a single system. True risk mitigation includes requiring strategic partners to comply with proper cybersecurity measures.

In addition to storing PII, government agencies also own and operate critical systems, networks and infrastructure. In light of the increasingly high-profile, more sophisticated, and more numerous ransomware and other malware attacks, such as “WannaCry” and “NotPetya,” which infected networks worldwide in the first half of 2017, it is more critical than ever for government agencies to identify, contain, remediate, and prevent cyberattacks. State and local government, as well as industry, should take advantage of the lessons learned and best practices incorporated in current and pending federal cybersecurity standards.

Federal standards such as those incorporated into government contracts and contemplated under the Act serve as a baseline starting point, and should be re-examined and updated continually once implemented. Cyberattacks are not static; they will evolve into more sophisticated, higher-volume attacks. Cyber-countermeasures and best practices must follow suit, evolving and improving with each lesson learned from every attack.

European Court of Human Rights Overturns Decision on Employee Email Monitoring

Posted in EU Data Protection

Back in January 2016, Sarah Thompson reported on a European Court of Human Rights (ECHR) decision in favour of an employer who had terminated an employee’s employment after investigating his misuse of a company email account.

Earlier this week, the Grand Chamber of the ECHR overturned that ruling, finding that the Romanian employee’s right to privacy had in fact been infringed by his employer, when his personal messages were read in the course of an investigation, even though they were sent using company equipment and during working hours. The decision of the Grand Chamber represents the final decision of European courts on this issue, as it is the highest court of appeal and this judgment is therefore conclusive. As a result, Mr. Barbulescu is now entitled to compensation, although as can be seen from the decision, the court determined the amounts to be relatively low.

Employers should already be aware that employees have a certain right to privacy at work and must be properly informed if their communications are to be monitored and in what, if any, limited circumstances such monitoring may be conducted, always bearing in mind the need to balance employee rights and legitimate business interests.

The ECHR Grand Chamber’s decision considers this in detail, and although the judgment is lengthy, the key points benefit from the further clarification given by the court’s Q&A on the judgment. This helpful summary points out that Mr. Barbulescu’s right to private life and correspondence (protected by Article 8 of the European Convention on Human Rights) was violated because his employer failed to strike the necessary fair balance between each party’s rights, and because the Romanian courts had failed to determine whether he had been properly informed that his communications could be monitored.

The Q&A also states that this decision “does not mean that employers cannot, under any circumstances, monitor employees’ communications when they suspect them of using the internet at work for private purposes. However, the Court considers that States should ensure that, when an employer takes measures to monitor employee’s communications, these measures are accompanied by adequate and sufficient safeguards against abuse.”

Japan and South Korea in the Pipeline for Adequacy Decision

Posted in EU Data Protection, Legislation

In early 2017, the EU Commission published a communication, Exchanging and Protecting Personal Data in a Globalized World, in which it prioritizes discussions on possible adequacy decisions with key trading partners, starting with Japan and South Korea in 2017.  In particular, on July 3, 2017, the EU Commission and a representative of the Japanese Personal Information Protection Commission met in Brussels to move forward on a possible adequacy decision.

With the recent reform of the Japanese Act on the Protection of Personal Information on May 30, 2017, and with the new EU General Data Protection Regulation (the “GDPR”, which will apply from May 25, 2018), Japan and the EU have strengthened their respective data protection regimes. As a result, the two jurisdictions now have very similar regimes that ensure a very high level of protection for personal data. This convergence offers new opportunities to pursue a dialogue on an adequacy decision.

The EU Commission considers that, in particular, the following criteria should be taken into account to assess with which countries a dialogue on adequacy should be pursued:

  • The extent of the EU’s (actual or potential) commercial relation with a given third country;
  • The extent of personal data flows from the EU, reflecting geographical and/or cultural ties;
  • The pioneering role that the third country plays in the field of privacy and data protection that could serve as a model for other countries in its region; and
  • The overall political relationship with the third country in question.

An adequacy decision is an implementing decision taken by the EU Commission to make a determination that a third country ensures an adequate level of protection of personal data. Once an adequate level of protection is recognized by the EU Commission, transfers can be made without specific authorizations. To date, the Commission has adopted 12 adequacy decisions, including the EU-US Privacy Shield.

The EU Commission, when determining whether a third country has an adequate level of protection, must take into account among others (GDPR, art. 45.2):

  • “the rule of law, respect for human rights and fundamental freedoms, relevant legislation, both general and sectoral, including concerning public security, defence, national security and criminal law and the access of public authorities to personal data, as well as the implementation of such legislation, data protection rules, professional rules and security measures, including rules for the onward transfer of personal data to another third country or international organisation which are complied with in that country or international organisation, case-law, as well as effective and enforceable data subject rights and effective administrative and judicial redress for the data subjects whose personal data are being transferred;”
  • “the existence and effective functioning of one or more independent supervisory authorities in the third country or to which an international organisation is subject, with responsibility for ensuring and enforcing compliance with the data protection rules, including adequate enforcement powers, for assisting and advising the data subjects in exercising their rights and for cooperation with the supervisory authorities of the Member States”; and
  • “the international commitments the third country or international organisation concerned has entered into, or other obligations arising from legally binding conventions or instruments as well as from its participation in multilateral or regional systems, in particular in relation to the protection of personal data.”

The overall evaluation does not require a level of protection identical to that offered within the EU, but requires a level of protection that is “essentially equivalent”.

Under the GDPR, an adequacy decision is not a definitive decision but one that, once adopted, requires close monitoring by the EU Commission and review, at least every four years, to take into account all relevant developments affecting the level of protection ensured by the third country.

This two-way dialogue with Japan will include exploring ways to increase convergence of Japan’s laws and practice with the EU data protection rules. The EU Commission and Japan have reaffirmed their commitment to intensify their efforts and to conclude this dialogue by early 2018.

Call Me Maybe: Equivocal Statements May Partially Revoke Consent Under TCPA

Posted in Litigation, Privacy

In a recent decision, the 11th U.S. Circuit Court of Appeals reversed a grant of summary judgment in favor of a bank on Telephone Consumer Protection Act (TCPA) claims, by holding that a consumer can partially revoke her previously provided consent.

In Schweitzer v. Comenity Bank, the plaintiff sued the bank under the TCPA for calls placed to her cell phone after she allegedly revoked her consent. The revocation at issue purportedly occurred during a call the bank placed to the plaintiff, in which the plaintiff said, “And if you guys cannot call me, like, in the morning and during the workday, because I’m working, and I can’t really be talking about these things while I’m at work.”

The bank argued, and the district court had agreed, that this statement did not constitute a clear statement that the plaintiff did not want any further calls. The plaintiff appealed, arguing that the TCPA allows a consumer to partially revoke her consent to receive automated calls and that the plaintiff had revoked her consent to receive calls in the morning or during the workday.

In analyzing the issue of partial revocation, the 11th Circuit turned to its prior decision in Osorio v. State Farm Bank, F.S.B., which held that a consumer may orally revoke her consent under the TCPA in the absence of a contractual restriction, to hold that the common-law understanding of consent applies to the TCPA. Under the common law, the court explained, a person may limit her consent as she likes, permitting a consumer under the TCPA to provide limited consent. Therefore, the court concluded that “unlimited consent, once given, can also be partially revoked as to future automated calls under the TCPA.”

Turning to the effect of the plaintiff’s statements, the court held that a jury could find the plaintiff’s statement too equivocal to constitute partial revocation, but the lack of specificity in her request did not preclude her from having a jury decide the question. This holding highlights that whether a consumer adequately revoked her consent will, in many circumstances, require a trial.

Of note, the 11th Circuit did not reference the recent Reyes decision by the 2nd Circuit, which held that a consumer cannot unilaterally revoke contractually agreed-upon consent under the TCPA. The reference the court made to its prior decision in Osorio, however, did highlight the distinction the 2nd Circuit drew in its decision limiting revocation. Specifically, the court noted that only in the “absence of any contractual restriction to the contrary, [consumers] were free to orally revoke any consent previously given.”  In addition, given that the court relied upon the common-law principles for revocation, like the 2nd Circuit in Reyes, it appears the two decisions are consistent. Thus, a company may be able to avoid the issues faced in Schweitzer by utilizing contractual provisions addressing consent and revocation.

Proposed Bipartisan Bill Intended to Strengthen Security of Internet of Things (IoT) Devices

Posted in Cybersecurity, Legislation

Earlier this month, Senators from both sides of the aisle introduced the “Internet of Things Cybersecurity Improvement Act of 2017,” outlining new security requirements for vendors who supply the U.S. Government with IoT devices. The bill was proposed by U.S. Senators Mark R. Warner (D-VA) and Cory Gardner (R-CO), co-chairs of the Senate Cybersecurity Caucus, along with Senators Ron Wyden (D-OR) and Steve Daines (R-MT).

In a Press Release for the bill, Senator Warner notes that the sheer number of IoT devices – expected to exceed 20 billion devices by 2020 – presents increasing opportunities for cyberattacks. “While I’m tremendously excited about the innovation and productivity that Internet-of-Things devices will unleash, I have long been concerned that too many Internet-connected devices are being sold without appropriate safeguards and protections in place,” said Senator Warner. “This legislation would establish thorough, yet flexible, guidelines for Federal Government procurements of connected devices. My hope is that this legislation will remedy the obvious market failure that has occurred and encourage device manufacturers to compete on the security of their products.”

Specifically, the Internet of Things (IoT) Cybersecurity Improvement Act of 2017 would:

  • Require vendors of Internet-connected devices purchased by the federal government to ensure their devices are patchable, rely on industry-standard protocols, do not use hard-coded passwords, and do not contain any known security vulnerabilities.
  • Direct the Office of Management and Budget (OMB) to develop alternative network-level security requirements for devices with limited data processing and software functionality.
  • Direct the Department of Homeland Security’s National Protection and Programs Directorate to issue guidelines regarding cybersecurity coordinated vulnerability disclosure policies to be required by contractors providing connected devices to the U.S. Government.
  • Exempt cybersecurity researchers engaging in good-faith research from liability under the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act when engaged in research pursuant to adopted coordinated vulnerability disclosure guidelines.
  • Require each executive agency to inventory all Internet-connected devices in use by the agency.

While this bill is aimed at U.S. Government vendors, the growing concern over IoT device security is not limited to federal procurements. Michelle Richardson, Deputy Director of the Freedom, Security and Technology Project, Center for Democracy and Technology, describes this bill as an “important first step,” and others speculate that the bill may have a ripple effect on companies manufacturing IoT devices for private consumers.  With the rapid advancement of IoT devices and the increasing sophistication of cyberattacks, securing these devices will continue to be a moving target. This bill, however, may mark the start of a trend toward increased legislative focus on the overall security of the Internet of Things.


NY Cybersecurity Regulations for Financial Services Companies: Enforcement Begins Aug. 28

Posted in Cybersecurity, Financial Services Information Management, Regulation

The 180-day transitional period under the New York Department of Financial Services (NYDFS) Cybersecurity Requirements for Financial Services Companies is set to expire Aug. 28, 2017. Financial services companies must achieve compliance with the cybersecurity regulations prior to this deadline or face substantial monetary penalties and reputational harm.

Cybersecurity Regulation Overview

The cybersecurity regulations became effective March 1, 2017. In its official introduction to the regulations (23 NYCRR 500), NYDFS observed that the financial services industry has become a significant target of cybersecurity threats and that cybercriminals can cause large financial losses for both financial institutions and their customers whose private information may be stolen for illicit purposes. Given the seriousness of this risk, NYDFS determined that certain regulatory minimum standards were warranted but avoided being overly prescriptive, to allow cybersecurity programs to match the relevant risks and keep pace with technological advances.

The cybersecurity regulations require each financial services company regulated by NYDFS to assess its specific risk profile and design a program that addresses its risks in a robust fashion. The required risk assessment, however, is not intended to permit a cost-benefit analysis of acceptable losses where an institution faces cybersecurity risks. Senior management must be responsible for an organization’s cybersecurity program and file an annual certification confirming compliance with the regulations. A regulated entity’s cybersecurity program must ensure the safety and soundness of the institution and protect its customers.

NYDFS has issued a clear warning of its intent to pursue strong enforcement of the Cybersecurity Regulations:  “It is critical for all regulated institutions that have not yet done so to move swiftly and urgently to adopt a cybersecurity program and for all regulated entities to be subject to minimum standards with respect to their programs.  The number of cyber events has been steadily increasing and estimates of potential risk to our financial services industry are stark.  Adoption of the program outlined in these regulations is a priority for New York State.”

To learn more about who is affected, required actions to comply, possible penalties and upcoming deadlines, click here.

Part Two: Abandoned Mines and Data Retention Policies

Posted in Data retention

As discussed in Tuesday’s post, in addition to taking reasonable precautions to secure data, companies should consider whether they have an affirmative duty to destroy data in the United States – to clear the explosives out of the shed, so to speak.

Contractual duties to destroy records have been in existence since judges wore powdered wigs. For example, in M&A transactions, if after due diligence a company decides not to proceed with an acquisition, typically the purchaser must return or destroy any confidential data that was obtained.  The same is true in vendor agreements involving proprietary processes and methods.  But there may be another, less obvious source of a contractual duty to destroy records.  Every company with employees necessarily maintains confidential data from employment applications, background checks, personnel files, payroll records, health insurance records, marital orders, collections orders, and compliance with subpoenas regarding individual employees, etc.  Businesses frequently make representations to and agreements with prospective and current employees regarding how they will treat their confidential information, and those representations may give rise to conflicting interpretations after an employee has separated from the company.

Every company should review the representations made in its employment applications and employee handbooks for statements that expressly or implicitly fix an end date to the company’s document retention periods (“we will keep your application on file for one year; after that you will need to re-apply”), or for statements that impose an obligation to take “reasonable” care to ensure the privacy of confidential data. An employee’s understanding of the term “reasonable” may include the understanding that confidential records will not be kept by the company for longer than necessary to defend a lawsuit regarding the employee’s employment performance.  For example, if you last worked for a company 10 years ago, would it surprise you to learn that your complete personnel file is still sitting in a mini-storage unit rented by the company?  Would you be troubled to learn that the company still had a copy of an investigator’s background report, including your residence history, Social Security number, neighbors’ comments, salary history, divorce records, and credit reports?  What if you were the subject of a meritless harassment complaint, and were never told because the company was previously planning to change your office location and simply accelerated your move in response to the complaint?  Could you be harmed if the information in those confidential files was made public?  What the courts may construe today as the reasonable expectation of the parties regarding the length of time confidential files should be kept may differ from what one of the parties subjectively expected at the time.
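In practice, the retention questions raised above often reduce to a periodic sweep of stored records against a retention schedule. The following is a hypothetical Python sketch; the record fields and retention periods are invented for illustration and are not legal guidance.

```python
from datetime import date

# Hypothetical retention schedule, in days; the actual periods are a legal
# and business judgment, not something code can decide.
RETENTION_DAYS = {
    "application": 365,          # e.g., "kept on file for one year"
    "background_check": 2 * 365,
    "personnel_file": 7 * 365,
}

def overdue_for_destruction(records, today=None):
    """Return the records held longer than their scheduled retention period.

    Each record is a dict with a "type" key (matching the schedule above)
    and a "created" key (a datetime.date).
    """
    today = today or date.today()
    flagged = []
    for record in records:
        limit = RETENTION_DAYS.get(record["type"])
        if limit is not None and (today - record["created"]).days > limit:
            flagged.append(record)
    return flagged
```

A sweep like this only enforces whatever schedule counsel has approved; setting the periods themselves, and deciding what destruction must look like, remains a legal question.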

A discrete statutory duty to destroy data may also exist. For example, health care institutions often address compulsory document destruction requirements under HIPAA, including standards relating to the manner of data destruction, and the number of degaussing passes required and/or level of physical destruction necessary before an electronic document can be deemed destroyed.  There can be other state and federal requirements that expressly require document destruction, in surprising contexts.

A third potential duty to destroy documents is worth considering. When a data breach occurs, the company whose records were exposed can expect to be sued by a variety of persons and entities, including state attorneys general, class action lawyers representing victims of the breach, banks and insurers who have paid damages for losses, and/or disgruntled investors who take umbrage at the quality of the company’s data privacy practices.  Discovery regarding the breach likely will first revolve around the nature of the breach, the steps the company took to secure and control access to the records, and the reasonableness of the company’s policies and practices regarding data privacy.  But for older documents, one question is inevitably going to be asked: Why did the company even have those records at that point in time? If the answer falls short of common industry practices, contravenes a representation in the employee handbook, runs afoul of some then-existing judicial decision, or simply fails to account for the reasonable expectations of the pertinent parties, the company may have difficulty defending its failure to timely destroy the records that were exposed.

Data breach class actions typically allege violations of state unfair business practice and consumer protection laws (in addition to statutory notice, negligence, breach of contract, conversion, and more esoteric claims). That is no accident; unfair business practice standards for liability are often nebulous and ill-suited for summary resolution.  In 2015, a California consumer protection advocacy group filed a complaint with the Federal Trade Commission against Google.  In that complaint, the group argued that Google's refusal to recognize in the U.S. the "right to be forgotten" that is codified in the EU constitutes an unfair business practice in the U.S. The complainant cited Section 5 of the Federal Trade Commission Act (prohibiting unfair business practices).  It is not a great stretch to imagine a similar claim being filed by plaintiffs placed at risk by a company's failure to timely destroy--not merely secure--old records.  There may be no legal precedent for such a claim, but what company wants to become that precedent by suffering a data breach involving old documents that the company no longer needs?

Alfred Nobel's patented method for combining diatomaceous earth with nitroglycerin made the nineteenth-century explosives shed a less dangerous place to be. But nitroglycerin in any form becomes less stable, and far more dangerous, as it ages.  Perhaps we should extend the analogy to old company documents and take some time to clear the old explosives out of the company shed.

Another Circuit Joins the Trend of Setting a “Low Bar” for Standing in Data Breach Actions

Posted in Health Information, Litigation

Consistent with a growing trend among courts nationwide, the D.C. Circuit unanimously held that a group of plaintiffs had cleared a "low bar" to establish constitutional standing for their claims in a data breach case against health insurer CareFirst by alleging potential future harm as a result of the breach. The plaintiffs alleged a substantial risk that their personal information could be used for medical identity theft after a breach of CareFirst's systems. Even though (i) no actual misuse of the information had yet occurred and (ii) the breach involved medical information, rather than the financial or other sensitive information typically involved in successful data breach claims, the D.C. Circuit held that the plaintiffs had established standing and that their claims could move forward.

In 2016, the U.S. Supreme Court held in Spokeo v. Robins that plaintiffs must allege an actual or imminent injury, not a hypothetical harm, to establish standing and proceed past the pleadings stage. The Court found that plaintiffs cannot rely on a bare statutory violation alone to establish standing, and it remanded the case for the lower court to identify a "concrete injury." Even after the Supreme Court's decision, appellate courts have split on how to apply that standard in data breach cases and on whether a risk of future harm can support standing, and courts are increasingly sympathetic to data breach claims.

The D.C. Circuit joins several other circuit courts that have interpreted the pleading standard liberally and in favor of data breach victims. As a result, more claims in these jurisdictions will survive past the pleading stage based on a risk of injury to the individuals affected by a breach. These rulings are largely based on an assumption that the perpetrators of information theft intend to misuse the information, suggesting that defeating such claims at the pleading stage would effectively require a showing that the breached information could not or would not be used for fraud or identity theft.

Significantly, the D.C. Circuit's ruling focused on the risk of harm from breaches of information other than financial information and social security numbers, which typically form the basis for data breach claims. The D.C. Circuit noted that there was a substantial risk to the plaintiffs of medical identity theft based on a breach of information such as names, birthdates, email addresses, and health insurance policy numbers. In addition to an overall increase in data breach claims based on potential harm, this type of ruling could expand the success of claims grounded in negligence or other state law doctrines arising out of breaches of health information.

It is likely that the Supreme Court will eventually weigh in on whether plaintiffs have standing in claims arising out of data breaches based on the potential for harm. In the meantime, individuals and entities who maintain personal information, whether financial or medical, should be aware that individuals affected by data breaches are increasingly likely to get their day in court.