
Password Protected

Data Privacy & Security News and Trends

FFIEC Provides Banks with Guidance Following the SWIFT Hacks

Posted in Cybersecurity, Financial Services Information Management

On June 7, 2016, the Federal Financial Institutions Examination Council (FFIEC) reminded banks of the cyber risks associated with interbank messaging and wholesale payment networks. FFIEC made its announcement after hackers allegedly used the Society for Worldwide Interbank Financial Telecommunication (SWIFT) messaging system to steal millions of dollars from banks around the world, including $81 million from the Bangladesh central bank.  According to FFIEC, the hackers may have used the SWIFT system to:

  • bypass a bank’s wholesale payment information security controls;
  • obtain operator credentials to create, approve and submit messages;
  • demonstrate a sophisticated understanding of funds transfer operations;
  • conceal and delay detection with customized malware to disable security logging and reporting; and
  • quickly transfer stolen funds across multiple jurisdictions to avoid recovery.

To mitigate interbank messaging and wholesale payment risks, banks should update their information security procedures to address risks posed by compromised credentials. When reviewing their procedures, banks should consult the FFIEC IT Examination Handbook, specifically the Information Security, Business Continuity Planning, Outsourcing Technology Services, and the Wholesale Payment Systems booklets.

Consistent with federal banking agency regulations and FFIEC guidance, financial institutions should take the following steps to improve cybersecurity controls:

  • conduct ongoing information security risk assessments and ensure that third party service providers also perform effective risk management and implement cybersecurity controls;
  • perform security monitoring, prevention and risk mitigation by confirming protection and detection systems, such as intrusion detection systems and antivirus protection, are up-to-date and firewall rules are configured properly and reviewed periodically;
  • protect against unauthorized access by limiting the number of credentials with elevated privileges across the institution, especially administrator accounts with the ability to assign elevated privileges or access critical systems;
  • implement and test controls around critical systems by adopting cybersecurity controls, such as access control, segregation of duties, audit, and fraud detection and monitoring systems;
  • manage business continuity risk by validating existing policies and procedures that support the bank’s ability to recover and maintain payment processing operations;
  • enhance information security awareness and training programs by conducting regular, mandatory education and employee training across the enterprise, including how to identify and prevent phishing attempts; and
  • participate in industry information-sharing forums including the Financial Services Information Sharing and Analysis Center (FS-ISAC) and the U.S. Computer Emergency Readiness Team (U.S.-CERT).

While FFIEC’s statement does not contain new regulatory expectations, the recent manipulation of the SWIFT system demonstrates the importance of regularly assessing the bank’s inherent risk profile and evaluating each of the five cybersecurity domains, particularly cybersecurity controls. FFIEC’s statement regarding the cybersecurity of interbank messaging and payment networks is available here and SWIFT’s customer communication on cybersecurity cooperation is available here.

Dude, Where’s My Bitcoin?

Posted in Cybersecurity, Data breach, Financial Services Information Management

Somewhere in a lavish Mediterranean villa a drug lord wearing an off-white suit had a heart attack. Elsewhere a tech whiz in Silicon Valley refreshed his browser multiple times as his heart sank further with each reloaded page.  And a banker in New York put a hold on an equity trade and cursed louder than he ever had before.  Like the beginning of a classic joke, the drug lord, the tech whiz and the banker had all been fooled.  Through each of their minds, the question raced:  “Dude, where’s my bitcoin?”


In early August, hackers stole almost 120,000 bitcoins (worth approximately $72 million at the time) from client accounts of a high-profile Bitcoin exchange, Bitfinex, based out of Hong Kong. This caused Bitcoin prices to briefly plummet and followed a similar attack in 2014 on Mt. Gox, which was then the world’s largest Bitcoin exchange (of note, Mt. Gox subsequently went bankrupt).

This latest heist comes on the heels of Bitfinex CFO Giancarlo Devasini’s very forward-thinking proclamation, “With our BitGo wallet solution it becomes impossible for our users to lose their bitcoins due to us being hacked or stealing them.” With such a bold statement, combined with the unforgiving clarity of hindsight, one must carefully ponder the future tenure of the CFO, the future of Bitfinex, or even that of Bitcoin itself.

The theft is obviously a problem for those customers whose precious cryptocoins were stolen, for fans of digital currency generally, and for operators of Bitcoin exchanges and various Bitcoin “banks” or “wallets.” Bitfinex’s response to the hack is unlikely to resonate with its clients: the exchange indicated that losses would be spread across all customer accounts, amounting to a generalized loss of approximately 36%.  Despite Bitfinex’s assurances that clients will be made whole at some point in the future, a potential investor might well pause before entering any bitcoin venture at this juncture.


Got Data? Actual Harm Not Required for FTC Enforcement Action for Lax Security Measures

Posted in Consumer Privacy/FTC, Data Security, FTC enforcement, Privacy

While much of Washington, D.C. is enjoying the slow and hazy days of summer, the Federal Trade Commission (FTC) is staying busy solidifying its presence as the go-to authority for data security. Most recently, on July 29, 2016, the FTC issued a unanimous Opinion and Final Order against LabMD, Inc., for its unreasonable data security practices, reversing an Administrative Law Judge (ALJ) Initial Decision that had dismissed FTC charges.

Between 2001 and 2014, LabMD collected and tested patient medical samples for physicians. The FTC’s decision found that from 2005 to 2010, LabMD failed to maintain basic security practices. Among other things, LabMD:

  • lacked file integrity monitoring and intrusion detection;
  • failed to monitor digital traffic;
  • failed to provide security training to its personnel;
  • lacked a strong password policy and allowed at least half a dozen employees to use the same, weak password, “labmd”;
  • failed to update its software to address known vulnerabilities;
  • granted employees administrative rights to their laptops, which allowed these employees to download any software they wanted;
  • allowed the downloading of peer-to-peer software (LimeWire), which enabled a file containing 1,718 pages of confidential information relating to approximately 9,300 customers to be downloaded through LimeWire; and
  • failed to respond to warnings about data vulnerability after being made aware of the issue with respect to LimeWire.

The case was heard by an ALJ, who issued a decision in November 2015 (the “ALJ Decision”). The ALJ Decision dismissed the complaint for lack of evidence that LabMD’s data security practices either caused or were likely to cause substantial injury to its consumers. In its recent Opinion and Final Order, however, the FTC reversed the ALJ Decision and found that LabMD’s data security practices were unreasonable and caused, or were likely to cause, substantial injury to consumers.

What Are Unreasonable Data Security Practices?

The FTC’s thirty-seven-page Opinion and Final Order details what the FTC found to be insufficient data security standards that left consumers at risk.  In reaching its decision, the FTC repeatedly referenced the well-known data privacy and security standards in the Health Insurance Portability and Accountability Act (HIPAA).

Indeed, for all of the FTC’s concerns, there are corresponding HIPAA standards which provide important industry guidance with respect to data privacy and security. The FTC noted, however, that HIPAA does not itself determine the reasonableness of LabMD’s data security practice, since HIPAA is a multi-factored law that takes a “flexible approach” to Security Rule compliance. In fact, the FTC’s decision is separate from any specific HIPAA enforcement action that may result from the practices described above.  Nevertheless, the repeated references to HIPAA provide a helpful reference point for the FTC’s expectations with respect to data privacy and security—a reference that should be universally known in the healthcare services world.

While the FTC used HIPAA to identify reasonable data security practices, its analysis of substantial injury is not limited to the health care industry. Indeed, the FTC has made it clear that any industry in possession of sensitive consumer data (such as names, addresses, dates of birth, Social Security numbers, and insurance information) will be required to maintain reasonable data security practices, and that enforcement actions may result even if there has been no identifiable harm to the subjects of such data.

What is Substantial Injury?

Having determined that LabMD had insufficient data security practices in place, the FTC looked at what constitutes substantial injury. In its analysis, the ALJ Decision relied on the fact that there is “no evidence that any consumer has suffered any injury as a result of the 2008 exposure.” In the Matter of LabMD, Inc., A.L.J. Docket No. 9357. In fact, even the FTC Final Order notes that it is unclear if the exposure “resulted in actual identity theft, medical identity theft, or physical harm for any of the consumers whose information was disclosed.” In the Matter of LabMD, Inc., FTC Docket No. 9357, at 23.

Nevertheless, the FTC determined that the mere “disclosure of sensitive health information causes additional harms that are neither economic nor physical in nature but are nonetheless real and substantial” and therefore actionable. In other words, the FTC does not require consumers to show they have suffered “known harm” to enforce Section 5 of the FTC Act against unreasonable data security practices. In the Matter of LabMD, Inc., FTC Docket No. 9357. Rather, it is the timing of the data security practice that guides the FTC’s analysis of whether or not the consumer is subject to substantial injury. The FTC stated that when determining if an industry’s data security practice will cause harm, it will do so “at the time the practice occurred, not on the basis of actual future outcomes.” In the Matter of LabMD, Inc., FTC Docket No. 9357 at 23.

Under that analysis, the Commission found that LabMD put consumers at risk of substantial injury and ordered, among other things, that it notify consumers of the risk and adopt a comprehensive compliance plan to address the identified security shortcomings. LabMD now has 60 days to file a petition for review with a U.S. Court of Appeals – which it seems quite likely to do.

In the meantime, companies can use this decision to help review their own data privacy and security practices, knowing that the FTC will undoubtedly continue to act as a leader in the data privacy and security field. And, for any HIPAA covered entity or business associate, this decision should be a wake-up call that non-compliance with HIPAA may create two-fold liability.

HIPAA Hat Trick: Security Violations Lead to Three Major Settlements

Posted in Data retention, Data Security, Health Information

Look no further than the last three weeks for proof that HIPAA enforcement is on the rise.

Failure to maintain the security of information systems containing patient information has cost healthcare providers over $10 million in recent settlements of alleged violations of the Health Insurance Portability and Accountability Act (HIPAA). The Department of Health and Human Service’s Office for Civil Rights (OCR) is making it clear that enforcement of HIPAA’s security requirements is a priority and not likely to slow down. Indeed, OCR recently announced three major settlements of alleged HIPAA security violations in as many weeks. The settlements all involve large health systems and include the largest ever settlement of HIPAA claims, at a record $5.55 million.

  1. On July 18, 2016, OCR announced that Oregon Health & Science University (“OHSU”) agreed to pay $2.7 million and enter into a three-year comprehensive corrective action plan as part of a settlement following OCR’s investigation of OHSU’s compliance with the HIPAA Security Rule.

OCR reports that OHSU submitted multiple reports of HIPAA breaches involving the unsecured protected health information (PHI) of thousands of individuals. Two of the breaches involved unencrypted laptops, and the third involved a stolen, unencrypted thumb drive. OCR’s investigation uncovered widespread security vulnerabilities and failure to comply with the HIPAA Security Rule. For example, OCR found that OHSU stored electronic PHI (ePHI) of more than 3,000 individuals on a cloud-based server, but OHSU did not have a business associate agreement in place with the vendor. OCR determined that this oversight put 1,361 individuals at significant risk of harm.

Although OHSU has performed security risk assessments periodically since 2003, the risk assessments did not cover all of the ePHI in OHSU’s information systems, and OHSU did not address the vulnerabilities identified in the risk assessments. For example, although OHSU identified that its lack of encryption of ePHI stored on its workstations was a risk, it failed to implement encryption or an equivalent protection. OCR also found that OHSU lacked policies and procedures required by the Security Rule to prevent, detect, contain, and correct security violations.

  2. Just a week after the OHSU announcement, OCR announced a similar settlement with the University of Mississippi Medical Center (“UMMC”) for $2.75 million. As with OHSU, OCR investigated UMMC’s HIPAA compliance after UMMC reported a HIPAA breach involving a stolen laptop containing ePHI.

OCR’s investigation found that users of UMMC’s wireless network could use a generic username and password to access an active directory on UMMC’s network drive containing 67,000 files. OCR estimates that the directory included files containing the ePHI of 10,000 patients. OCR also found that UMMC violated the Security Rule by failing to implement appropriate security policies and procedures, restrict access on workstations that access ePHI to authorized users, and assign unique user names for identifying and tracking users of systems containing ePHI. Further, UMMC failed to notify each individual whose ePHI was reasonably believed to have been affected by the breach of the stolen laptop.

  3. Finally, in keeping with its once-a-week settlements, OCR announced on August 4, 2016 that it had entered into the largest ever settlement of HIPAA claims with Advocate Health Care Network (“Advocate”). Advocate agreed to pay $5.55 million, due in part to the extent and duration of Advocate’s alleged noncompliance and the large number of individuals whose PHI was affected.

OCR investigated Advocate’s HIPAA compliance after it reported three separate HIPAA breaches involving its subsidiary, Advocate Medical Group, affecting approximately 4 million individuals. OCR reports that Advocate failed to conduct accurate and thorough risk assessments, implement appropriate security policies and procedures, enter into written business associate agreements to protect ePHI, and reasonably safeguard an unencrypted laptop that was left in an unlocked car.


Aside from confirming that HIPAA enforcement is dramatically up, these settlements highlight the importance of Security Rule compliance.  Among other things, this means that covered entities (and business associates) must:

  1. have adequate security policies and procedures to prevent, detect, contain and correct security violations;
  2. have thorough risk assessments that assess all information systems containing ePHI;
  3. respond to all risks and vulnerabilities that they have identified in their risk assessments; and
  4. handle security breaches in accordance with the requirements of the Breach Notification and Security Rules — and be prepared for significant breaches to result in enforcement actions.

The Cost of Noncompliance: LifeLock Continues to Pay

Posted in Consumer Privacy/FTC, Cybersecurity, FTC enforcement, Identity Theft, Information Management

LifeLock, Inc. made headlines in December 2015 when it finalized a $100 million settlement with the Federal Trade Commission—the largest monetary award ever in an FTC order enforcement action. As reported by McGuireWoods’ Password Protected blog, the 2015 enforcement action stemmed from allegations that LifeLock breached a 2010 settlement with the FTC mandating, among other things, that LifeLock maintain a comprehensive data privacy and security program.  Though resolution of the 2015 action was a significant step towards clearing the slate with state and federal regulators, the story did not end there.

Lawsuits related to the FTC action continued and, in the wake of the FTC settlement, LifeLock announced a shake-up in company leadership. Effective March 1, 2016, LifeLock president and former Yahoo Americas executive Hilary Schneider succeeded founder Todd Davis as LifeLock’s CEO, while lead director Roy C. Guthrie ascended to Davis’ former role as chairman of the board.

On June 30, 2016, LifeLock took another step to eliminate liability relating to its data privacy and security practices by agreeing to settle a consolidated shareholder derivative lawsuit pending in Arizona. The lawsuit, captioned In re: LifeLock, Inc. Derivative Litigation, alleged in part that LifeLock’s directors breached their fiduciary duties in failing to ensure compliance with FTC regulations post-2010. The newly proposed settlement will release those claims and others in exchange for terms including LifeLock’s agreement to: (1) spend at least $4 million annually on information security from 2016-17, (2) monitor and report on the effectiveness of its information security program, and (3) pay $6 million in attorneys’ fees to lead counsel for the plaintiffs.

Wall Street reacted favorably to the news. On July 11, 2016, shares of LifeLock (NYSE: LOCK) eclipsed $16.22 during trading for the first time since the FTC announced its 2015 enforcement action nearly a year earlier, and LifeLock’s stock hit a 52-week high of $16.89 during intraday trading on August 3, 2016.

Again, however, the story will not end here. LifeLock’s continuing obligations under recent settlements and the ever-looming threat of a third FTC enforcement action are sure to influence the deployment of company resources for years to come.  Likewise, healthy skepticism about LifeLock’s ability to keep its data security related promises could limit growth in the company’s market capitalization for the foreseeable future.  The 2015 FTC settlement, for example, (as amended on January 4, 2016) requires LifeLock to comply with its terms for 5 years.

These self-inflicted restraints, along with significant financial and other consequences for LifeLock’s customers, investors, executives, and board members, serve as a reminder that—while the costs of implementing a comprehensive data privacy and security program can be high—the cost of not complying with industry best practices can be catastrophic. If data privacy and security is not yet a priority at your company, make the case before it is too late.  Or you too, like LifeLock, may learn (not) to appreciate the FTC’s “pound of cure.”

The full text of the stipulation of settlement may be found here.

Are You an Insider? Data Privacy Challenges Posed by New Insider List Requirements

Posted in EU Data Protection, Legislation, Privacy

The EU’s Market Abuse Regulation (“MAR”) came into effect on July 3, 2016, replacing the EU’s Market Abuse Directive. Unlike the Directive, the MAR has direct effect in each EU member state, including the UK.

The MAR, a civil market abuse regime, is intended to ensure the smooth functioning of the financial markets and enhance market integrity and investor protection by penalizing abusive behavior in the financial markets, market manipulation and insider dealing.

MAR is extra-territorial in its application, applying to all issuers whose financial securities are traded on, or are in the process of being admitted to trading on, an EU regulated market or multilateral or organized trading platform.

MAR requires that issuers, or persons acting on their behalf or account, maintain insider lists in the prescribed form to monitor and control the flow of insider information. To ensure compliance with their obligations, issuers will require their advisory teams to also maintain such insider lists and to provide the issuers with the information maintained on them.

To ensure uniformity, the European Securities and Markets Authority has produced the precise form of the insider lists. This prescribed format requires, amongst other things, personal data of individuals who have access to insider information, including former surnames, home addresses, and home and mobile telephone numbers.

Those who are required to maintain insider lists, especially the advisers to issuers, will have to properly consider their duties as data controllers as such insider lists will contain personal data. Consideration will also have to be given in relation to any requirements to make such insider lists available to issuers or any third parties. Parties will have to consider internal data privacy policies, training employees, getting appropriate consents and building appropriate protections into any contractual arrangements with issuers who require such lists be maintained on their behalf.

Is the Privacy Shield Viable? Article 29 Working Party Proposes to Wait for Its Final Verdict

Posted in EU Data Protection, Legislation, Privacy

After its first draft of February 29, 2016, the European Commission adopted the EU-U.S. Privacy Shield adequacy decision on July 12, 2016.  The first draft had been prepared after the invalidation of the Safe Harbor by the Court of Justice of the European Union (CJEU) on October 6, 2015 (the Schrems case). A new adequacy decision was therefore highly welcome to allow the tens of thousands of U.S. and EU companies that relied on Safe Harbor to transfer personal data across the Atlantic. After the first draft of the adequacy decision, several EU institutions expressed numerous concerns. First, on April 13, 2016, the Article 29 Working Party (WP 29) released an opinion noting that the Privacy Shield offers major improvements compared to the invalidated Safe Harbor decision but, at the same time, urging the European Commission to resolve all concerns expressed by WP 29 in order to ensure that the protection offered by the Privacy Shield is indeed essentially equivalent to that of the EU. This opinion was followed on May 26, 2016, by a resolution of the EU Parliament, which also expressed several concerns about the proposed Privacy Shield.  Finally, on May 30, 2016, the European Data Protection Supervisor (EDPS) published its opinion in which, although it “welcomed the efforts shown by the parties to find a solution for transfers of personal data”, the EDPS added that “robust improvements” were needed “in order to achieve a solid framework, stable in the long term”.

The EU-U.S. Privacy Shield adequacy decision adopted on July 12, 2016 by the European Commission was supposed to cure all the concerns expressed after the first draft. Surprisingly, however, WP 29’s press release of July 26, 2016 concludes that the improvements brought by the EU Commission and the U.S. authorities to the Privacy Shield proposal do not adequately respond to the concerns expressed.  For instance, WP 29 regrets:

  • The lack of specific rules on automated decisions and of a general right to object;
  • That it remains unclear how the Privacy Shield Principles will apply to processors;
  • The lack of concrete assurance that bulk collection of personal data will not again happen, despite the commitment of the U.S. Office of the Director of National Intelligence (ODNI);
  • The lack of strict guarantees concerning the independence and the powers of the Ombudsmen in case of conflict caused by access by U.S. public authorities to personal data.

After expressing these criticisms, WP 29 nevertheless proposes to decide on the viability of the Privacy Shield after the first annual review of the framework, which will take place in May 2017. In other words, WP 29 will not push for a legal challenge of the Privacy Shield before the first review.  That said, even though the timing proposed by WP 29 seems practicable, in case of action by data subjects or privacy activists, the “wait and see” attitude of WP 29 will probably be difficult to maintain. Finally, the position of WP 29 seems very practical.  Indeed, it is difficult to assess the adequacy of the Privacy Shield because it is mainly based on commitments taken from letters by different heads of U.S. administrative bodies, among others the ODNI. This echoes one of the very general remarks expressed by the EDPS in its May 30, 2016 opinion, which called for “longer term solutions” and “more robust stable legal frameworks to boost transatlantic relations”. The nearly one-year deadline given by WP 29 is probably an opportunity to reach robust, stable legal frameworks not only for the Privacy Shield, but also for Standard Contractual Clauses and Binding Corporate Rules when they relate to transfers of personal data to the U.S.


Uniform Approach Proposed to Protect Employee and Student Online Login Information

Posted in Legislation, Privacy, Social Media

State legislatures are increasingly legislating in the area of employee and student online privacy. Privacy practitioners should be aware that there is now a proposed uniform law for the states to consider enacting.  At its recent annual meeting in Stowe, Vermont, the Uniform Law Commission adopted a proposed uniform state law titled “Uniform Employee and Student Online Privacy Protection Act” (ESOPPA).  ESOPPA is the result of a two-year effort by the Commission to study the issues involved in online privacy and draft proposed legislation aimed at protecting an employee or student’s login information to certain personal online accounts.

What is the ULC?

Created in 1892, the Uniform Law Commission develops and drafts uniform legislation for consideration by state legislatures. The Commission is comprised of several hundred judges, law professors, legislative staff, legislators and attorneys in private practice (“Commissioners”).  Each state, the District of Columbia, Puerto Rico and the U.S. Virgin Islands appoint Commissioners.  I serve as a Commissioner for Virginia and attended the annual meeting in Vermont.

What is ESOPPA?

ESOPPA was drafted to address the situation where employers and educational institutions attempt to require or coerce an employee or student to disclose login information to personal online accounts. The Act applies to employers and public and private post-secondary educational institutions and their agents or designees. See § 2 of the Act.

The Act is designed to protect an employee or student’s login information to, and content on, “protected personal online accounts.” These accounts can take a variety of forms – social media, personal finance, etc.  The crucial determinants are that the online account is protected by a login requirement and is a personal account. See § 2.

In addition to a typical definition of employee, ESOPPA’s definition also includes prospective employees, independent contractors and unpaid interns. ESOPPA’s definition of student includes current and prospective students. See § 2.

What does ESOPPA Prohibit?

ESOPPA prohibits an employer or post-secondary educational institution from “requesting, requiring or coercing an employee [or student]” to (i) disclose to such entity the employee or student’s login information to a “protected personal online account” or to disclose the content of such account; (ii) alter the account settings for the protected personal online account in such a way that makes the information or login information more accessible by others; or (iii) login to a protected personal online account in the presence of the entity in such a way that allows viewing the login information. See §§ 3 and 4.

The proposed Act also prohibits an employer or post-secondary educational institution from retaliating against an employee or student who does not comply with a request for access that violates the Act, who refuses to accept a “friend” request, or who “unfriends” the entity from the employee or student’s protected personal online account.

What does ESOPPA allow?

ESOPPA allows an employer or post-secondary educational institution to:

  1. Access information that is available to the general public;
  2. Comply with federal or state law, court order or requirements of certain regulatory organizations;
  3. Require or request access to the content (but not the login information) of a protected personal online account when the entity has “specific facts” about the account, in order to ensure compliance with federal or state law or with policies on work- or school-related misconduct, or to investigate non-compliance with such laws or policies.  A covered entity may also require access to content based on specific facts about the account in order to protect against health or safety threats; threats to the entity’s IT systems, communications infrastructure or other property; or the disclosure of proprietary information.  If an employer or educational institution accesses the content of an account through this exception (#3), the entity is required to (i) access only relevant content, (ii) use the content only for the reason it is being accessed and (iii) not change the content unless necessary for the underlying reason for needing access (i.e., the “specified purpose”);
  4. Request an employee or student to allow the entity to “friend” them on an online account or request the person to not “unfriend” the entity from the online account; and
  5. Conduct network system monitoring and the acquisition of login information through such a program as long as the entity complies with certain retention, use, and disposal requirements.

See §§ 3 and 4.

What remedies are provided under ESOPPA?

ESOPPA allows the state attorney general to obtain injunctive or other equitable relief and seek a $1,000 civil penalty for each violation of the Act, but limits the total recovery to $100,000 when the same act results in multiple violations. See § 5.

ESOPPA provides a private right of action for an employee or student to bring a civil action against the offending employer or post-secondary educational institution, respectively, to obtain injunctive relief, actual damages and costs and attorney fees. Such remedies are not exclusive nor do they supplant any other remedy available under other laws. See § 5.

Why does it matter?

Increasingly, states are legislating in this area of the law. According to the National Conference of State Legislatures, in 2015, 23 states introduced or considered legislation on employee or student online privacy, with nine states enacting a law.  In 2016, 15 states considered such legislation and three states enacted laws in this area.  States are taking a variety of approaches to address the issues involved in online privacy in the employment and student contexts.

The Uniform Law Commission has a record of producing draft legislation that is appealing to state legislatures. One of the Commission’s most well-known products is the Uniform Commercial Code, which has been adopted in virtually every state. The uniform acts produced by the Commission often set the standard for what such legislation contains, even when a state does not adopt the proposed uniform act in its entirety. Among other things, states may use the definition section of a proposed act, or adopt the burden of proof or elements of a claim from a uniform act.  As such, it is important for employers, post-secondary educational institutions and their advisors to be aware of the states’ consideration of uniform acts such as ESOPPA.

Click Here to View the Proposed Act

OCR Makes It Official: Ransomware Attacks Are HIPAA Breaches

Posted in Data Security, Health Information

Ransomware attacks appear to be increasing in frequency as well as severity. Ransomware is malicious software that encrypts data until a ransom is paid to the hacker. For healthcare providers, the inability to access electronic health records systems due to a ransomware attack is a disaster scenario. While the decision whether to pay a ransom likely will continue to plague providers who are attacked, there is new guidance from the Department of Health and Human Services Office for Civil Rights (OCR) on how to handle ransomware attacks under the Health Insurance Portability and Accountability Act (HIPAA).

The new OCR guidance explains how the security requirements under HIPAA can assist in preventing, detecting and recovering from ransomware attacks. Most importantly, OCR states that these attacks constitute “breaches” under HIPAA. OCR explains how covered entities and business associates should manage the breach notification process under HIPAA in the event that a ransomware attack occurs.

Preventing Ransomware Attacks

HIPAA’s Security Rule contains standards and requirements for all covered entities and business associates to evaluate and address vulnerabilities in their information systems to prevent unauthorized access to electronic protected health information (ePHI). OCR’s guidance explains that organizations may prevent ransomware attacks or lessen their severity by complying with the HIPAA security requirements, including conducting a risk analysis of vulnerabilities, implementing procedures to guard against and detect malware, training users on malware protection, and limiting access to ePHI to only persons or software programs requiring access.

Detecting Ransomware Attacks

The OCR guidance provides a list of several indicators of a ransomware attack. OCR notes that appropriately training employees on these indicators can assist organizations in detecting the ransomware. The HIPAA Security Rule requires covered entities and business associates to train their workforces on security procedures, including detection of unauthorized activity.

Recovering from Ransomware Attacks

Compliance with the HIPAA Security Rule standards can also help organizations recover from a ransomware attack. The Security Rule requires organizations to implement plans for responding to security incidents, including malware attacks. Such plans should incorporate procedures to isolate infected computer systems and prevent ransomware from spreading. Response plans should also include processes to analyze ransomware, contain its impact, eradicate the ransomware and remediate the vulnerabilities that allowed the ransomware attack. OCR emphasizes that frequent data backups and ensuring the ability to recover data from such backups will facilitate recovery from an attack. OCR also encourages organizations to periodically conduct data restoration tests and maintain backups offline, away from the networks where data are stored.

Breach Analysis and Notification

As with any unauthorized access of health information, covered entities and business associates must conduct an analysis of a ransomware attack to determine whether it constitutes a “breach” under HIPAA. OCR confirms that ransomware attacks constitute a breach, because unauthorized individuals have taken possession or control of the ePHI, constituting an unauthorized disclosure. It is presumed that a breach occurred unless the organization can demonstrate that there is a low probability that the ePHI has been compromised, based on several factors set forth in the HIPAA breach notification rule, and the organization must follow the notification processes required by HIPAA. The OCR guidance notes, however, that the HIPAA breach notification requirements apply only to “unsecured PHI.” Thus, if the ePHI that is targeted in a ransomware attack is encrypted in a manner consistent with HIPAA guidelines, the breach notification safe harbor may apply. As OCR noted, this determination is fact-specific.

OCR emphasizes throughout the new guidance that security measures, risk analyses and breach analyses vary depending on an organization’s individual infrastructure and the specific facts of a potential breach, including ransomware attacks.

Pokémon Go: Catching More Than Just Users

Posted in Consumer Privacy/FTC, Cybersecurity, Data breach, Privacy, Social Media

Since its release on July 6, 2016, Pokémon Go has unofficially become the most successful mobile app to date.  Generating over $2 million in revenue per day, it already has more daily users than Twitter and the highest average time spent per day, more than WhatsApp, Instagram and Snapchat.  But that level of success does not come without data challenges.

Pokémon Go is a free, location-based augmented reality mobile game developed by Niantic and published by The Pokémon Company. To play the game, a user downloads the app, creates an account and logs in; based on the user’s physical location, the app then alerts the user to nearby Pokémon available for capture. The app accesses a user’s camera and GPS to allow a player to capture and battle Pokémon in augmented reality.

It was not long after its release that Pokémon Go was caught up in its first data privacy problem. By downloading the app, Pokémon Go users gave the app full access to their personal Google accounts, meaning the app was granted permission to see and modify Google user account information, including everything stored in Google Drive.

When this error came to light, just six days after the app was released, the Pokémon Company and Niantic released a joint statement that the app “erroneously request[ed] full access permission for the user’s Google account.” The statement went on to say that the app “only accesses basic Google profile information (specifically, your User ID and e-mail address) and no other Google account information is or has been accessed or collected.”

After discovering the problem, Niantic released a security patch to limit the data collection to the more basic e-mail and User ID information. A review of Pokémon Go’s current Privacy Policy and Terms of Use does not reveal any unusual or unexpected data collection policies. In fact, the data security concerns were pushed aside as the app, which forces the user to physically move around to find and capture Pokémon, has been applauded for successfully integrating mobile phones with physical activity. Nevertheless, the app’s unprecedented popularity has opened it up to extreme scrutiny, including catching the attention of Senator Al Franken, ranking member on the Senate Privacy, Technology, and the Law Subcommittee.

Senator Franken sent a letter to Niantic about the app’s privacy policy. The letter outlines seven specific questions about the app’s privacy policy including why Pokémon Go collects location data and asks for a list of Pokémon Go service providers with access to user information. The Senator requested a response by August 12, 2016.

While there are no official investigations into the app’s data policies, given the Federal Trade Commission’s interest in mobile privacy, location tracking and consumer protection, the agency will likely keep a close eye on the app to ensure Pokémon Go has followed appropriate consumer protection measures. There is also an opportunity for the Federal Communications Commission to get involved.  Using Pokémon Go can quickly consume a user’s data plan.  In response to that concern, telecommunications carriers are already considering a new kind of data plan, offering customers unlimited or free data for a period of time while using Pokémon Go. The practice of not charging a customer for specific data is known as zero-rating. The FCC’s net neutrality rules prevent access providers from prioritizing content, but they do not ban zero-rating policies. Zero-rating is not new to telecommunications, but its application to Pokémon Go comes at an interesting time because of its implications for net neutrality.

Despite the questions surrounding the app’s data policies, there is no obvious damage to Pokémon Go’s success. Within a week of release, Pokémon Go faced, and arguably recovered from, its first major data privacy problem. But that is just the beginning for Pokémon Go.  Hackers have already targeted the app, claiming to have shut it down for periods of time on July 16th and July 17th.  Nevertheless, nothing seems to be slowing down the growth of Pokémon Go, which has caught the attention of millions of users worldwide and a few lawmakers as well.