
Password Protected

Data Privacy & Security News and Trends

Trump Privacy Rollback Continues, States Step Up

Posted in Consumer Privacy/FTC, FCC, Legislation, Privacy, Surveillance

On April 3, 2017, President Trump signed a repeal of new Federal Communications Commission (FCC) rules that would have subjected broadband internet service providers (ISPs) to more stringent consumer privacy regulations. Specifically, the FCC’s rule would have required ISPs to obtain opt-in consent from consumers before using and sharing sensitive information such as geo-location, web browsing history and app usage history.  This repeal allows Internet providers to compete with “edge providers” (which were not covered by the new FCC rules) in mining consumer browsing history and contributing to targeted online advertising.

This repeal, in and of itself, does not create any landmark changes in the legal landscape: the new FCC rules were passed only late last year and had not yet taken effect. However, it is symptomatic of the Trump administration’s antipathy toward government regulation of consumer privacy. More importantly, President Trump’s retreat has already begun to spur state legislatures and attorneys general to strengthen their stance on privacy, concentrating scrutiny at the state level.

For example, in Massachusetts, Republican state senators introduced legislation on April 7 that would bar ISPs from selling browsing histories without customers’ explicit permission. That bill would also prohibit ISPs from charging increased rates to consumers who refuse to share their personal information.

Similarly, last week in Illinois, lawmakers introduced multiple measures that would impose new restrictions on companies that collect or use geo-location information, enable or turn on device microphones, and transfer Illinois consumers’ data to third parties. Illinois legislators are also scheduled to hear two more bills, introduced in March, that specifically target commercial website operators.  Other state legislatures that have introduced or otherwise begun to consider Internet privacy bills in the last three weeks include Connecticut, Kansas, Maryland, Montana, New York, Washington, and Wisconsin.

This shift is also becoming evident via increased executive enforcement at the state level. Advertisements and applications that use and share consumers’ location appear to be an area of particular concern. For example, in March, the Massachusetts AG’s office obtained a settlement with an advertising company that used geofencing to send targeted anti-abortion ads to consumers in certain cities who entered reproductive health clinics. In New York, the Office of the Attorney General (OAG) recently entered into settlements with three health and fitness mobile application operators that demand, among other things, that the app providers limit collection of, or obtain affirmative consent prior to collecting, certain sensitive information.

Though the Trump administration’s laissez-faire approach toward privacy might, at first glance, appear to signal a shift towards lightening the burden of privacy regulations, it may well have the opposite effect by creating backlash at the state level. Accordingly, businesses, particularly those that operate online, will need to be more cognizant than ever of differing state policies moving forward.

D.C. Circuit Strikes FCC’s Rule Requiring Opt-Out Notice on Solicited Faxes

Posted in Privacy

On March 31, the U.S. Court of Appeals for the D.C. Circuit struck down a Federal Communications Commission (FCC) rule requiring that solicited fax advertisements contain a notice on how to opt out of future faxes. Following the ruling, such opt-out notices will be required only in unsolicited fax advertisements. The decision in Bais Yaakov of Spring Valley, et al. v. Federal Communications Commission, et al. will significantly impact litigation — particularly class action litigation — involving the failure to include an opt-out notice on fax advertisements.

Under the Junk Fax Prevention Act of 2005, an amendment to the Telephone Consumer Protection Act applicable to fax communications, businesses are prohibited from faxing unsolicited advertisements. “Unsolicited advertisements” are defined as advertising material “transmitted to any person without that person’s prior express invitation or permission.” The law contains an exception when three requirements are met: (1) the sender and recipient have an established business relationship; (2) the sender obtained the fax number from the recipient, through their communications or by virtue of the recipient publishing it to a directory or website; and (3) as relevant here, the advertisement contains an opt-out notice. The law goes on to require the opt-out notice to be “clear and conspicuous” and provide a free mechanism to opt out from future faxes.

In 2006, the FCC, purporting to exercise its authority to issue regulations and implement the law, issued a rule requiring that solicited fax advertisements contain opt-out notices. The law already required unsolicited fax advertisements to include an opt-out notice. Accordingly, under the FCC’s revised rules, businesses had to include opt-out notices on all fax advertisements — even if the recipient expressly consented to receive them.

This rule was challenged by a petitioner facing a $150 million class action lawsuit for failing to include opt-out notices on fax advertisements, many of which it had permission to send. The FCC argued that because the law required businesses to include opt-out notices on unsolicited fax advertisements, the FCC also had the authority to require businesses to include opt-out notices on solicited faxes.

The majority of the D.C. Circuit panel disagreed, finding nothing in the text of the law to convey such authority. Instead, the court noted that Congress had drawn a line between unsolicited and solicited fax advertisements, but the law did not require (or give the FCC authority to require) opt-out notices on solicited faxes. That was all the court needed to know to resolve the case.

The D.C. Circuit also rejected the FCC’s argument that it could require opt-out notices on solicited faxes because Congress did not define the phrase “prior express invitation or permission” in the law. The court found the argument “difficult to follow,” noting that the phrase “prior express invitation or permission” went to whether a fax was solicited or unsolicited (and thus whether an opt-out notice was required), not the other way around. The court also found the FCC’s argument that its rule was good policy to be irrelevant because a “good policy does not change the statute’s text.”

Notably, Judge Pillard, who also serves on the panel deciding ACA International’s appeal of the FCC’s 2015 TCPA Omnibus Order, dissented. Judge Pillard determined that the FCC had the implicit authority to require opt-out notices for solicited fax advertisements stemming from Congress’ direction to the FCC to prescribe regulations to implement the law. In addition, Judge Pillard adopted the FCC’s difficult-to-follow argument that “the inclusion of an opt-out notice is part of what makes subsequent faxes ‘solicited’ at all.”

Judge Pillard’s opinion appears to be motivated by a desire to provide a uniform mechanism for opting out. She reasoned that if a fax contains an opt-out mechanism and a recipient does not opt out, then the recipient has agreed to receive future advertisements (i.e., solicited advertisements). As the panel recognized, such reasoning removes any distinction Congress drew between solicited and unsolicited advertisements in the law. Judge Pillard’s ruling in this case may suggest that she will also rule in favor of the FCC in the much-anticipated decision in the ACA International appeal.

The D.C. Circuit’s decision will impact litigation relating to the absence of an opt-out notice on fax advertisements. First, there is no longer any liability for the failure to include an opt-out notice where the recipient consented to receive the fax. Second, the decision will undoubtedly impact class certification in actions arising from the failure to include an opt-out notice because the question of whether the opt-out notice is required is now an individualized question that turns on whether the recipient consented to receive the fax.

Virginia Amends Breach Notification Law

Posted in Data breach

As previously reported, the significant rise in Form W-2 phishing e-mails has prompted increased awareness surrounding these fraudulent tax schemes. Most recently, Virginia has responded to these types of attacks by amending its data breach notification law, Va. Code Ann. § 18.2-186.6(M). The amended law will require all employers and payroll service providers to notify the Virginia Attorney General if they are subject to a breach of payroll data, including a Form W-2 e-mail phishing scam.

The new law, effective July 1, 2017 and the first of its kind, requires employers and payroll service providers to notify the Virginia Attorney General if they discover “unauthorized access and acquisition of unencrypted and unredacted computerized data containing a taxpayer identification number in combination with the income tax withheld for that taxpayer” that the “employer or payroll provider reasonably believes has caused or will cause, identity theft or other fraud.”

The notification must include the employer or payroll service provider’s name and federal employer identification number. Once alerted, the Office of the Attorney General will report the incident to the Department of Taxation. Notification to the Attorney General is required even if the breach does not otherwise trigger the statute’s requirement that the company notify state residents of the breach. In a related development, the IRS now maintains a webpage where businesses and payroll service providers can learn how to quickly report data losses resulting from a Form W-2 fraudulent tax scheme.

HIPAA Guidance Issued on Man-In-The-Middle Attacks

Posted in Data Security, Health Information

Last week, the Office for Civil Rights (OCR) issued guidance on securing end-to-end communications for sensitive information transmitted between parties over the internet. The OCR warns against “man-in-the-middle” (MITM) attacks that can occur during the transmission of information. In a MITM attack, a third party intercepts communications between two parties and, in addition to accessing the information, may alter the communication by injecting malicious code or modifying trusted information.

If the intercepted information is sensitive in nature, it is likely that the information is protected under one or more state or federal laws that require certain security protocols. OCR states that when electronic protected health information (ePHI) that is protected under the Health Insurance Portability and Accountability Act (HIPAA) is transmitted over the internet, covered entities and business associates should include factors for securing end-to-end communication in their security risk analysis required by the HIPAA Security Rule.

According to OCR, many organizations use HTTPS inspection products in an effort to monitor the security of confidential communications. These products intercept HTTPS communications, decrypt and review them for attacks, and then re-encrypt the communications. OCR cautions that the inspection process can actually make communications more vulnerable to MITM attacks. For example, some interception products do not verify the server’s certificate chain before re-encrypting the communications, and once an HTTPS interception product is in use, an organization is no longer able to validate the certificates in the connection itself. OCR recommends verifying that an HTTPS inspection product properly validates certificate chains and informs the user of any errors before deploying the product. Further, an organization’s poor implementation of inspection products can impair security and introduce new vulnerabilities. OCR states that covered entities and business associates who use an HTTPS inspection product for transmissions of ePHI should consider these risks as part of their HIPAA security risk analysis.
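For technically minded readers, the difference OCR describes can be illustrated with Python’s standard `ssl` module. The sketch below is purely illustrative (it is not based on any particular inspection product): the first client context validates the full certificate chain and the hostname, the checks OCR recommends; the second accepts any certificate, which is the behavior that leaves a connection exposed to a man-in-the-middle.

```python
import ssl

# A client context with the validation OCR recommends: Python's default
# context requires a valid certificate chain and checks the hostname
# before any application data is exchanged.
strict = ssl.create_default_context()
print(strict.verify_mode == ssl.CERT_REQUIRED)  # certificate chain is validated
print(strict.check_hostname)                    # hostname must match the certificate

# By contrast, an interception product that skips chain validation behaves
# like this context: it will accept any certificate presented to it,
# including one forged by a man-in-the-middle.
permissive = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
permissive.check_hostname = False   # must be disabled before verify_mode
permissive.verify_mode = ssl.CERT_NONE
print(permissive.verify_mode == ssl.CERT_NONE)  # no validation at all
```

An organization evaluating an inspection product is, in effect, asking which of these two postures the product adopts on the outbound leg of the connection.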

OCR emphasizes its long-standing guidance for covered entities and business associates to encrypt ePHI so that it does not qualify as unsecured ePHI. OCR has issued specific guidance on securing ePHI, including encryption. OCR also encourages covered entities and business associates to review recommendations from the National Institute of Standards and Technology for securing end-to-end communications, as well as recommendations from the United States Computer Emergency Readiness Team on protecting internet communications and preventing MITM attacks. All of these resources provide valuable tools for organizations, including covered entities and business associates under HIPAA, to ensure the security of end-to-end communications and reduce the risk of associated liability.

21st Century Data Breaches: Not All Fun and Games

Posted in Data breach, Data Security, Privacy, Retail

Data breaches can occur in the most surprising places. When data breaches affect sensitive, private information, especially information about children, companies can face scrutiny from regulatory agencies and be exposed to civil (and perhaps even criminal) liability. While hackers are still targeting retail corporations and financial institutions, some have moved on to an unexpected new target: children’s toys.

Spiral Toys Inc. sells stuffed animals called “CloudPets.” These 21st century stuffed animals are connected to the internet, allowing parents, their children, and anyone with access to the stuffed animals to record and send voice messages to each other. Users simply download the “CloudPets” phone app (the Android app has already been downloaded over 100,000 times) and create an account by registering their email addresses and other personal information with the CloudPets app. Unfortunately, the combination of a vulnerable security network and the sensitive nature of the private information held on the CloudPets’ server made it an attractive target for hackers.

In February 2017, cybersecurity experts discovered that the account information of more than 800,000 CloudPets users could be easily accessed by anyone browsing the internet, without the need for a password. Even more disturbing, as reported by cnet.com, nearly 2.2 million voice recordings were also stored online in an insecure manner, potentially including millions of voice recordings of children. According to the cybersecurity experts, hackers appeared to have wiped the user database and held its contents for ransom from the company.

Unfortunately, CloudPets’ security flaws do not appear to be an isolated event. While retailers and banks have beefed up their cybersecurity in recent years after a number of high-profile breaches, toy manufacturers appear to be lagging behind. In prior years, cybersecurity experts raised similar concerns with an internet-connected Barbie doll. Likewise, cybersecurity concerns have been raised with other connected devices that contain private information, such as fitness trackers like the Fitbit.

Data breaches result in serious legal and public relations consequences, including a duty to disclose breaches to the public, regulatory fines, and potential class action lawsuits. Civil actions premised on tort law, e.g., invasion of privacy, are also colorable causes of action following a breach involving sensitive private information.

Finally, data breaches can also result in severe financial consequences for the companies involved. For Spiral Toys, the CloudPets breach has directly or indirectly caused its stock price to drop to one cent. Moving forward, manufacturers of “connected” 21st century toys and gadgets should study cybersecurity best practices and cyber-threat trends to stay ahead of the pack and reduce their likelihood of becoming targets for opportunistic hackers.

Court Confirms Right to Be Forgotten Is Not Absolute

Posted in EU Data Protection, Legislation, Privacy

It has been less than three years since the Court of Justice of the European Union (CJEU) decided that people have the right to have inadequate, irrelevant or outdated information about them removed from online search engine results. However, this so-called “right to be forgotten” is not absolute, as confirmed by the CJEU’s most recent ruling last week.

This case concerned an Italian director, Mr. Salvatore Manni, who sought to have his personal details removed from company records in an official public register. He believed that his properties had failed to sell because the companies register showed that he had been an administrator of another company that went bankrupt.

The CJEU held that Mr. Manni could not demand the deletion of his personal data from the official register because the public nature of company registers is intended to ensure legal certainty and to protect the interests of third parties. It was held that this interference with an individual’s fundamental rights to a private life and to the protection of personal data was not disproportionate in the circumstances. This was because company registers disclose only a limited amount of personal data, and company executives should be required to disclose data relating to their identity and functions within a company. The CJEU concluded by saying that in specific and exceptional situations, overriding and legitimate reasons may justify limiting the rights of third parties to access such data, and left it up to national courts to determine whether “legitimate and overriding reasons” exist on a case-by-case basis.

This decision echoes the ruling in the 2014 Google Spain case: the right to be forgotten must be balanced against other fundamental rights, such as the right of freedom of expression and the public’s right to know information about persons holding key positions within a company. The General Data Protection Regulation (GDPR), which codifies the right to be forgotten, also confirms this position. The right to be forgotten allows individuals to request the deletion of personal data in specific circumstances. However, the GDPR contains certain exemptions under which companies can refuse to deal with a deletion request, such as where the processing is necessary to exercise the right of freedom of expression, or for archiving purposes in the public interest.

Companies that receive requests from individuals asking that their personal data be deleted will need to determine, on a case-by-case basis, whether such data should be erased. Organizations will be required to perform a balancing act against any competing rights when considering such erasure requests.

See also:

UK’s First Ever Right To Be Forgotten Enforcement: Google In the Firing Line Again

The French Data Protection Authority Puts Google On Notice To Delist Domain Names Beyond Site’s EU Extensions

The CJEU’s Google Spain Decision: A Right to be Forgotten Within the Limits of the Freedom of Expression

Costeja’s Revenge: Orders to Delete Accurate Data and the Right to be Forgotten in the EU

Court Gives Broad Reading to Illinois Biometric Privacy Act

Posted in Privacy, Profiling, Social Media

The Illinois Biometric Information Privacy Act (IBIPA) covers face geometry scans that are created from digital images, according to a preliminary ruling last month in a lawsuit against Google. Rivera v. Google Inc., No. 16 C 02714 (N.D. Ill. February 27, 2017). The suit seeks monetary compensation for individuals identified by face recognition technology in photos uploaded to the “Google Photo” service. The ruling rejected Google’s argument that the IBIPA covers only facial scans made in person, and it potentially subjects Google and other providers of widely used facial recognition technology to significantly expanded privacy requirements in Illinois to protect the biometric privacy of individuals whose faces are in those companies’ databases.

Two individuals sued Google, seeking class action status and claiming that Google violated the IBIPA when, without their consent, Google’s software obtained facial geometry from photos of them that were uploaded to Google Photo. Google Photo is a cloud-based Google offering that, among other things, uses facial recognition technology to assist users in organizing and retrieving their photos. The IBIPA requires anyone who collects and stores certain “biometric identifiers,” such as “face geometry,” to first obtain the person’s consent and also requires a written policy for retention and eventual destruction of those identifiers. The statute provides for damages of $1,000 for each negligent violation and $5,000 for each intentional violation.

In seeking to have the suit dismissed before proceedings began, Google argued that language in the statute excluding photographs from some parts of the IBIPA should be applied to interpret the statute’s definition of covered “biometric identifiers” to mean that only in-person scans are covered. The statute defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint or scan of hand or face geometry.” The court, in a detailed 30-page ruling carefully analyzing the text of the statute and the legislative history, concluded that despite “photograph” being expressly excluded from a different definition in the statute, the Illinois legislature did not intend to distinguish between in-person and virtual scans in the definition of “biometric identifier.” As a result, the court interpreted “biometric identifier” to include face geometry extracted from Google Photo images.

If this interpretation ultimately prevails, it would have a significant impact, at least in Illinois, on the privacy compliance requirements for a broad and growing category of technology products. In addition to Google, a great many photo sharing and social media product providers use similar facial recognition technology to identify people, to organize photos and to add features and images to photos.  The IBIPA would require all the entities providing these functions to specifically inform their users about the collection of face geometry and to publish a retention schedule, detailing how the data will be kept and when it will be deleted.

The impact of this Illinois statute on the rest of the country remains a contested issue. In its ruling, the court concluded that at this early stage of the lawsuit there was sufficient indication that the statute was violated in Illinois so that, unless contrary evidence is introduced, it would apply in this case. That conclusion, however, was based on the assertion that the pictures were taken and uploaded in Illinois, and without an analysis of where the facial geometry was extracted or stored. The court deferred to a later stage of the litigation the federal constitutional questions about whether this Illinois statute could govern Google’s (and other internet providers’) actions across the United States.

Three Major Security Issues to Consider with SaaS and Cloud Solutions

Posted in Cybersecurity, Data Security

Small and medium-sized businesses are turning to software as a service (SaaS) solutions for their IT needs more and more frequently. SaaS solutions can provide end-users with quicker, cheaper access to software that they might not otherwise have at their disposal. SaaS solutions can also be more scalable, which is important for early-stage companies. However, SaaS and cloud data storage are still relatively nascent technologies and carry some risks. When your business turns to SaaS and cloud solutions, consider the following three major issues:

  1. Data Security: Data breaches happen all the time. News reports of hacking and industrial espionage hit the headlines daily and present a serious threat to small and medium-sized businesses. On-premises software presents its own set of security concerns, but be wary of new technologies and vendors that do not have a robust security system in place.
  2. Ongoing Business Concerns: Small and medium-sized businesses often have no option but to outsource certain tasks, such as IT. However, when you outsource IT, you lose visibility into how your service provider is faring as a business, which can open you up to various risks.
  3. Availability: Employees at small and medium-sized businesses work 24/7 and need access to company data 24/7. However, with SaaS and cloud computing, outages beyond your control, such as internet or power failures, are a common problem.

Keeping these three issues in mind, what should you do? First, perform due diligence on your vendors to filter out mediocre SaaS providers and find the right solution for your business. Ask vendors about their disaster plans and recovery methods, risk analyses and protocols. Request information and recommendations from current customers. Find out if there have been prior security breaches. Read any terms and conditions, and don’t skip the fine print. Make sure that any software or data that is critical for the continuation of your work is escrowed. A well-drafted software escrow agreement can go a long way in the event of an issue. If any customizations or updates to the software are done specifically for your business, make sure that those are covered as well, not just the original software version.

The bottom line: expect the unexpected and mitigate any future security issues that might arise.

The Validity of EU-U.S. Personal Data Export Tools: A Pending Issue

Posted in EU Data Protection, Legislation

Between the cancellation of the Safe Harbor by the Court of Justice of the European Union (CJEU) and the adoption of the Privacy Shield, a number of data exporters have relied on the Standard Contractual Clauses (SCC) as the safest tool for transferring personal data from the EU to the U.S. But as announced in our previous blog posts, the validity of the SCC and the Privacy Shield must still pass the EU legal test with regard to the fundamental right to data protection.

Indeed, while the Privacy Shield is facing an action for annulment brought by Digital Rights Ireland before the CJEU, it is now the turn of the SCC to be examined, in the context of a complaint filed by Maximilian Schrems against Facebook Ireland Limited with the Irish data protection authority (DPA). That case has been submitted by the DPA to the Irish High Court, which is now considering whether to refer the question to the CJEU.

On May 24, 2016, the Irish DPA issued a draft decision summarizing its concerns about the validity of the SCC. It is worth noting that this was a turning point for the Irish DPA: the former Irish Commissioner, Billy Hawkes, defended the Safe Harbor against Maximilian Schrems and some other DPAs, whereas the new Irish Commissioner, Helen Dixon, essentially defends the opposite position, despite improvements in U.S. law and the SCC that occurred after the cancellation of the Safe Harbor. This may signal an evolution driven by the entry into force of the EU General Data Protection Regulation, the strong, unified piece of data protection legislation that will apply from May 2018.

The Irish DPA’s main concern about the use of the SCC is the absence under U.S. law of an effective judicial remedy allowing EU citizens to enforce their right to data protection where there is a risk that their personal data will be processed by U.S. state agencies for national security purposes. Indeed, even if an EU citizen meets the criteria for a remedy against surveillance under the U.S. Foreign Intelligence Surveillance Act, it appears from U.S. court decisions that they cannot sue the U.S. government.

Concerning the Privacy Shield, it is too soon to know whether it will survive the new U.S. political era. As was observed with the now-defunct Safe Harbor, strong voices are beginning to make themselves heard, pitting industry and the EU and U.S. Privacy Shield negotiators (pro) against EU civil society and some members of the European Parliament and the DPAs (contra).

The key issue ultimately lies in the ability of U.S. legislation to grant data subjects enforceable data protection rights that EU authorities and courts would find at least equivalent to those granted in the EU. The two legal cases mentioned above, as well as the economic stakes of EU-U.S. data flows, should put strong pressure on the U.S. government to provide additional guarantees.

For more information on the future of the Privacy Shield and SCC, please refer to the following prior Password Protected blog posts:

Expected Soon: Modifications of the Standard Contractual Clauses

Is the Privacy Shield Viable? Article 29 Working Party Proposes to Wait for Its Final Verdict

New Threat to Transatlantic Personal Data Transfers: Possible Invalidation of Standard Contractual Clauses

WP 29 Expresses Concerns About EU-U.S. Privacy Shield

The Rising Importance of Data Privacy and Security Practices for Healthcare Entities Facing Intensified Challenges

Posted in Data Security, Health Information, Information Management

For those in the healthcare industry, the privacy and security of information is vital to operations, but the importance and value of health information also makes the industry a prime target for threats. Studies suggest that the vast majority of healthcare organizations have experienced one sort of data breach or another. In fact, a May 2016 report from the Ponemon Institute found that almost 90% of healthcare organizations had experienced a breach in the preceding two years, and 45% had experienced more than five breaches in that period. Healthcare providers are also increasingly under attack by “ransomware” or “denial of service” attacks, which lock up systems and hold them hostage until a ransom is paid to unlock them. And while various agencies, including the FBI, recommend that providers not pay the demands of cyber criminals who execute ransomware attacks, this may not be a feasible option for providers who have failed to maintain robust data back-up systems. Furthermore, the Office for Civil Rights has issued guidance indicating that ransomware attacks need to be treated as security incidents and analyzed under HIPAA’s breach notification rule, although it recognizes that whether the incident will require notification to patients (and the OCR) is a fact-specific matter. Finally, healthcare organizations are also subject to universal scams, such as the W-2 scam, which was previously discussed on the Password Protected blog.

Preparing for ransomware and other attacks is not the only challenge; healthcare entities should be mindful that failure to comply with HIPAA is becoming increasingly costly. To be sure, the Office for Civil Rights (“OCR”) has substantially ramped up its enforcement efforts. Specifically, in 2016, OCR fines totaled $23 million, which is not only a new record but also roughly three times the previous record of $7.4 million (2014). Aside from nearly doubling the record number of enforcement actions (from seven to 13), 2016 witnessed a new record settlement: $5.5 million, paid by Advocate Health Care System. Notably, the Advocate settlement was part of an enforcement blitz involving a settlement a week for three weeks in a row, as was previously reported in this blog. Furthermore, in August 2016, the OCR announced an initiative to target smaller breaches (those involving fewer than 500 individuals), which means that small providers should no longer expect to “fly under the radar” of HIPAA enforcement.

2017 is already off to a strong enforcement start. The OCR kicked off the year with the announcement of a (relatively modest) settlement of $475,000 for failure to make timely notifications of a breach. Then, on February 1, 2017, the OCR announced that Children’s Medical Center of Dallas (“Children’s”) had to pay a civil money penalty of $3.2 million for its failure to implement appropriate risk management plans despite external recommendations to do so. Indeed, in 2010, Children’s experienced the loss of an unencrypted, non-password-protected Blackberry device that contained the protected health information of approximately 3,800 people. And, in 2013, Children’s notified the OCR of a separate breach involving the theft of an unencrypted laptop containing the electronic protected health information of 2,462 individuals.

Finally, on February 16, 2017, the OCR announced a HIPAA settlement that matched the previous high-water mark for settlements: $5.5 million.  In this latest case, Memorial Healthcare System settled with the OCR following a situation in which the protected health information of 115,143 individuals was impermissibly accessed by its employees and impermissibly disclosed to an affiliated physician’s office staff.  According to the OCR’s announcement, the login credentials of a former employee of a physician’s office were used from April 2011 to April 2012, without detection, and resulted in the unauthorized disclosure of information regarding 80,000 individuals.  Although the hospital had audit control policies in place, it failed to implement procedures for reviewing, modifying, and terminating rights of access, and it failed to regularly review system activity.

Looking Ahead: Prioritize Robust Data Privacy and Security Practices

The lesson in all of this is that no healthcare organization should be coasting when it comes to data privacy and security activities. Not only are providers under nearly constant attack, they are also likely to be subject to more aggressive enforcement and higher penalties if the OCR discovers inadequate compliance initiatives. To be sure, with the new focus on smaller breaches and the requirement that all breaches be reported to the OCR, no healthcare organization should consider itself immune from an enforcement action. The solution: constant vigilance, routine training, regular updates to security risk assessments, and implementation of policies as they are written.