
Password Protected

Data Privacy & Security News and Trends

NY Cybersecurity Regulations for Financial Services Companies: Enforcement Begins Aug. 28

Posted in Cybersecurity, Financial Services Information Management, Regulation

The 180-day transitional period under the New York Department of Financial Services (NYDFS) Cybersecurity Requirements for Financial Services Companies is set to expire Aug. 28, 2017. Financial services companies must achieve compliance with the cybersecurity regulations prior to this deadline or face substantial monetary penalties and reputational harm.

Cybersecurity Regulation Overview

The cybersecurity regulations became effective March 1, 2017. In its official introduction to the regulations (23 NYCRR 500), NYDFS observed that the financial services industry has become a significant target of cybersecurity threats and that cybercriminals can cause large financial losses for both financial institutions and their customers whose private information may be stolen for illicit purposes. Given the seriousness of this risk, NYDFS determined that certain regulatory minimum standards were warranted but avoided being overly prescriptive, to allow cybersecurity programs to match the relevant risks and keep pace with technological advances.

The cybersecurity regulations require each financial services company regulated by NYDFS to assess its specific risk profile and design a program that addresses its risks in a robust fashion. The required risk assessment, however, is not intended to permit a cost-benefit analysis of acceptable losses where an institution faces cybersecurity risks. Senior management must be responsible for an organization’s cybersecurity program and file an annual certification confirming compliance with the regulations. A regulated entity’s cybersecurity program must ensure the safety and soundness of the institution and protect its customers.

NYDFS has issued a clear warning of its intent to pursue strong enforcement of the Cybersecurity Regulations:  “It is critical for all regulated institutions that have not yet done so to move swiftly and urgently to adopt a cybersecurity program and for all regulated entities to be subject to minimum standards with respect to their programs.  The number of cyber events has been steadily increasing and estimates of potential risk to our financial services industry are stark.  Adoption of the program outlined in these regulations is a priority for New York State.”

To learn more about who is affected, required actions to comply, possible penalties and upcoming deadlines, click here.

Part Two: Abandoned Mines and Data Retention Policies

Posted in Data retention

As discussed in Tuesday’s post, in addition to taking reasonable precautions to secure data, companies should consider whether they have an affirmative duty to destroy data in the United States – to clear the explosives out of the shed, so to speak.

Contractual duties to destroy records have been in existence since judges wore powdered wigs. For example, in M&A transactions, if after due diligence a company decides not to proceed with an acquisition, typically the purchaser must return or destroy any confidential data that was obtained.  The same is true in vendor agreements involving proprietary processes and methods.  But there may be another, less obvious source of a contractual duty to destroy records.  Every company with employees necessarily maintains confidential data drawn from employment applications, background checks, personnel files, payroll records, health insurance records, marital and collections orders, compliance with subpoenas regarding individual employees, and the like.  Businesses frequently make representations to and agreements with prospective and current employees regarding how they will treat their confidential information, and those representations may give rise to conflicting interpretations after an employee has separated from the company.

Every company should review the representations made in its employment applications, recruiting materials, and employee handbooks for statements that expressly or implicitly fix an end date to the company's document retention periods ("we will keep your application on file for one year; after that you will need to re-apply"), or for statements that impose an obligation to take "reasonable" care to ensure the privacy of confidential data. An employee's understanding of the term "reasonable" may include the expectation that confidential records will not be kept by the company longer than necessary to defend a lawsuit regarding the employee's employment performance.  For example, if you last worked for a company 10 years ago, would it surprise you to learn that your complete personnel file is still sitting in a mini-storage unit rented by the company?  Would you be troubled to learn that the company still had a copy of an investigator's background report, including your residence history, social security number, neighbors' comments, salary history, divorce records, and credit reports?  What if you were the subject of a meritless harassment complaint, and were never told because the company had already been planning to change your office location and simply accelerated the move in response to the complaint?  Could you be harmed if the information in those confidential files was made public?  What a court today construes as the parties' reasonable expectation about how long confidential files should be kept may differ from what one of the parties subjectively expected at the time.

A discrete statutory duty to destroy data may also exist. For example, health care institutions often address compulsory document destruction requirements under HIPAA, including standards relating to the manner of data destruction, the number of degaussing passes required, and/or the level of physical destruction necessary before an electronic document can be deemed destroyed.  Other state and federal requirements expressly mandate document destruction, sometimes in surprising contexts.
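
As a technical aside, the statutes speak mostly to outcomes rather than mechanics, but the mechanics are worth a glance. Below is a minimal Python sketch, for illustration only, of one common software-based sanitization technique for individual files: overwrite, then delete. The file path is hypothetical; the approach is at best effective on magnetic media and does not defeat SSD wear-leveling, filesystem journaling, backups or snapshots, so regulated entities should look to formal standards (such as NIST SP 800-88) rather than treat a sketch like this as sufficient.

    import os
    import secrets

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        # Overwrite a file's contents with random bytes before unlinking it.
        # Illustration only: does not sanitize SSDs, journaled filesystems,
        # backups, or snapshots.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(secrets.token_bytes(size))
                f.flush()
                os.fsync(f.fileno())  # force each pass onto the disk
        os.remove(path)

    # Hypothetical example: destroying a long-retired personnel record.
    # overwrite_and_delete("archive/personnel_1998_0042.pdf")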

A third potential duty to destroy documents is worth considering. When a data breach occurs, the company whose records were exposed can expect to be sued by a variety of persons and entities, including state attorneys general, class action lawyers representing victims of the breach, banks and insurers who have paid damages for losses, and/or disgruntled investors who take umbrage at the quality of the company's data privacy practices.  Discovery regarding the breach likely will first revolve around the nature of the breach, the steps the company took to secure and control access to the records, and the reasonableness of the company's policies and practices regarding data privacy.  But for older documents, one question is inevitably going to be asked: Why did the company even have those records at that point in time? If the answer falls short of common industry practices, contravenes a representation in the employee handbook, runs afoul of some then-existing judicial decision, or simply fails to account for the reasonable expectations of the pertinent parties, the company may have difficulty defending its failure to timely destroy the records that were exposed.

Data breach class actions typically allege violations of state unfair business practice and consumer protection laws (in addition to statutory notice, negligence, breach of contract, conversion, and more esoteric claims). That is no accident; unfair business practice standards for liability are often nebulous and ill-suited for summary resolution.  In 2015, a California consumer protection advocacy group filed a complaint with the Federal Trade Commission against Google.  In that complaint, the group argued that Google's refusal to recognize in the U.S. the "right to be forgotten" that is codified in the EU constitutes an unfair business practice in the U.S. The complainant cited Section 5 of the Federal Trade Commission Act (prohibiting unfair business practices).  It is not a great stretch to imagine a similar claim being filed by plaintiffs placed at risk by a company's failure to timely destroy, not merely secure, old records.  There may be no legal precedent for such a claim, but what company wants to become that precedent by suffering a data breach involving old documents that the company no longer needs?

Alfred Nobel's patented method for combining diatomaceous earth with nitroglycerin made the nineteenth century explosives shed a less dangerous place to be. But nitroglycerin in any form becomes less stable, and far more dangerous, as it ages.  Perhaps we should extend the analogy to old company documents, and take some time to clear the old explosives out of the company shed.

Another Circuit Joins the Trend of Setting a “Low Bar” for Standing in Data Breach Actions

Posted in Health Information, Litigation

Consistent with a growing trend among courts nationwide, the U.S. Court of Appeals for the D.C. Circuit unanimously held that a group of plaintiffs had cleared a "low bar" to establish constitutional standing for their claims in a data breach case against health insurer CareFirst by alleging potential future harm as a result of the breach. The plaintiffs alleged that there was a substantial risk that their personal information could be used for medical identity theft after a breach of CareFirst's systems. Despite the fact that (i) no actual misuse of the information had yet occurred and (ii) the breach involved medical information, rather than the financial or other sensitive information typically involved in successful data breach claims, the court held that the plaintiffs had established standing and their claims could move forward.

In 2016, the U.S. Supreme Court held in Spokeo v. Robins that plaintiffs must allege an actual or imminent injury, not a hypothetical harm, to establish standing and proceed past the pleadings stage. The Supreme Court found that a bare statutory violation, divorced from any concrete harm, cannot by itself support standing, and remanded the case for the lower court to identify a "concrete injury." Even after the Supreme Court's decision, appellate courts have split on how to interpret that standard in data breach cases and whether to find standing based on a risk of harm, and courts are increasingly sympathetic to data breach claims.

The D.C. Circuit joins several other circuit courts that have interpreted the pleading standard liberally and in favor of data breach victims. As a result, more claims in these jurisdictions will survive past the pleading stage based on a risk of injury to the individuals affected by a breach. These rulings rest largely on an assumption that the perpetrators of information theft intend to misuse the information, suggesting that defeating such claims at the pleading stage would effectively require showing that the breached information could not or would not be used for fraud or identity theft.

Significantly, the D.C. Circuit's ruling focused on the risk of harm from breaches of information other than financial information and social security numbers, which typically form the basis for data breach claims. The court noted that there was a substantial risk to the plaintiffs of medical identity theft based on a breach of information such as names, birthdates, email addresses, and health insurance policy numbers. In addition to an overall increase in data breach claims based on potential harm, this type of ruling could expand the success of claims based on negligence or other state law doctrines arising out of breaches of health information.

It is likely that the Supreme Court will eventually weigh in on whether plaintiffs have standing in claims arising out of data breaches based on the potential for harm. In the meantime, individuals and entities who maintain personal information, whether financial or medical, should be aware that individuals affected by data breaches are increasingly likely to get their day in court.

Abandoned Mines and Data Retention Policies: Time to Clear the Explosives Out of the Shed

Posted in Data retention

Visit the remains of a nineteenth century gold mine and you may find, far away from the main camp, a lonely little shack. It may be surrounded by trees, or tucked into the crook of a hillside, but it has a distinctly unwelcoming demeanor, as if the building itself wishes to be left alone.  This is the explosives shed, and when the mine was operational, it would have been marked by signs with large letters, secured with a lock, and off limits to all but a few personnel.  The need to secure this essential component of the business was obvious; the shed's contents could shut down the mining operation with a single misstep.  Equally obvious is that the shed now stands empty, because removing old explosives is the only 100% effective way to prevent a future disaster.

There is a modern equivalent lurking in the data storage facilities of nearly every business. And, while a misstep in the storage of confidential data may not cause immediate physical harm, it still has the ability to shut down the business.  As a result, modern companies spend substantial sums ensuring that confidential records are isolated, secured, and generally off limits to all but a few personnel.

In the past two decades, as electronic means of communication supplanted physical ones, lawyers and courts became more attuned to the need, first, to retain and not destroy records and, second, to secure retained company records from unauthorized access and disclosure.  With the 2018 application (in the European Union) of the General Data Protection Regulation (GDPR) on the near horizon, companies should consider whether there is in the United States not merely a duty to take reasonable precautions to secure data, but an affirmative duty to destroy data as well – to clear the explosives out of the shed, so to speak.

The GDPR contains express obligations to destroy certain forms of data (see, e.g., Recitals 39, 65-66; Articles 5(1)(e), 17) (sometimes called "the right to be forgotten").  As Article 17 provides, a company "shall have the obligation to erase personal data without undue delay where…the personal data are no longer necessary in relation to the purposes for which they were collected." As of today, there is no U.S. statutory analog to the GDPR.  But a U.S. company may nevertheless have an obligation to destroy domestic records.  Such a duty may arise from contract, from discrete existing state or federal laws, or, importantly, from the need to ensure an adequate defense to unfair business practice and consumer protection claims.

In Part Two of this post, we will examine the affirmative legal obligations to destroy data under U.S. law.

Business as Usual: UK’s New Data Protection Bill and the GDPR

Posted in EU Data Protection, Legislation, Privacy

The UK Government will introduce a new Data Protection Bill (the “Bill”) this year. As highlighted in the Queen’s speech back in June, the Government has committed to introduce the new law and, on Monday, published a Statement of Intent.

The Bill will not change the position that the EU's new data protection legislation – the General Data Protection Regulation (GDPR) – will bring when it comes into force on 25 May 2018. The UK will still be in the EU at that time, so the GDPR will apply automatically in UK law and replace the UK's current Data Protection Act. However, when the UK leaves the EU and is no longer subject to the GDPR, the Bill will then implement the GDPR into English law. The importance of this is two-fold: it will support the UK's position with regard to preserving personal data flows between the UK, the EU and other countries around the world, and it will give UK businesses clarity about their data protection obligations following Brexit.

The Bill will also introduce the national member state derogations that are permitted under the GDPR. The Government asked for feedback (Call for Views) on how the UK should deal with these exemptions earlier this year. The Statement of Intent provides some detail on the Government's proposed approach, which includes:

  • Enabling children aged 13 years or older to consent to their personal data being processed (under the GDPR the age for valid consent is 16 unless member states reduce this through national law);
  • Maintaining the UK’s position on processing personal data relating to criminal convictions and other sensitive personal data (enabling employers to carry out criminal background checks in certain circumstances);
  • Enabling organisations to carry out automated decision making for certain legitimate functions (e.g. credit reference checks);
  • Maintaining the UK’s current position with regard to the processing of personal data in relation to freedom of expression in the media, research and archiving.

Two new criminal offences will also be created: an offence of intentionally or recklessly re-identifying individuals from anonymised or pseudonymised data, and an offence of altering records with intent to prevent disclosure following a subject access request. Both offences will be subject to an unlimited fine.
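
To see why the re-identification offence has teeth, it helps to see how fragile naive pseudonymisation can be. The following simplified Python sketch uses made-up data and assumes pseudonymisation by unsalted hashing, a common but weak approach; anyone holding a plausible list of identifiers can reverse it with a dictionary attack.

    import hashlib

    def pseudonymize(email: str) -> str:
        # Replace an identifier with its SHA-256 digest, a naive form of
        # pseudonymisation (no salt, no secret key).
        return hashlib.sha256(email.lower().encode()).hexdigest()

    # A "pseudonymised" record (hypothetical data):
    record = {"id": pseudonymize("jane.doe@example.com"), "diagnosis": "..."}

    # Re-identification by dictionary attack: recompute digests for a list
    # of plausible identifiers and match them against the dataset.
    candidates = ["john.smith@example.com", "jane.doe@example.com"]
    lookup = {pseudonymize(c): c for c in candidates}
    print(lookup.get(record["id"]))  # -> jane.doe@example.com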

The Bill will also implement the EU’s new Data Protection Law Enforcement Directive (DPLED) in English law. The DPLED sits alongside the GDPR and deals with processing of personal data by the police, prosecutors and other agencies involved in law enforcement. However, unlike the GDPR, the DPLED is an EU Directive (not a Regulation) and so must be implemented into member state law through national legislation by 6 May 2018.

The draft text of the Bill is due to be published and put before Parliament in early September. The Bill will be largely identical in effect to the GDPR. In light of the increased fines imposed by the GDPR (up to €20,000,000 (£17,000,000) or 4 per cent of an organisation’s global annual turnover, whichever is higher), companies should still be continuing with their GDPR compliance efforts to ensure adherence to the new law by 25 May 2018.
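
For a sense of scale, the upper tier of GDPR administrative fines (Article 83(5)) is the greater of €20 million or 4 per cent of worldwide annual turnover, which a short calculation makes concrete. The turnover figure in this Python sketch is hypothetical.

    def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
        # Upper tier of GDPR administrative fines: the greater of EUR 20m
        # or 4% of worldwide annual turnover (Article 83(5)).
        return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

    # A firm with EUR 2 billion in annual turnover faces a ceiling of EUR 80m:
    print(f"EUR {max_gdpr_fine(2_000_000_000):,.0f}")  # EUR 80,000,000

For most large organisations, in other words, it is the turnover-based limb, not the €20 million floor, that sets the ceiling.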

Is It Too Late for a Uniform State Approach to Data Breach Notification?

Posted in Data breach, Legislation, Notification

Privacy professionals have long lamented the myriad of approaches each state takes when it comes to data breach notification requirements. According to the National Conference of State Legislatures, 48 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring covered companies to make certain notifications to affected consumers and specified regulators when a security breach of personally identifiable information occurs.

When so many states have “left the barn,” can they be corralled into one consistent regulatory scheme? That is the question the Uniform Law Commission plans to delve into over the coming year.  At its recent annual meeting in San Diego, California, the executive committee of the Commission approved a study committee to assess whether it is desirable for the Commission to draft a uniform act governing data breach notification requirements.

What is the ULC?

Created in 1892, the Uniform Law Commission develops and drafts uniform legislation for consideration by state legislatures. Past work of the Commission has produced legislation such as the Uniform Commercial Code and the Uniform Electronic Transactions Act.  The Commission is comprised of several hundred judges, law professors, legislative staff, legislators and attorneys in private practice (“Commissioners”).  Each state, the District of Columbia, Puerto Rico and the U.S. Virgin Islands appoint Commissioners.  I serve as a Commissioner for Virginia and attended the annual meeting in San Diego.

Charge of the Study Committee

The study committee will evaluate “the need for and feasibility of state legislation on data breach notification including consideration of what sorts of personal information should be protected; to whom, when and how notice should be provided and the contents of the notice.” At this time the committee is not authorized “to consider remedies for injury caused by a data breach.”

Why Now?

The Commission's Scope and Program Committee acknowledged that 48 states have enacted some type of breach notification statute. Often the Commission will not propose a uniform law if a significant number of states have already enacted laws on the subject matter.  Given the various approaches by the States, the lack of uniformity and the emerging importance of privacy issues, the committee determined that there was value in assessing whether a uniform approach might be desirable and attainable notwithstanding that virtually every state has acted in this area.

What Happens Next?

Within the next two months, the President of the Commission will appoint members of the study committee. The committee will begin its work and report its findings to the Scope and Program Committee of the Commission.  If the study committee decides that uniformity in state law is desirable in this area, it may recommend that the Commission's Executive Committee authorize the study committee to begin the process of drafting a proposed uniform act that would eventually be provided to the States for consideration and adoption.

It is anticipated that the study committee will seek the input of the National Association of Attorneys General (“NAAG”) on this project. One of the rationales put forth for the Commission to undertake this project was the potential to work jointly with NAAG.  Support of state attorneys general will be important to the overall success of this project.  The study committee will solicit input from a broad array of stakeholders over the next year.

Implications for the Privacy Professional?

The time period for study and then possible drafting of a uniform law may take anywhere from one to three years due to the process followed by the Commission. The study period usually takes a year prior to determining whether to proceed to a drafting committee.  A proposed uniform act is usually debated, revised and further considered for a minimum of two years before it is finalized and sent to the States for consideration.  At that point, the Commissioners of each state are asked to seek introduction of the legislation in their state and to advocate for its passage.

While it is too early to gauge the success of a uniform law on this subject at state legislatures, it is fair to say that the prospects for success will be significantly impacted by whether state attorneys general are onboard with any proposed changes. Consumer protection is a primary focus of every state attorney general and any change to their authority without their support will impact the likelihood of widespread adoption of a uniform approach to this topic.

Short term, this development does not impact a company’s response to a data breach. Long term, if the effort is successful it has the possibility to lower compliance costs when a breach occurs and notification is required.  If there is any possibility of divining a path to a uniform approach, the Commission appears to be the body that has the track record to lead the States to that end.

 

Computer Viruses Have Evolved: Have Your Antivirus Contract Warranties Kept Up?

Posted in Cybersecurity, Data breach, Data Protection and Competition

By many accounts, 2017 is the 35th anniversary of widely propagating computer viruses. The recent “WannaCry” and “NotPetya” ransomware outbreaks demonstrate that computer viruses (or more broadly, “malware”) are still evolving, developing, and posing new threats. But IT contracts don’t move at the same pace. Contract provisions that address computer virus risk have become commonplace in form contracts for software and cloud computing, and in the long list of representations and warranties applicable to M&A transactions, but those provisions have evolved little since their introduction.

Standard contract language may not meet current needs. It is time to review, update and revise contract considerations for computer malware.

Although the concepts behind computer viruses can be traced back decades earlier, it was a 1982 event that is generally credited as starting today’s line of malware. That year, a 15-year-old from Pittsburgh, Pennsylvania launched the “Elk Cloner” virus, which propagated through the sharing of floppy disks on Apple II computers. Since then, the means of propagating viruses has expanded to exploit modern network computer systems and the effects have evolved from simple prank messages to become a threat to daily business operations, information security and even business continuity.

“WannaCry” and “NotPetya” are not unique, but they are powerful examples of several of the ways in which malware has evolved. Both viruses exploited a flaw in an operating system networking sub-component that provided a path past antivirus defenses. Importantly, that vulnerability had already been identified and a corrected version made available, so up-to-date, patched computers were well defended; but so many computers had not been patched that the outbreaks were still quite damaging.

There are many reasons why a particular computer system might not have been patched. Simple oversight is one, but many reasons for a failure to patch are independent of any negligence or malfeasance by the computer's owner:

  1. Unsupported software.  It may be that the vulnerable software was contained in an older version of a third-party system (for example, a computer operating system) that the vendor stopped supporting.  While the vendor might provide a fix for the latest version, older versions would remain vulnerable. In the WannaCry case, a key software provider broke with its own support policy and provided an update for the old software that remained in use, but that was an exceedingly rare occurrence.
  2. Compatibility with unsupported software. Closely related to the issue above, it often happens that a company builds additional software that depends on a particular version of an operating system.  The company would then be unable to update the operating system without re-engineering its custom software.  This leaves the combined system exposed to the vulnerability in the older underlying operating system.
  3. Embedded software unable to update.  Software may be built into industrial or medical equipment (or "internet of things" devices) that simply is not designed to receive updates promptly. In the WannaCry case, expensive hospital equipment, such as MRI scanners, was afflicted, likely for this reason.
  4. Unmanaged equipment. In certain equipment, even if there is a technical mechanism by which software updates can be made, the equipment may be managed by non-IT staff so that knowledge of the requirement and the skills to carry out the update may not be brought to bear.
  5. Lack of resources. Even where none of the above concerns apply, the necessary resources may simply not be available to an organization, or the organization may not apply them to carry out security updates in a timely manner. (A lightweight inventory audit, sketched below, can at least surface where such gaps exist.)
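
As a concrete illustration of that inventory audit, here is a minimal Python sketch. The version floors and the asset inventory are hypothetical placeholders (in practice they would come from an asset-management tool), and the third-party "packaging" library is assumed to be installed.

    # Minimal sketch of a patch-currency audit: compare an asset inventory
    # against minimum supported versions. All names and data below are
    # hypothetical placeholders, not a real API.
    from packaging.version import Version  # assumes: pip install packaging

    MIN_SUPPORTED = {
        "acme-os": Version("10.4.2"),
        "edr-agent": Version("7.1.0"),
    }

    inventory = [
        {"host": "mri-console-01", "software": "acme-os", "version": "9.8.1"},
        {"host": "hr-laptop-17", "software": "edr-agent", "version": "7.2.3"},
    ]

    for item in inventory:
        floor = MIN_SUPPORTED.get(item["software"])
        if floor and Version(item["version"]) < floor:
            print(f'{item["host"]}: {item["software"]} {item["version"]} '
                  f'is below the supported floor {floor}')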

The failure to patch, and the other factors behind it, are not always addressed by the common sort of antivirus warranty seen in many IT and transactional contracts. Common contract language focuses on the status of IT systems, that is, whether or not a virus is present, and on whether steps are taken to avoid the introduction of viruses. It generally does not address the resilience of a system to withstand the introduction of a virus or other malware.  It is also unusual for the scope of the language to encompass all the smart devices in an enterprise.

To address these broader concerns, it is time to update typical anti-malware language to cover the risks of unsupported software and software known to be vulnerable, both within and outside of the IT department. Here is a checklist of topics to cover in modernized antivirus warranties:

  1. Absence of any systems that are dependent on software that no longer has appropriate security updates available.
  2. Absence of any systems that are engineered to depend on software such that future security updates are unable to be applied.
  3. Processes in place and carried out to apply all necessary software updates.
  4. Scope of warranty expanded to include all systems that may be vulnerable, whether or not they are an IT component.

DOJ Takes Down AlphaBay, the World’s Largest Dark Web Marketplace

Posted in Cybersecurity, Identity Theft

The U.S. Department of Justice has announced the seizure of AlphaBay, the largest criminal marketplace on the Internet, which was used to sell stolen financial information, identification documents and other personal data, computer hacking tools, drugs, firearms, and a vast number of other illegal goods and services throughout the world.

AlphaBay was the largest dark web market, with estimated annual sales in the hundreds of millions of dollars, making it nearly ten times the size of the infamous Silk Road dark web marketplace that was shut down by the government in 2013. AlphaBay operated as a hidden service on The Onion Router (Tor) network, which hid the locations of its underlying servers and the identities of its administrators, moderators, and users.  Its user interface was configured like a conventional e-commerce website, where vendors could sell illegal goods or services in exchange for paying a percentage of each transaction as a commission to AlphaBay.

AlphaBay had a dedicated section of the website where users could purchase stolen credit cards and financial information, as well as stolen personally identifiable information (PII) – even offering specific search controls to allow potential buyers to filter the listings by location (city, state and country), social security number, birth year, credit limit, PIN, seller, seller rating, price, and more.

The international operation to seize AlphaBay’s infrastructure was led by the United States and involved cooperation with law enforcement authorities in Thailand, the Netherlands, Lithuania, Canada, the United Kingdom, and France, as well as the European law enforcement agency Europol. On July 5, Alexandre Cazes, a Canadian citizen residing in Thailand, was arrested by Thai authorities on behalf of the United States for his alleged role as the creator and administrator of AlphaBay.  On July 12, Cazes apparently took his own life while in custody in Thailand.

The Federal Bureau of Investigation (FBI) and the Drug Enforcement Administration (DEA) have seized millions of dollars’ worth of cryptocurrencies that represent the proceeds of AlphaBay’s illegal activities, including at least 1,943 Bitcoin, 8,669 Ethereum, 3,691 Zcash, and 11,993 Monero. Cazes and his wife had also amassed numerous other high value assets, including luxury vehicles, residences and a hotel in Thailand.

Prior to its takedown, there were over 250,000 listings for illegal drugs and toxic chemicals on AlphaBay, and over 100,000 listings for stolen and fraudulent identification documents and access devices, counterfeit goods, malware and other computer hacking tools, firearms and fraudulent services. Comparatively, the Silk Road dark web marketplace reportedly had approximately 14,000 listings for illicit goods and services at the time of seizure in 2013 and was the largest dark web marketplace at the time. These numbers indicate that the use of dark web marketplaces for illegal commerce will only continue to grow, despite the closure of AlphaBay.

In his public remarks regarding the seizure of AlphaBay, Attorney General Jeff Sessions stated, "This is likely one of the most important criminal cases of the year. Make no mistake, the forces of law and justice face a new challenge from the criminals and transnational criminal organizations who think they can commit their crimes with impunity by 'going dark.' This case, pursued by dedicated agents and prosecutors, says you are not safe.  You cannot hide. We will find you, dismantle your organization and network.  And we will prosecute you."

New Guidance Issued by EU Data Protection Regulators – Does Your Organization Use Social Media During Recruitment?

Posted in EU Data Protection, Privacy, Regulation

The Article 29 Data Protection Working Party (comprising representatives from the data protection regulators in each EU Member State, the European Data Protection Supervisor and the European Commission) has issued an opinion on data processing at work (2/2017) (the Opinion).  The Opinion is not legally binding but it does provide an indication as to how EU data protection regulators will consider and interpret EU data protection law.  The new EU data protection law (the General Data Protection Regulation – or the GDPR) comes into force on 25 May 2018 and will impose significant fines on non-compliant organizations (up to 4% of annual worldwide turnover or €20 million, whichever is higher) in addition to giving individuals more rights with regard to their personal data.  The GDPR does not only apply to EU companies, but can also apply to non-EU based organizations processing EU citizens’ personal data.

The Opinion notes that in light of the increasing amount of personal data that is being processed in the context of an employment relationship, the balance between the legitimate interests of the employer and the privacy rights of the employee becomes ever more important. It provides guidance on a number of specific scenarios, including the use of social media during recruitment. Nowadays, employers may be tempted to view job applicants' social media profiles as part of the recruitment process. However, according to the Opinion, employers may only use social media to find out information about a job applicant where: (a) they have a "legal ground" for doing so; (b) doing so is necessary and relevant to the performance of the job being applied for; (c) the applicant has been informed that their social media profiles will be reviewed; and (d) the employer complies with all of the data protection principles set out in the law.

What steps should your organization take if it wishes to review social media profiles as part of the recruitment process while also complying with the Opinion and EU data protection law?

Huge Relief From Decision Not to Hold eClinicalWorks Customers Liable for Their Vendor's Actions, But Providers Should Not Drop Their Guard

Posted in Health Information, Regulation

There are inherent risks in any vendor relationship. In the healthcare industry, with its myriad regulatory pitfalls, the stakes can be even higher. Several customers of the cloud-based electronic health record (EHR) software vendor eClinicalWorks were relieved by a recent decision in which regulators decided not to take action against them as a result of the alleged wrongdoing of eClinicalWorks. While this decision prompted a huge sigh of relief, it should not be seen as an open invitation to adopt a lax approach to vendor engagements.

eClinicalWorks recently agreed to pay $155 million and enter into a five-year Corporate Integrity Agreement to settle allegations that it violated the federal False Claims Act by concealing from its certifying entity information indicating that its EHR software failed to meet certain certification requirements. Those requirements must be met for eClinicalWorks' software to satisfy the "Meaningful Use" standard for EHR under the federal HITECH Act.

Under the HITECH Act, providers can receive incentives for using certified EHR. Providers participating in the Meaningful Use program must attest to the certification of their EHR software in order to qualify for the incentive payments. The United States Department of Justice claimed that eClinicalWorks caused its customers to submit false claims for federal incentive payments tied to the Meaningful Use of EHR when they relied on eClinicalWorks' improper certification.

In response to the eClinicalWorks settlement, the Centers for Medicare and Medicaid Services (CMS) stated that it would not take action against eClinicalWorks customers who had otherwise acted in good faith with respect to eClinicalWorks' technology. The settlement and, more specifically, CMS' reaction to it, highlight CMS' position that providers may reasonably rely on the representations of their software vendors for the accuracy of reporting. CMS further indicated that it does not plan to audit eClinicalWorks customers based on the settlement.

Although CMS’ statement certainly relieves some pressure from healthcare providers who contract with third parties, it is important to note that this settlement is a single situation, and the regulators may take a different approach in the future based on different facts. Furthermore, the Office for Civil Rights (OCR), which is responsible for HIPAA compliance, has not issued an opinion on this topic, and CMS has not published formal guidance to support this position more broadly.

Despite the fact that HIPAA does not (currently) require auditing or any form of specific monitoring of business associates, some form of oversight and/or vendor vetting is often appropriate and may significantly help to reduce the risk of liability if there is a breach or some other issue with the business associate vendor.

Finally, providers cannot ignore issues if they learn of them—regardless of how issues are discovered. Indeed, healthcare providers remain responsible for taking corrective action (including making any necessary disclosures) when they become aware of any HIPAA and HITECH violations by their business associates.