
Password Protected

Data Privacy & Security News and Trends

HIPAA in Due Diligence (Part II): Cloud Server Data and HIPAA Compliance

Posted in HIPAA

Health Information Highlight

Welcome back to our three-part series examining ways to efficiently identify, address and mitigate gaps in HIPAA compliance in transaction diligence. In Part I of this series, we discussed four key diligence questions upon which buyers should focus their efforts in a transaction. Here, we review considerations related to storage of and access to diligence materials, particularly in the context of using a data room or other cloud-based server.

For an online or virtual data room administrator, granting an inquiring stakeholder, valuator, or reviewing party access to an acquisition target’s documentation may be as simple as a few clicks and perhaps an email or two. However, if any document contains personal or identifiable health information, a number of privacy and data protection regulations may treat access to that information by an unauthorized party as a violation. In the case of a disclosure of protected health information (PHI) in a healthcare transaction, HIPAA may impose significant penalties both on the target providers posting the PHI and on the unauthorized parties accessing it.

There are a number of ways to minimize the risk of inadvertent unauthorized disclosure:

1. Consider Restricted Access. The uploading party can restrict unauthorized parties’ access to uploaded PHI by either (a) preparing separate data rooms, one containing PHI for authorized parties and one without PHI for unauthorized parties, or (b) if the data room’s user features permit, restricting unauthorized parties’ access to the specific documents or folders that may contain PHI. Before permitting or restricting access, a covered entity uploading its data should review and categorize its relationship with each accessing party for HIPAA purposes. All parties accessing data should enter into and be bound by appropriate confidentiality provisions relating to the data, which may include putting a Business Associate Agreement (BAA) in place.

2. Remove Patient Identifiers. Alternatively, the uploading party can scrub all data and financials of any patient identifiers before upload so that only “clean” versions of documents enter the data room. The uploading party could also elect to provide “model” contracts rather than contracts that might disclose PHI. For provider financial data, which may include patient-level detail that qualifies as PHI, this scrubbing can be a particularly time-consuming investment of resources. Regardless, the up-front investment in cleaning data prior to uploading reduces the risk of disclosing any actual PHI. (A minimal scrubbing sketch appears after this list.)

3. Secure Data Rooms. Choose a secure data room provider that complies with applicable data protection laws. Popular consumer file-sharing applications may be exceedingly simple to set up and share, and free to use; however, many such cloud providers lack appropriate security or data protection measures and may increase the risk of unauthorized access.
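
For teams pursuing the second option, parts of the scrubbing step can be automated before documents ever reach the data room. The following is a minimal, illustrative Python sketch that redacts a few common identifier patterns (Social Security numbers, phone numbers, email addresses, medical record numbers and dates). The patterns and the redact_phi function are our own assumptions rather than a feature of any particular data room product, and automated redaction covers only a subset of the 18 HIPAA identifiers, so human review remains essential.

  import re

  # Illustrative patterns for a few common identifiers; a real scrubbing pass
  # must address all 18 HIPAA identifier categories (names, addresses,
  # account numbers, etc.) and be verified by a human reviewer.
  PATTERNS = {
      "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
      "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
      "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
      "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
      "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
  }

  def redact_phi(text: str) -> str:
      """Replace matched identifiers with bracketed placeholders."""
      for label, pattern in PATTERNS.items():
          text = pattern.sub(f"[REDACTED {label.upper()}]", text)
      return text

  sample = "Patient MRN# 12345678, DOB 01/02/1975, SSN 123-45-6789, call 804-555-0123."
  print(redact_phi(sample))
  # Patient [REDACTED MRN], DOB [REDACTED DATE], SSN [REDACTED SSN], call [REDACTED PHONE].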

Stay tuned for Part Three, where we will examine HIPAA risk mitigation strategies.

 

HIPAA in Due Diligence (Part I): Four Key Diligence Questions

Posted in HIPAA

Health Information Highlight

Welcome to a three-part series that will examine several ways to efficiently identify, address, and mitigate gaps in HIPAA compliance in transaction diligence.

A target’s value is often held in its information and people. An increased risk of HIPAA enforcement means that privacy and security diligence should not be a “check the box” activity. Buyers should fully understand the scope of potential risk in the early stages of transaction diligence, take steps to adequately mitigate any potential go-forward risk, and, most importantly, understand the cost of protecting the target’s greatest assets.

Beginning last year, we saw a substantial increase in the economic impact of HIPAA enforcement by the Department of Health and Human Services, Office for Civil Rights (OCR). Since then, several new cases have illuminated the need for increased scrutiny of HIPAA compliance during the transaction diligence process.

To better understand a seller’s overall HIPAA compliance, there are four key diligence questions upon which buyers should focus their efforts in a transaction:

1. Does the seller have the core HIPAA documentation in place? At minimum, the buyer should look for:

  • Privacy and Security Rule Policies and Procedures
  • Breach Notification Policies and Procedures and Risk Assessments
  • Security Audits and Incident Logs
  • HIPAA Risk Analyses (for the last 2-3 years) and corresponding Management Plans
  • Business Associate Agreements (BAAs) with Contractors/Customers
  • As applicable, Notice of Privacy Practices

2. Is the seller complying with its policies? The principal measure of the effectiveness of a HIPAA compliance program is whether the seller’s internal controls and compliance practices live up to the promise set out in the policies. To determine whether a seller is complying with its policies, a buyer should look to whether the seller is:

  • sufficiently training employees and documenting this training;
  • assessing and tracking security incidents;
  • identifying and empowering compliance personnel;
  • auditing and monitoring compliance on a periodic basis; and
  • performing frequent security assessments regarding risk areas.

In some cases, a simple public news search may identify a target’s incidents or reputational risks that may be meaningful to the buyer, even where a formal investigation or enforcement action has not yet been triggered.

3. How does the seller address potential HIPAA security and breach risk areas? A seller’s representation that “no HIPAA breaches have occurred” may tell the buyer much about what the seller is not doing to identify and take action on various security and privacy compliance risks. The buyer should review the seller’s security risk analyses, breach assessments, and investigation logs to understand the seller’s historical liabilities and what the seller has treated as actionable risks. The buyer may also wish to understand how the seller assesses third-party risks, including determining BAA compliance and determining whether and how third parties access and use protected health information (PHI).

4. What is the nature of the risk related to any identified gaps? A buyer should carefully consider the spectrum of liability to the parties related to risks identified in transaction diligence. The buyer should review these liabilities in the context of:

  • the risk of governmental enforcement, including more restrictive state and international laws that may attach to the data;
  • civil liability, including contractual breaches;
  • ethical and organizational fines;
  • criminal executive liability for profiting from or knowingly failing to report breaches; and
  • related reputational harm to the parties arising from an enforcement action or third-party suit.

Stay tuned for Part Two, where we will examine cloud server data and HIPAA compliance strategies.

Is HIPAA A Sleeping Giant?

Posted in Health Information, HIPAA

So far, 2018 has been a light year in terms of HIPAA enforcement.  There have been only two publicly disclosed settlements.  But that doesn’t mean covered entities and business associates should let their guard down and assume that they don’t need to be mindful of HIPAA.  Indeed, it is hard to know what is going on in the Office for Civil Rights (OCR) with respect to enforcement.  Theories include that the priorities of the current administration are driving less enforcement, that the OCR is focusing its efforts on the current round of audits, and that the OCR is simply holding back on some settlements so that it can ensure a consistent approach to multiple settlements that it will announce in the near future.  No matter the answer, it is not safe to assume that things will remain quiet on the HIPAA front.

Looking at the 2018 settlements, they reflect two very different scenarios, and they both demonstrate that HIPAA settlements can take a long time to work their way through the OCR (which makes predicting enforcement even more difficult).  The first settlement of the year was with Fresenius Medical Care North America (Fresenius) for $3.5 million and the adoption of a comprehensive corrective action plan.  The Fresenius settlement dates back to 2012, when Fresenius experienced breaches at five different facilities around the country.  The OCR’s investigation revealed systematic failures by Fresenius to adopt appropriate policies and procedures to address the Privacy and Security Rules.  In the press release for the Fresenius settlement, the OCR Director stressed the importance of enterprise-wide risk analysis.

The second settlement was for $100,000 with the receiver that was appointed to liquidate the assets of Filefax, as it was closing its operations in 2015.  The OCR’s investigation followed an anonymous complaint regarding improper disposal of medical records, and the OCR found a variety of instances in which records were left unsecured.  Even though Filefax had closed, the receiver was held responsible for ongoing compliance with HIPAA.  Thus, the OCR has confirmed that closing operations does not relieve covered entities of HIPAA obligations, and that any entity that assumes custody of health records needs to be mindful of HIPAA.

Given that the Omnibus Final Rule is now more than five years old, the OCR is unlikely to tolerate non-compliance and it is probably only a matter of time before the sleeping giant awakens—or, more likely, that we learn that the giant hasn’t been sleeping at all.  Indeed, because settlements take so long to process, no one outside the OCR really knows how active the OCR is with respect to enforcement activities for situations occurring right now.  Therefore, all covered entities and business associates need to stay vigilant with respect to the three pillars of HIPAA compliance: Privacy Rule Policies and Procedures, reasonably current Security Rule Risk Assessments, and workforce training regarding HIPAA.  And, any entity that experiences a breach—particularly a breach involving 500 or more individuals that requires prompt notice to the OCR—should revisit all three of these compliance pillars.

To help mitigate the risk of HIPAA enforcement actions, stay tuned for a three-part series that will examine several ways to efficiently identify and address gaps in HIPAA compliance during transaction diligence.

“White Hat” Ethical Hackers and Corporate Investigations

Posted in Cybersecurity, Data Security, Information Management, Legislation

A “white hat” is an ethical computer hacker who specializes in penetration testing and other testing methodologies to ensure the security of an organization’s information systems. According to the EC-Council, “The goal of the ethical hacker is to help the organization take pre-emptive measures against malicious attacks by attacking the system himself or herself; all the while staying within legal limits.”  White hat hackers usually present their skills as benefiting their clients and broader society. They may be reformed black hat hackers or may simply be knowledgeable about the techniques and methods used by hackers.  However, white hats have been known to offer broader hacking services, such as gathering information about persons or entities at odds with those hiring the white hat.  Ethical hackers have been compared to digital versions of private investigators or investigative reporters.

In considering whether to engage a white hat hacker, there are a number of precautions that a company should take to increase the likelihood that the white hat will be credible, professional and ethical and only engage in lawful activities during the course of the engagement.

Credibility.  Consider existing relationships, references and certifications.  For example, the EC-Council offers a Certified Ethical Hacker accreditation.  Many large consulting firms provide ethical hacking services. References from trusted peers are also extremely important.

Background Check.  Conduct a thorough background check.  Although the white hat may be affiliated with a reputable consulting firm, verify his or her experience and credentials and investigate possible criminal history.  Do not assume that what the hacker tells you is true.

Engagement Letter.  Have the hacker sign an engagement letter or similar contract that clearly defines the engagement, prohibits any illegal or unethical conduct, and addresses liabilities, indemnification and remedies where appropriate.  Specify the hacking methods that are and are not acceptable and which information systems, networks and data may be accessed.  Require the hacker to provide proof of adequate professional liability insurance.

Confidentiality Agreement.  Require the hacker to sign a confidentiality or non-disclosure agreement that strictly prohibits the use or sharing with others of any information gathered as part of the engagement and that specifies the penalties for violation or references penalties set forth in the primary agreement.

Oversight.  Monitor the hacker’s activity and be on the lookout for any suspicious activity—both during and after the white hat’s work.  Ensure that the hacker remains within the scope of work defined within the engagement letter.  If the scope of work changes, revise the engagement letter accordingly.  Keep in mind that access to information systems presents opportunities to set conditions for future remote access or other unauthorized, nefarious activities.

Work Product.  Consider the desired work product that will be developed over the course of the white hat’s engagement and whether the white hat should report to the General Counsel or outside counsel to protect privilege.  For the findings to be admissible as evidence in civil litigation, the white hat must be willing to submit a signed affidavit describing the results of the investigation under oath, and possibly to testify.  Not every white hat makes a good witness.

 

D.C. Circuit Issues Long-Awaited Decision on FCC’s 2015 TCPA Order

Posted in Litigation, Privacy

Nearly two and a half years following the appeal of the Federal Communications Commission’s (FCC) July 2015 Order, the U.S. Court of Appeals for the District of Columbia Circuit issued a ruling on March 16, 2018.  On appeal, over a dozen entities sought review of the 2015 Order, in which the FCC interpreted various aspects of the Telephone Consumer Protection Act (TCPA).  The appeal addressed four issues: (1) which devices constitute an automatic telephone dialing system (ATDS or “autodialer”); (2) whether a call to a reassigned phone number violates the TCPA; (3) whether the FCC’s approach to revocation of consent was too broad; and (4) whether the FCC’s exemption for certain healthcare-related calls was proper.

In short, the court set aside the FCC’s definition of an ATDS and vacated the FCC’s approach to calls placed to reassigned numbers.  The court upheld, however, the FCC’s broad approach to a party’s revocation of consent and sustained the scope of the FCC’s exemption for time-sensitive healthcare calls.

  1. ATDS

The FCC’s 2015 Order held that the analysis of whether equipment constitutes an ATDS is not limited to its present capacities, but also includes its “potential functionalities”—therefore having the apparent effect of encompassing ordinary smartphones. On appeal, the D.C. Circuit concluded that the FCC’s approach could not be sustained in light of the “unchallenged assumption that a call made with a device having the capacity to function as an autodialer can violate the statute even if autodialer features are not used to make the call.”  The court reasoned that if a device’s capacity includes functions that could be added through app downloads and software additions, and if smartphone apps can introduce ATDS functionality into the device, then all smartphones would meet the statutory definition of an autodialer—and therefore, the TCPA’s restrictions on autodialer calls would “assume an eye-popping sweep.”  Accordingly, the court found that the FCC’s interpretation, under which every smartphone could qualify as an autodialer, was unreasonably and impermissibly expansive.

Regarding functionality, the FCC identified a basic function of an ATDS as the ability to “dial numbers without human intervention,” but declined to clarify this point, apparently suggesting that a device might still qualify as an autodialer even if it cannot dial numbers without human intervention.  The FCC further said that another basic function of an ATDS is to dial thousands of numbers in a short period of time, but the ruling provided no additional guidance on whether that is a necessary, sufficient, or relevant condition, leaving affected parties “in a significant fog of uncertainty.”  In addressing these questions, the court found that the FCC’s guidance gave no clear answer and in many ways offered contradictory interpretations. The court seemed particularly concerned with the practical implication that the FCC ruling seemingly imposed liability even if a system was not used to randomly or sequentially generate a call list, as “[a]nytime phone numbers are dialed from a set list, the database of numbers must be called in some order—either in a random or some other sequence.”  The court set aside the FCC’s ruling on what type of functionality a device must employ to qualify as an autodialer, finding that the FCC could not promote competing interpretations in the same order.

  2. Reassigned numbers and consent

If a call is made to a consenting party’s number, but that number has been reassigned to a nonconsenting party, the FCC’s 2015 Order stated that this situation violates the TCPA—except in the instance of a one-call safe harbor, which enables a caller to avoid liability for the first call to a wireless number following reassignment.  The court found that the FCC’s limitation of the safe harbor to only the first call was arbitrary, questioning why a caller’s “reasonable reliance” on the previous subscriber’s consent necessarily stops being reasonable after there has been only one call, as the first call may give the caller no indication of a possible reassignment.  The court set aside the FCC’s treatment of reassigned numbers in its entirety, finding it could not, without consequence, excise the one-call safe harbor, but leave in place the FCC’s interpretation that the “called party” refers to the current subscriber, and not the intended recipient.  This, the court found, would mean a caller is strictly liable for all calls made to the reassigned number, even without knowledge of the reassignment.

  3. Revocation of consent

The FCC, in declining to unilaterally prescribe the exclusive means for consumers to revoke their consent, instead concluded that a called party may revoke consent at any time and through any reasonable means that clearly expresses a desire not to receive further messages.  In upholding the FCC’s approach to revocation, the court found that the FCC’s ruling absolves callers of any responsibility to adopt a system that would entail undue burdens, like training every retail employee on the “finer points of revocation.”  And, under this approach, callers have every incentive to avoid TCPA liability by making available clearly defined and easy-to-use opt-out methods, therefore making a call recipient’s unconventional and idiosyncratic revocation requests unreasonable.  Finally, the court concluded that nothing in the 2015 Order “should be understood to speak to the parties’ ability to agree upon revocation procedures”—thereby leaving open the possibility of contractually specified revocation methods.

  4. Healthcare-related exemption

The final challenge concerned the scope of the FCC’s exemption of certain healthcare-related calls from the TCPA’s prior-consent requirement for calls to wireless numbers.  The exemption is limited to calls that have a healthcare treatment purpose, and excludes calls related to telemarketing, solicitation, or advertising.  The court rejected the argument that any partial exemption of healthcare-related communications is unlawful because HIPAA supersedes any TCPA prohibition, finding that the two statutes provide separate protections and, therefore, there is no obstacle to complying with both.  Moreover, the court found that the FCC did not act arbitrarily in affording a narrower exemption for healthcare-related calls made to wireless numbers, finding that the TCPA itself contemplates that residential and wireless numbers warrant different treatment.  Finally, the court rejected the argument that the FCC erred in failing to recognize that all healthcare-related calls satisfy the TCPA’s “emergency purposes” exception to the consent requirement, reasoning that it is implausible to conclude that calls related to telemarketing, solicitation, or advertising are made for emergency purposes.  Therefore, the court upheld the way in which the FCC narrowly fashioned the exemption for healthcare-related calls.

Without question, the long-awaited ruling will significantly impact TCPA compliance and litigation.  Stay tuned for additional analysis on the impact of the D.C. Circuit’s ruling.

U.S. Companies: Are You Ready for GDPR?

Posted in Data portability, EU Data Protection, Privacy, Regulation

On May 25, 2018, the General Data Protection Regulation (GDPR) goes into effect. Are you ready?

Who’s affected?  

Organizations, anywhere in the world, that process the personal data of European Union (EU) residents should pay attention to GDPR and its territorial scope.

If you collect personal data or behavioral information from someone in the EU (also referred to as a “data subject” in the GDPR), your company will be subject to the requirements of GDPR. The extended scope of GDPR will apply to your company even if:

  1. the processing of personal data takes place outside of the EU;
  2. no financial transaction takes place; or
  3. your company has no physical operations or employees in the EU.

The definition of “personal data” is broader than the definition of “personally identifiable information”, commonly used in U.S. information security and privacy laws.

Why should you care?

Failing to comply with GDPR may result in a maximum fine of €20 million or 4% of annual global turnover, whichever is higher.
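
To illustrate the “whichever is higher” arithmetic, the short Python sketch below uses a purely hypothetical company with annual global turnover of €800 million; the figure is our own assumption for illustration only.

  # Hypothetical figures for illustration only.
  turnover_eur = 800_000_000                              # assumed annual global turnover
  max_fine_eur = max(20_000_000, 0.04 * turnover_eur)     # the higher of the two caps
  print(f"Maximum GDPR fine: EUR {max_fine_eur:,.0f}")    # EUR 32,000,000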

There are questions over how EU regulators will enforce these fines on companies outside of the EU. However, it would be ill-advised to underestimate the EU’s desire to create uniform data privacy laws for its market and the lengths to which regulators may go to accomplish this goal. Extraterritorial enforcement of GDPR, through cooperation with authorities in non-EU countries, is entirely possible.

The potential reputational damage that may result from noncompliance is also something organizations should consider. Non-EU companies, especially those with a strong online presence, should consider whether action is required now to avoid the possibility of unfavorable headlines down the line.

How to mitigate risk?

  1. Conduct a Data Privacy Audit (DPA). A DPA should show you where data is located within your company and map the flows of that data. A DPA should also map your current data processing activities against the rights of data subjects mandated by GDPR, for example the right of data subjects to access their personal data and the right to be forgotten. The UK Information Commissioner’s Office has provided helpful guidance on DPAs, which can be accessed here.
  2. Put in place processes for deleting data. One of the seven principles of GDPR is data minimization. Organizations must not keep data for longer than necessary, and data subjects have the right to request the deletion of the personal data that you hold about them (known as the “right to be forgotten”). If not already in place, you should establish processes for deleting personal data: (i) on request; and (ii) if its retention is no longer necessary.
  3. Re-examine consent mechanisms. Consent of the relevant data subject is the basis upon which many organizations comply with the requirements of existing EU data protection laws relating to the processing and storing of that data subject’s personal data. If this is true of your organization, note that the requirements under GDPR for obtaining consent are more stringent. For example, if you use pre-checked opt-in boxes to gain consent, GDPR clarifies that this is not valid consent. If your current mechanisms for obtaining consent, or the consents you already have, do not meet the standards set by GDPR, you should consider updating those mechanisms and seeking new consents that satisfy GDPR’s requirements. (A minimal sketch illustrating points 2 and 3 appears after this list.)
  4. Appoint a data protection officer (DPO). If your core activities call for either (i) regular and systematic monitoring of data subjects on a large scale, or (ii) large-scale processing of special categories of data, you may be required to appoint a DPO.
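
The sketch below is a minimal, hypothetical Python illustration of points 2 and 3 above: consent is recorded only on an affirmative, user-ticked opt-in (a pre-checked box is rejected), and a deletion request removes both the stored personal data and the associated consent records. The class and function names (SubjectStore, record_consent, erase_subject) are our own, and a production system would also need identity verification, audit logging, and propagation of deletion to processors and backups.

  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  @dataclass
  class ConsentRecord:
      subject_id: str
      purpose: str
      granted_at: datetime

  @dataclass
  class SubjectStore:
      """Toy in-memory store; a real system would persist and audit everything."""
      personal_data: dict = field(default_factory=dict)   # subject_id -> data
      consents: list = field(default_factory=list)

      def record_consent(self, subject_id: str, purpose: str, opt_in_ticked_by_user: bool):
          # GDPR-style consent must be an affirmative act: a pre-checked box
          # (or silence) is not valid consent, so it is never recorded here.
          if not opt_in_ticked_by_user:
              raise ValueError("No valid consent: the opt-in box was not actively ticked")
          self.consents.append(ConsentRecord(subject_id, purpose, datetime.now(timezone.utc)))

      def erase_subject(self, subject_id: str):
          """Honor a deletion ("right to be forgotten") request."""
          self.personal_data.pop(subject_id, None)
          self.consents = [c for c in self.consents if c.subject_id != subject_id]

  store = SubjectStore()
  store.personal_data["subject-1"] = {"email": "jane@example.com"}
  store.record_consent("subject-1", purpose="marketing", opt_in_ticked_by_user=True)
  store.erase_subject("subject-1")        # deletion on request
  assert "subject-1" not in store.personal_data and not store.consents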

If you have any questions or concerns regarding GDPR compliance please email EUDataProtection@mcguirewoods.com.

Mixed Signals on The Future of SEC Cyber Enforcement

Posted in Cybersecurity, Regulation, Securities and Exchange Commission

As previously reported, the U.S. Securities and Exchange Commission (SEC) unanimously voted to approve additional guidance for reporting cybersecurity risks last month. However, it is unclear what, if any, impact the new guidance will have on the rate of SEC enforcement actions in the coming months.

According to a recent study by the NYU Pollack Center for Law & Business and Cornerstone Research, SEC enforcement actions significantly declined last year when compared with 2016. In fiscal year 2016, the SEC brought 92 enforcement actions against public companies and their subsidiaries. In fiscal year 2017, SEC enforcement declined by thirty-three percent, with the SEC filing 62 enforcement actions against public companies and their subsidiaries. Of the 62 enforcement actions, the SEC filed only 17 actions in the second half of fiscal year 2017. This was the largest semiannual decrease since the Securities Enforcement Empirical Database (SEED) began collecting data in 2010. Similarly, total monetary settlements declined from $1 billion over the first half of fiscal year 2017 to $196 million in the second half of the year.

The timing of the decline suggests that the Trump Administration may be reining in regulatory enforcement. However, despite the empirical slowdown, Stephanie Avakian and Steven Peikin, the co-directors of the SEC’s Division of Enforcement, deny that there has been any directive from the Trump Administration to slow the enforcement arm of the SEC. In fact, during the annual American Bar Association’s white collar conference, the co-directors cautioned that more enforcement actions—especially related to cybersecurity—may be on the horizon. Indeed, the SEC’s new cybersecurity guidance, coupled with the creation of the SEC Cyber Unit at the end of fiscal 2017, will give the SEC new tools to combat cyber-related misconduct in 2018.

Spokeo Strikes Down Another Data Privacy Class Action

Posted in Litigation

The Supreme Court’s decision in Spokeo, Inc. v. Robins continues to have an impact on class actions involving data privacy statutes. Most recently, a federal district court dismissed yet another class action involving claims under the Fair and Accurate Credit Transactions Act (FACTA) in Kirchein v. Pet Supermarket, Inc. for lack of subject matter jurisdiction under Spokeo, on the grounds that Kirchein did not establish the injury-in-fact necessary to maintain the case in federal court.

In January 2016, Kirchein filed a putative class action in the U.S. District Court for the Southern District of Florida, alleging violations of FACTA, which prohibits printing more than the last five digits of a credit card number, or the card’s expiration date, on the receipt provided to the customer. FACTA provides a private right of action with statutory damages of up to $1,000 for each willful violation. In August 2016, the court preliminarily approved a $580,000 class action settlement. In October 2017, however, the defendant moved to vacate the preliminary approval order and settlement and reopen the class on the grounds that the class was much larger than the parties anticipated. The Court denied the motion on those grounds, but gave the parties an opportunity to brief the issue of subject matter jurisdiction under Spokeo.
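
For context, the truncation rule at issue can be illustrated with a short, hypothetical Python sketch of how a point-of-sale system might print a compliant receipt line, showing no more than the last five digits of the card number and omitting the expiration date; the function name and output format are our own and are not drawn from the case.

  def receipt_card_line(card_number: str) -> str:
      """Mask all but the last five digits and omit the expiration date,
      consistent with FACTA's receipt truncation rule."""
      digits = "".join(ch for ch in card_number if ch.isdigit())
      return "CARD: " + "*" * (len(digits) - 5) + digits[-5:]

  print(receipt_card_line("4111 1111 1111 1111"))   # CARD: ***********11111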

After considering the parties’ briefing, the Court dismissed the case on February 8, 2018 for lack of subject matter jurisdiction, finding that the mere “disclosure of the first six digits of a credit card account number” did not result in an imminent, real risk of harm under Spokeo. In doing so, the Court relied heavily on its own September 2017 decision in a case alleging similar violations of FACTA. In that case, the Court held that merely printing the digits of the credit card on a receipt was insufficient to establish standing when the plaintiff did not allege that any disclosure of his private information actually occurred. Similarly here, Kirchein failed to allege that anyone besides Kirchein himself actually saw the receipt. To the extent that Kirchein relied on store employees seeing the receipt, the Court was unconvinced, finding that to be the same type of disclosure that happens any time a consumer uses a credit card to pay for a transaction.

The Court also rejected Kirchein’s argument that the settlement was still enforceable, despite any lack of standing resulting from Spokeo. The Court noted that Spokeo was not a change in the law, but merely clarified well-established principles of standing, and emphasized that it must have subject matter jurisdiction at all stages of a case, including to approve a class action settlement agreement under Rule 23.

The decision joins those of the Seventh and Second Circuits, as well as several other district courts, which have dismissed FACTA claims for lack of standing under Spokeo. These cases continue to suggest that purely technical violations of data privacy statutes will not satisfy the injury-in-fact requirement under Article III’s standing analysis after Spokeo. Instead, plaintiffs will need to show that a violation of the statute caused harm, likely through the actual disclosure to a third party.

New York Cybersecurity Regulations: Additional Testing and Reporting Requirements Take Effect

Posted in Cybersecurity, Financial Services Information Management, Information Management, Regulation

The one-year transitional period under the New York Department of Financial Services (NYDFS) Cybersecurity Requirements for Financial Services Companies expired on March 1, 2018. Financial services companies that are regulated by NYDFS now face additional requirements for assessing, monitoring, testing and reporting on the integrity and security of their information systems and the overall effectiveness of their cybersecurity programs.

Overview of New York Cybersecurity Regulations

The NYDFS cybersecurity regulations became effective on March 1, 2017, and the initial 180-day transitional period expired on August 28, 2017. The regulations that took effect last year require all covered entities to implement a cybersecurity program that identifies and protects against cybersecurity risks and adopt comprehensive policies and procedures for the protection of the company’s information systems and nonpublic information. The cybersecurity regulations apply to any organization operating under or required to operate under a NYDFS license, registration, charter, certificate, permit, accreditation or similar authorization under the New York Banking Law, Insurance Law or Financial Services Law. Click here for more information about the requirements of the regulations that took effect last year.

Additional Actions Required to Achieve Compliance

On March 1, 2018, additional requirements under the cybersecurity regulations took effect. Beyond the measures required last year, covered entities that are subject to the cybersecurity regulations must now implement the additional assessment, monitoring, testing and reporting measures described above.

Recap of the 2018 FTC Privacy Con

Posted in Consumer Privacy/FTC

On February 28, 2018, the Federal Trade Commission (FTC) hosted its third Privacy Con conference in Washington, D.C., an event that highlights and facilitates discussion of the latest research and trends related to consumer privacy and data security. The FTC welcomes privacy and data security researchers to inform it of their latest findings, and encourages the dialogue between researchers and policymakers to continue well after the conference. The 2018 conference was well attended by professionals in the data privacy field, who shared the results of their studies and research in data privacy.

The Acting Chairman of the FTC, Maureen K. Ohlhausen, delivered the opening remarks at Privacy Con. Chairman Ohlhausen stated that the FTC has been and will continue to be active in the data privacy field and will continue to bring important cases. She emphasized that this year the FTC will focus on an “economic approach” to data privacy. Chairman Ohlhausen explained this approach does not necessarily require crunching numbers, but rather, will involve applying tools of economic analysis to assess the amount of resources that should be devoted to certain matters. Chairman Ohlhausen said that the FTC will try to better understand the types of injuries consumers suffer from a data breach and devote attention to data privacy cases that cause greater injuries, some of which may be personal and not economic.

Following Chairman Ohlhausen’s opening remarks, professors with technical backgrounds provided in-depth analysis of data privacy concerns pertaining to, among other things, email tracking, browser extensions, smart devices, web session recordings, social media advertising, interactive use, smart toys, and crowdsourcing. In short, the key takeaways from these studies are: (1) companies need greater transparency regarding voluntary and involuntary leaks of personal information to third parties, so that consumers can take greater measures to safeguard their personally identifiable information (PII); and (2) companies must balance the need to inform consumers about PII leaks against consumers’ desire not to be inundated with too many requests for permission before PII is disclosed.

With respect to the first point, the panelists identified different circumstances in which a consumer’s PII is shared with third parties, often without the consumer even being aware of it. For example, most consumers are not aware of how intrusive web browser extensions can be, that web sessions on certain sites are recorded and sold to third parties, or that children’s smart toys may be recording conversations and posting them on social media. The panelists emphasized that it is critical for companies to tell consumers when their PII is shared with the public or with third parties through these mechanisms so that consumers can make informed decisions about how to safeguard their privacy.

For the second point, the panelists described the studies they conducted regarding consumers’ privacy expectations, designed to determine the circumstances in which consumers want to give express permission before PII is disclosed and those in which consumers are comfortable providing implicit consent through predictive behavior and usage. The panelists found that if the information serves a beneficial purpose (such as safety) or is obtained in a public setting, consumers are comfortable disclosing their PII without providing express consent. However, if the information is obtained in a private setting or does not serve a beneficial purpose, consumers said that they did not want their PII disclosed unless they gave express consent. In short, the results of these studies indicate that consumers’ privacy expectations are content- and context-dependent.

In sum, the 2018 Privacy Con opened a productive dialogue about consumer expectations for data privacy and about the FTC’s focus this year on studying the types of injuries consumers can suffer from a data privacy breach.
