The 2018 Regular Session of the Virginia General Assembly recently concluded after considering approximately 3700 bills and resolutions during the 60-day session. Several privacy-related bills were on the legislative agenda, but few were enacted into law.

Tax Return Data

As highlighted in January, the General Assembly this year continued its efforts to address the growing problem of criminals filing fraudulent tax returns using stolen identities of unsuspecting taxpayers. Last year, Virginia adopted legislation that requires employers and payroll service providers to provide breach notification to the Attorney General of Virginia when those entities experience unauthorized access to or acquisition of unredacted and unencrypted data containing a taxpayer’s identification number and certain payroll information. Va. Code Ann. § 18.2-186.6(M).

This year, Virginia enacted legislation imposing certain obligations on state tax return preparers. Although tax return preparers are not required to comply with Virginia’s data breach notification statute, effective July 1, 2018, Virginia tax return preparers must notify the Virginia Department of Taxation:

“without unreasonable delay after the discovery or notification of unauthorized access and acquisition of unencrypted and unredacted return information that compromises the confidentiality of such information maintained by such signing income tax return preparer and that creates a reasonable belief that an [unprotected] version of such information was accessed and acquired by an unauthorized person and that causes, or such preparer reasonably believes has caused or will cause, identity theft or other fraud.” Acts of Assembly, Chapter 283

Additionally, if a breach occurs, the state tax return preparer is required to provide the Department information concerning the taxpayers whose information was accessed or obtained by unauthorized persons and certain information about the preparer.  It is estimated that the enactment of this legislation will save Virginia approximately $300,000 by avoiding the issuance of unrecoverable fraudulent refunds.

Other Privacy-Related Legislation

Additional bills related to privacy include (partial listing):

  • PASSED: Clarifying that certain student directory information held by institutions of higher education may only be released in limited circumstances in response to Freedom of Information Act requests. HB1
  • PASSED: Reduction in the amount a credit reporting agency may charge a consumer to place a security freeze on his credit report from $10 to $5. HB1027; SB16
  • DEFEATED: Eliminating the ability of a credit reporting agency to charge a consumer a fee to place a security freeze on the consumer’s credit report. HB6; HB86; HB1232; SB18; SB22 (partial listing)
  • DEFEATED: Prohibiting companies providing broadband internet access services in the Commonwealth from blocking, throttling, engaging in paid prioritization, and interfering with or unreasonably disadvantaging a user’s ability to access broadband internet access service. The bill also would have limited a broadband service provider’s disclosure of personally identifiable information about consumers to circumstances involving certain court orders, subpoenas or authorized law-enforcement activities. SB948
  • DEFEATED: Limiting state contracts for internet access services to only those service providers that agree to protect certain personally identifiable information and adhere to certain internet neutrality provisions. The bill proposed to prohibit internet access service providers that provide such service to a public body from blocking, throttling or providing preference to entities that pay for the optimization of data transfer rates. Additionally, the bill proposed to prohibit such service providers from knowingly disclosing personally identifiable information about users unless the disclosure is pursuant to certain court orders, subpoenas or authorized law-enforcement activities. SB949
  • DEFEATED: Requiring consumer reporting agencies to disclose within 15 days a breach of the security of a computerized data system, when such disclosure is required by Virginia’s data breach notification statute, § 18.2-186.6. The bill also provided that failure to report would be a violation of the Virginia Consumer Protection Act. HB1588
  • DEFEATED: Prohibiting state agency employment applications, under certain circumstances, from inquiring whether a prospective employee has been arrested for, charged with, or convicted of any crime (a.k.a. “ban-the-box”). SB252; HB1357
  • DEFEATED: Prohibiting a prospective employer from (i) requiring a prospective employee to disclose his wage or salary history or (ii) attempting to obtain such information from the person’s current or previous employers. HB240
  • DEFEATED: Allowing the use of drones by law-enforcement without obtaining a warrant under certain circumstances. HB1290
  • DEFEATED: Prohibiting a provider of electronic communication or remote computing service from disclosing location data to an investigative or law-enforcement officer except pursuant to a search warrant. HB604
  • DEFEATED: Directing a legislative commission to study how local governments report data breaches, identify ways to promote efficient and timely reporting of such breaches by local governments, and develop best practices to assist localities with cybersecurity. HJ39

Virginia’s approach to privacy issues this past session reflects its approach to most issues – a measured response to actual problems. This contrasts with states that enact policies in anticipation of future issues or without a solid indication of potential harm to consumers. In the case of the security freeze legislation, the enacted bill responded to a significant data breach last year involving one of the big three credit reporting agencies. With regard to protecting certain student directory information, the General Assembly acted in response to the perceived misuse of such information by political campaigns. Finally, the legislature continued its efforts to address the continuing problem of tax fraud by attempting to cut off avenues for would-be identity thieves to file false state income tax returns.

U.S. Senate leaders may be close to reaching an agreement on a legislative proposal that would establish a national data breach notification and security standard (the Data Acquisition and Technology Accountability and Security Act) and streamline nationwide reporting requirements for businesses.  However, there are many reasons it may not make much progress through Congress this year.  The current 49-state, soon to be 50-state, patchwork of breach notification laws, each differing in meaningful ways, makes compliance following a nationwide breach (which is what companies typically experience) quite tedious.  The proposed federal legislation would set a national standard for securing customer data and reporting data breaches.

Similar legislation has stalled in Congress for nearly a decade, but recent events have catalyzed Congress once more, including numerous high-profile data breaches and other incidents of data misuse, the EU Parliament’s approval of the General Data Protection Regulation (GDPR) with an enforcement date of May 25, 2018, and California’s proposed ballot initiative on privacy (improving consumers’ rights regarding the collection and use of their data).  Last week, senators introduced the Customer Online Notification for Stopping Edge-provider Network Transgressions (CONSENT) Act.  The bill would require explicit opt-in consent from users before any personal information is shared, used, or sold; notification any time data is collected, shared, or used; and new security and breach reporting requirements.  The CONSENT Act relies on the Federal Trade Commission to enforce the new rules.

There are many obstacles to enacting federal data privacy and security legislation, including disputes over preemption of state law, reasonable security standards, penalties, and exemptions.  After Republicans took control of the White House and both chambers of Congress last year, federal regulatory activity diminished, and cities and states have stepped in to fill the void.  The attorneys general of 31 states are pressing lawmakers to scrap the Data Acquisition and Technology Accountability and Security Act, arguing that it waters down more stringent state laws requiring prompt notification of breaches to consumers.  Since South Dakota passed a new law in March, every state but Alabama has a data breach law in effect requiring companies to notify consumers when their personal information is hacked.  And last week Alabama’s governor signed the final state data breach law, which goes into effect on May 1, 2018.  The attorneys general argue that these state laws have catalyzed greater transparency about data breaches and improved the steps companies take to prevent breaches from recurring.

In addition to state laws, some cities have taken affirmative steps regarding data security.  NYC Mayor de Blasio announced the launch of a cybersecurity initiative, NYC Secure, designed to defend New Yorkers from malicious cyber activity on mobile devices, public Wi-Fi networks, and beyond.  The first program is a smartphone protection app that issues warnings to users when suspicious activity is detected on their mobile devices.

Stay tuned to see who wins the state versus federal power struggle over data privacy and security—exciting times are ahead!

Nearly two and a half years following the appeal of the Federal Communications Commission’s (FCC) July 2015 Order, the U.S. Court of Appeals for the District of Columbia Circuit issued a ruling on March 16, 2018.  On appeal, over a dozen entities sought review of the 2015 Order, in which the FCC interpreted various aspects of the Telephone Consumer Protection Act (TCPA).  The appeal addressed four issues: (1) which devices constitute an automatic telephone dialing system (ATDS or “autodialer”); (2) whether a call to a reassigned phone number violates the TCPA; (3) whether the FCC’s approach to revocation of consent was too broad; and (4) whether the FCC’s exemption for certain healthcare-related calls was proper.

In short, the court set aside the FCC’s definition of an ATDS and vacated the FCC’s approach to calls placed to reassigned numbers.  The court upheld, however, the FCC’s broad approach to a party’s revocation of consent and sustained the scope of the FCC’s exemption for time-sensitive healthcare calls.

  1. ATDS

The FCC’s 2015 Order held that the analysis of whether equipment constitutes an ATDS is not limited to its present capacities, but also includes its “potential functionalities”—therefore having the apparent effect of encompassing ordinary smartphones. On appeal, the D.C. Circuit concluded that the FCC’s approach could not be sustained in light of the “unchallenged assumption that a call made with a device having the capacity to function as an autodialer can violate the statute even if autodialer features are not used to make the call.”  The court reasoned that if a device’s capacity includes functions that could be added through app downloads and software additions, and if smartphone apps can introduce ATDS functionality into the device, then all smartphones would meet the statutory definition of an autodialer—and therefore, the TCPA’s restrictions on autodialer calls “assume an eye popping sweep.”  Accordingly, the court found the FCC’s interpretation that all smartphones qualify as autodialers is unreasonably and impermissibly expansive.

Regarding functionality, the FCC identified a basic function of an ATDS as the ability to “dial numbers without human intervention,” but declined to clarify this point, apparently suggesting that a device might still qualify as an autodialer even if it cannot dial numbers without human intervention.  The FCC further said that another basic function of an ATDS is to dial thousands of numbers in a short period of time, but the ruling provides no additional guidance on whether that is a necessary, sufficient, or relevant condition, leaving affected parties “in a significant fog of uncertainty.”  In addressing these questions, the court found the FCC’s guidance gave no clear answer and in many ways provided contradictory interpretations. The court seemed particularly concerned with the practical implications that the FCC ruling seemingly imposed liability even if a system was not used to randomly or sequentially generate a call list, as “[a]nytime phone numbers are dialed from a set list, the database of numbers must be called in some order—either in a random or some other sequence.”  The court set aside the FCC’s ruling on what type of functionality a device must employ to qualify as an autodialer, finding that the FCC could not promote competing interpretations in the same order.

  2. Reassigned numbers and consent

If a call is made to a consenting party’s number, but that number has been reassigned to a nonconsenting party, the FCC’s 2015 Order stated that the call violates the TCPA—except for a one-call safe harbor, which enables a caller to avoid liability for the first call to a wireless number following reassignment.  The court found that the FCC’s limitation of the safe harbor to only the first call was arbitrary, questioning why a caller’s “reasonable reliance” on the previous subscriber’s consent necessarily stops being reasonable after a single call, as the first call may give the caller no indication of a possible reassignment.  The court set aside the FCC’s treatment of reassigned numbers in its entirety, finding it could not excise the one-call safe harbor while leaving in place the FCC’s interpretation that the “called party” is the current subscriber rather than the intended recipient.  That interpretation standing alone, the court found, would make a caller strictly liable for all calls made to the reassigned number, even without knowledge of the reassignment.

  3. Revocation of consent

The FCC, in declining to unilaterally prescribe the exclusive means for consumers to revoke their consent, instead concluded that a called party may revoke consent at any time and through any reasonable means that clearly expresses a desire not to receive further messages.  In upholding the FCC’s approach to revocation, the court found that the FCC’s ruling absolves callers of any responsibility to adopt systems that would entail undue burdens, like training every retail employee on the “finer points of revocation.”  And, under this approach, callers have every incentive to avoid TCPA liability by making clearly defined and easy-to-use opt-out methods available, which in turn makes a call recipient’s unconventional and idiosyncratic revocation requests more likely to be deemed unreasonable.  Finally, the court concluded that nothing in the 2015 Order “should be understood to speak to the parties’ ability to agree upon revocation procedures”—thereby leaving open the possibility of contractually specified revocation methods.

  4. Healthcare-related exemption

The final challenge concerns the scope of the FCC’s exemption of certain healthcare-related calls from the TCPA’s prior-consent requirement for calls to wireless numbers.  The exemption is limited to calls that have a healthcare treatment purpose and excludes calls related to telemarketing, solicitation, or advertising.  The court rejected the argument that any partial exemption of healthcare-related communications is unlawful because HIPAA supersedes any TCPA prohibition, finding that the two statutes provide separate protections and, therefore, there is no obstacle to complying with both.  Moreover, the court found that the FCC did not act arbitrarily in affording a narrower exemption for healthcare-related calls made to wireless numbers, finding that the TCPA itself contemplates that residential and wireless numbers warrant different treatment.  Finally, the court rejected the argument that the FCC erred in failing to recognize that all healthcare-related calls satisfy the TCPA’s “emergency purposes” exception to the consent requirement, reasoning that it is implausible to conclude that calls related to telemarketing, solicitation, or advertising are made for emergency purposes.  Therefore, the court upheld the narrowly fashioned exemption for healthcare-related calls.

Without question, the long-awaited ruling will significantly impact TCPA compliance and litigation.  Stay tuned for additional analysis on the impact of the D.C. Circuit’s ruling.

On May 25, 2018, the General Data Protection Regulation (GDPR) goes into effect. Are you ready?

Who’s affected?  

Organizations, anywhere in the world, that process the personal data of European Union (EU) residents should pay attention to GDPR and its territorial scope.

If you collect personal data or behavioral information from someone in the EU (also referred to as a “data subject” in the GDPR), your company will be subject to the requirements of GDPR. The extended scope of GDPR will apply to your company even if:

  1. the processing of personal data takes place outside of the EU;
  2. no financial transaction takes place; or
  3. your company has no physical operations or employees in the EU.

The definition of “personal data” is broader than the definition of “personally identifiable information” commonly used in U.S. information security and privacy laws.

Why should you care?

Failing to comply with GDPR may result in a maximum fine of €20 million or 4% of annual global turnover, whichever is higher.
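
In practical terms, the cap is simply the greater of the two figures. As a minimal illustration of that arithmetic (using hypothetical turnover numbers, not any particular company’s):

    def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
        """Upper-tier GDPR fine cap: the greater of EUR 20 million or 4% of turnover."""
        return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

    # Hypothetical examples:
    print(max_gdpr_fine(100_000_000))    # 20,000,000.0 -- the flat EUR 20M cap applies
    print(max_gdpr_fine(2_000_000_000))  # 80,000,000.0 -- 4% of turnover exceeds EUR 20M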

There are questions over how EU regulators will enforce these fines on companies outside of the EU. However, it would be ill-advised to underestimate the EU’s desire to create uniform data privacy laws for its market and the lengths to which regulators may go to accomplish this goal. Extraterritorial enforcement of GDPR through cooperation with authorities in non-EU countries is very possible.

The potential reputational damage that may result from noncompliance is also something organizations should consider. Non-EU companies, especially those with a strong online presence, should consider whether action is required now to avoid the possibility of unfavorable headlines down the line.

How to mitigate risk?

  1. Conduct a Data Privacy Audit (DPA). A DPA should show you where data is located in your company and map the flows of that data. A DPA should also map your current data processing activities against the rights of data subjects mandated by GDPR, such as the right of data subjects to access their personal data and the right to be forgotten. The UK Information Commissioner’s Office has provided helpful guidance on DPAs, which can be accessed here. A minimal sketch of a data-mapping record appears after this list.
  2. Put in place processes for deleting data. The GDPR’s principles include data minimization and storage limitation: organizations must not keep data for longer than necessary, and data subjects have the right to request the deletion of the personal data that you hold about them (known as the “right to be forgotten”). If not already in place, you should establish processes for deleting personal data: (i) on request; and (ii) when its retention is no longer necessary.
  3. Re-examine consent mechanisms. Consent of the relevant data subject is the basis upon which many organizations comply with the requirements of existing EU data protection laws relating to the processing and storage of personal data. If this is true of your organization, note that the requirements under GDPR for obtaining consent are more stringent. For example, GDPR clarifies that pre-checked opt-in boxes are not a valid indication of consent. If your current mechanisms for obtaining consent, or the consents you already hold, do not meet the standards set by GDPR, you should consider updating those mechanisms and seeking new consents that satisfy GDPR’s requirements.
  4. Appoint a data protection officer (DPO). If your core activities call for either (i) regular and systematic monitoring of data subjects on a large scale, or (ii) processing of certain categories of data on a large scale, you may be required to appoint a DPO.
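
As referenced in step 1 above, a data-mapping exercise ultimately produces an inventory of where personal data lives, why it is processed, and how long it is kept. The following is a minimal, illustrative sketch of one way such an inventory record might be structured and checked against a retention period; the field names, example values, and retention logic are hypothetical, not GDPR-mandated requirements.

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import List

    @dataclass
    class ProcessingRecord:
        """One entry in a simple personal-data inventory (illustrative only)."""
        system: str                  # where the data lives
        data_categories: List[str]   # what personal data is held
        purpose: str                 # why it is processed
        legal_basis: str             # e.g., consent, contract, legitimate interest
        recipients: List[str]        # who the data flows to
        retention_days: int          # how long it should be kept
        last_collected: date         # most recent collection date

        def past_retention(self, today: date) -> bool:
            """Flag records held longer than their stated retention period."""
            return today > self.last_collected + timedelta(days=self.retention_days)

    # Hypothetical inventory entry and a trivial retention check
    inventory = [
        ProcessingRecord(
            system="CRM (EU customer database)",
            data_categories=["name", "email", "purchase history"],
            purpose="order fulfilment and support",
            legal_basis="contract",
            recipients=["payment processor", "shipping vendor"],
            retention_days=730,
            last_collected=date(2016, 1, 15),
        ),
    ]

    for record in inventory:
        if record.past_retention(date(2018, 5, 25)):
            print(f"Review for deletion: {record.system}")

In practice, the same inventory can also track consent status and data subject requests, supporting the deletion and consent processes described in steps 2 and 3.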

If you have any questions or concerns regarding GDPR compliance please email EUDataProtection@mcguirewoods.com.

The increasingly popular use of biometric authentication technology by employers as a means of tracking employee data, including for timekeeping purposes, can create liability.  Biometric data generally consists of measurements of an individual’s physical characteristics, collected and aggregated through associated technology, and can include fingerprints, DNA, voiceprints or facial-recognition data. This futuristic means of tracking individuals has its benefits for employee time management (e.g., in lieu of traditional punch cards), for providing access to a secure facility, or for other authentication purposes. But it also has its pitfalls.

Several states have proposed or enacted legislation protecting individuals’ privacy rights in the collection of their biometric data.  Illinois led the pack by enacting the Illinois Biometric Information Privacy Act (“BIPA”) in 2008, which requires businesses that collect biometric data to: (1) provide written notice to the individual of the collection; (2) inform the individual of the length of time for which the biometric identifiers are being collected, stored, and used; and (3) obtain express, written consent from the individual prior to collection.  Employers and other private entities must also exercise a reasonable standard of care in handling biometric data.

BIPA creates a private right of action for individuals aggrieved by a statutory violation, and violations can create substantial exposure to an employer, including liquidated damages, attorneys’ fees, costs, and/or injunctive relief.  Since enactment of BIPA, similar legislation has been either enacted or proposed in other states including Texas, Alaska, Connecticut, Montana, New Hampshire, and Washington.

The privacy litigation landscape – particularly in the employment context – has already seen an evolution as a result of these laws designed to protect biometric information, with an uptick in litigation between 2015 and 2017.  In one example, in October 2017, a rehabilitation center in Illinois called Paramount of Oak Park Rehabilitation & Nursing Center LLC was slapped with a BIPA-violation lawsuit for requiring employees to scan fingerprints twice daily as a means of clocking in and clocking out.  The complaint, filed in Cook County, Illinois, calls this practice “invasive” and states:

Unlike a Social Security number, which can be changed, no amount of time or money can compensate [workers] if their fingerprints are compromised by the lax procedures through which defendants capture, collect, store and use their workers’ biometrics.

Notably, this and other lawsuits addressing this issue do not necessarily arrive at the point of challenging use of the data.  Instead, employers are facing liability at the outset for the mere collection of this data when not in compliance with statutory requirements.  Over 30 similar class action lawsuits have been filed in federal and state jurisdictions.

Case law under BIPA and other similar statutes is still developing, and employers should keep a watchful eye on trends in courts’ treatment of biometric data protections, restrictions, and requirements in order to ensure compliance. In the interim, and because the cost of noncompliance is substantial, employers should be cautious in their approach to collecting, using and storing their employees’ biometric data.  Specifically, employers should:

  • Draft a written policy regarding collection and use of biometric data, including the company’s process for safeguarding the information and destroying the data, consistent with state law. Employers should consider including a discrimination disclaimer in the policy, disseminate the policy widely, and make review of it an onboarding and training requirement.
  • Obtain express written consent and a release from each employee before collecting or using their biometric data.
  • Implement a data breach response protocol that includes biometric data and provide notice to employees that a protocol exists.

The Financial Industry Regulatory Authority (FINRA) is ramping up its commitment to assist the industry in its cybersecurity compliance efforts. Recent guidance to the industry from FINRA includes:

  1. an Examination Findings Report, detailing observations from recent broker-dealer examinations with the goal of assisting broker-dealers in enhancing their compliance programs and better anticipating potential areas of concern (FINRA included compliance areas to highlight based on the frequency of deficiencies and the potential impact on investors and markets); and
  2. the 2018 Regulatory and Examination Priorities, in which, notably, FINRA instructed firms to review the priorities in conjunction with the Examination Findings Report.

FINRA called out cybersecurity, in its Examination Findings Report, as one of the “principal operational risks facing broker-dealers.” While acknowledging the increased threats today, FINRA noted that firms have generally increased their focus on cybersecurity issues and some firms examined are at the forefront of developing “cutting-edge cybersecurity programs.”

FINRA detailed the areas in which, during the examinations, it observed firms’ cybersecurity programs to be either effective or deficient. Reviewing the positives and negatives provides valuable information for firms looking to shore up their cybersecurity programs.

Examples of Effective Practices Include:

  • Escalation Protocols: Have an escalation process that ensures the appropriate level at the firm is apprised of issues to ensure attention and resolution.
  • Plans to Resolve Issues: Implement detailed resolution steps and time frames for completion.
  • Routine Risk Assessments: Conduct regular risk assessments, including vulnerability and penetration tests.
  • Routine Training: Conduct training for firm employees, including training tailored to different functions, in addition to generic cross-firm training.
  • Branch Office Reviews: Include cybersecurity focused branch exams to assess risks and identify issues.
  • Additional Practices: Implement security information and event management practices, use system usage analytics, and adopt data loss prevention tools.

Examples of Deficient Practices Include: 

  • Failure to Follow Access Management Steps:
    • Not immediately terminating access of departing employees.
    • Failing to have processes to monitor or supervise “privileged users” to identify unusual activity (e.g., assigning extra access rights, unauthorized work outside business hours, or logging in from different geographical locations at or about the same time; see the sketch following this list).
  • Infrequent or No Risk Assessments:
    • No formal risk assessment practices.
    • Unable to identify critical assets or potential risks.
  • Informal Processes for or Lack of Vendor Management:
    • Failed to have formal processes to assess vendors’ cybersecurity preparedness.
    • Failed to include required notification of breaches involving customer information in vendor contracts.
  • Noncompliant Branch Offices:
    • Failed to manage passwords.
    • Failed to implement security patches and software updates.
    • Failed to update anti-virus software.
    • Lacked control of employee use of removable storage devices.
    • Used unencrypted data and devices.
    • Failed to report incidents.
  • Segregation of Duties:
    • Failed to segregate duties for requesting, implementing, and approving cyber-security rules and systems changes.
  • Data Loss Prevention:
    • Lack of rules to ensure all sensitive customer information is covered.
    • Permitted or failed to block large file transfers to outside or untrusted recipients.
    • Failed to implement formal change-management processes for data loss prevention systems changes.
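
By way of illustration only, the privileged-user monitoring gap noted above (flagging logins from different geographical locations at or about the same time) can be approximated with even a very simple detection rule. The sketch below uses hypothetical login records and an assumed one-hour window; production surveillance tools are, of course, far more sophisticated.

    from datetime import datetime, timedelta
    from collections import defaultdict

    # Hypothetical login records: (user, timestamp, coarse location)
    logins = [
        ("priv_admin", datetime(2018, 3, 1, 9, 5), "New York"),
        ("priv_admin", datetime(2018, 3, 1, 9, 40), "Hong Kong"),
        ("analyst_1", datetime(2018, 3, 1, 10, 0), "Chicago"),
    ]

    WINDOW = timedelta(hours=1)  # two locations within this window is treated as "unusual"

    def flag_concurrent_locations(events):
        """Return users who logged in from different locations at or about the same time."""
        flagged = set()
        by_user = defaultdict(list)
        for user, ts, loc in events:
            by_user[user].append((ts, loc))
        for user, recs in by_user.items():
            recs.sort()
            for (t1, loc1), (t2, loc2) in zip(recs, recs[1:]):
                if loc1 != loc2 and (t2 - t1) <= WINDOW:
                    flagged.add(user)
        return flagged

    print(flag_concurrent_locations(logins))  # {'priv_admin'}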

FINRA’s 2018 Regulatory and Examination Priorities also include cybersecurity as a priority area. In addition to the areas noted above, which FINRA also calls out in the Priorities Letter, FINRA noted two additional themes.  First, it will evaluate the effectiveness of firms’ cybersecurity programs in protecting sensitive information.  Second, FINRA reminds firms that they need policies and procedures to determine when a Suspicious Activity Report should be filed regarding a cybersecurity event. (See FinCEN’s Advisory to Financial Institutions on Cyber-Events and Cyber-Enabled Crime, Oct. 25, 2016.)

Conclusion

FINRA reminds firms that, while exam deficiencies must be addressed, firms often benefit from “proactively” remediating issues before the exam is completed. Acting proactively strengthens firms’ programs and enhances regulatory protections. Our observation, as outside counsel, is that when firms take proactive steps to get ahead of issues, it demonstrates to the regulators that the firm has a commitment to a strong compliance program and, in the right circumstances, may have a material impact on how FINRA decides to resolve an issue.

The information FINRA provides in the Examination Findings Report and Priorities Letter offers roadmaps for enhancing overall compliance, supervisory, and risk management programs. With regard to the focus on cybersecurity, firms can use these resources to prepare effectively for examinations, close program gaps, and potentially avoid cybersecurity incidents.

The GDPR (General Data Protection Regulation) will be applicable as of May 25, 2018. The (high) level of penalties under the GDPR will become one of the core issues for companies. Indeed the GDPR is based on the European fundamental rights to privacy and data protection and could potentially apply outside the European Union.

In order to reassure companies, and as a first step, the French Data Protection Authority (DPA), the CNIL, announced that the application of the GDPR in France will be flexible. This declaration was made on its website on Monday, February 19, 2018.  The CNIL also assured companies that it will provide assistance in the first months after the GDPR enters into application. To that end, the CNIL will publish an accompanying information guide (co-edited with the French public investment bank) to help companies.

Finally, the CNIL assured companies that it will not automatically sanction every company that does not comply with the GDPR. Its approach will be pragmatic, distinguishing between the fundamental principles that already exist under current law and the new requirements that call for adjustments within companies.

The existing principles for which there will be no flexibility or tolerance include, for example, the obligation to process data in a lawful, fair and transparent manner, the obligation to collect data for an explicit and legitimate purpose, the principles of accuracy and data retention, and the principle of ensuring appropriate security when processing data. For these principles, the CNIL will audit companies and will apply the GDPR’s sanctions as of May 25, 2018. The CNIL announced strong verification of company compliance with these principles.

However, concerning new principles, such as the right to data portability, the requirement to appoint a Data Protection Officer (DPO) and the requirement to maintain a record of processing activities, the goal of the first verifications will be to assist companies in understanding and implementing these new principles. The French DPA’s intention is not to impose sanctions immediately for each infringement. Indeed, if a company acts in good faith and cooperates with the CNIL, these verifications will not lead to sanction proceedings.

At this time, this tolerance concerns only the year 2018.

The CNIL emphasized that the GDPR will eliminate the duty to notify processing activities to the national DPA. These notifications will be replaced by the record of processing activities and, where the processing is likely to result in a high risk, by the Data Protection Impact Assessment (DPIA).

As a first step, there will likewise be a period of tolerance for carrying out DPIAs on existing processing operations. This tolerance will be time-limited: because the GDPR requires risks to be reassessed dynamically, these DPIAs should be carried out within a reasonable time, which the CNIL has set at three years.

A few days before this statement, the French National Assembly adopted the draft law on personal data protection, intended to apply as of May 25, 2018.

Tax season is here, which means tax fraud season is here, too.  This year, the Internal Revenue Service (IRS) is warning tax practitioners about a new phishing scam targeted at them and reminding all employers about fraudsters’ continued use of a scam to collect Form W-2 from entire companies.

Cybercriminals have traditionally targeted taxpayers, in an attempt to obtain their personal information, through phone or email scams.  Perhaps due to advances made in educating the public about identity theft, cybercriminals are now shifting tactics and targeting tax professionals to obtain the same sensitive, personal information.

Here is how the scam targeting tax preparers works:  Fraudsters send introductory emails to tax professionals posing as potential clients to gain access to the professionals’ computer systems and collect the personal information of clients.  Some emails reported to the IRS include:

  • “Happy new year to you and yours. I want you to help us file our tax returns this year as our previous CPA passed away in October.  How much will this cost us?  Hope to hear from you soon.”
  • “A friend of mine introduced you to me regarding the job you did for him on his 2017 tax. I tried to reach you by phone earlier today but it was not connecting, attached is my information needed for my tax to be filed.  If you need more details please feel free to contact me.”

The email may contain a phishing URL or an attachment containing a phishing URL claiming the individual’s tax data is enclosed.  Once the recipient clicks the link, malware is secretly downloaded that allows the cybercriminal to track keystrokes or gain remote access to the recipient’s computer and steal personal information.  That information can then be used to file fraudulent tax returns or sold on the Dark Web.

In a twist, a few cases have seen fraudulent returns deposited in taxpayers’ real bank accounts.  Then, a person posing as a debt collection agency official contacts the taxpayer, says a refund has been deposited in error, and asks the taxpayer to forward the funds to the caller.

One scam that is not new, but about which IRS officials are again warning employers, is a phishing scam targeting payroll or human resources departments in an attempt to obtain employees’ Forms W-2.  This scam first appeared in 2016, and the IRS does not expect it to slow down in 2018, calling it “one of the most dangerous phishing emails in the tax community.”

As we reported last year, here is how the Form W-2 scam works:  Cybercriminals pose as an executive in a company in an email to payroll or human resources and request copies of Forms W-2 for all employees.  Fraudsters have even used an executive’s signature block in the email to increase legitimacy.

The initial email to the employee may be a simple “Hi, are you working today?” before the fraudster requests employee information.  Emails typically include language such as:

  • “Kindly send me the individual 2017 W-2 (PDF) and earnings summary of all W-2 of our company staff for a quick review.”
  • “I want you to send me the list of W-2 copy of employee’s wage and tax statement for 2017. I need them in PDF file type, you can send it as an attachment.  Kindly prepare the lists and email them to me asap.”

During the last two filing seasons, cybercriminals have targeted all types of employers, including large and small businesses, public schools and universities, hospitals, tribal governments, and charities, meaning that all employers should take steps to educate their employees and safeguard employees’ personal information.  Employers may also want to consider limiting the number of employees who handle Form W-2 requests and requiring additional verification procedures before emailing forms.

Regardless of the phishing method, the IRS has recommended a number of basic steps all employers should take—whether it be a small tax preparer or a large business:

  • Educate all employees about phishing emails and train them to not click on pop-ups or suspicious links.
  • Use strong, unique passwords.
  • Never take an email from a familiar source at face value.
  • Consider verbal confirmation by phone with the sender of an email before sending further information or accessing links or attachments.
  • Notify the IRS of all suspicious tax-related phishing emails (phishing@irs.gov for all phishing emails, and dataloss@irs.gov for Form W-2 scam emails).

Additional federal resources:

“Don’t Take the Bait” Security Awareness Campaign

Report Phishing and Online Scams

Tax Scams and Consumer Alerts

On January 8, 2018, the FTC announced that VTech, maker of electronic toys for children, agreed to settle charges that it violated the law by collecting personal information without parental consent.

When Congress enacted the Children’s Online Privacy Protection Act (COPPA) in 1998, it directed the FTC to create a rule implementing the goal of protecting the privacy and safety of children.  The rule applies to online services directed to children under 13, prohibiting covered entities from collecting personal information from children without properly disclosing to parents how the information will be used and obtaining verifiable parental consent.  A privacy policy must be clearly linked on the platform.  The information that covered entities do collect must also remain secured and protected.

In the complaint made public along with the settlement, the FTC alleged that VTech violated COPPA by collecting personal information on children without parental consent through the Kid Connect and other applications sold with its internet-connected toys, since there wasn’t a mechanism in place to verify that the parent registering for a Kid Connect account was actually a parent. The FTC also alleged that VTech failed to provide direct notice of its information collection practices to parents and failed to take reasonable steps to protect the information it had collected, which included full names, email addresses, mailing addresses, usernames, and passwords.  Finally, the FTC alleged that VTech violated the FTC Act by falsely stating that personal information submitted by users would be encrypted when in fact none of the information, except for photo and audio files, was encrypted.  In November 2015, VTech learned through a journalist that hackers had accessed its computer network and stolen personal information about parents and children. Decryption keys for the photo and audio files were included in the hacked database.

Hong Kong-based company VTech Electronics Limited and its US subsidiary agreed to pay $650,000 to resolve the charges brought by the FTC.  This settlement marks the FTC’s first privacy case involving internet-connected toys.

Since its passage, COPPA has been actively enforced by the FTC, with recent settlements including a mobile advertiser tracking children’s locations and app developers that allowed third-party advertisers to collect children’s information.

Corporations’ investigations generally deserve (1) privilege protection only if the corporations are primarily motivated by their need for legal advice; and (2) work product protection only if they are motivated by anticipated litigation, and the company would not have created the investigation-related documents in the same form but for that anticipated litigation.

In In re Premera Blue Cross Customer Data Security Breach Litigation, Case No. 3:15-md-2633-SI, 2017 U.S. Dist. LEXIS 178762 (D. Or. Oct. 27, 2017), Premera claimed privilege and work product protection for its data breach investigation.  The court rejected both claims.  Among many other things, the court assessed Premera’s work product claim for documents created by its consultant Mandiant.  Premera had hired Mandiant to review its claims data management system in October 2014.  On January 29, 2015, Mandiant discovered malware on the system.  Premera quickly hired an outside lawyer, and on February 21, 2015, “Premera and Mandiant entered into an amended statement of work that shifted supervision of Mandiant’s [later] work to outside counsel.”  Id. at *22.  Premera predictably argued that Mandiant’s later work was protected, because Mandiant was then working “on behalf of an attorney.”  Id. at *23.  But the court rebuffed the argument — bluntly explaining that the “flaw in Premera’s argument . . . is that . . . [Mandiant’s] scope of work did not change [from the October 2014 agreement] after outside counsel was retained.”  Id.  As the court noted, the “only thing that appears to have changed involving Mandiant was the identity of its direct supervisor.”  Id.

Companies seeking to maximize privilege and work product protection for internal corporate investigations should carefully document the primary motivations, showing that the corporation did something different or special because of its need for legal advice or because of anticipated litigation.  The documentation of course should start with law firms’ and consultants’ retainer letters – but all documents created before, during, and after investigations should help evidence the necessary motivational elements under the privilege and (if appropriate) the work product doctrine.