Last week a National Labor Relations Board (NLRB) administrative judge ruled that AT&T Mobility interfered with employees’ labor rights with an overly broad privacy rule. The rule prohibited employees from recording any conversation without approval from the company’s legal department.

The judge found that the rule violated Section 8(a)(1) of the National Labor Relations Act (Act), which prohibits employers from interfering with Section 7 rights. Section 7 gives employees the right to organize and engage in other concerted activity for the purpose of collective bargaining.

The rule came into question when sales associate Marcus Davis attended a termination notice meeting for another employee and recorded audio of the meeting without management’s prior knowledge.

After the meeting, local area sales manager Andrew Collings contacted the human resources department for guidance. Collings then instructed the local store manager to retrieve the company-owned phone, delete the 20-minute recording, and coach Davis on the company policy. Davis challenged the rule and filed an unfair labor practice charge with the NLRB.

In defense of the rule, AT&T argued that the policy was in place to protect the privacy of customer information. The judge found that although AT&T has a pervasive and compelling interest in protecting customer information, when balanced against employees’ Section 7 rights, the rule is overbroad and in violation of Section 8(a)(1) of the Act. Specifically, the judge noted that recent NLRB decisions had suggested that “protected conduct may include a number of things including recording evidence to preserve it for later use in administrative or judicial forums in employment-related actions,” and there were narrower ways for the employer to protect its legitimate interests without interfering with these employee rights. The judge also found that the employee was illegally threatened with disciplinary action, possibly termination, if he violated the privacy rule.

Accordingly, AT&T was ordered to rescind the rule and refrain from any action that would limit the exercise of employees’ Section 7 rights. It remains to be seen whether the company will comply now or contest the decision before the NLRB itself. The order fits into the trend of NLRB decisions over the last few years finding against work rules prohibiting photography and other forms of recording in the workplace. It does not entirely prohibit all rules limiting workplace recordings, but it does reject broad rules containing a blanket ban on all workplace recordings.

On April 6, 2017, New Mexico enacted a data breach notification law. The “Data Breach Notification Act” (H.B. 15) will take effect on June 16, 2017. The recent passage of this statute leaves Alabama and South Dakota as the only two remaining states with no law requiring companies to notify individuals of data breaches involving their personally identifiable information. Earlier drafts of the bill had failed to get past the New Mexico Senate Judiciary Committee because of concerns about the $150,000 damages cap and thirty (30) day notification requirement. The bill’s sponsor, Rep. Bill Rehm, stated that he worked closely with the New Mexico business community to make compromises on the bill so that it would pass this time around. The bill that passed this year still contains the damages cap but the previously proposed thirty (30) day notification requirement was replaced with a forty-five (45) day notification requirement.

For the most part, the New Mexico law requires companies to comply with data breach obligations required by a majority of other states. Like a handful of other states, including Illinois and Texas, the law’s definition of Personal Identifying Information (PII) explicitly includes biometric data along with other more commonly included categories of information like Social Security numbers, driver’s license numbers, and financial account numbers.

Some important provisions from the New Mexico security breach notification statute:

  • Like the majority of states, New Mexico’s statute applies only to “computerized data” and not data in paper or other forms.
  • Notifications to New Mexico residents (and to the Attorney General and Consumer Reporting Agencies if over 1,000 residents are affected by a single incident) must be made within forty-five (45) calendar days of discovery of the security breach.
  • Entities subject to GLBA or HIPAA are entirely exempted from the provisions of this statute.
  • Third-party service providers are also required to notify the data owner or licensor and must comply with the same forty-five (45) calendar day notice requirement.
  • However, notification obligations are triggered only if a security breach meets the harm threshold of posing a “significant risk of identity theft or fraud.”
  • Civil penalties for knowing or reckless violations of the statute are the greater of $25,000 or, in the case of failed notification, $10 per instance of failed notification, up to a maximum of $150,000.
  • Also, unlike Massachusetts’ and California’s data breach notification laws that outline prescriptive security processes that companies must follow, New Mexico’s new law generally gives businesses a lot of discretion in determining how to best protect PII. However, one area in which the New Mexico law is very specific is the requirement that businesses disclosing PII to third-party vendors contractually require such vendors to implement and maintain reasonable security procedures and practices.
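The civil-penalty provision in the bullets above can be restated as a short calculation. This is an illustrative sketch only; the function name is ours, and the dollar figures are simply those quoted from the statute:

```python
def nm_civil_penalty(failed_notifications: int) -> int:
    """Maximum civil penalty for a knowing or reckless violation of
    New Mexico's Data Breach Notification Act: the greater of $25,000
    or, for failed notifications, $10 per instance, capped at $150,000.
    """
    per_instance = min(10 * failed_notifications, 150_000)
    return max(25_000, per_instance)

print(nm_civil_penalty(1_000))   # 25000  -- the $25,000 floor applies
print(nm_civil_penalty(5_000))   # 50000  -- $10 per instance
print(nm_civil_penalty(20_000))  # 150000 -- the $150,000 cap applies
```

As the middle case shows, the per-instance figure only matters for breaches affecting more than 2,500 individuals whose notifications fail; below that, the $25,000 floor controls.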

The fragmented landscape of state data breach notification laws will only get more complex as states continue to amend current legislation, making compliance with state data breach notification laws increasingly difficult for businesses. Companies wanting to remain compliant with such laws across multiple jurisdictions will now have to contend with the laws of 48 states and 3 territories. Calls for a federal data breach notification requirement that would allow companies to follow one set of rules have received pushback from consumer advocates who fear a superseding federal law might weaken the data breach notification laws of states with heightened requirements.

On April 3, 2017, President Trump signed a repeal of new Federal Communications Commission (FCC) rules that would have subjected broadband internet service providers (ISPs) to more stringent consumer privacy regulations. Specifically, the FCC’s rule would have required ISPs to obtain opt-in consent from consumers before using and sharing sensitive information such as geo-location, web browsing history and app usage history.  This repeal allows Internet providers to compete with “edge providers” (which were not covered by the new FCC rules) in mining consumer browsing history and contributing to targeted online advertising.

This repeal, in and of itself, does not create any landmark changes in the legal landscape: the new FCC rules were only passed late last year and had not yet taken effect. However, it is symptomatic of the Trump administration’s antipathy towards government regulation of consumer privacy.  More importantly, President Trump’s retreat has already begun to spur state legislatures and Attorneys General to strengthen their stance on privacy, concentrating scrutiny at the state level.

For example, in Massachusetts, Republican state senators introduced legislation on April 7 that would bar ISPs from selling browsing histories without customers’ explicit permission. That bill would also prohibit ISPs from charging increased rates to consumers who refuse to share their personal information.

Similarly, last week in Illinois, lawmakers introduced multiple measures that would impose new restrictions on companies that collect or use geo-location information, enable or turn on device microphones, and transfer Illinois consumers’ data to third parties. Illinois legislators are also scheduled to hear two more bills, introduced in March, that specifically target commercial website operators.  Other state legislatures that have introduced or otherwise begun to consider Internet privacy bills in the last three weeks include Connecticut, Kansas, Maryland, Montana, New York, Washington, and Wisconsin.

This shift is also becoming evident via increased executive enforcement at the state level. Advertisements and applications that use and share consumers’ location appear to be an area of particular concern.  For example, in March, the Massachusetts AG’s office obtained a settlement with an advertising company that used geofencing to send targeted anti-abortion ads to consumers in certain cities who entered reproductive health clinics.  In New York, the Office of the Attorney General (OAG) recently entered settlements with three health and fitness mobile application operators, which demand, among other things, that the app providers limit or obtain affirmative consent prior to collection of certain sensitive information.

Though the Trump administration’s laissez-faire approach toward privacy might, at first glance, appear to signal a shift towards lightening the burden of privacy regulations, it may well have the opposite effect, by creating backlash at the state level.  Accordingly, businesses, particularly those who operate online, will need to be more cognizant than ever of differing state policies moving forward.

On March 31, the U.S. Court of Appeals for the D.C. Circuit struck down a Federal Communications Commission (FCC) rule requiring that solicited fax advertisements contain a notice on how to opt out of future faxes. Following the ruling, such opt-out notices will be required only in unsolicited fax advertisements. The decision in Bais Yaakov of Spring Valley, et al. v. Federal Communications Commission, et al. will significantly impact litigation — particularly class action litigation — involving the failure to include an opt-out notice on fax advertisements.

Under the Junk Fax Prevention Act of 2005, an amendment to the Telephone Consumer Protection Act applicable to fax communications, businesses are prohibited from faxing unsolicited advertisements. “Unsolicited advertisements” are defined as advertising material “transmitted to any person without that person’s prior express invitation or permission.” The law contains an exception when three requirements are met: (1) the sender and recipient have an established business relationship; (2) the sender obtained the fax number from the recipient, through their communications or by virtue of the recipient publishing it to a directory or website; and (3) as relevant here, the advertisement contains an opt-out notice. The law goes on to require the opt-out notice to be “clear and conspicuous” and provide a free mechanism to opt out from future faxes.

In 2006, the FCC, purporting to exercise its authority to issue regulations and implement the law, issued a rule requiring that solicited fax advertisements contain opt-out notices. The law already required unsolicited fax advertisements to include an opt-out notice. Accordingly, under the FCC’s revised rules, businesses had to include opt-out notices on all fax advertisements — even if the recipient expressly consented to receive them.

This rule was challenged by a petitioner facing a $150 million class action lawsuit for failing to include opt-out notices on fax advertisements, many of which it had permission to send. The FCC argued that because the law required businesses to include opt-out notices on unsolicited fax advertisements, the FCC also had the authority to require businesses to include opt-out notices on solicited faxes.

The majority of the D.C. Circuit panel disagreed, finding nothing in the text of the law to convey such authority. Instead, the court noted that Congress had drawn a line between unsolicited and solicited fax advertisements, but the law did not require (or give the FCC authority to require) opt-out notices on solicited faxes. That was all the court needed to know to resolve the case.

The D.C. Circuit also rejected the FCC’s argument that it could require opt-out notices on solicited faxes because Congress did not define the phrase “prior express invitation or permission” in the law. The court found the argument “difficult to follow,” noting that the phrase “prior express invitation or permission” went to whether a fax was solicited or unsolicited (and thus whether an opt-out notice was required) — not the other way around. The court also found the FCC’s argument that its rule was good policy to be irrelevant because “good policy does not change the statute’s text.”

Notably, Judge Pillard, who also serves on the panel deciding ACA International’s appeal of the FCC’s 2015 TCPA Omnibus Order, dissented. Judge Pillard determined that the FCC had the implicit authority to require opt-out notices for solicited fax advertisements stemming from Congress’ direction to the FCC to prescribe regulations to implement the law. In addition, Judge Pillard adopted the FCC’s difficult-to-follow argument that “the inclusion of an opt-out notice is part of what makes subsequent faxes ‘solicited’ at all.”

Judge Pillard’s opinion appears to be motivated by a desire to provide a uniform mechanism for opting out. She reasoned that if a fax contains an opt-out mechanism and a recipient does not opt out, then the recipient has agreed to receive future advertisements (i.e., solicited advertisements). As the panel recognized, such reasoning removes any distinction Congress drew between solicited and unsolicited advertisements in the law. Judge Pillard’s ruling in this case may suggest that she will also rule in favor of the FCC in the much-anticipated decision in the ACA International appeal.

The D.C. Circuit’s decision will impact litigation relating to the absence of an opt-out notice on fax advertisements. First, there is no longer any liability for the failure to include an opt-out notice where the recipient consented to receive the fax. Second, the decision will undoubtedly impact class certification in actions arising from the failure to include an opt-out notice because the question of whether the opt-out notice is required is now an individualized question that turns on whether the recipient consented to receive the fax.

As previously reported, the significant rise in Form W-2 phishing e-mails has prompted increased awareness surrounding these fraudulent tax schemes. Most recently, Virginia has responded to these types of attacks by amending its data breach notification law, Va. Code Ann. § 18.2-186.6(M). The amended law will require all employers and payroll service providers to notify the Virginia Attorney General if they are subject to a breach of payroll data, including a Form W-2 e-mail phishing scam.

The new law, effective July 1, 2017 and the first of its kind, requires that employers notify the Virginia Attorney General if they discover “unauthorized access and acquisition of unencrypted and unredacted computerized data containing a taxpayer identification number in combination with the income tax withheld for that taxpayer” and “the employer or payroll provider reasonably believes has caused or will cause, identity theft or other fraud.”

The notification must include the employer or payroll service provider’s name and federal employer identification number. Once alerted, the Office of the Attorney General will report the incident to the Department of Taxation. Notification to the Attorney General is required even if the breach does not otherwise trigger the statute’s requirement that the company notify state residents of the breach. A copy of the new law can be found here. In another development, the IRS has a webpage that businesses and payroll service providers can access to learn how to quickly report data losses resulting from a Form W-2 fraudulent tax scheme. To view the IRS webpage, click here.

Last week, the Office for Civil Rights (OCR) issued guidance on securing end-to-end communications for sensitive information transmitted between parties over the internet. The OCR warns against “man-in-the-middle” (MITM) attacks that can occur during the transmission of information. In a MITM attack, a third party intercepts communications between two parties and, in addition to accessing the information, may alter the communication by injecting malicious code or modifying trusted information.

If the intercepted information is sensitive in nature, it is likely that the information is protected under one or more state or federal laws that require certain security protocols. OCR states that when electronic protected health information (ePHI) that is protected under the Health Insurance Portability and Accountability Act (HIPAA) is transmitted over the internet, covered entities and business associates should include factors for securing end-to-end communication in their security risk analysis required by the HIPAA Security Rule.

According to OCR, many organizations use HTTPS inspection products in an effort to monitor the security of confidential communications. These products intercept HTTPS communications, decrypt and review them for attacks, and then re-encrypt the communications. OCR cautions that the inspection process can actually make communications more vulnerable to MITM attacks. For example, some interception products do not verify the trust certificate chains between the organization and the server before re-encrypting the communications. Once an HTTPS interception product is in use, an organization is no longer able to validate the certificates in the connection itself. OCR recommends verifying that an HTTPS inspection product properly validates certificate chains and informs the user of any errors prior to using the product. Further, an organization’s poor implementation of inspection products can impair security and introduce new vulnerabilities. OCR states that covered entities and business associates who use an HTTPS inspection product for transmissions of ePHI should consider these risks as part of their HIPAA security risk analysis.
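OCR’s point about validating certificate chains can be illustrated with a minimal Python sketch. The helper below is ours, not from OCR’s guidance; it shows the two checks — certificate-chain verification and hostname checking — that a poorly implemented HTTPS inspection product can silently break:

```python
import socket
import ssl

def fetch_peer_cert(host: str, port: int = 443) -> dict:
    """Connect to a server over TLS and return its validated certificate.

    ssl.create_default_context() enables both certificate-chain
    verification (CERT_REQUIRED) and hostname checking -- the checks
    that some HTTPS interception products fail to perform before
    re-encrypting traffic.
    """
    context = ssl.create_default_context()
    assert context.verify_mode == ssl.CERT_REQUIRED
    assert context.check_hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # Raises ssl.SSLCertVerificationError if the chain cannot
            # be validated back to a trusted root.
            return tls.getpeercert()

# Example (requires network access):
# fetch_peer_cert("example.com")
```

Once an inspection proxy sits in the middle, a client like this can only validate the proxy’s certificate, not the origin server’s — which is exactly why OCR recommends confirming that the inspection product itself performs proper chain validation.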

OCR emphasizes its long-standing guidance for covered entities and business associates to encrypt ePHI to ensure that the ePHI is not unsecured. OCR has issued specific guidance on securing ePHI, including encryption. OCR also encourages covered entities and business associates to review recommendations from the National Institute of Standards and Technology for securing end-to-end communications, as well as recommendations from the United States Computer Emergency Readiness Team on protecting internet communications and preventing MITM attacks. All of these resources provide valuable tools for organizations, including covered entities and business associates under HIPAA, to ensure the security of end-to-end communications and reduce the risk of associated liability.

Data breaches can occur in the most surprising places. When data breaches affect sensitive, private information—especially information about children—companies can face scrutiny from regulatory agencies and be exposed to civil (and perhaps even criminal) liability.  While hackers are still targeting retail corporations and financial institutions, some hackers have moved on to an unexpected new area: children’s toys.

Spiral Toys Inc. sells stuffed animals called “CloudPets.” These 21st century stuffed animals are connected to the internet, allowing parents, their children, and anyone with access to the stuffed animals to record and send voice messages to each other.  Users simply download the “CloudPets” phone app (the Android app has been downloaded over 100,000 times already) and create an account by registering their email addresses and other personal information with the CloudPets app.  Unfortunately, the combination of a vulnerable security network and the sensitive nature of the private information held on the CloudPets server made it an attractive target for hackers.

In February 2017, cybersecurity experts discovered that the account information of more than 800,000 CloudPets could be easily accessed by anyone browsing the internet, without the need for a password. Even more disturbing, as reported by cnet.com, nearly 2.2 million voice recordings were also stored online in an insecure manner.  This includes potentially millions of voice recordings of children.  According to the cybersecurity experts, hackers appeared to have wiped the user database and held its contents for ransom from the company.

Unfortunately, CloudPets’ security flaws do not appear to be an isolated event. While retailers and banks have beefed up their cybersecurity in recent years after a number of high-profile breaches, toy manufacturers appear to be lagging behind.  In prior years, cybersecurity experts raised similar concerns with an internet-connected Barbie doll.  Likewise, cybersecurity concerns have been raised with other connected devices that contain private information, such as fitness tracking devices like Fitbit.

Data breaches result in serious legal and public relations consequences, including a duty to disclose breaches to the public, regulatory fines, and potential class action lawsuits. Civil actions premised on tort law, such as invasion of privacy, are also colorable causes of action for breaches involving sensitive private information.

Finally, data breaches can also result in severe financial consequences for the companies involved. For Spiral Toys, the CloudPets breach has directly or indirectly caused its stock price to drop to 1 cent.  Moving forward, manufacturers of “connected” 21st century toys and gadgets should study cybersecurity best practices and cyber-threat trends to stay ahead of the pack and reduce their likelihood of becoming targets for opportunistic hackers.

It has been less than three years since the Court of Justice of the European Union (CJEU) decided that people have the right to have inaccurate or irrelevant information about them removed from online search engine results. However, this so-called “right to be forgotten” is not absolute, as confirmed by the CJEU’s most recent ruling last week.

This case concerned an Italian director, Mr. Salvatore Manni, who sought to have his personal details removed from company records in an official public register. He believed that his properties had failed to sell because the companies register showed that he had been an administrator of another company that went bankrupt.

The CJEU held that Mr. Manni could not demand the deletion of his personal data from the official register because the public nature of company registers is intended to ensure legal certainty and to protect the interests of third parties. It was held that this interference with an individual’s fundamental rights to a private life and to the protection of personal data was not disproportionate in the circumstances. This was because company registers only disclose a limited amount of personal data and company executives should be required to disclose data relating to their identity and functions within a company. The CJEU concluded by saying that in specific and exceptional situations, overriding and legitimate reasons may justify limiting the rights of third parties to access such data, and left it up to national courts to determine whether “legitimate and overriding reasons” exist on a case-by-case basis.

This decision echoes the ruling in the 2014 Google Spain Case; the right to be forgotten must be balanced against individuals’ fundamental rights, such as the right of freedom of expression and the public’s right to know information about persons holding key positions within a company. The General Data Protection Regulation (GDPR), which codifies the right to be forgotten, also confirms this position. The right to be forgotten allows individuals to request the deletion of personal data in specific circumstances. However, the GDPR contains certain exemptions where companies can refuse to deal with a deletion request, such as where the processing is necessary to exercise the right of freedom of expression, and for archiving purposes in the public interest.

Companies that receive requests from individuals asking that their personal data be deleted will need to determine, on a case-by-case basis, whether or not such data should be erased. Organizations will be required to perform a balancing act against any competing rights when considering such erasure requests.

See also:

UK’s First Ever Right To Be Forgotten Enforcement: Google In the Firing Line Again

The French Data Protection Authority Puts Google On Notice To Delist Domain Names Beyond Site’s EU Extensions

The CJEU’s Google Spain Decision: A Right to be Forgotten Within the Limits of the Freedom of Expression

Costeja’s Revenge: Orders to Delete Accurate Data and the Right to be Forgotten in the EU

The Illinois Biometric Information Privacy Act (IBIPA) covers face geometry scans that are created from digital images, according to a preliminary ruling last month in a lawsuit against Google. Rivera v. Google Inc., No. 16 C 02714 (N.D. Ill. February 27, 2017). The suit seeks monetary compensation for individuals identified by face recognition technology in photos uploaded to the “Google Photo” service. The ruling rejected Google’s argument that the IBIPA should cover only facial scans made in person, and it potentially subjects Google and other providers of widely used facial recognition technology to significantly expanded privacy requirements in Illinois to protect the biometric privacy of individuals whose faces are in the tech companies’ databases.

Two individuals sued Google, seeking class action status and claiming that Google violated the IBIPA when, without their consent, Google’s software obtained facial geometry for their faces from photos that were uploaded to Google Photo. Google Photo is a cloud-based offering of Google that, among other things, uses facial recognition technology to assist users in organizing and retrieving their photos.  The IBIPA requires anyone who collects and stores certain “biometric identifiers” such as “face geometry” to first obtain the person’s consent and also requires a written policy for retention and eventual destruction of those identifiers.  The statute provides for damages of $1,000 for each negligent violation and $5,000 for each intentional violation.
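The per-violation figures above translate into very large class-wide exposure. A quick sketch — the class size and the one-violation-per-member assumption are hypothetical, not drawn from the complaint:

```python
def bipa_exposure(class_size: int, intentional: bool = False) -> int:
    """Rough statutory-damages exposure using the IBIPA figures above:
    $1,000 per negligent violation and $5,000 per intentional one,
    assuming (hypothetically) one violation per class member.
    """
    return class_size * (5_000 if intentional else 1_000)

# A hypothetical one-million-member class:
print(bipa_exposure(1_000_000))                     # 1000000000 ($1B)
print(bipa_exposure(1_000_000, intentional=True))   # 5000000000 ($5B)
```

These order-of-magnitude numbers explain why class certification is usually the decisive battleground in IBIPA litigation.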

In seeking to have the suit dismissed before proceedings begin, Google argued that language in the statute excluding photographs from some parts of the IBIPA should be applied to interpret the statute’s definition of “biometric identifiers” covered by the statute to mean that only in-person scans are covered. The statute defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint or scan of hand or face geometry.” The Court, in a detailed 30-page ruling carefully analyzing the text of the statute and the legislative history, concluded that despite “photograph” being expressly excluded from a different definition in the statute, the Illinois legislature did not intend to distinguish between in-person and virtual scans in the definition of “biometric identifier.”  As a result, the court interpreted “biometric identifier” to include face geometry extracted from Google Photo images.

If this interpretation ultimately prevails, it would have a significant impact, at least in Illinois, on the privacy compliance requirements for a broad and growing category of technology products. In addition to Google, a great many photo sharing and social media product providers use similar facial recognition technology to identify people, to organize photos and to add features and images to photos.  The IBIPA would require all the entities providing these functions to specifically inform their users about the collection of face geometry and to publish a retention schedule, detailing how the data will be kept and when it will be deleted.

The impact of this Illinois statute on the rest of the country remains a contested issue. In its ruling, the court concluded that at this early stage of the lawsuit there was sufficient indication that the statute was violated in Illinois so that, unless contrary evidence was introduced, it would apply in this case.  That, however, was based on the assertion that the pictures were taken and uploaded in Illinois, and without an analysis of where the facial geometry was extracted or stored.  The court deferred to a later stage of the litigation the federal constitutional questions about whether this Illinois statute could govern Google’s (and other internet providers’) actions across the United States.

For the latest developments, click here.

For additional analysis on SaaS, please see our latest blog post, which can be found here.


Small and medium-sized businesses are increasingly turning to software as a service (SaaS) solutions for their IT needs. SaaS solutions can provide end-users with quicker, cheaper access to software that they might not otherwise have at their disposal. SaaS solutions can also be more scalable, which is important for early-stage companies.  However, SaaS and cloud data storage are still relatively young technologies and carry some risks.  When your business turns to SaaS and cloud solutions, consider the following three major issues:

  1. Data Security:  Data breaches happen all the time. News reports of hacking and industrial espionage hit the headlines daily and present a serious threat to small and medium-sized businesses. On-premises software presents its own set of security concerns, but be wary of new technologies and of vendors who do not have a robust security program in place.
  2. Ongoing Business Concerns:  Small and medium-sized businesses often have no option but to outsource certain functions, such as IT. However, when you outsource IT, you lose visibility into how your service provider is faring as a business and can open yourself up to various risks.
  3. Availability:  Employees at small or medium-sized businesses work around the clock and need access to company data 24/7. However, with SaaS and cloud computing, outside issues like internet and power outages are a common problem.
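On the availability point, it helps to translate a vendor’s SLA uptime percentage into the downtime it actually permits each year. A quick sketch, with illustrative SLA tiers:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def max_downtime_hours(uptime_pct: float) -> float:
    """Annual downtime permitted by an availability SLA."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}%: {max_downtime_hours(sla):.2f} hours/year")
```

A “99%” SLA sounds strong but still allows roughly 87.6 hours of downtime a year, while “three nines” (99.9%) allows about 8.76 hours — a useful benchmark when comparing vendor commitments.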

Keeping these three issues in mind, what should you do? First, perform due diligence on your vendors to filter out mediocre SaaS providers and find the right solution for your business.  Ask vendors about their disaster plans and recovery methods, risk analyses, and protocols.  Request information and recommendations from current customers.  Find out if there have been prior security breaches.  Read any terms and conditions, and don’t skip the fine print.  Make sure that any software or data that is critical to the continuation of your work is escrowed. A well-drafted software escrow agreement can go a long way in the event of an issue. If any customizations or updates to the software are done specifically for your business, make sure that those are covered as well, not just the original software version.

The bottom line: expect the unexpected and mitigate any future security issues that might arise.