Across the country, school districts use technology to facilitate learning and assist in classroom management. From tracking grades and communicating with parents to monitoring bathroom breaks, technology is everywhere in our schools. But as technology becomes more prevalent in the classroom, what does that mean for student data privacy?

Federal Laws Governing Student Data Privacy

There are several federal laws that govern student data privacy. The Family Educational Rights and Privacy Act (FERPA) protects student education records and requires the consent of parents, or of students age 18 or older, before education records may be released. The Protection of Pupil Rights Amendment (PPRA) requires parental consent for any federally funded student survey or evaluation that asks the student to provide sensitive information. Lastly, the Children’s Online Privacy Protection Act (COPPA) regulates companies collecting data about children under the age of thirteen. Under COPPA, operators of educational products used in schools need not obtain parental consent directly; instead, schools can consent on behalf of parents. Importantly, the Federal Trade Commission (FTC) is considering updating COPPA’s regulations. The FTC requested comments on the rule in July and held a workshop in October.

Continue Reading Trends in Student Data Privacy

In less than one month, the California Consumer Privacy Act of 2018 (CCPA) will go into effect and usher in a new era of data breach litigation. While the California Attorney General is charged with general enforcement of the state’s landmark privacy law, whether consumers can rely on a violation of the CCPA as the basis for claims under other state statutes will be a concern.

For background, Section 1798.150(a)(1) of the CCPA gives consumers a limited private right of action. The provision allows consumers to sue businesses that fail to maintain reasonable security procedures and practices to protect “nonencrypted or nonredacted personal information” of a consumer and further fail to cure the breach within 30 days. A violation of this data security provision allows recovery of statutory damages of $100 to $750 per consumer per incident or actual damages, whichever is greater, as well as injunctive relief. To determine the appropriate amount of statutory damages, courts must analyze the circumstances of the case, including the number of violations, the nature, seriousness, willfulness, pattern, and length of the misconduct, and the defendant’s assets, liabilities, and net worth.
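
By way of rough illustration only, the recovery under Section 1798.150(a)(1) can be thought of as the greater of statutory damages (a court-set amount between $100 and $750, multiplied per consumer and per incident) and actual damages. The Python sketch below models that calculation under those assumptions; the figures and the per-consumer amount are hypothetical, and injunctive relief is not modeled.

    # Illustrative sketch only: models the "greater of" recovery under
    # CCPA Section 1798.150(a)(1). All figures below are hypothetical.

    def ccpa_recovery(consumers_affected: int,
                      incidents: int,
                      statutory_per_consumer: float,
                      actual_damages: float) -> float:
        """Return the greater of statutory and actual damages.

        statutory_per_consumer is assumed to be a court-determined amount
        within the $100-$750 range, based on the factors the statute lists
        (number of violations, nature/seriousness/willfulness of the
        misconduct, the defendant's assets, and so on).
        """
        if not 100 <= statutory_per_consumer <= 750:
            raise ValueError("Statutory damages run $100-$750 per consumer per incident")
        statutory_total = statutory_per_consumer * consumers_affected * incidents
        return max(statutory_total, actual_damages)

    # Hypothetical example: 10,000 consumers, one incident, $100 per consumer,
    # versus $500,000 in proven actual damages.
    print(ccpa_recovery(10_000, 1, 100.0, 500_000))  # -> 1000000.0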

Continue Reading CCPA Review: The CCPA May Prohibit Some, But Not All, State Consumer Protection Law Claims

This week, the California Attorney General held public hearings on the draft California Consumer Privacy Act (CCPA) regulations it issued in October. We attended the hearings in both Los Angeles and San Francisco. One message resounded clearly: if left as drafted, the proposed regulations will have unintended consequences.

Both hearings were well-attended, with dozens of comments from businesspeople, attorneys, and a handful of concerned citizens.  In addition to these two hearings, the Attorney General also held public hearings in Sacramento and Fresno, and is accepting written comments through Friday, December 6, 2019.  If the Los Angeles and San Francisco hearings are any indication, there are many areas in which the Attorney General could provide further clarity should it choose to revise the current draft regulations.

Continue Reading California Attorney General’s Public Hearings on CCPA Regulations in Los Angeles and San Francisco—An Overview

While customer data breaches are garnering a lot of media attention, a subtler but equally problematic cybercrime is slowly on the rise — domain spoofing.

In this context, cybercriminals register domain names that are virtually identical to an entity’s legitimate domain name and/or brand, often with subtle misspellings or the addition of business designations or generic words describing the entity’s business. The false domain names are so similar to a company’s actual domain and/or brand that they appear legitimate.

The cybercriminals then use the deceptively similar domain name to create email addresses and send emails impersonating a company or its employees, sometimes using the names of the entity’s actual employees — a tactic commonly called “email spoofing.” Those emails typically contain malware in links or attachments, which is triggered by clicking the link or opening the attachment. Other email spoofing schemes attempt to trick recipients into providing login credentials or payment card information, or into routing wire transfers to the cybercriminal’s bank account.
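
As a rough sketch of how an organization might screen inbound mail for this kind of deceptively similar sender domain, the Python example below compares a sender’s domain to a list of legitimate domains using simple string similarity. The domain names and threshold are hypothetical, and real-world filtering would also rely on SPF/DKIM/DMARC checks and lists of known homoglyph substitutions.

    # Sketch of a lookalike-domain check for inbound email.
    # Domain names and the similarity threshold are hypothetical.
    import difflib

    LEGITIMATE_DOMAINS = ["examplefirm.com"]  # hypothetical legitimate domain

    def is_suspicious_sender(sender_address: str, threshold: float = 0.85) -> bool:
        """Flag senders whose domain is close to, but not exactly,
        a legitimate domain (e.g., 'examp1efirm.com' or 'examplefirm-llp.com')."""
        domain = sender_address.rsplit("@", 1)[-1].lower()
        for legit in LEGITIMATE_DOMAINS:
            if domain == legit:
                return False  # exact match: not spoofed at the domain level
            similarity = difflib.SequenceMatcher(None, domain, legit).ratio()
            if similarity >= threshold:
                return True   # near-miss of a known domain: treat as suspicious
        return False

    print(is_suspicious_sender("billing@examp1efirm.com"))  # True
    print(is_suspicious_sender("partner@examplefirm.com"))  # False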

Continue Reading *Chime* It’s an Email from Your Favorite Outside Counsel, or Is It?

On October 31, a bipartisan group of senators introduced the Filter Bubble Transparency Act (FBTA), a bill that would require large online platforms to be more transparent about their use of algorithms driven by user-specific data.

“This legislation is about transparency and consumer control,” said Senator John Thune (R-S.D.).

“For free markets to work as effectively and as efficiently as possible, consumers need as much information as possible, including a better understanding of how internet platforms use artificial intelligence and opaque algorithms to make inferences.”

The bill is named after Eli Pariser’s book The Filter Bubble, which argues that personalized search results generated from user-specific data can trap users in an ideological bubble by filtering out content that runs contrary to their existing viewpoints.

Continue Reading Bipartisan Bill Seeks to Increase Internet Transparency and Consumer Control Over Content

For years, corporate boards have hired third-party companies to conduct financial audits to assure that there is no fraud or other breach of fiduciary responsibility by management. Cyber risks should be managed similarly. Who can thoroughly evaluate whether management is prepared to protect the company when its systems are attacked or when a data breach occurs? Is management prepared to execute the company’s incident response plan, or is the plan just sitting on the shelf untested?

Continue Reading Effective Incident Response Requires Good Cyber Exercise—Is Your Company in Shape?

A recent letter from researchers at the Mayo Clinic to the editor of The New England Journal of Medicine outlined a new challenge in de-identifying, or preserving the de-identified nature of, research and medical records.[1]  The Mayo Clinic researchers described their successful use of commercially available facial recognition software to match the digitally reconstructed images of research subjects’ faces from cranial magnetic resonance imaging (“MRI”) scans with photographs of the subjects.[2]  MRI scans, often considered non-identifiable once metadata (e.g., names and other scan identifiers) are removed, are frequently made publicly available in published studies and databases.  For example, administrators of a national study called the Alzheimer’s Disease Neuroimaging Initiative estimate other researchers have downloaded millions of MRI scans collected in connection with their study.[3]  The Mayo Clinic researchers assert that the digitally reconstructed facial images, paired with individuals’ photographs, could allow the linkage of other private information associated with the scans (e.g., cognitive scores, genetic data, biomarkers, other imaging results and participation in certain studies or trials) to these now-identifiable individuals.[4]
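
At its core, the linkage technique the researchers describe is a nearest-neighbor match between face embeddings. The sketch below is a simplified, hypothetical illustration: it assumes embedding vectors have already been extracted from reconstructed MRI facial surfaces and from a photo gallery by some facial recognition tool, and all names, data, and thresholds are invented. It is meant only to show why stripping metadata alone may not de-identify imaging data.

    # Simplified, hypothetical illustration of re-identification by embedding
    # matching. Assumes face embeddings were already extracted by some facial
    # recognition tool; all names, data, and thresholds below are invented.
    from __future__ import annotations

    import numpy as np

    def best_match(scan_embedding: np.ndarray,
                   photo_embeddings: dict[str, np.ndarray],
                   max_distance: float = 0.6) -> str | None:
        """Return the identity whose photo embedding is closest to the embedding
        derived from an MRI-reconstructed facial surface, or None if nothing is
        close enough."""
        best_name, best_dist = None, float("inf")
        for name, embedding in photo_embeddings.items():
            dist = float(np.linalg.norm(scan_embedding - embedding))  # Euclidean distance
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= max_distance else None

    # Toy data: 128-dimensional embeddings drawn at random.
    rng = np.random.default_rng(seed=0)
    gallery = {"subject_a": rng.normal(size=128), "subject_b": rng.normal(size=128)}
    scan = gallery["subject_a"] + rng.normal(scale=0.01, size=128)  # near-duplicate
    print(best_match(scan, gallery))  # expected: subject_a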

Continue Reading Technology Continues to Outflank Health Information Anonymization

In one of this year’s largest HIPAA settlements, the U.S. Department of Health and Human Services Office for Civil Rights (OCR) is set to collect $3 million from the University of Rochester Medical Center (URMC). This settlement over potential violations of the Privacy and Security Rules under HIPAA also requires URMC to follow a corrective action plan that includes two years of HIPAA compliance monitoring by OCR.

Continue Reading Unencrypted Mobile Devices Cost Medical Center $3 Million In HIPAA Settlement

The EU-US Privacy Shield (Privacy Shield) has passed its third annual review by the European Commission. A framework constructed by the US Department of Commerce and the European Commission to permit transfers of personal data for commercial purposes, the Privacy Shield enables companies in the EU and the US to comply with data protection requirements when transferring personal data from the EU to the US.

The Privacy Shield was approved by the European Commission on 12 July 2016 and is subject to annual reviews intended to avoid the failures that led to the downfall of the Safe Harbor Principles, which it replaced. The reviews evaluate all aspects of the functioning of the Privacy Shield framework.

Continue Reading EU-US Privacy Shield Passes its Third Annual Review

The U.S. Department of Health and Human Services Office for Civil Rights (OCR) has collected over $2.15 million in civil penalties from Miami-based Jackson Health System (JHS) for multiple violations of the Security and Breach Notification Rules under HIPAA. JHS is a nonprofit academic medical system that serves approximately 650,000 patients a year in six major hospitals and a network of affiliated healthcare facilities. This is the first publicized imposition of civil monetary penalties under HIPAA in recent years, in contrast to the many publicized settlements of alleged violations, indicating that JHS’ violations were severe.

Continue Reading Jackson Health System Slammed With $2.15 Million Penalty for Privacy Breaches