The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) goes into effect on May 25, 2018, and it clearly applies to U.S. companies doing business in Europe or offering goods and services online that EU residents can purchase. Given that many U.S. companies, particularly in the health care space, increasingly are establishing operations and commercial relationships outside the United States generally, and in Europe particularly, many may be asking questions akin to the following recent inquiries that I have fielded concerning the reach of the GDPR:

What does the GDPR do? The GDPR unifies European data and privacy protection laws as to companies that collect or process the personally identifiable information (“PII” or, as the GDPR calls it, “personal data”) of European residents (not just citizens).

Who must comply? The GDPR applies to any company that has personal information of EU residents or citizens or that conducts business in the EU, regardless of its home country.

What is the risk of non-compliance? The GDPR mandates documented compliance. The regulations provide for substantial fines for noncompliance of up to €20 million or 4 percent of global annual revenues, whichever is greater. Willful non-compliance is fined most heavily under this tiered system.

How far along are most companies as to compliance? The consulting firm Gartner estimates that more than half of the companies subject to the GDPR will not be in compliance by the end of this year. They will be at risk.

Who will adopt the regulations? All 28 EU member states, plus Iceland, Liechtenstein, and Norway (which, together with the EU, make up the “European Economic Area”), and likely the United Kingdom, will adopt the regulations.

Will the regulations be enforced extraterritorially? The GDPR applies worldwide as to any company that offers goods or services (even if they are free) within the EU or collects, processes, or maintains (anywhere) personal data about European residents (again, not just citizens).

How is “personal data” defined? The definition includes any information as to a human being (called a “data subject”) that can directly or indirectly identify him or her, including, but not limited to, names; birthdates; physical addresses; email and IP addresses; and health, biometric, and demographic information.

What constitutes compliance? In general terms, a subject company must limit the use of the retained personal data and maintain it securely.

  • Explicit consent is required for each processing activity as to any covered datum.
  • Access, free of charge, must be afforded to any data subject on request to a “data controller” (the person or entity that determines the purposes and means of processing the data), who, in turn, must assure that any “data processor” (any person or company that processes the data on the controller’s behalf) is compliant as to the requested action.
  • Data subjects have the right to be “forgotten,” i.e., to have their data expunged, and may revoke consent at will.

What does the GDPR require if there is a data breach? Data breaches that “may” pose a risk to individuals must be reported to the supervisory authority within 72 hours and to affected persons without undue delay.

This, of course, is only an outline of GDPR requirements and procedures. Specific advice can be provided only with knowledge of an individual company’s circumstances and needs. One does note that, as is the case in other regards, for example, antitrust, the assumptions prevalent within the EU are decidedly different from those in the United States. As a number of commentators have observed, while there is no defined “right of privacy” in the United States, a company is required to preserve information, including PII and personal health information, or PHI, in the event of litigation. In Europe, which has very limited litigation discovery, there is a defined right of privacy, and individuals can cause data describing them to be erased (“forgotten”).

Many of you know also that there is a case pending a decision in the Supreme Court of the United States involving whether the U.S. government can compel Microsoft to produce PII that is collected and stored outside of the United States. An affirmative decision might create a conflict of laws that would complicate the data retention practices of American companies doing business overseas. So stay tuned.

It is highly likely that the National Association of Insurance Commissioners (“NAIC”) will adopt a model data security law premised largely on the New York State Department of Financial Services (“NYSDFS”) cybersecurity regulations.  Recently, we discussed the NYSDFS’ proposed extension of its cybersecurity regulations to credit reporting agencies in the wake of the Equifax breach.  New York Governor Andrew Cuomo has announced, “The Equifax breach was a wakeup call and with this action New York is raising the bar for consumer protections that we hope will be replicated across the nation.”  Upon adoption by the NAIC, the NYSDFS regulations requiring that NYS financial organizations have in place a written and implemented cybersecurity program will gain further traction toward setting a nationwide standard for cybersecurity and breach notification.  Indeed, although there are differences, the NAIC drafters emphasized that any Licensee in compliance with the NYSDFS “Cybersecurity Requirements for Financial Services Companies” will also be in compliance with the model law.

The NAIC Working Committee expressed a preference for a uniform nationwide standard: “This new model, the Insurance Data Security Model Law, will establish standards for data security and investigation and notification of a breach of data security that will apply to insurance companies, producers and other persons licensed or required to be licensed under state law. This model, specific to the insurance industry, is intended to supersede state and federal laws of general applicability that address data security and data breach notification. Regulated entities need clarity on what they are expected to do to protect sensitive data and what is expected if there is a data breach.  This can be accomplished by establishing a national standard and uniform application across the nation.”  Other than small licensees, the only exemption is for Licensees certifying that they have in place an information security program that meets the requirements of the Health Insurance Portability and Accountability Act.  According to the Committee, following adoption, it is likely that state legislatures throughout the nation will move to adopt the model law.

The model law is intended to protect against both data loss negatively impacting individual insureds, policy holders, and other consumers and loss that would cause a material adverse impact to the business, operations, or security of the Licensee (e.g., trade secrets).  Each Licensee is required to develop, implement, and maintain a comprehensive written information security program based on a risk assessment and containing administrative, technical, and physical safeguards for the protection of non-public information and the Licensee’s information system.  The formalized risk assessment must identify both internal threats from employees and other trusted insiders and external hacking threats.  Significantly, the model law recognizes the increasing trend toward cloud-based services by requiring that the program address the security of non-public information held by the Licensee’s third-party service providers.  The model law permits a scalable approach that may include best practices of access controls, encryption, multi-factor authentication, monitoring, penetration testing, employee training, and audit trails.

In the event of unauthorized access to, disruption or misuse of the Licensee’s electronic information system or non-public information stored on such system, notice must be provided to the Licensee’s home State within 72 hours.  Other impacted States must be notified where the non-public information involves at least 250 consumers and there is a reasonable likelihood of material harm.  The notice must specifically and transparently describe, among other items, the event date, the description of the information breached, how the event was discovered, the period during which the information system was compromised, and remediation efforts.  Applicable data breach notification laws requiring notice to the affected individuals must also be complied with.

Our colleague Michelle Capezza of Epstein Becker Green authored an article in Confero, titled “Managing Employee Benefits in the Face of Technological Change.”

Following is an excerpt – click here to download the full article in PDF format:

There are many employee benefits challenges facing employers today, from determining the scope and scale of traditional benefits programs to offer that will attract, motivate and retain multigenerational employees, to embracing new models for defining and providing benefits, while simultaneously managing costs. In the midst of these challenges is the wave of technological change that is impacting all areas of the workplace, including human resources and benefits. In recent years, many new technological tools have emerged to aid in the administration of benefit plans, the delivery of participant communications, and the provision of education and advice. These tools often require collection of sensitive data or allow employees to provide personal information in an interactive environment, such as:

  • Benefits, HR, and payroll software, and plan recordkeeping systems
  • Online and mobile applications for benefits enrollment and benefits selection assistance
  • Social media tools and applications for benefits information and education
  • Online investment allocation tools, robo advisors, financial platforms
  • Telehealth and wellness programs

These and other advancements are a sign of the times. While they appeal to employees, reduce burdens on employers, and assist in driving down program costs, organizations must be mindful that cyberattacks on benefit plans and participant information have occurred and measures should be taken to protect against such data breaches.

New York State has issued proposed regulations extending existing regulations requiring banks and other financial institutions to have in place a comprehensive cybersecurity program to credit reporting agencies.  Governor Andrew Cuomo announced that “The Equifax breach was a wakeup call and with this action New York is raising the bar for consumer protections that we hope will be replicated across the nation.”

Under the proposed regulations, every consumer reporting agency that assembles, evaluates or maintains a consumer credit report on NYS consumers must register with the State by February 1, 2018 and have in place a written cybersecurity program by April 4, 2018. The program must identify and assess internal and external cybersecurity risks that may threaten non-public information, including personally identifying consumer information. The program must include provisions that address data governance and classification, asset inventory and device management, access control and identity management, systems and network security and monitoring, as well as other mandated areas.

Because the elements required to be addressed in the program are comprehensive, credit reporting agencies should begin the process of developing the program now to meet the April 4, 2018 deadline. Once the program is in place, moreover, the regulations also mandate phased-in implementation dates for additional minimum protective standards that must be met.  These include requirements for annual penetration testing, bi-annual vulnerability assessments, limitations on data retention, encryption of non-public information, and system-generated audit trails to detect and respond to cybersecurity events.

Each agency must conduct a risk assessment of its information systems to include criteria for the evaluation and categorization of identified internal and external threats facing the organization. The risk assessment must describe how identified risks will be mitigated or accepted and how the program will address those risks.  Significantly, the risk assessment must not only address external hacking threats, but also require the identification and mitigation of risks posed by employees and other insiders, such as trusted vendors and independent contractors.  For example, employees who remotely access internal networks must be subject to multi-factor authentication or other “reasonably equivalent or more secure access controls.”

Each organization must also designate a qualified person as a Chief Information Security Officer responsible for implementation and enforcement of the program. The CISO will ultimately be responsible for responding to requests for “examination by the Superintendent of Financial Services as often as the Superintendent may deem it necessary.”  There are also breach notification requirements, as well as a mandate that the Board of Directors or a Senior Officer annually certify compliance with the cybersecurity regulations.  Failure to comply may result in revocation of the agency’s authorization to do business with New York’s regulated financial institutions and consumers.

Stay tuned to see whether New York State’s call to action takes hold across the nation. In the meantime, you may find the governor’s press announcement by clicking here.

Employers across all industries are deep in the midst of exciting but uncharted and fluid times. Rapid and unforeseen technological advancements are largely responsible for this dynamic. And while there is a natural tendency to embrace their novelty and potential, the reality is that these advancements are often outpacing our regulatory environment, our bedrock legal constructs, and, in some cases, challenging the traditional notions of work itself.

For employers, this presents numerous challenges and opportunities—from the proper design of the portfolio of the modern workforce, to protecting confidential information in an increasingly vulnerable digital world, to managing resources across less and less predictable borders, and to harnessing (while tempering the power of) intelligence exhibited by machines.

The time is now (if not yesterday!) to develop a long-term strategy to help navigate these current issues and anticipate the challenges and opportunities of the future.

The articles in this Take 5 include:

  1. Embracing the Gig Economy: You’re Already a Player in It (Yes, You!)
  2. AI in the Workplace: The Time to Develop a Workplace Strategy Is Now
  3. Best Practices to Manage the Risk of Data Breach Caused by Your Employees and Other Insiders
  4. News Media Companies Entering the Non-Compete Game
  5. Employers Dodge Bullet in Recent U.S. Supreme Court Travel Ban Order

Read the full Take 5 online or download the PDF.

In a decision emphasizing the need for employers to focus on data security, on June 15, 2015, the U.S. District Court for the Central District of California refused to dismiss a lawsuit filed by nine former employees of Sony Pictures Entertainment who allege the company’s negligence caused a massive data breach.  Corona v. Sony Pictures Entm’t, Inc., Case No. 2:14-cv-09600 (C.D. Cal. June 15, 2015).

In November 2014, Sony was the victim of a cyber-attack, which has widely been reported as perpetrated by North Korean hackers in retaliation for “The Interview,” a Sony comedy parodying Kim Jong Un.  According to the complaint in this case, the hackers stole nearly 100 terabytes of data, including sensitive personal information, such as financial, medical, and other personally identifiable information (“PII”), of at least 15,000 current and former Sony employees.  The hackers then posted this information on the internet and used it to threaten individual victims and their families.  The nine named plaintiffs purchased identity protection services and insurance, as well as took other measures, to protect their compromised PII.

The plaintiffs filed a class action lawsuit alleging Sony failed to implement and maintain adequate security measures to protect its employees’ PII, and then improperly waited at least three weeks to notify plaintiffs that their PII had been compromised.  The plaintiffs asserted claims of negligence, breach of implied contract, and statutory violations of California, Virginia, and Colorado law.

Sony moved to dismiss the complaint.  First, Sony argued that plaintiffs lacked standing because they had not alleged a current injury or a threatened injury that is certainly impending.  The court disagreed, concluding that the allegations of increased risk of future identity theft sufficiently established certainly impending injury.

Sony then challenged the viability of each claim.  While the court dismissed certain of the claims, the court allowed the plaintiffs to proceed with their claims of negligence and violations of California’s Confidentiality of Medical Information Act and Unfair Competition Law.  Key to the court’s decision on the negligence claim were its findings that (a) the costs plaintiffs incurred related to credit monitoring, identity theft protection, and penalties resulting from frozen credit constituted a cognizable injury, and (b) an exception to the economic loss doctrine applied because the parties had a “special relationship” whereby plaintiffs had to provide their PII to Sony in order to get paid and receive benefits.

Regarding the Confidentiality of Medical Information Act claim, the court found sufficient the allegations that Sony failed to maintain the confidentiality of the plaintiffs’ medical information, which Sony has admitted included HIPAA-protected health information, and failed to institute reasonable safeguards to protect that information from unauthorized use.

While it remains to be seen whether the plaintiffs will prevail on any of their theories of recovery against Sony, this matter should be a lesson to companies that the consequences of inadequate data security measures extend well beyond the loss of proprietary information.  Employers have a duty to protect the sensitive personal information that they obtain from their employees, and the failure to take preventative measures may result in legal claims, reduced employee morale, and loss of reputation.

Employers should begin by auditing their information technology infrastructure and network for security vulnerabilities.  Any such audit should be done under the supervision of counsel to maintain the privilege and confidentiality of the audit.  Based on that audit, employers should take steps to mitigate the vulnerabilities found to a reasonable and appropriate level given the threats to the organization.  The Sony breach, like nearly all recent breaches, had an element of social engineering.  To protect against these types of attacks, employers should also train their workforces on information security best practices.  Finally, employers should be prepared to respond to breaches when they occur.  Employers should formulate and implement a breach response plan to minimize the time from the discovery of the compromise to the reporting of the incident to affected persons.

If a data breach does occur, the company should immediately execute the data breach response plan and quickly investigate the nature and scope of the data breach.  A forensic review should be conducted using an IT specialist that can trace the origins of the breach.  Employees and anyone affected should be notified so that they may take appropriate steps to prevent or limit identity theft and other damages.  Employers also should consider proactively notifying the police to work with the local cyber-crimes unit, as well as filing a civil suit against the perpetrator(s) to obtain injunctive relief and reduce further damage.  Appropriate legal counsel can assist in pursuing these options.