Feb 27, 2020 · post

Privacy, data governance, and machine learning: the regulatory perspective

Why do privacy and governance matter?

Data privacy has been a common topic of public conversation since the Cambridge Analytica scandal in 2018. That data “breach,” in which user information was harvested through a Facebook quiz under the pretense of academic research and then put to other uses, ultimately cost Facebook more than $5 billion in fines. Yet Facebook’s infringements were relatively narrow in scope (though nonetheless egregious) compared to the growing remit of privacy law. Enterprises with wide-spanning data operations should be careful when establishing data-gathering practices, particularly those covered by the California Consumer Privacy Act (CCPA) and the EU General Data Protection Regulation (GDPR).

Companies have already begun complying with these regulations, primarily in the form of updated privacy notices. However, precedent has yet to be established for how data governance (oversight of data flows both within and outside the company) is treated in this process. This article deals with that intersection, and how it pertains to machine learning operations.

Image credit: Photo by Markus Spiske on Unsplash

What laws are actually on the books? What are their implications for my business?

The new laws that most directly pertain to company data usage are the GDPR and the CCPA. Let’s discuss each at a high level.

The GDPR requires any company that “processes” (a purposefully broad term that covers just about anything you can do with data: collection, storage, transmission, analysis, etc.) the personal data of EU citizens to comply with a 19-point checklist. One point on that checklist states: “it’s easy for customers to request and receive all the information you have about them.”

The CCPA applies to any for-profit entity that collects consumers’ personal data, does business in California, and satisfies at least one of the following thresholds: a) has annual gross revenues in excess of $25 million; b) buys or sells the personal information of 50,000 or more consumers or households; or c) earns more than half of its annual revenue from selling consumers’ personal information. This law stipulates, among other requirements, that consumers have the “right to know what personal information a business holds about a consumer and whether the business sells or discloses personal information to third parties.”
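As a rough illustration of how those thresholds could be encoded (a hypothetical sketch in plain Python, not legal advice), the applicability test reduces to a handful of comparisons:

```python
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    """Hypothetical summary of the figures the CCPA thresholds reference."""
    does_business_in_california: bool
    collects_personal_data: bool
    annual_gross_revenue_usd: float
    consumers_or_households_bought_or_sold: int
    share_of_revenue_from_selling_personal_info: float  # between 0.0 and 1.0

def ccpa_likely_applies(p: BusinessProfile) -> bool:
    """Rough reading of the statutory thresholds described above -- not legal advice."""
    if not (p.does_business_in_california and p.collects_personal_data):
        return False
    return (
        p.annual_gross_revenue_usd > 25_000_000
        or p.consumers_or_households_bought_or_sold >= 50_000
        or p.share_of_revenue_from_selling_personal_info > 0.5
    )
```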

One way to facilitate compliance would be to build a comprehensive data model, which lists viewing, editing, and administrative rights over data repositories. This data model would necessarily extend to all data-collecting and data-maintaining operations that an enterprise is pursuing.
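What such a data model might record is sketched below (hypothetical schema and field names; any real catalog tool will differ): each repository entry carries its access rights alongside its location and processing purpose.

```python
from dataclasses import dataclass, field

@dataclass
class DataRepositoryEntry:
    """One entry in a hypothetical enterprise data model / catalog."""
    name: str                       # e.g. "crm_customers"
    location: str                   # system identifier or connection string
    contains_personal_data: bool
    purpose: str                    # why the data is collected and processed
    viewers: set = field(default_factory=set)   # roles with read access
    editors: set = field(default_factory=set)   # roles with write access
    admins: set = field(default_factory=set)    # roles with grant/delete rights

# The data model is then a collection of such entries that can be queried when
# a regulator or consumer asks: who can see my data, and where does it live?
catalog = [
    DataRepositoryEntry(
        name="crm_customers",
        location="postgres://crm/customers",
        contains_personal_data=True,
        purpose="billing and customer support",
        viewers={"support", "analytics"},
        editors={"support"},
        admins={"data_governance"},
    ),
]
```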

Does my machine learning algorithm fall under this law?

In a word, yes. Each regulation approaches the topic differently:

The GDPR is much more exacting when it comes to setting standards for machine learning. Not only does it require companies to divulge every instance in which a customer’s data is sold or processed, but it also establishes the consumer’s “right to an explanation.” What counts as an explanation has yet to be settled and is being hotly debated by legal scholars. At the very least, the regulation grants individuals “information about the existence of automated decision-making and about ‘system functionality,’ but no explanation about the rationale of a decision.” In other words, consumers will be informed that, say, their credit decision was the result of an algorithm, but not which individual variables contributed most to that decision.
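Under that weaker reading, one plausible (purely illustrative) way to operationalize the requirement is to record, for every automated decision, the fact that it was automated plus a plain-language description of the system, while stopping short of a per-decision rationale:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionDisclosure:
    """Hypothetical record of what a consumer could be shown: the existence of
    automated decision-making and a description of 'system functionality',
    but not the rationale behind the individual decision."""
    subject_id: str
    decision: str
    decided_automatically: bool
    system_description: str
    timestamp: str

disclosure = AutomatedDecisionDisclosure(
    subject_id="customer-123",
    decision="credit application declined",
    decided_automatically=True,
    system_description=(
        "A statistical credit-scoring model trained on repayment history and "
        "account data; individual feature contributions are not disclosed."
    ),
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```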

The CCPA focuses mostly on consumer sovereignty over personal data, covering its sale, deletion, and correction. One potential machine learning-related highlight of the legislation is its ban on discriminating against consumers, including through pricing, for exercising their rights under the law: “A business shall not discriminate against a consumer because the consumer exercised any of the consumer’s rights under this title, including by charging different prices or rates for goods or services, including through the use of discounts or other benefits or imposing penalties.” This kind of outcome could be fostered unintentionally by a pricing model or inventory optimization algorithm. Lawyers are still debating the finer implications of the legislation, which was written with the expectation that it would be amended over time, but companies are pursuing compliance regardless.
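One lightweight guard against that unintended outcome, sketched below, is a periodic audit comparing the prices a model quotes to consumers who have and have not exercised their rights (hypothetical record format; a real audit would need proper statistical treatment):

```python
from statistics import mean

def price_parity_gap(offers):
    """Difference in mean quoted price between consumers who exercised their
    CCPA rights (e.g. opted out of data sale) and those who did not.

    `offers` is a hypothetical list of {"price": float, "exercised_rights": bool}
    records exported from the pricing system. A large gap is a flag for human
    review, not proof of unlawful discrimination.
    """
    exercised = [o["price"] for o in offers if o["exercised_rights"]]
    others = [o["price"] for o in offers if not o["exercised_rights"]]
    if not exercised or not others:
        return 0.0
    return mean(exercised) - mean(others)

# Example: a positive gap means opted-out consumers were quoted higher prices.
gap = price_parity_gap([
    {"price": 49.0, "exercised_rights": True},
    {"price": 39.0, "exercised_rights": False},
    {"price": 41.0, "exercised_rights": False},
])
```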

Compliance, in terms of data governance, starts with a well-built data model. That model should ideally cover not just the “golden record” but also how the golden record is applied within business operations. For example, a telecommunications company may rely on an algorithm to predict customer churn. Data models normally extend only to the golden record, but for governance purposes this downstream usage must be captured as well. More importantly, if a customer requests that their data be deleted, governance architects need to know exactly where that data lives and must be deleted, including within internal business-side operations such as machine learning models; companies can be held liable if those traces persist.
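A hedged sketch of that bookkeeping is below: a hypothetical lineage map records which downstream assets (training sets, trained models, derived segments) copy or derive from each source table, so a deletion request can be traced to every place it must reach.

```python
# Hypothetical lineage map: dataset -> downstream assets that copy or derive
# from it. In practice this would come from a lineage or catalog tool.
LINEAGE = {
    "crm_customers": ["churn_model_training_set", "marketing_segments"],
    "churn_model_training_set": ["churn_model_v3"],  # trained artifact
}

def deletion_targets(source, lineage=LINEAGE):
    """Walk the lineage graph to list every asset that must be purged,
    retrained, or reviewed when a consumer's record is deleted."""
    seen, stack, targets = set(), [source], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        targets.append(node)
        stack.extend(lineage.get(node, []))
    return targets

# deletion_targets("crm_customers")
# -> ['crm_customers', 'marketing_segments', 'churn_model_training_set', 'churn_model_v3']
```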

Data governance and privacy: past, present, and future

Privacy law and ethics in the United States stretch back to the 1600s, when Governor William Bradford opened mail flowing between the American colonies and England to monitor insurrectionary forces in the Massachusetts Bay Colony. The Townshend Acts of 1768 allowed British tax agents to search colonists’ homes, and served as the motivation behind the Fourth Amendment in the Bill of Rights, which laid out the definition of a lawful search. Since then, innovations in postal services, telecommunications, and law enforcement have provoked conversations over what level of government snooping ought to be considered overreach. These days, however, it is private institutions, rather than governments, that are coming under the public and regulatory microscope.

Regulators are beginning to take action to ensure a fair market and curb negative externalities. Companies like Facebook are now being watched to prevent undue snooping. While consumers will emerge with more power, the laws in place have been structured to adapt quickly to technological change: innovation in location, natural language, or health data analysis will necessarily compel further regulation. District and state attorneys across the country are building up technical resources in order to establish case precedent. To ensure compliance, companies are best served by improving their governance practices. Software solutions such as SAP and Informatica will only go part of the way; human capital and change-management best practices must keep up with these changes as well.

The laws above have pressing implications for data governance practices, as well as for activities peripheral to the governance function (e.g., AI and machine learning, automation, business intelligence). Before these laws, data governance was tied primarily to core IT operations. With machine learning now integrated into corporate strategy, however, governance should increasingly be thought of as a strategic, legal, and technical function. Moreover, data governance must be conducted on a continual, iterative basis as data is generated in new and evolving ways. Every trace of a data point needs to be retrievable on demand when a consumer requests it, so a hashing or indexing methodology will need to be built to support that capability.
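One hypothetical shape for that methodology: key an index by a salted hash of a stable consumer identifier, and register every location that holds the consumer’s data against that key, so access and deletion requests resolve without scanning every system. A minimal sketch, with placeholder names throughout:

```python
import hashlib

SALT = b"example-salt"  # placeholder; a real deployment would manage this in a secrets store

def consumer_key(consumer_id: str) -> str:
    """Salted hash of a stable consumer identifier."""
    return hashlib.sha256(SALT + consumer_id.encode("utf-8")).hexdigest()

# Index: hashed consumer key -> every location holding that consumer's data.
data_index = {}

def register(consumer_id: str, location: str) -> None:
    data_index.setdefault(consumer_key(consumer_id), set()).add(location)

def locations_for(consumer_id: str) -> set:
    """Everything to produce for an access request, or purge for a deletion request."""
    return data_index.get(consumer_key(consumer_id), set())

register("customer-123", "postgres://crm/customers")
register("customer-123", "s3://ml/churn_training/2020-02.parquet")
# locations_for("customer-123") -> the two locations above
```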

Will the government actually enforce these rules?

Yes and yes.
