Machine learning used for facial recognition can have both potentially life-saving and privacy-killing consequences. The rise of cheap sensors, more robust data storage, and advances in machine learning have propelled many industries forward - for instance, sensors in the automotive industry have focused on detecting vehicle malfunctions and maintenance needs, and are even enabling early components of self-driving cars.
Last week Affectiva announced that it had closed a significant round of funding to further develop its emotion and object detection software (powered by machine learning algorithms). The goal is to recognize the emotions of a vehicle's occupants and highlight potential dangers (such as drowsiness). There are many likely extensions to this work, including helping computers understand interactions between passengers.
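Affectiva has not published its pipeline, but a minimal sketch of the general detect-then-classify pattern such systems tend to follow might look like this. The Haar cascade face detector genuinely ships with OpenCV; the `classify_emotion` function and its label set are hypothetical stand-ins for a trained model:

```python
# A minimal sketch of the detect-then-classify pattern, NOT Affectiva's
# actual pipeline: find a face in each frame, then hand the crop to a model.
import cv2

# OpenCV ships with pretrained Haar cascades for frontal-face detection.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

EMOTIONS = ["neutral", "happy", "drowsy", "angry"]  # illustrative label set

def classify_emotion(face_crop):
    """Hypothetical stand-in for a trained emotion classifier (e.g., a
    small CNN). A real system would return per-label probabilities."""
    return "drowsy"  # placeholder

def monitor_frame(frame):
    # Detect faces in a grayscale copy of the frame, then classify each crop.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5):
        label = classify_emotion(gray[y:y + h, x:x + w])
        if label == "drowsy":
            print("warning: possible drowsiness detected")
```

A production system would run a far more robust detector and a trained classifier over video at frame rate, but the division of labor - localize the face, then interpret it - is the same.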
But truthfully, there is no reason the technology being developed for this product (which is aimed at decreasing vehicular accidents) couldn't be used in other settings. Perhaps someday this work could be extended to better understand human emotion and interaction in public spaces, or human reactions to marketing billboards.
Image taken from Affectiva.com
Automatically categorizing faces is a capability that has existed for some time. As facial image capture and categorization become more ubiquitous, we see many products that use this capability to make people's lives easier (e.g., automatically tagging individuals in photos), and I've just highlighted a way in which it is likely to make our lives safer. As many stories of technological advancement go, however, this story isn't all rosy. In the same week that we saw increased investment in Affectiva's research, we also saw how some Chinese startups are using facial recognition algorithms to racially profile minority groups. Though these two applications are surely not relying on the same underlying models, architecture, or data, there is a sharing of information and an acceleration of these techniques in research communities that enables both.
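To make the "categorizing faces" step concrete: one common recipe (not necessarily what any of these products use) embeds each face as a vector with a pretrained network, then matches new faces to known identities by distance in embedding space. The embedding source, the gallery, and the threshold below are all illustrative assumptions:

```python
# A sketch of embedding-based face matching, the recipe behind many
# "auto-tagging" features; no specific vendor's system is implied.
import numpy as np

# Assumed: some pretrained face model (e.g., a FaceNet-style network)
# maps each face image to a 128-d vector. We fake a tiny gallery here.
gallery = {
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}

def tag_face(embedding, threshold=0.6):
    """Return the closest known identity, or None if nothing is close
    enough. The threshold trades false matches against missed ones."""
    best_name, best_dist = None, float("inf")
    for name, known in gallery.items():
        dist = np.linalg.norm(embedding - known)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

Nothing in this recipe cares what the labels mean: swap the gallery of friends' faces for categories of people, and the same nearest-neighbor machinery serves profiling as readily as photo tagging.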
Even though Affectiva specifically states its intention to develop more ethical software, it's no longer in the realm of science fiction to believe that other, less ethically minded enterprises could very easily have surveillance installed in our private property (e.g., a car) that is able to categorize which groups the humans inside likely belong to and the emotions involved in their interactions. With this and other advances in machine learning at the edge, it's important for us to continue to think very critically about how to ethically commercialize this research.