Teaching machines to recognize faces: a tale of two outcomes

Apr 29 2019

Machine learning used for facial recognition can have both potentially life-saving and privacy-killing consequences. The rise of cheap sensors, more robust data storage, and advances in machine learning have propelled many industries forward. In the automotive industry, for instance, many sensors focus on detecting vehicle malfunctions and maintenance needs, and some even serve as early components of self-driving systems.

Last week Affectiva announced that it had closed a significant round of funding to further develop its emotion and object detection software (powered by machine learning algorithms). The goal is to recognize the emotions of a vehicle's occupants and flag potential dangers (such as drowsiness). There are many likely extensions to this work, including helping computers understand interactions between passengers.

But truthfully, there is no reason the technology being developed for this product (which is aimed at decreasing vehicular accidents) couldn’t be used in other settings. Perhaps someday this work could be extended to better understand human emotion and interactions in public spaces, or human reactions to marketing billboards.


Automatically categorizing faces is a capability that has existed for some time. As facial image capture and categorization become more ubiquitous, we see many products that use this capability to make people’s lives easier (e.g., automatically tagging individuals in photos), and I’ve just highlighted a way in which it is likely to make our lives safer. As with many stories of technological advancement, however, this one isn’t all rosy. In the same week that we saw increased investment in Affectiva’s research, we also saw how some Chinese start-ups are using facial recognition algorithms to racially profile minority groups. Though these two applications are surely not relying on the same underlying models, architecture, or data, the sharing of information and the acceleration of these techniques within research communities enable both.

Even though Affectiva specifically states its intention to develop more ethical software, it’s no longer in the realm of science fiction to believe that other, less ethically minded enterprises could easily install surveillance in our private property (such as a car) that categorizes which groups the people inside likely belong to and the emotions involved in their interactions. With this and other advancements in machine learning at the edge, it’s important for us to continue to think critically about how to ethically commercialize this research.
