As mobile devices become central to our personal and professional lives, their security grows ever more important. Passcodes in particular can be lost (or forcibly surrendered) to law enforcement. Recent research has focused on behavioral authentication based on patterns of user interaction, which could provide an unobtrusive authentication method that operates during normal use.
Figure from Frank et al., "Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication"
Research in this field addresses two problems. First, is it possible to grant access based on the way a user interacts with a phone? This is gating authentication. Second, once access is granted, can a system continuously monitor use in the background and request reauthentication through the gating system when it detects suspicious activity? This is continuous authentication.
To use touch dynamics for authentication, you first have to establish a benchmark of normal user behavior. Current research does this by having subjects type a fixed text or perform gestures on a smartphone, repeated several times to capture variation in behavior. Some researchers run controlled experiments, while others try to mirror real-life usage scenarios. For example, Tao Feng and collaborators recruited 40 subjects to perform common gestures such as zooming and spreading.
The raw data obtained from the touch display can then be used directly or massaged to obtain timing, spatial and motion features. The extracted features are used to generate a unique user representation. Machine learning classifiers are then used to authenticate a user.
When a touch event occurs on a screen, the operating system records sensor information, which can be accessed through the phone's API. The API also reports timestamps, which can be manipulated to provide timing features such as dwell time (how long the finger stays on a virtual key) and flight time (the time between presses), as well as spatial features such as touch size, pressure, and position. Touch size and pressure are normalized values and are usually used without further processing. Position, on the other hand, can be used raw or manipulated to provide information on speed, angle, and distance.
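To make the timing features concrete, here is a minimal sketch of how dwell and flight times could be derived from touch timestamps. The `(press_time, release_time)` event format is an assumption, a simplified stand-in for the events a mobile OS would actually report:

```python
# Sketch: deriving dwell and flight times from touch timestamps.
# Each event is assumed to be a (press_time, release_time) pair in
# milliseconds, in the order the virtual keys were pressed.

def dwell_times(events):
    """Time the finger stays on each virtual key."""
    return [release - press for press, release in events]

def flight_times(events):
    """Time between releasing one key and pressing the next."""
    return [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]

events = [(0, 120), (250, 340), (500, 610)]
print(dwell_times(events))   # [120, 90, 110]
print(flight_times(events))  # [130, 160]
```

For a three-key sequence this yields three dwell times and two flight times; a fixed-text typing task produces a vector of these per repetition.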
The phone’s accelerometer and gyroscope provide yet more user-specific information. The accelerometer measures movement in three dimensions, while the gyroscope measures the rotation.
For gating authentication, many researchers use timing as the only feature, but some combine timing with spatial and motion information. Mario Frank and collaborators propose 30 features based on strokes for continuous authentication. A stroke is a trajectory encoded as a sequence of vectors with location, timestamp, pressure, area occluded by the finger, orientation of the finger, and orientation of the phone. Tao Feng and collaborators complement strokes with zooming motions and finger motion sensor data from a digital glove.
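A few stroke-level features in the spirit of Frank et al.'s feature set can be sketched as below. A stroke here is simplified to a list of `(x, y, t)` samples, and the particular features computed are illustrative, not a reconstruction of their exact 30:

```python
import math

# Sketch: illustrative stroke features. A stroke is a list of
# (x, y, t) samples; pressure, finger area, and orientation (which
# Frank et al. also record) are omitted for brevity.

def stroke_features(stroke):
    (x0, y0, t0), (xn, yn, tn) = stroke[0], stroke[-1]
    # Path length: sum of distances between consecutive samples.
    length = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1, _), (x2, y2, _) in zip(stroke, stroke[1:])
    )
    duration = tn - t0
    return {
        "direct_distance": math.hypot(xn - x0, yn - y0),
        "path_length": length,
        "duration": duration,
        "mean_speed": length / duration if duration else 0.0,
        "direction": math.atan2(yn - y0, xn - x0),  # end-to-end angle
    }

swipe = [(10, 300, 0.00), (60, 300, 0.05), (110, 300, 0.10)]
print(stroke_features(swipe))
```

Each stroke a user makes becomes one feature vector, and a session of normal use yields many such vectors for training.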
The collected features can then be used to train a machine learning system and classify future users. Gating and continuous authentication research use algorithms like clustering, decision trees, Support Vector Machines (SVMs), and neural networks. For example, Mario Frank and collaborators used an SVM and k-nearest neighbors (kNN) as classifiers. During training, the SVM constructs a hyperplane that separates the user from everyone else. The hyperparameters of its radial basis function (a real-valued function that measures distance) are tuned using standard cross-validation techniques. The kNN classifier takes each new observation, finds the k nearest training examples, and assigns the new observation the label held by the majority of those k neighbors. The SVM takes time to train but only needs to store the decision hyperplane; kNN requires no training but must store all training observations and labels. Both storage and CPU are at a premium on a mobile device, but experimental results show that the SVM generally outperforms kNN for this use case.
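The user-vs-everyone-else setup can be sketched with scikit-learn. The data below is synthetic (two Gaussian blobs standing in for extracted touch features), and the parameter grid is illustrative, not the one any particular paper used:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Sketch: one-user-vs-impostors classification on synthetic "touch
# feature" vectors. Real systems would use extracted stroke features.
rng = np.random.default_rng(0)
owner = rng.normal(loc=0.0, scale=1.0, size=(200, 10))   # target user
others = rng.normal(loc=1.5, scale=1.0, size=(200, 10))  # impostors
X = np.vstack([owner, others])
y = np.array([1] * 200 + [0] * 200)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y
)

# RBF-kernel SVM with hyperparameters tuned by cross-validation.
svm = GridSearchCV(
    SVC(kernel="rbf"), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5
).fit(X_train, y_train)

# kNN: majority vote among the k closest training examples.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("kNN accuracy:", knn.score(X_test, y_test))
```

Note the storage asymmetry in code form: after fitting, the SVM keeps only its support vectors and hyperplane, while the kNN object holds the entire training set.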
False acceptance rate (FAR) and false rejection rate (FRR) are the usual performance metrics for probabilistic authentication systems. FAR is the fraction of intruders that are incorrectly authenticated. FRR is the fraction of authentic users that are incorrectly rejected. A system with high FAR is very insecure while one with high FRR is overly sensitive. In a continuous authentication system, high FRR means that valid users need to reauthenticate too often.
The point where FAR and FRR are equal is known as the Equal Error Rate (EER). Ideally both FAR and FRR should be low. When that's not possible, you can tune the classifier to prioritize one or the other, depending on the application.
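These three metrics can be computed directly from classifier scores. In this sketch the scores and labels are made up, and a higher score is assumed to mean "more likely the genuine user":

```python
import numpy as np

# Sketch: FAR, FRR, and the EER from classifier scores.
def far_frr(scores, labels, threshold):
    scores, labels = np.asarray(scores), np.asarray(labels)
    accepted = scores >= threshold
    far = np.mean(accepted[labels == 0])   # impostors let in
    frr = np.mean(~accepted[labels == 1])  # genuine users rejected
    return far, frr

scores = [0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9]  # illustrative
labels = [0,   0,   0,    0,   1,   1,   1,   1]    # 1 = genuine user

# Sweep thresholds; the EER is where FAR and FRR are (closest to) equal.
thresholds = np.linspace(0, 1, 101)
rates = [far_frr(scores, labels, t) for t in thresholds]
far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
print("EER approx:", (far + frr) / 2)
```

Tuning for the application is just a different choice of threshold along this same sweep: raising it lowers FAR at the cost of FRR, and vice versa.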
It’s currently possible to build a touch-based authentication system with an EER of less than 5% (see reviews by Teh et al. and Patel et al.). For gating authentication purposes this is too high, but it could be appropriate for continuous authentication.
We think the most useful next step would be the release of large, public datasets. Current datasets are small and mostly proprietary, which makes progress slow and difficult to measure. Large public datasets would likely require collaboration between academia and device manufacturers. And it’s time to start thinking about performance not just in terms of accuracy but also computational expense. If you think your phone’s battery drains quickly today, wait until you’ve got a neural network running in the background all the time! Finally, and perhaps most interestingly, the trade-off between usability, security, and privacy needs to be better understood from a product and user point of view.