What Google Activity Recognition means for apps

There is no shortage of sensors on smartphones – most devices have a light meter, a magnetometer, a three-axis accelerometer and sensors for location (GPS, cellular/wifi radio). While it’s relatively easy for developers to access, for example, raw accelerometer data, it can be difficult to transform that data into useful information e.g. “the user is walking”. Most apps don’t bother. This is about to change.
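To make the contrast concrete, here is roughly what listening to the raw accelerometer looks like with Android’s standard SensorManager API (the listener class below is just an illustration): you get a stream of x/y/z readings, and turning those into “walking” is entirely up to you.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Raw data is easy to get hold of; interpreting it is the hard part.
public class RawAccelerometerListener implements SensorEventListener {

    public void start(SensorManager sensorManager) {
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0];
        float y = event.values[1];
        float z = event.values[2];
        // From here you would need your own pattern analysis to decide
        // whether these readings mean "walking", "cycling", and so on.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```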

Google I/O is a glorious field day for geeks. Last year saw the announcement of Google Glass, and this year ActivityRecognition has emerged as the standout from a sea of other headlines. ActivityRecognition is a new set of APIs, made immediately available to developers, that lets them listen to what the device’s user is doing without having to run complex pattern analysis on raw sensor data. In fact, developers don’t need to deal with sensors at all: they just call the API, Google does some thinking and returns a notification that says something like “I think the device’s user is on foot, and I’m 80% confident about that”, except in code form. So far the activities recognised are “tilting”, “on foot”, “on bicycle” and “in vehicle”, but we can expect more to be added.
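In code form, that notification arrives as a DetectedActivity with a confidence score. Here is a minimal sketch of the receiving side, assuming updates are delivered to a PendingIntent-backed IntentService (the class name is just an illustration):

```java
import android.app.IntentService;
import android.content.Intent;
import android.util.Log;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

// Receives the activity updates requested from the ActivityRecognition API.
public class ActivityRecognitionService extends IntentService {

    public ActivityRecognitionService() {
        super("ActivityRecognitionService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (ActivityRecognitionResult.hasResult(intent)) {
            ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
            DetectedActivity activity = result.getMostProbableActivity();
            int confidence = activity.getConfidence(); // 0–100, e.g. "80% confident"

            if (activity.getType() == DetectedActivity.ON_FOOT && confidence >= 80) {
                Log.d("ActivityRecognition", "User is probably on foot (" + confidence + "%)");
            }
        }
    }
}
```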

This is not just for joggers.

The ActivityRecognition APIs lower the barrier for anyone who wants to build a quantified-self app, but the implications go far beyond life tracking. App behaviour can now vary depending on user activity. Suppose you have an app that recommends nearby restaurants by querying a database for top-rated places whenever the app is opened. If the app knows the user is in a vehicle, it could make the queried area larger than if they’re on foot. If your app has widgets, you may want to change the update frequency based on user activity; if you run a navigation app, you might want to alter the screen that is launched depending on the user’s activity.
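As a purely illustrative sketch of the restaurant example (the radii are made-up numbers, not anything the API prescribes), the search area could be picked from the detected activity like this:

```java
import com.google.android.gms.location.DetectedActivity;

// Hypothetical helper: widen the restaurant query when the user is moving faster.
static int searchRadiusMetres(DetectedActivity activity) {
    switch (activity.getType()) {
        case DetectedActivity.IN_VEHICLE:
            return 5000; // a short drive away still counts as "nearby"
        case DetectedActivity.ON_BICYCLE:
            return 2000;
        case DetectedActivity.ON_FOOT:
        default:
            return 500;  // keep results within walking distance
    }
}
```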

For anyone operating a sensor network – like WeatherSignal or OpenSignal – this is a huge windfall. We’ll be able to compare signal performance in vehicles to when the phone is still (most likely at home or work), and we’ll be able to filter our weather readings more precisely.

How can Google roll this out so quickly?

These new APIs are distributed through Google Play Services, an intriguing piece of software downloaded and (automatically) updated from Google Play. While it’s similar to an app, it’s not an app – it’s the thing that drives the latest versions of the YouTube app and the 3D maps in OpenSignal and WeatherSignal. Because of the way it’s distributed, and because it’s independent of the Android version, updates can happen quickly – unlike system updates. Also, many Google Play Services features, ActivityRecognition included, seem to work by sending data to Google’s servers, which means improvements to the ActivityRecognition algorithms could happen in near real-time without being too heavy on your phone’s CPU. Update: after some testing, it seems that ActivityRecognition does work without a network connection.
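For the curious, hooking into the API goes through the Play Services location library’s ActivityRecognitionClient (plus the com.google.android.gms.permission.ACTIVITY_RECOGNITION permission in your manifest). The sketch below is one way to wire it up; the class name and the 30-second interval are just illustrative:

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.GooglePlayServicesClient;
import com.google.android.gms.location.ActivityRecognitionClient;

// Connects to Google Play Services and asks for activity updates,
// delivered to the IntentService shown earlier.
public class ActivityUpdater implements
        GooglePlayServicesClient.ConnectionCallbacks,
        GooglePlayServicesClient.OnConnectionFailedListener {

    private final Context context;
    private final ActivityRecognitionClient client;

    public ActivityUpdater(Context context) {
        this.context = context;
        this.client = new ActivityRecognitionClient(context, this, this);
    }

    public void start() {
        client.connect(); // updates can only be requested once connected
    }

    @Override
    public void onConnected(Bundle connectionHint) {
        Intent intent = new Intent(context, ActivityRecognitionService.class);
        PendingIntent callback = PendingIntent.getService(
                context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        client.requestActivityUpdates(30 * 1000, callback); // detection interval in ms
    }

    @Override
    public void onDisconnected() { }

    @Override
    public void onConnectionFailed(ConnectionResult connectionResult) { }
}
```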

Just to prove how quickly developers can move with this: we’ll be adding ActivityRecognition to the next version of WeatherSignal (which will be released tomorrow) and we’ll endeavour to get it logging data that you can output to CSV.

What doesn’t it do?

ActivityRecognition is not instantaneous: sensor data needs to be collected and analysed over a short window – typically I’m seeing a delay of around 3 seconds on my Galaxy Nexus before events are recognised. The number of activities recognised is also rather small. I’m hoping another layer will be added to recognise whether the user is indoors or outdoors, and I’d also like “on foot” to be broken down into walking/jogging/running/hopping/hobbling/skipping … but it’s a start.
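For reference, the set of types the API currently reports is small enough to enumerate in a single helper (a sketch using the DetectedActivity constants the Play Services library defines):

```java
import com.google.android.gms.location.DetectedActivity;

// Maps the handful of recognised activity types to readable names.
static String activityName(int activityType) {
    switch (activityType) {
        case DetectedActivity.IN_VEHICLE: return "in vehicle";
        case DetectedActivity.ON_BICYCLE: return "on bicycle";
        case DetectedActivity.ON_FOOT:    return "on foot";
        case DetectedActivity.STILL:      return "still";
        case DetectedActivity.TILTING:    return "tilting";
        default:                          return "unknown";
    }
}
```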

How well does it work?

I’ve found it occasionally thinks I am in a vehicle when I’m walking (and I don’t walk that fast!), and the 3-second delay is a little annoying – I’m hoping that will be improved. Other than that, I’m impressed. Get WeatherSignal v1.8 (to be released 17th May) to test it out for yourself.

ActivityRecognition in its current incarnation is interesting, but it’s indicative of something much bigger: Google is looking to help developers by using its processing power and data to provide them with better contextual information. Don’t be surprised if your phone soon recognises not just that you’re on foot, but that you’re out shopping or in the park.
