Measuring the Consumer Mobile Experience: Let’s Get the Facts Straight

When we founded OpenSignal 7 years ago, we had a very specific goal in mind. We sought an alternative to the old way of measuring mobile network performance. Instead of packing a car full of test gear and driving around searching for signals, we decided to focus on the consumer. We felt that what mattered most to consumers was the actual speed and availability they experienced on their phones, not some abstract representation of network performance based on test equipment. Using cutting-edge crowdsourcing techniques and rigorous scientific analysis, we developed and refined a testing methodology that we are extremely proud of. What’s more, operators, equipment makers and regulators around the world have validated that methodology by relying on OpenSignal data to make key business and policy decisions.

As OpenSignal has grown in influence, some have taken to making false claims about our methodology. Sometimes this is motivated by a desire to maintain the status quo of legacy drive-testing methods; sometimes it’s because people don’t like the facts that OpenSignal’s independent data reveals. OpenSignal represents the cutting edge of mobile industry analysis, and whilst we would much rather have a conversation about the fascinating trends our data reveals — from the impact of unlimited plans in the U.S. to the shakeup in the Indian mobile market — people need to know the truth. It’s time to set the record straight.

Here are the facts:  

Myth: Crowdsourced network testing is biased towards urban areas and doesn’t sufficiently represent network performance in rural areas.

Fact: OpenSignal’s apps have been downloaded more than 20 million times to consumer devices across 6 continents. Those devices collect 2 billion individual measurements each day. This crowdsourced data is representative of all the places that people go and use their devices – at work, at home, indoors and out. It includes both urban and rural areas, and all the locations in between. With such a huge volume of data collected from real users on real devices, we get clear insight into how consumers experience operators’ mobile services across a broad range of geographic areas, times and situations. In contrast, drive testing is limited to roads and uses a single set of test devices, which paints a hypothetical picture of a network’s performance.

Myth: Crowdsourcing methodologies are uncontrolled and not scientific.  

Fact: The results and insights generated from crowdsourced data are statistically robust. They accurately represent real user experience. Rather than simulating the consumer experience with dedicated test devices, crowdsourced data measures real performance as the consumer experiences it on their smart devices. Furthermore, OpenSignal applies rigorous analysis to those measurements, using advanced data science techniques to ensure the accuracy of every metric we publish.
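
As an illustration of the kind of statistical treatment crowdsourced measurements lend themselves to, here is a minimal sketch that aggregates download-speed samples and attaches a bootstrap confidence interval to the mean. The sample values, resample count and function names are invented for illustration; this is not OpenSignal’s actual pipeline.

```python
import random
import statistics

def bootstrap_ci(samples, n_resamples=2000, alpha=0.05, seed=42):
    """Estimate a confidence interval for the mean via bootstrap resampling."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(samples, k=len(samples)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return statistics.fmean(samples), (lo, hi)

# Hypothetical crowdsourced download-speed samples, in Mbps
speeds = [12.1, 8.4, 15.3, 9.9, 11.0, 14.2, 7.8, 13.5, 10.6, 12.9]
mean, (lo, hi) = bootstrap_ci(speeds)
print(f"mean={mean:.1f} Mbps, 95% CI=({lo:.1f}, {hi:.1f})")
```

The point of the sketch is that a published metric is a statistic over many raw measurements, with quantifiable uncertainty, rather than a single spot reading.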

Myth: Crowdsourced apps do not record the result when there is no service/signal.

Fact: Whenever OpenSignal’s apps can’t find a signal, they immediately record a location using GPS and the phone’s other location capabilities. These locations are made available in our mapping tools, and that data is a crucial component in calculating our metrics. By properly identifying these no-signal areas, dead zones and not-spots, operators can improve the service they offer to their customers. In reality, OpenSignal is able to collect no-signal data in places other methodologies can’t reach. Many of these dead zones occur inside buildings, where OpenSignal collects billions of measurements. Meanwhile, drive testing struggles to collect meaningful data indoors.
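
The logging behaviour described above can be sketched as a tiny state machine that records a location fix on the transition from signal to no signal. Everything here — the class name, the callback signature, the coordinates — is hypothetical; a real app would hook into the platform’s telephony and location APIs rather than receive these values directly.

```python
class NoSignalLogger:
    """Illustrative sketch: record where the device loses signal."""

    def __init__(self):
        self.dead_zones = []      # (lat, lon) fixes where signal was lost
        self._had_signal = True   # assume we start with coverage

    def on_reading(self, has_signal, lat, lon):
        # Log a dead-zone fix only on the signal -> no-signal transition,
        # so a long outage produces one entry, not one per reading.
        if self._had_signal and not has_signal:
            self.dead_zones.append((lat, lon))
        self._had_signal = has_signal

logger = NoSignalLogger()
logger.on_reading(True, 51.501, -0.142)   # good signal: nothing logged
logger.on_reading(False, 51.502, -0.140)  # signal lost: location recorded
logger.on_reading(False, 51.503, -0.139)  # still no signal: no duplicate
print(logger.dead_zones)
```

Recording the transition rather than every no-signal reading is what turns raw samples into a map of dead zones.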

Myth: Crowdsourced measurements are biased by user behavior.  

Fact: While some crowdsourced testing tools rely solely on user-initiated tests, OpenSignal does not. The vast majority of our measurements are recorded automatically in the background, without any user interaction with our apps. This allows analysis to be based on a widely representative and unbiased data set.

Myth: Engineers would never use crowdsourced data to design and engineer their networks.

Fact: Many of the world’s leading network operators have switched from using legacy simulated network tests to using OpenSignal data and analysis for the design, management and optimization of their networks.

Myth: Crowdsourced apps test speed and performance to fake servers that are optimally located in a particular operator’s network.

Fact: OpenSignal conducts end-to-end tests to the same server locations that users connect to every day when using their mobile data and internet services (e.g. Google, Amazon). This means that our measurements for speed and latency closely reflect a user’s typical everyday experience.
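
As a rough sketch of what an end-to-end latency probe involves, the snippet below times TCP connection setup. To stay self-contained it probes a throwaway local listener; a real measurement would target the content servers users actually reach, and would measure far more than connect time. All names and parameters here are illustrative, not OpenSignal’s implementation.

```python
import socket
import statistics
import threading
import time

def measure_connect_latency(host, port, n_probes=5):
    """Time TCP connection setup to an endpoint, a crude proxy for
    round-trip latency. Returns per-probe timings in milliseconds."""
    timings = []
    for _ in range(n_probes):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000.0)
    return timings

# Self-contained demo: stand up a local listener to probe against.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(16)
host, port = server.getsockname()

def accept_loop():
    while True:
        try:
            conn, _ = server.accept()
            conn.close()
        except OSError:
            break  # server socket closed

threading.Thread(target=accept_loop, daemon=True).start()

samples = measure_connect_latency(host, port)
print(f"median connect latency: {statistics.median(samples):.2f} ms")
server.close()
```

Testing against endpoints users genuinely hit, rather than a server placed inside one operator’s network, is what keeps such measurements representative.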

While that covers most of the common misconceptions about OpenSignal and crowdsourcing, there are bound to be many more. If you come across any other attacks or criticisms, we encourage you to take them up with us on Twitter or in the comments section below. I’ll be happy to address them. We welcome any discussion on how best to measure the complex world of mobile networking. If you agree, I encourage you to share this post on your own social media channels to help kick-start that debate.

— Brendan Gill is CEO and co-founder of OpenSignal
