A highlight was when I used triangulation to correct the data in an economist's presentation (never done that before!). It made me realize that beyond academia, entrepreneurs and startups should also be using triangulated research to validate their product plans and business models.
What is triangulation?
Triangulation is when you ask the same question in many different ways and compare the results. The questions will either agree or disagree. If they don't agree, something is happening that you don't understand. This lets you self-validate or corroborate your findings. Think of it like running an A/B test on survey question correctness, except that you want zero separation between the arms. It's similar to a fundamental part of the scientific method.
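As a sketch of that "A/B test where you want zero separation": a standard two-proportion z-test (a textbook formula, not something from this post) flags when two wordings of the same question disagree by more than sampling noise allows. The counts below are hypothetical, chosen only for illustration:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under "no difference"
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 239/1000 respondents said "cat" to wording A, 282/1000 to wording B
z = two_proportion_z(239, 1000, 282, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 => the two wordings disagree beyond noise
```

If |z| exceeds roughly 1.96, the two wordings are measuring something different, and you don't yet understand why.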
To show what I mean, I ran four different survey questions about cat ownership in the US. Here are the results (after some simple math):
| # | Question | Cat ownership | Dog ownership | Pet ownership |
|---|----------|---------------|---------------|---------------|
| 1 | What kind of pets do you have in your household? | 23.9% | | |
| 2 | How many cats do you have in your household? | 28.2% | | |
| 3 | Do you have one or more cats in your household? | 24.0% | | |
| 4 | Are cats or dogs not present in your household? Or do you have both types of pet? | | | |
The results converge: cat ownership lands between 22% and 28%, most differences are within the margin of error, and the min/max span is 4 percentage points. The numbers also agree with data from the Humane Society and the AVMA sourcebook. I'm confident I know how many people online have cats.
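The agreement check above can be sketched numerically. A minimal example, assuming a hypothetical sample size of n = 1,000 per survey (the post doesn't state the real sample sizes), computes a 95% margin of error for each cat-ownership estimate and the min/max span:

```python
import math

# Cat-ownership estimates from the surveys above (Q4's value isn't shown)
estimates = {"Q1": 0.239, "Q2": 0.282, "Q3": 0.240}
n = 1000  # hypothetical sample size per survey (assumption)

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for name, p in estimates.items():
    moe = margin_of_error(p, n)
    print(f"{name}: {p:.1%} +/- {moe:.1%}")

span = max(estimates.values()) - min(estimates.values())
print(f"min/max span: {span:.1%}")
```

When each estimate's interval overlaps the others, the questions agree; an estimate sitting outside the rest is the signal that something you don't understand is happening.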
What happens without triangulation
At AAPOR, one of the researchers from NORC at the University of Chicago presented test results on how representative Google Surveys are. Their original, less accurate finding was that Google Surveys data does not closely agree with benchmarks for telephone ownership. We ran a follow-up survey to find out why. The problem was modal bias: asking a question over the phone introduces errors that are different from the errors of a microsurvey.
By tweaking the question slightly we were able to reproduce the Pew Internet data within 3 percentage points (our results are here in Q1/2; Q3/4 demonstrate the modal bias).
The NORC folks were happy to hear our data was better than they thought. Had they triangulated their results themselves, they would have seen disagreement and known that something else (modal bias) was happening.
Why not triangulate?
Surprisingly, nobody I asked about triangulation at AAPOR had employed it in their own research. Maybe I missed somebody, but it makes sense:
- Most polling that exists today is extremely rigorous and proven.
- But that rigor is slipping away.
- This makes traditional market research and opinion polling expensive and introduces bias.
So it's plausible that traditional researchers don't triangulate because they can't afford to. And why would they triangulate if the existing measures and techniques work well? The problem is when old measures are applied to new situations, like the NORC example above.
Now it's easy
With new methods it's cheap to triangulate. I've seen startups triangulate decisions using Google Surveys, and the results are great. I recently presented one such case in London. Anyone can do it.
So: whenever you make an important decision about a product, business, or research, triangulate the data behind your conclusion. Try multiple approaches and find agreement between many measures of the same idea. This will give you confidence in your conclusions. It will provide a defense against detractors. It will bring consensus to your team.
And you'll know that you're right.