Wednesday, June 25, 2008

Judging a prediction market's accuracy

Recently we wrote a blog post about the accuracy of prediction markets. As promised, we're following that up now with more specific data.

Two years ago, Google wrote a blog post about how their internal prediction markets were working. It was an inspiring picture and one that got many people excited about using prediction markets. Now that we've been hosting prediction markets for 2.5 years, we have quite a bit of our own data. Looking at well over 2 million trades and thousands of markets across hundreds of marketplaces, we can easily say Google's impressive results weren't a fluke.

The most popular type of market our market makers create determines the probability of an event happening: Will David Cook win American Idol? Will Miller Chill still be sold in 6 months? Will a new aircraft design be delivered on time?

To beat the proverbial departed equine, you can't just look at a single outcome of one prediction market question to determine how "accurate" your marketplace is.

When Mrs. Burnette told us in 4th grade that flipping a coin had a .50 probability of coming up heads, she didn't stop there. She made us verify it by measuring the relative frequency of heads. For homework we had to flip a coin over and over and write down the outcome. Flipping a nickel at home I just got: heads tails heads heads tails heads tails tails tails tails heads heads tails. Using our very new skills with fractions, we could then estimate the probability: 6 heads / 13 trials ≈ 0.46, pretty close to 0.5.
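Mrs. Burnette's homework boils down to a one-line relative-frequency calculation. Here's a quick sketch (the flip sequence is the one from the nickel above):

```python
# Estimate P(heads) by relative frequency, the way Mrs. Burnette taught.
flips = ["H", "T", "H", "H", "T", "H", "T", "T", "T", "T", "H", "H", "T"]

heads = flips.count("H")
estimate = heads / len(flips)
print(f"{heads} heads / {len(flips)} trials = {estimate:.2f}")  # 6 / 13 = 0.46
```

With more flips, the estimate converges toward the true 0.5 — which is exactly the logic applied to market probabilities below.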

Mrs. Burnette appeared to know what she was talking about.

So just like flipping a coin, if Inkling told you something has a 15% probability of coming true, you can't just look at one outcome (i.e. one coin flip). You need to look at multiple scenarios where Inkling said something would happen 15% of the time. If those things actually come true about 15% of the time, Inkling is doing well.

We plotted a graph a lot like the one Google showed 2 years ago. Count the number of markets that predicted an event would occur 5% of the time, and see how many of those events occurred: almost 5% of them. Count the number of markets that predicted an event would occur 15% of the time, and sure enough about 15% of them ended up occurring. And so on, until we got the graph below.



The green line shows what we'd look like if we were perfect: things predicted to happen 15% of the time happen 15% of the time, things predicted to happen 65% of the time happen 65% of the time, etc. Inkling is the black line hugging pretty close to perfect.
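The calibration check behind that graph can be sketched in a few lines: group markets by their predicted probability, then compare each group's prediction to the fraction of events in that group that actually occurred. The market data here is made up for illustration:

```python
# Calibration check: bucket markets by predicted probability and compare
# each bucket's prediction to the observed frequency of the event.
# (predicted probability, did the event actually occur?) -- invented data.
markets = [
    (0.15, False), (0.15, False), (0.15, True), (0.15, False),
    (0.65, True), (0.65, True), (0.65, False), (0.65, True),
]

buckets = {}
for predicted, occurred in markets:
    buckets.setdefault(predicted, []).append(occurred)

for predicted in sorted(buckets):
    outcomes = buckets[predicted]
    observed = sum(outcomes) / len(outcomes)
    print(f"predicted {predicted:.0%} -> observed {observed:.0%}")
```

Plot observed frequency against predicted probability and a well-calibrated marketplace traces the diagonal — the green line in the graph above.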

Another type of prediction market is one that predicts the numerical outcome of something: What will the population of New York City be in 2010? How many utility patent applications will be filed in the US in 2014?

In this case we'd like to see a plot of the value we predicted against what actually happened. We plotted hundreds of these markets in the graph below.



The green line is perfect again. We’d be perfect if Inkling said you’d sell 100 units of something, and you sold 100 units. If Inkling said you’d sell 1000 units, you sold 1000 units. The red line is a line of best fit through the data. Not too shabby.
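The red line is an ordinary least-squares fit of actual outcomes against predicted values; a perfect marketplace would give a slope of 1 and an intercept of 0. A minimal sketch, using invented (predicted, actual) pairs rather than the real market data:

```python
# Least-squares line of best fit for actual outcomes vs. predicted values.
# A perfectly calibrated marketplace gives slope ~1, intercept ~0.
# These data points are invented for illustration.
predicted = [100, 250, 500, 800, 1000]
actual = [110, 240, 520, 790, 980]

n = len(predicted)
mean_x = sum(predicted) / n
mean_y = sum(actual) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(predicted, actual)) / sum(
    (x - mean_x) ** 2 for x in predicted
)
intercept = mean_y - slope * mean_x
print(f"best fit: actual = {slope:.3f} * predicted + {intercept:.1f}")
```

For this toy data the slope comes out just under 1, i.e. close to the green "perfect" line.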

We've discussed the accuracy of a marketplace several times on this blog and elsewhere. There are significant misconceptions, especially in the media, about what the results of a prediction market actually mean. Hopefully these graphs reinforce what a prediction market is revealing, which is the first step toward using that information as an input to strategic decision making.

3 comments:

nigeleccles said...

Cool stuff guys. It is great to see the Google analysis replicated. I was chatting to Bo Cowgill last week about his analysis and he was telling me the biggest problem was everyone (including MBA students) misinterprets the x axis as time!

Nigel @ Hubdub

Dave said...

Wonderful. Thanks. Could you clarify when the prices were recorded? The day before the outcome? Would be great to see the progression of accuracy over time.

bobdevine said...

A possible area for future research is to look at how the statistical spread of possible answers affects the accuracy of social markets.

For example, in the classic "wisdom of crowds" question of the weight of an animal, nobody would guess an answer that was many orders of magnitude wrong.

Would a market's outcome be more accurately predicted if the distribution of possible answers is normal, and less so if it is exponential?