Wednesday, September 02, 2015

Inkling's final chapter

After graduating from Y Combinator’s second-ever batch in the winter of 2006, then operating for close to 10 years, I have formed a new company with a new co-founder, Ben Roesch. That new company has acquired Inkling and all its assets, and we will be operating under a new name: Cultivate Labs.

We have defined Cultivate’s new mission as improving how organizations work: using crowdsourcing to tap the knowledge, experience, and wisdom of their people and improve the decisions and judgments they make every day.

All of our current clients have been notified of this change and we are very excited for what is ahead, including an entirely rebuilt forecasting platform we will be rolling out over the next few months, new partnerships, a new crowdsourcing product to help companies prioritize new ideas using crowdfunding, and an expanded, highly talented team.

Here is our new website and Twitter account:
On a personal note, I wanted to thank all our current and former clients who have worked with us over the years. Prediction markets have been a new approach to forecasting at every single company we've worked with, and it was always a risk for someone to try this out. Whether the project ultimately succeeded or failed, someone had to take the chance to try something new in their organization, and for that I was always very thankful.

I also want to thank all the users of our public market. We’ll soon be announcing its future, but there is a hardcore group of people on that site who have been using it for years, spending hundreds, if not thousands, of hours making thousands and thousands of forecasts. That site is one of the longest-running collaborative forecasting exercises ever, and it’s thanks to users like wstritt, job, chelseaboys, mfwinalford, SneakyPete, and others that it has continued to thrive.

Finally, I wanted to say something about Nate Kontny, my co-founder at Inkling. Nate approached me in 2005 about starting this company while I was working at Accenture and he was at Digital River. We went on this crazy journey together until he left in 2011 to work on some new things. 

During that time we became close friends. It’s like everyone says - you’re married to your co-founder. In 2012 Nate joined the fabled Obama election technology team and soon thereafter built the successful writing tool Draft, ultimately landing at Highrise as the new CEO after 37Signals spun it out.

As these things go, Nate and I don’t talk anymore, and I miss that. I always learned something when I spent time with him, and laughed a lot too. I think the timing was right for us to part ways back in 2011, but that doesn’t lessen the fondness I’ll always have for working with him for 5 years and the respect I have for his talents as a programmer, writer, and entrepreneur. He’s one of the smartest, most creative, productive, and honest people I know. I’m a better person today having worked side by side with him on Inkling.

Upward and onward,

CEO, Co-Founder
Inkling Markets

Someone asked me the other day what my favorite memories were of running Inkling. I have many, but I’ll share 2 early ones now.

First was the journey of getting into Y Combinator at the inception of our company. Nate and I travelled to Cambridge, MA to meet with Paul, Jessica, Trevor, and Robert. Y Combinator was still in its infancy and had only done one other “batch.” Ours was going to be the first one they had done in California. The 20-minute interview seemed to hinge not on anything related to our idea, but on Nate’s chops as a developer and the design of a logo on a political blog I had been writing called “Outrage of the Day.” We walked out dazed, wondering if we were even making the right move trying to do this. Who the hell were these people who didn’t even care about our business idea?

The rest of the day we wandered around Harvard and even went to see a movie. We were walking around some random neighborhood in Cambridge when we finally got the call. They wanted us. We had just a few minutes to call them back or the offer would be off. We took it. 18k investment. The next day we came home to Chicago, quit our corporate jobs, and 2 months later we were living in a rented house in Cupertino building our product, Inkling Markets.

Celebrating quitting our jobs at Dublins Bar in Chicago in 2005.

Another favorite memory was the night we launched a branded prediction market for CNN for the 2008 elections. At the time we had our own dedicated hardware at a datacenter and we had spent weeks white labeling our site and optimizing as much as we could in preparation for 100x the number of people who had ever used Inkling before.

CNN wasn’t great at communicating with us, and without warning, they put a link to the prediction market right on the front page of the election site. Suddenly it was like a fire hose had been turned on and we were getting crushed with sign ups and trading activity. It was still just me and Nate in the company at the time, so dealing with whatever happened was on us, exclusively. 

Amazingly things seemed to be going fairly smoothly until I got an email from my Dad late in the evening. Apparently he was repeatedly getting email alerts from our site for the exact same event. 13 so far, within an hour. It turned out there was a bug in our mail queue that only appeared under this level of duress. I’ll never forget the chat sessions I had with Nate as he was doing play by play of trying to both fix the bug and prevent even more repeat emails from going out by directly deleting thousands of records in the database representing queued mails that were supposed to go out. 

At around 2am I wrote to our contact at CNN to let them know what happened, thinking we were surely going to get canned after this, but fortunately they couldn’t care less. What are a few thousand pissed off people in the grand scheme of things when you’re (at the time) one of the top 10 most visited web sites on the Internet? We even got a commercial made about the prediction market featuring Wolf Blitzer: “Good luck kicking some assets!”

Thursday, August 06, 2015

Someone Has To Win

On December 7 of last year, the Carolina Panthers were 3-8-1 and I spent about 1100 Inkles forecasting that they would make the playoffs. Even at long odds, this may seem like wasted Inkles--I'm pretty sure that a 3-8-1 team had never gone on to make the playoffs. But there were a couple of other important factors. The Panthers play in the NFC South; the winner of each division makes the NFL playoffs; and at the time, the NFC South was led by the 5-7 Falcons and Saints (even the 2-8 Buccaneers had a chance). The Panthers were an underdog, but they had a fighting chance. As it turned out, the Panthers won their last four games to make the playoffs, and my 1100 Inkles turned into two hundred thousand.

I remember this scenario whenever I think about the Republican primary field. None of the competitors seems all that compelling (in my personal opinion), and it's easy to find arguments against each of them. But someone will be the GOP nominee, and I'm trying to figure out which candidates are the favorites, which are legitimate underdogs, and which are best ignored.

Our market suggests that the top three candidates--Bush, Rubio, and Walker--have a combined 93% chance to win the nomination, which is higher but not that far from PredictWise's 72%.  So that's something to keep in mind when watching tonight's debate; of the ten candidates in the main event, prediction markets consider only three of them to be real contenders.  But there's also a chance another candidate emerges from the pack.  Donald Trump leads the polls with 23% support, while Ben Carson and Mike Huckabee are at 6.6%, ahead of Rubio's 5.2%.

One interesting feature of prediction markets is that they can react very quickly to new information (much more quickly than polls, which take days to execute).  Our markets will be open during tonight's debate, so if a candidate delivers a powerful comment (or a deadly gaffe), you can expect their forecasted likelihood of winning to move fairly quickly.

I'll continue to post updates over the course of the primary season, and might even divulge my sleeper picks for the GOP nomination (hint: neither of them qualified for tonight's main debate).

Ben Golden is a software developer, economist, and evangelist for Inkling Markets.  You can find him on Inkling Markets as benthinkin, and on Twitter, @BenGoldN.  Email him: ben at inklingmarkets dotcom

Wednesday, July 29, 2015

When Intuition 'Trumps' Analysis

In growing my Inkling score from five thousand to ten million Inkles, one of the most important questions related to the number of points each team would score in the most recent NBA season. The question asked about the difference between each team's points and the average of all teams. As I watched the question play out, I noticed that a couple of power users were projecting that teams close to the current average would remain close (keeping their forecast values low), but there also seemed to be a lot of volatility--teams' projections were constantly changing. As a result, I started forecasting upwards any time a team approached the average. I didn't know what the forecast should be, but I felt pretty good saying it should be higher than where it was. Eventually, I created a little Excel file to try to quantify my forecasts, assuming some volatility in the teams' scoring, but mostly I was forecasting based on observation and hunch.

Note the shift in approach between intuition (fast, emotional, human) and analysis (rigorous, logical, quantitative). Both intuition and analysis are really valuable when forecasting in prediction markets, and one of the key advantages of implementing a prediction market is that it leverages both types of thinking. We tend to think of analytical approaches as being more accurate, especially when making important decisions, and in many cases they are. I felt more confident in my Excel projections than in my initial hunch. But the accuracy gain was minimal, and it took much longer to generate. There was a lot of predictive value in my initial hunch, and ultimately the forecasts I made before doing any analysis did more for my score than those I made after.

In other cases, analysis may actually lead you astray. For instance, consider Donald Trump's presidential campaign. When forecasting primary elections, the best analyses typically rely on candidates' polling averages, and Trump is currently leading the polls. Does that make The Donald The Front-Runner? Prediction markets--both ours and others--say no, and I'm pretty confident they're right. I also don't think it's rigorous analysis so much as people's intuition guiding the markets.

As a forecaster, it's important to learn when to rely on intuition versus analysis. But as a prediction market administrator or consumer, what matters is that the market incorporates both intuitive and analytical approaches. As a result, forecasts can be fast, emotional, and human, while also being rigorous, logical, and quantitative. Most importantly, they're highly accurate.

Ben Golden is a software developer, economist, and evangelist for Inkling Markets.  You can find him on Inkling Markets as benthinkin, and on Twitter, @BenGoldN.  Email him: ben at inklingmarkets dotcom

Thursday, July 23, 2015

Pundits And Prediction Markets

As I've become more involved with prediction markets, I've grown increasingly frustrated with pundits--journalists who make predictions--who don't link to prediction market questions. This is, in my opinion, lousy journalism, and insulting to readers.

Linking to a prediction market question has a number of benefits, including:

  • requiring questions to be resolvable, which forces pundits to clarify what they're actually predicting
  • tracking performance, revealing over time whether a pundit's claims are insightful or nonsense
  • allowing readers to respond by forecasting on the prediction market, creating a more engaging user experience

Anyone who believes what they're saying should be willing to stake their reputation on it by creating/linking to a prediction market and disclosing their forecasts.  It also creates a richer audience response; forecasting in prediction markets involves clear expressions of individuals' views, which many readers might prefer over the chaos of comment forums.  My hope is that eventually, articles that contain predictions will always link to a prediction market--or embed it within the media platform itself--similar to how finance articles display tickers for any stocks they mention.

In the meantime, readers can also create prediction markets. Lately, whenever I come across something that looks like a prediction, I'll create a market myself--here are some recent examples. Creating new questions on Inkling is easy, and anyone who signs up can do so. (Note: we do require approval, but we almost never reject a question.) It typically takes just a few days for our users to start moving markets to a reasonable forecast.

The media's job is to provide readers with accurate information, and when it comes to forecasting, prediction markets are an invaluable source of information.

Ben Golden is a software developer, economist, and evangelist for Inkling Markets.  You can find him on Inkling Markets as benthinkin, and on Twitter, @BenGoldN.  Email him: ben at inklingmarkets dotcom

Friday, July 17, 2015

Enterprise Crowdsourcing: A Primer

When my grandmother immigrated to the United States, she couldn't afford to call her family on the telephone.  That was about 70 years ago.  Today, I have a friend whose brother moved to Sri Lanka to become a Buddhist monk and literally lives in a cave.  He and his family Skype.  This is the power of the Internet--for a significant portion of the planet, it's now possible for any two people to communicate from anywhere, in real-time, basically for free.

This technology has improved our lives in many ways, but it's also brought a lot of unexpected change, and will continue to do so. Almost every human institution that exists today--companies, governments, academic institutions, economic systems--was created at a time when free, real-time communication was unfathomable. When you hear technology companies talk about 'disruption', they're usually talking about changing an institution whose methods assume that people can't communicate in real-time for free.

Enterprise Crowdsourcing isn't about changing a specific industry, but rather the very nature of how businesses and other enterprises operate.  We're challenging the command-and-control model of businesses, which says that tasks should be done by specialized departments, with information flowing up to decision-makers at appropriate time intervals and then back to departments at the discretion of their managers.  Command-and-control actually makes a lot of sense when communication is expensive, but it's becoming increasingly inappropriate as a way of doing business now that communication is free.

When you need to forecast your company's sales for the next quarter, the old model said you should hire an analyst or consultant and assign them the task of sales forecasting.  The new model says you should ask everyone at your company, perhaps using a prediction market.  This communication is reasonably cheap (compared to paying analysts / consultants) and your employees have access to the insight needed to make highly accurate forecasts.

Forecasting is one example of a business function where the crowd outperforms an individual/department, but there are many others, some of which I'll outline in upcoming posts.

Ben Golden is a software developer, economist, and evangelist for Inkling Markets.  You can find him on Inkling Markets as benthinkin, and on Twitter, @BenGoldN.

Thursday, July 16, 2015

The Best Is Not Enough

Barry Ritholtz has written a curious column titled The 'Wisdom of Crowds' Is Not That Wise for Bloomberg View, which criticizes prediction markets.  This is not a new view for Ritholtz, as he reminds us by linking to six blog posts critical of prediction markets each written by...Barry Ritholtz.  Indeed Ritholtz has made it his mission to find instances of prediction markets 'failing', and has found six of them.  These include:

  • For about two months in 2003, a prediction market gave Howard Dean a 50%-75% chance of winning the Iowa Caucus...Dean won neither the caucus nor the nomination.
  • For most of 2007, a prediction market thought there was a 10%-30% chance Barack Obama would win the New Hampshire Primary. Then, for about a week before the election, it put the chance in the 50%-95% range...Obama lost the New Hampshire Primary.
  • In 2005, a prediction market did a poor job forecasting the result of the Michael Jackson trial.  (And so did everyone else.)

Ritholtz's expectation of prediction markets seems to be that they should be perfectly accurate, and when they fail to meet that standard, he dismisses them as unwise. But prediction markets aren't trying to be perfect; they generate probabilistic forecasts, meaning they expect to be 'wrong' sometimes. When a prediction market assigns an 80% likelihood to events, it expects that roughly one in five of them won't happen. So over time, of course there will be some instances where a market leans in one direction and the opposite result occurs. If you can only find six of these cases over a twelve-year timeframe, you're not looking very hard.
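That "one in five" arithmetic is easy to check with a quick simulation. This is purely an illustrative sketch (not Inkling's scoring code): simulate many independent events that each carry an 80% likelihood and count how many fail to happen.

```ruby
# Simulate 100,000 events, each forecast at an 80% likelihood.
# A well-calibrated forecaster expects about 20% of them to "miss".
srand(1234)  # fixed seed so the run is repeatable

trials = 100_000
misses = trials.times.count { rand >= 0.8 }  # rand < 0.8 means the event happened
miss_rate = misses.to_f / trials

# miss_rate lands very close to 0.2: roughly one in five "80% likely"
# events doesn't happen, and that's the market working as designed.
```

Run it with different seeds and the miss rate stays near 20%; six misses over twelve years is exactly what calibration predicts, not evidence of failure.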

The value of prediction markets is that they're more accurate than other forecasting methodologies, including surveys, data modeling, and gut instinct. (They're often cheaper to implement, too.) Ritholtz utterly fails to demonstrate that any alternative approach is better. He concludes the following:
I remain unconvinced you can call prices "wise," no matter what market sets them. Perhaps the most constructive comment one can make about the crowd in market prices is that there are no better alternatives yet invented for determining the price of any item to be bought or sold. "The best we've got" hardly rises to the level of "wisdom." (emphasis added)
Here Ritholtz admits that prediction markets are the best tool for forecasting, but nonetheless argues semantics over whether markets are 'wise'.  In my book, using the best tool available is a wise thing to do, so even if the markets aren't wise themselves, people who use them to generate accurate forecasts are.

Ben Golden is a software developer, economist, and evangelist for Inkling Markets.  You can find him on Inkling Markets as benthinkin, and on Twitter, @BenGoldN.

Thursday, July 02, 2015

I, Benthinkin

When I joined Inkling Markets in Sep 2014, I started forecasting on our public-facing forecasting site.  I already had some experience using prediction markets, having been actively involved with SciCast and its predecessor DAGGRE, and was determined to show my new employer that in addition to being a pretty decent software developer, I could forecast with the best of them. 

The challenge was daunting—Inkling users start with five thousand Inkles (our nominal currency) while the top forecasters have accrued hundreds of millions (in one case billions).  To reach the top ten, I would need to double my score more than thirteen times.  It was time to get to work.

One big difference between Inkling and my previous experiences is that I had some relevant domain experience.  Whereas SciCast focused on science and technology—areas where I have only limited knowledge—Inkling includes questions about sports and politics, where I’m much more comfortable.  Rather than relying entirely on technical forecasting tricks I’d developed and original research and analysis, I could often confidently place bets about future events by drawing insights from my understanding of how sports and politics work.

In future posts I’ll provide a detailed look at how I designed, implemented and refined my strategy.  I’ll share tips and tricks, and more importantly how I think about prediction market questions.  I’ll talk about why I love prediction markets, what I get from them, and why you should get involved.  I’ll explore industries / businesses that benefit from greater use of prediction markets (spoiler alert: all of them).  And I’ll discuss the administration of prediction markets: how to pose questions to yield the best possible results, how to keep users happy, etc.

As for my progress forecasting on Inkling, I haven’t reached the top ten (yet), but I am in the top twenty.  In ten months, I've doubled my score only eleven times, for a score of roughly 11 million Inkles.  This is, I believe, a record for Inkling; I can’t find another user who reached 10 million Inkles within a year of joining the site, and only two cracked 5 million.  As I continue to forecast on Inkling Markets, and work with prediction markets, I look forward to writing about my adventures.

Ben Golden is a software developer, economist, and evangelist for Inkling Markets.  You can find him on Inkling Markets as benthinkin, and on Twitter, @BenGoldN.

Tuesday, June 16, 2015

2015 US Open, Prediction Market Style

The 2015 edition of the US Open will be played this week at Chambers Bay Golf Course in University Place, Washington. With big events like this (this is a major after all), we like to have groups of questions all forecasting different aspects of the event. Who's going to make the cut? What score will the cut be? Is there going to be a playoff? Will everyone's favorite player Tiger make the cut? All great prediction market questions, all tagged with "2015USOpen". Check out all the 2015 US Open questions here, and if you want to create your own, make sure to use the same tag!

Friday, June 05, 2015

Rendering Rails Partials and Forecasting Soccer Results

A few weeks ago, we pushed a code release to our public market that incorporated a decent number of changes. Just as we always do, we were closely monitoring for any unforeseen errors and performance changes after the deployment. After a few hours, we noticed that the site was running a little slower than normal. Assuming our changes had led to this drop in performance, we set out to determine why.

Pulling up New Relic, the great tool we use to monitor our apps’ performance, we found that 90% of our request processing time was being spent rendering a single partial that displays the trade reasons for a question. And after watching for a few more minutes, that percentage was still climbing. But we hadn’t changed anything related to that piece of code--or at least nothing we expected would affect it.

When we looked at where the most recent trades were being placed, we realized that this was being caused by one question with an extremely disproportionate amount of activity: “Who will win the 2014/2015 UEFA Champions League?” It turned out that one of the two semi-final matches was being played right then, and a surge of new users, mostly located in Italy, were trading in this market.

It wasn’t just that the market was heavily traded though. The other part about this question was that traders were submitting reasons for their trades along with the trade itself, a really helpful option that gets discussions going in a question. So every time someone reloaded the page, we rendered all of the trade reasons submitted in that market, which ended up being a ton because of all the activity.

So in the end, it wasn’t our new code that was causing the slowdown. It was just a bunch of soccer fans. After the game ended, we changed the page to display only a few of the trade reasons initially; if a user wants to see more, we load the rest dynamically. A good lesson in showing users only what they need at first, while keeping the option to get all the information if they want it.
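The fix boils down to simple pagination: render the first handful of trade reasons up front and fetch the rest on demand. Here's a minimal sketch of that logic in plain Ruby; the names (`reasons_for_page`, `INITIAL_REASONS`) are illustrative, not our actual code.

```ruby
# Number of trade reasons rendered on the initial page load.
INITIAL_REASONS = 5

# Returns one "page" of trade reasons plus a flag telling the client
# whether a "load more" link should be shown.
def reasons_for_page(all_reasons, offset: 0, limit: INITIAL_REASONS)
  {
    reasons: all_reasons[offset, limit] || [],
    more:    offset + limit < all_reasons.size,
  }
end

# First render: only the first 5 reasons go into the partial.
first_page = reasons_for_page((1..12).to_a)
# A later AJAX request passes an offset to pull the next slice.
next_page  = reasons_for_page((1..12).to_a, offset: INITIAL_REASONS)
```

In the Rails view this means rendering the partial collection over `first_page[:reasons]` only, instead of every reason in the market, which keeps render time flat no matter how heavily a question is traded.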

The Champions League final is coming up this weekend, and this time, we’ll be ready.
Check out that market here!

Friday, May 29, 2015

New Market Type -- Continuous Date Markets

Prediction markets are often used to forecast when an event will occur. Our newest question type combines our standard option markets with an automated rolling function to streamline administration and improve the accuracy of these questions.

Before continuous date markets, you’d have to create stocks for each possible time frame, with the name describing when that time frame began and ended. As time passed and a time frame ended, you would have to manually cash out the expired stocks. This limited the amount of information you could get from a market, since each close-out left the market with fewer stocks.

With continuous date markets, all the mechanics of running a market that predicts a date are taken care of for you. By specifying the time frame unit (daily, weekly, monthly) and the number of units in each date bucket, our backend takes care of stock name formatting, as well as keeping track of the actual dates and times when each stock’s bucket begins and ends.

Additionally, when a stock’s date bucket is about to expire, we “roll” the market into the future. This means we resolve the expiring stock at 0 (since the event hasn’t happened yet) and split the last stock of the market into two, which maintains the same granularity of forecasting until the answer is known. When we do this, we keep each trader’s net worth the same by giving them the same position in the newly created stock as they had in the previous last stock.
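For the curious, the roll can be sketched as a toy model. Everything here is illustrative--made-up names, and the real bookkeeping of trader positions and net worth is omitted: the expiring bucket resolves at 0, its probability mass is redistributed across the surviving buckets, and the open-ended last bucket is split in two to preserve granularity.

```ruby
# Each bucket is { name:, prob: }; probs sum to 1 before and after a roll.
def roll(buckets)
  buckets = buckets.drop(1)                 # expiring bucket resolves at 0
  mass = buckets.sum { |b| b[:prob] }
  # Renormalize so the surviving buckets' probabilities sum to 1 again.
  buckets = buckets.map { |b| { name: b[:name], prob: b[:prob] / mass } }
  last = buckets.pop                        # split the final, open-ended bucket
  buckets + [
    { name: "#{last[:name]}a", prob: last[:prob] / 2 },
    { name: "#{last[:name]}b", prob: last[:prob] / 2 },
  ]
end

weeks = [
  { name: "wk1",   prob: 0.1 },
  { name: "wk2",   prob: 0.3 },
  { name: "later", prob: 0.6 },
]
rolled = roll(weeks)  # "wk1" gone, "later" split into "latera"/"laterb"
```

Note the bucket count stays constant across rolls (one removed, one split into two), which is exactly what lets the market keep forecasting at the same resolution indefinitely.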

Let us know what you think!

Wednesday, April 02, 2014

New Job Available - Rails Developer

We are hiring a new Rails Developer. Information about the position is posted here:

If you're interested in applying, we'd love to hear from you. If you know anyone that hates their current job and is looking for something way better, we would appreciate you passing this along to them. :)

So You Think You're Smarter Than a CIA Agent...

A government funded project we're participating in received a nice write up and audio story on NPR this morning.

Read the article and hear NPR's broadcast of the story here.

The Good Judgment Project originated out of IARPA's ACE program, an attempt to push the state of the art in crowdsourced forecasting techniques and to see whether "crowds" without access to classified information can out-forecast those inside the intelligence apparatus.

From the article:
According to one report, the predictions made by the Good Judgment Project are often better even than intelligence analysts with access to classified information, and many of the people involved in the project have been astonished by its success at making accurate predictions.
But there are caveats to this as the IARPA project sponsor Jason Matheny points out:
Matheny doesn't think there's any risk that it will replace intelligence services as they exist. 
"I think it's a complement to methods rather than a substitute," he said. 
Matheny said that though Good Judgment predictions have been extremely accurate on the questions they've asked so far, it's not clear that this process will work in every situation. 
"There are likely to be other types of questions for which open source information isn't likely to be enough," he added.

Monday, March 03, 2014

2014 Oscars - How did Inkling do?

The Oscars are over, the Hollywood elite are home in their mansions nursing their hangovers and taking an in-house spa day, and the rest of us are left to gossip about the highlights and lowlights of the show last night.

With that in mind, let's see how Inkling did in its predictions.

Of the 18 categories we reported on last night before the show started, Inkling correctly predicted 17 of them for a 94% hit rate. The one miss? Original Song. Damn.

Kudos to all those who participated and made predictions. Given these are crowdsourced predictions, we're nothing without you. :)

Sunday, March 02, 2014

2014 Oscar Prediction Guide

The Oscars begin shortly, and here are Inkling's predictions. Each winner Inkling predicted has the highest chance of winning among its fellow nominees in the category. Remember, we deal in probabilities, so all we're telling you is the chance a nominee will win. Just as when the weatherman says there's a 70% chance of rain, there's still a 30% chance it won't rain!

Read on after the predictions for a little more about how to interpret these results and how we calculate them.

Alright, on to the predictions:

Which film will win the most Oscars?
Gravity (75% chance)

Best Picture
12 Years a Slave (76% chance)

Best Actress 
Cate Blanchett (82% chance)

Best Actor
Matthew McConaughey (55% chance)

Supporting Actress
Lupita Nyong'o (68% chance)

Supporting Actor
Jared Leto (76% chance)

Original Screenplay
Her (48% chance)

Adapted Screenplay
12 Years a Slave (80% chance)

Visual Effects
Gravity (80% chance)

Sound Mixing
Gravity (39% chance)

Sound Editing
Gravity (38% chance)

Original Song
Happy (53% chance)

Original Score
Gravity (31% chance)

Makeup and Hair Styling
Dallas Buyers Club (51% chance)

Foreign Language Film
The Great Beauty (57% chance)

Best Director
Gravity (53% chance)

Costume Design
The Great Gatsby (34% chance)

Gravity (63% chance)

Animated Feature Film
Frozen (92% chance)

To see details about each prediction, like what chances the other nominees have, go here:

As you can see, sometimes there is a strong signal when the people making predictions think they know who the winner will be, e.g. Animated Feature Film (Frozen, 92%) or Best Actress (Cate Blanchett, 82%)--but other times there isn't much of a signal at all (Costume Design, Original Score, Sound Editing, etc.). This lack of a signal can happen for two reasons. First, sometimes there just aren't that many predictions made, so the chances don't move very much from where they started (evenly split among however many nominees there are, i.e. 5 nominees = a 20% chance each). But other times, a weak signal means disagreement among people about what the outcome will be. For Original Screenplay, for example, there's a 48% chance "Her" will win--and that's after 36 predictions. In contrast, only 18 predictions were made in the Adapted Screenplay category, but those 18 predictions amounted to an 80% chance "12 Years a Slave" would win, meaning opinion was almost unanimous among those who made predictions. "Her" was a more contested pick in its category.

If you have more questions about how this all works, find us on twitter at @inklinghq.

Enjoy the show tonight!

Tuesday, January 21, 2014

A little "less" in "relentless"

As someone running a small company with a handful of employees and several contractors, I've had to learn the limits of how hard I can push people and what I can ask them to do. I suspect other founders or managers in larger companies could stand to examine this aspect of their leadership style as well.

A few months ago we took on a very large consulting project helping George Mason University and IARPA launch and grow a new prediction market focused on Science and Technology forecasting called SciCast (it's launched, you should check it out!). While the project has gone well thus far, it has also presented our small company with a significant set of challenges: namely that we still have product development, customer support, consulting, and sales activities for our own product line, Inkling Markets! 

Whereas before, our interaction with our clients was controlled and routine, the SciCast project is a traditional consulting engagement with deliverables, timelines, a demanding client, multiple collaborators, and metrics we've been asked to meet. As anyone who has been a consultant knows, this level of accountability means a serious amount of work. And with that amount of work inevitably comes adversity and additional pressure to succeed.

Unfortunately, this is where things tend to get dangerous. 

Doing well on this project, trying to grow our revenue, building our company for long term success - these are all goals that are of primary importance to me as the founder, but only secondary importance to any employee. An employee may partially share my goals for the business and see its growth as beneficial to their sustained employment, but they have other incentives at work as well. They want to make a good salary, they want to maximize bonuses, and they want to build experience for gaining more responsibility in the company or for their next job outside the company.  For us to have the most productive working relationship, we must reach a balance. That balance is lost if I'm making excess demands on people's time and well-being.

Before working at Inkling I worked at a large consulting firm for many years. Consulting firms are famous for the "grind" - working 6 or 7 days a week on a client project, excessive travel, 16+ hour days. If you talk to anyone in the middle of one of those projects, you hear words like "death march," "fire drill," and "insanity." Managers are hard charging and tend to focus more on making sure short-term tasks get done on time than on the long view of burning people out. One need not look far, then, to understand why attrition rates at consulting firms are high and why a disproportionate number of people leave after only a few years.

That model may work (despite itself) when a large business can afford to have a human resources machine drumming up interest at universities around the world for new employees to indoctrinate, but it doesn't work so well for a small business whose switching cost at losing even a single employee can be quite burdensome.

With that in mind, I've tried to catalog some warning signs that you may be pushing your team a little too hard and need to ease up:
  • People become unresponsive, or orders of magnitude slower, in responding to requests;
  • Quality of work diminishes - people just do tasks for the sake of doing them;
  • People go quiet in discussions and don't proactively offer up opinions or solutions; and, most obviously,
  • People openly complain about the amount of work or the tedium of completing it.

"But we still have to meet our deadlines - how do we do that without working people hard?" you ask.

Sometimes the problem isn't quantity, but quality. Is everyone working as effectively as they can? Is someone spinning their wheels on something unnecessarily? Sometimes you may just have to bite the bullet and tell your client you simply cannot get something done on time. My experience, more often than not, is that if you are transparent about the likelihood of missing a deadline as early as possible and give valid reasons why it can't get done, people are understanding and will work with you to narrow the scope of your work or agree to an extension. If there is no option left but to put yourself through a "fire drill," keep it as short as possible and ease off for a few days when it's over. I strongly believe in the concept of vacationing while not actually on vacation.

As a founder, no one is going to be as passionate about your business as you will be. If you want your team to work hard, they need to share in some of that passion and be incentivized correctly. If they are not, their shelf life will be short. Eventually, people break and simply decide it's not worth it anymore. When that happens, you're on your own with no team at all, and that's the worst outcome of all.

Like this post and want to know when I write others every once in a great while? Follow me on Twitter

Wednesday, December 04, 2013

Inkling Predictions iOS app updates (v1.2)

The latest version of the Inkling iOS app is now available on the app store. New features include:

  • Native alerts for new questions that have been published and new comments made in questions you're participating in
  • Support for our "advanced" trading interface
  • Lots of bug fixes and layout improvements
Please update to the latest version, and as always, let us know if you have any feedback, suggestions for new features, or issues with something not working.

Wednesday, October 30, 2013

Inkling Predictions for iOS 1.1

We were late to the party in updating our app for iOS7 but finally have a version out that is compatible and doesn't crash. Anyone who has a current version and has upgraded to iOS7 should run an update and everything will work again.

We're also close to doing another release that will add a bunch of new features including advanced trading, native alerts, setting your own price alerts, and more. It should be a nice upgrade and will be available in about a month.

Tuesday, September 10, 2013

Realtime dashboard, stats, and an evolving look

A few application improvements we've made recently:

  • On your dashboard, the data was always stale. It was automatically refreshed every hour, but this could get confusing in active questions where your profit/loss had gone up or down quickly and you couldn't tell unless you manually clicked "refresh" for that position. Now all the data on your dashboard is updated in real time.
  • We're taking a look at each email sent by Inkling to make sure it has valuable information in it, and rebuilding those that don't. For example, we just redid the emails you get when a question ends. Now it should be much clearer exactly how you did and what happened overall in the question.
  • Stats now update much more frequently.
  • We're slowly migrating to a new look for the application. And if you haven't noticed, we updated our logo too.
  • We've made a ton of "under the hood" improvements which should make the application feel snappier.
For our enterprise customers, we've introduced several new capabilities including:

  • Enhanced reporting capabilities allowing you to fine tune what data you want to pull down from what time periods.
  • In addition to collecting people's predictions, you can now ask for a specific probability value as another input.
  • Lots of enhancements to the API. 

Tuesday, July 02, 2013

Psychology Influences Markets, Research Confirms

When it comes to economics versus psychology, score one for psychology. Economists argue that markets usually reflect rational behavior—that is, the dominant players in a market, such as the hedge-fund managers who make billions of dollars' worth of trades, almost always make well-informed and objective decisions. Psychologists, on the other hand, say that markets are not immune from human irrationality, whether that irrationality is due to optimism, fear, greed, or other forces.
Now, a new analysis published in the XX issue of the Proceedings of the National Academy of Sciences (PNAS) supports the latter case, showing that markets are indeed susceptible to irrational behavior.

Thursday, June 20, 2013

Acknowledging Everyone's Contributions Through Revenue Sharing

When I was a senior in college I worked in the kitchen at a pizza place called Lennie's in Bloomington, Indiana to make a little extra money. I made pizzas, sandwiches, pasta, and salads. I washed dishes, I coordinated the food going out to the tables, and I helped clean up at the end of the night. Typically I worked with 3 or 4 other people in the kitchen, along with 4 or 5 wait staff and a manager. Friday and Saturday nights during the school year were always incredibly hectic as Lennie's was, and still is, one of the more popular destinations in town. Needless to say, everyone was bone tired by the end of the night.

One of the lessons you learn working in the kitchen of a restaurant is the symbiotic relationship between the kitchen and the front of the house. Unless you've actually worked at a restaurant, I don't think you can appreciate the level of team work, dependency, and coordination that are involved in delivering a quality experience to the customer.

Management at Lennie's seemed to understand this and structured payment to their workers accordingly. On top of everyone's base hourly wage, the wait staff earned tips. But at the end of the night they left a percentage of their tips to the kitchen, who then divided them evenly. Sometimes if the kitchen had bailed a waitperson out in some way, say they had mis-entered a ticket and we had to remake a customer's food, that waitperson would give us a little extra to say thanks.

Fast forward 5 years to when I became a manager at a global consulting firm. They had a profit sharing plan too, but the profit target was arbitrarily set by the CFO, and no one really knew how close or far away we were from hitting it. It also felt abstract, to the point that if I saw some of that money at the end of the year, great, but I certainly wasn't banking on it. More troubling to me, however, was when I began selling work on behalf of the company. I wasn't a "salesman" per se, but in the course of doing business with a client, I would certainly be responsible for "inside sales" and grow the account over time. Yet I continued to make my monthly salary. I sold projects worth 6 and 7 figures, but saw no direct monetary benefit.

The natural answer most companies turn to, then, is a commission model. But commissions are usually reserved for sales people. Sales people are on the front lines doing the work to find new business and close deals, and they should be rewarded for that. But they would have nothing to sell if it weren't for the hard work of the teams behind the scenes. Stock options are also a tool companies use to reward employees and do technically give them ownership, but at startups and smaller companies like ours, those are far riskier than options in publicly traded companies. Options could indeed end up being worth a lot of money or, more likely, end up being worth very little.

At Inkling, we've been doing well enough that in the past few months we've hired 3 new people to join our team. In doing so, we had to think about new compensation packages and what they should entail. One of the things we're going to try is a revenue sharing and commission structure that applies to everyone in the company. Each employee gets a cut of any revenue we bring in going forward, and if they are instrumental in a new sale, they get an additional cut on top of that. These cuts will be paid out in the month the sale happens, not at the end of the year.

Sales matter and keep the company in existence. But sales wouldn't occur without a team behind the scenes working on a quality product. We're going to try and openly acknowledge this relationship just like it existed at the pizza place. As the guy primarily responsible for selling however, I'll try not to screw up too many orders.

Tuesday, June 11, 2013

Inkling Demoing at WorldFuture 2013 in Chicago on July 19th

The World Future Society's annual conference, WorldFuture 2013: Exploring the Next Horizon brings together the world’s premier minds to discuss the long-range future of science, technology, humanity, government, religion and many other topics. Sometimes called a “World’s Fair of Ideas,” WorldFuture 2013 will feature MIT Media Lab founder Nicholas Negroponte, visionary author Ramez Naam, Ford futurist Sheryl Connelly, and geosecurity expert John Watts.

The conference starts July 19th.

Being local folk, we've been asked to be part of the BetaLaunch event the night the conference starts, right after the Negroponte keynote. If you're attending the event, come by and say hello. It looks like it's going to be a fun conference!

Sunday, June 02, 2013

Risk Exposure as a Risk Assessment Metric using Probability Markets

Adam demonstrated how Inkling can be used as a risk assessment tool to estimate the probability of an impact occurring. This impact probability equation can be formulated as:

impact likelihood = probability of risk * impact likelihood given risk has occurred

Prediction markets are exceptionally well-suited for doing these kinds of impact assessments, especially in public policy concerns where risk probabilities are not easily derived from historical data or where public participation is an essential part of the deliberation process. You can take an impact assessment one step further by measuring the magnitude of a given potential impact with regard to its likelihood, commonly referred to as risk exposure. Risk exposure is the risk adjusted value of the consequences should a risk become realized.

Once you've derived a likelihood of an impact occurring, you can then quantify your risk exposure by multiplying the probability of the event occurring by the value of the total loss of risk.

risk exposure = probability of risk * total loss of risk

... or to calculate the risk exposure of an impact:

risk exposure = (probability of risk * probability of impact) * total loss of risk

The total loss of risk can be thought of in terms of the value of an asset, shift or loss in demand, number of people affected, value of ecosystem services, or any other quantifiable amount of utility that may be lost if a risk is realized. Total loss of risk may be something you already know, such as the value of your real estate assets or it could be a value derived from a third prediction market. For example, asking a question in a prediction market such as "how many barrels of oil could flow through phase IV of the Keystone pipeline in 2014?" will give you the total loss of risk of barrels of oil coming in from Canada via the Keystone pipeline. Multiplying that number by the probability that the segment is not built because of environmental policy outcomes will give you your risk exposure which can be weighed vs your investment. 
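Putting the formulas above together, the Keystone example can be sketched numerically. All of the inputs below are made up for illustration - the real numbers would come from your prediction markets:

```python
def risk_exposure(p_risk, total_loss, p_impact_given_risk=1.0):
    """risk exposure = (probability of risk * probability of impact) * total loss of risk.
    With p_impact_given_risk left at 1.0, this reduces to the simpler form
    risk exposure = probability of risk * total loss of risk."""
    return p_risk * p_impact_given_risk * total_loss

# Hypothetical inputs: one market forecasts 150,000 barrels/day flowing
# through the pipeline segment (the total loss of risk), while a second
# market puts a 30% chance on the segment not being built because of
# environmental policy outcomes (the probability of risk).
barrels_at_risk = 150_000
p_not_built = 0.30

exposure = risk_exposure(p_not_built, barrels_at_risk)
# exposure = 45,000 barrels/day of risk-adjusted loss to weigh vs. your investment
```

Note that, as discussed below for risk matrices, two very different scenarios can produce the same exposure number, which is why exposure alone shouldn't drive every decision.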

As a risk manager, an important thing to remember when dealing with total loss of risk exposures is that a low probability of a high loss of risk may be equivalent to a high probability of a low loss of risk. Therefore, many risk managers will construct risk matrices when evaluating their portfolio's risk exposure in order to see a full range of their risk decisions. Loss of life, for example, should not be boiled down to risk exposure, as a low probability of many lives lost is not equivalent to a high probability of a few lives lost. They are simply not comparable.

As you can see, building a resilient portfolio is dependent upon minimizing your risk exposure. This can be done by: 1) setting aside enough resources to cover your risk exposure; 2) reducing the amount of total loss of risk you are taking on; 3) reducing your risk probability; or 4) mitigating the potential impacts should a risk occur. Finding the right balance between these 4 variables is the art and science of risk management.

We feel that prediction markets naturally lend themselves to robust risk assessment. If you have used Inkling for risk management purposes in your organization, please write us - we'd love to hear about it.

Pat Carolan
Inkling Markets

Friday, May 17, 2013

The Road to Making - From Starter League to Developer

In college I studied economics. I recommend it to anyone who feels comfortable with the idea of knowing a lot about the world's woes and having very little ability to do anything about it. When I graduated, applying for jobs introduced me to a harsh reality: I had a lot of ideas, but my tradecraft was lacking. I'd been huffing the sweet ethers of theory but yearned for the sobering oxygen of practice.

After college I caught a break and landed in the field of business intelligence (BI) before it really had a name other than "IT". Everyone has a different definition of BI, but here's mine: BI is the art and craft of persuasion using data. That's it. However you do it, be it graphs, regressions, tables, art, words, it doesn't really matter. It's the message that's the thing not the medium. Your job as a BI analyst is to gather and present the best version of the truth that you possibly can given an imperfect model of what you're trying to represent. You solve puzzles using the scientific method. It's a discipline where you strive for objectivity asymptotically.

I designed models that priced risk, but I also spec'd out the feeds that transmitted data and drew the diagrams showing how data would be related, stored and retrieved. Only, I didn't really build any of it. I was an analyst. It was my job to derive the logic, design the spec, write the requirements. We called ourselves designers and architects because that's a pretty good analogy, but our jobs stopped at the blueprint and the craftsmen took over from there. Analysts wrote the spec, developers built the product. If you're good, you know how to find the sweet spot between too much and not enough detail and you know the dimensions of a 2x4. But at the end of the day, I was still the guy who took the requirements from the customer to the developers, a people person dammit.

But I wanted to make things. Knowing how to design something that works isn't exactly the same as knowing how to build it. After designing the inputs and outputs of BI systems for a while you develop theories about how it could be done better. You think to yourself that given a chance to build it yourself, you can do it fitter, faster, more productive. This is the worst sort of hubris of course and the gods are laughing at you while you put your boots on but I'll come back to that. So after being an analyst and researcher for almost 10 years I decided I had to learn some tradecraft. I had picked up SQL and R along the way... how much harder could app development be, I asked myself? My mom and friends already thought I was building apps like Zuck. How hard could it be? Damned hard, it turns out.

I enrolled in the second class of the Starter League during the winter of 2012. Starter League is the world's best learn-to-code school and is located in Chicago. It teaches web app development in 12 weeks. Starter League recently partnered with 37signals and is changing lives, I've seen it. All that said, everyone in my class completely sucked at web app development when we started. Without exception, we all struggled to grasp the fundamentals. Even the CS graduates didn't have a professional quality app ready by graduation. There were no "naturals". With time, many of us got better and went on to good jobs. But it was easily the hardest thing I've ever done. What programmers forget, but what is obvious to someone coming from functionally siloed enterprise work, is that these "easy" web development frameworks are polyglots of various languages, DSLs, domains, competing theories and disciplines. You have to know a ton of stuff to build anything useful. Here's a short (incomplete) list of what you need to know to launch the simplest Rails app:

- Ruby (and its gems, many of which are themselves domain-specific languages)
- Rails
- Javascript
- Unix
- Database Administration
- Dev Ops
- Application Architecture
- QA methodology

So after the Starter League, I was still just getting started. I spent a year after that working on projects with friends while working full-time in BI. During that time my wife and I had a child and I did my coding during my spare time. I spent 10-20 hours per week at night and on weekends building apps and "rm -rf"ing them just as quickly. The funny thing about building these toy apps is that I learned to love doing it just because. For the same reason my dad takes apart lawnmowers and rebuilds chainsaws, programming is absolutely wonderful work.

Then, after practicing for a year, Adam Siegel, Inkling Markets' CEO, sent me an email. He was looking for a BI guy with rails experience! I was honest and up front about my abilities. I told him that I was not a pro. He was willing to hire me anyways because he believed I could learn and because he wanted to build out his reporting and analytics. He also wanted someone that would be interested in prediction markets. Perhaps an econ background has some use after all! It sounded like a perfect fit. I believe the cliche that luck happens when preparation meets opportunity, so I did not hesitate to take the job.

So here I am, finally making stuff. Practicing my new tradecraft, learning the superpowers of the programming elite, and earning a good living. They even call me padawan. Now back to the part of the story where the gods laugh at my foolish pride. Working at a software shop is sacred. I wear my hoody with respect because you wield the power to create, destroy, and hurt yourself. It is as intense as being handed the keys to an F-14 and told, "don't crash it, kid," and it is challenging. Every day more challenging than the last. Since joining Inkling, I have written code I am proud of, but the learning curve has been steep. My predecessors are YC grads, 37signals employees, and Accenture Labs guys. The code is dense and deep. I came in wanting to design the ultimate end-to-end real-time BI solution in D3 and Ember, but have spent much of my time preparing for that day by writing tests, migrating reports to email, and climbing the very steep D3 learning curve. Another cliche I believe in is crawl-walk-run. After a couple of months, I now think I'm starting to walk, and the road is high. All in all though, it has been worth it every step of the way.

If there's anything I can pass on to would-be-makers that I've learned, it's this:

Doing analysis, reading hacker news and conducting research is good at teaching you where you're headed before you get there but it's not the real thing. If you're a good developer, respect what analysts bring to the table; if you're an analyst trying to become a developer, try to unlearn what you've learned (not that one is better, it's just a different mindset).

Don't write any code you don't understand (you'll get less done, but you will learn better)... unless you're using a gem... then that's half the point. You'll get faster.

Don't overshoot your abilities too much in job interviews. The demand for developers is good enough that if you work hard to learn your craft, you'll find a job eventually. A good employer has the long view and will hire you because they believe your good ideas will soon be met by your ability to realize them.

Pat Carolan
Engineer and Data Analyst
Inkling Markets

Thursday, May 02, 2013

New Report Features

Today we made some updates to the ‘All Trades’ and the ‘Price Changes’ reports which site administrators can begin using right away. Here is a list of the changes:

All Trades Report:

   added: stock status
   added: market status

Price Changes Report:

   added: price status
   added: stock status
   added: market status
   added: market type
   added: price id

The ‘All Trades’ report now has two new columns: stock status and market status. The ‘stock status’ column will show you the current status of the stock (also known as your possible answer) for every trade. When an answer is resolved, all trades for that stock will be flagged in the report as either ‘closed’ or ‘open’ depending on whether or not the answer has resolved. This may be different from the status of the market (the question you’re asking) in cases where multiple answers may be correct, or where the possible answers are independent of each other (for example, in the question “What will the closing price be on these days in 2013 for Google?”, the stock for the answer last day in March (0LDIM) may cease trading at the end of March, while the June, September, and December stocks continue active trading).

The ‘market status’ column, similarly, will tell you the status of the market (also known as your question). We had users coming to us requesting these new columns so they could better understand what was happening with trading in their active markets or when they needed to run historical analyses only on their closed markets.

We also added five new columns to the ‘Price Changes’ report. The price changes report shows every prediction on a site and is often used to answer time series questions about Inkling data. The new columns are: price status, stock status, market status, market type, and price id. The price status column tells you how the price was generated in the market. When you create a new market, an initial price is set by the user who generates the question. This is the question originator’s prediction and will be flagged with a status of ‘initial’. This status may be useful if you do not wish to include the starting price in your analysis. A status of ‘market’ indicates the price was derived via a trade in an active market, and a price status of ‘final’ marks the price set when the answer (stock) was resolved. Final prices are set to either 100 or 0 and are often removed when analyzing price movements. A status of ‘final: no trades’ is set when the answer was resolved without any trading having occurred for that stock.

Because different market types have very different characteristics, as a site administrator you may want to perform your analysis on only certain kinds of markets. The market type column tells you what type of market the prediction occurred in. For most of Inkling’s markets, this will be one of the following: Binary (yes or no answer), DateMarket (what specific day will something happen), DateRange (between what range of days will an event occur), Futures (the answer is a number), or Options (multiple choice).

Finally, we added the price id to give you a unique identifier for every prediction on your site. This is typically useful when you are doing analysis where you need to drill down on a single prediction or join data sets together relationally.
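To illustrate the kind of cleanup the price status column enables, here is a small sketch using a toy stand-in for a Price Changes export. The row shape and lowercased field names are illustrative only - a real export will have more columns and its own exact headers:

```python
# A toy stand-in for a few rows of the Price Changes report.
report = [
    {"price_id": 1, "price_status": "initial",          "market_type": "Binary",  "price": 50.0},
    {"price_id": 2, "price_status": "market",           "market_type": "Binary",  "price": 62.0},
    {"price_id": 3, "price_status": "market",           "market_type": "Options", "price": 38.0},
    {"price_id": 4, "price_status": "final",            "market_type": "Binary",  "price": 100.0},
    {"price_id": 5, "price_status": "final: no trades", "market_type": "Futures", "price": 0.0},
]

# Typical cleanup for time-series analysis: drop the originator's starting
# price ('initial') and the 100/0 settlement rows ('final...'), keeping
# only prices that came from actual trades.
trades = [row for row in report if row["price_status"] == "market"]

# Different market types behave very differently, so analyze one at a time.
binary_trades = [row for row in trades if row["market_type"] == "Binary"]
```

The price id then gives you a stable key if you need to join these rows back to the All Trades report.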

These changes lay some of the groundwork for some exciting features on the horizon in our graphing and reporting capabilities. Check back often to see what we’re doing to try to make Inkling’s data more accessible and insightful.

Pat Carolan
BI Specialist
Inkling Markets

Monday, April 22, 2013

Water Policy Markets Launched

A few months ago, Rod Smith of Stratecon Inc., a boutique consulting firm serving the water industry out of Claremont, CA, reached out to us with an interesting idea: create an expert network of water industry professionals, use an Inkling prediction market to ask them questions about the industry, then analyze the data and apply his firm's expertise to create an entirely new form of industry analysis. Stratecon also plans to offer private prediction markets to its clients that will be for employees of those organizations only.

A couple weeks ago, Stratecon soft launched its offering here. And here is a press release about the launch.

We're excited to be working with Stratecon and think the idea of using prediction markets in conjunction with expert networks is a promising idea we'll be exploring a lot more in the coming months.

Friday, February 22, 2013

Crowdsourced Academy Awards Predictions

This Sunday evening the stars will all come together in Hollywood as they always do once a year for the 2013 Academy Awards.

Every year in our public prediction market someone publishes questions about practically every category awarded, and this year was no different.

We decided to have a little fun this year though and create a separate site just to showcase Academy Awards predictions. You can see it here:

After working up the design, building the site was a breeze. We used widgets from the public site to both display the real-time predictions and to allow visitors to make their own predictions. The widgets are also built to allow you to customize their design via CSS so they can match whatever site you're displaying them at.

Because all of the interaction and calculations are handled on the Inkling side, we then just needed to build a "static" site with some HTML, CSS, and Javascript. Instead of hosting it ourselves however, we decided to just host the site at Amazon S3. A recently introduced domain service called Route 53 made this even easier.

This was a fun project to work on and I suspect we'll be doing more of these in the future. March Madness anyone?

Wednesday, November 28, 2012

Intrade's Unfortunate Encounter With the CFTC

Eric Zitzewitz, an Associate Professor at Dartmouth and an expert in prediction markets, wrote a great opinion piece for Bloomberg about why Intrade should not be sued by the CFTC and, consequently, why U.S. citizens should still have access to participate in Intrade. You know, just like the rest of the world does.

Sunday, November 04, 2012

Inkling Predictions iOS Update

Version 1.0.1 of Inkling Predictions is now available in the app store.

We fixed several bugs and made the app compatible with the iPhone 5.

As always if you have any feedback or feature suggestions for us, be sure to let us know.

Wednesday, September 12, 2012

Inkling Predictions, our iOS app is now available

After getting through a couple review hurdles with Apple, our iOS app for Inkling is now available!

"Inkling Predictions" can be downloaded to your iOS device from here:

The app can be used to access our public site at or to access any corporate site you're already a member of.

You can make predictions, review your dashboard, view and make comments for any question, and see your balances.

We look forward to continuing to develop more features and see our app as overcoming a big hurdle in making it easier for anyone to make predictions.

Thursday, August 30, 2012

Front loaded incentives using fantasy currency

As a software product considered "optional" to use in most settings, we have to really worry about incentives for people to participate, both as part of our software (game mechanics) and what we advise our clients to do outside the software, i.e. recognition for participation, prizes, etc.

With that in mind I've been thinking a lot about an article that appeared on TechCrunch a couple weeks ago about a study done at the University of Chicago regarding the level of performance of public school teachers under two different incentive programs.

In the first group, teachers were rewarded at the end of the year based on their students' performance on a standardized test. For every percent improvement over their school district's average, they would receive up to $4,000.

In the second group, teachers were told they had been given $4,000 at the beginning of the year, and that the amount would be reduced based on how their students did. Their students' improvement over the average determined how much of that money they could keep.

As you may have guessed from the title of this blog post, the teachers who had something to lose performed better: their students were on average 10% better than the district average. The teachers who were offered a bonus at the end of the year showed no improvement.

Here was the money quote (no pun intended):

"The results of our experiment are consistent with over 30 years of psychological and economic research on the power of loss aversion to motivate behavior: Students whose teachers in the 'loss' treatment of the experiment showed large and significant gains in their math test scores," said List, the Homer J. Livingston Professor in Economics at UChicago.

Current "best practices" about incentives in software usage suggest a diverse approach of rewards, multiple leader boards, a certain rate and style of marketing and status communication, and development of community and interaction.

While there are many things we could improve in Inkling, we're already doing a lot of this, with mixed results. These types of incentives seem to appeal to a certain psychological profile, but not to a majority, so application providers, including us, are left just trying to get a maximum number of registrants so we can end up with a respectable number of active users. The research about teachers' performance is encouraging because they were surely a very diverse pool of people psychologically, yet this one basic "carrot" seemed to work incredibly well.

So how can those learnings be applied to Inkling and perhaps more broadly to application development in general?

Here are some ideas that have been rattling around:

  • Since everyone starts out with 5,000 inkles, we could introduce a "tax" that charges you monthly based on your usage of the software. If you've made X number of predictions, you get a "tax exemption"; if you haven't, you start to lose your inkles and have less predictive power in the application.
  • We could generalize this mechanism to whatever behavior we're trying to promote: logging in, making comments, sharing the site with a friend, etc.
  • Perhaps "tax" has too negative of a connotation. We could replicate what the University of Chicago did with the teachers and give people a bonus at the beginning of each month that they lose unless they make predictions, comments, etc.

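As a rough sketch of how such a monthly "use it or lose it" rule might work — all of the names, thresholds, and amounts here are invented for illustration, not anything Inkling actually implements:

```python
# Hypothetical monthly "activity tax": users who made enough predictions
# keep their inkles; inactive users lose a flat amount each month.
EXEMPTION_THRESHOLD = 10   # predictions per month needed for the "tax exemption"
MONTHLY_TAX = 250          # inkles an inactive user loses

def apply_monthly_tax(balance, predictions_this_month):
    """Return the user's inkle balance after the monthly tax pass."""
    if predictions_this_month >= EXEMPTION_THRESHOLD:
        return balance                     # active users are exempt
    return max(0, balance - MONTHLY_TAX)   # inactive users pay, but never go negative

print(apply_monthly_tax(5000, 12))  # active user keeps 5000
print(apply_monthly_tax(5000, 2))   # inactive user drops to 4750
```

The same shape works for the bonus framing: grant the 250 at the start of the month and claw it back if the threshold isn't met — mathematically identical, but framed as a loss, which is the whole point.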
We would have to be careful, though, that we're not incentivizing behavior we don't want — i.e., people making "garbage" predictions just to avoid paying the tax or losing some of their bonus. That means a tax or bonus structure would have to be based on people's performance. But conveniently, performance can't be evaluated unless they exhibit the behaviors you want anyway.

More generally, perhaps the next generation of incentives in software applications will introduce their own currency specifically for this purpose. For example, applications are always bugging you to complete your profile or to do Facebook Connect, and usually just show a status bar or reminder text that you "haven't completed 100% of your profile." What if, instead, when you signed up you were given 1,000 units of fantasy money that you begin to lose if you don't do these things within a certain period of time? And if you do do them, the fantasy money can be cashed out for stuff: a waiver of Airbnb fees on a rental, an extra InMail on LinkedIn, an extra 1GB of space in Dropbox.
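A minimal sketch of that front-loaded grant, assuming each onboarding task reserves part of the starting balance and the stake is forfeited if its deadline passes — the task names, stakes, and deadlines are all hypothetical:

```python
from datetime import date, timedelta

# Hypothetical onboarding tasks: each reserves a stake from the signup
# grant, forfeited if not completed within the stated number of days.
TASKS = {
    "complete_profile": {"stake": 400, "days": 14},
    "connect_facebook": {"stake": 300, "days": 14},
    "invite_a_friend":  {"stake": 300, "days": 30},
}

def remaining_grant(signup_date, completed, today):
    """Fantasy money the user still holds: completed tasks lock in their
    stake; pending tasks keep it only until their deadline passes."""
    total = 0
    for task, terms in TASKS.items():
        deadline = signup_date + timedelta(days=terms["days"])
        if task in completed or today <= deadline:
            total += terms["stake"]
    return total

signup = date(2012, 1, 1)
print(remaining_grant(signup, {"complete_profile"}, date(2012, 1, 10)))  # 1000: nothing forfeited yet
print(remaining_grant(signup, {"complete_profile"}, date(2012, 2, 15)))  # 400: two stakes forfeited
```

The key design choice is that the balance is visible and shrinking — the user watches money they already "have" tick away, rather than being promised a reward later.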

Continuing with the profile example, I'm sure these companies have quantified, in real dollars, what a complete profile is worth, because those users can be marketed to more effectively. Assuming the economics work — say, 10% more profile completions — there would be a nice business case for doing this.

Companies like Kiip are already doing something like this, but by having users earn "recognition" as they achieve things in applications. A fantasy currency enables the more effective reverse approach: the incentive is front-loaded and yours to lose. Currency is also much more flexible, because you can begin to use it in other parts of your application — "earn X by doing Y."