By Jack Such
This past election season was the first of its kind in modern American politics. I’m not even talking about the multiple assassination attempts or the endless memes about coconut trees and what JD Vance does to couches in his free time — no, what really made the 2024 election special was that, for the first time in living memory, you were legally allowed to bet on it.
In early October, just a month before the election, a U.S. district court ruled that elections do not fall under the prohibited category of “gaming,” opening the door for companies like the one I work for, Kalshi, to host financial markets (otherwise known as “prediction markets”) on their outcome.
This decision was controversial, to say the least. Detractors had a few different reasons for opposing the ruling, ranging from a baseline resistance to more betting entering American society to concerns about who exactly is participating in these markets.
Some detractors even went as far as claiming election markets are a direct threat to democracy since they would financially incentivize people in the markets to try to rig the election for their candidate. I am admittedly a biased source, but this argument always struck me as particularly devoid of logic — there are a million reasons why people want elections to go one way or another, and money is the least of those. Plus, what could people even do to change the outcome of an election that isn’t already being done through campaigning or prevented through election safety procedures? I completely understand being upset about money in politics, but the key fight on that front is overturning Citizens United, not banning prediction markets.
Proponents, such as myself, believe that prediction markets are a great benefit to society. People who trade in the markets can profit from their opinion, and people and businesses that are affected by the outcome (so, basically everybody) can use the markets to hedge the outcome in case it doesn’t go their way.
However, the most compelling reason to support the widespread adoption of prediction markets is that they produce information that is incredibly accurate. More on how this is possible later, but the fact that prediction markets produce accurate probabilities of future events (that are free to look at!) means that everyone, not just the people who use them, benefits from their existence. In this regard, they are a true public good.
Since prediction markets produce high-quality information on current events, they are a particularly important tool for the media — especially the ever-shrinking sliver of the media that is legitimately concerned with accurate reporting and minimizing bias. As a longtime reader, I know that Tangle falls squarely into this category, so during the run-up to the election I approached the team about incorporating the prediction market odds into their coverage. I felt they were underrating Trump’s chances of victory and argued that their readership would benefit from the information produced by the markets. They were hesitant, but offered a consolation prize: after the election, I would get a Sunday edition to explain what we learned from prediction markets’ mainstream debut in American elections. In this piece, I’m going to do just that. Below, you’ll find a recap of the markets’ performance, the philosophy behind the creation of prediction markets, and why I believe they are a critical technology arriving at the perfect time to help save our rapidly declining media landscape.
2024 & The History of Election Markets.
Aside from legalized betting, another noteworthy aspect of the 2024 election was how competitive the race was supposed to be. I’m sure we all remember the narrative during the final stretch of the campaign; for weeks, polls and pundits across networks deemed the race a dead heat that was just too close to call. This all culminated in the infamous Nate Silver meta-analysis, where he used polling data to simulate the election 80,000 times and found that Harris won in 40,012 (50.015%) of them.
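For readers curious what “simulating the election 80,000 times” actually involves, here is a minimal sketch of the general idea in Python. It is emphatically not Silver’s model (which layers in polling averages, demographics, and correlations between states); the states, margins, and error sizes below are invented purely for illustration, and it treats state outcomes as independent, which real forecasts do not.

```python
import random

# Toy sketch of a poll-driven election simulation (NOT Nate Silver's actual model).
# Each swing state's polled margin is treated as uncertain, an outcome is sampled
# per state, and electoral votes are tallied. Every number below is illustrative.
SWING_STATES = {
    # state: (electoral votes, polled Harris-minus-Trump margin, in points)
    "PA": (19, 0.2),
    "MI": (15, 0.8),
    "WI": (10, 0.5),
    "GA": (16, -1.0),
    "NC": (16, -0.9),
    "AZ": (11, -1.5),
    "NV": (6, 0.3),
}
SAFE_HARRIS_EV = 226      # electoral votes assumed safe for Harris (illustrative)
POLL_ERROR_SD = 3.5       # assumed standard deviation of polling error, in points

def simulate_once() -> bool:
    """Simulate one election; return True if Harris reaches 270 electoral votes."""
    harris_ev = SAFE_HARRIS_EV
    for ev, polled_margin in SWING_STATES.values():
        realized_margin = random.gauss(polled_margin, POLL_ERROR_SD)
        if realized_margin > 0:
            harris_ev += ev
    return harris_ev >= 270

N = 80_000
harris_wins = sum(simulate_once() for _ in range(N))
print(f"Harris wins {harris_wins:,} of {N:,} simulations ({harris_wins / N:.2%})")
```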
The prediction markets, however, told a completely different story. For most of October, Trump was a significant favorite in the markets, at one point crossing over a 65% likelihood to win. When the polls opened on Election Day, he was sitting around 57% to win — still a close race, but a stark contrast to what the polling data showed.
Of course, we all know what happened next. Trump won decisively, taking the popular vote and sweeping every swing state while the GOP captured both the House and Senate. The markets certainly didn’t predict all of that, but even in races where they had the wrong favorite, their odds were still closer to the mark than the polls’ in almost every key contest. Here are the Kalshi odds for each swing state and the presidential race at the time the voting booths opened, compared to the implied odds based on 538’s polling analysis:
Only in Michigan did the polls produce a more accurate forecast of who would win (by virtue of being less confident in a Harris victory than Kalshi traders were).
You might be asking, “Ok, so what? Many of these percentages are relatively similar to the polls. Is this really evidence that the markets are better at forecasting election outcomes?”
Not necessarily, but it’s a small part of a growing body of evidence that market-based mechanisms are the superior method of predicting future outcomes. Beating the polls in seven out of eight key races in 2024 is just the tip of the iceberg; markets have a long history of consistently doing so.
Consider the longest-running academic study available on the subject, a sweeping analysis that measured polling vs. prediction markets for the ’88, ’92, ’96, ’00, and ’04 presidential elections (the first proto-prediction markets, the Iowa Electronic Markets, began in 1988). The study found these markets outperformed polls 74% of the time. To operate as an academic project, these markets weren’t allowed to have many traders, so perhaps it’s just chance, but outperforming the polls 74% of the time across five election cycles seems too significant an effect to be explained away by sample-size variance.
A slew of further research broadly aligns with those findings. There are also examples of other proto-prediction markets (which lacked the deep liquidity and clearer regulatory status that modern ones like Kalshi enjoy) routinely beating the polls. The now-defunct InTrade, which hosted markets on the 2008 election races, was the subject of a lengthy analysis that found “InTrade yielded probabilities that were more accurate than polls, and were particularly good at picking the winner early and in close races,” and another analysis from 2012 found the same.
It’s also worth noting that this accuracy extends beyond elections. A 2023 analysis of Kalshi’s markets on the Federal Reserve found that FedWatch (one of the most widely used tools on Wall Street for predicting Fed decisions) had an error rate nearly double Kalshi’s.
Interesting… but I must just be cherry-picking examples of prediction markets’ success, right?
Wrong. There’s actually a simple way to empirically measure how accurate prediction markets are as a whole. The technical term for this is market calibration: for markets trading at a given price, it measures how often they actually resolve YES and compares that to the frequency the price implies. Markets that trade at 20% should resolve YES roughly 20% of the time, markets that trade at 80% should resolve YES 80% of the time, and so on.
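As a rough illustration of the mechanics, here is a minimal sketch of how a calibration check could be computed from resolved markets. The handful of data points in it are made up; a real calibration analysis, like the one described below, draws on thousands of markets.

```python
from collections import defaultdict

# Minimal sketch of a calibration check. Each record is (final YES price in cents,
# whether the market ultimately resolved YES). The sample data here is made up;
# a real calibration analysis uses thousands of resolved markets.
resolved_markets = [
    (18, False), (22, False), (24, True),
    (48, False), (52, True),
    (79, True), (81, True), (84, False),
]

buckets = defaultdict(list)
for price_cents, resolved_yes in resolved_markets:
    buckets[10 * round(price_cents / 10)].append(resolved_yes)   # 10-cent buckets

for bucket in sorted(buckets):
    outcomes = buckets[bucket]
    observed = sum(outcomes) / len(outcomes)
    # A well-calibrated market priced around X cents should resolve YES about X% of the time.
    print(f"priced ~{bucket}c: resolved YES {observed:.0%} ({len(outcomes)} markets)")
```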
If a prediction market were perfectly accurate, a graph of its calibration would look like a straight line with a slope of 1. Lo and behold…
That is about as close as you can get to perfect accuracy, and the sample size behind it is thousands of markets. If you still aren’t convinced, maybe some historical context will do the trick. Kalshi’s regulatory ruling was indeed historic, but we can’t claim to have hosted the first election market in America — betting markets on American elections actually date all the way back to the 1800s. Yes, you read that right. Before gambling got lumped in with other “sins” like drinking during the Prohibition Era (and subsequently banned), betting markets were the primary indicators of election winners. Just take a look at this clipping from The New York Times in 1904, quoting Andrew Carnegie as being “sure” of Teddy Roosevelt’s election because of the betting markets!
So, while it may seem strange to us in modern times, betting markets actually have a longer history than polls do at predicting American elections (and, curiously enough, one study shows that the markets were actually more accurate before the introduction of polling).
All that is to say, anyone previously familiar with prediction markets was utterly unsurprised to see them outperform the polls in 2024. One election cycle doesn’t make prediction markets a definitive replacement for polling or expert analysis, but given their long record of accuracy, their performance in 2024, and their consistency across categories, it should be clear by now that, at minimum, they deserve to be treated as a valuable information source during election season — if not the default best.
The more important question is…how? Why is a market able to outperform a poll or an expert?
How & Why Do Prediction Markets Work?
Before we dive into what makes prediction markets so effective, it’s worth first acknowledging the limitations of their competition: pundits and polling.
I won’t say too much about pundits — I think we can all agree at this point that a political science degree or a job on a major news network is not at all indicative of an ability to predict or analyze American politics. For evidence on this point, consider Philip Tetlock, whose book Expert Political Judgment recorded the results of one of the largest studies of media analysts ever conducted and provided inspiration for the modern prediction market movement.
Tetlock’s study analyzed more than 88,000 predictions from media experts across economics and politics over a span of more than a decade, and compared those predictions to what eventually happened. The results? Not only were the experts not very accurate, they were actually worse than random chance would have been — or, as Tetlock put it, worse than “a monkey throwing darts” at the correct outcome.
So, in general, the experts aren’t very expert. Feel free to ignore the talking heads! But what about polling? Why can’t polling live up to the successes of prediction markets?
First, it’s important to recognize that prediction markets and polls are measuring different things. An election market asks a trader who will win the election; a poll asks a respondent who they want to win, and then extrapolates those responses to predict a winner. The superiority of prediction markets starts with the fact that they are simply asking a more direct question.
Beyond that, though, polling in the modern era of politics has simply collapsed in efficacy. (I’m defining the modern era as the “billions of people on social media” era, which covers the last four election cycles.)
In 2012, the polls weren’t terrible, but they weren’t particularly impressive either: they predicted a slight Obama victory, when in reality Obama won comfortably, taking every swing state but North Carolina. Then, of course, we have the infamous 2016 election, where the polls had Clinton as a heavy favorite, only to be wildly wrong on Election Day as Trump pulled out an upset victory. In 2020, polling correctly predicted a Biden victory, but measured by the error in vote margin, 2020 was actually worse than 2016 and the least accurate in 40 years! Then, of course, we have the 2024 “dead heat” that never was. Year after year, cycle after cycle, polling continues to fail — and there’s a strong argument to be made that it’s only getting worse. There is no single explanation for why, but a number of factors likely contribute to the problem.
First, people simply don’t trust polls anymore — nor do they care to respond. Declining response rates, particularly among conservatives, have frustrated pollsters for years, and social media has only exacerbated the trend. Combine that with anecdotal evidence that some conservatives will intentionally lie to pollsters rather than simply not respond, and you have a recipe for polls consistently understating conservative support, which is exactly what has happened in every Trump election cycle.
Second, the pollsters themselves are unreliable narrators of their own data. The first culprit is herding: the phenomenon of pollsters ignoring their own data in favor of converging on what the other polls are saying. Pollsters have bosses just like the rest of us, and a result that sounds too crazy or lands too far off the mark is a threat to future funding, so many pollsters simply copy what everyone else is saying to avoid that risk.
Even for the pollsters who aren’t herding, the weighting choices they have to make render polls a fairly unreliable source of information. To portray the whole population accurately from a small sample, pollsters must weight their data to match the electorate demographically, politically, and in terms of how likely certain people are to vote. It turns out that even simple changes to how a poll is weighted can move the results by almost 8 points! That’s not exactly high-quality information when a simple tweak can completely change the insight.
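To make that concrete, here is a toy example, with invented numbers, of how two reasonable-sounding turnout assumptions applied to the exact same raw responses produce noticeably different toplines.

```python
# Toy illustration (all numbers invented) of how weighting choices move a poll's topline.
# The same raw responses, split by one demographic variable:
raw_sample = {
    #  group:     (number of respondents, share supporting Candidate A)
    "under_45": (300, 0.58),
    "45_plus":  (700, 0.46),
}

def weighted_topline(assumed_turnout_shares: dict) -> float:
    """Reweight the raw responses to an assumed composition of the electorate."""
    return sum(share * raw_sample[group][1]
               for group, share in assumed_turnout_shares.items())

# Two plausible-sounding assumptions about who actually shows up to vote:
youth_heavy = {"under_45": 0.45, "45_plus": 0.55}
older_heavy = {"under_45": 0.30, "45_plus": 0.70}

print(f"Candidate A, youth-heavy turnout model: {weighted_topline(youth_heavy):.1%}")  # 51.4%
print(f"Candidate A, older-heavy turnout model: {weighted_topline(older_heavy):.1%}")  # 49.6%
# Same raw data, nearly a 2-point swing from a single weighting decision, before
# any of the other modeling choices pollsters have to make.
```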
Weighting touches on the more fundamental reason that polls are untrustworthy: they aren’t measuring the right thing. Polls take small samples and attempt to project those findings onto the entire American electorate. Essentially, polls try to create a signal from the noise… but who wins elections is determined by the noise itself: the simultaneous decisions of millions of individual voters.
To get some sort of signal on the simultaneous actions of ~140 million people, you need a higher-level definition of signal and a more direct way to measure it (hint: like a prediction market).
Ok, so pundits are not to be trusted, and polls aren’t working either. But why exactly is it that prediction markets do?
The brilliance of prediction markets is that they are a combination of two powerful forces: “wisdom of the crowd” and “skin in the game.”
Wisdom of the crowd is a very simple concept: collective intelligence trumps individual intelligence, and the larger the collective, the better the intelligence. This concept actually dates back to Aristotle’s Politics, but I prefer to explain it using the Scholastic Book Fair.
Like many children enrolled in American public schools, I loved it when the Scholastic Book Fair came to town once a year. At the fair, my school would run a competition. They placed a ton of M&Ms in a massive jar, and anyone who correctly guessed how many M&Ms were in the jar won a free book (and the M&Ms). Every year, I failed to guess the right amount — in fact, during my entire tenure at Escondido Elementary, not one kid ever won the prize.
However, something very curious always happened. When the teachers would add up everyone’s guesses and average them out, it was always deadly accurate — within single digits in a jar of hundreds of M&Ms.
That's the wisdom of the crowd on full display, and part of why prediction markets outperform polls. Ask a small sample of people who they think will win the election, and you may get a decent answer. Ask tens of thousands, and suddenly the answer starts to get accurate.
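Here is a quick toy simulation of that jar-guessing effect, with invented numbers: each guesser is individually far off the true count, but the average of many guesses lands much closer.

```python
import random

# Toy simulation of the jar-guessing effect (all numbers invented).
# Individual guesses scatter widely around the true count, but their average
# lands far closer to the truth than a typical individual guess does.
random.seed(7)
TRUE_COUNT = 842                      # actual number of M&Ms in the jar
guesses = [random.gauss(TRUE_COUNT, 250) for _ in range(500)]

crowd_average = sum(guesses) / len(guesses)
typical_individual_miss = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"Typical individual miss: about {typical_individual_miss:.0f} M&Ms")
print(f"Crowd average miss:      about {abs(crowd_average - TRUE_COUNT):.0f} M&Ms")
```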
A related point here is that prediction markets incorporate information that isn’t public knowledge; for example, someone in a key swing county might have noticed more Trump yard signs than expected and placed a trade accordingly. Essentially, the overall wisdom of prediction markets contains more than just the crowd’s opinion of public knowledge; it also includes the sum of the crowd’s scattered private knowledge.
But wait, what does any of that have to do with financial markets? If wisdom of the crowd leads to accuracy, couldn’t you just change the poll to a survey on who people think would win, ask as many people as possible, and achieve the same result as a prediction market?
No, because you would still be missing the most critical aspect of prediction markets: “skin in the game.”
To have skin in the game is to have personal risk attached to a certain outcome. Poll respondents face zero consequences for giving false answers (while pundits on social media are actively rewarded for doing so) and get zero reward for giving truthful ones. On a prediction market, though, people face real risk and real reward depending on whether their predictions are wrong or right. Be smart and accurate, and you get rich. Be wrong, and you get punished.
This is the critical element that weeds out the inaccuracies in prediction markets. Because of the financial risk and reward, traders in prediction markets are strongly incentivized to drill down to the exact probability that an event will occur (or, as Scott Alexander puts it, “Either prediction markets are right, or you can get rich”).
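A small worked example (with hypothetical prices and beliefs) shows how that incentive plays out: a trader who thinks the market price understates the true probability profits, in expectation, by buying, and that buying pushes the price toward their estimate.

```python
# Hypothetical worked example of the skin-in-the-game incentive.
# A YES contract pays $1.00 if the event happens. Suppose the market price is
# 57 cents but a trader honestly believes the true probability is 65%.
price = 0.57      # current market price of YES, in dollars
belief = 0.65     # trader's estimated probability that the event happens

expected_payout = belief * 1.00       # expected value of holding one YES contract
expected_profit = expected_payout - price

print(f"Expected payout per contract:  ${expected_payout:.2f}")
print(f"Expected profit buying at 57c: ${expected_profit:.2f} per contract")
# If the trader's estimate is right, each contract earns about 8 cents in
# expectation; if it is wrong, the trader eats the loss. Trades like this are
# what push the market price toward the best available probability estimate.
```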
This incentive creates the true value behind prediction markets. Having skin in the game means that traders on prediction markets are forced to see past media narratives, personal biases, and any other source of potential reality-bending. Or, they’ll lose money.
By combining an incentive for accuracy with the emergent wisdom of a large enough crowd, prediction markets are able to outperform the other methods of predicting the future.
Why Prediction Markets Matter.
Ok, so prediction markets are remarkably accurate, but so what? How does this actually affect the average person’s life for the better, and why should anyone care about encouraging the adoption of prediction markets? Well, I can’t explain why everyone should care, but I can explain why I do, and I think many of you will agree.
Frankly, I look at the state of the world and feel almost nothing but fear and disappointment. The internet has wildly exacerbated all of the worst traits of pre-internet media. Traditional media is forced to be partisan and drive agendas, lest it lose its viewership to political influencers. Foreign powers can easily buy those influencers to do their bidding, or simply run networks of bots that accomplish the same goal (that account on your feed spreading an outrageously hot take? Yeah, it’s probably this guy). Outright lying is becoming more and more commonplace among our elected officials, as is outright disrespect for the norms and unwritten rules that once held our government and population together.
Essentially, we’ve become a country of parrots (or, more accurately, a country of two separate clans of parrots with a violent hate for each other). This breakdown of the American social contract has serious consequences, and I suspect many of you can feel these effects. I certainly can.
I’ve spent my life moving back and forth between deep-blue California and deep-red Tennessee. I have family in both places, friends in both places, and I’ve attended schools in both places. After spending enough time entrenched in each side of American political culture, I can confirm without a doubt in my mind: There is no shared reality anymore.
My history textbooks were filled with tales about America uniting itself to go to the moon, or stop the spread of communism, or defeat the Nazis, but whatever shared sense of American political identity existed in the 20th century has now been completely eroded by the 21st century media environment.
That may sound hyperbolic, but unfortunately it isn’t — you can see evidence of this in random interactions online, in long-running Gallup polls that track faith in the media and other American institutions, and in measures of polarization over time. My favorite example is economic sentiment polls, and you can most likely see it in your own day-to-day life. Modern media has completely changed the game.
There are still islands of media that do things the right way; you’re reading these words because you found one of them. But the scale of the problem is simply too large. Those of us who care about truth and stability need an ace up our sleeve.
And that’s where the true value of prediction markets lies.
Financial hedging and being able to call elections a few hours early are great, but the real importance of prediction markets is that they create a shared reality. In fact, they may be the last bastion of shared reality that we have.
No matter your race, religion, age, gender, political affiliation or — critically — your media diet, when you look at a prediction market, you are looking at the same number as everyone else. Tariffs, recessions, egg prices, government spending, the Epstein files, climate change — any important aspect of the world’s future can now be distilled down to a maximally bias-free likelihood that it will actually happen. No narrative, no agenda.
That’s as close to capital-T Truth as you can get — and in this day and age, there’s nothing more precious than that.
Jack Such is a member of the Kalshi growth team and a graduate of the University of Washington. You can read more of his writing here.