Prediction markets are good at compressing beliefs into a single number. The price is the world's best guess.
But prediction markets have two problems holding them back. Both come from the same place: trading.
One: the price is public. The moment a prediction market has a price, everyone can see it. If your competitor sees the same number you do, there's no edge. No one pays for public information, so prediction markets only survive where gambling demand can support them. Elections, sports, memes. The questions institutions actually care about (geopolitical risk, supply chain disruptions, regulatory timelines) go unanswered. No one's paying to keep those markets alive.
Two: getting information into the market requires betting. A nurse notices a spike in respiratory cases. A construction worker sees a new project breaking ground. A commuter spots a pothole the city doesn't know about. These people know something, but prediction markets ask them to gamble. Put up capital, take risk, hope someone's on the other side of the trade. Most people won't do that, so the information never gets in.
Matt Liston (cofounder of Augur and Gnosis) has an idea: put a wall between the people who trade and the people who provide information. Traders see the price. Sourcers don't. Sourcers just submit what they know. Prices stay private, so institutions might actually pay for the signal.
He calls it Cognitive Finance. It fixes the first problem. But it opens a new one:
If sourcers can't trade, how do you pay them? And how do you know which information actually helped?
Both come down to the same counterfactual: what would forecasters have believed if they hadn't seen this claim?
That's what I'm trying to solve. I call it Randomized Counterfactual Credit.
Think of it like a randomized trial: you run a bunch of forecasters on the same question. They don't talk to each other. When a new claim comes in, you flip a coin. Some forecasters see it, some don't. You snapshot everyone's belief at that moment. After the event resolves, you score those snapshots: did the forecasters who saw this claim predict better than those who didn't, at the moment it arrived?
If yes, the claim helped. Pay the person who submitted it.
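Here's the shape of that loop in code. This is a minimal sketch, not the actual implementation: `Forecaster`, `expose_claim`, and `reforecast` are names I'm using for illustration, and the 50/50 coin flip per forecaster is just one way to split the panel.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Forecaster:
    """One forecaster on the panel (in my toy run, one LLM). `belief` is its current P(YES)."""
    name: str
    belief: float = 0.5
    seen: set[str] = field(default_factory=set)

def expose_claim(claim_id, claim_text, forecasters, reforecast, rng):
    """Flip a coin per forecaster; only the winners see the claim and re-forecast.

    `reforecast(forecaster, claim_text)` is whatever produces the updated probability
    (for me, a fresh LLM call). Returns a snapshot taken the moment the claim arrives:
    name -> (belief right now, did this forecaster see the claim?).
    """
    for f in forecasters:
        if rng.random() < 0.5:                    # the coin flip
            f.seen.add(claim_id)
            f.belief = reforecast(f, claim_text)  # treated half updates; control half doesn't
    return {f.name: (f.belief, claim_id in f.seen) for f in forecasters}

# Example with a dummy update rule standing in for the LLM call:
rng = random.Random(0)
panel = [Forecaster(f"llm_{i}") for i in range(25)]
snapshot = expose_claim(
    "claim_01",
    "Reddit photos: driverless Model 3s circling Austin",
    panel,
    reforecast=lambda f, text: min(f.belief + 0.1, 1.0),  # placeholder, not a real forecaster
    rng=rng,
)
```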
You can score predictions with something called a Brier score. Lower is better. 0 is perfect, 1 is completely wrong. For each claim, compare the average Brier score of forecasters who saw it vs. those who didn't.
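Written out (the notation is mine; the sign is chosen so positive means the claim helped, and the simplest version just takes unweighted means):

$$
\mathrm{Brier} = (p - y)^2,
\qquad
\mathrm{RCC}(c) = \overline{\mathrm{Brier}}_{\,\text{didn't see } c} - \overline{\mathrm{Brier}}_{\,\text{saw } c}
$$

where p is a forecaster's snapshotted probability at the moment claim c arrived and y is the resolved outcome (1 for YES, 0 for NO).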
Claims are scored based on when they arrived. A claim that shifts beliefs early gets more credit than the same claim arriving after everyone already knows.
Positive means the claim helped. Negative means it hurt. Zero means noise.
Payouts are simple: take all the positive RCC scores, normalize them, split the bounty. If your claim hurt predictions, you don't lose money, you just don't get paid.
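The scoring and payout step, with the same caveat: `brier`, `rcc_score`, and `payouts` are sketch names, and `time_weight` is a placeholder for the earlier-is-worth-more weighting (the right decay schedule is an open question).

```python
def brier(p: float, outcome: int) -> float:
    """Squared error of a snapshotted probability against the resolved outcome (1 = YES, 0 = NO)."""
    return (p - outcome) ** 2

def rcc_score(snapshot: dict, outcome: int, time_weight: float = 1.0) -> float:
    """Control-minus-treated Brier gap for one claim, at the moment it arrived.

    `snapshot` maps forecaster name -> (belief when the claim arrived, saw it?).
    `time_weight` is a placeholder for the earlier-is-worth-more weighting.
    Positive = the forecasters who saw the claim were closer to the truth.
    """
    saw = [brier(p, was_shown) if False else brier(p, outcome) for p, was_shown in snapshot.values() if was_shown]
    missed = [brier(p, outcome) for p, was_shown in snapshot.values() if not was_shown]
    if not saw or not missed:
        return 0.0  # no comparison group, no credit either way
    return time_weight * (sum(missed) / len(missed) - sum(saw) / len(saw))

def payouts(rcc_by_claim: dict, bounty: float) -> dict:
    """Normalize the positive RCC scores and split the bounty; negative scores just earn nothing."""
    positive = {c: s for c, s in rcc_by_claim.items() if s > 0}
    total = sum(positive.values())
    if total == 0:
        return {c: 0.0 for c in rcc_by_claim}
    return {c: bounty * positive.get(c, 0.0) / total for c in rcc_by_claim}
```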
I ran a toy example: "Will Tesla begin commercial Robotaxi service in any US city by June 30, 2026?"
I ran 25 LLMs, fed them 18 made-up claims one at a time, and for each claim flipped a coin to decide which LLMs saw it. Then I resolved the market as YES.
The claim that scored highest? A Reddit post. Someone posted photos of Model 3s driving around Austin with no one behind the wheel. The claim that scored lowest? Official government data on how often Tesla's driving system fails and a human has to take over. One claim that sounded bad for Tesla (local drivers protesting robotaxis) actually scored high, because the protest coverage mentioned Tesla was seeking permits for 500 vehicles.
It doesn't matter if a claim sounds good or bad. It matters if it helps predict what happens.
This doesn't prove much. I made up the claims and chose the outcome. But it shows what RCC is measuring, and that you can do it without anyone placing a bet.
Liston's wall solves who gets to see the signal; RCC solves how the people who provide it get paid.
Whether this actually works is a question for real data and a real deployment. This is a sketch.
If you want to take a closer look at it or tell me what I'm missing, the code is here.