Introducing Upshot One, a Q&A Protocol
tl;dr: Upshot One is a question & answer protocol built on Ethereum. It leverages a field of mechanism design called peer prediction to incentivize honest answers to questions.
When we ask a question, we naturally expect a “correct” answer in return. For general queries on the Internet, and especially on blockchains, that expectation has repeatedly proven presumptuous.
A number of services have arisen to make this expectation realistic, but each has its pitfalls. They tend to require trust (e.g. major journalism outlets, whitelisted authorities), offer no guarantees on the quality of gathered information (e.g. most crowdsourced media, such as Twitter polls or Amazon reviews), or are complex and economically expensive (e.g. dispute bonds and voting). To our knowledge, few popular approaches even define what a “quality answer” is in a way that most people would agree with.
The question remains: “How can we incentivize people to answer questions honestly?”
In previous approaches to answering questions, the quality of answers is assessed either by an ambiguous notion of credibility or through expensive mechanisms. Peer prediction takes a ground-up approach to this dilemma by examining the structure of the answers themselves.
Peer prediction is a nascent field of mechanism design focused on eliciting quality answers, and peer prediction mechanisms are distinguished from one another by how they assess quality. Put simply, they assess quality using a measure of mutual information (a generalization of correlation): the more honest a set of answers is, the more mutual information it admits. Participants are compensated for answering questions, and the mutual information between their answers determines how well they are paid: participants whose answers admit high mutual information earn more. This creates a clear incentive both to answer questions and to answer them honestly.
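To make the intuition concrete, here is a minimal, illustrative sketch (not Upshot One's actual scoring code) of why mutual information rewards honest, informed answers: respondents who observe the same underlying truth produce correlated answers with high empirical mutual information, while a respondent who answers at random contributes almost none.

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two answer lists."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

random.seed(0)
truth = [random.choice("YN") for _ in range(10_000)]
# Two honest respondents observe the truth with a little noise.
honest_a = [t if random.random() < 0.9 else random.choice("YN") for t in truth]
honest_b = [t if random.random() < 0.9 else random.choice("YN") for t in truth]
# A lazy respondent answers at random, ignoring the questions.
lazy = [random.choice("YN") for _ in truth]

mi_honest = mutual_information(honest_a, honest_b)
mi_lazy = mutual_information(honest_a, lazy)
print(mi_honest, mi_lazy)  # honest pair is far higher
```

Under a payment rule that increases with mutual information, the honest pair is paid well while the lazy respondent earns roughly nothing, which is exactly the incentive described above.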
Peer prediction is a means of identifying high-quality answers. With it, we can build a protocol that efficiently matches high-quality answers to questions: a question & answer protocol. We call that protocol Upshot One. This is how Upshot One works.
To efficiently identify the participants most likely to have quality answers, we randomly select them using stake-weighted sortition. Selected participants commit to their answers; the committed answers are then revealed and funneled into a new peer prediction mechanism called the Determinant-based Mutual Information Mechanism (DMI-Mechanism). Under weak assumptions, the DMI-Mechanism is guaranteed to reward participants who submit informed, honest answers more than those who try to “cheat the system.”
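The two steps above can be sketched in a toy form. This is an illustration, not the on-chain implementation: `stake_weighted_sortition` picks respondents with probability proportional to stake, and `dmi_score` follows the determinant-based construction (after Kong's DMI) for binary answers, taking the product of the determinants of two joint answer-count matrices built over disjoint halves of the question batch. Honest, informed answering maximizes the score's expectation.

```python
import random

def stake_weighted_sortition(stakes, k, rng):
    """Pick k distinct participants, each draw weighted by stake."""
    pool = dict(stakes)
    chosen = []
    for _ in range(k):
        r = rng.random() * sum(pool.values())
        for who, stake in pool.items():
            r -= stake
            if r <= 0:
                chosen.append(who)
                del pool[who]
                break
    return chosen

def dmi_score(answers_i, answers_j):
    """Unnormalized DMI reward for a pair of binary answer sheets."""
    half = len(answers_i) // 2

    def half_det(lo, hi):
        # Joint 2x2 count matrix over questions lo..hi, then its determinant.
        m = [[0, 0], [0, 0]]
        for a, b in zip(answers_i[lo:hi], answers_j[lo:hi]):
            m[a][b] += 1
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    # Disjoint halves keep the two determinant estimates independent,
    # which is key to the mechanism's truthfulness guarantee.
    return half_det(0, half) * half_det(half, len(answers_i))

rng = random.Random(1)
respondents = stake_weighted_sortition({"ana": 50, "bo": 30, "cy": 15, "di": 5}, 2, rng)

truth = [rng.randrange(2) for _ in range(2000)]
honest = lambda: [t if rng.random() < 0.9 else 1 - t for t in truth]
uninformed = [rng.randrange(2) for _ in truth]

informed_pair = dmi_score(honest(), honest())   # large positive
one_sided = dmi_score(honest(), uninformed)     # near zero in expectation
print(respondents, informed_pair, one_sided)
```

Note that a pair of honest respondents scores orders of magnitude higher than a pairing with an uninformed one, so answering randomly (or colluding on an uninformative strategy) is unprofitable.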
Upshot One can even be used to answer subjective questions with no inherent notion of ground truth (e.g. “Do you like jazz?”). Disagreements over elicited answers can often be justified; even questions with a notion of ground truth can be called into question (e.g. the temperature of a room can be disputed if the thermometer is not trusted, or if the measurer didn’t account for variation in temperature throughout the room). This is why Upshot One implements subjectivocracy: histories of question-answer pairs can be forked into new histories (e.g. fork A could say the room is cold while fork B says it is warm). With subjectivocracy, multiple, equally valid realities can exist simultaneously, which benefits applications like content curation. Subjectivocracy also serves as a final line of defense against attacks: a reality that has been successfully manipulated can simply be forked away.
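A minimal sketch of the forking idea (with hypothetical names, not Upshot One's contract structure): a history of question-answer pairs that anyone can fork, after which the two branches diverge while sharing everything recorded before the fork.

```python
from dataclasses import dataclass, field

@dataclass
class History:
    pairs: list = field(default_factory=list)  # matched (question, answer) pairs
    parent: object = None                      # the history this one forked from

    def record(self, question, answer):
        self.pairs.append((question, answer))

    def fork(self):
        # A fork inherits everything recorded so far, then diverges.
        return History(pairs=list(self.pairs), parent=self)

root = History()
root.record("What city hosted the 2016 Olympics?", "Rio de Janeiro")

# Participants disagree on a disputed answer, so the history forks;
# both realities coexist and subscribers pick the one they trust.
fork_a = root.fork()
fork_a.record("Is the room warm?", "no")
fork_b = root.fork()
fork_b.record("Is the room warm?", "yes")
```

Both forks retain the shared, uncontested history, so forking away a manipulated reality discards only the disputed answers.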
Upshot One is built as a set of smart contracts on the Ethereum blockchain. Anyone can ask questions, anyone can supply answers and be rewarded for them, and anyone can subscribe to the ensuing feed of matched questions and answers. This simple primitive enables a range of applications.
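The ask/answer/subscribe lifecycle can be sketched off-chain in a few lines. All names here are illustrative (the real system is a set of Ethereum contracts); the sketch shows the commit-reveal flow implied above, in which answers are committed as hashes and only later revealed, so participants can't copy each other.

```python
import hashlib

class UpshotOneSketch:
    """Hypothetical, off-chain mock of the ask/commit/reveal/subscribe flow."""

    def __init__(self):
        self.questions = {}    # question id -> question text
        self.commitments = {}  # (question id, respondent) -> commitment hash
        self.answers = {}      # question id -> {respondent: answer}
        self.subscribers = []  # callbacks fed matched question-answer pairs

    def ask(self, qid, text):
        self.questions[qid] = text

    def commit(self, qid, who, answer, salt):
        # Only the hash is stored, hiding the answer until the reveal phase.
        digest = hashlib.sha256(f"{answer}:{salt}".encode()).hexdigest()
        self.commitments[(qid, who)] = digest

    def reveal(self, qid, who, answer, salt):
        digest = hashlib.sha256(f"{answer}:{salt}".encode()).hexdigest()
        assert self.commitments[(qid, who)] == digest, "commitment mismatch"
        self.answers.setdefault(qid, {})[who] = answer
        for callback in self.subscribers:
            callback(self.questions[qid], answer)

    def subscribe(self, callback):
        self.subscribers.append(callback)

protocol = UpshotOneSketch()
feed = []
protocol.subscribe(lambda q, a: feed.append((q, a)))
protocol.ask(1, "Is it raining in Paris?")
protocol.commit(1, "alice", "yes", salt="s3cret")
protocol.reveal(1, "alice", "yes", salt="s3cret")
```

After the reveal, subscribers receive the matched question-answer pair; on-chain, this role is played by contracts and events rather than Python callbacks.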
Insurance: Many important insurance products are non-parametric: they require subjective assessment to determine whether a claim is valid (e.g. smart contract insurance). When a lot of capital is tied to a claim, voting mechanisms must be coupled with expensive constructions like escalation games and re-votes to defend assessments against coercive attacks. Upshot One’s use of peer prediction and randomness makes manipulating the answer to a question very expensive, making it a strong tool for robust decentralized insurance systems. On Upshot One, one can ask whether an insured event occurred and receive an answer that describes the severity of the outcome.
Prediction Markets: Prediction markets are powerful mechanisms that let us read “the crowd’s” expectations. However, they are often implemented with pre-specified end dates and rely on an external source of truth to be resolved, both of which are limiting. Worse, they are often strapped for liquidity. On Upshot One, one can ask what the outcome of a prediction market is. By using trading fees to compensate Upshot One participants, prediction markets can be resolved in a reliable, low-latency manner that isn’t influenced by conflicts of interest (e.g. an Upshot One participant who is also active in the market). We can even go a step further: Upshot One can serve as an alternative to prediction markets altogether, since questions can be repeatedly (even perpetually) asked and answered, yielding a running gauge of the crowd’s expectations.
Efficient NFT Markets: NFTs typify “low-velocity” assets in the cryptosphere: the liquidity of any particular NFT is usually low. Appraising low-velocity assets is difficult because the market value of such an asset is so rarely revealed. On Upshot One, one can ask what an NFT is worth and receive a real-time appraisal at a fraction of the cost of purchasing it. This enables new NFT applications featuring familiar financial instruments typically reserved for high-velocity assets (e.g. debt markets, NFT indices).
Curation: One may wish to only view social media posts that are considered “deserving of attention” or are labelled as “accurate.” On Upshot One, one can ask whether or not some content is deserving of a particular label and receive a “yes” or “no” answer (or some more nuanced classification). Upshot One acknowledges that our preferences are diverse, so it does not mandate a “one-size-fits-all” approach to curation — users can find the fork of Upshot One that corresponds to their unique preferences.
Acknowledgements to my co-founder Kenny Peluso for collaboration on this post and the whitepaper.