A small-game fallacy occurs when game theorists, economists, or others applying game-theoretic or microeconomic techniques to real-world problems posit a simple, and thus cognizable, interaction under a very limited and precise set of rules, whereas the analogous real-world situations take place within longer-term and vastly more complicated games with many more players: "the games of life". Unmodeled interactions between small games and large games infect most works of game theory and much of microeconomics, often rendering such analyses useless, or worse than useless, as guides to how the "players" will behave in real circumstances. These fallacies tend to be particularly egregious when "economic imperialists" try to apply the techniques of economics beyond the traditional efficient-markets domain, bringing economic theory to bear on law, politics, security protocols, or a wide variety of other institutions that behave very differently from efficient markets. As we shall see, however, small-game fallacies can arise even in the analysis of some very market-like institutions, such as "prediction markets."
Most studies in experimental economics suffer from small-game/large-game effects. Unless these experiments are very securely anonymized, in a way the players actually trust, and in a way the players have learned to adapt to, overriding their moral instincts -- an extremely rare circumstance, despite many efforts to achieve it -- large-game effects quickly creep in, often rendering the results very misleading, sometimes practically the opposite of the actual behavior of people in analogous real-life situations. A common example: it may be narrowly rational and in accord with theory to "cheat", "betray", or otherwise play a narrowly selfish game, but if the players may be interacting with each other after the experimenters' game is over, the perceived or actual reputational effects in the larger "games of life", ongoing between the players in subsequent weeks or years, may easily exceed the meager rewards the experimenters dole out for playing selfishly in the small game. Even if the players can somehow be convinced that they will remain complete strangers to each other indefinitely into the future, our moral instincts generally evolved to play larger "games of life", not one-off games, nor anonymous games, nor games with pseudonyms of strictly limited duration, with the result that behaving according to theory must be learned: our default behavior is very different. (This explains why, for example, economics students typically play in a more narrowly self-interested way, i.e. more according to the simple theories of economics, than other kinds of students.)
Small-game/large-game effects are not limited to reputational incentives to play nicer: moral instincts and motivations learned in larger games also include tribal unity against perceived opponents, revenge, and implied or actual threats of future coercion, among other effects that cause much behavior to be worse than selfish, and these too can spill over between the larger and smaller games (when, for example, teams from rival schools or nations are pitted against each other in economic experiments). Moral instincts, though quite real, should not be construed as necessarily, or even usually, morally superior to various kinds of learned morals, whether learned in economics class or in the schools of religion or philosophy.
Small-game/large-game problems can also occur in auditing, when audits look at a particular system and fail to take into account interactions that can occur outside their system of financial controls, rendering the net interactions very different from what simply auditing the particular system would suggest. A common fraud is for trades to be made outside the scope of the audit, "off the books", rendering the books themselves very misleading as to the overall net state of affairs.
Similarly, small-game/large-game problems often arise when software or security architects adopt an economics methodology, focusing on the interactions occurring within the defined architecture while failing to properly take into account (often because it is prohibitively difficult to do so) the wide variety of possible acts occurring outside the system and the resulting changes, often radical, to incentives within the system. For example, the incentive compatibility of certain interactions within an architecture can quickly disappear or reverse when opposite trades can be made outside the system (such as hedging, or even more-than-offsetting, a position that by itself would otherwise create a very different incentive within the system), or when larger political or otherwise coercive motivations and threats occur outside the analyzed incentive system, changing the incentives of players acting within the system in unpredictable ways. Security protocols always consist of at least two layers: a "dry layer" that can be analyzed by the objective mathematics of computer science, and a "wet layer" that consists of the often unpredictable net large-game motivations of the protocols' users. These should not be confused, nor should the false precision of mathematical economic theories be confused with the objective accuracy of computer science theories, which are based on the mathematics of computer architecture and algorithms and hold regardless of users' incentives and motivations.
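The hedging point above can be made concrete with a minimal numeric sketch (all names and numbers here are hypothetical, not from the original text): a player's stake inside a system can be more than offset by a position held outside it, flipping the sign of their net interest in an event the in-system analysis assumed they wanted.

```python
# Toy model of incentive reversal via outside hedging.
# All names and figures are illustrative assumptions.

def net_payoff_if_event(inside_stake: float, outside_hedge: float) -> float:
    """Net payoff to the player if the event occurs.

    inside_stake:  what the player gains *inside* the analyzed system
                   if the event occurs
    outside_hedge: what the player forgoes because of a position held
                   *outside* the system that pays off if the event fails
    """
    return inside_stake - outside_hedge

# Viewed only inside the system, the player appears to want the event:
print(net_payoff_if_event(100.0, 0.0))    # positive: incentive-compatible

# With a more-than-offsetting hedge outside the system, the same player
# now profits if the event fails -- the in-system incentive has reversed:
print(net_payoff_if_event(100.0, 250.0))  # negative: incentive reversed
```

The arithmetic is trivial, which is the point: no analysis confined to the inside stake alone can detect the reversal.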
A related error is the pure-information fallacy: treating an economic institution purely as an information system, accounting only for market-proximate incentives to contribute information via trading decisions, while neglecting how that market necessarily also changes players' incentives to act outside of it. For example, a currently popular view of proposition bets, the "prediction markets" view, often treats prop bets or idea futures as purely information-distribution mechanisms, as if the only incentive at work were the benign one to profit by adding useful information to the market. This fails to take into account the incentives such markets create to act differently outside the market. A "prediction market" always also changes incentives outside that market: it automatically creates parallel incentives to bring about the predicted event. For example, a prediction market on a certain person's death is also an assassination market, which is why a pre-Gulf-War-II DARPA-sponsored experimental "prediction market" included a prop bet on Saddam Hussein's death but excluded such trading on any other, more politically correct world leaders. A sufficiently large market predicting an individual's death is, necessarily, an assassination market; similarly, other "prediction" markets are also act markets, changing incentives to act outside the market to bring about the predicted events.
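The "act market" claim can be sketched with a toy calculation (all prices and costs here are invented for illustration): an actor able to cause the predicted event profits exactly when the market is deep enough that the winnings on cheap "Yes" contracts exceed the cost of acting.

```python
# Toy model: when does a prediction market fund action on its own
# prediction?  All numbers are hypothetical assumptions.

def action_profit(shares: float, price: float, action_cost: float) -> float:
    """Profit to an actor who buys `shares` binary contracts paying 1
    if the event occurs, at `price` each, and then causes the event
    at a personal cost of `action_cost`."""
    winnings = shares * (1.0 - price)   # gain per contract if event occurs
    return winnings - action_cost

# Thin market: the available winnings cannot cover the cost of acting.
print(action_profit(shares=100, price=0.05, action_cost=1_000))      # negative

# Deep market, same odds: acting is now profitable.
print(action_profit(shares=100_000, price=0.05, action_cost=1_000))  # positive
```

Note that nothing in the market's information-aggregation role changes between the two cases; only its depth does, which is why the incentive to act appears "outside" any purely informational analysis.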
This is pure gold. A prediction market may become an assassination market.
Things are moving quickly...
FYI I'm in full agreement with Szabo. Combinatorial prediction markets, never mind mild futarchy, haven't been proven in real-life situations. Bitcoin and cryptocurrencies are virgin territory as they are. No need to further complicate things.
Let's handle this the way the early evolution of the web was handled!
While a "prediction market on Death A" provides a payout to Group X upon A's death (Group X being those who correctly bet that A would die), an assassination market needs to provide the payout to a specific individual Z (usually because only Z knows the date of A's death). In a prediction market, one cannot control or restrict membership in Group X, which makes the incentives different: a rival assassin can spy on Z and steal his payout by front-running Z's trade; A can bet on his own death and then fake it, profiting from his enemy's contributions; Z's co-conspirators can wait for the pro-death trade to go through, make a (now wildly profitable) pro-life trade, and then rat out Z to the authorities in exchange for legal immunity. These counterexamples all exploit the unrestricted entrance into the prediction market's Group X.
Moreover, consider two large prediction markets instead of one: M1, "A to be assassinated sometime in May 2015?", and M2, "Someone to try to assassinate A (successfully or otherwise) sometime in May 2015?". What are the incentives now? The more money on M1="No", the greater the reward for attempting the assassination (so M2="Yes" should rise in price), warning the victim of the danger and giving bettors a way to profit by thwarting the assassination itself (though not the assassination attempt).
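The commenter's two-market setup can be sketched numerically (a toy model; the prices and positions are hypothetical): a bettor who holds "No" in M1 (the death market) and "Yes" in M2 (the attempt market) profits by tipping off the victim so that an attempt occurs but fails, collecting on both legs.

```python
# Toy model of the two-market "thwarter" position.
# All prices and share counts are illustrative assumptions.

def thwarter_payoff(m1_no_shares: float, m1_no_price: float,
                    m2_yes_shares: float, m2_yes_price: float,
                    attempt_occurs: bool, victim_dies: bool) -> float:
    """Net payoff to a bettor holding 'No' in M1 (A dies in May 2015)
    and 'Yes' in M2 (someone attempts to kill A in May 2015).
    Each binary contract pays 1 if its condition holds."""
    cost = m1_no_shares * m1_no_price + m2_yes_shares * m2_yes_price
    payout = (m1_no_shares * (0.0 if victim_dies else 1.0)
              + m2_yes_shares * (1.0 if attempt_occurs else 0.0))
    return payout - cost

# Tip off the victim: the attempt occurs but fails, and both legs pay.
print(thwarter_payoff(100, 0.6, 100, 0.3,
                      attempt_occurs=True, victim_dies=False))  # positive
```

This is the same outside-the-market action channel the post describes, except that here the action incentivized is protective rather than lethal.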
I'd go as far as to say that, by narrowing your focus to a single prediction market, you've committed the very fallacy you describe in your post.
Is it possible that more than one prediction market will survive?
I think that it will be the same as for currencies (in the long run, at least).
Are there incentives to stay on different prediction markets? (I mean, if they give the same features ...)
I think you've gotten a little confused. InTrade.com was a single website, but it contained many, many different prediction markets. There may be only one "prediction marketplace" (i.e., one website [InTrade] or one protocol [Truthcoin]), but it would be very bizarre to imagine only one prediction market existing. That would be a little like saying "people can only ever trade shares of one corporation (Apple, for example)".
This needs to be applied to Game of Thrones immediately
When you see a bunch of public intellectuals singing a certain tune that rings dubious to you but is beneficial to a state (predicting how another state is just going to lose, lose, lose in the near future because of x, y, z, made-up bad-faith stuff), that's coming from the stuff Szabo is talking about up there.
For instance, the anti-Greek chorus. Several themes have been used over time, but consider the early-May theme that Greece was going to inevitably capitulate completely, and that poor, stupid, feckless Syriza was going to be tossed out by the voters it had overpromised to. Neither was ever likely. The international chorus was there to rattle/communicate with institutional stakeholders. No big deal if it doesn't work, any more than icing the kicker at the end of an American football game.
For a real world "prediction market that changes incentives outside that market" with $billions at stake, look no further than CDS vs underlying credit. Here's a good place to start:
Just stumbled across this blog. The Assassination Market quip killed me. Thank you.
The "Assassination market" has been around since the early 90s (see Jim Bell and Wikipedia.)
Nassim Taleb calls that the ludic fallacy http://en.m.wikipedia.org/wiki/Ludic_fallacy
No market can be separated from feedback into the price of the good itself, a sort of Heisenberg uncertainty response. The more a market attracts interest from parties with no stake in the underlying fundamentals, the less it predicts some 'true' reflection of those fundamentals.
Prediction markets become assassination markets, but they can also become disinformation markets. Saddam Hussein could have bet the book on his death either up or down, for different purposes: up if he wanted to convince people he was dead (and please stop trying to kill me), down to convince people he wasn't worth killing.
Economist Charles Goodhart of the Bank of England postulated that any measurement of the currency intended as a control became like a corset and activity would bulge out elsewhere, rendering the control useless. Likewise a market can become the driver not the intermediary. We see this with certain commodities markets where the paper issuance is orders of magnitude greater than the physical, and prop trading starts to drive the price away from physical fundamentals.
Look at what happened with "The Interview" movie. Apparently even making a movie about a fictional assassination is considered enough to incentivize assassination in some people's eyes. And ironically Kim Jong-un's (alleged) reaction could have caused him to be more likely to be killed. Real world feedback loops quickly become entangled in unpredictable ways.
I don't see how you stop the assassination aspect.
Let's use the example of public elections, often used when discussing prediction markets. The bets focus on the following question: Will Hillary Clinton be inaugurated as president in 2016?
A dead person can't be sworn in.
Most questions (perhaps all) are subject to an outside attack in this way. I don't see how this issue can be overcome. But it's not the death of prediction markets imo. Those are useful enough (and effectively inevitable) that it's more likely that we'll transition away from violence as a species than avoid mass adoption of prediction markets.
As in the prediction market problem, expectations of a selfish or cooperative environment result in a selfish or cooperative environment. Hence it's a social disaster to think philosophically in terms of "selfish genes" or selfish individuals as capable of originating selfish forces independent of their origins. The global environment is the only source of physical forces (from potential-energy gradients) that guides the creation and sustenance of individuals and genes. Information creation in the specifics of markets is the result of a larger, governing environment. The error is not limited to economics and evolution, but goes back to their fundament, physics: Newton's law is not fundamental because it allows for non-conservative forces (mainly friction), which are not fundamental (see Feynman, "Least Action"). This results in the belief that the second law of thermodynamics is "entropy always increases", which has been supplanted by the standard model and direct observation that entropy in large expanding volumes of the Universe is constant (see Weinberg, "The First Three Minutes"), which would mean all small open volumes of the Universe (e.g., the Earth's surface) must have decreasing entropy, emitting entropy to "allow" the Universe to expand (entropy is conserved).
The error of thinking the Universe is doomed to a heat death wrongly justifies a philosophy of selfishness.