A small-game fallacy occurs when game theorists, economists, or others applying game-theoretic or microeconomic techniques to real-world problems posit a simple, and thus cognizable, interaction under a very limited and precise set of rules, whereas the analogous real-world situations take place within longer-term and vastly more complicated games with many more players: "the games of life". Neglected interactions between small games and large games infect most works of game theory, and much of microeconomics, often rendering such analyses useless, or worse than useless, as a guide to how the "players" will behave in real circumstances. These fallacies tend to be particularly egregious when "economic imperialists" try to apply the techniques of economics beyond the traditional efficient-markets domain, bringing economic theory to bear on law, politics, security protocols, or a wide variety of other institutions that behave very differently from efficient markets. However, as we shall see, small-game fallacies can sometimes arise even in the analysis of some very market-like institutions, such as "prediction markets."
Most studies in experimental economics suffer from small-game/large-game effects. Unless these experiments are very securely anonymized, in a way the players actually trust, and in a way the players have learned to adapt to, overriding their moral instincts -- an extremely rare circumstance, despite many efforts to achieve it -- large-game effects quickly creep in, often rendering the results very misleading, sometimes practically the opposite of how people actually behave in analogous real-life situations. A common example: it may be narrowly rational, and in accord with theory, to "cheat", "betray", or otherwise play a narrowly selfish game, but if the players may interact with each other after the experimenters' game is over, the perceived or actual reputational effects in the larger "games of life", ongoing between the players in subsequent weeks or years, may easily exceed the meager rewards the experimenters dole out for playing selfishly in the small game. Even if the players can somehow be convinced that they will remain complete strangers to each other indefinitely into the future, our moral instincts generally evolved to play the larger "games of life", not one-off games, anonymous games, or games played under pseudonyms of strictly limited duration, with the result that behaving according to theory must be learned: our default behavior is very different. (This explains why, for example, economics students typically play in a more narrowly self-interested way, i.e. more according to the simple theories of economics, than other kinds of students.)
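A minimal sketch, with entirely hypothetical payoffs, of the arithmetic behind this spillover: a subject weighs the experimenter's one-shot bonus for defecting against the discounted stream of future cooperation lost if the other players remember the betrayal. All names and numbers here are illustrative, not drawn from any actual experiment.

```python
# Hypothetical comparison of a small-game reward against large-game
# reputational losses. Shares the standard geometric-discounting logic
# of repeated games: a per-round loss repeated indefinitely has present
# value loss * d / (1 - d) for discount factor d < 1.

def defection_is_worth_it(one_shot_bonus: float,
                          per_round_reputation_loss: float,
                          discount_factor: float) -> bool:
    """True if the one-off small-game gain exceeds the present value
    of the indefinite stream of large-game reputational losses."""
    future_loss = (per_round_reputation_loss * discount_factor
                   / (1 - discount_factor))
    return one_shot_bonus > future_loss

# The experimenter pays $5 extra for defecting, but if the subjects
# expect to keep meeting (discount factor 0.9) and each soured future
# interaction costs the equivalent of $1, defecting is a net loss:
print(defection_is_worth_it(5.0, 1.0, 0.9))  # False: 5 < 1 * 0.9/0.1 = 9
# Only in a truly anonymous one-off game (discount factor ~ 0) does the
# small-game calculation dominate:
print(defection_is_worth_it(5.0, 1.0, 0.0))  # True: 5 > 0
```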
Small-game/large-game effects are not limited to reputational incentives to play more nicely: moral instincts and motivations learned in larger games also include tribal unity against perceived opponents, revenge, implied or actual threats of future coercion, and other effects that cause much behavior to be worse than selfish, and these too can spill over between the larger and smaller games (when, for example, teams from rival schools or nations are pitted against each other in economic experiments). Moral instincts, though quite real, should not be construed as necessarily, or even usually, morally superior to various kinds of learned morals, whether learned in economics class or in the schools of religion or philosophy.
Small-game/large-game problems can also occur in auditing, when an audit examines a particular system of financial controls but fails to take into account interactions occurring outside that system, rendering the net state of affairs very different from what auditing that system alone would suggest. A common fraud is for trades to be made outside the scope of the audit, "off the books", rendering the books themselves very misleading as to the overall net position.
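The arithmetic of the off-the-books fraud is simple; a minimal sketch with hypothetical trade values:

```python
# The audit sums only trades recorded inside its scope, so an offsetting
# trade made outside that scope makes the audited net very misleading.

on_book_trades = [+500.0, -200.0]   # visible to the auditor
off_book_trades = [-900.0]          # outside the audit's scope

audited_net = sum(on_book_trades)                     # what the books show
true_net = sum(on_book_trades + off_book_trades)      # actual state of affairs

print(audited_net)  #  300.0: the books show a modest long position
print(true_net)     # -600.0: the firm is actually substantially short
```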
Similarly, small-game/large-game problems often arise when software or security architects adopt an economics methodology, analyzing the interactions occurring within the defined architecture while failing to properly take into account (often because it is prohibitively difficult to do so) the wide variety of possible acts occurring outside the system and the resulting changes, often radical, to incentives within it. For example, the incentive compatibility of certain interactions within an architecture can quickly disappear or reverse when opposite trades can be made outside the system (such as hedging, or even more-than-offsetting, a position that by itself would otherwise create a very different incentive within the system), or when larger political or otherwise coercive motivations and threats arise outside the analyzed incentive system, changing the incentives of players acting within it in unpredictable ways. Security protocols always consist of at least two layers: a "dry layer" that can be analyzed by the objective mathematics of computer science, and a "wet layer" consisting of the often unpredictable net large-game motivations of the protocols' users. These should not be confused, nor should the false precision of mathematical economic theories be confused with the objective accuracy of computer science theories, which are based on the mathematics of computer architecture and algorithms and hold regardless of users' incentives and motivations.
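A minimal sketch of the hedging reversal, with hypothetical positions and names: suppose an architecture assumes a participant holding a stake inside the system is incentivized to keep the system healthy, because its failure would destroy the stake's value. A short position taken outside the system, invisible to the in-system analysis, can more than cancel that incentive.

```python
# Hypothetical illustration: the in-system analysis sees only the stake;
# the player's actual incentive depends on net exposure across both the
# small game (the system) and the large game (everything outside it).

def net_incentive_to_preserve_system(in_system_stake: float,
                                     outside_short: float) -> float:
    """Positive: the player gains from the system surviving.
    Negative: the player gains from the system failing."""
    return in_system_stake - outside_short

# The architecture's internal analysis sees only the stake:
print(net_incentive_to_preserve_system(100.0, 0.0))    # 100.0: looks incentive compatible
# An unobserved more-than-offsetting short flips the sign:
print(net_incentive_to_preserve_system(100.0, 250.0))  # -150.0: sabotage now pays
```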
A related error is the pure-information fallacy: treating an economic institution purely as an information system, accounting only for the market-proximate incentives to contribute information via trading decisions, while neglecting how that market also, necessarily, changes players' incentives to act outside of it. For example, a currently popular view of proposition bets, the "prediction markets" view, often treats prop bets or idea futures as purely information-distribution mechanisms, the only incentive supposed being the benign one to profit by adding useful information to the market. This fails to take into account the incentives such markets create to act differently outside the market: a prediction market automatically creates parallel incentives to bring about the predicted event. A sufficiently large market predicting an individual's death, for example, is also, necessarily, an assassination market, which is why a pre-Gulf-War-II DARPA-sponsored experimental "prediction market" included a prop bet on Saddam Hussein's death but excluded such trading on any other, more politically correct world leaders. More generally, other "prediction" markets are also act markets, changing incentives outside the market to bring about the predicted events.
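A minimal sketch, with hypothetical prices and position sizes, of why a prediction market is also an act market: a trader who can influence the outcome compares the expected value of passively holding "yes" shares with the value of holding them and then acting to raise the event's probability.

```python
# Hypothetical payoff arithmetic for a binary prediction market.
# "Yes" shares pay 1 unit each if the event occurs, 0 otherwise, and
# cost `price` to buy, so expected profit is shares * (p - price),
# minus any cost incurred to influence the outcome.

def expected_profit(shares: float, price: float,
                    prob_event: float, cost_of_acting: float = 0.0) -> float:
    """Expected profit of a 'yes' position, net of the cost of any
    outside-the-market action taken to change the event's probability."""
    return shares * (prob_event - price) - cost_of_acting

price = 0.05       # market regards the event (e.g. a death) as unlikely
shares = 100_000   # a sufficiently large position...

# A passive trader with no private information has no expected edge:
print(expected_profit(shares, price, prob_event=0.05))  # 0.0
# A trader who can push the probability to near 1 at some outside cost
# now has a large incentive to bring the predicted event about:
print(expected_profit(shares, price, prob_event=0.95,
                      cost_of_acting=10_000.0))  # 100000 * 0.90 - 10000 = 80000.0
```

The point of the sketch is that nothing in the market mechanism itself distinguishes the two traders: the market pays for being right, whether the trader predicted the event or caused it.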