Friedrich Hayek called the hubristic idea of social scientists that they could explain and plan the details of society (including economic production) the "fatal conceit." He informally analyzed the division of knowledge to explain why the wide variety of businesses in our economy cannot be centrally planned. "The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess." The economic problem is "a problem of the utilization of knowledge which is not given to anyone in its totality." Austrian economists like Hayek usually eschewed the use of traditional mathematics to describe the economy because such use assumes that economic complexities can be reduced to a small number of axioms.
Friedrich Hayek, the Austrian economist and philosopher who discussed the use of knowledge in society.
Modern mathematics, however -- in particular algorithmic information theory -- clarifies the limits of mathematical reasoning, including models with infinite numbers of axioms. The mathematics of irreducible complexity can be used to formalize the Austrians' insights. Here is an introduction to algorithmic information theory, and further thoughts on measuring complexity.
Sometimes information comes in simple forms. The number 1, for example, is a simple piece of data. The number pi, although it has an infinite number of digits, is similarly simple, because it can be generated by a short finite algorithm (or computer program). That algorithm fully describes pi. However, a large random number has an irreducible complexity. Gregory Chaitin discovered a number, Chaitin's omega, which, although it has a simple and clear definition (it is the probability that a randomly chosen computer program will halt), has an irreducibly infinite complexity. Chaitin proved that there is no way to completely describe omega in a finite manner. Chaitin has thus shown that there is no way to reduce mathematics to a finite set of axioms. Any mathematical system based on a finite set of axioms (e.g. the simple algebra and calculus commonly used by non-Austrian economists) oversimplifies mathematical reality, to say nothing of social reality.
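Kolmogorov-Chaitin complexity is uncomputable in general, but ordinary compression gives a rough upper bound on it, which is enough to see the contrast between reducible and irreducible data. A minimal sketch (the strings and sizes below are arbitrary choices for illustration):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Crude upper bound on descriptive complexity via off-the-shelf compression."""
    return len(zlib.compress(data, level=9))

# A highly patterned string: the short program "repeat 'ab'" describes it fully.
patterned = b"ab" * 50_000

# A random string of the same length: no substantially shorter description exists.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(100_000))

print(compressed_size(patterned))  # a few hundred bytes
print(compressed_size(noisy))      # close to the full 100,000 bytes
```

The patterned string is like pi: fully captured by a short description. The random string is like omega or a large random number: its shortest description is about as long as the string itself.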
Furthermore, we know that the physical world contains vast amounts of irreducible complexity. The quantum mechanics of chemistry, the Brownian motions of the atmosphere, and so on create vast amounts of uncertainty and locally unique conditions. Medicine, for example, is filled with locally unique conditions often known only very incompletely by one or a few hyperspecialized physicians or scientists.
Ray Solomonoff and Gregory Chaitin, pioneers of algorithmic information theory.
The strategic nature of the social world means that it will contain irreducible complexity even if the physical world of production and the physical needs of consumption were simple. We can make life open-endedly complicated for each other by playing penny matching games. Furthermore, shared information might be false or deceptively incomplete.
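The penny-matching point can be made concrete with a small simulation. The predictor strategy below (guessing the opponent's most frequent past choice) is my own hypothetical stand-in for any pattern-seeking observer; against a uniformly randomizing opponent, no such predictor can do better than chance:

```python
import random
from collections import Counter

def predictor_guess(history: list[str]) -> str:
    """Guess the hider's next penny by assuming their most frequent past choice."""
    if not history:
        return "heads"
    return Counter(history).most_common(1)[0][0]

random.seed(1)
rounds = 20_000
history: list[str] = []
matches = 0
for _ in range(rounds):
    guess = predictor_guess(history)
    play = random.choice(["heads", "tails"])  # the hider randomizes uniformly
    history.append(play)
    if guess == play:
        matches += 1

print(matches / rounds)  # hovers near 0.5: no pattern to exploit
```

A player who randomizes makes their own behavior irreducibly complex to any observer, however clever, which is one open-ended source of social complexity.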
Even if we were perfectly honest and altruistic with each other, we would still face economies of knowledge. A world of more diverse knowledge is far more valuable to us than a world where we all had the same skills and beliefs. This is the most important source of the irreducible complexity of knowledge: the wealthier we are, the greater the irreducibly complex amount of knowledge (i.e. diversity of knowledge) society has about the world and about itself. This entails more diversity of knowledge in different minds, and thus the greater difficulty of coordinating economic behavior.
The vastness of the useful knowledge in the world is far greater than our ability to store, organize, and communicate that knowledge. One limitation is simply how much our brains can hold. There is far more irreducible and important complexity in the world than can be held in a single brain. For this reason, at least some of this complexity is impossible to share between human minds.
The channel capacities of human language and visual comprehension are further limited. This often makes it impossible to share irreducibly complex knowledge between human minds even if a mind could in theory store and comprehend that knowledge. The main barrier here is the inability to articulate tacit knowledge, rather than limitations of technology. However, the strategic and physical limits to reducing knowledge are of such vast proportions that most knowledge could not be fully shared even with ideal information technology. Indeed, economies of knowledge suggest that knowledge would be even less widely shared, in proportional terms, in a very wealthy world of physically optimal computer and network technology than it is today -- although the absolute amount of knowledge shared would be far greater, the sum total of knowledge would be far greater still, and thus the proportion optimally shared would be smaller.
The limitations on the distribution of knowledge, combined with the inexhaustible sources of irreducible complexity, mean that the wealthier we get, the greater the unique knowledge stored in each mind and shared with few others, and the smaller the fraction of knowledge available to any one mind. There is a far greater variety of knowledge "pigeons" which must be stuffed into the same brain "pigeonholes," and thus less room for "cloning pigeons" through mass media, mass education, and the like. Wealthier societies take greater advantage of long tails (i.e., they satisfy a greater variety of preferences) and thus become even less plannable than poorer societies that are still focused on simpler areas such as agriculture and Newtonian industry. More advanced societies increasingly focus on areas such as interpersonal behaviors (sales, law, etc.) and medicine (the complexity of the genetic code is just the tip of the iceberg; the real challenge is the irreducibly complex quantum effects of biochemistry, for example the protein folding problem).
I thought that expressions such as "the irreducibly complex quantum effects of biochemistry, for example the protein folding problem" were supposed to be reserved for post-modernist hoax articles.
I'd think that protein folding problems have very little irreducible complexity: all that is needed to encode their specification is the amino acid sequence. Instead, their logical depth (or computational replacement cost) would be large, by current methodologies.
I suspect also that irreducible complexity plays very little part in society: we are constitutionally incapable of such encoding and decoding. And even if we were capable, we'd find that approximation would allow radically reduced encoding lengths. And since the observation and measurement we normally undertake has few significant figures, we end up approximating anyway.
Even though Brownian motion is chock full of irreducible complexity, gas laws do a pretty good job of modelling gas behavior. Irreducible complexity can quickly become insignificant in statistically large enough samples.
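The gas-law point can be sketched in a few lines. Each random walk below (the step counts and walker counts are arbitrary choices) is individually unpredictable, yet the aggregate statistics obey a simple, tightly predictable law:

```python
import random
import statistics

random.seed(2)
steps = 1_000
walkers = 2_000

# Each walker follows an individually unpredictable 1-D random walk.
finals = [sum(random.choice((-1, 1)) for _ in range(steps))
          for _ in range(walkers)]

# Yet the aggregate is tightly predictable: mean displacement near 0,
# variance near the number of steps -- the "gas law" of the random walk.
print(statistics.mean(finals), statistics.variance(finals))
```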
There are irreducibly complex quantum effects in biochemistry. For example, the mechanical causes of the efficacy of even simple catalysts are often ill understood. Enzymes and ribozymes are very complex catalysts. The mechanical action is quantum, and our ability to reduce data about it to simple models is limited not only by the complexity of their structures but also by quantum effects such as the uncertainty principle.
The protein folding problem, although it contains many quantum effects which have posed a severe barrier to predicting the folding of new amino acid sequences, is, as Mike implies, limited in its irreducible complexity by the finite number of biological proteins, each of which usually takes on a particular general shape based on its sequence. (Some proteins, such as prions, can take on multiple shapes, but usually a small number of specific shapes rather than a random selection from the combinatorial possibilities.) For that reason, the complexity of the protein folding problem for known natural proteins is ultimately limited, and is more a matter of logical depth than irreducible complexity, as you say. Although we don't understand the quantum mechanics by which each folding occurs, we could simply catalog all known proteins and their structures.
However, our understanding of other important parts of biochemistry, for example the catalytic mechanisms of enzymes, will probably be ultimately limited by irreducibly complex quantum effects, although as with proteins we may expect that evolution has favored repeatable results for natural enzymes to the extent possible. We will probably be able to predict much of this repetitive function without understanding much about the specific quantum mechanics whereby it happens.
I've struck through the protein folding bit, because that's a bad example, but my claim that there are some limits of irreducible complexity in biochemistry remains.
Even if our understanding of biochemistry could be made simple, the complexity of society would remain vast and too complex for the full understanding of any small fraction of human minds, for the reasons stated.
I should also point out that logical depth also often creates severe limits to feasible understanding. Indeed, the intractable complexity created by logical depth may be a more important limit to understanding, especially to understanding other minds, than ultimately irreducible (Kolmogorov-Solomonoff-Chaitin) complexity.
Mathematically, the inability to crack a cipher is almost entirely a matter of logical depth -- the keys themselves are only hundreds or thousands of bits. In the physical world, there are many chaotic phenomena that act like a cipher and blow up a small amount of ultimately irreducible complexity into a vast amount of intractable complexity: for example the proverbial butterfly flapping its wings, which can ultimately and unpredictably change the direction of a future hurricane. Indeed, the always limited and relative ability of human minds and computers to play the penny matching game I mentioned is probably more a matter of logical depth than ultimately irreducible complexity.
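The cipher point can be illustrated with a deliberately weak toy cipher of my own construction (it is not a real cryptosystem, and the key size and plaintext are arbitrary). The key holds only 16 bits of irreducible information, yet recovering it by brute force costs tens of thousands of trial decryptions -- logical depth, not descriptive complexity:

```python
import hashlib

def keystream(key: int, length: int) -> bytes:
    """Toy keystream from a 16-bit key -- an illustration, not a real cipher."""
    out, state = b"", key.to_bytes(2, "big")
    while len(out) < length:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"attack at dawn"
secret_key = 40_321               # only 16 bits of irreducible information
ciphertext = xor(plaintext, keystream(secret_key, len(plaintext)))

# Recovering the key is almost pure logical depth: the description of the
# search is tiny, but it costs up to 2**16 trial decryptions.
recovered = next(k for k in range(2 ** 16)
                 if xor(ciphertext, keystream(k, len(ciphertext))) == plaintext)
print(recovered)
```

Real ciphers use keys of 128 bits or more, pushing the same brute-force search far beyond any feasible computation.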
I may have missed something here, but complex systems such as Brownian motion and quantum mechanics can be very accurately described by simple formulas. Not infinitely precisely -- but precisely enough for all practical applications.
The principle applies for economic representations; the complex realities of a set of exchanges can be reduced to simpler descriptions of market behaviors. Ironically, as you add complexity with more participants and more transactions, the more accurate the abstract representations of the aggregate effects of that complexity become.
Here's an example: there are engineers who understand how the circuits in the processor in your computer work, right in the guts of the chip. That engineer faces complex quantum mechanics every day -- and abstracts them away with reliable mathematical formulas. The complexity is entirely reducible -- it's not easy, but it can be done.
That engineer uses specialized, esoteric, compartmentalized knowledge. There are severe limitations on the distribution of that knowledge. Yet that specialization presents no challenges whatsoever for an economist studying, say, the effects of IT on productivity. It's not even a factor. The apparent complexity is entirely reducible.
Engineers spend all their time reducing complexity to manageable abstraction layers. So do doctors, project managers, even truck drivers, teachers, and waiters. Why should economists be defeated by it?
(Perhaps, of all the professions, only lawyers are paid to create complexity.)
Howie, you're ignoring the distinction between lossy and lossless compression. The "reductions" you describe, for example the simple laws of Brownian motion and quantum mechanics, abstract away from the vast randomness of these areas. They don't reduce the randomness; it's just that for certain simple purposes (but not for many other purposes requiring integration of more varied phenomena) the vast randomness of nature can be usefully ignored. Thus, for example, scientists can predict the radioactive decay rate of a large group of atoms, but not when particular radioactive events will occur. An engineer would use individual radioactive events where randomness is required (e.g. for seeding a cryptosystem with random numbers) -- they would be absurd to use where regularity or predictability is required.
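The decay example can be sketched in a few lines (the atom count and per-step decay probability are arbitrary choices). The aggregate count tracks a simple exponential survival law -- a lossy compression of the process -- even though no individual decay event is predictable:

```python
import random

random.seed(3)
atoms = 50_000
p_decay = 0.01   # per-atom decay probability per time step

# Each individual decay event is unpredictable, but the aggregate count
# is lossily compressible into a simple exponential survival law.
survivors = atoms
for _ in range(100):
    survivors -= sum(1 for _ in range(survivors) if random.random() < p_decay)

predicted = atoms * (1 - p_decay) ** 100
print(survivors, round(predicted))
```

The simple formula predicts the aggregate to within a fraction of a percent, while the timing of any one decay remains pure randomness.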
In many other cases scientists have a tough time predicting the behavior of aggregates of large amounts of randomness, such as the weather. In the case of Brownian motion it's only easy to make certain aggregate statements about the behavior of a volume (usually a simplified volume as in a heat engine), not about the behavior of particular parts of the volume, especially large volumes such as the atmosphere.
Still less often can the aggregates of the strategic, vastly complex, or even purposefully randomizing behaviors of individuals be lossily yet usefully reduced using statistics and Newtonian mathematics. Economists can't generally make predictions about important events in our future as accurate as those of weather forecasters, and other social scientists can do even less. Sometimes observers of the social scene can hit on certain long-term regularities, analogous to climate forecasting, and there are other and often better ways to usefully if lossily reduce social complexity (e.g. the arduous legal process of trial fact-finding, legal rulings based on those facts, and the use of precedent to slowly develop a common law).
The vast amounts of specialized knowledge and the ability of people to play penny-matching games strongly suggest that social science is far less prone to useful lossy reduction than even weather and climate forecasting. I'm not at all convinced by your engineering example. The esoteric knowledge of circuit engineers may well be crucial to understanding to what extent Moore's Law is going to continue into the future, which in turn is important to predicting future IT productivity. Indeed, productivity measures and predictions are notoriously prone to error and bias for ignoring various complex yet important phenomena.
In your vain search for explanations of the social world that you can personally understand, you complain about lawyers, because in disputes people grapple with reality at levels of detail that social scientists either ignore or, if they dive in, get lost in and fail to generalize from. Newtonian techniques for predicting simple orbits and the like are grossly inapplicable for lossily compressing the vast logical depth and randomness of social reality. One of the best methods we've hit upon over the millennia is the arduous task of detailed trial fact-finding, followed by legal holdings based on those facts and the slow development of a large body of legal precedent -- the common law. Complain about lawyers all you like, but that is how civilization has usefully progressed, not through the daydreaming of social "scientists" who grossly oversimplify reality in order to enjoy, and delude others with, the illusion that they profoundly understand the whole.