From Algorithmic Information Theory:
Charles Bennett has discovered an objective measurement for sophistication. An example of sophistication is the structure of an airplane. We couldn't just throw parts together into a vat, shake them up, and hope thereby to assemble a flying airplane. A flying structure is vastly improbable; it is far outnumbered by the wide variety of non-flying structures. The same would be true if we tried to design a flying plane by throwing a bunch of part templates down on a table and making a blueprint out of the resulting overlays.
On the other hand, an object can be considered superficial when it is not very difficult to recreate another object to perform its function. For example, a garbage pit can be created by a wide variety of random sequences of truckfuls of garbage; it doesn't matter much in which order the trucks come.
More examples of sophistication are provided by the highly evolved structures of living things, such as wings, eyes, brains, and so on. These could not have been thrown together by chance; they must be the result of an adaptive algorithm such as Darwin's algorithm of variation and selection. If we lost the genetic code for vertebrate eyes in a mass extinction, it would take nature a vast number of animal lifetimes to re-evolve them. A sophisticated structure has a high replacement cost.
Bennett calls the computational replacement cost of an object its logical depth. Loosely speaking, depth is the necessary number of steps in the causal path linking an object with its plausible origin. Formally, it is the time required by the universal Turing machine to compute an object from its compressed original description.
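A rough formal sketch of Bennett's definition, in one common formulation (my gloss, not part of the quoted essay; U is a universal Turing machine, K(x) the length of the shortest program for x, and T(p) the running time of program p):

    \[
      \mathrm{depth}_s(x) \;=\; \min \bigl\{\, T(p) \;:\; U(p) = x,\ |p| \le K(x) + s \,\bigr\}
    \]

That is, the logical depth of x at significance level s is the least time needed to produce x from a description within s bits of the shortest one.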
From Objective versus Intersubjective Truth:
Post-Hayek and algorithmic information theory, we recognize that there can exist information-bearing codes, produced by computation (in particular, ideas evolved from the interaction of people with each other over many lifetimes), which are
(a) not feasibly rederivable from first principles,
(b) not feasibly and accurately refutable (given the existence of the code to be refuted), and
(c) not even feasibly and accurately justifiable (given the existence of the code to justify).
("Feasibility" is a measure of cost, especially the costs of computation and empircal experiment. "Not feasibly" means "cost not within the order of magnitude of being economically efficient": for example, not solvable within a single human lifetime. Usually the constraints are empirical rather than merely computational).
(a) and (b) are ubiquitous among highly evolved systems of interactions among richly encoded entities (whether that information is genetic or memetic). (c) is rarer, since many of these interpersonal games are likely no more difficult than NP-complete: solutions cannot be feasibly derived from scratch, but known solutions can be verified in feasible time. However, there are many problems, especially empirical problems requiring a "medical trial" over one or more full lifetimes, that don't even meet (c): it's infeasible to create a scientifically repeatable experiment. For the same reason a scientific experiment cannot refute _any_ tradition dealing with interpersonal problems (b), because it may not have run over enough lifetimes, and we don't know which computational or empirical class the interpersonal problem solved by the tradition falls into. One can scientifically refute traditional claims of a non-interpersonal nature, e.g. "God created the world in 4004 B.C.", but one cannot accurately refute metaphorical interpretations or imperative statements which apply to interpersonal relationships.
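As a toy illustration of the derive-versus-verify asymmetry just described (mine, not the essay's), using the NP-complete subset-sum problem as a stand-in for such games: finding a solution from scratch takes exponential time in the worst case, while checking a proposed solution is quick.

    # Derive-vs-verify asymmetry, illustrated with subset-sum (NP-complete):
    # brute-force search is exponential in the number of items, while
    # verification is linear in the size of the candidate solution.
    from itertools import combinations

    def derive(nums, target):
        """Find a subset of nums summing to target by exhaustive search."""
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return combo
        return None

    def verify(nums, target, candidate):
        """Check a proposed solution quickly."""
        return all(x in nums for x in candidate) and sum(candidate) == target

    nums = [3, 34, 4, 12, 5, 2]
    print(derive(nums, 9))           # exponential search in general, e.g. (4, 5)
    print(verify(nums, 9, (4, 5)))   # fast check: True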
As Dawkins has observed, death is vastly more probable than life. Cultural parts randomly thrown together, or thrown together by some computationally shallow line of reasoning, most likely result in a big mess rather than well-functioning relationships between people. The cultural beliefs which give rise to civilization are, like the genes which specify an organism, a highly improbable structure, surrounded in "meme space" primarily by structures which are far more dysfunctional. Most small deviations, and practically all "radical" deviations, result in the equivalent of death for the organism: a mass breakdown of civilization which can include genocide, mass poverty, starvation, plagues, and, perhaps most commonly and importantly, highly unsatisfying, painful, or self-destructive individual life choices.
From Hermeneutics: An Introduction to the Interpretation of Tradition:
Hermeneutics derives from the Greek hermeneutika, "message analysis", or "things for interpreting": the interpretation of tradition, the messages we receive from the past... Natural law theorists are trying to do a Heideggerian deconstruction when they try to find the original meaning and intent of the documents deemed to express natural law, such as codifications of English common law, the U.S. Bill of Rights, etc. For example, the question "would the Founding Fathers have intended the 1st Amendment to cover cyberspace?" is a paradigmatic hermeneutical question... [Hans-Georg] Gadamer saw the value of his teacher [Martin] Heidegger's dynamic analysis, and put it in the service of studying living traditions, that is to say traditions with useful applications, such as the law. Gadamer discussed the classical as a broad normative concept denoting that which is the basis of a liberal education. He discussed the historical process of Bewahrung, cumulative preservation, which, through constantly improving itself, allows something true to come into being. In the terms of evolutionary hermeneutics, it is used and propagated because of its useful application, and its useful application constitutes its truth. Gadamer also discusses value in terms of the duration of a work's power to speak directly.
Monday, July 23, 2012
Pascal's scams (ii)
Besides the robot apocalypse, there are many other, and often more important, examples of Pascal scams. The following may be or may have been such poorly evidenced but widely feared or hoped-for extreme consequences (these days the fears seem to predominate):
- That we are currently headed for another financial industry disaster even worse than 2008 (overwrought expectations often take the form of "much like the surprise we most recently experienced, only even more extreme").
- That global warming has caused or will cause disaster X (droughts, floods, hurricanes, tornadoes, ...).
- A whole witch's brew of "much like what just happened" terrorist disaster fears sprouted like the plague in the years after 9/11: suitcase nukes, the "ticking time bomb" excuse for legalizing torture, envelopes filled with mysterious white powders, and on and on.
- On the positive daydream side, Eric Drexler's "molecular nanotechnology" predictions of the 1980s: self-replicating robots, assemblers that could make almost anything, etc. -- a whole new industrial revolution that would make everything cheap. (Instead, it was outsourcing and a high-tech version of t-shirt printing that made many things cheap, and "nanotechnology" became just a cool buzzword to use when talking about chemistry).
- A big hope of some naive young engineers during the previous high oil price era of the late 1970s: solar power satellites made from lunar materials, with O'Neill space colonies to house the workers. Indeed, a whole slew of astronaut voyages and industries in space were supposed to follow after the spectacular (and spectacularly expensive) Apollo moon landings -- a "much like recently experienced, only more so" daydream.
- The "Internet commerce will replace bricks-and-mortar and make all the money those companies were making" ideas that drove the Internet bubble in the late 1990s. Indeed, most or all of the bubbles and depressions in financial markets may be caused by optimistic and pessimistic Pascal fads respectively.
Saturday, July 14, 2012
Pascal's scams
Beware of what I call Pascal's scams: movements or belief systems that ask you to hope for or worry about very improbable outcomes that could have very large positive or negative consequences. (The name comes of course from the infinite-reward Wager proposed by Pascal: these days the large-but-finite versions are far more pernicious). Naive expected value reasoning implies that they are worth the effort: if the odds are 1 in 1,000 that I could win $1 billion, and I am risk and time neutral, then I should expend up to nearly $1 million worth of effort to gain this boon (a toy calculation after point (3) below makes this arithmetic, and its fragility, concrete). The problems with these beliefs tend to be at least threefold, all stemming from the general uncertainty, i.e. the poor information or lack of information, from which we abstracted the low probability estimate in the first place: in the messy real world the low probability estimate is almost always due to poor or scant evidence rather than to a lottery with well-defined odds:
(1) there is usually no feasible way to distinguish between the very improbable (say, 1 in 1,000) and the extremely improbable (e.g., one in a billion). Poor evidence leads to what James Franklin calls "low-weight probabilities", which lack robustness to new evidence. When the evidence is poor, and thus robustness of probabilities is lacking, then it is likely that "a small amount of further evidence would substantially change the probability." This new evidence is as likely to decrease the probability by a factor of X as increase it by a factor of X, and the poorer the original evidence, the greater X is. (Indeed, given the nature of human imagination and bias, it is more likely to decrease it, for reasons described below).
(2) the uncertainties about the diversity and magnitudes of possible consequences, not just their probabilities, are also likely to be extremely high. Indeed, due to the overall poor information, it's easy to overlook negative consequences and recognize only positive ones, or vice-versa. The very acts you take to make it into utopia or avoid dystopia could easily send you to dystopia or make the dystopia worse.
(3) The "unknown unknown" nature of the most uncertainty leads to unfalsifiablity: proponents of the proposition can't propose a clear experiment that would greatly lower the probability or magnitude of consequences of their proposition: or at least, such an experiment would be far too expensive to actually be run, or cannot be conducted until after the time which the believers have already decided that the long-odds bet is rational. So not only is there poor information in a Pascal scam, but in the more pernicious beliefs there is little ability to improve the information.
The biggest problem with these schemes is that, the closer to infinitesimal probability, and thus usually to infinitesimal quality or quantity of evidence, one gets, the closer to infinity the possible extreme-consequence schemes one can dream up. Once some enterprising memetic innovator dreams up a Pascal's scam, the probabilities or consequences of these possible futures can be greatly exaggerated yet still seem plausible. "Yes, but what if?" the carrier of such a mind-virus incessantly demands. Furthermore, since more than a few disasters are indeed low probability events (e.g. 9/11), the plausibility and importance of dealing with such risks seem to grow after they occur -- the occurrence of one improbable disaster leads to paranoia about a large number of others, and similarly for fortuitous windfalls and hopes. Humanity can dream up a near-infinity of Pascal's scams, or spend a near-infinity of time fruitlessly worrying about them or hoping for them. There are however far better ways to spend one's time -- for example in thinking about what has actually happened in the real world, rather than the vast number of things that might happen in the future but quite probably won't, or will likely have consequences very different from those you expect.
So how should we approach low probability hypotheses with potentially high-value (negative or positive) outcomes? Franklin et al. suggest that "[t]he strongly quantitative style of education in statistics, valuable as it is, can lead to a neglect of the more qualitative, logical, legal and causal perspectives needed to understand data intelligently. That is especially so in extreme risk analysis, where there is a lack of large data sets to ground solidly quantitative conclusions, and correspondingly a need to supplement the data with outside information and with argument on individual data points."
On the above quoted points I agree with Franklin, and add a more blunt suggestion: stop throwing around long odds and dreaming of big consequences as if you are onto something profound. If you can't gather the information needed to reduce the uncertainties, and if you can't suggest experiments to make the hope or worry falsifiable, stop nightmaring or daydreaming already. Also, shut up and stop trying to convince the rest of us to join you in wasting our time hoping or worrying about these fantasies. Try spending more time learning about what has actually happened in the real world. That study, too, has its uncertainties, but they are up to infinitely smaller.
Sunday, July 01, 2012
More short takes
Perhaps I should take up Twitter, but I already have this blog, and even my short takes tend to go a bit over 140 characters. So here goes:
* The most important professions in the modern world may be the most reviled: advertiser, salesperson, lawyer, and financial trader. What these professions have in common is extending useful social interactions far beyond the tribe-sized groups we evolved to inhabit (most often characterized by the Dunbar number). This commonly involves activities that fly in the face of our tribal moral instincts.
* On a related note, much mistaken thinking about society could be eliminated by the most straightforward application of the pigeonhole principle: you can't fit more pigeons into your pigeon coop than you have holes to put them in. Even if you were telepathic, you could not learn all of what is going on in everybody's head because there is no room to fit all that information in yours. If I could completely scan 1,000 brains and had some machine to copy the contents of those into mine, I could learn at most about a thousandth of the information stored in those brains, and then only at the cost of forgetting all else I had known (a toy calculation at the end of this post makes the bound explicit). That's a theoretical optimum; any such real-world transfer process, such as reading and writing an e-mail or a book, or tutoring, or using or influencing a market price, will pick up only a small fraction of even the theoretically acquirable knowledge or preferences in the mind(s) at the other end of said process, or, if you prefer, of the information stored by those brain(s). Of course, one can argue that some kinds of knowledge -- like the kinds you and I know? -- are vastly more important than others, but such a claim is usually more snobbery than fact. Furthermore, a society with more such computational and mental diversity is more productive, because specialized algorithms, mental processes, and skills are generally far more productive than generalized ones. As Friedrich Hayek pointed out, our mutual inability to understand a very high fraction of what others know has profound implications for our economic and political institutions.
* A big problem in the last few years has been the poor recording of transfers of ownership of mortgages (i.e. of the debt, not the house). The issue of recording transfers of contractual rights is very interesting. I have a proposal for this: secure property titles. This should work just as well for mortgage securities and other kinds of transferable contractual rights as it does for the real estate itself or other kinds of property. Any time you transfer rights to a contract, it should be registered in such a secure and reliable public database in order to avoid the risk of not being able to prove ownership in court (a toy sketch at the end of this post illustrates one possible shape of such a registry).
* Not only should you disagree with others, but you should disagree with yourself. Totalitarian thought asks us to consider, much less accept, only one hypothesis at a time. By contrast quantum thought, as I call it -- although it already has a traditional name less recognizable to the modern ear, scholastic thought -- demands that we simultaneously consider often mutually contradictory possibilities. Thinking about and presenting only one side's arguments gives one's thought and prose a false patina of consistency: a fallacy of thought and communication similar to false precision, but much more common and important. Like false precision, it can be a mental mistake or a misleading rhetorical habit. In quantum reality, by contrast, I can be both for and against a proposition because I am entertaining at least two significantly possible but inconsistent hypotheses, or because I favor some parts of a set of ideas and not others. If you are unable or unwilling to think in such a quantum or scholastic manner, it is much less likely that your thoughts are worthy of others' consideration.
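A toy version of the pigeonhole bound from the telepathy example above; the capacity figure is an arbitrary placeholder, and only the ratio matters:

    # Pigeonhole bound from the "scan 1,000 brains" example: with the same
    # storage capacity as any single brain, you can hold at most 1/N of the
    # information spread across N equally sized brains.
    def max_fraction_learnable(num_brains: int) -> float:
        capacity_per_brain = 1.0                  # arbitrary units; cancels out
        total_information = num_brains * capacity_per_brain
        return capacity_per_brain / total_information

    print(max_fraction_learnable(1_000))          # 0.001 -- at most a thousandth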
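And a minimal toy sketch of the kind of public transfer registry the mortgage note points toward: an append-only, hash-linked log of transfers. The structure and names below are my illustrative assumptions, not the actual design of the secure-property-titles proposal:

    # Toy append-only registry of contract-rights transfers (illustrative only,
    # not the secure-property-titles design). Each record commits to the
    # previous one by hash, so a chain of ownership can be audited later.
    import hashlib
    import json

    def record_hash(record: dict) -> str:
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    class TitleRegistry:
        def __init__(self):
            self.log = []                          # append-only list of transfer records

        def transfer(self, asset_id: str, from_owner: str, to_owner: str) -> dict:
            prev = record_hash(self.log[-1]) if self.log else None
            rec = {"asset": asset_id, "from": from_owner, "to": to_owner, "prev": prev}
            self.log.append(rec)
            return rec

        def current_owner(self, asset_id: str):
            for rec in reversed(self.log):         # most recent transfer wins
                if rec["asset"] == asset_id:
                    return rec["to"]
            return None

    reg = TitleRegistry()
    reg.transfer("mortgage-123", "OriginatorBank", "TrustA")   # hypothetical parties
    reg.transfer("mortgage-123", "TrustA", "TrustB")
    print(reg.current_owner("mortgage-123"))       # TrustB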