Tuesday, February 15, 2011

Some speculations on the frontier below our feet

The biggest problem we face with the frontier below is that we're literally in the dark. We have a number of crude geophysical techniques (seismology, gravity field, electromagnetic, etc.) but none of them allows us to create a detailed map like the ones we can make of the surface of a distant moon of Saturn, or even of a cloud-covered planet like Venus. So in some important ways we are more ignorant of the ground a few hundred meters down in most places on our own planet than we are of the surface of most of the other planets and moons in our solar system. We know less about the distribution of the common molecules below the earth's crust, only 35 kilometers below our feet, than we do of the distribution of those molecules on the surfaces of dust clouds in distant galaxies.

One possible fix to this earth-blindness is the neutrino, and more speculatively and generally, dark matter. We can detect neutrinos and anti-neutrinos by (I'm greatly oversimplifying here, physicists please don't cringe) setting up big vats of clear water in complete darkness and lining them with ultra-sensitive cameras. The feature of neutrinos is that they rarely interact with normal matter, so that most of them can fly from their source (nuclear reactions in the earth or sun) through the earth and still be detected. The bug is that almost all of them fly through the detector, too. Only a tiny fraction hit a nucleus in the water and interact, giving off a telltale photon (a particle of light) which is picked up by one of the cameras. It is common now to detect neutrinos from nuclear reactors and the sun, and more recently we have started using some crude instruments to detect geo-neutrinos (i.e. neutrinos or anti-neutrinos generated by the earth, not the sun). With enough vats and cameras we may be able to detect enough of these (anti-)neutrinos from nuclear reactions (typically radioactive decays) in the earth's crust to make a detailed radioisotope map (and thus go a long way towards a detailed chemical map) of the earth's interior. For the first time we'd have detailed pictures of the earth's interior instead of very indirect and often questionable inferences. A 3D Google Earth. These observatories may also be valuable intelligence tools, detecting secret nuclear detonations and reactors being used to make nuclear bomb material, via the tell-tale neutrinos these activities give off.

Other forms of weakly interacting particles, the kind that probably make up dark matter, may be much more abundant but interact even more weakly than neutrinos. So weakly that we haven't even detected them yet. They're just the best theory we have to explain why galaxies hang together: if they consisted only of the visible matter they should fly apart. Nevertheless, depending on what kinds of dark particles we discover, and on the ways they weakly interact with normal matter, we may find more ways of taking pictures of the earth's interior.

What might we find there? One possibility: an abundance of hydrogen created by a variety of geological reactions and sustained by the lack of oxygen. Scientists have discovered that the predominant kinds of rocks in the earth's crust contain quite a bit of hydrogen trapped inside them: on average about five liters of hydrogen per cubic meter of rock. This probably holds at least to the bottom of the lithosphere. If so that region contains about 150 million trillion liters of hydrogen.
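
For concreteness, here is a rough back-of-the-envelope version of that figure; the roughly 60 kilometer average lithosphere thickness is an illustrative assumption on my part, since estimates vary considerably from place to place.

```python
# Back-of-the-envelope check of the lithospheric hydrogen figure quoted above.
# The ~60 km average lithosphere thickness is an illustrative assumption;
# real estimates vary roughly between 50 and 150 km depending on location.

EARTH_SURFACE_M2 = 5.1e14        # total surface area of the earth, m^2
LITHOSPHERE_THICKNESS_M = 6.0e4  # assumed average thickness, ~60 km
H2_PER_M3_LITERS = 5.0           # liters of hydrogen per cubic meter of rock (from the text)

volume_m3 = EARTH_SURFACE_M2 * LITHOSPHERE_THICKNESS_M
hydrogen_liters = volume_m3 * H2_PER_M3_LITERS

print(f"lithosphere volume: {volume_m3:.2e} m^3")
print(f"hydrogen: {hydrogen_liters:.2e} liters")   # ~1.5e20 L, i.e. ~150 million trillion
```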

Sufficiently advanced neutrino detectors might be able to see this hydrogen via its tritium, which when it decays gives off an anti-neutrino. Tritium, with its half-life of about 12 years, is very rare, but is created when a more common hydrogen isotope, deuterium, captures a neutron from a more common nuclear event (the decay of radioisotopes that are common in the earth's crust). About one-millionth of the deuterium in the heavy water moderating a nuclear reactor is converted into tritium in a year. This rate will be far lower in the earth's interior but still may be significant enough compared to tritium's half-life that a sufficiently sensitive neutrino detector of the future, calibrated against the much greater stream of such neutrinos coming from the sun, may detect hydrogen via such geotritium-generated neutrinos. However, the conversion of deuterium to tritium in the earth's crust may be so rare that we will be forced to infer the abundance of hydrogen from the abundance of other elements. Almost all elements have radioisotopes that give off neutrinos when they decay, and most of these are probably much more common in the earth's crust than tritium.
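
As a rough sketch of the production-versus-decay reasoning, the equilibrium tritium abundance scales as the production rate times the mean lifetime. The reactor figure of one-millionth per year is from the paragraph above; the crustal rate in the sketch is a purely hypothetical placeholder.

```python
import math

# Equilibrium tritium fraction ~ (conversion rate per deuterium atom per year) x (mean lifetime).
# The 1e-6 /yr figure is the reactor heavy-water rate quoted above; the crustal
# rate below is a made-up placeholder, only to show how the scaling works.

TRITIUM_HALF_LIFE_YEARS = 12.3
mean_lifetime = TRITIUM_HALF_LIFE_YEARS / math.log(2)   # ~17.8 years

def equilibrium_tritium_fraction(conversion_rate_per_year: float) -> float:
    """Steady-state tritium/deuterium ratio when production balances decay."""
    return conversion_rate_per_year * mean_lifetime

print(equilibrium_tritium_fraction(1e-6))    # reactor moderator: ~1.8e-5
print(equilibrium_tritium_fraction(1e-12))   # hypothetical, far slower crustal rate: ~1.8e-11
```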

Another possibility for detecting hydrogen is, instead of looking for geo-neutrinos, to look at how the slice of earth one wants to study absorbs solar neutrinos. This would require at least two detectors, one to look at the (varying) unobstructed level of solar neutrinos and the other lined up so that the geology being studied is between that detector and the sun. This differential technique may work even better if we have a larger menagerie of weakly interacting particles ("dark matter") to work with, assuming that variations in nuclear structure can still influence how these particles interact with matter.

It's possible that a significant portion of the hydrogen known to be locked into the earth's rocks has been freed or can be freed merely by the process of drilling through that rock, exposing the highly pressurized hydrogen in deep rocks to the far lower pressures above. This is suggested by the Kola Superdeep Borehole, one of those abandoned Cold War super-projects. In this case instead of flying rockets farther than the other guy, the goal was to drill deeper than the other guy, and the Soviets won this particular contest: over twelve kilometers straight down, still the world record. They encountered something rarely encountered in shallower wells: a "large quantity of hydrogen gas, with the mud flowing out of the hole described as 'boiling' with hydrogen."

The consequences of abundant geologic hydrogen could be two-fold. First, a variety of geological and biological processes convert hydrogen to methane (and the biological conversion, by bacteria appropriately named "methanogens", is the main energy source for the deep biosphere, which probably substantially outweighs the surface biosphere). This suggests that our planet's supply of methane (natural gas) is far greater than our supply of oil or than currently proven natural gas reserves, so that (modulo worries about carbon dioxide in the atmosphere) our energy use can continue to grow for many decades to come courtesy of this methane.

Second, the Kola well suggests the possibility that geologic hydrogen itself may become an energy source, and one that frees us from having to put more carbon dioxide in the atmosphere. The "hydrogen economy" some futurists go on about, consisting of fuel-cell-driven machinery, depends on making hydrogen, which in turn requires a cheap source of electricity. This is highly unlikely unless we figure out a way to make nuclear power much cheaper. Geologic hydrogen, by contrast, doesn't have to be made; it only has to be extracted and purified. If just ten percent of the hydrogen in the lithosphere turns out to be recoverable over the next 275 years, that's enough by my calculations to enable a mild exponential growth in energy usage of 1.5%/year over that entire period (starting with the energy equivalent usage of natural gas today). During most of that period human population is expected to be flat or falling, so practically that entire increase would be in per capita usage. To put this exponential growth in perspective, at the end of that period a person would be consuming, directly or indirectly, about 330 times as much hydrogen energy as they consume in natural gas energy today. And since it's hydrogen, not hydrocarbon, burning it would not add any more carbon to the atmosphere, just a small amount of water.
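
The compound-growth arithmetic behind that scenario is simple enough to sketch; the per-capita multiplier also depends on population assumptions not spelled out here.

```python
# Compound growth in annual energy use at a fixed rate, plus the cumulative
# total consumed over the whole period (in multiples of the starting year's use).
# The rate and horizon are the figures used in the text; everything else
# follows mechanically from them.

GROWTH_RATE = 0.015   # 1.5% per year
YEARS = 275

final_over_initial = (1 + GROWTH_RATE) ** YEARS
cumulative_over_initial = ((1 + GROWTH_RATE) ** YEARS - 1) / GROWTH_RATE

print(f"final-year use / first-year use: {final_over_initial:.0f}x")
print(f"total consumed over {YEARS} years: {cumulative_over_initial:.0f}x the first year's use")
```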

Luckily our drilling technology is improving: the Kola well took nearly two decades to drill at a leisurely pace of about 2 meters per day. Modern oil drilling often proceeds at 200 meters/day or higher, albeit not to such great depths. Synthetic diamond, used to coat the tips of the toughest drills, is much cheaper than during the Cold War and continues to fall in price, and we have better materials for withstanding the high temperatures and pressures encountered when we get to the bottom of the earth's crust and proceed into the upper mantle (where the Kola project got stymied: their goal was 15 kilometers down).


A modern drill bit studded with polycrystalline diamond


Of course, I must stress that the futuristic projections given above are quite speculative. We may not figure out how to affordably build a network of neutrino-detecting vats massive enough or of high enough precision to create detailed chemical maps of the earth's interior. And even if we create such maps, we may discover that there is not so much hydrogen after all, or that the hydrogen is hopelessly locked up in the rocks and that the Kola experience was a fluke or a misinterpretation. Nevertheless, if nothing else this exercise shows, despite all the marvelous stargazing science that we have done, how much mysterious ground we have below our shoes.

Wednesday, February 09, 2011

Great stagnation or external growth?

Tyler Cowen posits that we are going through a Great Stagnation. Civilization has harvested the low hanging fruit of the internal combustion engine, electricity, and so on that drove great increases in value and productivity from the end of the nineteenth century. But we have made so few similarly productive discoveries in recent decades that as a result progress is slowing down. Markets have thus overestimated economic growth, resulting in the dot-com bubble and crash and the more recent market problems as real estate prices failed to keep pace with expectations. This thesis echoes much that Peter Thiel and others have been saying, that the financial industry has, in its expectations about financial returns, been counting on 20th century levels of economic growth in the developed world but instead has hit the reality of lower growth rates here, resulting in market volatility and drops.

These pessimistic observations of long-term economic growth are in many ways a much needed splash of cold water in the face for the Kurzweilian "The Singularity is Near" crowd, the people who think nearly everything important has been growing exponentially. And it is understandable for an economist to observe a great stagnation because there has indeed been a great stagnation in real wages as economists measure them: real wages in the developed world grew spectacularly during most of the 20th century but have failed to grow during the last thirty years.

Nevertheless Cowen et al. are being too pessimistic, reacting too much to the recent market problems. (Indeed the growing popularity of pessimistic observations about great stagnations, peak oil, and the like strongly suggests it's a good time to be long the stock markets!) These melancholy stories fail to take into account the great recent increases in value that are subjectively obvious to almost all good observers who have lived through the last twenty years but that economists have been unable to measure.

In many traditional industries, such as transportation and real estate, the pessimistic thesis is largely true. The real costs of commuting, of buying real estate near where my friends are and where I want to work, of getting a traditional college education, and of a number of other important things have risen significantly over the past twenty years. These industries are going backwards, becoming less efficient, delivering less value at higher cost: if we could measure their productivity it would be falling.

On the other hand, the costs of manufactured goods whose prices primarily reflect manufacturing rather than raw materials have fallen substantially over the last twenty years, at about the same rate as in prior decades. Of course, most of these gains have been in the developing and BRICs countries, for a variety of reasons, such as the higher costs of regulation in the developed world and the greater access to cheaper labor elsewhere, but those of us in the U.S., Europe and Japan still benefit via cheap imports that allow us to save more of our money for other things. But perhaps even more importantly, outside of traditional education and mass media we have seen a knowledge and entertainment sharing revolution of unprecedented value. I argue that what looks like a Great Stagnation in the traditional market economy is to a significant extent a product of a vast growth in economic value that has occurred on the Internet and largely outside of the traditional market economy, and a corresponding cannibalization of and brain drain from traditional market businesses.

Most of the economic growth during the Internet era has been largely unmonetized, i.e. external to the measurable market. This is most obvious for completely free services like Craig's List, Wikipedia, many blogs, open source software, and many other services based on content input by users. But ad-funded Internet services also usually create a much greater value than is captured by the advertising revenues. These include search, social networking, many online games, broadcast messaging, and many other services. Only a small fraction of the Internet's overall value has been monetized. In other words, the vast majority of the Internet's value is what economists call an externality: it is external to the measurable prices of the market. Of course, since this value is unmeasured, this thesis is extremely hard to prove or disprove, and can hardly be called scientific; mainly it just strikes me as subjectively obvious. "Social science" can't explain most things about society and this is one of them.

What's worse for the traditional market (as opposed to this recent tsunami of unmonetized voluntary information exchange), this tidal wave of value has greatly reduced the revenues of certain industries. The direct connection the Internet provides between authors and readers has put many bookstores out of business. Online classifieds and free news sources have cannibalized newspapers and magazines. Wikipedia is destroying demand for the traditional encyclopedia. Free and cut-price music has caused a substantial decline in music industry revenues. So the overall effect is a great increase in value combined with a perhaps small but, I'd guess, significant reduction in what GDP growth would have been without the Internet.

What are some of the practical consequences? Twenty years ago most smart people did not have an encyclopedia in the home or at the office. Now the vast majority in the developed world and even hundreds of millions in the BRICs countries do, and many even have it in the car or on the train. Twenty years ago it was very inconvenient and cost money to place a tiny classified ad that could only be seen in the local newspaper; now it is very easy and free to place an ad of proper length that can be seen all over the world. Search engines combined with mass voluntary and generally free submission of content to the Internet have increased the potential knowledge we have ready access to thousands-fold. Social networking allows us to easily reconnect with old friends we'd long lost contact with. Each of us has access to much larger libraries of music and written works. We have access to a vast "long tail" of specialized content that the traditional mass media never provided us. The barriers to a smart person with worthwhile thoughts getting fellow humans to attend to those thoughts are far lower than what they were twenty years ago. And almost none of this can be measured in market prices, so almost none of it shows up in the economic figures on which economists focus.

Cowen suggests that external gains of similar magnitude occurred in prior productivity revolutions, but I'm skeptical of this claim. A physical widget can be far more completely monetized than a piece of information, because it is excludable: if you don't pay, you don't get the widget. As opposed to information that computers readily copy. (The most underappreciated function of computers is that they are far better copy machines than the paper copiers). It's true that competition drove down prices. But the result was still largely monetized as greater value caused increased demand, whereas growth in the use of search engines, Twitter, Wikipedia, Facebook etc. largely just requires adding a few more computers that now cost far less than the value they convey. (Yes, I'm well aware of scaling issues in software engineering, but they typically don't require much more than a handful of smart computer scientists to solve). Due to Moore's Law the computers that drive the Internet have radically increased in functionality per dollar since the dawn of the Internet. Twitter's total capital equipment purchases, R&D, and user acquisition expenditures are less than fifty cents per registered user and these capital investment costs per user continue to drop at a ferocious rate for Internet businesses and non-profits.

The brain drain from traditional industries can be seen in, for example, the great increase in the proportion of books on computer programming, HTML, and the like on bookstore shelves relative to traditional engineering and technical disciplines from mechanical engineering to plumbing. It is less blatant in the growth of computer science and electrical engineering relative to other engineering disciplines, but that is just the tip of the iceberg: vast numbers of non-computer scientists, including many with engineering degrees or technical training in other areas, have ended up as computer programmers.

Fortunately, the Internet is giving a vast new generation of smart people access to knowledge who never had it before. The number of smart people who can learn an engineering discipline has probably increased by nearly a factor of ten over the last twenty years (again largely in the BRICs and developing world of course). The number who can actually get a degree of course has not -- which gives rise to a great economic challenge -- what are good ways for this vast new population of educated smart people to prove their intelligence and knowledge when traditional education with its degrees of varying prestige is essentially a zero-sum status game that excludes them? How do we get them in regular social contact with more traditionally credentialed smart people? The Internet may solve much of the problem of finding fellow smart people who share our interests and skills, but we still emotionally bond with people over dinner not over Facebook.

As for the great stagnation in real wages in particular, the biggest reason is probably the extraordinarily rapid pace at which the BRICs and developing world have become educated and accessible to the developed world since the Cold War. In other words, outsourcing has in a temporary post-Cold-War spree outraced the ability of most of us in the developed world to retrain for the more advanced industries. The most unappreciated reason, and the biggest reason retraining for newer industries has been so difficult, is that unmonetized value provides no paying jobs, but may destroy such jobs when it causes the decline of some traditionally monetized industries. On the Internet the developed world is providing vast value to the BRICs and developing world, but that value is largely unmonetized and thus produces relatively few jobs in the developed world. The focus of the developed world on largely unmonetized, though extremely valuable, activities has been a significant cause of wage stagnation in the developed world and of skill and thus wage increases in the developing world. Whereas before they were buying our movies, music, books, and news services, increasingly they are just getting our free stuff on the Internet. The most important new industry of the last twenty years has been mostly unmonetized and thus hasn't provided very many jobs to retrain for, relative to the value it has produced.

And of course there are the challenges of the traditional industries that gave us the industrial revolution and 20th century economic growth in the first place. Starting with the most basic and essential: agriculture, extraction, and mass manufacturing. By no means should these be taken for granted; they are the edifice on which all the remainder rests. Gains in agriculture and extraction may be diminishing as the easy pickings (given sufficiently industrial technology and a sufficiently elaborated division of labor) of providing scarce nutrients and killing pests in agriculture and the geologically concentrated ores are becoming history. Can the great knowledge gains from the Internet be fed back to improve the productivities of our most basic industries, especially in the face of Malthusian depletion of the low hanging fruit of soil productivity and geological wealth? That remains to be seen, but despite all the market troubles and run-up in commodity prices, which have far more to do with financial policies than with the real costs of extracting commodities, I remain optimistic. We still have very large and untapped physical frontiers. These tend to be, for the near future, below us rather than above us, which flies in the face of our spiritual yearnings (although for space fans here is the most promising possible exception to this rule I have encountered). The developing world may win these new physical frontiers due to the high political value the developed world places on environmental cleanliness, which has forced many dirty but crucial businesses overseas. Industries that involve far more complex things, like medicine and the future of the Internet itself, are far more difficult to predict. But the simple physical frontiers as well as the complex medical and social frontiers are all there, waiting for our new generations with their much larger number of much more knowledgeable people to tap them.

Wednesday, January 26, 2011

Some hard things are easy if explained well

Proof of the Pythagorean theorem:


Dark matter:


Chemical reactions, fire, and photosynthesis:


How trains stay on their tracks and go through turns:


And just for fun, the broken window fallacy:

Saturday, January 22, 2011

Tech roundup 01/22/11

Atomic clock on a chip for about $1,500. Accurate and independent clocks improve secure synchronous protocols, in other words can help securely determine the order in which events occur on the Internet, wireless, and other networks while minimizing dependence on trusted third parties like GPS. The technology nicely complements secure timestamping (see e.g. here, here, and here) which can leave an unforgeable record of the ordering of events and the times at which specific data or documents existed.
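
As a toy illustration of the timestamping idea (not any particular production scheme), here is a minimal hash chain in which each entry commits to the previous entry's hash, so the recorded order of events cannot later be quietly rearranged without breaking every subsequent hash.

```python
import hashlib
import json
import time

# Toy hash-chained timestamp log: each record commits to the hash of the
# previous record, so reordering or altering past entries breaks the chain.
# This is an illustrative sketch, not any particular production protocol.

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class TimestampLog:
    def __init__(self):
        self.entries = []

    def append(self, document: bytes) -> dict:
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {
            "prev_hash": prev,
            "doc_hash": hashlib.sha256(document).hexdigest(),
            "time": time.time(),   # local clock reading attached to the entry
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = entry_hash(entry)
        return True

log = TimestampLog()
log.append(b"first document")
log.append(b"second document")
print(log.verify())   # True; tampering with an earlier entry would make this False
```

An accurate, independent clock like the chip-scale atomic clock above would tighten the wall-clock times attached to each entry, but the ordering guarantee itself comes from the hash chain.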

Bitcoin, an implementation of the bit gold idea (and another example of where the order of events is important), continues to be popular.

It is finally becoming more widely realized that there are many "squishy" areas where scientific methods don't work as well as they do in hard sciences like physics and chemistry, including psychology, significant portions of medicine, ecology, and I'd add the social sciences, climate, and nutrition. These areas are often hopelessly infected with subjective judgments about results, so it's not too surprising that when the collective judgments change about what constitutes, for example, the "health" of a mind, body, society, or ecosystem, the "results" of experiments as defined in terms of these judgments change as well. See also "The Trouble With Science".

Flat sats (as I like to call them) may help expand our mobility in the decades ahead. Keith Lofstrom proposes fabricating an entire portion of a phased array communications satellite -- solar cells, radios, electronics, computation, etc. -- on a single silicon wafer. Tens of thousands or more of these, each nearly a foot wide, may be launched on a single small rocket. If they're thin enough, orientation and orbit can be maintained using light pressure (like a solar sail). Medium-term application: phased array broadcast of TV or data allows much smaller ground antennas, perhaps even satellite TV and (mostly downlink) Internet in your phone, iPad, or laptop. Long-term: lack of need for structure to hold together an array of flat sats may bring down the cost of solar power in space to the point that we can put the power-hungry server farms of Internet companies like Google, Amazon, Facebook, etc. in orbit. Biggest potential problem: large numbers of these satellites may both create and be vulnerable to micrometeors and other space debris.

Introduction to genetic programming, a powerful evolutionary machine learning technique that can invent new electronic circuits, rediscover Kepler's laws from orbital data in seconds, and much more, as long as it has fairly complete and efficient simulations of the environment it is inventing or discovering in.
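
To give a flavor of the Kepler example with something much simpler than genetic programming proper, a plain least-squares fit in log-log space recovers the 3/2 exponent of Kepler's third law from the planets' semi-major axes and periods:

```python
import math

# Recover the exponent k in  period = a**k  (Kepler's third law: k = 3/2)
# from well-known orbital data, via an ordinary least-squares fit in log-log space.
# This is a simple stand-in illustration, not genetic programming itself.

# (semi-major axis in AU, orbital period in years)
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

xs = [math.log(a) for a, _ in planets.values()]
ys = [math.log(t) for _, t in planets.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

print(f"fitted exponent: {slope:.3f}")   # ~1.500
```

Genetic programming searches over the structure of whole expressions rather than a single exponent, but the dependence on clean, simulable or measured data is the same.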

Exploration for underwater gold mining is underway. See also "Mining the Vast Deep."

Monday, January 17, 2011

"The Singularity"

A number of my friends have gotten wrapped up in a movement dubbed "The Singularity." We now have a "Singularity Institute", a NASA-sponsored "Singularity University", and so on as leading futurist organizations. I tend to let my disagreements on these kinds of esoteric futurist visions slide, but I've been thinking about my agreements and disagreements with this movement for long enough that I shall now share them.

One of the basic ideas behind this movement -- that computers can help make themselves smarter, and that for a time this growth may look exponential, or even super-exponential in some dimensions, and may end up much faster than today's -- is by no means off the wall. Indeed computers have been helping improve future versions of themselves at least since the first compiler and circuit design software was invented. But "the Singularity" itself is an incoherent and, as the capitalization suggests, basically a religious idea. As well as a nifty concept for marketing AI research to investors who like very high risk and reward bets.

The "for a time" bit is crucial. There is as Feynman said "plenty of room at the bottom" but it is by no means infinite given actually demonstrated physics. That means all growth curves that look exponential or more in the short run turn over and become S-curves or similar in the long run, unless we discover physics that we do not now know, as information and data processing under physics as we know it are limited by the number of particles we have access to, and that in turn can only increase in the long run by at most a cubic polynomial (and probably much less than that, since space is mostly empty).

Rodney Brooks thus calls the Singularity "a period" rather than a single point in time, but if so then why call it a singularity?

As for "the Singularity" as a point past which we cannot predict, the stock market is by this definition an ongoing, rolling singularity, as are most aspects of the weather, and many quantum events, and many other aspects of our world and society. And futurists are notoriously bad at predicting the future anyway, so just what is supposed to be novel about an unpredictable future?

The Singularitarian notion of an all-encompassing or "general" intelligence flies in the face of how our modern economy, with its extreme specialization, works. We have been implementing human intelligence in computers little bits and pieces at a time, and this has been going on for centuries. Arithmetic (first with mechanical calculators), then bitwise Boolean logic (from the early parts of the 20th century with vacuum tubes), then accounting formulae and linear algebra (big mainframes of the 1950s and 60s), typesetting (Xerox PARC, Apple, Adobe, etc.), etc. etc. have each gone through their own periods of exponential and even super-exponential growth. But it's these particular operations, not intelligence in general, that exhibit such growth.

At the start of the 20th century, doing arithmetic in one's head was one of the main signs of intelligence. Today machines do quadrillions of additions and subtractions for each one done in a human brain, and this rarely bothers or even occurs to us. And the same extreme division of labor that gives us modern technology also means that AI has and will take the form of these hyper-idiot, hyper-savant, and hyper-specialized machine capabilities. Even if there was such a thing as a "general intelligence" the specialized machines would soundly beat it in the marketplace. It would be very far from a close contest.

Another way to look at the limits of this hypothetical general AI is to look at the limits of machine learning. I've worked extensively with evolutionary algorithms and other machine learning techniques. These are very promising but are also extremely limited without accurate and complete simulations of an environment in which to learn. So for example in evolutionary techniques the "fitness function" involves, critically, a simulation of electric circuits (if evolving electric circuits), of some mechanical physics (if evolving simple mechanical devices or discovering mechanical laws), and so on.

These techniques can only learn things about the real world to the extent such simulations accurately simulate the real world, but except for extremely simple situations (e.g. rediscovering the formulae for Kepler's laws based on orbital data, which a modern computer with the appropriate learning algorithm can now do in seconds) the simulations are usually woefully incomplete, rendering the results usually useless. For example John Koza, after about 20 years of working on genetic programming, has discovered about that many useful inventions with it, largely involving easily simulable aspects of electronic circuits. And "meta GP", genetic programming that is supposed to evolve its own GP-implementing code, is useless because we can't simulate future runs of GP without actually running them. So these evolutionary techniques, and other machine learning techniques, are often interesting and useful, but the severely limited ability of computers to simulate most real-world phenomena means that no runaway is in store, just potentially much more incremental improvements which will be much greater in simulable arenas and much smaller in others, and will slowly improve as the accuracy and completeness of our simulations slowly improves.
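
To make the "fitness function is a simulation" point concrete, here is a minimal sketch of a (1+1) evolutionary search whose fitness comes entirely from a toy physics simulation (the step response of an RC circuit); the target value and parameters are made up for illustration, and the search can only discover what the simulation can express.

```python
import math
import random

# A minimal (1+1) evolutionary search: evolve the time constant of an RC circuit
# so that its simulated step response matches a target response. The "fitness"
# is defined entirely by the simulation, which is the point made in the text:
# the learner can only discover what the simulation can express.

random.seed(0)
TIMES = [i * 0.1 for i in range(1, 51)]   # seconds
TARGET_TAU = 2.0                          # hidden "true" RC time constant (made up)

def step_response(tau, t):
    """Voltage across the capacitor after a unit step: v(t) = 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / tau)

target = [step_response(TARGET_TAU, t) for t in TIMES]

def fitness(tau):
    """Negative squared error between simulated and target responses (higher is better)."""
    return -sum((step_response(tau, t) - v) ** 2 for t, v in zip(TIMES, target))

parent = 0.5                                            # initial guess for tau
for generation in range(200):
    child = max(1e-6, parent + random.gauss(0, 0.1))    # mutate
    if fitness(child) >= fitness(parent):               # select
        parent = child

print(f"evolved tau ~ {parent:.2f} (target {TARGET_TAU})")
```

Swap in a poorer simulation of the circuit and the "evolved" answer degrades accordingly, which is exactly the limitation described above.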

The other problem with tapping into computer intelligence -- and there is indeed after a century of computers quite a bit of very useful but very alien intelligence there to tap into -- is the problem of getting information from human minds to computers and vice versa. Despite all the sensory inputs we can attach to computers these days, and vast stores of human knowledge like Wikipedia that one can feed to them, almost all such data is to a computer nearly meaningless. Think Helen Keller but with most of her sense of touch removed on top of all her other tragedies. Similarly humans have an extremely tough time deciphering the output of most software unless it is extremely massaged. We humans have huge informational bottlenecks between each other, but these hardly compare to the bottlenecks between ourselves and the hyperidiot/hypersavant aliens in our midst, our computers. As a result the vast majority of programmer time is spent working on user interfaces and munging data rather than on the internal workings of programs.

Nor does the human mind, as flexible as it is, exhibit much in the way of some universal general intelligence. Many machines and many other creatures are capable of sensory, information-processing, and output feats that the human mind is quite incapable of. So even if we in some sense had a full understanding of the human mind (and it is information theoretically impossible for one human mind to fully understand even one other human mind), or could somehow faithfully "upload" a human mind to a computer (another entirely conjectural operation, which may require infeasible simulations of chemistry), we would still not have "general" intelligence, again if such a thing even exists.

That's not to say that many of the wide variety of techniques that go under the rubric "AI" are not or will not be highly useful, and may even lead to accelerated economic growth as computers help make themselves smarter. But these will turn into S-curves as they approach physical limits and the idea that this growth or these many and varied intelligences are in any nontrivial way "singular" is very wrong.

Sunday, January 16, 2011

Saturday, December 04, 2010

Some conjectures and facts regarding the Malthusian isocline and the industrial revolution



Reader Matt comments on the Malthusian isocline here, observing that I placed 17th century Britain a bit "higher" (presumably meaning more advanced, i.e. more "northeast") on the graph than 17th century China.

My placement in this case of one absolutely more advanced than the other is highly conjectural. It's worse than comparing apples to oranges. I had to put them somewhere on the graph and I used my best judgment. Britain was semi-pastoral (yet still stationary) while China had a much more productive climate for grain agriculture (much wetter in the summer and drier in the winter than Britain) and thus could support a far higher population density per hectare. But whether the Chinese population density per hectare adjusted for these natural advantages (presumably leaving technological and institutional differences, what I call density per natural global hectare), which is what I'm trying to graph on my x-axis, was still much higher is far more conjectural. If we draw the x-axis as just population density per hectare (rather than "natural hectare", i.e. adjusted for natural advantages) the Chinese point is far higher, not just somewhat higher as I drew it, and thus China's isocline was clearly more advanced, but I argue due to natural advantages rather than technology or institutions. In terms of technology and institutions, it's worse than apples to oranges not only because of the radical difference between their basic agricultural strategies (labor-intensive grain-and-bean vs. semi-pastoral with heavy use of draft animals) but because each had a number of advanced agricultural techniques the other rarely or never used.

See here and here for the background to these theories and the graph in question, as well as a very interesting film of street scenes that allow us to compare the Chinese and British street transportation in the early 20th century (largely before the IC engine replaced horses and rickshaws).

Note that in contrast to the above graph, which is schematic and partially conjectural, the following, showing the relentless advance of the isocline in England from the Late Middle Ages, is based on actual statistics of population and real agricultural labor wage income based on a food-dominated commodity basket:



This differs from the "noisy until 1800" graph of Clark because I've graphed 80-year averages rather than decadal averages to smooth out the noise of pests and short-term climate variations on the quality of harvests, which dominated the position of the isocline before the 19th century. The points are from the data, but the slopes of the isoclines I drew through them are still conjectural.
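
For what it's worth, the smoothing step itself is trivial; something like the following (run here on a synthetic series, since I'm not reproducing the underlying data) is all that is involved in turning decadal points into 80-year averages.

```python
import random

# The smoothing behind the 80-year averages: a trailing moving average over
# eight decadal observations. The series below is synthetic noise, purely to
# exercise the function; the real input would be Clark-style decadal wage estimates.

def rolling_mean(values, window=8):      # 8 decades = 80 years
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

random.seed(1)
decadal = [100 + 0.5 * i + random.gauss(0, 10) for i in range(60)]   # synthetic decadal series
print(rolling_mean(decadal)[:3])
```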

Much less conjectural than the relative levels of European and Asian 17th century isoclines, and probably more important for the issue of how Britain escaped from the Malthusian trap, is that British laborer and peasant per capita nutrition (roughly corresponding to real income in an era when food dominated the peasant and laborer budget) was much higher than the Chinese equivalent. Britain had more and higher quality protein in the diet (partly from more dairy, which has genetic causes, but mainly from more meat, the British having a stationary yet semi-pastoral agriculture). China, having far fewer draft animals per capita, used laborers for arduous and repetitive tasks the Brits considered fit only for draft animals.

Part of Britain's better marginal standard of living can be put down to disease, especially the Black Death, but by the 18th century I'd agree that Britain was as healthy as China. What we see after the Black Death is that the Brits greatly increased their meat-eating and use of draft animals, and, what is more important and seemingly unprecedented, were largely able to keep up this intensive use of livestock after their population recovered and even boomed beyond pre-Plague levels, whereas in (AFAIK all) prior agricultural eras and areas increasing human populations in recovery from plagues replaced draft animals with human labor and moved the diet towards grains and away from meat: sliding "southeast" down a static isocline. Also frequent periods of war and excess taxation had often led to the loss of draft animals and their replacement by human labor, in these cases with a regressing (moving "southwest") isocline. In both cases the capital accumulated in draft animals and meat livestock was destroyed. As Steve Sailer correctly points out, in Great Britain there has not been a major famine since before the Black Death while in China they were common until very recently. After the Black Death British agriculture went through a very long period of capital accumulation, in the form of soil conditioners (especially lime), commercial seeds and breeding, engineered water meadows that replaced phosphates and other scarce nutrients, drainage (important in the wet British winters), crops and rotations that fixed more nitrogen (at first growing beans for fodder, then replacing beans with the even better nitrogen fixer clover), and many other improvements besides the tremendous accumulation of livestock. Indeed "improvement" was the watchword of British agriculture as soon as works about it started being published.

So the Chinese peasant and laborer worked harder on a worse diet and lived much closer to starvation; in hard times British peasants could go back to eating the foods that after the Black Death they considered animal fodder (beans, oats, etc.), whereas the analogous soybean was as much a protein staple of the Chinese peasant as it was animal fodder. Again this doesn't tell us whether China's isocline adjusted for its natural advantages was more or less advanced; it tells us that Britain was operating with a higher marginal and mean per capita standard of living and lower population density relative to its natural advantages than China, i.e. further "northwest" on its isocline.

Besides the Columbian Exchange, and even more important, there may have been an even greater but far less heralded exchange going on between Europe and East Asia in the centuries after Europe established regular oceanic contact between the two. Not so much in species, but in techniques and institutions, aided by cheap paper, the printing press, and the resulting expansion of literacy. As this contact increased we see both the East Asian and Western European isoclines advancing rapidly. But the British isocline was advancing faster than Asia's (regardless of which was more advanced absolutely), and probably more importantly, Britain and at least early on the Netherlands were largely maintaining the higher peasant/laborer standard of living despite population growth, whereas in East Asia until the 20th century the isocline started at an already low per-capita income yet the advance was still directed entirely rightward towards higher population growth rather than towards catching up to British marginal standards of living. So Britain kept its big lead in per-capita standard of living and farm labor productivity, and thus presumably in the proportion of surplus labor that could be put to work on non-agricultural tasks. And since Britain did not need as large an army as a Continental power, nor as high taxes to support it, more of this labor could go into industry.

Another major factor in industrialization was urbanization, that is the proportion of this surplus labor that could be relocated to where other industry or industrial resources were available rather than staying where the food was grown. Urbanization (and presumably the general proportion of non-agricultural population) during the 17th-19th centuries was growing rapidly in both Western Europe, especially Britain, and Tokugawa Japan, but less so in China, a phenomenon I have yet to see explained (draft animals can't explain the Japanese case, as Japan resembled China in being relatively bereft of them). As far as agriculture and urbanization go, contrary to myth Tokugawa Japan saw great progress, almost as much as Britain during the same period, albeit much less technological progress in industry than Britain. Urbanization depended most on water transport, i.e. the ability to transport grain to remote regions (the same thing that made ancient Rome a very large city, in her case grain imports across the Mediterranean from Sicily and Egypt). Without good water transport, labor beyond that needed for agriculture had to stay in the rural areas near where the grain was grown. Japan, Britain, and the Netherlands had good water transport, with many farmers near navigable water thanks to geography and to engineering that extended the navigability of rivers and, especially in the Netherlands, built canals, which served a triple role of defense barrier, water control for reclamation of formerly submerged areas, and transport. Urbanization and industrialization also required transport for fuel (wood and coal). Forests near navigable water were soon denuded, and forests far from navigable water were generally useless; the Brits could use far more forest area per capita because their draft animals could transport more timber and fuel farther. Both the Brits and Chinese had ample and easy-to-mine coal, but only in Britain were there mines within ox- or horse-transport distance of navigable water combined with a plentiful supply of these draft animals, which explains why Britain was possibly mining more coal than the rest of the world combined by the 17th century. Horse-gin-powered pumps also drained the British coal mines before the dawn of the Savery and Newcomen steam engines.

China had for many centuries also had good water transport due to the Grand Canal and its tributaries, but perhaps because of its artificial and inland nature it was far more bureaucratically controlled and subject to excess taxation and other institutional problems than the coastal transport in Japan and Britain. Japan, with its long and thin coastline, probably had the greatest amount of farmland next to coastline controlled by peoples speaking a single language and thus able to trade food with low transaction costs. But Britain's advantage in draft animals more than made up for having a fatter island, as its horses probably made four or more times as much farmland accessible to navigable water than the mostly human-powered Japanese transport, and its horses also increased the efficiency of river navigations and (eventually) canals. After the railroad, and even more so the IC engine, reached Japan it quickly caught up to and leapfrogged ahead of Britain (something Clark's theory can't explain).

Sunday, October 24, 2010

Malthus and capital

Why did agricultural civilization remain mired in the Malthusian trap for over 5,000 years? And how was it possible to eventually escape from it? Recall the Malthusian isoclines and how various kinds of societies can be situated along them (click to enlarge all graphs):



Plagues move the economy "northwest" along the isoclines, as more marginal lands are abandoned, leaving the fewer remaining people to work and share the more productive lands. Births beyond replacement by contrast move the economy "southeast" towards higher population, the use of more marginal lands, and thus a lower standard of living. Here, for example, is a graph using actual statistics for English real farm labor wage income from 1260 to 1849. Even though England during this period was slowly escaping from the Malthusian trap -- note that each 80 years has advanced farther "northeast" than the previous 80 -- it still followed the basic Malthusian pattern of births and deaths. Observe how the real wage greatly increased after the Black Plague in the mid-14th century, then slowly declined thereafter:



Much less well appreciated than the effects of births and plagues with respect to the Malthusian isocline are creation and destruction of productive capital. Every act of plowing, sowing, weeding, and so on was a seasonal capital investment, and the resulting harvest (and thus the short-term isocline) depended on the qualities and quantities of these short-term investments, as well as on vagaries of pests, weather, etc. Longer-term capital investment could include conditioning, fertilizing, and draining soil, buying livestock, breeding crops and livestock, watering meadows, and so on. Long term progress towards the "northeast" depended on long-term accumulation of capital. It was exceedingly rare to maintain such progress over long periods of time, and the British capital accumulation over such a long period, leading to the breakout from the Malthusian trap, was unprecedented.
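
A toy version of these dynamics can make the picture concrete. The sketch below is purely illustrative (all parameters are made up, and it is not fitted to the English data): income per head falls as population rises on fixed land, population grows above subsistence and shrinks below it, and a one-time plague shows the "northwest, then drift back southeast" pattern.

```python
# Toy Malthusian dynamics, purely illustrative (all parameters made up):
# income per head falls as population rises (diminishing returns on fixed land),
# population grows above subsistence and shrinks below it, and a one-time plague
# temporarily raises incomes before population growth erodes the gain again.

A = 100.0          # productivity (capital accumulation would raise this over time)
ALPHA = 0.5        # strength of diminishing returns
SUBSISTENCE = 10.0
FERTILITY = 0.02   # population response to income above/below subsistence

population = 50.0
for year in range(300):
    income = A / (population ** ALPHA)                      # income per head
    population *= 1 + FERTILITY * (income - SUBSISTENCE) / SUBSISTENCE
    if year == 150:                                         # plague kills a third of the population
        population *= 2 / 3
    if year % 50 == 0 or year in (151, 152):
        print(f"year {year:3d}: population {population:6.1f}, income/head {income:5.2f}")
```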

Good harvests caused progress that was temporary unless the food was stored and long-term capital investment was substituted for investment in next year’s harvest as well as other pursuits such as luxury and military buildup. Productive innovation, whether institutional or technological, also led to moving the isoclines “northeast”, as they made capital more secure or productive.



Poor harvests (from pests, poor weather, etc.) caused a setback that was temporary as long as it didn’t lead to the destruction of capital. If it resulted in starvation, the deaths boosted the economy up the isocline, so that the standard of living of the remaining population in subsequent years of better harvests was higher than with prior better harvests at higher populations.

Destruction of productive capital was for most of agricultural history as common as creation of capital. Causes included high rents and taxes that forced a choice between going hungry and consuming capital. War (quartering and foraging of troops, destruction of enemy crops and livestock, etc.) was a frequent cause of capital destruction. Some kinds of capital, e.g. livestock and the fertility of the soil, could be destroyed simply by being neglected.

Mancur Olson distinguished between societies of "roving bandits", where nomadic rulers stole the surpluses of foragers or farmers wherever they went, and "stationary bandits", who controlled a specific area and simply taxed that area. Rational stationary bandits taxed only to the Laffer maximum, because any further taxation actually reduced their revenues. Indeed, because over-taxation resulted in the destruction of capital, a secure rational stationary bandit reduced taxes below the short-term Laffer maximum to prevent lower tax revenues in future years. Roving bandits, on the other hand, stole nearly all, resulting in destruction of nearly all capital, because anything insecure that one roving bandit didn't steal was stolen by another.
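
A toy numerical version of Olson's comparison, with made-up parameters: this year's harvest is already grown, so taxing it heavily doesn't reduce it, but heavy taxation destroys the capital behind future harvests. A bandit with no future in the territory therefore grabs nearly everything, while a secure stationary bandit prefers a moderate rate.

```python
# Toy version of Olson's stationary vs. roving bandit (all parameters made up).
# This year's harvest is already grown, so taxing it heavily does not reduce it;
# but heavy taxation destroys the capital (livestock, seed, soil) behind future harvests.

def revenue(rate, years, discount=0.95):
    """Present value of takings for a bandit who controls the territory for `years` years."""
    total, capital = 0.0, 1.0
    for year in range(years):
        total += (discount ** year) * rate * 100.0 * capital
        capital *= (1.0 - rate) ** 0.5     # confiscatory rates eat next year's productive capital
    return total

rates = [i / 100 for i in range(1, 100)]
stationary = max(rates, key=lambda r: revenue(r, years=30))  # secure, long horizon
roving = max(rates, key=lambda r: revenue(r, years=1))       # expects to move on (or be conquered)

print(f"secure stationary bandit's preferred tax rate: {stationary:.2f}")  # moderate
print(f"roving or insecure bandit's preferred tax rate: {roving:.2f}")     # nearly everything
```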

Stationary bandits did not always confine themselves to taxation that resulted in no destruction of capital. Uncertainty over future power could cause a leader to get greedy and tax at capital-destroying levels while they were still in power. Threats of assassination, coup, or conquest could move stationary bandits closer to roving bandits, since the bandits lost their future revenues if they lost power or territory: in such cases they rationally taxed far higher than the Laffer maximum, usually destroying much capital in the process.



As a result, we can characterize societies and locate their isoclines based on their mode of banditry. This often gets confused with the mobility of production, and the two usually coincided, but they could be and often were distinct. Thus most pastoral societies, based on moving livestock from pasture to pasture, also featured roving banditry. And societies based on fixed arable agriculture were generally controlled by stationary bandits. Early modern Britain, however, was a semi-pastoral society with stationary bandits. And Dark Ages Europe featured roving bandits from pastoral societies frequently conquering arable societies, and being conquered in turn, resulting in a move to a lower-capital society with a mix of roving and stationary banditry.

The Problem of Edible Capital


Of all the ways in which capital can be destroyed, the hardest to avoid, in a hard year, was eating it. Eating your milk cow or your draft animal was like eating your seed corn: very unwise but very likely if your alternative was imminent starvation.

The temptation to eat your capital created vicious cycles of capital destruction. Capital destruction lowered labor productivity, which meant that people produced fewer calories per calorie consumed. This moved the Malthusian isoclines "southwest", which meant even more people starved during the next equally bad year. War and excessive taxation could trigger or extend the vicious cycle by killing livestock, poisoning farmland, etc., and by rendering future returns insecure, making further destruction of capital more probable. The vicious cycle of capital consumption during times of famine may be the main factor that kept ancient agricultural civilizations mired in the Malthusian trap.

Who owned the capital mattered. Edible capital was much more likely to survive (and in the short term the starving people less likely to) if the capital was owned by people who were not themselves starving. Thus, societies living under the feudal hierarchy of long-term tenancy, where livestock was often owned by the local lord rather than a peasant, may have maintained themselves farther above subsistence levels than societies where peasants completely controlled their own livestock.

Culture was filled with warnings against "eating your seed corn." Thus, as one example of many, Aesop's story of "The Goose That Laid the Golden Egg." It was also filled with warnings about the importance of saving up for bad times, e.g. "The Ant and the Grasshopper."

Conversely, capital creation that increased labor productivity increased the calories produced per calories consumed, moving the Malthusian isocline up and right. With storage of food it also freed labor for further capital creation, which in future equally good years in turn freed further labor for ancillary or non-agricultural capital investment (transportation, manufacturing, financial services, etc.). However, for nearly all of agricultural history the vast majority of this surplus went to population growth, military expenditure, and luxury display rather than capital investment.

Thus, until the British breakout, agricultural societies remained in the Malthusian trap. Prior agricultural societies lacked an institutional ratchet that could incentivize capital creation in good harvests, but prevent too much capital destruction in bad harvests. And they generally lacked low-cost protection from foreign wars, so that stationary bandits often started to act more like roving bandits when faced with threats of conquest. To escape the trap, capital creation must exceed capital destruction to such an extent that farm labor productivity grows faster than population. How Britain did this I hope to explore in future posts.

Sunday, October 10, 2010

Elements, evolution, and the nitrogen crisis

The oxygen crisis in the history of life is well known. When photosynthesis arose, cyanobacteria and later plants started dumping large amounts of oxygen into earth’s atmosphere. At first this oxygen, dissolved in the oceans, combined with metals in the oceans and “rusted out.” Eventually, however, the free metals in the oceans were largely depleted and oxygen levels increased in the atmosphere. At first this proved very poisonous, but eventually life not only adapted but took advantage of the oxygen, with some organisms evolving new high-energy respiration pathways that reacted oxygen with carbohydrates from eaten plants. Respiration fueled the Cambrian explosion of sophisticated lifeforms which in turn led to us.

Much less well known, but of similar importance, was the much earlier nitrogen crisis. This was not an overabundance of nitrogen, but the depletion of nitrogen in the readily usable forms that early life had evolved to consume. One might think that life would evolve to reflect at least roughly the same distribution of elements as are available in its environment. Let's see if this is true relative to abundance in our planet's present oceans:

Elemental abundance in bacteria vs. in seawater (ref):


This is misleading for the metals before the oxygen crisis (i.e. for most of the history of life), when they were far more abundant in the oceans than the present levels shown. But for elements that did not "rust out" of oxygenated seawater, such as oxygen, hydrogen, carbon, nitrogen, phosphorus, and potassium, the above graph is illuminating.

There is a great deal of correlation here to be sure, but there are also outliers, elements that life must concentrate by several orders of magnitude: particularly carbon, nitrogen, and phosphorus, and to a lesser extent potassium. A reasonable guess is that this reflects contingency: life originated in a certain unusual environment, an environment disproportionately rich in certain chemicals, and its core functions cannot evolve to be based on any other molecules. Every known living thing requires, in its core functions, nucleic acids (which make up RNA and DNA), amino acids (which make up proteins, including the crucial proteins that catalyze chemical reactions called enzymes), and the "energy currency" through which all metabolisms consume and produce energy, the adenosine phosphates. Let's briefly scan some core biological molecules to see how elements are distributed in them:



Adenosine phosphates:


Nucleic acid:



Amino acids:



Lots of hydrogen and oxygen in these molecules, to be sure, but those are the elements in water. So short of a drought or desert, organisms generally have plenty of readily accessible hydrogen and oxygen. Carbon, nitrogen, phosphorus: those are the elements most used by the core molecules of life out of proportion to their existence in the environment.

Carbon, as carbon dioxide, is abundant in the atmosphere (and earlier in earth’s history was far more abundant still). Through the process of photosynthesis, the two double bonds in carbon dioxide can be readily cleaved in order to form other bonds with the carbon in biological molecules. Indeed, instead of storing energy directly as ATP, life can and does take advantage of the relative accessibility of carbon, hydrogen, and oxygen to store energy as carbohydrates and fats, and then through respiration convert them to ATP only when needed.

Nitrogen is also abundant in earth's atmosphere, but in the form of dinitrogen: two nitrogens superglued together with an ultra-strong triple bond. To form nucleic acids, amino acids, and ATP, something must crack apart the nitrogen. Phosphorus, to the extent it is available in the natural environment, comes in the readily incorporated form of phosphates. The trouble is, phosphorus in any form is just plain uncommon. Nevertheless, all life still relies on it at the center of the genetic code (DNA, RNA) and every metabolism (ATP).

Generally speaking, the result of the chemical contingencies of known life -- which for its core functions uses molecules rich in hard-to-obtain nitrogen and phosphorus -- is that in known natural environments ecosystems are either nitrogen-limited or phosphorus-limited. In other words, the biomass of the ecosystem is usually limited by the amount of nitrogen or phosphorus available. Liebig's principle states that in any given environment, there is generally one nutrient that limits the growth of an organism or ecosystem. In earth environments that nutrient is usually nitrogen (as ammonia or nitrate) or phosphorus (as phosphate).

The eukaryotes (basically, complicated multi-celled life, including all plants, animals, and fungi) seem unable to evolve metabolisms beyond a fairly narrow range. Instead it’s the simpler prokaryotes -- archaea and bacteria -- that have a far wider range of energy chemistry: a dizzying variety of chemosynthetic and photosynthetic metabolisms and ecosystems.

For certain crucial chemicals, the eukaryotes rely on the archaea and bacteria in their ecosystems. Exhibit A is nitrogen fixation. Life doubtless originated in an environment rich in ammonia and/or nitrates, molecules with only single nitrogens and thus no need to split the superglued dinitrogen bond. But these early organisms would soon have depleted the nitrates and ammonia in their local environment to very low levels. Call it the nitrogen crisis.

Dinitrogen, N2, is the most abundant molecule in our atmosphere. But few things are powerful or precise enough to crack dinitrogen. Lightning can do it, converting dinitrogen and dioxygen in the earth’s atmosphere into nitrates. Lightning thus can, albeit very slowly, put usable nitrates back into sea and soil where they have been depleted by life. The trouble is that (a) the resulting equilibrium level is far below the concentrations of nitrogen in organisms, and far below the levels needed for optimum growth, and (b) the process requires an atmosphere rich in oxygen, which the earth did not possess until less than a billion years ago. (Alternatively, lightning might have made significant nitrates by reacting carbon dioxide with nitrogen, a possibility explored here. However, early life probably evolved in water so hot that it destroyed these nitrates.)

Prokaryotes came to the rescue -- probably very early in the history of life, when local nitrates and ammonia had been exhausted -- by evolving perhaps the most important enzyme in biology, nitrogenase, “the nitrogen-splitting anvil.” Nitrogenase’s metal-sulfur core makes it a catalyst precise enough to crack the triple bond of dinitrogen.

Nitrogenase:
The general reaction fixing dinitrogen to ammonia, whether with nitrogenase or artificially, is as follows:

N2 + 6 H + energy → 2 NH3

The dinitrogen is split and combined with hydrogen to form ammonia. Ammonia can then be readily used as an ingredient that ends up, via the sophisticated metabolism that exists in all life, as amino acids, nucleic acids, and adenosine phosphates. When nitrogenase fixes nitrogen it consumes a prodigious amount of energy in the form of ATP. In particular, for each atom of nitrogen fixed it consumes the energy of 8 phosphate bonds:

N2 + 8 H+ + 8 e− + 16 ATP + 16 H2O → 2 NH3 + H2 + 16 ADP + 16 Pi
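
To put a rough number on “prodigious,” here is a small back-of-the-envelope sketch. The 16-ATP stoichiometry comes from the reaction above; the ~30.5 kJ/mol figure is the standard textbook free energy of ATP hydrolysis (the in-vivo value is higher), so treat the result as an order-of-magnitude estimate.

# Stoichiometry from the reaction above: 16 ATP hydrolyzed per N2 reduced.
ATP_PER_N2 = 16
NH3_PER_N2 = 2
ATP_HYDROLYSIS_KJ_PER_MOL = 30.5  # standard textbook value; higher in vivo

atp_per_nh3 = ATP_PER_N2 / NH3_PER_N2
energy_per_mol_nh3 = atp_per_nh3 * ATP_HYDROLYSIS_KJ_PER_MOL

print(f"ATP spent per NH3: {atp_per_nh3:.0f}")
print(f"Energy cost: roughly {energy_per_mol_nh3:.0f} kJ per mole of ammonia fixed")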

Nitrogenase is extremely similar in all organisms known to contain it. It thus probably evolved only once. Given its crucial function of supplying a limiting nutrient, it proved so useful, despite its high energy cost, that it spread to many phyla of archaea and bacteria. Either it evolved very early in the history of life (before the “LCA”, the Last Common Ancestor of all known life) or it spread through horizontal gene transfer:

Alternative origins and evolution of nitrogenase (ref):

The archaea and bacteria that contain nitrogenase, and can thus fix nitrogen, are called diazotrophs. One of the earliest diazotrophs may have been a critter that, like this one, lived in high-pressure hot water at an undersea vent. In today’s ocean, the most common diazotroph is the phytoplankton Trichodesmium.

Colonies of Trichodesmium:
The biomass of the earth's oceans is probably limited by the population of such diazotrophs. Supplying the iron they use to make nitrogenase would increase the amount of nitrogen fixation and thus the biomass in the oceans. A larger ocean ecosystem would draw more carbon dioxide out of the atmosphere, and so is of great interest. This process in the ocean seems to have its limits, however: too much ocean biomass in a particular area can, when it decomposes, deplete oxygen from the ocean, suffocating animals. Oxygen replacement from the atmosphere appears to be too slow to prevent this effect when nitrogen concentrations are high enough, but nitrogen concentrations in almost all ocean areas are far lower than this and would remain lower even while drawing out substantial amounts of carbon dioxide. (Here is a nice Flash animation of the nitrogen cycle in the oceans.)

On land, certain plants, especially legumes, are symbiotic with certain diazotrophs. The bugs grow in root nodules, in which the legume supplies them with large amounts of sugar to power the energy-greedy nitrogenase. In turn, the diazotrophs supply their legume hosts with fixed nitrogen, allowing the legumes to generate more protein more quickly than other plants, but at the expense of the extra photosynthesis needed to feed the energy-hungry bugs.

Friday, October 08, 2010

Petrus Sabbatius comes to power

The Roman Empire was a military dictatorship. Its emperors came and went in a relentless spree of assassinations and civil wars (example) that lasted for nearly 1500 years. One and one-half millennia of violent government extended across history, from the victories of Octavian (a.k.a. Caesar Augustus) over his rivals in the decades before Christ to the fall of Constantinople to the Turks in 1453. Despite the violence, or perhaps because of it, Roman elites accumulated vast surpluses and left spectacular monuments unmatched until much later in European history.

By the opening of the 6th century the city of Rome itself was no longer a part of the Empire. Instead Italy was ruled by the Goths, and the capital city of the remaining empire, “Romania”, was Constantinople. This city (in modern times called Istanbul) controlled the strategic straits linking the Black Sea and the Mediterranean.

No topics dominated the culture of Constantinople so much as (1) the horse races, and (2) the debate over the relative contributions of the divine and the human to the nature of Christ.

The debate over the nature of Christ divided Christians into numerous sects: Orthodox Catholics, Monophysites, Arians, Manicheans, Nestorians, and many others. The Orthodox Catholics believed that Christ was both God and man, Monophysites divine only, Arians human only, and there were a dizzying number of variations on and nuances to these dogmas. Theology was the hottest topic of debate and the biggest motivation for political division and persecution in Constantinople. The city was dominated by Orthodox Catholics and Monophysites, while Arianism, held by the Goths and Vandals who had taken over the Western part of the Empire, was considered a heresy beyond the pale. Other positions, such as Manicheism, were sometimes tolerated and sometimes not.

With the coming to power of Christianity the brutal gladiatorial fights had been suppressed and horse racing was now the dominant spectator sport. The Hippodrome in Constantinople was the main place of public gathering. Spectators shouted political opinions at the emperor, who in turn used the crowd to gauge public opinion. Indeed, for the normal citizen, this was the only form of political participation.

The racing teams and their colors – Red, White, Blue, Green – dated far back to the early Empire. By the 6th century, the two dominant teams were the Blues and the Greens. The political nature of the Hippodrome had converted their fans into political factions. The Blues tended to be government types, land owners, and Orthodox Catholics (or, during the frequent schisms with Rome, Chalcedonians). Greens tended to be merchants and Monophysites.

During the reign of Anastasius, in a village in Illyria (probably in modern Macedonia, just north of modern Greece) where the natives still spoke a passable Latin, lived a young peasant bachelor. Instead of taking up farming he left the village and came to Constantinople to join the army. Dropping his humble family name and styling himself “Justin” -- “just man” -- he fought in several wars and rose through the ranks of the palace guards. Eventually he was promoted to Count (head) of the Excubitors, one of the two palace guard groups.

Justin then adopted his nephew, one Petrus Sabbatius, and brought him to Constantinople. Sabbatius too dropped his humble name and, aspiring to the achievements of his uncle and benefactor, restyled himself “Justinian”.

Justin’s master, the emperor Anastasius, was a Green and a Monophysite. Justin, and to an even greater degree his nephew, were Orthodox Catholics (or, during the schism of the time, Chalcedonians) who supported the Blue faction.

Anastasius failed to make formal provisions for the succession. His death in 518 threw Constantinople into confusion, as none of his three nephews had strong support. The Manichean eunuch Amantius, Chamberlain to Anastasius, hoped to be the power behind the throne of his chosen puppet, an obscure character named Theocritus. The palace guards had traditionally dominated the succession in Rome, so Amantius needed the support of at least one of the two palace guard groups, the Excubitors and the Scholarians.

Justin, head of the Excubitors, secretly promised to support Theocritus and took money from Amantius to bribe influential fence-sitters. But instead of carrying out this secret plot, Justin lobbied and bullied the Blues, their Senate allies (most Senators were Blue), and his own soldiers. Finally winning the acclamation of most of the Blues in the Hippodrome, and the fearful acquiescence of the Greens, Justin assumed the purple robes of emperor.

Roman imperial successions had always been highly irregular, but the ideal of authority that other political players would most accept is suggested by Justin’s letter, upon assuming power, to the Pope in Rome: “We have been elected to the Empire by the favor of the indivisible Trinity, by the choice of the highest ministers of the sacred Palace, and of the Senate, and finally by the election of the army.”

To cover his tracks, Justin had Amantius and Theocritus executed, under the pretext that Amantius (a heretic Manichean, but tolerated under Anastasius) had insulted the Orthodox Patriarch of Constantinople. He named his nephew Count of the Domestics. Justinian was a, or perhaps the, power behind the throne. Falling in love with a repentant prostitute, Theodora, he had Justin’s quaestor, Proclus, cleverly draft a law that allowed him to marry a former prostitute while still forbidding such a degrading marriage to other Senators.

Early in 527 AD, Justin fell sick and named Justinian Augustus (co-ruler) and successor. A few months later, Justin died and Justinian at age 45 became emperor.

The former Petrus Sabbatius was to lavish his adoptive name and the empire's treasure on cities new and old, grand buildings, and wars of reconquest. Most importantly for our purposes, Justinian would plaster his just-sounding name on a recompilation of Roman law that has profoundly shaped the West down to our own time.

Coming: Tribonian, John of Cappadocia, revolt, massacre, prostration, and the birth of a bloody code.

References

Procopius, Anecdota (Secret History)
Procopius, History of the Wars
J.B. Bury, History of the Later Roman Empire
Gibbon, The Decline and Fall of the Roman Empire

Sunday, October 03, 2010

Signals, gifts, and politics

(I recently rediscovered this old post of mine and thought it deserved re-posting).

Paraphrasing Robin Hanson from a recent podcast: "In gifts, it's common signals of quality that matter, not private signals of quality."

Robin Hanson has a great theory for why neoclassical economics so often fails to explain human relationships and institutions, especially personal relationships. Why, he asks for example, do guests bring wine to dinner at an acquaintance's home instead of paying cash, like they would at a restaurant? Traditional economics cannot explain such basic things.

Instead Robin posits, building on the work of previous economists and evolutionary psychologists, that signaling dominates most of our relationships and many of our institutions. In other words, much of our behavior is used to signal -- to prove by our behavior to our fellows -- our intelligence, empathy, status, and so on. In the hunter-gatherer environments in which our genes evolved, such relationships were far more important to our genetic success than any other aspects of our environment. Thus our behaviors are dominated by the signals that would have most advantageously (for our genes) developed our relationships in that environment.

The general theory is sound -- I've held a version of it for quite a long time -- but many of the conclusions he draws from this theory, such as the above quote about gifts, are quite questionable. The thoughtful gift, namely the gift that is targeted towards the recipient's unique preferences, is widely welcomed as the best kind of gift. "It's the thought that counts" may be a cliche and an exaggeration, but it nevertheless carries substantial truth. The thoughtful gift signals our intelligence, our empathy, and the fact that those skills are being used in favor of the gift recipient.

This (and a second theory described below) explains far better than Robin does why cash makes such a bad gift. A gift or exchange like bringing wine to dinner provides the opportunity to signal that one has remembered the dinner menu, and often also signals that one knows the hosts' wine preferences. Cash by sharp contrast is the most thoughtless gift. Cash is suitable only for contractual dealings with strangers; it is worse than useless for developing relationships.

Gift cards exhibit a modicum more empathy than cash (you have to know your pal likes Starbucks), but prior generations who put more effort into relationships considered gift certificates to be rather rude as a personal gift: they were only considered suitable as, for example, a substitute for a cash wage bonus. Today, like "friends" links on Facebook, gift cards signal a modicum of passing fancy which substitutes for the many closer relationships and more thoughtful gifts that most of our forebears enjoyed.

A second reason that cash makes such a poor gift is that it provides a very poor emotional and sensory experience. Most signals, as at least indirect products of evolution, are targeted at our emotions far more than they are targeted at the intellect. A good wine, for example, will be experienced far more fondly and thus remembered far longer than a dirty dollar bill. The most common signals also tend to signal emotional states or skills (e.g. empathy) far more than intellectual ones.

Per Friedrich Hayek, this emotional infrastructure breaks down when we are dealing with strangers -- in those cases contractual relationships and "filthy lucre" are far more efficient and effective ways of relating. But the cold natures of these transactions, i.e. the fact that these relationships are divorced from the emotional signals evolution has wired us to expect, explains much of the political resistance to markets with their "filthy lucre", "greed", etc. Merchants, property, contracts, and so on are crucial to our modern economy, but they send the wrong emotional signals to our hunter-gatherer brains.

Most politics, and in particular the pathologies of politics, are themselves about instinctive signaling -- for example signaling tribal loyalty on the right, or signaling altruistic natures on the left. Most political ideologies freely and fraudulently ignore the crucial distinction between friend and stranger: in the world of political signaling we are supposed to care as much about the vast anonymous "poor" as we do about our own children, whom we well know to be helpless, and we are supposed to be loyal to a vast country of hundreds of millions of strangers (including more than a few very strange strangers) as if they were all familiar kin. In both cases, these are largely fake signals that don't cost the fraudulent signaler very much: the right-winger does not actually have to be patriotic, and the left-winger does not actually have to be altruistic, and in both cases they usually are not. Few of the children of hawk Congressmen served in the Iraq War, and Barack Obama has given only a minuscule portion of his income to charity. But they are very good at making the politically correct noises that most humans emotionally expect to hear. Thus left-wingers can get great social mileage from calling right-wingers "greedy", meaning that right-wingers are failing to send enough altruistic signals, and right-wingers can get great social mileage from calling left-wingers "unpatriotic." People who, due to real altruism, care more about the actual consequences of political policies than about sending the proper social signals to their peers, usually end up being called both "greedy" and "unpatriotic" in the bargain.