Monday, March 07, 2016

The trouble with books

The Chinese invented printing, but their writing system required a large number of typefaces, which made for very high up-front capital costs to print even a single short book.  Centuries after the slow dawn of Chinese printing, Gutenberg in Germany, taking advantage of a concise phonetic alphabet requiring only a small number of typefaces, invented a printing method that required much less up-front capital than Chinese printing.  The Internet has even more radically lowered the up-front capital costs of publishing than did the Gutenberg revolution.

Chinese printed works were vast but rare. European books were smaller but still too long. Internet works are the actual length a reader needs; they are (or soon will be) available practically everywhere; and often readers can interact frequently with the author.

Most readers don't want to spend most of their time reading verbose works by a single author when a greater variety of more relevant and thoughtfully concise works is available from a much larger pool of thinkers. Prior to the Internet they had much less choice: books were just the way educated people learned and taught.  (And many people still believe that reading and writing books is the sine qua non of being educated, just as many Europeans in 1500 still lauded the superiority of scribal methods and scholastic thought.)

Magazines and newspapers involve smaller form factors, but they still draw from a very small pool of authors.  These authors can only write in detail about a wider variety of subjects by pretending to know things that they don't: they take human institutions far more complicated than a single human can possibly comprehend and boil them down to a series of hypersimplified theories, what in less authoritative contexts we'd call ideologies or conspiracy theories.

Instead of being forced to read a vast number of words each from a small number and variety of authors, already widely read by many other people (making your reading of them often quite intellectually redundant), on the Internet you can read much less per-author text (and thus, potentially at least, far more thought out per word) from a much greater number and variety of authors.

The Internet also can be more interactive with more select groups than the old face-to-face + snail-mail + books regime -- providing much more opportunity for Socratic dialog, glossing, and other intellectual processes that were too often neglected after Gutenberg.  And while the Internet can produce far higher amounts of garbage, mixing up thoughtlessly popular haystacks with thoughtfully rare needles, search engines and links often make wading through these vasty spaces much easier.  The Internet allows you to meet people who share your specialized interests and dialog with them, making possible specific interactions that rarely happened in the old regime.  However, without actually reading the content, i.e. while initially searching for it, it is hard to distinguish thoughtless (even though textual) content from the thoughtful content -- a big reason why, at least for the moment, book-literacy retains its aura of intellectual superiority over Internet literacy: scholarly publishers, with their monetary incentives, often take the time to select the most thoughtful works for our consideration.  Nevertheless, they lack the knowledge needed to select the most relevant works to match the wide variety of interests and knowledge of their readers, or to judge well among works outside their specialties.

Much as more efficient and speedier transportation networks enabled labor and natural resources to be brought together in a much greater variety of ways, so does the Internet, by providing more direct and speedy connections between minds, enable a far greater division of knowledge than was possible in the face-to-face + snail-mail + books regime. However, in contrast to the economy of things, that division of knowledge is largely (so far at least, and still mostly for the foreseeable future) unmonetized: the information economy is a vastly different beast than the economy of things.

That said, there is a good book(!) that covers much of this (along with, of course, a bunch of introductory material redundant for most readers, as well as the typical trivial or thoughtless text added to pad it out to book size):  Smarter Than You Think by Clive Thompson.

tl;dr if you thought this blog post was too long, why would you ever pick up a book?

Tuesday, February 16, 2016

Two Malthusian scares


Carter lectures the U.S. on energy, 1978
In 1798, the Reverend Thomas Malthus wrote his influential essay on population, arguing that population grows exponentially while the supply of food, energy, and other commodities only grows linearly.  As a result, the vast majority of humankind is doomed to be mired in poverty unless some even grimmer reapers than starvation (war, disease, etc.) are brought to bear, or births are moderated.  In 1978 U.S. president Jimmy Carter, reflecting a popular intellectual Malthusian sentiment of the time, sat by his fireplace in a comfy sweater and instructed Americans to turn down the thermostat lest we run out of oil.  (Here's a similar speech he gave a year earlier).

Malthus' description of a general pattern of human history (and indeed of the history of all living things, an observation that inspired Charles Darwin) was by and large accurate.  But since the time of Malthus, writing during the early industrial revolution, developed and even most developing economies have managed to trot or even race ahead of Malthus: per capita income has increased tremendously far beyond the near-starvation limits set by Malthusian theory.  Industrial productivity has pulled vastly increased amounts of commodities from the earth, using them to produce an unprecedented abundance of goods. Meanwhile population growth has radically declined until today many developing economies have below-replacement birthrates. Nevertheless, Malthus' observations and reasoning periodically stage an intellectual and popular comeback: industrial civilization can only cheat Malthus so long, thought leaders warn us; if we do not mend our unsustainable ways, and convert from gluttony to stringent conservation, Malthus' grim formula will soon return to wreak an awful revenge.

What chemical inputs does life depend on most?  Hydrogen and oxygen from water are plentiful. Plants obtain copious carbon by breathing in carbon dioxide (and animals by eating the carbohydrates, fats, and proteins in plants or other animals). Nitrogen-fixing bacteria (and the legume plants symbiotic with them) obtain plenty of single-N nitrogen by splitting the plentiful but strongly bonded N2 in our atmosphere.  Artificial means of nitrogen fixing depend primarily on natural gas or oil prices. Metals are plentiful in soil. The only significant remaining scarce ingredients for crops are potassium (K) and phosphorus (P) -- and of these, phosphorus, which must be available in the form of phosphate, is on land the most needed and most lacking of all.  No technology can substitute anything else for phosphate: it has to be phosphate in order to form DNA, the essential cell energy molecule ATP, and crucial parts of our bones and teeth. Phosphate is thus the most geopolitically important agricultural input and exhibit A in Malthusian warnings about limited resources and unsustainable practices.  This also makes phosphorus a favorite target of stockpilers and speculators during a Malthusian scare.

The most important commodity for mining, manufacturing, and especially transportation, for almost all of the 20th century and through the present time, is petroleum oil. Internal combustion and other engines powered by fuels refined from petroleum have, since the early decades of the 20th century, increasingly dominated the transport of goods and people by air, sea, and land.  For much of the late 19th and 20th centuries heating was a major use of oil; in that use it is being eclipsed by its cousin carbon fuel, natural gas. Oil is also the world's most important feedstock for the production of plastics, synthetic fibers, and a wide variety of other chemicals. While oil makes up a much larger share of trade by value than phosphates, oil in the long run is potentially far more vulnerable to substitution innovation. And as we shall see, it has also been at least somewhat more amenable to technological improvements in supply.

Between the mid-1800s, by which time the industrial revolution had transformed much of Europe, and the end of the Bretton Woods era, sentiments about the unsustainability of industrial civilization were usually on the intellectual fringe.  But since the late 1960s, cries of doom and sophisticated warnings that we must redesign our economy, our technology, and civilization itself to "sustainable" lifestyles have become mainstream.  Malthusian allegations that we face diminishing, or soon-to-be diminishing, supplies of raw materials such as oil, natural gas, metal ores and fertilizers,  have become "common knowledge."

This component of green ideology has been fueled by two Malthusian scares: two substantial periods, the first between about 1968-1980, the second c. 2004-14, during which nominal commodity prices (i.e. the prices you see posted) increased dramatically, and "real" (adjusted for some measure of inflation, such as the consumer price index) commodity prices increased substantially.  These price rises led to prophecies of a coming great diminution in our abilities to feed, clothe, and transport ourselves, much less to enjoy all the other abundant goods we have become accustomed to in the developed world as a result of the industrial revolutions that have occurred during and since the time of Malthus.  We faced a miserable future unless we changed our ways.  Unless we stopped having babies and stringently conserved and recycled our resources (the general green movement), or invested in solar power and electric cars (the Silicon Valley green movement), our future was deemed to be doom.  This remains a predominant ideology in Western culture today.
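
To make the nominal/real distinction concrete, here is a minimal sketch (in Python, with made-up numbers rather than historical data) of deflating a posted price by a price index such as the CPI:

    # "Real" vs. nominal: deflate a past posted price by the ratio of
    # price-index levels. The numbers are illustrative, not historical.

    def real_price(nominal_price, cpi_then, cpi_now):
        """Express a past nominal price in today's dollars."""
        return nominal_price * (cpi_now / cpi_then)

    # Hypothetical: $3.50/barrel oil when the CPI stood at 40,
    # restated at a CPI of 240.
    print(real_price(3.50, cpi_then=40, cpi_now=240))  # 21.0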

A big part of those scares and of the green movement has been environmental limits -- that we are damaging our environments, our planet, the one and only planet we can ever hope to naturally inhabit in the foreseeable future, far too much -- adding too much sulfur dioxide, ozone, or carbon dioxide to the atmosphere, too many fertilizers to our waterways, etc.  This essay does not address those concerns, which provide a far stronger argument for the sustainability movement than the topic of this essay -- the supply of commodities, especially of raw minerals from the earth such as oil, metal ores, and fertilizers, and the theories that have become popular since the late 1960s that historical rates of industrial growth cannot be sustained in the face of expected future diminution of those supplies.  Indeed, if you are already scared by our ability to pollute our planet, after you finish reading this essay you should be even more scared. Our planet places no material limits on our ability to consume and pollute it. If we are to have such limits we must put them in place, politically, ourselves.  (As for me, I'm scared both by the prospects for greater pollution and by the fact that political solutions and hoped-for advances in technology, not natural limits, will be the only ways to address the threats of pollution. The greatest source of domestic disputes is fighting over the thermostat; I expect this trend will carry over to international climate disputes of the future...)

For many of us in the developed world, a future of higher prices for energy and other commodities does not seem like such a big deal as it once did: we already have more physical goods than we know what to do with, and cutting back to achieve more peace of mind now seems to take, and be worth, more effort than accumulation. While the better-off fractions of the developed world have by and large reached a level of satisfaction in their consumption of the abundance of goods made possible by the industrial revolutions, the poorer fractions and the developing world have not.

Nevertheless, during the Malthusian scares commodities did seem to grow much more expensive and scarcer -- and not just a few commodities but commodities in general.  Indeed, the prices of a broad range of industrial commodities went substantially higher, often much faster than the general inflation rate -- a sure sign, according to traditional industrial economics, that industrial supply was being outstripped by consumer demand -- that we were approaching, often rapidly, Malthusian limits rather than moving further away from them, as had been the general trend since Malthus.  During most of the years of the first and second scares raw material and food prices skyrocketed, practically across the board. Why was this happening, if not because the predictions of the Malthusians were starting to come true?  Let's take a look at the scares, and the economic histories surrounding them, to find out.

Prelude to the First Malthusian Scare

From ancient civilization to the late 1960s, civilization's money was generally defined by, and either consisted of or was convertible to, standard weights of precious metals. Even thousands of years before the invention of coinage, most fines in the earliest recorded code of laws were defined and paid in weights of silver.  While the role of precious metals in monetary affairs declined throughout the 20th century, with many episodes of fiat currencies untied to precious metals inflating and hyperinflating, under the post-World War II Bretton Woods system the United States and its allies up to 1968 had a gold window whereby authorized high rollers (among them most other governments) could still cash in their dollars for gold at a promised official rate.  Under Bretton Woods most other free world currencies were pegged to the dollar.  So you could cash in your local currency for dollars, and (if you were an authorized high roller) your dollars for gold, all at committed official prices.

But Bretton Woods, depending on a single country to ultimately back the entire free world's money, was not financially sustainable.  It established the U.S. dollar as the free-world standard after World War II, when the U.S. made half the world's industrial goods and held over half of its financial reserves. But the economies damaged by World War II quickly recovered, and agricultural and industrial revolutions spread to the developing world, where economic growth greatly quickened.  While the U.S. economy in its own terms was thriving, the relative U.S. role in the world economy declined as those of the rest of the world quickly rose from the ashes. By the mid-1960s the U.S. held only 16% of the world's financial reserves, and even less than that of its gold reserves.  Even though it promised to exchange dollars for gold on demand, the U.S. Federal Reserve issued more notes, and its banks more broadly issued more dollar credit, than could possibly be securely backed by its diminishing gold reserves.

Meanwhile, most academic and government economists scoffed at the gold standard as a "barbarous relic" and held that world monetary conditions would be improved if the U.S. stopped pretending that the dollar was pegged to gold.  By 1968 the U.S. was no longer willing to honor its commitment to deliver an ounce of gold for $35.  The U.S., which since the time of Franklin Roosevelt had banned the domestic private gold trade, now futilely tried to halt overseas private trade by refusing to deal in gold with governments that allowed private gold trade.  (In reality, this was less a serious attempt to stifle overseas private gold trade than an attempt to close down the gold window in a face-saving way.)  The U.S. government forced the London Gold Pool, the gold window mechanism operating between the major free world central banks, to declare a "bank holiday", i.e. shut down its operations.  But of course this didn't stop non-Americans from trading in gold; quite the opposite: it signaled that the U.S. dollar, and all the currencies pegged to the U.S. dollar, had radically changed in form, and might no longer be as reliable as a store of value. The free-market price of gold soon rose well above the $35 official rate. The U.S. was by 1971 forced to officially float the dollar (it had already been floating de facto for up to three years), officially making the U.S. dollar a purely fiat currency.  The pegged currencies followed suit, and Bretton Woods was dead.

All this monetary obscurity matters for our Malthusian scares not just because all the aforesaid commodity prices from then until now are quoted in major fiat currencies, most usually in dollars, and the signals these prices send about industrial supply and demand can only be as reliable as the currencies those prices are denominated in. Such a consideration could, with good statistics based on good records, be reasonably dealt with by computing the "real", inflation-adjusted prices of commodities.  But commodity prices proceeded to skyrocket even in general-inflation-adjusted terms. The dawn of the pure fiat, floating currency regime caused far deeper forces to come into play.

For in reality, post-gold-standard prices for industrial commodities are not driven purely, or often even mostly, by industrial demand being met by appropriate changes in material supply.  Instead, both supply and demand curves have been warped by a new role for the major industrial commodities -- they are no longer purely commodities; they are in part also money.  In particular, they have since the late 1960s been increasingly used as a liquid store of value, easily tradeable for media of exchange (e.g. dollars), as an alternative to and hedge against the new regime of floating rate currencies.

Noise in the signal: radical increase in volatility of two of the most important geopolitical minerals, crude oil and phosphate, between the Bretton Woods (1947-70) and floating rate (1970-present) eras.

What does it mean to say a commodity is part money?  For this we need to turn to economist Carl Menger's theory of the origins of money.  Menger's theory is less an accurate account of how money did historically originate among humans (which happened long before the dawn of the efficient commodity markets postulated by Menger) than a good theory, and reasonably accurate set of predictions, about how a free barter market economy does behave whenever it does arise.  According to Menger, in a world of market-based barter, a very high cost (which economists would now call a transaction cost) comes from having to keep track of on the order of N^2 prices for the N goods, and from the lack of coincidence of mutual wants between the holders of any two particular such commodities.
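
Menger's transaction-cost point is easy to make concrete: in pure barter, every pair of the N goods needs its own exchange ratio, on the order of N^2 prices, while a single intermediate commodity cuts the count to N-1. A minimal sketch:

    # In pure barter, each pair of N goods has its own exchange ratio:
    # N*(N-1)/2 prices, i.e. on the order of N^2. With one good serving
    # as the pricing medium, only N-1 prices need be tracked.

    def barter_prices(n):
        return n * (n - 1) // 2

    def money_prices(n):
        return n - 1

    for n in (10, 100, 1000):
        print(n, barter_prices(n), money_prices(n))
    # 10 45 9 / 100 4950 99 / 1000 499500 999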

For this reason market participants start spontaneously treating some goods as intermediate commodities.  Intermediate commodities are held, not to consume them, but to store value until the next opportunity for exchange comes along.  The intermediate commodity's price increases substantially from what it was when it was just demanded for its consumption: it obtains a price premium for its use as a store of value immune from the changing values of currency or currency-denominated assets the holder of the intermediate commodity would have otherwise held.  This price premium waxes and wanes with the weighted changes in inflation expectations of the world's main traded currencies.

Many commodities might be used this way: and every commodity that is used in this way becomes, in part, money.  It is no longer just a good whose supply is driven purely by the costs of production (since producers may choose to withhold production rather than sell for a currency whose inflation expectations have just increased, or may choose to increase production beyond the needs of immediate demand when inflation expectations have decreased), or whose demand is driven purely by desires or needs for its consumption (since some of the demand -- the vast majority of the demand in the case of gold and silver -- is now for its use as a store of value). It is an intermediate commodity, partial money.
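
A toy model (an illustration for this post, not a formula from the markets literature) of the resulting price: industrial value plus a store-of-value premium that scales with the weighted inflation expectations of the major traded currencies.

    # Toy decomposition (illustrative only): price of a Mengerian
    # intermediate commodity = industrial value plus a monetary premium
    # that waxes and wanes with weighted inflation expectations.

    def intermediate_price(industrial_value, weights, expected_inflation, k=5.0):
        """k is a made-up sensitivity; weights are currency trade weights."""
        weighted = sum(w * pi for w, pi in zip(weights, expected_inflation))
        return industrial_value * (1 + k * weighted)

    # Hypothetical: three currencies with expected inflation of 2%, 8%, 15%.
    print(intermediate_price(10.0, [0.5, 0.3, 0.2], [0.02, 0.08, 0.15]))
    # 13.2 -- a 32% premium over the pure industrial value of 10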

When the primary use of the commodity money is as a medium of exchange, for example the cigarettes used as money in some wartime prisons, strong network effects usually exist to cause the market to converge on one or a few standards -- historically, usually gold and silver.  These undergo a "bubble that does not pop". But the other intermediate commodities, the commodities that don't make it all the way to being dominant and nearly pure money, are bubbles that can and do pop.

When the primary use of commodity money is, however, a store of value -- a store of value that can be readily exchanged when needed for the actual medium of exchange -- there is almost no network effect.  Any commodity that can be stored and so traded can be used as a store of value that will render the owner immune from the perceived or actual risks of holding a floating currency. This storage could occur in the form of a paper or digital future, or as the actual commodity in the warehouse, or even as a readily extractable mineral such as oil still in the ground.

Since the end of Bretton Woods, the major industrial commodities, and especially the major geopolitical commodities such as oil and phosphates, have become Mengerian.  They are no longer purely industrial commodities.  They are also stores of value, places to put wealth in between obtaining money and spending it, that provide an alternative for those who wish, for various reasons, to diversify away from holding currency or assets such as bonds and derivatives that are defined by or correlated to the health of currencies.  In the face of increasing inflation expectations, stockpiling of commodities also decreases the risk that further economic growth that nations may be expecting or planning for could come only at the cost of purchasing the needed inputs at even higher prices in the future (e.g. China in the 2000s). Stockpiling of commodities is also a general strategy whenever international trust erodes and leaders start thinking of potential risks that trade will be slowed or embargoed.

There are other securities that can store and even grow value -- stocks, bonds, real estate.  In many ways all of these are better stores of value than commodities.  However, for a major financial and political power they have drawbacks.  First, they are trust-based -- you are trusting somebody (often a foreigner) to pay the coupons or dividends.  In the case of real estate, it depends on the vagaries of local economic activity and politics. Commodities, especially commodities a government can physically control, are far more trust-minimized.  The U.S. can sanction Russia by freezing the assets of its nationals held in paper or digital form in the U.S., but it cannot take the oil from Russia's wells or stop it from drilling, pumping, and selling it to e.g. Europe.  Finally, commodities in most of their forms, especially as futures, are readily exchanged for pure money such as dollars. Since the theme of this paper is Malthusian resources, we shall focus on the waxing and waning of the intermediateness of commodities, and in particular the geopolitical commodities oil and phosphate rock.

Thus industrial commodities, and especially minerals important to geopolitics like crude oil and phosphate, serve at least three major purposes in the post-Bretton Woods world beyond just industrial or agricultural consumption:

(a) a hedge against increases in expected inflation in floating rate currencies, or greater uncertainties about same,

(b) for governments, protection against coercive economic sanctions by foreign governments, and

(c) for militarily strong countries, a form of wealth relatively immune from foreign attack in time of war.

We will see all these factors at play during the first and second Malthusian scares.


The First Malthusian Scare


Oil "chasing the tail of gold" 
during the 1970s. (Source)

As described above, the Bretton Woods era ended between 1968 and 1971, leading to the purely floating rate regime that has prevailed from 1971 to today. Free trade in gold, which had been banned under Franklin Roosevelt, returned to the U.S.

Shortly thereafter, in 1973-4, an event that many economists have deemed an "exogenous shock", and even a main cause of the dramatic oil price rises of the 1970s, occurred on the world stage: the Arab oil embargo of Britain and the U.S. in response to their intervention on behalf of Israel in the 1973 Yom Kippur War.  Around the same time OPEC more than doubled the dollar prices it charged for its oil.  It is common but highly inaccurate to call OPEC a "monopoly cartel" at that time -- it accounted for only about 3% of U.S. oil consumption in 1967 and still only 6.7% at the end of 1973.  It had increased its pricing power from negligible to slight; it could hardly have more than doubled the price of oil on its own had not other oil-producing companies and countries been of similar mind.


One dog didn't bark: oil prices around the times of the Suez crisis (1956-7), in which Europe lost control of over half its oil supply, and the Arab Crisis (1973-74), involving a temporary embargo that was easy to get around by trading through cutout countries.

By sharp contrast, the Suez Crisis of 1956-7, which produced a much more dramatic and longer-lasting impact on the world's oil supply chains -- Europe suddenly lost military control over more than half of the oil it consumed to a recently independent and hostile Egypt, never to gain it back -- was followed by a far smaller and more ephemeral percentage increase in oil prices.

Due to the increased inflation expectations, oil producers conserved on pumping despite rising prices, and oil consumers stockpiled despite the rising costs of funding those purchases. The reverse would happen throughout the decreases in inflation expectations, and the resulting long decline in commodity prices, during most of the 1980s and 1990s: oil producers kept pumping beyond the needs of the industrial market to hasten the drawdown of their depreciating reservoirs.


What the voter saw: U.S. consumer price index 1973-81.
But during the first Malthusian scare, U.S. and European entities began stockpiling oil and other strategic commodities. For example, U.S. oil imports from the Middle East, which had been only about 3% of consumption in 1967 when relations were friendly, had increased to 6.7% by 1973 even though relations had become hostile and nominal prices of that oil had risen. Meanwhile U.S. oil producers were pumping less oil, choosing to keep relatively more of it in the ground rather than sell it for cheapening dollars.



The Second Malthusian Scare 

As in the 1970s, the commodity rise during the second Malthusian scare of roughly 2004-14 was often sharp and concurrently affected the vast majority of commodities. After substantial advances in both real estate and commodity prices over the previous several years, during 2007 central banks cut rates and raised money supplies to fight collapsing real estate prices. These efforts fueled sharp increases in inflation expectations. Nominal and CPI-adjusted prices of fossil fuels, precious metals, industrial metals, lumber, fibers, fertilizers, grains, oilseeds, dairy, and livestock all skyrocketed during 2007, hitting record highs in nominal prices.  The market's endless search for stores of value independent of the vagaries of fiat currencies had switched from real estate, on which easy loans were defaulting, to commodities that could be held more securely.  This time the most (in)famous stockpiler was China, planning for the rapid economic growth it expected in the decades ahead.

Also as in the 1970s, the individual components of this broad-range trend were attributed to a stupendous variety of idiosyncratic causes -- causes whose co-occurrence, all driving prices in the same direction at the same time, was astronomically improbable. The only reasonable attribution was to related common causes -- which, given that there had been no sudden worldwide industrial boom, had to have been monetary causes.
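
The improbability is easy to sketch numerically (with made-up numbers): even granting each commodity generous odds of a sharp idiosyncratic rise in a given period, the chance of dozens all rising together without a common cause collapses exponentially.

    # Rough illustration (made-up numbers): if each of k commodities
    # independently had probability p of a sharp rise in one period,
    # the odds of all of them rising at once shrink exponentially in k.

    p, k = 0.5, 30   # generous per-commodity odds; ~30 major commodities
    print(p ** k)    # ~9.3e-10, about one chance in a billion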

Warnings about peak oil, peak phosphorous, etc. issued from all the major media outlets and science journals.  In Silicon Valley, a boom in "green technology" startups and investments ensued -- resulting later in mostly bankrupt companies. 

As I wrote at the time:
It is a gross violation of Occam's Razor to attribute the recent very broad-based run-up in dollar commodity prices primarily to the plethora of disparate causes to which they have been attributed: "peak oil", the war in Iraq, ethanol subsidies displacing food, and so on. Rises in industrial demand, increases in the costs of transporting commodities due to high oil prices, and so on explain only a small fraction of the rise in other commodity prices, and do not explain at all why precious metal prices have increased alongside those of other commodities. Occam's Razor points us, as it did to wise investors and economists in the 1970s, to the one kind of commodity all these other commodities have in common: the currencies they are priced in...
...demand for the oft-dreaded but ill-understood "hoarding" and "speculation", that is storing extra commodities (often off-the-books, or at least not in the officially measured warehouses) and the purchase of extra commodity futures and other commodity derivatives to hedge transactions based on government currencies, will remain strong as long as the Federal Reserve continues to inflate the dollar supply, and as long as many developing countries continue to link their currencies to this dollar. Commodity prices in dollars will level off, and then move back down close to historical trends based largely on just industrial consumption, if or when the Fed stops increasing the supply of dollars faster than the demand for dollars.
As it transpired, neither the Malthusian worries nor, as of this writing, the inflation expectations proved justified.  The run-up in oil helped fund the fracking revolution, a technology which defied peak oil theory:


Theory: peak oil diagram after Hubbert. Alleged to apply at all scales, not just to individual wells.


Reality: not as simple as theory

Oil thus demonstrated itself a poorer monetary substitute than gold: being a much more novel commodity than gold, oil is subject to a substantially greater likelihood of technological invention and geological discovery.

Fracking in one diagram

This possibility, which had not been sufficiently priced into oil before fracking, makes oil supply less reliably scarce than that of gold, rendering it less useful as a store of value and reducing the monetary premium of the oil price over oil as it would be priced if it were just an industrial commodity. Second, of course, there is the direct effect of fracking on the supply curve in lowering oil prices.  The effects of the fracking revolution are thus amplified: more than just the Econ-101 supply curve shift is reducing oil prices, because oil's monetary premium is being eroded at the same time. Oil had been a monetary bubble, which is now bursting.

If you are Saudi Arabia or Iraq, sitting on large reservoirs of easily extracted oil, your alternative to treating oil as a store of value, if you still don't trust the Western powers, is to pump even more and trade the proceeds for gold (and also pay off some foolishly acquired debts) -- which is what we now see happening. Thus, paradoxically (to those who analyze oil as no more than an industrial commodity), oil producing countries pump more oil despite much lower oil prices. They are selling oil from their "oil warehouse" below the sands of the Middle East in exchange for gold, which due to the fracking revolution has reasserted its superiority as a trust-minimized currency over oil. If "money is the bubble that doesn't pop", partial money, i.e. Mengerian intermediate commodities, create bubbles that sometimes do pop, as the desirability of various commodities for their monetary properties, especially their value as a store of value relatively immune from political interdiction, waxes and wanes.



As much good news as bad: the two Malthusian scares sandwich an era when inflation-adjusted commodity prices fell by about the same amount that they rose during the two scares combined. (And they have fallen further since 2010.) The explanation of "tight money" vs. "easy money" is pertinent but oversimplified, as one would expect an attempt to explain a theory in chart labels to be...

Conclusion

During the Bretton Woods era the U.S. dollar, pegged to gold at $35 an ounce, served the entire free world as a common and reliable standard of value.  The transition from Bretton Woods to floating rates left the world with no common and reliable standard of value by which to guarantee future real returns on contracts or investments.  As a result the most commonly traded commodities, and especially geopolitical commodities such as oil and phosphates, became Mengerian intermediate commodities, with a price premium as a store of value that waxed and waned with the weighted expected inflation among the world's various floating currencies. As a result, the prices of these commodities have been much more volatile since 1970 than they were during the Bretton Woods era. Epochs of increasing inflation expectations have led to rapid, broad-based commodity price rises, in which the market gives out false signals of scarcity, leading to Malthusian anxieties and panics that we face a future of diminishing natural resources. This in turn has fueled a major and sustained growth in green ideology since the 1960s.

It's no longer debatable that commodity supplies in general pose few limits to long-term industrial growth -- nor, except in the special case of phosphates, any significant limits that cannot eventually be innovated past by substituting newer, more abundant materials for scarcer older ones.  To obtain all the commodities we have consumed in history has involved barely scratching the surface of one planet. Scratching out somewhat more each upcoming decade and century into the foreseeable future is, in terms of that supply, by and large sustainable. A far more debatable proposition is how much environmental impacts will or should limit industrial growth -- for example, what is or should be our ability to continue pumping more carbon dioxide into the atmosphere? That is a debate this paper shall leave for another day.


Thursday, October 08, 2015

Estimating and minimizing consumer worry

The process of selling in general, and web commerce in particular, is often described or charted as a funnel. Prospective customers are poured in at one end, and a smaller number of paying customers come out at the other. The other prospects spill out through other holes or over the side of the funnel and don't bring you any revenue. The fraction of customers left, converted from prospects to customers, is called the conversion rate. As prospects proceed from initial interest to final sale, from initial entry page to clicking the final "I Agree" button, more and more of them become discouraged by the various worries which beset the consumer, and they drop out. The remaining prospects, those who have not dropped off, have been converted into customers or into an audience for your advertisers.

There are a variety of factors that cause drop-off, and they vary from business to business. A common cause is forms.  Simplifying forms often greatly increases conversion rates.  For example, in one study cutting the number of lines on a form in half increased conversions by a third. As one web designer put it: "[i]s every field you're asking the visitor to submit absolutely necessary?  Can you trim the fat and make the process simpler?"
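
A toy drop-off model (my own illustration, not the cited study's methodology): if each form field is completed with some independent probability, conversion falls roughly geometrically with the number of fields, so halving a form can plausibly buy the reported one-third gain.

    # Toy model (illustrative): if each field is independently completed
    # with probability q, conversion ~ q**n_fields, so halving the form
    # multiplies conversion by q**(-n/2).

    q = 0.944                 # assumed per-field completion rate
    full_form, half_form = 10, 5
    print(q ** full_form)     # ~0.56: conversion with the long form
    print(q ** half_form)     # ~0.75: with the short form, ~1.33x better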

Besides the sheer tediousness and time consumed in filling out forms, rational consumers also worry about the potentials for privacy violation and identity theft from the information most e-commerce sites currently require them to divulge: physical and e-mail addresses, phone numbers used for cross-site behavioral tracking, insecure credit card numbers, and more.

The tinfoil-wallet crowd is now mainstream

Instances of regret that one has filled out a form, only to have one's trust violated -- or pride among the sophisticated that they refused to fill out such a form -- are on the rise.

The worst worry culprit is usually the step you most want your customers to complete -- paying you. "[T]he credit card form likely has the highest abandonment rate of any other part of the sign-up process." [Source].

If you don't require payments, you are probably funding your service through advertisements. Those also cause worries. Ads typically distract and delay from the content users are after, provide a low quality of entertainment or information, and are too often offensive. And sophisticated users are worried about the tracking that tends to go with ads. Ad blocking grew nearly tenfold between 2009 and 2014.

Replacing ads and identity-based payments with payments that don't require identity, such as bitcoin, can greatly reduce these worries, lowering the barriers and hesitations that currently prevent consumers from paying for your service.

But there remains a big worry that no payment system can reduce.  Consumers worry about whether they are getting their money's worth -- the mental transaction cost problem (see also this paper). If e-commerce were as worry-free as some of it could be, your customers would neither have to fill out forms, nor be bothered by ads, nor have to worry about repeated charges for content or services of variable value. They would be able to just insert a few digital coins into your online vending machine and then not have to worry about losing your service for another year. Eliminate forms and eliminate repeated payments -- both are key to worry-minimized e-commerce.

Many bitcoin startups are making the grave mistake of replacing one set of worries with another. The ability of cryptocurrency systems to facilitate small payments tempts many companies to nickel-and-dime their customers with pay-per-click micropayments and other such excruciating schemes. Don't follow the many lemmings who have already jumped off that cliff. Stick to long-term subscriptions for content (or other services of variable value) and pay-per-unit for fungible units of consistent value (as in phone minutes).  That way customers aren't saddled with having to constantly re-evaluate the amount and worthiness of recurring charges. The cost to your customers of financing a year's worth of low-cost subscription to a reputable brand is almost always far less than the mental transaction costs of recurring charges for content or services of variable value. The ideal of worry-free commerce is to "stick the coin into the machine" once, and then never have to pay again for an entire year. A vending machine for subscriptions. Reduce your customers' worries across the board: eliminate forms and eliminate recurring charges.

Ideal worry-minimization can only be closely approached in some purely online forms of commerce, such as video streaming, remote storage, privacy services, and the like. The more physical and offline contract performances are -- a common example being physical delivery -- the more location and various kinds of identity (legal, social network, etc.) may need to come into play, adding, often greatly and necessarily, to the worry overhead, the mental transaction costs, of your relationship with your consumers.

I have previously called this worry-minimized commerce by a narrower label, "form-minimized commerce."  The complexity of the forms you make your users fill out is indicative of the worries you are causing them, and thus of the barriers you are putting up between your prospects and their decisions to purchase your services.

When you are a consumer, the tediousness of the forms you are filling out is not only a direct cost of your time, and of your ability to enjoy that time; it is, on top of that, a decent proxy measure of the odds of your identity being stolen and of your privacy otherwise being violated. The fewer forms you fill out, the more the tediousness, worries, and risks in your life caused by interacting with the world's institutions will drop in proportion.

While such a proxy measure does not account in particular for the wide variety of information that can be disclosed, nor that some kinds of information (social security numbers) are more risky to divulge than others (throw-away email addresses), nor for the wide variety of risks in identity theft and privacy violation that are consequent, nevertheless consumers necessarily must bring to bear such sweeping rules-of-thumb in order to satisfactorily navigate the bizarre complexities of the digital world.  And when your users are using, whether consciously or implicitly, such estimates, you the service provider and the product designer must use them too.

Add to the forms your customers must fill out the repeated charges you impose on them, and we get a rough proxy measure of the worry that you are causing your consumers:
Index of worry = number of lines of forms +  number of repeated charges for content or services of variable value
If you are funded through ads rather than consumer payments, you can substitute for the repeated charges the proportion of screen space covered by your ads, or any other reasonable estimate of the delay and distraction the ads on your pages cause.
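
As a sketch, the index might be computed as follows (the scale factor that converts ad burden into charge-equivalent worry is an assumption of this illustration, not a measured constant):

    # A minimal sketch of the "index of worry" defined above. Ad-funded
    # services substitute an ad-burden estimate for repeated charges.

    def index_of_worry(form_lines, variable_charges=0, ad_screen_fraction=None):
        if ad_screen_fraction is not None:
            # e.g. 0.3 if ~30% of screen space is ads; the 10x scale is
            # an assumed conversion into charge-equivalent worry
            return form_lines + 10 * ad_screen_fraction
        return form_lines + variable_charges

    print(index_of_worry(form_lines=12, variable_charges=4))     # 16
    print(index_of_worry(form_lines=0, ad_screen_fraction=0.3))  # 3.0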

The index of worry allows you to estimate and minimize the worries you are causing your users, and as a result to minimize the drop-off in your sales funnel and maximize the number of users coming back for more -- and willing to view your ads or pay for the privilege.

Friday, July 03, 2015

The Greek financial mess; and some ways Bitcoin might help

Many years of government debt buildup in Greece have ultimately resulted, in the last few days, in a political and financial maelstrom.  The political maelstrom includes demonstrations in the run-up to a referendum on obscure debt-restructuring provisions to be held this Sunday (July 5th). This article focuses on the financial problems and some potential practical steps that can be taken to mitigate them. The imposition of capital controls is a disaster for a modern trade-driven economy, a catastrophe which, however, digital technology, and in particular the digital currency bitcoin (which given the Greek environment usually must involve direct use of the Bitcoin blockchain), has the potential to mitigate.  This article will explore some of the severe practical problems that capital controls are causing Greek individuals and businesses, and suggest some potential bitcoin solutions, many of which could also be applied to other countries with similar financial problems and controls, such as Argentina and Venezuela.  These aren't solutions that can be applied in time to help with the current 6 days of capital controls, but they could substantially help some Greeks and some aspects of the Greek economy if some version of these controls is continued for months or years.

At root the Greek financial problem is that the Greek government has spent more, compared to the GDP generated by its economy, than the vast majority of other governments.  It has borrowed copious sums to do so, falling ever deeper into debt.  Here is its payment schedule:

And you thought your student loan debt was bad


The only place the Greek government has left to go for money to fund its ongoing expenditures and pay these debts is Greek banks.  Fearing capital controls and "haircuts" (government confiscation of certain fractions of bank deposits), many Greeks in recent months have, quite rationally, started withdrawing money from their banks and sending it overseas.  More trusting Greeks kept their savings in their banks, with the result that, with the imposition of capital controls last Monday, they have been locked out of their savings, and plans for "haircuts" of 30% or more have been reported. (If somebody lopped off 30% of your head, you'd have more than a haircut.)

When capital controls were first rumored and then announced on Sunday, vast lines formed at ATMs as Greeks rushed to rescue what little of their life savings that they could:


ATM line, Thessaloniki


ATM line, Larissa City





On Tuesday, the Greek government defaulted on its scheduled debt payment to the International Monetary Fund (IMF).

Under capital controls, ATM withdrawals from Greek bank accounts are now limited to 60 euros a day.  Debit cards can still be used for payments within the country, but the money simply gets transferred from one frozen bank account to another.  As a result many businesses no longer accept debit cards, and many more are demanding a substantial premium price (in at least one business, double) for debit cards (transferred bank balances) versus hard cash.  There is a growing shortage of such cash; as a result some stores are paying their suppliers in private "scrip", which can be used by the suppliers' workers to purchase goods from the issuing store (more on this below).

Use of credit and debit cards to pay out of the country is banned and effectively blocked, resulting in a near-complete freeze-out of Greeks from Internet commerce. This restriction, along with the controls resulting in Greeks being excluded from the pan-European money settlement system, means that Greek businesses can't pay for imports.  Many shipments into the country have been halted as a result. (The government plan is to create a whitelist of politically approved cases in which such payments for imports will be unblocked).

A crucial feature of store-issued scrip is that it literally circulates through a complete closed cycle: store --> supplier --> workers --> store.  Such specific cycles are a pattern that is commonly found when currencies are primitive or newly emerging, and every Bitcoin marketer and evangelist should be familiar with them.


The kula ring, two specific cycles (counter-circulating cycles of shell money) allowing exchange of seasonal goods in the precolonial South Pacific
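
In software terms a specific cycle is just a closed loop in the trade graph. Here is a minimal sketch (hypothetical names, not any deployed system) of checking that a proposed chain of payments actually circulates back to its start:

    # A "specific cycle" is a chain of payments that returns to its
    # starting party, so a medium of exchange can keep circulating.
    # Minimal check: each payee is the next payer, and the loop closes.

    def is_specific_cycle(payments):
        """payments: ordered list of (payer, payee) pairs."""
        if not payments:
            return False
        for (_, payee), (next_payer, _) in zip(payments, payments[1:]):
            if payee != next_payer:
                return False
        return payments[-1][1] == payments[0][0]

    greek_scrip = [("store", "supplier"), ("supplier", "workers"),
                   ("workers", "store")]
    print(is_specific_cycle(greek_scrip))  # True: the scrip can circulate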

It doesn't help much to sell bitcoin to isolated individuals: as a mere store of value its volatility is much greater than that of most existing currencies; as an investment it only makes sense as a tiny high-risk fraction of one's portfolio.  Bitcoin does have some political-affinity and status value in developed countries; by contrast, in many developing countries and in countries under financial crisis such as Greece, there are urgent needs bitcoin potentially can address.  In terms of these needs Bitcoin is mainly useful as a way to send money across borders for investment in more stable assets overseas, and as a substitute for cash or other substitute currencies in a money-starved environment.

To have value as a medium of exchange, bitcoin must be taken up by a community of people who already frequently trade with each other, and who have a strong need to use it in these trades. It is especially important to market to the links in the cycle that have the strongest negotiating leverage with the others (in the case of the Greek store scrip cycle, the store and its larger suppliers).  The link in the cycle with the greatest incentive to switch to bitcoin here is likely the store's suppliers, because they don't fully trust the store, nor the underlying currency, euro or post-euro, that is the "O" in an IOU, but are participating in the scrip because, sans bitcoin, they have no other choice.

In bitcoin, specific cycles create other cost savings.  Almost everywhere they economize on the increasingly high KYC/AML (know your customer/anti-money-laundering) costs of going through a fiat-bitcoin exchange.  What's more, in a capital controls environment like Greece, specific cycles avoid the capital controls that would be imposed on a Greek-based fiat-bitcoin exchange, and avoid the need for nearly all Greek customers of out-of-country exchanges to futilely try to tap into their frozen bank accounts in order to purchase bitcoin. Bitcoin will not, contrary to some feverish news reporting, help Greeks get money out of their frozen bank accounts.

But bitcoin does have great potential to help in less obvious ways: for one thing, as a superior substitute for the emerging store scrips, vulnerable neither to trust in an issuing store nor in any currency underlying an IOU.  For another, it could help greatly with the severe cross-border commerce issues that are emerging.  Exporters, including freelancers working over the Internet, can bring bitcoin into the country, thereby avoiding earning wages that get deposited into frozen bank accounts (per Greek lore, be wary of a cave with many tracks coming in but few coming out). Importers can pay for goods with bitcoin while other electronic payment channels (European money settlements, Paypal, and credit & debit cards when paying foreign businesses, etc.) remain frozen. Again, specific cycles must be set up: isolated marketing to just exporters or importers will be far less effective than organizing existing supply chains that involve both.

There are likely many other, mostly highly non-obvious, niches in which bitcoin, and other cryptocurrencies, and smart contract platforms could play a quite valuable role in capital-controlled and other financially handicapped countries.

Bitcoin is not easy to learn, either conceptually or in setting up businesses and individuals with the software (and preferably also the secure hardware) to accept it. This is especially the case in a capital controls climate, where the traditional bitcoin exchanges and retail payment companies, with the consumer-friendly front ends they normally operate in developed countries, likely can't effectively operate. To take advantage of bitcoin many Greeks will have to use the Bitcoin blockchain directly. So it's too late for bitcoin to help much with the current 6 days of bank closure, but once the learning curves have been surmounted and the participants in specific cycles educated, bitcoin has great potential to address many likely ongoing problems with capital controls, in Greece as long as they continue in various forms, and in many other parts of the world where such financial restrictions, designed for a pre-digital era, have been imposed.

[Update: various minor edits: the first version was rather rough, sorry :-)]

Monday, May 25, 2015

Small-game fallacies

A small-game fallacy occurs when game theorists, economists, or others trying to apply game-theoretic or microeconomic techniques to real-world problems, posit a simple, and thus cognizable, interaction, under a very limited and precise set of rules, whereas real-world analogous situations take place within longer-term and vastly more complicated games with many more players: "the games of life".  Interactions between small games and large games infect most works of game theory, and much of microeconomics, often rendering such analyses useless or worse than useless as a guide for how the "players" will behave in real circumstances. These fallacies tend to be particularly egregious when "economic imperialists" try to apply the techniques of economics to domains beyond the traditional efficient-markets domain of economics, attempting to bring economic theory to bear to describe law, politics, security protocols, or a wide variety of other institutions that behave very differently from efficient markets. However as we shall see, small-game fallacies can sometimes arise even in the analysis of some very market-like institutions, such as "prediction markets."

Most studies in experimental economics suffer from small-game/large-game effects. Unless these experiments are very securely anonymized, in a way the players actually trust, and in a way the players have learned to adapt to, overriding their moral instincts -- an extremely rare circumstance, despite many efforts to achieve this -- large-game effects quickly creep in, rendering the results often very misleading, sometimes practically the opposite of the actual behavior of people in analogous real-life situations. A common example: it may be narrowly rational and in accord with theory to "cheat", "betray", or otherwise play a narrowly selfish game, but if the players may be interacting with each other after the experimenters' game is over, the perceived or actual reputational effects in the larger "games of life", ongoing between the players in subsequent weeks or years, may easily exceed the meager rewards doled out by the experimenters for acting selfishly in the small game. Even if the players can somehow be convinced that they will remain complete strangers to each other indefinitely into the future, our moral instincts generally evolved to play larger "games of life", not one-off games, nor anonymous games, nor games with pseudonyms of strictly limited duration, with the result that behaving according to theory must be learned: our default behavior is very different. (This explains why, for example, economics students typically play in a more narrowly self-interested way, i.e. more according to the simple theories of economics, than other kinds of students.)

Small-game/large-game effects are not limited to reputational incentives to play nicer: moral instincts and motivations learned in larger games also include tribal unity against perceived opponents, revenge, implied or actual threats of future coercion, and other effects causing much behavior to be worse than selfish, and these too can spill over between the larger and smaller games (when, for example, teams from rival schools or nations are pitted against each other in economic experiments). Moral instincts, though quite real, should not be construed as necessarily or even usually being actually morally superior to various kinds of learned morals, whether learned in economics class or in the schools of religion or philosophy.

Small-game/large-game problems can also occur in auditing, when audits look at a particular system and fail to take into account interactions that can occur outside their system of financial controls, rendering the net interactions very different from what simply auditing the particular system would suggest. A common fraud is for trades to be made outside the scope of the audit, "off the books", rendering the books themselves very misleading as to the overall net state of affairs.

Similarly, small-game/large-game problems often arise when software or security architects focus on an economics methodology, focusing on the interactions occurring within the defined architecture and failing to properly take into account (often because it is prohibitively difficult to do so) the wide variety of possible acts occurring outside the system and the resulting changes, often radical, to incentives within the system. For example, the incentive compatibility of certain interactions within an architecture can quickly disappear or reverse when opposite trades can be made outside the system (such as hedging or even more-than-offsetting a position that by itself would otherwise create a very different incentive within the system), or when larger political or otherwise coercive motivations and threats occur outside the analyzed incentive system, changing the incentives of players acting within the system in unpredictable ways. Security protocols always consist of at least two layers: a "dry layer" that can be analyzed by the objective mathematics of computer science, and a "wet layer" that consists of the often unpredictable net large-game motivations of the protocols' users.  These should not be confused, nor should the false precision of mathematical economic theories be confused with the objective accuracy of computer science theories, which are based on the mathematics of computer architecture and algorithms and hold regardless of users' incentives and motivations.

A related error is the pure-information fallacy: treating an economic institution purely as an information system, accounting only for market-proximate incentives to contribute information via trading decisions, while neglecting how that market necessarily also changes players' incentives to act outside of that market. For example, a currently popular view of proposition bets, the "prediction markets" view, often treats prop bets or idea futures as purely information-distribution mechanisms, with the only incentive supposed to be the benign one of profiting by adding useful information to the market. This fails to take into account the incentives such markets create to act differently outside the market.  A "prediction market" is always also one that changes incentives outside that market: a prediction market automatically creates parallel incentives to bring about the predicted event. For example, a prediction market on a certain person's death is also an assassination market. Which is why a pre-Gulf-War-II DARPA-sponsored experimental "prediction market" included a prop bet on Saddam Hussein's death, but excluded such trading on any other, more politically correct world leaders. A sufficiently large market predicting an individual's death is also, necessarily, an assassination market, and similarly other "prediction" markets are also act markets, changing incentives to act outside that market to bring about the predicted events.

Thursday, December 11, 2014

The dawn of trustworthy computing

When we currently use a smart phone or a laptop on a cell network or the Internet, the other end of these interactions typically runs on other solo computers, such as web servers. Practically all of these machines have architectures that were designed to be controlled by a single person or by a hierarchy of people who know and trust each other. From the point of view of a remote web or app user, these architectures are based on full trust in an unknown "root" administrator, who can control everything that happens on the server: they can read, alter, delete, or block any data on that computer at will. Even data sent encrypted over a network is eventually unencrypted and ends up on a computer controlled in this total way. With current web services we are fully trusting, in other words we are fully vulnerable to, the computer, or more specifically the people who have access to that computer, both insiders and hackers, to faithfully execute our orders, secure our payments, and so on. If somebody on the other end wants to ignore or falsify what you've instructed the web server to do, no strong security is stopping them, only fallible and expensive human institutions, which often stop at national borders.

The high vulnerability we have to web servers stands in sharp contrast to traditional commercial protocols, such as ticket-selling at a movie theater, that distribute a transaction so that no employee can steal money or resources undetected. There is no "root administrator" at a movie theater who can pocket your cash undetected. Because, unlike a web server, these traditional protocols, called financial controls, can securely handle cash, you didn't have to fill out a form to see a movie, shop for groceries, or conduct most other kinds of every-day commerce. You just plunked down some coin and took your stuff or your seat. Imperfect and slow as these processes often are (or were), these analog or paper-based institutions often provided security, financial control, and/or verifiability of fiduciary transactions in many ways far superior to what is possible on web servers, at much less hassle and privacy loss to customers. On the Internet, instead of securely and reliably handing over cash and getting our goods or services, or at least a ticket, we have to fill out forms and make ourselves vulnerable to identity theft in order to participate in e-commerce, and it is often difficult, even prohibitively so, to conduct many kinds of commerce, even purely online kinds, across borders and other trust boundaries. Today's computers are not very trustworthy, but they are so astronomically faster than humans at so many important tasks that we use them heavily anyway. We reap the tremendous benefits of computers and public networks at large costs of identity fraud and other increasingly disastrous attacks.

Recently developed and developing technology, often called "the block chain", is starting to change this. A block chain computer is a virtual computer, a computer in the cloud, shared across many traditional computers and protected by cryptography and consensus technology. A Turing-complete block chain with large state gives us this shared computer. Earlier efforts included state-machine replication (see the list of papers linked below). QuixCoin is a recent project, and Ethereum a current one, that has implemented such a scheme. These block chain computers will allow us to put the most crucial parts of our online protocols on a far more reliable and secure footing, and make possible fiduciary interactions that we previously dared not do on a global network.
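To make the idea of a shared, replicated computer concrete, here is a minimal sketch of state-machine replication (a toy of my own, not the actual mechanism of any named project): every node deterministically replays the same ordered log of instructions, so all honest nodes compute the same state.

```python
# A minimal sketch of state-machine replication: nodes that apply the
# same ordered log of instructions agree on the resulting state.

def apply_log(log):
    """Deterministically replay a log of (account, delta) instructions."""
    state = {}
    for account, delta in log:
        state[account] = state.get(account, 0) + delta
    return state

shared_log = [("alice", 50), ("bob", 20), ("alice", -10)]

# Any node, anywhere, replaying the same log computes the same state:
node_a = apply_log(shared_log)
node_b = apply_log(shared_log)
assert node_a == node_b == {"alice": 40, "bob": 20}
```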

Much as pocket calculators pioneered an early era of limited personal computing before the dawn of the general-purpose personal computer, Bitcoin has pioneered the field of trustworthy computing with a partial block chain computer. Bitcoin has implemented a currency in which someone in Zimbabwe can pay somebody in Albania without any dependence on local institutions, and can do a number of other interesting trust-minimized operations, including multiple signature authority. But the limits of Bitcoin's language and its tiny memory mean it can't be used for most other fiduciary applications, the most obvious example being risk pools that share collateral across a pool of financial instruments.

A block chain computer, in sharp contrast to a web server, is shared across many such traditional computers, controlled by dozens to thousands of people. By its very design the computers check each other's work, and thus a block chain computer reliably and securely executes our instructions up to the security limits of block chain technology, which is known formally as anonymous and probabilistic Byzantine consensus (sometimes also called Nakamoto consensus). The most famous security limit is the much-discussed "51% attack". We won't discuss this limit or the underlying technology further here, other than to say that the oft-used word "trustless" is exaggerated shorthand for the more accurate mouthful "trust-minimized", which I will use here. "Trust" used in this context means the need to trust remote strangers, and thus be vulnerable to them.

Trust-minimized code means you can trust the code without trusting the owners of any particular remote computer. A smart phone user in Albania can use the block chain to interact with a computer controlled by somebody in Zimbabwe, and they don't have to know or trust each other in any way, nor do they need to depend on the institutions of either country, for the underlying block chain computer to run its code securely and reliably. Regardless of where any of the computers or their owners are, the block chain computer they share will execute as reliably and securely as consensus technology allows, up to the aforementioned limits. This is an extremely high level of reliability, and a very high level of security, compared to web server technology.

Instead of the cashier and ticket-ripper of the movie theater, the block chain consists of thousands of computers that can process digital tickets, money, and many other fiduciary objects in digital form.  Think of thousands of robots wearing green eye shades, all checking each other's accounting. Individually the robots (or their owners) are not very trustworthy, but collectively, coordinated by mathematics, they produce results of high reliability and security.

Often block chain proponents talk about the "decentralized" block chain versus the "centralized" web or centralized institutions. It's actually the protocol (Nakamoto consensus, which is highly distributed), combined with strong cryptography, rather than decentralization per se, that is the source of the far higher reliability and much lower vulnerability of block chains. The cryptography provides an unforgeable chain of evidence for all transactions and other data uploaded to the block chain. Many other decentralized or peer-to-peer (P2P) technologies do not provide anything close to the security and reliability provided by a block chain protected by full Byzantine or Nakamoto consensus and cryptographic hash chains, but deceptively style themselves as block chains or cryptocurrencies.
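A minimal sketch of that unforgeable chain of evidence, in toy form (this shows the hash-chain idea only, not consensus, signatures, or any particular implementation):

```python
# Each block commits to its predecessor's hash, so altering any past
# record invalidates every later hash.
import hashlib
import json

def make_block(prev_hash, data):
    body = json.dumps({"prev": prev_hash, "data": data}, sort_keys=True)
    return {"prev": prev_hash, "data": data,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

genesis = make_block("0" * 64, "genesis")
b1 = make_block(genesis["hash"], "alice pays bob 10")
b2 = make_block(b1["hash"], "bob pays carol 5")

def verify(chain):
    """Check every block's hash and its link to the previous block."""
    for prev, block in zip(chain, chain[1:]):
        body = json.dumps({"prev": block["prev"], "data": block["data"]},
                          sort_keys=True)
        if block["prev"] != prev["hash"]:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
    return True

assert verify([genesis, b1, b2])
b1["data"] = "alice pays mallory 10"   # tamper with history...
assert not verify([genesis, b1, b2])   # ...and verification fails
```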

A big drawback is that our online and distributed block chain computer is much slower and more costly than a web server: by one very rough estimate, about 10,000 times slower and more costly, or about what it cost to run a program on a normal computer in 1985. For this reason, we only run on the block chain that portion of an application that needs to be the most reliable and secure: what I call fiduciary code. Since the costs of human ("wet") problems caused by the unreliability and insecurity of web servers running fiduciary code are often far higher than the cost of the extra hardware needed to run block chain code, when web server reliability and security falls short, as it often does for fiduciary computations such as payments and financial contracts, it will often make more sense to run that code on the block chain than to run it less reliably and securely on a web server. Even better, the block chain makes possible new fiduciary-intensive applications, such as posting raw money itself to the Internet, securely and reliably accessible anywhere on the globe: apps that we would never dare run on a web server.

What kinds of fiduciary code can we run? We are still thinking up new applications and the categories will be in flux, but a very productive approach is to think of fiduciary applications by analogy to the traditional legal code that governs traditional fiduciary institutions. Fiduciary code will often execute some of the functions traditionally thought of as the role of commercial law or security, but with software that securely and reliably spans the globe regardless of traditional jurisdiction. Thus:

* Property titles (registered assets), where the on-chain registry is either the legally official registry for off-chain assets or controls on-chain ones, thus providing reliable and secure custody of them. One can think of a cryptocurrency such as Bitcoin as property titles (or at least custody enforced by the block chain consensus protocol) to bits recognized as being a fixed portion of a currency, or as controlling unforgeably costly bits, or both. Block chains could also control hardware which controls the function of and access to physical property. (A toy sketch of such a registry, combined with the multisig authority described below, follows this list.)

* Smart contracts: here users (typically two of them) agree via user interface to execute block chain code, which may include transfer of money and other chain-titled assets at various times or under various conditions, transfer and verification of other kinds of information, and other combinations of wet or traditional (off-chain) and dry (on-chain) performance. A block chain can hold cryptocurrency as collateral (like an escrow) which incentivizes off-chain performance that can be verified on-chain, by the parties or by third parties. A full block chain computer can pool on-chain assets into a single chain-controlled risk pool spread among many similar financial contracts, reducing the amount of collateral that needs to be stored on-chain while minimizing the need for off-chain collateral calls. The block chain can also make the search, negotiation, and verification phases of contracting more reliable and secure. With on-chain smart contracts we will be able to buy and sell many online services and financial instruments by button and slider instead of by laboriously filling out forms that disclose our private information.

* On-chain treasuries, trusts, and similar, where money lives on the block chain and is controlled by multiple signature ("multisig") authority. Putting a treasury with signature authority on a block chain computer is low-hanging fruit, but it is often tied to more speculative efforts under the label "distributed autonomous organization" (DAO), which may include voting shares and other mechanisms to control the treasury like a corporation or other kind of organization.
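
As a concrete, purely hypothetical illustration of the first and third items above, here is a toy registry whose titles are transferred under multisig authority. Real chain code would verify digital signatures against registered public keys; plain string names stand in for them here, and all names and numbers are invented:

```python
class TitleRegistry:
    """Toy on-chain registry: each asset's title is held by a set of owners,
    and transfers require a threshold of owner approvals (multisig)."""

    def __init__(self):
        self.titles = {}  # asset -> (set of owners, approval threshold)

    def register(self, asset, owners, threshold):
        assert 1 <= threshold <= len(owners)
        self.titles[asset] = (set(owners), threshold)

    def transfer(self, asset, approvals, new_owners, new_threshold):
        owners, threshold = self.titles[asset]
        # Stand-in for verifying digital signatures against registered keys:
        if len(set(approvals) & owners) < threshold:
            raise PermissionError("not enough owner approvals")
        self.register(asset, new_owners, new_threshold)

registry = TitleRegistry()
registry.register("lot-42", owners=["alice", "bob", "carol"], threshold=2)

# A 2-of-3 transfer succeeds:
registry.transfer("lot-42", approvals=["alice", "carol"],
                  new_owners=["dave"], new_threshold=1)

# An outsider's approval is rejected:
try:
    registry.transfer("lot-42", approvals=["eve"],
                      new_owners=["eve"], new_threshold=1)
except PermissionError as err:
    print("rejected:", err)
```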

I hope to discuss these block chain applications, especially smart contracts, in future posts. While there is much futurism in many block chain discussions, including many trying to solve problems that aren't actually solved by the block chain, I will generally stick to low-hanging fruit that could be usefully implemented on Quixcoin, Ethereum, or similar technology in the near future, often interfacing to still necessary parts of traditional protocols and institutions rather than trying to reinvent and replace them in whole.

References

Here is a list of basic computer science papers describing the technology of block chains (including cryptocurrencies).

Wet vs. dry code

Thursday, October 16, 2014

Transportation, divergence, and the industrial revolution

After about 1000 AD northwestern Europe started a gradual switch from using oxen to using horses for farm traction and transportation. This trend culminated in an eighteenth-century explosion in roads carrying horse-drawn carriages and wagons, as well as in canals and works greatly extending the navigability of rivers, both carrying horse-drawn barges. This reflected a great rise in the use of cultivated fodder, a hallmark of the novel agricultural system that had been evolving in northwestern Europe from the start of the second millennium: stationary pastoralism. During the same period, and especially in the seventeenth through nineteenth centuries, most of civilized East Asia, and in particular Chinese civilization along its coast, navigable rivers, and canals, faced increasing Malthusian pressures and evolved in the opposite direction: from oxen towards far more costly and limited human porters. Through the early middle ages China had been far ahead of the roving bandits of northern Europe in terms of division of labor and technology, but after the latter region's transition to stationary pastoralism that gap closed and Europe surged ahead, a growth divergence that culminated in the industrial revolution. In eighteenth-century Europe, and thus in the early industrial revolution, muscle power was the engine of land transportation, and hay was its gasoline.

Metcalfe's Law states that the value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed is inversely proportional to the square of the cost per mile of transportation. Combine this with Metcalfe's Law and we reach a dramatic but mathematically solid conclusion: the potential value of a land transportation network is proportional to the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith's observations: the division of labor (and thus the value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).
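For concreteness, that arithmetic in a few lines of Python (the exponent of exactly 4.0 is the idealization just described, not new data):

```python
def relative_network_value(cost_ratio):
    """Potential value multiplier when cost per mile falls to cost_ratio
    of its old level: reachable area (hence node count) scales as
    1/cost**2, and Metcalfe's Law squares the node count."""
    reachable_nodes = (1.0 / cost_ratio) ** 2
    return reachable_nodes ** 2

print(relative_network_value(0.5))   # halving the cost -> 16.0x the value
print(relative_network_value(0.25))  # quartering it    -> 256.0x
```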

The early industrial revolution was highly dependent on bringing together bulk goods such as coal and iron ore. Land transportation of such materials more than a dozen miles was, in most parts of the world, prohibitively costly, and they were only rarely located a shorter distance from navigable water (the cost per mile of water transport was generally orders of magnitude cheaper than the cost per mile of land transport). As a result, the early industrial revolution, and the potential for a region to be the first to industrialize, was very sensitive to small changes in land transportation costs.

Furthermore, land and sea-borne transportation were far more complements than substitutes. Cheaper land transportation was a "force multiplier" for water transportation. Decreasing the cost of getting to port from field or mine by a factor of two increased the number of accessible fields and mines by a factor of four, and increased the number of possible ways to divide labor, and thus the value, by an even greater factor via Metcalfe's Law. This in turn incentivized greater investment in sea-borne transport. It's thus not surprising that, even before the industrial revolution, the leaders in global trade and colonization were European countries that could access the Atlantic.

By the dawn of the industrial revolution in northwest Europe the effects of horse haulage had already been dramatic: a drop by a factor of two in the cost, and an increase by about the same factor in the speed, of transporting goods by land, with a corresponding increase in commercial crop area, in the area that could be economically lumbered, and in the coal and metals that could be economically mined. Multiply that factor of four by much more when we factor in (1) the innovations in wheels, tires, shock absorption, and road building that followed on the heels, as it were, of the great increase in horse haulage, and (2) the great increase in mileage and inland penetration of navigable rivers and canals, especially in the 18th century, the barges again hauled by horses. And as Metcalfe's Law suggests, the number of combinations, and thus the value, increased by a far greater factor still. Not only did northwestern European ports have access to far more land, but there were far more ports far more "inland" along rivers and canals, thanks again chiefly to the draft horses and the nutrient-rich cultivated fodder that fed them.

To enable the industrial revolution, mines and nutrient-dense fodder had to be colocated within efficient bulk transport distance of each other — which in the case of horses hauling coal or wood by rural road was typically less than twenty miles, and for oxen and human porters far less still — to produce the low-cost bulk transportation networks needed to make industrial-revolution-scale use of most commercial crops and mines. Efficient bulk transportation is needed _all the way_ between the iron mine, the coal mine, and the smelter. Because the cost per mile of water transport was so much smaller than the cost of land transport, this "last few miles to the mine" problem usually played a dominant role in transportation economics, somewhat analogous to the "last mile" problem in modern cable networks. That's why stationary pastoralism, with its efficient markets for nutrient-dense (because cultivated) fodder, was such a huge win: it allowed horses to be housed at the mines, canals, roads, and factories where they worked, which no place in the world outside Europe could do during that era. Nutrient-dense fodder created a virtuous recursion, enabling itself to be harvested (via horse-drawn mowers and rakes) and transported to mine, factory, and stable at increasingly lower costs.

Industrialization came in many phases. Very roughly speaking, the first phase, in the latter half of the eighteenth century, involved the culmination and optimization of the use of horses: northwestern Europe, and especially England, greatly expanded its horse-drawn wagon and carriage roads and its horse-drawn barge canal networks. Horses brought coke or charcoal and iron ore to the smelters. Horse-powered capstans performed some arduous farm tasks such as threshing. Along with primitive Newcomen steam engines, they pumped water from coal mines. Horse gins also powered most of the early versions of innovative textile machinery (which switched to more power-efficient water mills when they later scaled up). That classic carnival ride, the merry-go-round, was inspired by these perpetually circling horses.

Again roughly speaking, the second phase of industrial growth, after about 1830, was more scientific and far easier to copy than northwestern Europe's unique biology: steam engines came to replace horse gins and water mills for running industrial machinery, and the steam-powered railroad radically lowered transportation costs between major mines, factories, and urban centers. When non-European countries industrialized, such as Japan after the 1870s, they did it in a "leap-frog" style: they skipped over the long-evolved improvement in draft animals and went straight to mature steam engines and, soon thereafter, electrical motors, much as countries installing phone networks for the first time over the last few decades have leap-frogged over the land-line era, going straight to cell phones. Starting early in the 20th century, industrializing countries could replace all the remaining important functions of the horse with internal combustion engines. England, which made the longest and most thorough use of the horse, and thereby had the transportation economies allowing it to pioneer the industrial revolution, had a less pressing need for the internal combustion engine and thus lagged enough in that technology that second-generation industrializers like Japan, Germany, and the United States became leaders in internal combustion engine products.

Given the scientific nature of the second phase of the industrial revolution, which could be discovered by any culture full of literate craftsmen, this second phase was more technologically inevitable and didn't ultimately depend on northwestern Europe's unique biology. At the same time, during the long evolution that culminated in the industrial revolution, and during its first phase, land transportation the world over was muscle powered, and the unique system of stationary pastoralism, by breeding draft horses that ran on cultivated, nutrient-dense fodder, substantially lowered transportation costs. This radically increased the value of northwestern Europe's bulk transportation networks and made it very nearly as inevitable that that region would be the pioneer of the industrial revolution.

Hat tips and references: Edward Wright and Raymond Crotty, among many other authors, have explored some of these issues.

Wednesday, July 02, 2014

Tweeting

https://twitter.com/NickSzabo4

Monday, November 18, 2013

European-Asian divergence predates the industrial revolution

Stephen Broadberry describes new estimates of per capita GDP which say that the economic divergence between Western Europe and other civilized parts of the world predates the industrial revolution.  (H/T Marginal Revolution).  This is more consistent with my own theories (linked below) than the idea that the Great Divergence magically appears from nowhere around the year 1800.  Nevertheless I feel compelled to point out shortcomings in these kinds of estimates, on any side of such debates.

There are the usual correctable, but sadly seldom corrected, problems with datasets comparing European economies over historical periods: for example, using "Holland" while leaving out, presumably, not only the rest of the modern Netherlands but the entire area of the exceptional Low Country late medieval industry and wealth (Flanders, Brabant, Hainault, etc.), most of which migrated (along with most of the skilled craftsmen and merchants) to the Netherlands during the 16th-century wars there. The southern Low Countries, until those wars, were the leading centers of European textile manufacture and probably also had the most labor-productive agriculture.

Worse are these and all other attempts to compare historical European "wealth" or "income" to those of non-European cultures before the era of cheap global precious-metals flows (initiated by the exploration explosion) allowed comparison of prices. How do you compare the "wealth" or "income" of a rice-eating and cotton-wearing Chinese farmer to that of a milk-drinking, oat-eating, and wool-clad Scottish peasant? It is neither very useful nor very reliable to try to reduce such cultural and even genetic differences to mere numerical estimates.

So it's no surprise to see such conjectural and subjective estimates subject to major revisions, and I'm sure we'll see many more such revisions, in both directions, in the future. That said, many of the economically important innovations in northwestern Europe long predate not only the industrial revolution, but also the Black Death (Broadberry's new date for the start of the Great Divergence), including the following biological bundle:

(1) heavy dairying

(2) co-evolution of human lactase persistence and cow milk proteins

(3) delayed marriage

(4) hay

(5) greater use of draft animals


These innovations all long predate the Black Death, except that thereafter this biological divergence, especially in the use of draft animals, accelerated. After a brief interruption the lactase-persistent core resumed its thousand-year conversion of draft power from humans and oxen to horses, including super-horses bred to benefit from good fodder crops -- the Shire Horse, Percheron, Belgian, etc., and of course the famous Clydesdale of the beer ads. Draft horses figured prominently in the great expansion of the English coal mines from the 14th to 18th centuries: they both pumped the mines and transported the coal to navigable water. For lack of horsepower for pumping and transport, the Chinese use of coal -- though already well established by the time of Marco Polo's 13th-century visit -- was confined to mines and consumers within short human-porter distance of navigable water, and failed to grow beyond that limit until the coming of the railroad. Similarly, draft horses, alongside the more famous water-mills, played a key role in the early (pre-steam) exponential growth of the English textile industry, the economically dominant feature of the early industrial revolution.

Greater use of draft animals led to higher labor productivity and larger markets for agricultural output, and thus to greater agricultural specialization. Higher labor productivity implies higher per capita income, even if it can't be measured. In civilizations outside Western Europe, by contrast, much less use was made of draft animals, with the result that these effects were confined to within a dozen or fewer miles of navigable water.

Contrariwise, northern Europe has always been at a severe ecological disadvantage to warmer climates when it comes to growing rice, cotton, sugar, and most other economically important crops. However, these crops seem not to have had an anti-Malthusian effect of increasing labor productivity: the increased efficiency of rice in converting solar power to consumable calories, for example, simply led to a greater population rather than a sustained increase in per capita income.