Friday, February 24, 2006

The irreducible complexity of society

Here are some thoughts about irreducible complexity and society. There's quite a bit more I could say about this, and some day I will. This is just a very brief introduction to what I've been thinking about for several years, and some tentative conclusions about it.

Friedrich Hayek called the hubristic idea of social scientists that they could explain and plan the details of society (including economic production) the "fatal conceit." He informally analyzed the division of knowledge to explain why the wide variety of businesses in our economy cannot be centrally planned. "The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess." The economic problem is "a problem of the utilization of knowledge which is not given to anyone in its totality." Austrian economists like Hayek usually eschewed the use of traditional mathematics to describe the economy, because such use assumes that economic complexities can be reduced to a small number of axioms.


Friedrich Hayek, the Austrian economist and philosopher who discussed the use of knowledge in society.

Modern mathematics, however -- in particular algorithmic information theory -- clarifies the limits of mathematical reasoning, including models with infinite numbers of axioms. The mathematics of irreducible complexity can be used to formalize the Austrians' insights. Here is an introduction to algorithmic information theory, and further thoughts on measuring complexity.

Sometimes information comes in simple forms. The number 1, for example, is a simple piece of data. The number pi, although it has an infinite number of digits, is similarly simple, because it can be generated by a short finite algorithm (or computer program). That algorithm fully describes pi. A large random number, however, has an irreducible complexity. Gregory Chaitin discovered a number, Chaitin's omega, which although it has a simple and clear definition (it's just a sum of probabilities that a random computer program will halt) has an irreducibly infinite complexity. Chaitin proved that there is no way to completely describe omega in a finite manner. Chaitin has thus shown that there is no way to reduce mathematics to a finite set of axioms. Any mathematical system based on a finite set of axioms (e.g. the system of simple algebra and calculus commonly used by non-Austrian economists) oversimplifies mathematical reality, to say nothing of social reality.
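The contrast between reducible and irreducible information can be loosely illustrated with an off-the-shelf compressor (only loosely: true Kolmogorov complexity is uncomputable, and a compressor gives merely an upper bound on descriptive complexity). A patterned string shrinks to almost nothing, while random bytes stay nearly as long as they started:

```python
import os
import zlib

# A highly regular "number": a million digits with an obvious pattern.
regular = ("0123456789" * 100_000).encode()

# A random sequence of the same length, irreducibly complex in expectation.
random_bytes = os.urandom(1_000_000)

# zlib only bounds the descriptive complexity from above; as Chaitin
# showed, the true minimum description length is uncomputable.
print(len(zlib.compress(regular, 9)))       # tiny: the pattern is captured
print(len(zlib.compress(random_bytes, 9)))  # roughly the original size
```

The short program `"0123456789" * 100_000` plays the role of pi's generating algorithm; the random bytes play the role of an irreducibly complex number, for which no description much shorter than the data itself exists.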

Furthermore, we know that the physical world contains vast amounts of irreducible complexity. The quantum mechanics of chemistry, the Brownian motions of the atmosphere, and so on create vast amounts of uncertainty and locally unique conditions. Medicine, for example, is filled with locally unique conditions often known only very incompletely by one or a few hyperspecialized physicians or scientists.


Ray Solomonoff and Gregory Chaitin, pioneers of algorithmic information theory.

The strategic nature of the social world means that it would contain irreducible complexity even if the physical world of production and the physical needs of consumption were simple. We can make life open-endedly complicated for each other by playing penny matching games. Furthermore, shared information might be false or deceptively incomplete.
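The penny matching point can be sketched in a few lines (a toy simulation of my own construction, not from any source cited here): any predictable pattern in one player's choices can be exploited by the other, so the only defense is randomness, and the contest of prediction and counter-prediction has no simple closed form.

```python
import random

def matching_pennies(matcher, hider, rounds=10_000):
    """The matcher wins a penny when the choices match, the hider when they differ."""
    score = 0
    for _ in range(rounds):
        score += 1 if matcher() == hider() else -1
    return score

biased = lambda: 'H' if random.random() < 0.7 else 'T'  # predictable: favors heads
exploit = lambda: 'H'                                   # always guess the bias
uniform = lambda: random.choice('HT')                   # unpredictable play

# A biased player is systematically exploitable; only randomization resists
# prediction. Strategic play of this kind generates irreducible complexity.
print(matching_pennies(exploit, biased))   # strongly positive for the matcher
print(matching_pennies(exploit, uniform))  # near zero on average
```

Each player's attempt to model the other, and to avoid being modeled, is one mechanism by which social interaction stays irreducibly complex even when the underlying physical stakes (a penny) are trivial.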

Even if we were perfectly honest and altruistic with each other, we would still face economies of knowledge. A world of more diverse knowledge is far more valuable to us than a world where we all had the same skills and beliefs. This is the most important source of the irreducible complexity of knowledge: the wealthier we are, the greater the irreducibly complex amount of knowledge (i.e. diversity of knowledge) society has about the world and about itself. This entails more diversity of knowledge in different minds, and thus the greater difficulty of coordinating economic behavior.

The useful knowledge in the world is far vaster than our ability to store, organize, and communicate it. One limitation is simply how much our brains can hold. There is far more irreducible and important complexity in the world than can be held in a single brain. For this reason, at least some of this complexity is impossible to share between human minds.

The channel capacities of human language and visual comprehension are further limits. These often make it impossible to share irreducibly complex knowledge between human minds even when a mind could in theory store and comprehend that knowledge. The main barrier here is the inability to articulate tacit knowledge, rather than limitations of technology. However, the strategic and physical limits to reducing knowledge are of such vast proportions that most knowledge could not be fully shared even with ideal information technology. Indeed, economies of knowledge suggest that knowledge would be even less widely shared, in proportional terms, in a very wealthy world of physically optimal computer and network technology than it is today: although the absolute amount of knowledge shared would be far greater, the sum total of knowledge would be far greater still, and thus the proportion optimally shared would be smaller.

The limitations on the distribution of knowledge, combined with the inexhaustible sources of irreducible complexity, mean that the wealthier we get, the greater the unique knowledge stored in each mind and shared with few others, and the smaller the fraction of knowledge available to any one mind. There is a far greater variety of knowledge "pigeons" which must be stuffed into the same brain "pigeonholes," and thus less room for "cloning pigeons" through mass media, mass education, and the like. Wealthier societies take greater advantage of long tails (i.e., they satisfy a greater variety of preferences) and thus become even less plannable than poorer societies that are still focused on simpler areas such as agriculture and Newtonian industry. More advanced societies increasingly focus on areas such as interpersonal behaviors (sales, law, etc.) and medicine (the complexity of the genetic code is just the tip of the iceberg; the real challenge is the irreducibly complex quantum effects of biochemistry, for example the protein folding problem). Both interpersonal behaviors and medicine are areas where our demand is insatiable and supply is based on the internalization of vast irreducible complexity. This is not to say that further breakthroughs in simplifying the world from which we are supplied, such as those of Newtonian physics and the industrial revolution, are not possible; but to achieve them we will have to search through vastly larger haystacks. Furthermore, once these breakthroughs are made, supply will become cheap and demand quickly satiated; then we will be back to trying to satisfy our higher-order and inexhaustible preferences using a supply of largely irreducible complexity.

Saturday, February 18, 2006

The constitutionality of federal charity


We have seen that U.S. Rep. David Crockett, later a hero of the Alamo, argued passionately against the propriety and constitutionality of federal charity. Crockett told his fellow Representatives that taxpayer money was "not yours to give."

During the first several decades after the U.S. Constitution was enacted, constitutional issues were debated and decided far more often in Congress than in the Supreme Court. Crockett was part of a long line of early and eminent Congressional constitutionalists who argued against the constitutionality of federal charity. Another was James Madison, one of the main drafters of the Constitution as well as one of its main proponents in the Federalist Papers. Opposing the moderate Federalists and Republicans were radical Federalists who argued, following Alexander Hamilton, for practically unlimited federal powers. According to Crockett, Madison, and many others, Congress had no power under any clause of the Constitution to allocate money for charity, except when such charity was necessary and proper (using a fairly narrow construction of that phrase) for implementing an enumerated Congressional power such as funding and governing the armed forces, paying federal debts, executing the laws, or implementing treaties.

David P. Currie's book The Constitution in Congress chronicles these early constitutional debates and is a must-read for constitutional scholars. It contains debates by many House members (including James Madison, Elbridge Gerry, and other founders and drafters) on the constitutionality of many different kinds of legislation. Currie cites the Annals of the early Congress, which are now online. Currie discusses the following two debates on federal charity:

(1) In 1793, a number of French citizens were driven out of the French colony of Saint-Domingue on Hispaniola (now Haiti), landed in Baltimore, and petitioned Congress for financial assistance. Rep. John Nicholas expressed doubt that Congress had the constitutional authority "to bestow the money of their constituents on an act of charity."[1] Rep. Abraham Clark responded that "in a case of this kind, we are not to be tied up by the Constitution."[2] Rep. Elias Boudinot, another radical Federalist, argued that the general welfare clause authorized this kind of spending.[3] Rep. James Madison resolved the debate by observing that the U.S. owed France money from the Revolutionary War. Madison disagreed with the radical Federalist interpretation of the general welfare clause and argued against setting a dangerous precedent for open-ended spending. Congress could, however, provide money to the French refugees in partial payment of these debts, and thus constitutionally under Congress' Article I power to pay federal debts.[4]

(2) The second opportunity to debate the constitutionality of federal charity occurred when a fire destroyed much of the previously thriving city of Savannah, Georgia in 1796. The fire was referred to as a "national calamity." Rep. John Milledge related how "[n]ot a public building, not a place of public worship, or of public justice" was left standing. Rep. William Claiborne again raised the radical Federalist argument that the measure to help fund the rebuilding of Savannah was constitutional under the general welfare clause.[5] Reps. Nathaniel Macon[6], William Giles[7], Nicholas[8], and others argued that allocating federal funds to relieve Savannah would violate the Constitution, since such charity was not authorized by any enumerated power in the Constitution. "Insurance offices," not the federal government, were "the proper securities against fire," according to Macon.[8] Rep. Andrew Moore argued "every individual citizen could, if he pleased, show his individual humanity by subscribing to their relief, but it was not Constitutional for them to afford relief from the Treasury."[9] The measure to provide relief to Savannah from federal tax dollars was then defeated 55-24.[10]

[1] David P. Currie, The Constitution in Congress, citing 4 Annals at 170, 172.
[2] Id., citing 4 Annals at 350.
[3] Id., citing 4 Annals at 172.
[4] Id., citing 4 Annals at 170-71.
[5] Id. at 222, citing 6 Annals at 1717.
[6] Id., citing 6 Annals at 1724.
[7] Id., citing 6 Annals at 1723.
[8] 6 Annals at 1718 (online).
[9] Id.
[10] Currie, citing 6 Annals at 1727.

Here is a direct link to the Annals of Congress. Here is the start of the Savannah debate and here are some of Rep. Macon's comments during the debate.

(n.b. Currie's citations do not necessarily match the page numbers in the online version of the Annals).


Images: Rep. James Madison (top), Rep. Nathaniel Macon (bottom).

Thursday, February 16, 2006

Some lost frontier wisdom


Davy Crockett served in the U.S. House of Representatives from 1827-31 and 1833-35. He later fought for the Texas Revolution and died at the Alamo. According to Edward Ellis's account, while in the House Crockett made this speech on the propriety and constitutionality of Congress acting like a charity with the taxpayers' money.

Thursday, February 09, 2006

The Roundhead Revolution?

Gregory Clark (via Marginal Revolution) has a new and more comprehensive data set on real wages in England from 1209 to the present. Up to about 1600, it is consistent with the Malthusian theory that real wages varied inversely with population. But then from at least 1630, there is a remarkable and unprecedented departure from the Malthusian curve formed by the ratio of real wages to population. Real wages rose over 50% from 1630 to 1690 despite rising population. There is then a stable period from 1730 to 1800 with a new curve parallel to but offset from the original Malthusian curve, and then a second startling departure from 1800 to today reflecting the end of this last Malthusian epoch (ironically just as Malthus was writing).
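The Malthusian baseline, and what makes Clark's departure from it remarkable, can be sketched as a toy inverse relation between real wages and population (the numbers below are purely illustrative, not Clark's data): on a Malthusian curve, wages rise only when population falls, so wages and population rising together can only mean the curve itself has shifted.

```python
# Toy Malthusian model (illustrative numbers only; not Clark's data).
# On a Malthusian curve, real wage w = A / N for population N and a
# fixed productivity level A: wages rise only if population falls.
def malthusian_wage(population, productivity=100.0):
    return productivity / population

# On the curve: more people, lower wages.
assert malthusian_wage(2.0) < malthusian_wage(1.0)

# The 17th-century departure: population up AND wages up, possible only
# if productivity itself shifted, i.e. a new curve offset from the old.
w_1630 = malthusian_wage(2.0, productivity=100.0)  # old curve
w_1690 = malthusian_wage(2.5, productivity=200.0)  # new curve, more people
print(w_1690 > w_1630)  # off the old Malthusian curve
```

The 1730-1800 plateau in Clark's data corresponds, in this toy picture, to settling onto the new offset curve; the post-1800 departure is a second, larger shift.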

This data contradicts the idea that nothing remarkable happened to the economy before the industrial revolution got going in the late 18th century. It also contradicts the theory that a qualitative shift occurred due to the Glorious Revolution of 1688-89, in which Parliament gained more power, some Dutch financial practices were introduced, and soon thereafter the Bank of England was founded.

Rather, the theory that comes most readily to mind to explain an economic revolution in 17th century England is the rather un-PC theory of Max Weber. I'll get back to that, but first Clark debunks the theory of Becker et al. regarding family investment. According to this theory, parents choose between the strategy of having a large number of children and having a small number of children in whom they invest heavily, teaching them skills. This is basically the spectrum between "r strategy" and "K strategy" along which all animals lie, except that with humans there is a strong cultural component in this choice (or at least Becker et al. claim or assume that family size has always been a cultural choice for humans -- see my comments on this below).

According to this family investment theory, until quite recently (perhaps until the industrial revolution) having more children was the better bet: the overall underdevelopment of the economy limited the reward for skill, so the world was caught in a Malthusian trap of unskilled labor. Only at this recent point in history did rewards for skilled labor rise, making it worthwhile for parents to invest in skills (e.g. literacy). Clark's data contradict this theory: they show that the ratio of wages for skilled to unskilled laborers did not rise either in the 17th century revolution or during the industrial revolution, and was actually in substantial decline by 1900. Indeed, a decline in demand for skilled labor is what Adam Smith predicted would happen with increasing specialization in manufacturing. Thus there was no increase in the reward for skill investment that could have pulled us out of the Malthusian trap, and Clark accordingly rejects the family investment theory.

I think, however, that part of the family investment theory can be rescued. Clark's and other data on literacy demonstrate a substantial rise in literacy just prior to and at the initial stages of the qualitative change in productivity. Literacy in England doubled between 1580 and 1660. Parents were in fact making substantial investments in literacy despite an apparent lack of incentive to do so. Why?

My own tentative theory to explain Clark's data combines Becker, Weber, and the observations of many scholars about the cultural importance of printing. Printing was invented in the middle of the 15th century. Books were cheap by the end of that century. Thereafter they just got cheaper. At first books printed en masse what scribes had long considered to be the classics. Eventually, however, books came to contain a wide variety of useful information important to various trades. For example, legal cases became much more thoroughly recorded and far more easily accessible, facilitating development of the common law. Similar revolutions occurred in medicine and a wide variety of trades, and undoubtedly eventually occurred in the building trades that were the source of Clark's data.

Printing played a crucial role in the Reformation which saw the schisms from the Roman Church and the birth in particular of Calvinism. The crucial thing to observe is that, while per Clark the gains from investment in skills did not increase relative to unskilled labor, with the availability of cheap books and with the proper content the costs of investing in the learning did radically decrease for many skills. Apprenticeships that used to take seven years could be compressed into a few years reading from books (much cheaper than bothering the master for those years) combined with a short period learning on the job. This wouldn't have been a straightforward process as it required not just cheap books with specialized content about the trades, but some redesigning of the work itself and up-front investment by parents in their children's literacy. Thus, it would have required major cultural changes. That is why, while under my theory cheap books were the catalyst that drove mankind out of the Malthusian trap, many institutional innovations, which took over a century to evolve, had to be made to take advantage of those books to fundamentally change the economy.

Probably the biggest change required is that literacy entails a very large up-front investment. In the 17th century that investment would have been undertaken primarily by the family. Such an investment requires delayed gratification -- the trait Weber considered crucial to the rise of capitalism and derived from Calvinism. However, Calvinist delayed gratification under my revised theory didn't cause capitalism via an increased savings rate, as Weber et al. postulated, but rather caused parents to undertake the costly practice of investing in their children's literacy. Once that investment was made, the children could take advantage of books to learn skills with unprecedented ease and to skill levels not previously possible. So the overall investment in skills did not increase; instead, the focus of that investment shifted from long apprenticeships of young adults to the literacy of children. At the same time, the productivity of that investment greatly increased, and the result was overall higher productivity.

Investment in literacy would have both enabled and been motivated by the famous Protestant belief that people should read the Bible for themselves rather than depending on a priest to read it for them. This process would have started in the late 15th century among an elite of merchants and nobles, giving rise to the Reformation, but might not have propagated amongst the tradesmen Clark tracks until the 17th century. It is with the spread of Huguenot, Puritan, Presbyterian, etc. literacy culture to tradesmen that we see the 17th century revolution in real wages and the first major move away from the Malthusian curve.

This theory that the Malthusian trap was evaded by a sharp increase in the productivity of skill investment explains why population growth did not fall in the 17th century as Becker et al. would predict. Cheap books substantially lowered the cost of skill investment, so the productivity gains could come without increasing the overall investment in skills and thus without lowering family size.

The family size/skill investment tradeoff is more likely to explain the second and sharper departure from the Malthusian curve starting around 1800. However, again I think Becker is wrong insofar as this was not due to an increase in returns on skill (Clark's data debunk this), but due to (1) technology improving faster than humans can have children, and (2) the rise of birth control (without which there is little control, by choice or culture, over family size -- no cultural family-size choice theory really works before the widespread use of birth control).

The catalyst in moving away from the Malthusian curve was thus not, per Becker et al., an increase in the returns on investments in skills, but rather a decrease in the costs of such investments once cheap books teaching specialized trades were available and the initial hill of literacy was climbed by Calvinist families. If the Calvinist literacy-investment theory ("The Roundhead Revolution") is true, we should see a similar departure from the Malthusian curve at the same time or perhaps even somewhat earlier in the Netherlands, and also in Scotland, but probably not in the Catholic countries of that period.

Moot court fun

I saw a great constitutional law moot court at GWU today, with Chief Justice John Roberts presiding over a panel that also included 2nd Circuit Judges Guido Calabresi and Sonia Sotomayor. The Chief Justice managed to flummox both the petitioners and the respondents with his hypotheticals (factual scenarios that posed problems for their legal theories). More here.

Wednesday, February 08, 2006

Time, sacrifice, and value

Here is a modified excerpt from A Measure of Sacrifice:

Mechanical clocks, bell towers, and sandglasses, a combination invented in 13th century Italy, provided the world’s first fair and fungible measure of sacrifice. So many of the things we sacrifice for are not fungible, but we can arrange our affairs around the measurement of the sacrifice rather than its results. Merchants and workers alike used the new precision of clock time to prove, brag, and complain about their sacrifices.

In a letter from a fourteenth-century Italian merchant to his wife, Francesco di Marco Datini invokes the new hours to tell her of the sacrifices he is making: “tonight, in the twenty-third hour, I was called to the college,” and, “I don’t have any time, it is the twenty-first hour and I have had nothing to eat or drink.”[4] Like many cell phone callers today, he wants to reassure her that he is spending the evening working, not wenching.

A major application of clocks was to schedule meeting times. Being a city official was an expected sacrifice as well as a source of political power. To measure the sacrifice, as well as to coordinate meeting times more tightly, the modern clocks of the fourteenth century came in handy. Some regulations of civic meetings of this period point out that measuring the sacrifice was important, regardless of the variable output of the meetings. In Nuremberg, the Commission of Five “had to observe the sworn minimum meeting time of four ‘or’ (hours) per day, regardless of whether or not they had a corresponding workload. They were also obliged to supervise their own compliance by means of a sandglass.”[4]

As commerce grew, more quantities needed their value to be measured, leading to more complications and more opportunities for fraud. When too many quantities must be measured, measurement disputes multiply.

Measuring something that actually indicates value is difficult. Measuring something that indicates value and is immune to spoofing is very difficult. Labor markets did not come easily; they are the result of a long evolution in how we measure value.

Most workers in the modern economy earn money based on a time rate -- the hour, the day, the week, or the month. In agricultural societies slavery, serfdom, and piece rates were more common than time-rate wages. Time measures input rather than output. Our most common economic relationship, employment, arranges our affairs around the measurement of the sacrifice rather than its results.

To create anything of value requires some sacrifice. To successfully contract we must measure value. Since we can’t, absent a perfect exchange market, directly measure the economic value of something, we may be able to estimate it indirectly by measuring something else. This something else anchors the performance – it gives the performer an incentive to optimize the measured value. Which measures are the most appropriate anchors of performance? Starting in Europe by the 13th century, that measure was increasingly a measure of the sacrifice needed to create the desired economic value.

Actual opportunity costs are very hard to measure, but at least for labor we have a good proxy measure[14] of the opportunities lost by working -- time. This is why paying somebody per hour (or per month, while noticing how often a worker is around the office) is so very common. It's far cheaper to measure time, thus estimating the worker's opportunity costs, than the actual value of output.

Time as a proxy measure for worker value is hardly automatic – labor is not value. A bad artist can spend years doodling, or a worker can dig a hole where nobody wants a hole. Arbitrary amounts of time can be spent on activities that have no value for anybody except, perhaps, the worker himself. Improving the productivity of the time-rate contract required two breakthroughs: first, creating the conditions under which sacrifice is a better estimate of value than piece rates or other measurement alternatives; and second, the ability to measure the sacrifice with accuracy and integrity.

Three of the main alternatives to time-rate wages are eliminating worker choice (i.e., serfdom and slavery), commodity market exchange, and piece rates. When eliminating choice, masters and lords imposed high exit costs, often in the form of severe punishments for escape, shirking, or embezzlement. Serfs were usually required to produce a particular quantity of a good (where the good can be measured, as it often can be in agriculture) to be expropriated by the lord or master. Serfs kept for their personal use (not for legal trade) either a percentage of their output or the marginal output, i.e. the output above and beyond what they owed, by custom or coercion, to their lord.

Where quantity was not a good measure of value, close observation and control were kept over the laborer, and the main motivator was harsh punishments for failure. High exit costs also provided the lord with a longer-term relationship, thus over time the serf or slave might develop a strong reputation for trustworthiness with the lord. The undesirability of servitude, from the point of view of the laborer at least, is obvious. Serfs and slaves faced brutal work conditions, floggings, starvation, very short life spans, and the inability to escape no matter how bad conditions got.

Piece rates measure directly some attribute of a good or service that is important to its value – its quantity, weight, volume, or the like – and then fix a price for it. Guild regulations which fixed prices often amounted to creating piece rates. Piece rates seem the ideal alternative for liberating workers, but they suffer from two drawbacks. First, the outputs of labor depend not only on effort, skills, etc. (things under the control of the employee), but also on things outside the employee's control. The employee wants something like insurance against these vagaries of the work environment. The employer, who has more wealth and knowledge of market conditions, takes on these risks in exchange for profit.

In an unregulated commodity market, buyers can reject or negotiate downwards the price of poor quality goods. Sellers can negotiate upwards or decline to sell. With piece rate contracts, on the other hand, there is a fixed payment for a unit of output. The second main drawback to piece rates is thus that they motivate the worker to put out more quantity at the expense of quality. This can be devastating. The tendency of communist countries to pay piece rates, rather than hourly rates, is one reason that, while the Soviet bloc’s quantity (and thus the most straightforward measurements of economic growth) was able to keep up with the West, quality did not (thus the contrast, for example, between the notoriously ugly and unreliable Trabant of East Germany and the BMWs, Mercedes, Audis, and Volkswagens of West Germany).
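The quantity-over-quality distortion can be sketched in a toy model (a construction of mine for illustration, not drawn from the sources cited in this post): when pay counts only units, the pay-maximizing effort allocation puts nothing into quality, even though the employer would prefer a balance.

```python
# Toy model of the piece-rate distortion (illustrative only; not from the
# sources cited in this post). A worker splits one unit of effort between
# quantity q and quality (1 - q).
def piece_rate_pay(q, rate=10.0):
    return rate * q                # quality is unpriced under a piece rate

def employer_value(q):
    return 10.0 * q * (1.5 - q)    # value falls off as quality is skimped

# The worker maximizes pay, not value, so all effort goes into quantity:
best_q = max((q / 100 for q in range(101)), key=piece_rate_pay)
print(best_q)                                         # 1.0: all quantity
print(employer_value(0.75) > employer_value(best_q))  # True: employer loses
```

A time wage plus delayed quality enforcement (raises, firing) sidesteps this by not pricing quantity directly at all, which is the tradeoff the following paragraphs describe.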

Thus with the time-rate wage the employee is insured against vagaries of production beyond his control, including selling price fluctuations (in the case of a market exchange), or variation in the price or availability of factors of production (in the case of both market exchange or piece rates). The employer takes on these risks, while at the same time through promotion, raises, demotions, wage cuts or firing retaining incentives for quality employee output.

Besides lacking implicit insurance for the employee, another limit to market purchase of each worker’s output is that it can be made prohibitively costly by relationship-specific investments. These investments occur when workers engage in interdependent production -- as the workers learn the equipment or adapt to each other. Relationship-specific investments can also occur between firms, for example building a cannon foundry next to an iron mine. These investments, when combined with the inability to write long-term contracts that account for all eventualities, motivate firms to integrate. Dealing with unspecified eventualities then becomes the right of the single owner. This incentive to integrate is opposed by the diseconomies of scale in a bureaucracy, caused by the distribution of knowledge, which market exchange handles much better.[13] The economic tradeoffs that produce the observed distribution of firm sizes in a market -- i.e., the number of workers involved in an employment relationship instead of selling their wares directly or working for smaller firms -- are discussed in [11,12].

The main alternative to market exchange of output, piece rate, or coerced labor (serfdom or slavery) consists of the employers paying by sacrifice -- by some measure of the desirable things the employee foregoes to pursue the employer’s objectives. An hour spent at work is an hour not spent partying, playing with the children, etc. For labor, this “opportunity cost” is most easily denominated in time – a day spent working for the employer is a day not spent doing things the employee would, if not for the pay, desire to do. [1,9]

Time doesn’t specify costs such as effort and danger. These have to be taken into account by an employee or his union when evaluating a job offer. Worker choice, through the ability to switch jobs at much lower costs than with serfdom, allows this crucial quality control to occur.

It’s usually hard to specify customer preferences, or quality, in a production contract. It’s easy to specify sacrifice, if we can measure it. Time is immediately observed; quality is eventually observed. With employment via a time-wage, the costly giving up of other opportunities, measured in time, can be directly motivated (via daily or hourly wages), while quality is motivated in a delayed, discontinuous manner (by firing if employers and/or peers judge that quality of the work is too often bad). Third parties, say the guy who owned the shop across the street, could observe the workers arriving and leaving, and tell when they did so by the time. Common synchronization greatly reduced the opportunities for fraud involving that most basic contractual promise, the promise of time.

Once pay for time is in place, the basic incentives are in place – the employee is, verifiably, on the job for a specific portion of the day – so he might as well work. He might as well do the work, both quantity and quality, that the employer requires. With the calendar and the city bells measuring the opportunity costs of employment, to be compensated by the employer, the employer can focus his observations on verifying the specific quantity and qualities desired, and the employee (to gain raises and avoid getting fired) focuses on satisfying them. So with the time-wage contract, perfected by northern and western Europeans in the late Middle Ages, we have two levels of protocol in this relationship: (1) the employee trades away other opportunities to commit his time to the employer – this time is measured and compensated; (2) the employee is motivated, (positively) by opportunities for promotions and wage rate hikes and (negatively) by the threat of firing, to use that time, otherwise worthless to both employer and employee, to achieve the quantity and/or quality goals desired by the employer.[1]

References:

[1] A good discussion of time-wage vs. piece-rate vs. other kinds of employment contracts can be found in McMillan, John, Games, Strategies, and Managers, Oxford University Press, 1992.

[4] My main source for clocks and their impact is Dohrn-van Rossum, History of the Hour: Clocks and Modern Temporal Orders, University of Chicago Press, 1996.

[9] The original sources for much of the time-rate contract discussion are Seiler, Eric (1984), "Piece Rate vs. Time Rate: The Effect of Incentives on Earnings", Review of Economics and Statistics 66: 363-76, and Ehrenberg, Ronald G., editor (1990), "Do Compensation Policies Matter?", special issue of Industrial and Labor Relations Review 43: 3-S to 273-S.

[11] Coase, R.H., The Firm, the Market and the Law, University of Chicago Press 1988

[12] Williamson, Oliver, The Economic Institutions of Capitalism, Free Press 1985

[13] Hayek, Friedrich, "The Use of Knowledge in Society", American Economic Review 35(4), 1945.

[14] The insight that we measure value via proxy measures is due to Yoram Barzel.

Where are you now?

A new crop of web services in the U.K. allows you to locate and track people via their mobile phones. The phone companies themselves, emergency call centers, and anybody else authorized have long been able to do this in the U.S., the U.K., and elsewhere, but now it's going retail. There is also this service, which at least alerts the trackee when his or her location is being queried. This partially addresses the attack of borrowing somebody's phone long enough to "give consent" and then tracking them with the service. Via Ian Grigg's Financial Cryptography.

Counter-intuitively, this development may enhance privacy, since the publicity that accompanies retail services will help prevent people from being in denial about the functions of their cell phone. Even if it doesn't enhance privacy in this manner, it may at least help take us from an Orwellian model of surveillance (the state behind a one-way mirror) to a Brinian model (peer to peer surveillance). OTOH, it may just fulfill the daydream of many a boss of being able to track employees 24x7.

From accounting to mathematics

I've noticed that there are strong parallels between accounting and two important areas of mathematics -- elementary algebra and the calculus. I suspect these parallels reflect the origins of algebra, and of some of the basic concepts behind the calculus, in accounting. Readily available references on the history of accounting and mathematics may be too scant to prove it, but I think the parallels are quite suggestive.

A basic parallel between accounting and algebra is the balance metaphor. The origin of this metaphor was almost surely the balance scale, an ancient commercial tool for measuring the weight of precious metals and other commodities. Standard weights would be added or removed from the scale until balance with the commodity to be weighed was achieved.

Starting with the "accounting equation," assets = liabilities + equity, the strategy of accounting, as with algebra, is to achieve numerical balance by filling in missing quantities. As far back as the Sumerians the need for balance in accounting was widely understood, but it was expressed in purely verbal or purely ledger form rather than with an algebraic notation. (Similarly, logic was expressed in ordinary language rather than with its own abstract symbolic notation until Gottlob Frege in the 19th century.) Furthermore, the examples of algebraic work left by the Sumerians, Babylonians, and Indians, and indeed up to the time of Fibonacci and Pacioli, typically involved accounting problems.
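The "fill in the missing quantity" strategy can be shown in a few lines of code. The function name and the figures here are hypothetical, chosen only to illustrate the balance metaphor:

```python
def solve_accounting_equation(assets=None, liabilities=None, equity=None):
    """Fill in the one missing quantity in assets = liabilities + equity,
    just as an algebraist solves for the unknown that restores balance.
    Exactly one argument should be left as None."""
    if assets is None:
        return liabilities + equity
    if liabilities is None:
        return assets - equity
    if equity is None:
        return assets - liabilities
    raise ValueError("exactly one quantity must be unknown")

# e.g. a merchant who knows assets and liabilities recovers equity:
equity = solve_accounting_equation(assets=1000, liabilities=600)  # 400
```

The same move works for any of the three terms, which is the sense in which the ledger and the balance scale anticipate solving for x.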

Calculus largely has its origins in the study of change and how a dynamic view of the world relates to a static view of the world. Newton called calculus the study of "fluxions," or units of change. (This is a more descriptive label for the field than "calculus" which simply means "calculating stone" and has been used to refer to a wide variety of areas of mathematics and logic). Long before Newton, the relationship between the static and the dynamic was probably first conceptualized as the relationship between the balance sheet and the income statement. The balance sheet, which can be summarized as

assets = liabilities + equity

is the "integral" of the income statement which can be summarized as

revenues = expenses + net income

(in other words, the income statement is the "derivative" of the balance sheet: the change in the balance sheet over a specific period of time).
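A minimal numerical sketch of this stock-and-flow relationship, with hypothetical figures of my own:

```python
# Balance sheets at the start and end of a period: the "stocks".
opening = {"assets": 1000.0, "liabilities": 600.0, "equity": 400.0}
closing = {"assets": 1300.0, "liabilities": 750.0, "equity": 550.0}

# The income statement is the "derivative": the flow over the period.
revenues = 500.0
expenses = 350.0
net_income = revenues - expenses  # 150, which flows into equity

# "Integrating" the flow onto the opening stock recovers the closing stock.
assert closing["equity"] == opening["equity"] + net_income

# And each statement balances in its own right:
assert opening["assets"] == opening["liabilities"] + opening["equity"]
assert closing["assets"] == closing["liabilities"] + closing["equity"]
assert revenues == expenses + net_income
```

The balance sheet is a snapshot at an instant; the income statement is the change between two snapshots, which is exactly the static/dynamic relationship the calculus later formalized.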

Earlier civilizations had mapped only large scales of time to a spatial visualization, in the form of the calendar. Diaries and accounting ledgers ordered by time also crudely map time into space. The sundial mapped time into space, but in a distorted manner. Medieval Europeans, with the invention of the mechanical clock and of musical notation that included rhythm, expanded and systematized the mapping of time to a spatial visualization with consistent units. William of Occam and other Scholastics visualized time as a spatial dimension and other phenomena (including temperature, distance, and moral qualities) as orthogonal dimensions to be graphed against time. Occam then used the methods of Archimedes to calculate absolute quantities from the area under such curves, but it awaited Newton and Leibniz, building on the analytic geometry of Descartes (which systematically related algebraic equations to spatial curves), to create a systematic calculus. Earlier, in India, algebra and much of the differential calculus were developed within or alongside a rich business culture in which bookkeeping using "Arabic" numerals (also invented in India) was widespread. I conclude that the conceptual apparatus behind much traditional mathematics originated in commercial accounting techniques.

A big exception to this is geometry, which developed primarily from the need to define property rights in the largest form of wealth from the Neolithic until quite recently, namely farmland; but that is another blog post for another day.

Wednesday, February 01, 2006

Executive power and the interpretation of laws

While I'm on the subject of applying software metaphors to legal code, here is a short article I recently wrote on the principle of least authority.

I go into issues of executive versus legislative power in the U.S. in more depth in Origins of the Non-Delegation Doctrine, with extensive commentary on this subject from both Federalists and anti-Federalists. (As you may recall, the Federalists were the primary movers behind the original Constitution, and the anti-Federalists were the primary movers behind the Bill of Rights. The Constitution was ratified by most states conditional on a Bill of Rights, which was later pushed through Congress by anti-Federalists and compromising Federalists such as James Madison.)

The paper also further discusses how the protection of liberties was the prime motivation for mechanisms such as checks and balances in the Constitution. As Locke said:

"[w]e agree to surrender some of our natural rights so that government can function to preserve the remainder. Absolute arbitrary power, or governing without settled standing laws, can neither of them consist with the ends of society and government, which men would not quit the freedom of the state of nature for, nor tie themselves up under, were it not to preserve their lives, liberties, and fortunes; and by stated rules of right and property to secure their peace and quiet."[1]

[1] John Locke, The Second Treatise On Government XI:137 (1691).

Mechanism, not policy

If Mike Huben is saying that the law should be about mechanism, not policy, then I heartily agree. Yet Mike gets this philosophy blindingly wrong when applying it to the law.

It is the highly evolved common law that got this wisdom most right:

Contract law: provide mechanisms for making and enforcing contracts, not policy about who can contract with whom for what.

Property law: provide mechanisms for transferring, collateralizing, devising, and protecting the quiet use of property, rather than policy dictating who can build what where.

Tort law: use the edges or surfaces of the body, property, etc. as boundaries others may not cross without consent of the possessor, rather than dictating detailed rules for each possible harm.

Et cetera. Under the common law we "write" (sometimes literally, as with contracts or wills) the policy of our own lives. The common law "create[s] user freedom by allowing them to easily experiment..." as Mike says about X-Windows. Our judges are supposed to accumulate a common law, and our legislatures to write statutes, that together form a "systems code" for our interactions with others. We, by our own agreements with those we specifically deal with, "write" within these basic mechanisms the rules that govern our interactions with these others. The guiding philosophy of our legal code should indeed be mechanism, not policy.
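In software terms, the mechanism-not-policy idea might be sketched as a generic enforcement mechanism that takes the parties' own terms as a parameter. The class and the example terms below are hypothetical illustrations of mine, not a real legal API:

```python
from typing import Callable

class Contract:
    """Mechanism: enforce whatever terms the parties wrote, without the
    'legal system' dictating what those terms may be."""

    def __init__(self, terms: Callable[[dict], bool]):
        self.terms = terms  # the parties' own policy, supplied as code

    def perform(self, facts: dict) -> str:
        # The mechanism only checks and enforces; it expresses no opinion
        # about what the parties chose to agree on.
        return "honored" if self.terms(facts) else "breached"

# Policy: written by the contracting parties, not by the mechanism's designer.
delivery_deal = Contract(lambda f: f["delivered"] and f["paid"] >= 100)

print(delivery_deal.perform({"delivered": True, "paid": 120}))   # honored
print(delivery_deal.perform({"delivered": False, "paid": 120}))  # breached
```

As with contract law, the `Contract` mechanism is the same for every pair of parties; only the terms they plug into it differ.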

Of course, law, like software, is not quite this simple, and judges and legislatures sometimes (though not nearly as often as they would like to think) must make hard policy choices for exceptions and edge cases. Also, as Mike proves in his essay with absurd and awful statements such as "I think that the rights specified in the Constitution and Bill of Rights are there for purposes of mechanism, not to directly protect individual rights," what counts as "mechanism" and what counts as "policy" is often in the mind of the beholder. In this case the Founders (who for the Bill of Rights were the anti-Federalists, not, as Mike suggests, the Federalists generally) clearly had in mind that the main purpose of the mechanisms was to protect individual rights, especially as those rights had evolved under the common law, which Locke and the Constitution summarized as "life, liberty, and property." "Life" and "liberty" occur three times in the Constitution; "property" is protected in four different places. Mike's beloved welfare state, on the other hand, occurs nowhere in the Constitution. Much of the Constitution was intended to protect the mechanisms of the common law from the hubristic policymaking of legislatures and the arbitrary actions of government officials.

The recent great strides of progress in human history, such as the Industrial Revolution, the Information Revolution, and the abolition of slavery, were led by common law countries. Countries lacking the common law have often fallen into authoritarianism and totalitarianism. The common law has proven that mechanism, not policy, is a very wise philosophy for law as well as for systems software. Mechanism, not policy, is a philosophy all of us should aspire to when opining or voting on laws, and a philosophy judges should apply when interpreting them.