Wednesday, March 21, 2007

The trouble with science

Science has revolutionized life since at least the age of exploration, through the industrial revolution, and to an unprecedented degree in the 20th century. Science generally, and physics in particular, got a vast boost in credibility and in government funding following the ability of physicists to develop weapons of unprecedented power in the Manhattan Project. Scientists and their engineering brethren also developed modern electronics, sent men and machines into the cosmos, and much else that would have seemed like miracles and prophecy in prior centuries. Sciences such as psychology and evolutionary theories of behavior have at least potentially revolutionized our understanding of ourselves. Now we have a large number of self-styled "social sciences" that attempt to understand social behavior and societies through scientific methods. Instead of priests prophesying and invoking miraculous thunderbolts through mumbo-jumbo, our modern scientific priesthood helps create real technology and tells us what to think about social systems and political options by what seems to most people (and even to most scientists outside the particular specialty in question) equally mystical mumbo-jumbo.

This scientific elite is supposed to be quite different from the priesthoods of old because it is supposed to adhere to scientific methods rather than superstition and dogma. The scientific method developed from several sources, but one that is particularly interesting is the law of evidence in medieval and Renaissance Continental Europe. In English law, issues of fact were (and are) determined by a jury, and the law of evidence is all about the general biases of juries and thus what lawyers are and are not allowed to present as evidence to them -- the basic rule for overcoming juror bias being that the relevance and integrity of the information must outweigh its potential to prejudice the jurors. But in the neo-Roman law that has dominated the Continent from the Late Middle Ages to this day, juries were rare and judges determined issues of fact as well as law. Thus there developed in Continental law elaborate doctrines about how judges were supposed to weigh factual evidence.

Many Renaissance and Baroque era scientists, such as Galileo, Leibniz, and Pascal, had legal training, and this Continental law of evidence was reflected in their methods. Most other early scientists had been exposed to law-derived doctrines simply by attending universities, many of whose doctrines derived from the original universities, which were essentially law schools. Soon, however, the scientific community was independently evolving its own cultural norms from this starting point. The ideal was to seek the truth. Experiment became the sine qua non of scientific credibility, along with mathematical rigor and important applications in navigation, engineering, and medicine. Scientific funding came from a variety of sources; when governments funded scientists they were expected to solve important problems such as those raised by navigation of the seas, not merely to theorize. After the Enlightenment, governments started to separate themselves from the social dogmas of their day -- religions -- by secularizing government and allowing freedom of religion.

Today a wide variety of important political issues are dominated by ideas from scientific communities (or at least communities that style themselves as scientific): economists, climate scientists, and many others. But there is no separation of science from government. Like the state-sponsored religions of yore, most modern scientists derive both their education and their ongoing livelihood from government funding of the theories they are taught and on which they work.

The old state-sponsored religions, and the resulting ideas about politics and society, were funded by governments. Not surprisingly, as such governments took over religion it became sacrilegious to criticize the importance of government generally and often specific governmental institutions in particular. Under the nationalizers of dogma such as Henry VIII, who nationalized the lands and priests of the Catholic Church in England, "render unto Caesar" became more important than "render unto God." Despite the advantages of better funding, these state-sponsored sects have been in decline ever since governments stopped otherwise suppressing their competitors. The state-sponsored churches mostly taught uncritical worship of authority, whereas their private competitors added much more spiritual value to their adherents' lives.

The simplest science is physics. In some sense all other sciences are just a variety of complex models of what happens when various kinds of complex physical systems interact. Physics itself is the simple core of science. Thus physics has been hailed as the "hardest" of the "hard sciences" -- sciences where evidence trumps bias and the truth always outs sooner or later, usually sooner, despite the biases of the individuals or institutions involved. Hard scientists will often admit that the use of the scientific method in "soft sciences" such as economics and other intersubjective areas can be problematic and subject to great bias. If any science can rise above self-serving biases and efficiently search for the truth, it should be physics.

But the recent history of physics casts some rather disturbing shadows on the integrity of even this hardest of sciences. Lee Smolin in The Trouble with Physics lays out a picture of an unprecedented group of geniuses, the string theorists, who have wasted the last twenty years, largely at taxpayers' expense, basically producing nothing except a vast number of highly obscure but, in certain senses, quite elegant theories. The number of possible string theories is so vast that string theory can, like "intelligent design," explain anything -- it is unfalsifiable. It is "not even wrong," to take Wolfgang Pauli's phrase about an earlier unfalsifiable theory of his era. String theory's main rivals over the last two decades are not much better. Theoretical physics for the last twenty years has mostly not been science at all, but rather has been a large group of geniuses working on their own cabalistic variety of sudoku puzzles at taxpayer expense in the name of science.

If this is the state of physics -- if even the hardest of sciences can be taken over by a thousand-strong cabal of geniuses who produce nothing of value except wonderful-sounding untestable theories whose main success has been in garnering their community more of our tax dollars -- what hope do we have that government-funded climate scientists, economists, and others purporting to do science in areas far more complex or subjective than physics are actually producing relatively unbiased truths? If we took a poll of theoretical physicists, they might well have (up until quite recently) reached a remarkable degree of "consensus" on the truth of string theory -- just as global warming scientists have reached a "consensus" on global warming and (it is implied) on the various bits of the speculative nonsense surrounding global warming. Does such consensus mean that we lay people should automatically believe this consensus of experts? Or should we demand more? Shouldn't we rather, when deciding on which theories or predictions of climate science or economics to believe, act like a Continental judge or a common-law jury and demand to actually see the evidence and weigh it for ourselves? Shouldn't we demand to hear from the defense as well as from the prosecution? Experiment, multiple points of view, and critical analysis are, after all, the real scientific method -- as opposed to the ancient religious method of uncritically trusting a single hierarchy of experts.

Today's ideas about politics and society -- "scientific theories" if you agree with them, "dogmas" if you don't -- are funded by the very governmental entities that stand to benefit from increased government power. Just as it was taboo under Henry VIII to "deny" the authority of either Christ or the King, it has now become taboo in many of these modern intellectual communities to "deny" a variety of scientific theories that are now supposed to be "beyond debate": not just things like the basic idea of global warming caused at least in part by anthropogenic carbon dioxide (which this author finds sound and quite probable, but nevertheless believes should remain, like all true scientific theories, open to further inquiry and debate), but also the variety of extreme speculations that have grown up around it (regarding the severity of storms, projections of droughts, floods, etc., most of which are pseudoscientific nonsense).

I'm hardly the only person who recognizes this problem with science. Indeed, the opinion expressed above is quite mild compared to that of an increasing number of conservatives who are coming to reject big chunks of good science along with the bad -- not just the many florid speculations surrounding global warming, but global warming itself, evolution, and other products of the expert priesthood that threaten long-established (and often, ironically, highly evolved) beliefs. Conservatives, and more than a few libertarians, feel that modern science is becoming increasingly dominated by government funding and thus by the interests of government in gaining more dominance over our lives. With those who hold opposing ideas increasingly unable to access this research and education funding themselves, the easiest way for those opposed to increasing state power to effectuate their beliefs is to reject the theories of the scientific communities that promote this power.

This, and not sheer cave-man irrationality, is why many conservatives are increasingly throwing out the baby with the bathwater and rejecting science generally. Both trends -- the increased government dominance over science and the increasing rejection of science generally by those who oppose the increased government controls which scientists increasingly promote -- are disturbing and dangerous. Science, once a method of weighing evidence that called for the opinions of both prosecution and defense, is increasingly being dominated by the prosecution.

We need a return to science with a diversity of funding and thus a diversity of biases. This is much more important to the health of science than the absolute level of funding of science. Reducing government funding of science would thus increase the quality of science -- by making the biases of scientific communities more balanced and thus more likely to cancel each other out, just as the biases of the defense generally cancel out the biases of the prosecution. Where government does fund science, it should demand strict compliance with the basic evidentiary principles of science, such as falsifiability. All government-funded theorists should be required to design experiments that can be conducted relatively inexpensively and in the near future, and that would strongly tend to verify or falsify their proposed theories. More speculative theories -- such as those that rely on unobserved or, worse, unobservable entities -- simply should not be funded by governments. There are a wide variety of private entities that are happy to fund such speculations; this variety of funding sources is more important to reducing bias the further one gets away from strictly controlled experiment. Any time government funds science we should ask: does the utility of the potential discoveries and the integrity of the scientific methods being used -- their ability to find the truth even in the face of high institutional bias -- outweigh the potential for the funding by one dominant source to prejudice the opinions of the fund recipients?

Science has benefited our lives in incalculable ways for many centuries. Increasingly we inform our political decisions with the discoveries and theories of science. As sciences ranging from climatology to economics play an increasing role in modern politics, this task of building a wall of separation between government and science -- or at least not allowing states to sponsor particular scientific theories at the expense of others with comparable weights of evidence, and not allowing states to fund some biased speculations at the expense of others -- is one of our most important and urgent tasks. If we are to remain living in democracies we voters must learn once again to weigh some of the evidence for ourselves, even if this means we gain our understanding through the lossy communications of popularizers. It does not work to trust a theory, no matter how scientific it may sound, based on a "consensus" or "lack of debate" among experts who mostly derive their funding from a single biased source. We democratic jurors must demand to hear from the defense -- really from a variety of parties whose biases largely cancel each other out -- rather than from just the prosecution. We must redesign our scientific institutions to minimize the biases that come from a single dominant source of funding if we are to achieve good solutions to our important problems -- solutions that are not dominated by the biases of that dominant entity.

Thursday, March 08, 2007

The nature of suicide terrorism

Here is a good book review of Robert A. Pape's Dying to Win: The Strategic Logic of Suicide Terrorism. Pape has done extensive research on suicide bombers and concluded that the main motivation is not poverty or religion but nationalism. (Note that Pape and I are using "nation" to refer to supra-tribal ethnic groups which share at least a common language and a common political ambition to run their own government, not necessarily to existing or historical states). Pape shows that suicide terrorism springs from perceived or actual occupations of one national group by a different and democratically governed one: Sri Lanka and India of Tamil regions (the Tamil Tigers), perceived Western proxy governments of Sunni Arab countries (Al Qaeda), a Shiite government allied to the U.S. invaders of Sunni areas in Iraq (the Sunni insurgency), Israel of the Palestinian occupied territories (Hamas), etc. The strategic logic (as I liberally interpret Pape's theory) is that the suicide bombers credibly signal to policymakers in democratic governments that the national group cares far more about the conflict than the occupiers do -- and therefore is willing to sacrifice far more and make life far more difficult for the occupiers. Pape points out that (at the time the book was written) the most suicide bombers came not from Al Qaeda or any other arguably religious terrorist group, but from the Marxist-Leninist (and thus atheist) Tamil Tigers. It can of course be argued that Marxism itself is a kind of religion, but at least Pape debunks the shallow idea that afterlife promises a la the "seventy virgins" are a necessary motivation for suicidal terrorism. I'd add that suicide terrorism grows from cultures that de-emphasize individualism -- thus the lack of, for example, ethnically European suicide bombers, but the historical existence of the Middle Eastern and Japanese suicide fighters which Pape describes. We individualists have not been able to understand suicide bombers; they just seemed inexplicably crazy. Pape's analysis is an excellent antidote to this ignorance.

The movie "Jesus Camp" shows a fundamentalist Christain group trying to inculcate in their children the idea that Christians should also be willing to make extreme sacrifices in the cause of Christian crusade against Islam. If the West insists on occupying non-individualist national groups (for example most of the nations subscribing to Islam or Marxism), something like this indeed probably is necessary to be successful. But Christianity, with its emphasis on the individual soul, and the rest of the Western tradition is far too individualist for this (and for other reasons, which far outweigh the terrorist problem, this is a very good thing).

The lesson for Western foreign policy? (1) don't occupy non-individualist regions -- the costs will be far higher than we can imagine -- although you'd think we would have already learned this from the Vietnam and current Iraqi experiences; and (2) if you already are in such an occupation, the basic choices of remaining or leaving are both very expensive -- the former because it continues to motivate extreme nationalist ire, and the latter because it encourages national groups elsewhere by showing that suicide terrorism succeeds in its objectives. It's like paying off a kidnapper, which frees the current hostages but makes future kidnappings more likely. Withdrawing from occupation removes a current source of ire but shows national groups elsewhere that suicide terrorism is the best way to achieve the objectives of otherwise powerless nationalities.

This Hobson's choice reinforces why a decision to occupy is so expensive in the first place. We are no longer in a situation in which it's our literate and culturally unified armed forces against their illiterate and culturally and politically divided tribe, as during the era of colonization. Instead it's our literate and unified nations against their literate and unified nations, the only big differences being our mere superiority on a traditional battlefield and their much higher motivation and collectivism, and thus their much higher willingness to sacrifice individuals for the cause of the large national group. Occupation of national regions that have not yet been thoroughly Westernized (or are not otherwise individualistic) is no longer a politically viable use of force in our world. If we wish to convert collectivist nations to Western democracy, individualism, Christianity, or whatever else we'd like to teach them, our best strategy is to just use our dominant Western economy and media to be, as Reagan put it, a "Shining City on a Hill". That's how we brought down the Marxist and nuclear-armed Soviet Union. Focus on defending our own freedoms instead of trying to impose them, set a good example, and let the collectivists peacefully come to realize the advantages of individualism.

Thursday, February 22, 2007

Partially self-replicating machines being built

There are some fascinating projects out there to build 3D printers that can manufacture a substantial amount of their own parts -- they are partially self-replicating. Self-replication in itself is largely of theoretical interest, as the reproduction process is very slow and labor intensive. Worse, the resulting machines can build devices only at the lowest common denominator of materials that can be easily 3D printed at room temperature. For any given kind of part, such general-purpose machines will be easily beaten in both production rate and quality of parts made by specialized machines.

Nevertheless, I find these devices quite interesting from a theoretical point of view. The complexity of design needed to make these machines fully self-replicating may give us some idea of how improbable it was for self-replicating life to originate.

In a hypothetical scenario where one lacked ready access to industrial civilization -- after a nuclear war or in a future space industry, for example -- machines like these could be quite useful for bootstrapping industry with a bare minimum of capital equipment. For similar reasons, it's possible that machines like these might prove useful in some developing world areas -- and indeed that seems to be a big goal of these projects. Add in open source code made in the first world but freely available in the third, and this opens up the intriguing possibility of a large catalog of parts that can be freely downloaded and cheaply printed. Also useful for this goal is to reduce the cost of the remaining parts not replicated by the machine -- although even, for example, $100 worth of parts is more than a year's income for many people in the developing world, and the polymer materials that can be most usefully printed are also rather expensive. But ya gotta start somewhere.

RepRap is a project to build a 3D printer that can manufacture all of its expensive parts. Although they call this "self-replicating", it might be better to think of their goal as an open-source digital smithy using plastics instead of iron. To achieve their goal the machine need not produce "widely-available [and inexpensive] bought-in parts (screws, washers, microelectronic chips, and the odd electric motor)." That's quite a shortcut as such parts, despite being cheap in first world terms, require a very widespread and complex infrastructure to manufacture and are rather costly by developing country standards. The machine also does not attempt to assemble itself -- extensive manual assembly is presumed. RepRap makes a practical intermediate goal between today's 3D printers and more complete forms of self-replication that would, for example, include all parts of substantial weight, would self-assemble, or both. The project is associated with some silly economics, but I'm hardly complaining as long as the bad ideas result in such good projects. So far RepRap has succeeded in producing "a complete set of parts for the screw section of the extruder."

Similar to RepRap, but apparently not quite as far along, is Tommelise. Their goal is that a craftsman can build a new Tommelise using an old Tommelise and $100-$150 in parts plus the cost of some extra tools.

A less ambitious project that already works -- an open source fab, but with no goal of making its own parts -- is the Fab@Home project. Using Fab@Home folks have already printed out stuff in silicone, cake icing, chocolate, and more.

Meanwhile, in an economically attractive (but not self-replicating) application these guys want to print out bespoke bones based on CT or MRI scans.

Tuesday, February 06, 2007

Falsifiable design: a methodology for evaluating theoretical technologies

Theoretical or futuristic technologies have long been a staple of science fiction. Increasingly, the future of technology development has become an important political issue as decisions are made to fund a variety of alternative energy and emission reduction technologies in response to projected long-term climate changes, medical technologies to address health problems and epidemic threats, military R&D in response to recent and feared future security threats, and so on. Hundreds of billions of dollars of R&D funding over the next decade hinge on questions of future technologies. Some of these technologies in some of their forms may be harmful or dangerous; can we evaluate these dangers before spending billions of dollars on development and capital investment in technologies that may end up expensively restricted or banned as designed?

We lack a good discipline of theoretical technology. As a result, discussion of such technologies among scientists and engineers and laypeople alike often never gets beyond naive speculation, which ranges from dismissing or being antagonistic to such possibilities altogether to proclamations that sketchy designs in hard-to-predict areas of technology are easy and/or inevitable and thus worthy of massive attention and R&D funding. At one extreme are recent news stories, supposedly based on the new IPCC report on global warming, that claim that we will not be able to undo the effects of global warming for at least a thousand years. At the other extreme are people like Eric Drexler and Ray Kurzweil who predict runaway technological progress within our lifetimes, allowing us to not only easily and quickly reverse global warming but also conquer death, colonize space, and much else. Such astronomically divergent views, each held by highly intelligent and widely esteemed scientists and engineers, reflect vast disagreements about the capabilities of future technology. Today we lack good ways for proponents of such views to communicate in a disciplined way with each other or to the rest of us about such claims and we lack good ways as individuals, organizations, and as a society to evaluate them.

Sometimes descriptions of theoretical technologies lead to popular futuristic movements such as extropianism, transhumanism, cryonics, and many others. These movements make claims, often outlandish but not refuted by scientific laws, that the nature of technologies in the future should influence how we behave today. In some cases this leads to quite dramatic changes in outlook and behavior. For example, people ranging from baseball legend Ted Williams to cryptography pioneer and nanotechnologist Ralph Merkle have spent up to a hundred thousand dollars or more to have their bodies frozen in the hopes that future technologies will be able to fix the freezing damage and cure whatever killed them. Such beliefs can radically alter a person's outlook on life, but how can we rationally evaluate their credibility?

Some scientific fields have developed that inherently involve theoretical technology. For example, SETI (and less obviously its cousin astrobiology) inherently involve speculation about what technologies hypothetical aliens might possess.

Eric Drexler called the study of such theoretical technologies "exploratory engineering" or "theoretical applied science." Currently futurists tend to evaluate such designs based primarily on their mathematical consistency with known physical law. But mere mathematical consistency with high-level abstractions is woefully insufficient, except in computer science or where only the simplest of physics is involved, for demonstrating the future workability or desirability of possible technologies. And it falls far short of the criteria engineers and scientists generally use to evaluate each others' work.

Traditional science and engineering methods are ill-equipped to deal with such proposals. As "critic" has pointed out, both demand near-term experimentation or tests to verify or falsify claims about the truth of a theory or whether a technology actually works. Until such an experiment has occurred a scientific theory is a mere hypothesis, and until a working physical model, or at least a description sufficient to create one, has been produced, a proposed technology is merely theoretical. Scientists tend to ignore theories that they have no prospect of testing and engineers tend to ignore designs that have no imminent prospect of being built, and in the normal practice of these fields this is a quite healthy attitude. But to properly evaluate theoretical technologies we need ways other than near-term working models to reduce the large uncertainties in such proposals and to come to some reasonable estimates of their worthiness and relevance to R&D and other decisions we make today.

The result of these divergent approaches is that when scientists and engineers talk to exploratory engineers they talk past each other. In fact, neither side has been approaching theoretical technology in a way that allows it to be properly evaluated. Here is one of the more productive such conversations -- the typical one is even worse. The theoreticians' claims are too often untestable, and the scientists and engineers too often demand inappropriately that descriptions be provided that would allow one to actually build the devices with today's technology.

To think about how we might evaluate theoretical technologies, we can start by looking at a highly evolved system that technologists have long used to judge the worthiness of new technological ideas -- albeit not without controversy -- the patent system.

The patent system sets up several basic requirements for proving that an invention is worthy enough to become a property right: novelty, non-obviousness, utility, and enablement. (Enablement is also often called "written description" or "sufficiency of disclosure", although sometimes these are treated as distinct requirements that we need not get into for our purposes). Novelty and non-obviousness are used to judge whether a technology would have been invented anyway and are largely irrelevant for our purposes here (which is good news because non-obviousness is the source of most of the controversy about patent law). To be worthy of discussion in the context of, for example, R&D funding decisions, the utility of a theoretical technology, if or when in the future it is made to work, should be much larger than that required for a patent -- the technology if it works should be at least indirectly of substantial expected economic importance. I don't expect meeting this requirement to be much of a problem, as futurists usually don't waste much time on the economically trivial, and it may only be needed to distinguish important theoretical technologies from those merely found in science fiction for the purposes of entertainment.

The patent requirement that presents the biggest problem for theoretical technologies is enablement. The inventor or patent agent must write a description of the technology detailed enough to allow another engineer in the field to, given a perhaps large but not infinite amount of resources, build the invention and make it work. If the design is not mature enough to build -- if, for example, it requires further novel and non-obvious inventions to build it and make it work -- the law deems it of too little importance to entitle the proposer to a property right in it. Implementing the description must not require so much experimentation that it becomes a research project in its own right.

An interesting feature of enablement law is the distinction it makes between fields that are considered more predictable or less so. Software and (macro-)mechanical patents generally fall under the "predictable arts" and require much less detailed description to be enabling. But biology, chemistry, and many other "squishier" areas are much less predictable and thus require far more detailed description to be considered enabling.

Theoretical technology lacks enablement. For example, no astronautical engineer today can take Freeman Dyson's description of a Dyson Sphere and go out and build one, even given a generous budget. No one has been able to take Eric Drexler's Nanosystems and go out and build a molecular assembler. From the point of view of the patent system and of typical engineering practice, these proposals either require far more investment than is practical, or have far too many engineering uncertainties and dependencies on other unavailable technology to build them, or both, and thus are not worth considering.

For almost all scientists and engineers, the lack of imminent testability or enablement puts such ideas out of their scope of attention. For the normal practice of science and engineering this is healthy, but it can have quite unfortunate consequences when decisions about what kinds of R&D to fund are made or about how to respond to long-term problems. As a society we must deal with long-term problems such as undesired climate change. To properly deal with such problems we need ways to reason about what technologies may be built to address them over the spans of decades, centuries, or even longer over which we reason about and expect to deal with such problems. For example, the recent IPCC report on global warming has projected trends centuries in the future, but has not dealt in any serious or knowledgeable way with how future technologies might either exacerbate or ameliorate such trends over such a long span of time -- even though the trends they discuss are primarily caused by technologies in the first place.

Can we relax the enablement requirement to create a new system for evaluating theoretical technologies? We might call the result a "non-proximately enabled invention" or a "theoretical patent." [UPDATE: note that this is not a proposal for an actual legally enforceable long-term patent. This is a thought experiment for the purposes of seeing to what extent patent-like criteria can be used for evaluating theoretical technology]. It retains (and even strengthens) the utility requirement of a patent, but it relaxes the enablement requirement in a disciplined way. Instead of enabling a fellow engineer to build the device, the description must be sufficient to enable a research project -- a series of experiments and tests that would, if successful, lead to a working invention, and if unsuccessful would serve to demonstrate that the technology as proposed probably can't be made to work.

It won't always be possible to test claims about theoretical designs. They may be of a nature as to require technological breakthroughs to even test their claims. In many cases this will be due to a lack of appreciation on the part of the theoretical designer of the requirement that such designs be testable. In some cases it will be due to the theoretical designer's lack of skill or imagination in coming up with such tests. In some cases it will be because the theoretical designer refuses to admit the areas of uncertainty in the design and thus denies the need for experiment to reduce these uncertainties. In some cases it may just be inherent in the radically advanced nature of the technology being studied that there is no possible way to test it without making other technology advances. Regardless of the reason the designs are untestable, these designs cannot be considered to be anything but entertaining science fiction. In only the most certain of areas -- mostly just in computer science and in the simplest and most verified of physics -- should mathematically valid but untested claims be deemed credible for the purposes of making important decisions.

Both successful and unsuccessful experiments that reduce uncertainty are valuable. Demonstrating that the technology can work as proposed allows lower-risk investments in it to be made sooner, and leads to decisions better informed about the capabilities of future technology. Demonstrating that it can't work as proposed prevents a large amount of money from being wasted trying to build it and prevents decisions from being made today based on the assumption that it will work in the future.

The distinction made by patent systems between fields that are more versus less predictable becomes even more important for theoretical designs. In computer science the gold standard is mathematical proof rather than experiment, and given such proof no program of experiment need be specified. Where simple physics is involved, such as the orbital mechanics used to plan space flights, the uncertainty is also usually very small and proof based on known physical laws is generally sufficient to show the future feasibility of orbital trajectories. (Whether the engines can be built to achieve such trajectories is another matter). Where simple scaling processes are involved (e.g. scaling up rockets from Redstone sized to Saturn sized) uncertainties are relatively small. Thus, to take our space flight example, the Apollo program had a relatively low uncertainty well ahead of time, but it was unusual in this regard. As soon as we get into dependency on new materials (e.g. the infamous "unobtainium" material stronger than any known materials), new chemical reactions or new ways of performing chemical reactions, and so on, things get murkier far more quickly, and it is essential to good theoretical design that these uncertainties be identified and that experiments to reduce them be designed.

In other words, the designers of a theoretical technology in any but the most predictable of areas should identify its assumptions and claims that have not already been tested in a laboratory. They should design not only the technology but also a map of the uncertainties and edge cases in the design and a series of such experiments and tests that would progressively reduce these uncertainties. A proposal that lacks this admission of uncertainties coupled with designs of experiments that will reduce such uncertainties should not be deemed credible for the purposes of any important decision. We might call this requirement a requirement for a falsifiable design.
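
To make this concrete, here is a minimal sketch (in Python) of what such an uncertainty map might look like when written down as a record. The field names and the example entry are hypothetical, invented purely for illustration, not drawn from any actual proposal; the point is only that each claimed capability gets paired with its untested assumptions and a near-term experiment that could falsify it.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Uncertainty:
    assumption: str               # a claim not yet tested in the laboratory
    estimated_probability: float  # evaluator's subjective probability that it holds
    proposed_experiment: str      # a near-term, affordable test that could falsify it

@dataclass
class FalsifiableDesign:
    name: str
    claimed_utility: str
    uncertainties: List[Uncertainty] = field(default_factory=list)

    def is_credible(self) -> bool:
        # A proposal that admits no uncertainties, or whose uncertainties lack
        # proposed experiments, should not be deemed credible for decisions.
        return bool(self.uncertainties) and all(
            u.proposed_experiment for u in self.uncertainties
        )

# Hypothetical example entry, for illustration only:
design = FalsifiableDesign(
    name="example orbital power relay",
    claimed_utility="beams solar power to remote ground stations",
    uncertainties=[
        Uncertainty(
            assumption="thin-film reflector keeps its shape at full scale",
            estimated_probability=0.5,
            proposed_experiment="deploy and measure a small-scale film in orbit",
        ),
    ],
)
print(design.is_credible())  # True: it admits uncertainties and proposes tests
```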

Falsifiable design resembles the systems engineering done with large novel technology programs, especially those that require large amounts of investment. Before making the large investments tests are conducted so that the program, if it won't actually work, will "fail early" with minimal loss, or will proceed with components and interfaces that actually work. Theoretical design must work on the same principle, but on longer time scales and with even greater uncertainty. The greater uncertainties involved make it even more imperative that uncertainties be resolved by experiment.

The distinction between a complete blueprint or enabling description and a description for the purposes of enabling falsifiability of a theoretical design is crucial. A patent enablement or an architectural blueprint is a description of how to build something. An enablement of falsifiability is a description that says: if it were built -- and we don't claim to be describing, or even to know, how to build it -- here is how we believe it would function, here are the main uncertainties about how it would function, and here are the experiments that can be done to reduce these uncertainties.

Good theoretical designers should be able to recognize the uncertainties in their own and others' designs. They should be able to characterize the uncertainties in such a way as to allow evaluators to assign probabilities to the uncertainties and to decide what experiments or tests can most easily reduce them. It is the recognition of uncertainties and the plan of uncertainty reduction, even more than the mathematics demonstrating the consistency of the design with physical law, that will enable the theoretical technology for the purposes, not of building it today, but of making important decisions today based on the degree to which we can reasonably expect it to be built and produce value in the future.

Science fiction is an art form in which the goal is to suspend the reader's disbelief by artfully avoiding mention of the many uncertainties involved in the wonderful technologies. To be useful and credible -- to achieve truth rather than just the comfort or entertainment of suspension of disbelief -- theoretical design must do the opposite -- it must highlight the uncertainties and emphasize near-term experiments that can be done to reduce them.

Monday, February 05, 2007

Mining the vasty deep (ii)

Beyond Oil

The first installment of this series looked at the most highly developed offshore extraction industry, offshore oil. A wide variety of minerals besides oil are extracted on land. As technology improves, and as commodity prices remain high, more minerals are being extracted from beneath the sea. The first major offshore mineral beyond oil, starting in the mid 1990s, was diamond. More recently, there has been substantial exploration, research, and investment towards the development of seafloor massive sulfide (SMS) deposits, which include gold, copper, and several other valuable metals in high concentrations. Today we look at mining diamonds from the sea.

Diamond

De Beers' mining ship for their first South African marine diamond mine

The first major area after oil was opened up by remotely operated vehicles (ROVs) in the 1990s -- marine diamond mining. The current center of this activity is Namibia, with offshore reserves estimated at more than 1.5 billion carats. The companies mining or planning to mine the Namibian sea floor with ROVs include Nambed (a partnership between the government and DeBeers, and the largest Namibian diamond mining company), Namco (which has been mining an estimated 3 million carats since discovering its subsea field in 1996), Diamond Fields Intl. (which expects to mine 40,000 carats a year from the sea floor), and Afri-Can (another big concession holder which is currently exploring its concessions and hopes to ramp up to large-scale undersea operations). Afri-Can has been operating a ship and crawler (ROV) that vacuums up 50 tons of gravel per hour from a sea floor 90 to 120 meters below the surface and processes the gravel for diamonds. They found 7.2 carats of diamond per 100 cubic meters of gravel, which means the field is probably viable and further sampling is in order.
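
To get a feel for what those sampling figures imply, here's a quick back-of-the-envelope calculation in Python. The 50 tons per hour and the 7.2 carats per 100 cubic meters come from the figures above; the gravel bulk density and the operating hours are my own assumptions, so treat the result as a rough order of magnitude only.

```python
# Rough implied yield from the Afri-Can sampling figures quoted above.
# The gravel bulk density and operating hours are assumptions, not Afri-Can's numbers.

GRAVEL_TONS_PER_HOUR = 50.0        # from the article
GRADE_CARATS_PER_100_M3 = 7.2      # from the article
ASSUMED_DENSITY_T_PER_M3 = 2.0     # assumed bulk density of wet marine gravel
ASSUMED_HOURS_PER_DAY = 20.0       # assumed operating hours per day

m3_per_hour = GRAVEL_TONS_PER_HOUR / ASSUMED_DENSITY_T_PER_M3
carats_per_hour = m3_per_hour * GRADE_CARATS_PER_100_M3 / 100.0
carats_per_day = carats_per_hour * ASSUMED_HOURS_PER_DAY

print(f"{m3_per_hour:.0f} m^3 of gravel per hour")      # ~25 m^3/h
print(f"{carats_per_hour:.1f} carats per hour")         # ~1.8 carats/h
print(f"~{carats_per_day:.0f} carats per day sampled")  # ~36 carats/day
```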

DeBeers is also investing in a diamond mine in the seas off South Africa. A retrofitted ship will be used, featuring a gravel processing plant capable of sorting diamonds from 250 tons of gravel per hour. It is hoped the ship will produce 240,000 carats a year when fully operational.

The ship will engage in

horizontal mining, utilising an underwater vehicle mounted on twin Caterpillar tracks and equipped with an anterior suction system...The crawler's suction systems are equipped with water jets to loosen seabed sediments and sorting bars to filter out oversize boulders. The crawler is fitted with highly accurate acoustic seabed navigation and imaging systems. On board the vessel will be a treatment plant consisting of a primary screening and dewatering plant, a comminution mill sector followed by a dense media separation plant and finally a diamond recovery plant.
In other words, an ROV will vacuum diamond-rich gravel off the sea floor, making a gravel slurry which is then piped to the ship, where it is sifted for diamonds. This is similar to the idea of pumping oil from the sea floor onto an FPSO -- a ship which sits over the wells and processes the oil, separates out the water, stores it, and offloads it onto visiting oil tankers. However, with marine diamond mining, instead of a fixed subsea tree capping and valving a pressurized oil field, a mobile ROV vacuums up a gravel slurry to be pumped through hoses to the ship.

By contrast to this horizontal marine mining, vertical mining "uses a 6 to 7 meter diameter drill head to cut into the seabed and suck up the diamond bearing material from the sea bed."

Coming soon: some new startup companies plan to mine extinct black smokers for copper, gold, and other valuable metals.

Thursday, February 01, 2007

Mining the vasty deep (i)

This is the first in a series of articles about mining the sea floor for minerals. This article is about the first and largest such activity, offshore oil and gas, and especially the more recent activity in deep water oil. First, however, there is a currently fashionable topic, the discussion of which will help put deep water oil in perspective: namely the nonsense about "peak oil" -- the theory that worldwide oil production has peaked or soon will, and that we thus inevitably face a future of higher oil prices.

Layout of the Total Girassol field, off the shore of Angola in 1,400 meters of water, showing the FPSO ship, risers, flow lines, and subsea trees (well caps and valves on the sea floor).

Why peak oil is nonsense

There are any number of reasons why peak oil is nonsense, such as tar sands and coal gasification. Perhaps the most overlooked, however, is that up until now oil companies have focused on land and shallow seas, which are relatively easy to explore. But there is no reason to expect that oil, which was largely produced by oceans in the first place (especially by the precipitation of dead plankton), is any more scarce underneath our eon's oceans than it is under our lands. Oceans cover over two-thirds of our planet's surface, and most of that is deep water (defined in this series as ocean floor 1,000 meters or more below the surface). A very large fraction of the oil on our planet remains to be discovered in deep water. Given a reasonable property rights regime enforced by major developed world powers, this (along with the vast tar sands in Canada) means not only copious future oil, but that this oil can mostly come from politically stable areas.

The FPSO, riser towers and flow lines in Total's Girassol field off of Angola.

Some perspective, even some purely theoretical perspective, is in order. If we look at the problem at the scale of the solar system, we find that hydrocarbons are remarkably common -- Titan has clouds and lakes of ethane and methane, for example, and there are trillions of tonnes of hydrocarbons, at least, to be found on comets and in the atmospheres and moons of the gas giant planets. What is far more scarce in the solar system is free oxygen. If there will ever be a "peak" in the inputs to hydrocarbon combustion in the solar system it will be in free oxygen -- which as a natural occurrence is extremely rare beyond Earth's atmosphere, and is rather expensive to make artificially.

A deep sea "robot hand" on a ROV (Remotely Operated Vehicle) more often than not ends in an attachment specific for the job. Here, a subsea hydraulic grinder and a wire brush.

What's even more scarce, however, are habitable planets that keep a proper balance between greenhouse gases and sunlight. Venus had a runaway greenhouse, partly from being closer to the sun and partly because of increased carbon dioxide in its atmosphere, which acts like an insulating blanket, preventing heat from radiating away quickly enough. The result is that Venus' surface temperature is over 400 C (that's over 750 Fahrenheit), hotter than the surface of Mercury. On Mars most of its atmosphere escaped, due to its low gravity, and its water, and eventually even much of its remaining carbon dioxide, froze, again partly due to the greater distance from the sun and partly due to the low level of greenhouse gases in its generally thin atmosphere.

Hydraulic subsea bandsaw. Great for cutting pipes as shown here.

So far, the Earth has been "just right", but the currently rapidly rising amount of carbon dioxide and methane in our atmosphere, largely from the hydrocarbon industries, is moving our planet in the direction of Venus. Nothing as extreme as Venus is in our foreseeable future, but neither will becoming even a little bit more like Venus be very pleasant for most of us. The real barrier to maintaining our hydrocarbon-powered economy is thus not "peak oil", but emissions of carbon dioxide and methane and the resulting global warming. That peak oil is nonsense makes global warming an even more important problem to solve, at least in the long term. We won't avoid it by oil naturally becoming too expensive; instead we must realize that our atmosphere is a scarce resource and make property out of it, as we did with "acid rain."

This series isn't mainly about oil or the atmosphere, however; it is about the technology (and perhaps some of the politics and law) of extracting minerals generally from the deep sea. Oil is the first deep sea mineral to be extracted from the sea on a large scale. The rest of this article will look at some of the technology used to recover offshore oil, especially in deep water. Future posts in this series will look at mining other minerals off the ocean floor.

Painting of the deepwater (1,350 meters) subsea trees at the Total Girassol field. They're not really this close together.

FPSOs

Once the wells have been dug, the main piece of surface equipment that remains on the scene, especially in deep water fields where pipelines to the seashore are not effective, is the Floating Production, Storage, and Offloading (FPSO) platform. The FPSO is usually an oil tanker that has been retrofitted with special equipment, which often injects water into wells, pumps the resulting oil from the sea floor, performs some processing on the oil (such as removing seawater and gases that have come out with the oil), stores it, and then offloads it to oil tankers, which ship it to market for refining into gasoline and other products. Far from shore and in deep water, the FPSO substitutes for pipes going directly to shore (the preferred technique for shallow wells close to a politically friendly shoreline).

Many billions of dollars typically are invested in developing a single deep water oil field, with hundreds of millions spent on the FPSO alone. According to Wikipedia, the world's largest FPSO is operated by Exxon Mobil near Total's deep water field off Angola: "The world's largest FPSO is the Kizomba A, with a storage capacity of 2.2 million barrels. Built at a cost of over US$800 million by Hyundai Heavy Industries in Ulsan, Korea, it is operated by Esso Exploration Angola (ExxonMobil). Located in 1200 meters (3,940 ft) of water at Deepwater block 15, 200 statute miles (320 km) offshore in the Atlantic Ocean from Angola, West Africa, it weighs 81,000 tonnes and is 285 meters long, 63 meters wide, and 32 meters high (935 ft by 207 ft by 105 ft)."


ROVs

Today's ROVs (Remotely Operated Vehicles) go far beyond the little treasure-recovery sub you may have seen in "Titanic." There are ROVs for exploration, rescue, and a wide variety of other undersea activities. Most interesting are the wide variety of ROVs used for excavation -- for dredging channels, for trenching, laying, and burying pipe, and for maintaining the growing variety of undersea equipment. Due to ocean-crossing cables and deep sea oil fields, it is now common for ROVs to conduct their work in thousands of meters of water, far beyond the practical range of divers.

A grab excavator ROV.

It should be noted that in contrast to space vehicles, where teleprogramming via general commands is the norm and often involves long time delays between the commands being sent and the results being known to the spacecraft's operators, with undersea operations real-time interaction is the norm. Due to operator fatigue and the costs of maintaining workers on offshore platforms, research is being done on fully automating certain undersea tasks, but the current state of the art remains a human closely in the loop. The cost of maintaining workers on platforms is vastly lower than the cost of maintaining an astronaut in space, so the problem of fully automating undersea operations is correspondingly less important. Nevertheless, many important automation problems, such as the simplification of operations, have had to be solved in order to make it possible for ROVs to replace divers at all.

A ROV for digging trenches, used when laying undersea cable or pipe.

Another important consideration is that ROVs depend on their tethers to deliver not only instructions but power. An untethered robot lacks power to perform many required operations, especially excavation. At sea as long as the tether is delivering power it might as well deliver real-time interactive instructions and sensor data, i.e. teleoperation as well.

Trenching and other high-power ROVs are usually referred to as "work class." There are over 400 in operation today, collectively worth more than $1.5 billion, and their numbers are increasing rapidly.

Tankers are big, but storms can be bigger.

Harsh Conditions

Besides deep water and the peril of storms anywhere at sea, many offshore fields operate under other kinds of harsh conditions. Development of the White Rose and Sea Rose fields of Newfoundland starts with the excavation of "glory holes" dug down into the sea floor to protect the seafloor equipment against icebergs, which can project all the way down to the fairly shallow sea floor. Inside these holes the oil outflow and fluid injection wells themselves are dug and capped with subsea trees (valves). The drill platform, FPSO, and some of the other equipment have been reinforced to protect against icebergs.

In future installments, I'll look at diamond mining and the startups that plan to mine the oceans for copper, gold, and other minerals.

Sunday, January 21, 2007

Chemical microreactors

Here's a bit of theoretical applied science I wrote back in 1993:

Using materials native to space, instead of hauling everything from Earth, is crucial to future efforts at large-scale space industrialization and colonization. At that time we will be using technologies far in advance of today's, but even now we can see the technology developing for use here on earth.

There are a myriad of materials we would like to process, including dirty organic-laden ice on comets and some asteroids, subsurface ice and the atmosphere of Mars, platinum-rich unoxidized nickel-iron metal regoliths on asteroids, etc. There are an even wider array of materials we would like to make. The first and most important is propellant, but eventually we want a wide array of manufacturing and construction inputs, including complex polymers like Kevlar and graphite epoxies for strong tethers.

The advantages of native propellant can be seen in two recent mission proposals. In several Mars mission proposals[1], H2 from Earth or Martian water is chemically processed with CO2 from the Martian atmosphere, making CH4 and O2 propellants for operations on Mars and the return trip to Earth. Even bringing H2 from Earth, this scheme can reduce the propellant mass to be launched from Earth by over 75%. Similarly, I have described a system that converts cometary or asteroidal ice into a cylindrical, zero-tank-mass thermal rocket. This can be used to transport large interplanetary payloads, including the valuable organic and volatile ices themselves into high Earth and Martian orbits.
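
A back-of-the-envelope way to see where a figure like "over 75%" comes from is to work the stoichiometry of the usual scheme in these proposals: the Sabatier reaction plus water electrolysis, with the hydrogen recycled. The Python sketch below ignores the extra oxygen real designs extract from CO2 alone, and all losses, so the exact leverage number is only illustrative.

```python
# Back-of-the-envelope leverage of Mars in-situ propellant production,
# assuming the Sabatier reaction plus water electrolysis with H2 recycling:
#   Sabatier:     CO2 + 4 H2 -> CH4 + 2 H2O
#   Electrolysis: 2 H2O      -> 2 H2 + O2   (the 2 H2 go back into the Sabatier loop)
#   Net per CH4:  CO2 + 2 H2 -> CH4 + O2
# This ignores the extra oxygen real schemes make from CO2 alone, and all losses.

M_H2, M_CH4, M_O2 = 2.016, 16.04, 32.00   # molar masses, g/mol

h2_from_earth = 2 * M_H2                  # net hydrogen consumed per mole of CH4
propellant_made = M_CH4 + M_O2            # CH4 fuel plus O2 oxidizer produced

leverage = propellant_made / h2_from_earth
reduction = 1 - h2_from_earth / propellant_made

print(f"~{leverage:.0f} kg of propellant per kg of hydrogen brought from Earth")
print(f"~{reduction:.0%} reduction versus launching all the propellant from Earth")
# Roughly a 12:1 leverage, i.e. a reduction of over 90% -- consistent with the
# "over 75%" figure cited above.
```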

Earthside chemical plants are usually far too heavy to launch on rockets into deep space. An important benchmark for plants in space is the thruput mass/equipment mass, or mass thruput ratio (MTR). At first glance, it would seem that almost any system with MTR>1 would be worthwhile, but in real projects risk must be reduced through redundancy, the time cost of money must be accounted for, equipment launched from earth must be affordable in the first place (typically <$5 billion) and must be amortized, and propellant burned must be accounted for. For deep-space missions, system MTRs typically need to be in the 100-10,000 per year range to be economical.
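
As an illustration of the kind of screening arithmetic behind that range, here is a rough sketch in Python. The launch cost, product value, amortization period, and discount rate are all assumed numbers for illustration, not figures from any actual mission study.

```python
# Rough screen, in the spirit of the MTR benchmark above, for whether a space
# chemical plant clears its launch cost. All specific numbers are assumptions.

def breakeven_mtr(launch_cost_per_kg, product_value_per_kg,
                  years=5, annual_discount=0.10):
    """Annual thruput mass per kg of equipment needed just to repay launch cost.

    launch_cost_per_kg:   cost to deliver 1 kg of plant equipment ($/kg, assumed)
    product_value_per_kg: value of 1 kg of product where it is used ($/kg, assumed)
    years, annual_discount: crude amortization of the up-front launch cost
    """
    # Present-value factor of an annuity over `years` at `annual_discount`.
    pv_factor = sum(1 / (1 + annual_discount) ** t for t in range(1, years + 1))
    required_annual_revenue_per_kg = launch_cost_per_kg / pv_factor
    return required_annual_revenue_per_kg / product_value_per_kg

# Assumed figures: $20,000/kg to deliver equipment into deep space, and
# propellant worth $2,000/kg delivered where it is needed.
print(breakeven_mtr(20_000, 2_000))   # ~2.6 -- only a few per year to break even
```

The gap between that bare breakeven and the 100-10,000 per year range reflects the other factors listed above: redundancy for risk, development and operations, amortization and the time cost of money, and the propellant burned.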

A special consideration is the operation of chemical reactors in microgravity. So far all chemical reactors used in space -- mostly rocket engines, and various kinds of life support equipment in space stations -- have been designed for microgravity. However, Earthside chemical plants incorporate many processes that use gravity, and these must be redesigned. Microgravity may be advantageous for some kinds of reactions; this is an active area of research. On moons or other planets, we are confronted with various fixed low levels of gravity that may be difficult to design for. With a spinning tethered satellite in free space, we can get the best of all worlds: microgravity, Earth gravity, or even hypergravity where desired.

A bigger challenge is developing chemical reactors that are small enough to launch on rockets, have high enough thruput to be affordable, and are flexible enough to produce the wide variety of products needed for space industry. A long-range ideal strategy is K. Eric Drexler's nanotechnology [2]. In this scenario small "techno-ribosomes", designed and built molecule by molecule, would use organic material in space to reproduce themselves and produce useful product. An intermediate technology, under experimental research today, uses lithography techniques on the nanometer scale to produce designer catalysts and microreactors. Lithography, the technique which has made possible the rapid improvement in computers since 1970, has moved into the deep submicron scale in the laboratory, and will soon be moving there commercially. Lab research is also applying lithography to the chemical industry, where it might enable breakthroughs to rival those it produced in electronics.

Tim May has described nanolithography that uses linear arrays of 1e4-1e5 AFM's that would scan a chip and fill in detail to 10 nm resolution or better. Elsewhere I have described a class of self-organizing molecules called _nanoresists_, which make possible the use of e-beams down to the 1 nm scale. Nanoresists range from ablatable films, to polymers, to biological structures. A wide variety of other nanolithography techniques are described in [4,5,6]. Small-scale lithography not only improves the feature density of existing devices, it also makes possible a wide variety of new devices that take advantage of quantum effects: glowing nanopore silicon, quantum dots ("designer atoms" with programmable electronic and optical properties), tunneling magnets, squeezed lasers, etc. Most important for our purposes, they make possible the mass production of tiny chemical reactors and designer catalysts. Lithography has been used to fabricate a series of catalytic towers on a chip [3]. The towers consist of alternating layers of SiO2 4.1 nm thick and Ni 2-10 nm thick. The deposition process achieves nearly one-atom thickness control for both SiO2 and Ni. Previously it was thought that positioning in three dimensions was required for good catalysis, but this catalyst's nanoscale 1-d surface forces reagents into the proper binding pattern. It achieved six times the reaction rate of traditional cluster catalysts on the hydrogenolysis of ethane to methane, C2H6 + H2 --> 2CH4. The thickness of the nickel and silicon dioxide layers can be varied to match the size of molecules to be reacted.

Catalysts need to have structures precisely designed to trap certain kinds of molecules, let others flow through, and keep still others out, all without getting clogged or poisoned. Currently these catalysts are built by growing crystals of the right spacing in bulk. Sometimes catalysts come from biotech, for example the bacterial enzymes used to make the corn syrup in soda pop. Within this millennium (only 7.1 years left!) we will start to see catalysts built by new techniques of nanolithography, including AFM machining, AFM arrays, and nanoresists. Catalysts are critical to the oil industry, the chemical industry, and to pollution control -- the worldwide market is in the hundreds of billions of dollars per year and growing rapidly.

There is also a big market for micron-size chemical reactors. We may one day see the flexible chemical plant, with hundreds of nanoscale reactors on a chip, the channels between them reprogrammed via switchable valves, much as the circuits on a chip can be reprogrammed via transistors. Even a more modest, larger-scale version of such a plant could have a wide variety of uses.
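
To make the analogy concrete, here is a toy sketch (my own illustration, with invented reactor names, not anything from the text) of a reactor-network-on-a-chip in Python: reactors are nodes, channels are edges, and "reprogramming" the plant amounts to opening and closing valves so the feedstock is routed through a different sequence of reactors.

    from collections import deque

    class MicroreactorChip:
        """Toy model: reactors as nodes, switchable valves as open/closed channels."""
        def __init__(self):
            self.valves = {}                  # (src_reactor, dst_reactor) -> open?

        def add_channel(self, src, dst):
            self.valves[(src, dst)] = False   # channels start closed

        def set_valve(self, src, dst, open_):
            self.valves[(src, dst)] = open_   # "reprogramming" the plant

        def route(self, inlet, outlet):
            """Breadth-first search for one open path from inlet to outlet, if any."""
            frontier, seen = deque([[inlet]]), {inlet}
            while frontier:
                path = frontier.popleft()
                if path[-1] == outlet:
                    return path
                for (src, dst), is_open in self.valves.items():
                    if src == path[-1] and is_open and dst not in seen:
                        seen.add(dst)
                        frontier.append(path + [dst])
            return None

    chip = MicroreactorChip()
    for channel in [("feed", "hydrogenator"), ("feed", "oxidizer"),
                    ("hydrogenator", "separator"), ("oxidizer", "separator"),
                    ("separator", "product")]:
        chip.add_channel(*channel)
    # Open one pathway now; the same hardware can later be switched to the other.
    for src, dst in [("feed", "hydrogenator"), ("hydrogenator", "separator"),
                     ("separator", "product")]:
        chip.set_valve(src, dst, True)
    print(chip.route("feed", "product"))   # ['feed', 'hydrogenator', 'separator', 'product']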

Their first use may be in artificial organs to produce various biological molecules. For example, they might replace or augment the functionality of the kidneys, pancreas, liver, thyroid gland, etc. They might produce psychoactive chemicals inside the blood-brain barrier, for example dopamine to reverse Parkinson's disease. Biological and mechanical chemical reactors might work together, the first produced via metabolic engineering [7], the second via nanolithography.

After microreactors, metabolic engineering, and nanoscale catalysts have been developed for use on Earth, they will spin off for use in space. Microplants in space could manufacture propellant and a wide variety of industrial inputs, and perform life support functions more efficiently. Over 95% of the mass we now launch into space could be replaced by these materials produced from comets, asteroids, Mars, etc. Even if Drexler's self-replicating assemblers are a long time in coming, nanolithographed tiny chemical reactors could open up the solar system.

====================
ref:
[1] Zubrin et al., papers on "Mars Direct", _Case for Mars_ conference proceedings
[2] K. Eric Drexler, _Nanosystems_, John Wiley & Sons, 1992
[3] Science, 20 Nov. 1992, p. 1337
[4] Ferry et al., eds., _Granular Nanoelectronics_, Plenum Press, 1991
[5] Geis & Angus, "Diamond Film Semiconductors", Sci. Am., 10/92
[6] ???, "Quantum Dots", Sci. Am., 1/93
[7] Science, 21 June 1991, pp. 1668, 1675

Tuesday, January 09, 2007

Supreme Court says IP licensees can sue before infringing

In today's decision in MedImmune v. Genentech, the U.S. Supreme Court held that (possibly subject to a caveat described below) a patent licensee can sue to get out of the license without having to infringe the patent first. In other words, the licensee can sue without stopping the license payments. This sounds like some minor procedural point. It is a procedural point but it is hardly minor. It dramatically changes the risks of infringement and invalidity, and on whom those risks are placed, in a licensing relationship.

The Court's opinion today reverses the doctrine of the Federal Circuit, the normal appeals court for U.S. patents, which had held that there was no standing for a patent licensee to sue until it had actually infringed the patent or breached the contract. This meant that the licensee had to risk triple damages (for intentional infringement) and other potential problems in order to have a court determine whether the patent was valid or being infringed. This could be harsh, and it has long been argued that such consequences coerce licensees into continuing to pay license fees for products they have discovered are not really covered by the patent.

Given the fuzziness of the "metes and bounds" of patents, this is a common occurrence. Sometimes the licensee's engineers come up with an alternative design that seems to avoid the patent, but the licensee, too scared of triple damages should the court decide otherwise, sticks with the patented product and keeps paying the license fees. At other times new prior art is discovered or some other research uncovers the probable invalidity of the patent. But enough uncertainty remains that, given the threat of triple damages, the licensee just keeps paying the license fees.

A caveat is that the licensor may first have to "threaten" the licensee somehow with action if the licensee doesn't make the expected payments or introduces a new product not covered by the license. In this case that "threat" took the form of an opinion letter from the licensor stating that the licensee's new product was covered by the old patent, and thus that the licensee had to pay license fees for the new product.

Justice Scalia wrote the opinion for the eight-justice majority. He argued that if the only difference between a justiciable controversy (i.e. a case where the plaintiff has standing under the "case or controversy" clause of the U.S. Constitution) and a non-justiciable one (i.e. no standing to sue) is that the plaintiff chose not to violate the disputed law, then the plaintiff still has standing:

The plaintiff's own action (or inaction) in failing to violate the law eliminates the imminent threat of prosecution, but nonetheless does not eliminate Article III jurisdiction. For example, in Terrace v. Thompson, 263 U. S. 197 (1923), the State threatened the plaintiff with forfeiture of his farm, fines, and penalties if he entered into a lease with an alien in violation of the State's anti-alien land law. Given this genuine threat of enforcement, we did not require, as a prerequisite to testing the validity of the law in a suit for injunction, that the plaintiff bet the farm, so to speak, by taking the violative action.

One of the main purposes of the Declaratory Judgments Act, under which such lawsuits are brought, is to avoid the necessity of committing an illegal act before the case can be brought to court:

Likewise, in Steffel v. Thompson, 415 U. S. 452 (1974), we did not require the plaintiff to proceed to distribute handbills and risk actual prosecution before he could seek a declaratory judgment regarding the constitutionality of a state statute prohibiting such distribution. Id., at 458, 460. As then-Justice Rehnquist put it in his concurrence, "the declaratory judgment procedure is an alternative to pursuit of the arguably illegal activity." Id., at 480. In each of these cases, the plaintiff had eliminated the imminent threat of harm by simply not doing what he claimed the right to do (enter into a lease, or distribute handbills at the shopping center). That did not preclude subject matter jurisdiction because the threat-eliminating behavior was effectively coerced. See Terrace, supra, at 215-216; Steffel, supra, at 459. The dilemma posed by that coercion -- putting the challenger to the choice between abandoning his rights or risking prosecution -- is "a dilemma that it was the very purpose of the Declaratory Judgment Act to ameliorate." Abbott Laboratories v. Gardner, 387 U. S. 136, 152 (1967).

Scalia, who is normally no fan of easy standing, extended this doctrine from disputes with the government to private disputes. For this he used as precedent Altvater v. Freeman:

The Federal Circuit's Gen-Probe decision [the case in which the Federal Circuit established its doctrine which the Supreme Court today reversed] distinguished Altvater on the ground that it involved the compulsion of an injunction. But Altvater cannot be so readily dismissed. Never mind that the injunction had been privately obtained and was ultimately within the control of the patentees, who could permit its modification. More fundamentally, and contrary to the Federal Circuit's conclusion, Altvater did not say that the coercion dispositive of the case was governmental, but suggested just the opposite. The opinion acknowledged that the licensees had the option of stopping payments in defiance of the injunction, but explained that the consequence of doing so would be to risk "actual [and] treble damages in infringement suits" by the patentees. 319 U. S., at 365. It significantly did not mention the threat of prosecution for contempt, or any other sort of governmental sanction.

Scalia as usual got to the point:

The rule that a plaintiff must destroy a large building, bet the farm, or (as here) risk treble damages and the loss of 80 percent of its business, before seeking a declaration of its actively contested legal rights finds no support in Article III.

Scalia rebutted the argument that under freedom of contract the parties had a right to create, and here had created, an "insurance policy" immunizing the licensor from declaratory lawsuits:

Promising to pay royalties on patents that have not been held invalid does not amount to a promise not to seek a holding of their invalidity.

The lessons of this case also apply to copyright and other kinds of IP, albeit in different ways. In copyright, one can be exposed to criminal sanctions for infringement, so there is an even stronger case for declaratory lawsuits.

The most interesting issue is to what extent IP licensors will be able to "contract around" this holding and thus still immunize themselves from pre-infringement lawsuits. It's possible that IP licensors will still be able to prevent their licensees from suing if they use the proper contractual language. Licensees, on the other hand, may want to insist on language that preserves their right to sue for a declaratory judgment on whether the IP they are licensing is valid, or on whether they are really infringing it with activities for which they don't wish to pay license fees. If there's interest, I'll post in the future about good ideas for such language, whether I run across them or come up with them myself. This will be a big topic among IP license lawyers for the foreseeable future, as Scalia's opinion left a raft of issues wide open, including how such language would now be interpreted.

I will later update this post with links to the opinions (I have them via e-mail from Professor Hal Wegner).

UPDATE: Straight from the horse's mouth, here is the slip opinion.

Monday, January 01, 2007

It often pays to wait and learn

Alex Tabarrok writes another of his excellent (but far too infrequent) posts, this time about the problem of global warming and the relative merits of delay versus immediate action:

Suppose...that we are uncertain about which environment we are in but the uncertainty will resolve over time. In this case, there is a strong argument for delay. The argument comes from option pricing theory applied to real options. A potential decision is like an option, making the decision is like exercising the option. Uncertainty raises the value of any option which means that the more uncertainty the more we should hold on to the option, i.e. not exercise or delay our decision.
I agree wholeheartedly, except to stress that the typical problem is not the binary problem of full delay vs. immediate full action, but one of how much and what sorts of things to do in this decade versus future decades. In that sense, for global warming it seems to me none too early to set up political agreements and markets we will need to incentivize greenhouse gas reductions. Not to immediately and radically cut down emissions of greenhouse gases -- far from it -- but to learn about, experiment with, and debug the institutions we will need to reduce greenhouse emissions without creating even greater political and economic threats. Once these are debugged, but not until then, cutting greenhouse emissions will have a far lower cost than if we panic and soon start naively building large international bureaucracies.
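
Tabarrok's point can be put in numbers. Here is a toy calculation (all probabilities, costs, and discount factors are made-up placeholders of mine, not anything from his post) showing how the option to wait gains value when uncertainty will resolve over time and when acting later is cheaper because we have learned and debugged in the meantime.

    # Two strategies for an uncertain problem: act now, or wait until we learn
    # which "environment" we are in and act only if needed.  All numbers invented.
    p_severe             = 0.5    # chance the severe scenario is the real one
    cost_of_acting_now   = 60.0   # expensive: institutions not yet debugged
    cost_of_acting_later = 30.0   # cheaper after learning and debugging
    damage_from_delay    = 10.0   # extra harm if the severe scenario gets a head start
    discount             = 0.95   # time value of acting a period later

    # Act immediately, before knowing which environment we are in: pay the full cost either way.
    ev_act_now = -cost_of_acting_now

    # Wait, learn, then act only in the severe scenario (accepting some delay damage).
    ev_wait = discount * (p_severe * -(cost_of_acting_later + damage_from_delay)
                          + (1 - p_severe) * 0.0)

    print(ev_act_now, ev_wait)   # -60.0 vs. -19.0: under these assumptions, waiting wins

The comparison flips, of course, if the delay damage is large or if acting later is no cheaper -- which is exactly why the waiting period should be spent building and debugging the institutions.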

We already know how to use markets to reduce pollution with minimal cost to industry (and minimal economic impact generally). We now need to learn how to apply these lessons on an international level while avoiding the very real threat of corruption and catastrophic decay of essential industries that comes from establishing new governmental institutions to radically alter their behavior.

There are tons of theories about politics and economics, practically all of them highly oversimplified nonsense. No single person knows more than a minuscule fraction of the knowledge needed to solve global warming. Political debate over technological solutions will get us nowhere. We won't learn much more about creating incentives to reduce greenhouse gases except by creating them and seeing how they work. As with any social experiment, we should start small and with what we already know works well in analogous contexts -- i.e. what we already know about getting the biggest pollution reductions at the smallest costs.

The costs of the markets -- especially the target auction (and expected exchange) prices of carbon dioxide pollution units -- should thus start out small. In that sense, the European approach under the Kyoto Protocol (the European Union Emission Trading Scheme) provides a good model, even though it has been criticized for costing industry almost nothing so far and correspondingly producing little carbon dioxide reduction. So what? We have to learn to crawl before we can learn to walk. The goal, certain fanatic "greens" notwithstanding, is to figure out how to reduce carbon dioxide emissions, not to punish industry or return to pre-industrial economies. Once people and organizations get used to a simple set of incentives, those incentives can be tightened in the future in response to the actual course of global warming, in response to what we learn about global warming, and above all in response to what we learn from our responses to global warming.

We already know that markets -- and perhaps also carefully designed carbon taxes, as opposed to micro-regulation and getting the law involved in choosing particular technological solutions -- can radically reduce specific pollutants if the specific mitigation decisions are left to market participants rather than dictated by government. And we might learn that certain mitigation strategies (perhaps this one, for example) turn out to be superior to radical carbon dioxide reductions.


Let's set up and debug the basics now -- and nothing is more basic to this problem than international forums, agreements, and exchange(s) that include all countries that will be major sources of greenhouse gases over the next century. These institutions must be designed (and this won't be easy!) for minimal transaction costs -- in particular for minimal rent-seeking and minimal corruption. Until we've set up and debugged such a system, imposing large costs and creating large bureaucracies funded by premature "solutions" to the global warming problem could prove far more catastrophic than the projected changes in the weather.

As the accompanying illustrations show, a domestic United States program that left all decisions beyond the most basic and general market rules to the market participants -- and thus left the specific decisions to those with the most knowledge, here the electric utility companies, and minimized the threat of the rise of a corrupt bureaucracy -- was able to radically reduce emissions of the acid-rain-causing pollutants sulfur dioxide and nitrogen oxides. So much so that few remember that acid rain in the 1970s and 80s was a scare almost as big as global warming is today. Scientists plausibly argued that our forests were in imminent danger of demise. By experimenting with, learning about, and then exploiting the right institutions, that major pollution threat was rapidly mitigated once the initial learning curve had been methodically worked through.

Global warming is an even bigger challenge than acid rain -- especially in its international nature, which demands the participation of all major countries -- but we now have the acid rain experience and others to learn from. We don't have to start from scratch and we don't need to implement a crash program. We can start with what worked quite well in the similar case of acid rain and experiment until we have figured out in reality -- not merely in shallow political rhetoric -- what will work well for the mitigation of global warming.

Furthermore, it probably will pay to not impose the big costs until we've learned far more about the scientific nature of the problems (note the plural) and benefits (yes, there are also benefits, and also plural) of the major greenhouse gases, and until the various industries have learned how to efficiently address the wide variety and vast number of unique problems that the general task of reducing carbon dioxide output raises. It's important to note, however, that the scientific uncertainty, while still substantial, is nevertheless far smaller than the uncertainties of political and economic institutions and their costs.

To put it succinctly -- we should not impose costs faster than industry can adapt to them, and we should not develop international institutions faster than we can debug them: otherwise the "solutions" could be far worse than the disease.


Another application of Tabarrok's theory: "the" space program. (Just the fact that people use "the" to refer to what are, or at least should be, a wide variety of efforts, as in almost any other general area of human endeavor, should give us the first big hint that something is very wrong with "it.") For global warming we may be letting our fears outstrip reality; in "the" space program we have let our hopes outstrip reality. Much of what NASA has done over its nearly fifty-year history, for example, would have been far more effective and self-sustaining if done several decades later, in a very different way, on a smaller scale, on a much lower budget, and for practical reasons, such as commercial or military ones, rather than as ephemeral political fancies. The best space development strategy is often just to wait and learn -- wait until we've developed better technology and wait until we've learned more about what's available up there. Our children will be able to do it far more effectively than we can. I understand that such waiting is excruciatingly painful to die-hard space fans like myself, but that is all the more reason to beware of deluding ourselves into acting too soon.

In both the global warming and space program activist camps you hear a lot of rot about how "all we need" is "the political will." Utter nonsense. Mostly what we need to do is wait and experiment and learn. When the time is ripe the will is straightforward.

A million-yuan giraffe, brought back to China from East Africa by a Zheng He flotilla in 1414. Like NASA's billion-dollar moon rocks, these were not the most cost-efficient scientific acquisitions, but the Emperor's ships (and NASA's rockets) were bigger and grander than anybody else's! In the same year as the entry of this giraffe into China, on the other side of the planet, the Portuguese using a far humbler but more practical fleet took the strategic choke-point of Ceuta from the Muslims. Who would you guess conquered the world's sea trade routes soon thereafter -- tiny Portugal with their tiny ships and practical goals, or vast China with their vast ocean liners engaging in endeavours of glory?

Tuesday, December 19, 2006

Go native

Collision between a comet and a 370 kilogram projectile exposes subsurface ice, causing large amounts of dust and over 250,000 tons of water vapor to stream off the comet. As a liquid, that much water would fill over a hundred Olympic-sized swimming pools.

NASA for now is sticking to Wernher von Braun's World War Two vintage idea of lunar and Martian bases that import everything from earth -- including even such basics as propellants, tankage, food, air, and water. While these white elephants are still seen by most of NASA as the "next logical steps", there are a wide variety of other, often much better ideas for space development out there. I've long considered the possibility that comet mining, or similar drilling for any ice that might exist underneath the regolith of some asteroids (as we know it does on Mars), is a much more productive way to go.

The argument for microgravity ice mining, laid out in the above-linked article, involves several steps of reasoning, but the most important parts of the idea are (1) that use of native materials, rather than hauling materials from earth, is crucial to productive and affordable long-term space development, and (2) that given the expensive bottleneck of launching from earth, the most important variable for space industry is the mass throughput ratio (MTR): the ratio of the mass of material processed per unit of time to the mass of the (earth-origin) machinery required. A higher MTR translates to greater amplification of a small mass launched from earth into a large amount of material processed in space. Cometary or similar ice can be processed to create propellant, tankage, and other useful items as described in the above-linked article.

Snow maker.

To develop space economically, instead of merely wasting large amounts of money on historical dead ends, we need to use native materials, and we need to process them with very high mass throughput ratios. Developing bases on the moon, Mars, or elsewhere in deep space without using native materials would be as foolish as if the Pilgrims at Plymouth Rock had imported all their water, food, and wood from England.

I endorse the ice mining approach because, even though cometary (and possibly asteroidal) ice is not conveniently located, we can generally process water with very high mass throughput ratios. Here are some approximate (back-of-envelope) estimates I've made, usually based on the linked descriptions of the process. I typically assume the machines can operate on average 20 hours per day, 300 days per year:

Centrifuge that separates sludge from water at a rate of 200 gallons per minute.

Snow cone machine (grinding ice into small bits): 67,000/yr. (In other words, the machine can convert 67,000 times its own weight in ice to "snow" in a year).

Sludge separation with centrifuge: 3,000/yr.

Snow making from liquid water: 330,000/yr.

The MTR of a sequence of processes can be readily estimated. For example to grind ice into snow, then melt it (which I assume here has an MTR of 10,000/yr.), then separate out the dirt and organics with a centrifuge, then make snow with the now clear water -- a sequence similar to the comet mining process described in the above-linked article -- I estimate the MTR to be about 1,500 per year with the current off-the-shelf equipment.
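
As a minimal sketch of that estimate, assuming the stages run in series on the same mass stream so that their equipment masses simply add for a common thruput (making the system ratio the harmonic-style combination of the stage ratios), the figures above combine to roughly 2,200 per year -- the same order of magnitude as the ~1,500 per year estimate. The combination rule is my assumption for illustration, not something from the article.

    def combined_mtr(stage_mtrs):
        """System MTR for stages run in sequence on the same mass stream."""
        # For a common thruput T, stage i needs equipment mass T / mtr_i; the masses add.
        return 1.0 / sum(1.0 / r for r in stage_mtrs)

    chain = [67_000,    # snow cone machine (grinding ice)
             10_000,    # melter (the MTR assumed above)
              3_000,    # sludge-separating centrifuge
            330_000]    # snow maker
    print(round(combined_mtr(chain)))   # roughly 2,200 per year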

If this can be achieved for comet mining, then we can (for example) launch 10 tons of equipment and, by staying at the comet or asteroid for two years, make 30,000 tons of ice for various uses, including propellant, tankage, and radiation shielding. This is before discounting mass we lose from transporting the equipment to the comet or asteroid and ice back to a useful orbit, which will often be a factor of 10 to 20 -- see the above-linked article. Thus 10 tons launched into low earth orbit would result in 1,500 to 3,000 tons of water ice in a useful orbit such as earth geosynchronous orbit, Earth-Sun L1 (for the global warming application, link below), Mars orbit, or a wide variety of other inner solar system orbits.

Old ice breaker.

With a decent R&D budget we can probably do much better than the current off-the-shelf equipment, some aspects of which have to be redesigned for the space environment anyway (e.g. where the water flow depends on gravity). The MTRs of many of these machines can probably be improved by large factors through appropriate substitution of materials with better engineering properties (especially strength) per unit mass -- for example Kevlar, Vectran, Spectra, carbon fiber, or carbon nanotube for structural steel, and diamond grit for grinding surfaces. Such substitutions would be too expensive for typical earthside applications, but they make sense where transportation costs are very high, as with launch costs. This substitution is helped by the fact that, unlike most other proposed processes for extracting propellant from materials native to space (e.g. liquid oxygen from lunar soil), these processes don't require high temperatures.

Old ice saw.

On a related note, here's an interesting set of pages on the ancient art of harvesting, handling and storing ice for refrigeration.

Lest this all be dismissed as mere NASA-style entertainment, there are possibly quite important applications, including combatting global warming, for which native-material-based space industry is crucial. (No, the idea is not to throw big ice cubes into the earth's oceans. :-) Furthermore, once the first mission returns ice, that ice can be used as propellant and tankage to bootstrap even more ice mining equipment. Only a few such bootstrapping cycles are needed before orbital ice becomes as cheap as water on earth. Since comet ice can include methane and ammonia as well as water, it can provide the raw materials for most of the chemicals and bulk materials we need in space industry. Indeed, all the above-mentioned advanced materials can be manufactured starting with these building blocks -- keeping in mind that the current earthside manufacturing processes will themselves have to be examined for MTR, reliability, and suitability for automation, so the substitution of more simply produced materials may be in order, just as we are substituting a simpler rocket for advanced earth-launched upper stages in order to take advantage of native-ice-derived propellant and tanks.
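
To see why only a few cycles are needed, here is a toy sketch of the geometric growth involved. The gain factor is an invented placeholder of mine (no such figure is given above); the first-cycle tonnage is the low end of the earlier estimate.

    gain_per_cycle = 5        # hypothetical: each returned load of ice (as propellant and
                              # tankage) lets us field about 5x as much mining equipment
    tons_delivered = 1_500    # first mission's delivery, low end of the earlier estimate

    for cycle in range(1, 6):
        print(f"cycle {cycle}: about {tons_delivered:,.0f} tons of ice delivered")
        tons_delivered *= gain_per_cycle
    # cycle 1: 1,500 tons ... cycle 5: 937,500 tons -- a handful of cycles takes
    # deliveries from thousands of tons toward a million tons per cycle.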

Of course, there are many challenges with automation and reliability to be met. Some of these are being addressed by current developments in deep sea oil and mineral extraction, which I am researching and hope to cover in a future post.

Here is recent evidence of not just ice, but (at least temporarily) liquid water underneath the Martian surface that has flowed out onto the surface and frozen. Mars itself, in a deep gravity well, is not an ideal source of water for use beyond Mars, but ice there suggests that ice might also survive near the surfaces of some asteroids that lie even farther from the sun.


Thursday, December 14, 2006

History in a nutshell

I've written about geographic patterns that demonstrate the importance of securing as well as producing wealth (link below). These have shaped forms of government and law. Hunter-gatherer tribes, under which our instincts evolved, had no need of large organizations, governments, or law as we know it. The dawn of agriculture was probably made possible, not by the discovery that food plants could be grown from seeds (this would have been obvious to a hunter-gatherer), but by solving the much harder problem of how to protect this capital investment from fellow human beings over the course of a planting-growing-harvesting-storage cycle. This required internal law and external security exercised much more thoroughly and over a larger area. It was securing the production, more than the production itself, that eventually required radical organizational evolution.

Once farmland became the main source of wealth, there were substantial economies of scale in protecting it. This posed a difficulty in forming organizations larger than tribes; those cultures that could coordinate larger militaries slowly displaced tribes that could not. This led to a wide variety of governmental forms, but they tended to have in common that the military consumed the bulk of the otherwise insecure agricultural surplus. The primary legal form was that of real property, usually claimed by military lords and their heirs.

The next phase appeared sporadically and temporarily among city-states that dealt mostly in goods (including harvested and transportable agricultural commodities) rather than farmland. As cultures became centered around trade and industry, converting from farmland to goods as the main source of wealth, they also tended to convert from feudal monarchies to republics (or, as we tend to call them now, democracies). Contract law became as important as, or more important than, real property law. Real property became much more alienable, either sold outright or pledged as security for insurance or investments.

As most wealth became mobile, taxes became centered on trade and income -- on wealth transfers that require crossing borders or crossing trust barriers -- rather than on wealth itself. During the same centuries as the rise of republics, cheap paper made widespread monolinguistic merchant communities more effective. The printing press gave rise to modern national languages, making large-scale organizations such as the modern corporation and nation-state possible. This led to substantial efficiency gains both in the security of production (and of the accompanying trade) and in the production and trade themselves.

Technology and economy have now once again leapt ahead of government and law: now in the most developed countries it's no longer goods, but services and information, which increasingly create wealth. It's still almost anybody's guess what organizational forms will be needed to produce and secure service value and information wealth, and for that matter whether such wealth will take the legal forms that have appeared with goods-centered republics (e.g. patents and copyrights) or whether new more security-efficient forms of property and contract will emerge. I expect smart contracts and related protocols to play an important role.

More about security and history here.