
Friday, September 28, 2007

Secure time broadcast

From my article Advances in Distributed Security:
Broadcasts using sound or radiation, from sources such as bell towers, radio towers, satellites, and pulsars, must send out the same value to every receiver. A remote beacon such as a pulsar has perfect security: the access structure is any party, and its complement, the attack structure, is the empty set. For human controlled broadcasts, the attack structure consists only of the broadcaster and the access structure is any receiver.

Natural broadcasts are thus immune to the problem, described in the discussion of the Byzantine Generals problem below, of a transmitter sending different values to different receivers. Indeed, as we will see below, distributed systems researchers have gone to great lengths just to recreate this simple property on the Internet with logical broadcast protocols.

Nature provides clocks that are oblivious to the malicious intentions of any outside parties. In the case of a remote high-energy system such as a pulsar, this means anybody. Pulsar timing is many orders of magnitude more precise than the random message delays that attackers on the Internet can exploit. If critical Internet servers were synchronized to natural clocks in a secure and timely fashion, they would be immune to attacks that relied on uncertainties in timing.

More here.

Detecting pulsars is still, alas, a difficult process, but not beyond the reach of amateurs, who have had some success tracking pulsars with software defined radio and quagi antennas combined with digital signal processing.
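
To give a flavor of the signal processing involved, here is a minimal sketch of epoch folding, a standard technique for pulling a weak periodic pulsar signal out of noise by averaging many rotations at a candidate period. The sample rate, period, and amplitudes below are made-up illustration values, not taken from any real observation:

import numpy as np

def fold(samples, sample_rate, period, n_bins=64):
    # Assign each sample to a phase bin (arrival time modulo the period) and
    # average; a pulse that repeats at that period adds up, noise does not.
    t = np.arange(len(samples)) / sample_rate
    bins = ((t % period) / period * n_bins).astype(int)
    profile = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    np.add.at(profile, bins, samples)
    np.add.at(counts, bins, 1)
    return profile / np.maximum(counts, 1)

# Toy demo with made-up numbers (not real pulsar data): a weak 0.7145 s pulse
# buried in noise several times its amplitude.
rate, period, seconds = 2000, 0.7145, 120
t = np.arange(rate * seconds) / rate
pulse = 0.2 * (np.abs((t % period) / period - 0.5) < 0.02)
noisy = pulse + np.random.normal(0.0, 1.0, len(t))
profile = fold(noisy, rate, period)
print("folded profile peaks in phase bin", profile.argmax())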

Wednesday, September 19, 2007

Real options analysis for space projects

For the purposes of estimating the value of a project, risk has for simplicity often been treated as spread out evenly over the life of a project. For normal projects these risks can usually be fairly approximated by estimating risk per unit time. This occurs, for example, in the net present value (NPV) formula for estimating the value of a project, where a constant risk premium is added to the interest rate. In a typical space mission, however, risk tends to be concentrated at particular events such as launches, burns, flybys, and landings. Just floating out in space is relatively low risk, which is why we can fly to Jupiter and Saturn with failure rates not significantly greater than missions to the moon or Mars. All four of the spacecraft that have so far intentionally been flown beyond Saturn -- Pioneers 10 and 11, and Voyagers 1 and 2 -- had many initial glitches. But since passing beyond Saturn they have travelled on for nearly two decades, far beyond Pluto, with no substantial unexpected failures. The Voyagers continue to send back valuable data from the very edge of the solar system, where the solar wind gives way to the interstellar medium -- more than 80 times as far from the sun as our own earth.

There is a small risk of component failure, which tends to follow a Poisson process, so that cumulative failure risk does grow with time. But the risk in even earth-orbiting satellites is dominated by launch and orbit-insertion failures, and other failures at the start of the satellite's lifetime, which are unrelated to the satellite's expected lifetime.

Thus the vast majority of the risk of most space projects does not grow exponentially with their duration, and indeed is usually not closely correlated to their duration in any way. We would thus get an exponentially wrong answer from a line of reasoning that estimated the risk of a generic deep space mission as X%/year, and then deduced, by plugging that risk into the net present value (NPV) equation, that (for example) an 8 year mission is substantially costlier (due to a risk that grows exponentially over time) than a 2 year mission. An example of this fallacy is NPV analysis that assumes a constant risk premium for comparison of futuristic lunar and asteroid mining scenarios. All such papers that I've seen (e.g. this one) fall victim to this fallacy. To use NPV properly we need to account for the risks of particular events in the mission (in the mining scenario primarily launches, burns, and mining operations) to estimate a total risk, and divide that total risk by the duration of the mission to get a risk premium. The risk premium per year for the longer mission will thus probably be substantially lower than for a shorter mission (implying an overall risk only slightly higher for the longer mission, all other things being equal).
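
To make the difference concrete, here is a toy numeric sketch in Python contrasting the constant-risk-premium NPV with an event-based accounting of risk for a short versus a long mission. The payoff, rates, and event probabilities are made up for illustration, not taken from any actual mission study:

# Toy comparison with made-up numbers (payoff, rates, and event probabilities
# are for illustration only, not from any actual mission study).

def npv_constant_premium(payoff, years, riskless=0.05, premium=0.20):
    # Classic NPV with a constant risk premium: risk compounds every year,
    # so a long mission looks exponentially worse than a short one.
    return payoff / (1 + riskless + premium) ** years

def npv_event_based(payoff, years, event_success, riskless=0.05):
    # Event-based alternative: multiply the survival probabilities of the
    # discrete risky events (launch, burns, insertion...), then discount
    # only for the time value of money.
    p_survive = 1.0
    for p in event_success:
        p_survive *= p
    return p_survive * payoff / (1 + riskless) ** years

payoff = 1000.0
events = [0.95, 0.98, 0.97]   # launch, injection burn, orbit insertion

print(npv_constant_premium(payoff, 2), npv_constant_premium(payoff, 8))
print(npv_event_based(payoff, 2, events), npv_event_based(payoff, 8, events))
# The constant-premium figures diverge sharply with duration; the event-based
# figures differ only by the riskless time discount.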

An even more accurate method for evaluating risk in space projects is called real options analysis. It has changed valuation from the old static NPV model to a dynamic model based on specific risks. One improvement this brings is removing the assumption of constant risk, which we've seen is wildly inappropriate for deep space missions. Another idea real options brings us is that designing a project to postpone choices adds value to the project when there will be better information in the future. A science mission example: if a scientific event could occur at either target A or target B, it's best to postpone the choice of target until we know where the event is going to occur. If that's possible, we now have a scientifically more valuable mission.

A shorthand way to think of real options analysis is that the project is modeled as a game tree, with each node on the tree representing a choice (i.e. a real option) that can be made by the project managers. The choices are called "real options" because the math is the same as for financial options (game trees, Black-Scholes, etc.) but they represent real-world choices: for example, the option of a vineyard to sell its wine this year or let it age at least another year, or the option of a mine to open or close, or expand or shrink its operations, etc.
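
As a toy illustration of this kind of tree, here is a sketch of the value of keeping the choice between two targets open until we learn where the interesting event will occur. The probabilities, science values, and costs are made up; a real analysis would use proper real options mathematics, discounting, and a much richer tree:

# Toy decision tree with made-up numbers: the value of waiting to choose
# between targets A and B until we learn where the interesting event occurs.

p_event_at_A = 0.5                    # chance the event shows up at target A
value_hit, value_miss = 100.0, 20.0   # science value if we guessed right/wrong
retarget_cost = 10.0                  # delta-v/time cost of keeping the option open

# Committed trajectory: pick target A now, before anything is known.
committed = p_event_at_A * value_hit + (1 - p_event_at_A) * value_miss

# Flexible trajectory: wait, observe, then fly to whichever target has the
# event, paying the cost of the built-in retargeting window.
flexible = value_hit - retarget_cost

print("committed:", committed, "flexible:", flexible,
      "option value:", flexible - committed)
# A positive option value means the extra delta-v/time is worth paying,
# under these assumed numbers.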

The orbital planning I've seen tends to plan for a fixed mission ahead of time. Real options analysis says that the project may gain value if we design in options to change the project's course in the future. For orbital mechanics that means designing the trajectory to allow retargeting at certain windows, even at some cost of delta-v and time. (Whether the tradeoffs between delta-v, time, and particular real options are worth it can be analyzed using real options mathematics, if we can compare different scientific targets using a common utilitarian scale).

In the context of a Jupiter orbiter swinging by various Jovian moons, such options might include hanging around Europa longer if a volcano is erupting there (like the one discovered on the similar moon Enceladus) or if some evidence of life is found (or leaving it sooner if not), or changing the trajectory so that the next target is Europa instead of Ganymede if a volcano suddenly springs up on Europa, or Io if an interesting volcano springs up there. The idea is that instead of plotting a single trajectory, we plot a tree of trajectories, with various points where the mission controllers can choose trajectory A or trajectory B (sometimes with further options C, D, etc.) based on mission opportunities. Other trajectory options might include hanging around a particular moon for longer or changing the viewing angle to the target. We may trade off extra delta-v, extra time, or both in order to enable future changes in the trajectory plan.

Here is more on real options analysis. Real options analysis is also quite valuable for the research and development phase of a project. Here is a good paper on real options analysis for space mission design. My thanks to Shane Ross and Mark Sonter for their helpful comments on space project evaluation and planning.

UPDATE: This post is featured in the Carnival of Space #22.

Sunday, September 16, 2007

Tamper evident numbers

From The Playdough Protocols:
Evolving beyond clay tokens, accounting was the first use of external marks and started to take a familiar form. Along with the tamper evident clay, the Sumerians developed a kind of virtual tamper evidence. It took the form of two sets of numbers. On the front of the tablet, each group of commodities would be recorded separately -- for example, the front might record 120 pots of wheat, 90 pots of barley, and 55 goats. On the reverse would simply be recorded "265" -- the same objects counted again, probably in a different order, and without bothering to categorize them. The scribe, or an auditor, would then verify that the sum was correct. If not, an error or fraud had occurred. Note the similarity to tamper evident seals -- if a seal was broken, this meant that error or fraud had occurred. The breaker of the seals, or the scribe who recorded the wrong numbers, or the debtor who paid the wrong amounts of commodities would be called on the carpet to answer for his or her discrepancy.

Checksums still form the basis of modern accounting. Indeed, the principle of double-entry bookkeeping is based on two sets of independently derived numbers that must add up to the same number. Below, we will see that modern computers, using cryptographic methods, can now compute unspoofable checksums.
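
As a small illustration, here is a sketch in Python of the Sumerian-style cross-check alongside its modern cryptographic analogue, a hash digest over the itemized record (the commodity entries are just the example numbers from the quote above):

import hashlib

# Front of the "tablet": itemized entries.  Back: an independently derived
# control total, like the scribe's second count.
entries = {"wheat pots": 120, "barley pots": 90, "goats": 55}
control_total = 265

assert sum(entries.values()) == control_total, "error or fraud"

# A modern cryptographic analogue: a hash digest over the itemized record.
# Anyone who later alters an entry cannot produce the same digest, since
# finding a collision is computationally infeasible.
record = "|".join(f"{name}={count}" for name, count in sorted(entries.items()))
print(hashlib.sha256(record.encode()).hexdigest())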

Sunday, September 09, 2007

Mining the vasty deep (iii)

In the first two parts of this series I described undersea oil and diamond mining operations. In addition, at least two startup companies, Neptune Minerals and Nautilus Minerals, are moving into mining the seafloor for metals. They're planning to dig into extinct black smokers for copper, and possibly also zinc, gold, silver, and other minerals. David Heydon, CEO of Nautilus Minerals, says of the remarkably high quality of the ores, "we haven’t seen these types of mineral deposits since the beginning of modern mining."

Nautilus and Neptune don't plan to mine live black smokers, which would be dangerous (black smokers are hot springs with temperatures well above 200 degrees centigrade, although not boiling because of the high pressure) and environmentally questionable (live black smokers teem with exotic chemosynthetic life). Rather, they plan to mine the probably far greater number of extinct and lifeless black smokers that populate the oceans.
Nautilus paydirt:
Cutting of an extinct black smoker at 1,600 meters.


Geologist Steven Scott, a longtime advocate of sea mining, describes the geology of seafloor massive sulfides (SMS's). "These deposits are essentially metalliferous mud formed from hot, dense brines." They form within and around black smokers, the famous deep-sea vents discovered in 1979. Black smokers deposit "sinter-like mounds and chimneys of metal sulfides, oxides, silica and sulfates. Veins, disseminations and stockworks of relatively low metal grade impregnate the underlying lavas."

Curiously, we may already be mining black smokers: billion-year-old, long-extinct black smokers whose remains now lie on dry land. "The [current live black smoker] deposits have similarities to so-called volcanogenic massive sulfide ores being mined on land in Canada and elsewhere and which formed in ancient oceans as much as 2700 million years ago. Elements of potential commercial interest in both the modern and ancient deposits are copper, zinc, lead, silver, gold and barium. About 150 seafloor sites are known, most of which lie between 1500 and 3500 meters water depth."

Neptune paydirt, topside.

Both Neptune and Nautilus recently performed assays of several promising extinct smokers. Nautilus' assay was performed by two methods: a drill based on a ship (but dropped through 1600 meters of water) and a remotely operated vehicle (ROV) that took cutting samples. Here are at least some of the results. Heydon observes that "with 97% of the ocean floor yet to be extensively explored, it is likely that numerous deposits remain undiscovered."

Here is more from Steven Scott.

Business Plans

Nautilus estimates that it could cost as much as $300 million to ramp up to full-scale mining. Its plans include a $120 million specialized mining ship, the Jules Verne. This ship is similar to the FPSO used for deep sea oil, but will separate ore from sludge and water rather than oil from water. A large ROV, based on the trencher ROV used to lay pipes in the deep sea oil industry, but adding a large rotating drum scraper like those used in coal mining, will scrape the smokers creating sludge which will be pumped up to the Jules Verne and processed. Since the most expensive FPSOs can cost upwards of $800 million, the $120 million is a relative bargain. Indeed, the basic mothership/ROV setup is taken straight from the deep sea oil industry FPSO/ROV methodology.
Nautilus will contribute $120 million to develop undersea tools, including pumps, pipes and two subsea crawler-miners. It will also partner with a group of engineering companies with expertise in drilling and geophysics. If Jules Verne is in place by 2009, it could stay at the Papua New Guinea site until 2014, before moving on to exploit other deposits. Nautilus estimates that it will be able to mine 6000 tonnes a day from its target site.

Although there are technological hurdles to be overcome, ... 'the technology already exists; it just hasn't been integrated in this way. One beauty of the sea floor model is that a floating equipment production chain is our one main investment. Once we have the chain we can easily bring it up and redeploy it to another area.'
(More on Nautilus' plans here).

Nautilus' ROV with cutting arm to take samples.

Like many start-ups, Nautilus has a speculative and risky business plan, but the payoff could be vast:
[Nautilus] doesn't expect to reach full-scale production until 2009. There's no comparable traditional mining business model, since many of the individual deposits underwater are too small to be economically viable if they were on land. But because offshore drilling operations can move, whereas mines on land are stuck, the combined value of deposits in a wider area makes the operation worthwhile.
...
The worth of the oil, gas and minerals in the world's oceans is estimated to be in the trillions of dollars. If Mr. Heydon's estimates are correct, deep sea mining could have the potential to supply the world's growing demand for gold, copper and silver, among other metals. The resulting revenue could be in the billions of dollars for deep sea mining companies, of which only two currently exist.
...
Disputes over who owns what in the ocean have been a fact of global politics for decades. For reasons of security, potential resources and sometimes just pride, countries are constantly claiming control over new chunks of underwater property. As an indicator of just how rare it is to be able to mine hassle-free in the ocean, the exploration licence Papua New Guinea granted Nautilus was a world first.
More here. See also here.

In forthcoming posts in this series, we will look at some of the technological, environmental, political, and legal considerations involved in this new endeavor.

Saturday, September 08, 2007

Partial and total orders

From Advances in Distributed Security:
A basic issue of security and fault tolerance that must be resolved is the secure determination of which order events occurred in. If a contract specifies a deadline and it goes down to the wire, how can a relying party or third party adjudicator determine whether the deadline was met? The outcome itself, and its fairness, may rest on fairly deciding who came first. If Alice tries to double-spend a piece of digital cash [C82], only the recipient who checks with the bank first is entitled to its value. But if the bank servers are replicated, which of the two recipients, Bob or Charles, checked with the bank first? In the case of a replicated property title service [S98] we have a similar problem -- if Alice transfers her title to two other owners, which new owner actually received the deed? If property is homesteaded on a first-come, first-served basis, which of two or more "land rushers" competing for a lucrative parcel is entitled to the land?

Lamport (Causal) Order

Imagine a network where computers don't know how to keep time very well -- they are always getting out of synchronization. (Alas, all you have to really think of here is the actual Internet with PCs). Such a network, called an asynchronous network, lacks an accurate and secure global clock time by which computers can determine the order in which events, which might be messages sent or instructions executed on a particular local machine, have happened. Lamport [L78] was among the first to tackle the problem of how to determine the order of events in such a network.

A partial order means that we know in what order some of the elements are, but we aren't sure about some of the others, or some of the others may be equal. An example is the "less than or equal to" relationship among a group of integers, some of which can repeat. Some of the integers we know are less than some others, but an integer paired with itself is equal. A total order, on the other hand, is like the "less than" relationship among unique integers -- we can always tell when one integer is less than another -- there is no ambiguity left. In the case of events, a partial order means that for some pairs of events we know whether one occurred before the other, and for some others we don't know. We use the same symbols as we would use for the analogous case of the integers, so that "x <= y" means "x either occurred before y or we don't know whether it occurred before or after y". In a total order of events, we know for any two events which one happened first. We write "x < y" meaning "x occurred before y."

Lamport's answer to the event ordering problem was to show that parties (or, we use the terms equivalently here, nodes on the network) can agree on a partial order of events based on causal relationships between these events -- or at least the subset of events where we can determine that causation could occur. On a network, parties influence each other by talking to each other -- in other words, by sending each other messages. Lamport used these messages as the basic building block for constructing his partial order, according to the following rules:

  • 1. If an event is local to node P, every nonfaulty node agrees on P's opinion of it.
  • 2. Every correct node agrees that every message was sent before it was received.
  • 3. If we agree that event A occurred before event B and that event B occurred before event C, then we agree that event A occurred before event C. In other words, this partial order is transitive.
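
As a minimal sketch of how such a causal order can be implemented, here are Lamport's logical clocks in a few lines of Python: each node keeps a counter, and message timestamps force receives to be ordered after their sends. This is the standard textbook construction; the article's own presentation may differ in detail:

class Node:
    # Lamport logical clock: timestamps are consistent with the causal
    # partial order (local events advance the clock; a receive is forced
    # to be later than its send).
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock            # the timestamp travels with the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Node("P"), Node("Q")
t1 = p.local_event()      # P: 1
t2 = p.send()             # P: 2  (message sent to Q)
t3 = q.receive(t2)        # Q: 3  (ordered after the send)
t4 = q.local_event()      # Q: 4
assert t1 < t2 < t3 < t4  # timestamps respect the causal order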

Breaking Ties -- Creating a Fair Total Order

The partial order leaves us with the need to agree on how to break ties -- how to resolve the ambiguities where we can't agree which event took place first -- and thus create a total order of events. We want to do so in a way that is fair, in other words, in a way that cannot be manipulated to the advantage of any particular party.

An unfair way to create a total order would be to impose a certain predictable rule for breaking ties. For example, we could decide on a total order for the processes and break ties in the causal order by referring to this total order.

However, such a procedure creates a bias that may, depending on the application, favor certain servers over others, and therefore allow those servers to favor certain clients over others.

One way to break ties fairly is to have the participants toss fair coins -- in other words, generate random numbers in a way that cannot be manipulated and then assign those random numbers to events. There are several ways to toss fair coins over a network and we describe one such way below.
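
One common construction (not necessarily the one described in the article) is a commit-reveal protocol: each party commits to a secret bit with a cryptographic hash before any bit is revealed, and the shared coin is the XOR of the revealed bits. A minimal sketch:

import hashlib, secrets

def commitment(bit, nonce):
    # Publishing the hash commits a party to its bit without revealing it.
    return hashlib.sha256(f"{bit}:{nonce}".encode()).hexdigest()

a_bit, a_nonce = secrets.randbelow(2), secrets.token_hex(16)
b_bit, b_nonce = secrets.randbelow(2), secrets.token_hex(16)
a_commit = commitment(a_bit, a_nonce)
b_commit = commitment(b_bit, b_nonce)

# Only after both commitments have been exchanged are the bits revealed,
# and each side checks the other's reveal against the commitment.
assert commitment(a_bit, a_nonce) == a_commit
assert commitment(b_bit, b_nonce) == b_commit

# The shared coin is the XOR: neither party could pick its bit after seeing
# the other's.  (A party that dislikes the outcome could still refuse to
# reveal; handling that takes more machinery.)
print("coin:", a_bit ^ b_bit)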

Another way to break ties fairly is to have the participants agree to a global clock time that is more accurate than the message delays faced by those who would manipulate timing in favor of some party. This entails using a network with very predictable message lag for the clock synchronization protocol and a less predictable one for the other services.
More here.

Thursday, September 06, 2007

Institutional changes prerequisite to the industrial revolution

In the ongoing debate at Marginal Revolution over Gregory Clark's new book A Farewell to Alms -- which I highly recommend, despite my criticisms -- he wrote the following:

"The widespread impression that between 1300 and 1800 England experienced significant institutional improvements is just wrong. There were changes, yes. But not improvements."

To which I responded:
Some important institutional changes -- some of them arguably radical improvements -- in England between 1300 and 1800:

* The mechanical clock, 14th century. The resulting rise of clock culture and the time wage may have slowly but radically improved the coordination and work habits of Europeans. Earlier adaptation to clock culture, a process that may take centuries to evolve, may explain the large discrepancies between European and many non-European laborer work habits that Clark cites.

* The printing press and the rise of book consciousness, which radically decreased the costs of teaching economically important knowledge to both children and adults. The rise of book consciousness, reflected in the literacy and book cost data Clark graphs, explains the most prominent puzzle revealed by Clark's data: the fact that skills and innovation rose dramatically even as the rewards to skills were stagnant or declined.

* Nationalization of the Church in England and secularization of family law, 16th century.

* The incorporation of the Lex Mercatoria into the common law, and the resulting rise of modern contract law, 18th century. Indeed, much of this occurred in the same decades as the start of the industrial revolution.

* The "Romanization" of property law, rendering land more freely saleable, divisible, and mortgageable, which Adam Smith noted was an important improvement still in process at his time.

* The rise of marine insurance (e.g. Lloyd's of London) and the associated rise of colonialism and world trade, 17th-18th century.

* The decline of guilds and monopolies, 16th-18th centuries. Medieval England was certainly not a highly competitive market economy. Commerce in goods was dominated rather by monopolies and a variety of price and quality controls instituted by guilds and towns.
Here is more of the debate at MR.

Tuesday, September 04, 2007

Secure property titles

From Secure Property Titles with Owner Authority:
In all cases of property rights there is a defined space, whether a namespace or physical space, and the task is to agree on simple attributes of or rights to control subdivisions of that space. In some cases a name or other symbol corresponds to a person or object owned or controlled by that person. For example, Internet users must agree on which domain name corresponds to which web site operator. In other cases we are simply concerned with control over a subdivision of the space. With real estate we must agree on who owns various rights (to occupy the surface, to mine the minerals under, etc.) to a piece of land. With radio spectrum we must agree on who owns what range of frequencies and in what physical space (or transmitting power as an easily observed approximation of physical space used).

...all such [multiparty problems of] control over the semantics of symbols, to be made and respected across trust boundaries, are problems of agreeing on and maintaining property rights...

...New advances in replicated database technology will give us the ability to securely maintain and transfer ownership for a wide variety of kinds of property, including not only land but chattels, securities, names, and addresses. This technology will give us public records which can "survive a nuclear war", along the lines of the original design goal of the Internet. While thugs can still take physical property by force, the continued existence of correct ownership records will remain a thorn in the side of usurping claimants...

The ideal title database would have the following properties:

(1) Current owner Alice should be able to transfer her title to only a single relying counterparty (similar to the "double spending" problem in digital cash)

(2) Servers should not be able to forge transfers

(3) Servers should not be able to block transfers to or from politically incorrect parties.

...Using these results [of Byzantine quorum systems] it looks like we can approach our ideal title database as follows:

(1) Alice signs the title and Bob's public key, and sends this message to 2f+1 servers, committing her to transfer title to Bob. Bob checks at least 2f+1 servers before relying on Alice's transfer.

(2) No collusion of servers can forge Alice's signature (we achieve at least this property ideally!)

(3) A conspiracy of >=(1/4)n servers can block a transfer. Alice's recourse is to use some other channels to broadcast her intention, demonstrating that the registry did not follow her wishes, and hoping the alternative channels are more reliable. Bob only has similar recourse if he signed a document with Alice demonstrating their intentions to transfer title from Alice to Bob. The most basic recourse is a correct subset of servers which exits the property club and establishes a new one, then advertises its correctness (and proves the incorrectness of its rival group) as described above.
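
As a rough sketch of step (1), here is what Alice's signed transfer and Bob's 2f+1 quorum check might look like, using Ed25519 signatures from the Python cryptography package and stand-in server objects. A real Byzantine quorum system involves much more machinery (server signatures on acknowledgments, proofs of misbehavior, and so on), so treat this only as an illustration of the trust structure:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

f = 1                    # assumed number of faulty servers tolerated
n = 3 * f + 1            # a typical Byzantine replication factor
QUORUM = 2 * f + 1

alice_key = Ed25519PrivateKey.generate()
bob_key = Ed25519PrivateKey.generate()
bob_pub = bob_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Alice signs (title, Bob's public key), committing herself to the transfer.
transfer = b"title:blackacre|to:" + bob_pub
signature = alice_key.sign(transfer)

class Server:
    # Stand-in for a replicated title server: verify Alice's signature,
    # record the new owner, acknowledge.
    def __init__(self):
        self.titles = {}
    def accept(self, transfer, signature, alice_pub):
        alice_pub.verify(signature, transfer)   # raises if the transfer is forged
        self.titles["blackacre"] = transfer
        return True

servers = [Server() for _ in range(n)]
alice_pub = alice_key.public_key()
acks = sum(s.accept(transfer, signature, alice_pub) for s in servers[:QUORUM])

# Bob relies on the transfer only once at least 2f+1 servers confirm it.
assert acks >= QUORUM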

More here.

Sunday, September 02, 2007

Bilinear group cryptography

An important recent development in public key cryptography is the bilinear group, which for abstract algebra wonks is defined as follows (if you're not into abstract algebra feel free to skip to below):
Bilinear groups are a set of three abstract algebraic groups, G1, G2 and GT , together with a deterministic function e, called a bilinear map, that takes as input one element from G1 and one element from G2 and outputs an element in GT . Suppose all three groups have order Q, element g1 generates group G1, and element g2 generates group G2. Then, one special property called bilinearity of the map e is that for all a, b < Q, we have that e(g1^a , g2^b) = e(g1, g2)^ab. This new element, e(g1, g2)^ab, is in group GT . The key observation is what happens to the exponents, a and b, during the mapping: they are multiplied. The group GT is distinct from both G1 or G2; thus the output of the mapping cannot be fed back into the map e as input.
Elliptic curves are generally used for the groups, although bilinear schemes in at least some other algebras are also possible.
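
For readers who want to see the bilinearity property numerically, here is a short check using the py_ecc library's bn128 elliptic-curve pairing (assuming that library is available; the exponents are toy values):

# Assumes the py_ecc library (pip install py_ecc); exponents are toy values.
from py_ecc.bn128 import G1, G2, pairing, multiply

a, b = 6, 11
lhs = pairing(multiply(G2, b), multiply(G1, a))   # e(g1^a, g2^b)
rhs = pairing(G2, G1) ** (a * b)                  # e(g1, g2)^(ab)
print(lhs == rhs)                                 # True: the exponents multiply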

Two of the main applications of bilinear groups are proxy re-signatures and proxy re-encryption. In proxy re-signatures, a semi-trusted party transforms Alice's public key signature into Bob's. The proxy does not have, cannot derive, and thus cannot sign with either Bob's secret key or with Alice's, but can only transform Alice's signature into Bob's. The proxy re-signer is thus "semi-trusted" -- it is trusted with some things we normally would trust a proxy signer with, but not with others. For example it is not trusted with either Alice's or Bob's private key, only with a special key that allows the signature transformation.

The target signature could also be a group signature. Thus, for example, Alice could sign her e-mail with her own digital signature, and a proxy re-signer sitting on the corporate e-mail firewall could re-sign the e-mail with the corporate group signature.

Proxy re-signers can be chained in a series, so that signature A is transformed by proxy AB into signature B, which is transformed by proxy BC into signature C, and so on. The last signature Z proves that the message was signed by each proxy in the chain in order. Proxy re-signers can also be chained together in a tree or directed acyclic graph. (Note that threshold signatures by contrast do not require or prove that the signatures took place in a particular order).

Proxy re-encryption is the same idea for public key encryption, with the added bonus that the re-encryptor can't read the message. So, for example, we could have the following scheme to restrict the distribution of content:

(1) Content owner Alice encrypts her content with her public key and publishes it to proxies P1, P2, etc., along with re-encryption keys AC1, AC2, etc. for each customer.

(2) Proxy allows customer to access the content only if paid. When paid, the proxy re-encrypts to the customer using the re-encryption key for that customer.

The proxies themselves are trusted neither with an ability to view the content nor with the discretion to distribute to additional customers not desired by content owner Alice. The proxy is trusted only to restrict access to a customer. (I present this scheme mainly just to illustrate what proxy re-encryption does. As an application, this particular content distribution scheme seems to me to be useful only if routing all payments through proxies somehow lowers transaction costs compared to paying Alice directly, which could be done with ordinary public-key cryptography, and of course it doesn't protect against a cheating customer re-publishing the content to the world).
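
To make the trust structure concrete, here is a toy of the older ElGamal-based proxy re-encryption construction of Blaze, Bleumer, and Strauss. It uses no pairings, and the bilinear-group schemes discussed in the thesis differ in important ways (for example, this one's re-encryption key works in both directions), but it shows how a proxy can transform a ciphertext without being able to read it. The group parameters are toy-sized and utterly insecure:

# Toy ElGamal-based proxy re-encryption (Blaze-Bleumer-Strauss style, no
# pairings).  Group parameters are toy-sized and utterly insecure.
import secrets

p, q, g = 2039, 1019, 4      # p = 2q + 1; g generates the order-q subgroup

a = secrets.randbelow(q - 1) + 1              # Alice's secret key
b = secrets.randbelow(q - 1) + 1              # Bob's secret key
alice_pub = pow(g, a, p)
rk = (b * pow(a, -1, q)) % q                  # re-encryption key b/a, held by the proxy

# Encrypt a message (encoded as a subgroup element) to Alice.
m = pow(g, 123, p)
k = secrets.randbelow(q - 1) + 1
c1, c2 = (m * pow(g, k, p)) % p, pow(alice_pub, k, p)     # (m*g^k, g^(ak))

# The proxy transforms the ciphertext for Bob without being able to read m.
c2 = pow(c2, rk, p)                                       # now g^(bk)

# Bob decrypts with his own key.
shared = pow(c2, pow(b, -1, q), p)                        # recovers g^k
recovered = (c1 * pow(shared, -1, p)) % p
print(recovered == m)                                     # True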

I suspect proxy re-encryption could simplify the design of digital mix schemes like onion routing -- this is left as an exercise for the cryptographically inclined reader.

This thesis is my source for most of this blog post; it discusses bilinear group cryptography for proxy re-encryption, proxy re-signing, and for reducing the trust needed for blinded offline digital cash.

Legal caveat: many, if not most protocols based on bilinear groups seem to have been recently patented.