Much less well known, but of similar importance, was the much earlier nitrogen crisis. This was not an overabundance of nitrogen, but the depletion of nitrogen in the readily usable forms that early life had evolved to consume. One might think that life would evolve to reflect at least roughly the same distribution of elements as are available in its environment. Let’s see if this is true relative to abundance in our planet’s present oceans:
Elemental abundance in bacteria vs. in seawater (ref):
This is misleading for the metals before the oxygen crisis (i.e. for most of the history of life), when they were far more abundant in the oceans than the present levels shown. But for elements that did not “rust out” of oxygenated seawater, such as oxygen, hydrogen, carbon, nitrogen, phosphorus, and potassium, the above graph is illuminating.
There is a great deal of correlation here to be sure, but there are also outliers, elements that life must concentrate by several orders of magnitude: particularly carbon, nitrogen, and phosphorus, and to a lesser extent potassium. A reasonable guess is that this reflects contingency: life originated in a certain unusual environment, an environment disproportionately rich in certain chemicals, and its core functions cannot evolve to be based on any other molecules. Every known living thing requires, in its core functions, nucleic acids (which make up RNA and DNA), amino acids (which make up proteins, including the crucial proteins, called enzymes, that catalyze chemical reactions), and the “energy currency” through which all metabolisms consume and produce energy, the adenosine phosphates. Let’s briefly scan some core biological molecules to see how elements are distributed in them:
Carbon, as carbon dioxide, is abundant in the atmosphere (and earlier in earth’s history was far more abundant still). Through the process of photosynthesis, the two double bonds in carbon dioxide can be readily cleaved in order to form other bonds with the carbon in biological molecules. Indeed, instead of storing energy directly as ATP, life can and does take advantage of the relative accessibility of carbon, hydrogen, and oxygen to store energy as carbohydrates and fats, and then through respiration convert them to ATP only when needed.
Nitrogen is also abundant in earth’s atmosphere, but in the form of dinitrogen – two nitrogens superglued together with an ultra-strong triple bond. To form nucleic acids, amino acids, and ATP, something must crack apart the dinitrogen. Phosphorus, to the extent it is available in the natural environment, comes in the readily incorporated form of phosphates. The trouble is, phosphorus in any form is just plain uncommon. Nevertheless, all life still relies on it at the center of the genetic code (DNA, RNA) and every metabolism (ATP).
Generally speaking, the result of the chemical contingencies of known life – which for its core functions uses molecules rich in hard-to-obtain nitrogen and phosphorus -- is that in known natural environments ecosystems are either nitrogen-limited or phosphorus-limited. In other words, the biomass of the ecosystem is usually limited by the amount of nitrogen or phosphorus available. Liebig’s principle states that in any given environment, there is generally one nutrient that limits the growth of an organism or ecosystem. In earth environments that nutrient is usually nitrogen (as ammonia or nitrate) or phosphorus (as phosphate).
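Liebig’s principle can be stated very compactly: growth is capped by the scarcest nutrient relative to need, not by the total supply. Here is a minimal sketch of that idea; the nutrient amounts are made-up numbers for illustration only, not measurements.

```python
# Illustrative sketch of Liebig's law of the minimum: growth is set by the
# nutrient with the smallest available/required ratio, regardless of how
# abundant the other nutrients are. All quantities are hypothetical.

def limiting_growth(available, required):
    """Return (growth factor, limiting nutrient).

    available, required: dicts mapping nutrient name -> amount
    (same arbitrary units). required is the amount needed per unit
    of biomass, so each ratio is how much biomass that nutrient
    alone could support.
    """
    ratios = {n: available[n] / required[n] for n in required}
    limiter = min(ratios, key=ratios.get)
    return ratios[limiter], limiter

# Hypothetical environment: carbon is plentiful, nitrogen is scarce.
available = {"C": 1000.0, "N": 5.0, "P": 2.0}
required  = {"C": 100.0,  "N": 10.0, "P": 1.0}  # per unit of biomass

growth, limiter = limiting_growth(available, required)
print(growth, limiter)  # -> 0.5 N  (nitrogen-limited, as in most ecosystems)
```

Note that adding more carbon or phosphate to this hypothetical environment changes nothing; only relieving the nitrogen shortage raises the growth factor, which is the barrel-stave picture discussed later in the comments.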
The eukaryotes (basically, complicated multi-celled life including all plants, animals and fungi) seem to lack the ability to evolve metabolisms that go beyond a certain point. Instead it’s the simpler prokaryotes -- archaea and bacteria -- that have a far wider range of energy chemistry: a dizzying variety of chemosynthetic and photosynthetic metabolisms and ecosystems.
For certain crucial chemicals, the eukaryotes rely on archaea and bacteria in their ecosystem. Exhibit A is nitrogen fixation. Life doubtless originated in an environment rich in ammonia and/or nitrates, molecules with only single nitrogens and thus no need to split the superglued dinitrogen bond. But these early organisms would have soon depleted the levels of nitrates and ammonia in the local environment to very low levels. Call it the nitrogen crisis.
Dinitrogen, N2, is the most abundant molecule in our atmosphere. But few things are powerful or precise enough to crack dinitrogen. Lightning can do it, converting dinitrogen and dioxygen in the earth’s atmosphere into nitrates. Lightning thus can, albeit very slowly, put usable nitrates back into sea and soil where they have been depleted by life. Trouble is, (a) the resulting equilibrium level is far below the concentrations of nitrogen in organisms, and far below levels for optimum growth, and (b) the process requires an atmosphere rich in oxygen, which the earth did not possess until less than a billion years ago. (Alternatively, lightning might have made significant nitrates by reacting carbon dioxide with nitrogen, a possibility explored here. However, early life probably evolved in water so hot that it destroyed these nitrates.)
Prokaryotes came to the rescue – probably very early in the history of life, when local nitrates and ammonia had been exhausted – by evolving perhaps the most important enzyme in biology, nitrogenase, “the nitrogen-splitting anvil.” Nitrogenase’s metal-sulfur core makes it precise enough a catalyst to crack the triple bond of dinitrogen.
The general reaction fixing dinitrogen to ammonia, whether with nitrogenase or artificially, is as follows:
N2 + 6 H + energy → 2 NH3
The dinitrogen is split and combined with hydrogen to form ammonia. Ammonia can then be readily used as an ingredient that ends up, via the sophisticated metabolism that exists in all life, as amino acids, nucleic acids, and adenosine phosphates. When nitrogenase fixes nitrogen it consumes a prodigious amount of energy in the form of ATP. In particular for each atom of nitrogen it consumes the energy of 8 phosphate bonds:
N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi
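As a quick sanity check on the stoichiometry, we can verify that the nitrogen, hydrogen, and charge books balance on both sides of the nitrogenase reaction, and recover the 8-ATP-per-nitrogen energy cost. This is just arithmetic on the equation as given; the ATP → ADP + phosphate conversion is treated separately as the energy-carrying side of the reaction.

```python
# Bookkeeping check for: N2 + 8 H+ + 8 e- + 16 ATP -> 2 NH3 + H2 + 16 ADP + 16 Pi
# We track only N atoms, H atoms, and charge of the small species;
# the 16 ATP -> 16 ADP + 16 Pi part trivially balances by itself.

def totals(side):
    """Sum (N atoms, H atoms, net charge) over a list of (species, count)."""
    n = h = charge = 0
    for species, count in side:
        n += count * {"N2": 2, "NH3": 1}.get(species, 0)
        h += count * {"H+": 1, "NH3": 3, "H2": 2}.get(species, 0)
        charge += count * {"H+": +1, "e-": -1}.get(species, 0)
    return n, h, charge

left  = [("N2", 1), ("H+", 8), ("e-", 8)]
right = [("NH3", 2), ("H2", 1)]

print(totals(left), totals(right))  # -> (2, 8, 0) (2, 8, 0): balanced
# Energy cost: 16 ATP per N2 molecule, i.e. 8 ATP per nitrogen atom fixed.
print(16 / 2)  # -> 8.0
```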
Nitrogenase is extremely similar in all organisms known to contain it. It thus probably evolved only once. Given its crucial function of supplying a limiting nutrient, despite its high energy cost it proved to be so useful that it spread to many phyla of archaea and bacteria. Either it evolved very early in the history of life (before the “LCA”, the Last Common Ancestor of all known life) or it spread through horizontal gene transfer:
Alternative origins and evolution of nitrogenase (ref):
The archaea and bacteria that contain nitrogenase, and can thus fix nitrogen, are called diazotrophs. One of the earliest diazotrophs may have been a critter that, like this one, lived in high pressure hot water in an undersea vent. In today’s ocean, the most common diazotroph is the phytoplankton Trichodesmium.
Colonies of Trichodesmium:
The biomass of earth's oceans is probably limited by the population of such diazotrophs. Supplying the iron they use to make nitrogenase would increase the amount of nitrogen fixation and thus the biomass in the oceans. A larger ocean ecosystem would draw more carbon dioxide out of the atmosphere, and so is of great interest. This process in the ocean seems to have its limits, however: too much ocean biomass in a particular area can, when it decomposes, deplete oxygen from the ocean, suffocating animals. Oxygen replacement from the atmosphere appears to be too slow to prevent this effect when nitrogen concentrations are high enough, but nitrogen concentrations in almost all ocean areas are far lower than this and would remain lower even while drawing out substantial amounts of carbon dioxide. (Here is a nice Flash animation of the nitrogen cycle in the oceans).
On land, certain plants, especially legumes, are symbiotic with certain diazotrophs. The bugs grow in root nodules in which the legume supplies them with large amounts of sugar to power the energy-greedy nitrogenase. In turn, the diazotrophs supply their legume hosts with fixed nitrogen, allowing the legumes to generate more protein more quickly than other plants, but at the expense of the extra photosynthesis needed to feed the energy-hungry bugs.
One thing seems unclear to me. Maybe nitrogen and phosphorus are just irreplaceable - nothing does the things they do so well.
After all, if they are the limiting factors in most ecosystems, then that implies that there is *enormous* selection pressure on anything that can either replace their use or merely lessen consumption. Yet apparently in the billions of years since, even as prokaryotes evolve all sorts of nifty & bizarre tricks, no one has succeeded.
But it seems to me that we ought to expect some hard nutrient-related limit. If nitrogen & phosphorus were removed, then the next limit would be copper or something. And if that were removed, then there'd be another limiting element, and so on.
gwern, that's a great point about selection pressures. Ultimately the question is a scientific one but at this point it's rather philosophical. Is evolution stuck in a certain basic design and can't get out of it for reasons of genetic distance (e.g. for the same reason an alligator can't just mutate into a chicken, even though both forms are possible)? Or is it that it is simply impossible for life to exist anywhere without amino acids and phosphates? I doubt that anybody at this point knows the answer to that question.
As for nutrient limits, you've made a good restatement of Liebig's principle. I should add that, except where evolution is "stuck", Liebig's principle applies only over the short term. Often in the long term organisms can evolve to reduce their use of given nutrients (e.g. as life adapted to land by using less water). But per the first paragraph there are limits beyond which life hasn't evolved -- we certainly never evolved to be as dry as the land environment around us, and we still all need carbon, nitrogen, and phosphorus.
A useful metaphor, BTW, is Liebig's barrel:
That said, Liebig's law of the minimum, while a great rule of thumb, is not always and entirely true. To a limited degree organisms can proximately adapt, and to a much greater but still limited (per my last comment) degree can ultimately adapt (i.e. evolve), by taking special steps to conserve the scarce nutrient and by substituting one nutrient for another. Thus, for example, being a bit low on water, nitrogen, phosphorus, or carbon can stunt the growth of a plant, but it will often take some steps to conserve the scarce nutrient and thus reproduce. If you are on a low-fat diet you will end up making up for it by eating more carbs, and vice versa.
As a result, it's not strictly the case that there is always just one nutrient needed by an organism such that it never needs anything else until it is satiated on that one. It's just close to being the case, and a good rule of thumb that a balanced diet, with the staves of Liebig's barrel built up to the same level, is better than an unbalanced one.
And our human economy with its knowledge and technology is far better still at substitution, which I hope to discuss in future posts.
To some extent nitrogen and phosphorus are irreplaceable. Sulfur less so.
Only 2 of the 20 main amino acids have an atom of sulfur in them. So when sulfur is limiting, evolution favors proteins that use less methionine or cysteine. And sure enough, the protein that transports sulfur from the environment into the cell uses no sulfur.
But each amino acid uses one nitrogen for its backbone. Four of them use a second nitrogen, one a third, and one a fourth. You can evolve proteins that don't use much arginine. You can evolve proteins that don't use much histidine. Of course these are useful building blocks that you're doing without. But you can't possibly cut it down to less than one nitrogen per amino acid. To bring it less than that you need to make enzymes out of something other than amino acids, and you need a way to code your structures into DNA or some other code-storage medium, and so on.
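The counting argument above can be made concrete. Every standard amino acid residue carries one backbone nitrogen, and the side chains of asparagine, glutamine, lysine, and tryptophan add one more, histidine adds two, and arginine adds three; this is standard chemistry, not something stated in the comment itself. A short sketch of the resulting floor:

```python
# Nitrogen atoms per residue for the 20 standard amino acids.
# Backbone contributes 1 N each; side-chain nitrogens are listed below
# (standard amino acid chemistry, used here to illustrate the argument).

SIDE_CHAIN_N = {"Asn": 1, "Gln": 1, "Lys": 1, "Trp": 1, "His": 2, "Arg": 3}
AMINO_ACIDS = ["Ala", "Arg", "Asn", "Asp", "Cys", "Gln", "Glu", "Gly",
               "His", "Ile", "Leu", "Lys", "Met", "Phe", "Pro", "Ser",
               "Thr", "Trp", "Tyr", "Val"]

n_per_aa = {aa: 1 + SIDE_CHAIN_N.get(aa, 0) for aa in AMINO_ACIDS}

# Evolution can avoid the nitrogen-rich residues (Arg at 4, His at 3),
# but the backbone sets a hard floor of one nitrogen per amino acid.
print(min(n_per_aa.values()))                        # -> 1
print(sorted(set(n_per_aa.values()), reverse=True))  # -> [4, 3, 2, 1]
print(sum(1 for v in n_per_aa.values() if v == 2))   # -> 4 (Asn, Gln, Lys, Trp)
```

Selecting against arginine and histidine can thus shave a protein's nitrogen budget at the margin, but no protein built from amino acids can get below one nitrogen per residue, which is the point of the comment.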
Similarly, phosphorus is part of the backbone of DNA and RNA. You can reduce some uses of it, but you can't reduce this use much unless you find an alternative to DNA.
Well! Looks like I need to amend my comment - there *are* lifeforms that have evolved to not need phosphorus: http://www.nature.com/news/2010/101202/full/news.2010.645.html
> But Felisa Wolfe-Simon, a geomicrobiologist and NASA Astrobiology Research Fellow based at the US Geological Survey in Menlo Park, California, and her colleagues report online today in Science1 that a member of the Halomonadaceae family of proteobacteria can use arsenic in place of phosphorus. The finding implies that "you can potentially cross phosphorus off the list of elements required for life", says David Valentine, a geomicrobiologist at the University of California, Santa Barbara.
> The team used two different mass-spectrometry techniques to confirm that the bacterium's DNA contained arsenic, implying — although not directly proving —that the element had taken on phosphate's role in holding together the DNA backbone. Analysis with laser-like X-rays from a synchrotron particle accelerator indicated that this arsenic took the form of arsenate, and made bonds with carbon and oxygen in much the same way as phosphate.
Thanks gwern, that is a very interesting result and quite pertinent to what we are discussing. From what I can tell from skimming the paper and the commentary, the result strongly suggests arsenic is being used in place of phosphorus to _some_ extent in _some_ molecules. It's not clear how many different kinds of molecules such as ATP, DNA, RNA, phospholipid, etc. normally constructed out of phosphate are involved. So they haven't proven that phosphorus in at least some necessary molecules is irreplaceable. Much more research is needed. Nevertheless the result is very exciting.
It is otherwise a very dramatic story as well. Involving poisons and "aliens" oh my, so it has the world abuzz.
The basic result of their experiment is that a particular species of Halomonadaceae bacteria grew 60% as fast on arsenate as on phosphate but didn't grow at all when deprived of both, suggesting that arsenate can, in this very arsenic-adapted bacterium, substitute for phosphate, but only at 60% efficiency. The reason arsenic is a deadly poison for the rest of us is that when arsenate substitutes for phosphate in us it degrades the molecular functions of e.g. ATP to such an extent that we soon die as a result.
If life originated where arsenates were more common than phosphates, this might change the basic ATP-centric theory of origins to an ATA (adenosine tri-arsenate) theory. However, there are chemical reasons to doubt this (water tends to break down arsenates far faster than phosphates), so more likely life originated with phosphorus and has adapted to _partial_ (but not preferred) use of phosphorus in arsenic-heavy environments like Mono Lake.
Contrary to the "astrobiology" hype in this story, arsenic substitution doesn't greatly increase the probable frequency of life in the universe, since arsenic is even less abundant than phosphorus.
Errata (sorry for the sloppy off-the-cuff writing):
Last sentence in 2nd-to-last paragraph in my comment above should read "...partial (but not preferred) use of arsenate in arsenic-heavy environments..."
In first paragraph, "So they haven't proven that phosphorus in at least some necessary molecules is irreplaceable" should read "So they haven't proven that phosphorus in all necessary molecules [that contain it] is replaceable".
I should add parenthetically, for readers who have forgotten their periodic table, that arsenic is just below phosphorus on the table and thus functions in much the same, but not identical, way. For example it forms arsenates that behave similarly but not identically to phosphates. They can generally be taken up into biomolecules normally based on phosphates, but the resulting functions of those molecules are normally different enough to degrade the organism to death -- the bacteria tested in this study being an exciting exception, probably due to currently unknown chemistry in the cell modifying the pertinent biochemical reactions.
> So they haven't proven that phosphorus in at least some necessary molecules is irreplaceable.
The media descriptions seem to say that the microbes grew even in an all-arsenate and zero-phosphorus environment (just not nearly as well). I suppose they could be working off residual phosphorus left over from their ancestors...
> Contrary to the "astrobiology" hype in this story, arsenic substitution doesn't greatly increase the probable frequency of life in the universe, since arsenic is even less abundant than phosphorus.
But it *does* increase the probability. Basic probability theory: p(A v B) > p(A) assuming B isn't something pathological like a contradiction. Arsenic theories clearly aren't pathological, so the probability is increased by discovering an alternate pathway.
I haven't looked specifically at the numbers on how much the overall biomass and number of cells increased, so I don't know how significant phosphorus recycled from ancestors was, or the small amount of residual phosphorus contaminating the "arsenate only" nutrient solution. Presumably, however, the fact that the zero-arsenate-and-zero-phosphate solution brought growth to a halt controls for the recycling and residual contamination. These kinds of things will have to be checked by independent scientists repeating the experiment.
If the experiment had a flaw in its controls, as argued by skeptic Steven Benner, it's possible the organism adapts by just using far less phosphate and by isolating the arsenic in the large vacuoles that appeared when fed only arsenate. The As:P ratio changed over the course of the experiment from 1:500 to 7:1. This might just reflect a vast amount of arsenate segregated in the vacuoles.
However, some arsenic was also detected by spectroscopy of the resulting DNA. And I tend to agree the difference between the arsenate-only growth of 60% and the no-phosphate/no-arsenate growth of zero strongly suggests the arsenate is being used functionally in place of phosphate. Just that the evidence is indirect and of course needs to be independently repeated, and we can't yet conclude which specific molecules were involved.
Phosphorus is about a thousand times more abundant than arsenic in the universe, strongly suggesting that P(A or B) is only very slightly higher than P(A).
Another interesting possibility, BTW, is that the experiment itself actually _bred_ an arsenate-using bacteria, and the unmutated bacteria in the wild doesn't actually use much arsenate -- it just tolerates some contamination of phosphate-based molecules with arsenate while generally using the more abundant and functional phosphate. Since Mono Lake is still more abundant in phosphate than arsenate (just much less so than normal environments), this would make sense and we'd have to discover a very rare (at least) environment where arsenate is actually much more abundant than phosphate to see a bacteria that actually evolved to be dominated by arsenate rather than phosphate in the wild.
One could also sequence the genomes of the wild and the post-experiment strains and see if there have been relevant mutations, to see whether the experiment included breeding a merely arsenate-tolerant bug to be (presumably) functionally dominated by arsenate.
Even if Wolfe-Simon bred the arsenate-dominated strain in the process of the experiment, the fact that such an organism is even possible to breed is just about as important as if it actually can act arsenate-only with the genetic code it has in the wild. The main discovery if it stands is that arsenate-dominated function is possible, and whether one has to breed it or whether it occurs completely in nature is a secondary issue. Obviously if the experiment can be repeated achieving this organism doesn't require major genetic manipulation, just at most mutation(s) probable in lab-sized populations and thus probable in nature if there actually exist any arsenate-dominated natural environments.