Earth, Mars, Venus (of old), and Exoplanets
— an online book to bring a new synthesis to astrobiology…
and a tie-in of many great stories that science has to tell us
Vince Gutschick, 2020-21 – author information at the end
This is a freely available book, simply as a webpage with links. I appreciate your sharing it with anyone who may have an interest in the topics or whose interest just may be piqued by it. Please share any comments via the contact form.
It’s still a work in progress. Some additional figures and images are worth putting in shortly… and your comments may move me to make changes.
My presentation is multi-level. It’s aimed both at readers with scientific or technical backgrounds and at readers with less background who can gain from the more step-by-step presentation. I draw on biology, physics, chemistry, geology, engineering, astronomy, and a smattering of social science.
The core topic is what makes a planet habitable, in any of three senses, with a view toward better informing the search for habitable planets. More so, all the digging into the welter of conditions for habitability lets us appreciate Earth’s habitability, unique within an enormous space, as well as ways to keep Earth habitable. Big changes are coming.
The Table of Contents expands here; the span of topics is rather unusual. An index is unnecessary, given the searchability of the webpage. There are over 100 internal and external links, with 56 appendices, sidebars, and supplements whose topics and links show up with a click on another Show More link. Selected summaries pop up under the search term Our luck in the Universe.
Let’s talk about everything that a planet must have to support life – simple life, complex life, our lives. How does everything have to work – the planet’s location, composition, structure, geological processes, even its complement of life? What are the deal-makers and the deal-breakers and the simply very interesting? This certainly will illuminate what makes Earth special and may inform how we can avoid breaking it.
Astrobiology is replete with ideas and bits of real data about the possibility that life exists on other worlds, and the possibility that we might detect it, even the possibility that we might cohabit other worlds with such life (or even without other life, as likely on Mars!). Astrobiology remains fragmented in its basis, not yet pulling together the vast array of human knowledge in many, diverse sciences. I offer ideas here for a more comprehensive foundation. In the end I offer that inquiries into what life may be like elsewhere… if it exists… can tell us a great deal about our own Earthly life… which is the only life we may ever experience firsthand in an enormous Universe deployed over insuperable distances.
I offer this book if you’re curious about other worlds where life may thrive or hang on or expand. I provide it for you to make your own judgment about the proposals, hypotheses, and claims for life on other worlds. If you are a research scientist perhaps looking for additional context, I offer syntheses from physics to chemistry to biology to geology to a bit of social science. If you are a curious and critical thinker outside of science, I present concepts at several different levels, any of which may suit you. To that end and to keep the flow of ideas smooth, I’ve put many intriguing and/or deeper concepts in Appendices and sidebars. On the web they load on demand.
“The Universe is not only queerer than we imagine—it is queerer than we can imagine.”
— Biologist J. B. S. Haldane, 1928
… and that’s still true, even after we imagined and (sort of) assimilated the mind-boggling ideas of black holes, quantum entanglement, ice volcanoes on moons, weird organisms of the past like Hallucigenia (above, in reconstruction), and more. Now we set ourselves to imagine life on other worlds. We have to avoid the simple bias of using our one sample of life on Earth to project how life looks, acts, and is set to exist elsewhere. At the same time, we have to use our huge store of knowledge of the principles of physics, chemistry, biology, geology, and cosmology to figure out the possible combinations. These principles appear to work universally, as far as we’ve tested them. I try to strike that balance here.
Hallucigenia sparsa reconstruction
Dreams of worlds, dreams we can generate
The worlds beyond Earth are exotic, even beyond exotic. One need only see images taken from spacecraft that we, imaginative and clever humans, have sent a billion times farther than our ancestors could conceive. What lies on the planets, what kinds of stars beam in the stellar systems that we now find abundantly, if remotely, with telescopes of stunning capabilities and with creative minds that can assemble elaborate measurements to reconstruct alien stars and alien worlds around them? Might life like ours, or very unlike ours, be found in our galaxy and beyond? If so, has it evolved to any sentient beings? We now can imagine what might be needed for a planet – an exoplanet – to support life. We do have a fund of knowledge beyond what any one of us yet knows about all the pieces – the chemistry of biochemical reactions, the lives of the stars, short or long, the universal workings of quantum mechanics and of physical processes in geology, and far more. We can work to put it together to explore, at a distance, the habitability of a planet.
Artist’s conception: European Southern Observatory, M. Kornmesser
Dreams of many people now
The habitability of a planet – ours, Mars, exoplanets in other stellar systems – is a focus or even obsession of many people. As of this writing (2020), billionaires Jeff Bezos, Elon Musk, and Richard Branson all want to travel to Mars or even live there, despite its nearly total lack of “amenities” for life such as us – oxygen in the atmosphere, open water, moderate temperatures, or even a nice sky (rust is not a color we’ve learned to love). Governmental entities in (post-?)industrial nations send probes to Mars to detect signs of life, be it only bacterial, and support research to detect exoplanets and even characterize surface conditions on some of them. Nongovernmental groups, notably SETI, support searches for communication from intelligent extraterrestrial life.
An almost comparable effort, in funds and in workers’ time, goes to assessing the future habitability of our own planet, Earth, under the impacts of climate change, land-use change, loss of biological diversity, and diverse, globalized manners of pollution – disposed-of plastics, estrogen mimics in waterways, mine tailings and spills, and more. Perhaps the most productive use of studies of other planets is the insight they yield on Earth’s future habitability, with all its constraints. After all, life barely made it on Earth through a number of mass extinctions. “Civilization exists by geological consent,” said Will Durant.
For the moment, we have a demonstrably habitable planet, on the land surface and in the ocean, apparently even in some deep rocks. Some places are “less habitable” than others, meaning they support a lower density of life or (perhaps transiently) a zero density. Think deserts, the iron-poor southern Pacific Ocean, active volcanoes. Still, even we humans with our special needs infest “the whole habitable Earth and Canada,” as Ambrose Bierce quipped in his Devil’s Dictionary. That’s remarkable among all planets that we know, near and far. (“Infest” is an apt choice. As our human population has exploded over the last several millennia, the ratio of wild animal biomass to human-managed livestock biomass has dropped from perhaps 30:1 to only 1:25.)
Why I’m writing this
Actually, one big reason that I wrote this is that it’s fun! It certainly is for me, to put many ideas together and in context, and to tell the great story of Earth and beyond, showing how tiny samples of rock put into an isotope mass spectrometer fit in with the mergers of massive neutron stars, detected with huge and incredibly sensitive laser gravity interferometers. Science has many stories to tell – fascinating, sobering, inspiring, even all three. At our son’s graduation from Caltech in 2008, commencement speaker Robert Krulwich gave the students the lifelong task of telling the great stories that science has to tell. Science is misunderstood and under attack, and simply asserting science as a bastion of truth or understanding serves little purpose. Our human development is based on narrative; people love stories; stories have power.
The stories here are often quantitative, with a mathematical core. I do delve more deeply into physics and attendant math – through calculus, series, functions, and such – in a number of sidebars and appendices that provide stronger “hooks” to the issues. The story as a whole can be read while glossing over much of the math, but the math is there for the taking when you may wish. Math cuts two ways. Galileo Galilei wrote his Discourses in vernacular Italian, with only a bit of math coming in late, so that he could reach the public. He also wrote (in translation) that “the logic of the universe is written in the language of mathematics.” Both sides are necessary, story/metaphor and math; I’ve included both, for diverse groups of readers. I hope that you, dear reader, may wish to comment here or on the upcoming website, science-essays.com. I thank you in advance for your thoughts.
This text is intended to show the strong limits on the habitability of any planet, including our own; astrobiology, the theoretical study of where life might exist other than on Earth, has a great number of untenable ideas that I comment upon. Our planet is in a remarkable set of circumstances, in many senses. First, cosmologically: we have heavier elements such as zinc, iodine, and selenium to make all the biochemicals in our bodies, without any supernova so late and so near as to endanger us while making those elements. It’s astronomically favorable, in having a benign and long-lived Sun with radiation peaking in the range of energies that can drive biochemical reactions but not toast them. It’s geologically favorable, in having a planet with tectonic forces to create dry land (dolphins may be smart, but they don’t have cities or cell phones, for what those are worth), a planet that has sorted its elements so they’re nicely available to us, and one with water delivered by asteroid impacts before we had to live through any – and much more. I also point out the fragility of life even on Earth, with mass extinctions that our ancestors barely survived; those extinctions are built into the tectonics and mineral cycles of the Earth, which we are changing. I address the hope that we can colonize Mars, with an extensive discussion of the extreme difficulty. The conclusion is readily drawn: fix our treatment of the Earth; there really is no plan(et) B.
I developed this presentation without recourse to existing books and similar resources on astrobiology, in order to give an unbiased (I hope) and independent synthesis that may inform the search for life on other bodies, the practicality and value of colonizing Mars, and the insights offered by all such research for delineating how we may preserve the habitability of Earth.
There are lots of ways to ask the question of habitability. Is Mars habitable? Do we mean habitable by any of its own life forms (bacteria), if they exist? Or do we mean that exogenous life forms – specifically, humans – could potentially live there, with much artifice – oodles of technology for life support, but eventually self-sustaining? NASA, the ESA, and the space agencies of China and India are checking out the first prospect or will do so soon. Dreamers, including the wealthy and very tech-savvy Elon Musk, propose the latter.
SpaceX Starship model. Daily Sabah
Back to the first definition, that a planet or place on a planet has endogenous life forms: what forms do we want to find? Bacteria might count – elating some evolutionary biologists but making the average person just sigh. We might also seek evidence of more-evolved life forms that are multicellular and whose cells are differentiated for diverse functions – that is, not just colonies of similar cells. If we’re talking about animals, we call these metazoans. There’s no analogous one-word term for plants (and their kin that we don’t really call plants), but we would not count strings of cyanobacteria, even if some end cells are specialized to fix atmospheric nitrogen into usable ammonia.
While we’re at it, let’s be open to the idea that the categories of life on another planet are unlikely to fall into our Earthly categories – plant, animal, fungus, bacterium, protist, or the much more accurate and informative distinctions of Bacteria, Archaea, and Eukaryota (having cells with a membrane-bound nucleus) and then the elaboration into all the various clades that evolved (organisms that share a common ancestor). That new cladistic view from genetics, plus some fossils, gave us some illuminating new views – birds are the surviving dinosaurs; fungi are more closely related to us animals than they are to plants (a reason that our fungal infections are harder to kill off without harming us than are bacterial infections); our own species, Homo sapiens sapiens, once shared the planet with other members of the genus Homo that interbred with us – the tree of life is less a simple tree than an example of cross-linked pipes. On a planet that evolved its own set of complex life forms, is “animal” a valid category, describing organisms that don’t do the primary capture of solar (really, call it stellar) energy but eat others that do and have some power of locomotion to distinguish them from, say, fungi? What if some life forms there do both photosynthesis and eating? Hmm, sounds like our own Euglena, but what if they were big and multicellular? (I can argue that this is highly unlikely, but this isn’t the place to elaborate. I did so in my interview that got me my job at Los Alamos!) The tree of life on any planet may be very complex.
Tree of Life: Zmescience.com
Habitability “1.0,” raw habitability, is the ability of a place (a planet, part of a planet) to support life of at least one form, even just the simplest. Life has to have the properties of self-replication, growth, and maintenance of function; some persistent physicochemical processes don’t count – e.g., the atmospheric cloud eddies over oceanic islands that can spawn one another and persist or recur. In any case, we recognize life when we see it (well, if we look hard enough – some slow-growers don’t catch the eye; it’s as US Supreme Court Justice Potter Stewart said, a bit less than helpfully, about pornography: [I can’t define it unequivocally] but I know it when I see it). Even this simple level of habitability requires a concurrence of many conditions of the stellar system (star size, planetary orbit and axial tilt, planetary mass, delivery of water), the astronomical history (making heavy elements with a supernova or neutron-star merger safely distant in time), geology (persistence of plate tectonics and magnetic field), chemistry (the right greenhouse effect), and more.
Habitability 2.0 is for complex life forms that depend on each other, creating complex ecosystem interactions – with multiple trophic levels (producers, consumers on several tiers) and functional diversification of life forms (such as 250,000 different plants on Earth). That is, it’s not supporting a range of just bacteria or similar microbes, such as Earth had for about 3 billion years. Diversity may offer resilience to the astronomical and geological disturbances that are bound to come – asteroid impacts or episodes of extreme volcanism. Speculating on what kinds of complex life might evolve is impossible now, and likely will remain so well into the future. Still, examining our home planet’s complex life and what preserves it over giga-years serves the purpose of instructing us on maintaining it. It also informs us about the challenges of going to habitability 3.0 on, say, Mars.
Habitability 3.0 is the ability to support us, even allowing for all our technological contrivances of mining, power generation, air conditioning, chemical transformations, and such. We evolved on a planet (or moon) like no other we’ve ever seen or visited, so we have special needs – water, oxygen, equable temperatures, etc. Toward the end of this book I use the example of Mars, the only world other than Earth that is ever likely to be considered. I provide details of some technological workarounds for the numerous missing resources on Mars (e.g., water, O2, equable temperatures, soil and nutrients for growing our food) and for the presence of what we might generically call “nasties” (high UV or cosmic-ray fluxes…).
Habitability has at least three dimensions: space, time, and type of life form. Re space: not all of a planet is habitable, at least not for all kinds of life. In fact, spatial patterns of habitability are very fine-grained and can be of intricate topology. Consider a small field extending to soil depths and to the air space above. A small part of the soil is habitable by microbe X, another part (perhaps with some overlap) by microbe Y, and no microbe actually inhabits the air (see Denny, Air and Water, for an interesting discussion). Passing through, as by microbes blown across an area, does not count as inhabiting a location; the life form must be able to sustain its population with processes of growth and of reproduction to replace lost members. The parts that are habitable for any life form tend to be many separate spatial domains that may have exquisitely complicated connectivity. Imagine that we could do computed tomography through soil to measure (a wild idea for sensors) where life form X might live; the image would have sheaths, bubbles, tori, and other patterns. Right, not inside that buried rock, so warp the habitable geometry around that rock but not in it; not in those clay lenses; not in that anaerobic pocket; and so on. We humans inhabit surface areas fleetingly at each place. The very same geographic location may have been uninhabitable by any chosen life form at various times, certainly on geologic scales (e.g., Ice Ages). Even much of Venus, in a coarse-grained view, was likely habitable until as recently as 700 million years ago. So, to declare an area habitable we need to specify the place, with its quasi-fractal geometry, the time, and the life form.
We might be accommodating and state that a larger place is habitable for life form Z over a short or long time interval, provided that the life form can migrate around to access the specific habitable places that blink in and out during the time interval.
Sketch of Snowball Earth. Neethis, Wikimedia Commons
There is another obvious qualifier – really, more of an expansion of the definition of habitability for a given life form. A location may be potentially habitable by a given life form but not inhabited by it at a given instant or time interval. Ecologists distinguish the potential niche that could be filled by a life form from the realized niche where the form currently resides. New Mexico State University astronomer Jason Jackiewicz noted, “If someone on another planet visited Earth 700 Myr ago, when it was covered in ice, s/he would have kept on going.” Habitability is not continuously apparent. He/she/it/?? could have concluded that Earth is not habitable – not even “mostly harmless,” as in the devilishly humorous Hitchhiker’s Guide to the Galaxy. The thread of habitability was nearly broken then, and at several mass extinctions that offer enormous insight about our planet and others, as I bring in later. Geological and chemical relics of now-severed habitability appear on Venus, in plausible models. Our own Earth apparently froze over, or nearly so, several times.
Co-dependence: There is a further, biotic dimension of habitability: the ability of life form A to maintain a population at a location and time may depend upon the presence OR absence of (many) other life forms. A large-scale case is the elimination of most anaerobic bacteria in the surface water of the ocean when cyanobacteria evolved oxygenic photosynthesis. For the bacteria called obligate anaerobes, oxygen at quite small concentrations fully inhibits growth or is even lethal. Conversely, some life forms require other life forms to exist. Animals on Earth require plants as food and microbes as sources of vitamin B12. Fungi require plants or animals as “food” sources. Most flowering plants require pollinators rather than wind.
In the common view, all we may ask to declare a planet or a place to be habitable is that significant parts of it have life forms that we’re willing to count. With that proviso, we can then look at the physical, chemical, geological, and astronomical conditions that might support life. Delving into terms used in mathematical physics, we need to attend to the boundary conditions in time and space: What is the lifetime of the star that is “useful” to life, within bounds of output that warm a planet without unsettling outbursts? What defines a “friendly” neighborhood, with the Goldilocks number of local asteroids that deliver water but become infrequent destroyers, and with a decided lack of new supernovae, gamma-ray bursts, and such? Remember that all planets start out in a stellar nebula with asteroids and planetesimals as often-threatening neighbors. Most stars appear to be in binaries, with great consequences for illuminating a planet and keeping its orbit stable.
A note about “we:” I use that ubiquitous pronoun with two meanings, each fairly clear in context. By “we” I often mean you, the reader, and I. Other times it’s clear that I intend to denote the whole scientific community, of which I’m a small part; to avoid nosism, I use “I” when it’s my personal interpretation of a topic. Science is a worldwide, over-all-time effort to understand the world under the premise that its structures and activities all arise from universal laws of physics, which determine chemistry, and both of which determine biology, with much chance mixed in at junctures (the lucky coalescence of conditions for the first living cells, for example) – chance that we can describe in numerous cases to an exquisite degree; look at hugely successful statistical mechanics. So, we scientists all work to share our understandings from pieces of the puzzle. “Art is I, science is we,” it was once said. Science and scientists have stories to tell. We need to tell them, as Robert Krulwich implored all the graduates at Caltech’s commencement in 2008; Lou Ellen’s and my son, David, was one of those graduates. This book is my bit, my set of stories from a career that happily crossed a range of sciences.
The history of studies of habitability is long but not too long. We now know that in our Solar System only Earth is habitable by visible life. We’re still speculating about other places. Was Mars once habitable, if only by bacteria? We’ll get to that topic. Ditto for Venus, perhaps with more evolved life until about 700 million years ago (no Perelandra now). We humans have been speculating about extraterrestrial life for ages, starting from very limited bases. Let’s put aside short fables of the ancients about life on the Moon or celestial objects not even known to be planets in ancient times. Some notable stabs at evidence, on plausible or implausible bases, include Percival Lowell’s look at Mars with a decent telescope that could image what he took to be canals. Evidence of Earth’s long-term habitability itself is only recent. Fossils of multicellular life – mastodons, dinosaurs, Eryops, name them – were initially interpreted as creation, or as evolution in an environment largely similar to the current environment (clearly not so). No one asked what created and maintained those conditions such as breathable air and available water. Over a couple of centuries we humans discovered life that was yet more different, going back to the cryptic bacteria of billions of years ago. There are some credible bacterial fossils, but even better evidence comes from chemical traces that had to come from bacterial life, such as the steranes. By now, we have a wealth of information about who was here through 3.8 billion of the 4.6 billion years of Earth’s history and what environment they faced, in temperatures and in the chemical milieu.
Back to speculation: At a purely statistical level in 1961, Frank Drake posited his famous equation for the likely number of planets in our galaxy harboring intelligent life. It’s the product of a cascade of probabilities:
Average rate of star formation in our galaxy,
Multiplied by the fraction of stars having planets,
Multiplied by the average number of planets per star that can support life,
…Multiplied by several more factors
The third factor is largely my focus. Given a planet, is it habitable? There are many, many physical, chemical, geological, orbital, and astronomical factors that, in truth, mostly prohibit habitability. Vaclav Smil addressed the topic in his 2002 book, Earth’s Biosphere: evolution, dynamics, and change (MIT Press). Many others have addressed additional factors required for life to communicate over vast interstellar or intergalactic distances. At the end, I offer a few comments on the topic.
Multiplied by the enormous number of planets, perhaps 100 billion in just our Milky Way galaxy, the probability of life somewhere is again very high. However, living planets are likely to be mind-bogglingly far apart. The closest Earth-like planet found to date is 1,400 light-years away, making for a pretty dull conversation of 2,800-year exchanges of dialogue, even if that almost-Earth supports intelligent life. More likely, the nearest planet with intelligent life is much farther away. We’re extremely unlikely to be contacted, which is something to be grateful for, given the history on Earth of contacts between cultures of very different levels of technology (it ends badly for the culture with the lower technology). Nonetheless, it is exhilarating to look in detail at what it took to make life on Earth, the confluence of many events of low probability.
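The cascade of factors above multiplies out simply. Here is a minimal sketch in Python; every factor value is a placeholder chosen only for illustration, not a measured or endorsed quantity.

```python
# Illustrative Drake-equation product. All factor values below are
# hypothetical placeholders, not measured quantities.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* · fp · ne · fl · fi · fc · L, the expected number of
    communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# One hypothetical set of inputs:
N = drake(
    r_star=1.5,      # new stars formed per year in the galaxy
    f_p=0.9,         # fraction of stars with planets
    n_e=0.5,         # life-capable planets per star with planets
    f_l=0.1,         # fraction of those that develop life
    f_i=0.01,        # fraction of those that develop intelligence
    f_c=0.1,         # fraction that broadcast detectably
    lifetime=10_000, # years a communicating civilization lasts
)
print(f"N = {N}")
```

With these made-up inputs the product is below one civilization, which illustrates the book’s point: small probabilities compound quickly, and everything hinges on the factors we know least.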
Taking it to a conclusion that many people wish for: are any of the planets other than Earth likely prospects for visiting on foot (clad in space boots) or even colonizing? What might it take to get there safely in due time? Given that transporting people and matter is very costly in money and energy, how can a self-sustaining ecosystem be set up using local materials and energy resources? Our physiology and psychology impose a lot of constraints. If a colony is to perpetuate itself, reproduction is needed but faces real challenges from the genetics, particularly from inbreeding in a small population. We humans may have barely made it through some climatic drama; we’re still not very genetically diverse for a population of our massive size.
We now have much more material for our speculations – and for the insight they may afford us for our prospects here on good old Earth. Astronomers and astrobiologists have been spotting many exoplanets (outside our Solar System) with some remarkable similarities to the Earth. However, as Yogi Berra’s son once said, when asked about how he resembled his father, “Our similarities are very different.” Please read on.
Before we look at the amazing set of conditions for the habitability of a planet, we may look in on the search for potentially habitable planets. This has become a passion among astronomers and among people who call their discipline astrobiology. More than a century back, the possibility of life on Mars was proposed enthusiastically by Percival Lowell, and others took the cue. Recent unmanned missions to Mars have searched for life, at least as microbes. No luck, so far. A flashback in time provides evidence that Venus once could have supported life. More recently, the search moved farther out, to planets of distant stellar systems. Astronomers have had great success in detecting these planets, particularly with the Kepler telescope and its successors. Among a number of methods of detection (we’ll get to the range of methods near the end), they may use the tiny fraction of light loss as the planet partially occults the central star. They can infer the orbits and surface temperatures of these exoplanets and may ultimately measure the constituents of the atmosphere on some of them. Both astronomers and astrobiologists then focus on a habitable zone for a planet, defined by stellar and orbital parameters that should maintain the planet at a temperature constantly in the range amenable to life. Does an equable temperature range alone make a planet habitable? No one claims that, as they cite the need for a medium such as liquid water (or, as I argue later, implausibly, ammonia). There are some novel ways of going deeper into exoplanet surface conditions now. With luck, we can detect the absorption lines of “nice” chemical compounds in the atmosphere of a planet; those of promise include oxygen (likely very rare to be found), carbon dioxide, and methane (made geochemically, even on Earth, but more so by living microbes). The list of candidate planets remains short, currently at zero in my take.
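That “tiny fraction of light loss” during a transit can be estimated from geometry alone: the dip is roughly the ratio of the planet’s disk area to the star’s, (Rp/Rs)². A quick sketch, using rounded reference radii:

```python
# Transit depth: the fractional dip in starlight as a planet crosses
# its star's disk, approximately (R_planet / R_star)^2.
# Radii are rounded reference values in kilometers.

R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0
R_JUPITER_KM = 69_911.0

def transit_depth(r_planet_km, r_star_km):
    """Fraction of the star's light blocked at mid-transit (geometric)."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth across the Sun:   {transit_depth(R_EARTH_KM, R_SUN_KM):.2e}")
print(f"Jupiter across the Sun: {transit_depth(R_JUPITER_KM, R_SUN_KM):.2e}")
```

An Earth-sized planet dims a Sun-like star by only about one part in 12,000, while a Jupiter-sized one dims it by about one percent; that contrast is why the smallest, most Earth-like worlds demanded space telescopes like Kepler.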
Closer to home, astrobiologists wish to get more information about Europa, a moon of Jupiter, Enceladus, a moon of Saturn, and several others. The medium of life, water, might be liquid on these bodies, with flexing of rock by the tidal pull of the giant neighbor raising temperatures above the very low values set purely by radiative balance. Flexing or volcanic heating on various bodies could also be a source of thermal energy, which by itself is nearly useless as metabolic energy. Energy has to flow from a ‘source’ to a ‘sink’ to do useful work, particularly to run chemical reactions in the metabolism of an organism, enabling growth, maintenance, and mobility. Energy that doesn’t flow just sits there, such as warm water over equally warm water – no overturning, no other motion. On Earth, radiant energy flowing from the Sun to photosynthetic reaction centers in green plants, algae, etc. provides this energy for life. Not all energy flows can do significant work, however; only the fraction termed free energy can do work. Sadi Carnot kick-started our knowledge of free energy in 1824 (in appreciation, the French named a street for him in the city of Aigues Mortes; good on ya’, France). There’s a lot of free energy flow in sunlight reaching photosynthetic organisms on Earth. Not so for low-temperature heat flows in flexing moons. There are ways to capture a ludicrously tiny fraction of this ‘geothermal’ (selenothermal?) energy, such as with a Stirling engine, but organisms don’t seem to be able to make one.
There’s much more to the story when we consider other factors of biology, chemistry, and physics. Astrobiologists know that certain chemical compounds in a planet’s atmosphere can signal the presence of life, but presence isn’t enough. Chemical elements have to be recycled from losses to erosion to keep organisms supplied – tectonic activity, with all its hazards of volcanoes and earthquakes, may be a necessity. For building blocks of bodies, carbon appears necessary, in chemically reduced form (sorry, Mars, with all your carbon oxidized as carbon dioxide). To run critical electron-transport reactions, the transition metals appear equally necessary – kept at the surface, not all sequestered in the planet’s core. Also, any atmosphere has a greenhouse effect, but the warming can go terribly wrong in a short time. Earth almost lost its life when early cyanobacterial life liberated oxygen that oxidized methane to carbon dioxide, almost freezing the Earth solid – more on that, later. Venus had the opposite experience, baking away any chance for life, though not through the fault of any life on it.
Mostly, Devil’s Advocate
Spoiler alert? I conclude that life is extremely unlikely to be found; it may exist, but at unimaginably great distances from us. My synthesis is offered in the hope of raising the bar in the discussion. Track my arguments if you wish and find the loopholes – the chase is fun. Nonetheless, I offer that the probability of life in the rest of the Universe is very high. The probability that it is very, very far from us is also very high. Reasonable estimates of the probability of any planet being habitable, or, more so, of harboring life, and even more so, of harboring intelligent life, are in the realm of very small numbers. Actually, what I find compelling is not the prospect of life elsewhere. It is the understanding that the conditions for life here on Earth were so exquisitely unlikely, and that we can use that understanding to help preserve those conditions against what are mostly our own activities.
For the full story, we have a lot of topics to delve into:
- The excitement a few years ago over Proxima Centauri b, a possibly warm planet
- Other exoplanets, now found in the tens of thousands with virtuoso use of telescopes on Earth and in orbit. Many may be warm enough. How many meet many other conditions for life?
- SETI, the search for extraterrestrial intelligence – upping the criteria from microbes to something like us
- The unique chemistry of life: water, carbon, and a bunch of elements present by luck
- Nucleosynthesis, or how those elements got here while sparing us the cataclysmic events needed to make them – the help from supernovae or neutron-star mergers, but none recent!
- The right orbit, the right star, the Moon’s help, Jupiter’s help
- The right number of impacts: being on a rocky planet while asteroids from the icy world deliver our water but don’t do the Chicxulub extinction number on us too frequently
- Volcanoes, mountain building, earthquakes, and other accompaniments to the big plus side of plate tectonics in making dry land, renewing soils, etc.
- How Venus may have remained habitable until perhaps 2/3 of the way toward the present
- The greenhouse effect and its several catastrophic excursions that almost did life in
- Photosynthesis using visible and near-visible radiation is a sine qua non; heat provides no energy for metabolism
- Plants: We need them but what have they done to the Earth in the past?
- Terrestrial life evolving fast enough to get to sentient life before the Sun cooks us – a shout out to Elon Musk
- OK, planet X is habitable; can we get there? Would we relish it?
- The rocket equation: why it’s so hard to travel fast and far
- In the words of Will Durant, civilization exists (only) by geological consent, subject to change without notice: Earth’s tenuous habitability with its mass extinctions
- … and more
Getting the temperature right. Simply, life on Earth needs temperatures that keep water liquid but not too hot, at least part of the time. Certainly, there are unfavorable excursions. Siberian air temperatures reach -60°C; soils in hot deserts reach 70-plus °C; Grand Prismatic Spring in Yellowstone National Park hits 89°C at its center that’s colonized by colorful bacteria. Deep in the ocean at the spreading ridges are black smokers, vents of mineral-laden water. Bacteria there support other organisms such as unusual clams and shrimp while tolerating temperatures of over 120°C… but temper that with the effects of enormous hydrostatic pressure that both keeps water from boiling and stabilizes all those critical biochemicals such as proteins against thermal denaturation. We then expect life to thrive – that is, both survive and be metabolically active – somewhere between -2°C in cold seawater and hot surface pools, such as at Yellowstone, at perhaps 90°C. (Those heat-tolerant organisms gave us a heat-stable DNA polymerase that enables so much of modern biology via the PCR method. It’s how we know, among other things, that we modern humans have some Neanderthal and Denisovan genes in us, as well as viral genes from way back. We’ve been genetically engineered by nature.)
These temperature ranges far exceed our human comfort zones. At the most extreme temperatures, let’s not expect life to look anything like us. Estimating the temperature regimes for organisms on distant planets is strongly limited by our lack of knowledge of what geological structures exist on them and what organisms might live there. Here on Earth organisms find or create local microenvironments where conditions may depart radically from the gross air or soil temperatures. Lizards and snakes bask in the sun or retreat to cool crevices. (There’s little such thermal differentiation in water; water conducts heat well, and heat coming from below drives convection that mixes it, averaging out most local environments.) We humans can use clothing to great effect, with some surprising strategies of deployment (see an Appendix, “Keeping cool with long sleeves and pants in the sun”). Plants transpire water, mostly to trade for CO2 for photosynthesis, but transpiration also cools their leaves. Subtleties can still escape us. Ectothermic (cold-blooded) reptiles thrived in the “Saurian sauna” of the Mesozoic era, when the surface temperature appears to have averaged about 35°C. There was precious little space for warm-blooded animals to shelter. On the other side, the cold side: recently it was found that modern-day moose overwinter badly in areas cleared of branches by past forest fires; the ground and air cool too much by thermal radiation to space when few intact branches block the optical path from ground to sky. In short, for planets other than Earth we can only make estimates of the gross environment – mean air or soil temperatures over large areas. Life forms, if any exist, have to find their own coping mechanisms to bring their internal temperatures into the survival zone.
Grand Prismatic Spring, Yellowstone NP
Outside the equable temperature limits for organisms to be active, some organisms can survive sometimes (note the double “some”) by going inactive, as by hibernating. Others create resistant forms such as spores. Humans make some striking acclimations. People live in, or at least visit to collect valuable salt, the Danakil Depression (above; brilliant-ethiopia.com), where air temperature routinely makes excursions to 50°C, way above our body’s core temperature of a nominal 37°C. We need water to evaporate from our lungs, noses, and mouths to lose heat in these conditions. The great story of coping with temperature extremes is written large in the literature of physiology. A good presentation with mechanistic understanding is in the book An Introduction to Environmental Biophysics, by G. S. Campbell and J. M. Norman.
In 2003 my colleague, Hormoz BassiriRad, and I put together a broad perspective on how organisms – plants, animals, microbes, … – withstand extreme episodes of adverse conditions of temperature, water status, and other environmental conditions. We went from the physiology of individuals to the ecological and evolutionary effects. We brought in the spectrum or the statistical distribution of adverse conditions; how often is often and for what degree of extremity? A similar perspective is worth taking on any study of life elsewhere; the environment is always changing, and surviving 99% of the time is not good enough.
With a diversity of physiological, behavioral, and developmental acclimations enabling organisms to survive and prosper in environments that may reach temperatures that are extreme in our view, it’s not possible to set hard-and-fast limits for the livable range, even for familiar Earthly life. That said, there are regimes of temperature in space and time that are real deal-breakers for life and many more regimes that seem to militate against abundant life and, especially, multicellular life and the subset of life that’s intelligent. Temperatures should not stay extreme for times longer than the slowest generation cycle of organisms.
Orbital physics enters here. À la the Drake equation, a planet can’t be too near its star, lest it be permanently hot, nor too far, lest it be permanently cold. That’s a given. It gets far more complex and interesting, as we can detail later. There are planetary orbits that endow a planet with a good mean temperature but make it too cold or too hot almost everywhere, or for too long a portion of that planet’s year. A planet tidally locked to face the star is very hot on that face and very cold on the opposite face. That leads to the atmosphere, if it exists at all, condensing out as ices on the cold side – at least those chemical components useful for life; skip argon, for example. Witness Mercury in its 3:2 spin-orbit synchrony with the Sun; the planet rotates to expose different areas to sunlight, but areas stay exposed to sunlight for 88 Earth days (a full Mercury year) at a time. Heat can’t be conducted through many km of solid rock to even things out.
There’s much, much more demanded of a planet for habitability – and demanded also of its star, the stars near it (violent cataclysms to provide heavy elements but only before the stellar system condenses into star + planets), its possible neighboring planets, and a whole lot about the planet itself – its chemical constituents, tectonic activity, plain old size (a rather narrow range to keep water but not massive amounts of hydrogen), water depth, and more. We’ll get to all of these in the process of exploring habitability.
We’ll need a benign, long-lived star, a planet at the right distance from it, a rocky planet yet with significant water, a fine tuning of chemical elements in air, water, and rock, and quite a bit more. Let’s start with the central star.
A star is a nuclear fusion reactor. Nuclear energy released is converted to tremendous kinetic energy of nuclei, to gamma radiation, and partly to the creation of the elusive neutrinos, with all but the neutrinos “degrading” to heat and the accompanying electromagnetic radiation (light and light’s kin, much as a hot iron bar emits light). Critical for the potential for life near a star are its energy output (temperature, size) and longevity. The star’s mass is its central attribute. Its mass determines its temperature, its lifetime, and the stability of its output against flares and worse. Stars around the mass of our Sun spend most of their lives on what astronomers call the Main Sequence, slowly changing in temperature and thus in power output as they age. The changes have had striking consequences for life on Earth, to be covered later. Stars provide nearby planets with electromagnetic radiation, distributed among types we call infrared (long wavelengths, low energy), visible light, and ultraviolet (UV). For stars of modest size, for much of their lives their output is strongly in the visible band, with modest UV. Hot stars are more problematic about UV. Even a 10% increase in stellar temperature boosts the UV to 18% of total energy, from 13%.
By the author
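That UV claim can be checked numerically by integrating the blackbody (Planck) spectrum, introduced below under “Life-giving radiation.” A minimal sketch, assuming a Sun-like surface of 5772 K and counting wavelengths below 400 nm as UV (both are my round assumptions):

```python
import math

# Physical constants (SI), rounded to four figures
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann's constant, J/K

def planck(lam, temp):
    """Blackbody spectral radiance B(lambda, T), W m^-3 sr^-1."""
    x = H * C / (lam * K * temp)
    if x > 700:          # radiance is effectively zero; avoid overflow
        return 0.0
    return (2.0 * H * C**2 / lam**5) / math.expm1(x)

def band_power(temp, lo, hi, n=20000):
    """Midpoint-rule integral of the Planck curve from lo to hi (meters)."""
    step = (hi - lo) / n
    return step * sum(planck(lo + (i + 0.5) * step, temp) for i in range(n))

def uv_fraction(temp):
    """Fraction of total emission at wavelengths below 400 nm."""
    total = band_power(temp, 10e-9, 100e-6)   # essentially the whole spectrum
    return band_power(temp, 10e-9, 400e-9) / total

f_sun = uv_fraction(5772.0)        # roughly an eighth of the Sun's output
f_hotter = uv_fraction(1.1 * 5772.0)
```

With these cutoffs the fractions come out near 12% and 17%, in line with the text’s rounded 13% and 18%; the exact values depend on where one draws the UV boundary.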
The useful measure of wavelength is micrometers (μm, millionths of a meter), and the similarly useful measure of temperature is absolute temperature, measured from absolute zero, in kelvin (symbol K; a span of 1 kelvin is also 1°C; the Kelvin and Celsius scales just start from different bases). The visible band is critical as an energy input for living organisms, as we will explore; it’s not just handy for current life, but arguably for any form of life, based on the energetics of chemical reactions involving the equally necessary carbon compounds (with carbon not just an accident of life on Earth but a fundamental property of chemistry). More massive stars, which are hotter, put out too much genetically damaging UV, even when a planet is at a distance providing the right stellar energy for an equable temperature. They also live too short a life for lengthy biological evolution – hope for bacteria to evolve before the star’s end, but not much more. Less massive stars can be comfy providers for life. They offer somewhat less visible light to a planet at a distance giving a good temperature. They’re long-livers, a positive attribute. The coolest, dimmest stars, the red dwarfs at the low-mass end of the Main Sequence, have several key problems that we’ll delve into – they flare disastrously, and a planet has to be so close that it’s locked with one side facing the star. Such was the end of hope for life on Proxima Centauri b, a planet around our nearest neighboring star, Proxima Centauri… which we know is small – how do we know that?
Life-giving radiation, part I
It’s an opportune moment to delve into the nature of electromagnetic radiation. Light and its heating equivalent were clear to the ancients. That light, and infrared, UV, radio waves, X-rays, and gamma rays are traveling waves of oscillating electrical and magnetic fields was not known until 1864. A number of physicists had been playing with large (macroscopic) items: magnets, wires, electrical currents, and their interactions. James Clerk Maxwell (left, as a young scientist) had formulated equations for all the electrical and magnetic phenomena… and realized that the mathematics allowed for a traveling wave as a solution of the combined equations. That solution had a predicted velocity that matched the estimates of the speed of light (which was first measured rather well in 1676 by Danish astronomer Ole Römer)! Voila!
This description also jibed with the finding in 1801 by Thomas Young that light is composed of waves, from the ability of light beams to interfere with each other (below), just as water waves and sound waves do when peaks and troughs meet and cancel each other.
Physicists then knew that electromagnetic waves are described by several quantities. Essential is their wavelength, λ (lambda) – such as the famous yellow D line of incandescent sodium vapor at 589 nanometers (0.589 micrometers, or about 1/100th the diameter of a human hair); their frequency, ν (nu); and their polarization (the plane in which the electric field rises and falls… though the motion can also trace an ellipse). Just as for material waves such as sound, frequency, wavelength, and velocity are related as c = λν.
For microwaves in our appliances, with a wavelength of a bit over 12 cm, the frequency is 2.45 billion hertz (cycles per second), or 2.45 GHz. For the sodium D line, ν is a remarkably high 509×10¹² Hz, or 509 terahertz (THz). There are other properties of light that become important in various contexts. Light has polarization, which is the orientation of its oscillating fields relative to its direction of motion. In groups of photons, the way we almost always see it, light also has a degree of coherence, or consistent timing among its fellow photons. This is important for lasers, as well as for clever ways to measure the sizes of stars.
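The relation between velocity, wavelength, and frequency is easy to put to work. A quick sketch (the 12.24 cm oven wavelength is my assumed round figure; the text says only “a bit over 12 cm”):

```python
C = 2.998e8  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m):
    """nu = c / lambda for an electromagnetic wave in vacuum."""
    return C / wavelength_m

f_microwave = frequency_hz(0.1224)   # ~2.45e9 Hz, i.e. 2.45 GHz
f_sodium_d = frequency_hz(589e-9)    # ~5.09e14 Hz, i.e. 509 THz
```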
Light and other EM radiation transports energy through space. There’s an input of energy to generate radiation, which comes from the jiggling (regular acceleration) of charged particles, such as the movement of electrons up and down in a radio antenna. Thermal motion of charged particles in a body generates radiation. A hot body such as a heated iron bar in a blacksmith’s shop can generate radiation of sufficiently high frequency / short wavelength to be readily visible, as red to even whitish light. It’s long been known that a body at a given temperature emits radiation in a broad spectrum. Bodies of complex structure generate a smooth spectrum following well-established equations. The spectral distribution across all frequencies is a function only of the body’s temperature. The ideal case is the black body, capable of emitting radiation at all frequencies. The term “black” refers to the inverse case – a black body is also capable of absorbing all frequencies, by a principle of symmetry. Our own skin, bodies of water, most Earthly surfaces other than polished metals, and the Sun itself act very closely like black bodies.
If you care for the details in math, the spectral distribution follows the Planck equation

B(λ) = (2hc²/λ⁵) / (e^(hc/λkT) − 1)
where B(λ)dλ is the increment in the total amount of radiation in a range of wavelengths between a given λ and an incremented wavelength λ+dλ; h is Planck’s constant and k is Boltzmann’s constant (more about both of these later), and c is the speed of light. Temperature here is the absolute temperature, measured from absolute zero, which is the starting level for energy measurements. The concept of absolute zero came from the study of gases. The unit is the kelvin (K). The formula here shows a maximum or peak at a wavelength that depends on temperature. For the Sun, the peak is at a wavelength of visible light, 500 nm. For a body at a room temperature of 20°C or 293 kelvin, it is about 9.9 micrometers. The shape of the distribution of energy among wavelengths was presented earlier graphically.
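A short numerical scan of the Planck formula locates those peaks directly; Wien’s displacement law, λmax = 2898 μm·K / T, gives ~502 nm for the Sun and ~9.9 μm at 293 K, and a brute-force grid search should agree. A minimal sketch:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # SI values, rounded

def planck(lam, temp):
    """Blackbody spectral radiance B(lambda, T)."""
    x = H * C / (lam * K * temp)
    if x > 700:              # avoid overflow where the radiance is ~0
        return 0.0
    return (2.0 * H * C**2 / lam**5) / math.expm1(x)

def peak_wavelength(temp):
    """Find the wavelength of maximum emission by a 1-nm grid scan."""
    return max((i * 1e-9 for i in range(50, 200001)),  # 50 nm .. 200 um
               key=lambda lam: planck(lam, temp))

peak_sun = peak_wavelength(5772.0)    # ~5.0e-7 m: green visible light
peak_room = peak_wavelength(293.0)    # ~9.9e-6 m: thermal infrared
```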
A much simpler equation also captures the total energy radiated by a black body, per unit time and per unit area:

I = σT⁴
Here, T is the absolute temperature, a concept that came from the study of gases (!), and σ is the universal Stefan-Boltzmann constant, independent of the nature of the body. It’s rather remarkable in stating that a doubling of temperature results in a 16-fold increase in radiation. Hot stars expend energy rapidly… and die fast.
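In code, the T⁴ behavior and the Sun’s surface flux look like this (σ and the ~5772 K solar surface temperature are standard round figures):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_flux(temp):
    """Total power radiated per unit area by a blackbody, I = sigma * T^4."""
    return SIGMA * temp**4

# Doubling the temperature multiplies the flux by 2^4 = 16
ratio = radiant_flux(600.0) / radiant_flux(300.0)

# The Sun's ~5772 K photosphere radiates ~63 MW from every square meter
solar_surface_flux = radiant_flux(5772.0)
```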
While the amount of radiation may look dramatic at high temperatures, there are a couple of major points to note. First, it applies to all bodies, even those we don’t consider particularly hot. In fact, any body above absolute zero is giving off such thermal radiation, at a rate that’s determined by its temperature and diagnostic of that temperature (there’s also a modifier, the emissivity, which we’ll get to in the next paragraph). That fact is critical in explaining the energy budget of the Earth or any body illuminated by other bodies. Second, the total radiation rate is finite. That’s a problem? For classical physics, it was a problem. Physicists considered an arbitrary cavity, say, a cube, and looked at all the ways that waves could fit in that met conditions at the surface (having a zero amplitude there, like violin strings at their supports). There is no limit to the number of modes as one considers shorter and shorter wavelengths. If all the modes or frequencies could contribute equally to the total radiation in the cavity, the sum goes to infinity. This was the ultraviolet catastrophe of classical electromagnetic theory. No concept of classical physics could explain it away. The solution lay in the idea that light comes in discrete packets, or photons, each with a fixed energy. Radiation is quantized. All photons of a given wavelength are identical. Light is both a wave and a particle. In fact, in the full development of the theory of quantum mechanics over the decades, it became clear that everything is both a wave and a particle at the same time, with the wave character more apparent in some cases (diffraction of light… or of electrons or even atoms) and the particle character more apparent in others (molecules colliding with each other to generate gas pressure… but also light beams imparting kicks of momentum to lightweight objects).
While stars emit light much as if they were perfect black bodies, there are deviations from the blackbody spectrum for familiar hot bodies. A tungsten filament in the incandescent lamps of old emits less light than its temperature would indicate, and its spectrum doesn’t exactly follow the shape of a blackbody spectrum at a lower temperature. We say that tungsten and other shiny metals have emissivities – the ratio of the radiation they emit in a range of wavelengths to that of a blackbody – less than one, sometimes far less. The gross accounting is in modifying the earlier equation to I = εσT⁴, with ε being the average emissivity over all wavelengths. This is readily seen in an image in the thermal infrared region that includes a polished aluminum plate reflecting the cold sky, along with an image of a shrub against the cold sky on the same winter day:
By the author
Here the effect is the inverse: the ability to absorb radiation is also proportional to the emissivity, from a physical principle called microscopic reversibility that we need not get into here. Emissivities less than unity figure only modestly in the energy balance of a planet. Most surfaces on a planet are chemically complex and consequently have many modes of absorbing energy. This gives them emissivities very close to one in the range of thermal infrared radiation – about 0.96-0.98 for vegetation and most soils, though as low as 0.6 in some narrow bands for Sahara Desert sands. Water has an emissivity of 0.96; ice, 0.97. The corrections from assuming ε = 1.000 can be moderately important for detailed climate models. A 4% error in emissivity leads to a 1% error in radiative temperature on the absolute temperature scale.
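That 4%-to-1% claim follows from inverting I = εσT⁴: the inferred temperature scales as ε to the one-quarter power. A small sketch, using a 20°C surface with emissivity 0.96 as the example:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def brightness_temperature(flux, emissivity=1.0):
    """Invert I = eps * sigma * T^4 to infer a temperature from measured flux."""
    return (flux / (emissivity * SIGMA)) ** 0.25

T_true = 293.0                      # a 20 C surface
flux = 0.96 * SIGMA * T_true**4     # emitted with emissivity 0.96 (e.g. water)

# Inferring the temperature while wrongly assuming a perfect blackbody (eps = 1)
T_inferred = brightness_temperature(flux)
fractional_error = (T_true - T_inferred) / T_true   # ~1% for a 4% emissivity error
```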
I offer some more details and some history of the blackbody concept in an Appendix. And, in a later section, I’ll get into some details of how electromagnetic radiation interacts with molecules – how any molecule absorbs and emits radiation, based on how its electrons are deployed and how the radiative properties give us a planet’s energy balance, including prominently its greenhouse effect.
Stars form by accretion of gas, plus perhaps some miscellany from a previous supernova or neutron-star merger that created heavy elements. The Sun in its outer layers is about 73% hydrogen and 25% helium, with the atoms in most of its depth broken up into bare nuclei and free electrons. The Sun’s original chemical composition was inherited from the interstellar medium out of which it formed: about 71.1% hydrogen, 27.4% helium, and 1.5% heavier elements. The hydrogen and most of the helium were produced by Big Bang nucleosynthesis in the first 20 minutes of the universe; the heavier elements were produced by previous generations of stars and spread into the interstellar medium during the final stages of stellar life and by events such as supernovae. I have as a sidebar a quick introduction to how elements are composed of the nucleons – protons and neutrons – and of electrons.
The loss of gravitational energy as all the parts fall into each other’s gravity is balanced by the creation of heat. This process is analogous to a bit of lead dropping a decent distance and thus getting a little hotter. Now magnify that heating enormously. Back in 1862, when no one knew that radioactivity and nuclear reactions occurred, William Thomson, later Lord Kelvin, attempted to use his formidable math and process-modeling skills to figure out the age of the Sun, based on its rate of energy loss. Assuming only the heat of gravitational contraction, with no continuing energy source, he calculated an age of 20 million years for the Sun. Nice try, but it failed to match later-established ages of fossils and, of course, the reality of nuclear reactions. In itself that’s not a failure in the process of doing science; making an error that forces the discovery of new principles is productive. The error lies in refusing to accept the new physics after it has been solidly confirmed. Some theories convince their skeptics; others simply outlive them. Still, Kelvin started the ball rolling toward understanding the energy course of the Sun. It illustrates the maxim of Francis Bacon, “Truth emerges more readily from error than from confusion.” Kudos for reducing the confusion… and for at least countering the ideas held in his time that the Earth was either infinitely old or very young (remember Bishop Ussher and 4004 BC).
The chemical composition of the Sun and other stars as being mostly hydrogen has a checkered history. Cecilia Payne-Gaposchkin documented it thoroughly in her 1925 Ph.D. thesis, as she became the first woman to get a doctorate in astronomy at Radcliffe College. Male astronomers dismissed her discovery, holding to the belief that stars resembled rocky planets, albeit much hotter. This was doubly odd, because Arthur Eddington had already hypothesized that the fusion of hydrogen to helium was the main energy source of the Sun. Admittedly, it took until 1927 for Friedrich Hund to describe the quantum tunneling mechanism that would allow protons to approach each other closely enough to fuse, and until 1929 for Robert Atkinson and Fritz Houtermans to use the measured mass deficit between 4 H atoms and one He atom to show that enormous energy could be released in stellar fusion. Still, bad show, my fellow white males, in holding back your field for some years for the sake of male pride. Cecilia Payne-Gaposchkin finally got a professorship at Harvard at age 56.
As hydrogen and other elements accreted to form the Sun, energy was substantially conserved; there was no gain or loss of energy (if we count the emitted light), nor had any significant fusion of hydrogen occurred. As accretion proceeds, a star of modest mass, like the proto-Sun, has barely started on its time-development or evolution on what we term the Main Sequence – the pattern of changes in temperature, brightness, and fusion of hydrogen fuel into helium and other chemical elements, basically all determined by the initial mass of the star. That pattern makes sense, given the knowledge of physics, particularly of nuclear fusion, that we need to dive into. We may start with the fundamental concepts of energy in its diverse forms – gravitational, thermal, mechanical, elastic, chemical, nuclear, electromagnetic. Briefly, energy is the capacity to do work against a resistance, such as compressing a spring or a gas-filled cylinder, increasing the speed of a vehicle, making a chemical reaction go “uphill” in energy, pumping water uphill, etc. Nuclear energy is trickier to fathom, inherently involving concepts of special relativity, with mass as a form of energy. In any event, we can with some ease consider changes in energy – e.g., rearrangements in atomic nuclei releasing large quantities as heat or radiation (pure electromagnetic energy). In the Appendix I clarify (I hope) a number of concepts of force, energy, power, and flux per area or per solid angle, all particularly relevant to our encounters with stars such as our Sun. A key facet is that types of energy can be converted into each other. Electromagnetic radiation such as light can be converted into heat; heat can be converted to mechanical motion, as in an automobile engine; mechanical energy is readily converted into heat via friction.
The interconversion of energy (or energy plus mass) is a theme throughout any discussion of how a star performs nuclear fusion, converting mass to heat and “light” in the most general sense; how radiation is again converted into heat at a planet; how heat gets partly converted into the energy of wind and storms; and on and on. Some points of the science and of the philosophy of science are in a sidebar.
Making stars: the gravitational collapse of gases creates very high temperatures, to an estimated 15 million kelvin in the center of the Sun. (The kelvin is a unit of temperature the same size as the degree Celsius, only counted from a start at absolute zero, -273.15°C.) The Sun reached a fairly stable structure, with a hot center enabling actually rather slow nuclear fusion reactions, using up less than a billionth of its mass each year; see below, where I give a quick calculation that you make far more energy metabolically than the Sun does by nuclear fusion, per unit mass. The core is composed mostly of the two simplest chemical elements, hydrogen and helium, both fully ionized. All the atoms have had their electrons stripped off to bounce around, making a plasma of such electrons and the bare nuclei – protons, helium nuclei (combos of two protons with two neutrons), and a few heavier elements from the Sun’s origin.
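That power-per-mass comparison can be sketched with round numbers (the 100 W resting human at 70 kg is my assumption, not the author’s figure):

```python
SUN_LUMINOSITY = 3.8e26   # W, total solar power output
SUN_MASS = 2.0e30         # kg
HUMAN_POWER = 100.0       # W, rough resting metabolic rate (an assumption)
HUMAN_MASS = 70.0         # kg

sun_w_per_kg = SUN_LUMINOSITY / SUN_MASS    # ~2e-4 W/kg
human_w_per_kg = HUMAN_POWER / HUMAN_MASS   # ~1.4 W/kg
advantage = human_w_per_kg / sun_w_per_kg   # thousands-fold in our favor
```

The Sun wins on sheer bulk, not on intensity: fusion in its core is a very slow, gentle simmer per kilogram.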
All the nuclear fusion and continued energy release occurs in the core. Surrounding this core is a “shell,” or range of distances from the center, over which electromagnetic radiation caroms around like a dense set of billiard balls – highly energetic gamma rays slowly being absorbed or scattered to create a wealth of less energetic rays, finally down to mostly light at the surface of the Sun. Moving at the speed of light, 300,000 km per second, this electromagnetic energy should take only about 2 seconds to reach the surface in a straight line, but it is estimated to take on the order of 170,000 years with all the bouncing back and around. The energetic particles of energy, the photons, readily encounter electrons and protons (and some helium nuclei) and get changed in direction. They take a “random walk” through the thick core, which at the center is about 180 times denser than water. With random moves, a photon can eventually get almost any distance from its start, but the number of steps taken, and thus the time taken, can be phenomenal. I have a Python code simulation that provides an estimate of the long journey of a photon.
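The author’s simulation isn’t reproduced here, but the standard scaling estimate can be sketched: a random walk of step ℓ needs about (R/ℓ)² steps to wander a net distance R, so the escape time is roughly R²/(ℓc). The ~1 mm mean free path below is a common textbook figure, assumed here:

```python
import math
import random

R_SUN = 6.96e8    # solar radius, m
C = 2.998e8       # speed of light, m/s
MFP = 1e-3        # assumed mean free path, ~1 mm (a common textbook figure)

# Diffusion scaling: ~(R/MFP)^2 steps of length MFP, each at speed c
n_steps = (R_SUN / MFP) ** 2
escape_years = n_steps * MFP / C / 3.156e7   # tens of thousands of years

# A small Monte Carlo check that net displacement grows as sqrt(steps):
def rms_displacement(steps, walkers=200):
    random.seed(1)                     # reproducible demo
    total_r2 = 0.0
    for _ in range(walkers):
        x = y = z = 0.0
        for _ in range(steps):
            cz = random.uniform(-1.0, 1.0)           # random direction in 3-D
            phi = random.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - cz * cz)
            x += s * math.cos(phi)
            y += s * math.sin(phi)
            z += cz
        total_r2 += x * x + y * y + z * z
    return math.sqrt(total_r2 / walkers)   # ~sqrt(steps) for unit steps
```

With these figures the escape time comes out around 5×10⁴ years; published estimates range from roughly 10⁴ to 10⁶ years depending on the assumed density profile, so the order of magnitude is the message.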
The energy does slowly escape from the core of a star. There are three major ways. First is as radiation itself, the more or less tangled path just noted. Second is by convection, with the hot plasma or hot gas churning like water boiling on a stove, so that the hot gas later radiates its own energy, such as light. Third is by conduction without churning of a mass, as in heat moving from a spoon in hot coffee to your hand. Conduction is completely negligible in stars – it works over very short distances. Convection only becomes important when the radiation gets strongly absorbed and converted to heat. It occurs when the rate of change of temperature with distance is great enough; a mildly heated pot of water doesn’t churn. In the Sun outside the core the absorption of radiation is modest, until we look nearer the surface. There, electrons and nuclei have largely recombined into atoms, which have many ways to absorb radiation. Near the “surface” (a bit poorly defined for a mass of gas), the final disposition is mostly the emission of lots of light in a general sense – visible light (about 37% of the total), longer wavelength infrared (about 50%), and about 13% as more energetic ultraviolet radiation (see Fig. 1 earlier). Hotter stars, especially late in their lives, have a different structure, much less stable. For the story on all manner of stars, light, medium, and heavy, a great read if you get into physics is on the webpage of Caltech astronomer and “astroinformatician” George Djorgovski, http://www.astro.caltech.edu/~george/ay20/Ay20-Lec7x.pdf.
The nuclear reaction inside the Sun and similar “nice” stars that are stable and long-lived is principally the fusion of protons (hydrogen nuclei) into helium nuclei, for much of the star’s life. By a series of steps called the proton-proton chain, four protons turn into a helium nucleus. This is the main reaction inside the Sun.
Below, from Astronomy 122, N. Peter Armitage, Johns Hopkins University
The details of the p-p chain are engrossing. We’ll delve into these shortly, while noting that the overall energy release of 4 protons (and 2 electrons) becoming a helium-4 nucleus is 26.72 million electron volts, or MeV. The energies involved in nuclear fusion are prodigious, so we use several kinds of notation. One MeV expresses the energy released as the voltage that would accelerate an electron to that energy. Multiply the voltage by the charge of the electron, 1.602×10⁻¹⁹ coulombs, to get metric energy units. Then, 1 MeV is 1.602×10⁻¹³ joules – a tiny amount of energy for a single particle, but, if we started with just 1 gram of protons, fusion would yield 640 billion joules. That’s about 178,000 kilowatt-hours! (If exponential notation such as 1.602×10⁻¹³ is unfamiliar, please check out another Appendix.) Another perspective on the energy release is that the kinetic energy of molecules bouncing around at room temperature near 27°C or 300 K is minuscule in comparison, about 1/25th of a single electron-volt.
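Those unit conversions can be verified in a few lines (the proton mass is from standard tables):

```python
E_CHARGE = 1.602e-19        # electron charge, C; 1 eV = 1.602e-19 J
MEV_TO_J = 1e6 * E_CHARGE   # 1 MeV = 1.602e-13 J
M_PROTON = 1.6726e-27       # proton mass, kg

energy_per_fusion = 26.72 * MEV_TO_J     # J released per 4 protons -> helium-4
protons_in_gram = 1e-3 / M_PROTON        # ~6e23 protons in one gram

joules_per_gram = (protons_in_gram / 4.0) * energy_per_fusion   # ~6.4e11 J
kwh_per_gram = joules_per_gram / 3.6e6                          # ~1.8e5 kWh
```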
The following is a dive into some of the details of nuclear fusion that powers the Sun, and us. You may wish to cut to the chase to a summary.
The origin of the huge energy exchanges in fusion reactions is the great strength of the strong force that binds protons and neutrons together in nuclei. Very few of us alive now have any experience of that release, such as happened at Hiroshima and Nagasaki. Our more common experience with energetic chemical reactions is telling. One very energetic one-step reaction is the combination of two neutral hydrogen atoms to create a hydrogen molecule, H₂. This releases 7.24×10⁻¹⁹ joules of energy, or 216,000 joules per gram of hydrogen. Nuclear fusion of one gram of hydrogen to 0.993 grams (0.7% loss of mass) of helium releases 640 billion joules, 3 million times greater. The famous formula of Einstein, E = mc², then explains the enormity of the energy release. That energy is embodied in three forms. First is the set of gamma rays emitted around the fusing nuclei, with energies of 0.511 MeV (4 of the gammas, from positron annihilation) and 5.49 MeV (the other 2 gammas). Second is the kinetic energy of the new nuclei rebounding from the reactions: 0.420 MeV for the first step (occurring twice) and 12.86 MeV for the final step. Their collisions with other nuclei amount to sharp decelerations and accelerations of their electrical charges, which generate electromagnetic radiation – more gammas. Third is the energy carried away by the neutrinos (denoted by the Greek letter nu, ν), about 0.5 MeV in total. The total energy release is then 26.72 MeV. Most of that comes out as heat and thus light (and ultraviolet and infrared radiation); the neutrinos zip out of the Sun with about 1 chance in 100 billion of being captured to heat the Sun. They are the weirdest particles, interacting so very weakly with all other matter that they could traverse light-years of lead before being absorbed. There’s more to say about them later and in a Sidebar.
The first step: two protons create a deuteron and a positron. This is the leftmost part of the sketch above. It turns out to happen at an extremely slow rate, such that the average proton lasts 9 billion years before fusing! Good thing for us, giving the Sun a long lifetime and leaving enough time for evolution. One major part of the slowness is the difficulty of getting two protons close enough to fuse. The protons repel each other very strongly via their electrical charges. The nuclear force can only overwhelm the electrical repulsion between the two protons to fuse them together when they approach within a distance about the size of the protons themselves, about one femtometer, 10⁻¹⁵ m. That’s 1/100,000th the size of a typical atom. At any separation of the protons, r, the electrical potential energy is proportional to 1/r. The formula is E = e²/(4πε₀r). Here, e is the charge of the electron and ε₀ is a fundamental constant called the permittivity of free space. Plugging in the numerical values (1.6×10⁻¹⁹ coulombs and 8.85×10⁻¹² farads per meter) at r = 1 femtometer, we get 2.31×10⁻¹³ joules. A “small” number, but that’s (1) equivalent to 1.43 MeV and (2) the average kinetic energy of a proton at a temperature of 11 billion kelvin! The temperature of the core of the Sun has been confidently modeled (a long story) at 15 million kelvin – way short, by a factor of about 700!
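The barrier height is easy to verify numerically; here is a quick sketch (standard constants; r = 1 femtometer as in the text):

```python
import math

# Coulomb potential energy of two protons at 1 femtometer, E = e^2/(4*pi*eps0*r),
# and the temperature whose average kinetic energy (3/2)kT would match it.
E_CHARGE = 1.602e-19    # C
EPS0 = 8.854e-12        # F/m, permittivity of free space
K_B = 1.381e-23         # J/K, Boltzmann's constant
r = 1e-15               # m, roughly the size of a proton

barrier_j = E_CHARGE**2 / (4 * math.pi * EPS0 * r)
barrier_mev = barrier_j / 1.602e-13
T_equiv = barrier_j / (1.5 * K_B)

print(f"{barrier_j:.2e} J = {barrier_mev:.2f} MeV")  # ~2.3e-13 J, ~1.4 MeV
print(f"T ~ {T_equiv:.1e} K")   # ~1.1e10 K, vs the 1.5e7 K modeled for the core
```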
Maxwell, Boltzmann, and quantum tunneling to the rescue. Of course, some protons are moving faster than average. Let’s look at the speed (note: velocity taken precisely means speed with a specified direction; I’ve used it a bit loosely up to here). The average energy of motion for a particle at temperature T is (3/2)kT, where k is Boltzmann’s constant, 1.381×10⁻²³ joules per kelvin. At the Sun’s core temperature of 15 MK (millions of kelvin), that energy for a single proton is 3.1×10⁻¹⁶ joules. We can convert that to an equivalent electrical potential accelerating an electron, by dividing the energy by the charge of the electron. The potential is about 2000 volts (V). Some particles get fortuitous bounces to higher speed. By the very intriguing principles of statistical mechanics, we can figure out the probability of the protons having higher energy. We look at the “Maxwell-Boltzmann distribution” of energy in a large number of particles that have settled down by collisions to reach a defined temperature. (Ludwig Boltzmann really got around, didn’t he?)
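The (3/2)kT figure is a one-liner to check (standard constants assumed):

```python
# Average thermal kinetic energy of a particle at the Sun's core temperature,
# and the equivalent accelerating voltage for an electron.
K_B = 1.381e-23         # J/K
E_CHARGE = 1.602e-19    # C
T_core = 15e6           # K

E_avg = 1.5 * K_B * T_core      # (3/2)kT
volts = E_avg / E_CHARGE

print(f"{E_avg:.2e} J, ~{volts:.0f} V")   # ~3.1e-16 J, ~1900 V ("about 2000 volts")
```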
The probability of an individual particle having a kinetic energy E when the group average is Eaverage is proportional to the negative exponential exp(−E/Eaverage). The relative chance of finding a particle with an energy, E, 700 times the average energy, Eaverage, is e⁻⁷⁰⁰, which corresponds to far, far less than 1 proton in the entire Sun. It turns out that protons with only about 7 times the average energy do pretty well to drive fusion… in the sense that they are the ones responsible for most fusion reactions. The fraction of protons at this energy or higher is about 0.12%. They’re plentiful, but that’s not good enough; the fusion process is frustrated by the need to invoke, as it were, the weak force. We’ll see that, shortly. Caveat: the core of the Sun is a fully ionized plasma of mostly protons and electrons. It doesn’t strictly meet the conditions for its protons to race around with the distribution of speeds of the Maxwell-Boltzmann distribution, but it’s “close” [https://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97331999000100014]. The more exact distribution has been calculated.
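The two Boltzmann factors above can be evaluated directly; the rough count of 10⁵⁷ protons in the Sun is my assumption, for scale only:

```python
import math

# The simple Boltzmann weighting used in the text, exp(-E/E_average):
p_700 = math.exp(-700)   # energy 700x the average: hopelessly rare
p_7 = math.exp(-7)       # energy ~7x the average: about 1 in 1100

n_protons_sun = 1e57     # rough order-of-magnitude count of protons in the Sun

print(p_700 * n_protons_sun)  # << 1: not a single such proton in the whole Sun
print(p_7)                    # ~9e-4, the order of the ~0.1% quoted in the text
```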
George Gamow realized that there is a “useful” probability of fusion with protons moving at only about 7 times the average energy. In quantum mechanics, particles also act as waves. Coming to a barrier higher than their energy, the wave still leaks or “tunnels” into and past the barrier, weakly but assuredly. This phenomenon is well documented in many cases, including the technology of tunnel diodes in electronics. Gamow estimated the probability of tunneling with a formula that looks like the uncertainty relation – a particle can’t be simultaneously “pinned down” for both its position and momentum. The product of (uncertainty in position) × (uncertainty in momentum) always equals or exceeds the value of Planck’s constant, h, multiplied by a small numerical factor. An approximate formula for the probability that the proton tunnels through the barrier of electrical repulsion to get close enough to the other proton for fusion is exp(−e²/(ε₀hv)). Here, v is the speed of the collision. A proton at 6.7 times the average energy of the core temperature is moving at about 1.6 million meters per second! The probability of tunneling is then about 6 in a hundred collisions. We may summarize the story of proton speed and quantum tunneling with a couple of complementary diagrams:
Stephen Smartt, Univ. of Toledo
M. Coraddou et al., Brazilian J. Physics 29 (1999)
The faster a proton is moving, the higher the probability that it will tunnel through the “Coulomb barrier” to get to the other proton. At the same time, the probability of finding such a fast proton declines rapidly with that speed. It’s the product of both probabilities that counts. This total probability peaks at intermediate proton speeds. The peak is at the “most effective energy,” while there are contributions to fusion over a moderate range of energies.
However, extremely frequent collisions and a fair probability of tunneling are still far from enough! If a bit over a tenth of a percent of protons have this favorable energy and each of these has a probability of 6% of tunneling close enough to another proton for fusion, then fewer than one ten-thousandth of the collisions could bring protons close enough to fuse. Protons at the density of the Sun’s core average being spaced apart at about 22,000 femtometers (2.2×10⁻¹¹ m), while these energetic ones move at about 1.6 million meters per second. The energetic protons should collide at a rate of about (speed)/(distance), or 70 quadrillion times per second! However, they’re successful only about one time in 9 billion years; that’s one success in about 20 decillion collisions (2×10³⁴)!
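Multiplying the figures above together shows the staggering count of collisions per successful fusion (a sketch using the text’s own numbers):

```python
# Collision-rate estimate: an energetic proton moving at ~1.6e6 m/s with
# neighboring protons spaced ~2.2e-11 m apart in the Sun's core.
speed = 1.6e6        # m/s
spacing = 2.2e-11    # m
collisions_per_s = speed / spacing
print(f"{collisions_per_s:.1e} collisions/s")   # ~7e16: 70 quadrillion per second

# One successful fusion per ~9 billion years implies an enormous number of tries:
seconds_per_9gyr = 9e9 * 3.156e7     # ~3.16e7 seconds per year
tries = collisions_per_s * seconds_per_9gyr
print(f"{tries:.0e} collisions per fusion")     # ~2e34
```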
The weak interaction slows down the first fusion of two protons, enormously. The Sun burns up only a few ten-quintillionths (about 3×10⁻¹⁹) of its mass as hydrogen per second. Of course, that’s still about 600 million tonnes. The reason for the low rate is that the fusion reaction requires the action of the weak force. A proton has to turn into a neutron, a positron, and a neutrino with the help of the weak force. The name “weak” is very appropriate. Two protons have to bring in, as it were, the help of a neutrino that reacts only extremely weakly with other matter. Emitting a neutrino in a reaction is the converse of capturing a neutrino, and capturing one is an extremely rare process – hence, the idea that a neutrino has an even chance of being captured only after traversing some light-years’ thickness of dense lead!
Side note: Fusion with the release of a neutrino is a variant of sorts on the spontaneous decay of some unstable nuclei in what’s called beta decay. For example, a nucleus of thorium-234, an isotope of the fairly common heavy element thorium, decays to the element protactinium-234 by emitting an electron – called a beta particle for historic reasons – and a neutrino. Beta decay was hard to explain in terms of known physics until the polymath and physicist Enrico Fermi developed a very successful theory of it. Fermi attended a Hitchcock lecture at Berkeley in which one of J. Robert Oppenheimer’s students used elegant quantum theory. Afterward, he told fellow physicist Emilio Segrè, “I went to their seminar and was depressed by my inability to understand them [Oppenheimer’s students]. Only the last sentence cheered me up; it was ‘And this is Fermi’s theory of beta decay.’” Excess humility, or sly humor? I think the latter about this super-genius.
The neutrinos are very quizzical particles. First, we note that they carry off about 1.9% of the energy of the reaction – that’s the 0.53 MeV to be debited from the 26.72 MeV above. They really do carry the energy away. They interact so weakly with ordinary matter that they exit the Sun without leaving their energy there (thus, not helping heat our planet, either). That weak interaction expresses the converse property to that showing up in fusion: hard to make, hard to catch. While a gamma ray inside the Sun may travel some millionths of a meter before interacting with an electron or other charged particle, a “typical” neutrino has only an even chance of being absorbed by matter if that matter is, say, the amount encountered in moving through several light-years of lead. Neutrinos are truly weird dudes; their properties and even their connection to why any matter exists at all are explored in a sidebar. Their mere existence was postulated by Wolfgang Pauli – with the full theory built by Enrico Fermi – decades before there were instruments to detect them. They had to exist in order for energy and momentum both to be conserved in the nuclear reaction!
Once the fusion reaction happens, a deuteron and a positron are formed, as noted in the sketch. The deuteron is a proton and a neutron bound together. Relative to the unbound state of n + p, the drop in energy is 2.224 MeV, a substantial amount. The final forms of energy released in the fusion step are kinetic energy given to the deuteron and the two gamma rays. The gammas arise when the positron meets one of the ubiquitous electrons. The positron and the electron are antiparticles of each other. They annihilate each other to form pure electromagnetic energy in the amount of the mass of both particles multiplied by the square of the speed of light, 2mₑc². That’s 511 keV per “tron.” The gamma rays don’t come to us on Earth. They bounce around for about 170,000 years, on average, being converted into many smaller packets of electromagnetic energy (“light”) on the way. Note also that the deuteron is known on Earth. It’s the nucleus of heavy hydrogen, deuterium. Deuterium figures in a lot of science and technology on Earth. It’s present in water, hydrocarbons, and other compounds of hydrogen at an average of 1 deuterium atom per 6,420 atoms of hydrogen. It didn’t get to Earth from the Sun, of course. Deuterium, or D, acts chemically almost exactly like hydrogen. “Almost” is the relevant word. Its chemical reactions are slower than those of ordinary H, and at chemical equilibrium its compounds end up slightly enriched in D relative to H. Deuterium makes a good tracer of past ocean temperatures. Deuterated pharmaceuticals can be more effective than ordinary ones.
How do we know that the proton-proton chain is the main energy-producing nuclear reaction in the Sun, and its rate? Most simply, hydrogen is the fuel, and it has only a few different reactions. The initial fusion reaction of two protons is challenging to understand. The rate is so slow, even at the core temperature of 15 megakelvin, that it can’t be measured accurately by colliding protons in the lab using an accelerator. The weak force is truly weak. Instead, the rate is calculated from the fundamental physics of the weak force that was learned in other reactions. This I find to be an amazing tour de force of theoretical physics.
At last, we get to the second step,
…a deuteron colliding with another proton to make a helium-3 nucleus. There’s another Coulomb barrier of electrical repulsion to overcome, needing quantum tunneling again. However, the weak interaction is not needed, so the rate of fusion is much higher than p-p fusion. The energy drop upon binding ends up as a strong gamma ray of 5.49 MeV. This step can be measured in the lab. Like the p-p fusion, two instances are needed to make the final helium-4 nucleus or alpha particle. Helium-3 is present on Earth, though not coming to us from the Sun, rather, formed in the stellar explosions that gave our whole Solar System material from which to condense the Sun and the planets. Helium-3 has interesting physical properties that inform us about nuclear physics and superfluidity of its cousin, helium-4. It is also used in dilution refrigerators for generating extremely low temperatures, a few thousandths of a kelvin.
…and the final fusion.
Two helium-3 nuclei collide and, rarely but effectively, generate a helium-4 nucleus and 2 protons. A lot of kinetic energy is released, 12.86 MeV. This reaction also involves overcoming a barrier of electrical repulsion, one that’s 4 times higher than for the first two steps. Quantum tunneling plays the same kind of role here. The helium-4 nucleus is the familiar alpha particle of radioactivity. We see a lot of it on Earth from the alpha decay of heavy elements. Earth received a goodly amount of those elements, so we have pockets of helium gas in natural gas domes. We also have it in the atmosphere at about 5 parts per million. It was not discovered on Earth – rather, in the Sun by its spectral emission line.
The net reaction is then 6 protons generating a helium-4 nucleus and giving back two protons. The total energy release is 26.72 MeV, with 1.9% of that carried away by neutrinos. There are two alternative fusion pathways to the same end products. They involve beryllium nuclei in one step. They’re more significant in stars hotter than the Sun.
Let’s back up to that massive amount of energy that appears in converting protons to helium nuclei. It represents the conversion of mass into energy. Einstein showed that a mass of amount m can disappear and liberate an amount of energy E = mc², that famous equation, where c is the speed of light, close to 300,000,000 meters per second – a big number. How much mass is lost in the reaction above? The mass of each proton is 1.0072766 atomic mass units (an amu is 1/12th the mass of a common carbon-12 nucleus), or 1.6726×10⁻²⁷ kilogram. Four protons “weigh” (have a mass of) 6.69048×10⁻²⁷ kg. A helium nucleus has a mass of 4.001506 amu or 6.644657×10⁻²⁷ kg. That’s a loss of 0.04582×10⁻²⁷ kg. Multiply this by c² to get 4.124×10⁻¹² joules, equivalent to 25.72 MeV. Add this to the 1.02 MeV from the annihilation of 2 positrons with 2 electrons to get a total of 26.73 MeV. The mass loss is a small fraction of the original mass of the protons, only 0.685%, but compare that to 0.1% conversion of mass to energy in the fearsome nuclear fission of uranium-235, and we have a gut feeling for that as a massive amount of energy.
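The whole mass-defect calculation fits in a few lines (masses as quoted above; constants are standard):

```python
# Mass defect of 4 protons -> helium-4, converted to energy via E = m c^2.
C = 2.998e8              # m/s, speed of light
AMU = 1.66054e-27        # kg per atomic mass unit
MEV_TO_J = 1.602e-13     # joules per MeV

m_proton = 1.0072766 * AMU
m_helium4 = 4.001506 * AMU

dm = 4 * m_proton - m_helium4    # mass lost in the reaction
E = dm * C**2

print(f"mass lost: {dm:.3e} kg ({dm / (4 * m_proton) * 100:.3f}%)")
print(f"E = {E / MEV_TO_J:.2f} MeV")  # ~25.7 MeV; add 1.02 MeV from annihilation
```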
Summary of nuclear fusion in the Sun: Fusion happens, releasing massive amounts of energy, but at a very slow rate because the first step requires (1) quantum tunneling so that protons can get far closer than they could in classical mechanics and (2) the action of the (truly) weak force. Very few protons succeed; in fact, the average proton lasts about 9 billion years before fusing. Great for us, for a long lifetime for the Sun. When 4 protons (and two electrons) fuse to make a helium-4 nucleus, mass is lost to become pure energy. That energy is initially as gamma rays (directly and from the deceleration of highly energized products). The gammas get down-converted, splitting into numerous photons, up to millions of new photons per gamma (a 12.86 MeV gamma ends up as over 5 million photons at an average energy of about 2.5 eV). The bouncing of the photons is intense, such that the final photons emerge from the surface of the Sun about 170,000 years after the gammas are created. Some energy is lost as nearly massless neutrinos, weird particles barely interacting with ordinary matter; a further weirdness involving their antiparticles may even explain why ordinary matter outlasted antimatter from the Big Bang. The intellectual enterprise is stunning – assembling concepts from studies on Earth to explain a cascade of phenomena in the Sun.
The Sun is not a uniform ball. From its core to its surface there is a gradation of temperature, a gradation of density and chemical composition, and a gradation of activities, nuclear and otherwise. There is, luckily for us, a nice stability in all this. Consider temperature. How do we know what the temperature of the core is or, for that matter, its temperature anywhere? Obviously, no one has put a thermometer in the core, nor had any other access, even with any kind of radiation, which has little chance of propagating out to us without being thoroughly remixed. The way of knowing the profiles is an elegant match of theory and measurement.
There is a “stable” of reliable theories, with their equations: conservation of mass (as matter moves around, minus the conversion of nearly 1% just noted); conservation of energy (gamma rays converting into heat, and such); energy transport by radiation and by convection; the equation of state (how temperature, pressure, and volume are related, in particular); the rate of absorption of radiation by matter, for its so-called opacity to radiation; the fusion rate, as a function of temperature and density of matter; and, the critical equation of hydrostatic equilibrium. The importance of all these is clear, and at this point there’s no need to show how they are all solved together, a huge exercise. The last equation is most interesting for predicting the structure of a star, such as our Sun. In brief, we may look at the Sun as a sphere with continuous layer upon layer, each layer having its own density, temperature, and flows of energy. There is a nice balance between gravity pulling them all in and the high pressures resisting the pull. The result is a prediction of all these layers’ properties. The Sun is densest in the center, as one expects, tapering off rather quickly. The temperature likewise is highest in the center, tapering off less radically.
The model needs verification by physical measurements. It may seem curious that a key type of measurement is of the equivalent of earthquakes on the Sun, the helioseisms. Small variations in processes cause pulses of pressure at various depths. The Sun then rings like a bell, as it has been put, in many different modes – patterns in time and in distribution across the face and limbs (edges) of the Sun. These disturbances can be measured as lateral distortions visible at the edges and as Doppler shifts in the light from the Sun; the shift of wavelength tells us how fast that part of the Sun is moving toward us or away from us. The Doppler effect is the change in frequency of radiation when the source is moving to or from us along the line of sight. It’s familiar in sound waves, as in the classic case of a train’s horn. There is a direct analog in light. The emission of radiation by atoms in the Sun’s outer layer, the photosphere, is a heady mixture of many frequencies from many atomic species in various states of ionization (loss of one or more electrons). The frequencies are very well known for atoms at rest. The shifts from motion can be analyzed (with sophisticated methods) to discern the velocity of that part of the surface toward or away from us. It requires that we look at sunlight at a great many frequencies or equivalent wavelengths. This is done routinely by spreading out the light, as with a diffraction grating – a strip with many engraved lines.
The seisms of the Sun are probes into what lies between points on the Sun. A great analogy is to seisms on the Earth, the earthquakes. They propagate as waves throughout the Earth. Measurements of their arrival times indicate the speed of sound in the intervening rock. The sound speeds can be mathematically “inverted” to decipher the composition of the rocks. This needs the help of (1) the knowledge of the speed of sound in a range of minerals at high pressure, with the knowledge being gained in labs with high-pressure equipment, and (2) models of how the minerals are likely to be distributed by depth. You might be amused by calculations in the so-called forward direction, estimating the enormous pressure at the center of the Earth (340 gigapascals, 3.4 million times air pressure at the surface). In a broadly similar manner, the sunquakes or helioseisms can be both detected and interpreted to infer the composition and state of matter throughout the Sun. Their interpretation for the Sun’s state by depth relies on the models noted earlier.
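As a taste of such a “forward” calculation: assuming, crudely, a uniform-density Earth, hydrostatic equilibrium gives a central pressure within about a factor of two of that figure (the real Earth’s dense iron core raises it):

```python
import math

# Crude estimate of the pressure at Earth's center for a uniform-density sphere.
# Integrating hydrostatic equilibrium dP/dr = -G m(r) rho / r^2 with constant
# density gives P_center = 3 G M^2 / (8 pi R^4).
G = 6.674e-11    # m^3 kg^-1 s^-2
M = 5.97e24      # kg, Earth's mass
R = 6.371e6      # m, Earth's mean radius

P_center = 3 * G * M**2 / (8 * math.pi * R**4)
print(f"{P_center / 1e9:.0f} GPa")   # ~170 GPa; detailed models give roughly twice this
```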
A visualization of the structure of the Sun is useful:
The insights from theory and measurement extend to verifying the equation for the rate of the proton-proton fusion reaction. There are also companion measurements of the rate of production of neutrinos. Hard as they are to catch, their flux has been measured. Their rate of capture agrees with the model of the structure of the Sun, provided that we accept the fact that the three types of neutrinos can change into each other as they fly toward the Earth. That is a fascinating story in itself; you can read about it in many places.
Our Sun and stars of similar modest total mass are effectively simple in structure. Their energy production is modest, so that the pressure developed in the core comes mainly from “simple” gas pressure developed in response to temperature. Larger stars have a notable, or even massive, contribution to pressure from radiation. Radiation does carry momentum, pushing things around. It’s something we don’t feel in our lives from any light source, as it’s feeble at sunlight levels. Even so, with a light enough device in rather empty space, sunlight accelerates it – solar sails have now flown, notably Japan’s IKAROS in 2010 and the Planetary Society’s LightSail 2 in 2019. Consider also Voyager 1 and Voyager 2, launched in 1977 and now classified as traveling in interstellar space at, respectively, about 148 and 124 times the mean Sun-Earth distance. Their trajectories are still being analyzed from their incredibly low-powered but reliable radio transmissions. They show a tiny acceleration from the emission of small amounts of thermal radiation. Back to the Sun: the solar interior near the core is dense, but radiation carries more heat away than does convective churning of material. That holds until radiation nears the surface, as noted earlier.
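How feeble is sunlight’s push? A one-line estimate using the solar constant (for full absorption; a perfect reflector feels twice this):

```python
# Radiation pressure of sunlight at Earth's distance: P = I/c for an absorbing
# surface, where I is the solar constant and c the speed of light.
I = 1361.0      # W/m^2, solar constant at 1 AU
c = 2.998e8     # m/s

P_absorb = I / c
print(f"{P_absorb * 1e6:.1f} micropascals")  # ~4.5 uPa: utterly imperceptible to us
```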
Our luck in the Universe: There is a nice stability to a low-mass star such as our Sun. If the rate of nuclear fusion in the core increases, then the temperature rises. In consequence, the pressure in the core rises and the core expands. At the reduced density of the core, the rate of fusion falls back down. The Sun’s output, over the time we’ve been measuring it, fluctuates only about 0.1%. There is a long-term trend of increasing output over time, about 1% per 100 million years. (That’s inferred from studies on many other stars, not from measuring our Sun over a puny few decades.) This variation became important over the history of life on Earth, as we will see.
There are short-term phenomena that make the Sun more than a very stable, benign light source. Near a group of sunspots the Sun may develop relatively small bright spots called solar flares. Some of these point at Earth and increase the ionization of the high atmospheric layer, the ionosphere. This disrupts communications on Earth. In addition to electromagnetic radiation, the Sun also emits a stream of charged particles called the solar wind. It’s mostly hydrogen and helium ions and accompanying electrons (while the particles are individually charged, the whole mass is electrically neutral; otherwise it would be subject to very strong “image” forces induced in the solar surface to retain it). The loss of mass has been only a few hundredths of a percent of the Sun’s mass over its lifetime. The solar wind reshapes the magnetic field around planets having such magnetic fields, and it helps to strip the atmosphere from planets that lack such fields, e.g., Mars. Larger flows of matter occur in episodes called coronal mass ejections (CMEs). Hitting the Earth’s upper atmosphere and magnetic field, the particles create beautiful auroras. They can also induce large currents in extended electrical connections, to wit, the transmission lines of the electric grid. One CME in 1989 resulted in a nine-hour outage of the important Hydro-Québec grid.
The Parker Solar Probe is now approaching the Sun repeatedly to measure many such interesting phenomena of the Sun. It’s named after Eugene Parker, who hypothesized the existence of the solar wind in the 1950s. The probe will get as close as 3.8 million miles (6.2 million km) from the Sun’s surface in its later excursions. At that distance it will withstand radiation 475 times stronger than at Earth’s orbit. The secret is that it will do so at night – no, that’s an old joke. An extremely reflective shield made of super-white aluminum oxide stands off from the probe and will limit the gain of heat by radiation. The side of the spacecraft facing away from the Sun can radiate heat away to cold interstellar space at an effective temperature of about 3K, nearly absolute zero. The probe has to orient itself automatically to keep the shield pointing at the Sun; the probe is the most automated and almost self-driving spacecraft ever launched. It has to be; commands from Earth take 8.5 minutes to get to it.
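The “475 times” figure is a simple inverse-square check (distances as quoted above; the small difference from ~470 is just rounding of the closest-approach distance):

```python
# Inverse-square check of the Parker Solar Probe figure: solar intensity at
# closest approach (6.2 million km above the surface) relative to Earth's orbit.
AU = 1.496e8           # km, mean Sun-Earth distance
R_SUN = 6.96e5         # km, solar radius
d = 6.2e6 + R_SUN      # km from the Sun's center at closest approach

ratio = (AU / d) ** 2
print(f"~{ratio:.0f}x the intensity at Earth's orbit")  # ~470, close to the quoted 475
```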
Other optical phenomena at the Sun have given us information variously startling or very useful. Sunlight has small deviations from the smooth distribution of intensity over wavelengths that characterizes a blackbody. The deviations are sharp absences of output over very narrow ranges of wavelengths. These are the Fraunhofer lines, discovered in 1802 by the English chemist William Wollaston, then rediscovered and studied intently by the German physicist Joseph von Fraunhofer. (Note: the appellation physicist is modern. The term was only coined in 1840 by William Whewell, who also coined the term scientist. Before that you could be a natural philosopher.) If you spread out the solar spectrum, as with a prism or a diffraction grating, the lines show up as stark black lines. They’re caused by various atoms in the Sun’s atmosphere absorbing light at those wavelengths, then dumping the energy as heat. It took years of work by many scientists to interpret the lines, attributing them to specific chemical elements in specific states of ionization (the number of electrons stripped off in the Sun’s heat).
Helium is named for the Greek Titan of the Sun, Helios. Helium was discovered in the Sun, not on Earth, from the Fraunhofer line that it generates, with some pride of place (but not sole credit) taken by Jules Janssen in France and Norman Lockyer in England. The discovery of helium is tied also to the discovery of energy levels in atoms being quantized, or restricted to discrete values. Helium was finally identified on Earth and realized to be a chemical element in 1895 by two Swedish chemists, Per Teodor Cleve and Nils Abraham Langlet.
flatearth.ws, a “flat Earth” debunking site
The story of Fraunhofer lines continues today. In my own time as a staff scientist at Los Alamos Scientific Laboratory, as it was known then, I was studying the detailed role of chlorophyll in green-plant photosynthesis. A number of scientists were already measuring the fluorescence of chlorophyll to determine how rapidly a leaf was doing photosynthesis. I thought, how wonderful it would be to measure fluorescence from satellites over vast areas of the globe to see the level of performance by geographic area, season, water status, etc. and to integrate it over the globe to see how much carbon is exchanged. A great way would be to fly a spectrometer on a satellite that could measure the fluorescence radiated from Earth’s vegetation, focused on the narrow band of wavelengths made blank in the Sun’s spectrum by a Fraunhofer line. There’d be no confusing – really, obliterating – light from the Sun to overwhelm the signal. Alas, that would require the capabilities of a spy satellite. Well, it did come to pass. In 2009 the first measurements were made from the GOSAT satellite. These enabled the mapping of photosynthetic productivity of the world’s plants!
NASA – GOSAT satellite
Of course, I had nothing to do with the whole setup, just a great daydream that others turned into reality. Back on Earth, it’s easy to see the beautiful deep red fluorescence of chlorophyll right in your home. Find a nice, green leaf. Grind it up; a mortar and pestle is good, but there are lots of ways to do this, a bit more tediously. Add some pure ethanol, that is, without water. Vodka won’t do, but denatured ethanol from the hardware store will. This will dissolve chlorophyll, plus other pigments. Either filter the solution or be patient and simply pour it off after the leaf bits have settled. Take the solution into the sunlight and see the green solution giving off a red glow. It helps to have the solution in a clear test tube or equivalent – a strong solution can reabsorb too much of the fluorescence. Using a violet or green laser pointer beam makes it more dramatic.
The Sun has two temperatures of great interest. For most of us on Earth it is the surface temperature, about 5800 K. At this temperature it radiates what we term white light, nearly half of its energy at visible wavelengths and half in the infrared. The second temperature is that of its core, about 15 MK (megakelvin), sufficient for steady nuclear fusion to power its surface emission of light. Both of these temperatures were set by its mass, accumulated from a gaseous nebula that was 98% primordial gas – hydrogen (73% by mass), some helium (25%), and a bit of lithium – formed in the Big Bang. The other 2% was the set of heavier elements from probably two massive events involving earlier stars in the neighborhood, perhaps two supernovae or one plus a neutron star merger. These heavier elements gave us the solid part of Earth and other planets, asteroids, Kuiper Belt objects farther out, and Oort Cloud objects even farther out.
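A short numerical integration of the blackbody (Planck) spectrum at 5800 K backs up the “nearly half visible” statement; taking 380-750 nm as “visible” is my assumption:

```python
import math

def planck(lam, T):
    """Spectral radiance of a blackbody per unit wavelength (SI units)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def band_fraction(lam_lo, lam_hi, T, n=2000):
    """Fraction of total blackbody output between two wavelengths (trapezoid rule)."""
    total = 5.670e-8 * T**4 / math.pi     # Stefan-Boltzmann total, per steradian
    step = (lam_hi - lam_lo) / n
    s = 0.5 * (planck(lam_lo, T) + planck(lam_hi, T))
    for i in range(1, n):
        s += planck(lam_lo + i * step, T)
    return s * step / total

frac = band_fraction(380e-9, 750e-9, 5800)
print(f"visible fraction: {frac:.2f}")   # roughly 0.44: indeed "nearly half"
```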
Scaling relations – necessary math; jump to the summary, if you wish. For a star that’s basically primordial gas, the mass that it has managed to condense sets its structure, including its size (say, as radius), its core temperature, its surface temperature, and its lifetime. Many studies of other stars of different masses, M, reveal critical facts. One is that the physical size, the radius, grows closely as M^0.7 (mass to the 0.7 power) for stars near the mass of the Sun – faster for smaller stars, more slowly for bigger stars. (Power laws are quite useful for expressing relationships. Techniques for measuring sizes of stars are interesting.) That 0.7 power means that a 10% higher mass creates a star that’s 1.1^0.7 or 7% larger in radius… and 22% larger in volume. The total radiant power output of the stars, again near the mass of the Sun, increases drastically with mass, as M^3.7. A 10% increase in mass gives a 42% increase in power! The output is the product of surface area, proportional to the square of the radius (thus, mass to the 1.4 power), and the radiant power output per unit surface area. The latter factor grows then as mass to the 2.3 power. This output per area is proportional to the fourth power of the surface temperature, as we’ll discuss in more detail later. Thus, the surface temperature increases as approximately M^0.6. A 10% higher mass makes a 6% higher temperature, or a gain of about 350 K. The combined effect of temperature and size differences on a planet in the same configuration as Earth with the Sun would be a gain of 42% in intercepted stellar energy and, all else equal, a 9% increase in surface temperature, about 26 K or 26°C – toasting us life forms.
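The scaling arithmetic above, in a few lines (exponents as quoted in the text):

```python
# Scaling relations for a star 10% more massive than the Sun:
# R ~ M^0.7, L ~ M^3.7, so surface T ~ M^((3.7 - 2*0.7)/4) ~ M^0.575.
m = 1.10
radius = m ** 0.7
luminosity = m ** 3.7
t_surface = m ** ((3.7 - 2 * 0.7) / 4)
t_planet = luminosity ** 0.25    # equilibrium T of a planet at a fixed distance

print(f"radius: +{(radius - 1) * 100:.0f}%")         # ~7%
print(f"luminosity: +{(luminosity - 1) * 100:.0f}%") # ~42%
print(f"surface T: +{(t_surface - 1) * 100:.0f}%")   # ~6%
print(f"planet T: +{(t_planet - 1) * 100:.0f}%, "
      f"~{288 * (t_planet - 1):.0f} K warmer for an Earth analog")
```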
Summary: stars that are slightly bigger than the Sun – 10% more massive – are 7% larger in radius and 6% higher in temperature; they provide 26% more energy per surface area and 42% more energy to a planet at the Earth’s distance, a cooker.
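These scaling relations are easy to check numerically. Here's a minimal Python sketch using the exponents quoted above (0.7 for radius, 3.7 for luminosity); it illustrates the power laws, not a real stellar model:

```python
# Stellar scaling near one solar mass, as power laws in m = M / M_sun.
# Exponents are the approximate values used in the text.

def radius_ratio(m):
    """R/R_sun grows roughly as m^0.7 near one solar mass."""
    return m ** 0.7

def luminosity_ratio(m):
    """L/L_sun grows roughly as m^3.7 near one solar mass."""
    return m ** 3.7

def temperature_ratio(m):
    # L ~ R^2 * T^4, so T ~ (L / R^2)^(1/4) ~ m^((3.7 - 1.4)/4), about m^0.6
    return (luminosity_ratio(m) / radius_ratio(m) ** 2) ** 0.25

m = 1.10  # a star 10% more massive than the Sun
print(f"radius:     +{(radius_ratio(m) - 1) * 100:.0f}%")       # ~ +7%
print(f"luminosity: +{(luminosity_ratio(m) - 1) * 100:.0f}%")   # ~ +42%
print(f"surface T:  +{(temperature_ratio(m) - 1) * 100:.0f}%")  # ~ +6%
```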
Bigger stars live fast and die young. The strong dependence of total fusion power on mass – that factor of mass to the 3.7 power – is inferred from the combination of size and power density, which together give the luminosity, how bright the star looks to us. The core temperature is closely proportional to the star's mass, so that a 10% increase in mass gives a 10% increase in core temperature. The proton-proton fusion reaction is quite sensitive to temperature, as discussed above, and this accounts for much of the rise in total fusion power. The core density is nearly independent of mass.
Putting some of this together, we can see the dependence of the star's lifetime on its mass. Its initial fuel for fusion is its mass. Its rate of fuel use as power output rises as the mass to that high power of 3.7. The fractional rate of use of its fuel is (power/fuel), proportional to M^2.7, or a bit more closely M^2.5. Lifetime is the inverse of this, or mass to the −2.5 power. A 10% increase in mass gives a lifetime that's 21% shorter! A table illustrates the dramatic effect of star mass on lifetime, over a wide range in masses. Here, M☉ is the mass of our Sun:
(From the website of George Djorgovski, Caltech)
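The lifetime scaling can be sketched the same way, using the rounded −2.5 exponent from the text:

```python
# Main-sequence lifetime scaling: fuel ~ M, burn rate ~ M^3.7,
# so lifetime ~ M / M^3.7; the text rounds the exponent to -2.5.

def lifetime_ratio(m, exponent=-2.5):
    """Lifetime relative to the Sun's, for mass ratio m = M / M_sun."""
    return m ** exponent

print(f"10% more massive: lifetime x{lifetime_ratio(1.10):.2f}")  # ~0.79, i.e. ~21% shorter
print(f"2x solar mass:    lifetime x{lifetime_ratio(2.0):.2f}")   # ~0.18
print(f"half solar mass:  lifetime x{lifetime_ratio(0.5):.1f}")   # ~5.7
```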
It took 4.6 billion years to evolve sentient life on Earth; a short-lived central star, even with an Earth-like planet at the right distance, might not give evolution enough time to generate life beyond bacteria or small multicellular organisms – and that can hold even for stars with only moderately shorter lifetimes than the Sun's.
The radiation output of a star varies over its lifetime, intimately affecting the prospect of habitability of a planet near the star. Our beneficent Sun has not always had the current effective temperature of 5800K. It sent only 70% as much radiation to Earth 2.2 billion years ago when, in a stroke of “luck” almost fatal to life on Earth, bacteria evolved oxygen-liberating photosynthesis. This destroyed the strong methane greenhouse effect that kept the Earth from freezing. Freeze it did, with bacterial life barely hanging on in the era of Snowball Earth. Currently, our Sun is gaining about 1% in luminosity (total radiative output) per hundred million years. It would naturally get a bit toastier, though humans are certain not to be around that long. There’s additional detail, below. The luminosity variation is readily seen to put extreme constraints on the greenhouse effect on the planet. Very, very few planets will “luck out,” as we barely did.
Our luck in the Universe: A long-lived, low-UV star
There are hopes among scientists, engineers, and policymakers of at least 25 nations on Earth to create controlled nuclear fusion on said Earth for power production. To make the fusion fast enough, faster than the slow rate in the Sun, extreme temperatures around 100 million kelvin will be created in the ITER project, the International Thermonuclear Experimental Reactor. It can’t fuse protons at that “wimpy” temperature. It will use deuterium and tritium.
The project, sited in Saint-Paul-lès-Durance, France (nice choice – wine country), will use special fuel, partly created in fission reactors (so, it's not an independent source of energy): tritium along with deuterium. These two isotopes of common hydrogen, with two and one neutrons added to a proton, respectively, fuse more easily. So, the project is one more step toward fusion power on Earth. There are several other fusion reactor initiatives from private companies. My own take on it is that capture of regular solar energy, directly with photovoltaic panels and indirectly with wind turbines, is far more economical and manageable. It's a very big next step to fusing even deuterium, which is abundantly available from water, where its concentration is about 0.026% by mass. Moreover, deuterium fusion generates energetic neutrons, carrying much of the total energy released. The neutrons are eventually captured by nuclei of the reactor vessel, and a number of the resulting nuclei are radioactive.
We can't leave star temperatures without noting the anomaly of the extremely high temperature of the wispy corona surrounding the Sun. We can see the corona only during a solar eclipse that blocks out the main body of the Sun. In the nineteenth century, scientists observed a spectral line at 530.3 nanometers in the corona. This line had never been seen before, so it was assumed to come from a new element found in the corona, quickly named coronium. It was not until some 60 years later that astronomers discovered that this emission was in fact due to highly ionized iron – iron with 13 of its electrons stripped off. This is how we first discovered that the Sun's outer atmosphere has a temperature of more than a million degrees.
My wife Dr. Lou Ellen Kay and I saw the corona in Kenya during the 1980 eclipse; it's memorable, as is the "diamond ring," where the last bit of direct sunlight peeks through some rough mountains on the Moon. She got a great photo of that. The corona has a mean temperature in the millions of degrees, so much hotter than the surface beneath it. It's not heated by radiation from the main body: the laws of optics and thermodynamics disallow a hot body heating any other body to a temperature higher than its own. (If you focus the Sun's image with a lens, you get a very hot spot, but it will never exceed the temperature of the Sun's surface. It won't even get close, given losses of heat by conduction and other routes.) The corona is heated by magnetic interactions that are still being explored. We don't count the corona's radiation in the energy we receive at Earth because the corona is so thin.
By Solar Wind Sherpas
We have yet to look at how the temperature of a planet orbiting a star depends on not only the star but the orbital distance of the planet, its rotation, axial tilt, surface properties, and greenhouse effect. That story is long and, I trust, very engaging, and comes in the next big section. For the moment, let's take some steps down in temperature. The hottest thing near us is the solar core at 15.7 million kelvin, a temperature that slowly drives rather fantastic nuclear fusion. In equivalent thermal energy, that's about 2000 electron-volts, eV – a tiny input to unleash fusion reactions that liberate 27 MeV of energy. The energy is ramped down by collisions among radiation and nuclei to the more modest temperature of the Sun's surface at 5800K. That's equivalent to 0.75 eV; individual photons of visible light emitted at this temperature range in energy from 1.8 eV to 3.1 eV. That's enough energy to break some chemical bonds. We rely on this for the photosynthesis that starts our food chain; in good part we also rely on it for making vitamin D beneath our skin from cholesterol (one more reason that having cholesterol is an absolute necessity for us). We have also evolved a number of physiological defenses (and optical ones, too – dark skin, tanned skin) against the potential damage to our cells from the 13% of the Sun's energy in the shorter-wavelength, more energetic ultraviolet. Our typical bodily processes poke along at thermal energies of about 300K, equivalent to 1/25 eV. Good; we don't want a lot of potentially disruptive higher-energy processes running around. We have enough trouble with the side effects of our more energetic metabolic reactions that create, for example, active oxygen species. These drive our purchases of antioxidants, though our body has a wealth of its own protective mechanisms. One of these is the enzyme catalase that we share all through the evolutionary tree; one molecule of it can destroy a million molecules of hydrogen peroxide.
In brief, our physiological processes are at a low energy scale, though ultimately fueled by vastly higher-energy processes that cascade down, fusion to thermal energy to light to biochemical processes that tap part of the light’s energy. We can only be near the less energetic processes.
We haven't paid much attention to bigger, hotter stars, for they make useless companions to habitable planets. They last too short a time and they radiate too much ultraviolet light. Still, they are interesting in their own right. Their surface temperatures reach stunning levels. A star familiar to us all is Sirius, the dog star (appearing in conjunction with the Sun during the dog days of summer). It has a surface temperature, Ts, of 9,940K. Its output peaks in the blue, so it looks blue to us. (It's also a binary star; its harder-to-see companion is a white dwarf.) Rigel in the eminently recognizable constellation Orion has a temperature of 11,000K, even bluer. It's also quite massive at 18 times the mass of the Sun, and it's 40,000 times more luminous than the Sun. Bellatrix in Orion is at a Ts of 21,000K. The blue supergiant Eta Carinae has a diameter that may be 180 times the Sun's and a Ts of up to 40,000K. None of these will last long! Oh, we can't really slight the much smaller stars. After all, there was hoopla about our nearest neighbor, Proxima Centauri. It's a red dwarf, cool as a stellar cucumber at a Ts of 3042K. It was touted as having a potentially habitable planet (in limited and totally unstable equable zones), but there are many reasons that it can't be habitable, including massive stellar flares and tidal locking of the planet. We have a ways to go to specify what's habitable; hang on.
Artist’s conception: European Space Observatory, M. Kornmesser
Currently, every second the Sun fuses 620 million metric tons (tonnes) of hydrogen into 616 million tonnes of helium. The lost 4.34 million tonnes of mass is pure energy that eventually makes its way to the surface as particles of electromagnetic energy, or photons of UV, visible, and infrared light. We may plug the value of the speed of light, c = 300,000 km per second, into Einstein's famous formula, E = mc^2, to compute that power (energy per unit time) as 3.9×10^26 joules per second, or watts. (If you're unfamiliar with exponential notation, the power "26" means "times 10 to the 26th power," or 1 followed by 26 zeroes.) That's 3.9×10^23 hair-dryer units (1000 watts each) – that is, 390 sextillion hair dryers. This energy is spread out quite uniformly over the "surface" of the Sun, at 64.2 million watts per square meter. "Surface" is in quotes because the Sun is a gaseous ball with no solid surface. It's generally defined as the radius from the center at which the gas becomes opaque to visible light, which occurs a few hundred kilometers from the edge. By the time the solar radiant energy reaches the orbit of Earth, it's diluted by spreading out over a greater area, by a factor of about 46,400 (150 million km / 0.696 million km, then squared). It's a manageable 1380 watts per square meter, to the comfort of us humans and our fellow living organisms.
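The mass-to-energy arithmetic can be verified in a few lines, using E = mc^2 with the 4.34-million-tonne-per-second figure above:

```python
# Mass converted to energy in the Sun each second, via E = m c^2.
c = 2.998e8            # speed of light, m/s
dm = 4.34e9            # mass converted per second, kg (4.34 million tonnes)

power = dm * c ** 2    # solar luminosity, watts
print(f"luminosity: {power:.2e} W")          # ~3.9e26 W
print(f"hair dryers (1 kW each): {power / 1e3:.1e}")
```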
Long lifetime: Despite its prodigious energy output, the Sun has kept going for about 4.6 billion years and has much fusible hydrogen left. It will last about 5 billion years more before expanding into a red giant. It will start to use even more kinds of fusion reactions, "burning" helium to carbon, growing in size, becoming much brighter, and engulfing the inner planets, perhaps even Earth. There are many good descriptions of the late evolution of the Sun, gleaned from a welter of observations of other stars in various life stages. These descriptions derive from a detailed knowledge of nuclear reactions and exquisite models of the fluid dynamics and gravitational structuring of the Sun. For the moment, the Sun is only brightening modestly, about 1% per 100 million years.
Our own metabolism is far faster than that of the Sun: While the energy release in the Sun seems intense, on average it's quite low. Divide the energy output of the Sun by its mass, 1.99×10^30 kg, and you get a figure of about 0.2 milliwatts per kilogram. Someone characterized this as (far!) less than the metabolic rate of a reptile or a compost heap. A good thing, so that the Sun will last a long time. The low rate is also an expression of the difficulty of the fusion reactions. As described earlier, even enormous temperatures are barely enough to push protons close together, and the weak force involved is, well, so weak. Back to us: even at rest, our metabolism by basic biochemical reactions is over 1 watt per kilogram; we're close to being a 100 W lamp at our "traditional" adult male mass of 70 kg. That's about 5,000 times the rate per mass for the Sun. At the Sun's slow burn rate, we get a long time for life on Earth, long enough to have evolved complex life that includes us (not so good for other life now, which we are extinguishing at a phenomenal rate on the geological time scale). Some people, Elon Musk included, think that another 500 million years for life while the Sun is still very "nice" for life is not long enough, but I find that silly – we can only expect another few million years as a species, at best. We got a good deal. Stars that are hotter and brighter than the Sun don't last nearly as long. Superhot Bellatrix is estimated to be only 20-25 million years old, with only a few million years left before it may become a supernova. Stars that are cooler and dimmer than the Sun last a very long time, but red dwarfs, for example, flare up, to the elimination of any possible life forms on nearby planets. Stars the size of the Sun or a little smaller are ideal.
Our luck in the Universe:
Overall: Sun-sized stars or a bit smaller are best – on the basis of nice radiant temperature, long life, and stable output. We humans had to learn a lot about how stars work to conclude this. I hope the story proved interesting. There's still a good word for big, explosive stars. They did their trick some light-years away from our location and provided us with all the elements heavier than lithium, so we could have rocks, teeth, trees, and all sorts of things. We owe the heavier elements to the r-process in stars. The "r" stands for "rapid," describing a melee of neutrons being captured by smaller atomic nuclei, adding neutrons faster than the newly made nuclei can decay. This can make elements heavier than iron, which is the most stable nuclide; energy has to be added to iron to make heavier elements. Thus we get the elements on which our lives depend – the cobalt in our vitamin B12, the iodine in our thyroid hormones, the selenium in key proteins, the zinc in various enzymes. The melee comes in explosions of massive stars, the supernovae, and in the cataclysmic mergers of neutron stars.
Elements “lighting up” from high temperatures in a supernova. NASA, Chandra X-Ray Observatory
You might recall that such a merger occurring 130 million light-years away made gravitational waves stretching space, waves that were captured by the Laser Interferometer Gravitational-Wave Observatory, or LIGO, detectors. (Forty-seven more cataclysms have now been recorded!) The massive interferometers were built on the premise that Einstein's general relativity is true (all evidence says so!) and that huge masses colliding can cause detectable ripples in the shape of space from hundreds of millions of light-years away. Recently (2019) Darach Watson and colleagues added a further chapter to the story. They aimed the Very Large Telescope (yep, that's its name) of the European Southern Observatory at a place that LIGO data identified as a neutron star merger. They detected brand-new, hot atoms of strontium, a periodic-table relative of calcium, formed in the r-process. This is the first direct evidence that neutron stars are really made of neutron-rich matter and that they make our heavy elements via the r-process. OK, they had to use big models of the process, but those are based on really strong data, too.
For the story of the big stars there are many reliable sources of information. The whole landscape of stars is revealingly plotted by luminosity vs. surface temperature in the Hertzsprung-Russell diagram.
R. Hollow, Commonwealth Scientific and Industrial Research Organisation
The stars we’d like to be near are on the Main Sequence toward the lower right. The bombshells that made our heavier elements are, of course, no longer on the chart. Their predecessors are the massive stars on the Main Sequence at upper left and then in the clusters of giants and super-giants.
Temperature on the planet
Now that we have the behavior of stars reasonably in hand and we know the kinds of stars that might suit life, we need to look at the surface of the planet, overall and in its variability.
Estimating the temperature regimes for organisms on distant planets is strongly limited by our lack of knowledge of what geological structures exist on them and what organisms might live there. Earlier, we covered the temperatures that life might tolerate, as well as ways to hide from or to survive extremes. With a diversity of physiological, behavioral, and developmental acclimations enabling organisms to survive and prosper in environments that may reach temperatures that are extreme in our view, it's not possible to set hard-and-fast limits for the livable range, even for familiar Earthly life. That said, there are regimes of temperature in space and time that are real deal-breakers for life, and many more regimes that seem to militate against abundant life and, especially, multicellular life and the subset of life that's intelligent. Temperatures should not stay extreme for times longer than the slowest generation cycle of organisms. Orbital physics enters here. À la the Drake equation, a planet can't be so near or so far from its star that it is permanently hot or cold. That's a given.
Not only means but extremes in time and in space matter, as noted earlier.
A central constraint on figuring out planetary conditions: The final balance of energy is all radiative
Basically, the energy balance is all radiative on a "nice" planet by a "nice" star – stellar (solar) energy in, heat radiation out, and very little or no net energy gain or loss. It's essentially all electromagnetic radiation. Another, much smaller contributor is a flow of particles, the solar wind and occasional flares or coronal mass ejections. Made up primarily of ionized atoms of hydrogen and a few other chemical elements, these carry very little energy but can circulate in Earth's magnetic field, disrupting satellite communications while also generating beautiful, colorful aurorae near the poles. The electromagnetic radiation coming in from the central star to a planet includes visible light, clearly, along with (to us) invisible infrared and ultraviolet radiation. There are some minor contributors. One such is thermal radiation of very long wavelength and low energy, but its contribution is tiny from our Sun to Earth, as we'll see in working out the whole pattern of radiation of a hot body.
Sources of energy on Earth other than sunlight make very little difference, directly. Consider the nice, steady geothermal heat flow from the interior of the planet. We'll get to its interesting origins in four processes and its larger significance for plate tectonics later. For now, it suffices to note that it averages 0.06 watts per square meter. That's compared to an average of 239 W m^-2 for solar energy that we calculate just below. For another, there is the heat from volcanic eruptions. This has been measured from satellites using their thermal sensors. In average years (e.g., 2001 and 2002, in a report in the journal Nature), the heat liberated by all 45 active volcanoes was 5×10^16 joules per year, only about 1/100,000,000th of the solar input; a big blast like Mt. St. Helens liberated over 10^18 joules, less than 1/3,000,000th of the solar energy absorbed by the Earth each year, which is about 3.7×10^24 joules! Even the massive eruptions of the Deccan Traps in India 66 million years ago and of the Siberian Traps in Russia 250-251 million years ago liberated a minor fraction of the solar input, in part because they were spread out over a million years or more. Currently volcanoes are more significant for cooling than for heating. The ejecta they toss into the atmosphere reflect sunlight and reduce Earth's net solar energy capture. The most dramatic case in recent times was the 1815 eruption of Tambora in Indonesia. The Earth overall cooled by over 0.5°C, leading to the Year Without a Summer in Europe in 1816 and the accompanying crop failures and food shortages. Some regions had frosts in every month. The human use of fossil fuels liberates a lot of energy, but it's also small, less than 7×10^20 J over a year, as tabulated by the International Energy Agency. That's over 10,000 times as much as volcanoes but still only about 1/5,000th of the solar energy. What's more significant is the trapping of heat by the carbon dioxide released (ditto for volcanoes over millions of years).
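The relative magnitudes quoted above can be checked in a few lines, using the rough annual figures from the text:

```python
# Rough annual energy-budget comparisons (orders of magnitude only),
# values as quoted in the text.
solar_absorbed = 3.7e24   # J/yr, solar energy absorbed by Earth
volcanoes      = 5e16     # J/yr, all active volcanoes in an average year
fossil_fuels   = 7e20     # J/yr, human fossil-fuel use (IEA figure)

print(f"volcanoes / solar:    1/{solar_absorbed / volcanoes:,.0f}")
print(f"fossil fuels / solar: 1/{solar_absorbed / fossil_fuels:,.0f}")
print(f"fossil / volcanoes:   {fossil_fuels / volcanoes:,.0f}x")
```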
I have a calculation of the "heat duration" as the rise in temperature of the air, times the heat capacity, times the mean duration of the temperature rise. For fuel use resulting in CO2 emissions, the trapped heat duration is hundreds of times the liberated heat duration. That raises the human contribution to the order of a percent of solar energy… and it's why we're warming the Earth consistently.
The measure of immediate interest is the energy reaching the planet per unit area of the planet per unit time. We call it the energy flux density; I’ll use the symbol F as a reminder that it’s a flux. There are a few basic facts:
- The flux density at the surface of the star is set by its temperature. The star acts like a perfect radiator of energy, which, counterintuitive as it may seem, we call a black body. A black body absorbs all radiation falling on it but, by the principle of microscopic reversibility in physics, it also emits all the radiation possible at its temperature. Josef Stefan and Ludwig Boltzmann, in the span 1879-1884, discovered the law that F = σT^4 – that is, the flux density is proportional to the fourth power of the absolute temperature, T (the kelvin scale noted earlier). So, a star with twice the surface temperature of the Sun would have 2^4 = 16 times the flux density. The Stefan-Boltzmann constant σ has the value 5.67×10^-8 watts per square meter per kelvin to the fourth power. For a star like our Sun, with T = 5800K at its surface, that enormous energy flux density is 64 million watts per square meter. That's 46,000 times stronger than it is at Earth's orbit and the equivalent of about 64,000 hair dryers (1000 watts each) per square meter.
- The size of the star, measured by its radius, a, matters. The total power output of the star is its power per unit surface area multiplied by its surface area, which is 4πa^2. So, the total power output of the star is σT^4 · 4πa^2. That is, between stars of different surface temperatures and radii, it varies as T^4 a^2.
- At any distance, R, from the center of the star, the flux density decreases as 1/R^2.
This is easily understood by considering a notional sphere at radius R. The sphere has an area 4πR^2. The total power crossing the sphere is 4πR^2 F(R), where F(R) is the flux density spread out all across that sphere at radius R. Now move to any other radial distance, R′. The same total power crosses a similar sphere there – there's no generation or absorption of energy between the two distances (in free space). Then 4πR^2 F(R) = 4πR′^2 F(R′), so that F(R′) = F(R) (R/R′)^2.
This is the classic one-over-R-squared law for the falloff of flux density with distance. With R = radius of the Sun (0.696 million km) and R′ = mean radius of the Earth's orbit (150 million km), the flux density drops by a factor of (0.696/150)^2, or about 1/46,400 – from 64 million watts per square meter at the Sun's surface to about 1380 watts per square meter at Earth's orbit.
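Putting the Stefan-Boltzmann law and the inverse-square dilution together recovers the flux at Earth's orbit; a minimal sketch:

```python
# Stefan-Boltzmann flux at the Sun's surface, then inverse-square
# dilution out to Earth's orbit: F(R') = F(R) * (R/R')^2.
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_flux(T):
    """F = sigma * T^4 for a blackbody at temperature T (kelvin)."""
    return SIGMA * T ** 4

F_sun = surface_flux(5800)         # ~6.4e7 W/m^2 at the Sun's surface
R_sun = 0.696e6                    # solar radius, km
R_orbit = 150e6                    # Earth's mean orbital radius, km

dilution = (R_sun / R_orbit) ** 2
print(f"surface flux:    {F_sun:.2e} W/m^2")
print(f"dilution factor: 1/{1 / dilution:,.0f}")          # ~1/46,400
print(f"flux at 1 au:    {F_sun * dilution:.0f} W/m^2")   # ~1380
```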
If we want a planet with an energy flux density that's the same as for Earth (so that it has about the same temperature), we want the total power of the star, spread out at the planet's orbital distance, Rorbit, to give the same flux density as the Earth receives: σT^4 (a/Rorbit)^2 ≈ 1371 W m^-2.
There are multiple solutions. Pick a star size and thus, for a main-sequence star, its temperature. That gives us the total power, σT^4 · 4πa^2. Then there is a choice of orbital distance, Rorbit, that will make the two sides match. One can look for a star like the Sun and a planet at the same distance as the Earth is from the Sun; a hotter star (that is, a bigger star) and a more distant planet; or a cooler star (a smaller star) and a closer planet.
One of the last cases is the red dwarf star Proxima Centauri, with a temperature of 3042K and radius 1/7 as large as the Sun’s, and a planet much closer, at only 0.04 times the Earth-Sun distance (the au = astronomical unit).
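One can solve this matching condition for the orbital distance directly. A small sketch (the function name is mine; 1371 W/m^2 is taken as the target flux):

```python
# Solve sigma * T^4 * (a / R)^2 = F_E for the orbital distance R.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
F_E = 1371.0          # W/m^2, flux at Earth's orbit
AU = 1.496e11         # astronomical unit, m
R_SUN = 6.96e8        # solar radius, m

def earthlike_orbit_au(T, a):
    """Distance (in au) at which a star of surface temperature T (K)
    and radius a (m) delivers an Earth-like flux."""
    return a * (SIGMA * T ** 4 / F_E) ** 0.5 / AU

print(f"Sun (5800 K):          {earthlike_orbit_au(5800, R_SUN):.2f} au")      # ~1 au
print(f"Proxima (3042 K, R/7): {earthlike_orbit_au(3042, R_SUN / 7):.3f} au")  # ~0.04 au
```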
More of our luck in the Universe: We're in an admittedly broad range of star size × orbital distance combinations
Reaching Earth's orbit above the top of the atmosphere, the radiant energy from our Sun, taken "face-on," averages 1371 watts per square meter – in familiar terms, about one and a third hair-dryer units (a typical hair dryer puts out about a kilowatt). A sizeable part, averaging 29%, is reflected to space and lost forever, mostly where clouds intervene, covering half to two-thirds of the surface at any one time. This brings the average absorbed energy to about 970 watts per square meter over the Earth's frontal area, which is πr^2, where r is the radius of the Earth. The Earth rotates, so the incoming solar energy is spread over its total surface area, 4πr^2, bringing the average down to ¼ as much, about 240 watts per square meter (the low setting on the hair dryer).
Planets other than Earth vary in the fraction of solar radiation that they absorb (see below, shortly). In our Solar System, Mercury absorbs 91%, Venus only 24% (though it makes up for that with an enormous greenhouse effect). Of course, we can go back to the Ice Ages, in which the reflectivity of the Earth increased, providing direct cooling; the lowered temperature also reduced the water vapor in the air and thus the greenhouse effect. We can also go back to the eras of Snowball Earth that almost permanently froze the Earth; we were saved by plate tectonics and volcanism supersizing our greenhouse effect. In stellar systems outside ours, the range is likely to be as large, and as susceptible to being pushed past limits of temperature that may make habitability irretrievable.
Radiation leaving Earth after solar radiation is absorbed is thermal radiation, which is emitted by all objects with a very strong fourth-power dependence on temperature. Some of the exiting thermal infrared, or TIR, comes directly from the surface, while some is radiated by clouds and the atmosphere that captured some surface TIR on its way out. Barring a continued warming trend of the surface (even our current catastrophic warming is small in absolute terms), the total thermal radiation leaving is also very close to 239 watts per square meter. In calculations below, as well as in direct measures from satellites, this corresponds to a temperature of -18°C (0°F). The surface is, of course, 33°C warmer on average, at +15°C or 59°F… same as the annual average temperature in Las Cruces, New Mexico, where I’m writing this. The fascinating greenhouse effect is the origin of the surface warming.
We can calculate the radiative temperature of all the planets in our Solar System. We need to know their distance from the Sun and the fraction of solar energy that they absorb. A little simplification is in order, just to get magnitudes and trends. I use the mean distances from the Sun, while referring to the earlier discussion about elliptical orbits so that steady-state temperatures vary as 1 over the square root of the distance. It’s easy to find literature sources that give the range of distances for every planet. I need to use the measured values of the fraction of solar energy absorption, e.g., 71% or 0.71 for Earth. Both Earthbound and spacecraft observations provide such information, using very advanced sensor technology, calibration, and exquisitely choreographed flybys for the spacecraft – yes, it really is rocket science. Shortly, we’ll get into the patterns in which solar energy is distributed across wavelengths (colors of visible light, with no analog in the infrared). For the current purpose, I just cite that the measurements of absorption were done for many bands of wavelength to get a total fractional absorption of energy. Finally, temperature clearly varies across the surface of any planet, as we appreciate so clearly on Earth. The calculations below are grand averages over surface area.
The calculation is straightforward:
(1) Calculate the energy flux density, F, at the planet. A nice way to make comparisons is relative to Earth. Cite each planet's mean distance from the Sun as a multiple of the Earth's distance from the Sun – i.e., cite distances, d, in astronomical units: 0.387 for Mercury, 1.52 for Mars, and so on. Then F = (1371 W m^-2)/d^2.
Example: for Mars, at 1.52 times as far from the Sun as is the Earth, the reduction factor is 1/1.52^2 = 0.43, so the flux density at the planet averages only 593 W m^-2.
(2) Multiply the result by the fraction of solar energy absorbed at the planet, a: Fabs = aF.
Again for Mars, a is 75% or 0.75; Fabs is then 445 W m^-2.
(3) Divide by 4, to account for the energy being spread over the 4πr^2 area of the whole planet's surface while only the cross-section πr^2 faces the Sun instantaneously to absorb energy. Now Mars is down to 111 W m^-2. At least it's a nicely rotating planet, averaging out the heat load reasonably well.
(4) Calculate the blackbody temperature (T_TOA, at the Top Of the Atmosphere) that will radiate back to space all that energy: T_TOA = (F/σ)^(1/4), using the flux from step (3).
For Mars, the mean T_TOA is 210 K, which is -63°C. Bring a jacket! Of course, there are hotter (and colder) areas on Mars. I cite the best measurements that have been made, below, where they are available. The seasonal variations in T_TOA on Mars are huge, given that the distance from the Sun varies by a factor of 1.66/1.38 = 1.20. That drives a variation in temperature by a factor of the square root of that, 1.096. That 9.6% variation is 20K!
I give the calculations, which nicely match the measurements, readily done in a simple spreadsheet using the final formula T_TOA = [(1371 W m^-2) a / (4σd^2)]^(1/4).
The results are
Note: (+internal) indicates that the planet has a significant internal heat source.
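The four steps can be collected into one small function. A sketch using distances and absorbed fractions quoted in the text (Earth comes out near 256 K here, close to the −18°C figure, given rounding in the inputs):

```python
# Mean top-of-atmosphere radiative temperature, following the four steps:
# F = 1371/d^2; multiply by the fraction absorbed; divide by 4;
# invert F = sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_toa(d_au, absorbed):
    """d_au: mean distance from the Sun in au; absorbed: fraction of
    solar energy absorbed. Returns the blackbody temperature in kelvin."""
    f = 1371.0 / d_au ** 2 * absorbed / 4.0
    return (f / SIGMA) ** 0.25

for name, d, a in [("Mercury", 0.387, 0.91),
                   ("Earth", 1.0, 0.71),
                   ("Mars", 1.52, 0.75)]:
    print(f"{name:8s} {t_toa(d, a):5.0f} K")
```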
In an Appendix I provide more details of the temperatures of planets around other stars.
Big, hot stars
We'll reject even stars that are only modestly hotter and larger than the Sun.
First, they have notably shorter lives, as noted earlier. Evolution of complex life forms took a very long time on Earth. We had only bacteria for more than 80% of the time that the Earth has existed. While bacterial life may have started earlier than 1 billion years after the Earth formed, it took:
- A yet unknown time to evolve from the "RNA world" to what we might call efficient life, with diverse proteins for structure and for carefully controlled catalysis of biochemical reactions; with DNA as genetic material for faithful reproduction of organisms; and with RNA as an intermediate for transcribing DNA into RNA messages and then translating the RNA into protein. (There are lots of elaborations on this theme, but this is the core: DNA->RNA->protein.) There's much evidence for an initial world with RNA as both genetic material and catalytic molecules, or ribozymes. The pace of metabolism must have been notably slower, and the genetic coding was much less reliable; modern organisms reproduce their DNA with far, far fewer genetic changes (mistakes, usually) than does one echo of the RNA world, the RNA viruses;
- About 2.3 billion years, until about 2.2 billion years ago, for photosynthesis that generated oxygen to evolve. Oxygen was critical for the later development of metabolically very active life, such as ourselves, cheetahs, moles, grasses, trees, and much more. The metabolism of chemical “fuels,” the carbohydrates and fats in particular, without oxygen yields little useful energy per cycle of consumption and re-formation by photosynthesis. Any textbook in physiology or biochemistry notes that fermentation of a molecule of the universal sugar, glucose, yields some chemical reductant and only 2 ATP molecules. The reactions of oxidative phosphorylation that proceed beyond this step yield another 34 molecules of ATP. When we exercise rapidly, before our ox-phos kicks in, we tire quickly;
- About 2.5 billion years, until about 2 billion years ago, to get the next major step, eukaryotic cells. In a spectacular world-changing event or events, cells merged the very different physiologies of “regular” bacteria, or Eubacteria, and of Archaebacteria. These cells, in us and all of what we see without a microscope, have internal compartments separating parts of the cell – a nucleus for the genetic material, specialized organelles for energy metabolism, and the beginning of complex signaling and association among cells;
- Nearly 4 billion years, until about 635 million years ago, to get multicellular life, in the Ediacaran Period. This led to all that we can see without a microscope! Fungi, plants, and animals were diverging from common ancestors;
- About 4 billion years, to about 540 million years ago, for the explosion of many different life forms in the Cambrian era. All modern phyla of animals (body plans) originated at this time; evolution of the various forms of plants took a bit longer, mostly awaiting the colonization of dry land;
- Another 40 or so million years to get vertebrate animals with nervous systems;
- More time, to 265 million years ago, to get mammals;
- Another 200 million years, to 65 million years ago, to get the primates, our group shared with chimpanzees, gorillas, and orang-utans but also lemurs and tree shrews;
- Another 58 million years, to 7 million years ago, to get hominids – proto-humans, gorillas, chimpanzees, and orang-utans;
- Another 5 million years to get to the genus Homo, with bipedal movement and rapid gains in brain size;
- Another 2 million years, to 200,000 years ago, to get anatomically modern humans;
- Until only about 10,000 years ago to get civilization and agriculture, and 250 years ago for the Industrial Revolution.
So, evolution sped up markedly as we approached modern times, but it needed a very long spin-up to get there. The seemingly slow initial pace was treated humorously in the book Poodles from Hell, by Mick Stevens and Charles Monagan.
Second, hot stars have too much ultraviolet light. Stars as nearly ideal black bodies emit electromagnetic radiation in a pattern governed by the Planck equation presented earlier. The equation describes the portion of radiation in any range of wavelengths. Two ranges or wavebands coming from a central star to a planet are particularly relevant to living organisms. One is the visible range, about 400 to 700 nanometers (nm). More properly we must consider the band that supports photosynthesis. That’s 400 to about 850 nm on Earth, with the range longer than 700 nm relegated to populations of bacteria that are much less abundant than are green plants.
The second band is the ultraviolet. The Sun puts out about 13% of its energy as ultraviolet radiation. This gets scattered but, more so, absorbed in the Earth’s atmosphere, primarily by ozone in the stratosphere. The flux density of UV at the surface depends on the clarity of the sky (clouds and aerosols reduce it), the solar angle (glancing incidence strongly reduces it), and one’s location relative to the polar ozone “holes.” The highest levels are on the order of 2 watts per square meter (W m⁻²), which is about 0.15% of the Sun’s total electromagnetic radiation of 1371 W m⁻².
Ozone, O3, is a bent molecule and a rather unstable one. It forms via reactions of oxygen with UV radiation. It has a positive heat (enthalpy) of formation, meaning that energy must be put into O2 as a starting material to make it. Ozone is both formed by UV radiation breaking up ordinary O2 molecules and destroyed by UV itself. There are 4 main chemical reactions coupled to each other that comprise a closed cycle, the Chapman cycle. This leads to an approximate steady state amount of ozone in the atmosphere. There are “leaks” to the cycle from the presence of natural and anthropogenic compounds, particularly nitrous oxide, N2O, and halogens, chlorine and bromine. Ozone is most abundant in the stratosphere where the air is thin. Concentrated and brought to surface air pressure, it would be a layer only 3 mm thick!
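That 3-mm figure can be checked with standard Dobson-unit bookkeeping. The sketch below assumes a representative global-mean ozone column of 300 Dobson units (real columns vary from roughly 220 to 500 DU):

```python
# Sketch: how thick would the ozone layer be, compressed to surface
# conditions? Assumes a typical global-mean column of 300 Dobson units.

BOLTZMANN = 1.380649e-23   # J/K
P_STP = 101325.0           # Pa, standard surface pressure
T_STP = 273.15             # K

def ozone_layer_thickness_mm(dobson_units: float) -> float:
    """Ozone column in Dobson units -> equivalent layer thickness (mm)
    at standard temperature and pressure. By definition, 1 DU is a
    0.01-mm layer of pure O3 at STP."""
    return dobson_units * 0.01

def column_density(dobson_units: float) -> float:
    """The same column expressed as molecules per square meter."""
    thickness_m = ozone_layer_thickness_mm(dobson_units) / 1000.0
    n_per_m3 = P_STP / (BOLTZMANN * T_STP)   # ideal-gas number density at STP
    return thickness_m * n_per_m3

print(ozone_layer_thickness_mm(300))   # 3.0 mm, matching the text
print(f"{column_density(300):.2e}")    # ~8e22 molecules per m^2
```

The 300-DU figure is an assumed round number for illustration, not a value from the text.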
Living organisms on Earth are prone to UV-driven photochemical reactions, some deleterious and some useful. Ultraviolet in the range of wavelengths 100 to 315 nm causes DNA damage, especially UVC at 100 to 280 nm. Surface organisms have biochemical DNA-repair mechanisms that are effective, but incompletely so. In humans, cumulative long-term skin damage and cancers result. In the shorter term, UV destroys folic acid (which all eukaryotic organisms, from yeast to humans, require for cell division) and vitamin A. Dark skin coloration with melanin offers a good measure of protection; hence it contributed to the evolution of dark skin in sunny areas of the globe. Other animals have pigments and coats such as fur that similarly offer protection, while sunlight avoidance is another common behavioral adaptation. Plants’ leaves contain flavonoids as additional UV screens. The other side of our love-hate relationship with UV is that absorption in blood vessels near the skin’s surface creates vitamin D from cholesterol (a plus side of cholesterol as well as of UV).
Before ordinary oxygen accumulated to a major extent in the Earth’s atmosphere, life at the surface in any niche remotely exposed to sunlight was not possible. The continents were bereft of macroscopic life until about 450 million years ago. Moving on to consider exoplanets for their habitability, we may ask if ozone is the only likely UV shield. Its formation depended upon a long period of O2 production by photosynthetic bacteria and then protists (e.g., algae) in the oceans. If we accept the arguments that highly metabolically active life depends on oxygen and that oxygen derives from water-based chemistry of both the planet and its life forms, then ozone is the primary candidate for a UV shield. A hydrocarbon haze (noted earlier) in a methane-shrouded early Earth or an exoplanet might work as a shield, but that whole chemistry would not give us highly metabolically active organisms. Other researchers seem to accept the argument. There have been direct searches for ozone in the atmospheres of exoplanets. Ozone was found around a massive “super-Jupiter,” but that planet is blazingly hot. There have also been models of planetary atmospheres with assumed chemical compositions and a toolkit of photochemical reaction mechanisms (Howard Chen and colleagues, Astrophysical Journal, 2019; 886 (1)).
The effective depth of the ozone layer appears to be a weak function of the UV radiation level of the star hosting the planet. So, a solution for habitability of a planet around a hot, high-UV star might be to stand off at a greater orbital distance. Consider a star that’s 10% hotter than the Sun. It has 46% more total energy output than the Sun (1.1⁴ = 1.46). All other things equal, to get an Earthlike temperature the planet would have to be √1.46 ≈ 1.21 times as far away from its star as is the Earth from the Sun, to get the same average interception of radiant energy. However, the fraction of the star’s radiant energy emitted in the UV is 55% higher. That’s not a deal-breaker but it makes the prospects for life a little dimmer (pun intended). Also, the expected lifetime of the star on the stable main-sequence part of its life is 21% shorter, giving less time for evolution. A truly hot star such as Sirius at 9,940 K is out of the question, both for its extreme UV flux and its short lifetime.
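The scalings just quoted can be checked numerically. This sketch treats both stars as black bodies of the Sun’s radius (a simplification; hotter main-sequence stars are also somewhat larger, which strengthens the conclusion). The exact UV enhancement depends on the waveband chosen, so the code only confirms the direction and rough size of that effect, not the precise 55% figure:

```python
import math

T_SUN = 5772.0   # K, effective temperature of the Sun

def luminosity_ratio(t_ratio: float) -> float:
    # Stefan-Boltzmann: output per unit area scales as T^4
    return t_ratio ** 4

def habitable_distance_ratio(t_ratio: float) -> float:
    # Intercepted flux falls off as 1/r^2, so distance scales as sqrt(L)
    return math.sqrt(luminosity_ratio(t_ratio))

def planck(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance (arbitrary normalization) from the Planck law."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    x = h * c / (wavelength_m * k * temp_k)
    return 1.0 / (wavelength_m ** 5 * (math.exp(x) - 1.0))

def uv_fraction(temp_k: float, uv_max_nm: float = 400.0) -> float:
    """Fraction of total emission shortward of uv_max_nm, by crude
    numerical integration (adequate for this rough argument)."""
    uv = sum(planck(w * 1e-9, temp_k) for w in range(10, int(uv_max_nm)))
    total = sum(planck(w * 1e-9, temp_k) for w in range(10, 100000, 10)) * 10
    return uv / total

print(round(luminosity_ratio(1.10), 2))          # ~1.46: 46% more output
print(round(habitable_distance_ratio(1.10), 2))  # ~1.21: stand off 21% farther
print(round(uv_fraction(1.10 * T_SUN) / uv_fraction(T_SUN), 2))
# The UV share of the output rises appreciably for the hotter star
```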
We’ll accept stars cooler and smaller than the Sun, though not a red dwarf such as Proxima Centauri
First, the planet has to orbit so close that it becomes tidally locked, with one face permanently facing the star. That face fries, the far side of the planet gets about as cold as interstellar space, and all the atmosphere freezes out on that side. In a tight orbit, the strong gravitational distortion or bulge on the planet is so pronounced that the star pulls the planet along in a fixed orientation relative to the star, as if the bulge is a handle; the same face on the planet always faces the star. Our own Moon is tidally locked to Earth, so that, apart from glimpses of the edges caused by wiggles or librations of the Moon, its far side was only seen remotely by spacecraft, beginning with the 7 October 1959 flight of the Soviet spacecraft Luna 3. The Apollo astronauts saw the far side, and in 2019 China’s Chang’e 4 landed there.
Consider a planet orbiting a central star with a mass Ms at an orbital distance ro. We can compute the inward gravitational acceleration (gravitational force per unit mass) experienced by the planet simply, as

g = G Ms / ro²
Here, G is the universal gravitational constant. We’ll consider a substantially circular orbit, not too eccentric, so that ro is nearly constant, as it is for the Earth around the Sun. We can plug in numerical values, but an interesting comparison is this strength relative to the gravitational acceleration we feel from the Sun on Earth (note that this is quite small compared to the gravitational acceleration we feel from Earth’s mass, only 0.06% as strong). Actually, let’s skip to the gradient of the gravitational acceleration, which is the rate of change of the acceleration with distance from the star. This is what causes tides, or tidal bulges, in the water and rock on any planet. The formula is

∂g/∂r = −2 G Ms / ro³
The notation ∂/∂r is from calculus, indicating that the quantity after the symbol is being differentiated, that is, taking the difference between its value at two nearby locations and dividing that by the small difference in locations (Δr). If we’re drawing a curve of g versus r, this is the slope.
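That slope can be sanity-checked numerically. This sketch differentiates g = G Ms/r² by finite differences and compares the result with the analytic gradient −2 G Ms/r³, evaluated for the Sun’s pull at Earth’s orbit:

```python
# Numerical check of the tidal-gradient formula at Earth's orbit.

G = 6.674e-11        # m^3 kg^-1 s^-2, universal gravitational constant
M_SUN = 1.989e30     # kg
R_EARTH = 1.496e11   # m, 1 astronomical unit

def g_accel(r: float) -> float:
    """Inward gravitational acceleration from the Sun at distance r."""
    return G * M_SUN / r ** 2

def gradient_numeric(r: float, dr: float = 1.0e6) -> float:
    # Slope of the g-versus-r curve between two nearby points,
    # exactly the difference-quotient picture described in the text
    return (g_accel(r + dr) - g_accel(r - dr)) / (2 * dr)

analytic = -2 * G * M_SUN / R_EARTH ** 3
print(f"{gradient_numeric(R_EARTH):.3e}")   # numerical slope
print(f"{analytic:.3e}")                    # matches -2*G*Ms/r^3
```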
Proxima Centauri has a stellar mass that’s only 1/8 as much as that of our Sun. The planet Proxima Centauri b (PCb) orbits 24 times closer to its star than does our Earth around the Sun! That’s needed because the star is cool, at 3,042 K. Its total radiative output per unit area of its own surface is just 7.6% that of the Sun’s. Also, the star is small, with a radius only 1/7th that of our Sun, so the area of its disk is only 2% as big as the Sun’s disk. In all, the radiation at any given distance more than a few tens of stellar radii is proportional to the energy flux density times the disk area. For PCb that is 0.076/49 = 0.00155 as much as from the Sun. To get the same total energy interception per area as Earth gets from the Sun, the planet must be at a distance that’s √(1/0.00155) ≈ 25 times closer. PCb orbits close to that, at 24 times closer.
Now we can compare this to the gravitational gradient at Earth from the pull of the Sun:

(∂g/∂r)at PCb / (∂g/∂r)at Earth = (MPC/MSun) × (rEarth/rPCb)³ = (1/8) × 24³ ≈ 1,700
That’s a whopping tide. The “Earthlike” planet Proxima Centauri b is certainly tidally locked.
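The two calculations above can be sketched together, using the round numbers quoted in the text:

```python
import math

# Proxima Centauri and its planet PCb, with the text's round numbers.
M_RATIO = 1.0 / 8.0        # Proxima's mass relative to the Sun
FLUX_PER_AREA = 0.076      # surface output per unit area, relative to the Sun
RADIUS_RATIO = 1.0 / 7.0   # stellar radius relative to the Sun

# Total output relative to the Sun: per-area flux times disk area
luminosity = FLUX_PER_AREA * RADIUS_RATIO ** 2      # ~0.00155

# Distance (in au) at which the planet intercepts Earth-normal flux;
# flux scales as L / r^2, so the distance scales as sqrt(L)
r_flux_au = math.sqrt(luminosity)
print(round(1 / r_flux_au))      # ~25 times closer than Earth

# Tidal (gravity-gradient) strength scales as M / r^3. Compare PCb's
# actual orbit, 24 times closer than Earth's, with the Sun's pull here:
tide_ratio = M_RATIO * 24 ** 3
print(round(tide_ratio))         # 1728: a whopping tide
```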
So, unlike on Earth, the stellar radiation is not distributed across all longitudes, evening out the temperatures. Instead, the region of the planet with its star overhead is very hot. Only at high “latitudes” (near-“polar” regions, where the star stands at a large angle from overhead) are temperatures in the habitable range. Assuming that the surface absorbs the same fraction of radiation as does the average of the Earth’s surface and assuming the same degree of greenhouse warming (33°C), it becomes a straightforward exercise in trigonometry to calculate the “habitable latitudes.” We might define those as where the temperature is (1) just at the freezing point of water, 0°C, which occurs at about 80°, and (2) at 70°C, the temperature of the hot pools at Yellowstone National Park.
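The trigonometric exercise can be sketched as follows, assuming Earth-normal flux (1371 W m⁻²), an Earthlike albedo of 0.3 (my stand-in for “absorbs the same fraction as Earth”), the text’s uniform 33°C greenhouse boost, and no heat redistribution:

```python
import math

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
FLUX = 1371.0        # W m^-2, stellar flux at the planet (Earth-normal)
ALBEDO = 0.3         # assumed Earthlike reflectivity
GREENHOUSE = 33.0    # K, uniform greenhouse warming, as in the text

def local_temperature(theta_deg: float) -> float:
    """Equilibrium surface temperature (K) at angle theta from the
    substellar point, with no heat redistribution: the absorbed flux
    falls off as cos(theta), and T goes as the 1/4 power of it."""
    absorbed = FLUX * (1 - ALBEDO) * math.cos(math.radians(theta_deg))
    return (absorbed / SIGMA) ** 0.25 + GREENHOUSE

def angle_for_temperature(target_k: float) -> float:
    """Invert: the angle at which the local temperature equals target_k."""
    t_substellar_bb = (FLUX * (1 - ALBEDO) / SIGMA) ** 0.25
    cos_theta = ((target_k - GREENHOUSE) / t_substellar_bb) ** 4
    return math.degrees(math.acos(cos_theta))

print(round(local_temperature(0) - 273.15))   # substellar point: ~120 C
print(round(angle_for_temperature(273.15)))   # freezing point near 80 deg
print(round(angle_for_temperature(343.15)))   # 70 C hot-pool limit, ~57 deg
```

With these assumptions, the “habitable ring” spans roughly 57° to 79° from the substellar point, consistent with the text’s ~80° figure for the freezing line.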
This notion fails for several reasons, one posing an insuperable objection: any water in the purportedly habitable zone would be constantly evaporating, as on Earth. It would condense as ice on the cold side of the planet that is not far above the temperature of deep space, near absolute zero. It would be forever unavailable on the warm side.
Second, PC, as we may call it, has two activities that ablate away the planet’s atmosphere. It has a stellar wind that’s estimated as 2,000 times stronger than the solar wind at the Earth. It also has drastic stellar flares, violent outbursts of electromagnetic radiation that have reached at least 68 times the star’s average intensity. These not only toast the planet frequently; they blast away any atmosphere, raise temperatures enormously, and deliver huge doses of ultraviolet radiation that kill by photochemistry.
Our luck in the Universe: We have avoided the life-quashing extremes of star masses
And binary stars pose difficulties
The case against a habitable planet being in orbit around a binary star is rather convincing, though not absolutely ironclad. There are publications pleading the case for such situations (e.g., https://en.wikipedia.org/wiki/Habitability_of_binary_star_systems), but these fail to incorporate all the astronomical, physical, chemical, and biological consequences, I contend.
Most stars are binaries, two stars orbiting each other more or less closely. Our Sun may well have started as a binary, with either it or its partner ejecting the other (maybe that’s quarantine so that other stellar systems aren’t infected with humans, as a New Yorker cartoon has it).
Binaries present some amusing speculations or even fantasies – a sky with two suns, two beautiful sunsets per day (or more), like Magrathea orbiting the twin suns Soulianis and Rahm in the amusing Hitchhiker’s Guide to the Galaxy. Scientifically relevant to habitability are the effect of two suns on repeated variation of radiation received at a planet orbiting a binary system, the possible chaotic orbiting of the planet or even ejection, and the possible effect on the lifetime of the binary itself.
Variability in binary stars’ energy delivery to the planet. Let’s consider a planet orbiting two Sun-like stars (again, Sun-like for long life and only modest ultraviolet radiation production; the case of a Sun-like star and a low-mass star gives almost twice the variation). The closer the two stars are, the lower the variation in flux density at the planet. In the limit of coalescence (which can happen, making a bigger, hotter, shorter-lived star), there is no variation. So, we’ll consider the twins, of equal mass for simplicity, to orbit each other at a separation of two-tenths of an Earth-Sun distance or astronomical unit (au) – 30 million km. The orbit of the stars is assumed to be circular; elliptical orbits tend to go circular for dynamical reasons beyond our scope here. To have the right average temperature, the planet needs to be at a distance that’s √2 times farther out than Earth is from our Sun, or 1.414 astronomical units. That gives the planet the Earth-normal average flux density (twice as much output from the two suns, spread out by a factor 1/(√2)² = ½ by the 1-over-r-squared rule above).
The two stars are on average at the same distance, each providing ½ the Earth-normal flux density. At times (panel ”b”), they are at two quite different distances from the planet, one closest to the planet, the other farthest. That is, one is at a distance 1.414 – 0.1 = 1.314 au, the other is at 1.414 + 0.1 = 1.514 au. The close one contributes a flux density 1/1.314² = 0.579 as much flux as the Earth gets from our Sun. The far one contributes a flux density 1/1.514² = 0.436 as much. The total is 1.015 times as much. The planet is getting warmed above its average temperature. Referring back to the Stefan-Boltzmann law, the fourth power of the absolute temperature, T, is proportional to the intercepted flux density. So, T is proportional to the ¼ power of the intercepted flux density, if the configuration of the suns persists long enough for the temperature to settle down (equilibrate). The projected equilibrium temperature is 1.015^¼ = 1.0037 times higher. For a mean T = 288K, the rise is 1.1°C. That’s no big deal. Furthermore, there’s a leveling of temperature changes: the extra heat input goes into raising the temperature of the water, land, and air. It takes time for the changes to occur because of this thermal inertia. The air temperature changes fastest, as we see every day near midday and midnight on Earth; air has the least heat capacity. It takes on the order of 10 hours. For deep water, as in the oceans, it may take up to a hundred days. We may say that there are several relaxation times to move toward equilibrium.
How fast is the change in position of the two suns? We can use an equation from Newton’s gravity for the orbital period, τ, of a binary star system,

τ = 2π √[ R³ / (G (M₁ + M₂)) ]

Plug in the numbers, with the separation R being 30 million km = 3×10¹⁰ meters and the sum of the masses being twice the mass of our Sun, or 3.978×10³⁰ kg. The period, τ, comes out to about 23 days.
Pretty fast orbital motion! The time between the suns being aligned along the line to the planet and being aligned crosswise to that line is a quarter period, about 139 hours. The air temperature will have substantially equilibrated or relaxed over that time, though not the water temperature.
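The period calculation can be sketched directly from Kepler’s third law, with two solar masses and the 0.2-au separation assumed above:

```python
import math

# Kepler's third law for the binary pair described in the text.
G = 6.674e-11     # m^3 kg^-1 s^-2, universal gravitational constant
M_SUN = 1.989e30  # kg
R = 3.0e10        # m, separation of the two suns (0.2 au)

def binary_period_days(r_m: float, m_total_kg: float) -> float:
    """Orbital period from tau = 2*pi*sqrt(R^3 / (G * M_total))."""
    tau_s = 2 * math.pi * math.sqrt(r_m ** 3 / (G * m_total_kg))
    return tau_s / 86400.0

tau = binary_period_days(R, 2 * M_SUN)
print(round(tau, 1))            # ~23 days for a full orbit
print(round(tau / 4 * 24))      # ~139 hours from aligned to crosswise
```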
If the suns orbit each other at twice the separation, or 0.4 au, the radiation changes are four times greater, for a 6.2% change in flux and a 1.55% change in equilibrium temperature, or about 4.5°C. Again, this is probably not critical. Now, a separation that’s 60% of the planet’s mean distance gives a shift of 30% in distance to the planet, or 0.424 au. The peak in energy flux density is 1/0.990² + 1/1.838² = 1.020 + 0.296 = 1.316 as a multiplier to the average flux density. That 32% rise would lead to a 7.1% rise in equilibrium temperature, or about 20.5°C! That would have a big impact on organisms that are exposed.
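All three cases follow from one small function, with the planet at √2 au and a mean surface temperature of 288 K as assumed above:

```python
import math

R_PLANET = math.sqrt(2.0)   # au, planet's distance for Earth-normal flux
T_MEAN = 288.0              # K, assumed mean surface temperature

def peak_flux_multiplier(d_au: float) -> float:
    """Peak total flux (suns aligned toward/away from the planet),
    relative to the average (both suns at distance R_PLANET).
    d_au is each star's distance from the pair's center of mass."""
    peak = 1.0 / (R_PLANET - d_au) ** 2 + 1.0 / (R_PLANET + d_au) ** 2
    average = 2.0 / R_PLANET ** 2
    return peak / average

def temperature_rise(d_au: float) -> float:
    """Equilibrium temperature rise (K), using T proportional to
    the 1/4 power of the intercepted flux."""
    return T_MEAN * (peak_flux_multiplier(d_au) ** 0.25 - 1.0)

for d in (0.1, 0.2, 0.424):
    print(round(temperature_rise(d), 1))
# Roughly 1 K, 4.4 K, and 20 K for the three separations in the text
```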
Stability of the planet’s orbit. Matched suns could do a long circular dance around their center of mass. The planet might get many jogs from the pulsations in gravitational force. The planet might get pushed into chaotic orbits, clearly life-ending for organisms, or possibly ejected. The prediction of motions in a three-body system has turned out to be impossible to solve mathematically in closed form, that is, written in terms of fundamental mathematical functions. Even predicting whether the motion is periodic or, in contrast, chaotic, is difficult. There are published studies on this that would be hard to apply to interpret the stability for an observed binary sun + planet system.
Stability of the stars’ orbit and changes in evolution of the stars. Binary stars that orbit at small separations exert very strong gravitational effects on each other that distort their shapes as well as causing mass transfer between them or even coalescence. I leave the details to the astronomers to estimate changes in stellar energy output and lifetime for the case where at least one of the binary stars is Sun-like in mass.
Our luck in the Universe: We dodged the common fate of being around binary stars
The planet has to be at the right distance from its star… even if only to have an Earthlike temperature (and why that is so is another section, later!).
- It has to absorb the right fraction of light – too little and it’s too cold, too much and it’s too hot.
- It has to have an appropriate greenhouse effect. Too much and it scalds its surface, as Venus does. Too little and it’s too cold and has too little heat storage in the atmosphere, like Mars, adding to its problem of being far out and getting low radiation flux. Mars may have had it right in the distant past with a very strong greenhouse effect from CO2, compensating for its wimpy solar flux interception. Note that a habitable planet must have a greenhouse effect, but not only from water vapor – there’s lots to say about that.
- Its orbit can’t be too eccentric, or elliptical. That causes big variations in the stellar radiation received over the extremes of its orbit. Consider a planet with an orbital eccentricity of 10%. That is, its distance from the central star varies from 0.9 times the mean distance to 1.1 times the mean distance. That means that the energy flux density that it experiences varies from 1/1.1² = 0.83 times the mean (at the far point) to 1/0.9² = 1.23 times the mean (at the near point). Now, all else equal, and with no notable heat storage over its year, the surface temperature, Ts, will vary in proportion to the ¼ power of the intercepted energy flux density, I. That’s the old blackbody law, applied in reverse, to the planet radiating away the energy it receives from the star. That is, the planet’s equilibrium surface temperature follows the rule

Ts ∝ I^¼
Here, the symbol ∝ means “proportional to;” we’re not showing the other factors that matter and that are unchanged. Now, we also have

I ∝ 1/r²
Putting these together, we get

Ts ∝ 1/√r
This is the absolute temperature. That’s 288K on Earth. Now, at the closest approach to the star,

Ts = Ts,mean / √0.9 = 1.054 × Ts,mean
That gives a 5.4% increase in absolute temperature, or 15.6K ≈ 15°C. At the far point, the decrease is 4.6%, or 13.4°C. Those are really big effects. Earth’s orbit is only 1.7% eccentric and we see the effect of that; the range of radiation amounts is plus or minus about 3.4%, for a maximal variation in temperature of plus or minus about 0.85%, roughly 2.5°C. Aphelion, or farthest point, is around 4 July now, winter in the Southern Hemisphere.
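The whole eccentricity argument reduces to one short function, using the Ts ∝ 1/√r rule derived above:

```python
# Percent swing in equilibrium temperature over an eccentric orbit,
# assuming no heat storage, so T scales as 1/sqrt(distance).

def temperature_swing_pct(eccentricity: float) -> tuple:
    """(percent change at closest approach, percent change at farthest),
    for a planet whose distance varies by +/- eccentricity about the mean."""
    near = (1.0 - eccentricity) ** -0.5   # T ratio at the near point
    far = (1.0 + eccentricity) ** -0.5    # T ratio at the far point
    return (round((near - 1) * 100, 1), round((far - 1) * 100, 1))

print(temperature_swing_pct(0.10))    # ~(5.4, -4.7): the 10% case above
print(temperature_swing_pct(0.017))   # Earth's much smaller swing
```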
- The planet’s own axis can’t be too tilted. A planet with a severe axial tilt doesn’t average out its heat load from the star over its day of rotation. For long fractions of its orbit, the poles are very oblique to the star, with temperate conditions. For other long stretches, the poles face the star at a shallow angle, getting hot, or face cold interstellar space, getting cold. The lower latitudes face these problems, out of phase with the poles. In our Solar System, Uranus has an extreme tilt of about 98°. Exoplanets might have the same problem, which is effectively undetectable with our telescopes. Earth’s axis is stabilized by our Moon, which is at a fortunate size and distance to do this… even if it continues to move out slowly.
- The planet has to rotate fast enough to moderate swings in temperature between night and day on all sides of the planet. Our local planets’ rotation periods range from 10 Earth hours (Jupiter and, close to that, Saturn) to 243 Earth days (Venus). A fast rotation clearly helps average out stellar radiation around the longitudes. A very fast rotation is not a problem for energy balance, but a long period is: at any one time, a range of longitudes faces the star for a long stretch and gets hot, while another range faces away and gets cold. Atmospheric circulation helps even out the temperatures, but far from fully. On Earth, the equator-to-pole circulation moderates both the equatorial and the polar temperatures, but only partially. Venus has high windspeeds at high altitudes from interesting hydrodynamic effects, which “moderate” its temperature; the windspeed is quite low near the surface, which is always hellishly hot. The detection of rotation periods of exoplanets is very challenging, so we don’t know what fraction of planets have “good” rotation periods. Given the way that angular momentum is distributed among bodies as a stellar system forms, it might be a high fraction. The combination of fast rotation and atmospheric circulation makes for a short time for temperatures to “relax” toward the average.
Our luck in the Universe: Earth keeps a nice, even distance from our star and has a tolerable axial tilt. Of course, many planetary orbits can get “circularized” and start with only a moderate tilt, so, this is not commonly a deal-breaker.
Our Solar System is a beautiful and strange place. It does have an orderly look at present with a rather quiescent Sun and eight planets (and three or more dwarf planets) mostly in nearly circular orbits, no impending collisions, bearing a variety of sizes, colors, weather patterns – even on distant, dimly-lit Pluto – with lightning and gas volcanos, plus icy rings in more than one place. Our Earth pursues a stately circling of the Sun, its orbit close to a perfect circle.
Occasionally, drama unfurls here from small to large visitors from the asteroid belt or even the Oort Cloud. A few encounters change the Earth radically, as did the asteroid that created the Chicxulub crater 65 million years ago and was the major force extinguishing the (non-avian) dinosaurs, or the biggest collision that made our Moon. It’s easy to believe that there is nothing else like our Solar System. Deep studies now affirm that. They tie together almost countless observations with highly sophisticated dynamical theories to tell us of an early history far more chaotic, Jupiter moving in and out, removing early planets and leaving room for us while scattering pieces that sometimes come to illuminate our skies as little meteors, or big killers. An early planet or two appears to have been ejected from the Solar System to wander an increasingly sunless journey. These studies also tell us that the current regularity of the planets’ marches around the Sun is rather illusory. While two bodies can revolve around each other with regularity for eons, the presence of other bodies even at significant distances slightly perturbs their trajectories. In the long term, the motion of many-body systems is chaotic. Earth’s axis currently has a favorably modest tilt that won’t give us the strikingly long winters and summers experienced on Uranus, but it may well in a few billion years (not that we need to worry about that!). Astronomers can calculate the future position of Pluto with exquisite accuracy, but not indefinitely, as it slowly shows its underlying chaotic links to other bodies.
So, Earth has had an overall remarkably good, long run as an ultimate host for life. We and our ancestors made it. Does this string of luck likely apply in other stellar systems, stars with planets that might meet the many constraints for habitability but might see chaotic termination of their habitability? We may delve a bit more into our Solar System. The eight planets currently follow a law for the spacing of their orbits, called the Titius-Bode law. Scaling Earth’s orbit or semi-major axis to 10 units, the other orbits are approximately: Mercury 4, Venus 7, Earth 10, Mars 16, Ceres 28, Jupiter 52, Saturn 100, Uranus 196, and Neptune 388 (though Neptune actually sits nearer 300).
Each new difference between successive orbits is an intriguing multiple, a doubling, of the preceding one. This law, however appealing, has no basis in dynamics. It does hint that we’re missing a planet between Mars and Jupiter, so the dwarf planet, or asteroid, Ceres was put in.
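The rule in its commonly stated algebraic form can be sketched as follows, using the same scaling of Earth’s orbit to 10 units:

```python
# Titius-Bode rule: orbit n sits at 4 + 3 * 2^n units (Earth = 10),
# with Mercury taking the bare 4. A pattern, not dynamics.

def titius_bode(n: int) -> int:
    """Predicted orbital distance in the scaled units (1 au ~ 10)."""
    return 4 if n < 0 else 4 + 3 * 2 ** n

bodies = ["Mercury", "Venus", "Earth", "Mars", "Ceres",
          "Jupiter", "Saturn", "Uranus", "Neptune?"]
for i, body in enumerate(bodies, start=-1):
    print(body, titius_bode(i))
# Mercury 4, Venus 7, Earth 10, Mars 16, Ceres 28, Jupiter 52,
# Saturn 100, Uranus 196 -- then the rule overshoots Neptune (388 vs ~300)
```

The doubling of successive differences is visible directly: 3, 6, 12, 24, 48, 96, 192.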
Jupiter’s role in this is readily suspected and now documented. It is our dominant planet, with a mass 2.5 times that of all the other planets combined. The latest synthesis of the dynamics of the Solar System from start to present notes this role. The Grand Tack hypothesis proposes that Jupiter formed closer to the Sun than it now orbits, at 3.5 times the distance that Earth now stands from the Sun, or 3.5 astronomical units (au). Based on the current patterns of motion and on dynamical quantities such as angular momentum that are rigorously conserved, it is very likely that Jupiter moved inward, to where Mars is now. It crushed early inner planets, met Saturn, captured it in a 3:2 motion resonance, and moved out to its position near 5 au. These traversals of our place in the Solar System did clear out a number of planetesimals in our path, saving Earth some of the trouble of doing it itself.
Along its Grand Tack Jupiter both scattered and entrained planetary remnants as asteroids, some orbiting quite regularly, others crossing Earth’s orbit for collisions large and (overwhelmingly) small. The early Solar System was violent in Earth’s location. When I was a boy, I thought, not uniquely so, that the Earth is near the Moon, so why doesn’t it have craters all over. Scientists learned, of course, that Earth has been hit nearly as much as the Moon by bodies up to the size of asteroids and comets. (Resolution: they get eroded away and subducted by tectonic motions.) We have the tale of the iridium layer, the submerged crater at Chicxulub, and the demise of the dinosaurs (the last with some help from massive volcanic eruptions of the Deccan Traps in India). There is a fairly regular distribution of objects colliding with Earth and causing havoc for life. The data on many thousands of impactors reveals a strong power-law relation between the mass of impactors and the average frequency with which they hit the Earth:
From the analysis by P. A. Bland and N. A. Artemieva, Meteoritics & Planetary Science (2006)
Dust grains hit all the time; even kilogram masses hit us tens of thousands of times annually – our enchanting meteor showers. Large impactors with masses of 1 million tons or more hit about every million years. It is unsettling that very massive impactors are still frequent, the billion-ton ones hitting at not much lower a frequency. Life has come back, if by the thinnest of margins 66 million years ago, from the very big impacts. Evolution, once rolling, can build the diversity of life back up so that various layers of producers, consumers, and decomposers end up supporting a fairly stable biosphere after a delay. Too frequent catastrophic events, or events that are too big, could overwhelm this reestablishment. We’re happy to have the early delivery of water from many asteroids but, thanks, we’re full up now… not that we have a choice in accepting impactors or not. There are nascent – and only nascent – plans to deflect such Near Earth Objects with our own impacts from nuclear weapons, ion beams, etc. Of course, the NEOs must be detected first. NASA has a project to do so, distributed among several collaborating institutions. The goal is to detect 90% of objects 140 meters across or larger. That missing 10% is naturally worrisome. Stephen Hawking rightly posited an asteroid impact as the greatest threat to life on Earth.
What’s the situation in other stellar systems? What kind of structure in a stellar (solar) system minimizes the chance of collisions that are too big, too frequent, or both? We certainly have woefully inadequate data on other stellar systems to answer this question. Dynamic models could help answer it, though the question has to be phrased in a way that can be answered – What do we mean by structure? Number and sizes of planets? Elemental composition of the original stellar nebula? I leave this question and its answers to the modelers of stellar systems.
So, life on Earth has made it through threats from within our own Solar System, even if at times only by the skin of our figurative teeth (real teeth, of course, didn’t evolve until 90% of Earth’s story was over). There are lessons for Earth’s habitability and less clear lessons for the habitability of exoplanets. Earth has no guarantee of continued habitability, not even a good set of upper and lower bounds for the time of the next big impactor. For exoplanets that are or will be habitable, we may say that the structure of the stellar system bears heavily on the probability that evolution will have a long, unbroken chance to operate. It is interesting to speculate whether a complex, multi-planet stellar system such as ours favors the avoidance of life-ending impacts on a habitable planet, or does the opposite. We’d need more data on the various types of planetary sets orbiting the nice stars of modest mass. Multi-planet systems have been detected already, though with only modest information about the distribution of the major masses and virtually nothing about those pesky asteroids. Perhaps a priori models may have more to say. In any event, stability and safety considerations add to the many constraints on habitability that I have amassed. They further reduce the odds for continuously habitable planets, to nearly zero anywhere near us and very low in the galaxy. Again, we are very, very lucky to have a planet that even puts up with our interference.
Our luck in the Universe: We have a Goldilocks measure of planetary and asteroidal “clutter,” which is pretty finely balanced as stellar systems are likely to be. There are many deal-breakers herein.
Explosions and sterilizing radiation. Neighbors that emit potent gamma radiation are a threat to life, though not a deal-breaker for most stellar systems and their planets. Explosions of stars as supernovae create massive fluxes of particles and gamma rays, and some of our distant neighbors can “go supernova,” the equivalent of “going postal” among humans. The nearest star to Earth capable of this is Spica, 265 light-years away. That’s too far to give us much radiation; nothing much farther than about 50 light-years away would affect us noticeably. How many other potentially habitable stellar systems are this fortunate? Supernovae and close-binary neutron stars are spaced rather far apart, as one small piece of luck has it for potentially habitable planets. The other fortunate part of Earth’s place in space and time is to be where one or two supernovae or neutron star mergers left a lot of elements heavier than lithium. Life needs a solid (read: rocky) planet, which needs an abundance of the heavier elements at least 1/10 of that in our proto-Sun.
Gamma-ray bursts are, as the name indicates, brief outpourings at the greatest intensity of electromagnetic radiation ever observed. Some release as much energy in a few seconds as our Sun puts out in its entire lifetime. You do not want to be in the way. They likely come from supernovae or neutron star mergers. Their radiation moves farther and faster than does the “hadronic” (nuclear) matter moving in expansive shock waves. It also is more focused and intense. A GRB nearby could sterilize the facing side of any planet. The other side would be “OK.” The risk is fairly low. GRBs are scattered pretty randomly around the Universe, not concentrated in any area. Our own Milky Way Galaxy seems to have very few; a map of GRBs shows nothing special in the direction of our galaxy’s center.
In brief, supernovae, close-binary neutron stars, and their evolved forms as gamma-ray bursters appear to be a lesser delimiting factor than many others for potentially habitable planets.
Our luck in the Universe: our stellar neighbors are not threatening now. Of course, the majority of stars might have such luck with neighbors.
Now to the planet surface
Any habitable planet must have an atmosphere, for a number of reasons. Our own atmosphere supports the availability of oxygen for us aerobes and its cycling via photosynthetic organisms. Even for an anaerobic planet, as well as for ours, an atmosphere supports the recycling of water that evaporates and condenses in the spatial patterns of weather and climate. Mars has an atmosphere that’s only 1% as thick (in molecules per area) as ours on average, and no longer has any such water cycle. (The reasons it lost its atmosphere inform us about habitability in general – more on that later.) An atmosphere supports the recycling of nitrogen as a critical element in life forms (in proteins, genetic material, …). Mars is again the loser.
Along with oceans, the atmosphere transports heat between latitudes, reducing the otherwise even more striking differences in temperatures. Sure, the Earth’s poles are cold, but look at Mars, or, more dramatically, Mercury, where that toasted planet has permanent water ice at its poles. The atmosphere stores heat over the daily cycles of irradiance by our Sun or any comparable central star in other stellar systems. We cool down remarkably slowly overnight (or appallingly slowly when we try to get to sleep in a hot, humid place!). Mars with its wimpy atmosphere has massive daily swings in surface temperature, which would be sensed by organisms such as human visitors primarily as soil temperatures. On a good summer day near its equator, the daily high may be 20°C while the predawn low may be less than -70°C. Soil is a poor buffer for heat; the depth to which the daily variations in temperature penetrate soil is tiny on Earth, several centimeters; that depth cannot store nearly as much heat as can a nice, thick atmosphere such as ours with its 10 metric tons (tonnes) of air per square meter (the only way to appreciate that weight readily is to dive, feeling the pressure of an extra 10 tonnes per square meter by going 10 meters down).
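That figure of 10 tonnes of air per square meter follows directly from surface pressure: the pressure at the ground is just the weight of the overlying column of gas. A minimal sketch, using standard reference values for surface pressure and gravity (not figures from this book):

```python
# Column mass of an atmosphere: surface pressure is the weight of the
# overlying gas per unit area, so mass per square meter = P / g.
def column_mass_kg_per_m2(surface_pressure_pa, gravity_m_per_s2):
    return surface_pressure_pa / gravity_m_per_s2

earth = column_mass_kg_per_m2(101325.0, 9.81)  # ~10,300 kg, i.e. ~10 tonnes
mars = column_mass_kg_per_m2(610.0, 3.71)      # ~160 kg, a tiny fraction of Earth's
print(f"Earth: {earth / 1000:.1f} t/m^2, Mars: {mars:.0f} kg/m^2")
```

Note that Mars comes out even thinner than the “1% as thick” figure in molecules per area, because Martian gravity is weaker: less pressure per molecule of column.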
The importance of an atmosphere in radiative balance
An atmosphere has a great deal to do with the fate of radiation, incoming and outgoing. On Earth there is a generally minor direct absorption of sunlight by atmospheric gases but a whopping effect of clouds, which cover 67% of the Earth’s surface at any one time, even if some of those clouds are thin. Clouds are always white on top and are quite effective in reflecting a good part of impinging solar energy back to space. They’re pretty reflective as well in the near infrared that carries about half the energy in sunlight. There is some downward scattering that gives us the moody overcast light and more. The non-cloudy areas of the Earth’s surface vary in reflectivity. Heavily vegetated areas absorb about 95% of visible light and about 80% of infrared, depending on the details of how all the leaves are displayed. That gives an albedo, which is the fraction of total solar energy reflected, of about 12-14%, including the ultraviolet. The Sahara Desert looks dazzlingly bright in the visible but it is yellowish, not white, and it absorbs well in the infrared, giving it an albedo of about 0.3. Only snow is highly reflective, with an albedo of 0.4 to over 0.8, depending on age, and it covers only 12% of the Earth’s surface permanently, and that area is much less important in the solar energy budget with its slanting presentation and low average flux density per unit of ground area. The permanent snow and ice area only covers about 5% of the area of the Earth’s surface as projected area normal to the direction of sunlight. Snow does cover an extra 21% of the Earth’s surface seasonally – thus, those blinding and beautiful snow scenes on a sunny day after a snowfall.
The reflectivity or albedo of the Earth varies a bit seasonally as snow and ice accumulate, but the two hemispheres surprisingly nearly compensate each other, one snowy when the other is warm. In the long term the albedo has varied greatly. It increases markedly in Ice Ages because ice and snow are highly reflective. The resulting decrease in solar heating of the surface and atmosphere tends to reinforce the accumulation of more snow and ice, with a number of final complications such as reduction of water content in cold air that then reduces snowfall. The routes for Earth to warm again are several, from regular changes in where our axis of rotation points at closest approach to the Sun (perihelion) and at greatest distance (aphelion). Albedo also decreases and solar heating increases in warm interglacials. There are only modeling estimates of these changes, since there were clearly no instrumental observations going back then, nor are there good remanent indicators of sunlight exposure.
The interplay of the axial tilt (amount and timing relative to perihelion) and of our orbital eccentricity gives rise to quasi-regular cycles of cold and warm periods, the Milankovitch cycles. Similar phenomena would occur on exoplanets with ranges of axial tilt, eccentricity, and ocean/land area ratios; this (and volcanism, rock weathering, ocean circulation, etc.) vastly complicates the prediction or estimation of habitable areas and times for any planet. We can barely figure it out for the Earth.
There have also been long ages with high global temperatures and no polar ice – e.g., the quite hot mid-Cretaceous period around 100 million years ago, nicknamed the Saurian Sauna for the dinosaurs basking in it. There were also Snowball Earth episodes of ice covering all or almost all of the Earth. Bacteria that generated oxygen by photosynthesis caused the oxidation of methane, a potent greenhouse gas, and almost killed off themselves and all possible future life. Don’t count on life preserving itself, here or on any other planet.
Snowball Earth sketch. Neethis. Wikipedia Commons
The eventual fate of all absorbed solar energy is conversion to heat in the air, water, and soil. About 5% of the energy takes an intermediate ride as wind, which ultimately creates heat by dissipating its energy in eddies and at surfaces. The result is that the Earth has a great variety of surface types and locations from which energy is then lost as thermal radiation.
The most critical role of the atmosphere for energy balance is in the thermal infrared, which is the vehicle for the final escape of energy from the planet. We need to pay attention to the distribution of radiation, both incoming and outgoing, among all the possible wavelengths. Both the Sun (or other star) and the surfaces on the planet emit radiation essentially exactly as blackbodies. For a useful level of understanding of exiting radiation that balances out the energy income and outgo to set Earth’s temperature(s), we need to look into the full range of wavelengths and their properties.
Our luck in the Universe: We have an atmosphere that clings to the Earth, even if its greenhouse effect can be reset catastrophically, while recovering because we also have tectonic activity restoring a measure of stability. We have enough water for an ocean to help moderate geographical temperature patterns, yet not so much as to prevent the emergence of dry land.
We love sunlight, with its preponderance of visible light that represents a bit under half the energy flux from the Sun. Our eyes have evolved to use that common source of illumination, detecting as they do radiation with wavelengths between 400 and 700 nanometers (reference: a human hair is about 70,000 nanometers thick). This is not just selection pressure in evolution to match the local radiative environment. It is necessary for the photochemical reactions of vision… and also of photosynthesis.
Visible light, as is all electromagnetic radiation, is packaged into individual photons, each having an amount of energy, E, equal to a product of two universal physical constants divided by its wavelength, λ; that is, E=hc/ λ. Here, h is good old Planck’s constant for the quantization of energy and other things and c is the speed of light. For convenience, physicists and others often quote this energy in units of electron-volts, or eV, the energy imparted to a single electron accelerated by that number of volts of electrical potential. The shortest wavelength that we humans can see is about 400 nanometers; its photons pack an energy of 3.1 eV. The longest wavelength is about 700 nanometers, with an energy content of 1.8 eV. Now, these are just the ranges of energy that can cause major and useful rearrangements in the bonds of common chemicals, especially those involved in the metabolism of humans, plants, fungi, birds, you name it. The bonds in question are largely those in carbon-framework molecules, the organic compounds. In a later section here, I argue that carbon is the only plausible element for complex molecules that exist and would have to exist in living organisms; only it can make long chains and bond with such a great diversity of other elements. Silicon is a chemical analog of carbon in the periodic table of the chemical elements but it’s too “fat,” having a radius that’s 50-60% longer than that of carbon when each element makes its strongest bonds. Longer is weaker – less electrical attraction. So, vision and metabolism are both tuned to light in or near the visible spectrum. The more energetic ultraviolet radiation with its shorter wavelength is a bond-breaker, mostly hazardous to the chemical integrity of living organisms. There are Earth-bound organisms that can use longer wavelengths in the infrared for metabolic energy, such as the purple photosynthetic bacteria. Even these go no further than 850 nanometers. Wait, you may say. 
Rattlesnakes can detect prey using very long wavelength thermal infrared radiation, as long as 30,000 nanometers, emitted by warm bodies. Yes, they can, with extremely poor resolution. The image looks extremely pixelated, like those visual tests that let you infer that a 20×20 pixel set reminds you of Abraham Lincoln. This is not a good way to make one’s way around the world, in general.
Anupam vashist, Medium.com
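Back to the photon arithmetic: the 3.1 eV and 1.8 eV figures quoted above for the edges of human vision are easy to verify from E = hc/λ. A quick sketch using CODATA constant values:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon, E = h*c/lambda, expressed in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(400.0))  # ~3.1 eV, the violet edge of vision
print(photon_energy_ev(700.0))  # ~1.8 eV, the red edge
```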
Delving into electromagnetic radiation as it meets matter
This is a big section with critical details, so I can’t make it shorter. Einstein once said, “Make everything as simple as possible, but no simpler.”
[If you’re familiar with quantum mechanics, you can skip this section and jump to the section on stars themselves]
There are enormous consequences of this realization that radiation is quantized, and in the corresponding quantization of states of atoms and molecules – states that radiation can bump atoms into or out of. The atom itself was a quandary for classical physicists. A negatively charged electron orbiting around a positively charged nucleus should lose energy because it is constantly being accelerated away from a linear escape path in order to maintain that orbit. Yet, it achieves a set of well-defined states of motion and associated energy levels, including the ground state of lowest energy. The story of the quantum theory of the atom, then of molecules, metals, and more, is an extensive one. In brief, a key proposal was that of Louis de Broglie, to have bodies such as the electron be guided by “pilot waves.” A steady state has the wave encircle the atom and close on itself, achieving the same phase or state in a cycle when it meets itself. The wave had to have a wavelength determined by the electron’s mass and momentum. The result was quantization – only certain states of angular momentum and hence of energy were stable, the eigenstates. Lots of ad hoc rules for quantization of states were developed. They culminated in two complementary descriptions of the quantum mechanics of atoms. One is the matrix mechanics of Werner Heisenberg. The other one is related to the wave hypothesis and is most familiar now, the wave equation of Erwin Schrödinger (an equation I had to solve, admittedly approximately, over several thousand times over my career). There is a great derivation of it, with deep connections to observable reality, in Fundamentals of Modern Physics, by Robert M. Eisberg.
Waves don’t have a sharply defined location, and that’s the case in the quantum mechanical description of nature. It replaces the classical description of matter, in which every particle has a fully defined position and a fully defined state of motion, a momentum. There is a precise expression of the uncertainty principle: we cannot know the position and momentum (motion) of a particle exactly at the same time. The uncertainty in position multiplied by the uncertainty in momentum cannot be less than h/(4π) – that same Planck constant that appeared in the quantization of light into photons! A very qualitative explanation of the fact that electrons can’t spiral into the nucleus is that they would then have an uncertainty in position that would make the uncertainty in momentum huge, bringing the electron out. For macroscopic bodies, the uncertainties are so small relative to the large values of position and momentum that they become negligible.
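To see why these uncertainties matter for electrons but are negligible for everyday objects, one can plug numbers into Δx·Δp ≥ h/(4π). The momentum spreads below are illustrative choices, not measured values:

```python
import math

H = 6.62607015e-34  # Planck constant, J*s

def min_position_uncertainty_m(momentum_uncertainty_kg_m_s):
    """Smallest position uncertainty allowed for a given momentum spread."""
    return H / (4.0 * math.pi * momentum_uncertainty_kg_m_s)

# electron with momentum spread ~ m_e * 1e6 m/s (a typical atomic-scale speed)
dx_electron = min_position_uncertainty_m(9.109e-31 * 1e6)  # ~6e-11 m: atomic size
# a 1-gram bead with its momentum pinned down to 1e-9 kg*m/s
dx_bead = min_position_uncertainty_m(1e-9)                 # ~5e-26 m: utterly negligible
print(dx_electron, dx_bead)
```

The electron’s minimum blur is about the size of an atom, which is exactly why it cannot spiral into the nucleus; the bead’s is 15 orders of magnitude below anything measurable.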
The wavefunction, commonly denoted as ψ(r), tells us the probability of finding the electron at any point in 3-dimensional space, r, at any time. It does so not directly but as the “square” of its value at a point. The value of ψ(r) is a complex number of the form a+ib, with i as the square root of -1. This is really a computational device, not a fantasy with “imaginary numbers.” The probability density of electrons is the product ψ*(r) ψ(r); here, ψ*(r) is the complex conjugate of ψ(r) (change a+ib to a-ib), so that the probability comes out as a purely real number. In the limit of large particles where quantum effects get small, the probability becomes more sharply defined, as in classical mechanics.
Above is some work I did with colleagues at Los Alamos, calculating the movement of the probability distribution of a particle in a confining space (potential well) – not quite semiclassical but close; the probability distribution is narrow, unlike the wiggly, wide distribution for a pure quantum state.
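The ψ*(r)ψ(r) recipe is, in computational terms, just complex-conjugate multiplication: whatever complex value ψ takes at a point, the product comes out real and non-negative. A tiny sketch with an arbitrary made-up value of ψ:

```python
# psi at some point r, written as a + ib; the particular value is arbitrary
psi = complex(0.6, 0.8)

# probability density = psi* x psi = a^2 + b^2, always real and non-negative
density = (psi.conjugate() * psi).real
print(density)  # 0.36 + 0.64 = 1.0 for this choice of a and b
```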
Basically, there is a differential equation that the wave describing the electron, or even many electrons or other particles, must satisfy. The wavefunction completely describes the state of the system. The description is in one sense deeply statistical while also being complete. The statistical nature arises from fundamental properties of the wave equation. Mathematically, the operators for position and momentum do not commute – results are different if they are reversed in order.
There are no exact (“analytic”) solutions of the wave equation when there are two or more electrons, but solutions that are extremely close are now routine. The wavefunction allows the computation of all the properties of a molecule such as methane or CO2 or water. These include the energy levels of the molecule, the geometry of the molecule as bond lengths and angles, the electric dipole moment, the frequency of vibration of various deformation modes of the molecule (solving for the wavefunction in geometries with nuclei in various displacements to get the energy of displacements), and the average distribution of electrons as the probability density in space.
Critically, one can also calculate the rate at which an atom or molecule is driven to move from one state to another by electromagnetic radiation (or other drivers). Suppose that electromagnetic radiation is vibrating up and down in, say, the x-direction in space. It is able to accelerate electrons up and down in that same direction. This acceleration of the electron may simply jiggle the molecule’s electrons (or the much heavier nuclei, but they respond much less), with not enough energy to move the electron into a new state; the photon just has a probability of getting scattered into a new direction. However, if the frequency of jiggling by a photon with frequency ν represents a difference in energy E = hν between the initial state and a higher-energy (excited) state, the photon has a probability of being absorbed and pushing the molecule into the excited state.
The wavefunctions have symmetries in space (and in the state of the electron called spin). This leads to potent selection rules on which states can be excited to other states, depending on their symmetries. Some have clear intuitive interpretation. Consider a hydrogen atom in the ground 1s-state. In this state the distribution of electron probability density is symmetrical over all angles; it’s spherically symmetric. The next higher states in energy are the symmetric 2s state and the three 2p states that are symmetric along the x, y, and z directions. An electric field of a photon acting in the x direction has to drive electron motion in the x direction, so it can drive the transition from the 1s state to the 2px state. This is expressed formally in the transition matrix element in which we integrate over all space the quantity ψ*(r) x ψ(r), taken between the final (2px) and initial (1s) states. This works with stunning accuracy.
The computation is the first step in calculating the rate of transition between states, which, again, is a probability. Averaged over many molecules in, say, a column of air containing many molecules of a gas that can potentially absorb photons of that orientation and energy, we get a nicely defined rate of absorption per unit length of the path of photons. That is, we get the fractional extinction of the flow of photons per unit depth of air (or of anything else). That’s the key to figuring out how greenhouse gases absorb thermal radiation, trapping a significant fraction of the outgoing radiation.
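A constant fractional extinction per unit path length means exponential attenuation with depth, the Beer–Lambert picture. A sketch with a made-up absorption coefficient, not a measured one for any real gas:

```python
import math

def transmitted_fraction(absorption_per_m, path_m):
    """Fraction of photons surviving a path with uniform fractional extinction."""
    return math.exp(-absorption_per_m * path_m)

# hypothetical coefficient of 0.001 per meter over a 1-km column of air
print(transmitted_fraction(1e-3, 1000.0))  # ~0.37, so roughly 63% absorbed en route
```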
The same molecule can also move from its excited state to the ground state, emitting a photon of the exact energy and oriented (polarized) in the same direction as in the case of excitation, e.g., the x direction. This can happen spontaneously, and it sets an upper limit to the lifetime of the excited state, reached when no other perturbations of the molecule such as collisions are significant. It also sets a “scatter” to the frequency of light that’s absorbed. It’s no longer a perfectly sharp frequency, by virtue of an alternative form of the uncertainty principle: the uncertainty in the lifetime of the excited state multiplied by the uncertainty in the energy of the photon emitted is no smaller than h/(4π).
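That lifetime relation can be restated for frequency: with E = hν, the relation ΔE·Δt ≥ h/(4π) becomes Δν ≥ 1/(4π·Δt). A sketch with an assumed 10-nanosecond excited-state lifetime:

```python
import math

def natural_linewidth_hz(lifetime_s):
    """Minimum frequency spread of emitted light for a given excited-state lifetime."""
    return 1.0 / (4.0 * math.pi * lifetime_s)

print(natural_linewidth_hz(1e-8))  # ~8 MHz of "scatter" for a 10-ns lifetime
```

Eight megahertz against the hundreds of terahertz of visible light is tiny, which is why natural linewidths are sharp but never perfectly so.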
In complex molecules there may be a number of symmetries… and the states are defined not just by the electronic structure (the state of all the electrons) but also the states of vibration and rotation of the molecule. Note that these states are also quantized: stationary states of vibration occur at a discrete set of energy levels. The same is true for rotations. The stationary state of the molecule is described by the electronic configuration and levels of vibration and rotation, termed rovibronic for rotational-vibrational-electronic.
Tellingly, the conceptual framework and the computational methods to describe radiation, matter, and their interaction appear to work throughout the Universe. Toss in the theory of relativity. One gets to use the absorption and emission lines of chemical elements and molecules to determine speeds of motion of stars and galaxies relative to us, because the energy transitions are the same in visible matter here and in matter billions of light-years away. This is another grand achievement of the human intellect. More directly relevant to delving into the habitability of a planet, we are enabled to calculate the radiation loading and the greenhouse effect on any star+planet system, with any known or estimated composition of the planet’s atmosphere.
With all the information from electromagnetic theory, statistical mechanics, and molecular or atomic quantum mechanics, we may comprehend and even calculate how light and other radiation moves through a medium – “light” leaving the layers of a star’s atmosphere, or thermal radiation moving through the atmosphere of a planet with radiation-absorbing greenhouse gases.
Moving up from single molecules to ensembles of molecules in a gas, we have different levels of description to figure out the fate of radiation coming with a given distribution of photon energies (a spectrum over wavelengths) from a direction. Molecules can scatter it into new directions, absorb it to gain energy, or simply let it pass through; they also emit their own radiation. In the most complete description, we would compute how the molecules get distributed over their own energy states (rovibronic) from collisions among themselves and from absorbing and emitting radiation. The calculations are defined but extremely complex, done only in elaborate computer codes for radiative transport in an atmosphere.
In a less detailed but quite accurate view, molecules are described by a temperature, which uniquely gives the fractions of the molecules in the different states. The rates of transitions between each pair of molecular states as radiation transits the material can be calculated. The results can then be combined to give the rates of absorption, scattering, and emission for the whole complex of molecules. There are two cases of interest here. One is the transport of radiation out of a star’s interior. Here one has to know how radiation interacts not just with “ordinary” gases (at high temperatures, the only molecules are just atoms, many of which are well ionized) but also with highly ionized plasma deep in the star. Aided by models of mass motion of gases and plasma as hydrodynamics, one can get a detailed picture of what’s happening at different depths in the star. These pictures are built on pioneering work by Subrahmanyan Chandrasekhar. A key calculation for radiation treats electrons and nearby ions rather as temporary atoms (to get the “Kramers opacity,” effectively an absorption coefficient). In this book I let these details remain buried in the star – what matters at the planet is the final radiative temperature at the surface of the star. The result of high-temperature near-blackbody radiation leaving the Sun through its outer envelope of gases with their selective absorptions looks like this at the top of Earth’s atmosphere:
NASA, 1992, via Wikimedia Commons
We can resolve two different fluxes. One is shortwave radiation, or SWR – light (and UV and near infrared) coming from the planet’s host star. From our Sun, 99+% lies in the region between 350 nm in the ultraviolet and 3200 nm (3.2 micrometers, μm) in the near infrared (see the graph earlier, or play with integrating the Planck equation for a blackbody, such as in spreadsheets linked here). To a very good approximation (no accounting for slight wavelength changes when light bounces off a molecule), the SWR has 3 possible fates: passing through without hindrance (transmission), being scattered to a new direction, or being absorbed. Looking at a thin layer:
Notional fates of SWR traversing a thin layer of the atmosphere. For scattered light I show its partitioning into 3 angle classes; one might use only 2, upwelling and downwelling, or a large number, depending upon the purpose of the modeling.
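The invitation above to play with integrating the Planck equation can be taken up in a few lines of numerical integration. Note that a pure 5772 K blackbody puts roughly 90% of its emission into the 350–3200 nm band; the Sun’s actual spectrum, shaped by its outer envelope, differs somewhat from the ideal blackbody:

```python
import math

H, C, K = 6.62607e-34, 2.99792e8, 1.38065e-23  # Planck, light speed, Boltzmann

def planck(lam_m, t_k):
    """Planck spectral radiance per unit wavelength for a blackbody at t_k."""
    return (2.0 * H * C**2 / lam_m**5) / math.expm1(H * C / (lam_m * K * t_k))

def band_fraction(lam1_m, lam2_m, t_k, n=20000):
    """Fraction of total blackbody emission between two wavelengths (midpoint rule)."""
    lo, hi = 10e-9, 100e-6  # wide enough to capture essentially all the emission
    step = (hi - lo) / n
    total = band = 0.0
    for i in range(n):
        lam = lo + (i + 0.5) * step
        contrib = planck(lam, t_k) * step
        total += contrib
        if lam1_m <= lam <= lam2_m:
            band += contrib
    return band / total

print(band_fraction(350e-9, 3200e-9, 5772.0))  # ~0.91 for the ideal blackbody
```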
SWR that makes it through one layer, either unhindered or scattered but not absorbed, can suffer the same three fates in the next layer. We can build up a picture with multiple layers (ignoring variations laterally in molecule concentrations or intense scatterers, the clouds or aerosols). I’ve done lots of such models to estimate what happens in another medium, the foliage of a vegetation layer. There are two cases of interest (or more!). One is sunlight (starlight, for another planet) coming through clear air. With only small molecules such as N2 and O2 dominating in Earth’s atmosphere, scattering of SWR is modest and mostly directed near the forward direction. Short wavelengths scatter more strongly than long wavelengths – blue much more than red. Hence, in clear air we get blue light scattered toward us all over the sky, our blue skies. Counting in the other colors, 10% of the direct beam is changed into diffuse skylight. At low solar (stellar) elevations the radiation passes through a much longer path in the atmosphere to reach us, and the red end of the spectrum dominates heavily, less of it having been scattered away. Far-red light, in the terminology of plant physiologists, in the wavelength range 700-850 nm, dominates over red. The ratio of far red to red is the signal for dawn and dusk for plant responses to daylength.
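The stronger scattering of short wavelengths in clear air is the Rayleigh λ⁻⁴ law; a one-line comparison of illustrative blue and red wavelengths:

```python
# Rayleigh scattering strength scales as 1/wavelength^4
def rayleigh_ratio(short_nm, long_nm):
    """How much more strongly the shorter wavelength scatters than the longer."""
    return (long_nm / short_nm) ** 4

print(rayleigh_ratio(450.0, 650.0))  # ~4.4: blue scatters over 4x more than red
```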
The other case is cloud droplets and other aerosols, natural or human-made. These are typically large compared to the wavelength of light. They scatter light into a larger range of angles, often substantially backward. They may also absorb a lot of radiation – think of the red-brown layer of nitrogen dioxide from vehicle emissions viewed edge on as one enters the Los Angeles basin near Riverside, or the appalling red-brown dust that’s ever in the atmosphere of Mars, with dust storms sometimes reducing light at the surface to less than 1/10,000th that of clear air (which never occurs on Mars. Potential colonists, think that over!). Just considering fluffy white clouds or other forms of water clouds, they lack significant light absorption but scatter light strongly. It’s straightforward to formulate and solve simple models that regard light as having two streams, up and down, passing through a series of thin layers in which light is scattered. With multiple opportunities for light to bounce back upwards, the top surface of the cloud is bright and the light level below can be strongly diminished. No surprise, of course, as we all see this. For clouds of other chemical species, such as on Venus, the exercise is less academic and more relevant to figuring out the surface conditions.
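The two-stream, thin-layer cloud model just described can be sketched with the standard “adding” recursion: each purely scattering layer reflects a fraction r of whatever hits it, and the repeated bounces between a layer and the stack beneath it sum as a geometric series. Layer count and per-layer reflectance here are illustrative, not measured cloud properties:

```python
def stack_reflectance(n_layers, r):
    """Reflectance and transmittance of a pile of identical, non-absorbing layers."""
    t = 1.0 - r          # non-absorbing: whatever isn't reflected is transmitted
    R, T = 0.0, 1.0      # start from an empty stack (reflects nothing, passes all)
    for _ in range(n_layers):
        denom = 1.0 - r * R        # geometric series of bounces between layer and stack
        R = r + t * t * R / denom  # add one layer on top of the stack so far
        T = t * T / denom
    return R, T

R, T = stack_reflectance(50, 0.1)
print(R, T)  # ~0.85 reflected upward (bright cloud top), ~0.15 transmitted below
```

With no absorption, R + T stays exactly 1; the thicker the pile, the brighter the top and the dimmer the light underneath, just as the text describes.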
There are sophisticated models that resolve all the different wavelengths, each having its own probability of scattering and probability of absorption per unit length… and, to set those probabilities, the models must use known depth profiles of gas and aerosol concentrations. These are used in climate models. One common use is to calculate the amount of each wavelength of sunlight reaching the surface of the Earth, with known gases (normal gases, water vapor, pollutants). They agree very well with direct measurements, which look like this:
From: Franz J. Himpsel, University of Wisconsin
We see that sunlight has lots of little pieces chopped out of it by gases absorbing light. Of course, some of the chopping was done by our Sun’s atmosphere – the Fraunhofer lines mentioned earlier. Visually we don’t notice these narrow irregularities because our eye’s visual pigments all absorb very broad bands of red, green, and blue.
Most absorbed radiation ends up as heat without doing notable chemistry along the way. We only need to account for it in heat balance, our major focus here. However, some absorbed radiation does interesting chemistry. There’s the cycle of creation and destruction of ozone in the atmosphere. There’s photochemical degradation of dead vegetation, though it is a rather minor process for that breakdown that’s dominated by the mechanical and biochemical actions of pill bugs, microbes, fungi, etc. Live and dead vegetation and inorganic rocks comprise the land surface cover on Earth; small and more photochemically reactive organic molecules are minor components of the surface; we might still count terpenes emitted by trees, our “natural smog,” though it accounts for little in the heat balance driven by absorbed radiation. Large organic molecules readily dump energy absorbed from light by radiationless relaxation (which we’ll get to) rather than slower and thus disfavored routes such as electron transfer that lead to chemical reactions. This ready conversion is explained by the great density of molecular states of electronic, vibrational, and rotational states that overlap between a big molecule in an energetic state and that molecule in its ground state – the conversion to just heat and not to electronic excitation prepped for chemistry is much favored statistically. Big organic molecules may absorb millions, billions of photons before reacting. If not, your plastic pots, garage covers, etc. would be gone rather fast. Rock and inorganic soil have very few pathways to chemical reactions in situ. A minor exception is generation of electron vacancy centers that colors old glass bottles in the desert to a nice purple. Purple bottles only cover, oh, who knows, less than a hectare out of Earth’s 51 billion hectares of surface; count them out. Water in the oceans that dominates all other chemicals is very hard to photolyze; we don’t see oxygen and hydrogen coming out of sunlit water itself.
Very notably, on Earth a good fraction of solar shortwave radiation is absorbed by a most remarkable molecule, chlorophyll. This molecule illustrates the various internal processes of energy transformation in molecules in general, so I’ll use it. Delving into the photophysics of chlorophyll further reveals the exceptional balance of processes in it. Is the biological evolution of chlorophyll or of some chlorophyll wanna-be nearly inevitable in a habitable world, and does that production make a world habitable? There are many other photochemistries in organic molecules but Chl is in a terrestrial “world of its own.” In truly good environmental conditions on Earth currently the process of photosynthesis that starts with chlorophyll absorbs about 30% of the solar energy flux hitting the leaf area; that energy gets pushed through many coupled “dark” chemical reactions (not requiring more light). The leaf stores only about 6% of absorbed energy as higher-energy bonds in sugars. The fraction stored per unit of total planetary area varies greatly with vegetation cover and environmental conditions. It’s an estimated 0.3% on a global and annual average. Still, that low fraction keeps all of us alive.
Chlorophyll has a ground state (lowest-energy electronic state), and a vast number of excited states of higher energy… and of specific symmetries in both geometry and the state of spin of the electrons. The molecule is rather complex:
Image from Wikipedia Commons.
The long phytyl tail is not critical to the electronic behavior. It serves to anchor the molecule in the lipid layer holding 300 or so Chl molecules in the photosynthetic unit. The magnesium atom is critical. Its loss to give two H atoms at the porphyrin rings makes olive-colored pheophytin, active in the later electron transport but not in exciton energy hopping. Vertices without an element noted (no N, O) are carbon atoms. When carbon atoms show fewer than 4 bonds, there are hydrogen atoms there sufficient to make 4 bonds to the carbon atom.
Its ground state has its outermost or valence electrons, the ones participating in bonding, paired up, one electron with spin up in each “orbital” and the other with spin down in that same orbital. (For a discussion of spin and the Pauli exclusion principle, you can consult any text on modern physics if you are unfamiliar with the concepts.) That makes the state a spin singlet, having only one possible state. Call it S0. Chlorophyll has several electronically excited states in the range of energy of visible light and near infrared light. Two are also singlet states: S1 lies about 1.8 electron-volts higher than S0 and can be reached when a molecule in state S0 absorbs a photon of red light; S2 lies higher, around 2.8 eV higher than the ground state, and can be reached upon absorption of a photon of blue light. The absorptions are not at sharp wavelengths. Rather, there are Chl molecules in a variety of states of vibrational and rotational excitation at normal temperatures; many different initial “rovibrational” states of the S0 electronic state are present at normal temperatures. Each can independently go to rovibrational states of the S1 electronic state. Each such transition has its own variant energy jump, so that we see a range of different wavelengths being absorbed in a broad band. Of course, the same occurs in absorption to electronic state S2; the blue absorption band is also broad.
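Those S0→S1 and S0→S2 energy gaps map back to wavelengths via λ = hc/E, or conveniently λ[nm] ≈ 1239.84/E[eV]:

```python
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def gap_to_wavelength_nm(energy_ev):
    """Wavelength of the photon matching an electronic energy gap."""
    return HC_EV_NM / energy_ev

print(gap_to_wavelength_nm(1.8))  # ~689 nm: chlorophyll's red absorption band
print(gap_to_wavelength_nm(2.8))  # ~443 nm: chlorophyll's blue absorption band
```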
Let’s view this on a state diagram:
States and fates of chlorophyll. (A) Chl absorbs photons in the red or the blue regions of the spectrum, moving it to the first or the second excited state. (B) The great majority of Chls in the second excited state lose part of their electronic energy as heat (vibrations and rotations) to become the first excited state. (C) Chl in the S1 state can lose its electronic energy as heat (vibrations and rotations) to reach the ground state by radiationless relaxation (r.r.), which is rare; it may also fluoresce (fl.), releasing its energy as a red photon; it may also change its state to the first excited triplet state, T1, by intersystem crossing (i.x.), which can then release energy relatively slowly, including by emitting a photon via phosphorescence (phos.); finally, it may transfer its electronic energy to other chlorophylls; the energy can hop around among the other Chls in the photosynthetic unit, where one of them can start photochemistry by transferring an electron to a complex of other kinds of molecules.
The higher excited state in Chl, the S2 state, very rapidly converts part of its electronic excitation energy to vibration and rotation, becoming a vibrationally and rotationally “hot” S1. Everything else that happens in Chl starts from S1, then. One beautiful phenomenon is fluorescence. In virtually the inverse of absorbing a red photon, it emits a red photon. Viewing a solution of Chl molecules excited by white light, say, a beam of ordinary sunlight, one sees a deep green solution edged with deep red.
That process has a natural lifetime of a few nanoseconds, depending exactly on the environment (solution, natural position in a leaf cell organelle, etc.). In photosynthetic green plants, there are four other fates possible. One is a sort of continuation of the process of moving down in energy to make heat; S1 just converts the rest of its electronic energy to vibration and rotation, that is, to heat. In Chl, this is quite slow, so almost all the Chl molecules in state S1 miss this path and take one of the more interesting paths. Chlorophyll is unusual this way; most other organic molecules just do the conversion to heat; they don’t fluoresce often or go on to photochemistry. A second alternative fate is swapping the excitation energy with another chlorophyll molecule nearby, facilitated by the first molecule acting like a radiating dipole of charges to excite the other molecule. “Excitons,” or packets of excitation energy traded among molecules, move among the 300 or so Chl molecules gathered in each photosynthetic unit in a chloroplast, an organelle within a leaf cell. A third possible fate, reserved for special Chls at a photosynthetic reaction center, is the use of the electronic energy to drive the separation of an electron at the reaction center, starting the “dark” processes of photosynthesis. The migration of excitation among the 300 Chl molecules to the reaction center is very fast, so that a functioning leaf has most of the energy going into electron transfer and subsequent photochemistry rather than to fluorescence. The fourth alternative fate is having the molecule convert some electronic energy while flipping an electron spin to end up in the first excited triplet state, T1. This is a slower and thus rarer process, because it is forbidden by the selection rules for transitions using the electric field, which can’t flip a magnetic spin. Other, slower interactions do allow this to happen to a small fraction of the Chl molecules in S1.
That’s adverse in plants because this state of Chl can interact with a nearby oxygen molecule in its triplet ground state to make excited singlet oxygen. Singlet O2 is a potent oxidizer that can chew up cell contents. (You can make this, with precautions, by mixing ordinary hypochlorite bleach and hydrogen peroxide. Do it in a dark room to see the orange glow from a very rare process, two molecules releasing energy into a single photon. Work fast, don’t breathe, and air out the room; singlet O2 is toxic to us as it is to plants.)
By virtue of its chemical structure and attendant electronic structure, chlorophyll is an exceptional molecule. Its S1 state is so stable that it drives photochemistry much faster than it dumps electronic energy as heat, or fluoresces, or becomes a damaging triplet. It also can transfer energy to its fellow Chls, and it can join in the photochemistry of electron transfer. There is no other molecule like it in nature, perhaps even on other planets, though I won’t bet on that. There are only minor variants, Chl a, Chl b, and BChl, the bacteriochlorophyll. Plants are green because of this very special molecule (and some bacteria are purple, with their BChl). There are auxiliary pigments, the carotenoids, that absorb light in the green and yellow regions of the spectrum, if incompletely. They can transfer some of their electronic energy to Chl to make photosynthesis a bit more efficient in using light.
Green plants suffer various stresses, such as heat or drought. These reduce the ability to keep photochemical reactions going. The plant responds in a number of ways, one being a biochemical change in the auxiliary pigments to dump energy as heat rather than pile up excitation at a poorly active reaction center and risk making more of the triplet state, among other things. The response also changes the amount and time course of absorbed radiation given off as fluorescence. The changes are diagnostic of plant stress and provide very useful measures with appropriate equipment. That equipment can be on the ground or on a satellite. See the earlier section on using the dark Fraunhofer lines in the solar spectrum to detect this fluorescence. The method could be used on an exoplanet, if we could ever get close to one… and provided that a similarly magnificent molecule as chlorophyll had evolved there.
Our luck in the Universe: Chlorophyll evolved to drive biological energy capture. The chances for evolution of a comparably exceptional chemical on exoplanets are difficult to estimate.
Another flux of electromagnetic radiation is critical for the planet’s greenhouse effect – it’s the longwave or thermal infrared radiation (TIR). At the temperature of Earth or other nice planets for life, the radiation lies in a range of wavelengths that essentially does not overlap the solar/stellar spectrum. At the mean surface temperature of the Earth, 15°C or 288K, 98% of TIR lies between 5.5 micrometers and, well, infinity. In this range of wavelengths totally negligible TIR is coming down from the Sun to Earth.
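The claim that 98% of a 288K body's emission lies beyond 5.5 micrometers can be checked by numerically integrating the Planck blackbody spectrum. A minimal sketch in Python (the constants, function names, and integration grid are my choices, not from the text):

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def band_fraction(T, lam_lo, lam_hi, n=5000):
    """Fraction of total blackbody emission between lam_lo and lam_hi,
    by trapezoidal integration on a log-spaced wavelength grid."""
    def integral(a, b):
        pts = [a * (b / a) ** (i / n) for i in range(n + 1)]
        vals = [planck(p, T) for p in pts]
        return sum(0.5 * (vals[i] + vals[i + 1]) * (pts[i + 1] - pts[i])
                   for i in range(n))
    # 0.1 micrometer to 1 mm covers essentially all emission at 288 K
    return integral(lam_lo, lam_hi) / integral(1e-7, 1e-3)

# Fraction of a 288 K surface's emission at wavelengths beyond 5.5 micrometers:
print(f"{band_fraction(288.0, 5.5e-6, 1e-3):.3f}")  # ~0.98
```

The integration confirms the essentially complete separation of the solar (shortwave) and terrestrial (thermal infrared) spectral bands.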
We will then look at TIR radiating from the surface and transiting the atmosphere. This radiation scatters off the molecules of air so weakly that we only need to account for it being absorbed… and then added to by some of the molecules emitting new TIR. The absorption and emission lines are complex; please see the previous figure.
Each major band of absorption is attributable to one of the greenhouse gas molecules, and in one of its modes of vibration and rotation (no electronic energy level jumps – the energy of photons in this region is too low). One major absorption band for CO2 is at a wavelength of 15 micrometers (μm) (also given by its reciprocal, 667 “reciprocal centimeters” or cm-1). It comes from adding energy in a quantized amount to a bending vibration. The band at a shorter wavelength and correspondingly higher energy lies around 4.26 μm (2349 cm-1); it comes from an asymmetric stretch:
Bending mode (the δs are partial charges). Asymmetric stretch.
In both sketches, the right-hand side indicates the displacement directions of charges.
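The two ways of quoting a band position, wavelength in micrometers and wavenumber in cm-1, are simple reciprocals, since there are 10^4 micrometers in a centimeter. A quick sketch (the function name is mine):

```python
def wavenumber_cm1(wavelength_um):
    """Convert a wavelength in micrometers to a wavenumber in cm^-1:
    there are 10^4 micrometers per centimeter."""
    return 1e4 / wavelength_um

print(round(wavenumber_cm1(15.0)))   # CO2 bending mode: 667 cm^-1
print(round(wavenumber_cm1(4.26)))   # CO2 asymmetric stretch: ~2347 cm^-1
```

(The small difference from the quoted 2349 cm-1 just reflects rounding of the 4.26 μm wavelength.)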
The bands are spread out over a range of wavelengths from several causes. One is that there are simultaneous transitions in rotational levels, at smaller energy increments up and down, making for a modest range of total transition energies. Other causes are collisions that temporarily distort the molecules, and the Doppler shift in frequency (some molecules are moving toward us, others away, yet others in other directions). Finally, there are overtone and combination bands, from jumps of two steps in vibration in one mode or of single steps in each of two modes.
Water molecules are even more active as absorbers, since there’s much more water in the atmosphere than there is carbon dioxide. Water can be as much as 5% of the molecules in hot, humid air, while CO2 is still hovering around 0.04% or 400+ parts per million. (Yes, water is the dominant greenhouse gas, but its level is set by the amount of the other gases, as I discuss later.) The main vibrational transitions for water are its own bending mode at 6.47 μm (1545 cm-1) and an asymmetric stretch at 2.66 μm (3756 cm-1 – but that’s in the SWR region, radiation essentially not present in the upwelling TIR).
Bending mode (the δs are partial charges). Asymmetric stretch.
In both sketches, the right-hand side indicates the displacement directions of charges.
Both water and CO2 have other modes of vibration that do not cause a net directional movement of charge of their electrons and nuclei. As an example, in CO2, the oxygen atoms at the ends have a partial negative charge, attracting more electron density than carbon, which then has a net partial positive charge. The symmetric stretch of CO2 moves the oxygen atoms outward, but each one’s motion cancels out the other’s as far as moving charge from the center. This mode of vibration can’t respond to the oscillating electric field of light; it is inactive in absorbing TIR.
There are also other greenhouse gas molecules – methane, nitrous oxide, ozone, etc. These are studied extensively for climate analysis and prediction.
How thermal infrared radiation moves through the atmosphere
The greenhouse gases (GHGs) absorb thermal infrared, removing it from the upwelling flow of radiant energy escaping the Earth. They then undergo the reverse process, re-emitting the energy… but in a variety of directions. So, nearly half of their emission is effectively downward. I recount this in a simple picture:
With some TIR returning to Earth, the Earth’s surface has to become warmer than what’s expected from the SWR energy input. As a summary: on average over regions, night and day, and seasons, the average shortwave solar energy absorbed on Earth is 239 watts per square meter. To emit that much power per area as TIR, the temperature of the Earth as viewed above its atmosphere has to be 255K or -18°C. It’s achieved by the average surface being warmer, at 288K or 15°C, a gain of 33°C! At that temperature, the surface radiates at a rate of 390 watts per square meter. The GHGs send back about 151 W m-2 on average, for a net output of, again, 239 W m-2. There’s a great deal more of interest about the greenhouse effect, to be covered shortly – feedbacks to stabilize it (or not), the impossibility of having just water maintain it, ditto the impossibility of having ammonia as the ocean and the greenhouse gas, and more. Another thing to note is that the sky is generally cold as a body radiating TIR back to us. This is especially true for a clear sky, which may have few molecules that are TIR active and able to radiate TIR down toward us. If we’re not cocooned in a car or a shelter, a good part of our personal energy balance is the often marked deficit in TIR.
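The bookkeeping above is easy to verify with the Stefan-Boltzmann law, F = σT4. A minimal sketch (function names are mine):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(T):
    """Blackbody emission, W/m^2, at temperature T (K)."""
    return SIGMA * T**4

def radiating_temperature(F):
    """Temperature (K) a blackbody needs in order to emit flux F (W/m^2)."""
    return (F / SIGMA) ** 0.25

print(round(radiating_temperature(239.0)))  # 255 K: Earth as seen from space
print(round(emitted_flux(288.0)))           # 390 W/m^2: the 288 K surface
```

The 33°C difference between the 288K surface and the 255K top-of-atmosphere view is the greenhouse warming.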
Human energy balance on a hot day. Half of the person’s effective area receives thermal infrared radiation from surroundings at a temperature slightly lower than air temperature (vegetation cools by transpiration, e.g.). The surroundings exchanging TIR with the human give the human a modest net input of 510 - 478 = 32 W m-2 over 1 m2 of skin area. The half of the person’s area exposed to downwelling sky TIR gets that at an effective temperature of 15°C. The net exchange of TIR with the sky yields 390 - 478 = -88 W m-2. Convective heat transfer from the air at 60 W m-2 nearly balances the heat budget. I omit the modest heat loss from water evaporated from the lungs as the person breathes, plus the large heat loss from sweating.
In southern New Mexico on a clear day the effective sky temperature is commonly 40°C lower than the air temperature. Of course, the spectrum of TIR coming down from the sky doesn’t approximate the nice, even form of a black body; emission is in distinct bands at different intensities (spectral radiances).
David Turner, COPS Summer School Seminar notes, 27 July 2007.
A fair way to state it is that the sky radiates as much as 150 watts per square meter less than the ground, trees, etc. around us. We bald people, when hatless, notice this readily. So do moose wandering in burned-out forests. They’re exposed to the cold sky and can suffer long deficits in heat input that, combined with other stressors, can lead to their death.
Air has some bigger things in it than simple molecules
Our atmosphere always has aerosols in it – clumps of many molecules, commonly in a spherical shape. Ordinary water clouds are big areas of water condensed into a great number of individual droplets. When skies are not clear because of clouds or other aerosols, the propagation of thermal infrared radiation through the atmosphere is more complex than described by individual molecules absorbing or scattering it. Absorption of radiation by solids and liquids is more than a simple sum over the absorption by individual molecules. Radiation is redirected (refracted) by the whole particle, and the energy states of molecules are perturbed by the adjacency of other molecules. Many experimental and theoretical studies of energy balance for climate change focus on the role of aerosols. Depending on their gross absorption rates (dark, light) and altitude in the atmosphere, they may increase or decrease the greenhouse effect. For water clouds, one direct effect on energy balance at the surface is that they radiate TIR at a rate set by the temperature of their bases, irrespective of the temperature profile within them. They are so thick optically in the TIR that the final radiation coming down from them is dominated by that last layer. We might experience a clear sky with a very low TIR output, but a passing cloud at a low elevation above us is much warmer. In humid climates or humid times, clouds filling the sky also cap the upward movement of TIR. They keep the surface warm; nighttime cooling is minimal. While we in New Mexico commonly expect a 16-18°C drop from afternoon to dawn, we would wake up in hot, humid, cloudy northern Vietnam refreshed by only several °C of decline from the previous day’s heat.
Clouds figure locally in climate (local reduction in light (SWR) penetration to the surface, trapping TIR …) and globally because heat is transported long distances by wind and ocean currents. So, too, do surface conditions of reflectivity (e.g., ice). Their role in the greenhouse effect is necessarily complicated, so that I leave it to an abundance of other authors and their texts.
Light possesses momentum, polarization, and a geometry of propagation. The momentum of light can push gases off a planet (aided by stellar wind) – or push extremely lightweight sails on a spacecraft to accelerate it through space (demonstrated on a small scale, as by JAXA’s IKAROS solar sail; moreover, the momentum imparted to the Voyager spacecraft by sunlight and by their own emission of thermal radiation had to be included to account for their final velocities!). The polarization of light indicates details of how a light beam got reflected off small particles or large areas such as seas. That’s behind the utility of polarized sunglasses, of course. The geometry of light’s propagation is the property most immediately relevant to the habitability of a planet. Light travels in straight lines in free space, while subject to being bent by refraction in gases, liquids, and solids – and, a tiny bit, in strong gravitational fields. The bending of light by the Sun’s gravity was detected during a solar eclipse, helping to prove the validity of the theory of general relativity. When light from a sun spreads out from its surface, it keeps spreading out, getting less intense as 1 over the square of the distance; that’s critical for a planet’s rate of interception of stellar energy as a function of its distance from its sun.
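The inverse-square dilution of starlight is worth a one-line calculation. A sketch, assuming the standard solar constant of about 1361 W/m2 at Earth’s distance (the function name is mine):

```python
S_EARTH = 1361.0  # solar constant at 1 AU (Earth's distance), W/m^2

def stellar_flux(distance_au):
    """Sunlight intensity falls off as 1 over the distance squared."""
    return S_EARTH / distance_au**2

print(round(stellar_flux(1.52)))  # Mars's distance: ~589 W/m^2
print(round(stellar_flux(0.72)))  # Venus's distance: ~2625 W/m^2
```

These two fluxes set the stage for the later comparisons of Mars and Venus with Earth.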
Light reflects off shiny surfaces – mirrors and allied structures – such that the exiting beam makes the identical and opposite angle to the surface normal as the incoming beam. For rougher surfaces made up of many facets, light scatters into many directions. It may be uniformly scattered into all exiting angles. Surfaces with regularly repeating patterns can reflect light that interferes with itself at a regularly spaced range of angles, which is the process of diffraction. Each wavelength has constructive interference at a unique angle. The result is to spread out the spectrum of light in an often beautiful display.
Every habitable planet has an atmosphere. Every habitable planet has a greenhouse effect, the trapping (really, delay) of thermal radiation leaving its surface, which raises the surface temperature above that of the case without the greenhouse gases or GHGs that intercept the TIR. Every habitable planet has water, as I argue at length shortly below. Thus, every habitable planet has water vapor in its atmosphere. Water vaporizes (has a significant vapor pressure) at physiological temperatures, so it will contribute to a major greenhouse effect. It adds more than half of the total greenhouse warming in current Earthly conditions, but the vapor concentration in air is driven “passively” by the effect of other greenhouse gases; water alone cannot give a stable greenhouse effect.
Other greenhouse gases (GHGs) absorb in the thermal infrared, that is, the range of wavelengths of electromagnetic radiation emitted by bodies (organisms and their host planet) at nice physiological temperatures. As just described, part of the thermal emission “aimed,” as it were, to leave the planet’s surface gets reradiated back. Thus, the planet’s surface is always at a higher temperature than that of the top of the atmosphere, which is the radiative temperature sensed from space. The GHE has been demonstrated and discussed, now with passion and alarm, since Svante Arrhenius quantified it for CO2 in 1896. All that’s needed is a layer that passes visible and near infrared light (shortwave radiation) well but absorbs thermal radiation with some effectiveness.
The simplified model given earlier with the Earth as an example considers:
- Incoming sunlight (shortwave radiation, SWR) hits the planet, with a fraction a reflected back to space (about 30% for the Earth). The rest is absorbed and converted to heat. That makes 0.70*342 = 239 watts per square meter averaged over all parts of the globe.
- The warm surfaces of the planet emit longwave radiation (thermal infrared radiation, TIR), at various rates and combinations of wavelengths, but we’ll lump them into TIR at the average surface temperature, 15°C or 288K. We get a surface emission rate as a black body of F = σT4 = 390 watts per square meter.
- The TIR that leaves the Earth’s surface propagates out but a fraction f gets absorbed in the atmosphere, warming it. The absorption is done by greenhouse gases, primarily water vapor, but the only reason that appreciable water vapor is in the air is the presence of the primary driver greenhouse gases, CO2, methane, nitrous oxide, stratospheric ozone, and some minor players. We’ll get into the way that other gases drive the amount of water vapor, later.
- The atmosphere then reradiates all that it absorbs (or else it would get ever hotter). In the simple model, half of the reradiation goes out to space and half goes back to the surface to warm it – making it warmer than just sunlight would make it. In reality, more is emitted back down than out to space, as one can model with greenhouse gases present in successive layers.
- The surface and the atmosphere have each been lumped into single, homogeneous layers, wrapped around the circumference of the Earth. This is obviously a great simplification, but the idea of greenhouse trapping is conveyed. More and more sophisticated models can be built to explain more and more details, including the variation with geographic position, time of day, and season.
A stronger absorption of TIR raises the temperature. In the simple model we can cut the fraction escaping directly from 0.2 to 0.1. Then, a fraction 0.9 is absorbed and half of that, or 0.45, returns to the surface; the net escape is only 0.55. We divide 239 W m-2 by 0.55 to get a surface emission rate of F = 434 W m-2. We can solve for the new blackbody temperature of the surface from F = σT4, or T = (F/σ)1/4. That’s 296K or 23°C, an eight-degree rise; that would greatly reset the levels of plant (e.g., crop!) stress, patterns of rainfall, etc. This is only a simplified one-layer model; multiple layers of TIR-absorbing gases, geographical variation, and changes in cloud reflection in a more humid atmosphere would have to be accounted for – and are – in comprehensive climate models.
Does a planet’s greenhouse effect suffice for habitability?
The radiative input from the planet’s star and the greenhouse effect set the surface temperature… and so much happens. In the simplest terms, the average surface temperature on the planet is the sum of the top-of-the-atmosphere temperature, TTOA, and the greenhouse warming, which we might term ΔTGHE. It’s not that simple, of course. The greenhouse effect helps set TTOA in several ways. For one, a high GHE helps to set the mean cloudiness. That affects the albedo of the planet and thus TTOA. Also, there is not one surface temperature but an elaborate pattern of temperatures in space and time, with the patterning done by the angles of incidence of radiation at various points and by processes in air and water that move heat around and affect cloudiness, ice formation, etc. We’ll get to that later. Note particularly that the star’s power output rises with age. This causes surface temperatures to rise, revamping… or threatening… the planet’s life forms unless the GHE can be “reset.” That happened on Earth, traumatically but successfully, as is apparent in a detailed look.
A greenhouse effect can be:
- Too low to compensate for a low TTOA, or even “lost,” as is the case for Mars
- Too high, either moderate but paired with a high TTOA, or runaway as is the case of Venus
- Just right, the Goldilocks level for the moment, but it might be:
- Inherently unstable, for a variety of reasons – causing havoc for life forms
- Readily shifted by abiotic and biotic processes on the planet
- Unable to be reset sufficiently as the host star’s output continues to increase over time
There are then deal-breakers and deal-makers for the habitability of a planet, for any of the useful definitions of habitability. Habitability may come and go. The continuity of habitability of the Earth for any form of life has been maintained only tenuously in times of mass extinctions and of Snowball Earth. Life on Earth only evolved to its current complexity because evolution didn’t have to start from scratch from new origins of life each time. On other, potentially habitable planets (PHPs) habitability might come and go and evolution would be far less dynamic. No little green people, for one.
We’re very familiar with the current greenhouse gases on Earth… and how we’re changing them. They are
- CO2, the dominant driver for the last 2 billion years or so, with its ups and downs and critical cycling among plants, volcanoes, ocean waters, and carbonate deposits; massively increasing from combustion of fossil fuels and the burning and decomposition of biomass when land is cleared, as for converting tropical forests for subsistence agriculture or, more so, for ranches and cropland.
Kling, Univ. of Michigan
- CH4 (methane), a natural product of anaerobic (oxygen-lacking) bacterial metabolism on Earth (including in cow stomachs, rice paddies, and natural wetlands), now increased in concentration by releases in natural gas production and transport (leaks are rampant); up to about 2.2 billion years ago it was the major greenhouse driver;
Methane leaks in the Boston, MA area. N. G. Phillips et al., Environmental Pollution, 2013.
- N2O (nitrous oxide), another natural product of anaerobic bacterial metabolism, now increasing from heavy (over-)use of nitrogenous crop fertilizers; ammonia-based fertilizers can be oxidized by bacteria with N2O as a by-product; nitrates applied as such or created by ammonia oxidation can move to anaerobic wet areas and be reduced to N2O or harmless N2;
- O3 (ozone), naturally in the stratosphere; created by photochemical breakup of ordinary O2, with the freed O atoms combining with more O2;
Note that I did not list water vapor as a driver greenhouse gas, even though its absorption of thermal infrared is about 55% of the total greenhouse effect.
My reason is that water vapor is almost only a passive follower of the other gases. In turn, the reason for that is that its own activity is very strongly dependent on temperature, while the other gases act virtually independently of the temperature. A water-vapor-only GHE would be unstable. The concentration of water vapor at equilibrium depends exponentially on temperature, following the Clausius-Clapeyron relation, e_sat(T) = e_0 exp[(L/Rv)(1/T0 - 1/T)], with L the heat of vaporization of water and Rv the gas constant for water vapor.
Near the Earth’s mean temperature of 15-16°C, the concentration (partial pressure) rises about 7% for every 1°C rise in temperature. Suppose that we had no other greenhouse gases and that the Earth’s surface temperature were raised enough to increase the amount of water vapor in the air to give us the current GHE. The temperature rise needed would be big, since it might also give us more of the reflective clouds. In any case, a drop from that temperature would generate a downward spiral: less water vapor, so less GHE and a lower temperature, so even less water vapor, so an even lower temperature. A water-only greenhouse is unstable. So, the “permanent” (noncondensable) gases are the drivers.
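The roughly 7% per degree figure follows directly from the Clausius-Clapeyron relation: the fractional rise of saturation vapor pressure per degree is L/(Rv T2). A minimal sketch (constants and function name are my choices):

```python
L_VAP = 2.5e6   # latent heat of vaporization of water, J/kg (near 15 C)
R_V = 461.5     # specific gas constant of water vapor, J/(kg K)

def cc_fractional_rise(T):
    """Clausius-Clapeyron: d(ln e_sat)/dT = L / (R_v * T^2), the fractional
    rise in saturation vapor pressure per 1 K (or 1 C) of warming."""
    return L_VAP / (R_V * T**2)

print(f"{100 * cc_fractional_rise(288.15):.1f}% per degree")  # ~6.5%
```

At 288K this gives about 6.5% per degree, in line with the roughly 7% rule of thumb (the rate is a bit higher at cooler temperatures).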
A greenhouse effect then needs a starter, a noncondensable gas such as methane or CO2. Of course, the only “cold starts” with no significant water vapor were after the episodes of Snowball Earth. Other than these, Earth has always had a water-abetted GHE. Consider, then, a hypothetical “near” cold start. Let’s take a trapping fraction for exiting thermal infrared of about 40% of the value near 55% that held after the Industrial Revolution began and just prior to our fossil-fuel feast of today. A trapping fraction of 22% would have given a very modest GHE. The Earth’s mean surface temperature might attain -4°C. That’s an average. A significant area of the tropics, in particular, would have warmer water. That would add water vapor as a GHG. In turn, the Earth would warm more, adding more water vapor and more of a greenhouse effect. The fraction of trapped TIR would stabilize near present values and so would temperatures. The stabilization would happen because no greenhouse gas, water or otherwise, blocks all the parts of the TIR spectrum. There are “windows” that allow a rather persistent fraction of TIR to leave, limiting the GHE. Only added GHGs of other types such as CO2 can help to strengthen the blockage in the spectral regions where they absorb, thus increasing the GHE.
Mars has more CO2 in its atmosphere than does Earth, not just as a fraction of its admittedly low air pressure but in absolute terms, as the column density above the surface. However, the GHE on Mars is weak, as Mars lacks water vapor to be driven into the atmosphere and amplify the effect of CO2. That’s another plus for water! Mars even gets cold enough to have about 25% of its CO2 condense out as dry ice in the winter. We’ll get to why small planets or hot planets lose their water!
By and large, we like Earth’s current climate – temperatures, precipitation patterns, winds and ocean currents. In part this is a parochial view. We’ve adapted to the conditions. You might ask your distant ancestors, however, about the Ice Ages. With our huge human population, we’re also locked into this climate to support our agriculture, which covers 20% of the land, or more than half if you include forests and grazing areas. Our agricultural practices are quite finely tuned to the current climate. Climate scientists are right to give dire warnings about changes in the capacity of the food system, among other support systems, that are projected with climate change driven by our own actions in using fossil fuels, clearing forests, and generating methane and N2O with our own agriculture.
Earth has a radiative balance with a mean top-of-the-atmosphere temperature, TTOA, of -18°C, certainly unacceptable for human life and most multicellular life. Our GHE amounts to the addition of +33°C, to a mean surface temperature of +15°C. Could we have reached this rather equable state with a higher TTOA (hotter Sun) and a lesser greenhouse effect, or vice versa, a lower TTOA (cooler Sun) and a higher greenhouse effect? Actually, Earth has experienced the latter case in deep time. About 2.2 billion years ago the Sun was only about 85% as powerful as today, but the Earth had a potent GHE with methane and other gases that are highly chemically reduced (in contrast to oxidized gases like CO2). Then the GHE went the other way, too low. Cyanobacteria polluted the air with oxygen, truly the worst case of pollution ever. This oxidized the methane to far less potent carbon dioxide and the Earth froze, perhaps to the equator, nearly ending all life. That’s Snowball Earth. There’s some more detail later.
A far too low greenhouse effect, now: the case of Mars
Mars now has CO2 as almost the only gas in its atmosphere. Its concentration is much higher than it is on Earth but still modest. Mars doesn’t have water to amplify its greenhouse effect. Being smaller than the Earth it has lower gravity. It may not have accumulated as much water or methane and other volatiles as did Earth. In any event, it readily and steadily lost its water, primarily by water being split into hydrogen and oxygen; hydrogen escaped to space as tiny fractions of the molecules reached Mars’ low escape velocity. Over time this led to almost total loss, leaving only some occluded water in soils. This was a lethal blow, literally. On average Mars is 1.52 times as distant from the Sun as is Earth, giving it a much lower average TTOA of -63°C. If Elon Musk left any antifreeze in his Tesla reaching out beyond the orbit of Mars it wasn’t enough. Combine the low TTOA with a GHE that virtually disappeared and the habitability goes to zero, even for microbes. I hold out minuscule hope that even microbes will be found on Mars. It got the short end of the stick radiatively and in mass. No characters from War of the Worlds were ever there. The only deaths on Earth linked to Mars were some scores of people panicked by a 1938 radio broadcast of a play of that name. A corollary effect to the loss of hydrogen is that the surface of Mars got stuck with the oxygen of the original water. It has a very oxidized surface, unlike that of Earth. It lost its chance for a potent methane greenhouse effect.
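Mars’s frigid -63°C top-of-atmosphere temperature follows from balancing absorbed sunlight, spread over the whole sphere, against blackbody emission. A minimal sketch (function name mine; the Bond albedos of about 0.30 for Earth and 0.25 for Mars are standard rough values, not from the text):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0  # solar constant at 1 AU, W/m^2

def toa_temperature(distance_au, bond_albedo):
    """Radiative temperature of a planet: absorbed stellar flux averaged
    over the whole sphere, (1 - albedo) * S / 4, equals sigma * T^4."""
    absorbed = (1 - bond_albedo) * (S_EARTH / distance_au**2) / 4
    return (absorbed / SIGMA) ** 0.25

print(round(toa_temperature(1.00, 0.30) - 273.15))  # Earth: close to -18 C
print(round(toa_temperature(1.52, 0.25) - 273.15))  # Mars: ~ -63 C
```

The same function, fed Venus’s distance and its very high cloud albedo, gives the chilly Venusian radiative temperature discussed next.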
A far too high greenhouse, now, but not earlier: the case of Venus
Venus is almost a look-alike planet to Earth. Sure, Venus is closer to the Sun, but it closely resembled Earth at the start, being rocky and of a size very comparable to Earth’s. It offers fantasy writers interesting possibilities, such as C. S. Lewis with his novel, Perelandra. At present, Venus is cold at the top of its atmosphere because its dense, white clouds reflect a whopping 76% of impinging solar energy. Its radiative temperature is below the freezing point of mercury, at -43°C. Not Siberia in winter or Vostok, Antarctica, but chilly. Below its dense clouds of sulfuric acid droplets it’s hellishly hot, from the intense greenhouse effect of a carbon dioxide atmosphere about 93 times as massive as Earth’s. It doesn’t even need water vapor as an amplifier to boost the surface temperature to a record 467°C, hot enough to melt lead (even a lead-based Die-Hard battery would not have worked for a car on Venus). It takes special electronics made of silicon carbide, not silicon, to persist on Venus for more than snapshots of the environmental conditions.
Venus lost its water in a runaway greenhouse effect. Refined models indicate habitable conditions from about 2.9 billion years ago to 750 million years ago, based on the Sun’s output during those times and much information about the likely clouds and rotation of the planet (see Geophysical Research Letters, 2016, Way et al.). The Sun’s output grew over time, as it does for all Main Sequence stars. When the energy flux density warmed the surface too much, water vapor reached high in its atmosphere, to be dissociated by UV light into H atoms and, finally, oxygen. The light H atoms were hot enough that a small fraction reached escape velocity. Ultimately all the hydrogen, and with it all the water, was lost. This will happen to the Earth; in a billion years the surface temperature will hit about 47°C and the water loss will run away. There is, or was, a concern that we could advance the schedule into our own tenure on Earth by burning too much fossil fuel. NASA scientist James Hansen suggested the possibility, though the more recent consensus is that we might only push ourselves past a tipping point to lethal temperatures, but with lots of water around. Cold comfort, or, hot comfort. For a bit more detail on the runaway greenhouse effect, search for the two models, the Komabayashi-Ingersoll limit and the Simpson-Nakajima limit.
Lesson from a planet with a runaway greenhouse effect: The Goldilocks orbital distance, with not too much starlight (Venus) and not too little (Mars), is not that wide a range. Of course, astrobiologists who look at habitability know this, and they winnow the prospects among the exoplanets that have been sighted. That still leaves a lot of planets around a lot of stars that meet this very, very minimal criterion. Fold it in with all the other criteria or lessons learned and it’s very long odds against habitable planets, and stunningly small odds for any nearby stellar systems. Some relief is offered by the abundance of stars moderately smaller than the Sun. These stars have much longer lifetimes and much slower growth in energy output, offering their planets a long time for biological evolution.
The greenhouse gases on Earth currently have been noted here earlier and copiously in the literature, websites, screeds, what have you. Water is the biggie but, as has also been noted copiously, it can’t generate a stable GHE by itself. The “driver” GHGs are CO2, CH4 (methane), N2O (nitrous oxide), O3 (ozone), and minor gases, some of them industrially produced, such as chlorofluorocarbons. The current mix by concentration is not what was delivered to the early Earth, so we need a closer look, plus a view toward what exoplanets might have received.
We have water because we have hydrogen and oxygen, naturally. Hydrogen is ubiquitous in the Universe; oxygen, not so. Oxygen is created by nuclear fusion reactions in older or more massive stars, as noted earlier. Our Sun, by itself, has not made much oxygen and certainly has not sent it to Earth. Oxygen came from stars that preceded the Sun and that exploded to spread oxygen… and all the other chemical elements precious to us. Both a supernova or two and a neutron star merger (such as captured by the LIGO detectors) are candidates. The elements suffused the gaseous pre-solar nebula from which the Solar System formed. They got incorporated into dust grains, asteroids, and planets. They got blown around by the solar wind and by the pressure of radiation coming from the new Sun. Water itself formed by natural chemical reaction of hydrogen and oxygen. It basically got evaporated away from hot Mercury, less so from Venus, Earth, and so on. Our Earth got a modest share initially and then more from impacts of asteroids (evidence says less so from comets) and perhaps from the collision of a Moon-sized body (that may also have formed our Moon from a proto-Earth!). The evidence for which sources were important comes from determinations of the water content of meteorites (and now, we’ll see, from asteroid measurements by Japanese and American spacecraft!). It also comes from measurements of the different stable isotopes of hydrogen (1H, ordinary hydrogen; 2H, deuterium) and of oxygen (16O, 17O, 18O) that reveal natural processing of the elements in space. We were lucky, getting much water but not too much. My NMSU colleague, Jason Jackiewicz, points out that the balance was precarious, at least for us terrestrial animals. Too much water would have given us a water-only surface.
Carbon came to our Earth as volatile compounds, primarily methane and CO2. Carbon was reasonably abundant in the nebula that gave rise to the Solar System, a consequence of the CNO cycle in stars more massive than the Sun. Earth retained a modest fraction of the carbon that came its way originally; it’s now about 1/10 the fraction of matter that’s found in CI chondrite meteorites that have delivered early Solar System samples to us. Carbon is then only the 13th most abundant chemical element on Earth by mass and only 12th by number of atoms. A lot of Earth’s carbon lies in the Deep Earth (mantle and core, vs. crust). Much has migrated to the core, making it sort of a low-carbon steel (with sulfur and other elements). Near the surface a lot of it got evaporated (as carbon compounds such as methane and CO2) by the high temperature of the coalescing Earth, and a lot got blown off by the solar wind. Note that Mars kept more carbon, being farther from the Sun. There is also evidence that a good part of the Earth’s current store of near-surface carbon came from that collision with a Moon-sized body that brought us some water. The impact(s) would have brought a lot of volatiles up from the mantle, leaving them to condense preferentially near the surface.
Carbon’s distribution in chemical compounds has been extensively reworked chemically and geologically over time. Most carbon evidently arrived as methane, but now it’s mostly carbon dioxide. That change is due to life, and specifically to photosynthesis, which evolved perhaps 3 billion years ago. The evolution of oxygen-liberating photosynthesis was the smoking gun (or the oxidizing gun) in the Great Oxidation Event and in Snowball Earth. Oxidation of methane to CO2 permanently changed our GHE from high to low. You might say that was just in time, because the Sun keeps getting more radiant and making things toastier on Earth (very slowly). The changeover was traumatic, with the Earth nearly freezing to death. Hardy microbes survived it to lead to us evolutionarily.
Carbon has also been cycling around the Earth since day one. That’s among the atmosphere, the biosphere, the oceans, the mantle, and even the core. Carbon dioxide forms carbonic acid and carbonates. Carbonic acid in rain slowly dissolves rock components to make carbonates, some of which reach the oceans to form sediments. The subduction of tectonic plates carries the carbonates to the mantle. From there the carbon can be emitted back to the atmosphere as CO2 in volcanic emissions. Carbon dioxide also is a substrate for photosynthesis. Largely, carbon’s incorporation into biomass is balanced by the process of respiration (meaning oxidation of sugars and such in biomass with oxygen) by plants, animals, fungi, bacteria, etc. Some biomass is resistant to decomposition and thus to being respired away; hence, it formed our fossil fuels. The consequence is that the level of CO2 in the air is in a very long-term downward trend. Plants are, in a real sense, causing themselves trouble. At least twenty separate times, some plants evolved the C4 pathway of photosynthesis, which is much more effective in taking up CO2 for photosynthesis at low CO2 levels in air.
Methane is still around, for sure. It’s a natural product of anaerobic (oxygen-lacking) bacterial metabolism on Earth (including in cow stomachs, wetlands, rice paddies, and other deoxygenated places such as deep soils). Its atmospheric concentration is increasing from releases during natural gas production and transport (leaks are rampant). It has been noted that the presence of methane in the atmosphere of a planet that also has oxygen in its atmosphere should be taken as a sign of the presence of life there.
Technological humanity is now releasing massive quantities of CO2, both by burning fossil fuels and by clearing land, whether by burning or by letting cut vegetation decay. Even pre-industrial farming, particularly of rice in flooded paddies, contributed to methane in the atmosphere. Technological humanity is also adding methane in major quantities as it leaks from gas and oil wells and probably predominantly from the natural gas distribution system.
Nitrous oxide is an interesting case. Nitrogen is actually quite rare on Earth as a whole. It’s abundant in the atmosphere but not in the solid Earth, and pitifully low in concentration in the oceans (a reason that land, with only 40% of the oceans’ area, performs about twice as much photosynthesis as the oceans do; they’re a kind of watery desert in several senses). By number of atoms nitrogen is 23rd in abundance in the Earth’s crust, even behind titanium, fluorine, chlorine, strontium, vanadium, and lithium! Earth lost a lot of its N because its main stable form is N2 gas, light and easily blown away. There are natural processes that make oxidized forms of nitrogen; one is nitric oxide, formed by lightning and volcanism. However, N2O is harder to make, and it’s almost wholly made by living organisms that decompose nitrogen-containing organic matter in anaerobic conditions in soil and water. So it, too, is a sign of life.
Nitrous oxide is now increasing in concentration from heavy (over-)use of nitrogenous crop fertilizers; ammonia-based fertilizers themselves can be oxidized by bacteria with N2O as a by-product; nitrates applied as such or created by ammonia oxidation are very mobile in soil solution. They can move to anaerobic wet areas and be reduced to N2O or harmless N2. We now trace most N2O production to ammonia-based industrial agriculture. Ammonia, NH3, or ammonium ion, NH4+, is the ultimate currency for nitrogen in the metabolism of living cells. It’s a fast pass-through to making proteins, nucleic acids, and other key biochemicals. Perhaps surprisingly, NH3 is appallingly hard to make, even for living organisms. Some bacteria make ammonia by what’s termed (di)nitrogen fixation, at the greatest metabolic cost of any biochemical reaction. Its natural rate of production is thus low, at about 80 million metric tonnes annually. Technological humanity now makes ammonia in amounts roughly double those made by bacteria. That’s for nitrogenous fertilizers as well as explosives and specialty products (OK, nitric acid is a major product, too). Ammonia is rather rare in the atmosphere other than from cattle feedlots and the big plume crossing the Pacific Ocean from China’s overfertilized fields. Ammonia itself is not a good greenhouse gas and can be ignored.
Ozone is another GHG that basically owes its existence in the upper atmosphere to living organisms. It was negligible in the Earth’s early atmosphere that had oxygen only from photolysis of water by sunlight. Ozone is continuously formed and destroyed in photochemical reactions with ordinary dioxygen, O2. It can reach a steady state, which, even with the abundant oxygen is quite low; condensed to a layer at surface atmospheric pressure it would only be 3 mm thick! That’s enough to offer major protection of life from solar ultraviolet rays. It’s a minor greenhouse gas despite its very low concentration. We humans have “worked” at destroying it, though we’re on a trend to undo some of our destruction of it. Ozone is depleted by industrial chemicals such as chlorine-containing refrigerant gases (freons, no longer legal). Ozone loss is not such a problem for the greenhouse effect as it is for the protection of organisms – us, coral reefs, …. – from ultraviolet light from the Sun. Beneficial ozone in the stratosphere is not to be confused with ozone produced in the lower atmosphere, the troposphere where we live, by the oxidation of pollutants such as unburned hydrocarbons.
What about delivery and processing of greenhouse gases on other planets? We may assume that delivery of GHGs or their precursors occurs before the evolution of life. In that case, water and volatile carbon compounds are likely the key to habitability. Getting them in the right amounts, not too little for a weak GHE nor too much to make a gassy planet, is probably rather rare. The right amount depends on the abundance of C and O in the pre-stellar nebula; the speed of development of the stellar planetary system that sets the pace of delivery; the distance of the planet that sets its exposure to stellar wind (close to a star for enough radiant flux from a small star, trending to farther away for Sun-like stars, but not too close to, say, a red dwarf with flares and tidal locking); the mass of the planet that attracts and holds gases and that allows development of a magnetic core to provide a planetary magnetic shield against stellar wind; the luminosity of the star that both sends volatiles and strips some away; perhaps the help of a Jupiter-like planet to send asteroids toward the planet; … I opine that the right amount is very rare, even on a planet otherwise set up well regarding its host star, orbit, mass, and much more. The processing of the volatiles to the right level of greenhouse gases may be idiosyncratic. Even Earth might have developed another way – say, if oxygenic photosynthesis had evolved earlier or later than it did.
Mercury never was habitable. Venus likely was and probably got enough volatiles and the right solar radiation flux to be habitable for a long while. I now wonder if it had too much in the way of volatiles, given the great density of its current atmosphere. Mars never had enough volatiles and then lost the watery part, as noted.
While we’re at it, what about the possibility of a planet with ammonia as the base of life AND its GHE?
Some people like to think outside the box and consider life based on chemicals that are analogs of carbon and of water. One such route of thinking is about life with ammonia replacing water as its cellular solvent. Ammonia might also be thought of for other functions that water performs on Earth (see later). For example, it has a better heat capacity than its simple formula might lead one to think, because it has hydrogen bonding, even if not as much as does water. So, it could help moderate swings in temperature of an ocean (of ammonia, of course), a lake, or even an organism’s body. Suppose that we accept the possibility. When your ocean, your medium of life, is a potent greenhouse gas, there is the potential for a runaway greenhouse effect if the planet gets too warm. The mean surface temperature must not yield a saturation vapor pressure that exceeds a small fraction of an atmosphere. For NH3, this limits the temperature to much less than the boiling point at one (Earth) atmosphere, -33.4°C. Consequently, biochemical reactions that sustain life will be quite slow compared to those on Earth. Using the crude figure that many key biochemical reactions double in speed with each 10°C increment in temperature, and with a safe temperature of, say, -60°C, which is 75°C below that of Earth, the rates of reaction on the ammonia planet are slower by a factor of about 2^(75/10) ≈ 180. Life on this planet would be quite slow-moving!
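The slowdown factor can be checked in a couple of lines; the Q10 value of 2 and both temperatures are the crude illustrative figures from the text, not measured values:

```python
# Q10 rule of thumb: many biochemical reaction rates roughly double
# per 10 degC rise in temperature.
q10 = 2.0
t_earth = 15.0      # degC, rough mean surface temperature on Earth
t_ammonia = -60.0   # degC, a "safe" temperature for an ammonia ocean
slowdown = q10 ** ((t_earth - t_ammonia) / 10.0)  # 2**7.5
print(f"Reactions slower by a factor of about {slowdown:.0f}")  # about 181
```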
Ammonia alone has strong TIR absorption bands only in two locations (6 and 10 μm), with a smaller feature below 3 μm. The GHE with NH3 alone might be self-limiting, except that there are other GHGs readily created from NH3 by photochemical reactions in the atmosphere. These include hydrazine, H2N-NH2, hydrogen cyanide, HCN (there has to be carbon present), and cyanogen, NC-CN. These have TIR absorption features in locations that serve to fill in the gaps and create a potent GHE. The synergy of GHGs is then like that on Earth, where CO2, N2O, CH4, and O3 fill in the spectral gaps where H2O does not absorb significantly.
Tellingly, ammonia is difficult to make from the elements N and H. That’s true either biologically or abiologically. It’s quite uphill in what chemists and physicists call free energy. That’s the measure of energy that can do work at a given condition, out of all the energy that’s present. Free energy actually has to be evaluated not absolutely but in comparing two states of the same components – say, N2 + 3H2 as gases at some pressure and temperature and 2 NH3 molecules. Under temperatures and pressures that are plausible for life, the reaction is very much uphill in free energy. An external source of energy to push nitrogen and hydrogen uphill is extremely difficult to imagine. Of course, Jupiter in our Solar System appears to have much ammonia. However, on Jupiter there aren’t good oxidized compounds to play off against ammonia in energy-liberating and energy-storing chemical reactions (that in some eventuality could be the core of metabolism in living organisms). Ammonia is “safe” from reaction there, once formed.
Wilder yet, but entertained by some exobiologists: making do without a greenhouse effect. Earlier I recounted the evidence these researchers have for liquid water deep inside Europa, a moon of Jupiter, and Enceladus, a moon of Saturn, and several others. These moons orbit close to giant planets, so close that their rocky cores flex markedly as they orbit. This flexing may raise temperatures above the very low values set purely by radiative balance with the Sun. Alas, such stressed bodies may not last long enough for evolution of life. More to the point, they have very little free energy, in the thermodynamic sense cited above, to drive processes of life. Sunlight as a necessary driver is quite weak there and would penetrate poorly through ice covers to become weaker still.
A feedback is a process that tends to decrease the changes caused by a change or perturbation (negative feedback) or that tends to magnify the changes (positive feedback). Think of the Earth in a period when carbon dioxide is being added to the atmosphere, which perturbs our climate. This addition can be geological, as volcanism. It can be anthropogenic, as the combustion of fossil fuels. Adding CO2 should increase the GHE and the temperature. One negative feedback is the increased rate of weathering of rock at higher temperatures. This consumes some CO2 out of the atmosphere to make carbonates, thereby directly countering the first process of CO2 addition. One positive feedback is the increasing rate of decomposition of organic matter in soil to release even more CO2. Which process dominates? How does it depend on the original condition? The balance can depend upon the size of the perturbing addition; the negative feedback or the positive feedback might saturate, or stop gaining in strength, if the driving process gets large enough. Then the other feedback wins. There are many feedbacks in Earth’s climate system that have shown themselves in operation in the past with stable climates of long duration but also in times of catastrophic change leading to mass extinctions of life. Prediction of the final result is crucial for us humans now dealing with climate change. For estimating habitability of exoplanets it’s important to estimate the probability that feedbacks all coalesce to give stable or unstable climates.
Negative feedbacks that act to dampen changes in climate:
- One such feedback is clearly the increased rate of rock weathering to carbonates at higher temperatures. This affects the current main greenhouse gas, CO2, though not methane or N2O or ozone to any significant extent.
- Another negative feedback is CO2 dissolving into ocean water to remove the gas from the atmosphere. Oceans are a vast reservoir for CO2 as dissolved gas and as carbonates made by abiological reactions or by marine organisms. Oceans currently take up a carefully estimated 26% of the CO2 that we humans add to the atmosphere. The rate is currently limited by the exposure of water to the atmosphere and the stirring of waters to bring up water that is less saturated with the gas. In the long run (too long to wait for it as a remedy for climate change!) the oceans will take up about ¾ of our added CO2. The efficacy of oceanic uptake is reduced by the lowering of CO2 solubility in water as the temperature rises; try heating that carbonated soda.
- A third negative feedback is the increasing rate of losing heat as thermal infrared radiation as the surface temperature rises. Simply, radiative loss increases as the 4th power of the absolute temperature. Climate modelers refer to this fast increase as the Planck feedback, from Planck’s equation for blackbody radiation. Acting by itself, the Planck feedback is a fair control over our climate for changes in radiative input from the Sun. A plausible 1% increase in solar energy flux density, E, would generate a ¼% increase in surface temperature (recall the equation T = (E/σ)^¼); that’s 0.0025*288K = 0.7K. Yet the Planck feedback is far from a sufficient control over our climate in the presence of increased greenhouse gases. Climate modelers – really, everyone rational – worry about a doubling of greenhouse gas content. Let’s make the reasonable assumption that Beer’s law applies to radiation passing through an absorbing layer. If this is applied across the whole thermal infrared spectrum then we get an overestimate of the increased TIR trapping: some “windows” in the spectrum where no greenhouse gas absorbs significantly stay open as bypasses. See “What gets through?” earlier. In the simple one-dimensional model of the Earth as a whole-globe average facing an absorbing layer, the change in surface temperature is 10.85°C, far above the 3°C warming from sophisticated models. I made a simple model with (1) three discrete wavelength ranges in the TIR, with different attenuations, one being almost a clear window, and (2) increased cloudiness from higher water vapor content in the atmosphere from warmer water. By “tuning” the presumed relation between temperature and cloud fraction I obtained a 3°C warming with a doubling of GHG content in the atmosphere. A bit of fudging, but in the right spirit.
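The Planck-feedback arithmetic in this item can be reproduced in a few lines; this is only the ¼-power scaling, with no absorption modeling:

```python
# Planck feedback: T = (E/sigma)**0.25, so a small relative change in
# absorbed flux E gives dT/T = (1/4) * dE/E.
T_surface = 288.0    # K, mean surface temperature of Earth
dE_over_E = 0.01     # a plausible 1% increase in solar energy flux density
dT = T_surface * dE_over_E / 4.0
print(f"Warming from a 1% solar increase: {dT:.2f} K")  # 0.72 K
```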
- To add another negative feedback, consider the growth of plants that tie up carbon as CO2 increases; there’s an increased rate of the enzymatic reaction for CO2 uptake into the biochemical reactions of photosynthesis. I’ve long studied this, and the gain is typically wiped out by increases in the side reaction called (with poor terminology) photorespiration and by metabolic respiration itself. Besides that, the amount of carbon stored in biomass is modest. Rob Jackson of Duke University offered the observation that the maximal uptake of CO2 by plant growth globally and over all time only amounts to about two years of anthropogenic emissions.
On exoplanets we may inquire, at least theoretically, whether these negative feedbacks will operate, and at efficacies comparable to those on Earth. The rock weathering feedback has some dependence upon the type of rock on the planet. On Earth, the silica (sand, silicon dioxide, SiO2) component of rock is good for this feedback. Any given exoplanet might have a significantly different rock composition and a different efficacy in absorbing CO2 in the process of weathering, ranging from better than Earth’s to worse to negligible. We really have almost zero idea of their rock surface compositions. The feedback from a GHG (CO2, even methane) dissolving into ocean water is basic physics. It scales with the relative masses of ocean to atmosphere, likely different from that on Earth. The effect of lapse rate (the rate of temperature change with altitude; I leave its discussion to other sources) is basic physics and should work on exoplanets, even if it’s probably a small effect.
Positive feedbacks that act to amplify changes in climate: Quite a few of them!
- One feedback of great concern for more than climate is ice melting – think polar bears, eroding coastlines in polar regions, thawing of permafrost to release soil carbon as CO2 and methane, and thawing of methane-filled ice structures called clathrates in shallow ocean areas.
- The figure here is a notional calculation of how the melting of Arctic and Antarctic ice could increase the fraction of solar energy absorbed. The assumptions are that: Earth’s ice “viewed” from the Sun occupies 5% of the projected area; cloudiness over these areas remains at 70% on an annual average; 70% of the melted area is over oceans, and 30% becomes exposed land. The albedos of clouds, open ocean, and land are taken from the scientific literature. The albedo of these icy areas drops substantially on melting; weighted by the 5% of projected area that they represent, this is a decrease of Earth’s total albedo by 0.008, which is an increase in the absorbed fraction of 1.13%. From an equation presented earlier, the relative increase in temperature at the top of the atmosphere, TTOA, is ¼ as large, or 0.28%. That’s a rise in temperature of 0.0028*255K = 0.7K or 0.7°C. We don’t want that on top of other contributions to warming!
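The notional numbers chain together as follows; every input here is one of the text's assumptions, not a measurement:

```python
# Ice-albedo feedback sketch: a planet-wide albedo drop raises the
# absorbed fraction of sunlight, and TTOA scales as its 1/4 power.
albedo = 0.30        # Earth's approximate current albedo
d_albedo = 0.008     # planet-wide decrease from melting ice (assumed)
rel_absorbed = d_albedo / (1.0 - albedo)   # ~1.1% more energy absorbed
T_toa = 255.0        # K, current top-of-atmosphere temperature
dT = T_toa * rel_absorbed / 4.0            # 1/4-power radiative scaling
print(f"Rise in TTOA: {dT:.1f} K")  # ~0.7 K
```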
- The attendant releases of methane and CO2 add greenhouse gases. The process takes years to centuries and the size of the net effect is still under investigation in many ways.
- Ice-free soil gets warmed. The organisms in soil, especially the bacteria, get more active in decomposing soil organic matter accumulated over long times. They release CO2 from their respiration. Up goes another greenhouse gas. Soil carbon on Earth is estimated as 3170 gigatonnes. That’s about 360 times the annual emissions from fossil fuels and land clearance, as carbon. We really don’t want a significant fraction of this to go back into the air, and certainly not fast!
- A warmer Earth has, of course, warmer surface water. That’s not immediate, as surface and deeper waters mix in the oceans (and lakes), as melting ice enters oceans, etc., but it happens in the long run. Surface water temperature sets the content of water vapor in the air, fairly closely following the Clausius-Clapeyron relation. Near current temperatures that indicates a rise of about 6% in water vapor pressure per °C rise in temperature.
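That roughly 6% per °C figure follows from the Clausius-Clapeyron relation; here is a quick check using standard constants (the latent heat value is approximate and varies a little with temperature):

```python
# Clausius-Clapeyron: d(ln e_s)/dT = L / (R_v * T**2) for water vapor,
# giving the fractional rise in saturation vapor pressure per kelvin.
L = 2.45e6     # J/kg, latent heat of vaporization of water (approximate)
R_v = 461.5    # J/(kg K), specific gas constant for water vapor
T = 288.0      # K, near-current mean surface temperature
frac_per_K = L / (R_v * T**2)
print(f"Saturation vapor pressure rises ~{100*frac_per_K:.1f}% per degC")
```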
Two large-scale effects follow from this. One is increased greenhouse trapping of exiting thermal radiation, because water vapor is a greenhouse gas. The net warming effect depends on where the water vapor ends up by altitude, among other things. Another is increased cloudiness. That leads to two changes. Thick, low clouds such as stratus exert a net cooling effect because they reflect a lot of solar radiation. High, thin clouds such as cirrus exert a net heating effect because they trap thermal radiation trying to exit from the surface below while letting most sunlight through. Which trend wins is still being investigated by observations and by theory.
Our luck in the Universe: The noncondensible gases, plus water, that Earth was bequeathed by complex initial conditions and later dynamics give us a greenhouse effect that is rather robust. It operates in a range of temperatures that favor active metabolism (i.e., not ammonia’s low temperature). The many positive and negative feedbacks that could make our GHE unstable have not exceeded limits for life over at least 3 billion years, though they did come close many times. It may well be that the (great) majority of other planets / exoplanets are not so lucky.
And in the really long term:
Solar or stellar brightening over the lifetime of our Sun or an exoplanet’s host star. For Sun-sized stars at the current time (4.6 Gy), energy or power output increases about 1% per 100 million years. Over the long haul of biological evolution that can be a challenge. The Sun was about 84% as luminous 2.2 billion years ago. A GHE dominated by methane sufficed nicely to keep the Earth warm enough (unfrozen). An increase to the current luminosity represents an increase to 119% of the earlier level. All else equal, with just the Planck equation or Planck feedback operating AND the methane GHE, the increase in surface absolute temperature would be by a factor of 1.19^0.25. That’s a factor of 1.0446 and a temperature rise of 0.0446 times the old temperature. If the original average temperature were even as low as freezing, 273K, the increase is 12°C. Bacteria might readily adapt to that change as a relatively sudden change, geologically speaking. Multicellular animals such as ourselves would have a harder time. They did make it through several almost-as-hot hard times, such as the “Saurian Sauna” in the mid-Cretaceous of 100 million years ago at 35°C; that’s 20°C above today’s level. Of course, that followed and was followed by mass extinctions.
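The brightening arithmetic above is easy to reproduce, under the text's all-else-equal assumption of an unchanged greenhouse effect:

```python
# Faint young Sun: luminosity was ~84% of today's value 2.2 Gyr ago.
# With only the 1/4-power Planck scaling, surface temperature grows
# by the factor L**0.25.
L_ratio = 1.0 / 0.84          # ~1.19, growth to today's luminosity
T_factor = L_ratio ** 0.25    # ~1.0446
T_old = 273.0                 # K, the text's illustrative early temperature
dT = T_old * (T_factor - 1.0)
print(f"Factor {T_factor:.4f}; rise of about {dT:.0f} K")  # ~12 K
```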
Biological evolution has taken a long time on Earth. It was “only” a billion years or so to evolve bacteria, so if you want to count that as habitability then stellar brightening is no big deal. But if evolution has as slow a pace on exoplanets as here, and you want multicellular or even sentient life, you might need 4 Gy of stability against stellar brightening. That is nearly a deal-breaker.
There is a way out of the bind – changing the greenhouse effect from a strong one to a weaker one – say, methane to CO2. On Earth it was definitely traumatic, and only bacteria survived it. There is no guarantee that any given exoplanet has a snowball’s chance in hell, to use an apt metaphor, of its life surviving the transition and evolving complex life forms. OK, not a snowball’s chance in hell, but a small one.
Many people have noted this about Mars and Venus, and I did so above. I present some details in the next paragraph. A planet that’s not deep-frozen like our gas giants can lose its hydrogen, thus, its water and its greenhouse effect and its habitability, by this route. Venus did so after increasing solar luminosity heated it up to a runaway greenhouse effect. Mars lost its GHE before it really got a start because it has wimpy gravity, 38% that on Earth (another challenge for human physiology; low gravity causes bone loss, brain swelling – as if Elon Musk has a big enough ego already and then gets to Mars!). We should probably skip most small planets as having long times for biological evolution. The same holds for planets like Venus that started out near the upper edge of habitable temperatures from being close to a luminous star.
How does a small planet, or an overly hot planet, lose its water? A planet with all the right chemistry for life’s building blocks and for the greenhouse effect has to hold onto its gases. There are three main ways to lose gases. One is thermal escape: atoms or molecules are in constant motion, with a typical (root-mean-square) speed of √(3RT/m), where R is the universal gas constant, T is the absolute temperature, and m is the molar mass. This formula comes from statistical mechanics well over 100 years old. For the H2 molecule high up in the hot, if extremely thin, exosphere at T=1800K, this speed is about 4700 meters per second. That is more than 40% of escape velocity, ve, about 11,200 m s-1.
[ The magnitude of the escape velocity can be derived by equating the kinetic energy, ½mv2, to the magnitude of the gravitational potential energy, GMm/r, with G as the (maybe!) universal gravitational constant and M as the mass of the Earth. A little math lets us replace GM with gr2, with g being the gravitational acceleration at the Earth’s surface – about 9.8 m/s2, tempered by location (for instance, centrifugal force reduces it toward the equator… the lesser of two reasons for rocket launch sites being at low latitudes). ]
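That derivation reduces to ve = √(2gr); plugging in round Earth values recovers the quoted figure:

```python
import math

# Escape velocity: (1/2) m v**2 = G M m / r, and with g = G*M/r**2
# this simplifies to v_e = sqrt(2 * g * r).
g = 9.81          # m/s**2, surface gravitational acceleration
r = 6.371e6       # m, mean radius of the Earth
v_e = math.sqrt(2.0 * g * r)
print(f"Earth escape velocity: {v_e:.0f} m/s")  # close to 11,200 m/s
```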
To continue, molecules sampled from a gas at constant temperature display a wide range of speeds, according to the classic Maxwell statistical distribution of speeds. A small but significant fraction of them – about 1 out of every 1500 – moves at 2.5 times the mean speed or higher, reaching escape velocity for the H2 molecule. The story is not quite that simple when we consider the further fraction heading with that much velocity in the right direction, upward, but gases do escape this way. Earth loses a fair amount of hydrogen this way, though on a relative scale it’s a tiny amount, less than 1 millionth of the hydrogen in its water every million years.
Small planets don’t fare as well. For a fixed density of rock, the escape velocity varies in direct proportion to the planet’s radius, or as the 1/3 power of its mass. A planet half the diameter of Earth has an escape velocity half as large, about 5,600 m/s. At the same temperature of 1800 K, that’s barely above the rms speed of H2; a sizable fraction of the molecules exceed it at any moment, and collisions keep refilling that fast tail, so hydrogen streams away. For heavier molecules, the situation is better. Oxygen as O2 gas is 16 times heavier than H2 and moves at only ¼ the speed; thermal escape is negligible.
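The scaling with size can be checked directly. A sketch, taking Earth’s mean density of about 5,510 kg per cubic meter as the fixed rock density:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 / (kg s^2)
rho = 5510.0   # Earth's mean density, kg/m^3

def v_escape(radius_m, density=rho):
    """Escape velocity for a uniform sphere: sqrt(2GM/r), with M = (4/3) pi rho r^3."""
    mass = (4.0 / 3.0) * math.pi * density * radius_m ** 3
    return math.sqrt(2 * G * mass / radius_m)

r_earth = 6.371e6
print(f"Earth-size: {v_escape(r_earth):.0f} m/s")       # ~11,200 m/s
print(f"half-size:  {v_escape(r_earth / 2):.0f} m/s")   # exactly half as large
```

Since mass grows as the cube of radius while the well depth goes as mass over radius, the escape velocity comes out proportional to radius at fixed density.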
A second escape route for the gases is electrical acceleration. Gases can be ionized, thermally or by solar radiation. An electric field is generated by electrons moving farther out from the surface than the positive ions, leading to the ions being accelerated to higher speeds. This is minor in many or most cases. A third route is the gases being blown away by the stellar wind. A number of factors affect the rate of loss – the speed and density of the stellar wind, the density of the atmosphere (high density is protective, diluting the energy transfer per molecule), and the presence or absence of a magnetic field to create an ion-filled shock front around the planet. For the Earth, the solar wind is a lesser route of gas loss, smaller by a factor of about three. For planets near certain types of stars, such as red dwarfs, stellar winds are devastating in the long term, that is, over the billion years or so for life to evolve.
Our luck in the Universe: Earth is big enough to keep its water and its greenhouse effect… but not so big as to be wreathed in dense gases.
Let’s assume that a planet has, or had, an atmosphere to start, with enough volatiles. The balance of positive and negative feedbacks needs to be biased toward negative feedbacks, in the end. How complicated are the initial conditions on the planet that can result in this good end? We can assemble many constraints noted so far.
- A Goldilocks amount of greenhouse gases is needed. Too little and the GHE is low; the planet possibly can freeze up. Too much and the planet is gaseous, with any solid surface too deep for life.
- Probably the planet needs a mix of both reduced GHGs such as methane and oxidized ones such as CO2, or suitable chemical precursors. This lets the GHE be adjustable.
- It may need a lucky break in timing of a shift from high to low GHE over long evolutionary time.
- Good cycling of carbon (or related element, if any exists!) between the atmosphere and big reservoirs in rocks and ocean will keep the GHE stable.
- A big enough ocean to buffer CO2 or other GHG – ditto.
- Enough planetary mass lets it retain hydrogen and thus water.
- Water, not ammonia, is the basis of both life and the GHE.
- Life should evolve a lot before plants bury enough carbon as carbonates and fossil fuels to trigger a low GHE that might collapse to freezing.
- Plate tectonics, almost surely, is needed for recycling of elements, including C or another GHG base… though we should still expect massive swings up and down in GHG levels attending orogeny and volcanism.
- Any technological civilization, if such has evolved, has to be smarter than we on Earth seem to be, with our messing with GHGs – fossil fuel use; agriculture with its nitrous oxide, methane, and CO2; potent high-tech gases such as the chlorofluorocarbons with a warming potential up to 28,000 times that of CO2.
- Overall, there is a complex interplay of astronomical conditions and geological conditions. Being at the right orbital distance from a star is ridiculously inadequate as a sole condition for habitability.
- Later here, we’ll see that the endowment of many chemical elements is also critical.
Additional gleanings about Earth’s carbon cycle at the core of its greenhouse effect
What counts for the greenhouse effect is what’s near the surface. Part of the story is keeping that carbon there in the right amount even as volcanoes bring up CO2 from the mantle. The balance is by carbonates moving back into the mantle as tectonic plates subduct, diving under other plates. The plates carry carbonate-containing sediments and rocks such as limestone. The crust-mantle interaction is critical.
Earth’s crust, our mineral “frosting,” is defined by a discontinuity in chemical composition. It ranges in depth from 5 km under parts of the ocean to 70 km under continental surfaces. In good measure, it is a bunch of silicon dioxide (silica, sand), followed in abundance by oxides of metals – aluminum, iron, calcium, sodium, magnesium, potassium, and titanium, if we tally up the first 97% of atoms. (Odd, isn’t it, that industrial society is facing a real shortage of high-quality sand for construction.) Hydrogen for water is the 10th most abundant element. Carbon is 13th. Those two elements, H and C, are key in the greenhouse effect, so their potential losses to space or to long-term burial make the Earth’s GHE rather fragile. We were lucky to get them delivered to Earth by cold comets and asteroids. Water and methane and CO2 are all very volatile. They were blown away from the proto-Earth during the formation of the solar system. Our early messengers from deeper space brought enough of these volatiles back to prevent us, and Venus, from being just rocky bits. My NMSU colleague, Jason Jackiewicz, points out that the balance was precarious, at least for us terrestrial animals. Too much water would have given us a water-only surface. Too little and we would not have had lubrication of rock convection in the deeper mantle, hence no plate tectonics to keep the surface mineral content – and atmospheric CO2! – renewed; we’d be like Mars with its wimpy (but recently detected) tectonics. Of course, tectonics comes with earthquakes, mountain building, and volcanoes that cause some massive resets in our GHE.
The greenhouse effect action takes place at the surface but involves huge exchanges with the crust and deeper mantle. Take our current GHE. It has been dominated by CO2, with help from a bit of methane created by life forms (cattle farts, methanogens in rice paddies and such). Carbon dioxide cycles on a grand scale. One cycle that all schoolchildren learn (I hope) is that plants take up CO2 in photosynthesis to make biomass, and then CO2 is released directly by other actions of the biosphere: plants themselves respire to gain energy for further biosynthesis; plants get consumed by everything from bacteria and fungi to insects to humans, who respire and decompose plant biomass and all other biomass. Of course, this is a big deal in the cycling of oxygen in the air. The trade between CO2 and O2 is essentially 1:1. This is clear in the basic equation of common photosynthesis:
CO2 + 2 H2O + light energy → (CH2O) + O2 + H2O
The entity (CH2O) is chemical shorthand for 1/6 of a glucose molecule. Water appears on both sides because the molecules are distinct; oxygen in the new water came from the CO2, not the original water.
The photosynthesis-respiration cycle is finely balanced over spans of many years. Let’s look at a number of figures to see the balances and imbalances in all the major CO2 exchange processes, not just photosynthesis and live-organism respiration:
- Total CO2 in the air: 7.38×10^16 moles, 3250 gigatonnes (Gt as CO2, 886 Gt as carbon, C). A ready estimate starts with the mass of the atmosphere: take the area of the Earth’s surface, 5.12×10^14 square meters, and multiply by the average mass of air above each square meter. The latter is average air pressure, close to 101,000 pascals, divided by the gravitational acceleration that makes that mass exert pressure, or 9.81 meters per second squared. We get 5.28×10^18 kg. With a mean molecular mass that’s 80% toward N2 (28 grams per mole) from O2 (32 grams per mole), or 0.029 kg per mole, we get 1.82×10^20 mol. Recent painstaking measurements give the average concentration of CO2 in air as 405 parts per million (as moles, not mass). That gives us 7.38×10^16 moles of CO2. At 0.044 kg per mole, that’s 3.25×10^15 kg or 3250 Gt as CO2 and 886 GtC. A note about painstaking measurements: measuring the size of the Earth was a monumental task over the millennia. Take a look at the book, The Measure of All Things, by Ken Alder.
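The arithmetic in this estimate is easy to reproduce; a sketch following the same steps:

```python
# Back-of-envelope inventory of CO2 in Earth's atmosphere.
area = 5.12e14        # Earth's surface area, m^2
pressure = 101_000.0  # mean surface air pressure, Pa (N/m^2)
g = 9.81              # gravitational acceleration, m/s^2
mol_mass_air = 0.029  # mean molar mass of air (~80% N2, ~20% O2), kg/mol

mass_air = area * pressure / g       # total mass of the atmosphere, kg
moles_air = mass_air / mol_mass_air  # total moles of air
moles_co2 = moles_air * 405e-6       # at 405 ppm CO2 (by moles)
gt_co2 = moles_co2 * 0.044 / 1e12    # gigatonnes as CO2 (0.044 kg/mol)
gt_c = moles_co2 * 0.012 / 1e12      # gigatonnes as carbon (0.012 kg/mol)

print(f"atmosphere: {mass_air:.2e} kg; CO2: {gt_co2:.0f} Gt ({gt_c:.0f} GtC)")
```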
- Gross photosynthesis on Earth – that is, the immediate uptake, before subtracting the loss of carbon by respiration. This estimate comes from an enormous set of measurements by many researchers. We’ll take a recent figure of 125 GtC each year. Right off, this says that photosynthesis takes up about 1/7 of all the carbon as CO2 in the air each year; some other estimates are in that ballpark.
Wait a minute: Aren’t we warned that the CO2 we inject into the air by burning fossil fuels and clearing land will stay for hundreds to thousands of years? Both figures are correct. The short turnover time of 4 to 7 years refers to CO2 taken up by photosynthetic organisms (green plants, some plankton, etc.) but virtually guaranteed to be re-released by respiration. The longer-term figure refers to final escape from that nearly closed recycling, to end up mostly as carbonate sediments in the ocean.
The net change in biomass accumulation, photosynthesis minus respiration (including decomposers’ respiration and a tiny bit of abiological breakdown), is nearly zero. There is a small net gain currently from higher CO2 levels stimulating photosynthetic rates. This gain currently slightly more than offsets the willy-nilly loss of biomass by deforestation.
- Total biomass of living organisms on Earth: about 550 GtC. Again, the tallying is a huge and ongoing effort. With that estimate, we note that photosynthesis renews about ¼ of all living biomass each year. That makes sense. Annual plants get renewed each year, or on an even shorter time scale, given that a good portion of them gets eaten, as by insects, and is renewed within the year; some trees last much longer, though the longest-lived ones, the bristlecone pines, are an infinitesimal fraction of live biomass.
- Dead biomass: a recent estimate puts this as 3170 GtC. That’s about 6 times larger than the live biomass. No surprise. Think of all the natural litter (dead leaves, branches, etc.) on the ground and the humus in soil. The latter is about 80% of this biomass.
- Carbonate in the ocean: much of it is dissolved carbonate ion. A recent study gives an average concentration of 2.2 moles per cubic meter. Now, oceans cover 71% of the surface of the Earth, or 3.63×10^14 square meters. At an average depth of 3,700 m, their volume is 1.34×10^18 cubic meters, holding then 2.96×10^18 moles, which is about 35,500 GtC. That’s over 40 times more than exists in the atmosphere. Oceans are the final resting place of CO2. However, the transfer to oceans takes time. They only take up about ¼ of the extra CO2 we inject into the air from burning fossil fuels, making cement, and clearing land.
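Again, the arithmetic is quickly verified:

```python
# Dissolved carbonate carbon in the oceans, following the text's numbers.
area_earth = 5.12e14  # Earth's surface area, m^2
ocean_frac = 0.71     # oceans cover ~71% of the surface
mean_depth = 3700.0   # mean ocean depth, m
conc = 2.2            # mol of carbonate carbon per m^3 (the cited average)

volume = area_earth * ocean_frac * mean_depth  # ocean volume, m^3
moles = volume * conc
gt_c = moles * 0.012 / 1e12  # gigatonnes of carbon (0.012 kg/mol)
print(f"ocean volume: {volume:.2e} m^3; carbonate carbon: {gt_c:.0f} GtC")
print(f"ratio to atmospheric carbon (886 GtC): {gt_c / 886:.0f}x")
```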
- The amount of CO2 we inject into the air through human activities: This has risen to 11 GtC per year. That’s about 1.25% of the current amount of CO2 in the air. The tracking of this as a total and by regions and activities is fascinating, being done by flask sampling, eddy-covariance towers (e.g., at IRRI, credit: irri.org),
and satellites, with the carbon isotope composition as a telltale sign of fossil-fuel origin. If this all stayed in the air, the concentration would rise by 1.25% of 405 ppm, about 5 ppm, each year. This is cut back because the oceans absorb about 26% of the extra CO2 and extra plant growth, net of land clearance, absorbs another 29%. The net rise is a bit over 2 ppm per year, still alarming for its effect on climate.
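The bookkeeping for our emissions, in the same spirit:

```python
# Fate of the ~11 GtC/yr injected by human activities, using the text's figures.
emissions_gtc = 11.0  # GtC per year from fossil fuels, cement, land clearing
atm_gtc = 886.0       # carbon currently in the air as CO2, GtC
co2_ppm = 405.0       # current CO2 mole fraction, ppm
ocean_uptake = 0.26   # fraction absorbed by the oceans
land_uptake = 0.29    # fraction absorbed by extra plant growth, net of clearing

gross_rise_ppm = co2_ppm * emissions_gtc / atm_gtc  # if it all stayed airborne
net_rise_ppm = gross_rise_ppm * (1 - ocean_uptake - land_uptake)
print(f"gross rise: {gross_rise_ppm:.1f} ppm/yr; net rise: {net_rise_ppm:.1f} ppm/yr")
```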
This is a lesson for technological planet populations: limit the use of fossil fuels.
An interesting pattern in the upward march of CO2 in the air is a sawtooth feature overlying the nearly linear rise. This is visible in the measurements on top of Mauna Loa volcano, groundbreaking science started by Charles Keeling in 1958:
Even though the global net input of new CO2 is fairly constant over the year, there’s more photosynthesis in the Northern Hemisphere summer than there is in the Southern Hemisphere – there’s just more vegetated land in the north, as a look at the globe reveals.
- The oxygen cycle: all the vagaries of CO2 changes are much reduced in the budget for oxygen, since it’s about 500 times more abundant than CO2 in the air. So, if we burned off all the Earth’s biomass, including soil carbon, it would use up about 1/10 of the oxygen in the air. We seem hell-bent on trying to do so, directly. We’re also doing it indirectly via our changing the climate. One model indicates that all forest in the western US will fall to persistent wildfires by 2100. Retention of our O2 would be small comfort, if all the crops and wildlife are gone.
That covers much of the current carbon cycling. It’s time for a look at the long-term picture. The main processes are volcanic CO2 in, rock weathering and burial out. Volcanoes inject highly variable amounts of CO2 (and even some methane, biogenic or not) over the years. Currently the amount is minor, 0.04 to 0.12 GtC per year, 1% or less of what we humans are injecting. Stupendous amounts were released in the eruption of the Deccan Traps 65-66 million years ago and of the Siberian Traps 250-251 million years ago – both events being linked to mass extinctions of life. There’s at least one multiplier of the emissions. The Siberian Trap eruptions intruded on massive coal beds, setting them on fire to release additional CO2 and methane.
On the loss side, rocks are slowly decomposed by dissolution in acidic CO2-containing precipitation, as rain or melting snow, with some direct erosion by wind and glaciers. They are also decomposed by acids created by plant life. Many plant roots actively excrete organic acids that dissolve ferric minerals, allowing them to take up iron for their growth. Decomposing biomass, mostly plant biomass, contains organic acids that also break up soil minerals. The net effect is creation of soluble carbonates from mostly silicate rocks – thus, we call this one side of the silicate-carbonate cycle. The soluble carbonates slowly make their way to rivers and then the oceans. They accumulate as the dissolved carbonates but also as insoluble forms in ocean sediments. The sediments comprise a much smaller total CO2 equivalent than the dissolved forms, but they do get transported in the long term. The tectonic plates that bear them get subducted into the Earth’s mantle. Their CO2 recycles on the scale of some hundreds of millions of years. The plate matter gets heated as it subducts, some melting into magma that feeds volcanoes that release much of their CO2 equivalent.
The other part of the loss process is burial of carbon in chemically reduced form. That is, not as oxidized CO2 but as C atoms bound to hydrogen atoms and to each other. These are coal (mostly C), oil (approximately CH2), and natural gas (CH4, which is methane, and some higher hydrocarbons such as ethane, C2H6, and propane, C3H8). Coal comes from land plants that failed to decay completely, getting buried in sediments that metamorphosed into the rocky form.
No question about it: Earth has seen some wild swings in its climate. The most dramatic evidence is in its mass extinctions of life forms, from ammonites to dinosaurs (at least the ones that didn’t become birds). Five mass extinctions of multicellular life are in the fossil record, and we humans are driving the sixth one at breakneck speed. The big five we’ve seen clearly were the End Ordovician at 447 – 444 million years ago, the Late Devonian at 374 and again at 359 million years ago, the biggest of all, the End Permian at 252 million years ago (loss of 95% of species, surely 99+% of all individuals), the End Triassic at 201 million years ago, and the exciting End Cretaceous at 66 million years ago with its massive asteroid impact apparently exacerbating the Deccan Trap volcanism as a multiplier. We’ve surely lost even simple bacteria. An appendix gives an overview of these extinctions, including what can be surmised about their causes. Volcanism and CO2 depletion by rock weathering are often root causes. Sometimes they appear to have come in rapid swings beyond the pace at which organisms can variously migrate, acclimate (deal with changes by physiological changes), or adapt genetically (change their fundamental physiology, developmental schedule, body plan, and dispositions to ecological interactions). The Late Devonian mass extinction appears to have been driven by such an oscillation. There’s more to say about the consequences of climatic excursions after we review the drivers of change. In any event, any exoplanet is likely to face very similar crises, or even total extinction.
One change-agent: the growth of stellar output over time. I’ve mentioned this several times already, as you surely have noted. The best way for life to survive it is to shift the GHE from a high one to a low one. As with all survival of catastrophes, most individuals, species, and higher taxa probably don’t make it through. Earth was in the phase with only bacterial life when it went through this … and the shift came about because of its cyanobacteria! Whether a planet with multicellular life already evolved would see that life survive is very doubtful. Bacteria might get to restart evolution then. The reset of the GHE happened on Earth by an accident of evolution, those cyanobacteria. It is not guaranteed on any exoplanet, and it is very traumatic, perhaps to lethality. A note about cyanobacteria: their fossil remnants are layered rocks called stromatolites.
They’re creating new ones in a few locations too nutrient-poor and salty for grazing organisms to eat them all up. Lou Ellen and I visited such a place, Stocking Island in the Bahamas, and had an eyewitness view of several billion years of evolution.
Photo above by Lou Ellen Kay at Stocking Island, the Bahamas
Photo below: Reginald Sprigg
This caveat about life having to withstand substantial rises in stellar output certainly applied to Earth. Evolution is a slow and erratic process. It took from 3.5 billion years ago to a bit more than 0.6 billion years ago, the Ediacaran Period, to get significant multicellular life here. However, on a habitable planet around a smaller star, the rise in stellar output is much slower. Life might evolve at its nice, slow pace without having to face big changes in the greenhouse effect. The best bets for multicellular life are around small stars.
Ryan Somma, Wikimedia Commons aso.gov.au
Change-agents in astronomy. We might even look before the formation of a planet. A habitable planet needs a wide range of chemical elements. Thus, the pre-stellar nebula has to be reasonably rich in a diversity of elements. There has to be one or more supernovae or neutron-star mergers to provide those elements via the nuclear r-process. That’s on the plus side. Also on the plus side for the early Earth were the clearing of most of the dangerous asteroids by the gas giant Jupiter and, at the same time, the rain of enough asteroids to give us the right amount of water and of other volatiles. This is likely a very rare combination among all exoplanets that otherwise meet a number of other conditions for habitability. We haven’t escaped all the trauma of asteroid visits. Chicxulub Crater and the demise of non-avian dinosaurs is example enough. We have escaped other astronomical nasties such as too-near gamma-ray bursts.
Change-agents in Earth’s orbit and axial orientation. Low ellipticity? Check. Modest axial tilt? Check. These two factors favor a nice climate and a stable climate. The ellipticity of our orbit around the Sun does change under the gravitational influence of Jupiter and Saturn. The cycle lasts about 90,000 years. At the peak of ellipticity at 5.8%, the seasonal differences in solar energy flux density reach 25%. That drives some big climatic changes. Given that Earth’s CO2 had declined to levels as low as 180 parts per million in recent tens of millions of years, the result was a series of Ice Ages. The shifts in glaciation and deglaciation are abetted by shifts in the degree of tilt of the Earth’s axis (currently 23.5° from being perpendicular to the plane of our orbit) and in the timing of the tilt relative to close and far approaches to the Sun (perihelion, aphelion). Ice Ages arose from the combination of cool summers with lower snowmelt and warm winters with more water vapor in the air to feed snowfall. These changes form the famous Milankovitch cycles, named after a Serbian mathematician by the given name of Milutin. There are some fine animations of the orbital and axial changes, such as https://climate.nasa.gov/news/2948/milankovitch-orbital-cycles-and-their-role-in-earths-climate/.
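The 25% figure follows from the inverse-square law for sunlight; a one-line check:

```python
# Solar flux scales as 1/r^2, so the perihelion-to-aphelion flux ratio is
# ((1 + e) / (1 - e))^2 for orbital eccentricity e.
e = 0.058  # Earth's eccentricity near the peak of its long cycle

flux_ratio = ((1 + e) / (1 - e)) ** 2
print(f"perihelion/aphelion flux ratio: {flux_ratio:.3f}")  # ~1.26, a ~25% swing
```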
Exoplanets with big neighboring planets can expect a similar variety of change, though their net effect may be quite different.
Change-agents on the surface of the planet: the geological side.
Continents scraped up! Plate tectonics gave us the continents about 500 million years after the Earth formed, scraping up lighter rock to give us the space to stand – and, as a result, the places to develop material culture, historic records, and modern technology; dolphins don’t have WiFi. It gives us renewed availability of mineral nutrients on the land as new surfaces emerge and weather. It endangers us with volcanoes and earthquakes, though never for any major fraction of humans and other life since anatomically modern humans evolved 200,000 years ago.
Volcanism and mountain-building, or orogeny, play off against each other. One injects CO2 (and other chemical species), the other consumes it as new mountains erode, carrying CO2 as carbonates to a deep-ocean burial. When one or the other is ascendant, Earth’s GHE and climate are in for a shock. In the long term we might expect that dissolved CO2 in rainwater helps weather rocks that are mostly silicate, creating carbonate deposits; the deposits get recycled tectonically, with subducted rock heating and melting to release CO2 in volcanoes and vents. There have been big pushes one way or the other that increased or decreased CO2 for some millions or tens of millions of years:
Consider big volcanic eruptions such as the Deccan and Siberian Trap eruptions. The Cretaceous-Tertiary extinction is very familiar in both scientific and pop culture as the demise of the dinosaurs (except for the birds, the last branch on their tree, and only those birds that seem to have sheltered in cliffs). The massive asteroid, 10 km in diameter, obliterated anything remotely near it and sent up dust that cut off much of the sunlight for years. The recent story ferreted out, if incompletely so, is that the impact was accompanied by the enormous eruptions of the Deccan Trap volcanoes in India over 750,000 years.
Nicholas (Nichalp), Wikipedia
There is a likely link between them, in that the collision likely increased the eruptions; the region was primed to erupt by India passing over a huge hot spot of magma near present-day Réunion. The climate underwent big swings at this time. Volcanoes, like the asteroid impact, put voluminous dust into the atmosphere, cooling the planet. The volcanoes also injected a lot of CO2, warming the planet… until their rocks began to weather, with the reaction taking up CO2 to make carbonate deposits and reduce the greenhouse effect, cooling the planet. It’s a geologic version of the Chinese curse, “May you live in interesting times.” Again I note that the direct heat input from volcanoes is dwarfed by the GHE rise from their CO2. On the plus side, volcanism did rescue Earth, or its life, from life’s own fouling of its nest. Fifty million years of volcanic injection of CO2 brought Earth out of its deep freeze in Snowball Earth.
Building of big mountains whose weathering removes much CO2. Several mass extinction events involved this. Even the shorter-term Ice Ages we’re in the midst of these last 2.6 million years are tied to such events. The rise of mountains in Indonesia has been implicated in the loss of CO2 that has created the glacial-interglacial pattern over the last tens of millions of years.
A simple analogy for the interplay of volcanism and weathering may be offered, that of a leaky water bucket being continuously refilled. For a given size of hole in the bottom, there is a steady state of filling. The leak rate varies with the amount of water in the bucket. If the level of water is above the steady level, the extra water pressure forces water out at a higher rate until the water level goes down to the steady-state rate. If the level of water is below the steady level, the leak rate slows and the filling gains on it, raising the water level. Tectonic release of CO2 fills the atmospheric bucket. Weathering is the hole in the bucket. Since land plants arose the hole is bigger and the average CO2 content of the air has trended downward. The content may have been as high as 3,000 ppm in the Devonian Age vs. our pre-industrial and interglacial value of 280 ppm; varied pieces of evidence inform us about the Devonian, including preserved leaves with few pores, or stomata, whose numbers reduce at high CO2 even today.
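The bucket analogy can be made concrete as a toy model: dC/dt = V − kC, with V the volcanic input and kC the weathering leak. The rates below are arbitrary illustrative numbers, not geological values; the point is the relaxation toward the steady state V/k, and the lower steady state when the “hole” (weathering, k) gets bigger:

```python
def relax_co2(v_in, k, c0, dt=1.0, steps=10_000):
    """Euler integration of dC/dt = v_in - k*C, starting from level c0."""
    c = c0
    for _ in range(steps):
        c += (v_in - k * c) * dt
    return c

V, k = 3.0, 0.001  # illustrative units: input rate, weathering rate constant

# Starting empty, the level relaxes up toward the steady state V/k = 3000.
print(relax_co2(V, k, c0=0.0))
# Doubling k (a bigger hole, e.g., after land plants arose) halves the
# steady state: starting at 3000, the level relaxes down toward 1500.
print(relax_co2(V, 2 * k, c0=3000.0))
```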
Basic rates of weathering have changed, too. The rise of land plants, whose acidic exudates break down silicate rocks into carbonates, added to the carbonic acid in rain that falls through a CO2-containing atmosphere. The net effect is a decrease in the average amount of CO2 in the air.
Volcanism and the deep-Earth part of the CO2 cycle rest upon a thermally active Earth. The Earth’s rock from the bottom of the mantle to the surface churns or convects in rolls or columns like a nearly boiling pot. The rock acts as a very viscous fluid. The fluid movement entrains the movement of the tectonic plates on the surface. Part of California is being pushed off to the north; earthquakes are a corollary, as they are around the whole Ring of Fire around the Northern Pacific Ocean. India is still colliding with the rest of Asia, building mountains. Volcanoes arise from the breakthrough of some of the hot rock. They, too, are all around the Northern Pacific Ocean. In the middle of it is the string of volcanic islands called Hawaii, built as the vast Pacific Plate moves over a hot spot. Interiors of continents are not immune. Yellowstone National Park in the US is a monstrous volcano that erupts about every 600,000 years. The last time, it obliterated a 100-km range of mountains. That was 630,000 years ago, a date to make one pause.
Our Earth is hot inside, to the tune of about 6,000°C at the core. Much of the heat is remnant from the initial coalescence of nebular dust to make the planet; collisions, with the loss of gravitational potential energy, are a potent source of heat. There is also the radioactive decay of uranium, potassium-40, and thorium-232; recall that potassium-40 is mildly radioactive, with a half-life of 1.25 billion years. Another source is the solidification of the core, releasing the heat of fusion. Yet one more is the continued migration of heavy elements deeper into the interior, with the loss of gravitational potential being converted into heat. Ask any miner or mining engineer – it’s hotter every step down toward the core, about 1°C per 40 meters. Even at a milder site, 3.9 km (2.4 mi) down in Earth’s deepest mine, the TauTona gold mine, the rock face is at 60°C or 140°F (miners only stay alive with air conditioning!). Thermal circulation was absent on the early Earth. It didn’t kick in until about 1.6 billion years after the Earth formed. We didn’t get an outsized greenhouse effect at first.
Tectonics gives us beneficial and life-erasing volcanism and CO2 injection, alike. It gives us the renewal of continental surfaces to bring back elements eroded away. Think phosphorus, which can’t move in air and barely so in water. (Tectonics won’t bring back all the phosphate we’ve dug up for fertilizer and let accumulate rather uselessly… at least, it won’t for about 150 million years, until the next tectonic overturn.)
Volcanoes also add sulfur dioxide to the air. This can oxidize to sulfuric acid to create sulfate particulates. Rock particulates add to the sunlight-obscuring effects of the sulfate particulates. The effects are geologically very short-lived, several years. That still affects us humans a lot in our short lives. The cooling of the Earth from the eruption of Okmok in Alaska, around the time of Julius Caesar’s death, contributed notably to the fall of the Roman Republic. Crops failed, all the way to Egypt, preventing Cleopatra from sending food aid to Rome. A major fraction of the Athabascan Indians in Canada fled the effects of volcanic eruptions by Mt. St. Elias around 1,400 years ago to become, before long, the Navajo and Apache of the US Southwest.
Earth’s thermal activity, for its plusses and minuses, devolves on the Earth’s properly large but not too large size (to get enough heat of accretion and also to not cool quickly) and on its complement of radionuclides in its interior. Mars largely missed out. It appears to have ceased significant tectonic activity on its surface long ago. It also rather early lost water that lubricates plate motion and subduction. Probably the majority of exoplanets that otherwise look tolerable miss having the right combination of heat from accretion, radionuclide loading, and size to hold both water and heat. Add in the need for continents above water as a big consideration.
Our luck in the Universe: Earth has a deep carbon cycle, managed, as it were, by mantle convection to cycle carbon to and from the surface. This adds stability to our greenhouse effect. Even some of our Solar System neighbors, such as Mars, lacked this after a (short?) time.
Change-agents: life itself.
Changing the type of greenhouse gas: How did Snowball Earth get set up? At the time, 2.2 billion years ago, only bacteria thrived; they were the only living organisms, and they lived only in the oceans. Life had been living off preformed organic chemicals that could be broken down for energy, and then photosynthesis emerged perhaps 3 billion years ago to make new energy-rich chemicals to support life. At first only photosystem I evolved. This biochemical and photophysical system has many components. It starts with chlorophyll and auxiliary pigments embedded in lipid vesicles (hollow pockets of fats, in a way) capturing photons of sunlight as electronic excitation that diffuses by hopping among chlorophylls to a reaction center. A metalloenzyme complex uses the electronic energy to drive charge separation and the synthesis of reductants and ATP (adenosine triphosphate). Reductants chemically reduce the oxidation state of matter; plants today reduce CO2 to carbohydrates, and we, their consumers directly or indirectly, are more chemically reduced than our atmosphere that’s rich in the oxidant oxygen. ATP is an essential carrier of chemical energy in all cells. It’s formed by pushing a third, negatively charged phosphate group onto an already doubly negatively charged adenosine diphosphate molecule, like spring-loading. This early photosynthesis was only able to use some less-abundant electron donors such as sulfide in its reactions.
Then came some cyanobacteria (inaccurately called blue-green algae, for they are not eukaryotic cells with nuclei). They evolved photosystem II. It’s broadly similar to photosystem I with its energy capture but at its reaction center hydrogen as reductant is split from extremely abundant water and oxygen is released as, initially, the Earth’s ultimate waste product. Over millions of years the oxygen oxidized the methane in the air, originally the dominant greenhouse gas, to CO2, a much weaker greenhouse gas. At 2.2 billion years ago, the sun had nearly 16% less energy output, insufficient to keep the Earth’s surface above freezing. The oceans froze, perhaps even to the equator. Life almost perished; some bacteria and the cyanobacteria themselves survived – maybe poking along under an ice cover. So much for the Gaia hypothesis that life alters the Earth to keep it favorable for life itself! It took about 50 million years of volcanoes injecting CO2 into the air to create a strong enough GHE to melt the ice… and then there is some evidence that the temperature overshot to levels that would be lethal to all multicellular life. The evidence for Snowball Earth episodes is widespread, including dropstones, boulders moved by glaciers to almost everywhere on the ancient Earth’s surface.
Other evidence of the oxygen crisis, as it may be called, is the oxidation of iron in the oceans. They were green to start off, with copious amounts of reduced ferrous iron ions, Fe2+. This iron was oxidized to ferric ion, Fe3+, which is monumentally insoluble via the reaction to form ferric hydroxide,

Fe3+ + 3 OH– → Fe(OH)3↓
Here, the down arrow indicates precipitation as a solid; OH– is the hydroxyl ion from water, which naturally dissociates in the reversible reaction

H2O ⇌ H+ + OH–
At neutral pH, or pH 7, the concentration of hydroxyl, [OH–], is 10-7 moles per liter (a mole is a fixed, very large number of individual molecules, 6.02×1023, Avogadro’s number; a mole also corresponds to a mass very close to the gram atomic or molecular mass of the substance – e.g., 1 gram of H, 2 grams of hydrogen gas, H2; 12 grams of C). At the current ocean pH of 8, the concentration is ten times higher, at 10-6 moles per liter (molar, symbol M). The reaction of ferric ion to ferric hydroxide is reversible, but it lies far to the right at pH 7. The equilibrium can be expressed as the relation (simplified, as it is a multistep reaction),

[Fe3+][OH–]3 = Ksp ≈ 10-36
With [OH–] = 10-7, the concentration of ferric ion is 10-15 M! The massive precipitates of ferric hydroxide got slowly transformed into red ferric oxide layers, the Red Bands that are over 250 meters thick in the Grand Canyon in the US!
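For readers who like to check the arithmetic, the equilibrium above takes only a few lines of Python. The solubility-product value used here, Ksp ≈ 10^-36 for the simplified reaction, is a round-number assumption chosen to match the 10^-15 M figure quoted in the text, not a precise laboratory constant.

```python
# Equilibrium concentration of ferric ion, [Fe3+], from the
# simplified solubility product [Fe3+][OH-]^3 = Ksp.
# Ksp = 1e-36 is an assumed round number consistent with the text.

def ferric_ion_molarity(pH, Ksp=1e-36):
    """[Fe3+] in moles per liter at the given pH."""
    OH = 10.0 ** (pH - 14)  # [OH-] from water's ion product, Kw = 1e-14
    return Ksp / OH ** 3

print(ferric_ion_molarity(7))  # pH 7: about 1e-15 M, as in the text
print(ferric_ion_molarity(8))  # pH 8 (today's oceans): a thousandfold lower
```

Note how sharply the cube of [OH–] bites: one unit of pH changes the soluble iron by a factor of a thousand.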
The loss of readily soluble iron is a problem for oceanic organisms to this day. No organism can grab iron out of solution at that level just by waiting for it to diffuse in. Organisms have to excrete acids or organic molecules to make soluble forms of iron. A “fix” of iron as many fine, if mostly insoluble, particles is provided in two main ways. At the continental shelves, river deltas deposit eroded, iron-bearing sediments. Elsewhere, desert winds loft iron-bearing sands that fall into the ocean… but only in regions near those deserts. The Sahara provides rather nicely for the North Atlantic, but the Southern Pacific is very iron-poor because there are few desert areas nearby.
Another big story of life changing the greenhouse effect is the acceleration of rock weathering. Purely geological weathering, discussed above, is a “sink” for CO2, competing with its addition that’s largely by volcanism. The evolution of land plants has amplified weathering. Plants with roots and vascular systems – the green plants we know and love – excrete organic acids from their roots. The value of this to the plants is being able to acquire iron and phosphorus from soils. Almost every plant grows on soils that are sufficiently oxidized and that range from alkaline to only moderately acidic. These conditions keep the key nutrient iron in the oxidized ferric form, which is insoluble. Phosphate, too, is almost always highly insoluble in all but quite acidic conditions… as provided, nicely, by root acids. (Plants – and the bacteria with whom they compete – have additional ways to get iron, excreting complex compounds called siderophores, or iron-bearers. These chelators can attach to ferric ions and remain soluble as negatively charged anions. Plants also take up some phosphorus by preference as organic compounds containing phosphorus; these are products of incomplete breakdown of dead matter.) Another value to plants of excreting acids is providing a gradient of protons (that is, of pH) inward from soil to root cell interiors. This gradient powers the active transport of many nutrients from soil to plant. There are many references to this in the literature of plant physiology.
The net effect of the extensive colonization of the land by plants is a new, lower point of balance for CO2 in the atmosphere between volcanic injection and removal by weathering. In a broad but real sense, plants have been making their lives more difficult.
Another route by which plants have reduced the carbon content of the air is by burying the carbon in forms that don’t recycle well. Coal, oil, and natural gas have accumulated as living, carbon-containing organisms got buried without decomposition. Sometimes it’s luck (our luck? Fossil fuels are a mixed blessing) that organisms, dead or alive, got buried before decomposer organisms got a chance to reach them. Along the same line, plants have protected themselves from organisms and from agents in the abiotic environment that can break down their tissues. Plants can’t run or hide from insects, hoofed herbivores, fungi, and the like. They biochemically synthesize compounds that are difficult to break down. The lignins are an example. They strengthen wood mechanically and make it hard for microbes, the fungi and bacteria, to start the breakdown of a living plant’s defenses… or even a dead plant’s remains. Yes, fungi, in particular, evolved to do a better job of digesting lignin, but the job is incomplete. In any case, just having big bodies that can be buried under muck and rubble is a route to burying carbon. Nonetheless, by their successful defenses, plants have helped reduce their own source of growth, the CO2 in the air. As an evolutionary “relief,” starting about 20 million years ago, some species evolved a new way to capture CO2 for photosynthesis at lower concentrations in the air. These are the C4 plants. They’re named for the first stable product of photosynthesis, a 4-carbon compound, malic acid or malate ion. In essence, they have a biochemical pump from outer leaf cells to the inner bundle-sheath cells. The effective concentration of CO2 is raised in cells that do the tricky conversion of a 5-carbon compound, ribulose bisphosphate, to two molecules of the 3-carbon phosphoglycerate ion.
It’s effective to help out the slow enzyme that catalyzes this reaction, which is the second-hardest reaction in biochemistry after reacting N2 gas from air with hydrogen donors to make ammonia. There are at least 8,100 species of C4 plants now. They dominate in a number of environments, such as dry ones where their higher efficiency in using water is important. Surprisingly, they haven’t taken over completely. The ecological reasons for the incomplete takeover are complex and we won’t delve into them here.
Living and dead organisms on land and in the waters alter the levels of methane and of nitrous oxide. In anaerobic conditions, living bacteria of the types called methanogens produce methane from carbon compounds available from various sources, including dead organic matter. Anaerobic environments are moderately rare and have been since the Great Oxidation Event and the outburst of plant life on land. They are found in still waters, flooded soils… and the guts of animals. We humans host microbes in our gut that can make methane (thus flatulence is often flammable). We’re pikers compared to ruminant animals. We now raise so many cattle – about a billion (gotta make all those McBurgers) – that they are a major source of methane on Earth. The production was quite modest before humans came to raise many cattle and to do agriculture with flooded fields (rice). For N2O, the same anaerobic niches (though not cattle, significantly) are sources. A recent study concludes that our agricultural system, with its heavy use of nitrogenous fertilizers, its unprecedentedly large area to feed an unprecedentedly large population, and its intensive use of fossil fuels on- and off-farm, creates a major fraction of all anthropogenic GHG emissions. It’s so large that by itself it will thwart any possibility of keeping GHG concentrations stable, barring major changes in how we do agriculture.
Fire-using humanity changes the climate. Two big uses of fire are, first, combustion of fossil fuels for motive power, electric power, and heating, and, second, firing fields for harvesting (as for sugar cane) and for land clearance (especially of forests all over the globe). All this combustion of fossil fuels and of plant biomass generates two effects that partly compete in their influence on climate. Combustion generates CO2 as a primary output. That’s the biggest contribution to the increasing GHE and global warming. Combustion also generates aerosols directly (dirty diesel soot, smoke particulates from burning fields). Depending upon color and loft altitude, aerosols either reduce solar energy gain by reflecting radiation to space or increase solar energy gain by increasing the absorption of radiation. Currently the reflection mode wins. It hides some of the GHE increase from rising levels of CO2 and methane. Our global warming is actually a bit worse than it seems.
We humans have control of the greenhouse effect and of our climate in a moderate number of ways – how we farm, how many energy-demanding services we seek and how intensively we use them, which energy technologies we deploy. Natural processes outside of our control will continue to operate – the volcanoes, orogeny, and all, though on a vastly human-altered landscape, even if we were to suddenly disappear. I have more to say about climate change actions in a later section.
Our luck in the Universe: Well, mixed luck: Life is capable of driving long-term trends that might push our greenhouse effect to levels untenable for life. Perhaps the majority of planets with an intermediate level of habitability, Habitability 2.0 (presence of multicellular organisms), face this risk. We may be a rare case that life hasn’t pushed climate over a cliff. These statements are not a defense of the Gaia Hypothesis of the late James Lovelock, which proposes that life forms act together (though not consciously) to preserve the conditions for life. While living forms enter into food webs and the like, so that metabolic wastes and dead bodies don’t accumulate, life can nonetheless pose insurmountable problems for itself.
And on exoplanets? With or without technological civilizations?
All the change-agents just noted are based on universal physical laws and, we are rather certain, will operate on any planet (good luck if you debate that; even general relativity has been tested in the most extreme conditions at the black hole in galaxy M87). The only driver not certain is the biological one. There are surely potentially habitable planets that have not yet evolved life. The specific balance of all the drivers can take on a huge diversity among all the planets. Many combinations – really, most combinations – are likely deal-breakers for the evolution or continuity of life. The Australians call their nation the lucky country. We should call ourselves the lucky planet, on so very many bases.
How do organisms faced with changes in climate, especially big and/or fast changes, change themselves to stay alive? If not all of them survive, then at higher levels can they keep their populations from getting too small to recover? The answer is that: (1) excepting bacteria and the like, no individuals survive; we all die eventually, no surprise, even in a stable climate; (2) generally, and especially in a changing climate, many populations of many species decline, while often recovering; (3) in the longer term, a complex species such as us humans only lasts about 2 million years on average; we’re either replaced by an offshoot species, or else our whole line of descent vanishes; 99% of all species that have ever existed are now extinct; (4) changing and unstable climate accelerates the rate of extinction.
All species over-reproduce; most individuals die before reproducing, other than among humans and some other social species with care of offspring. Think of insects, frogs, etc. that produce hundreds of thousands of eggs, while in a stable population only two offspring survive per pair (in sexually reproducing species with a sex ratio of 1:1; let’s not get into complications). Even among humans, the survival of most individuals to reproduce is a very new thing, dating to the past, oh, century or so.
Let’s take it to the species level, using the somewhat simplified idea that a species is a collection of organisms reproducing only within the group; hybrids with other species don’t occur, or else die or fail to reproduce. The capacity to reproduce well over replacement levels is an absolute requirement for any species to persist through any run of downturns in reproduction. Otherwise, each downturn decreases the population until some number of downturns eliminates it. So, there’s resilience in the Malthusian dilemma. Many species have bounced back from big downturns in numbers from adverse conditions. There’s some evidence that humans did so. A hallmark of going through a low population size is reduced genetic diversity, and human genomes bear some signs of this.
It’s really the population of genes that maintains itself; bodies of individuals host the genes, which are the persistent features of biology. We don’t pass on our bodies, only our genes. Check out The Selfish Gene and more recent publications. Yes, that’s true – all of us sexually reproducing humans die, but our genes get passed on more or less intact. So, more relevant than population count is the count of the genetic variants in a population or entire species. There’s so much to say about how genes get passed on to the next generation, how they change in their association with each other on chromosomes (haplotypes), how they change by mutation, and how they get selected positively or negatively by the performance of the bodies that host them. Any biology book will keep you engaged in this for as long as you wish (or longer, if you’re a student just taking a course as a requisite; as one student’s review of a book about penguins related, “This book teaches you more about penguins than you want to know”).
Both physical individuals and genes matter in the ability of organisms to persist. To be able to survive and reproduce, an organism or its population can:
- Acclimate in its physiology, developmental schedule, or ecological interactions. Individual humans notably get used to new conditions seasonally, or when they move to a new location, and the like. This involves no genetic changes (OK, there are epigenetic or transient marker changes on the DNA sequence, a richness of function now under intensive research). Plants of the same genetic makeup such as elite crop lines grow differently in different soils and weather series. Salmon begin as freshwater fish and migrate to the sea and then return to spawn. Often this allows survival and reproduction, if not always. Note that reproduction is critical to count as persistence of the genes.
- Migrate – go to a place that may be better. The fossil record records huge numbers of instances. The opening of the land bridge between North and South America, the Isthmus of Panama, led to many species swaps, with some big winners but some big losers. We got armadillos in the north; many species went extinct in the south. In current climate change we’re seeing tropical and semitropical fish move northward up the shores of the US; lionfish, known by many common names, are now off the coast of the Atlantic states. Insect and plant species limited to certain elevations (on mountain ranges or the like) are moving upslope.
- Adapt by genetic change. This does not happen in the individual, of course; our DNA sequences are set and only limited epigenetic markings on genes can occur, with decaying inheritance between generations. Genetic change brings up a longer discussion about how it proceeds:
Genetic replacement. Consider a genetic locus on chromosomes that is very important for tolerance of a cool climate. Suppose there are two variant forms carried by all the individual organisms in the population. One, call it allele C or just C, confers good performance in a cool climate; the other, W, good performance in a warm climate. (OK, geneticists would prefer a different terminology such as C and c, but that’s unimportant.) Both alleles may be present at nonzero levels, for a variety of reasons. In a cool climate there might be 90% C and 10% W in the population. Now the climate changes from cool to warm. Natural selection might switch the abundances to 20% C and 80% W. What happened to all those C’s? If the population of individuals stayed the same, then a lot of the C-carrying individuals either died off or failed to reproduce while W individuals survived and thrived. It is also possible that the population, at least for a while, shrank as many of the C holders died off. This is an oversimplification, given that there can be individuals with allele combinations CC, CW, and WW and that the genes might be linked to other genes. In brief, however, there were excess genetic deaths among the C population. Many C genes had to fail to get to the next generation. Suppose you were a human in a population that, like most, was composed mostly of individuals who were lactose intolerant. Your population migrated or otherwise had to adopt a modified lifestyle with a substantial reliance on milk. If you were a lactose-intolerant individual you likely had a low chance of surviving and reproducing. This is documented indirectly in human history. The story of human skin pigmentation is another good example, if even more complex.
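The allele-replacement arithmetic can be sketched numerically. This toy model is haploid (ignoring the CC/CW/WW combinations and the gene linkage just mentioned), and the 10%-per-generation reproductive advantage for W in the warm climate is purely an assumed number for illustration.

```python
# Toy model of allele replacement under natural selection.
# Haploid, two alleles: C (cool-adapted) and W (warm-adapted).
# The fitness values are assumed, illustrative numbers only.

def next_generation(freq_C, fitness_C=1.0, fitness_W=1.1):
    """New frequency of C after one generation of selection."""
    w_C = freq_C * fitness_C
    w_W = (1.0 - freq_C) * fitness_W
    return w_C / (w_C + w_W)

freq_C = 0.90  # cool climate: C dominates
for generation in range(38):  # then the climate turns warm
    freq_C = next_generation(freq_C)
print(round(freq_C, 2))  # C has fallen to roughly the 20% range
```

Even a modest per-generation advantage compounds quickly; the same relentless compounding is why rare downturns can extinguish an allele entirely.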
The net effect of a climate change is a change in population number and in the frequencies of genes for climate response… plus changes in the frequencies of genes poorly related to or unrelated to climate but linked strongly to the climate-response genes. There is also a definite chance that genes adequate to respond to the new climate did not exist in the initial population and did not arise, or not fast enough, by spontaneous mutation. That’s the demise of the species. It has happened over evolutionary time to more than 99% of all species. Note that the response to the climate need not be to its physical aspects alone, such as tolerance of new temperatures or of related low water availability in the physiology of the individual body. The critical response can have a lot to do with the interaction with competitors, food plants, and other factors in the whole biome. All this makes the response of any population or species to a new climate very hard to predict, even if we know its genetics, physiology, and developmental program… but not those of its living neighbors. There’s a great deal that looks like chance in who thrives and who doesn’t. Who would have guessed that smaller dinosaurs that nested in river banks would be the main survivors in the dinosaurian clade at the End Cretaceous mass extinction, continuing to evolve as birds?
Genes have their demise, too. The genes of the dodo are gone forever. So are the genes of all non-avian dinosaurs and all mammoths (we might resurrect some, but don’t count on it). Evolution is chancy, and the chance that any new gene will arise by recombination and mutation to match an extinct gene is effectively zero. That’s the reason that biologists of so many stripes are deeply concerned about species extinctions. Here’s a daunting set of consequences: we lose the wild progenitors of a major crop species such as wheat. This is slowly but almost inexorably happening with land clearance and with crop breeding practices closed to (most) wild species. Now comes a new wheat disease, and all the genes available in the genetic lines held by crop breeders offer only limited effectiveness against it. A gene (or several genes) that would have been effective was (were) present in the wild progenitors, but these are now extinct.
It is then hard to predict who – species or genes – will survive, or if any life survives on a planet in a new, major climate disruption. Many clades (defined shortly) survived our mass extinctions. Birds made it through the End Cretaceous extinction. Many fungi certainly survived each extinction, though we have no fossils to check this. Trilobites, however, lost out at the end of the Permian. (A clade is a group of organisms all derived from a common ancestor, and it must include all of the descendants. Fish are not a clade separate from humans, since humans evolved from the same ancestors as modern fish. Check out Your Inner Fish by Neil Shubin.) In the hokey but thrilling movie, Jurassic Park, Jeff Goldblum’s character says, “Life finds a way.” So far, so good, on Earth, but there were and are no guarantees at our next turn of climate on Earth or on any exoplanet.
That said, our ongoing climate change on Earth has some daunting possibilities. These are explored in a section coming up.
Our luck in the Universe: Life is extremely adaptable, and might be on other planets, too. Not having a second, independent origin of life to study, we can only guess if this tenacity is an emergent property of life. Since we will have an extremely low probability of meeting any other life “systems,” speculation is moot. Asking the question of life’s tenacity is like a jocular exam question, “Define the Universe, and give three examples.” “Life” persists, we don’t, often lurching through catastrophes. Even genes go extinct. We’d best not help this extinction along very much.
We would not be happy with the geographic pattern of temperature – and precipitation – were it not for the redistribution of heat via wind and ocean currents, as well as the frequent evening out of received radiation by the rotation of our planet. Let’s look at the last item.
The Earth rotates in 24 hours, meaning that it turns to put the same longitude to face the Sun in that time. There are lots of interesting subtleties here. First, in a reference frame embedded in the stars, it takes nearly four minutes less to attain the same orientation relative to the stars. The extra four minutes is to make that final turn to the Sun as we circle it; it’s a moving target.
Second, we keep redefining time to make the day 24 hours, even though the Earth’s rotation is slowing down. The Moon is the reason. It pulls a tidal bulge on the Earth’s oceans (and rocks). The ocean tides lag a bit, much like water sloshing in a pan that we’re carrying. So, the tidal bulge is not directly in line with the gravitational force of the Moon. The Moon pulls back slightly on the Earth’s rotation. The rotation slows down, and, by the equal and opposite force exerted on the Moon (also viewed as the conservation of the angular momentum of the Earth-plus-Moon system), the Moon slowly moves farther away. The early Earth rotated in about 10 modern hours, and the Moon was much closer. Now the Moon is just at the right distance to occasionally make perfect solar and lunar eclipses. Also, precision timekeeping requires that we occasionally add leap seconds to civil time to keep it in step with the slowing rotation as measured by our most precise clocks, the atomic clocks. You may find the book From Sundials to Atomic Clocks very informative and interesting for the details.
With our day of 24 hours, the side that was sunlit cools off at night and rewarms the next photoperiod. The daily swing in temperature varies from a few degrees C in cloudy places to 20°C or more at high elevations under their often clear skies. There are two reasons for the pattern. One is that heat is stored in the water, air, and a bit of soil, being released at a modest rate when the sunlight stops at a location. Water has the highest heat capacity in the depths that can churn up the heat, air the next highest, and soil the lowest. Soil has no convection, so heat only moves in and out several centimeters in a day (the depth of the wave of temperature is proportional to the square root of the time; a profile of a meter tells us about the year, a profile of 10 meters tells us about a century). Thus, maritime or big-lake climates show the least day-night variation, dry high land the most. Second, heat has several routes to take. One is laterally on the surface in winds and ocean currents. The other is radiating away to space, as noted in the look into the greenhouse effect. Water vapor is the main retarder of this radiating away. High elevations have less retardation because there is less water vapor in the air, for two reasons. One is that there’s simply less air above them; the air pressure, hence the mass of air above them, falls off exponentially with elevation (as the negative exponential, that is). Second, high elevations are colder. Air that rises over increasing elevation expands. It does work against the surrounding air in expanding. That decreases its internal energy, as we call it, lowering the temperature. The phenomenon is called adiabatic expansion and it explains the common drop of 4 to nearly 10°C with each 1000 m of elevation, depending upon whether the air is dry (biggest drop rate) or wet (with condensation of water at lowering temperatures releasing the heat of vaporization, much as steam does on our skin if we hesitate over a boiling pot).
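The square-root rule for temperature waves in soil can be made concrete. The 1-meter calibration for the annual wave is taken from the rule of thumb above; real soils vary with their thermal diffusivity.

```python
import math

# Penetration depth of a temperature wave in soil scales as the
# square root of its period. Calibrated so the annual wave reaches
# about 1 m, per the text's rule of thumb.

def wave_depth_m(period_days, annual_depth_m=1.0):
    """Approximate depth reached by a temperature cycle of a given period."""
    return annual_depth_m * math.sqrt(period_days / 365.25)

print(round(wave_depth_m(1), 3))      # daily cycle: ~0.05 m, i.e. several cm
print(round(wave_depth_m(36525), 1))  # a century: ~10 m, as in the text
```

The same scaling is why borehole temperature profiles can be read as climate records: each extra meter of depth looks further back in time, quadratically.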
Because parts of the Earth are all getting varying amounts of solar heating, there is a rise in the air at the warmer spots, pushing air out laterally, that is, generating the winds. This uses about 5% of incident solar energy – a modest amount of a huge heat source, so that tapping wind energy has the potential of massive energy use for humans. The patterns in wind are rich, too much so to go into detail but the details are available in many books, websites, and wetware (living humans willing to talk to you – they’re the complement of computer hardware, firmware, and software). Ocean currents arise in analogy to winds, with a lot more steering of them by continents that bound them. Both winds and ocean currents are also “twisted” by the Coriolis effect. That’s the apparent force perpendicular to their motion that seems to exist because we look at them in our frame of reference that rotates with the planet; it’s not an inertial frame of reference.
On any rotating planet with an atmosphere and oceans, much heat is redistributed. The poles may be cold now, and so is Jolly Old England, but they’d be much colder without the redistribution. We still have swings in temperature over various time scales – hour, day, season, decade – as we are so aware. They can be described quite roughly as following an exponential decay with a characteristic time of relaxation – actually a mix with several such relaxation times. A slowly rotating planet has time for much decay in surface temperatures and has bigger swings. Almost all the Solar System planets are pretty fast rotators; the exceptions are Mercury, which is locked into a resonance of 3 rotations per 2 orbits around the Sun, and Venus, which turns (retrograde) only once in 243 Earth days. Mercury’s slow rotation gives time for extreme heating on the sunlit side and extreme cooling on the dark side, observed in the range 430°C to -180°C. No slowly rotating planet will be habitable.
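The exponential-relaxation picture can be sketched with a toy calculation: what fraction of the full day-night temperature contrast a surface realizes before dawn, given a relaxation time. The 100-hour time constant is an arbitrary assumption for illustration; only the comparison between fast and slow rotators matters.

```python
import math

# Fraction of the full day-night temperature contrast reached by dawn,
# for exponential relaxation with time constant tau.
# tau = 100 hours is an assumed, purely illustrative value.

def swing_fraction(night_hours, tau_hours=100.0):
    """1 - exp(-t/tau): how far the surface has relaxed toward its cold limit."""
    return 1.0 - math.exp(-night_hours / tau_hours)

print(round(swing_fraction(12), 2))    # Earth-like 12-hour night: small swing
print(round(swing_fraction(2112), 2))  # Mercury-like night (~88 days): full swing
```

A fast rotator never gives its surface time to relax far from the mean; a slow one realizes nearly the whole contrast, as Mercury’s 430°C to -180°C range shows.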
An extreme case of slow rotation is posed by a planet that is tidally locked to have one face permanently facing the star. This occurs when a planet is too close to its star. Consider the planet Proxima Centauri b around the star that’s our nearest neighbor, 4.2 light-years away (75,000 years away at the speed of our fastest spacecraft, the Voyagers at 61,850 kilometers per hour, after slingshotting around big planets; no human will ever live to travel there). It has been measured to orbit the very cool star at a distance that gives it a reasonable surface temperature. In a sidebar I estimate the top-of-the-atmosphere T = -26°C. With a decent greenhouse effect it might have an average surface temperature allowing liquid water. Of course, the star-facing side is much hotter and the opposite side is near the temperature of interstellar space, 2.7K. One might say that there are zones that are of moderate temperature, where the starlight strikes at angles that give a nice amount of energy flux density. I offer a calculation of these zones in a sidebar. Alas, that doesn’t work. Any atmosphere – necessary for organism respiration and for precipitation – quickly migrates to the cold side to condense out at strikingly low temperatures. This condensation even happens on Mars in winter, to the extent of 25% of the atmosphere. End of habitability, or no chance of a beginning. (There are also stellar flares at the red dwarf that would strip away the atmosphere as well as sterilize the surface with radiation.) The reason that the planet gets tidally locked is that the star pulls a huge bulge on the rock of the planet, a great handle to keep the planet facing it.
An intermediate case of slow redistribution of stellar energy input is that of a planet rotating at an extreme tilt. Uranus is tilted at 98°, so that its spin axis lies almost flat along its orbital plane. As a result, it spends long times with its equator facing almost normal to the Sun’s rays, getting “hot” while the poles freeze even harder. One-quarter revolution before or after, one pole or the other gets the sunlight while the equator and the other pole freeze. A habitable planet must not have an extreme axial tilt.
Our luck in the Universe: Lessons from planetary rotation and fluid circulation: A habitable planet has to rotate fast enough and have enough atmosphere and ocean to move heat around, avoiding great variations in temperature. Bacteria can take more variation than can multicellular organisms, but they have their limits. Besides, they’re not the ultimate interest in habitability, other than for habitability 1.0 as defined early on here. The tidal locking phenomenon cuts off the search for a habitable planet at stars that are too much smaller than the Sun. The small stars may “live long and prosper” but not for organisms.
Tidal flexing: I am frankly appalled to read that the warming of Jupiter’s moon, Europa, makes it a candidate for life. Yes, it orbits Jupiter so closely that it gets tidal flexing. It appears to be enough that some 2 to 30 km beneath its icy surface there may be liquid saltwater. But heat alone cannot be a source of metabolic energy; it’s low grade, not able to drive (bio)chemical reactions at even an infinitesimal rate. Thermal energy at habitable temperatures is about 1/40 of an electron volt (eV), while a number of key biochemical reactions need several eV; the fraction of molecules in the high end of the thermal distribution with even 1 eV is about e^-40, roughly 4 parts in a billion billion. Second, the other potential source of energy, sunlight, is very weak, with Europa at 5.2 times the distance from the Sun as Earth, so that sunlight is some 27 times weaker… and then it would need to penetrate those 2 to 30 km of ice. Ask a hardy scuba diver exploring Antarctica for science how dark it is beneath even a few tens of meters of ice.
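The Boltzmann-factor estimate is easy to reproduce: exp(-E/kT) approximates the fraction of molecules with thermal energy of at least E, and kT is about 0.025 eV near 290 K.

```python
import math

# Boltzmann factor exp(-E/kT): rough fraction of molecules with
# thermal energy at least E. kT ~ 0.025 eV near habitable temperatures.

kT_eV = 0.025
for E_eV in (0.1, 0.5, 1.0):
    fraction = math.exp(-E_eV / kT_eV)
    print(f"{E_eV} eV: {fraction:.2e}")
```

At 1 eV the fraction is about 4×10^-18, and the several-eV energies of key biochemical reactions are hopelessly beyond thermal reach.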
Geothermal (but it has other roles in cycling elements): The other heat source is internal to the planet. Our Earth is hot inside, to the tune of about 6,000°C. Much of the heat is left over from the initial coalescence of nebular dust to make the planet; collisions, with their loss of gravitational potential energy, are a potent source of heat. There is also the radioactive decay of uranium, potassium, and thorium; recall that potassium is mildly radioactive, with a half-life of 1.25 billion years. Another source is the solidification of the core, releasing the heat of fusion. Yet one more is the continued migration of heavy elements deeper into the interior, with the loss of gravitational potential energy being converted into heat. Ask any miner or mining engineer – it’s hotter every step down toward the core, about 1°C per 40 meters. Even at a milder site, 3.9 km (2.4 mi) down in Earth’s deepest mine, the TauTona gold mine, the rock face is at 60°C or 140°F (miners only stay alive with air conditioning!).
There are lots of good ways to estimate the internal heat of the Earth. One way is from the observed heat flow per area at the surface. Now, this varies by the type of surface cover and whether we talk about the surface on land or on the ocean. We can skim over some details and take what geophysicists have found. The flow of heat is about 60 milliwatts per square meter in nice metric units, or about 50 thousandths of a watt per square yard. We can compare that to the average thermal radiation leaving the Earth now at about 375 watts per square meter, a magnitude that keeps us toasty at a mean temperature of 15°C. I like to note that this is also the average annual air temperature in Las Cruces, New Mexico, USA, where I live and write. Let’s quote that in absolute temperature, the scale that starts at absolute zero, which is -273.15°C or -459.67°F. We’re basking at about 288K (that’s “kelvin,” named after Lord Kelvin, who was plain old William Thomson before he did some great physics back in the late 1800s).
We can figure out what absolute temperature the Earth’s surface would hit with our measly interior heat flow. Without the Sun, Earth would face cold outer space, at 2.725K, which is really cold but is a now precisely measured relic of the Big Bang that started the Universe. The Earth’s surface, like almost all bodies, radiates away energy at a rate proportional to the fourth power of its absolute temperature, or P = σT⁴ as a formula. The σ here is the Stefan-Boltzmann constant, named for two more great physicists of old. This formula is for a black body, something that absorbs all kinds of radiation as well as it emits all kinds of radiation. Like most bodies, the Earth is close to acting like a black body, but not perfectly. We should put in a multiplier on the right-hand side called the emissivity. That’s about 0.96, so close to 1.00 that we’ll skip it. To continue, we can invert this formula to find that the absolute temperature is proportional to the ¼ power (mathematically, the square root of the square root) of the radiated power density. Well, 60 milliwatts is 1/6250 of 375 watts, so the surface temperature running with only geothermal power would be (1/6250)^0.25, or 0.11 times as high as our surface temperature is now. That’s 32.4K, or -240.75°C or -401.35°F. The only thing not frozen solid at this temperature and normal air pressure is helium.
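For readers who like to check the arithmetic, here it is in a few lines of Python; the 375 W/m² and 60 mW/m² figures are the ones quoted above, so treat this as a back-of-envelope sketch:

```python
# Invert the Stefan-Boltzmann law, P = sigma * T^4, to get T = (P/sigma)^0.25.
# Since sigma cancels in a ratio, temperatures scale as the 1/4 power of the
# radiated power density.
T_now = 288.0   # present mean surface temperature, K (from the text)
P_now = 375.0   # thermal radiation leaving Earth, W/m^2 (from the text)
P_geo = 0.060   # geothermal heat flow alone, W/m^2 (from the text)

T_geo = T_now * (P_geo / P_now) ** 0.25
print(round(T_geo, 1))   # about 32.4 K
```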
Our Solar System’s giant planets Jupiter, Saturn, and Neptune are so large that their core temperatures are much higher than Earth’s: an estimated 24,000K for Jupiter, 12,000K for Saturn, and 7,300K for Neptune. (You may ask how these could be estimated – it’s from known masses and densities, with the assumption of hydrostatic equilibrium for the sorting out of densities. Recent spacecraft flybys added much information using complex rocket science – the mass distributions affect the trajectories of the spacecraft.) They had a lot more loss of gravitational potential energy to convert to heat in the coalescence to greater mass and greater depth to their cores. Jupiter’s “surface” temperature (an odd term for a gas planet) is about 25°C above what’s expected just from its interception of sunlight.
Short story: internal heat won’t help make any Earth-sized or smaller planet much warmer than starlight and its own greenhouse effect make it.
Getting the right chemical elements – at the surface
The elements needed by life include the biotic and the abiotic. The former are those incorporated into the bodies of organisms – carbon, hydrogen, oxygen, nitrogen, iron, and 20-some more. Some of the latter, abiotic elements are important for an equable environment. For example, we stand on a lot of silicon dioxide = sand as the major portion of the solid Earth. Humans require carbon, hydrogen, oxygen, nitrogen, fluorine, sodium, magnesium, sulfur, chlorine, potassium, calcium, chromium, manganese, iron, cobalt, copper, zinc, selenium, iodine, and possibly a few others. The first three make up most of our mass, and light hydrogen makes up the majority of our atoms. Other organisms, especially the plants on which we all rely, also need, variously, boron, silicon, vanadium, molybdenum, and tungsten. These have to be available at the surface. Elements important for the abiotic environment, at least on Earth, include the radioactive elements uranium, thorium, and potassium that heat the Earth’s interior to drive tectonic movements (movements that, eventually, gave us dry land and surface renewal with volcanos and mountains).
We got our elements above lithium from aged and exploded stars. Oxygen is made in stars of medium mass like our Sun and larger. Our Sun has not made (not released, certainly) appreciable amounts of almost any of the biotic and abiotic elements beyond helium. Oxygen and all the rest were present in the presolar nebula formed from the gas and dust left by earlier stars at the ends of their lives – probably a couple of supernovae or merging neutron stars. The elements above iron (atomic number 26) – in us, iodine, cobalt, and nickel – owe their existence to the r-process, r standing for rapid. When a stellar cataclysm occurs, there are lots of neutrons flying. They can be absorbed by atomic nuclei to create heavier and heavier elements, likely up to berkelium or einsteinium. The absorption happens too fast for a major fraction of the newly built elements to fission into small pieces. The r-process was theorized by Fred Hoyle in the 1940s and more firmly by Hans Suess and Harold Urey in 1956. It has been seen in action recently by Darach Watson and colleagues. They saw light emitted by masses of the element strontium formed where LIGO detectors “saw” a neutron star merger. Our (relatively) peaceful existence on Earth relies on unimaginable violence in the past. Joni Mitchell sang that we are stardust; so true.
Water! Water has physical and chemical properties that make it the compound unmatched for roles as the medium of life, the main greenhouse gas, the shared heat transporter in the climate system, the fluid for the Earth’s heat engine, and a major recycler of chemical elements. On Mars none of these functions operate. It is extremely likely that water is an irreplaceable compound on any habitable planet. The properties that are widely appreciated by chemists, physicists, biologists, geologists, meteorologists, and people in so many other fields include:
- Water is a simple molecule, widely available astronomically (hydrogen is everywhere; stars and the remains of their spectacular explosions make oxygen). Any stellar system with solid planets will have it in notable abundance. It is a remarkably stable molecule, capable of being exposed to many physical conditions and radiation environments while returning to any standard state of liquid, vapor, or ice. Only a tiny bit gets photolyzed high in our planet’s atmosphere. We may compare it with three other simple hydrogen compounds with which it is isoelectronic: CH4, NH3, and HF. These, and water, have eight valence electrons that make chemical bonds with hydrogen.
Methane is a gas in terrestrial surface conditions. An atmosphere of methane irradiated by sunlight will make (has made, in the ancient history of the Earth) many breakdown products, polymers and soot. Ammonia is also a gas in terrestrial surface conditions, though it liquefies at a not-too-low temperature (-33°C at our normal atmospheric pressure). Like water, its molecules make weak but important hydrogen bonds with each other, making it more condensable. Ammonia is less widely available astronomically than is water because nitrogen is made in stars in much smaller quantities than is oxygen (a reflection of the odd-atomic-number pattern in nucleosynthesis).
Hydrogen fluoride is also far less abundant than water astronomically, for the same reason, F having atomic number 9. It also makes hydrogen bonds and can be liquefied (at 19.5°C). None of these three other compounds has the full suite of the following properties:
- Water has a liquid range spanning high temperatures. At the atmospheric pressure on Earth’s surface it is liquid between about 0°C and 100°C, or 273K to 373K, with some deviation depending on solute content. Compare that with -182°C to -162°C for methane, -78°C to -33°C for ammonia, and -84°C to 20°C for hydrogen fluoride. The long and high range of liquid temperatures is a consequence of the pervasive hydrogen bonding on both ends of its molecules. Only as a liquid can a compound be the basis for living cell contents. Gases expand and contract greatly, denying stable structure; solids deny mobility of cell solutes. The centroid of the liquid range is critical. Chemical reactions and particularly biochemical reactions proceed at rates that are strongly dependent on temperature. An often-cited rate increase is a doubling per 10°C rise. That’s, of course, very crude, as the rate multiplier depends on the height of the energy barrier to be crossed in a reaction. But with that rough guide, the difference in “average” reaction rates is about 1500-fold comparing water to its closest competitor on various grounds, ammonia, at the centers of their liquid ranges (50°C vs. -55°C).
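That 1500-fold figure is easy to reproduce, taking the doubling-per-10°C rule at face value (a crude assumption, as noted above):

```python
# "Doubling per 10 degC" rule of thumb (Q10 = 2) applied to the centers of
# the liquid ranges quoted in the text: water ~50 degC, ammonia ~-55 degC.
q10 = 2.0
t_water = 50.0     # degC, center of water's 0 to 100 degC liquid range
t_ammonia = -55.0  # degC, center of ammonia's -78 to -33 degC liquid range

ratio = q10 ** ((t_water - t_ammonia) / 10.0)
print(round(ratio))   # about 1450 - the "~1500-fold" of the text
```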
- Water has a very high heat capacity per mole and per gram. In the range of Earthly temperatures it’s a bit over 4 joules per gram per degree Celsius (per K). Heat capacity is the amount of heat, quoted variously per gram, per mole, or on another basis, needed to raise the temperature of some body by one degree, also quoted variously as Celsius or Fahrenheit. Water’s heat capacity is several times higher than that of virtually every other common liquid. Again, this is a consequence of its pervasive hydrogen bonding. A lot of heat has to go into water to break a given fraction of those hydrogen bonds. Water is a great heat storage medium. It ameliorates temperature swings, daily, annual, and on longer time scales. That’s evident over water bodies contrasted with land. It’s equally important for temperature swings in the bodies of organisms.
- Water has a very high heat of vaporization, ΔHvap (yes, it’s H-bonding again!). Per gram it’s about 2500 joules at room temperature, decreasing to about 2260 joules at the boiling point, when about 15% of H-bonds have already been broken. Compare methane at 480 joules per gram and ammonia at 1370 joules per gram. Water is outdone by iron and aluminum, for example, but searingly hot liquid metals don’t make good cell solvents. The high ΔHvap makes for a high heat dissipation for sweating humans, for plant leaves transpiring in sunlight, and for cooling of water bodies. It also makes for a very large heat release when water condenses in the process opposite to vaporization; think hurricanes, for one, as powered by this heat release. Water is thus the working fluid of a great heat engine on Earth that goes a good way toward powering massive air circulation, including storms but also the delivery of water to land. Water obligingly delivers itself to high elevations, good for high-elevation vegetation but also for our hydropower technology. A lower heat of vaporization of a compound other than water in a planet’s oceans makes for a much less dynamic planet. Think of the methane-ethane lakes on Saturn’s moon, Titan. Yes, striking to see, but by analogy I may refer to the phenomenon of a dancing bear. Someone said it’s amazing, not that it’s done well but that it’s done at all.
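As a rough illustration of what that heat of vaporization buys a sweating human, here is a sketch; the one-liter sweat figure and the 100-watt resting metabolism are illustrative textbook-scale values, not numbers from this book:

```python
# Latent-heat bookkeeping using the ~2500 J/g figure quoted in the text.
latent_heat = 2500.0   # J per gram of water evaporated, near skin temperature
sweat = 1000.0         # grams evaporated (one liter of sweat; illustrative)
metabolism = 100.0     # W, rough resting metabolic heat output (illustrative)

heat_removed = latent_heat * sweat          # joules shed by evaporation
hours = heat_removed / metabolism / 3600.0  # hours of resting heat offset
print(round(heat_removed / 1e6, 1), round(hours))   # 2.5 MJ, about 7 hours
```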
- Water has a high dielectric constant, 78, in contrast to about 2 for common nonpolar substances, e.g., benzene. Here we may view this as the ability to shield charged particles from each other, such as the ions and the charged parts of proteins that abound in cells. Lots of ions can be held in aqueous solutions, notably in all of our cells. Add to this its hydrogen bonding and medium polarity and we get water as the “universal solvent” – almost. It certainly dissolves more compounds and stabilizes more colloids and other dispersions than any other fluid anyone can name. (It has a counterpart for purely organic molecules such as hydrocarbons and their derivatives, in dimethylformamide. DMF is the “universal organic solvent,” and it dissolves our skin oils, too. Don’t wash car parts in DMF if you value your skin, and your safety from cancer as well. I know of someone who paid the price.) In living cells there are literally tens of thousands of proteins and hundreds of simpler charged molecules to keep mobile. In geological processes, water’s solvation ability leads to strong cycling / recycling of important compounds. Water dissolves CO2, as a major sink and reservoir for the greenhouse gas; the resultant carbonic acid solubilizes many metals, from calcium to iron and beyond. Geological deposits of ores, cave formation, and much more also result from this.
- At the same time, water is a very poor conductor of electricity by itself. It won’t short out the potential differences within cells and between cells and their environment that are necessary for metabolism. Water does dissolve ions that can carry electrical currents extremely well. For late-evolved organisms such as us, think nerve conduction. Let the ions do the work; they are controlled nicely in concentration and location.
- On the geological front, water is also a good lubricant for rock motion in tectonics. Only a few percent of water in crustal and mantle rocks allows them to flow much faster, though still as extremely viscous materials. That flow was inconceivable when Alfred Wegener proposed tectonic motions, but now it’s accepted, invoked for models of movement, and studied in high-pressure equipment in laboratories worldwide. The results of this flow for life are a mixed bag. We get renewal of the Earth’s surface as it relentlessly churns depths to surface and back again to restore chemical elements to the surface. The hard part is that it’s by volcanism and mountain-building. Those two processes are not only traumatic for local organisms. They also shift the climate, at times massively, as they affect the amount of CO2 in the air positively (volcanism) and negatively (weathering of new mountain surfaces). There’s more to say about this in the section on the greenhouse effect, as well as on mass extinctions of organisms from climate changes.
- Water expands upon freezing, by about 9% in volume. There are very few other materials that do so: three metals (gallium, which has other “cool” properties; bismuth; and antimony), two semimetals (silicon and germanium), and acetic acid. None of these are media of life. The consequences of the expansion are numerous. Because ice floats, it is most amenable to seasonal melting by solar radiation. If ice were to sink, many “temperate”-zone lakes would be solid ice below a thinner water layer in the warm season, still water being a rather poor conductor of heat. Also, ice forms on significant portions of the Earth’s surface, directly or as snow. Its reflectivity has a major influence on climate. Insulation by snow actually reduces the wintertime plunge of soil temperatures, to the benefit of later plant growth. On the negative side for living organisms, the expansion of water upon freezing readily bursts cells. Evolved adaptations abound to prevent freezing or to direct freezing into smaller, less-damaging crystals.
- Famously, water participates in transfers of its protons with other molecules. It dissociates naturally into hydrated protons, often denoted simply as H+, and hydroxide ions, OH–. The balance of these ions, expressed as pH, can be shifted by acidic and basic molecules in solution. Protons can be added to modifiable sites on proteins in acidic conditions or abstracted in basic conditions.
The change in electrical charges modifies the shapes of proteins, often with great effects on protein activity as enzymes or as structural molecules. Re the latter: consider the solidification of egg white by acid. The charge states of other, smaller molecules can also be altered.
Protons are also the prime currency for bringing solutes into cells or expelling them. In the 1960s Peter Mitchell formulated the chemiosmotic hypothesis, which was borne out dramatically in experiments. Living cells possess proton pumps, molecular assemblies that use chemical energy, commonly as ATP, to push protons across membranes. This generates a difference in concentration of protons between inside and outside compartments and, therefore, a step difference in chemical energy. At other protein complexes, infalling protons can carry along negative ions such as nitrate (countering some of the electrical barrier that exists with cell interiors commonly being negative) and can drive changes in protein shape to bring in not only anions but also uncharged molecules. This is how our own cells import key nutrients. Diffusion alone can’t do it. Were the medium of life to be other than water, chemiosmosis would be almost impossible to consider.
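The energy stakes of chemiosmosis can be sketched with the standard free-energy expression for moving a mole of protons across a membrane; the 150 mV potential and the one-unit pH difference below are typical textbook magnitudes, brought in here as assumptions, not figures from this book:

```python
# Free energy per mole of protons crossing a membrane:
#   dG = F * delta_psi + 2.303 * R * T * delta_pH
F = 96485.0       # Faraday constant, C/mol
R = 8.314         # gas constant, J/(mol K)
T = 298.0         # temperature, K
delta_psi = 0.15  # membrane potential, volts (typical magnitude; assumption)
delta_pH = 1.0    # pH difference across the membrane (typical; assumption)

dG = F * delta_psi + 2.303 * R * T * delta_pH   # J per mole of protons
print(round(dG / 1000.0, 1))  # about 20 kJ/mol, available for transport work
```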
- Water sits in a broad middle zone in the span of reduction and oxidation reactions. It is neither highly oxidized nor highly reduced relative to all the other compounds on Earth. Had it been on an extreme, it would drive other compounds on the planet to states that would be hard to change chemically, making a static and lifeless system.
More to the point, water can be split to yield free oxygen and equivalents of hydrogen. The latter go into reductants such as NADPH (Google it) in photosynthetic reactions. The splitting takes a lot of solar energy for photosynthetic organisms, but it’s bearable, as organisms evolved two-step photosynthesis. Two photons move one electron from water onto, ultimately, carbon dioxide, through photosystems PS II (second to be discovered, hence the “II”) and PS I. Four repetitions with 8 photons move 4 electrons and complete the process of chemically reducing one CO2 molecule. That splitting stores a great deal of chemical energy. Letting the reactions reverse, if indirectly in oxidative metabolism of such molecules as sugars, releases that energy copiously. It’s why we aerobic organisms are so active. When we exercise anaerobically, as in sprints, we lack endurance. Before cyanobacteria evolved oxygen-releasing photosynthesis, the splitting of less energetic – and less abundant – molecules such as hydrogen sulfide made for an energetically low-key biosphere.
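A rough energy audit of the 8-photon scheme can be done in a few lines; the 680 nm wavelength and the 2870 kJ/mol for glucose combustion are standard textbook figures, used here as assumptions:

```python
# Photon energy accounting for the 8-photons-per-CO2 scheme described above.
h = 6.626e-34        # Planck constant, J s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro's number
wavelength = 680e-9  # m, red light absorbed by photosystem II (assumption)

e_photon = h * c / wavelength * N_A / 1000.0  # kJ per mole of photons
e_input = 8 * e_photon                        # kJ of light per mole CO2 fixed
e_stored = 2870.0 / 6                         # kJ stored per CO2 in glucose
print(round(e_input), round(e_stored))        # roughly 1400 in vs 480 stored
```

The gap between the two numbers is the thermodynamic price of running the two-photosystem machinery, roughly a one-third conversion efficiency under these assumed figures.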
- Water is polar, with a modest excess of electrons at the oxygen atom making a modest negative charge there, and the converse at the hydrogens making modest positive charges. Polarity underlies its huge dielectric constant and also part of its hydrogen bonding. There is also a major consequence for its behavior with electromagnetic radiation. Electromagnetic radiation has rapidly oscillating electrical fields that can grab and shake the separated charges in water, the electrical dipole. As a result, water can absorb radiation at the natural vibration or rotation frequencies of its structure. These lie in the range of wavelengths in the thermal infrared, near 2.6 and 6.5 micrometers. The absorption makes water a strong absorber of the thermal radiation making its way out from the planet, contributing to the greenhouse effect. By the physical principle of microscopic reversibility, water can also emit thermal infrared radiation, which makes up another part of the greenhouse effect as well as setting up a good part of the energy balance of surface organisms.
Where did we get our water? A habitable planet is a rocky planet, not a gaseous planet such as Jupiter, where a solid surface exists only deep inside; liquid material is not reached until at least 21,000 km in. Again, check out Mark Denny’s book, Air and Water, for compelling arguments why even tiny, lightweight microbes can’t live in a gas. Earth formed as a rocky planet, as did Mercury, Venus, and Mars, with the more volatile elements and compounds such as water and methane blown away by the early solar wind. Replenishment of water and other volatiles occurred within about 100 million years after the Earth solidified. Originally thought to have been delivered by icy comets, water is now deemed to have come from asteroids. This conclusion is based on the direct measurements on the comet 67P/Churyumov-Gerasimenko by the Rosetta spacecraft of the European Space Agency. Telltale are the abundances of the different isotopes of hydrogen (1H = ordinary hydrogen and 2H = deuterium, now at 156 parts per million by atoms in Earth’s hydrogen) and of oxygen (16O = ordinary oxygen, 17O at 0.04%, and 18O at 0.20%). Earth has a Goldilocks amount of water, enough for all manner of life but not so much as to create deep oceans above which dry land could never rise tectonically. Mars and Venus both lost their water, for different reasons. Mars is a lightweight planet whose gravity isn’t strong enough to retain speedy hydrogen atoms formed from the photolysis of water by sunlight; its weak magnetic field didn’t help much in retaining the charged protons. Venus got hit by the increasing radiation from the Sun, volatilizing and photolyzing its water in a runaway greenhouse effect.
Hold onto water but not too much gas! A planet can fairly readily be too big to be habitable. If it has a mass even as little as 50% larger than that of Earth and it sits near a good Sun-like class-G star, it may retain so much of its primordial hydrogen cloud that it becomes a gaseous planet like all the outer planets in our Solar System. While it might have a solid surface, that surface could be buried under a hydrogen atmosphere several tens of times thicker than Earth’s fairly dense atmosphere (e.g., 10^24 grams of hydrogen in total). This conclusion is based on physical modeling of element accretion, stellar evolution, and stellar radiation interacting with the planet’s atmosphere. It is not easily tested, even with thousands of exoplanets detected to date. To resolve the atmosphere’s composition on an exoplanet requires elaborate spectral measurements of the light passing by the rim of the planet as it occludes the central star. Orbiting a smaller, cooler star, also of long life, exacerbates the problem, with the star failing to produce enough radiation, particularly in the extreme ultraviolet range, to blow away enough hydrogen. The problem goes away if the small star, like Proxima Centauri, has dramatic flares, but the possibility for life also disappears with the radiation and the loss of all the atmosphere.
Is life only carbon-based? Abundant speculation has been done on alternatives to carbon-based chemistry for life on other planets. Life needs large molecules, the genetic materials and proteins or any analogs that can be constructed. Metabolism needs elaborate biochemical controls, possible only with large molecules with exquisitely attuned chemical properties such as highly selective enzyme activity. Carbon does very well for all of us living organisms on Earth, making long chains in hydrocarbons and lipid tails, and interspersing with nitrogen and oxygen in proteins, hemes, and nucleic acids. People have proposed life based instead on silicon. Silicon is an analog of carbon, sitting below it in the periodic table, and it can make chain molecules. Silicon and oxygen alternate in the silicone rubbers that seal our bathtubs. Silicon-silicon chains are, however, absurdly less stable than carbon-carbon chains. Ask a chemist who has synthesized silanes. They are pyrophoric, spontaneously bursting into flame with the oxygen in air. They’d be expected to do the same in any planetary environment that has an energy source in having a strong oxidant (O2 for us) and a strong reductant (lots of reduced carbon compounds for us, whether fats or carbohydrates). Silicon chains are also not very stable at elevated temperatures. I think we can safely dismiss silicon.
We are, of course, lucky in having carbon-containing greenhouse gases, originally mostly methane and congeners and now mostly carbon dioxide from the oxidation of methane. There is deep carbon in the Earth’s crust, mantle, and core, but it would not have done us much good. We needed light methane and/or CO2 for a greenhouse effect. Methane came along with water in the late delivery of volatiles, luckily.
Again, our luck in the Universe: Goodly amounts of water and light gases were delivered to Earth. Water’s remarkable properties are, of course, shared with planets and life anywhere else. This is “absolute” luck of the structure of matter, not our luck relative to what’s on other planets. Part 2: carbon is another remarkable item in chemistry, with its chain-forming ability. Again, this is “absolute” luck. Yes, forget silicon.
The condensable or non-gaseous elements accreted on Earth principally in chemical compounds such as silicates or simple organic compounds. They sorted out through the depths of the Earth in several ways. The gross sorting during condensation into planets left Earth with rocky material, little hydrogen and other light elements, and a lot of heavy metals (a term in which I include the transition metals such as copper and iron). The sorting went on for some time as the Earth solidified. Consequently, Earth has a large core with much iron, though a lot is left in the mantle, fortunately for life: energy metabolism relies on chemical reactions called electron transport. These reactions are catalyzed by transition metals such as iron and manganese swapping among their multiple valence states. Heavy elements even now are sinking toward the core, releasing gravitational energy as heat. It’s not much, nor is the total of all geothermal heat (including radioactive decay of U, Th, and K) at about 0.06 watts per square meter, compared with mean solar radiation at the surface of 239 watts per square meter.
Nonetheless, geothermal heat is also critical for cycling elements. It drives tectonic motions that continuously renew the surface of the Earth. Large crustal areas, the plates, occasionally dive one under another, but new surface emerges at mid-ocean ridges, bringing chemical elements back to the surface. On smaller scales, volcanoes do the same. Without tectonic circulation from the deep mantle, the land surface would eventually lose several life-critical elements. Earth has a large ocean and a resulting hydrologic cycle that erodes the surface. Erosion moves phosphorus to the ocean sediments, out of reach of life on land, because phosphorus can’t come back to land in gaseous form the way that carbon or sulfur can. There’s an interesting side story here. Our modern agricultural practices use about 20 million tonnes of P each year, mined from deposits such as in Morocco, Florida, and Nauru. (Much of Nauru has disappeared into mining pits, with residents moving to other places!) We will have to move to poorer and poorer deposits eventually, at a high energy cost that will be out of economic reach for much of our population. We didn’t face this problem in the past because we got our P in crops grown locally and then defecated that P in the same place. Sanitation wins for disease control but not for P.
Going back to electron transport reactions: these are oxidation-reduction reactions. When electrons get transferred from one atom or molecule to another, the electron-loser is oxidized and the gainer is reduced. Earth has both oxidizable and reducible substances that allow such exchanges. A fully oxidized planet would lack such chemical – and biochemical – opportunities. We have an abundance of oxidized minerals in the crust – prime examples are silica (SiO2), which forms most of the crust’s mass; a major fraction of iron as oxides or other oxidized compounds; and oxidized aluminum compounds. Most elements are metals, and most are in oxidized form on Earth; only the noble metals (silver, gold, platinum, rhodium, iridium, palladium, ruthenium, and osmium) are found as native metals. I made a simple calculation of the degree of oxidation of the Earth’s crust. About 95% of the chemical elements are in the oxidized state. We rely on having the other 5% not in that state.
A bit more – why oxygen? Reduction-oxidation (redox) cycles work with sulfur, as they do even for some modern bacteria in niches without oxygen. Early life, all bacterial, used sulfur redox cycles extensively while oxygen was all tied up in minerals, not present in the atmosphere. Oxygen’s redox cycles have the advantage of yielding high energy exchanges. This is a fact of chemistry. Anywhere in the universe, oxygen has the same bond strengths in chemical compounds (such as CO2) and the same thermodynamic driving forces in chemical, photochemical, and biochemical changes. (The laws of physics seem to be the same everywhere, as witnessed by the uniform observation of energy levels in atoms and simple molecules even in distant stars.) Other redox cycles, as with halogens such as fluorine or chlorine, yield carbon compounds that are more difficult to break up again in cycles (our own earthly bacteria find this hard to do, and they never use the halogen reactions for a significant energy source). The halogen oxidants are also rarer than oxygen, thanks to a quirk of nucleosynthesis: their odd numbers of protons (9, 17, and so on) make them less abundant. That’s not a deal-breaker, of course. Our life depends on much nitrogen being available, and it’s moderately rare among the elements produced in the supernovae that left us the building blocks of everything on the planets.
There’s no problem retaining oxygen, unlike the case of hydrogen. Of course, we large, multicellular animals, the metazoans, and our active green plants like oxygen to be present in the air uncombined with other elements. The planet didn’t start out with free (uncombined) oxygen, as is well known; what is the history? Earlier we covered the current oxygen balance between oxygen-liberating photosynthesis and oxygen-consuming respiration. We may ask if evolution on other planets is likely to produce oxygenic photosynthesis. We have to accept that there’s a great element of chance in evolution, by almost all estimates. While we can’t rerun evolution from scratch on the whole of Earth’s biosphere, evolution has many attributes of a chaotic process, sensitive to tiny events in the longer term. (I may qualify that statement: chaos in Earth’s weather can still be consistent with more predictability in the long term, as climate, where long-term averages over many chaotic paths preserve some regularity; an examination of chaos theory is beyond the scope here but is illuminating.) Certainly, life on Earth has shown incredible persistence in keeping free oxygen in the atmosphere. The oxygen level has been inferred from a range of geological and biological evidence too extensive to present here, other than in some reconstructions:
I would take the upper curve with a grain of salt. Above a level of about 0.28 = 28% oxygen, spontaneous combustion of biomass has been predicted, based on experimental studies. Digging deeper, we may ask what chemical conditions are needed for the liberation of free oxygen. It may seem counterintuitive, but I offer that sufficient reduced elements need to be present. Living organisms are in a partly reduced state. They use the offset in oxidation levels between oxidized and reduced compounds for their metabolic energy. The question becomes, would a habitable planet always have reduced compounds or elements, particularly near its surface? Given the problem noted earlier of getting rid of excess hydrogen for planets even modestly larger than Earth, this requirement might seem to be met easily. Then again, Earth and the other inner planets had their surface volatiles stripped away early on, including the reduced methane. Volatiles were delivered by asteroids, “shortly” in geological time, within about 100 million years. In other stellar systems, we might expect asteroids with volatiles to be pushed toward nascent exo-Earths… provided that there are enough additional planets such as exo-Jupiters to do the pushing.
The elements for life are not distributed evenly on the Earth’s surface. Vast areas of the land are deficient in one or more of these elements. Vast sections of the ocean are deficient in iron, which is superabundant in the crust, because iron in its oxidized ferric state (Fe3+) is extremely poorly soluble in water in contact with an atmosphere holding so much oxygen. Reaction with oxygen locked most iron into the red bands of sedimentary rock. Ocean life depends in large measure on meager inputs of iron from iron-bearing dust blowing in over long distances from deserts. Deserts thus serve an important function for life elsewhere, if not so much for the life on the deserts themselves. Australia is a textbook case for mineral deficiencies, well-known to farmers and graziers. I have an atlas of Australia in which one page shows areas with deficiencies of various elements. The portion of the area with no deficiencies looks like a sprinkling of some crumbs on a plate. Still, it could be worse without plate tectonics to raise mountains and feed volcanos that resupply elements from the mantle.
Those other essential elements. Life on Earth requires a considerable number of chemical elements beyond carbon, hydrogen, and oxygen. We covered a bit about phosphorus earlier, considering its poor recycling on the surface of the Earth other than on a 150-million-year scale of tectonic turnover. To recapitulate, humans require nitrogen, fluorine, sodium, magnesium, sulfur, chlorine, potassium, calcium, chromium, manganese, iron, cobalt, copper, zinc, selenium, iodine, and possibly a few others. Other organisms also need these and, variously, boron, silicon, vanadium, molybdenum, and tungsten. These have to be available at the surface. The initial surface composition of the Earth, plus tectonic recycling, established the availability of these elements. So, the supernova that preceded the formation of the Solar System was critical, and so was Earth forming in the right place at the right size. All is not favorable, of course. We just noted the huge areas of land and oceans with deficiencies of elements.
Nitrogen is an extremely likely component of life anywhere in the Universe. It has diverse valence states, variously making 2 to 4 bonds. It makes a double bond to oxygen in nitric oxide, NO, a signaling molecule in human physiology. It makes 3 bonds to hydrogen atoms in ammonia, or to 2 hydrogens and a carbon in amines, or to one carbon in a nitrile (cyanide group), or to 2 carbons in pyridine. It makes 4 bonds, carrying a formal positive charge, in a nitro group. Nitrogen can be in both oxidized states, as in nitric oxide, and reduced states, as in ammonia, NH3. It “plays nicely” with carbon, hydrogen, and oxygen. It readily makes partial donation of electrons in dative bonds that can grab metallic elements such as magnesium in chlorophyll and iron in hemoglobin, plant leghemoglobin, and siderophores that deliver otherwise insoluble iron to bacteria and to plant roots. Almost ironically, then, its most stable state, the very abundant N2 gas in the atmosphere, has a triple bond, N≡N, that requires the second most energy of all common bonds to break, behind only carbon monoxide.
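The strength of the N≡N triple bond can be put in numbers. Here is a small sketch using rounded bond dissociation energies from standard chemistry tables; the specific values are my additions, for illustration only:

```python
# Approximate bond dissociation energies in kJ/mol, rounded literature values
bond_energies = {
    "C-C single": 347,
    "C=C double": 614,
    "O=O (in O2)": 498,
    "N≡N (in N2)": 945,
    "C≡O (in CO)": 1072,
}

# Rank bonds by the energy needed to break them
ranking = sorted(bond_energies, key=bond_energies.get, reverse=True)
for name in ranking:
    print(f"{name}: {bond_energies[name]} kJ/mol")
```

The ranking puts N2 second only to carbon monoxide, which is why atmospheric nitrogen is so inert despite nitrogen's chemical versatility once activated.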
To make nitrogen reactive for incorporation into biological molecules requires either extreme abiological conditions (lightning to make nitrogen oxides, a similar reaction with ozone, or industrial heat and pressure with hydrogen) or the biological action of the nitrogenase enzyme. Biological nitrogen fixation to make ammonia consumes precious reductant and much input of energy via ATP. It also needs the presence of enough molybdenum and iron. The iron is a problem in oceans that are not near the mouths of rivers that deliver iron in silt, or near deserts from which iron-containing sand blows onto the surface. The high input of metabolic energy for the action of nitrogenase poses a challenge for life. Organisms that do biological nitrogen fixation (Nif) ultimately share it with their neighbors, who avoid that energetic cost. Organisms don’t do well if they benefit their competitors, so Nif is fairly rare among organisms. Even where metals abound (on land) and energy from photosynthesis is amply available, organisms invest in Nif sparingly.
Reactive nitrogen is found throughout the Earth’s life-accessible soils and waters, even if often in very growth-limiting quantities. Inputs from Nif, lightning, and ozonization are reasonably balanced against losses, resulting in various concentrations of different N compounds on Earth. Reactive N reverts to gaseous N2 or nitrous oxide, N2O, by several biological routes. Microbes in oxygenated environments can oxidize organic N compounds to nitrate. Nitrate can move to anoxic regions such as still water bodies or waterlogged soils; there it can be used as an oxidizer for metabolic energy by yet other microbes. The complexities of the total nitrogen cycle among organisms and the abiotic environment are fascinating, as well as of compelling interest for agriculture and climate change (N2O is a greenhouse gas). My second most popular research publication laid out a sketch of the total nitrogen cycle.
Lessons for a planet: the continuous availability of nitrogen at the surface of a generally habitable planet (meeting the other conditions noted) is likely. Nitrogen will be present in a presolar nebula made from the remnants of a prior supernova or neutron star merger. Nitrogen activation into reactive forms is also quite assured if a planet has water and, thus, weather that produces lightning to create nitrogen compounds. The compounds would be of N with carbon (cyanide and relatives) or with carbon plus hydrogen and oxygen from water. If oxygenic photosynthesis starts making free O2 in the air, then ozonization in the stratosphere will help. Despite all the loss routes for reactive N, the element still recycles into the atmosphere as gaseous N2; there’s no need for tectonics to recycle N, though there is for carbon, phosphorus, and other elements.
Phosphorus is also an extremely likely component of life. Earth’s organisms use it solely in its most oxidized, pentavalent form, phosphate. Chemically reduced phosphorus, such as in the extremely toxic phosphine, is never found in living organisms, though decomposition of dead bodies can produce traces of phosphine. A parent form of phosphate is the singly ionized H2PO4–. Phosphate is moderately common in rocks throughout the crust and upper mantle. As such it was available to early life, though with difficulty, as remains true today, because phosphates are rather insoluble when combined with ubiquitous divalent metal ions (e.g., Ca2+) and trivalent metal ions (Fe3+, Al3+). Organisms on land, plant roots among them, secrete acids to solubilize phosphate. In the oceans, any organism that secreted acids would have the acids diffuse away, losing effectiveness and also benefiting competitors. For this and other reasons, the oceans are biological deserts, much lower in productivity, absolutely and per unit area, than is the land. The total biomass in the oceans is about 1/1000th that of the land. In addition to low abundances of nutrients, the challenge for organisms is that there’s always some other organism at hand to eat them. Even bacteria have a hard time, with viruses called bacteriophages (“bacteria eaters”) being ten times as numerous as the bacteria.
All modern organisms use phosphate in their genetic material, DNA, and its transcribed form, RNA. They also use it in their cell membranes, where phosphate groups on the ends of lipids (fats) give the double membranes ends that favor facing the water. Key to energy metabolism is the pair adenosine triphosphate, ATP, and adenosine diphosphate, ADP. There are hypotheses that early life might have used another charged entity, glyoxylate, an organic compound without phosphorus. Substitution of phosphorus in organisms by its chemical analog, arsenic, has been proposed but disproven. A study of the possible stability and utility of arsenates instead of phosphates used first principles of quantum mechanics and chemistry. The authors found that the longer, weaker As bonds to other atoms are readily broken, causing spontaneous breakdown of the arsenate analog of ATP. Earthly organisms can incorporate arsenic adventitiously and, really, unavoidably, given the abundance of arsenic in all soils, for one. However, in water-based life (= any life), arsenic is not functional.
The terrestrial cycle of phosphate is one of use and reuse, with eventual losses to precipitation in insoluble forms unavailable to the local organisms; soils erode into rivers, and dead organisms fall to the depths in oceans and lakes. Sedimentation of phosphate increased with the increased abundance of very active oxygen-metabolizing and multicellular organisms in the Cryogenian Period starting 735 Mya. Phosphorus has no gaseous forms to recycle into the atmosphere. Phosphine, PH3, is a gas, but it burns up in air readily to yield phosphorus oxide, en route to becoming phosphate. Phosphate is only returned to the surface by tectonics. Tectonic plates get subducted and melted, and their material is recycled in volcanic eruptions and uplift of mountains. It takes on the order of 100 million years for the cycle to run. Of course, the accumulation in sediment leads to rich concentrations that technological humankind exploits. Our modern agricultural practices use about 20 million tonnes of P each year, mined from deposits such as those in Morocco, Florida, and Nauru. Eventually we will have to move to poorer and poorer deposits, at high energy cost that will put P out of economic reach for much of our population. We didn’t face this problem in the past because we got our P in crops grown locally and then defecated that P in the same place. Sanitation wins for disease control but not for P.
In brief, phosphorus cycling in the absence of technology has always depended on plate tectonics. Tectonics, in turn, requires the four sources of natural geothermal energy noted earlier – radioactive decay of U, Th, and 40K in the Earth’s interior, the initial energy of accretion, the continued solidification of the core, and the continued migration of metals toward the core. In a way, then, phosphorus helps a bit with its own recycling. A planet that is habitable in the long term – long enough for complex life to evolve – almost certainly must have tectonic activity. The planet must be big enough: Earth is; Mars was not. Together with the problem of removing excess hydrogen, this narrows the mass range of habitable planets. Another note: all recycling, natural or industrial, requires energy inputs. For mobile elements such as nitrogen, the recycling input comes ultimately from solar energy captured in photosynthesis, at a rate greatly exceeding the geothermal energy that drives tectonics.
Transition metal elements are surely needed for all the biochemical electron-transport reactions. We large animals and the vascular plants all around us need elements 25 (manganese, Mn), 26 (iron, Fe), 27 (cobalt, Co), 29 (copper, Cu), and 30 (zinc, Zn). We seem to skip 23 (vanadium, V, though some plants may need it), 24 (chromium, Cr, or at least the evidence for its need is weak), and 28 (nickel, Ni, though some plants and many microbes need it). Perhaps half of all animal species, humans among them, require cobalt for vitamin B12, while free cobalt ions are toxic. The reversible transitions between 2 valence states, as between ferrous (Fe2+) and ferric (Fe3+) ions, are used in countless metabolic reactions among species from microbes to humans. Reviews of the metabolic roles of the transition metals are copiously available. The need for more than one transition metal in all organisms stems from the need for steps of oxidation potential in electron-transport reactions, among other things.
Iron is a very abundant element on Earth, comprising about 2% of atoms in the crust. It is abundant in any stellar system formed from the aftermath of a supernova or neutron star merger – that is, in any event that can create rocky planets. Its nucleus is among the most stable of all, with nearly the highest binding energy per nucleon (nickel-62 edges it out slightly). Cobalt and nickel lie beyond the stability maximum; elements well beyond it, such as iodine, the heaviest element used in living organisms, are formed in more energetic events via the r-process. Like those of the other transition metals, its oxidized forms, the oxides and hydroxides, are quite insoluble in water. This difficulty was noted several times above. Organisms solubilized the now-dominant ferric iron with acids or with complex organic molecules, the siderophores (“iron bearers”). Iron recycling on land is rarely needed to maintain life forms, as iron is so abundant. In the oceans, its continuous input as river sediments and blown-in desert sands is critical for biological productivity yet highly variable in both space and time. In past wet geologic eras with few deserts, there is evidence of lower oceanic productivity. The late John Martin proposed an interesting experiment: artificially fertilizing part of the Pacific Ocean with iron. The hypothesis was that increased biotic productivity would lead to some of the biomass sinking and that, scaled up, this could provide a major way to take carbon out of the atmosphere to mitigate climate change. The field experiment showed that productivity jumped, but the “detritus pump” to depth was not so significant. Ethical questions about such a massive change in the ocean biomes also terminated the experiment.
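The peak in nuclear stability near iron can be illustrated with the semi-empirical (Weizsäcker) mass formula. This is a sketch with standard textbook coefficients, not a precision nuclear model; distinguishing iron-56 from nickel-62 is beyond its resolution, but the broad peak in the iron/nickel region comes out clearly:

```python
# Semi-empirical mass formula: binding energy (MeV) of a nucleus with
# mass number A and proton number Z, using common textbook coefficients.
def binding_energy(A, Z):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    B = (aV * A                                  # volume term
         - aS * A ** (2 / 3)                     # surface term
         - aC * Z * (Z - 1) / A ** (1 / 3)       # Coulomb repulsion
         - aA * (A - 2 * Z) ** 2 / A)            # neutron-proton asymmetry
    if A % 2 == 0:                               # pairing term
        B += aP / A ** 0.5 if Z % 2 == 0 else -aP / A ** 0.5
    return B

def best_BA(A):
    # Binding energy per nucleon of the most bound nucleus with mass A
    return max(binding_energy(A, Z) / A for Z in range(1, A))

peak_A = max(range(10, 240), key=best_BA)
print(peak_A, round(best_BA(peak_A), 2))  # peak lies in the Fe/Ni region
```

Both lighter and heavier nuclei are less tightly bound, which is why fusion releases energy up to this region and the heavier elements need the violent r-process events mentioned above.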
In sum, recycling of iron to the oceans of planets with free oxygen in the atmosphere requires a strong hydrologic cycle, with many rivers and perhaps dry areas with deserts. Any watery planet receiving stellar radiation and not frozen solid has a hydrologic cycle. In the long term, replacement of eroded landmass for land-based life again requires plate tectonics. For oceanic life on an exoplanet without plate tectonics, the eventual deposition of iron-laden sediments from land to the depths would end productivity on an oxygen-rich planet, but not on a planet with a ferrous ocean such as that of the early Earth. On either type of exoplanet without land, it is impossible for life to reach the complexity needed for creating technology.
We are free to imagine a watery planet that “solved” the problem of transition metal availability and of a strong redox couple or couples, with a very different overall chemistry. A problem for the origin of life might then be the need for shallow areas with weak mixing. There have been intellectual waves of considering how life on Earth evolved, one being that nutrients adsorbed fairly stably on clays helped keep precursor molecules in near contact. Such prolonged contact among precursors at relatively high concentrations may well be requisite for the evolution of the first cells. An open ocean, in contrast, keeps nutrients dilute.
Our luck in the Universe: The r-process of nucleosynthesis is a wonder that gave us a whole periodic table of elements and, thus, a vast variety of chemical reactions and structures. That’s a bit of “absolute luck,” but a lot of relative luck – many exoplanets may not have formed around stars that inherited the richness of elements from previous stars exploding nearby. Furthermore, many elements recycle nicely for surface life because we have a convective mantle powered by relatively heavy elements: the radioactive elements provide heat as they decay, and the whole schmear of elements as heavy as iron or heavier releases a lot of gravitational energy as a heat source, as lighter elements would not.
Here I must speculate, as must everyone else, since we’re not going to visit any exoplanets in person. Right, no visiting an exoplanet. I have already noted that a journey even to Mars tests our biological and psychological limits…and that getting to the non-prospect of alpha Centauri would take about 75,000 years with our fastest spacecraft and lots of gravitational slingshotting like the NASA planetary missions used… and that the average large animal species on Earth survives only about 2 million years. We’ve got to get over this fantasy – there are far more fantastic things right here on Earth.
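The 75,000-year figure is easy to check. A back-of-envelope sketch, assuming the ~4.37-light-year distance to alpha Centauri and a Voyager-1-like coasting speed of ~17 km/s (both round numbers I have chosen for illustration):

```python
# Crude travel-time estimate to alpha Centauri at chemical-rocket speeds
LIGHT_YEAR_KM = 9.4607e12    # kilometers in one light-year
SECONDS_PER_YEAR = 3.156e7

distance_km = 4.37 * LIGHT_YEAR_KM   # distance to alpha Centauri
speed_km_s = 17.0                    # roughly Voyager 1's solar escape speed

years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"{years:,.0f} years")   # on the order of the ~75,000 years cited
```

Against the roughly 2-million-year lifespan of a typical large animal species, the journey would consume a few percent of a species' entire tenure, with no resupply.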
We certainly know what has worked phenomenally well on Earth in the basic chemistry of life. It’s carbon-based and water-based. Is life elsewhere only carbon-based? Abundant speculation has been done on alternatives to carbon-based chemistry for life on other planets. I refer to earlier sections on the chemistry of the planet. On the basis of carbon’s unique chemistry that suits it to make large, complex molecules, I cast all my votes for carbon. I do the same for life being water-based.
Above the molecular level, all living organisms are composed of discrete cells. There are no new cells originating from scratch; current, highly evolved cells would instantly outcompete or consume them. There is no restarting the evolution of cells. We are all descended from the first cells, from one ancestor. To be sure, the ancestor was a population of very similar cells that exchanged a bit of genetic information, not a single cell. There was a set of bacterial Adams and Eves (well, “Adeves,” having no sex back then), just as humans arose more proximately from a population of who knows how many Adams and Eves. While cells may have originated on Earth in several places, once the populations mixed there were absolute winners. One sign is that all organisms now use nucleic acids (DNA) as genetic material, all share the same genetic code with minor variations, and all use proteins as catalysts.
Bundling into cells as compartments prevents dilution of biochemical reactants into the environment. Concentration differences between the inside and the outside are also maintained by the active transport of many substances:
[Figure: membrane transport proteins in a lipid bilayer; drawing by the author]
Living organisms have membranes made of two back-to-back layers of lipids (fats) with special chemical groups on their two ends – a negatively charged phosphate group that faces into the water on both sides, and “oily” chains of simple hydrocarbons making up the interior of the membrane. Embedded in the membranes are many things, including proteins that can be made to change shape and thus push a small molecule from inside to outside or vice versa. Shown above are examples. Leftmost is a potassium-sodium exchanger or pump, driven by the energy released in splitting ATP to ADP and inorganic phosphate. Next to it is a proton pump. Next to that is a transporter that uses the infall of two protons to bring a negatively charged nitrate ion into a negatively charged cellular interior. Finally there’s a protein allowing passive inflow of potassium when the cell interior is lower in potassium than is the exterior. The proteins are remarkably complex. Who knows how many times they have evolved.
The cells we all have and love, the eukaryotic cells, have not only the outer cell membrane but further compartmentalization inside. They have a nucleus inside in its own envelope to hold genetic information and to control its expression as genes and its duplication for cell division. Most have mitochondria that carry out the primary generation of metabolic energy by the oxidation of glucose or other precursors. Operating with a remarkable array of enzymes on membranes, that oxidation is coupled to the generation of adenosine triphosphate from adenosine diphosphate and inorganic phosphate. Membranes are essential in the process, allowing oxidation events to push hydrogen ions across the membranes. The proteins change shape to “squirt” protons across, the process shown second from the left in the figure. The backflow of protons then drives a multi-protein motor that pushes ADP and Pi together. There are reverse motors that use ATP splitting to push protons out. In plant roots the protons that they push out constitute an acidity that solubilizes iron and phosphate compounds for the plant to take up.
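The proton “squirting” described above stores energy as a proton-motive force that can be estimated numerically. Here is a sketch with typical textbook values for mitochondria; the −150 mV membrane potential and 0.5-unit pH difference are representative assumptions, not figures taken from this book:

```python
# Order-of-magnitude estimate of the mitochondrial proton-motive force
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

delta_psi = -0.150   # membrane potential, V (matrix side negative), assumed
delta_pH = 0.5       # pH_inside - pH_outside (matrix more alkaline), assumed

# pmf = delta_psi - (2.303 * R * T / F) * delta_pH
pmf = delta_psi - (2.303 * R * T / F) * delta_pH
energy_per_mol_H = F * abs(pmf)   # J released per mole of protons re-entering

print(f"pmf ~ {pmf * 1000:.0f} mV")
print(f"~{energy_per_mol_H / 1000:.1f} kJ per mole of protons")
```

With ATP synthesis costing very roughly 50 kJ/mol under cellular conditions, the result of ~17 kJ per mole of protons is consistent with the several protons per ATP that the rotary motor actually uses.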
These are all effects that we can understand with thermodynamics, the science of energy transformations. Understanding the evolution is a much harder problem. In any event, active transport of molecules by organic molecule complexes, the pumps, is certain to be a feature of any living organism, anywhere – if only to keep higher concentrations of some key chemicals inside the cell and keep low concentrations of others. Plants keep sodium out of their main compartment, the cytoplasm, and keep valuable sugars and such in (unless they have to move, say, sugars out from leaf cells to support all other cells). Bilayer membranes are also to be expected, of course. Life elsewhere might have some fairly different metabolic schemes, but building blocks of cells, with membranes and with active transport should always be present.
Thus, we should expect that all life elsewhere is cellular and that the genetics of all its forms shows descent from a common ancestor – unless we find it in the first million years or so before contact was made among original cells with only one winner yet to be chosen. The nature of the catalytic molecules is up for grabs among proteins and nucleic acids. Both offer dazzling variety of forms and thus of activities. While current cells use proteins as catalysts, there is convincing evidence that the original catalysts were ribonucleic acids (RNAs), or ribozymes. In this RNA world, RNAs were the genetic material, the material to transfer genetic information to make enzymes, and also the enzymes. It was a simpler world, made with simpler materials. Now we have DNA as the genetic material that is capable of greater fidelity in being copied during cell division and in being expressed to make proteins. RNAs are formed from chemicals with carbon, nitrogen, hydrogen, and oxygen atoms that readily and continuously formed in the atmospheric soup of the early atmosphere. Such soups are highly likely on the exoplanets that are habitable.
Proteins made of 21 or so different amino acids in sequence allow an effectively infinite variety of linear and 3-D structures. In addition to having many structural roles, as in our own connective tissues, different proteins achieve a stunning range of actions as catalysts. They fold around intricately to create active sites, sometimes with metal ions. They greatly increase the speed of chemical reactions, while also having their rates controlled in ways ranging from simple to elaborate. They act with striking specificity, each enzyme working on only one or a few reactants among all those present in the compartments where they are found (of course, keeping that diversity low enough is a key function of compartments). The net effect of having diverse catalysts under control is a finely tuned biochemical metabolism that can acclimate a given cell to a considerable range of environments – various chemical concentrations, temperatures, even pressures. Only the most specialized cells such as our brain cells need a highly stable environment while our skin cells take it all in stride; tardigrades (look them up; “water bears”) tolerate the most remarkable ranges of conditions. Many simple organisms can become spores that are highly resistant to conditions outside their growth conditions. Life that’s long-evolved is likely to use proteins or protein-like long-chain molecules. In earlier stages we might find an RNA-like world. In any case, it will be a carbon-hydrogen world with extremely high probability – and also very likely built with nitrogen and oxygen linkers; phosphorus and sulfur as analogs of N and O are bulkier and make less stable compounds.
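“Effectively infinite variety” can be made concrete: even a modest 100-residue protein built from 20 amino acids (21 counting selenocysteine) has vastly more possible sequences than there are atoms in the observable Universe, roughly 1e80. A quick sketch:

```python
# Counting possible amino acid sequences for a short protein
import math

n_amino_acids = 20     # the standard set; 21 with selenocysteine
chain_length = 100     # a modest protein; many run to thousands of residues

n_sequences = n_amino_acids ** chain_length
print(f"about 10^{math.log10(n_sequences):.0f} possible sequences")
```

Evolution has, of course, sampled only a minuscule corner of this space, which is part of why proteins can keep finding new catalytic tricks.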
A real quirk of evolution is the handedness of the amino acids that make up proteins. We may look at
L-alanine. Almost any complex organic molecule is likely to have a chiral center… even one as small as the common amino acid alanine, at 13 atoms. The diagram below helps to explain this. Each of its 3 carbon atoms makes 4 bonds, as is highly typical of carbon. The carbon atom near the bottom has 4 different groups attached to it: the methyl group at the bottom, CH3; the amino group, NH2; the hydrogen atom, shown here as below the plane of the drawing; and the carboxylic acid group, COOH, at the top. If you have some molecular models to make a model of alanine, or you’re good at 3-D visualization, you’ll see that there’s another version with the very same chemical composition and the same connections among the parts… but its bonds, free to rotate, can never make the molecule look like the first one. The two stereoisomers or enantiomers can’t be superposed. It’s the same with two shoes or two fitted gloves. There’s much to say about this phenomenon, but the key thing is that the shapes don’t fit other chiral shapes the same way. A pair of left and right gloves mesh nicely, but not two left gloves. That means, among other things, that an enzyme won’t fit one of the two shapes in trying to carry out a catalytic reaction. All Earthly organisms use specific enantiomers of each of the 21 common amino acids. By convention we call these L-amino acids, a label tied to the configuration of left-rotating glyceraldehyde rather than to each amino acid’s own rotation of polarized light. D-amino acids are either not used or toxic. NASA realized this in building the experiment designed by Gilbert Levin for the first Mars surface probes. The task for the probes was to present sugars that Martian bacteria might ferment, to show that they’re alive. The probe presented different enantiomers of sugar, just in case the Martians used the opposite forms from those used by terrestrial life.
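The impossibility of rotating a chiral molecule onto its mirror image can be shown with simple vector geometry: the sign of the scalar triple product of three substituent directions labels the handedness, rotations (determinant +1) preserve it, and mirror reflections (determinant −1) flip it. The coordinates below are arbitrary illustrative positions, not real alanine geometry:

```python
# Handedness of a tetrahedral center via the scalar triple product
def triple(u, v, w):
    # u . (v x w): its sign labels the handedness of the three vectors
    cx = v[1] * w[2] - v[2] * w[1]
    cy = v[2] * w[0] - v[0] * w[2]
    cz = v[0] * w[1] - v[1] * w[0]
    return u[0] * cx + u[1] * cy + u[2] * cz

# Vectors from the central carbon to three of its four distinct groups
u, v, w = (1.0, 1.0, 1.0), (1.0, -1.0, -1.0), (-1.0, 1.0, -1.0)

original = triple(u, v, w)

# Mirror reflection through the yz-plane: x -> -x (determinant -1)
mirror = triple((-u[0], u[1], u[2]), (-v[0], v[1], v[2]), (-w[0], w[1], w[2]))

# Rotation by 90 degrees about z: (x, y, z) -> (-y, x, z) (determinant +1)
def rot(p):
    return (-p[1], p[0], p[2])

rotated = triple(rot(u), rot(v), rot(w))

print(original, rotated, mirror)  # rotation keeps the sign; reflection flips it
```

No sequence of rotations can change the sign, so the mirror-image molecule is genuinely a different shape, just as an enzyme's binding pocket experiences it.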
The universal use of only L-amino acids in Earth’s organisms is (1) a demonstration of the element of chance in evolution – all D-amino acids would work, too, but they just didn’t get chosen – and (2) a resounding confirmation that all of us living beings came from a common ancestor. Life that evolved on another planet is equally likely to use molecules of the same handedness as on Earth, or the opposite. I may repeat that the only test will be within our Solar System. No party of humans on a spacecraft could maintain themselves for the enormously long journey to another stellar system. Mars could be more interesting than we thought. Venus has toasted away any remnants of life, alas.
Energy for organisms: Total energy is conserved; energy flows into a planet, or a single organism, and energy leaves. Energy storage is temporary; even the fattest of us humans will not hold onto our energy after death. More significant for life is the second law of thermodynamics: entropy (disorder, assessed in its many forms) naturally increases. This may be stated as a decline in the energy available for work (including just pumping out wastes, e.g.), the free energy. Let two bodies, solid or liquid, be in contact, and mixing occurs that would require new free energy to unmix, as we do in reverse osmosis to unmix salt from water. Let two or more chemicals mix that can react among themselves, and we get a trend toward final conditions that take, again, more free energy to undo (and the undoing may be impractically long or hard to achieve in practice; try un-burning gasoline). The apparent quandary of the existence of life is that it is less disordered, more ordered, than its surroundings. It has less entropy. It’s self-organizing, against the tendency to disorder. It achieves this by using strong sources of free energy. That’s sunlight now, via photosynthesis, for almost all life, excluding some rare instances of lithotrophy, bacteria living on breaking down rocky minerals in reduction-oxidation reactions in mine spoils and in some deep rock. As I noted earlier, only very high-quality energy sources have lots of free energy. Thermal energy, heat, is the lowest possible quality. Anything, such as biochemical constituents, that can be created by a thermal, low-free-energy source can be rapidly undone by those same sources; that’s the well-verified principle of microscopic reversibility in physical and chemical processes. Plain old heat is, by definition, the most disordered, highest-entropy source. It’s only useful in being transferred from a hot source to a cold sink in a heat engine.
Sadi Carnot figured this out in detail in the 1820s to get the limits on extracting work in a steam engine. Life can’t use heat as its energy source; there would need to be a huge difference in temperature between two bodies with which it is in contact. Technological materials, the metals and some thermoelectric materials (found in your picnic cooler running on the 12V source in your car), can withstand the difference. Cells cannot. Cells can’t use bulk metals or semimetals, either, because they’re all so corrodible as well as infeasible to assemble from atoms. We’d never find an organism made of transistors, not even organic semiconductors with their exquisite composition and layering requirements. Artificial intelligence run amok on Earth is itself a fiction; any life form has to be able to construct new copies of itself from simple materials.
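Carnot's limit, efficiency = 1 − Tc/Th, makes the point quantitative. A sketch with illustrative temperatures of my own choosing:

```python
# Carnot efficiency for temperature differences a cell could plausibly hold
def carnot_efficiency(T_hot, T_cold):
    # maximum fraction of heat convertible to work between two reservoirs
    return 1.0 - T_cold / T_hot

# A wildly optimistic 10 K gradient across a cell near body temperature:
eta_cell = carnot_efficiency(310.0, 300.0)
print(f"{eta_cell:.1%}")   # a few percent at best

# To reach even 30% efficiency while rejecting heat at 300 K, the hot
# side would need to be at:
T_hot_needed = 300.0 / (1.0 - 0.30)
print(f"{T_hot_needed:.0f} K")  # well above the boiling point of water
```

No cell can maintain a hundred-degree internal gradient, which is why life runs on chemical and light energy rather than on heat.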
Photosynthesis with starlight is, or will be, the ultimate source of energy for virtually all life. That’s the case in the long term. Life starts out by exploiting some energy-rich compounds that were formed abiologically by ultraviolet light, lightning, etc. Good estimates for what was present as life was about to start include amino acids, nitrogen-containing ring compounds, and simple sugars – all comprising a leg up on what life needs. As life proliferates it depletes those stores. It then has to start using a source of high-quality energy, a.k.a. sunlight (starlight), by doing photosynthesis. That’s one or two chains of chemical reactions started with a step that captures the energy in light. On Earth these chains are driven by sunlight captured by chlorophyll. A quick comparison of energy in sunlight vs. thermal energy is useful. Earlier we covered the range of energy in photons of visible light, 1.8 to 3.0 electron-volts, and compared it to thermal energies at life’s operating temperatures, about 0.04 eV. So, light is good. The challenge is in capturing it efficiently and safely. Re safety, that means not making dangerous reactive species such as singlet oxygen. (You can make it by mixing sodium hypochlorite, which is ordinary bleach, with hydrogen peroxide. It gives off an eerie, low-intensity glow by an unusual two-photon combination, but don’t breathe it in!) Chlorophyll is the universal molecule of choice, with a few small variants called Chl a, Chl b, and bacteriochlorophyll. It’s a large molecule, with Chl a having 137 atoms: 55 carbon, 72 hydrogen, 4 nitrogen, 5 oxygen, and 1 magnesium. When large molecules get their electrons excited to higher energy levels, they usually are effective in losing energy several ways (see earlier). They can drop a bit in energy and emit light of lower energy, fluorescence. Chlorophyll does this, emitting a beautiful deep red light, but not so fast that the energy can’t make it to a site where photochemistry stores it.
(Crush a leaf and extract it with acetone or isopropyl alcohol. Filter the extract and shine sunlight or a violet laser pointer beam on it!) Big organic molecules are also usually adept at dumping the energy as heat in a process called internal conversion. Chlorophyll does that but slowly, so that energy can jump from one Chl to the next quickly (another of its feats, resonant energy transfer) and arrive at a special reaction center. There, with the help of a number of other molecules, the energy is used to split electrons off water (at so-called photosystem II) or to move electrons further. In green plants, for example, the energy is used to make powerful reducing agents that can reduce a CO2-carrying molecule to a molecule destined to make glucose. The energy also drives other carrier molecules that eventually cause the energy currency, ATP, to be made from ADP and inorganic phosphate, as noted earlier. Chlorophyll also nicely avoids doing much intersystem crossing, not creating notable amounts of its so-called triplet state that can create nasty singlet oxygen. The four major traits of Chl – relatively slow rates of fluorescence, internal conversion, and intersystem crossing, plus good energy hopping among multiple molecules – are very rare in organic molecules; in fact, we know of only the one type. It’s a great molecule. Expect something very similar to evolve on other habitable planets, and to be singular among organic molecules there. We should also expect that the “exo-chlorophyll” absorbs light around what is for us humans the visible spectrum. That region has enough energy to drive electron transport to make reductants for biochemically useful molecules – the sugars, especially. At much lower energies those sugars or the like can’t be made; at much higher energies, in the ultraviolet, light breaks up all kinds of organic molecules.
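The photon-versus-thermal comparison above follows from E = hc/λ, with the handy constant hc ≈ 1239.84 eV·nm. A quick sketch (note that kT at 300 K comes out near 0.026 eV, the same order as the ~0.04 eV figure quoted earlier):

```python
# Photon energies across the visible spectrum versus thermal energy kT
HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV*nm
K_B_EV = 8.617e-5    # Boltzmann constant, in eV/K

red_eV = HC_EV_NM / 700.0      # deep red edge of visible light
violet_eV = HC_EV_NM / 400.0   # violet edge
thermal_eV = K_B_EV * 300.0    # kT near life's operating temperature

print(f"red photon:    {red_eV:.2f} eV")
print(f"violet photon: {violet_eV:.2f} eV")
print(f"kT at 300 K:   {thermal_eV:.3f} eV")   # ~100x below visible photons
```

A visible photon thus carries roughly a hundred times the thermal energy scale, which is what lets photochemistry build molecules that heat alone would only tear down.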
Physiology is the putting together of all the cellular metabolism and its responses to the environment. Simple single cells such as bacteria can be said to have a well-defined physiology. They exploit a range of environments as chemical availabilities, concentrations, and temperatures. They respond to their environment. They may sense the direction of increase of a food source or the direction of decrease of a toxin, and they move appropriately. That motion is achieved even with simple biased random walking: they swim and tumble, but they bias how long they run in favorable directions. They reproduce. They maintain and repair themselves. They repair genetic damage, more or less exquisitely. They may show rudimentary communication among individuals, and they may exchange genetic material. The latter can be termed bacterial sex, giving short pieces of genetic material called plasmids to each other. Some plasmids are notable for conferring on bacteria resistance to drugs that we produce or that their natural compatriots, often fungi, produce. Expect intricate and multi-dimensional communication among cells. Even bacteria communicate. Over and above bacterial “sex,” they often make changes as a group in what is called quorum sensing. They change gene expression to a common form that best maintains themselves, individually and all of them together.
More complex organisms are the eukaryotes that have a distinct, membrane-bound nucleus. Some are single-celled, such as the yeasts. Most that we are aware of are multicellular with differences in function among cells, or cell differentiation. That’s an oddity. Almost all the cells have the same DNA, but they differ among themselves in the parts of the whole genome that they express. Liver cells don’t make eye lens proteins; cells that expressed proteins to form our eye lenses don’t detoxify drugs or terminally eliminate hormones at rates that keep the signaling channels clear. It took evolution over 2 billion years to generate multicellular organisms. It’s tough to get all the cell signaling set up right and to create a genetic code that reliably encodes the same pattern of cell division and development to make accurate copies of the organism. Organisms with sexual reproduction are the most complex. They also must die, not just divide and prosper. There’s no way to fission a body like ours and copy the halves. Only truly simple animals such as the hydra can do that. Sex has the cost of death (which also clears out the old in favor of the new, a glimmer of hope in politics, e.g.). It also has the cost of having only some of the organisms able to host the development of the new generation from a single fertilized egg cell. That’s the women, in the case of humans. The existence of males was a quandary – why have both sexes? Yes, there are even large vertebrates such as a number of lizard species that have only females reproducing themselves, but that’s rare. One clue is that sexual reproduction scrambles the genetic variations present in the two sexes to create new sets of the many individual genes that may work better together in a new environment than either original set in the parents. Diseases in the environment are a big factor.
Males may be useful in reducing the impacts of diseases, otherwise being competitors for resources and sometimes a positive nuisance (men, ask your female partners). Another value of genetic recombination is avoiding the expression of mutations that prove lethal in us diploid organisms, which carry two copies of each gene but often in variant forms on the two chromosomes. We humans individually carry about 6 conditionally lethal mutations in our genomes – mutations that, if found on both our copies, kill us. When that happens at conception, it most often yields a nonviable embryo or fetus. Almost every eukaryotic species uses features in mating and development that assure outbreeding to reduce the chance of such a genetic load knocking down the population. It’s amazing that the pharaohs made it, marrying their sisters; it didn’t continue, as usurpers and invaders came in often enough. Breeding within a small population is problematic. Cheetahs are nearly goners already. A spacecraft going to Mars with 200 humans is in genetic trouble in a big way. Small populations tend to go extinct fast. On an exoplanet expect no cute, small populations like dollhouse inhabitants.
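The arithmetic behind that inbreeding risk can be sketched with a toy probability model. This is a deliberate simplification, not formal population genetics: assume each grandparent carries about 6 recessive lethals, all heterozygous, at distinct loci that assort independently.

```python
# Toy probability sketch of genetic load under sibling mating.
# Assumptions (illustrative only): each of the two grandparents carries
# ~6 recessive lethal mutations, heterozygous, at distinct loci that
# assort independently.
# For one lethal allele in a grandparent:
#   each sib inherits it with probability 1/2,
#   both sibs carry it with probability 1/2 * 1/2 = 1/4,
#   and their child is then homozygous with probability 1/4.
p_homozygous_per_lethal = (1 / 2) * (1 / 2) * (1 / 4)  # = 1/16
n_lethals = 2 * 6  # two grandparents, ~6 lethals each
p_escape = (1 - p_homozygous_per_lethal) ** n_lethals
p_at_least_one = 1 - p_escape
print(f"{p_at_least_one:.0%}")  # roughly half of such conceptions
```

Under these rough assumptions, roughly half of sibling-mating conceptions would be homozygous for at least one lethal, which gives a feel for why outbreeding mechanisms are nearly universal.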
An interesting question is whether nonliving, replicating elements – viruses – also might exist on another planet. Viruses are small bits of DNA or RNA, possibly broken-off bits of ordinary genomes, though no one so far has good hypotheses about their origins. By nature of being genetic material (and I’ll skip over prions, which are proteins) they can take over living cells while not being alive themselves. They’re like, well, computer viruses – not computers themselves but altering the function of computers. Viruses challenge the bacteria that dominate the (quite low) biomass in our oceans. With an estimate of perhaps 40 bacterial viruses, the bacteriophages, per bacterial cell, life is only a few days for the average bacterial cell there. There are sophisticated immune mechanisms in the bacteria that capture snippets of genetic material from phage infections and keep them as a memory used to chop up the DNA of later invaders. That’s the CRISPR system of recent fame for genetic engineering. We humans have our share of viruses. We have our episodic infections and even virally induced cancers. We also have roughly 8% of our own genomes incorporated from long-ago viral infections of our ancestors.
More elaborate organisms have more elaborate physiologies. Consider the vascular plants, the gymnosperms such as pine trees and the flowering angiosperms, with their flows of xylem and phloem in tubular tissues. They have 13 cell types or more, flexible body organization (branching, switching parts from growth such as leaves to reproduction such as flowers), and communication via a number of hormones, a count that increases as we look harder. Human bodies have over 200 cell types. We have highly specialized organs from epidermis to liver to brain and much more, and long-distance sensing and communication via fast nerves and slower hormones. We may expect some exoplanets to be habitable by complex organisms and a much smaller set of these to have had sufficient time for evolution of these complex organisms. Stable environments help these to exist. Sure, Earth has regions such as Siberia with wild swings in temperature from -60°C to +40°C, but these are predictable in the large, so that organisms evolved seasonal acclimation methods – bud formation and bud-breaking, for one. The jump to sentient organisms should be extremely rare, if only for the long evolutionary time needed. One thing to expect in their fast communication centers analogous to our brain is that the communication will be electrical, as in our nervous system. It’s much faster and subject to far more exquisite routing than chemical (hormonal) communication.
All organisms must have highly competent mechanisms in biochemistry, genetics, and cell development to repair damage. Genetic repair has tiers, with eukaryotes having the most sophisticated systems. Bodily repair as replacement of differentiated tissues gets trickier the more complex the organism is. We readily regrow skin and mend broken bones. We don’t regenerate lost fingers, though we may look at the axolotl. It’s a salamander lacking a way to shut off juvenile hormones. It can regenerate whole limbs. The capacity for bodies to proliferate new cells and have them differentiate into various tissues is rare. It’s also fraught with some danger. Our physiological controls and communications among cells ensure that a skin abrasion doesn’t get repair that just keeps growing. Plants and animals restrain cell division and differentiation to avoid benign or cancerous growth. Be glad that your stem cells can’t have their way unrestrained. For life forms other than on Earth, there may be some interesting diversity in repair mechanisms.
The combinations of genetic traits that work well together are numerous but still highly constrained. The googolplex or so of possible combinations among all organisms sieves down into individual species that do not interbreed – they are reproductively isolated. Even close relatives among large animals rarely interbreed, can’t interbreed at all, or only give rise to sterile hybrids. We don’t see mules creating offspring. Most interspecies crosses are simply lethal at any of several stages. Plants are much more tolerant. Bread wheat is a two-level mixing of three ancient grass species with three times the original number of chromosomes. Humans can’t tolerate even modest changes in chromosome number or structure. Witness Down syndrome or Fragile X. In consequence, life has split into a bewildering number of separate species. The count is perhaps 300,000 for flowering plants and in the millions for insects. For bacteria the definition of species itself is a problem, given both their lack of real sex and their sharing of genes. We can expect the same Balkanization of genetic identities on any other planet.
The other reason to expect many, many species once evolution has run awhile is that many ecological functions – interactions among organisms – are needed for a whole biosphere to be stable. At the very basic level, every organism’s waste or dead carcass has to be recycled by other organisms. Else, we’re all hip deep in excrement or dead bodies, nutrients are tied up forever, and we all die. Trees are broken down by insects, fungi, and bacteria, leaving organic matter in the soil for successor plants to use, and likewise carbon dioxide and water vapor in the air that even distant photosynthesizers can use. Elephant droppings are recycled by dung beetles, first. There’s lots of organic matter in them, such as cellulose, that elephants can’t digest, not having the symbionts that cattle do; dung beetle larvae can thrive in them. Human bodies that are simply buried (or, now, composted, as became legal in Washington state) get decomposed by insects, nematodes, fungi, bacteria, etc.; the end is mostly soluble nutrients for plants and gases that dissipate.
Expect, then, that a huge diversity of organisms evolves on a long-habitable planet. Here on Earth there is a Tree of Life with millions of species linked by common descent with currently uncountable thousands of branches. We can trace a good bit of that descent, and mostly not from fossils, which are rare or absent for most species. Molecular systematics is a good handle we’ve been developing in science; check out how we know even when we diverged from chimps. Any long run of evolution gives such ramification. The pattern has been regenerated in much-modified form after each mass extinction on Earth. One thing to note is that evolution is not directed by any great organizer, not by any guild of organisms, and it has enormous elements of chance. Evolution is contingent on past evolution; a rerun would not generate the same patterns of species, even if we could do such an impossible experiment. One corollary is that a species that goes extinct will never be regenerated – no more dodos, no more elephants when the last one is poached in a few decades; I’m so glad that Lou Ellen and I got to see them in the wild, as well as sad to be likely in the last generation of humans to do so.
The linkages of organisms are needed for stability of the biosphere, though this is not guaranteed. We almost bought the farm, or the ocean floor, a number of times; think Snowball Earth episodes. One may assume that all ecological niches will get filled as they do on Earth. No resource seems to go untapped… and that includes each other with parasitism and predation. These exploitative interactions can, and very often do, terminate other species. We’ve seen it in fast action in the US with the chestnut blight and Dutch elm disease, which have almost completely extinguished the host tree species. The fossil record is replete with losers; about 99% of all species that ever existed are now extinct. Species naturally endanger each other. Expect no less on other worlds, though we have no way to see it. (This calls to mind the classic grad student humorous exam question: define the Universe and give three examples.)
Despite many ultimate extinctions, diversity abounds, both obvious and hidden. The occurrence of hundreds of tree species in a few hectares in Amazonia is testimony. So is the more readily observed diversity of about 50 or 60 species of plants in a square meter on the sandplain heath of Western Australia; nothing is taller than about 50 cm and many are under 10 cm, so a survey is easy. The universality of rather stable diversity within ecosystems remains hard to explain. Simple partitioning of major basic resources such as nutrients, à la Tilman, is unable to explain 100 species aggregating, even if communities are a bit haphazard and able to re-form anew. I suspect that there must be havens for species in a resource and abiotic condition space of very high dimension that we have yet to figure out. Still, expect it on any world.
Given the ubiquity of diversity, there are many ecological functions that are met by the organisms all finding themselves in a location. For plants there are pollinators, seed dispersers (often birds or mammals), predators on their “predators” (birds that eat insects, parasitoid wasps that do in the caterpillars eating them), and more. For us mammals there are the plants, animals, fungi, and protists (e.g., nori seaweed) that we eat. There are soil fungi that sanitize the soil we’ve been in contact with so long, by their production of antibiotics to save their food supplies from bacteria. All ecosystems on Earth are rather complex. Even when the Earth had only bacteria long ago, various bacteria cycled sulfur, oxygen, nitrogen, phosphorus, etc., maintaining parts of the biosphere that differed in oxidation status, among other conditions. We should expect the same complexity for any biosphere on any planet. For those of us who are brave (or, I say, foolhardy) enough to travel to Mars, I fear that oversimplified ecosystems bundled into a spacecraft are set for collapse. Remember, different organisms aren’t in the game to cooperate altruistically with each other. When they work together it’s in a collaboration in which each follows its own interests and in which each has colonized the best place to tap the ecological functions of others. It’s all subject to rule-breaking. There are invasive species moved by human agency, such as tumbleweed arriving in wheat from Russia that became an iconic but noxious plant in the US West. There is the origination of new diseases by mutation or recombination. We see both of these all the time – COVID-19 for mammals, Uganda-99 fungal rust for wheat, toxin-tolerant grasshoppers. One substantial drop in a food crop in a Mars colony and it’s Donner Party time.
In the final analysis, we would find what we find, certainly with surprises. Recall the shock of finding the Burgess Shale with its exploding variety of organisms a bit over 500 million years ago, and the equally striking finding of the roughly 70-million-years-earlier Ediacaran organisms. Hallucigenia, anyone? As impossible to predict would be the stage of evolution on an exoplanet. Still in the bacterial stage? A few in a trillion in the high-tech sentient being stage? It’s rather a fantasy, other than on Mars or a moon in our Solar System. We’re not going to be around as a species long enough to travel (mega-difficult) or even to get an answer from a high-speed spacecraft with sensors. Let’s exult in the wonders of our home planet and appreciate the marvels of diversity, of evolution, of lucky breaks that made us a species even capable of appreciation. Finding life on another planet would inform us about our immensely lucky breaks and the care we need to take to preserve what luck left us.
Not really more of our luck in the Universe, but a remarkable structuring and dynamics of life that, taken in as a whole, are more mind-bending than any hallucinogen.
The Earth has clearly been habitable (version 1.0, at least) continuously, even if very tenuously at times.
Looking back at the introduction with various qualified definitions of habitability, we clearly denote Earth as continuously habitable for at least some forms of life since nearly 4 billion years ago. Without break this led to an occasionally explosively ramifying evolutionary tree that now includes us. Species of organisms of all types have come and gone, sometimes with many or most of them disappearing catastrophically in mass extinctions. Habitable locations have varied widely overall, with toxic ocean regions precluding life occasionally, and the same for the land surface that may have been completely covered by ice several times. Life forms have supported each other, not altruistically but by mutual natural selection, in scenarios that have changed over time. For the last half-billion years or so we have bacteria to make vitamin B12 for many multicellular animals. We get B12 when we eat animals that harbored the bacteria. Before that there were no such animals but well before multicellularity came to be, Eubacteria (true bacteria) and Archaebacteria merged to give more complex eukaryotes. These evolved into us and the species that now cohabit the Earth with us (alas, fewer such species each year).
The past mass extinctions – five, at least – underscore the concepts that habitability is not guaranteed from initial conditions as a planet forms nor from current conditions, that it bears major elements of chance over time, and that it applies selectively to different life forms. It is also very apparent that humankind holds a great many aspects of habitability in its hands, with a grip that is proving to be somewhat slippery. We are dealing with natural and anthropogenic hazards, about each of which many volumes have been written. This online book is not the place to cover all the points already made so well; rather, it may be of use to explore what habitability studies in the large, up to extragalactic scales, may offer as insights or at least foci.
Often, presentations about natural and anthropogenic hazards are siloed. Soil pollution by heavy metals is not found in discussions of climate change, and both of these rarely mix with discussions of waste recycling and depletion of mineral reserves. In a Martian’s-eye view of habitability they do fall together. I will attempt to do some tying-up.
- Foremost, there is no planet B as an alternative home for humankind. Studies of exoplanets have inflamed interest in how our Solar System came to be but the selling point to the public (and some scientists) is the tantalizing thought that we may find life somewhere else. Regarding the possibility of us humans moving there, only the Brooklynese expression is merited: Fuggetaboutit!
- The adage that you can’t get there from here is verified. No nearby stellar system is known to have a habitable planet, and even the nearest one would take around 75,000 years to reach with current technology. Any group of potential colonists would be very small, representing a very tiny “aliquot” of humanity with likely unfortunate consequences from the founder effect in genetics.
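The 75,000-year figure can be sanity-checked with back-of-envelope arithmetic. The round numbers here are assumptions of mine: Proxima Centauri at about 4.25 light-years, and Voyager 1’s roughly 17 km/s standing in for “current technology.”

```python
# Back-of-envelope interstellar travel time. Assumed round numbers:
# Proxima Centauri at ~4.25 light-years; Voyager 1's ~17 km/s as a
# stand-in for "current technology".
LIGHT_YEAR_KM = 9.461e12        # kilometers per light-year
SECONDS_PER_YEAR = 3.156e7      # seconds per year
distance_km = 4.25 * LIGHT_YEAR_KM
speed_km_per_s = 17.0
travel_years = distance_km / speed_km_per_s / SECONDS_PER_YEAR
print(f"{travel_years:,.0f} years")  # on the order of 75,000 years
```

The result lands right around 75,000 years, confirming the order of magnitude quoted above.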
- We, and our companion life forms, face hazards from both natural causes and our own actions. To meet challenges in the sequence of recognition, planning, and action, we’re largely in the first stage. For climate change we’re sliding soooo slowly into substantive planning and a bit of action. It’s possible to progress. For natural hazards our record is generally confused. We have no plans for dealing with the next eruption of the great Yellowstone volcano. We’re getting some basic biology for understanding how novel diseases emerge.
- There is a great plus side. Having to study in many dimensions and in great depth what life needs in the way of star, planet, and neighbors, we get a firmer grip on how the habitability of Earth is maintained.
- We have many intellectual and practical tools to understand our situation and consequent needs for action. We can put it all together from many fields. As a mere sample of our applicable knowledge:
- Biology, including the physiology of tolerances to environmental conditions both abiotic and biotic, the ecology of species interactions, genetics, evolution, and more;
- Geology, from plate tectonics with its cohort of mountain building, volcanos, and earthquakes, to the geochemical cycles from local to global, to the informative and contrasting geology of planets and asteroids;
- Atmospheric physics and meteorology that can comprehend and often predict weather and climate, past and present;
- Paleontology, showing us our own evolution and that of our compatriot species of all the diverse taxa of life, tracing environmental drivers with increasingly sophisticated physicochemical methods;
- The diverse fields of physics, from the nuclear physics of fusion in stars to relativistic phenomena of black holes to quantum theory of stars’ structure and dramatic ends;
- Cosmology, taking the grand view of how the Universe is structured;
- A welter of analytical and modeling tools in mathematical physics, chaos theory, statistics, big data methods, artificial intelligence, …;
- A range of social sciences to fathom how we perceive risks, plan actions, choose leaders or devise our own personal leads.
It often seems that little is happening in a field we deem critical but then something suddenly bursts in from another field to illuminate the original one. There are copious examples in the hard sciences; in the social sciences it may occur with some frequency, but I’m personally less cognizant of such sudden cross-fertilization. It comes down to a matter of timing. Can we put it all together, or enough of it, to have most of us make it through, say, the next thousand years?
Some aspects of habitability are regional in extent, others are global. Depletion and contamination of groundwater is widespread but localized; the High Plains Aquifer of the US doesn’t connect to the Northwest India Aquifer. In contrast, climate change affects the entire globe, with impacts that vary by location (e.g., polar regions warm the most) but all tied to greenhouse gas injection into a well-mixed atmosphere. Still, sociopolitical ties make changes in habitability in one place drive changes elsewhere. The Syrian drought that triggered the civil war spilled into Europe as a huge influx of refugees. With trade ties and financial links now extensive across the globe, all local changes in habitability bleed over to far larger scales.
The challenge for the human psyche is that our brain evolved to deal well only with immediate, localized threats. At the Chapman Conference of the American Geophysical Union in 2013, one speaker used an image of a truly terrifying situation involving a hippo (hippos are fast and aggressive in the mating season; they kill about 500 people in Africa each year).
Individually we’re poor at attending to slowly developing threats. We are also poor at weighing risk of loss vs. potential for gain; talk with any compulsive gambler or a smoker. Socially we’re poor at dealing with threats arising from collective action, particularly those involving “them” even more than “us” as in climate change. With the evolution of language and then complex societies we got better on the whole but have had to rely on hierarchical leadership structures that are often diverted to placing wildly skewed estimates in ranking concurrent risks, as well as having ideological blindness and outright corruption. There has to be a way out, or many conjoined partial ways out of the conundrum to achieve unified and appropriate action on threats to habitability and other kinds of threats. We have done such in the past. The European Union formed and still functions, inevitable complaints notwithstanding. The US achieved rural electrification at a cost of 2% of GDP each year for 10 years. Scott Denning of the Colorado State University pointed that out in a public presentation and noted that we, as in the world’s population could do the same for climate change at the same relative cost. I view the latter achievement, rural electrification, as having required a dedicated bureaucracy, out of people’s faces every day to get serious work done in the background, not worrying everyone about all the complex steps. We have to hit the sweet spot, and it can be done.
Natural hazards range from local and repetitive, such as hurricanes, to global and singular, such as asteroid impacts or massive, explosive volcanic eruptions. Humans have dealt with hurricanes and similar repetitive extreme weather events, now with increasing success via detection and warning systems, at least over much of the globe. Repetitive or just recurrent in a large sense is the evolution of new diseases of humans, crops, livestock, and natural resource bases such as forests. COVID-19 emerged as the most recent of the many coronaviruses, arising from mixing among several species – bats, possibly pangolins, and humans. In that sense, it is part natural hazard and part anthropogenic hazard. The Uganda 99 wheat rust is similar, a natural mutation that exploited intense cropping in order to spread. We might toss in here the expansion of ranges of diseases endemic to restricted locations. Mosquitos started as nectar feeders and only later evolved blood feeding and their lamented ability to spread malaria, dengue, Zika, Marburg fever, and the like, but then all these diseases spread massively. By the time the genus Homo had evolved, the change had long been in place, so there was no human management of it until we developed partial ameliorations rather recently.
Massive volcanic eruptions and large asteroid impacts are singular and have global reach. Both of them greatly alter one locale and change the climate globally. Continuation of habitability, piecewise as it will be around the globe, is greatly affected by the state that humans have put the biosphere into. We have limited the refugia for many life forms where they can persist or to which they can move to reestablish viable populations. Lions used to roam vast areas of Africa, Asia, and Europe. They have few places to go now in case of a big disruption of their currently limited habitat. Thousands of species are under such constraints. There is little that humans can do in response to new natural events that hit these refugia. Our multi-millennial legacy of changes to the biosphere strongly locks in the outcomes for organisms in the wild. Our own persistence as a species, and that of our crops and livestock, would be only slightly under our control after such events. I say “slightly” because mounting a response demands a great deal of previous planning. That’s not possible when there are so many places to prepare with expensive infrastructure and services for events of unknowable magnitude. We can’t stay on red alert everywhere and for all time. Another reason for the qualifier “slightly” is that we have reduced the biotic reserves needed for recovery.
Considering both natural and anthropogenic hazards for the continued habitability of Earth, let’s take a look at one aspect of our preparedness for coming change. I refer to the critical ability of plant breeders to breed crops for human consumption that are resistant to diseases and pests and tolerant of drought and other environmental stressors. Breeders spend perhaps 90% of their effort on these factors. High yield, easy harvesting, and marketability are prominent in seed catalogs but are supporting actors in the play. Breeders get the necessary resistance and tolerance traits almost exclusively from wild relatives of crop plants, directly or transferred from breeding-only lines that have had these traits introgressed. An increasing challenge is to find wild relatives when we’ve cleared most wild land in the world that hosts crop species relatives. When a species or a variant natural population goes extinct, there’s no rerunning evolution to recreate its integrated suite of genes that clearly works splendidly, even if imperfectly, as all evolutionary theory admits.
A fallback to consider is genetic engineering. However, genetic engineers at their most sophisticated efforts still take traits of wild plants, just hastening the introgression of genes. Also, they can consider one, two, maybe five genes at a time, whether these are new or preexisting. That may not be enough for a desired trait. Devising genes de novo is far beyond even their ken as anything like a routine practice. To devise a new gene that will work with the suite of tens of thousands of other genes in a plant is still far beyond the means of the novel discipline of systems biology. Knowing how whole suites of genes work together is a grand challenge. One inference from these facts is that the more we leave of the biosphere intact the better prepared we will be. The aesthetic and biophilic benefits that attend this are plusses.
There are many anthropogenic hazards to habitability of the Earth. A litany of them would be far too lengthy here, as well as too diffuse in aim. I then choose to address climate change as a global threat with continuous worsening, ready perception of its progress, alarming tipping points, and long-term irreversibility. I focus on levels of preparedness and on ethics. Among the characteristics of climate change are:
- The mechanistic links to greenhouse gas injection are solid, appallingly so;
- The major amelioration is stark: reduce the use of fossil fuels and reduce land clearance; that’s fraught with hits to economies both advanced and nascent and with the costs of stranded investments in the old technologies;
- Another “wedge” in the solution, as Stephen Pacala and Robert Socolow term these pieces, is increasing the sink for CO2 with:
- Afforestation and other increases of standing biomass; this has limited scope – as Rob Jackson of Duke University noted to me once, this may buy us two years’ worth of compensation of our injection of CO2. It also adds to demands on water and land use, which China has found problematic already;
- Active technologies to scrub CO2 from the atmosphere. These are very costly in monetary terms and in energy use. That energy had better come from a massive expansion of renewable energy sources, not fossil energy!;
- Geoengineering to make the Earth cooler, if unevenly so. Make it more reflective of sunlight with SO2 aerosols, orbiting mirrors, and the like; this does not address ocean acidification by the CO2 still hanging around. It does not address direct effects of high CO2 on land plants, especially declining protein content. It does not address the induced changes in weather patterns. Thus, geoengineering has serious ethical problems, particularly pitting industrial nations against developing nations;
- There are tipping points beyond which phenomena cannot be reversed on any human time scale. Waiting to act until the tipping point is near is far too late; the momentum of fossil fuel use and land clearance can’t be halted fast:
- Geographer Vaclav Smil presented a sobering analysis of the time it takes to replace a major energy technology, such as the shift from wood and natural fats to coal, or from coal to oil. It’s about 60 years. We don’t have that time. The changes will have to be in desperation mode, which we may fondly hope will be undertaken in a cooperative spirit;
- These tipping points include loss of ice cover, which amplifies solar energy absorption. This is one of the positive feedbacks pushing an exponential increase in effects. Yes, there are negative feedbacks, including CO2 uptake by plants and the ocean, though the balance is precarious. As I noted earlier, it would require the most reckless and preposterous use of all fossil fuel reserves to exceed the tipping point for a runaway greenhouse effect;
- Sea level rise is irreversible; there’s no bottle to put the extra water into. About one-fifth of the world’s population lives on land within 10 meters of current sea level. Dislocation of that fraction of humanity is almost inconceivable. Whole nations such as Bangladesh and the Maldives essentially disappear, and rather soon on our scale of planning for ameliorations;
- Sure, Earth did come out of periods of extremes of temperature, such as the Cretaceous, sometimes nicknamed the Saurian Sauna for its mean surface temperature that may have been as high as 35°C. However, it took millions of years for equable climate to return. Fortunately, it appears that even our most reckless use of fossil fuels is unlikely to push the Earth into such a near runaway greenhouse effect. That said, novel climates that are only short of catastrophic seem to be setting in for the long term. Inner East Asia switched to a hotter and drier climate in the 1990s.
- We as the genus Homo (H. sapiens sapiens, H. sapiens neanderthalensis, H. denisova) have faced natural tipping points in the past, without any significant means to respond (does Sahara desertification count?). Now we have great fonts of knowledge, but with lagging sociopolitical means to respond – and we’re likely already past some tipping points.
- In the same vein, even for reversible effects we see only about half the final effects at any one time. We see the transient changes, but the feedbacks operate to amplify them. Even if we suddenly stopped using all fossil fuels, the final warming of the planet would be about twice what it is now. That arises from the further development of the mostly positive feedbacks such as ice melting.
- Our agricultural system is tuned to current climate:
- Our soils developed under long-term regimes of precipitation and temperature. One problematic change often cited is the shift of the corn belt from the US into Canada, where acidic and lower-nutrient soils won’t support the usual yields;
- China saw an initial rise in rice yield from rising CO2 in the early 21st century, followed by decreases as high temperatures sterilized a fraction of the pollen;
- Global total precipitation has risen, but unevenly. True, higher temperatures do raise the vapor pressure of water, leading to increased atmospheric water content. However, some geographic regions are showing increased precipitation while others show decreases. The “haves” tend to gain and the have-nots tend to lose, from a phenomenon called moisture convergence. Increased floods and droughts impact yields, at times catastrophically for local human populations; a multi-year drought was a trigger of the Syrian civil war, driving farmers into cities and exacerbating food shortages;
- Shifts in temperature alter the timing of pollinator activity, causing mismatches between the pollinators and crops in some places and for some crops;
- Indonesian farmers presently double- and triple-crop rice under the abundant precipitation of the Intertropical Convergence Zone. Julian Sachs and Conor Myhrvold point out that the ITCZ is already showing evidence of moving poleward to the north. If it largely skips Indonesia in a century or so, the reduced cropping will lead to famines or, more dramatically in sociopolitical terms, perhaps as many as a hundred million climate migrants;
- Crops, livestock, and humans variously adapted in part in the past to climate change, or failed to do so:
- Collapses of civilizations were part of the result, as for several empires in ancient Mesopotamia and the Maya in Mexico and Central America. Other factors, political but also environmental, played in, such as the inevitable salinization of irrigated land;
- In the very long term, humans, crops, and livestock evolved physiologically and behaviorally in response to environmental changes. Consider the modest genetic change of humans migrating to northern Europe and evolving lactose tolerance as they relied on milk (see the wide-ranging story of this in a recent book, Lactose: Evolutionary Role, Health Effects, and Applications). However, the current rate of climate change is unprecedented, too fast for evolution. Also, evolution always involves “excess genetic deaths” as various alleles (variant forms of genes) replace other alleles. Some losers die outright; others lose reproductive output, shrinking in population size or going extinct;
- Biotic extension of climate change:
- Insect vectors of diseases are expanding their ranges. Dengue, chikungunya, Zika, Japanese encephalitis, West Nile, and even malaria are in or moving into the temperate USA. Their spread is abetted by the accidental introduction of their mosquito vectors, through actions verging on the criminally negligent. It must be admitted that malaria was endemic to much of the US until the last case in 1950 in Farmington, New Mexico. The US management method was control of mosquitos with DDT but, more effectively yet, the introduction of window screens to bar night-biting mosquitos from bedrooms. This represented a concerted social action of extended duration and high public acceptance; these things can be done;
- A fair yet uncertain fraction of biodiversity across taxa will be lost. Natural biogeographic ranges of species are set in good part by temperature and precipitation patterns. As these patterns shift poleward, species may not disperse as fast. Montane species face more of a threat, as they commonly have nowhere to go but up, until the last habitat is gone.
- Our time scales for technological and political change are too long for the rate of climate change. The major societal mitigation of climate change has to be the changeover to renewable energy resources. Vaclav Smil’s analysis of a 60-year time lag in changeovers between energy sources (wood, whale oil, coal, oil) is sobering… I hope. In consumer technology there is much talk of disruptive technologies, with some accuracy. The disruption is rather modest to date in the energy sector.
There are a number of other hazards to the habitability of the Earth that I will not even sketch here, for lack of space as well as their being familiar and of less dramatic dimensionality than climate change. One hazard that merits note here is nuclear war. It would resemble a major asteroid impact in two of the three key ways. Deaths in nuclear war arise from direct blasts, radioactive fallout, and global decreases in temperature and sunlight penetration from the lofted soot that encircles the globe. Two flash points are of critical interest. The US-Russia confrontation is largely defused now, through decades of diplomacy accompanied by leaders and countless subordinates attaining an intellectual grasp of what’s at stake. I wish also to thank one Soviet hero, Stanislav Petrov. On 26 September 1983 he was staffing the Oko early warning system. He received false warnings of five missiles fired from the US but, against standing orders, judged them a false alarm and did not report a launch. The second flash point is the conflict of India and Pakistan over Kashmir. Brian Toon of the University of Colorado, Boulder and his colleagues have addressed this in detail, with a recent publication in the 26 March 2020 issue of the journal Nature. The two nations are on track to have 400 to 500 nuclear warheads by 2025, with small to medium yields. A war scenario has a total of 250 weapons detonated, killing 50 to 125 million of their citizens directly. Delayed deaths from radioactive fallout would spread among a number of nations. Toon and colleagues leave those estimates to radiation physicists and others and address the “nuclear winter” driven by the soot. Globally the surface temperature would be expected to drop 2° to 5°C. Sunlight would decrease 20-35%. Recovery would take more than 10 years, during which time photosynthetic productivity would decline by 15-30% on land. Famines would ensue in many areas, unrelieved by imports from breadbasket nations hit similarly by decreased crop productivity.
This scenario would not be the end of habitability of the Earth, in that many people would survive, if largely miserably. Many social systems across the globe would come apart, eventually to be replaced. Many ecosystems maintained by political will would be very hard hit, such as national parks. Desperate citizens would raid them for bush meat, as happened in Gorongosa National Park in Mozambique during their civil war. The park’s large animals were replenished from outside sources. Thus, humans would not be the only species to suffer. In sum, from here the most likely scenario is a paroxysmal retreat from nuclear war and continued habitability with reduced populations of humans and many other species. Some, however, may thrive.
What are potential ways out of such nightmare scenarios, taking the two above, climate change and nuclear war? There is no magic bullet for either, as Pacala and Socolow indicate for climate change. International agreements with teeth for enforcement have taken steps forward and backward, with the apparent result being somewhat more forward progress. Arms limitation treaties are a good example. The rise of truly global trade has mixed effects, positive for reducing prospects of violence in international confrontations, while very mixed for commonly promoting economic inequality. Two more developments in the intermediate to long term would strongly advance the prospects for continued habitability.
First is population control. Without stabilization of population, the prospects are dim for dealing with the other hazards to habitability – war, climate change, pollution, fisheries collapse, resource depletion. Population growth and growth of resource consumption per capita intensify all the hazards. Wars typically have an economic basis rather than a religious or ideological one (not to deny the many wars of the latter types). Japan foresaw its resource limitations and began colonizing Asia, helping to ignite WW II in the Pacific. Ancient empires fought as much for land to feed their populations as for pride and riches. Forced measures of population control succeeded in China during the one-child policy, at great social cost. A more humanistic resolution has come about with the vaunted demographic transition. Continued urbanization worldwide and rising incomes in many economies have made it much less attractive for couples to have children. Children become viewed as economic burdens, even if much loved, rather than as field hands or a personal safety net in lieu of national social security systems. The prospects for continuation of the transition remain mixed, in part from religious objections, though many individuals even in stern societies adopt nominally forbidden methods of birth control. The religious objections to birth control in the Abrahamic religions implicitly derive from the “need to breed” replacement warriors for continuously intense rivalries, while diseases and trauma exerted a high toll on population in the background. The objections became long-lived ideologies.
Second is a new economic system. Capitalism is really only about 400 years old, having replaced mercantilism, which replaced a succession of systems going back to barter. The fundamental flaw in capitalism is that only exponential growth sustains it. The Earth’s resources are finite and we’re depleting them. Granted, the Club of Rome report of 1972, The Limits to Growth, missed the mark quantitatively in predicting the dates when certain key materials would run out. Yes, there are materials substitutions (ultrahigh-purity glass for optical communication fibers replacing some copper lines). Yes, we can use progressively lower-quality resources, as we are doing for gold mining (if at much higher costs in energy and in land use). Yes, post-industrial economies such as the US are moving to services that use fewer materials than does manufacturing. Still, these changes do not stop the growth of resource use; they merely slow it. The “29th day” might become the “35th day” in the classic puzzle. To recall that puzzle: if a weed doubles its coverage of a pond every day and it covers the whole pond on the 30th day, when does it cover half the pond? On the 29th day, of course. The running out of various resources occurs quickly at the end.
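The pond puzzle, and the limited payoff of merely slowing growth, can be checked in a few lines. A minimal sketch in Python (the function names and parameters are mine, purely for illustration):

```python
# The classic pond-weed puzzle: coverage doubles every day, and the
# pond is fully covered on day 30. On which day is it half covered?
def coverage(day, full_day=30):
    """Fraction of the pond covered on a given day, doubling daily."""
    return 2.0 ** (day - full_day)

# Half coverage occurs just one day before full coverage.
assert coverage(29) == 0.5
assert coverage(30) == 1.0

# Slowing growth only buys time, it is not an escape: if coverage
# doubles every 2 days instead of every day, full coverage merely
# moves from day 30 to day 60, and half coverage to day 58.
def coverage_slow(day, full_day=60, doubling_days=2):
    return 2.0 ** ((day - full_day) / doubling_days)

assert coverage_slow(58) == 0.5
```

The point of the exercise is the suddenness at the end: most of the depletion happens in the last few doubling periods, whatever their length.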
Capitalism did lift many people out of material poverty over the centuries. It did so only with the help of socialist policies incorporated to curb the vast abuses of unrestrained capitalism. Think back to the “Satanic mills” of the beginning of the Industrial Revolution. Even now, a basic feature of capitalism is that a fraction of losers is mandated for profits to be sustained; this point is addressed in the video series Money as Debt. Cycling many times, the gains in overall wealth generate losers who would fill the planet, were it not for legal systems (no more debtors’ prisons, in most places), secondary opportunities, and social safety nets. Capitalism no longer exists in pure form anywhere on a significant scale. Several replacement systems are both in the process of design and in pragmatic evolution. The challenges of meeting peoples’ needs and avoiding resource depletion just may be met, we hope in due time and without undue trauma.
Might we be down on our luck in the Universe, just intelligent enough to undo all the rest of our luck with nuclear war and anthropogenic climate change? Einstein said that we have a perfection of means and a confusion of ends. I may be more flippant in claiming that ants exhibit intelligence (only) collectively, while humans exhibit stupidity collectively at the real crux of the game. Time will tell, very, very quickly.
… yet the exploration gives us perspective on keeping our own planet habitable
If you’re in the market for a habitable planet:
- Pick a good earlier supernova in the neighborhood, but no recent ones. Make sure it made enough heavy elements;
- Look for one near the size of Earth. That gives it a chance of keeping water. That gives it a chance of having tectonic renewals of chemical elements necessary for life. That gives it a chance of not being so big as to have a gaseous surface;
- Make sure it rotates and orbits “nicely;”
- Look for good-neighbor planets akin to Jupiter to clean out a lot of debris that might collide with the planet;
- Don’t make it too clean; you’ll need water delivered by impacts of some of the “space junk.”
- Its star needs to be near the Sun along the main sequence, no bigger than our Sun, long-lived and not too hot. It can’t be too small, or the planet will need to be so close to stay warm that it will be tidally locked; its atmosphere will freeze out on the cold side;
- Watch out for close neighbors that might chaotically perturb your planet’s orbit;
- Do look for a big moon that can stabilize the planet’s spin axis. Do look for a spin axis that’s close to perpendicular to its orbital plane (of course, that’s common);
- Put your trust in carbon and water;
- Look for a unique combination of greenhouse gases, carbon-based, that can make the transition from strong to weaker as the host star evolves over time;
- …and don’t expect you’ll find one in a sample smaller than many billions. That is, don’t expect you’ll see one; also, we’re not in a position to take billions of samples, personally or by telescopes.
I have the overweening conviction that we are alone, in the sense that we will never make contact with extraterrestrial civilizations while our civilization lasts. The extensive set of constraints on life I have just elaborated makes the chance of life evolving on any planet extremely small. There are likely many planets in the Universe with life on them, but they are few and far between. The chance that any are near us is exceedingly small, and even lower is the chance that any of them harbors life capable of communicating with us… and interested in doing so.
I admit that not all of the constraints are extremely restrictive. For one, low energy inputs could support slow life (maybe not intelligent life; something more like Congress). Taken all together, however, the chance of any life remotely like that on Earth looks very, very small. Referring back to the title, I put the chance at far less than anyone’s chance of winning the Powerball Lottery. The chance for intelligent life that communicates over interstellar distances is even tinier. Remember, in the Drake equation there is a factor for how long an intelligent civilization would broadcast signals. These have to be astoundingly energetic signals to cover interstellar distances… and what would be the point for the originators? The average lifetime for a whole species on Earth is only about 2 million years, less than 1/2000 of the age of the Earth. Civilizations are even more ephemeral. Many discrete civilizations have come and gone. The Sumerians were among the first, and they originated only about 7,500 years ago. There are many cogent arguments that resource-intensive civilizations such as we have become would rapidly destroy their resource base and, with it, their ability to communicate and their own existence. I recently found the wide-ranging free thought of Yuval Noah Harari in his 2014 book, Sapiens, in which he expresses cautious optimism at the end, but only after explicating copious social processes, many of which militate against our continuance. Again, stay tuned!
Is the search for extraterrestrial life worth the effort? My take on what we’ve learned about life and the conditions that led to it can be summarized in a few sentences. Look up at the stars and wonder. Enjoy the ride. Keep the Earth turning, figuratively.
Regarding the searches for exoplanets, I deem them of some value, if far from my highest priorities. I wouldn’t sell it as a search for life on other planets. As with other basic science, it can tell us a good deal about our place in the Universe. The effort is akin to planetary geology studies on other planets in our own Solar System, similarly informing us about our unique place in the Universe. We’re not going to reengineer the Earth with what we’ve learned. It’s more like art, or, for a closer analogy, I cite the testimony of physicist Robert Wilson before a Congressional committee to defend the building of the particle accelerator at Fermilab. Asked what the accelerator contributed to national defense, he replied that it gave the nation something worth defending.
The where question is easy, almost: look all over the sky. This takes time and requires repositioning of a telescope, whether on Earth or in space. The search can be at relatively nearby stars, hoping for more detailed information about an exoplanet’s mass, orbital radius, perhaps in the near future some information about its atmosphere (measuring the selective absorption of starlight by atmospheric compounds). The search may be broad-based for a large range of distances. Individual telescopes and researchers have goals that may overlap minimally or not at all with those of other telescopes and researchers.
The why question has answers that typically mix science and politics. Plain old scientific curiosity is a powerful motive. Selling that to the taxpayers who pay for the telescopes, data processing, and time of the researchers is tricky. The safest way to sell exoplanet searching is to claim to be looking for life elsewhere in the Universe; many people get excited about that, at least enough to accept the cost. The same selling point is used for space missions within our Solar System – e.g., going to Titan to check out its encased ocean. I find some of the projections to be extreme, even a betrayal of scientific objectivity. That’s my opinion. I’m OK with looking at exoplanets per se. Their abundance and their properties can tell us more about our own lucky Solar System and our Earth, if not as much as the fantastic studies down here on Earth in biology, chemistry, physics, geology, meteorology, ecology, and more.
The answer to the how question brings up some incredibly sophisticated technology. Not so many decades ago our views of planets in our own Solar System were of low resolution. More powerful telescopes in space and on the ground added enormous details. Space missions up through the Voyagers and New Horizons added real detail not even available, or very faintly so, to telescopes – e.g., the magnetic fields of planets and moons.
One can ask, can one view an exoplanet directly in a telescope’s image? The answer is Yes; 49 exoplanets have been found that way as of May 2020. There are more widely applicable ways, one being the slight diminution of starlight when an exoplanet crosses the stellar disk along our line of sight. This transit method has revealed 3,164 exoplanets. It requires very high spatial resolution to track the same star over time, plus great precision in measuring stellar brightness. That brightness can vary from the star’s own behavior, so there are sophisticated ways to back that out. For ground-based telescopes there is also the sophisticated modeling of light absorption and scattering by our own atmosphere. You might go back to the question of aiming the telescope, in the face of Earth’s own wobbly rotation rate (see, for example, the absorbing book, From Sundials to Atomic Clocks, by James Jespersen and Jane Fitz-Randolph, Dover).
The transit method works for exoplanets that orbit in a plane nearly exactly edge-on to us. Think of trying to use it to detect the Earth from far away, from vantage points at various angles to our orbital plane. A perfect alignment of our orbit with the center of the Sun’s image is clearly ideal. Shift the view angle up or down by just ¼° and you miss seeing the Earth transiting the Sun. So, another method less demanding of alignment is seeing the wobble of the star as the exoplanet swings around it. An exoplanet has only a tiny mass compared with that of the star, but it does swing the star slightly back and forth along our sight line. Any two-body system has both bodies orbiting about a common center of mass.
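The ¼° tolerance follows from simple geometry: a distant observer sees Earth transit only if the line of sight lies within the angle subtended by the solar radius at the Earth’s orbital distance. A rough Python check, with standard values for the solar radius and the Earth-Sun distance (variable names are mine):

```python
import math

R_SUN = 6.957e8   # solar radius, meters
AU = 1.496e11     # Earth-Sun distance, meters

# Half-angle within which a distant observer sees Earth cross the
# Sun's disk -- roughly the quarter-degree tolerance in the text.
half_angle_deg = math.degrees(math.asin(R_SUN / AU))  # ~0.27 degrees

# For randomly oriented orbits, the geometric transit probability
# is roughly R_star / a: about 0.5% for an Earth-Sun analog.
p_transit = R_SUN / AU
```

This is why the transit method, despite its thousands of discoveries, necessarily misses the great majority of planetary systems.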
Now, if the planet has an orbit that’s close to circular, it has an orbital velocity, vp, that balances the centrifugal acceleration, vp2/rp, against the gravitational acceleration from the star of mass ms: vp2/rp = Gms/rp2, which gives vp = (Gms/rp)1/2.
For the Earth, this comes out to 29,800 meters per second, about 18.5 miles per second, as a free ride for us. Both planet and star have to complete their orbits around their common center of mass at the same rate, so we now equate the angular speeds of the Sun and the Earth. The angular speed for the Sun is vs/rs and it’s vp/rp for the planet, so we get vs = vp(rs/rp) = vp(mp/ms).
For the last step above, we used the definition of the center of mass, rsms = rpmp. Plugging in the numbers for the Sun and Earth (mp/ms = 3×10-6!), we get vs = 0.09 m s-1, a really slow dance! Between moving toward us and moving away from us, the total swing is twice this, about 0.18 m s-1. To a first-order approximation, the star’s motion shifts the frequencies of its radiation by a fraction vs/c, where c is the speed of light. That’s only a shift of 0.3 parts per billion! It’s swamped by the Sun’s “surface” rotational velocity, which varies by latitude but is around 200 m s-1. You need a more massive planet to get a better signal in your spectroscopy measurements. At 1000 times the Earth’s mass in the same orbit, the planet’s own orbital speed is essentially unchanged, but vs is 1000 times larger, near 90 m s-1, for a shift near 0.3 parts per million. Still, 802 exoplanets have been discovered this way.
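The numbers in this derivation can be verified directly. A short Python check, using standard values for G, the masses, and the Earth’s orbital radius (variable names are mine, for illustration):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_EARTH = 5.972e24 # kg
AU = 1.496e11      # Earth's orbital radius, m
C = 2.998e8        # speed of light, m/s

# Circular-orbit speed: v_p = sqrt(G m_s / r_p) -- about 29,800 m/s.
v_p = math.sqrt(G * M_SUN / AU)

# Star's reflex speed from the center-of-mass condition
# r_s m_s = r_p m_p, combined with equal angular speeds:
v_s = v_p * M_EARTH / M_SUN        # ~0.09 m/s

# First-order Doppler shift of the starlight:
shift = v_s / C                    # ~3e-10, i.e. 0.3 parts per billion
```

The tiny fractional shift is the whole challenge of the radial-velocity method; it takes extraordinarily stable spectrographs to pull it out of the stellar noise.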
There are several other ways to detect exoplanets. One is seeing the wobble of the star against the background of distant stars. That astrometric method works for exoplanets orbiting anywhere from edge-on to flat-on to our line of sight. Well, it has worked to find 1 exoplanet, so far. Another way is to observe the sudden refocusing of light as the light’s path is bent ever so slightly by the mass of the planet moving across our view. (This bending was verified for starlight passing by the Sun during a solar eclipse back in 1919!) This technique of gravitational microlensing has discovered 89 exoplanets. There are a lot of really intent astronomers out there to see this in their data!
The when question: exoplanet discoveries date from the 1990s, with dedicated space-telescope searches since 2006. Let’s merge that with a form of the what question, taken to mean what telescopes have been and are being used. The most famous is the Kepler Space Telescope, with at least 2,347 discoveries. Alas, its reaction wheels that kept it aligned tightly have failed. The baton has passed to a range of other telescopes. A nice review is at https://en.wikipedia.org/wiki/List_of_exoplanet_search_projects. That site also links to the who question, listing the research projects. You can trace these to the scientists, engineers, technicians, and others pursuing the projects.
The answer to the what question is rather rich. Exoplanets from the size of the moon to 30 times larger than Jupiter have been detected. The nearest one is Proxima Centauri b. It orbits the star nearest to us. I wrote up a post on that in 2016. PCb is a bit bigger than the Earth. Its star, Proxima Centauri or PC, is a red dwarf with a low radiative temperature of 3042K. That would make PCb quite cold if not for its orbit being very close to the star. The estimated radiative temperature of the exoplanet is then a “moderate” 234K or -39°C. It won’t support life, for many reasons. One is that PC flares, periodically toasting the planet and also blasting away its atmosphere. Another is that the orbit is so close that PCb is tidally locked to have one face to the star permanently. Any atmosphere left would all be condensed onto the very cold opposite side. Too bad. That’s the only exoplanet that we had the remotest chance of ever reaching, in several tens of thousands of years with our fastest spacecraft.
SETI, the search for extraterrestrial (ET) intelligence
A number of groups, including the eponymous SETI Institute and various amateur societies, advocate for SETI and set up hopeful monitoring systems. I don’t take this as a productive effort, for two major reasons. First, the energy burden on an extraterrestrial civilization (let’s call it an ETC) to broadcast its existence would be enormous for no apparent benefit, and, second, it’s dangerous to advertise our existence.
Consider the energy cost first. The shortest communication distance, as from us to Proxima Centauri, is about 4 light-years. Radio waves broadcast with a certain power get spread over a vast area at increasing distances from the source. Intensity (as flux per unit area) decreases as one over the square of the distance traveled. How weak can the signal be at the Earth and still be detected? This will tell us how much power has to be put out at the other civilization’s planet. The most dramatic example of communication with very low radio power that I know of is our communication with the Voyager space probes, with Voyager 1 now at 19.5 billion km from Earth (about 12 billion miles in old English units). Voyager 1 is broadcasting with 23 watts of power. A very large antenna on Earth, such as at Goldstone in California or Canberra in Australia, intercepts a tiny fraction of this power. Detection is still reliable, at a very slow rate of data transfer (Claude Shannon brilliantly proved that noise won’t overwhelm a weak signal if we send it slowly enough). We might assume that practical communication (if very, very slow) might work 10 times farther out, which is about 200 billion km. This is only 1/50 of a light-year. Communication is only working because 1) we know exactly where to point our antennas, with excruciating precision, and 2) Voyager knows where to aim at us. An ETC knows nothing about where to expect a receptive civilization. Suppose that they are as good as we are at aiming broadcasts and reception, so that our antennas have to receive from the ETC only as much power as we get from Voyager. Suppose, also, that they have put all their hopes on us as the nearest star system, 4 light-years away. The distance covered is 200 times larger than the distance to Voyager, so they’ll need 40,000 times more power (200 squared). That’s near one megawatt, not a big burden yet.
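The inverse-square bookkeeping above can be verified in a few lines. A sketch in Python, taking the Voyager figures from the text as inputs (variable names are mine):

```python
LY_KM = 9.461e12       # kilometers in one light-year

voyager_km = 19.5e9    # Voyager 1's distance from Earth, km
voyager_watts = 23.0   # Voyager 1's transmitter power, watts

# Assume reliable (if very slow) reception out to 10x Voyager's
# current range -- about 2e11 km, roughly 1/50 of a light-year.
practical_km = 10 * voyager_km

# Target: the nearest stellar system, about 4 light-years away.
target_km = 4.0 * LY_KM

# For the same received signal, required power scales as the
# square of the distance: ~200x the range means ~40,000x the power.
ratio = (target_km / practical_km) ** 2
power_needed = voyager_watts * ratio   # just under one megawatt
```

One megawatt for a single, perfectly aimed link is modest; the burden multiplies with every additional target system, and again with the square of each greater distance.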
However, they would be wise not to put all their eggs in one basket and aim at other stellar systems. Each one adds an energy cost, and the cost to more distant stellar systems rises again at the square of the distance. We’re still assuming that 1) very, very slow communication is acceptable, or else much higher power is needed and 2) that they know which communication mode we will accept and interpret properly (AM, FM, PWM, etc.) and which carrier frequency we will listen to.
Maybe the communication power and mode are problems that we and they can deal with. The other factor for the ETC is, what do they stand to gain by communicating with us or any other civilization? Ignoring all problems of translation and culture, there is the problem of response time. That’s over 8 years at the speed of light for a round trip between us and Proxima Centauri, and even longer for more distant stars. At very low bit rates of communication, a reasonable exchange of a number N of messages takes N times longer.
What could we communicate? Neil deGrasse Tyson considered our intelligence relative to chimps, with 98+% shared DNA, and noted that we can’t even really hold a conversation with a chimp. What if ET life that’s another 1% “genetically smarter” than we are contacted us? We’d be unlikely to be able to hold a conversation. And how intelligent are we, really? We set the criteria for intelligence based on our parochial view – we’re audacious, even hubristic.
A very cogent objection to searching for communication with an ETC is that any civilization more technologically advanced than another has had tragic consequences for the other civilization. That’s Earth’s history many times over – the succession of Mesopotamian civilizations, the contact of Europeans with every indigenous culture in the Americas, perhaps even modern humans with Neanderthals. Renowned physicist Stephen Hawking warned in a documentary aired in 2010 that “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”
Many other people have commented on the probability of extrasolar life. Are there any evolutionary bottlenecks that make it extremely hard to make the transition to intelligence? Yes – keeping an ecology intact, in short. We seem to be doing our damnedest to dismantle our global ecology. If high-tech civilizations are as resource-voracious as we are, the lifetime of the average intelligent life form may be pretty short. Don’t expect millions of years of communication.
The probability of contact with an ETC is nearly infinitesimal, and a contact is overwhelmingly likely to turn out well for them and not for us. Given the first factor only, SETI is probably a harmless exercise. It may have peripheral value in getting people interested in the splendor of the Universe, if their minds are open to all the other possibilities of exploring our place in space.
My conclusion: searches for habitability on physicochemical grounds, yes; SETI, no.
Several hundred people or so have said publicly and credibly that they would like to colonize Mars. Elon Musk has said he wants to die on Mars… just not on impact. That’s a good part of the “Who” in the classic newspaper reporting mandate of “Who, what, when, where, and why?” There are others who may come forward, and many more who offer encouragement or even propose help with technologies.
The “What” question has multiple dimensions. First, no one with credibility proposes the colonization of, much less visits by humans to, bodies in our Solar System other than Mars. Anything beyond Mars is terminally cold and bereft of support for ecosystems, and Mars is really close to that, too. Much more extreme ideas propose colonizing another stellar system. I readily dismiss these out of hand for the duration of the trip, exceeding by far even the 75,000 years I cited earlier for travel to the nearest star, Proxima Centauri. To say that the goal is to colonize Mars is not a full specification. Is the duration limited, or is it for as long as possible? Is the colony proposed to shift over time to supporting itself only on Martian physical resources (water, metals, etc.)? Will the human population be replenished periodically from Earth (avoiding a solar-system analogy of the Jamestown settlement), given not only risks of death but also the risk of inbreeding in a small population? Re the last point, consider that we each carry, on average, about six conditionally lethal mutations, as I mentioned earlier. Trading genes within a small population, hundreds or thousands, gives high odds that many bad genetic combinations will arise to cause a significant death rate.
Let’s admit as a first estimate that colonization is irreversible for the colonizers: no one is coming back from Mars. A special psychology is a must. The bases of this assertion are, first, the famous rocket equation of Tsiolkovsky and, second, the size of the gravitational bindings to overcome on the way to Mars, which dwarf those for going to the Moon. First, the rocket equation. I have a rather standard derivation and a discussion in an Appendix that you may find interesting. Here’s a simple case for a rocket in free space, not fighting gravity. The rocket burns propellant that exits its body at an exhaust velocity, vex. The thrust has a magnitude equal to the mass rate of burning multiplied by vex. It gives better and better acceleration as the rocket lightens with the using up of propellant… but then less of the rocket is left. In a single stage, the final velocity, v, is related to the initial velocity, v0, and the ratio of the initial to final mass, m0/mf: v = v0 + vex ln(m0/mf).
Here, ln is the natural logarithm. Now, the highest exhaust velocity with chemical propellants is about 4,400 meters per second (assuming no one wants to be around a rocket burning intensely poisonous beryllium with hydrogen!). That's with liquid oxygen and liquid hydrogen, a bit dicey to use practically. Current rockets use liquid oxygen and kerosene, with a vex of 2,941 meters per second. We need to reach much higher velocities, over 11,200 meters per second (to get to the Moon) or even 4 times larger, if not all at once (to get to Mars). Take the case of the Moon. Take v0 as zero and ignore the extra push needed against the initial atmospheric drag (and a delay in reaching escape velocity that wastes fuel, in a sense). We need v = 11,200 m/s = (11,200/2,941)·vex; that is, we need the natural logarithm of m0/mf to be 3.81. That occurs when the initial mass is 45 times the final mass, or when the empty rocket weighs just 2.2% of the initial mass! That is not possible; the shell of the rocket and its fuel tanks alone weigh more than that. OK, jettisonable boosters help, but not enough. The way around this is to use multiple stages: get rid of the heavy shell and tanks and propel a smaller fraction of the take-off mass. Really, use 3 stages. That's what the Apollo missions did. Of course, that means that the final mass is still a tiny fraction of the take-off mass. We could calculate it from assumptions about the staging, but suffice it to say that the Apollo spacecraft that made it to the Moon had a mass (119 tonnes) just 4% of that of the enormous rocket at lift-off (2,950 tonnes).
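The single-stage arithmetic above can be sketched in a few lines, using the kerosene/LOX exhaust velocity and the Earth escape velocity quoted in the text:

```python
import math

def mass_ratio(delta_v, v_ex):
    """Tsiolkovsky rocket equation: initial/final mass ratio for a given delta-v."""
    return math.exp(delta_v / v_ex)

v_ex = 2941.0      # m/s, kerosene/liquid-oxygen exhaust velocity
delta_v = 11200.0  # m/s, roughly Earth's escape velocity

ratio = mass_ratio(delta_v, v_ex)
print(f"ln(m0/mf) = {delta_v / v_ex:.2f}")      # ~3.81
print(f"m0/mf = {ratio:.0f}")                   # ~45
print(f"final mass fraction = {1 / ratio:.1%}") # ~2.2%
```

This is only the idealized free-space case; gravity and drag losses make the real requirement worse, which is why staging is unavoidable.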
That all worked with these advantages over flights to Mars and back:
- No fighting to move to higher energy in the Sun’s gravitational field (calculations are in a sidebar), thus, only ¼ as much total energy to gain;
- A much easier return from the Moon with its gravity 1/6 as strong as that on Earth; the escape velocity is only 2,380 meters per second, about 1/5 that for escaping Earth – so, only a light engine needed to be carried to get back home. From Mars the escape velocity is 5,030 meters per second, 45% as much as from Earth.
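The escape velocities quoted above follow from v_esc = √(2GM/r); a quick check with standard masses and mean radii:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Surface escape velocity, sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

bodies = {  # mass (kg), mean radius (m)
    "Earth": (5.972e24, 6.371e6),
    "Moon":  (7.342e22, 1.737e6),
    "Mars":  (6.417e23, 3.390e6),
}
for name, (m, r) in bodies.items():
    print(f"{name}: {escape_velocity(m, r):,.0f} m/s")
# Earth ~11,190; Moon ~2,375; Mars ~5,030 m/s
```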
Admittedly, there’s less need for retrorocket braking for landing on Mars if the drag of its atmosphere is used, just as there’s no retrorocket need for landing on Earth. It’s tricky to use the thin atmosphere on Mars for braking. Remember the seven minutes of terror when the Curiosity Rover landed on Mars with a supersonic parachute. It’s even tricky using Earth’s dense atmosphere – hit it at the wrong angle and the craft bounces off, not to return!
What about ion propulsion and other modes that have very high vex? The problem is that the rate of firing is so low that the thrust is very small; the time to reach the needed velocity is extremely long. Chemical propellants will always be needed for fast trips and absolutely for lift-off and landing.
More “what:” What would a Mars colony look like?
A lot of technology would have to be brought on multiple supply runs, mostly uncrewed. Needed as infrastructure: habitations, electric power, water supply, storage facilities, transportation, food production… All have great challenges. It's up to the adventurers to design the logistics (compare these preparations for a lunar colony: https://www.lpi.usra.edu/lunar_resources/documents/13_0IntegratedISRUPresenta.pdf) – what parts must arrive first, who assembles them, etc. Here, we may focus on a few points.
Keeping warm is difficult on Mars. At its aphelion it is at 1.67 astronomical units, that is, 1.67 times as far from the Sun as is the Earth at the Earth's mean distance. By the energy balance equations presented earlier, the radiative temperature at the top of the Martian atmosphere is (1/1.67)^½ = 0.774 times that of the Earth. That's 0.774*255K = 197K, or -76°C. The temperature is higher at low latitudes but still very low. With a minimal atmosphere and a consequently minimal greenhouse effect, Mars is cool to very cold, barely hitting 20°C on a clear summer day near the equator and going to -125°C at the poles. A corollary is that it cools fast from any state, dropping to -73°C from that hot summer day to the following night.
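The scaling can be checked quickly: flux falls off as 1/d² and radiative temperature goes as flux^¼, so T scales as d^(-½). A sketch, assuming Mars and Earth are treated with the same albedo:

```python
# Radiative-equilibrium temperature scales as distance^(-1/2)
T_earth_eff = 255.0  # K, Earth's effective (top-of-atmosphere) temperature
d_mars = 1.67        # AU, Mars' aphelion distance used in the text

factor = (1.0 / d_mars) ** 0.5
T_mars = T_earth_eff * factor
print(f"factor = {factor:.3f}")                         # ~0.774
print(f"T = {T_mars:.0f} K ({T_mars - 273.15:.0f} C)")  # ~197 K, ~-76 C
```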
The thin atmosphere offers a negligible greenhouse effect. It's up to the colony to provide habitation with its own huge greenhouse effect, to harvest solar energy on additional area for electrical heating, or both. Both methods are shut off by dust storms. That makes battery storage imperative, which then adds notably to the mass of technology that would have to be carried.
Mars, viewed before the dust storm (left) and during the dust storm (right) by the Mars Reconnaissance Orbiter (NASA/JPL)
The storm that peaked on 2 July 2018 reached an optical depth of 10.7. That represents a decrease of direct sunlight by a factor of e^-10.7, leaving only about 23 millionths of the clear-sky value. Battery storage of high capacity will be needed; dust storms have lasted for months. That storm killed the Opportunity Rover.
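The dimming follows the Beer-Lambert law for the direct solar beam (scattered, diffuse light adds a little back); with the storm's quoted optical depth:

```python
import math

def transmitted_fraction(optical_depth):
    """Beer-Lambert attenuation of the direct solar beam."""
    return math.exp(-optical_depth)

tau_storm = 10.7  # peak optical depth of the 2018 global dust storm
f = transmitted_fraction(tau_storm)
print(f"direct-beam fraction: {f:.1e}")  # ~2.3e-05, tens of millionths
```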
Energy sources: Well, the Sun, for any term beyond a few months, given optimistic estimates of what could be carried on arrival by the spacecraft, at the expense of reducing the size of the human crew. Recall the rocket equation and its sequelae for the tiny fraction of lift-off mass that is payload.
The colony would need energy for:
● Heating of habitations, battery storage areas… and the greenhouses for growing food crops
● Water recovery from soil (see below)
● Water electrolysis for some oxygen production (see below; this becomes theoretically unnecessary or minimal once plants start growing well in, say, a year)
● Transport… to where? Higher latitudes may offer more water, though at huge transport costs
● Industrial processes, including production of medicinals (presumably minor, with only latent diseases carried in genomes; care of ageing colonists comes a lot later)
● Several other processes, but especially,
● Lighting! On Earth lighting is about 10% of the total energy budget in industrial or post-industrial economies, but on Mars it's critical to support crop plant growth during dust storms and seasonal lows.
● Sunlight intensity on the surface of Mars is low. On average at the top of the atmosphere or TOA it’s 43% of that reaching the Earth, as noted earlier from simple geometry of light propagation over different distances from the Sun. On Earth clouds reduce mean sunlight at the surface to 0.69 that at the TOA. On Mars persistent dust reduces intensity at the surface by a factor of about e-1 = 0.37 and much more during month-long dust storms. Make it about 0.34 (there are studies from landers and rovers to consult for higher accuracy). The annual average solar radiation per ground area is then about 0.43*(0.34/0.69) = 0.21 or 21% that of sunlight on Earth.
● Essential fact: crop plants can grow at light levels significantly lower than Earthly sunshine. In fact, leaves at the top of the canopy of field crops are supersaturated with light. In an Appendix I reproduce a model of whole-crop photosynthesis that compares ordinary plants with plants bred, or naturally occurring, to have half the normal content of chlorophyll in their leaves. The top leaves are less supersaturated in strong light while passing more light to share with lower leaves. My model predicted an 8% seasonal gain in yield. It was verified by D. B. Peters and colleagues at the University of Illinois with pea mutants. No one followed up on it commercially because (1) yield is actually subsidiary in breeding programs to many other crop traits, especially pest and disease resistance, harvestability, etc., and (2) farmers don't want light-green crops! Still, at realistic densities of leaves, termed the leaf area index (leaf area per ground area), photosynthetic rates per ground area respond close to linearly with light level. So, the choice is slower growth over more area or supplemental lighting, and the latter is very energy-costly.
● Essential fact: photosynthesis is absolutely limited to about 6% efficiency, taken as the energy content of the biomass produced divided by the total energy in the sunlight intercepted. That's at high CO2 enrichment of the atmosphere, which is easy on Mars. A more realistic figure is 3%. Averaged over all vegetation on Earth it's 0.3%, given losses to herbivory, damage from weather, costs of maintaining organs and tissues, and the like. For the crops, the 3% figure applies only with substantially complete crop cover; early in the growth of a crop it starts near 0%. The canopy might be kept nearly complete at all times by clever intercropping. It's a lot of work, but what else do the colonists have to do?
● Put these together. Mars’ surface has 0.21 times the mean solar energy flux density of the Earth’s surface. That’s 50 watts per square meter. It’s higher near Mars’ equator but the colony may opt for middle latitudes (see below), so we’ll take that figure. A crop operating at 3% energy efficiency captures 1.5 watts per square meter.
● How much food energy does that supply? Most crops have less than 25% of their total growth as edible portion. A big part of the Green Revolution was breeding short grains that had less stem and root and more seed mass. We might assume that, in the mix of all crops to be grown, the harvest index is somewhat lower, perhaps 15%. Colonists will want and need a diverse diet. Now we're at an edible crop mass production of 1.5*0.15 = 0.225 watts per square meter.
● There will be processing costs. The processing needs and methods are hard to project but may cut consumable food yield to 0.8 times the above, about 0.18 watts per square meter. Oops, we have to increase that cost quite a bit: assume that the colonists will want, and need, food that is both more attractive and more nutritious. Some plant protein will need digestion to amino acids and re-synthesis into a balance of essential amino acids. My rough estimate is a final consumable yield of no more than 0.6 times the unprocessed figure, or 0.135 watts per square meter.
● How much food does a human need in terms of energy? The classic figure is 2500 kilocalories per day or 10,500,000 joules per day. Spread over 86,400 seconds in a day that’s 120 watts. If Martian sunlight provides this each colonist would need a greenhouse area of 120/0.135 or close to 900 square meters.
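The chain of estimates in the last several bullets can be collected in a few lines. All the numbers are the rough figures quoted above (transmission factors, 3% photosynthetic efficiency, 15% harvest index, 0.6 processing factor), not measurements:

```python
# Mean solar flux at the Martian surface, from the factors in the text
earth_toa = 1361.0 / 4   # W/m^2, solar constant averaged over Earth's sphere
toa_ratio = 0.43         # Mars TOA sunlight relative to Earth's
mars_transmit = 0.34     # mean transmission of the dusty Martian sky
flux = earth_toa * toa_ratio * mars_transmit   # ~50 W/m^2

photosynthesis = 0.03    # crop energy-conversion efficiency, full cover
harvest_index = 0.15     # edible fraction of biomass for a mixed diet
processing = 0.6         # fraction surviving processing and upgrading

edible = flux * photosynthesis * harvest_index * processing   # W/m^2
person = 2500 * 4184 / 86400   # 2500 kcal/day as a steady wattage (~121 W)
print(f"surface flux: {flux:.0f} W/m^2")
print(f"edible yield: {edible:.3f} W/m^2")
print(f"greenhouse area per colonist: {person / edible:.0f} m^2")  # ~900
```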
● There must be intermittent artificial lighting, certainly during lengthy dust storms.
- Solar panels would intercept sunlight, on average, at 50 watts per square meter. I assume that the panel arrays are not steerable to constantly face normal to the Sun but that they are optimized for the latitude.
- Standard solar panels have about 25% energy efficiency. I'll omit the possibility that the colonists have multilayer panels that have attained 50% efficiency; these are very expensive. So, the panels provide about 12.5 watts per square meter on average; call it 15 as a round, slightly optimistic figure.
- Their output would be stored in batteries and recovered for the adverse times. Storage has a fair energy efficiency. Batteries store and then release energy with about 80% efficiency. Put the delivered power density at 12 watts per square meter.
- LED lights would be used, with an efficiency of 25% in converting electrical energy to radiant energy in the photosynthetically active part of the spectrum. A square meter of solar panels then gives 3 watts on average.
- The average natural sunlight intensity on the crop would be 50 watts per square meter, reduced by a light transmission factor of the greenhouse covering. That might still be 90%, giving 45 watts per square meter.
- Replacing that full time would require 15 square meters of panels per square meter of greenhouse space.
- Assume that 1/10 of the time lighting is needed. That comes to 1.5 square meters of solar panels per square meter of greenhouse footprint.
- Per capita that's 1.5*900 = 1350 square meters of solar panels. Compare that to a terrestrial home that might have perhaps 750 peak watts of panel output per resident from about 4 square meters of panels. Growing plants is costly, with the low energy efficiency of plant photosynthesis, the small fraction of the crop that's edible, and so on. The expense of energy sources is staggering. So is the mass of panel material to be delivered. The energy demands other than for the crops – heating (hey, no cooling needed!), habitation lighting, transport, etc. – are quite modest. Still, the colonists must eat!
- To ameliorate these costs, the crop might be harvested when a dust storm starts… but there’s no good predictor of storms to time the crop planting months earlier. Having a range of sowing dates and thus of harvest dates could help, but some major fraction of all crops would have to be kept going during a dust storm to avoid a bare-ground (bare hydroponic system) start, with its low light capture efficacy.
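The supplemental-lighting bullets above chain together directly; here they are with the text's round numbers:

```python
panel_out = 15.0    # W/m^2, average electrical output (the text's figure)
battery = 0.8       # round-trip storage efficiency
led = 0.25          # electricity-to-photosynthetic-light conversion
light_need = 45.0   # W/m^2 of sunlight inside the greenhouse covering
duty = 0.10         # fraction of time artificial lighting is needed
greenhouse = 900.0  # m^2 per colonist, from the food-energy estimate

par_per_panel = panel_out * battery * led           # ~3 W per m^2 of panel
panels_per_m2 = light_need / par_per_panel * duty   # ~1.5
print(f"panel area per colonist: {panels_per_m2 * greenhouse:.0f} m^2")  # 1350
```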
Is the technology ripe? SpaceX has proven the launch capability of its heavy rockets. One suited for lifting significant payloads to Mars is its Starship, which has had incremental testing in short launches in 2020. The company’s reusable rockets have proven their function, and the Starship is designed for orbital refueling: get it into low Earth orbit (up to about 25% of the battle of getting to Mars), refuel it there, and continue. There are other competitors as I write this, particularly Blue Origin.
Rockets are only part of the effort. Designing the colony habitations and support, selecting crews, and timing the orbit for the lowest-energy approach to Mars are all difficult tasks. The next good window for launch that the crewed missions could make is in 2022.
Getting there: For travel between two places (e.g., planets), each in an elliptical orbit around a star, the most energy-efficient route is almost always the Hohmann trajectory, using two engine burns (https://en.wikipedia.org/wiki/Hohmann_transfer_orbit). Surprisingly, it was worked out in 1925 by Walter Hohmann, well before space travel was remotely feasible. Use of the trajectory requires a specific alignment of the planets; such a launch window recurs every 26 months for Earth-to-Mars travel.
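The travel time follows from Kepler's third law: the transfer ellipse has its perihelion at Earth's orbit and aphelion at Mars', and the trip is half of one orbit. A sketch using circular-orbit approximations for the two planets:

```python
# Semi-major axes in astronomical units
a_earth, a_mars = 1.0, 1.524
a_transfer = (a_earth + a_mars) / 2   # transfer ellipse's semi-major axis

# Kepler's third law in AU and years: T^2 = a^3; travel is half an orbit
period_years = a_transfer ** 1.5
travel_months = period_years / 2 * 12
print(f"one-way travel time ≈ {travel_months:.1f} months")  # ~8.5
```

Real mission designs differ somewhat (planetary eccentricities, faster non-Hohmann options), which is why quoted trip times hover around 8 to 9 months.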
The journey takes 9 months, which brings a number of hazards:
● Exposure to particulate radiation (energetic atomic nuclei) as cosmic rays and solar flares. Travel will be outside Earth's magnetic field, which deflects the particles, and outside Earth's atmosphere, which kills most of the particles' kinetic energy. Remember, ten tonnes of air sit above every square meter if you're at sea level – a nice blanket. We who live at higher elevations have less protection, but still a lot. For the Moon missions, exposure for 8+ days was not critical. For Mars "missions" the more than 30 times greater duration is problematic. Shielding of sufficient thickness × density can reduce the exposure to what we may call space-health limits. However, it makes the spacecraft so heavy as to drastically reduce the crew size and total payload. An option suggested by our friend, physicist David Anderson, is no shielding. Cosmic rays generate a shower – the primary ray hits a nucleus that sends two or more fragments off, and each of these sets off more fragments, until there's a whole tree of many particles. Most of the energy deposition comes late in the branching. Letting the first collision, or the first few, occur in the crew results in less energy deposition, with its physiological and genetic damage. David said he wouldn't suggest it to NASA because he thinks that crewed Mars missions are irresponsible and of low scientific value. For colonizers, what can one say?
● Adverse physiological changes. These are well documented in astronauts and cosmonauts who made long flights on the International Space Station. Crew members suffer loss of bone mass and muscle function, even needing help walking when they land; exercise regimens don't prevent much of the losses. Brains swell when gravity no longer keeps more blood at the extremities, and crews suffer cognitive deficits. Wanted: human beings evolved to withstand extended space flight. Or, that's what I infer from the analyses of the physical and mental condition of astronauts who spend long times at the station. One US resident there, Scott Kelly, is the identical twin of Mark, who stayed Earthbound. When Scott returned to Earth, more ways that we evolved for Earth's gravity showed up. By comparison with Mark, Scott had a different gut microbiome (now seen as quite important for health). His chromosomes showed inversions from space radiation, though these returned to near normal. More of the end caps of his chromosomes were critically shortened or lengthened. His carotid artery was distended. His cognitive capacity declined, largely recoverably. The same would happen on a flight to Mars, but with more genetic damage from radiation in space. Still, some people promote colonization of even distant star systems.
● Psychological challenges – during the trip and then "forever." For the ultra-long, no-return stay, toss in the hazards of extreme cabin fever; our psychology isn't made for this. Let's suppose that the colonists have been rigorously selected for personalities that tolerate long, close, highly dependent associations with others. They must also lack mood and cognitive disorders such as bipolar disorder and schizophrenia, as well as lesser disorders that get amplified on long trips in close quarters. Astronaut and cosmonaut programs have done well at such selection. There will still be challenges during the trip. Space station crew members have shown shrinking communication with others, greater egocentricity, and formation of cliques. Less well-selected exploratory teams have shown catastrophic breakdowns in group organization. Post-flight, some space station crew members have shown anxiety, depression, and substance abuse. Of course, for the colonists there is no post-flight; the stressors stay in place. On the plus side, the space crews who have been surveyed reported greater appreciation of beauty, especially of the Earth. That's going to swing to the other side as colonists view an Earth never to be experienced again. There will be no long walks, no wild lands with fauna and flora – rather, a tawny, dusty sky and small living quarters.
To alleviate some of these challenges, some people argue for genetic selection of colonists. They've overlooked the negative effects of inbreeding in a small population, including the founder effect. Let's have a time-out for a few millennia to evolve humans who can take all this.
More “how:” Keeping hale and hearty on the ground on Mars
- Making and retaining oxygen… and water
We metabolize our food. We inhale oxygen and exhale carbon dioxide. The loss of one and gain of the other clearly can't be sustained in a closed environment without releasing new O2 into the air and disposing of or reprocessing CO2. Let's consider CO2 first. While we exhale CO2 at about 5% molar fraction, inhaling air at about 10% CO2 or higher is fairly quickly lethal. It's not a simple asphyxiant like nitrogen. As it dissolves in blood it reversibly creates carbonic acid – CO2 + H2O ↔ H2CO3. Higher acidity in our blood puts many biochemical reactions askew. In 1986, Lake Nyos in Cameroon literally fizzed over. Its waters, over a volcanic vent, held supersaturating amounts of CO2. Some small disturbance made some water release bubbles of CO2. That made the water column less dense, so it rose, pulling other water up. At the lower pressures higher in the water column, even more gas came out. The gas overtopped the crater rim and flowed down the mountain slope, killing some 1,700 sleeping people and their livestock. So, on a spacecraft or in a Mars colony we must get rid of CO2. On the International Space Station (ISS) that was easy. Cabin air was compressed into a container with zeolites, porous ceramics that adsorb CO2. The CO2-laden zeolite was then exposed to space, where the CO2 quickly desorbed.
The other problem is restoring the oxygen level. On the Apollo missions to the Moon, the spacecraft simply carried compressed O2. That works for 8+ day missions but not for a long stay on the ISS; there's not enough space on the craft to store enough O2. That was, and still is, solved by electrolyzing water: 2H2O → 2H2 + O2. The hydrogen gas was also vented to space. The mix sent off was really the loss of the elements in carbohydrate, as CO2 and H2. We may look at the full stoichiometry (balance of the chemical constituents). Metabolism of carbohydrate:

CH2O + O2 → CO2 + H2O

Add the electrolysis of water done at the same time:

2H2O → 2H2 + O2

Strike out the same things that appear on both sides and add these:

CH2O + H2O → CO2 + 2H2

The result is the two gases that the ISS vented.
The unwanted effect was using up water that had to be resupplied. On a long mission out of reach of resupply – as a crewed Mars mission would be – that's a problem: carry a lot of water and fewer humans. NASA used the ISS as a testbed for an alternative chemistry. It uses the Sabatier reaction (please forgive the typo in the figure):

CO2 + 4H2 → CH4 + 2H2O
The hydrogen from the reaction is supplied again by electrolyzing water but the water can be fully recycled. Here’s a figure with the whole scheme.
* Don’t lose water, other than “water of constitution” in carbohydrates in the astronaut(s)
* Must import high-value, high-energy reactant, H2
* Must vent CH4, which is high-energy, but only relative to an atmosphere with O2 (i.e., not on Mars)
So, the net result is the use of the elements in food (CH2O; carbohydrate only is shown here) and the venting of the same elements as ½ CO2 + ½ CH4; that is, net, CH2O → ½ CO2 + ½ CH4. There is a necessary input of energy at two steps and the loss of "energy-rich" methane.
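Both schemes' net reactions (the electrolysis-only scheme nets to CH2O + H2O → CO2 + 2H2, consuming water; the Sabatier scheme to CH2O → ½CO2 + ½CH4, recycling it) can be verified to balance element by element. A small check in Python; the helper `atoms` is just for this illustration:

```python
from collections import Counter

def atoms(terms):
    """Sum element counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in terms:
        for element, n in comp.items():
            total[element] += coeff * n
    return total

CH2O = {"C": 1, "H": 2, "O": 1}
H2O  = {"H": 2, "O": 1}
CO2  = {"C": 1, "O": 2}
H2   = {"H": 2}
CH4  = {"C": 1, "H": 4}

# Electrolysis-only scheme: CH2O + H2O -> CO2 + 2 H2 (water consumed)
assert atoms([(1, CH2O), (1, H2O)]) == atoms([(1, CO2), (2, H2)])

# Sabatier scheme: CH2O -> 1/2 CO2 + 1/2 CH4 (water fully recycled)
assert atoms([(1, CH2O)]) == atoms([(0.5, CO2), (0.5, CH4)])
print("both net reactions balance")
```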
Methane is valued on Earth as a fuel for combining with oxygen. It could be kept and stored in case more oxygen is obtained, as upon landing on Mars and (very optimistically) getting water from the soil to electrolyze with solar power.
Of course, as much precious O2 and water must be kept contained in the habitat, using careful airlocks whose air gets pumped back into the habitat after an astronaut (colonizer) is ready to open the outer door and take a walk (not to admire the flowers).
The loss of carbon compounds is equivalent to the loss of food. That’s the next problem after keeping the atmosphere breathable in the colony habitation, once the spacecraft lands.
- Naturally, growing plants to do photosynthesis is a good way to replenish oxygen, as well as food. The nominal ideal cycling is this:

CO2 + 2H2O* → CH2O + H2O + O2*

Note that two waters enter plants in the photosynthetic cycle and one comes out; the * in the water molecules indicates that the oxygen in the O2 comes from the water, not the CO2.
This diagram is oversimplified: the crew can't eat and digest all the plant productivity. So, add another closed cycle. Plants photosynthesize CH2O (and other compounds) in excess; most of the excess is human-inedible waste consumed by decomposers (or potentially combusted as fuel, which includes biomass digestion).
If only 1 in 4 joules or calories in plants is edible and metabolized by humans, then 3 cycles of plants doing photosynthesis ultimately for use by decomposers are needed. That is, n = 3. It’s unavoidable.
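The n = 3 figure is just the ratio of inedible to edible production:

```python
edible_fraction = 0.25  # 1 joule in 4 of plant production is edible

# Per joule eaten, total photosynthate needed, and the share that is
# re-fixed after decomposers respire the inedible remainder back to CO2
total_fixed = 1.0 / edible_fraction
n_decomposer = total_fixed - 1.0
print(f"photosynthate per edible joule: {total_fixed:.0f} J")  # 4
print(f"decomposer cycles n = {n_decomposer:.0f}")             # 3
```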
- The combined cycles are closed… in the long term
- In the short term there are lags and advances in the availability of food and O2; excesses must be stored and then consumed later.
- The balance is tricky in another way. If plants are grown in soil, the soil organic carbon (SOC) has a high latent demand for oxygen. In the Biosphere 2 experiment, a substantial amount of SOC got metabolized by soil organisms, depleting the O2. A way around this is to grow plants hydroponically.
More about growing food
- Crops are always at risk of latent diseases in soil (OK, go hydroponic) or in their genomes (latent viruses). They can also fail from errors in managing the abiotic environment (say, insufficient aeration of the hydroponic solution). The special hazard of hydroponics is the easy spread of disease. Loss of a crop means death of the humans up there. Can everything being sent to Mars be fully sterilized? What about latent viruses in a crop that can flip to an active, lytic state? In terrestrial cropped ecosystems the backup is genetic diversity. Wild relatives of crop plants can be found with genes for tolerance of pests and diseases. Breeders introgress those genes into crop varieties by traditional cross-breeding (with minor help from genetic engineering). Who knows in advance which genes, and which wild relatives carrying them, may be necessary? Sending such genetic stocks in an emergency doesn't work: there's a wait of up to 26 months for the next Hohmann window, or a little less with an inefficient but faster trajectory for a small payload of plants.
- Any simplified ecosystem is at risk of collapse from continuing over- or underexpression of ecosystem functions. All natural systems have several levels of decomposers for plant residues – small invertebrate animals such as Collembola, a variety of fungi, a similar variety of bacteria. Who knows what to take along in case of problems?
- Hydroponics is more technically demanding for circulation, monitoring, and provision of exacting amounts of chemicals.
- There has to be disposal of even the plant exudates into the hydroponic solutions. A sophisticated separation and decomposition system is needed.
- Sorry, no meat. It takes 3 to 8 times as much crop mass, fed to animals, as the mass of meat harvestable. Vegans! This is not to mention the complications of animal husbandry and the acclimation (or not!) of animals to colony conditions (no free-range chickens!). Soy burgers, anyone?
- In any system at all, there are wastes other than CO2 and water, particularly nitrogenous wastes from humans and as crop residues. Recycling these has its challenges:
- One can let microorganisms break down the amino compounds to ammonia (recoverable), but some will continue to be nitrified (with partial loss as N2O) and then denitrified (in anaerobic parts of the environment, to N2 and N2O). The N2 and N2O have to be converted back to reactive N, as in ammonia equivalents. That means either biological nitrogen fixation (tricky to balance) or an energy-intensive technology (Haber-Bosch – but no one wants to run a high-temperature, high-pressure reactor on a spacecraft or in a fledgling Mars or lunar colony).
- Other elements have to be recycled – P, Fe, etc. This is much more easily done in soils with a complex ecosystem of organisms…. but these ecosystems can hold plant diseases, as noted, or can crash with management mistakes (poor temperature control, or even latent diseases of the organisms themselves!).
- Human wastes. Let’s consider how these are handled on near-Earth space missions.
- Urine can be and is recycled, involving as it does simple soluble chemicals. The recovery of water is important.
- Fecal matter, hair trimmings, skin sloughing… much less agreeable stuff. What do the ISS crew do with it? They pack it in a capsule and jettison it back toward Earth, to burn up in the atmosphere like a meteor. A poop meteor. This is not an option on Mars – no good landfills, certainty of contaminating Mars with human gut microbes (sterilization is hard to push to 100%), and loss of valuable carbon, nitrogen, and various minerals.
- Chemical and physical recycling of these organic wastes (organic in the true chemical sense) is clunky, needing fair-sized machinery, and it is power-hungry. Consider instead how it happens on Earth: let small soil animals, fungi, and bacteria do it, with careful selection of all these critters.
- Other gases. We humans create in our intestines gases with odors and other problems. Thiols and skatole (fart smells), methane, and hydrogen sulfide all need to be removed. Soil organisms can handle some of this. On Earth, the hydroxyl radical helps, being generated in the atmosphere by solar radiation acting on some organics – a bit of smog, as it were. Ask some Angelenos to come along to create air pollution? Not really, but the concern is there. On the Space Station, the gases are filtered out (and then what?).
- Other components of air are necessary for comfort or more. As on the Space Station, the colonizers will want nitrogen. They may want it for their air, so that it has an Earthlike total pressure, and also for plant growth. Some loss of nitrogen from the soil as N2 or N2O is inevitable from denitrifying bacteria, and some will be vented from habitations and greenhouses, as when airlocks are opened. The colonists will definitely want water vapor (normal relative humidity levels). Close control is mandated to avoid drying out (causing respiratory distress) or excess humidity (condensation, discomfort, damage to electronics). Powered humidifiers/dehumidifiers can, and must, handle this.
- Getting water – no simple matter. Mars orbiters and rovers have explored the planet extensively. Surface ice is found seasonally at high latitudes, far from habitable latitudes. Liquid water is not to be found on Mars except at rare times in rare places. Water bound by adsorption or chemically might be found in soil at lower latitudes; energy is necessary for extracting it. The problem lies in the phase diagram of water – that is, which of the phases of ice, liquid water, and water vapor are present at a given temperature and pressure. On the surface of the Earth, ice and vapor coexist below the freezing point, commonly near 0°C with a tiny variation with atmospheric pressure. Above the freezing point, liquid and vapor coexist. Vapor pressure increases markedly with temperature, 6% to 7% per °C near "room temperature." On Mars, temperatures above 0°C occur rarely in time and place – only near the equator. Even then, the atmospheric pressure is so low that only vapor is stable; essentially, liquid water cannot remain, evaporating almost instantly if it were to be introduced. As a result, water for a colony would have to be retrieved from higher latitudes (out of the question for costs in energy and travel time) or from subsoils. The surface soils are dry. Water would have to be obtained by drilling to "wet" soils where water is adsorbed (lightly bonded to mineral soil) or chemically bound. Colonists should not expect to find ice at any practical depth in the soil: water vapor has been able to traverse the soil to the atmosphere readily over the millions of years since the atmosphere had significant water content, so the ice has left. Water would be extracted by heating the soil (an energy-intensive process) and condensing the vapor. A precious resource for the first stage of colonization would be the water carried from Earth, with an equally precious resource being the equipment for solar energy capture and storage. There will be no long, steamy showers for the colonists.
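The 6% to 7% per °C figure follows from the Clausius-Clapeyron relation, d(ln p)/dT = L/(R_v T²); a quick check with standard constants:

```python
# Clausius-Clapeyron: fractional rise of water vapor pressure per kelvin
L = 2.45e6    # J/kg, latent heat of vaporization near room temperature
R_v = 461.5   # J/(kg K), specific gas constant for water vapor
T = 293.15    # K, about 20 C

growth = L / (R_v * T ** 2)
print(f"vapor pressure rises ~{growth * 100:.1f}% per degree C")  # ~6.2%
```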
Shielding from cosmic radiation and solar flares. Mars lost its magnetosphere 4 billion years ago, being a small planet with a cooler core that couldn't maintain a churning dynamo as does the Earth. Our own atmosphere seems thin (unless you find yourself in a hurricane!) but it amounts to 10 metric tonnes per square meter above our heads. That mass density is like a modest lead shield. Well, not exactly: primary cosmic rays hit molecules in the atmosphere, creating a cascade of secondary and higher-order rays, with each type of radiation having a different behavior for its absorption by air. The derived rays are more damaging to living organisms. If you have to make do without an atmosphere, you're going to need massive shielding that will be costly to transport… even if you use a lot of Martian rock with a binder you brought with you – a Martian sod house, of sorts.
Figure: Mars lost its magnetosphere 4 billion years ago. Source: https://planetary-science.org/mars-research/martian-atmosphere/
- Lesser problem: ultraviolet radiation. About 13% of the Sun’s total electromagnetic radiation is UV. Of course, shielding for the more energetic radiation should take care of this, though for Mars walks the colonists’ faceplates will need hefty UV screening.
- System safety. Any complex technological system needs proactive care for safety against simple interruptions of critical services, fire, electrical hazards, internal radiation hazards (e.g., from a nuclear power reactor), chemical contamination among system parts, etc. There's no calling in an outside fire department, cleanup crew, etc.
I have relatively little to offer here. The flux of solar energy for power and for growing crops in special greenhouses is highest at the equator; on an annual average it falls off closely as the cosine of the latitude. Weather events – that is, dust storms – strongly modify the seasonal pattern. These are more likely at perihelion, the closest approach to the Sun. They have also been analyzed and modeled as having a role in the loss of water from Mars, a loss that is nearly complete. Because dust storms readily envelop the entire planet, there is no place to escape them for the purpose of keeping a supply of solar energy. For solar energy capture the lowest latitudes are the most favorable. Much energy storage would be needed to survive the long time near aphelion, with an energy flux density about 30% lower than at perihelion and even lower temperatures. Surviving long dust storms adds to the storage requirements.
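Both effects are easy to quantify. A short Python sketch, assuming the standard orbital elements for Mars (semi-major axis ≈ 1.524 AU, eccentricity ≈ 0.0934):

```python
import math

A_MARS = 1.524   # semi-major axis, AU (standard value, assumed here)
E_MARS = 0.0934  # orbital eccentricity (standard value, assumed here)

r_peri = A_MARS * (1 - E_MARS)  # perihelion distance, AU
r_aph = A_MARS * (1 + E_MARS)   # aphelion distance, AU

# Solar flux falls off as 1/r^2, so the aphelion-to-perihelion flux ratio is:
flux_ratio = (r_peri / r_aph) ** 2
print(f"aphelion flux is {(1 - flux_ratio) * 100:.0f}% lower than perihelion flux")

# Annual-average insolation on a horizontal surface scales roughly as cos(latitude):
for lat in (0, 30, 45, 60):
    print(f"latitude {lat:2d}: relative annual flux ~ {math.cos(math.radians(lat)):.2f}")
```

The inverse-square calculation gives an aphelion flux about 31% below the perihelion value, quantifying the seasonal storage burden; the cosine factor shows how quickly the annual energy budget shrinks away from the equator.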
The second major problem is acquiring water, noted above. It would be recovered from water adsorbed on soils. The likelihood of finding sufficient adsorbed water decreases markedly with increasing mean temperature, so the low latitudes, with their higher temperatures, would be the most problematic for recovering water. A compromise would be the middle latitudes.
Elon Musk takes an extreme position: that our future on Earth is limited in duration. Even omitting the prospect that we may do ourselves in with pollution, wars, lethal climate change, or the like, there is the Sun’s role in our future. He points out, rightly, that in around half a billion years the Sun will have become so much brighter that the mean temperature on Earth will be lethal. At a 1% gain in luminosity (equivalently, in radiant flux density) per 100 million years, the projected gain in half a billion years is 5%. The calculations done earlier here tie that to an increase of about 1.2% in absolute temperature. That’s 0.012*288K, at least another 3.5K, which is another 3.5°C. Sure, in 5 billion years the Sun will brighten and expand, toasting the Earth to temperatures beyond what any sentient life could tolerate – and, yes, that would also toast Mars “nicely.”
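The 1.2% figure comes from the fourth-root scaling of equilibrium temperature with absorbed flux (from the Stefan–Boltzmann law, as used in the earlier calculations). A two-line Python check:

```python
# Equilibrium temperature scales as the fourth root of the stellar flux:
#   T_new / T_old = (L_new / L_old) ** 0.25
T_EARTH = 288.0          # current mean surface temperature, K
luminosity_gain = 1.05   # +5% over ~500 million years, at ~1% per 100 million years

t_new = T_EARTH * luminosity_gain ** 0.25
print(f"T rises by {(luminosity_gain ** 0.25 - 1) * 100:.1f}%, i.e. about +{t_new - T_EARTH:.1f} K")
```

This reproduces the ~1.2% rise in absolute temperature, or roughly +3.5 K, before any amplifying feedbacks such as added water vapor are considered.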
Let’s face the facts. On Earth, large vertebrate species such as ours have a lifetime of about 2 million years. For mammals in the “modern” or Cenozoic era, that lifetime has been about 1 million years on average. We big vertebrates go extinct for a variety of reasons: competitors (e.g., we modern humans vs. archaic Homo species); predation (especially by humans now – the dodo by European hunters, and probably the North American megafauna by Paleo-Indians); habitat shrinkage from climate change (a good part of the cause of the five mass extinction events); invasion by exotic species (bird malaria in Hawaii, introduced by humans; extinction of South American marsupial species by North American mammals after the Isthmus of Panama emerged close to 3 million years ago); overspecialization on food sources (watch out, koalas and pandas); natural emergence of novel diseases (the transmissible facial cancer threatening the Tasmanian devil); and, surely, a mix of several causes. Larger species are more likely to go extinct fast; they are, among other things, slower-breeding. Once a population shrinks severely, in what’s termed a population bottleneck, the remaining individuals carry low genetic diversity for adaptive responses to new threats, climatic and biological. Extinction probability rises – a worry of ours for cheetahs. There have been studies, mostly genetic analyses, showing that we humans also have low genetic diversity for a species with such an enormous population. These studies have been interpreted as evidence that humans went through a population bottleneck – say, 70,000 years ago during the eruption of the Toba supervolcano in Indonesia. More recent analyses say that our genetic diversity has an explicable pattern and, moreover, that this pattern is not what derives from a population bottleneck. Most of the causative agents of extinction are other biological species, and some are climatic.
In an isolated Mars colony one expects very few other species and fewer active agents of extinction, but the very simplicity of the ecosystem gives little potential for recovery from crop failures or emerging diseases. Lethal failure of life-support technologies is a more likely cause. In any event, it is supremely unlikely that humans could survive more than a handful of millions of years. I dismiss as specious the idea that humans can become a near-eternal multi-planetary species. Besides, who wants to live forever on one more planet? Recall how bored the character Bowerick Wowbagger was in the hilarious social-commentary trilogy, The Hitchhiker’s Guide to the Galaxy. Having become immortal through an industrial accident, he spent his time traveling to insult every other being. The Dyson sphere of a long-lived civilization (look it up) sounds just as terminally boring.
Colonizing other planets means forgoing much of our luck in the Universe on a wild bet that we can make up for all the bad luck that leaves those other planets barren. It is a technological tour de force and, predictably, a short-lived one.
Conclusion: It’s enormously easier and enormously safer and enormously more enjoyable to live on Earth than on Mars. As astronomer Lucianne Walkowicz said, “…for anyone to tell you that Mars will be there to back up humanity is like the captain of the Titanic telling you that the real party is happening later on the lifeboats.” Keep this great Earth going!
We live in a remarkable Universe that we are just beginning to comprehend. Go back to the origin of the stars and planets. The simplest chemical element fills the Universe, accompanied by a little helium and lithium. Almost preposterously it condenses from great dilution to an extreme density where it self-ignites in a fusion reaction nearly forbidden by the need for the truly weak force. The stars light up, yet, restrained by the weak force, last a long time on our human scale. They later make the panoply of chemical elements in our biosphere by the startling r-process. Our Earth came about around a star with remnants of a chemically enriching catastrophe of previous stars, favorably single rather than binary, at the right distance from the star, and at the right size of its own to keep water and gas, not too much of either, in a planetary system that had the help of Jupiter and asteroids… but, again, not too many asteroids. Einstein once said, “The most incomprehensible thing about the universe is that it is comprehensible.” Earth’s beauty may lie in part in the eye of the beholder; nonetheless, it is an extreme rarity among planets. William Butler Yeats once penned, in essence, that beauty is uninhabitable. I beg to differ, for the Earth in its span. He also wrote that “There is another world, but it is in this one.”
This is an outreach activity of the Las Cruces Academy (lascrucesacademy.org). The LCA is a non-profit private school in Mesilla, NM, serving students from early kindergarten through 8th grade with strong academics, including 3 languages (English, Spanish, and Chinese every day), Singapore Math, much science, social studies, even cursive writing. Author Vince Gutschick is the Board Chair and a teacher of science, computer programming, and (pre-COVID-19) tennis there. He retired from a career in research and teaching, spanning positions at the University of Notre Dame (B.S., Chemistry), Caltech (Chemistry w/joint research in Chemical Engineering, basically chemical physics), the University of California, Berkeley (NSF Postdoctoral Fellow), Yale University (J. W. Gibbs Instructor), Los Alamos Scientific Laboratory (before renaming to National Laboratory; in Theoretical Biology and Environmental Science), and New Mexico State University (Biology), with postings at Georg-August-Universität, Göttingen (Forestry), CSIRO Canberra (Plant Industry), La Trobe University, Melbourne, the Australian National University (Ecosystem Dynamics), the Carnegie Institution of Washington at Stanford (Plant Biology), the US National Science Foundation (Program Officer, Functional and Physiological Ecology, and cluster leader, Integrative Biology), and INRA/Laboratoire d’Ecophysiologie des Plantes sous Stress Environnementaux, Montpellier, France. Vince has over 60 peer-reviewed publications in 23 international journals in physics, chemistry, ecology, plant physiology, meteorology, radiative transfer, remote sensing, and agronomy (i.e., he has a pretty short attention span in any one field!). He and his wife, Dr. Lou Ellen Kay (City University of New York, Biology), also ran a small scientific consulting company, the Global Change Consulting Consortium, which had, alas, a similarly small number of contracts in the US and the UK.
He had a radio program on a small community radio station in Las Cruces, NM, KTAL-FM, after which he moved to weekly spots on KRWG-FM, the area NPR station (krwg.org/programs/science-digest). He maintains the website science-technology-society.com with analyses and commentaries on science, technology, and social ramifications, as well as interesting (he hopes) demos and experiments. Lou Ellen is his lifelong partner in science and education, as well as a partner in travel to 41 countries; their son, David, has traveled to most of those countries with them, and now David’s wife, Yi, is a great traveler with them as well. They bring world culture to the Las Cruces Academy in presentations and as interjections in many classes.