Olle Häggström's exceptional new book Here Be Dragons is surely one of the most thrilling, rewarding - and frightening - reads out there. I've read it twice now, and I could easily read it again. In fact, I probably should. (Were it not for the ever-growing pile of other books on my table.) Here's Olle on the future of humanity:
Another, less dramatic and in a sense diametrically opposite, scenario in which humanity might prosper despite a small value of q * is what we may call the Bullerby Scenario (after Astrid Lindgren's children's stories about the idyllic life in rural Sweden in the late 1940s). Here, humanity settles down into a peaceful and quiet steady state based on green energy, sustainable agriculture, and so on, and refrains from colonization of space and other radical technologies that might lead in that direction. I mention this possibility because it seems to be an implicit and unreflected assumption underlying much of current sustainability discourse, not because I consider it particularly plausible. In fact, given the Darwinian-style arguments discussed above, plus the paradigm of neverending growth that has come to reign both in the economy and in knowledge production (the scientific community), it seems very hard to imagine how such a steady state might come about, except possibly through the strict rule of a totalitarian government (which I tend to consider incompatible with human flourishing).
The more dramatic - and realistic - scenario is a preemptive and aggressive colonization of the entire universe.
When I described that scenario to my eleven-year-old son, he pronounced, after some thought:
Are we then nothing more than cosmic cockroaches, devastating everything in our path, until there is nothing left in the universe? Is this the only possible fate for our civilization - indeed for every civilization?
(or something to that effect, in tween parlance; aided and abetted by yours truly).
My own reaction was something along these lines, and I told Olle as much at the time:
Yeah, about "totalitarianism"... Human culture, morals, perceptions are really quite malleable; more so than we usually imagine. Relatively delicate nudges can have large impacts in the long run - if they are applied consistently.
Personally, I find it hard to accept the notion that oppression (perceived or "objective") is necessary in order to "handle" human nature.
Furthermore, we had better be able to deal with some measure of limitation! And people do, all the time - no problem! It's just a question of what you're used to.
Think about parenting: Any perceived injustice that a child may experience is always relative to what it construes as "normal" by observing its environment. Consequently, as a parent one must arrange this environment carefully.
In the Bullerby village you grow up with a natural respect for the environment, for your peers, and for the council of elders. There are no alternatives; the village is surrounded by a forest where you have no chance of surviving on your own.
Our problem is that people no longer have any sense of the borders of their (global) village, or of their place in it.
Add to this the scientific and technological developments that could be used in service of the community, e.g., different ways of influencing people's morals - directly as well as indirectly.
As a case in point, the educational initiative Naturvetenskap+ (Science+) is my small contribution to a positive feedback-loop intended to buttress society against selfishness, short-sightedness and sheer stupidity. It's my way of pointing the way towards Bullerbyn, in effect interpreting the curriculum as a collectivist agenda for sustainability. (And I have reason to believe that the Swedish National Agency for Education approves.)
Dejected, I study Olle's list of existential risks. It seems we have no choice but to abandon Earth. If we can. The only questions are when and how. And "who's 'we'?".
Threats from civilization, human or alien: climate change, environmental degradation, atomic and biological war and terrorism, nano-bots and AI running amok. Things that could (will) kill us all within 100 - 1000 years. Somehow I have been able to live with that knowledge until now. Mainly because there is at least a theoretical possibility of avoiding them.
Threats from nature: pandemics, meteoroids, volcanoes, cosmic rays, and eventually the Sun. These are things that could (will) kill us all no later than 100 000 - 1 000 000 years from now (give or take) **. And there isn't much we can do about it. Except, possibly, try to escape into space.
So, sterile as it is, I may have to accept Neal Stephenson's space habitat scenario - or something much worse - as the only option available to us, if that.
The old Gaia-hugging me looks utterly pathetic. I have lost my existential footing. Oh, the lure of the neighborhood church. (Strictly off-limits, of course.)
But, still... Hanson's and others' techno-social-Darwinism is truly sickening. Deplorable. Horrendous.
The whole idea of "Darwinian-style arguments" to the effect that we, as a society, are incapable of preventing a lone madman (or two) from destroying us all seems hopelessly defeatist. If nothing else, it seems to imply that we are forced to accept that a society can never be stronger than its weakest link; that we are forever bound by the law of least resistance. Perversely, Hanson (and others, mainly economists) seems to revel in the prospect of actualizing Parfit's repugnant conclusion.
Actually, I see this as a violation of Hume's law. Just because something is (or seems to be) a certain way, we can't conclude that it should be so, or that it is unavoidable. *** We cannot resign ourselves to letting our propensity for recklessness violate societal and existential borders. Or even worse, the propensity of just a few.
What about the precautionary principle? And what about the asymmetry between the (relatively) known and safe, and the unknown and unsafe?
One of the most interesting chapters in Olle's book deals with the following question:
What do we really want?
What do we want our future techno-selves to want?
What do we want our smarter-than-us
(and possibly also wiser-than-us)
AI to want?
It will take over soon, you know. The AI, that is.
Now, if we ask it to find out what we really want, we will be sorry - for several reasons:
- we don't know
- we don't want to know
- we can't agree (not even with ourselves); and
- what we want isn't really what we want anyway.
So what if we instead ask the AI to find out what we (objectively) should want...
- if there even is such a thing
- if we are able to formulate the question
- if we are able to understand the answer
- if we are able to verify it
- if we are able to comply
...well, then we will also be sorry, for that is surely not what we want.
So what do we want? We want to strive for, but not attain, any and all of the goals that could plausibly appear as candidate answers above. The journey is the goal.
Maybe, then, not even I would be entirely content in a sustainable steady-state Bullerby village.
And what happens, pray tell, once Sandberg et al have colonized the entire universe at lightning-speed? 'Tis but a moment's work (geologically speaking).
I return to my motto: Responsibility trumps liberty. Specifically, we must get into the habit of restraining ourselves - and others. And to find satisfaction in doing so.
It is only in relation to boundaries that we may find meaning and harmony. This is a universally applicable principle of aesthetics. (Here is one example, relating to creativity and music.)
Speaking of facts and values...
Olle repeatedly calls attention to the all too common mistake of mixing the two. What about this:
If, for instance, we take the (from the point of view of mainstream economics) extremely small discounting rate r = 0.1%, then we see from Table 10.1 that this corresponds to retaining 90% of value a hundred years from now, which may seem relatively reasonable. But look what happens 10,000 years from now: the fraction of value retained after such a time period is (1 − 0.001)^10,000 ≈ 0.000045, meaning, in frank terms, that we do not care about the economy and welfare of our great-great-...-great-grandchildren 10,000 years hence.
(p. 235, my italics)
Is this a subtle shift from fact to value? (Maybe not in itself; see Olle's comment below.) Would an economist reply that, in fact, our grandchildren will be 1/0.000045 ≈ 22,000 times richer than we are? ****
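The arithmetic in the quoted passage is easy to check for oneself; here is a quick sketch in plain Python (no external libraries, variable names my own):

```python
# Fraction of value retained under exponential discounting at rate r per year:
#   retained(t) = (1 - r) ** t

r = 0.001  # the 0.1% discount rate from the quoted passage

after_100 = (1 - r) ** 100     # a hundred years hence
after_10k = (1 - r) ** 10_000  # ten thousand years hence

print(f"Value retained after 100 years:    {after_100:.3f}")      # ≈ 0.905, i.e. ~90%
print(f"Value retained after 10,000 years: {after_10k:.6f}")      # ≈ 0.000045
print(f"Implied 'richer by' factor:        {1 / after_10k:,.0f}")  # ≈ 22,000
```

Both of the book's figures - the ~90% after a century and the ≈ 0.000045 after ten millennia - fall straight out, as does the economist's hypothetical 22,000-fold factor.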
Am I, and could perhaps also Häggström be, a Kantian rather than an axiological actualist? And would that be so bad, compared to the hyper-rationality of some twisted utilitarianism? Sentimentality might be a good thing. And a bit of Gaia-hugging.
(*) q is the conditional probability that a society - having sprung to life on a life-supporting planet and developed to the technological level of present-day humanity - goes on to develop into an intergalactic civilization.
(**) This makes me wonder why, later (in chapter 10), upwards of a billion years of continued existence as mere flesh-and-blood creatures on Earth alone is described as a "conservative estimate"...
...the point of this conservative estimate being that the future holds far more lives worth saving than all that have hitherto existed...
...which, according to classical calculations of expected value, leads to the conclusion that even ridiculously small increases in spending on the prevention of extinction now correspond to millions of lives (later on).
(***) "Just because something is, we shouldn't let it." The status-quo bias, or the is-ought problem of induction?
(****) I guess that would imply r = g, η = 1, and γ = 0 in Ramsey's formula.
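For reference, and on the assumption that the footnote's γ denotes the pure rate of time preference and η the elasticity of marginal utility of consumption, the Ramsey formula reads:

```latex
r = \gamma + \eta g
\qquad\Longrightarrow\qquad
\gamma = 0,\ \eta = 1 \;\Rightarrow\; r = g
```

Consistently with the economist's hypothetical reply above: consumption growing at g = 0.1% per year compounds to (1.001)^{10,000} ≈ 22,000 over ten millennia, the reciprocal of the 0.000045 fraction of value retained.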
Of course, I agree completely that positing a positive g (and r) is reckless, especially over an extended time. Actually, it epitomizes our selfishness and short-sightedness, and maybe also our stupidity. But it could also be an almost unavoidable consequence of our psychological makeup: If we did manage to override it, that would mean the end of our journey.
Read part 2 of this text here.