Long-termism - but not as we know it (Jim)
 
 
 


Although I am in favour of long-term planning for the benefit of my own health and welfare, such planning has its dangers when it is associated with an assumed, overarching purpose in all human life. Of course, I could simply adopt a purpose of my own choosing if I found something sufficiently attractive about it, but I could not then justify imposing it on every other member of my species.

It seems, however, that there is now a philosophical movement that seeks to do just that. A small group of theorists, mostly based in Oxford, has been working out the details of a new moral world view, one which emphasises our importance to the universe over an unimaginably long period of time - measured in billions of years. This idea of the importance of the long-term view and, along with it, the importance of human life in the universe, they call 'long-termism'. It is not, though, simply 'caring about the long term' or 'valuing the well-being of future generations'. It goes well beyond that.

At its core is an analogy between individual persons and humanity as a whole. The premature death of an Einstein would be a personal tragedy. It would, however, be a much greater tragedy looked at on a global scale - his death would rob the world of an intellectual superstar destined to make extraordinary contributions to human knowledge.

But long-termists would apply these claims to humanity itself, as if humanity were an individual with its very own ‘potential’ to fulfil over the course of ‘its lifetime’. So then, a catastrophe that reduces the human population to zero would be tragic not only because of the individual suffering it would inflict. They would argue that the real tragedy would be astronomically worse: our extinction would permanently destroy what could be a human future lasting billions of years. It would irreversibly destroy the ‘vast and glorious’ long-term potential of humanity, an opportunity to spread to the rest of the Universe the intelligent life which has developed here on Earth.

To these thinkers nothing matters more, ethically speaking, than fulfilling our supposed 'purpose' of spreading 'Earth-originated intelligent life'. After all, there may be no other intelligent life in our corner of the galaxy or, indeed, anywhere apart from our planet.

On this view, a climate catastrophe may not matter. The long-termist ideology enables its believers to take an uncaring view - they would say a rational attitude - towards climate change. Why? Because even if climate change causes island nations to disappear, triggers mass migrations and kills millions of people, it probably isn’t going to compromise our long-term potential over the coming billions of years. If one takes a cosmic view of the situation, even a climate catastrophe that cut the human population by 75 per cent for the next two millennia would, in the grand scheme of things, be nothing more than a small blip. As a species, we have lived through previous disasters.

One of long-termism's main protagonists - perhaps prophets would be a better word - is the Oxford philosopher Nick Bostrom. He argues that 'a non-existential disaster causing the breakdown of global civilisation is, from the perspective of humanity as a whole, a potentially recoverable setback.' It might be 'a giant massacre for man', but as long as humanity bounces back in order ultimately to fulfil its potential, its purpose, it will be little more than 'a small misstep for mankind'. He writes that the worst natural disasters and most devastating atrocities in history become almost imperceptible trivialities when seen from this grand perspective.

He has argued that we mustn't 'fritter … away' our finite resources on 'feel-good projects of suboptimal efficacy' such as 'alleviating global poverty' and 'reducing animal suffering', since neither threatens our long-term potential, and it is the realisation of our overarching purpose which really matters. Another disciple of the new thinking, Nick Beckstead, has even argued that, for the sake of attaining this goal, we should prioritise the lives of people in rich countries over those in poor countries: influencing the long-term future is of 'overwhelming importance', and the former are more likely to influence it than the latter.

If our top priority is to avoid an existential catastrophe in order to fulfil 'our potential', then what steps would be off-limits to us in making this happen? The notion of the 'greater good' has been used to justify atrocities (e.g. during war). If the ends 'justify' the means, Bostrom argues, and the ends are thought to be sufficiently large (e.g. national security), then this 'can be brought to bear to ease the consciences of those responsible for a certain number of charred babies'.

He has argued that we should seriously consider establishing a global, invasive surveillance system that monitors every person on the planet in real time, to amplify the 'capacities for preventive policing' (e.g. to prevent homicidal terrorist attacks that could devastate civilisation). Elsewhere, he has written that states should use pre-emptive violence, even war, to avoid existential catastrophes, and has argued that it would be immoral to use immense resources, even to save billions of actual people from harm, if doing so would reduce the all-important existential risk to humanity by only minuscule amounts over the long term.

The strange thing is that many multi-billionaires support this way of looking at things. They include Elon Musk and Peter Thiel, both of whom have donated substantial funds to institutes promoting these ideas and given talks in support of them. Musk wants to end his days on Mars. Why? Well, probably because another part of the thesis is the desirability of transhumanism - incorporating our intelligence into computerised robots, no doubt wearing armour-plating and carrying a ray gun like Iron Man. I imagine they see themselves as powering into an infinite future.

In fact, the umbrella organisation for all of this, the Effective Altruism (EA) movement, created in around 2011, now boasts a mind-boggling $46 billion in committed funding and about 10,000 members, many of whom, they say, are in positions of power in government. Long-termism is in danger of becoming a very influential secular religion. Karl Marx declared in 1845 that the point of philosophy is not merely to interpret the world but to change it, and this is exactly what long-termists are now doing, winning many well-heeled supporters in the process. Not an encouraging thought.

https://80000hours.org/2021/07/effective-altruism-growing/

Paul Buckingham

1 November 2021



