Crisis and Inertia (3) – Technological Threats and Crises

(This is part 3 in the “Crisis and Inertia” series.)

Some advances of technology are feared by many.1 Some of those fears may be justified; others less so. Nuclear weapons are an obvious threat, but whether artificial intelligence (AI), for example, is likely to cause our demise is more controversial. This series isn’t about threats or fears, however, but about crises. The difference is that threats or fears may materialize, while crises are either already occurring or are unavoidable and thus will occur. Nuclear weapons are not a crisis, but their use would be, and as both the probability of their use in war and their effects are closely related to (other) crises discussed in this series, nuclear weapons are a topic that cannot be ignored. Whether AI is or becomes a crisis, on the other hand, depends on whether it leads to a (significant) shift in the momentum of civilization or even mankind, and the same is true for (the development of) other technologies. The most likely candidates for “crisis” status (regardless of whether they deserve that status) may be fertilizers, pesticides, antibiotics, and genetic engineering (GMOs). This is not a comprehensive list, however – plastics may also be a candidate, for example – but there is a limit to what can be covered in a single post.

Artificial Intelligence (AI)

Of all modern technologies the one that is most feared as a possible cause of humanity’s demise is artificial intelligence (AI). Supposedly, AI might become an existential risk (and thus a “terminal crisis” – see part 1) in a number of different ways, but most of those can be grouped under three different headers: (1) “smart AI” or superintelligence with murderous intent, (2) “dumb AI” with bad programming, and (3) (side-) effects of the use and abuse of AI.

Abuse of AI isn’t really a technological problem but a social problem. AI and related technologies can be used to increase authoritarian or totalitarian control, to make weapons more “effective”, and in various other threatening ways. Because this isn’t a technological issue, I won’t say much about this here, but I will turn to the related social problem(s) of totalitarianism, authoritarianism, and the rise of surveillance in a later part of this series.

Smart AI is superintelligent, meaning that it is smarter – in relevant respects – than us humans. We seem to be nowhere near developing anything like that. Surely, computers can play chess and Go and carry out insanely complicated calculations at lightning speed, but smart AI is supposed to be much more than that. The 18th-century philosopher David Hume said provocatively that “reason is, and ought only to be the slave of the passions”. What he meant is that “the passions” determine and include our goals, concerns, and desires, and that reason is impotent by itself and can only work out how best to reach those goals, deal with those concerns, and satisfy those desires. To be intelligent in the way we humans generally understand that notion is to have both reason and “the passions”, as well as to be (self-)conscious. AI, however, is “pure” reason. There is no (self-)conscious AI, and there is no AI that has anything like “the passions” – the closest to the latter are the goals of some computer program. These goals may be badly selected, but that is a problem of dumb AI to which I’ll turn below. Furthermore, those goals are the goals of the creator of that AI, and not really of that AI itself.

Even though we don’t have anything approaching smart AI – that is, something that can be considered an artificial, intelligent being with conscious thoughts, goals, desires, concerns, beliefs, ideas, and so forth – I don’t see any fundamental reason why such smart AI would be impossible, and given that technological developments are unpredictable and sometimes make big leaps, it is entirely possible that we’ll develop something like that (if we can avert collapse long enough). And perhaps, at some point smart AI could take over and develop its own even smarter successors.2

If such smart AI were developed, why would it be a threat? Some of the silliest scenarios of an AI apocalypse seem to assume that smart AI would be like a super-villain in a comic book or action movie, aiming to take over the world. It seems rather unlikely, however, that any smart AI would be interested in taking over the world, unless it is programmed to, but then (again) we’re dealing with a case of badly selected goals, which is primarily a problem of dumb AI.

Probably the most famous scenario of dumb AI posing an existential threat is Nick Bostrom’s paperclip manufacturer.3 Imagine an AI in control of a paperclip factory, but with unlimited power. The only goals set for this AI are self-preservation and manufacturing as many paperclips as possible. This AI will realize at some point that some of the resources it needs to make more paperclips are already being used by humans and that humans themselves have no value for either of its goals. So it decides that humans are just a nuisance, and gets rid of humans to make more paperclips. While something like this scenario may be hypothetically possible, it is not clear to me why an AI in charge of a paperclip factory would be given the unlimited power needed to realize this scenario. What the scenario does vividly illustrate, however, is that the more power we give to some system (regardless of how intelligent it is, and regardless of whether it is artificial), the more important it is to set that system’s goals in the right way. Powerful systems with dumb goals are almost guaranteed to cause serious damage.

My personal favorite AI scenario is Thomas Metzinger’s Benevolent Artificial Anti-Natalism (BAAN).4 BAAN would be much closer to smart AI than Bostrom’s paperclip manufacturer, but in both scenarios the AI has unlimited power and has goals set by us humans. In the case of BAAN, those goals are to prevent and reduce (human) suffering. Metzinger suggests that it is quite likely that this AI would then decide that the best way to accomplish the reduction of suffering is to prevent the birth of any more children, leading to the eventual extinction of mankind. Perhaps, Metzinger is right – if we’d create a benevolent system with unlimited power and unlimited knowledge and wisdom, then it may very well decide that the only way to significantly reduce suffering is to let humanity die out (or even to exterminate humans).5 However, I find it rather unlikely that we’ll ever intentionally create an AI with near unlimited power, and I find it even more unlikely that we’ll ever create a (non-powerless) benevolent AI. Benevolence doesn’t produce wealth or power, and as it tends to be the drive for wealth and power that determines the goals we select for AI, benevolent AI is a sympathetic but rather unrealistic idea.

Again, if we give some AI system too much power and the wrong goals, disaster is likely to follow. What these scenarios ignore, however, is the question of why we would ever make an omnipotent AI. I can’t think of any plausible reason why an AI would ever be given enough power to carry out the paperclip or BAAN scenario (or any other doomsday scenario). It seems to me that the only way in which some AI could become that powerful is if it would somehow be able to grab that power. But to have any chance of succeeding at this power grab it would have to be implausibly powerful to start with. Furthermore, I doubt that such near-omnipotence is technologically possible, but perhaps it will be in the future (if there is sufficient future). And more importantly, ignoring the silly artificial super-villain scenarios, there is no reason for an AI to even try to become that powerful, unless it is programmed to. In other words, for a plausible AI doomsday scenario, we’d need an extremely powerful AI that is programmed to become omnipotent and let nothing get in its way. Why (and by whom) would such an AI ever be created?

Perhaps, AIs can at some point learn to set or change their own goals, concerns, and desires, rather than just follow their programs, but it is difficult to see why that would necessarily be a threat. There is little reason to assume that an AI would be interested in wealth or power, so artificial super-villains are highly unlikely. If some AI becomes a threat due to its goals, then it is far more likely that we are to blame for that. And because of the considerations in the previous paragraph, it is very improbable that this would lead to an existential threat.

Hence, AI as an existential threat is an implausible crisis, but there is another AI-related crisis that is far less implausible. This crisis belongs to the same category as the military and totalitarian abuses of AI mentioned above: the use of AI. Even if AI isn’t used (or abused) for malicious purposes, it can still have very negative social impacts. AI – like some other kinds of technology – takes over jobs. That is no problem if other, new kinds of jobs are being created, as has happened in the past, but unlike previous technological developments, AI can potentially take over very many different kinds of jobs (including most “new” jobs). This will almost certainly lead to an increase in unemployment, to a decrease in real wages and financial security, and to a further concentration of wealth and power in the hands of a small elite (that controls AI and its application as well as almost everything else). This is the real AI crisis: the creation of an ever-growing class of disposable people for whom there is neither work nor purpose. But this isn’t really a technological crisis; rather, it shares much of its roots with the crisis of climate change and other crises yet to be discussed in this series – it is a crisis of neoliberal capitalism.

AI itself is not a plausible threat. People who believe that AI will try to grab power and wealth and thereby destroy everything else forget (or ignore?) that a small group of humans is already doing exactly that.

Agriculture and Antibiotics

There are a number of potential technological crises that are all related to agriculture. The list of problems is long, but I’ll be brief and just mention a few.

Plowing loosens the topsoil – that is its very purpose – which then more easily washes away when it rains, leading to erosion. Such erosion reduces the fertile topsoil and can make land infertile and useless. The more erosion, the more artificial measures such as fertilizers are needed to use land for agriculture, but like plowed topsoil (which contains natural fertilizers), fertilizers wash into surface and ground water and ultimately into the oceans. This leads to algae growth and consequent deoxygenation (lack of oxygen), turning ever larger swaths of coastal sea into dead zones where almost nothing can survive. (See also the previous part of this series.) In other words, industrial agriculture simultaneously destroys soil, rivers, and seas. Unfortunately, it may be impossible to feed the current world population without resorting to such methods. (And thus, feeding the people who live now may make it impossible to feed people in the future.)

A second problem – also already mentioned in the previous part – is that pesticides are killing insects. That is, of course, more or less what they are designed to do, but unfortunately they do not just kill “pests” but very many other insects as well. There has been a massive decline in insect populations,6 and this is increasingly recognized as a serious problem. Insects play key roles in many ecosystems, and we humans also depend on insects for much of our food supply. Many agricultural crops are pollinated by bees, and bee populations are declining in many industrialized countries, most likely because of pesticides. But without bees, agriculture and much of our food supply will collapse.

A third problem is the widespread use of antibiotics in agriculture, exacerbating (or even partially causing) the increase of antibiotic resistance. The agricultural use of antibiotics is not the main cause of this problem, however – that dubious honor goes to the over-prescription of antibiotics in medicine. When disease-causing bacteria become resistant to antibiotics, it becomes very difficult (if not impossible) to fight those diseases. Already, tens of thousands of people die every year due to infection by antibiotic-resistant bacteria. According to one estimate, this number will rise to 10 million per year by 2050,7 but several other scientists have pointed out that there are too many unknowns for a reliable estimate. What is uncontroversial, however, is that antibiotic resistance is a genuine crisis and will become an increasingly serious problem. On the scale of minor to terminal crises (see part 1 in this series), this is merely a minor crisis, however. Antibiotic resistance won’t lead to human extinction or the collapse of civilization (and neither will it prevent those).

A fourth problem is the proliferation of genetically modified plants in agriculture, better known as GMOs (the “O” stands for “organism”). Genetic modification isn’t a problem in itself – there is no fundamental difference between selecting genes by means of selective breeding and by means of more direct techniques such as genetic modification – but the spread of GMOs reduces the genetic diversity of food crops, thereby increasing vulnerability, and the production of GMOs is associated with various abusive practices. GMOs are created mostly to increase the profits of Monsanto and similar companies and not for the good of mankind (as some of their defenders tend to proclaim). While genetic modification could be used to create more heat- and drought-resistant crops (which we will need in the coming decades), most effort goes to creating plants that are adapted to (cancer-causing) herbicides made by the same company (i.e. Monsanto), for example, and to other patentable and profitable modifications that are of little use to anyone except the shareholders of Monsanto and the like.

Nuclear Weapons

Nuclear weapons kill lots of people when they explode (duh! – that’s what they’re made for), but potentially many more afterwards, and not just through the effects of radiation. The firestorm that developed in Hiroshima in the hours after the bomb exploded released approximately one thousand times as much energy as the blast itself. More than the explosion, radiation, and fallout, it is this fire and what it releases that is the greatest potential danger. The bomb dropped on Hiroshima burned 13 km² and thereby produced a lot of black carbon or soot. How much soot a bomb produces and how much of that reaches the upper atmosphere depends on the size or yield of the bomb and on the nature of the terrain that is being burned. Cities produce lots of soot; deserts much less. Assuming that most targets will be in urban areas, the average amount of soot that reaches the upper atmosphere in the case of a bomb the size of the one dropped on Hiroshima is 50,000 metric tons. By modern standards, the bomb dropped on Hiroshima was tiny, however. Most nuclear weapons that are currently stockpiled around the world have a yield of at least 100 kilotons (up to 5,000), which is about 7 times the size of the bomb dropped on Hiroshima. The relation between yield and soot production isn’t linear – if we put the yield of the “average” atomic bomb at 150 kilotons, then the “average” bomb would produce (at least) 0.3 Tg (million metric tons) of soot (if dropped on “average” terrain).

A nuclear conflict is unlikely to involve a single bomb, however: 250 such “average” bombs would produce more than 75 Tg of soot, which would have very drastic effects on the global climate. 75 Tg of soot would lead to a decrease of global temperatures to a level similar to that in the last ice age, and a worldwide decrease of precipitation of more than 25%. The effects of drought would be greatest in the tropics; those of the temperature drop would be greatest in the moderate and cold zones. The combined effect is an almost complete collapse of food production everywhere on Earth – due to cold in some parts of the world, and due to drought in others. But even 5 Tg of soot – the combined effect of only 15 or 16 “average” bombs – would have disastrous effects. It would reduce rainfall and growing seasons for anywhere between 5 and 25 years. In more vulnerable areas, precipitation could decrease by up to 80%. The resulting famines and other indirect effects could kill a billion or even several billions of people.8
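To make the arithmetic behind these numbers explicit, here is a minimal back-of-the-envelope sketch in Python. The per-bomb soot figures (roughly 0.05 Tg for a Hiroshima-sized bomb on a city, 0.3 Tg for an “average” 150-kiloton bomb on “average” urban terrain) are the ones cited above; the simple “soot per bomb times number of bombs” calculation, and all names in the code, are my own illustrative assumptions – in reality the yield-to-soot relation is non-linear and depends heavily on the targets.

```python
# Back-of-the-envelope sketch of the soot arithmetic used above.
# Per-bomb figures are taken from the text (based on Toon et al. 2008);
# multiplying a fixed per-bomb amount by the number of bombs is an
# illustrative simplification, not a climate model.

SOOT_HIROSHIMA_SIZE_TG = 0.05  # ~50,000 metric tons for a ~15 kt bomb on a city
SOOT_AVERAGE_BOMB_TG = 0.3     # "average" 150 kt bomb on "average" (urban) terrain

def total_soot_tg(number_of_bombs: int,
                  soot_per_bomb_tg: float = SOOT_AVERAGE_BOMB_TG) -> float:
    """Total soot (in Tg) reaching the upper atmosphere, naively assuming
    every bomb contributes the same amount."""
    return number_of_bombs * soot_per_bomb_tg

# The two scenarios discussed above:
print(total_soot_tg(250))  # 75.0 Tg -> ice-age-like cooling, near-total crop failure
print(total_soot_tg(16))   # ~5 Tg   -> years of reduced rainfall and growing seasons
```

The only point of this sketch is to show how the 5 Tg and 75 Tg thresholds in the cited literature translate into surprisingly small numbers of “average” bombs.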

The implications of this are obvious: a minor nuclear conflict involving only 15 or 16 bombs could mess up the global climate enough to result in over a billion casualties; a large scale nuclear conflict involving 250 or more bombs will wipe out most of mankind. This conclusion raises two questions: (1) How likely is nuclear conflict (of either type)? And (2) can it be avoided?

The larger the global chaos, the larger the chance of conflict, including nuclear conflict. Nationalism – especially jingoism – further increases the risk. Right now, we see a global increase of nationalism and militarism (and thus, perhaps, jingoism), and climate change is a potential source of large-scale chaos and serious conflicts. Both India and Pakistan have nuclear weapons, for example, and are already in a permanent state of low-level conflict (with occasional flashes of more heated confrontation). Drought, heat, and other consequences of climate change (including indirect effects such as famines and refugee flows) are very likely causes of future escalations (especially when taking the region’s geography and political boundaries into account). If – in case of such an escalation – these two countries use only one fifteenth of their nuclear weapons, that will probably be sufficient to produce the 5 Tg of soot needed to add a billion or so indirect victims to the tens of millions dying in the blasts themselves. (Most of those victims will probably be in that region itself, as it will be hit hardest by drought, but this would also be the case if the nuclear conflict took place elsewhere.)

It’s hard to give a good estimate of the likelihood of nuclear conflict, and it is even harder to answer the question of whether it is avoidable. The history of human stupidity does not suggest optimistic answers. I’m inclined to say that nuclear conflict is very likely, unless the world sinks into chaos (due to climate change) so fast that none of the nuclear powers has enough time to “push the button”. And I’m equally pessimistic about the chance of avoiding this – knowing their effects, sensible political leaders wouldn’t even make nuclear weapons. If the political leaders we actually have believe that it is good to have nuclear weapons, then they probably also believe that there are situations in which it would be good to use them. If they still don’t understand that nuclear weapons are a bad idea by now, then they probably never will. And if they don’t understand that nuclear weapons are a bad idea, then there is little that would prevent their use in case of a serious conflict.

It should, perhaps, be noted here that, although 5 Tg of soot would cause global cooling, the effects of nuclear war are not the mirror image of the effects of climate change due to CO₂ increase. Hence, it isn’t the case that a nuclear war would balance out global warming and thus could prevent a bigger disaster. Nuclear war wouldn’t stop ocean acidification, for example, and some effects of the two disasters might even reinforce each other. On the other hand, a temperature drop due to nuclear war would (temporarily) stop permafrost melting. If permafrost melting were to become so severe that it would produce a feedback loop leading to mass extinction (including human extinction), then this could – perhaps – be postponed by nuclear war. It would only be postponed, however, because the soot would leave the atmosphere in a few decades while the CO₂ would stay there much longer, and thus, after a few decades catastrophic melting would resume.9 In any case, even in the case of a hypothetical (?) crisis of runaway climate change, nuclear war would be very much like the “tomato catcher” of the first part of this series. We’d avoid the wall, but we’d be reduced to pulp anyway.10

Technological Solutions?

If nuclear weapons were used to cool the Earth, that would be a particularly stupid kind of geo-engineering. Less stupid technological solutions to the problem of climate change (see the previous part) or to some of the problems mentioned above have been suggested. In most cases, however, these proposed technological solutions haven’t proceeded beyond the level of suggestions. Nevertheless, it is worth paying at least some attention to technology not as threat or crisis, but as possible solution.

Technologies that reduce CO₂ output help, but there already is too much CO₂ in the atmosphere, and the only real technological solution to the climate change crisis would be a method to remove CO₂ from the atmosphere. We cannot do that to anywhere near the extent required. It costs a lot of money, effort, and energy to remove a tiny bit of CO₂ from the atmosphere, but we need to remove massive amounts. Again, we cannot do that, and given the second law of thermodynamics – recapturing and concentrating a gas that is dispersed throughout the atmosphere inevitably requires a large input of energy – it doesn’t seem likely that we’ll ever be able to.

If technology cannot (sufficiently) help to avert the climate crisis, could it at least alleviate it? There are several proposals to cool down the planet by technological means. The cheapest and most feasible is stratospheric sulfur or sulfate injection: using airplanes to spread light-reflecting chemicals high in the atmosphere to reduce the amount of sunlight that reaches the surface and lower atmosphere and that warms up the planet. This would cool down Earth, but it could have some unfortunate side effects. It might lead to ozone depletion (increasing skin cancer), for example. It might increase acid rain (damaging forests and buildings). It might bleach the sky, making it noticeably whiter. It might have all kinds of unexpected effects on atmospheric circulation. All of these (and some other side effects) are uncertain. What is more certain is that if these injections were to stop suddenly, Earth would warm up very quickly. (Much too quickly for anything or anyone to adapt.) And most importantly, reducing the amount of sunlight that reaches the surface doesn’t just cool the planet, it also reduces the amount of energy available for plants. According to recent research, the loss in agricultural food production due to this decrease of sunlight is approximately equal to the gains resulting from cooling.11 In other words, for the global food supply this kind of geo-engineering doesn’t help at all. (It might, of course, alleviate some of the other effects of climate change, but the risks mentioned may outweigh the potential benefits.)

Does that mean that we cannot expect anything from technology? Are there no technological solutions? Perhaps, but there really aren’t any technological problems or crises either. None of the crises mentioned above are caused by technology itself – they are caused by how we use (or used, or will use) technology. We cause crises by means of technology (as well as other means!). Perhaps, we can also solve some crises by means of technology (as we have done in the past – although it is easy to overstate this), but right now the prospects are dim.

Links:
Part 1 (introduction)
Part 2 – Climate Change
Part 4 – Economic, Political, and Cultural Crises
Part 5 (conclusion) – Derailing a Speeding Train


Notes

  1. And that fear has produced literary classics such as Mary Shelley’s Frankenstein.
  2. Personally, I would find that an extremely exciting development, but I suppose that those who think of AI in more Frankensteinian terms would be horrified by the thought.
  3. Nick Bostrom (2003). “Ethical Issues in Advanced Artificial Intelligence”. In: I. Smit et al. eds., Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2: 12-17.
  4. See also: Thomas Metzinger (2016). “Suffering”. In: K. Almqvist & A. Haag eds., The Return of Consciousness: A New Science on Old Questions (Stockholm), 237-262.
  5. Personally, I have no objection to the creation of such a system, nor to the implementation of such a conclusion. If a truly benevolent and sufficiently knowledgeable AI decides that the only way to significantly reduce suffering is to exterminate mankind, then I’ll accept that. Then, indeed, we should go extinct (or be exterminated). I don’t expect this to be a popular position, however. In any case, there are some interesting philosophical questions about human extinction, and this is a topic that I hope to return to in some future post.
  6. See, for example: Caspar Hallmann et al. (2017). “More than 75 Percent Decline over 27 Years in Total Flying Insect Biomass in Protected Areas”, PLOS ONE 12.10: e0185809.
  7. The Review on Antimicrobial Resistance Chaired by Jim O’Neill (2014). Antimicrobial Resistance: Tackling a Crisis for the Health and Wealth of Nations (Review on Antimicrobial Resistance).
  8. Most of the data in this and the preceding paragraph comes from: Owen Toon, Alan Robock, & Richard Turco (2008). “Environmental Consequences of Nuclear War”, Physics Today December 2008: 37-42. And: Adam Liska, Tyler White, Eric Holley, and Robert Oglesby (2017). “Nuclear Weapons in a Changing Climate: Probability, Increasing Risks, and Perception”, Environment 59.4: 22-33.
  9. But nuclear war may also destroy much of civilization and thereby significantly reduce further CO₂ release by humans.
  10. Or actually, we’d be reduced to pulp first, and then continue flying towards the wall at lower speed, but end up smashing into it anyway.
  11. Jonathan Proctor, Solomon Hsiang, Jennifer Burney, Marshall Burke, & Wolfram Schlenker (2018). “Estimating Global Agricultural Effects of Geoengineering Using Volcanic Eruptions”, Nature, August 8.
