Global catastrophic risk

A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale. Some events could cripple or destroy modern civilization. Any event that could cause human extinction is also known as an existential risk.

Potential global catastrophic risks include, but are not limited to, hostile artificial intelligence, nanotechnology weapons, climate change, nuclear warfare, total war, pandemics, and impact events such as an asteroid collision with Earth on the scale of the one that wiped out the dinosaurs and millions of other species.

Researchers have difficulty studying human extinction directly, since humanity has never been destroyed before. While this does not mean it cannot happen in the future, it does make modelling existential risks difficult, due in part to survivorship bias.

Global catastrophic vs existential

Philosopher Nick Bostrom classifies risks according to their scope and intensity. A "global catastrophic risk" is any risk that is at least "global" in scope, and is not subjectively "imperceptible" in intensity. Those that are at least "trans-generational" (affecting all future generations) in scope and "terminal" in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on Earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity (and, presumably, all but the most rudimentary species of non-human lifeforms and/or plant life) entirely or prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant.
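
The scope and intensity dimensions can be read as a simple two-way grid. The Python sketch below is one illustrative way to encode it; the "endurable" intensity level between "imperceptible" and "terminal" is an assumption drawn from Bostrom's framework rather than stated above, and all names are purely for illustration.

    from enum import Enum

    class Scope(Enum):
        PERSONAL = 1
        LOCAL = 2
        GLOBAL = 3
        TRANS_GENERATIONAL = 4

    class Intensity(Enum):
        IMPERCEPTIBLE = 1
        ENDURABLE = 2      # assumed intermediate level, not named in the text above
        TERMINAL = 3

    def classify(scope: Scope, intensity: Intensity) -> str:
        """Rough classification following the scope/intensity grid described above."""
        if scope is Scope.TRANS_GENERATIONAL and intensity is Intensity.TERMINAL:
            return "existential risk"
        if scope.value >= Scope.GLOBAL.value and intensity is not Intensity.IMPERCEPTIBLE:
            return "global catastrophic risk"
        return "narrower risk"

    print(classify(Scope.GLOBAL, Intensity.ENDURABLE))              # global catastrophic risk
    print(classify(Scope.TRANS_GENERATIONAL, Intensity.TERMINAL))   # existential risk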

Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional", scale. He argues that such events are worthy of special attention on cost-benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole. Posner's examples include meteor impacts, runaway global warming, grey goo, bioterrorism, and particle accelerator accidents.

Other classifications

Bostrom identifies four types of existential risk. "Bangs" are sudden catastrophes, which may be accidental or deliberate. He thinks the most likely sources of bangs are malicious use of nanotechnology, nuclear war, and the possibility that the universe is a simulation that will end. "Crunches" are scenarios in which humanity survives but civilization is irreversibly destroyed. The most likely causes of this, he believes, are exhaustion of natural resources, a stable global government that prevents technological progress, or dysgenic pressures that lower average intelligence. "Shrieks" are undesirable futures; for example, if a single mind enhanced its powers by merging with a computer, it could dominate human civilization. Bostrom believes this scenario is the most likely, followed by a flawed superintelligence and a repressive totalitarian regime. "Whimpers" are the gradual decline of human civilization or of current values. He thinks the most likely cause would be evolution changing moral preferences, followed by extraterrestrial invasion.

Likelihood

Some risks have had their probabilities estimated with considerable precision. An asteroid impact, for example, is judged to have roughly a one-in-a-million chance of causing humanity's extinction in the next century (although some scholars argue the actual rate of large impacts could be much higher than originally calculated). Similarly, the frequency of volcanic eruptions of sufficient magnitude to cause catastrophic climate change, like the Toba eruption, which may have almost caused the extinction of the human race, has been estimated at about one in every 50,000 years. The 2016 annual report by the Global Challenges Foundation estimates that an average American is more than five times more likely to die during a human-extinction event than in a car crash.
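
As a rough illustration of how such figures translate into per-century probabilities, the Python sketch below assumes a constant, independent annual rate; the numbers simply restate the estimates quoted above and are not new calculations.

    def prob_within_years(annual_prob: float, years: int) -> float:
        """Chance of at least one occurrence over `years`, given a constant annual probability."""
        return 1 - (1 - annual_prob) ** years

    # Asteroid impact: quoted above as roughly a one-in-a-million extinction chance per century.
    asteroid_per_century = 1e-6

    # Toba-scale eruption: quoted above as roughly one per 50,000 years.
    toba_per_century = prob_within_years(1 / 50_000, 100)

    print(f"Asteroid extinction chance per century: {asteroid_per_century:.1e}")
    print(f"Toba-scale eruption chance per century: {toba_per_century:.2%}")  # about 0.2%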

The relative danger posed by other threats is much more difficult to calculate. In 2008, a small but illustrious group of experts on different global catastrophic risks at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century. The conference report cautions that the results should be taken "with a grain of salt".

There are significant methodological challenges in estimating these risks with precision. Most attention has been given to risks to human civilization over the next 100 years, but forecasting for this length of time is difficult. The types of threats posed by nature may prove relatively constant, though new risks could be discovered. Anthropogenic threats, however, are likely to change dramatically with the development of new technology; while volcanoes have been a threat throughout history, nuclear weapons have only been an issue since the 20th century. Historically, the ability of experts to predict the future over these timescales has proved very limited. Man-made threats such as nuclear war or nanotechnology are harder to predict than natural threats, due to the inherent methodological difficulties in the social sciences. In general, it is hard to estimate the magnitude of the risk from this or other dangers, especially as both international relations and technology can change rapidly.

Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the fact that no complete extinction event has occurred in the past is not evidence against its likelihood in the future, because every world that has experienced such an extinction event has no observers; so, regardless of their frequency, no civilization observes existential risks in its history. These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology.
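
A minimal Monte Carlo sketch of this selection effect, in Python: whatever the true extinction rate, every surviving world looks back on a record containing zero extinctions, so the inside view is uninformative even though the rate strongly affects how many worlds survive at all. The rates and horizons below are arbitrary illustrations, not estimates.

    import random

    def surviving_worlds(annual_extinction_prob: float, years: int, worlds: int) -> int:
        """Count simulated worlds that reach the present without an extinction event."""
        survivors = 0
        for _ in range(worlds):
            extinct = any(random.random() < annual_extinction_prob for _ in range(years))
            if not extinct:
                survivors += 1
        return survivors

    for p in (0.0001, 0.001, 0.01):
        n = surviving_worlds(p, years=1_000, worlds=10_000)
        # Each surviving world, by construction, has observed zero extinctions in its history.
        print(f"annual risk {p}: {n} of 10,000 worlds survive, each with a spotless record")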

Moral importance of existential risk

Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for four billion years before the expansion of the Sun makes the Earth uninhabitable. Nick Bostrom argues that there is even greater potential in colonizing space. If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years. Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people who will exist in the future.
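
The underlying arithmetic is a simple expected-value calculation. In the Python sketch below, both the size of the potential future population and the amount of risk reduction are placeholder assumptions chosen only to illustrate the shape of the argument, not figures from the text.

    potential_future_lives = 1e16   # assumed: lives a long-lived, space-faring humanity might support
    risk_reduction = 1e-6           # assumed: extinction probability reduced by one in a million

    expected_lives_preserved = potential_future_lives * risk_reduction
    print(f"Expected future lives preserved: {expected_lives_preserved:.0e}")  # 1e10, about ten billion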

Little has been written arguing against these positions, but some scholars would disagree. Exponential discounting might make these future benefits much less significant. Gaverick Matheny has argued that such discounting is inappropriate when assessing the value of existential risk reduction.
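
To see why exponential discounting cuts against these arguments, the sketch below applies the standard present-value formula PV = FV / (1 + r)^t with an arbitrary 3% discount rate; the benefit size and rate are illustrative assumptions only.

    def present_value(future_value: float, annual_rate: float, years: int) -> float:
        """Standard exponential discounting: PV = FV / (1 + r)**t."""
        return future_value / (1 + annual_rate) ** years

    for years in (100, 500, 1_000):
        pv = present_value(1e9, annual_rate=0.03, years=years)
        print(f"$1 billion of benefit in {years} years is worth about ${pv:,.2f} today at 3%")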

Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that we are doing far too little, in general, about small, hard-to-estimate risks of large scale catastrophes.
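
A toy version of Weitzman's point can be written out directly: with made-up probabilities and damages, a 1% tail scenario can still account for most of the expected loss. The figures below are placeholders, not estimates from any study.

    scenarios = [
        # (probability, damage in trillions of dollars) -- illustrative only
        (0.90,    5),   # mid-range warming, moderate damage
        (0.09,   20),   # high warming
        (0.01, 2000),   # catastrophic tail outcome
    ]

    expected_damage = sum(p * d for p, d in scenarios)
    tail_share = (0.01 * 2000) / expected_damage
    print(f"Expected damage: ${expected_damage:.1f}T; share from the 1% tail: {tail_share:.0%}")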

Numerous cognitive biases can influence people's judgement of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as concerned about 200,000 birds getting stuck in oil as they are about 2,000. Similarly, people are often more concerned about threats to individuals than to larger groups.

There are also economic reasons that help explain why so little effort goes into existential risk reduction. Such reduction is a global public good, so even if a large nation pays to decrease the risk, that nation will enjoy only a small fraction of the benefit of doing so. Furthermore, the vast majority of the benefits may be enjoyed by far-future generations; though these quadrillions of future people would in principle be willing to pay massive sums for existential risk reduction, the obvious transaction difficulties prevent them from doing so.
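
A small arithmetic sketch of the public-good problem described above; the population share, cost and benefit figures are invented purely for illustration.

    world_population_share = 0.04    # assumed: a large nation with ~4% of the world's population
    cost_to_nation = 100e9           # assumed: it pays the full $100B cost of a risk-reduction programme
    global_benefit = 1e12            # assumed: total benefit of the reduced risk, spread worldwide

    benefit_captured = global_benefit * world_population_share
    # Even though the global benefit far exceeds the cost, the nation's own share does not,
    # so it has little incentive to pay for the reduction on its own.
    print(f"Nation pays ${cost_to_nation:.0e} but captures only ${benefit_captured:.0e} of the benefit")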

Precautions and prevention

Planetary management and respecting planetary boundaries have been proposed as approaches to preventing ecological catastrophes. Within the scope of these approaches, the field of geoengineering encompasses the deliberate large-scale engineering and manipulation of the planetary environment to combat or counteract anthropogenic changes in atmospheric chemistry. Space colonization is a proposed alternative to improve the odds of surviving an extinction scenario; solutions of this scope may require megascale engineering. Global food storage has been proposed, but the monetary cost would be high; furthermore, by raising food prices it would likely add to the millions of deaths per year already caused by malnutrition. In Feeding Everyone No Matter What, David Denkenberger and Joshua Pearce propose a variety of alternate foods for global catastrophic risks such as nuclear winter, volcanic winter, asteroid/comet impact, and abrupt climate change; these alternate foods convert fossil fuels or biomass (e.g. trees and wood) into food, though significantly more research is needed to make it viable for the entire global population to survive using such methods. Asteroid deflection has been proposed to reduce impact risk, and nuclear disarmament has been proposed to reduce the risk of nuclear winter. Precautions being taken include:

  • Some survivalists stocking survival retreats with multiple-year food supplies.
  • The Svalbard Global Seed Vault, which is a vault buried 400 feet (120 m) inside a mountain in the Arctic with over ten tons of seeds from all over the world. 100 million seeds from more than 100 countries were placed inside as a precaution to preserve all the world’s crops. A prepared box of rice originating from 104 countries was the first to be deposited in the vault, where it will be kept at −18 °C (0 °F). Thousands more plant species will be added as organizers attempt to get specimens of every agricultural plant in the world. Cary Fowler, executive director of the Global Crop Diversity Trust said that by preserving as many varieties as possible, the options open to farmers, scientists and governments were maximized. “The opening of the seed vault marks a historic turning point in safeguarding the world’s crop diversity,” he said. Even if the permafrost starts to melt, the seeds will be safe inside the vault for up to 200 years. Some of the seeds will even be viable for a millennium or more, including barley, which can last 2,000 years, wheat (1,700 years), and sorghum (almost 20,000 years).
Organizations

The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of World War II. It studies risks associated with nuclear war and energy, and famously maintains the Doomsday Clock, established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology going haywire on a global scale. It was founded by K. Eric Drexler, who postulated "grey goo".

Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia.

Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence and the Singularity; its top donors include Peter Thiel and Jed McCaleb. The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe; most of the research money funds projects at universities. The Global Catastrophic Risk Institute (est. 2011) is a think tank for all things catastrophic risk; it is funded by the NGO Social and Environmental Entrepreneurs. The Global Challenges Foundation (est. 2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks. The Future of Life Institute (est. 2014) aims to support research and initiatives for safeguarding life in light of new technologies and challenges facing humanity; Elon Musk is one of its biggest donors. The Nuclear Threat Initiative seeks to reduce global threats from nuclear, biological and chemical weapons and to contain damage after an event; it maintains a nuclear material security index.

University-based organizations include the Future of Humanity Institute (est. 2005), which researches questions about humanity's long-term future, particularly existential risk; it was founded by Nick Bostrom and is based at Oxford University. The Centre for the Study of Existential Risk (est. 2012) is a Cambridge-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare. All are man-made risks, as Huw Price explained to the AFP news agency: "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us." Stephen Hawking is an acting adviser. The Millennium Alliance for Humanity & The Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities; it was founded by Paul Ehrlich, among others. Stanford University also has the Center for International Security and Cooperation, which focuses on political cooperation to reduce global catastrophic risk.

Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called Global Alert and Response (GAR), which monitors and responds to global epidemic crises; GAR helps member states with training and coordination of responses to epidemics. The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program, which aims to prevent and contain naturally generated pandemics at their source. The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate, which researches issues such as bio-security and counter-terrorism on behalf of the government.
