While it is easy to fixate on future doom, we should sometimes look at all the terrible things that didn't happen.
Organ transplant technology has not led to organ harvesting of the poor. The first successful kidney transplant in 1954 raised ethical concerns that the wealthy would monopolize organs and create a market for exploitation. There were fears that the poor might be "farmed" for their organs while the rich became immortal. However, most countries, guided by the World Health Organization, enacted strict laws prohibiting the buying and selling of organs, ensuring the regulated and largely non-controversial practice of organ transplantation we know today.
Psychological and genetic testing has not become a precondition for employment. The classic movie "Gattaca" explores a world where employment and social status are determined by genetic quality. But genetic analysis is hardly necessary for the storyline of Gattaca. More realistically, psychological evaluation (IQ scores and personality tests) could fill the role of "genetic" determination. And yet neither of these sources of information is of major importance in the real world. Why? There is a practical reason: it is more accurate to judge a person by their actual output than by a predictor of their output. There is also a legal reason. As in the organ donor case, it is not hard to write laws limiting what information may be used in the hiring process. The same goes for genetic privacy and the fear of insurance companies using genetic information when setting premiums. If we decide it is bad, we can make it illegal. Laws work. Mostly.
The invention of nuclear weapons has not led to nuclear war. Why not? Books have been written on that topic. But loosely, world leaders were humans with common sense who appreciated that nuclear war would be bad. Diplomacy works, at least sometimes. Similarly, for nuclear power, the dangers of radiation were understood, and sensible engineers designed reactors to be safe. Yes, Chernobyl, Three Mile Island, and Fukushima bear witness to imperfections in the designs. But these are exceptions that prove the rule. Our engineers have so successfully harnessed nuclear power that, while intrinsically the most dangerous of our energy sources, it has become one of the safest. Engineering works.
Genetic engineering has not led to the escape of "super weeds" that choke out native species. Genetically modified plants and animals are typically specialized to achieve a human end rather than to maximize fitness. A myostatin-knockout cow will have a massive musculature, which may be good for the butcher but is an unnecessary metabolic cost in nature. Domestic plants engineered to produce maximal fruit per acre in pesticide-, herbicide-, and fertilizer-laden monoculture are not going to out-compete native species in a forest. When was the last time you saw a stalk of corn growing outside a farm or garden? The "super weed" threat never materialized because no likely route to the accidental creation of such a weed exists. It would be exceedingly difficult to make such a weed on purpose, let alone by accident.
Genetic engineering has not led to widespread use of biological weapons. This is not for lack of trying. Certainly there was research on biological weapons during the Cold War and before. But as with "super weeds," it is hard to genetically engineer a better pathogen. One could be selectively bred, but this would require large numbers of human test subjects. Motivation is also lacking. Biological weapons are not effective from the standpoint of military theory. Unlike nuclear weapons, they have limited stopping power. You cannot stop a line of tanks by dusting them with anthrax. Poison gas or conventional bombs are faster. Biological weapons also lack specificity: a released pathogen can infect friend and foe alike. So, as with super weeds, there are practical and motivational reasons why the dangers of biological weapons remain unrealized.
Genetically engineered super-viruses are not escaping from research labs. There has been a fear for years that viruses collected from nature and bred or engineered in the lab could escape. This is a valid concern. Virologists often study the mutations that make a virus more virulent or that allow transmission between species. To this end, a researcher may randomly mutate a virus and select for variants with these features. If such variants can be found in the lab by random mutation, then they are likely to eventually arise in nature. Understanding this mutational profile helps us assess the risk a virus poses and aids in the design of vaccines. Good laboratory practice can minimize the risk of viruses escaping. But accidents can happen. It is possible that the COVID-19 pandemic began in this way. Government cover-ups certainly happen. But even if COVID-19 did escape from a lab, it is just a sample of what was already in nature. New viral outbreaks occur every year. Most fizzle out. Virologists collect samples of these viruses and others from natural reservoirs. These virologists may be exposed to the viruses they collect, but so may anyone living in proximity to the natural reservoir. The only difference is that the virologist knows to be careful. Fears of lab-grown viruses overestimate our ability to design pathogens, wrongly separate natural from artificial mutagenesis, and underestimate our ability to contain viruses within the lab and respond to any outbreaks.
It is a little harder to be optimistic about the environment. But even here, where humanity is perhaps failing most severely, our record is not all bad. In the mid-20th century, acid rain became a major problem due to the release of sulfur dioxide and nitrogen oxides from burning fossil fuels. This acid rain damaged forests, lakes, and buildings. In response, countries implemented regulations to reduce emissions of these pollutants. Today, the problem of acid rain has been largely solved. Similarly, in the 1980s, it was discovered that the release of chlorofluorocarbons (CFCs) was depleting the ozone layer in the upper atmosphere, producing the ozone hole. In response, the international community came together to sign the Montreal Protocol, which phased out the production of CFCs. As a result, the ozone layer is slowly recovering. These are cherry-picked examples, certainly, but they show what is possible when a problem is understood and considered important enough to solve.
So what does all this tell us about the doomsday issues that scare us now? I am not arguing that we should be complacent about the challenges we face, but rather that we shouldn't assume the worst will happen. We are at least somewhat sensible. The fears that abuse of AI will lead to super-viruses (computer or biological), floods of dangerously convincing fake news, the end of art, or the unraveling of human society may be overblown. Already, models like ChatGPT moderate the content they produce. High-quality fact-checking programs may in the future be able to counter generated fake news. AI-generated viruses can be countered by AI antiviruses. Laws may be enacted that protect humans from AI competition (for better and worse).
Solutions can usually be found if we choose to seek them. And even if we don't, natural feedback loops often counter the most negative outcomes. There is a place for worry. A large place. But rather than panic, let's just sit back, enumerate the problems, grab a cup of coffee, and then either solve them or adapt to a new world.