Dangerous Minds: The Risk of Belief Systems. Published in The European, November 2014
On the dichotomy between belief and action.
As a species, we face global threats and potential existential risks. An existential risk, as defined by Oxford University philosopher Nick Bostrom, is a situation "that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential... one where humankind as a whole is imperiled, with major adverse consequences for the course of human civilization for all time to come."
One might ask: Well, what kinds of things constitute existential risks? We might think of an impact like the one that wiped out the dinosaurs, but in reality there are possibilities far more probable than an asteroid or comet strike. What happens if political tensions escalate into nuclear war? What about the misuse or poor handling of the technologies we expect to develop in the next century? Artificial intelligence, nanotechnology, and biotechnology all have as much potential to be global threats as they do to be tools for global good. I started thinking about existential risks a few years ago after I picked up Nick Bostrom and Milan Cirkovic's tome Global Catastrophic Risks. My initial reaction to contemplating a vastly dystopian, and possibly finite, future for myself and the generations to come was to assume that a huge number of people must be working on mitigating these risks. In reality, it would take less than a minute to count the total number of people focusing on this sort of research.
Isolating the most fundamental problems
I have been considering the complexity around risk mitigation for the last two years. It has been an attempt to find clarity in a web of systems that interplay to prevent more positive action. I originally thought that too few individuals were aware of these issues, so whilst living in Germany I started a discussion group called Berlin Singularity in a bid to start the conversation, especially with business and finance students. After a year of lecturing in Berlin universities and organizing events through Berlin Singularity, I realized that the information gap was only part of a much deeper problem. Did any of my students go on to work in areas where their goal was to maximize positive impact? Did any of them create start-ups to tackle problems bigger than consumer ones? Maybe a few, but more often than not I was merely the crazy teacher who contradicted the rest of the faculty, who thought the biggest problem in the world was figuring out how to get humans to buy more stuff.
The problem, we might intuitively conclude, is that there aren't really jobs out there that combine values and business. What did we expect from students... to go into philanthropy or further academic research? To transfer to philosophy and work out what was good first? Only a handful of my fellow philosophy graduates seemed able to resist making career decisions based purely on salary. How would I prevent these business students from losing interest in my class and going on to take jobs at terrible German start-up clones promising a relaxed environment, an endless supply of free hoodies, and a yearly income of 100k?
It seems hard enough to get extremely successful start-ups going in any area, let alone when the game plan is to collectively work towards a better future for the rest of humanity. Can anyone save the world?
Risk and fear
Human beings are naturally risk-averse. Risk is the unpredictability of outcomes. Averting risk allows us to build a more predictable model of the world, which brings comfort. Admitting that we don't understand how the world works, and then trying to understand some slice of it, is terrifying. It's far easier to accept, without scrutiny, the world model of others. It's even more comforting to then justify that model's truth by the sheer number of people who share it. To stand outside of viral ideas (memetic beliefs) and take an assumption-free approach to understanding the world is one of the hardest challenges faced by individuals today, and a genuinely frightening one.
Resolving fear by gathering the appropriate information is not feasible in every situation (nor does the information necessarily always exist), but when we do feel fear, there are benefits to reframing it in our minds as an absence of knowledge. When I feel afraid, I turn the thought around: "I must come to understand this situation better." When I write down my list of fears, I note the information that would resolve each of them. Sometimes the information I need to handle the situation isn't yet known: I don't want to die, but I don't currently understand the exact pathway to overcoming the failure of my biological system. Still, in order to resolve the fear to some degree, I strive to understand the potential situation better and act to improve my outcomes.
The real existential risk
We recognize that there are systems. We may also recognize that they are broken. What we don't believe is that we can trick them, or even influence them. We suspect there might be some people who can (or perhaps no one can), but definitely not us. We are afraid of unfriendly AI, but we do not recognize our own lack of information, or the need to contribute towards finding the missing knowledge, in order to mitigate the risk. What separates some of us from the rest of the world is that at least we have had the opportunity to recognize the problem. We know about the need to promote the construction of ethical artificial intelligence. But we often don't think we can personally do anything about it, so we rely on others to mitigate the problem.
When individuals aren't immediately faced with a problem, it's even more complicated. They either don't know about the problem or don't think they can do anything about it. Perhaps they assume other people can solve the hypothetical problem better, regardless of any contribution they could make. The question I pose, then, is this: is the biggest existential risk not simply the fallibility of the human mind? We continue to ignore the threats presented to us, and we go so far as to create threats against ourselves. The total potential risk created by the negative impacts of people's belief systems seems larger than any external existential risk in this world. When we get down to these fundamental challenges around people's belief systems, we realize how hard the path to safeguarding the future truly is.