| ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄|
| Science has always been Political |
|___________|
(__/) ||
(•ㅅ•) ||
/ づ
#HistorianSignBunny
— Dr. Audra J. Wolfe (@ColdWarScience) July 12, 2018
The tweet sparked a brisk and mostly unproductive debate. If you want to see people talking past each other, snide comments, and applause lights, check out the thread. One of the few productive exchanges centres on bridges.
Bridges are clearly a product of science (and its offspring, engineering) – only the simplest bridges can be built without scientific knowledge. Bridges also clearly have a political dimension. Not only are bridges normally the product of politics, they are also embedded in a broader political fabric. They change how a space can be used; they change its geography. They make certain actions – like commuting – easier and can drive urban changes like suburb growth and gentrification. Maintaining bridges uses resources (time, money, skilled labour) that then cannot be used elsewhere. These are all clearly political concerns and they all intersect deeply with existing power dynamics.
Even if no other part of science were political (and I don’t think that position would be defensible; many other branches of science lead to things like bridges existing), bridges prove that science certainly can be political. I can’t deny this. I don’t want to deny this.
I also cannot deny that I’m deeply skeptical of the motives of anyone who trumpets a political view of science.
You see, science has unfortunate political implications for many movements. To give just one example, greenhouse gasses are causing global warming. Many conservative politicians have a vested interest in ignoring this or muddying the water, such that the scientific consensus “greenhouse gasses are increasing global temperatures” is conflated with the political position “we should burn less fossil fuel”. This lets a dismissal of the political position (“a carbon tax makes driving more expensive; it’s just a war on cars”) also serve (via motivated cognition) as a dismissal of the scientific one.
(Would that carbon in the atmosphere could be dismissed so easily.)
While Dr. Wolfe is no climate change denier, it is hard to square her claim that calling science political is a neutral statement:
You are getting warmer. Fascinating how “science” is read as “empirical findings” and “political” as inherently bad.
— Dr. Audra J. Wolfe (@ColdWarScience) July 12, 2018
With the examples she chooses to demonstrate this:
When chemists choose to produce synthetics at an industrial scale without investigating their safety, that’s a political choice.
— Dr. Audra J. Wolfe (@ColdWarScience) July 12, 2018
When pointing out that science is political, we could also say things like “we chose to target polio for a major elimination effort before cancer, partially because it largely affected poor children instead of rich adults (as rich kids escaped polio in their summer homes)”. Talking about the ways that science has been a tool for protecting the most vulnerable paints a very different picture of what its political nature is about.
(I don’t think an argument over which view is more correct is ever likely to be particularly productive, but I do want to leave you with a few examples for my position.)
Dr. Wolfe is able to claim that politics is neutral, despite offering only negative examples of its effects, through a bait and switch between two definitions of “politics”. The bait is a technical and neutral definition, something along the lines of: “related to how we arrange and govern our society”. The switch is a more common definition, like: “engaging in and related to partisan politics”.
I start to feel that someone is being at least a bit disingenuous when they only furnish negative examples, examples that relate to this second meaning of the word political, then ask why their critics view politics as “inherently bad” (referring here to the first definition).
This sort of bait and switch pops up enough in post-modernist “all knowledge is human and constructed by existing hierarchies” places that someone got annoyed enough to coin a name for it: the motte and bailey fallacy.
It’s named after the early-medieval form of castle, pictured above. The bailey is the courtyard where people actually live and work; the motte is the fortified mound they retreat to when attacked – easy to defend, but not somewhere you’d want to stay. This mirrors the two parts of the motte and bailey fallacy. The “motte” is the easily defensible statement (science is political because all human group activities are political) and the “bailey” is the more controversial belief actually held by the speaker (something like “we can’t trust science because of the number of men in it” or “we can’t trust science because it’s dominated by liberals”).
I have a lot of sympathy for the people in the twitter thread who jumped to defend positions that looked ridiculous from the perspective of “science is subject to the same forces as any other collective human endeavour” when they believed they were arguing with “science is a tool of right-wing interests”. There are a great many progressive scientists who might agree with Dr. Wolfe on many issues, but strongly disagree with what her position seems to be here. There are many of us who believe that science, if not necessary for a progressive mission, is necessary for the related humanistic mission of freeing humanity from drudgery, hunger, and disease.
It is true that we shouldn’t uncritically believe science. But the work of being a critical observer of science should not be about running an inquisition into scientists’ political beliefs. That’s how we get climate change deniers doxxing climate scientists. Critical observation of science is the much more boring work of checking theories for genuine scientific mistakes, looking for P-hacking, and double-checking that no one got so invested in their exciting results that they fudged their analyses to support them. Critical belief often hinges on weird mathematical identities, not political views.
When anyone says science is political and then goes on to emphasize all of the negatives of this statement, they’re giving people permission to believe their political views (like “gas should be cheap” or “vaccines are unnatural”) over the hard truths of science. And that has real consequences.
Saying that “science is political” is also political. And it’s one of those political things that is more likely than not to be driven by partisan politics. No one trumpets this unless they feel one of their political positions is endangered by empirical evidence. When talking with someone making this claim, it’s always good to keep sight of that.
One of the best things about taking physics classes is that the equations you learn are directly applicable to the real world. Every so often, while reading a book or watching a movie, I’m seized by the sudden urge to check it for plausibility. A few scratches on a piece of paper later and I will generally know one way or the other.
One of the most amusing things I’ve found doing this is that the people who come up with the statistics for Pokémon definitely don’t have any sort of education in physics.
Take Onix. Onix is a rock/ground Pokémon renowned for its large size and sturdiness. Its physical statistics reflect this. It’s 8.8 metres (28′) long and weighs 210 kg (463 lbs).
Onix, being tough. I don’t own the copyright to this image, but I’m claiming fair use for purpose of criticism. Source.
Surely such a large and tough Pokémon should be very, very dense, right? Density is such an important tactile cue for us. Don’t believe me? Pick up a large piece of solid metal. Its surprising weight will make you take it seriously.
Let’s check if Onix would be taken seriously, shall we? Density is equal to mass divided by volume. We use the symbol ρ to represent density, which gives us the following equation: ρ = m / V
We already know Onix’s mass. Now we just need to calculate its volume. Luckily Onix is pretty cylindrical, so we can approximate it with a cylinder. The equation for the volume of a cylinder is pretty simple: V = πr²h
Where π is the ratio between the circumference of a circle and its diameter (approximately 3.1415…, no matter what Indiana says), r is the radius of the circle (always one half the diameter), and h is the height of the cylinder.
Given that we know Onix’s length (the height of our cylinder), we just need its diameter. Luckily the Pokémon TV show gives us a sense of scale.
Here’s a picture of Onix. Note the kid next to it for scale. I don’t own the copyright to this image, but I’m claiming fair use for purpose of criticism. Source.
Judging by the image, Onix probably has an average diameter somewhere around a metre (3 feet for the Americans). This means Onix has a radius of 0.5 metres and a height of 8.8 metres. When we put these into our equation, we get: V = π × (0.5 m)² × 8.8 m
For a volume of approximately 6.9 m³. To get a comparison, I turned to Wolfram Alpha, which told me that this is about 40% of the volume of a gray whale or a freight container (which incidentally implies that gray whales are about the size of standard freight containers).
Armed with a volume, we can calculate a density: ρ = 210 kg ÷ 6.9 m³ ≈ 30.4 kg/m³
Okay, so we know that Onix’s density is about 30.4 kg/m³, but what does that mean?
Well it’s currently hard to compare. I’m much more used to seeing densities of sturdy materials expressed in tonnes per cubic metre or grams per cubic centimetre than I am seeing them expressed in kilograms per cubic metre. Luckily, it’s easy to convert between these.
There are 1,000 kilograms in a tonne. If we divide our density by a thousand, we can calculate a new density for Onix of 0.0304 t/m³.
How does this fit in with common materials, like wood, Styrofoam, water, stone, and metal?
From this chart, you can see that Onix’s density is eerily close to Styrofoam. Even the notoriously light balsa wood is five times denser than him. Actual rock is about 85 times denser. If Onix was made of granite, it would weigh 18 tonnes, much heavier than even Snorlax (the heaviest of the original Pokémon at 460kg).
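If you want to check the arithmetic yourself (or run it for a different Pokémon), here’s a minimal Python sketch of the whole calculation. The reference densities are approximate textbook values I’m supplying for comparison; they aren’t taken from the original chart.

```python
import math

def cylinder_volume(radius_m: float, height_m: float) -> float:
    """Volume of a cylinder, in cubic metres."""
    return math.pi * radius_m ** 2 * height_m

# Onix's published stats, approximating it as a cylinder about a metre across.
mass_kg = 210
volume_m3 = cylinder_volume(radius_m=0.5, height_m=8.8)  # ~6.9 m^3
density_kg_m3 = mass_kg / volume_m3                      # ~30 kg/m^3

# Rough reference densities in kg/m^3 (approximate values, not from the chart).
materials = {"styrofoam": 40, "balsa wood": 160, "water": 1000,
             "granite": 2650, "steel": 7850}

print(f"Onix: {volume_m3:.1f} m^3, {density_kg_m3:.1f} kg/m^3")
for name, rho in materials.items():
    print(f"  {name} is about {rho / density_kg_m3:.0f}x as dense as Onix")
```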
While most people wouldn’t be able to pick Onix up (it may not be dense, but it is big), it wouldn’t be impossible to drag it. Picking up part of it would feel disconcertingly light, like picking up an aluminum ladder or carbon fibre bike, only more so.
This picture is unrealistic. Because of its density, no more than 3% of Onix can be below the water. I don’t own the copyright to this image, but I’m claiming fair use for purpose of criticism. Source.
How did the creators of Pokémon accidentally bestow one of the most famous of their creations with a hilariously unrealistic density?
I have a pet theory.
I went to school for nanotechnology engineering. One of the most important things we looked into was how equations scaled with size.
Humans are really good at intuiting linear scaling. When something scales linearly, every twofold change in one quantity brings about a twofold change in another. Time and speed scale linearly (albeit inversely). Double your speed and the trip takes half the time. This is so simple that it rarely requires explanation.
Unfortunately for our intuitions, many physical quantities don’t scale linearly. These were the cases that were important for me and my classmates to learn, because until we internalized them, our intuitions were useless on the nanoscale. Many forces, for example, scale such that they become incredibly strong incredibly quickly at small distances. This leads to nanoscale systems exhibiting a stickiness that is hard on our intuitions.
It isn’t just forces that have weird scaling though. Geometry often trips people up too.
In geometry, perimeter is the only quantity I can think of that scales linearly with size. Double the length of the sides of a square and the perimeter doubles. The area, however, does not. Area is quadratically related to side length. Double the side length of a square and you’ll find the area quadruples. Triple it and the area increases nine times. Area varies with the square of the length, a property that isn’t just true of squares. The area of a circle is just as tied to the square of its radius as a square’s area is to the square of its side length.
Volume is even trickier than area. It scales with the third power of the size. Double the size of a cube and its volume increases eight-fold. Triple it, and you’ll see 27 times the volume. Volume increases with the cube (which, again, works for shapes other than cubes) of the length.
If you look at the weights of Pokémon, you’ll see that the ones that are the size of humans have fairly realistic weights. Sandslash is the size of a child (it stands 1m/3′ high) and weighs a fairly reasonable 29.5kg.
(This only works for Pokémon really close to human size. I’d hoped that Snorlax would be about as dense as marshmallows so I could do a fun comparison, but it turns out that marshmallows are four times as dense as Snorlax – despite marshmallows only having a density of ~0.5t/m3)
Beyond these touchstones, you’ll see that the designers of Pokémon increased their weight linearly with size. Onix is a bit more than eight times as long as Sandslash and weighs seven times as much.
Unfortunately for realism, weight is just density times volume and, as I just said, volume increases with the cube of length. Onix shouldn’t weigh seven or even eight times as much as Sandslash. At a minimum, its weight should be eight times eight times eight multiples of Sandslash’s; a full 512 times as much.
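Here’s a small sketch of the gap between the linear scaling the designers seem to have used and the cubic scaling that constant density demands, using Sandslash’s published height and weight as the baseline. Only the 1 m, 29.5 kg, and 8.8 m figures come from the post; the rest is arithmetic.

```python
# Sandslash as the baseline: 1 m tall, 29.5 kg.
BASELINE_HEIGHT_M = 1.0
BASELINE_WEIGHT_KG = 29.5

def scaled_weight(scale: float, exponent: float) -> float:
    """Weight after every linear dimension is multiplied by `scale`.

    exponent=1 is the (unphysical) linear scaling the designers seem to use;
    exponent=3 is the cubic scaling that constant density actually demands.
    """
    return BASELINE_WEIGHT_KG * scale ** exponent

onix_scale = 8.8 / BASELINE_HEIGHT_M  # Onix is 8.8 m long

print(f"Linear scaling: {scaled_weight(onix_scale, 1):,.0f} kg")  # in the ballpark of Onix's listed 210 kg
print(f"Cubic scaling:  {scaled_weight(onix_scale, 3):,.0f} kg")  # roughly 20 tonnes
```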
Scaling properties shape how much of the world is arranged. We see extremely large animals more often in the ocean than on land because the strength of bones scales with the square of size, while weight scales with the cube. Become too big and you can’t walk without breaking your bones. Become small and people animate kids’ movies about how strong you are. All of this stems from scaling.
These equations aren’t just important to physicists. They’re important to any science fiction or fantasy writer who wants to tell a realistic story.
Or, at least, to anyone who doesn’t want their work picked apart by physicists.
Under the Partial Test Ban Treaty (PTBT), all nuclear tests except for those underground are banned. Under the Non-Proliferation Treaty (NPT), only the permanent members of the UN Security Council are legally allowed to possess nuclear weapons. Given the public outcry over fallout that led to the PTBT and the worries over widespread nuclear proliferation that led to the NPT, it’s clear that we require something beyond pinky promises to verify that countries are meeting the terms of these treaties.
But how do we do so? How can you tell when a country tests an atomic bomb? How can you tell who did it? And how can one differentiate a bomb on the surface from a bomb in the atmosphere from a bomb in space from a bomb underwater from a bomb underground?
I’m going to focus on two efforts to monitor nuclear weapons: the national security apparatus of the United States and the Comprehensive Test Ban Treaty Organization (CTBTO) Preparatory Commission’s International Monitoring System (IMS). Monitoring falls into five categories: Atmospheric Radionuclide Monitoring, Seismic Monitoring, Space-based Monitoring, Hydroacoustic Monitoring, and Infrasound Monitoring.
Atmospheric Radionuclide Monitoring
Nuclear explosions generate radionuclides, either by dispersing unreacted fuel, as direct products of fission, or by interactions between neutrons and particles in the air or ground. These radionuclides are widely dispersed from any surface testing, while only a few fission products (mainly various radionuclides of the noble gas xenon) can escape from properly conducted underground tests.
For the purposes of minimizing fallout, underground tests are obviously preferred. But because they emit only small amounts of a few radionuclides, they are much harder for radionuclide monitoring to detect.
Detecting physical particles is relatively easy. There are 80 IMS stations scattered around the world. Each is equipped with an air intake and a filter. Every day, the filter is changed and then prepared for analysis. Analysis involves waiting a day (for irrelevant radionuclides to decay), then reading decay events from the filter for a further day. This gives scientists an idea of what radioactive elements are present.
Any deviations from the baseline at a certain station can be indicative of a nuclear weapon test, a nuclear accident, or changing wind patterns bringing known radionuclides (e.g. from a commercial reactor) to a station where they normally aren’t present. Wind analysis and cross validation with other methods are used to corroborate any suspicious events.
Half of the IMS stations are set up to do the more difficult xenon monitoring. Here air is pumped through a material with a reasonably high affinity for xenon. Apparently activated charcoal will work, but more sophisticated alternatives are being developed. The material is then induced to release the xenon (with activated charcoal, this is accomplished via heating). This process is repeated several times, with the output of each step pumped to a fresh piece of activated charcoal. Multiple cycles ensure that only relatively pure xenon gets through to analysis.
Once xenon is collected, isotope analysis must be done to determine which (if any) radionuclides of xenon are present. This is accomplished either by comparing the beta decay of the captured xenon with its gamma decay, or by looking directly at gamma decay with very precise gamma ray measuring devices. Each isotope of xenon has a unique half-life (which affects the frequency with which it emits beta- and gamma-rays) and a unique method of decay (which determines whether the decay products are primarily alpha-, beta-, or gamma-rays). Comparing the observed decay events to these “fingerprints” allows the relative abundance of xenon nuclides to be estimated.
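As a rough illustration of the decay arithmetic behind those fingerprints, here’s a sketch of how the relative abundance of two xenon radionuclides shifts over time. The half-lives are approximate reference values I’m supplying, and the suggestion that isotopic ratios hint at how fresh a plume is should be read as my gloss, not a claim from the post.

```python
# Approximate half-lives, in hours, for xenon radionuclides of interest.
# These are reference values supplied for illustration, not from the post.
HALF_LIFE_HOURS = {
    "Xe-131m": 11.8 * 24,
    "Xe-133": 5.25 * 24,
    "Xe-133m": 2.2 * 24,
    "Xe-135": 9.1,
}

def remaining_fraction(nuclide: str, hours: float) -> float:
    """Fraction of a nuclide's initial activity left after `hours` of decay."""
    return 0.5 ** (hours / HALF_LIFE_HOURS[nuclide])

# Short-lived Xe-135 fades much faster than Xe-133, so the ratio between the
# two changes quickly with the age of whatever produced them.
for hours in (0, 24, 72):
    ratio = remaining_fraction("Xe-135", hours) / remaining_fraction("Xe-133", hours)
    print(f"after {hours:3d} h, the Xe-135:Xe-133 ratio is {ratio:.3f} of its initial value")
```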
There are some background xenon radionuclides from nuclear reactors and even more from medical isotope production (where we create unstable nuclides in nuclear reactors for use in medical procedures). Looking at global background data you can see the medical isotope production in Ontario, Europe, Argentina, Australia and South Africa. I wonder if this background effect makes world powers cautious about new medical isotope production facilities in countries that are at risk of pursuing nuclear weapons. Could Iran’s planned medical isotope complex have been used to mask nuclear tests?
Not content merely to host several monitoring stations and be party to the data of the whole global network of IMS stations, the United States also has the WC-135 “Constant Phoenix” plane, a Boeing C-135 equipped with mobile versions of particulate and xenon detectors. The two WC-135s can be scrambled anywhere a nuclear explosion is suspected to look for evidence. A WC-135 gave us the first confirmation that the blast from the 2006 North Korean nuclear test was indeed nuclear, several days before the IMS station in Yellowknife, Canada confirmed a spike in radioactive xenon and wind modelling pinpointed the probable location as inside North Korea.
Seismic Monitoring
Given that fewer monitoring stations are equipped with xenon radionuclide detectors and that the background “noise” from isotope production can make radioactive xenon from nuclear tests hard to positively identify, it might seem like nuclear tests are easy to hide underground.
That isn’t the case.
A global network of seismometers ensures that any underground nuclear explosion is promptly detected. These are the same seismometers that organizations like the USGS (United States Geological Survey) use to detect and pinpoint earthquakes. In fact, the USGS provides some of the 120 auxiliary stations that the CTBTO can call on to supplement its fifty seismic monitoring stations.
Seismometers are always on, looking for seismic disturbances. Substantial underground nuclear tests produce shockwaves that are well within the detection limit of modern seismometers. The sub-kiloton North Korean nuclear test in 2006 appears to have been registered as equivalent to a magnitude 4.1 earthquake. A quick survey of ongoing earthquakes should probably show you dozens that have been detected that are less powerful than even that small North Korean test.
This probably leads you to the same question I found myself asking, namely: “if earthquakes are so common and these detectors are so sensitive, how can they ever tell nuclear detonations from earthquakes?”
It turns out that underground nuclear explosions might rattle seismometers like earthquakes do, but they do so with characteristics very different from most earthquakes.
First, the waveform is different. Imagine you’re holding a slinky and a friend is holding the other end. There are two main ways you can create waves. The first is by shaking it from side to side or up and down. Either way, there’s a perspective from which these waves will look like the letter “s”.
The second type of wave can be made by moving your arm forward and backwards, like you’re throwing and catching a ball. These waves will cause moving regions where the slinky is bunched more tightly together and other regions where it is more loosely packed.
These are analogous to the two main types of body waves in seismology. The first (the s-shaped one) is called an S-wave (although the “S” here stands for “shear” or “secondary” and only indicates the shape by coincidence), while the second is called a P-wave (for “pressure” or “primary”).
I couldn’t find a good free version of this, so I had to make it myself. Licensed (like everything I create for my blog) CC-BY-NC-SA v4.0.
Earthquakes normally have a mix of P-waves and S-waves, as well as surface waves created by interference between the two. This is because earthquakes are caused by slipping tectonic plates. This slipping gives some lateral motion to the resulting waves. Nuclear explosions lack this side to side motion. The single, sharp impact from them on the surrounding rocks is equivalent to the wave you’d get if you thrust your arm forward while holding a slinky. It’s almost all P-wave and almost no S-wave. This is very distinctive against a background of earthquakes. The CTBTO is kind enough to show what this difference looks like; in this image, the top event is a nuclear test and the bottom event is an earthquake of a similar magnitude in a similar location (I apologize for making you click through to see the image, but I don’t host copyrighted images here).
There’s one further way that the waves from nuclear explosions stand out. They’re caused by a single point source, rather than by slippage along kilometres of rock. This means that when many seismic stations work together to find the cause of a particular wave, they’re actually able to pinpoint the source of any explosion, rather than finding a broad front like they would for an earthquake.
The fifty IMS stations automatically provide a continuous stream of data to the CTBTO, which sifts through this data for any events that are overwhelmingly P-waves and have a point source. Further confirmation then comes from the 120 auxiliary stations, which provide data on request. Various national and university seismometer programs get in on this too (probably because it’s good for public relations and therefore helps to justify their budgets), which is why it’s not uncommon to see several estimates of yield soon after seismographs pick up on nuclear tests.
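To give a flavour of that screening step, here’s a toy sketch that flags events whose energy is overwhelmingly in the P-wave. The amplitude numbers and the threshold are invented placeholders; real discrimination uses far more sophisticated waveform analysis.

```python
from dataclasses import dataclass

@dataclass
class SeismicEvent:
    name: str
    p_amplitude: float  # peak P-wave amplitude (arbitrary units)
    s_amplitude: float  # peak S-wave amplitude (arbitrary units)

def looks_like_explosion(event: SeismicEvent, ratio_threshold: float = 5.0) -> bool:
    """Crude screen: a point-like explosion puts almost all of its energy into
    P-waves, so a very high P/S amplitude ratio is worth a closer look."""
    return event.p_amplitude / max(event.s_amplitude, 1e-9) > ratio_threshold

events = [
    SeismicEvent("ordinary earthquake", p_amplitude=1.0, s_amplitude=1.4),
    SeismicEvent("suspicious event", p_amplitude=1.0, s_amplitude=0.08),
]
for event in events:
    verdict = "flag for analysts" if looks_like_explosion(event) else "looks tectonic"
    print(f"{event.name}: {verdict}")
```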
Space Based Monitoring
This is the only type of monitoring that isn’t done by the CTBTO Preparatory Commission, which means that it is handled by state actors – whose interests necessarily veer more towards intelligence gathering than monitoring treaty obligations per se.
The United States began its space-based monitoring program in response to the Partial Test Ban Treaty, which left verification explicitly to the major parties involved. The CTBTO Preparatory Commission was actually formed in response to a different treaty, the Comprehensive Test Ban Treaty, which is not yet fully in force (hence the organization ensuring compliance with it being called the “Preparatory Commission”).
The United States first fulfilled its verification obligations with the Vela satellites, which were equipped with gamma-ray detectors, x-ray detectors, electromagnetic pulse detectors (which can detect the electro-magnetic pulse from high-altitude nuclear detonations) and an optical sensor called a bhangmeter.
Bhangmeters (the name is a reference to bhang, a cannabis preparation, with the implied subtext that you’d have to be high to believe they would work) are composed of a photodiode (a device that produces current when illuminated), a timer, and some filtering components. Bhangmeters are set up to look for the distinctive nuclear “double flash”, caused when the air compressed by a nuclear blast briefly obscures the central fireball.
The bigger a nuclear explosion, the larger the compression and the longer the central fireball is obscured. The timer picks up on this, estimating nuclear yield from the delay between the initial light and its return.
The bhangmeter works because very few natural (or human) phenomena produce flashes that are as bright or distinctive as nuclear detonations. A properly calibrated bhangmeter will filter out continuous phenomena like lightning (or will find them too faint to detect). Other very bright events, like comets breaking up in the upper atmosphere, only provide a single flash.
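Here’s a toy version of the logic a bhangmeter’s timer amounts to: look for two bright intervals separated by a dip and report how long the dip lasted. The brightness trace and the threshold are invented for illustration and bear no resemblance to real sensor data.

```python
def find_double_flash(brightness, threshold=0.5):
    """Return the number of samples between two bright intervals (the 'dip'),
    or None if the trace doesn't look like a nuclear double flash.

    `brightness` is a list of normalized light-sensor samples."""
    bright = [sample > threshold for sample in brightness]
    runs, start = [], None
    for i, is_bright in enumerate(bright + [False]):  # sentinel closes a final run
        if is_bright and start is None:
            start = i
        elif not is_bright and start is not None:
            runs.append((start, i - 1))
            start = None
    if len(runs) != 2:
        return None  # a single flash (e.g. a meteor breaking up) or plain noise
    return runs[1][0] - runs[0][1]

# Invented trace: a sharp initial flash, a dip while the shock front obscures
# the fireball, then a longer, slower second maximum.
trace = [0.0, 0.9, 0.8, 0.2, 0.1, 0.1, 0.4, 0.7, 0.9, 0.8, 0.6, 0.3, 0.1]
gap = find_double_flash(trace)
print(f"double flash with a ~{gap}-sample dip" if gap else "no double flash")
```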
There’s only been one possible false positive since the bhangmeters went live in 1967; a double flash was detected in the Southern Indian Ocean, but repeated sorties by the WC-135s detected no radionuclides. The event has never been conclusively proved to be nuclear or non-nuclear in origin and remains one of the great unsolved mysteries of the age of widespread atomic testing.
By the time of this (possible) false positive, the bhangmeters had also detected 41 genuine nuclear tests.
The Vela satellites are no longer in service, but the key technology they carried (bhangmeters, x-ray detectors, and EMP detectors) lives on in the US GPS satellite constellation, which does double duty as its space-based nuclear sentinels.
One last piece of historical trivia: when looking into unexplained gamma-ray readings produced by the Vela satellites, US scientists discovered gamma-ray bursts, an energetic astronomical phenomenon associated with supernovae and merging neutron stars.
Hydroacoustic Monitoring
Undersea explosions don’t have a double flash, because steam and turbulence quickly obscure the central fireball and don’t clear until well after the fireball has subsided. It’s true that radionuclide detection should eventually turn up evidence of any undersea nuclear tests, but it’s still useful to have a more immediate detection mechanism. That’s where hydroacoustic monitoring comes in.
There are actually two types of hydroacoustic monitoring. There are six stations that use true underwater monitoring with triplets of hydrophones (so that signal direction can be determined via triangulation), which are very sensitive but also very expensive (as hydrophones must be installed at a depth of approximately one kilometre, where sound transmission is best). There are also five land-based stations, which use seismographs on steeply sloped islands to detect the seismic waves that underwater sounds make when they hit land. Land-based monitoring is less accurate, but requires little in the way of specialized hardware, making it much cheaper.
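For a sense of how a triplet of hydrophones yields a direction, here’s a sketch that recovers the bearing of a plane wave from the differences in its arrival times. The hydrophone layout, the sound speed, and the plane-wave assumption are simplifications I’m supplying for illustration.

```python
import math

SOUND_SPEED_M_S = 1480.0  # rough speed of sound in seawater

def direction_from_triplet(positions, arrival_times):
    """Estimate where a plane wave came from, given its arrival times at three
    hydrophones. positions are (x, y) in metres; times are in seconds.
    Returns the direction to the source in degrees counter-clockwise from +x."""
    (x0, y0), (x1, y1), (x2, y2) = positions
    t0, t1, t2 = arrival_times
    # Solve (r_i - r_0) . s = t_i - t_0 for the slowness vector s = u / c,
    # where u is the unit vector along the wave's direction of travel.
    a11, a12, b1 = x1 - x0, y1 - y0, t1 - t0
    a21, a22, b2 = x2 - x0, y2 - y0, t2 - t0
    det = a11 * a22 - a12 * a21
    sx = (b1 * a22 - b2 * a12) / det
    sy = (a11 * b2 - a21 * b1) / det
    # The source lies opposite to the direction of travel.
    return math.degrees(math.atan2(-sy, -sx)) % 360

# Invented layout: a ~2 km triangle of hydrophones and a wave travelling due
# west, i.e. coming from the east (0 degrees in this convention).
positions = [(0.0, 0.0), (2000.0, 0.0), (1000.0, 1700.0)]
travel_direction = (-1.0, 0.0)
times = [(x * travel_direction[0] + y * travel_direction[1]) / SOUND_SPEED_M_S
         for x, y in positions]
print(f"estimated source direction: {direction_from_triplet(positions, times):.0f} degrees")
```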
In either case, data is streamed directly to CTBTO headquarters in Vienna, where it is analyzed and forwarded to states that are party to the CTBT. At the CTBTO, the signal is split into different channels based on a known library of undersea sounds, and explosions are separated from natural phenomena (like volcanoes, tsunamis, and whales) and man-made noises (like gas exploration, commercial shipping, and military drills). Signal processing and analysis – especially of hydrophone data – is a very mature field, so the CTBTO doesn’t lack for techniques to refine its estimates of events.
Infrasound Monitoring
Infrasound monitoring stations are the last part of the global monitoring system and represent the best way for the CTBTO (rather than national governments with the resources to launch satellites) to detect atmospheric nuclear tests. Infrasound stations try to pick up the very low frequency sound waves created by nuclear explosions – and by a host of other things, like volcanoes, planes, and mining.
A key consideration with infrasound stations is reducing background noise. For this, being far away from human habitation and blocked from the wind is ideal. Whenever this cannot be accomplished (e.g. there’s very little cover from the wind in Antarctica, where several of the sixty stations are), more infrasound arrays are needed.
The components of the infrasound arrays look very weird.
Specifically, they look like a bunker that tried to eat four Ferris wheels. Each array actually contains three to eight of these monstrosities. From the CTBTO via Wikimedia Commons.
What you see here are a bunch of pipes that all feed through to a central microbarometer, which is what actually measures the infrasound by detecting slight changes in air pressure. This setup filters out a lot of the wind noise and mostly just lets infrasound through.
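A crude way to see why feeding many spread-out inlets into one microbarometer helps: wind noise is roughly uncorrelated from inlet to inlet, while an infrasound wave (with its very long wavelength) is coherent across the whole array, so averaging N inlets suppresses the noise by roughly √N. The toy simulation below treats wind noise as independent Gaussian samples, which is a big simplification of real turbulence.

```python
import random
import statistics

def array_output(n_inlets: int, signal: float, noise_sd: float = 1.0) -> float:
    """One time-sample of the array: the same coherent infrasound signal at
    every inlet, plus independent wind noise at each, averaged together."""
    return statistics.mean(signal + random.gauss(0, noise_sd) for _ in range(n_inlets))

random.seed(0)
SIGNAL = 0.3  # a coherent infrasound signal weaker than the wind noise at any single inlet
for n_inlets in (1, 4, 16, 64):
    samples = [array_output(n_inlets, SIGNAL) for _ in range(2000)]
    print(f"{n_inlets:3d} inlets: mean {statistics.mean(samples):+.2f}, "
          f"residual noise {statistics.stdev(samples):.2f}")
```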
Like the hydroacoustic monitoring system, data is sent to the CTBTO in real time and analyzed there, presumably drawing on a similar library of recorded nuclear test detonations and employing many of the same signal processing techniques.
Ongoing research into wind noise reduction might eventually make the whole set of stations much more sensitive than it is now. Still, even the current iteration of infrasound monitoring should be enough to detect any nuclear tests in the lower atmosphere.
The CTBTO has a truly great website that really helped me put together this blog post. They provide a basic overview of the four international monitoring systems I described here (they don’t cover space-based monitoring because it’s outside of their remit), as well as pictures, a glossary, and a primer on the analysis they do. If you’d like to read more about how the international monitoring system works and how it came into being, I recommend visiting their website.
This post, like many of the posts in my nuclear weapon series came about because someone asked me a question about nuclear weapons and I found I couldn’t answer quite as authoritatively as I would have liked. Consequently, I’d like to thank Cody Wild and Tessa Alexanian for giving me the impetus to write this.
This post is part of a series on special topics in nuclear weapons. The index for all of my writing on nuclear weapons can be found here. Previous special topics posts include laser enrichment and the North Korean nuclear program.
The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.
Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its action, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.
She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.
This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.
Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.
The baby did indeed cry and the dog did indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He’d been distracted by something else, something less sympathetic. But his rider, his “internal press secretary”, immediately came up with an excuse and told it, without any conscious input or intent to deceive.
We all tell these sorts of flattering lies reflexively. They take the form of slight, harmless embellishments to make our stories more flattering or interesting, or our apologies more sympathetic.
The key insight here isn’t that we’re all compulsive liars. It’s that the “I” we like to think runs our life doesn’t, really. Sometimes we make decisions, especially ones the elephant doesn’t think it can handle (high-stakes apologies, anyone?), but normally decisions happen before we even think about them. From Haidt’s perspective, “I” is really “we”: the elephant and its rider. And we need to be careful to give the elephant its due, even though it’s quiet.
Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.
He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning, stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the times of the Greeks) that right thought had to precede right action.
Haidt was initially enamoured with Kohlberg’s taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren’t as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that are Western, Educated, Industrialized, Rich, and Democratic – and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.
For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.
That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered the students’ sense of “well, as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted with the weakness of their arguments, they’d go speechless.
This made Haidt suspect that two entirely separate processes were going on. There was a fast one for deciding and a slower one for explaining. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.
Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.
There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or, as the rationalists might call them, “passions”). Contrary to the rationalist expectation, these people’s lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions.
Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.
Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stemmed from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be “prewired” to have a strong loyalty foundation, but a series of betrayals and let downs early in life might convince you that loyalty is just a lie, told to control idealists.
Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face to face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.
Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary and mainly used to convince other people. I’m not sure how testable making evolutionary conclusions about this is, but it does seem plausible for there to be selection pressure to make us really good at explaining ourselves and convincing others of our point of view.
As Haidt took this into account and began to survey peoples’ moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.
Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.
Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the less likely they were to have certain modules. Conservatives tended to have all modules in a fairly equal proportion, liberals tended to be lacking three. Libertarians were lacking a whopping four, which might explain why everyone tends to believe they’re the worst.
The six moral foundations are:
Care/Harm
This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved in order to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as cute cat videos or baby seals.
An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this would perhaps be observed if the brain was wired for minimizing negative predictive error (i.e. not mistaking a scene in which there is a lion for a scene without a lion), rather than positive predictive error (i.e. not mistaking a scene without a lion for a scene with a lion). If you minimize positive predictive error, you’ll never be frightened by a shadow, but you might get eaten by a lion.
Fairness/Cheating
This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.
Loyalty/Betrayal
This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it was supposed to ensure coherent groups.
Authority/Subversion
This is the moral foundation that makes people obey their boss without talking back or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may once have been very useful to have people believe and obey their elders without question (e.g. when the elders say “don’t drink that water, it’s poisoned”, no one does, and the story can be passed down and keep people safe without someone having to die every few years to prove that the water is indeed poisoned).
Sanctity/Degradation
This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.
The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.
Liberty/Oppression
This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.
Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.
–
Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).
Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.
Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many auspices of the government and have been trying to use it to guarantee their version of liberty (with mixed results and many reversals) ever since.
Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.
It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.
That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.
The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.
Moral foundation theory gave me a vocabulary for some of the political writing I was doing last year. After the Conservative (Party of Canada) Leadership Convention, I talked about social conservative legislation as a way to help bind people to collective morality. I also talked about how holding other values very strongly and your values not at all can make people look diametrically opposed to you.
The third and final section of The Righteous Mind further focuses on political tribes. Its central metaphor is that humans are “90% chimp, 10% bee”. Its central purpose is an attempt to show how humans might have been subject to group selection and how our groupishness is important to our morality.
Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.
Haidt walks through the arguments against group selection and shows how they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if that seemed the only way to protect the herd’s young. This gene might help more deer in the herd reach adulthood, but it would also lead to any deer that had it having fewer children. There’s certainly an advantage to the herd if some members have this gene, but there’s no advantage to the carriers and a lot of advantage to every deer in the herd who doesn’t carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.
But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.
Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn’t. When humans began to form larger and larger societies, it makes sense that the societies who could better coordinate selfless behaviour would do better than those that couldn’t. And this isn’t just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at “turning resources into offspring”.
A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).
Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society could become, the more it would be able to outcompete its nearby rivals, whether directly or indirectly, and so increase the proportion of its conditionally selfless genes in the human gene pool.
Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.
The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.
The last few chapters are an exploration of how individualism can undermine the hive switch, and of several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society’s ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I’ll talk more about moral capital later in the review.
II – On Shaky Foundations
Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.
You may have heard that psychology is currently in the midst of a “replication crisis“. Much (I’d crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.
Here’s what the summary of Chapter 3 looks like with the offending evidence removed:
Here’s an incomplete list of claims that didn’t replicate:
IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we’d like to believe.
Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.
The proper attitude with which to view psychology studies these days is extreme scepticism. There are a series of bad incentives (it’s harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) that have led scientists in psychology (and other fields) to inadvertently and advertently publish false results. In any field in which you expect true discoveries to be rare (and I think “interesting and counter-intuitive things about the human brain” fits that bill), you shouldn’t allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading “Why Most Published Research Findings Are False” (Ioannidis 2005).
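To make the “true discoveries are rare” point concrete, here’s a sketch of the arithmetic at the heart of Ioannidis’s argument: the probability that a statistically significant finding reflects a real effect, given the base rate of true hypotheses, statistical power, and the false-positive rate. The specific numbers below are illustrative, not taken from the paper.

```python
def prob_true_given_significant(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Probability that a statistically significant finding reflects a real effect.

    prior: fraction of tested hypotheses that are actually true
    power: chance of detecting a real effect (1 - beta)
    alpha: chance of a false positive when there is no effect
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# If only 1 in 20 tested hypotheses is true, even well-powered, honestly run
# studies yield 'significant' results that are wrong almost half the time, and
# low power or p-hacking (an effectively larger alpha) makes things far worse.
for prior in (0.5, 0.2, 0.05):
    print(f"prior {prior:.2f}: P(true | significant) = {prob_true_given_significant(prior):.2f}")
```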
Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman’s work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It’s possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it’s hard to deny that they’ve come up with a reasonable clustering of the possibility space of human values.
I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.
Haidt’s moral relativism around patriarchal cultures was the other.
III – Less and Less WEIRD
It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundation theory showed that morally, at least, it was entirely false.
Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.
His willingness to get outside of his bubble and to learn from others is laudable.
But.
There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?
I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.
It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.
Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.
Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?
It's frustrating when Haidt is lecturing liberals on their "deficient" moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. "Can't you see how this other society knows some moral truths [like men holding authority over women] that we've lost" is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It's easy to lecture others on the hard sacrifices society "must" make – and far harder to look for sacrifices that will mainly affect you personally.
It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!
Not only that, but perhaps a female author would have given more pages to the observation that women's and children's role in societal altruism was just as important as that of men (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.
Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.
That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.
IV – What if Liberals are Wrong?
There is a principle called “Chesterton’s Fence”, which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said “no!”, resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.
There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.
Here’s what the argument looks like:
Many years ago, western nations had a unified moral framework. This framework supported people in making long term decisions and acting in a pro-social manner. There are many people who want to act differently than they would if left to their own devices, and this framework helped them to do that.
Liberals began to dismantle this system in the sixties. They saw hierarchies and people being unable to do the things they wanted to do, so tried to take down the whole edifice without first checking if any of it was doing anything important.
Here's the thing. All of these trends affect well-educated and well-off liberals the least. We're safe from crime in good neighbourhoods. We overwhelmingly wait until stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have; rehab to help us kick any drug habits we pick up.
Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun, take risks, but know that we have a safety net waiting to catch us should we fall.
The conservative appeal to tradition points out that our good time might be at the expense of the poor. It asks us if our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.
But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn't mess kids up) is baffling. The sensible position would have been "how can we use this to make marriage cool again", not "how long can we delay this".
This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.
I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.
V – What if Liberals Listened?
In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves "Rationalists". They keep this moniker despite agreeing with Haidt about the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded they give you a political affiliation, they would probably either say "liberal" or "libertarian".
The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).
The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren't very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care to ensure that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.
This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).
No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.
Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.
This is all of it, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It’s an example of what I wished for at the end of the last section applied to the real world.
VI – Is or Ought?
I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.
I can't wait for someone other than Haidt to write a synthesis; a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.
Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.
I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.
The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.
Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.
At the start of The Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead by scientists alone.
But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.
In this area, the philosophers deserve to keep their monopoly a little longer.
The first time I tried vegetarianism, I ended up deficient in B12. Since then, I’ve realized just how bad vitamin B12 deficiency is (hint: it can cause irreversible neural damage) and resolved to get it right this time.
I’m currently eating no meat, very little milk, almost no eggs, and a fair amount of cheese. I consider clams, oysters, and mussels to be morally (if not taxonomically) vegetables, but am too lazy to eat them regularly. To figure out what this diet put me at risk for, I trolled PubMed [1] until I found a recent article arguing for a vegan diet, then independently checked their nutritional recommendations.
Based on this, I’ve made a number of changes to my diet. I now take two vitamins in the morning and a slew of supplements in sugar-free fruit juice when I get home from work [2]. I hope the combined effect of this will be to protect me from any nutritional problems.
Pictured: the slew. Next: The science!
Once I went to all the work of collecting information and reading through paper abstracts, I realized that other people in the same situation might find this research helpful. I've chosen to present everything as my diet, not my recommendations. This is what is currently working for me, not necessarily what is "correct" or what would work for anyone else. Diet is very personal and I'm no expert, so I've taken great pains to avoid the word "should" here.
That caveat out of the way, let’s get into the details!
Protein
Eating cheese gives a relatively easy (and low suffering) source of complete protein, but I didn’t want all of my protein to come from cheese. Therefore, it was heartening to find there are many easy ways to get complete protein from plants. These include combinations (like hummus + pitas or rice + beans) or quinoa.
I try to make some of my lunches revolve around these sources, rather than just cheese.
I've decided to supplement my protein intake with protein powder, because I found it hard to get enough protein with my limited appetite even when I was eating meat. I'm aiming for 1 g/kg daily, to be on the safe side; estimates of the minimum daily requirement range from 0.83 to 0.93 g/kg/day, and I'm rather more active than the average North American, especially in the summer. I first tried whey, but found this incredibly hard on my stomach, so I've shifted to an unflavoured multiple source vegetable protein that I find not at all unpleasant when mixed with fruit juice.
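For concreteness, here's the kind of arithmetic involved. The body weight, the protein I'm assuming comes from food, and the scoop size are made-up illustrative numbers, not my actual figures:

```python
# Back-of-the-envelope protein math. The 70 kg body weight, the 45 g from
# food, and the 30 g scoop size are illustrative assumptions, not my figures.

body_weight_kg = 70
minimum_g_per_kg = 0.83    # low end of the estimates cited above
target_g_per_kg = 1.0      # the more conservative target I'm aiming for

minimum_daily_g = body_weight_kg * minimum_g_per_kg   # ~58 g
target_daily_g = body_weight_kg * target_g_per_kg     # 70 g

protein_from_food_g = 45   # assumed contribution from cheese, legumes, quinoa
scoop_g = 30               # assumed protein per scoop of powder
shortfall_g = target_daily_g - protein_from_food_g

print(f"Target: {target_daily_g:.0f} g/day (minimum ~{minimum_daily_g:.0f} g)")
print(f"Shortfall to cover with powder: {shortfall_g:.0f} g (~{shortfall_g / scoop_g:.1f} scoops)")
```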
Iron
It seems to be kind of hard to become iron deficient; the closer anyone gets to deficiency, the more effective their body becomes at pulling in iron and holding onto what it already has. This is good for vegetarians, because iron from plants is generally not very bioavailable and it’s harder to get iron when consuming significant calcium at the same time (e.g. a spinach salad with cheese or tofu isn’t that great a source of iron, until your body gets desperate for it).
Vitamin B12
As far as I can tell, my diet doesn't include adequate B12 on its own, so I'm supplementing with 1000mcg sublingually each morning. If I did more of my own cooking, I'd consider nutritional yeast grown in B12 rich media, which seems to be effective in small scale trials and anecdotally among people I know. I can't figure out if probiotics work or not; the study above says no. Another study I found said yes, but they were giving out the probiotics in yoghurt, which is naturally a good source of vitamin B12. This baffling decision makes me consider the study hopelessly confounded and has me overall pessimistic about probiotics.
I was frightened when I learned that folic acid fortification is very effective at preventing B12 deficiency driven anemia, but not effective against B12 deficiency driven neural damage (so the neural damage can sneak up with no warning). The NIH recommends keeping folic acid consumption below 1 mg/day, which can be difficult to do when many fortified foods contain much more folic acid than they claim to. If I were eating more breads or cereals I'd be worried about this. For now, I'm just filing it away as a thing to remember; if I ever start eating more bread and cereal, I'm going to want to be very careful to ensure I'm consuming enough B12.
Calcium
One explanation for this is that the meta-analysis that finds no significant relationship between fracture risk and calcium intake didn't include anyone with calcium levels low enough to observe significant effects. That would mean that the study that found vegans broke bones more often found the effect because the vegans they studied were so low on calcium.
Except that study is barely significant (the lower bound of the confidence interval on the relative risk is 1.02, only just above 1). A barely significant study plus a meta-analysis that turns up nothing points pretty strongly at "this was only significant because of p-hacking".
Since yoghurt is apparently an ideal protein source for cycling recovery, and three small containers of it provide an ideal amount of protein for that purpose (and Walmart gives a deal if you buy three cases of 4 of these, which makes it cheap to mix and match flavours), I will probably continue to have significant amounts of yoghurt (and therefore lots of extra calcium) whenever I'm cycling. This will make me feel a bit better about my mountain biking related fracture risk. Otherwise, I'm not going to worry about calcium intake (remember: I am eating plenty of cheese).
I am glad I looked into calcium though, because I found something really cool: Chinese vegetables (like Bok Choi, Chinese cabbage flower leaves, Chinese mustard greens, and Chinese spinach) provide calcium that is much more bioavailable than many western vegetables. I wonder if this is related to prevalence of milk drinking across cultures?
Vitamin D
Vitamin D is important for increasing absorption of calcium. Since Vitamin D is synthesized in the skin in response to light and I live in Canada, I’m pretty likely to be deficient in it, at least in the winter (something like 1 in 35 Canadians are). There was a story going around that the government wouldn’t pay for most vitamin D testing because Canadians are assumed to be deficient in it, but according to the Toronto Star article above, the real reason is that so many charlatans have claimed it can do everything under the sun that demand for tests was becoming a wasteful drain on funds.
My plan is to take a D3 supplement in the months where I don’t regularly wear shorts and a t-shirt. Given that I cycle to work and frequently walk around town, I expect to get more than enough D3 when my skin is actually being exposed to sunlight.
Omega-3 Fatty Acids
From what I read, the absolute level of these is less important than the ratio of Omega-3 fatty acids to Omega-6 fatty acids. An ideal ratio is close to 1:1. The average westerner has a ratio closer to 16:1. While it is clear that this isn't just a vegetarian problem, it seems like omnivores who eat a lot of fish have a healthier ratio. Given that a good ratio is associated with pretty much every good thing under the sun (is this why Japan has such high life expectancies?), I'm pretty motivated to get my ratio to the sweet spot.
As far as I could tell, there was once controversy as to whether non-animal sources of Omega-3 fatty acids could be adequate, but that looks to be cleared up in favour of the vegetarian sources. This is good, because it means that I can follow the recommendations in this paper and consume about 6g of unheated flaxseed oil daily to meet my Omega-3 needs. This goes pretty easily into my fruit juice mixture with my protein powder and creatine.
Creatine
There's some evidence (although no meta-analyses that I could find) that creatine improves cognitive performance in vegetarians (although not in omnivores, probably because it is present in meat [3]). I've decided to take 5g a day because it seems to be largely risk free and it also makes exercise feel somewhat easier.
That’s everything I was able to dig up in a few hours of research. If I’ve made any horrible mistakes, I’d very much like to hear about them.
Footnotes:
[1] I like PubMed because it doesn’t index journals unless they meet certain standards of quality. This doesn’t ensure anything, but it does mean I don’t have to constantly check the impact factor and editorial board of anything I read. ^
[2] The timing is based on convenience, not science. The fruit juice is actually important, because the vitamin C in it makes the iron in my protein powder more bio-available. It also makes the whole mixture palatable, which is what I originally chose it for. ^
[3] Although people I know have also speculated that this might just be the effect of poor diet. That is to say, if you’re studying university vegetarians, you might be primarily studying people who recently adopted vegetarianism and (like I was the first time I tried it) are deficient in a few important things because they’re restricting what already tends to be a somewhat poor student diet. A definitive mechanism will probably have to wait for many more studies. ^
I recently read The Singularity is Near as part of a book club and figured a few other people might benefit from hearing what I got out of it.
First – it was a useful book. I shed a lot of my skepticism of the singularity as I read it. My mindset has shifted from “a lot of this seems impossible” to “some of this seems impossible, but a lot of it is just incredibly hard engineering”. But that’s because I stuck with it – something that probably wouldn’t have happened without the structure of a book club.
I’m not sure Kurzweil is actually the right author for this message. Accelerando (by Charles Stross) covered much of the same material as Singularity, while being incredibly engaging. Kurzweil’s writing is technically fine – he can string a sentence together and he’s clear – but incredibly repetitious. If you read the introduction, the introduction of each chapter, all of Chapter 4 (in my opinion, the only consistently good part of the book proper), and his included responses to critics (the only other interesting part of the whole tome) you’ll get all the worthwhile content, while saving yourself a good ten hours of hearing the same thing over and over and over again. Control-C/Control-V may have been a cheap way for Kurzweil to pad his word count, but it’s expensive to the reader.
I have three other worries about Kurzweil as a futurist. One deals with his understanding of some of the more technical aspects of what he’s talking about, especially physics. Here’s a verbatim quote from Singularity about nuclear weapons:
Alfred Nobel discovered dynamite by probing chemical interactions of molecules. The atomic bomb, which is tens of thousands of times more powerful than dynamite, is based on nuclear interactions involving large atoms, which are much smaller scales of matter than large molecules. The hydrogen bomb, which is thousands of times more powerful than an atomic bomb, is based on interactions involving an even smaller scale: small atoms. Although this insight does not necessarily imply the existence of yet more powerful destructive chain reactions by manipulating subatomic particles, it does make the conjecture [that we can make more powerful weapons using sub-atomics physics] plausible.
This is false on several levels. First, uranium and plutonium (the fissile isotopes used in atomic bombs) are both more massive (in the sense that they contain more matter) than the nitroglycerine in dynamite. Even if fissile isotopes are smaller in one dimension, they are on the same scale as the molecules that make up high explosives. Second, the larger energy output from hydrogen bombs has nothing to do with the relative size of hydrogen vs. uranium. Long time readers will know that the majority of the destructive output of a hydrogen bomb actually comes from fission of the uranium outer shell. Hydrogen bombs (more accurately thermonuclear weapons) derive their immense power from a complicated multi-step process that liberates a lot of energy from the nuclei of atoms.
Kurzweil falling for this plausible (but entirely incorrect) explanation doesn’t speak well of his ability to correctly pick apart the plausible and true from the plausible and false in fields he is unfamiliar with. But it’s this very picking apart that is so critical for someone who wants to undertake such a general survey of science.
My second qualm emerges when Kurzweil talks about AI safety. Or rather, it arises from the lack of any substantive discussion of AI safety in a book about the singularity. As near as I can tell, Kurzweil believes that AI will emerge naturally from attempts to functionally reverse engineer the human brain. Kurzweil believes that because this AI will be essentially human, there will be no problems with value alignment.
This seems very different from the Bostromian paradigm of dangerously misaligned AI: AI with ostensibly benign goals that turn out to be inimical to human life when taken to their logical conclusion. The most common example I’ve heard for this paradigm is an industrial AI tasked with maximizing paper clip production that tiles the entire solar system with paper clips.
Kurzweil is so convinced that the first AI will be based on reverse engineering the brain that he doesn't adequately grapple with the orthogonality thesis: the observation that intelligence and comprehensible (to humans) goals don't need to be correlated. I see no reason to believe Kurzweil that the first super-intelligence will be based off a human. I think to believe that it would be based on a human, you'd have to assume that various university research projects will beat Google and Facebook (who aren't trying to recreate functional human brains in silico) in the race to develop a general AI. I think that is somewhat unrealistic, especially if there are paths to general intelligence that look quite different from our brains.
Finally, I’m unhappy with how Kurzweil’s predictions are sprinkled throughout the book, vague, and don’t include confidence intervals. The only clear prediction I was able to find was Kurzweil’s infamously false assertion that by ~2010, our computers would be split up and worn with our clothing.
It would be much easier to assess Kurzweil’s accuracy as a predictor if he listed all of his predictions together in a single section, applied to them clear target dates (e.g. less vague than: “in the late 2020s”), and gave his credence (as it stands, it is hard to distinguish between things Kurzweil believes are very likely and things he views as only somewhat likely). Currently any attempts to assess Kurzweil’s accuracy are very sensitive to what you choose to view as “a prediction” and how you interpret his timing. More clarity would make this unambiguous.
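To sketch what that kind of assessment could look like, here's a toy prediction record and a Brier-score calculation. The entries, dates, and credences are invented placeholders, not Kurzweil's actual claims:

```python
# A sketch of machine-checkable predictions. The entries are invented
# placeholders, not Kurzweil's actual claims or credences.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    resolve_by: int                    # a concrete year, not "the late 2020s"
    credence: float                    # stated probability the claim comes true
    came_true: Optional[bool] = None   # filled in once the date passes

predictions = [
    Prediction("Example: computers are mostly worn in clothing", 2010, 0.9, False),
    Prediction("Example: a machine passes a rigorous Turing test", 2029, 0.7, None),
]

def brier_score(resolved):
    """Mean squared gap between credence and outcome; lower is better
    (always guessing 50% scores 0.25)."""
    return sum((p.credence - (1.0 if p.came_true else 0.0)) ** 2 for p in resolved) / len(resolved)

resolved = [p for p in predictions if p.came_true is not None]
print(f"Brier score over {len(resolved)} resolved prediction(s): {brier_score(resolved):.2f}")
```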
Furthermore, we've already begun to bump up against the limit on clock speed in silicon; we can't really run silicon chips at higher clock rates without melting them. This is unfortunate, because speed ups in clock time are much nicer than increased parallelism. Almost all programs benefit from quicker processing, while only certain programs benefit from increased parallelism. This isn't an insurmountable obstacle when it comes to things like artificial intelligence (the human brain has a very slow clock speed and massive parallelism and it's obviously good enough to get lots done), but it does mean that some things that Kurzweil was counting on to get quicker and quicker have stalled out (the book was written just as Dennard scaling began to break down).
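One standard way to quantify why extra cores help less than a faster clock is Amdahl's law; here's a quick sketch (the 50% parallelizable fraction is an arbitrary choice for illustration):

```python
# Amdahl's law: a faster clock speeds up every part of a program, but extra
# cores only speed up the fraction that can actually run in parallel.
# The 0.5 parallel fraction below is an arbitrary illustrative choice.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

print(amdahl_speedup(0.5, 2))      # 2 cores:    ~1.33x
print(amdahl_speedup(0.5, 1000))   # 1000 cores: still under 2x
# Doubling the clock rate, by contrast, gives roughly 2x for almost any program.
```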
All this means that the exponential growth that is supposed to drive the singularity is about to fizzle out… maybe. Kurzweil is convinced that the slowdown in silicon will necessarily lead to a paradigm shift to something else. But I’m not sure what it will be. He talks a bit about graphene, but when I was doing my degree in nanotechnology engineering, the joke among the professors was that graphene could do anything… except make it out of the lab.
Kurzweil has an almost religious faith that there will be another paradigm shift, keeping his exponential trend going strong. And I want to be really clear that I’m not saying there won’t be. I’m just saying there might not be. There is no law of the universe that says that we have to have convenient paradigm shifts. We could get stuck with linear (or even logarithmic) incremental improvements for years, decades, or even centuries before we resume exponential growth in computing power.
It does seem like ardent belief in the singularity might attract more religiously minded atheists. Kurzweil himself believes that it is our natural destiny to turn the whole universe into computational substrate. Identifying god with the most holy and perfect (in fine medieval tradition; there’s something reminiscent of Anselm in Kurzweil’s arguments), Kurzweil believes that once every atom in the universe sings with computation, we will have created god.
I don’t believe that humanity has any grand destiny, or that the arc of history bends towards anything at all in particular. And I by no means believe that the singularity is assured, technologically or socially. But it is a beautiful vision. Human flourishing, out to the very edges of the cosmos…
Yeah, I want that too. I’m a religiously minded atheist, after all.
In both disposition and beliefs, I’m far closer to Kurzweil than his many detractors. I think “degrowth” is an insane policy that if followed, would create scores of populist demagogues. I think that the Chinese room argument is good only for identifying people who don’t think systemically. I’m also more or less in agreement that government regulations won’t be able to stop a singularity (if one is going to occur because of continuing smooth acceleration in the price performance of information technology; regulation could catch up if a slowdown between paradigm shifts gives it enough time).
I think the singularity very well might happen. And at the end of the day, the only real difference between me and Kurzweil is that “might”.
Do you want to understand how the material world works at the most fundamental level? Great! There’s a tool for that. Or a method. Or a collection of knowledge. “Science” is an amorphous concept, hard to pin down or put into a box. Is science the method of hypothesis generation and testing? Is it as Popper claimed, asking falsifiable questions and trying to refute your own theories? Is it inextricably entangled with the ream of statistical methods that have grown up in service of it? Or is it the body of knowledge that has emerged from the use of all of these intellectual tools?
I’m not sure what exactly science is. Whatever its definition, I feel like it helps me understand the world. Even still I have to remind myself that caring about science is like caring about a partner in a marriage. You need to be with it in good health and in bad, when it confirms things you’ve always wanted to believe, or when your favourite study fails to replicate or is retracted. It’s rank hypocrisy to shout the virtues of science when it confirms your beliefs and denigrate or ignore it when it doesn’t.
Unfortunately, it’s easy to collect examples of people who are selective about their support for science. Here’s three:
Unfortunately, this is a bipartisan phenomenon [1]. So called "race realists" belong on this list as well [2]. Race realists take research about racial variations in IQ (often done in America, with all of its gory history of repression along racial lines) and then claim that it maps directly onto observable racial characteristics. Race realists ignore the fact that scientific attempts at racial clustering show strong continuity between populations and find that almost all genetic variance is individual, not between groups [3]. Race realists are fond of saying that people must accept the "unfortunate truth", but are terrible at accepting that science is at least as unfortunate for their position as it is for blank slatism. The true scientific consensus lies somewhere in between [4].
In all these cases, we see people who are enthusiastic defenders of “science” as long as the evidence suits the beliefs that they already hold. They are especially excited to use capital-S Science as a cudgel to bludgeon people who disagree with them and shallowly defend the validity of science out of concern for their cudgel. But actually caring about science requires an almost Kierkegaardian act of resignation. You have to give up on your biases, give up on what you want to be true, and accept the consensus of experts.
Caring about science enough to be unwilling to hold beliefs that aren’t supported by evidence is probably not for everyone. I’m not even sure I want it to be for everyone. Mike Alder says of a perfect empiricist:
It must also be said that, although one might much admire a genuine [empiricist] philosopher if such could be found, it would be unwise to invite one to a dinner party. Unwilling to discuss anything unless he understood it to a depth that most people never attain on anything, he would be a notably poor conversationalist. We can safely say that he would have no opinions on religion or politics, and his views on sex would tend either to the very theoretical or to the decidedly empirical, thus more or less ruling out discussion on anything of general interest.
Science isn't all there is. It would be a much poorer world if it were. I love literature and video games, silly puns and recursive political jokes. I don't try and make every statement I utter empirically correct. There's a lot of value in having people haring off in weird directions or trying speculative modes of thought. And many questions cannot be answered through science.
But dammit, I have standards. This blog has codified epistemic statuses and I try and use them. I make public predictions and keep a record of how I do on them so that people can assess my accuracy as a predictor. I admit it when I’m wrong.
I don’t want to make it seem like you have to go that far to have a non-hypocritical respect for science. Honestly, looking for a meta-analysis before posting something both factual and potentially controversial will get you 80% of the way there.
Science is more than a march and some funny Facebook memes. I’m glad to see so many people identifying so strongly with science. But for it to mean anything they have to be prepared to do the painful legwork of researching their views and admitting when they’re wrong. I have in the past hoped that loudly trumpeting support for science might be a gateway drug towards a deeper respect for science, but I don’t think I’ve seen any evidence for this. It’s my hope that over the next few years we’ll see more and more of the public facing science community take people to task for shallow support. If we make it low status to be a fair-weather friend of science, will we see more people actually putting in the work to properly support their views with empirical evidence?
This is an experiment I would like to try.
Footnotes
[1] The right, especially the religious right, is less likely to use “science” as a justification for anything, which is the main reason I don’t have complaints about them in this blog post. It is obviously terrible science to pretend that evolution didn’t happen or that global warming isn’t occurring, but it isn’t hypocritical if you don’t otherwise claim to be a fan of science. Crucially, this blog post is more about hypocrisy than bad science per se. ^
[2] My problems with race realists go beyond their questionable scientific claims. I also find them to be followers of a weird and twisted philosophy that equates intelligence with moral worth in a way I find repulsive. ^
[3] Taken together, these are damning for the idea that race can be easily inferred from skin colour. ^
[4] Yes, I know we aren’t supposed to trust Vox when it comes to scientific consensus. But Freddie de Boer backs it up and people I trust who have spent way more time than I have reading about IQ think that Freddie knows his stuff. ^
If you don't live in Southern Ontario or don't hang out in the skeptic blogosphere, you will probably have never heard the stories I'm going to tell today. There are two of them, both about young Ontarian girls. One story has a happier ending than the other.
First is Makayla Sault. She died two years ago, from complications of acute lymphoblastic leukemia. She was 11. Had she completed a full course of chemotherapy, there is a 75% chance that she would be alive today.
She did not complete a full course of chemotherapy.
Instead, after 12 weeks of therapy, she and her parents decided to seek so-called "holistic" treatment at the Hippocrates Health Institute in Florida, as well as traditional indigenous treatments. This decision killed her. With chemotherapy, she had a good chance of surviving. Without it…
There is no traditional wisdom that offers anything against cancer. There is no diet that can cure cancer. The Hippocrates Health Institute offers services like Vitamin C IV drips, InfraRed Oxygen, and Lymphatic Stimulation. None of these will stop cancer. Against cancer all we have are radiation, chemotherapy, and the surgeon’s knife. We have ingenuity, science, and the blinded trial.
Anyone who tells you otherwise is lying to you. If they are profiting from the treatments they offer, then they are profiting from death as surely as if they were selling tobacco or bombs.
Makayla’s parents were swindled. They paid $18,000 to the Hippocrates Health Institute for treatments that did nothing. There is no epithet I possess suitable to apply to someone who would scam the parents of a young girl with cancer (and by doing so, kill the young girl).
There was another girl (her name is under a publication ban; I only know her by her initials, J.J.) whose parents withdrew her from chemotherapy around the same time as Makayla. She too went to the Hippocrates Health Institute. But when she suffered a relapse of cancer, her parents appear to have fallen out with Hippocrates. They returned to Canada and sought chemotherapy alongside traditional Haudenosaunee medicine. This is the part of the story with a happy ending. The chemotherapy saved J.J.’s life.
When J.J. left chemotherapy, her doctors at McMaster Children’s Hospital [1] sued the Children’s Aid Society of Brant. They wanted the Children’s Aid Society to remove J.J. from her parents so that she could complete her course of treatment. I understand why J.J.’s doctors did this. They knew that without chemotherapy she would die. While merely telling the Children’s Aid Society this fact discharged their legal duty [2], it did not discharge their ethical duty. They sued because the Children’s Aid Society refused to act in what they saw as the best interest of a child; they sued because they found this unconscionable.
The judge denied their lawsuit. He ruled that indigenous Canadians have a charter right to receive traditional medical care if they wish it [3].
Makayla died because she left chemotherapy. J.J. could have died had she and her parents not reversed their decision. But I’m glad the judge didn’t order J.J. back into chemotherapy.
To explain why I’m glad, I first want to talk about the difference between the inside view and the outside view. The inside view is what you get when you search for evidence from your own circumstances and experiences and then apply that to estimate how you will fare on a problem you are facing. The outside view is when you dispassionately look at how people similar to you have fared dealing with similar problems and assume you will fare approximately the same.
Dr. Daniel Kahneman gives the example of a textbook he worked on. After completing two chapters in a year, the team extrapolated and decided it would take them two more years to finish. Daniel asked Seymour (another team member) how long it normally took to write a textbook. Surprised, Seymour explained that it normally took seven to ten years all told and that approximately 40% of teams failed. This caused some dismay, but ultimately everyone (including Seymour) decided to persevere (probably believing that they'd be the exception). Eight years later, the textbook was finished. The outside view was dead on.
From the inside view, the doctors were entirely correct to try and demand that J.J. complete her treatment. They were fairly sure that her parents were making a lot of the medical decisions and they didn’t want J.J. to be doomed to die because her parents had fallen for a charlatan.
From an outside view, the doctors were treading on thin ice. If you look at past groups of doctors (or other authority figures), intervening with (what they believed was) all due benevolence to force health interventions on Indigenous Canadians, you see a chilling litany of abuses.
This puts us in a bind. Chemotherapy doesn't cease to work because people in the past did terrible things. Just because we have an outside view that suggests dire consequences doesn't mean science stops working. But our outside view really strongly suggests dire consequences. How could the standard medical treatment lead to worse outcomes?
Let’s brainstorm for a second:
J.J. could have died regardless of chemotherapy. Had there been a court order, this would have further shaken indigenous Canadian faith in the medical establishment.
A court order could have undermined the right of minors in Ontario to consent to their own medical care, with far reaching effects on trans youth or teenagers seeking abortions.
The Children's Aid Society could have botched the execution of the court order, leading to dramatic footage of a young screaming indigenous girl (with cancer!) being separated from her weeping family. Indigenous Canadians would have been reminded strongly of the Sixties Scoop.
There could have been a stand-off when Children’s Aid arrived to collect J.J.. Knowing Canada, this is the sort of thing that could have escalated into something truly ugly, with blockades and an armed standoff with the OPP or the military.
The outside view doesn’t suggest that chemotherapy won’t work. It simply suggests that any decision around forcing indigenous Canadians to receive health care they don’t want is ripe with opportunities for unintended consequences. J.J.’s doctors may have been acting out of a desire to save her life. But they were acting in a way that showed profound ignorance of Canada’s political context and past.
I think this is a weakness of the scientific and medical establishment. They get so caught up on what is true that they forget the context for the truth. We live in a country where we have access to many lifesaving medicines. We also live in a country where many of those medicines were tested on children that had been stolen from their parents and placed in residential schools – tested in ways that spit on the concept of informed consent.
When we are reminded of the crimes committed in the name of science and medicine, it is tempting to say “that wasn’t us; it was those who came before, we are innocent” – to skip to the end of the apologies and reparations and find ourselves forgiven. Tempting and so, so unfair to those who suffered (and still do suffer) because of the actions of some “beneficent” doctors and scientists. Instead of wishing to jump ahead, we should pause and reflect. What things have we done and advocated for that will bring shame on our fields in the future?
Yes, indigenous Canadians sometimes opt out of the formal medical system. So do white hippies. At least indigenous Canadians have a reason. If trips to the hospital had occasionally ended in tragedy for people who looked like me, I'd be a lot warier of them myself.
Scientists and doctors can’t always rely on the courts and on civil society to save us from ourselves. At some point, we have to start taking responsibility for our own actions. We might even have to stop sneering at post-modernism (something I’ve been guilty of in the past) long enough to take seriously its claim that we have to be careful about how knowledge is constructed.
In the end, the story of J.J., unlike that of Makayla, had a happy ending. Best of all, by ending the way it did, J.J.’s story should act as an example, for the medical system and indigenous Canadians both, on how to achieve good outcomes together.
In the story of Pandora’s Box, all of the pestilence and disease of the world sprung as demons from a cursed box and humanity was doomed to endure them ever more. Well we aren’t doomed forever; modern medicine has begun to put the demons back inside the box. It has accomplished this by following one deceptively simple rule: “do what works”. Now the challenge is to extend what works beyond just the treatments doctors choose. Increasingly important is how diseases are treated. When doctors respect their patients, respect their lived experiences, and respect the historical contexts that might cause patients to be fearful of treatments, they’ll have far more success doing what it is they do best: curing people.
It was an abrogation of duty to go to the courts instead of respectfully dealing with J.J.’s family. It was reckless and it could have put years of careful outreach by other doctors at risk. Sometimes there are things more important than one life. That’s why I’m glad the judge didn’t order J.J. back into chemo.
Footnotes:
[1] I have a lot of fondness for McMaster, having had at least one surgery and many doctors’ appointments there. ^
[2] Doctors have a legal obligation to report any child abuse they see. Under subsection 37(2)e of the Child and Family Services Act (CFSA), this includes “the child requires medical treatment to cure, prevent or alleviate physical harm or suffering, and the child’s parent refuses to consent to treatment”. ^
[3] I’m not actually sure how relevant that is here – Brian Clement is no one’s idea of an expert in Indigenous medicine and it’s not clear that this ruling still sets any sort of precedent, given that the judge later amended his ruling to “make it clear that the interests of the child must be paramount” in cases like this. ^
It can be hard to grasp that radio waves, deadly radiation, and the light we can see are all the same thing. How can electromagnetic (EM) radiation – photons – sometimes penetrate walls and sometimes not? How can some forms of EM radiation be perfectly safe and others damage our DNA? How can radio waves travel so much further than gamma rays in air, but no further through concrete?
It all comes down to wavelength. But before we get into that, we should at least take a glance at what EM radiation really is.
Electromagnetic radiation takes the form of two orthogonal waves. In one direction, you have an oscillating magnetic field. In the other, an oscillating electric field. Both of these fields are orthogonal to the direction of travel.
These oscillations take a certain amount of time to complete, a time which is calculated by observing the peak value of one of the fields and then measuring how long it takes for the field to return to that value. Luckily, we only need to do this once, because the time an oscillation takes (called the period) will stay the same unless acted on by something external. You can invert the period to get the frequency – the number of times oscillations occur in a second. Frequency uses the unit Hertz, which is just inverted seconds. If something has a frequency of 60Hz, it happens 60 times per second.
EM radiation has another nifty property: it always travels at the same speed, a speed commonly called “the speed of light” [1] (even when applied to EM radiation that isn’t light). When you know the speed of an oscillating wave and the amount of time it takes for the wave to oscillate, you can calculate the wavelength. Scientists like to do this because the wavelength gives us a lot of information about how radiation will interact with world. It is common practice to represent wavelength with the Greek letter Lambda (λ).
Put in a more mathy way: if you have an event that occurs with frequency f to something travelling at velocity v, the event will have a spatial periodicity λ (our trusty wavelength) equal to v / f. For example, if you have a sound that oscillates at 34Hz (this frequency is equivalent to the lowest C♯ on a standard piano) travelling at 340m/s (the speed of sound in air), it will have a wavelength of (340 m/s)/(34 s⁻¹) = 10m. I'm using sound here so we can use reasonably sized numbers, but the results are equally applicable to light or other forms of EM radiation.
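The same calculation in a few lines of code, with a couple of extra examples thrown in (the frequencies are just familiar round numbers I've picked):

```python
# Wavelength from wave speed and frequency: lambda = v / f.
# The example frequencies are arbitrary, familiar choices.

def wavelength(speed_m_per_s: float, frequency_hz: float) -> float:
    return speed_m_per_s / frequency_hz

SPEED_OF_SOUND = 340.0    # m/s in air (approximate)
SPEED_OF_LIGHT = 3.0e8    # m/s in vacuum (approximate)

print(wavelength(SPEED_OF_SOUND, 34))       # the piano example above: 10.0 m
print(wavelength(SPEED_OF_LIGHT, 100e6))    # 100 MHz FM radio: ~3 m
print(wavelength(SPEED_OF_LIGHT, 2.45e9))   # 2.45 GHz microwave oven: ~0.12 m
```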
Wavelength and frequency are inversely related to each other. The higher the frequency of something, the smaller its wavelength. The longer the wavelength, the lower the frequency. I’m used to people describing EM radiation in terms of frequency when they’re talking about energy (the quicker something is vibrating, the more energy it has) and wavelength when talking about what it will interact with (the subject of the rest of this post).
With all that background out of the way, we can actually “look” at electromagnetic radiation and understand what we’re seeing.
Here wavelength is labeled with “λ”, the electric field is red and labelled with “E” and the magnetic field is blue and labelled with “B”. “B” is the standard symbol for magnetic fields, for reasons I have never understood. Image Credit: Lookang on Wikimedia Commons.
Wavelength is very important. You know those big TV antennas houses used to have?
Turns out that they’re about the same size as the wavelength of television signals. The antenna on a car? About the same size as the radio waves it picks up. Those big radio telescopes in the desert? Same size as the extrasolar radio waves they hope to pick up.
Fun fact: these dishes together make up a very large radio telescope, unimaginatively called the “Very Large Array”. Image Credit: Hajor on Wikimedia Commons
Even things we don’t normally think of as antennas can act like them. The rod and cone cells in your eyes act as antennas for the light of this very blog post [2]. Chains of protein or water molecules act as antennas for microwave radiation, often with delicious results. The bases in your DNA act as antennas for UV light, often with disastrous results.
These are just a few examples, not an exhaustive list. For something to be able to interact with EM radiation, you just need an appropriately sized system of electrons (or electrical system; the two terms imply each other). You get this system of electrons more or less for free with metal. In a metal, all of the electrons are delocalized, making the whole length of a metal object one big electrical system. This is why the antennas in our phones or on our houses are made of metal. It isn’t just metal that can have this property though. Organic substances can have appropriately sized systems of delocalized electrons via double bonding [3].
EM radiation can’t really interact with things that aren’t the same size as its wavelength. Interaction with EM radiation takes the form of the electric or magnetic field of a photon altering the electric or magnetic field of the substance being interacted with. This happens much more readily when the fields are approximately similar sizes. When fields are the same size, you get an opportunity for resonance, which dramatically decreases the loss in the interaction. Losses for dissimilar sized electric fields are so high that you can assume (as a first approximation) that they don’t really interact.
In practical terms, this means that a long metal rod might heat up if exposed to a lot of radio waves (wavelengths for radio waves vary from 1mm to 100km; many are a few metres long due to the ease of making antennas in that size) because it has a single electrical system that is the right size to absorb energy from the radio waves. A similarly sized person will not heat up, because there is no single part of them that is a unified electrical system the same size as the radio waves.
Microwaves (with wavelengths on the order of millimetres to centimetres) might heat up your food, but they won't damage your DNA (nanometres in width). They're much larger than individual DNA molecules. Microwaves are no more capable of interacting with your DNA than a giant would be of picking up a single grain of rice. Microwaves can hurt cells or tissues, but they're incapable of hurting your DNA and leaving the rest of the cell intact. They're just too big. Because of this, there is no cancer risk from microwave exposure (whatever paranoid hippies might say).
Gamma rays do present a cancer risk. They have a wavelength (about 10 picometres) that is similar in size to electrons. This means that they can be absorbed by the electrons in your DNA, which kick these electrons out of their homes, leading to chemical reactions that change your DNA and can ultimately lead to cancer.
Wavelength explains how gamma rays can penetrate concrete (they’re actually so small that they miss most of the mass of concrete and only occasionally hit electrons and stop) and how radio waves penetrate concrete (they’re so large that you need a large amount of concrete before they’re able to interact with it and be stopped [4]). Gamma rays are stopped by the air because air contains electrons (albeit sparsely) that they can hit and be stopped by. Radio waves are much too large for this to be a possibility.
When you're worried about a certain type of EM radiation causing cancer, all you have to do is look at its wavelength. Any wavelength smaller than that of ultraviolet light (about 400nm) is small enough to interact with DNA in a meaningful way. Anything larger is unable to really interact with DNA and is therefore safe.
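That heuristic is simple enough to write down directly. This is just a toy encoding of the rule of thumb above (and of this post's simplified model), not radiation safety guidance:

```python
# Toy encoding of the rule of thumb above: wavelengths shorter than UV
# (~400 nm) are small enough to interact with DNA. Model, not safety advice.

UV_WAVELENGTH_M = 400e-9  # ~400 nm

def small_enough_to_reach_dna(wavelength_m: float) -> bool:
    return wavelength_m < UV_WAVELENGTH_M

print(small_enough_to_reach_dna(0.12))     # microwave oven (~12 cm): False
print(small_enough_to_reach_dna(500e-9))   # green visible light: False
print(small_enough_to_reach_dna(10e-12))   # gamma ray (~10 pm): True
```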
Epistemic Status: Model. Looking at everything as an antenna will help you understand why EM radiation interacts with the physical world the way it does, but there is a lot of hidden complexity here. For example, eyes are far from directly analogous to antennas in their mechanism of action, even if they are sized appropriately to be antennas for light. It's also true that at the extreme ends of photon energy, interactions are based more on energy than on size. I've omitted this in order to write something that isn't entirely caveats, but be aware that it occurs.
Footnotes:
[1] You may have heard that the speed of light changes in different substances. Tables will tell you that the speed of light in water is only about ¾ of the speed of light in air or vacuum and that the speed of light in glass is even slower still. This isn’t technically true. The speed of light is (as far as we know) cosmically invariant – light travels the same speed everywhere in the galaxy. That said, the amount of time light takes to travel between two points can vary based on how many collisions and redirections it is likely to get into between two points. It’s the difference between how long it takes for a pinball to make its way across a pinball table when it hits nothing and how long it takes when it hits every single bumper and obstacle. ^
[2] This is a first approximation of what is going on. Eyes can be modelled as antennas for the right wavelength of EM radiation, but this ignores a whole lot of chemistry and biophysics. ^
[3] The smaller the wavelength, the easier it is to find an appropriately sized system of electrons. When your wavelength is the size of a double bond (0.133nm), you’ll be able to interact with anything that has a double bond. Even smaller wavelengths have even more options for interactions – a wavelength that is well sized for an electron will interact with anything that has an electron (approximately everything). ^
[4] This interaction is actually governed by quantum mechanical tunneling. Whenever a form of EM radiation "tries" to cross a barrier larger than its wavelength, it will be attenuated by the barrier. The equation that describes the probability distribution of a particle (the photons that make up EM radiation are both waves and particles, so we can use particle equations for them) is approximately ψ(x) ≈ ψ₀e^(ikx) (I say approximately because I've simplified all the constants into a single term, k), which becomes ψ(x) ≈ ψ₀e^(−k₁x) (here I'm using k₁ to imply that the constant will be different), the equation for exponential decay, when the energy (to a first approximation, length) of the substance is higher than the energy (read size of wavelength) of the light.
This equation shows that there can be some probability – occasionally even a high probability – of the particle existing on the other side of a barrier. All you need for a particle to traverse a barrier is an appropriately small barrier. ^
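As a concrete illustration of this footnote, here’s a small sketch of how the tunnelling probability falls off with barrier length. The decay constant used is arbitrary, chosen only to show the shape of the relationship, not a physical value.

```python
import math

# Sketch of the footnote's point: inside a barrier, the wavefunction decays
# exponentially, so the probability of finding the particle on the far side
# falls off as exp(-2 * k1 * L). The value of k1 here is purely illustrative.

def transmission_probability(k1: float, barrier_length: float) -> float:
    """Relative probability of tunnelling through a barrier of the given length."""
    return math.exp(-2.0 * k1 * barrier_length)


k1 = 1.0  # decay constant (arbitrary units); a larger k1 means a more opaque barrier
for length in (0.5, 1.0, 2.0, 5.0):
    print(f"barrier length {length}: P ~ {transmission_probability(k1, length):.4f}")
```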
These are two very different sorts of speculation. The first requires extreme attention to detail in order to make the setting plausible, but once you clear that bar, you can get away with anything. Ted Chiang is clearly a master at this. I couldn’t find any inconsistencies to pick at in any of his stories.
When you try to predict the future – especially the near future – you don’t need to make up a world out of whole cloth. Here it’s best to start with plausible near-future events and let those give your timeline a momentum, carrying you to where you want to go on a chain of reasoning. No link has to be perfect, but each link has to be plausible. If any of them leave your readers scratching their heads, then you’ve lost them.
Predicting the future is also vulnerable to the future happening. Predictions are rooted in their age and tend to tell us more about the context in which they were made than about the future.
I think Pump Six is a book where we can clearly see and examine both of these problems.
First, let’s talk about chains of events. The stories The Fluted Girl, The Calorie Man, The Tamarisk Hunter, and Yellow Card Man all hinge on events that probably seem plausible to Bacigalupi, but that feel deeply implausible to me.
The Fluted Girl imagines the revival of feudalism in America. Fiefs govern the inland mountains, while there is a democracy (presumably capitalist) on the coasts. This arrangement felt unstable and unrealistic to me.
Feudal societies tend to have much less economic growth than democracies (see part 2 of Scott’s anti-reactionary FAQ). Democracies also aren’t exactly great at staying calm about atrocities right on their doorsteps. These two facts combined make me wonder why the (Coloradan?) feudal society in The Fluted Girl hasn’t been smashed by its economically (and therefore, inevitably militarily) more powerful neighbours.
In The Tamarisk Hunter, the Colorado River is slowly being covered by a giant concrete straw, a project that has been going on for a while and requires massive amounts of resources. The goal is to protect the now diminished Colorado River from evaporation as it winds its way into a deeply drought-stricken California.
In the face of a bad enough drought, every bit counts. But there are much more cost-effective ways to get your drinking water. The Colorado River today has an average discharge of 640 m³/s. In a bad drought, this would be lower. Let’s say it’s at something like 200 m³/s.
You could get that amount of water from building about 100 desalination plants, which would cost something like $100 billion today (using a recently built plant in California as a baseline). Bridges cost something like $3,000 per m² (using this admittedly flawed report for guidance), so using bridges to estimate the cost, the “straw” would cost about $300 million per kilometre (using the average width of the Colorado River). Given the relative costs of the two options, it is cheaper to replace the whole river (assuming reduced flow from the drought) with desalination plants than it is to build even 330 km (about 200 miles) of straw.
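Here’s the back-of-envelope arithmetic laid out. The per-plant cost and output and the river width are assumptions of mine, picked to line up with the rough figures above rather than taken from any specific source.

```python
# Back-of-envelope check of the desalination-vs-"straw" comparison above.
# Assumptions (mine, chosen to match the figures in the text): each plant
# costs ~$1 billion and produces ~2 m³/s; the river averages ~100 m wide.

drought_flow_m3_s = 200          # assumed drought-reduced Colorado flow
plant_output_m3_s = 2            # per desalination plant (assumption)
plant_cost_usd = 1e9             # per plant (assumption)

plants_needed = drought_flow_m3_s / plant_output_m3_s          # ~100 plants
desal_total_usd = plants_needed * plant_cost_usd               # ~$100 billion

straw_cost_per_m2 = 3_000        # bridge-cost proxy from the report above
river_width_m = 100              # assumed average width
straw_cost_per_km = straw_cost_per_m2 * river_width_m * 1_000  # ~$300 million/km

breakeven_km = desal_total_usd / straw_cost_per_km             # ~333 km

print(f"Desalination: {plants_needed:.0f} plants, ~${desal_total_usd / 1e9:.0f}B")
print(f"Straw: ~${straw_cost_per_km / 1e6:.0f}M per km")
print(f"Break-even length: ~{breakeven_km:.0f} km of straw")
```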
A realistic response to a decades long California drought would involve paying farmers not to use water, initiating water conservation measures, and building desalination plants. It wouldn’t look like violent conflict over water rights up and down the whole Colorado River.
In The Calorie Man and Yellow Card Man, bioengineered plagues have ravaged the world and oil production has declined to the point where the main source of energy is once again the sun (via agriculture). Even assuming peak oil will happen (more on that in a minute), there will always be nuclear power. Nuclear power plants currently provide only about ten percent of the world’s electricity, but there’s absolutely no good reason they couldn’t meet basically all of our energy needs (especially if combined with solar, hydro, wind, and, if necessary, coal).
With improved uranium enrichment techniques and better energy storage technology, it’s plausible that sustainable energy sources could, if necessary, entirely displace oil, even in the transportation industry.
The only way to get from “we’re out of oil” to “I guess it’s back to agriculture as our main source of energy” is if you forget about (or don’t even consider) nuclear power.
This is why I think the stories in Pump Six tell me a lot more about Bacigalupi than about the future. I can tell that he cares deeply about the planet, is skeptical of modern capitalism, and fearful of the damage industrialization, fossil fuels, and global warming may yet bring.
But the story that drove home his message for me wasn’t any of the “ecotastrophes”, where humans are brought to the brink of destruction by our mistreatment of the planet. It was The People of Sand and Slag that made me stop and wonder. It asks us to consider what we’d lose if we poison the planet while adapting to the damage. Is it okay if beaches are left littered with oil and barbed wire if these no longer pose us any threat?
I wish more of the stories had been like that, instead of infected with the myopia that causes environmentalists to forget about the existence of nuclear power (when they aren’t attacking it) and critics of capitalism to assume that corporations will always do the evil thing, with no regard to the economics of the situation.
Disregard for economics and a changing world intersect when Bacigalupi talks about peak oil. Peak oil was in vogue among environmentalists in the 2000s as oil prices rose and rose, but it was never taken seriously by the oil industry. As per Wikipedia, peak oil (as talked about by environmentalists in the ’00s, not as originally formulated) ignored the effects of price on supply and demand, especially in regard to unconventional oil, like the bitumen in the Albertan Oil Sands.
Price is really important when it comes to supply. Allow me to quote from one of my favourite economics stories. It’s about a pair of Texan brothers who (maybe) tried to corner the global market for silver and in the process made silver so unaffordable that Tiffany’s ran an advertisement denouncing them on the third page of the New York Times. The problems the Texans ran into as silver prices rose are relevant here:
But as the high prices persisted, new silver began to come out of the woodwork.
“In the U.S., people rifled their dresser drawers and sofa cushions to find dimes and quarters with silver content and had them melted down,” says Pirrong, from the University of Houston. “Silver is a classic part of a bride’s trousseau in India, and when prices got high, women sold silver out of their trousseaus.”
Unfortunately for the Hunts, all this new supply had a predictable effect. Rather than close out their contracts, short sellers suddenly found it was easier to get their hands on new supplies of silver and deliver.
“The main factor that has caused corners to fail [throughout history] is that the manipulator has underestimated how much will be delivered to him if he succeeds [at] raising the price to artificial levels”
By the same token, many people underestimated the amount of oil that would come out of the woodwork if oil prices remained high – arguably artificially high, no thanks to OPEC – for a prolonged period. As an aside, it’s also likely that we underestimate the amount of unconventional water that could be found if prices ever seriously spiked, another argument against the world in The Tamarisk Hunter.
This isn’t to say that there won’t be a peak in oil production. The very real danger posed by global warming and the fruits of investments in alternative energy when oil prices were high will slowly wean us off of oil. This formulation of peak oil is very different from the other one. A steady decrease in demand for oil will be hard on oil-producing regions, but it won’t come as a sharp shock to the whole world economic order.
I don’t know how much of this could have been known in 2005, especially to anyone deeply embedded in the environmentalist movement. As an exoneration, that works well enough. But it’s also exactly my point from above. You can try and predict the future, but you can only predict from your flawed vantage point. In retrospect, it is often easier to triangulate the vantage point than to see the imagined future as plausible.
Another example: almost all science fiction written before the late ’00s drastically underestimated the current prevalence of mobile devices. In series that straddle the divide, you often see mobile devices mentioned much more in the later books, as authors adjust their visions of the future to take into account what they now know in the present.
Writing is hard, and the critic will always have an easier time than the author. I don’t mean to be so hard on Bacigalupi; I really did enjoy Pump Six, and it’s caused me to do no end of thinking and discussing since I finished reading it. In this regard, it was an immensely successful book.