Model, Philosophy

Against Novelty Culture

So, there’s this thing that happens in certain intellectual communities, like (to give a totally random example) social psychology. This thing is that novel takes are rewarded. New insights are rewarded. Figuring out things that no one has before is rewarded. The high-status people in such a community are the ones who come up with and disseminate many new insights.

On the face of it, this is good! New insights are how we get penicillin and flight and Pad Thai burritos. But there’s one itty bitty little problem with building a culture around it.

Good (and correct!) new ideas are a finite resource.

This isn’t news. Back in 2005, John Ioannidis laid out the case for “most published research findings” being false. It turns out that when you have only a small chance of coming up with a correct idea, even statistical tests designed to screen out false positives can break down.

A quick example. There are approximately 25,000 genes in the human genome. Imagine you are searching for genes that increase the risk of schizophrenia (chosen for this example because it is a complex condition believed to be linked to many genes). If there are 100 genes involved in schizophrenia, the odds of any given gene chosen at random being involved are 1 in 250. You, the investigating scientist, decide that you want about an 80% chance of detecting a gene that really is linked (this is called study power, and 80% is a common value). You run a bunch of tests, analyze a bunch of DNA, and think you have a candidate. This gene has been “proven” to be associated with schizophrenia at the p=0.05 significance level.
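To see how this plays out across the whole genome, here’s a quick Monte Carlo sketch of the screening setup above. The gene counts, power, and threshold are the ones from the example; the code itself is only an illustration, not any particular study’s method.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

N_GENES = 25_000  # genes in the human genome (approximate)
N_TRUE = 100      # genes actually involved in schizophrenia (assumed)
POWER = 0.80      # chance a test flags a truly involved gene
ALPHA = 0.05      # chance a test flags an uninvolved gene (p = 0.05)

true_hits = false_hits = 0
for gene in range(N_GENES):
    involved = gene < N_TRUE  # treat the first 100 genes as the real ones
    flagged = random.random() < (POWER if involved else ALPHA)
    if flagged:
        if involved:
            true_hits += 1
        else:
            false_hits += 1

print(f"true positives:  {true_hits}")
print(f"false positives: {false_hits}")
print(f"share of hits that are real: {true_hits / (true_hits + false_hits):.1%}")
```

With these parameters, the roughly 80 genuine discoveries are drowned out by over a thousand false positives, which is where the dismal posterior probability computed below comes from.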

(A p-value is the probability of observing a result at least as extreme as the one actually observed, if the null hypothesis is true. This means that if the gene isn’t associated with schizophrenia, there is only a 1 in 20 chance – 5% – that we’d see a result as extreme as or more extreme than the one we observed.)

At the start, we had a 1 in 250 chance of finding a gene. Now that we have a gene, we naively think there’s a 19 in 20 chance that it’s actually partially responsible for schizophrenia (technically, if we looked at multiple candidates, we should correct for multiple comparisons here, but many scientists still don’t, so this remains a realistic example). Which probability do we trust?

There’s actually an equation to figure it out. It’s called Bayes’ Rule, and statisticians and scientists use it to update probabilities in response to new information. It goes like this:

P(A|B) = P(B|A) × P(A) / P(B)

(You can sing this to the tune of Hallelujah: take P of B when given A / times P of A a priori / divide the whole thing by B’s expectation / new evidence you may soon find / but you will not be in a bind / for you can add it to your calculation.)

In plain language, it means that the probability of something being true after an observation (P(A|B)) is equal to the probability of it being true absent any observations (P(A), 1 in 250 here), times the probability of making the observation if it is true (P(B|A), 0.8 here), divided by the baseline probability of the observation (P(B), 1 in 20 here).

With these numbers from our example, we can see that the probability of a gene actually being associated with schizophrenia when it has passed the p=0.05 significance threshold is… 6.4%.

I took this long detour to illustrate a very important point: one of the strongest determinants of how likely something is to actually be true is the base chance it has of being true. If we expected 1000 genes to be associated with schizophrenia, then the base chance would be 1 in 25, and the probability our gene actually plays a role would jump up to 64%.
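The arithmetic for both of these figures can be checked directly. This is a minimal sketch; `posterior` is an illustrative helper name, and it uses the post’s simplification of treating P(B) as simply 0.05.

```python
def posterior(prior, power, alpha=0.05):
    """Bayes' Rule: P(gene is involved | significant result).

    prior: base chance a randomly chosen gene is involved, P(A)
    power: chance the test flags a truly involved gene, P(B|A)
    alpha: significance threshold, used here as the post's
           simplification of P(B); a stricter treatment would use
           P(B) = power * prior + alpha * (1 - prior).
    """
    return power * prior / alpha

print(posterior(prior=1 / 250, power=0.8))  # 0.064 -> 6.4%
print(posterior(prior=1 / 25, power=0.8))   # 0.64  -> 64%
```

Raising the base rate tenfold raises the posterior tenfold, which is the whole point: no amount of within-study rigour substitutes for studying hypotheses that were plausible to begin with.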

To have ten times the chance of getting a study right, you can be 10 times more selective (which probably requires much more than ten times the effort)… or you can investigate something ten times as likely to actually occur. Base rates can be more powerful than statistics, more powerful than arguments, and more powerful than common sense.

This suggests that any community that bases status around producing novel insights will mostly become a community based around producing novel-seeming (but false!) insights once it exhausts all of the available true (and easily attainable) insights it could discover. There isn’t a harsh dividing line, just a gradual trend towards plausible nonsense as the underlying vein of truth is mined out, but the studies and blog posts continue.

Except the reality is probably even worse, because any competition for status in such a community (tenure, page views) will become an iterative process that rewards those best able to come up with plausible-sounding wrappers around unfortunately false information.

When this happens, we have people publishing studies with terrible analyses but highly shareable titles (anyone remember the himmicanes paper?), with the people at the top calling anyone who questions their shoddy research “methodological terrorists”.

I know I have at least one friend who is rolling their eyes right now, because I always make fun of the reproducibility crisis in psychology.

But I’m just using that because it’s a convenient example. What I’m really worried about is the Effective Altruism community.

(Effective Altruism is a movement that attempts to maximize the good that charitable donations can do by encouraging donation to the charities that have the highest positive impact per dollar spent. One list of highly effective charities can be found on GiveWell; GiveWell has demonstrated a noted trend away from novelty, such that I believe this post does not apply to them.)

We are a group of people with countless forums and blogs, as well as several organizations devoted to analyzing the evidence around charity effectiveness. We have conventional organizations, like GiveWell, coexisting with less conventional alternatives, like Wild-Animal Suffering Research.

All of these organizations need to justify their existence somehow. All of these blogs need to get shares and upvotes from someone.

If you believe (like I do) that the number of good charity recommendations might be quite small, then it follows that a large intellectual ecosystem will quickly exhaust these possibilities and begin finding plausible-sounding alternatives.

I find it hard to believe that this isn’t already happening. We have people claiming that giving your friends cash or buying pizza for community events is the most effective charity. We have discussions of whether there is suffering in the fundamental particles of physics.

Effective Altruism is as much a philosophy movement as an empirical one. It isn’t always the case that we’ll be using P-values and statistics in our assessment. Sometimes, arguments are purely moral (like arguments about how much weight we should give to insect suffering). But both types of arguments can eventually drift into plausible sounding nonsense if we exhaust all of the real content.

There is no reason to expect that we should be able to tell when this happens. Certainly, experimental psychology wasn’t able to until several years after much-hyped studies more-or-less stopped replicating, despite a population that many people would have previously described as full of serious-minded empiricists. Many psychology researchers still won’t admit that much of the past work needs to be revisited and potentially binned.

This is a problem of incentives, but I don’t know how to make the incentives any better. As a blogger (albeit one who largely summarizes and connects ideas first broached by others), I can tell you that many of the people who blog do it because they can’t not write. There’s always going to be people competing to get their ideas heard and the people who most consistently provide satisfying insights will most often end up with more views.

Therefore, I suggest caution. We do not know how many true insights we should expect, so we cannot tell how likely anything that feels insightful is to actually be true. Against this, the best defense is highly developed skepticism. Always ask for the implications of new insights and determine what information would falsify them. Always assume new insights have a low chance of being true. Notice when there seems to be pressure to produce novel insights long after the low-hanging fruit is gone, and be wary of anyone in that ecosystem.

We might not be able to change novelty culture, but we can do our best to guard against it.

[Special thanks to Cody Wild for coming up with most of the lyrics to Bayesian Hallelujah.]

Model, Politics, Science

Science Is Less Political Than Its Critics

A while back, I was linked to this Tweet:

It had sparked a brisk and mostly unproductive debate. If you want to see people talking past each other, snide comments, and applause lights, check out the thread. One of the few productive exchanges centres on bridges.

Bridges are clearly a product of science (and its offspring, engineering) – only the simplest bridges can be built without scientific knowledge. Bridges also clearly have a political dimension. Not only are bridges normally the product of politics, they also are embedded in a broader political fabric. They change how a space can be used and change geography. They make certain actions – like commuting – easier and can drive urban changes like suburb growth and gentrification. Maintenance of bridges uses resources (time, money, skilled labour) that cannot be then used elsewhere. These are all clearly political concerns and they all clearly intersect deeply with existing power dynamics.

Even if no other part of science was political (and I don’t think that could be defensible; there are many other branches of science that lead to things like bridges existing), bridges prove that science certainly can be political. I can’t deny this. I don’t want to deny this.

I also cannot deny that I’m deeply skeptical of the motives of anyone who trumpets a political view of science.

You see, science has unfortunate political implications for many movements. To give just one example, greenhouse gasses are causing global warming. Many conservative politicians have a vested interest in ignoring this or muddying the water, such that the scientific consensus “greenhouse gasses are increasing global temperatures” is conflated with the political position “we should burn less fossil fuel”. This allows a dismissal of the political position (“a carbon tax makes driving more expensive; it’s just a war on cars”) to also serve (via motivated cognition) as a dismissal of the scientific position.

(Would that carbon in the atmosphere could be dismissed so easily.)

While Dr. Wolfe is no climate change denier, it is hard to square her claim that calling science political is a neutral statement:

With the examples she chooses to demonstrate this:

When pointing out that science is political, we could also say things like “we chose to target polio for a major elimination effort before cancer, partially because it largely affected poor children instead of rich adults (as rich kids escaped polio in their summer homes)”. Talking about the ways that science has been a tool for protecting the most vulnerable paints a very different picture of what its political nature is about.

(I don’t think an argument over which view is more correct is ever likely to be particularly productive, but I do want to leave you with a few examples for my position.)

Dr. Wolfe is able to claim that politics is neutral, despite only using negative examples of its effects, by using a bait and switch between two definitions of “politics”. The bait is a technical and neutral definition, something along the lines of: “related to how we arrange and govern our society”. The switch is a more common definition, like: “engaging in and related to partisan politics”.

I start to feel that someone is being at least a bit disingenuous when they only furnish negative examples, examples that relate to this second meaning of the word political, then ask why their critics view politics as “inherently bad” (referring here to the first definition).

This sort of bait and switch pops up enough in post-modernist “all knowledge is human and constructed by existing hierarchies” places that someone got annoyed enough to coin a name for it: the motte and bailey fallacy.

Image Credit: Hchc2009, Wikimedia Commons.

It’s named after the early-medieval form of castle, pictured above. The motte is the fortified mound with its keep – easy to defend, but cramped and unpleasant to live in – while the bailey is the productive courtyard below, where people actually want to spend their time. This mirrors the two parts of the motte and bailey fallacy. The “motte” is the easily defensible statement (science is political because all human group activities are political) and the “bailey” is the more controversial belief actually held by the speaker (something like “we can’t trust science because of the number of men in it” or “we can’t trust science because it’s dominated by liberals”).

From Dr. Wolfe’s other tweets, we can see the bailey (sample: “There’s a direct line between scientism and maintaining existing power structures; you can see it in language on data transparency, the recent hoax, and more.”). This isn’t a neutral political position! It is one that a number of people disagree with. Certainly Sokal, the hoax paper writer who inspired the most recent hoaxes, is an old leftist who would very much like to empower labour at the expense of capitalists.

I have a lot of sympathy for the people in the twitter thread who jumped to defend positions that looked ridiculous from the perspective of “science is subject to the same forces as any other collective human endeavour” when they believed they were arguing with “science is a tool of right-wing interests”. There are a great many progressive scientists who might agree with Dr. Wolfe on many issues, but strongly disagree with what her position seems to be here. There are many of us who believe that science, if not necessary for a progressive mission, is necessary for the related humanistic mission of freeing humanity from drudgery, hunger, and disease.

It is true that we shouldn’t uncritically believe science. But the work of being a critical observer of science should not be about running an inquisition into scientists’ political beliefs. That’s how we get climate change deniers doxxing climate scientists. Critical observation of science is the much more boring work of checking theories for genuine scientific mistakes, looking for P-hacking, and double-checking that no one got so invested in their exciting results that they fudged their analyses to support them. Critical belief often hinges on weird mathematical identities, not political views.

But there are real and present dangers to reflexively disbelieving science whenever it conflicts with your political views. The increased incidence of measles outbreaks in vaccine-refusing populations is one such danger. Catastrophic and irreversible climate change is another.

When anyone says science is political and then goes on to emphasize all of the negatives of this statement, they’re giving people permission to believe their political views (like “gas should be cheap” or “vaccines are unnatural”) over the hard truths of science. And that has real consequences.

Saying that “science is political” is also political. And it’s one of those political things that is more likely than not to be driven by partisan politics. No one trumpets this unless they feel one of their political positions is endangered by empirical evidence. When talking with someone making this claim, it’s always good to keep sight of that.

Biology, Ethics, Literature, Philosophy

Book Review: The Righteous Mind

I – Summary

The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.

Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its action, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.

She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.

This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.

Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.

The baby did indeed cry and the dog did indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He’d been distracted by something else, something less sympathetic. But his rider, his “internal press secretary”, immediately came up with an excuse and told it, without any conscious input or intent to deceive.

We all tell these sorts of flattering lies reflexively. They take the form of slight, harmless embellishments to make our stories more flattering or interesting, or our apologies more sympathetic.

The key insight here isn’t that we’re all compulsive liars. It’s that the “I” that we like to think exists to run our life doesn’t, really. Sometimes we make decisions, especially ones the elephant doesn’t think it can handle (high stakes apologies anyone?), but normally decisions happen before we even think about them. From the perspective of Haidt, “I”, is really “we”, the elephant and its rider. And we need to be careful to give the elephant its due, even though it’s quiet.

Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.

He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning, stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the time of the Greeks) that right thought had to precede right action.

Haidt was initially enamoured with Kohlberg’s taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren’t as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that were Western, Educated, Industrialized, Rich, and Democratic, and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.

For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.

That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered students’ sense of “well, as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted by the weakness of their arguments, they’d go speechless.

This made Haidt suspect that two entirely separate processes were going on: a fast one for deciding and a slower one for explaining. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.

Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.

There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or, as the rationalists might call them, “passions”). These people’s lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions. This is the exact opposite of what rationalists predicted would happen.

Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.

Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stemmed from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be “prewired” to have a strong loyalty foundation, but a series of betrayals and let downs early in life might convince you that loyalty is just a lie, told to control idealists.

Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face to face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.

Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary and mainly used to convince other people. I’m not sure how testable making evolutionary conclusions about this is, but it does seem plausible for there to be selection pressure to make us really good at explaining ourselves and convincing others of our point of view.

As Haidt took this into account and began to survey peoples’ moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.

Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.

Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the fewer modules they were likely to have. Conservatives tended to have all the modules in fairly equal proportion, liberals tended to be lacking three, and libertarians were lacking a whopping four, which might explain why everyone tends to believe libertarians are the worst.

The six moral foundations are:

Care/Harm

This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved in order to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as cute cat videos or baby seals.

An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this would perhaps be observed if the brain was wired for minimizing negative predictive error (i.e. not mistaking a scene in which there is a lion for a scene without a lion), rather than positive predictive error (i.e. not mistaking a scene without a lion for a scene with a lion). If you minimize positive predictive error, you’ll never be frightened by a shadow, but you might get eaten by a lion.

Fairness/Cheating

This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.

Loyalty/Betrayal

This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it was supposed to ensure coherent groups.

Authority/Subversion

This is the moral foundation that makes people obey their boss without talking back or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may have once been very useful to have people believe and obey their elders without question (e.g. when the elders say “don’t drink that water, it’s poisoned”, no one does, and the story can be passed down and keep people safe without someone having to die every few years to prove that the water is indeed poisoned).

Sanctity/Degradation

This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.

The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.

Liberty/Oppression

This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.

Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.

Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).

Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.

Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many auspices of the government and have been trying to use it to guarantee their version of liberty (with mixed results and many reversals) ever since.

Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.

This section – which characterized certain political views as stemming from “deficiencies” in certain “moral modules”, in a way that is probably hereditary – made me pause and wonder if this is a dangerous book. I’m reminded of Hannah Arendt talking about “tolerance” for Jews committing treason in The Origins of Totalitarianism.

It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.

That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.

The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.

Moral foundation theory gave me a vocabulary for some of the political writing I was doing last year. After the Conservative (Party of Canada) Leadership Convention, I talked about social conservative legislation as a way to help bind people to collective morality. I also talked about how people who hold other values very strongly – and your values not at all – can look diametrically opposed to you.

The third and final section of The Righteous Mind focuses further on political tribes. Its central metaphor is that humans are “90% chimp, 10% bee”. Its central purpose is to show how humans might have been subject to group selection and how our groupishness is important to our morality.

Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.

Haidt walks through the arguments against group selection and shows how they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if that seemed to be the only way to protect the herd’s young. This gene might help more deer in the herd attain adulthood, but it would also lead to any deer who had it having fewer children. There’s certainly an advantage to the herd if some members have this gene, but there’s no advantage to the carriers and a lot of advantage to every deer in the herd who doesn’t carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.

But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.

Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn’t. When humans began to form larger and larger societies, it makes sense that the societies who could better coordinate selfless behaviour would do better than those that couldn’t. And this isn’t just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at “turning resources into offspring”.

A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).

Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society could become, the more it would be able to outcompete its nearby rivals, whether directly or indirectly, and so increase the proportion of its conditionally selfless genes in the human gene pool.

Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.

The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.

The last few chapters are an exploration of how individualism can undermine the hive switch and several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society’s ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I’ll talk more about moral capital later in the review.

II – On Shaky Foundations

Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.

You may have heard that psychology is currently in the midst of a “replication crisis”. Much (I’d crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.

Here’s what the summary of Chapter 3 looks like with the offending evidence removed:

Pictured: Page 82 of my edition of The Righteous Mind, after some “minor” corrections. Text is © 2012 Jonathan Haidt. Used here for purposes of commentary and criticism.


Here’s an incomplete list of claims that didn’t replicate:

  • IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
  • Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we’d like to believe.
  • Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
  • People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
  • The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.

The proper attitude with which to view psychology studies these days is extreme scepticism. There are a series of bad incentives (it’s harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) that have led scientists in psychology (and other fields) to publish false results, both inadvertently and advertently. In any field in which you expect true discoveries to be rare (and I think “interesting and counter-intuitive things about the human brain” fits that bill), you shouldn’t allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading “Why Most Published Research Findings Are False” (Ioannidis 2005).
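The base-rate logic behind that scepticism is easy to make concrete. Here is a minimal sketch – using the same illustrative assumptions as the schizophrenia-gene example earlier in this post (a 1-in-250 prior, 80% power, and a p < 0.05 threshold) – of how often a “significant” finding is actually true:

```python
def prob_finding_is_true(prior, power, alpha):
    """Positive predictive value of a single 'significant' result.

    prior: fraction of tested hypotheses that are actually true
    power: P(significant result | hypothesis is true)
    alpha: P(significant result | hypothesis is false), the false-positive rate
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# The schizophrenia-gene example: 1-in-250 prior, 80% power, p < 0.05.
ppv = prob_finding_is_true(prior=1 / 250, power=0.80, alpha=0.05)
print(f"{ppv:.1%}")  # prints "6.0%"
```

With a prior that low, only about 6% of “significant” results reflect real effects – which is exactly why no single surprising study should move you very much.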

Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman’s work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It’s possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it’s hard to believe that they haven’t come up with a reasonable clustering of the possibility space of human values.

I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.

Haidt’s moral relativism around patriarchal cultures was the other.

III – Less and Less WEIRD

It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundation theory showed that morally, at least, it was entirely false.

Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.

His willingness to get outside of his bubble and to learn from others is laudable.

But.

There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?

I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.

It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.

Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.

Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?

It’s frustrating when Haidt is lecturing liberals on their “deficient” moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. “Can’t you see how this other society knows some moral truths [like men holding authority over women] that we’ve lost” is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It’s easy to lecture others on the hard sacrifices society “must” make – and far harder to look for sacrifices that will mainly affect you personally.

It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!

Not only that, but perhaps a female author would have given more pages to the observation that women and children’s role in societal altruism was just as important as that of men (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.

Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.

That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.

IV – What if Liberals are Wrong?

There is a principle called “Chesterton’s Fence”, which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said “no!”, resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.

There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.

Here’s what the argument looks like:

Many years ago, western nations had a unified moral framework. This framework supported people in making long-term decisions and acting in a pro-social manner. There are many people who want to act differently than they would if left to their own devices, and this framework helped them to do that.

Liberals began to dismantle this system in the sixties. They saw hierarchies and people being unable to do the things they wanted to do, so tried to take down the whole edifice without first checking if any of it was doing anything important.

This strand of conservatism would argue that it was. They point to the increasing number of children born to parents who aren’t married (although increasingly these parents aren’t teens, which is pretty great), increasing crime (although this has started to fall after we took lead out of gasoline), increasing atomisation, decreasing church attendance, and increasing rates of anxiety and depression (although it is unclear how much of this is just people feeling more comfortable getting treatment).

Here’s the thing. All of these trends affect well educated and well-off liberals the least. We’re safe from crime in good neighbourhoods. We overwhelmingly wait until stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have; rehab to help us kick any drug habits we pick up.

Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun, take risks, but know that we have a safety net waiting to catch us should we fall.

The conservative appeal to tradition points out that our good time might be at the expense of the poor. It asks us if our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.

But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn’t mess kids up) is baffling. The sensible position would have been “how can we use this to make marriage cool again”, not “how long can we delay this”.

This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.

I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.

V – What if Liberals Listened?

In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves “Rationalists”. They keep this moniker despite agreeing with Haidt as to the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded they give you a political affiliation, they would probably either say “liberal” or “libertarian”.

The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).

This secular simulacrum of a religion has been enough to fascinate at least one Catholic.

The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren’t very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care to ensure that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.

This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).

No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.

Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.

This is all of it, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It’s an example of what I wished for at the end of the last section applied to the real world.

VI – Is or Ought?

I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.

I can’t wait for someone other than Haidt to write a synthesis; a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.

Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.

I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.

The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.

Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.

At the start of the Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead only by them.

But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.

In this area, the philosophers deserve to keep their monopoly a little longer.

Science

Science Isn’t Your Cudgel

Do you want to understand how the material world works at the most fundamental level? Great! There’s a tool for that. Or a method. Or a collection of knowledge. “Science” is an amorphous concept, hard to pin down or put into a box. Is science the method of hypothesis generation and testing? Is it, as Popper claimed, asking falsifiable questions and trying to refute your own theories? Is it inextricably entangled with the reams of statistical methods that have grown up in service of it? Or is it the body of knowledge that has emerged from the use of all of these intellectual tools?

I’m not sure what exactly science is. Whatever its definition, I feel like it helps me understand the world. Even still, I have to remind myself that caring about science is like caring about a partner in a marriage. You need to be with it in good health and in bad, when it confirms things you’ve always wanted to believe, or when your favourite study fails to replicate or is retracted. It’s rank hypocrisy to shout the virtues of science when it confirms your beliefs and denigrate or ignore it when it doesn’t.

Unfortunately, it’s easy to collect examples of people who are selective about their support for science. Here are three:

  1. Elizabeth May – and many other environmentalists – are really fond of the phrase “the science is clear” when talking about global warming or the dangers of pollution. In this they are entirely correct – the scientific consensus on global warming is incredibly clear. But when Elizabeth May says things like “Nuclear energy power generation has been proven to be harmful to the environment and hazardous to human health”, she isn’t speaking scientifically. Nuclear energy is one of the safest forms of power for both humans and the climate. Elizabeth May (and most of the environmental movement) are only fans of science when it fits with their fantasies of deindustrialization, not when it conflicts with them. See also the conflict between scientists on GMOs and environmentalists on GMOs.
  2. Hillary Clinton (who earned the support of most progressive Americans in the past election) is quite happy to applaud the March For Science and talk about how important science is, but she’s equally happy to peddle junk science (like the implicit association test) on the campaign trail.
  3. Unfortunately, this is a bipartisan phenomenon [1]. So called “race realists” belong on this list as well [2]. Race realists take research about racial variations in IQ (often done in America, with all of its gory history of repression along racial lines) and then claim that it maps directly onto observable racial characteristics. Race realists ignore the fact that scientific attempts at racial clustering show strong continuity between populations and find that almost all genetic variance is individual, not between groups [3]. Race realists are fond of saying that people must accept the “unfortunate truth”, but are terrible at accepting that science is at least as unfortunate for their position as it is for blank slatism. The true scientific consensus lies somewhere in-between [4].

In all these cases, we see people who are enthusiastic defenders of “science” as long as the evidence suits the beliefs that they already hold. They are especially excited to use capital-S Science as a cudgel to bludgeon people who disagree with them and shallowly defend the validity of science out of concern for their cudgel. But actually caring about science requires an almost Kierkegaardian act of resignation. You have to give up on your biases, give up on what you want to be true, and accept the consensus of experts.

Caring about science enough to be unwilling to hold beliefs that aren’t supported by evidence is probably not for everyone. I’m not even sure I want it to be for everyone. Mike Alder says of a perfect empiricist:

It must also be said that, although one might much admire a genuine [empiricist] philosopher if such could be found, it would be unwise to invite one to a dinner party. Unwilling to discuss anything unless he understood it to a depth that most people never attain on anything, he would be a notably poor conversationalist. We can safely say that he would have no opinions on religion or politics, and his views on sex would tend either to the very theoretical or to the decidedly empirical, thus more or less ruling out discussion on anything of general interest.

Science isn’t all there is. It would be a much poorer world if it was. I love literature and video games, silly puns and recursive political jokes. I don’t try and make every statement I utter empirically correct. There’s a lot of value in having people haring off in weird directions or trying speculative modes of thought. And many questions cannot be answered through science.

But dammit, I have standards. This blog has codified epistemic statuses and I try and use them. I make public predictions and keep a record of how I do on them so that people can assess my accuracy as a predictor. I admit it when I’m wrong.

I don’t want to make it seem like you have to go that far to have a non-hypocritical respect for science.  Honestly, looking for a meta-analysis before posting something both factual and potentially controversial will get you 80% of the way there.

Science is more than a march and some funny Facebook memes. I’m glad to see so many people identifying so strongly with science. But for it to mean anything they have to be prepared to do the painful legwork of researching their views and admitting when they’re wrong. I have in the past hoped that loudly trumpeting support for science might be a gateway drug towards a deeper respect for science, but I don’t think I’ve seen any evidence for this. It’s my hope that over the next few years we’ll see more and more of the public facing science community take people to task for shallow support. If we make it low status to be a fair-weather friend of science, will we see more people actually putting in the work to properly support their views with empirical evidence?

This is an experiment I would like to try.

Footnotes

[1] The right, especially the religious right, is less likely to use “science” as a justification for anything, which is the main reason I don’t have complaints about them in this blog post. It is obviously terrible science to pretend that evolution didn’t happen or that global warming isn’t occurring, but it isn’t hypocritical if you don’t otherwise claim to be a fan of science. Crucially, this blog post is more about hypocrisy than bad science per se. ^

[2] My problems with race realists go beyond their questionable scientific claims. I also find them to be followers of a weird and twisted philosophy that equates intelligence with moral worth in a way I find repulsive. ^

[3] Taken together, these are damning for the idea that race can be easily inferred from skin colour. ^

[4] Yes, I know we aren’t supposed to trust Vox when it comes to scientific consensus. But Freddie de Boer backs it up and people I trust who have spent way more time than I have reading about IQ think that Freddie knows his stuff. ^

Biology, Politics

Medicine, the Inside View, and Historical Context

If you don’t live in Southern Ontario or don’t hang out in the skeptic blogosphere, you will probably have never heard the stories I’m going to tell today. There are two of them, both about young Ontarian girls. One story has a happier ending than the other.

First is Makayla Sault. She died two years ago, from complications of acute lymphoblastic leukemia. She was 11. Had she completed a full course of chemotherapy, there is a 75% chance that she would be alive today.

She did not complete a full course of chemotherapy.

Instead, after 12 weeks of therapy, she and her parents decided to seek so-called “holistic” treatment at the Hippocrates Health Institute in Florida, as well as traditional indigenous treatments. This decision killed her. With chemotherapy, she had a good chance of surviving. Without it…

There is no traditional wisdom that offers anything against cancer. There is no diet that can cure cancer. The Hippocrates Health Institute offers services like Vitamin C IV drips, InfraRed Oxygen, and Lymphatic Stimulation. None of these will stop cancer. Against cancer all we have are radiation, chemotherapy, and the surgeon’s knife. We have ingenuity, science, and the blinded trial.

Anyone who tells you otherwise is lying to you. If they are profiting from the treatments they offer, then they are profiting from death as surely as if they were selling tobacco or bombs.

Makayla’s parents were swindled. They paid $18,000 to the Hippocrates Health Institute for treatments that did nothing. There is no epithet I possess suitable to apply to someone who would scam the parents of a young girl with cancer (and by doing so, kill the young girl).

There was another girl (her name is under a publication ban; I only know her by her initials, J.J.) whose parents withdrew her from chemotherapy around the same time as Makayla. She too went to the Hippocrates Health Institute. But when she suffered a relapse of cancer, her parents appear to have fallen out with Hippocrates. They returned to Canada and sought chemotherapy alongside traditional Haudenosaunee medicine. This is the part of the story with a happy ending. The chemotherapy saved J.J.’s life.

When J.J. left chemotherapy, her doctors at McMaster Children’s Hospital [1] sued the Children’s Aid Society of Brant. They wanted the Children’s Aid Society to remove J.J. from her parents so that she could complete her course of treatment. I understand why J.J.’s doctors did this. They knew that without chemotherapy she would die. While merely telling the Children’s Aid Society this fact discharged their legal duty [2], it did not discharge their ethical duty. They sued because the Children’s Aid Society refused to act in what they saw as the best interest of a child; they sued because they found this unconscionable.

The judge denied their lawsuit. He ruled that indigenous Canadians have a charter right to receive traditional medical care if they wish it [3].

Makayla died because she left chemotherapy. J.J. could have died had she and her parents not reversed their decision. But I’m glad the judge didn’t order J.J. back into chemotherapy.

To explain why I’m glad, I first want to talk about the difference between the inside view and the outside view. The inside view is what you get when you search for evidence from your own circumstances and experiences and then apply that to estimate how you will fare on a problem you are facing. The outside view is when you dispassionately look at how people similar to you have fared dealing with similar problems and assume you will fare approximately the same.

Dr. Daniel Kahneman gives the example of a textbook he worked on. After completing two chapters in a year, the team extrapolated and decided it would take them two more years to finish. Daniel asked Seymour (another team member) how long it normally took to write a textbook. Surprised, Seymour explained that it normally took seven to ten years all told and that approximately 40% of teams failed. This caused some dismay, but ultimately everyone (including Seymour) decided to persevere (probably believing that they’d be the exception). Eight years later, the textbook was finished. The outside view was dead on.

From the inside view, the doctors were entirely correct to try and demand that J.J. complete her treatment. They were fairly sure that her parents were making a lot of the medical decisions and they didn’t want J.J. to be doomed to die because her parents had fallen for a charlatan.

From an outside view, the doctors were treading on thin ice. If you look at past groups of doctors (or other authority figures) who intervened, with (they believed) all due benevolence, to force health interventions on Indigenous Canadians, you see a chilling litany of abuses.

This puts us in a bind. Chemotherapy doesn’t cease to work because people in the past did terrible things. Just because we have an outside view that suggests dire consequences doesn’t mean science stops working. But our outside view really strongly suggests dire consequences. How could the standard medical treatment lead to worse outcomes?

Let’s brainstorm for a second:

  • J.J. could have died regardless of chemotherapy. Had there been a court order, this would have further shaken indigenous Canadian faith in the medical establishment.
  • A court order could have undermined the right of minors in Ontario to consent to their own medical care, with far reaching effects on trans youth or teenagers seeking abortions.
  • The Children’s Aid Society could have botched the execution of the court order, leading to dramatic footage of a young, screaming indigenous girl (with cancer!) being separated from her weeping family. Indigenous Canadians would have been strongly reminded of the Sixties Scoop.
  • There could have been a stand-off when Children’s Aid arrived to collect J.J.. Knowing Canada, this is the sort of thing that could have escalated into something truly ugly, with blockades and an armed standoff with the OPP or the military.

The outside view doesn’t suggest that chemotherapy won’t work. It simply suggests that any decision around forcing indigenous Canadians to receive health care they don’t want is rife with opportunities for unintended consequences. J.J.’s doctors may have been acting out of a desire to save her life. But they were acting in a way that showed profound ignorance of Canada’s political context and past.

I think this is a weakness of the scientific and medical establishment. They get so caught up in what is true that they forget the context for the truth. We live in a country where we have access to many lifesaving medicines. We also live in a country where many of those medicines were tested on children who had been stolen from their parents and placed in residential schools – tested in ways that spit on the concept of informed consent.

When we are reminded of the crimes committed in the name of science and medicine, it is tempting to say “that wasn’t us; it was those who came before, we are innocent” – to skip to the end of the apologies and reparations and find ourselves forgiven. Tempting and so, so unfair to those who suffered (and still do suffer) because of the actions of some “beneficent” doctors and scientists. Instead of wishing to jump ahead, we should pause and reflect. What things have we done and advocated for that will bring shame on our fields in the future?

Yes, indigenous Canadians sometimes opt out of the formal medical system. So do white hippies. At least indigenous Canadians have a reason. If trips to the hospital occasionally ended in tragedy for people who looked like me, I’d be a lot warier of them myself.

Scientists and doctors can’t always rely on the courts and on civil society to save us from ourselves. At some point, we have to start taking responsibility for our own actions. We might even have to stop sneering at post-modernism (something I’ve been guilty of in the past) long enough to take seriously its claim that we have to be careful about how knowledge is constructed.

In the end, the story of J.J., unlike that of Makayla, had a happy ending. Best of all, by ending the way it did, J.J.’s story should act as an example, for the medical system and indigenous Canadians both, on how to achieve good outcomes together.

In the story of Pandora’s Box, all of the pestilence and disease of the world sprung as demons from a cursed box, and humanity was doomed to endure them evermore. Well, we aren’t doomed forever; modern medicine has begun to put the demons back inside the box. It has accomplished this by following one deceptively simple rule: “do what works”. Now the challenge is to extend what works beyond just the treatments doctors choose. Increasingly important is how diseases are treated. When doctors respect their patients, respect their lived experiences, and respect the historical contexts that might cause patients to be fearful of treatments, they’ll have far more success doing what it is they do best: curing people.

It was an abrogation of duty to go to the courts instead of respectfully dealing with J.J.’s family. It was reckless and it could have put years of careful outreach by other doctors at risk. Sometimes there are things more important than one life. That’s why I’m glad the judge didn’t order J.J. back into chemo.

Footnotes:

[1] I have a lot of fondness for McMaster, having had at least one surgery and many doctors’ appointments there. ^

[2] Doctors have a legal obligation to report any child abuse they see. Under subsection 37(2)e of the Child and Family Services Act (CFSA), this includes “the child requires medical treatment to cure, prevent or alleviate physical harm or suffering, and the child’s parent refuses to consent to treatment”. ^

[3] I’m not actually sure how relevant that is here – Brian Clement is no one’s idea of an expert in Indigenous medicine and it’s not clear that this ruling still sets any sort of precedent, given that the judge later amended his ruling to “make it clear that the interests of the child must be paramount” in cases like this. ^