History, Quick Fix

Against Historical Narratives

There is perhaps no temptation greater to the amateur (or professional) historian than to take a set of historical facts and draw from them a grand narrative. This tradition has existed at least since Gibbon wrote The History of the Decline and Fall of the Roman Empire, with its focus on declining civic virtue and the rise of Christianity.

Obviously, it is true that things in history happen for a reason. But I think the case is much less clear that these reasons can be marshalled like soldiers and made to march in neat lines across the centuries. What is true in one time and place may not necessarily be true in another. When you fall under the sway of a grand narrative, when you believe that everything happens for a reason, you may become tempted to ignore all of the evidence to the contrary.

Instead of praying at the altar of grand narratives, I’d like to suggest that you embrace the ambiguity of history, an ambiguity that exists because…

Context Is Tricky

Here are six sentences someone could tell you about their interaction with the sharing economy:

  • I stayed at an Uber last night
  • I took an AirBnB to the mall
  • I deliberately took an Uber
  • I deliberately took a Lyft
  • I deliberately took a taxi
  • I can’t remember which ride-hailing app I used

Each of these sentences has an overt meaning. They describe how someone spent a night or got from place A to place B. They also have a deeper meaning, a meaning that only makes sense in the current context. Imagine your friend told you that they deliberately took an Uber. What does it say about them that they deliberately took a ride in the most embattled and controversial ridesharing platform? How would you expect their political views to differ from someone who told you they deliberately took a taxi?

Even simple statements carry a lot of hidden context, context that is necessary for full understanding.

Do you know what the equivalent statements to the six I listed would be in China? How about in Saudi Arabia? I can tell you that I don’t know either. Of course, it isn’t particularly hard to find these out for China (or Saudi Arabia). You may not find a key written down anywhere (especially if you can only read English), but all you have to do is ask someone from either country and they could quickly give you a set of contextual equivalents.

Luckily historians can do the same… oh. Oh damn.

When you’re dealing with the history of a civilization that “ended” hundreds or thousands of years ago, you’re going to be dealing with cultural context that you don’t fully understand. Sometimes people are helpful enough to write down “Uber = kind of evil” and “supporting taxis = very left wing, probably vegan & goes to protests”. A lot of the time they don’t, though, because that’s all cultural context that anyone they’re writing to would obviously have.

And sometimes they do write down even the obvious stuff, only for it all to get burned when barbarians sack their city, leaving us with no real way to understand if a sentence like “the opposing orator wore red” has any sort of meaning beyond a statement of sartorial critique or not.

All of this is to say that context can make or break narratives. Look at the play “Hamilton”. It’s a play aimed at urban progressives. The titular character’s strong anti-slavery views are supposed to code to a modern audience that he’s on the same political team as them. But if you look at American history, it turns out that support for abolishing slavery (and later, abolishing segregation) and support for big corporations over the “little guy” were correlated until very recently. From the 1960s through the 1990s, there was a shift such that the Democrats came to stand for both civil rights and supporting poorer Americans, instead of just the latter. Before this shift, Democrats were the party of segregation, not that you’d know it to see them today.

Trying to tie Hamilton into a grander narrative of (eventual) progressive triumph erases the fact that most of the modern audience would strenuously disagree with his economic views (aside from urban neo-liberals, who are very much in Hamilton’s mold). Audiences end up leaving the play with a story about their own intellectual lineage that is far from correct, a story that may cause them to feel smugly superior to people of other political stripes.

History optimized for this sort of team or political effect turns many modern historians or history writers into…

Unreliable Narrators

Gaps in context – modern readers missing the true significance of gestures, words, and acts steeped in a particular extinct culture – combined with the fact that it is often impossible to really know why someone in the past did something, mean that some of history will always be filled in with our best guesses.

Professor Mary Beard really drove this point home for me in her book SPQR. She showed me how history that I thought was solid was often made up of myths, exaggerations, and wishful thinking on the part of modern authors. We know much less about Rome than many historians had led me to believe, probably because any nuance or alternative explanation would ruin their grand theories.

When it comes to so much of the past, we genuinely don’t know why things happened.

I recently heard two colleagues arguing about The Great Divergence – the unexplained difference in growth rates between Europe and the rest of the world that became apparent in the 1700s and 1800s. One was very confident that it could be explained by access to coal. The other was just as confident that it could be explained by differences in property rights.

I waded in and pointed out that Wikipedia lists fifteen possible explanations, all of which or none of which could be true. Confidence about the cause of the great divergence seems to me a very silly thing. We cannot reproduce it, so all theories must be definitionally unfalsifiable.

But both of my colleagues had read narrative accounts of history. And these narrative accounts had agendas. One wished to show that all peoples had the same inherent abilities and so cast The Great Divergence as chance. The other wanted to show how important property rights are and so made those the central factor in it. Neither gave much time to the other’s explanation, or to any of the thirteen others that a well-trafficked and heavily edited Wikipedia article finds equally credible.

Neither agenda was bad here. I am in fact broadly in favour of both. Yet their effect was to give two otherwise intelligent and well-read people a myopic view of history.

So much of narrative history is like this! Authors take the possibilities they like best, or that support their political beliefs the best, or that they think will sell the best, and write them down as if they were the only possibilities. Anyone who is unlucky enough to read such an account will be left with a false sense of certainty – and in ignorance of all the other options.


Of course, I have an agenda too. We all do. It’s just that my agenda is literally “the truth resists simplicity”. I like the messiness of history. It fits my aesthetic sense well. It’s because of this sense that I’d like to encourage everyone to make their next foray into history free of narratives. Use Wikipedia or a textbook instead of a bestselling book. Read something by Mary Beard, who writes as much about historiography as she writes about history. Whatever you do, avoid books with blurbs praising the author for their “controversial” or “insightful” new theory.

Leave behind, just once, those famous narrative works like “Guns, Germs, and Steel” or “The History of the Decline and Fall of the Roman Empire” and pick up something that embraces ambiguity and doesn’t bury messiness behind a simple agenda.

Biology, Ethics, Literature, Philosophy

Book Review: The Righteous Mind

I – Summary

The Righteous Mind follows an argument structure I learned in high school debate club. It tells you what it’s going to tell you, it tells you it, then it reminds you what it told you. This made it a really easy read and a welcome break from The Origins of Totalitarianism, the other book I’ve been reading. Practically the very first part of The Righteous Mind proper (after the foreword) is an introduction to its first metaphor.

Imagine an elephant and a rider. They have travelled together since their birth and move as one. The elephant doesn’t say much (it’s an elephant), but the rider is very vocal – for example, she’s quick to apologize and explain away any damage the elephant might do. A casual observer might think the rider is in charge, because she is so much cleverer and more talkative, but that casual observer would be wrong. The rider is the press secretary for the elephant. She explains its action, but it is much bigger and stronger than her. It’s the one who is ultimately calling the shots. Sometimes she might convince it one way or the other, but in general, she’s buffeted along by it, stuck riding wherever it goes.

She wouldn’t agree with that last part though. She doesn’t want to admit that she’s not in charge, so she hides the fact that she’s mainly a press secretary even from herself. As soon as the elephant begins to move, she is already inventing a reason why it was her idea all along.

This is how Haidt views human cognition and decision making. In common terms, the elephant is our unconscious mind and the rider our consciousness. In Kahneman’s terms, the elephant is our System 1 and the rider our System 2. We may make some decisions consciously, but many of them are made below the level of our thinking.

Haidt illustrates this with an amusing anecdote. His wife asks him why he didn’t finish some dishes he’d been doing and he immediately weaves a story of their crying baby and barking incontinent dog preventing him. Only because he had his book draft open on his computer did he realize that these were lies… or rather, a creative and overly flattering version of the truth.

The baby did indeed cry and the dog did indeed bark, but neither of these prevented him from doing the dishes. The cacophony happened well before that. He’d been distracted by something else, something less sympathetic. But his rider, his “internal press secretary”, immediately came up with an excuse and told it, without any conscious input or intent to deceive.

We all tell these sorts of lies reflexively. They take the form of slight, harmless embellishments that make our stories more flattering or interesting, or our apologies more sympathetic.

The key insight here isn’t that we’re all compulsive liars. It’s that the “I” that we like to think exists to run our life doesn’t, really. Sometimes we make decisions consciously, especially ones the elephant doesn’t think it can handle (high-stakes apologies, anyone?), but normally decisions happen before we even think about them. From Haidt’s perspective, “I” is really “we”: the elephant and its rider. And we need to be careful to give the elephant its due, even though it’s quiet.

Haidt devotes a lot of pages to an impassioned criticism of moral rationalism, the belief that morality is best understood and attained by thinking very hard about it. He explicitly mentions that to make this more engaging, he wraps it up in his own story of entering the field of moral psychology.

He starts his journey with Kohlberg, who published a famous account of the stages of moral reasoning, stages that culminate in rationally building a model of justice. This paradigm took the world of moral psychology by storm and reinforced the view (dating in Western civilization to the time of the Greeks) that right thought had to precede right action.

Haidt was initially enamoured with Kohlberg’s taxonomy. But reading ethnographies and doing research in other countries began to make him suspect things weren’t as simple as Kohlberg thought. Haidt and others found that moral intuitions and responses to dilemmas differed by country. In particular, WEIRD people (people from countries that are Western, Educated, Industrialized, Rich, and Democratic, and most especially the most educated people in those countries) were very much able to tamp down feelings of disgust in moral problems, in a way that seemed far from universal.

For example, if asked if it was wrong for a family to eat their dog if it was killed by a car (and the alternative was burying it), students would say something along the lines of “well, I wouldn’t, but it’s gross, not wrong”. Participants recruited at a nearby McDonalds gave a rather different answer: “of course it’s wrong, why are you even asking”. WEIRD students at prestigious universities may have been working towards a rational, justice-focused explanation for morality, but Haidt found no evidence that this process (or even a focus on “justice”) was as universal as Kohlberg claimed.

That’s not to say that WEIRD students had no disgust response. In fact, trying to activate it gave even more interesting results. When asked to justify answers where disgust overpowered their sense of “well, as long as no one was hurt” (e.g. consensual adult sibling incest with no chance of children), Haidt observed that people would throw up a variety of weak excuses, often before they had a chance to think the problem through. When confronted by the weakness of their arguments, they’d go speechless.

This made Haidt suspect that two entirely separate processes were going on: a fast one for deciding and a slower one for explaining. Furthermore, the slower process was often left holding the bag for the faster one. Intuitions would provide an answer, then the subject would have to explain it, no matter how logically indefensible it was.

Haidt began to believe that Kohlberg had only keyed in on the second, slower process, “the talking of the rider” in metaphor-speak. From this point of view, Kohlberg wasn’t measuring moral sophistication. He was instead measuring how fluidly people could explain their often less than logical moral intuitions.

There were two final nails in the coffin of ethical rationalism for Haidt. First, he learned of a type of brain injury that separated people from their moral intuitions (or, as the rationalists might call them, “passions”). Contrary to the rationalist expectation, these people’s lives went to hell: they alienated everyone they knew, got fired from their jobs, and in general proved the unsuitability of pure reason for making many types of decisions – exactly the opposite of what rationalists predicted.

Second, he saw research that suggested that in practical measures (like missing library books), moral philosophers were no more moral than other philosophy professors.

Abandoning rationalism brought Haidt to a sentimentalist approach to ethics. In this view, ethics stems from feelings about how the world ought to be. These feelings are innate, but not immutable. Haidt describes people as “prewired”, not “hardwired”. You might be prewired to have a strong loyalty foundation, but a series of betrayals and letdowns early in life might convince you that loyalty is just a lie, told to control idealists.

Haidt also believes that our elephants are uniquely susceptible to being convinced by other people in face-to-face discussion. He views the mechanism here as empathy at least as much as logic. People that we trust and respect can point out our weak arguments, with our respect for them and positive feelings towards them being the main motive force for us listening to these criticisms. The metaphor with elephants kind of breaks down here, but this does seem to better describe the world as it is, so I’ll allow it.

Because of this, Haidt would admit that rationalism does have some purpose in moral reasoning, but he thinks it is ancillary and mainly used to convince other people. I’m not sure how testable making evolutionary conclusions about this is, but it does seem plausible for there to be selection pressure to make us really good at explaining ourselves and convincing others of our point of view.

As Haidt took this into account and began to survey people’s moral instincts, he saw that the ways in which responses differed by country and class were actually highly repeatable and seemed to gesture at underlying categories of people. After analyzing many, many survey responses, he and his collaborators came up with five (later six) moral “modules” that people have. Each moral module looks for violations of a specific class of ethical rules.

Haidt likens these modules to our taste-buds. The six moral tastes are the central metaphor of the second section of the book.

Not everyone has these taste-buds/modules in equal proportion. Looking at commonalities among respondents, Haidt found that the WEIRDer someone was, the less likely they were to have certain modules. Conservatives tended to have all modules in fairly equal proportion, while liberals tended to be lacking three. Libertarians were lacking a whopping four, which might explain why everyone tends to believe they’re the worst.

The six moral foundations are:

Care/Harm

This is the moral foundation that makes us care about suffering and pain in others. Haidt speculates that it originally evolved in order to ensure that children (which are an enormous investment of resources for mammals and doubly so for us) got properly cared for. It was originally triggered only by the suffering or distress of our own children, but can now be triggered by anyone being hurt, as well as cute cat videos or baby seals.

An expanding set of triggers seems to be a common theme for these. I’ve personally speculated that this is what you would observe if the brain were wired to minimize false negatives (i.e. not mistaking a scene in which there is a lion for a scene without a lion) rather than false positives (i.e. not mistaking a scene without a lion for a scene with a lion). If you minimize only false positives, you’ll never be frightened by a shadow, but you might get eaten by a lion.
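To make that speculation concrete, here’s a toy sketch (the costs and probabilities are entirely made up) of how lopsided error costs push a detector toward reacting on very weak evidence – which, from the outside, looks a lot like an ever-expanding set of triggers:

```python
# A toy model of the asymmetry above; all costs are made up.
# When missing a real threat is far costlier than a false alarm,
# the optimal detector reacts to very weak evidence.

COST_OF_MISS = 1000.0      # hypothetical cost of ignoring a real lion
COST_OF_FALSE_ALARM = 1.0  # hypothetical cost of startling at a shadow

def should_react(p_lion):
    """React whenever the expected cost of ignoring exceeds the cost of reacting."""
    return p_lion * COST_OF_MISS > (1 - p_lion) * COST_OF_FALSE_ALARM

print(should_react(0.01))    # True – even a 1% chance of a lion triggers a reaction
print(should_react(0.0005))  # False – only truly negligible evidence gets ignored
```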

Fairness/Cheating

This is the moral foundation that makes us want everyone to do their fair share and makes us want to punish tax evaders or welfare cheats (depending on our political orientation). The evolutionary story given for this one is that it evolved to allow us to reap the benefits of two-way partnerships; it was an incentive against defecting.

Loyalty/Betrayal

This is the foundation that makes us rally around our politicians, community leaders, and sports teams, as well as the foundation that makes some people care more about people from their country than people in general. Haidt’s evolutionary explanation for this one is that it was supposed to ensure coherent groups.

Authority/Subversion

This is the moral foundation that makes people obey their boss without talking back or avoid calling their parents by their first names. It supposedly evolved to allow us to forge beneficial relationships within hierarchies. Basically, it may have once been very useful to have people believe and obey their elders without question (e.g. when the elders say “don’t drink that water, it’s poisoned”, no one does, and the warning can be passed down and keep people safe without someone having to die every few years to prove that the water is indeed poisoned).

Sanctity/Degradation

This is the moral foundation that makes people on the right leery of pre-marital sex and people on the left leery of “chemicals”. It shows up whenever we view our bodies as more than just our bodies and the world as more than just a collection of things, as well as whenever we feel that something makes us “spiritually” dirty.

The very plausible explanation for this one is that it evolved in response to the omnivore’s dilemma: how do we balance the desire for novel food sources with the risk they might poison us? We do it by avoiding anything that looks diseased or rotted. This became a moral foundation as we slowly began applying it to stuff beyond food – like other people. Historically, the sanctity moral framework was probably responsible for the despised status of lepers.

Liberty/Oppression

This moral foundation is always in tension with Authority/Subversion. It’s the foundation that makes us want to band together against and cast down anyone who is aggrandizing themselves or using their power to mistreat another.

Haidt suggests that this evolved to allow us to band together against “alpha males” and check their power. In his original surveys, it was part of Fairness/Cheating, but he found that separating it gave him much more resolving power between liberals and conservatives.

Of these six foundations, Haidt found that libertarians only had an appreciable amount of Liberty/Oppression and Fairness/Cheating and of these two, Liberty/Oppression was by far the stronger. While the other foundations did exist, they were mostly inactive and only showed up under extreme duress. For liberals, he found that they had Care/Harm, Liberty/Oppression, and Fairness/Cheating (in that order).

Conservatives in Haidt’s survey had all six moral foundations, like I said above. Care/Harm was their strongest foundation, but by having appreciable amounts of Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation, they would occasionally overrule Care/Harm in favour of one or another of these foundations.

Haidt uses these moral foundations to give an account of the “improbable” coalition between libertarians and social conservatives that closely matches the best ones to come out of political science. Basically, liberals and libertarians are descended (ideologically, if not filially) from those who embraced the enlightenment and the liberty it brought. About a hundred years ago (depending on the chronology and the country), the descendants of the enlightenment had a great schism, with some continuing to view the government as the most important threat to liberty (libertarians) and others viewing corporations as the more pressing threat (liberals). Liberals took over many auspices of the government and have been trying to use it to guarantee their version of liberty (with mixed results and many reversals) ever since.

Conservatives do not support this project of remaking society from the top down via the government. They believe that liberals want to change too many things, too quickly. Conservatives aren’t opposed to the government qua government. In fact, they’d be very congenial to a government that shared their values. But they are very hostile to a liberal, activist government (which is rightly or wrongly how conservatives view the governments of most western nations) and so team up with libertarians in the hopes of dismantling it.

This section – which characterized certain political views as stemming from “deficiencies” in certain “moral modules”, in a way that is probably hereditary – made me pause and wonder if this is a dangerous book. I’m reminded of Hannah Arendt talking about “tolerance” for Jews committing treason in The Origins of Totalitarianism:

It is an attraction to murder and treason which hides behind such perverted tolerance, for in a moment it can switch to a decision to liquidate not only all actual criminals but all who are “racially” predestined to commit certain crimes. Such changes take place whenever the legal and political machine is not separated from society so that social standards can penetrate into it and become political and legal rules. The seeming broad-mindedness that equates crime and vice, if allowed to establish its own code of law, will invariably prove more cruel and inhuman than laws, no matter how severe, which respect and recognize man’s independent responsibility for his behavior.

That said, it is possible for inconvenient or dangerous things to be true and their inconvenience or danger has no bearing on their truth. If Haidt saw his writings being used to justify or promote violence, he’d have a moral responsibility to decry the perpetrators. Accepting that sort of moral responsibility is, I believe, part of the responsibility that scientists who deal with sensitive topics must accept. I do not believe that this responsibility precludes publishing. I firmly believe that only right information can lead to right action, so I am on the whole grateful for Haidt’s taxonomy.

The similarities between liberals and libertarians extend beyond ethics. Both have more openness to experience and less of a threat response than conservatives. This explains why socially, liberals and libertarians have much more in common than liberals and conservatives.

Moral foundation theory gave me a vocabulary for some of the political writing I was doing last year. After the Conservative (Party of Canada) Leadership Convention, I talked about social conservative legislation as a way to help bind people to collective morality. I also talked about how holding other values very strongly and your values not at all can make people look diametrically opposed to you.

The third and final section of The Righteous Mind further focuses on political tribes. Its central metaphor is that humans are “90% chimp, 10% bee”. Its central purpose is an attempt to show how humans might have been subject to group selection and how our groupishness is important to our morality.

Haidt claims that group selection is heresy in evolutionary biology (beyond hive insects). I don’t have the evolutionary biology background to say if this is true or not, although this does match how I’ve seen it talked about online among scientifically literate authors, so I’m inclined to believe him.

Haidt walks through the arguments against group selection and shows how they are largely sensible. It is indeed ridiculous to believe that genes for altruism could be preserved in most cases. Imagine a gene that would make a deer more likely to sacrifice itself for the good of the herd if that seemed to be the only way to protect the herd’s young. This gene might help more deer in the herd reach adulthood, but it would also lead to any deer carrying it having fewer offspring. There’s certainly an advantage to the herd if some members have this gene, but there’s no advantage to the carriers and a lot of advantage to every deer in the herd that doesn’t carry it. Free-riders will outcompete sacrificers and the selfless gene will get culled from the herd.

But humans aren’t deer. We can be selfish, yes, but we often aren’t and the ways we aren’t can’t be simply explained by greedy reciprocal altruism. If you’ve ever taken some time out of your day to help a lost tourist, congratulations, you’ve been altruistic without expecting anything in return. That people regularly do take time out of their days to help lost tourists suggests there might be something going on beyond reciprocal altruism.

Humans, unlike deer, have the resources and ability to punish free riders. We expect everyone to pitch in and might exile anyone who doesn’t. When humans began to form larger and larger societies, it makes sense that the societies that could better coordinate selfless behaviour would do better than those that couldn’t. And this isn’t just in terms of military cohesion (as the evolutionary biologist Lesley Newson had to point out to Haidt). A whole bunch of little selfless acts – sharing food, babysitting, teaching – can make a society more efficient than its neighbours at “turning resources into offspring”.

A human within the framework of society is much more capable than a human outside of it. I am only able to write this and share it widely because a whole bunch of people did the grunt work of making the laptop I’m typing it on, growing the food I eat, maintaining our communication lines, etc. If I was stuck with only my own resources, I’d be carving this into the sand (or more likely, already eaten by wolves).

Therefore, it isn’t unreasonable to expect that the more successful and interdependent a society became, the more it would be able to outcompete its nearby rivals, whether directly or indirectly, and so increase the proportion of its conditionally selfless genes in the human gene pool.

Conditional selflessness is a better description of the sorts of altruism we see in humans. It’s not purely reciprocal as Dawkins might claim, but it isn’t boundless either. It’s mostly reserved for people we view as similar to us. This doesn’t need to mean racially or religiously. In my experience, a bond as simple as doing the same sport is enough to get people to readily volunteer their time for projects like digging out and repairing a cracked foundation.

The switch from selfishness to selflessly helping out our teams is called “the hive switch” by Haidt. He devotes a lot of time to exploring how we can flip it and the benefits of flipping it. I agree with him that many of the happiest and most profound moments of anyone’s life come when the switch has been activated and they’re working as part of a team.

The last few chapters are an exploration of how individualism can undermine the hive switch, and of several mistakes liberals make in their zeal to overturn all hierarchies. Haidt believes that societies have both social capital (the bonds of trust between people) and moral capital (the society’s ability to bind people to collective values) and worries that liberal individualism can undermine these to the point where people will be overall worse off. I’ll talk more about moral capital later in the review.

II – On Shaky Foundations

Anyone who reads The Righteous Mind might quickly realize that I left a lot of the book out of my review. There was a whole bunch of supporting evidence about how liberals and conservatives “really are” or how they differ that I have deliberately omitted.

You may have heard that psychology is currently in the midst of a “replication crisis”. Much (I’d crudely estimate somewhere between 25% and 50%) of the supporting evidence in this book has been a victim of this crisis.

Here’s what the summary of Chapter 3 looks like with the offending evidence removed:

Pictured: Page 82 of my edition of The Righteous Mind, after some “minor” corrections. Text is © 2012 Jonathan Haidt. Used here for purposes of commentary and criticism.

 

Here’s an incomplete list of claims that didn’t replicate:

  • IAT tests show that we can have unconscious prejudices that affect how we make social and political judgements (1, 2, 3 critiques/failed replications). Used to buttress the elephant/rider theory of moral decisions.
  • Disgusting smells can make us more judgemental (failed replication source). Used as evidence that moral reasoning can sometimes be explained by external factors and is much less rational than we’d like to believe.
  • Babies prefer a nice puppet over a mean one, even when pre-verbal and probably lacking the context to understand what is going on (failed replication source). Used as further proof for how we are “prewired” for certain moral instincts.
  • People from Asian societies are better able to do relative geometry and less able to do absolute geometry than westerners (failed replication source). This was used to make the individualistic morality of westerners seem inherent.
  • The “Lady Macbeth Effect” showed a strong relationship between physical and moral feelings of “cleanliness” (failed replication source). Used to further strengthen the elephant/rider analogy.

The proper attitude with which to view psychology studies these days is extreme scepticism. A series of bad incentives (it’s harder and less prestigious to publish negative findings; publishing is necessary to advance in your career) has led scientists in psychology (and other fields) to publish false results, both inadvertently and otherwise. In any field where you expect true discoveries to be rare (and I think “interesting and counter-intuitive things about the human brain” fits that bill), you shouldn’t allow any individual study to influence you very much. For a full breakdown of how this can happen even when scientists check for statistical significance, I recommend reading “Why Most Published Research Findings Are False” (Ioannidis 2005).
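To see why a low base rate of true hypotheses matters so much, here is a small, purely illustrative calculation in the spirit of Ioannidis’s argument; the base rates, power, and significance threshold below are numbers I picked for illustration, not figures from the paper:

```python
# Illustrative only: the rough positive predictive value of a single
# "statistically significant" finding, under an assumed base rate of
# true hypotheses. The numbers are mine, not Ioannidis's.

def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a significant result reflects a real effect."""
    true_positives = power * prior           # real effects correctly detected
    false_positives = alpha * (1 - prior)    # null effects that pass p < 0.05
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.02):
    print(f"base rate {prior:.0%}: PPV = {ppv(prior):.0%}")

# base rate 50%: PPV = 94%
# base rate 10%: PPV = 64%
# base rate 2%: PPV = 25%
```

And this is the optimistic case: p-hacking, low power, and publication bias all push those numbers lower still.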

Moral foundations theory appears to have escaped the replication crisis mostly unscathed (as has Tversky and Kahneman’s work on heuristics, something that made me more comfortable including the elephant/rider analogy). I think this is because moral foundations theory is primarily a descriptive theory. It grew out of a large volume of survey responses and represents clusters in those responses. It makes little in the way of concrete predictions about the world. It’s possible to quibble with the way Haidt and his collaborators drew the category boundaries. But given the sheer volume of responses they received – and the fact that they based their results not just on WEIRD individuals – it’s hard to believe that they haven’t come up with a reasonable clustering of the possibility space of human values.

I will say that stripped of much of its ancillary evidence, Haidt’s attack on rationalism lost a lot of its lustre. It’s one thing to believe morality is mostly unconscious when you think that washing your hands or smelling trash can change how moral you act. It’s quite another when you know those studies were fatally flawed. The replication crisis fueled my inability to truly believe Haidt’s critique of rationality. This disbelief in turn became one of the two driving forces in my reaction to this book.

Haidt’s moral relativism around patriarchal cultures was the other.

III – Less and Less WEIRD

It’s good that Haidt looked at a variety of cultures. This is a thing few psychologists do. There’s historically been an alarming tendency to run studies on western undergraduate students, then declare “this is how people are”. This would be fine if western undergraduates were representative of people more generally, but I think that assumption was on shaky foundations even before moral foundation theory showed that morally, at least, it was entirely false.

Haidt even did some of this field work himself. He visited South America and India to run studies. In fact, he mentioned that this field work was one of the key things that made him question the validity of western individualistic morality and wary of morality that didn’t include the sanctity, loyalty, and authority foundations.

His willingness to get outside of his bubble and to learn from others is laudable.

But.

There is one key way in which Haidt never left his bubble, a way which makes me inherently suspicious of all of his defences of the sanctity, authority, and loyalty moral foundations. Here’s him recounting his trip to India. Can you spot the fatal omission?

I was told to be stricter with my servants, and to stop thanking them for serving me. I watched people bathe in and cook with visibly polluted water that was held to be sacred. In short, I was immersed in a sex-segregated, hierarchically stratified, devoutly religious society, and I was committed to understanding it on its own terms, not on mine.

It only took a few weeks for my dissonance to disappear, not because I was a natural anthropologist but because the normal human capacity for empathy kicked in. I liked these people who were hosting me, helping me, and teaching me. Wherever I went, people were kind to me. And when you’re grateful to people, it’s easier to adopt their perspective. My elephant leaned toward them, which made my rider search for moral arguments in their defense. Rather than automatically rejecting the men as sexist oppressors and pitying the women, children, and servants as helpless victims, I began to see a moral world in which families, not individuals, are the basic unit of society, and the members of each extended family (including its servants) are intensely interdependent. In this world, equality and personal autonomy were not sacred values. Honoring elders, gods, and guests, protecting subordinates, and fulfilling one’s role-based duties were more important.

Haidt tried out other moral systems, sure, but he tried them out from the top. Lois McMaster Bujold once had a character quip: “egalitarians adjust to aristocracies just fine, as long as they get to be the aristocrats”. I would suggest that liberals likewise find the authority framework all fine and dandy, as long as they have the authority.

Would Haidt have been able to find anything worth salvaging in the authority framework if he’d instead been a female researcher, who found herself ignored, denigrated, and sexually harassed on her research trip abroad?

It’s frustrating when Haidt is lecturing liberals on their “deficient” moral framework while simultaneously failing to grapple with the fact that he is remarkably privileged. “Can’t you see how this other society knows some moral truths [like men holding authority over women] that we’ve lost” is much less convincing when the author of the sentence stands to lose absolutely nothing in the bargain. It’s easy to lecture others on the hard sacrifices society “must” make – and far harder to look for sacrifices that will mainly affect you personally.

It is in this regard that I found myself wondering if this might have been a more interesting book if it had been written by a woman. If the hypothetical female author were to defend the authority framework, she’d actually have to defend it, instead of hand-waving the defence with a request that we respect and understand all ethical frameworks. And if this hypothetical author found it indefensible, we would have been treated to an exploration of what to do if one of our fundamental ethical frameworks was flawed and had to be discarded. That would be an interesting conversation!

Not only that, but perhaps a female author would have given more pages to the observation that women’s and children’s role in societal altruism was just as important as men’s (as child-rearing is a more reliable way to demonstrate and cash in on groupishness than battle), instead of relegating it to a brief note at the end of the chapter on group selection. This perspective is genuinely new to me and I wanted to see it developed further.

Ultimately, Haidt’s defences of Authority/Subversion, Loyalty/Betrayal, and Sanctity/Degradation fell flat in the face of my Care/Harm and Liberty/Oppression focused moral compass. Scott Alexander once wrote about the need for “a solution to the time-limitedness of enlightenment that works from within the temporal perspective”. By the same token, I think Haidt fails to deliver a defence of conservatism or anything it stands for that works from within the liberal Care/Harm perspective. Insofar as his book was meant to bridge inferential gaps and political divides, this makes it a failure.

That’s a shame, because arguments that bridge this divide do exist. I’ve read some of them.

IV – What if Liberals are Wrong?

There is a principle called “Chesterton’s Fence”, which comes from the famed Catholic conservative and author G.K. Chesterton. It goes like this: if you see a fence blocking the road and cannot see the reason for it to be there, should you remove it? Chesterton said “no!”, resoundingly. He suggested you should first understand the purpose of the fence. Only then may you safely remove it.

There is a strain of careful conservatism that holds Chesterton’s fence as its dearest parable. Haidt makes brief mention of this strain of thought, but doesn’t expound on it successfully. I think it is this thought and this thought only that can offer Care/Harm focused liberals like myself a window into the redeeming features of the conservative moral frameworks.

Here’s what the argument looks like:

Many years ago, western nations had a unified moral framework. This framework supported people in making long-term decisions and acting in a pro-social manner. There are many people who want to act differently than they would if left to their own devices, and this framework helped them do that.

Liberals began to dismantle this system in the sixties. They saw hierarchies and people being unable to do the things they wanted to do, so tried to take down the whole edifice without first checking if any of it was doing anything important.

This strand of conservatism would argue that it was. They point to the increasing number of children born to parents who aren’t married (although increasingly these parents aren’t teens, which is pretty great), increasing crime (although this has started to fall after we took lead out of gasoline), increasing atomisation, decreasing church attendance, and increasing rates of anxiety and depression (although it is unclear how much of this is just people feeling more comfortable getting treatment).

Here’s the thing. All of these trends affect well educated and well-off liberals the least. We’re safe from crime in good neighbourhoods. We overwhelmingly wait until we’re in stable partnerships to have children. We can afford therapists and pills to help us with any mental health issues we might have, and rehab to help us kick any drug habits we pick up.

Throwing off the old moral matrix has been an unalloyed good for privileged white liberals. We get to have our cake and eat it too – we have fun and take risks, but know that we have a safety net waiting to catch us should we fall.

The conservative appeal to tradition points out that our good time might be at the expense of the poor. It asks us if our hedonistic pleasures are worth a complete breakdown in stability for people with fewer advantages than us. It asks us to consider sacrificing some of these pleasures so that they might be better off. I know many liberals who might find the sacrifice of some of their freedom to be a moral necessity, if framed this way.

But even here, social conservatism has the seeds of its own undoing. I can agree that children do best when brought up by loving and committed parents who give them a lot of stability (moving around in childhood is inarguably bad for many kids). Given this, the social conservative opposition to gay marriage (despite all evidence that it doesn’t mess kids up) is baffling. The sensible position would have been “how can we use this to make marriage cool again”, not “how long can we delay this”.

This is a running pattern with social conservatism. It conserves blindly, without giving thought to what is even worth preserving. If liberals have some things wrong, that doesn’t automatically mean that the opposite is correct. It’s disturbingly easy for people on both sides of an issue to be wrong.

I’m sure Haidt would point out that this is why we have the other frameworks. But because of who I am, I’m personally much more inclined to do things in the other direction – throw out most of the past, then re-implement whatever we find to be useful but now lacking.

V – What if Liberals Listened?

In Berkeley, California, its environs, and assorted corners of the Internet, there exists a community that calls themselves “Rationalists”. This moniker is despite the fact that they agree with Haidt as to the futility of rationalism. Epistemically, they tend to be empiricists. Ethically, non-cognitivist utilitarians. Because they are largely Americans, they tend to be politically disengaged, but if you held them at gunpoint and demanded they give you a political affiliation, they would probably either say “liberal” or “libertarian”.

The rationalist community has semi-public events that mimic many of the best parts of religious events, normally based around the solstices (although I also attended a secular Seder when I visited last year).

This secular simulacrum of a religion has been enough to fascinate at least one Catholic.

The rationalist community has managed to do the sort of thing Haidt despaired of: create a strong community with communal morality in a secular, non-authoritarian framework. There are communal norms (although they aren’t very normal; polyamory and vegetarianism or veganism are very common). People tend to think very hard before having children and take care to ensure that any children they have will have a good extended support structure. People live in group houses, which combats atomisation.

This is also a community that is very generous. Many of the early adherents of Effective Altruism were drawn from the rationalist community. It’s likely that rationalists donate to charity in amounts more similar to Mormons than atheists (with the added benefit of almost all of this money going to saving lives, rather than proselytizing).

No community is perfect. This is a community made up of people. It has its fair share of foibles and megalomanias, bad actors and jerks. But it represents something of a counterpoint to Haidt’s arguments about the “deficiency” of a limited framework morality.

Furthermore, its altruism isn’t limited in scope, the way Haidt believes all communal altruism must necessarily be. Rationalists encourage each other to give to causes like malaria eradication (which mainly helps people in Africa), or AI risk (which mainly helps future people). Because there are few cost effective local opportunities to do good (for North Americans), this global focus allows for more lives to be saved or improved per dollar spent.

All of this is, I think, the natural result of thoughtful people throwing away most cultural traditions and vestiges of traditionalist morality, then seeing what breaks and fixing those things in particular. It’s an example of what I wished for at the end of the last section, applied to the real world.

VI – Is or Ought?

I hate to bring up the Hegelian dialectic, but I feel like this book fits neatly into it. We had the thesis: “morality stems from rationality” that was so popular in western political thought. Now we have the antithesis: “morality and rationality are separate horses, with rationality subordinate – and this is right and proper”.

I can’t wait for someone other than Haidt to write a synthesis: a view that rejects rationalism as the basis of human morality but grapples with the fact that we yearn for perfection.

Haidt, in the words of Joseph Heath, thinks that moral discourse is “essentially confabulatory”, consisting only of made up stories that justify our moral impulses. There may be many ways in which this is true, but it doesn’t account for the fact that some people read Peter Singer’s essay “Famine, Affluence, and Morality” and go donate much of their money to the global poor. It doesn’t account for all those who have listened to the Sermon on the Mount and then abandoned their possessions to live a monastic life.

I don’t care whether you believe in The Absolute, or God, or Allah, or The Cycle of Rebirth, or the World Soul, or The Truth, or nothing at all. You probably have felt that very human yearning to be better. To do better. You’ve probably believed that there is a Good and it can perhaps be comprehended and reached. Maybe this is the last vestiges of my atrophied sanctity foundation talking, but there’s something base about believing that morality is solely a happy accident of how we evolved.

The is/ought fallacy occurs when we take what “is” and decide it is what “ought” to be. If you observe that murder is part of the natural order and conclude that it is therefore moral, you have committed this fallacy.

Haidt has observed the instincts that build towards human morality. His contributions to this field have helped make many things clear and make many conflicts more understandable. But in deciding that these natural tastes are the be-all and end-all of human morality, by putting them ahead of reason, religion, and every philosophical tradition, he has committed this fundamental error.

At the start of The Righteous Mind, Haidt approvingly mentions those scientists who once thought that ethics could be taken away from philosophers and studied instead by scientists alone.

But science can only ever tell us what is, never what ought to be. As a book about science, The Righteous Mind is a success. But as a work on ethics, as an expression of how we ought to behave, it is an abysmal failure.

In this area, the philosophers deserve to keep their monopoly a little longer.

Model, Philosophy

When Remoter Effects Matter

In utilitarianism, “remoter effects” are the result of our actions influencing other people (and are hotly debated). I think that remoter effects are often overstated, especially (as Bernard Williams noted in Utilitarianism: For and Against) when they give the conventionally ethical answer. For example, a utilitarian might claim that the correct answer to the hostage dilemma [1] is to kill no one, because killing weakens the sanctity of human life and may lead to more deaths in the future.

When debating remoter effects, I think it’s worthwhile to split them into two categories: positive and negative. Positive remoter effects are when your actions cause others to refrain from some negative action they might otherwise take. Negative remoter effects are when your actions make it more likely that others will engage in a negative action [2].

Of late, I’ve been especially interested in ways that positive and negative remoter effects matter in political disagreements. To what extent will acting in an “honourable” [3] or pro-social way convince one’s opponents to do the same? Conversely, does fighting dirty bring out the same tendency in your opponents?

Some of my favourite bloggers are doubtful of the first proposition:

In “Deontologist Envy”, Ozy writes that we shouldn’t necessarily be nice to our enemies in the hopes that they’ll be nice to us:

In general people rarely have their behavior influenced by their political enemies. Trans people take pains to use the correct pronouns; people who are overly concerned about trans women in bathrooms still misgender them. Anti-racists avoid the use of slurs; a distressing number of people who believe in human biodiversity appear to be incapable of constructing a sentence without one. Social justice people are conscientious about trigger warnings; we are subjected to many tedious articles about how mentally ill people should be in therapy instead of burdening the rest of the world with our existence.

In “The Blues of Self-Regulation”, David Schraub talks about how this specifically applies to Republicans and Democrats:

The problem being that, even when Democrats didn’t change a rule protecting the minority party, Republicans haven’t even blinked before casting them aside the minute they interfered with their partisan agenda.

Both of these points are basically correct. Everything that Ozy says about asshats on the internet is true and David wrote his post in response to Republicans removing the filibuster for Supreme Court nominees.

But I still think that positive remoter effects are important in this context. When they happen (and I will concede that this is rare), it is because you are consistently working against the same political opponents and at least some of those opponents are honourable people. My favourite example here (although it is from war, not politics) is the Christmas Day Truce. This truce was so successful and widespread that high command undertook to move men more often to prevent a recurrence.

In politics, I view positive remoter effects as key to Senator John McCain repeatedly torpedoing the GOP healthcare plans. While Senators Murkowski and Collins framed their disagreements with the law around their constituents, McCain specifically mentioned the secretive, hurried and partisan approach to drafting the legislation. This stood in sharp contrast to Obamacare, which had numerous community consultations, went through committee and took special (and perhaps ridiculous) care to get sixty senators on board.

Imagine that Obamacare had been passed after secret drafting and no consultations. Imagine if Democrats had dismantled even more rules in the senate. They may have gotten a few more of their priorities passed or had a stronger version of Obamacare, but right now, they’d be seeing all that rolled back. Instead of evidence of positive remoter effects, we’d be seeing a clear case of negative ones.

When dealing with political enemies, positive remoter effects require a real sacrifice. It’s not enough not to do things that you don’t want to do anyway (like all the examples Ozy listed) and certainly not enough to refrain from doing things to third parties. For positive remoter effects to matter at all – for your opponents (even the honourable ones) not to say “well, they did it first and I don’t want to lose” – you need to give up some tools that you could use to advance your interests. Tedious journalists don’t care about you scrupulously using trigger warnings, but may appreciate not receiving death threats on Twitter.

Had right-wingers refrained from doxxing feminist activists (or even applied any social consequences at all against those who did so), all principled people on the left would be refusing to engage in doxxing against them. As it stands, that isn’t the case and those few leftists who ask their fellow travelers to refrain are met with the entirely truthful response: “but they started it!”

This highlights what might be an additional requirement for positive remoter effects in the political sphere: you need a clearly delimited coalition from which you can eject misbehaving members. Political parties are set up admirably for this. They regularly kick out members who fail to act as decorously as their office demands. Social movements have a much harder time, with predictable consequences – it’s far too easy for the most reprehensible members of any group to quickly become the representatives, at least as far as tactics are concerned.

Still, with positive remoter effects, you are not aiming at a movement or party broadly. Instead you are seeking to find those honourable few in it and inspire them on a different path. When it works (as it did with McCain), it can work wonders. But it isn’t something to lay all your hopes on. Some days, your enemies wake up and don’t screw you over. Other days, you have to fight.

Negative remoter effects seem so obvious as to require almost no explanation. While it’s hard (but possible) to inspire your opponents to civility with good behaviour, it’s depressingly easy to bring them down to your level with bad behaviour. Acting honourably guarantees little, but acting dishonourably basically guarantees a similar response. Insofar as honour is a useful characteristic, it is useful precisely because it stops this slide towards mutual annihilation.

Notes:

[1] In the hostage dilemma, you are one of ten hostages, captured by rebels. The rebel leader offers you a gun with a single bullet. If you kill one of your fellow hostages, all of the survivors (including you) will be let free. If you refuse, all of the hostages (including you) will be killed. You are guarded such that you cannot use the weapon against your captors. Your only option is to kill another hostage, or let all of the hostages be killed.

Here, I think remoter effects fail to salvage the conventional answer and the only proper utilitarian response is to kill one of the other hostages. ^

[2] Here I’m using “negative” in a roughly utilitarian sense: negative actions are those that tend to reduce the total utility of the world. When used towards good ends, negative actions consume some of the positive utility that the ends generate. When used towards ill ends, negative actions add even more disutility. This definition is robust against different preferred plans of actions (e.g. it works across liberals and conservatives, who might both agree that political violence tends to reduce utility, even if it doesn’t always reduce utility enough to rule it out in the face of certain ends), but isn’t necessarily robust across all terminal values (e.g. if you care only about reducing suffering and I care only for increasing happiness we may have different opinions on the tendency of reproduction towards good or ill).

Negative actions are roughly equivalent to “defecting”. “Roughly” because it is perhaps more accurate to say that the thing that makes defecting so pernicious is that it involves negative actions of a special class, those that generate extra disutility (possibly even beyond what simple addition would suggest) when both parties engage in them. ^

[3] I used “honourable” in several important places and should probably define it. When discussing actions, I think honourable actions are the opposite of “negative” actions as defined above: actions that tend towards the good, but can be net ill if used for bad ends. When describing “people” as honourable, I’m pointing to people who tend to reinforce norms around cooperation. This is more or less equivalent to being inherently reluctant to use negative actions to advance goals unless provoked.

My favourite example of honour is Salah ad-Din. He sent his own personal physician to tend to King Richard, his great enemy, and he used his own money to buy back a child who had been kidnapped into slavery. Conveniently for me, Salah ad-Din shows both sides of what it means to be honourable. He personally executed Raynald of Châtillon after Raynald ignored a truce, attacked Muslim caravans, and tortured many of the caravaners to death. To Guy of Lusignan, King of Jerusalem (who was captured in the same battle as Raynald and wrongly feared he was next to die), Salah ad-Din said: “[i]t is not the wont of kings, to kill kings; but that man had transgressed all bounds, and therefore did I treat him thus.” ^

Data Science, Literature, Model

Two Ideas Worth Sharing From ‘Weapons of Math Destruction’

Recently, I talked about what I didn’t like in Dr. Cathy O’Neil’s book, Weapons of Math Destruction. This time around, I’d like to mention two parts of it I really liked. I wish Dr. O’Neil had put more effort into naming the concepts she covered; since WMD doesn’t give them names, in my head I’ve been calling them Hidden Value Encodings and Axiomatic Judgements.

Hidden Value Encodings

Dr. O’Neil opens the book with a description of the model she uses to cook for her family. After going into a lot of detail about it, she makes this excellent observation:

Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.

It is far too easy to view models as entirely empirical, as math made form and therefore blind to value judgements. But that couldn’t be further from the truth. It’s value judgements all the way down.

Imagine a model that tries to determine when a credit card transaction is fraudulent. Fraudulent credit card transactions cost the credit card company money, because it must refund the stolen amount to the customer. Incorrectly flagging legitimate transactions as fraudulent also costs the company money, either through customer support time or because customers get so fed up with constant false positives that they switch to a different credit card provider.

If one of the major credit card companies tasked you with building a model to predict which transactions were fraudulent, you would probably build into it a variable cost for failing to catch fraudulent transactions (the amount the company must refund) and a fixed cost for labelling innocuous transactions as fraudulent (the average cost of a customer support call, plus the chance that a false positive pushes someone over the edge into switching cards multiplied by the value of their lost business over the next few years).

From this encoding, we can already see that our model would want to automatically approve all transactions below the fixed cost of dealing with false positives [1], while applying increasing scrutiny to more expensive items, especially expensive items with big resale value or items more expensive than the cardholder normally buys (as both of these point strongly toward fraud).
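
To make that encoding concrete, here’s a minimal sketch of how those two costs could become a decision rule. The dollar figures, probabilities, and function names are all my own inventions for illustration; none of this comes from the book or from a real fraud system.

```python
def expected_cost_of_approving(fraud_probability: float, amount: float) -> float:
    """If we approve a transaction that turns out to be fraud, we refund the full amount."""
    return fraud_probability * amount


def expected_cost_of_flagging(fraud_probability: float,
                              support_call_cost: float = 5.00,
                              churn_probability: float = 0.02,
                              customer_lifetime_value: float = 600.00) -> float:
    """If we flag a transaction that turns out to be legitimate (a false positive),
    we pay for a support call and risk losing the customer's future business."""
    false_positive_probability = 1.0 - fraud_probability
    return false_positive_probability * (
        support_call_cost + churn_probability * customer_lifetime_value
    )


def should_flag(fraud_probability: float, amount: float) -> bool:
    """Flag only when flagging is expected to be cheaper than approving."""
    return (expected_cost_of_flagging(fraud_probability)
            < expected_cost_of_approving(fraud_probability, amount))


# A $4 coffee sails through even with a suspicious score;
# a $2,000 television gets flagged at a much lower one.
print(should_flag(0.30, 4.00))      # False
print(should_flag(0.05, 2000.00))   # True
```

The “policy” (wave small transactions through, scrutinize expensive ones) falls straight out of the costs we chose to encode.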

This seems innocuous and logical. It is also encoding at least two sets of values. First, it encodes the values associated with capitalism. At the most basic level, this algorithm “believes” that profit is good and losses are bad. It aims to maximize profit for the company, and while we may hold that as a default assumption for most algorithms built by companies, that does not mean it is devoid of values; instead, it encodes all of the values associated with capitalism [2]. Second, the algorithm encodes some notion that customers have the freedom to choose between alternatives (even more so than is encoded by default in accepting capitalism).

By applying a cost to false positives (and likely a cost that rises with each previous false positive), you are tacitly acknowledging that customers can take their business elsewhere. If customers instead had no freedom to choose who they do business with, you could encode your loss from false positives as nothing more than the fixed cost of fielding support calls. Since outsourced phone support is very cheap, your algorithm would care much less about false positives if there were no consumer choice.
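
In terms of the sketch above, removing consumer choice is just a parameter change (again, invented numbers):

```python
# With no consumer choice, churn is impossible: a false positive costs only the support call.
expected_cost_of_flagging(0.05, churn_probability=0.0)   # 0.95 * 5.00  = 4.75
expected_cost_of_flagging(0.05)                          # 0.95 * 17.00 = 16.15
```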

As far as I can tell, there is no “value-free” place to stand. An algorithm in the service of a hospital that helps diagnose patients or focus resources on the most ill encodes the value that “it is better to be healthy than sick; better to be alive than dead”. These values might be (almost-)universal, but they still exist, they are still encoded, and they still deserve to be interrogated when we put functions of our society in the hands of software governed by them.

Axiomatic Judgements

One of the most annoying parts of being a child is the occasional requirement to accept an imposition on your time or preferences with the explanation “because I say so”. “Because I say so” isn’t an argument; it’s a request that you acknowledge adults’ overwhelming physical, earning, and social power as giving them the right to set arbitrary rules for you. Some algorithms, forced onto unwelcoming and less powerful populations (teachers, job-seekers, etc.), have adopted this MO as well. Instead of having to prove that they have beneficial effects or that their outputs are legitimate, they define things such that their outputs are always correct and brook no criticism.

Here’s Dr. O’Neil talking about a value-added teaching model in Washington State:

When Mathematica’s scoring system tags Sarah Wysocki and 205 other teachers as failures, the district fires them. But how does it ever learn if it was right? It doesn’t. The system itself has determined that they were failures, and that is how they are viewed. Two hundred and six “bad” teachers are gone. That fact alone appears to demonstrate how effective the value-added model is. It is cleansing the district of underperforming teachers. Instead of searching for the truth, the score comes to embody it.

She contrasts this with how Amazon operates: “if Amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, the clicks would plummet, and the algorithm would be tweaked until it got it right.” The teacher rating algorithm, on the other hand, doesn’t update, doesn’t check whether it is firing good teachers, and doesn’t take an accounting of its own costs. It holds it as axiomatic (a basic fact beyond questioning) that its results are the right results.

I am in full agreement with Dr. O’Neil’s criticism here. Making important decisions, like hiring and firing, through opaque formulae that are never explained to the people being judged and that lack basic accountability pushes past the bounds of fairness. It is also a professional black mark on all of the statisticians involved.

Whenever you train a model, you hold some data back. This is your test data and you will use it to assess how well your model did. That gets you through to “production” – to having your model out in the field. This is an exciting milestone, not only because your model is now making decisions and (hopefully) making them well, but because now you’ll have way more data. You can see how your new fraud detection algorithm does by the volume of payouts and customer support calls. You can see how your new leak detection algorithm does by customers replying to your emails and telling you if you got it right or not.
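
For anyone who hasn’t lived this, here’s a minimal sketch of the “hold some data back” step, using scikit-learn and a fabricated stand-in dataset (this is not a real fraud model, just the shape of the workflow):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Fabricated stand-in data: each row is a "transaction", the label is 1 for fraud.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = (rng.random(10_000) < 0.02).astype(int)

# The held-back test set is the honest check before anything ships.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
predictions = model.predict(X_test)

print("precision:", precision_score(y_test, predictions, zero_division=0))
print("recall:   ", recall_score(y_test, predictions, zero_division=0))

# Once the model is live, chargebacks and support calls become the real test set.
```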

A friend of mine who worked in FinTech once told me that they approved 1.5% of everyone who applied for their financial product, no matter what. They’d keep the score their model gave to that person on record, then see how the person fared in reality. If they used the product responsibly despite a low score, or used it recklessly despite a high score, it was viewed as valuable information that helped the team make their model that much better. I can imagine a team of data scientists, heads together around a monitor, looking through features and asking each other “huh, do any of you see what we missed here?” and it’s a pleasant image [3].
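
I don’t know how that team actually implemented this, but one plausible reading of “approve 1.5% no matter what” is a small random slice that bypasses the score entirely, logged so its outcomes can later be compared against the model’s predictions. A hypothetical sketch:

```python
import random

APPROVE_THRESHOLD = 0.7    # hypothetical cutoff for approving on merit
EXPLORATION_RATE = 0.015   # the 1.5% approved regardless of score


def decide(application_id: str, score: float, audit_log: list) -> bool:
    """Approve on merit, or approve a random slice purely to generate feedback."""
    if score >= APPROVE_THRESHOLD:
        audit_log.append({"id": application_id, "score": score, "reason": "merit"})
        return True
    if random.random() < EXPLORATION_RATE:
        # Keep the score we would have rejected on, so that when the real-world
        # outcome arrives we can see whether the model was wrong about this person.
        audit_log.append({"id": application_id, "score": score, "reason": "exploration"})
        return True
    return False
```

Joining that audit log against real outcomes months later is exactly the “huh, what did we miss?” conversation I’m picturing.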

Value-added teaching models and psychological pre-screens for hiring do nothing of the sort (even though it would be trivial for them to!). They give results, and those results are defined as the ground truth. There’s no room for messy reality to work its way back into the cycle. There’s no room for the creators to learn. The algorithm will be flawed and imperfect, like all products of human hands. That is inevitable. But it will be far less perfect than it could be. Absent feedback, it is doomed to remain flawed, in ways both subtle and gross, and in ways unknown to its creators and its victims.

Like most Canadian engineering students, I made a solemn vow:

…in the presence of these my betters and my equals in my calling, [I] bind myself upon my honour and cold iron, that, to the best of my knowledge and power, I will not henceforward suffer or pass, or be privy to the passing of, bad workmanship or faulty material in aught that concerns my works before mankind as an engineer…

Sloppy work, like that value-added teacher model, is the very definition of bad workmanship. Would that I never suffer something like that to leave my hands and take on a life in the world! It is no Quebec Bridge collapse, but the value-added teaching model, and other doomed-to-fail algorithms like it, represent a slow-motion accident, steadily stealing jobs and happiness from people, with no avenue of appeal and no remorse.

I can accept stains on the honour of my chosen profession. Those are inevitable. But in a way, stains on our competence are so much worse. Models that take in no feedback are both, but the second really stings me.

Footnotes

[1] This first approximation isn’t correct in practice, because certain patterns of small transactions are consistent with fraud. I found this out the hard way, when a certain Bitcoin exchange’s credit card verification procedure (withdrawing less than a dollar, then refunding it a few days later, after you tell them how much they withdrew) triggered the fraud detection software at my bank. Apparently credit card thieves will often do a similar thing (minus the whole “ask the cardholder how much was withdrawn” step), as a means of checking if the card is good without cluing in the cardholder. ^

[2] I don’t mean this as a criticism of capitalism. I seek merely to point out that (like all other economic systems) capitalism is neither value neutral nor inevitable. “Capitalism” encodes values like “people are largely rational”, “people often act to maximize their gains”, and “choice is fundamentally good and useful”. ^

If socialist banks had ever made it to the point of deploying algorithms (instead of collapsing under the weight of their flawed economic system), those algorithms would also encode values (like “people will work hard for the good of the whole” and “people are inherently altruistic” and “it is worth it to sacrifice efficiency in the name of fairness”).

[3] Dulce et decorum est… get the fucking data science right. ^