Model, Philosophy

When Remoter Effects Matter

In utilitarianism, “remoter effects” are the indirect consequences of our actions: the ways our choices influence other people’s future behaviour. They are hotly debated. I think that remoter effects are often overstated, especially (as Sir Bernard Williams argued in Utilitarianism: For and Against) when they conveniently deliver the conventionally ethical answer. For example, a utilitarian might claim that the correct answer to the hostage dilemma [1] is to kill no one, because killing weakens the sanctity of human life and may lead to more deaths in the future.

When debating remoter effects, I think it’s worthwhile to split them into two categories: positive and negative. Positive remoter effects are when your actions cause others to refrain from some negative action they might otherwise take. Negative remoter effects are when your actions make it more likely that others will engage in a negative action [2].

Of late, I’ve been especially interested in the ways that positive and negative remoter effects matter in political disagreements. To what extent will acting in an “honourable” [3] or pro-social way convince your opponents to do the same? Conversely, does fighting dirty bring out the same tendency in them?

Some of my favourite bloggers are doubtful of the first proposition:

In “Deontologist Envy”, Ozy writes that we shouldn’t necessarily be nice to our enemies in the hopes that they’ll be nice to us:

In general people rarely have their behavior influenced by their political enemies. Trans people take pains to use the correct pronouns; people who are overly concerned about trans women in bathrooms still misgender them. Anti-racists avoid the use of slurs; a distressing number of people who believe in human biodiversity appear to be incapable of constructing a sentence without one. Social justice people are conscientious about trigger warnings; we are subjected to many tedious articles about how mentally ill people should be in therapy instead of burdening the rest of the world with our existence.

In “The Blues of Self-Regulation”, David Schraub talks about how this specifically applies to Republicans and Democrats:

The problem being that, even when Democrats didn’t change a rule protecting the minority party, Republicans haven’t even blinked before casting them aside the minute they interfered with their partisan agenda.

Both of these points are basically correct. Everything that Ozy says about asshats on the internet is true, and David wrote his post in response to Republicans removing the filibuster for Supreme Court nominees.

But I still think that positive remoter effects are important in this context. When they happen (and I will concede that this is rare), it is because you are consistently working against the same political opponents and at least some of those opponents are honourable people. My favourite example here (although it is from war, not politics) is the Christmas Day Truce. This truce was so successful and widespread that high command undertook to move men more often to prevent a recurrence.

In politics, I view positive remoter effects as key to Senator John McCain repeatedly torpedoing the GOP healthcare plans. While Senators Murkowski and Collins framed their disagreements with the law around their constituents, McCain specifically mentioned the secretive, hurried and partisan approach to drafting the legislation. This stood in sharp contrast to Obamacare, which had numerous community consultations, went through committee and took special (and perhaps ridiculous) care to get sixty senators on board.

Imagine that Obamacare had been passed after secret drafting and no consultations. Imagine if Democrats had dismantled even more rules in the Senate. They might have gotten a few more of their priorities passed, or passed a stronger version of Obamacare, but right now they’d be seeing all of that rolled back. Instead of evidence of positive remoter effects, we’d be seeing a clear case of negative ones.

When dealing with political enemies, positive remoter effects require a real sacrifice. It’s not enough not to do things that you don’t want to do anyway (like all the examples Ozy listed) and certainly not enough to refrain from doing things to third parties. For positive remoter effects to matter at all – for your opponents (even the honourable ones) not to say “well, they did it first and I don’t want to lose” – you need to give up some tools that you could use to advance your interests. Tedious journalists don’t care about you scrupulously using trigger warnings, but may appreciate not receiving death threats on Twitter.

Had right-wingers refrained from doxxing feminist activists (or even applied any social consequences at all against those who did so), all principled people on the left would be refusing to engage in doxxing against them. As it stands, that isn’t the case, and those few leftists who ask their fellow travellers to refrain are met with the entirely truthful response: “but they started it!”

This highlights what might be an additional requirement for positive remoter effects in the political sphere: you need a clearly delimited coalition from which you can eject misbehaving members. Political parties are set up admirably for this. They regularly kick out members who fail to act as decorously as their office demands. Social movements have a much harder time, with predictable consequences – it’s far too easy for the most reprehensible members of any group to quickly become the representatives, at least as far as tactics are concerned.

Still, with positive remoter effects, you are not aiming at a movement or party broadly. Instead you are seeking to find those honourable few in it and inspire them on a different path. When it works (as it did with McCain), it can work wonders. But it isn’t something to lay all your hopes on. Some days, your enemies wake up and don’t screw you over. Other days, you have to fight.

Negative remoter effects seem so obvious as to require almost no explanation. While it’s hard (but possible) to inspire your opponents to civility with good behaviour, it’s depressingly easy to bring them down to your level with bad behaviour. Acting honourably guarantees little, but acting dishonourably basically guarantees a similar response. Insofar as honour is a useful characteristic, it is useful precisely because it stops this slide towards mutual annihilation.

Notes:

[1] In the hostage dilemma, you are one of ten hostages, captured by rebels. The rebel leader offers you a gun with a single bullet. If you kill one of your fellow hostages, all of the survivors (including you) will be let free. If you refuse, all of the hostages (including you) will be killed. You are guarded such that you cannot use the weapon against your captors. Your only options are to kill another hostage or to let all of the hostages be killed.

Here, I think remoter effects fail to salvage the conventional answer and the only proper utilitarian response is to kill one of the other hostages. ^

[2] Here I’m using “negative” in a roughly utilitarian sense: negative actions are those that tend to reduce the total utility of the world. When used towards good ends, negative actions consume some of the positive utility that the ends generate. When used towards ill ends, negative actions add even more disutility. This definition is robust across different preferred plans of action (e.g. it works for both liberals and conservatives, who might agree that political violence tends to reduce utility, even if it doesn’t always reduce utility enough to rule it out in the face of certain ends), but isn’t necessarily robust across all terminal values (e.g. if you care only about reducing suffering and I care only about increasing happiness, we may have different opinions on the tendency of reproduction towards good or ill).

Negative actions are roughly equivalent to “defecting”. “Roughly” because it is perhaps more accurate to say that the thing that makes defecting so pernicious is that it involves negative actions of a special class, those that generate extra disutility (possibly even beyond what simple addition would suggest) when both parties engage in them. ^
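A toy illustration of that last point, with numbers invented purely for the sake of the arithmetic:

    # Hypothetical disutilities for one interaction between two factions.
    # Each unilateral defection costs the world 2 units, but mutual
    # defection costs 6: retaliation, escalation and eroded norms add
    # disutility beyond the 4 units that simple addition would predict.
    disutility = {
        ("cooperate", "cooperate"): 0,
        ("defect", "cooperate"): 2,
        ("cooperate", "defect"): 2,
        ("defect", "defect"): 6,  # worse than 2 + 2
    }

    assert disutility[("defect", "defect")] > (
        disutility[("defect", "cooperate")] + disutility[("cooperate", "defect")]
    )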

[3] I used “honourable” in several important places and should probably define it. When discussing actions, I think honourable actions are the opposite of “negative” actions as defined above: actions that tend towards the good, but can be net ill if used for bad ends. When describing “people” as honourable, I’m pointing to people who tend to reinforce norms around cooperation. This is more or less equivalent to being inherently reluctant to use negative actions to advance goals unless provoked.

My favourite example of honour is Salah ad-Din. He sent his own personal physician to tend to King Richard, his great enemy, and used his own money to buy back a child who had been kidnapped into slavery. Conveniently for me, Salah ad-Din shows both sides of what it means to be honourable. He personally executed Raynald of Châtillon after Raynald ignored a truce, attacked Muslim caravans, and tortured many of the caravaners to death. To Guy of Lusignan, King of Jerusalem (who was captured in the same battle as Raynald and wrongly feared he was next to die), Salah ad-Din said: “[i]t is not the wont of kings, to kill kings; but that man had transgressed all bounds, and therefore did I treat him thus.” ^

Data Science, Literature, Model

Two Ideas Worth Sharing From ‘Weapons of Math Destruction’

Recently, I talked about what I didn’t like in Dr. Cathy O’Neil’s book, Weapons of Math Destruction. This time around, I’d like to mention two parts of it I really liked. I wish Dr. O’Neil had put more effort into naming the concepts she covered; I don’t have names for them from WMD, so in my head, I’ve been calling them Hidden Value Encodings and Axiomatic Judgements.

Hidden Value Encodings

Dr. O’Neil opens the book with a description of the model she uses to cook for her family. After going into a lot of detail about it, she makes this excellent observation:

Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.

It is far too easy to view models as entirely empirical, as math made form and therefore blind to value judgements. But that couldn’t be further from the truth. It’s value judgements all the way down.

Imagine a model that tries to determine when a credit card transaction is fraudulent. Fraudulent credit card transactions cost the credit card company money, because it must refund the stolen amount to the customer. Incorrectly flagging legitimate transactions as fraudulent also costs the company money, whether through customer support time or because a customer gets so fed up with constant false positives that they switch to a different credit card provider.

If one of the major credit card companies tasked you with building a model to predict which transactions were fraudulent, you would probably build into your model a variable cost for failing to catch fraudulent transactions (the amount the company must refund) and a fixed cost for labelling innocuous transactions as fraudulent (the average cost of a customer support call, plus the chance that one more false positive pushes someone into switching cards, multiplied by the value of their lost business over the next few years).

From this encoding, we can already see that our model would want to automatically approve all transactions below the fixed cost of dealing with false positives [1], while applying increasing scrutiny to more expensive items, especially expensive items with big resale value or items more expensive than the cardholder normally buys (as both of these point strongly toward fraud).
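To make the encoding concrete, here is a minimal sketch of that decision rule in Python. The dollar figure and the function are my own inventions, and the rule is the same first approximation described above (footnote [1] explains why real systems are messier):

    # Assumed fixed cost of a false positive: a support call plus the
    # expected value of a fed-up customer taking their business elsewhere.
    FALSE_POSITIVE_COST = 15.00

    def should_flag(p_fraud: float, amount: float) -> bool:
        """Flag a transaction only when the expected fraud loss exceeds
        the fixed cost of bothering a legitimate customer."""
        expected_fraud_loss = p_fraud * amount  # variable cost of missing fraud
        return expected_fraud_loss > FALSE_POSITIVE_COST

    # Anything cheaper than the false-positive cost is auto-approved, even
    # at p_fraud = 1.0: refunding $10 beats a $15 support call.
    assert not should_flag(p_fraud=1.0, amount=10.00)
    # Expensive transactions are flagged at much lower suspicion levels.
    assert should_flag(p_fraud=0.01, amount=5000.00)

Change what a lost customer is assumed to be worth and the model’s idea of “suspicious” moves with it.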

This seems innocuous and logical. It is also encoding at least two sets of values. First, it encodes the values associated with capitalism. At the most basic level, this algorithm “believes” that profit is good and losses are bad. It aims to maximize profit for the company, and while we may hold this as a default assumption for most algorithms deployed by companies, that does not mean it is devoid of values; instead it encodes all of the values associated with capitalism [2]. Second, the algorithm encodes some notion that customers have the freedom to choose between alternatives (even more so than is encoded by default in accepting capitalism).

By applying a cost to false positives (and likely a cost that rises with each previous false positive), you are tacitly acknowledging that customers could take their business elsewhere. If customers had no freedom to choose who they did business with, you could encode as your loss from false positives merely the fixed cost of fielding support calls. Since outsourced phone support is very cheap, your algorithm would care much less about false positives if there were no consumer choice.

As far as I can tell, there is no “value-free” place to stand. An algorithm in the service of a hospital that helps diagnose patients or focus resources on the most ill encodes the value that “it is better to be healthy than sick; better to be alive than dead”. These values might be (almost-)universal, but they still exist, they are still encoded, and they still deserve to be interrogated when we put functions of our society in the hands of software governed by them.

Axiomatic Judgements

One of the most annoying parts of being a child is the occasional requirement to accept an imposition on your time or preferences with the explanation “because I say so”. “Because I say so” isn’t an argument; it’s a request that you acknowledge adults’ overwhelming physical, earning, and social power as giving them the right to set arbitrary rules for you. Some algorithms, forced onto unwelcoming and less powerful populations (teachers, job-seekers, etc.), have adopted this MO as well. Instead of having to prove that they have beneficial effects or that their outputs are legitimate, they define things such that their outputs are always correct and brook no criticism.

Here’s Dr. O’Neil talking about a value-added teaching model in Washington, D.C.:

When Mathematica’s scoring system tags Sarah Wysocki and 205 other teachers as failures, the district fires them. But how does it ever learn if it was right? It doesn’t. The system itself has determined that they were failures, and that is how they are viewed. Two hundred and six “bad” teachers are gone. That fact alone appears to demonstrate how effective the value-added model is. It is cleansing the district of underperforming teachers. Instead of searching for the truth, the score comes to embody it.

She contrasts this with how Amazon operates: “if Amazon.com, through a faulty correlation, started recommending lawn care books to teenage girls, the clicks would plummet, and the algorithm would be tweaked until it got it right.” The teacher rating algorithm, on the other hand, doesn’t update, doesn’t check whether it is firing good teachers, and doesn’t take an accounting of its own costs. It holds it as axiomatic (a basic fact beyond questioning) that its results are the right results.

I am in full agreement with Dr. O’Neil’s criticism here. Making important decisions, like hiring and firing, through opaque formulae that are never explained to those being judged and that lack basic accountability doesn’t just push past the bounds of fairness; it’s also a professional black mark on all of the statisticians involved.

Whenever you train a model, you hold some data back. This is your test data and you will use it to assess how well your model did. That gets you through to “production” – to having your model out in the field. This is an exciting milestone, not only because your model is now making decisions and (hopefully) making them well, but because now you’ll have way more data. You can see how your new fraud detection algorithm does by the volume of payouts and customer support calls. You can see how your new leak detection algorithm does by customers replying to your emails and telling you if you got it right or not.
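In Python with scikit-learn, that basic discipline looks something like this (the load_transactions loader and the model choice are placeholders, not anyone’s actual pipeline):

    # Hold data back at training time; judge the model on what it never saw.
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    X, y = load_transactions()  # hypothetical loader: features and fraud labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # The held-out set is the first, weakest check that the model
    # generalizes; production feedback (payouts, support calls) is the
    # real exam.
    print(classification_report(y_test, model.predict(X_test)))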

A friend of mine who worked in FinTech once told me that they approved 1.5% of everyone who applied for their financial product, no matter what. They’d keep the score their model gave to that person on record, then see how the person fared in reality. If they used the product responsibly despite a low score, or used it recklessly despite a high score, it was viewed as valuable information that helped the team make their model that much better. I can imagine a team of data scientists, heads together around a monitor, looking through features and asking each other “huh, do any of you see what we missed here?” and it’s a pleasant image [3].
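As I understand my friend’s description, the mechanism looks something like this (the 1.5% rate aside, every name and number here is invented for illustration):

    import random

    EXPLORATION_RATE = 0.015  # the 1.5% approved no matter what

    def decide(score: float, threshold: float = 0.7) -> dict:
        """Approve by score, but let a random slice through regardless,
        so that reality gets a chance to contradict the model."""
        explored = random.random() < EXPLORATION_RATE
        return {
            "approved": explored or score >= threshold,
            "score": score,      # kept on record for the later comparison
            "explored": explored,
        }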

Value-added teaching models and psychological pre-screens for hiring do nothing of the sort (even though it would be trivial for them to!). They give results, and those results are defined as the ground truth. There’s no room for messy reality to work its way back into the cycle. There’s no room for the creators to learn. The algorithm will be flawed and imperfect, like all products of human hands. That is inevitable. But it will be far less perfect than it could be. Absent feedback, it is doomed to always be flawed, in ways both subtle and gross, and in ways unknown to its creators and victims.

Like most Canadian engineering students, I made a solemn vow:

…in the presence of these my betters and my equals in my calling, [I] bind myself upon my honour and cold iron, that, to the best of my knowledge and power, I will not henceforward suffer or pass, or be privy to the passing of, bad workmanship or faulty material in aught that concerns my works before mankind as an engineer…

Sloppy work, like that value-added teacher model, is the very definition of bad workmanship. Would that I never suffer something like that to leave my hands and take life in the world! It is no Quebec Bridge, but the value-added teaching model and other doomed-to-fail algorithms like it represent a slow-motion accident, steadily stealing jobs and happiness from people, with no appeal and no remorse.

I can accept stains on the honour of my chosen profession. Those are inevitable. But in a way, stains on our competence are so much worse. Models that take in no feedback are both, but the second really stings me.

Footnotes

[1] This first approximation isn’t correct in practice, because certain patterns of small transactions are consistent with fraud. I found this out the hard way, when a certain Bitcoin exchange’s credit card verification procedure (withdrawing less than a dollar, then refunding it a few days later, after you tell them how much they withdrew) triggered the fraud detection software at my bank. Apparently credit card thieves will often do a similar thing (minus the whole “ask the cardholder how much was withdrawn” step), as a means of checking if the card is good without cluing in the cardholder. ^

[2] I don’t mean this as a criticism of capitalism. I seek merely to point out that (like all other economic systems) capitalism is neither value-neutral nor inevitable. “Capitalism” encodes values like “people are largely rational”, “people often act to maximize their gains” and “choice is fundamentally good and useful”. ^

If socialist banks had ever made it to the point of deploying algorithms (instead of collapsing under the weight of their flawed economic system), those algorithms would also encode values (like “people will work hard for the good of the whole” and “people are inherently altruistic” and “it is worth it to sacrifice efficiency in the name of fairness”).

[3] Dulce et decorum est… get the fucking data science right. ^