Model, Politics, Quick Fix

The Nixon Problem

Richard Nixon would likely have gone down in history as one of America’s greatest presidents, if not for Watergate.

To my mind, his greatest successes were the opening to China and the end of the convertibility of dollars into gold, but he also deserves kudos for ending the war in Vietnam, continuing the process of desegregation, establishing the EPA, and signing the Anti-Ballistic Missile Treaty.

Nixon was willing to try unconventional solutions and shake things up. He wasn’t satisfied with leaving things as they were. This is, in some sense, a violation of political norms.

When talking about political norms, it’s important to separate them into their two constituent parts.

First, there are the norms of policy. These are the standard terms of the debate. In some countries, they may look like a (semi-)durable centrist consensus. In others they may require accepting single-party rule as a given.

Second are the norms that constrain the behaviour of people within the political system. They may forbid bribery, or self-dealing, or assassinating your political opponents.

I believe that the first set of political norms are somewhat less important than the second. The terms of the debate can be wrong, or stuck in a local maximum, such that no simple tinkering can improve the situation. Having someone willing to change the terms of the debate and try out bold new ideas can be good.

On the other hand, it is rarely good to overturn existing norms of political behaviour. Many of them came about only through decades of careful struggle, as heroic activists have sought to place reasonable constraints on the behaviour of the powerful, lest they rule as tyrants or pillage as oligarchs.

The Nixon problem, as I’ve taken to describing it, is that it’s very, very hard to find a politician who can shake up the political debate without at the same time shaking up our much more important political norms.

Nixon didn’t have to cheat his way to re-election. He won the popular vote by the highest absolute margin ever, some 18 million votes. He carried 49 out of 50 states, losing only Massachusetts.

Now, it is true that Nixon used dirty tricks to ensure he faced McGovern instead of Muskie, and perhaps his re-election fight would have been harder against Muskie.

Still, given that Muskie’s campaign was so easily derailed by the letter Nixon’s “ratfuckers” forged, it’s unclear how well he would have done in the general election.

And if Muskie was the biggest threat to Nixon, there was no need to bug the Democratic National Committee headquarters at the Watergate after his candidacy had been destroyed. Yet Nixon and his team still ordered this done.

I don’t think it’s possible to get the Nixon who was able to negotiate with China without the Nixon who violated political norms for no reason at all. Both were part and parcel of an overriding belief that he knew better than everyone else and that all that mattered was power for himself. Regardless, it is clear from Watergate that his ability to think outside the current consensus was not something he could just turn off. Nixon is not alone in this.

One could imagine a hypothetical Trump (perhaps a Trump who listened to Peter Thiel more) who engaged mostly in well-considered but outside-of-the-political-consensus policies. This Trump would have loosened FDA policies that give big pharma an unfair advantage, ended the mortgage interest deduction, and followed up his pressure on North Korea with some sort of lasting peace deal, rather than ineffective admiration of a monster.

The key realization about this hypothetical Trump is that, other than his particular policy positions, he’d be no different. He’d still idolize authoritarian thugs, threaten to lock up his political opponents, ignore important government departments, and surround himself with frauds and grifters.

I believe that it’s important to think about how the features of different governments encourage different people to rise to the top. If a system of government requires any leader to first be a general, then it will be cursed with rigid leaders who expect all orders to be followed to the letter. If it instead rewards lying, then it’ll be cursed with politicians who go back on every promise.

There’s an important corollary to this: if you want a specific person to rule because of something specific about their character, you should not expect them to be able to turn it off.

Justin Trudeau cannot stop with the platitudes, even when backed into a corner. Donald Trump cannot stop lying, even when the truth is known to everyone. Richard Nixon couldn’t stop ignoring the normal way things were done in Washington, even when the normal way existed for a damn good reason.

This, I think, is the biggest mistake people like Peter Thiel made when backing Trump. They saw a lot of problems in Washington and correctly concluded that no one who was steeped in the ways of Washington would correct them. They decided that the only way forward was to find someone brash, who wouldn’t care about how things were normally done.

But they didn’t stop and think how far that attitude would extend.

Whenever someone tells you that a bold outsider is just what a system needs, remember that a Nixon who never did Watergate couldn’t have gone to China. If you back a new Nixon, you had better be prepared for a reprise of Watergate.

Model, Philosophy, Quick Fix

Post-modernism and Political Diversity

I was reading a post-modernist critique of capitalist realism – the resignation to capitalism as the only practical way to organize a society, arising out of the failure of the Soviet Union – and I was struck by something interesting about post-modernism.

Insofar as post-modernism stands for anything, it is a critique of ideology. Post-modernism holds that there is no privileged lens with which to view the world; that even empiricism is suspect, because it too has a tendency to reproduce and reify the power structures in which it exists.

A startling thing, then, is the sterility of the post-modernist political landscape. It is difficult to imagine a post-modernist who did not vote for Bernie Sanders or Jill Stein. Post-modernism is solely a creature of the left, and specifically of that part of the left that rejects the centrist compromise beloved of the incrementalist or market left.

There is a fundamental conflict between post-modernism’s self-proclaimed positioning as an ideology without an ideology – the only ideology conscious of its own construction – and its lack of political diversity.

Most other ideologies are tolerant of political divergence. Empiricists are found in practically every political party (the usual exception being those controlled by populists) because empiricism comes with few built-in moral commitments and politics is as much about what should be as about what is. Devout Catholics also find themselves split among political parties, as they balance the social justice and social order messages of their religion. You will even, I would bet, find more evangelicals in the Democratic Party than post-modernists in the Republican Party (although perhaps this is just an artifact of their relative population sizes).

Even neoliberals and economists, the favourite target of post-modernists, find their beliefs cash out to a variety of political positions, from anarcho-capitalism or left-libertarianism to main-street republicanism.

It is hard to square the narrowness of post-modernism’s political commitments with its anti-ideological intellectual commitments. Post-modernism positions itself in communion with the Real, that which “any [constructed, as through empiricism] ‘reality’ must suppress”. Yet the political commitments it makes require us to believe that the Real is in harmony with very few political positions.

If this were the actual position of post-modernism, then it would be vulnerable to a post-modernist critique. Why should a narrow group of relatively privileged academics in relatively privileged societies have a monopoly on the correct means of political organization? Certainly, if economics professors banded together to claim they had discovered the only means of political organization and the only allowable set of political beliefs, post-modernists would be ready with exactly this critique. Why, then, should they be exempt?

If post-modernism instead does not believe it has found a deeper Real, then it must grapple with its narrow political attractions. Why should we view it as anything but a justification for a certain set of policy proposals, popular among its members but not necessarily elsewhere?

I believe there is value in understanding that knowledge is socially constructed, but I think post-modernism, by denying any underlying physical reality (in favour of a metaphysical Real), removes itself from any sort of feedback loop that could check its own impulses (contrast: empiricism). And so things that are merely fashionable among its adherents become de facto parts of its ideology. This is troubling, because the very virtue of post-modernism is supposed to be its ability to introspect and examine the construction of ideology.

This paucity of political diversity makes me inherently skeptical of any post-modernist identified Real. Absent significant political diversity within the ideological movement, it’s impossible to separate an intellectually constructed Real from a set of political beliefs popular among liberal college professors.

And “liberal college professors like it” just isn’t a real political argument.

Model, Politics

The Character of Leaders is the Destiny of Nations

The fundamental problem of governance is the misalignment between means and ends. In all practically achievable government systems, the process of acquiring and maintaining power requires different skills than the exercise of power. The core criterion of any good system of government, therefore, must be selecting people by a metric that bears some resemblance to governing or, perhaps more importantly, having a metric that actively filters out people who are not suited to govern.

When the difference between means and ends becomes extreme, achieving power serves only to demonstrate unsuitability for holding it. Such systems are inevitably doomed to collapse.

Many people (I am thinking most notably of neo-reactionaries) put too much stock in the incentives or institutions of government systems. Neo-reactionaries look at the institutions of monarchies and claim they lead to stability, because monarchs have a large personal incentive to improve their kingdom and their lifetime tenure should afford them a long time horizon.

In practice, however, monarchies are rather unstable. This is because monarchs are chosen by accident of birth and may have little affinity for the patient business of building a nation. In addition, to maintain power, monarchs must be responsive to the aristocracy. This encourages the well-documented disdain for the peasantry that was common in monarchical governments.

Monarchy, like many other systems of government, was not doomed so much by its institutions, as by its process for choosing a leader. The character of leaders is the destiny of nations and many forms of government have no way of picking people with a character conducive to governing well.

By observing the pathologies of failed systems of government, it becomes possible to understand why democracy is a uniquely successful form of government, as well as the risks that emergent social technologies pose to democracy.

The USSR

“Lenin’s core of original Bolsheviks… were many of them highly educated people…and they preserved these elements even as they murdered and lied and tortured and terrorised. They were social scientists who thought principle required them to behave like gangsters. But their successors… were not the most selfless people in Soviet society, or the most principled, or the most scrupulous. They were the most ambitious, the most domineering, the most manipulative, the most greedy, the most sycophantic.” – Francis Spufford, Red Plenty

The revolution that created the USSR was one founded on high minded ideals. The revolutionaries were going to create a new society, one that was fair, equal, and perfect; a utopia on earth. Yet, the bloody business of carving out a new state often stood in stark contrast to these ideals – as is common in revolutions.

It is, as a rule, difficult to tell which revolutions will lead to good rule and which to bloody shambles and repression. Take, as an example, the Eritrean People’s Liberation Front. They started as an egalitarian organization that treated prisoners of war with respect and ended up as one of the most brutal governments in the world.

Seizing power in a revolution requires a grasp of military tactics and organization; the ability to build a parallel state apparatus in occupied areas; the ability to inspire people to fight for your side; and a grasp of propaganda. While there is overlap with the skills necessary for civilian rule here, the perspective of a rebel is particularly poorly suited to governing according to the rule of law.

It is hard to win a revolution without coming to believe on some fundamental level that might makes right. The 20th century is littered with examples of rebels who could not put aside this perspective shift when they transitioned to civilian rule.

(This, incidentally, is why nonviolent resistance leads to more stable governments and why repressive governments are so scared of it. A successful non-violent revolution leaves much less room for the dictator’s eventual return.)

It was so with the Soviets. Might makes right – perhaps more so even than communism – was the founding ideal of the Soviet Union.

Stalin succeeded Lenin as the leader of the Soviet Union via political manoeuvring, backstabbing, and the destruction of his enemies, tactics that would become key in future transfers of power.

To grasp the reins of the Soviet Union, it became necessary to view people as tools; to bribe key constituencies, to control the secret police, and to placate the army.

And this set of tools is not well suited to governing a prosperous nation. Attempts to reform the USSR with shadow prices, perhaps the only thing that could have saved communism, failed because shadow prices represented a loss of central control. If prices were not set politically, it would be impossible to manipulate them to reward compatriots and guarantee stability.

It’s true that the combination of its economic system and its ambitions doomed the Soviet Union right from the start. It could not afford to be a global superpower while constrained by an economic philosophy that sharply limited its growth and guaranteed frequent shortages. But both of these were, in theory, mutable. It was only with such an ossifying process for choosing leaders that the Soviet Union was destined for failure.

In the USSR, legitimacy didn’t come from the people, but from the party apparatus. Bold changes, of the sort necessary to rescue the Soviet economy, were unthinkable because they cut against too many entrenched interests. The army budget could not be decreased because the leader needed to maintain control of the army. The economic system couldn’t be changed because of how tightly the elite were tied to it.

The USSR needed bold, pioneering leaders who were willing to take risks and shake up the system. But the system guaranteed that those leaders would never rule. And so, eventually, the USSR fell.

Military Dictatorships

“The difference between a democracy and a dictatorship is that in a democracy you vote first and take orders later; in a dictatorship you don’t have to waste your time voting.” – Charles Bukowski

When military dictatorships fall, they all fall in the same way: with an increasingly isolated junta issuing orders that are ignored by increasingly large swathes of the populace. The act of rising to the top of a military inculcates a belief that victory can always be achieved by finding the right set of orders. This is the mindset that military dictators bring to governing, and it always leads to disaster. Whatever virtues of organization or delegation generals learn, they are never enough to overcome this central flaw.

Governing a modern state requires flexibility. There are always many constituencies: business owners, workers, teachers, doctors. There are often many regions, each with different economic needs. To support resource extraction can harm manufacturing – and vice versa. Bureaucrats have their own pet projects, their own red lines, and their own ideas.

This environment is about as far from an army as it is possible to be. The military trains soldiers to follow orders. Civilians are rather worse at following them.

Expecting a whole society to follow orders, to put their own good aside for someone else’s plan, is folly. Enough people will always buck orders to make a mockery of any grand design.

It is for this reason that military governments are so easy to satirize. Watching career soldiers try to herd cats can be darkly amusing, although the humour is quickly lost if one dwells too long on the atrocities military governments turn to when thwarted.

After all, the flip side of discipline is punishment. Failing to obey orders in the military is normally a crime, whereas failing to obey orders in the civil service is often par for the course. When these two mindsets collide, a junta is likely to impose harsh punishments on anyone disobeying. This doesn’t spring naturally from their position as dictators – most juntas start out with stunningly idealistic beliefs about national salvation – but does spring naturally from military regulations. And so again we see a case where it is the background of the leaders, not the structure of the dictatorship, that leads to the worst excesses.

You can replace the leaders as often as you like or tweak the laws, but as long as you keep appointing generals to rule, you will find they expect orders to be obeyed unquestioningly and respond harshly to any perceived disloyalty.

There is one last great vice of military dictatorships: a tendency to paper over domestic discontent with foreign wars. Military dictators know that revanchist wars can create popular support, so foreign adventuring is often their response when their legitimacy begins to crumble.

Off the top of my head, I can think of two wars started by military dictatorships seeking to improve their standing (the Falklands War and the Yom Kippur War). No doubt a proper survey would turn up many others.

Since the time of Plato, soldier-rulers have been held up as the ideal heads of state. It is perhaps time to abandon this notion.

Democracy

“Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.” – Winston Churchill to the House of Commons

To gain power in a democracy, a politician needs to win an election. This normally requires some skill in oratory and debate, the ability to delegate to competent subordinates, the ability to come up with a plan and clearly articulate how it will improve people’s lives, possibly some past experience governing that paints a flattering picture, and, above all, a good reputation with enough people to win an election. This oft-maligned “popularity contest” is actually democracy’s secret weapon.

Democracy is principally useful as a form of government that is resistant to corruption. Corruption is the act of arrogating state power to take benefits for yourself or give them to your friends. Persistent and widespread corruption is one of the biggest impediments to growth worldwide, so any technology (and government systems are a type of cultural technology) that reduces corruption is a powerful force for human flourishing.

It is the requirement for a good reputation that helps democracy stand against corruption. In any society where corruption is scorned, democracy ensures that no one who is visibly corrupt can grasp power; if corruption is sufficient to ruin a reputation, no one who is corrupt can win a “popularity contest”.

(It is also worth noting that the demand for a sterling reputation rules out people who have tortured dissidents or ordered protestors shot. As long as autocrats are not revered, democracy can protect against many forms of repression.)

There are three main ways that democracy can fail to live up to its promise. First, it can fail because corruption isn’t appropriately sanctioned. If corruption becomes just the way things are done and scandals stop sticking, then democracy becomes much weaker as a check on corruption.

Second, democracy can be hijacked by individuals whose only skill is self-promotion. In a functioning democracy, the electorate demands that political resumes include real achievements. When this breaks down, democracy becomes a contest of who can disseminate their fake or exaggerated resume the furthest.

It is from this perspective that 24/7 news and social media present a threat to democracy. Donald Trump is an excellent example of this failure mode. He made use of viral lies and controversial statements to ensure that he was in front of as many voters as possible. His largely fake reputation for business acumen was enough to win over a few others.

There are many constituencies in all societies. Demonstrably, President Trump is not popular in America, but he appealed to enough people that he was able to build up a solid voting block in the primaries.

Beyond the primaries, Trump demonstrated the third vulnerability of democracies: partisanship. Any democracy where partisanship becomes a key factor in elections is in grave danger. Normally, the reputational component of democracy selects for people with a resume of past successes (an excellent predictor of future successes), while elections with significant numbers of undecided voters provide an advantage to people who run tight campaigns – people who are good at nurturing talent and delegating (an excellent skill for governing).

Partisanship short-circuits this process and selects for whoever can whip up partisan crowds most successfully. This is a rather different sort of person! Rabid partisans spurn compromise and ignore everyone outside of their core constituency because those are the tactics that have rewarded them in the past.

Trump was able to win in part because such a large cross-section of the American electorate was willing to look beyond his flaws if it meant that someone from the other party didn’t win.

A large block of swing voters who look critically at politicians’ reputations and refuse to accept iconoclasts is an important safety valve in any democracy.

This model of democracy neatly explains why it isn’t universally successful. In societies with several strong tribal or religious identities, democracy results in cronyism dominated by the largest tribe/denomination, because it selects for whomever can promise the most to this large block. In countries that don’t have adequate cultural safeguards against corruption, corruption does not ruin reputations and democracy does nothing to squash it.

Democracy isn’t a panacea, but in the right cultural circumstances it is superior to any other realistic form of government.

Unfortunately, we can see that democracy is under attack on two fronts in Western nations. First, social media encourages shallow engagement and makes it easy for people to build constituencies around controversial statements. Second, partisanship is deepening in many societies.

I don’t know what specific remedies exist for these trends, but they strike me as two of the most important to reverse if we wish our democratic institutions to continue to provide good government.

If we cannot find a way to fix partisanship and self-promotion within our current system, then the most important political reform we can undertake is to find a system of government that can pick leaders with the right character for governing even under these very difficult circumstances.

[Epistemic status: much more theoretical than most of my writing. To avoid endless digressions, I don’t justify my centrist axioms very often. I’m happy to further discuss anything that strikes anyone as light on evidence in the comments.]

Model, Politics, Quick Fix

The Fifty Percent Problem

Brexit was always destined to be a shambles.

I haven’t written much about Brexit. It’s always been a bit of a case of “not my monkeys, not my circus”. And we’ve had plenty of circuses on this side of the Atlantic for me to write about.

That said, I do think Brexit is useful for illustrating the pitfalls of this sort of referendum, something I’ve taken to calling “The 50% Problem”.

To see where this problem arises from, let’s take a look at the text of several political referendums:

Should the United Kingdom remain a member of the European Union or leave the European Union? – 2016 UK Brexit Referendum

Do you agree that Québec should become sovereign after having made a formal offer to Canada for a new economic and political partnership within the scope of the bill respecting the future of Quebec and of the agreement signed on June 12, 1995? – 1995 Québec Independence Referendum

Should Scotland be an independent country? – 2014 Scottish Independence Referendum

Do you want Catalonia to become an independent state in the form of a republic? – 2017 Catalonia Independence Referendum, declared illegal by Spain.

What do all of these questions have in common?

Simple: the outcome is much vaguer than the status quo.

During the Brexit campaign, the Leave side promised people everything but the moon. During the run-up to Québec’s last independence referendum, the sovereignist camp promised that Québec would be able to retain the Canadian dollar, join NAFTA without a problem, or perhaps even remain in Canada with more autonomy. In Scotland, leave campaigners promised that Scotland would be able to quickly join the EU (which, in a pre-Brexit world, Spain seemed likely to veto). The proponents of the Catalonian referendum pretended that Spain would take it seriously at all.

The problem with all of these referendums and their vague questions is that everyone ends up with a slightly different idea of what success will entail. While failure leads to the status quo, success could mean anything from (to use Brexit as an example) £350m/week extra for the NHS to Britain becoming a hermit kingdom with little external trade.

Some of this comes from assorted demagogues promising more than they can deliver. The rest of it comes from general disagreement among members of any coalition about what exactly their best-case outcome is.

Crucially, this means that getting 50% of the population to agree to a referendum does not guarantee that 50% of the population agrees on what happens next. In fact, getting barely 50% of people to agree practically guarantees that no one will agree on what happens next.

Take Brexit, the only one of the referendums I listed above that actually led to anything. While 51.9% of the UK agreed to Brexit, there is not a majority for any single actual Brexit proposal. This means that it is literally impossible to find a Brexit proposal that polls well. Anything that gets proposed is guaranteed to be opposed by all the Remainers, plus whatever percentage of the Brexiteers don’t agree with that specific form of Brexit. With only 52% of the population backing Leave, the defection of even 4% of the Brexit coalition is enough to make a proposal opposed by the majority of the citizenry of the UK.
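This arithmetic can be sketched with invented poll numbers. In the hypothetical below, only the 51.9%/48.1% referendum split is real; the division of Leavers into two incompatible camps (27.0% and 24.9%) is made up for illustration, and everyone whose first choice loses is counted as opposing a given proposal:

```python
# Hypothetical illustration of the 50% problem as a preference cycle.
# The Leave/Remain split is from the 2016 referendum; the soft/hard
# split within Leave is invented for this sketch.
support = {
    "soft_brexit": 27.0,   # assumed: Leavers whose first choice is a soft exit
    "hard_brexit": 24.9,   # assumed: Leavers whose first choice is a hard exit
    "remain": 48.1,
}

leave_total = support["soft_brexit"] + support["hard_brexit"]

# The referendum question (any Brexit vs. Remain): Leave wins.
assert leave_total > support["remain"]

# But any *specific* proposal faces the Remainers plus the other Leave camp.
for proposal in ("soft_brexit", "hard_brexit"):
    opposed = 100.0 - support[proposal]
    print(f"{proposal}: {support[proposal]:.1f}% for, {opposed:.1f}% against")
```

Under these assumptions, each concrete proposal is opposed by over 70% of voters even though Leave beat Remain, which is exactly the circular preference described below.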

This leads to a classic case of circular preferences. Brexit is preferred to Remain, but Remain is preferred to any specific instance of Brexit.

For governing, this is an utter disaster. You can’t run a country when no one can agree on what needs to be done, and these circular preferences guarantee that anything that is tried will be deeply unpopular. This is difficult for politicians, who don’t want to be voted out of office for picking wrong, but also don’t want to go back on the referendum.

There are two ways to avoid this failure mode of referendums.

The first is to finish all negotiations before using a referendum to ratify an agreement. This allows people to choose between two specific states of the world: the status quo and a negotiated agreement. It guarantees that whatever wins the referendum has majority support.

This is the strategy Canada took for the Charlottetown Accord (resulting in it failing at referendum without generating years of uncertainty) and the UK and Ireland took for the Good Friday Agreement (resulting in a successful referendum and an end to the Troubles).

The second means of avoiding the 50% problem is to use a higher threshold for success than 50% + 1. Requiring 60% or 66% of people to approve a referendum gives the winning coalition a buffer: even if a sizable chunk of its supporters defect from any specific proposal, that proposal can still command majority support.

This is likely how any future referendum on Québec’s independence will be decided, acknowledging the reality that many sovereignists don’t want full independence, but might vote for it as a negotiating tactic. Requiring a supermajority would prevent Québec from falling into the same pit the UK is currently in.

As the first successful major referendum in a developed country in quite some time, Brexit has demonstrated clearly the danger of referendums decided so narrowly. Hopefully other countries sit up and take notice before condemning their own nation to the sort of paralysis that has gripped Britain for the past three years.

Ethics, Model, Philosophy

Signing Up For Different Moralities

When it comes to day-to-day living, many people are in agreement about what is right and what is wrong. Giving change to people who ask for it, shoveling your elderly neighbour’s driveway, and turning off the lights when you’re not in the room: good. Killing, robbing, and drug trafficking: bad. Helping the police to convict mobsters who kill, steal, and traffic drugs: good.

While many moral debates can get complicated, this one rarely does. Even when helping the police involves turning on your compatriots – “snitching” – many people (although notably not the President of the United States of America) think the practice is a net good. There’s a recent case in Australia where opinion has been rather more split. Why? Well, the informant was a lawyer – specifically, a lawyer who had worked with the accused parties. Here’s a sampling of commenters on both sides:

In this case I feel it is for the greater good that human garbage like Mokbel are convicted even if the system has to be bent to do so. [1]
The job requires strict adherence to the ethical rules. If you let your dog run the house, the house gets torn apart.
The brave lady in question went above and beyond to keep Victorians safer. If these thugs are released or sentences reduced there will be uproar.
The right to an open and fair trial is a hallmark of a democratic country even if sometimes a defendant who is in fact guilty gets acquitted.
While I’m normally happy to see violent mobsters go to jail, here I must disagree with everyone who offered support for the lawyer. I think it was wrong of her to inform on her clients and correct for the high court to rebuke the police in the strongest possible terms. I certainly don’t want any of those mobsters back on the street and I hope there’s enough other evidence that none of them have to be released.

But even if some of them do end up winning their appeals, I believe we are better off in a society where lawyers cannot inform on their clients. This, I think, is one of the ethical cases where precedent utilitarianism is particularly useful in analysis and one that demonstrates its strengths as a moral philosophy.

(To briefly recap: precedent utilitarianism is the strain of utilitarian thought that emphasizes the moral weight of precedents. Precedent utilitarians don’t just consider the first-order effects of their actions on global wellbeing. They also consider what precedents their actions create and how those precedents can later be used by others, for good or ill.)

The common law legal system is premised on the belief that the burden of proof of crime rests upon the state. If the state wishes to take away someone’s liberty, it must prove to a jury that the person committed the crime. The accused is supposed to be vigorously defended by an advocate – a lawyer or barrister – who has a legal and professional duty to defend their client to the best of their abilities.

We place the burden of proof on the government because we acknowledge that the government can be flawed. To give in to every demand it makes leads to tyranny. Only by forcing it to justify all of its actions can we ensure freedom for anyone.

(This sounds very pretty when laid out like this. In practice, we are rather less good at holding the government to account than many, including myself, would like. Especially when the defendant isn’t white. I believe society fails to live up to this duty partly because sympathies commonly lie with the police and against defendants – the very sympathies I’m arguing against holding too strongly.)

But it’s not just upon the government that we place a burden to avoid pre-judging. We require advocates to defend their clients to the best of their abilities because we are skeptical of them as well. If we let attorneys decide who deserves defending, then we have just shifted the tyranny. Attorneys can make snap judgements that aren’t borne out by the facts. They can be racist. They can be sexist. They can make mistakes. It’s only by forcing them to defend everyone, regardless of perceived innocence or guilt, that we can truly make the state do its duty.

This doesn’t mean that lawyers always have to go to trial and defend their clients in front of a judge and a jury. It could be that the best thing for a client is a guilty plea (ideally if they are actually guilty, although that’s also not how things currently work, especially when the accused isn’t white). If a lawyer truly believes in a legal strategy (like a guilty plea) and the client refuses to listen, the attorney can always walk away and leave the trial defense to another lawyer. The important thing is that someone must defend the accused and that that someone will be ethically bound to give it their best damn shot.

Many people don’t like this. It is obviously best if every guilty person is punished in accordance with their crime. Some people trust the government to the point where they view every accused as essentially guilty. To them, lawyers are scum who defend criminals and prevent them from being justly punished.

I view things differently. I view lawyers as people who have signed up for an alternative morality. While conventional morality holds that we should punish criminals, lawyers have signed up to defend all of their clients, even criminals, and to do their best to prevent that punishment. This is very different from the rest of us!

But it’s complementary to my (our?) morality. It is not only best if we appropriately punish those who break the law; I believe it is also best if we do so without punishing anyone who is innocent.

We cannot ask lawyers to talk to their clients, figure out if they’re innocent or guilty, and then inform the judge or dump as clients all of the truly guilty. This will only work for a short while. Then everyone will figure out that you have to lie to your attorney (or tell the truth if you’re innocent) if you want to avoid jail. We’d then be stuck trusting the judgement of attorneys as to who is lying and who is telling the truth – judgement that could be tainted by any number of mistakes or prejudices.

In the Australian case, the attorney made a decision she wasn’t qualified to make. She, not a jury, decided her client was guilty. She doesn’t appear to be wrong here (although really, how can we tell, given that a lot of the information used in the convictions came from her and her erstwhile clients weren’t able to cross-examine her testimony) but if we don’t want a system where a random lawyer gets to decide who is guilty or not, the important thing isn’t that her testimony is true. The important thing is that she arrogated power that wasn’t hers and thereby undermined the justice system. If we let things like this stand, we enable tyranny.

The next lawyer might not be telling the truth. He may just be biased against black clients and want to feel like a hero. Or she might be locked in a payment dispute and angry with her client. We don’t know. And that should scare us away from allowing this precedent to stand. A harsh rebuke here means that the police will be unable to use any future testimony from lawyers and protects everyone in Australia from arbitrary imprisonment based on the decisions of their lawyer.

Focusing on the precedents that actions set is important. If you don’t and instead focus solely on each issue in isolation, you can miss the slow erosion of the rights and freedoms that we all rely on (or desire). Its suitability for this sort of analysis is what makes precedent utilitarianism so appealing to me. It urges us to dig deeper and try to understand why society is set up the way it is.

I think alternative moralities – moral systems that people actively sign up for as part of their professions – are an important model for precedent utilitarians to hold. Alternative moralities encode good precedents, even if they stand in opposition to commonly held values.

We don’t just see this among lawyers. CEOs sign up for the alternative morality of fiduciary duty, which requires them to put the interests of their investors above everything but the law. Complaints about the downsides of this ignore the fact that we need companies to grow and profit if we ever want to retire [2]. Engineers sign up for an alternative, stricter morality, which holds them personally and professionally responsible for the failures of any device or structure they sign off on.

Having alternative moralities around makes public morality more complicated. It becomes harder to agree on what is right or wrong; it might be right for a lawyer to help a criminal in a way that it would be wrong for anyone else, or wrong for an engineer to make a mistake in a way that would carry no moral blame for anyone outside of the profession. These alternative moralities require us to do a deeper analysis before judging and reward us with a stronger, more resilient society when we do.

Footnotes

[1] Even though I disagree strenuously with this poster, I have a bit of fondness for their comment. My very first serious essay – and my interest in moral philosophy – was inspired by a similar comment. ^

[2] This isn’t just a capitalism thing. Retirement really just means delaying some consumption now in order to be able to consume more later. Consumption, the time value of goods, services, and money, and growth all follow the same math whether you have central planning or free markets. Communists have to figure out how to do retirement as well, and they’re faced with the prospect of either providing less for retired people, or using tactics that would make American CEOs blush in order to drive the sort of growth necessary to support an aging retired population. ^
Model, Quick Fix

When QALYs Are Wrong – Thoughts on the Gates Foundation

Every year, I check in to see if we’ve eradicated polio or guinea worm yet. Disease eradications are a big deal. We’ve only successfully eradicated one disease – smallpox – so being so close to wiping out two more is very exciting.

Still, when I looked at how many resources were committed to polio eradication (especially by the Gates Foundation), I noticed they seemed incongruent with its effects. No polio eradication effort can be found among GiveWell’s top charities, because it is currently rather expensive to prevent polio. The number of quality-adjusted life years (QALYs, a common measure of charity effectiveness used in the Effective Altruism community) you can save with a donation to malaria prevention is simply higher than for polio.

I briefly wondered if it might not be better for all of the effort going to polio eradication to instead go to anti-malaria programs. After thinking some more, I’ve decided that this would be a grave mistake. Since I haven’t seen the reasoning explained anywhere else, I figured I’d share my thinking, so that anyone else having the same thought can see it.

A while back, it was much cheaper to buy QALYs using polio vaccines. As recently as 1988, there were more than 350,000 cases of polio every year. It’s a testament to the excellent work of the World Health Organization and its partners that polio has become so much rarer – and each new case, therefore, so much more expensive to prevent. After all, when there are few new cases, you can’t prevent thousands.

It is obviously very good that there are few cases of polio. If we decided that this was good enough and diverted resources towards treating other diseases, we might quickly find that this would no longer be the case. Polio could once again become a source of easy QALY improvements – because it would be running rampant in unvaccinated populations. When phrased this way, I hope it’s clear that polio becoming a source of cheap QALY improvements isn’t a good thing; the existence of cheap QALY improvements means that we’ve dropped the ball on a potentially stoppable disease.

If polio is eradicated for good, we can stop putting any effort into fighting it. We won’t need any more polio vaccines or any more polio monitoring. It’s for this reason that we’re much better off if we finish the eradication effort.
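
To make that logic concrete, here’s a toy comparison of total spending under perpetual suppression versus one-time eradication. The function and every cost figure here are mine and purely illustrative – real program costs are far messier:

```python
def total_cost(upfront, annual, years):
    """Total spending over a time horizon (no discounting, for simplicity)."""
    return upfront + annual * years

HORIZON = 30  # years (an assumption)

# Keep polio suppressed forever: 1 cost unit per year, every year
suppress = total_cost(0, 1.0, HORIZON)

# Pay 5 units up front to eradicate, then nothing ever again
eradicate = total_cost(5.0, 0.0, HORIZON)

print(suppress, eradicate)  # eradication costs more now, far less overall
```

Over a long enough horizon, any finite up-front eradication cost beats an annual suppression cost that never ends.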

What I hadn’t realized was that a simple focus on present QALYs obscures the potential effects our actions can have on future QALYs. Abandoning diseases until treating them once again saves many lives cheaply might look good for our short-term effectiveness, but in the long term, the greatest gains come from following through with our eradication efforts, so that we can repurpose all resources from an eradicated disease to the fight against another, forever.

Economics, Model

Why External Debt is so Dangerous to Developing Countries

I have previously written about how to evaluate and think about public debt in stable, developed countries. There, the overall message was that the dangers of debt were often (but not always) overhyped and cynically used by certain politicians. In a throwaway remark, I suggested the case was rather different for developing countries. This post unpacks that remark. It looks at why things go so poorly when developing countries take on debt and lays out a set of policies that I think could help developing countries that have high debt loads.

The very first difference in debt between developed and developing countries lies in the available terms of credit; developing countries get much worse terms. This makes sense, as they’re often much more likely to default on their debt. Interest scales with risk and it just is riskier to lend money to Zimbabwe than to Canada.

But interest payments aren’t the only way in which developing countries get worse terms. They are also given fewer options for the currency they take loans out in. And by fewer, I mean very few. I don’t think many developing countries are getting loans that aren’t denominated in US dollars, Euros, or, if dealing with China, Yuan. Contrast this with Canada, which has no problem taking out loans in its own currency.

When you own the currency of your debts, you can devalue it in response to high debt loads, making your debts cheaper to pay off in real terms (that is to say, your debt will be equivalent to fewer goods and services than it was before you caused inflation by devaluing your currency). This is bad for lenders. In the event of devaluation, they lose money. Depending on the severity of the inflation, it could be worse for them than a simple default would be, because they cannot even try to recover part of the loan in court proceedings.

(Devaluations don’t have to be large to reduce debt costs; they can also take the form of slightly higher inflation, such that real interest on any loans is essentially nil. This is still quite bad for lenders and savers, although less likely to be worse than an actual default. The real risk comes when a country with little economic sophistication tries to engineer slightly higher inflation. It seems likely that they could drastically overshoot, with all of the attendant consequences.)
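
As a toy illustration of this mechanism (all figures are assumptions, not data from any real country), here’s how a debt fixed in nominal local currency shrinks in real terms as inflation compounds:

```python
def real_value(nominal_debt, inflation, years):
    """Real value of a fixed nominal debt after years of compounding inflation."""
    return nominal_debt / (1 + inflation) ** years

# A debt of 100 (arbitrary units) under 10% annual inflation
for years in (1, 5, 10):
    print(years, round(real_value(100, 0.10, years), 1))
```

After a decade of 10% inflation, the debt is worth less than 40% of its original real value – which is exactly why lenders fear devaluation.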

Devaluations and inflation are also politically fraught. They are especially hard on pensioners and anyone living on a fixed income – which is exactly the population most likely to make their displeasure felt at the ballot box. Lenders know that many interest groups would oppose a Canadian devaluation, but these sorts of governance controls and civil society pressure groups often just don’t exist (or are easily ignored by authoritarian leaders) in the developing world, which means devaluations can be less politically difficult [1].

Having the option to devalue isn’t the only reason why you might want your debts denominated in your own currency (after all, it is rarely exercised). Having debts denominated in a foreign currency can be very disruptive to the domestic priorities of your country.

The Canadian dollar is primarily used by Canadians to buy stuff they want [2]. The Canadian government naturally ends up with Canadian dollars when people pay their taxes. This makes the loan repayment process very simple. Canadians just need to do what they’d do anyway and as long as tax rates are sufficient, loans will be repaid.

When a developing country takes out a loan denominated in foreign currency, they need some way to turn domestic production into that foreign currency in order to make repayments. This is only possible insofar as their economy produces something that people using the loan currency (often USD) want. Notably, this could be very different from what the people in the country want.

For example, the people of a country could want to grow staple crops, like cassava or maize. Unfortunately, they won’t really be able to sell these staples for USD; there isn’t much market for either in the US. There very well could be room for the country to export bananas to the US, but this means that some of their farmland must be diverted away from growing staples for domestic consumption and towards growing cash crops for foreign consumption. The government will have an incentive to push people towards this type of agriculture, because they need commodities that can be sold for USD in order to make their loan payments [3].

As long as the need for foreign currency persists, countries can be locked into resource extraction and left unable to progress toward more mature manufacturing- or knowledge-based economies.

This is bad enough, but there’s often greater economic damage when a country defaults on its foreign loans – and default many developing countries will, because they take on debt in a highly procyclical way [4].

A variable, indicator, or quantity is said to be procyclical if it is correlated with the overall health of an economy. We say that developing nation debt is procyclical because it tends to expand while economies are undergoing expansion. Specifically, new developing country debts seem to be correlated with many commodity prices. When commodity prices are high, it’s easier for developing countries that export them to take on debt.

It’s easy to see why this might be the case. Increasing commodity prices make the economies of developing countries look better. Exporting commodities can bring in a lot of money, which can have spillover effects that help the broader economy. As long as taxation isn’t too much of a mess, export revenues make government revenues higher. All of this makes a country look like a safer bet, which makes credit cheaper, which makes a country more likely to take it on.

Unfortunately (for resource dependent countries; fortunately for consumers), most commodity price increases do not last forever. It is important to remember that prices are a signal – and that high prices are a giant flag that says “here be money”. Persistently high prices lead to increased production, which can eventually lead to a glut and falling prices. This most recently and spectacularly happened in 2014-2015, as American and Canadian unconventional oil and gas extraction led to a crash in the global price of oil [5].

When commodity prices crash, indebted, export-dependent countries are in big trouble. They are saddled with debt that is doubly difficult to pay back. First, their primary source of foreign cash for paying off their debts is gone with the crash in commodity prices (this will look like their currency plummeting in value). Second, their domestic tax base is much lower, starving them of revenue.

Even if a country wants to keep paying its debts, a commodity crash can leave them with no choice but a default. A dismal exchange rate and minuscule government revenues mean that the money to pay back dollar denominated debts just doesn’t exist.

Oddly enough, defaulting can offer some relief from these problems; it often comes bundled with a restructuring, which results in lower debt payments. Unfortunately, this relief tends to be temporary. Unless it’s coupled with strict austerity, it tends to lead into another problem: devastating inflation.

Countries that end up defaulting on external debt are generally not living within their long-term means. Often, they’re providing a level of public services that are unsustainable without foreign borrowing, or they’re seeing so much government money diverted by corrupt officials that foreign debt is the only way to keep the lights on. One inevitable effect of a default is losing access to credit markets. Even when a restructuring can stem the short-term bleeding, there is often a budget hole left behind when the foreign cash dries up [6]. Inflation occurs because many governments with weak institutions fill this budgetary void with the printing press.

There is nothing inherently wrong with printing money, just like there’s nothing inherently wrong with having a shot of whiskey. A shot of whiskey can give you the courage to ask out the cute person at the bar; it can get you nerved up to sing in front of your friends. Or it can lead to ten more shots and a crushing hangover. Printing money is like taking shots. In some circumstances it can really improve your life, and it’s fine in moderation, but if you overdo it you’re in for a bad time.

When developing countries turn to the printing press, they often do it like a sailor turning to whiskey after six weeks of enforced sobriety.

Teachers need to be paid? Print some money. Social assistance? Print more money. Roads need to be maintained? Print even more money.

The money supply should normally expand only slightly more quickly than economic growth [7]. When it expands more quickly, prices begin to increase in lockstep. People are still paid, but the money is worth less. Savings disappear. Velocity (the speed with which money travels through the economy) increases as people try and spend money as quickly as possible, driving prices ever higher.
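
This relationship is often summarized by the quantity theory of money, MV = PQ. A minimal sketch – the equation is standard, but the numbers below are made up for illustration:

```python
def price_level(money_supply, velocity, real_output):
    """Quantity theory of money, rearranged: P = M * V / Q."""
    return money_supply * velocity / real_output

p_base = price_level(100, 2.0, 50)     # baseline price level
p_printed = price_level(200, 2.0, 50)  # double the money supply: prices double
p_panic = price_level(200, 3.0, 50)    # velocity rises too: prices climb further

print(p_base, p_printed, p_panic)
```

Doubling the money supply with output fixed doubles prices; once people start spending faster to outrun inflation, velocity rises and prices climb further still.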

As the currency becomes less and less valuable, it becomes harder and harder to pay for imports. We’ve already talked about how you can only buy external goods in your own currency to the extent that people outside your country have a use for your currency. No one has a use for a rapidly inflating currency. This is why Venezuela is facing shortages of food and medicine – commodities it formerly imported but now cannot afford.

The terminal state of inflation is hyperinflation, where people need to put their currency in wheelbarrows to do anything with it. Anyone who has read about Germany in the 1920s knows that hyperinflation opens the door to demagogues and coups – to anything or anyone who can convince the people that the suffering can be stopped.

Taking into account all of this – the inflation, the banana plantations, the boom and bust cycles – it seems clear that it might be better if developing countries took on less debt. Why don’t they?

One possible explanation is the IMF (International Monetary Fund). The IMF often acts as a lender of last resort, giving countries bridging loans and negotiating new repayment terms when the prospect of default is raised. The measures that the IMF takes to help countries repay their debts have earned it many critics who rightly note that there can be a human cost to the budget cuts the IMF demands as a condition for aid [8]. Unfortunately, this is not the only way the IMF might make sovereign defaults worse. It also seems likely that the IMF represents a significant moral hazard, one that encourages risky lending to countries that cannot sustain debt loads long-term [9].

A moral hazard is any situation in which someone takes risks knowing that they won’t have to pay the penalty if their bet goes sour. Within the context of international debt and the IMF, a moral hazard arises when lenders know that they will be able to count on an IMF bailout to help them recover their principal in the event of a default.

In a world without the IMF, it is very possible that borrowing costs would be higher for developing countries, which could serve as a deterrent to taking on debt.

(It’s also possible that countries with weak institutions and bad governance will always take on unsustainable levels of debt, absent some external force stopping them. It’s for this reason that I’d prefer some sort of qualified ban on loaning to developing countries that have debt above some small fraction of their GDP over any plan that relies on abolishing the IMF in the hopes of solving all problems related to developing country debt.)

Paired with a qualified ban on new debt [10], I think there are two good arguments for forgiving much of the debt currently held by many developing countries.

First and simplest are the humanitarian reasons. Freed of debt burdens, developing countries might be able to provide more services for their citizens, or invest in infrastructure so that they could grow more quickly. Debt forgiveness would have to be paired with institutional reform and increased transparency, so that newfound surpluses aren’t diverted into the pockets of kleptocrats, which means any forgiveness policy could have the added benefit of acting as a big stick to force much needed governance changes.

Second is the doctrine of odious debts. An odious debt is any debt incurred by a despotic leader for the purpose of enriching themself or their cronies, or repressing their citizens. Under the legal doctrine of odious debts, these debts should be treated as the personal debt of the despot and wiped out whenever there is a change in regime. The logic behind this doctrine is simple: by loaning to a despot and enabling their repression, the creditors committed a violent act against the people of the country. Those people should have no obligation (legal or moral) to pay back their aggressors.

The doctrine of odious debts wouldn’t apply to every indebted developing country, but serious arguments can be made that several countries (such as Venezuela) should expect at least some reduction in their debts should the local regime change and international legal scholars (and courts) recognize the odious debt principle.

Until international progress is made on a clear list of conditions under which countries cannot take on new debt and a comprehensive program of debt forgiveness, we’re going to see the same cycle repeat over and over again. Countries will take on debt when their commodities are expensive, locking them into an economy dependent on resource extraction. Then prices will fall, default will loom, and the IMF will protect investors. Countries are left gutted, lenders are left rich, taxpayers the world over hold the bag, and poverty and misery continue – until the cycle starts over once again.

A global economy without this cycle of boom, bust, and poverty might be one of our best chances of providing stable, sustainable growth to everyone in the world. I hope one day we get to see it.

Footnotes

[1] I so wanted to get through this post without any footnotes, but here we are.

There’s one other reason why e.g. Canada is a lower risk for devaluation than e.g. Venezuela: central bank independence. The Bank of Canada is staffed by expert economists and somewhat isolated from political interference. It is unclear just how much it would be willing to devalue the currency, even if that was the desire of the Government of Canada.

Monetary policy is one lever of power that almost no developed country is willing to trust directly to politicians, a safeguard that doesn’t exist in all developing countries. Without it, devaluation and inflation risk are much higher. ^

[2] Secondarily it’s used to speculatively bet on the health of the resource extraction portion of the global economy, but that’s not like, too major of a thing. ^

[3] It’s not that the government is directly selling the bananas for USD. It’s that the government collects taxes in the local currency and the local currency cannot be converted to USD unless the country has something that USD holders want. Exchange rates are determined based on how much people want to hold one currency vs. another. A decrease in the value of products produced by a country relative to other parts of the global economy means that people will be less interested in holding that country’s currency and its value will fall. This is what happened in 2015 to the Canadian dollar; oil prices fell (while other commodity prices held steady) and the value of the dollar dropped.

Countries that are heavily dependent on the export of only one or two commodities can see wild swings in their currencies as those underlying commodities change in value. The Russian ruble, for example, is very tightly linked to the price of oil; it lost half its value between 2014 and 2016, during the oil price slump. This is a much larger depreciation than the Canadian dollar (which also suffered, but was buoyed up by Canada’s greater economic diversity). ^

[4] This section is drawn from the research of Dr. Carmen Reinhart and Dr. Kenneth Rogoff, as reported in This Time Is Different, Chapter 5: Cycles of Default on External Debt. ^

[5] This is why peak oil theories ultimately fell apart. Proponents didn’t realize that consistently high oil prices would lead to the exploitation of unconventional hydrocarbons. The initial research and development of these new sources made sense only because of the sky-high oil prices of the day. In an efficient market, profits will always eventually return to 0. We don’t have a perfectly efficient market, but it’s efficient enough that commodity prices rarely stay too high for too long. ^

[6] Access to foreign cash is gone because no one lends money to countries that just defaulted on their debts. Access to external credit does often come back the next time there’s a commodity bubble, but that could be a decade in the future. ^

[7] In some downturns, a bit of extra inflation can help lower sticky wages in real terms and return a country to full employment. My reading suggests that commodity crashes are not one of those cases. ^

[8] I’m cynical enough to believe that there is enough graft in most of these cases that human costs could be largely averted, if only the leaders of the country were forced to see their graft dry up. I’m also pragmatic enough to believe that this will rarely happen. I do believe that one positive impact of the IMF getting involved is that its status as an international institution gives it more power with which to force transparency upon debtor nations and attempt to stop diversion of public money to well-connected insiders. ^

[9] A quick search found two papers that claimed there was a moral hazard associated with the IMF and one article hosted by the IMF (and as far as I can tell, later at least somewhat repudiated by the author in the book cited in [4]) that claims there is no moral hazard. Draw what conclusions from this you will. ^

[10] I’m not entirely sure what such a ban would look like, but I’m thinking some hard cap on amount loaned based on percent of GDP, with the percent able to rise in response to reforms that boost transparency, cut corruption, and establish modern safeguards on the central bank. ^

Model, Philosophy

Against Novelty Culture

So, there’s this thing that happens in certain intellectual communities, like (to give a totally random example) social psychology. This thing is that novel takes are rewarded. New insights are rewarded. Figuring out things that no one has before is rewarded. The high-status people in such a community are the ones who come up with and disseminate many new insights.

On the face of it, this is good! New insights are how we get penicillin and flight and Pad Thai burritos. But there’s one itty bitty little problem with building a culture around it.

Good (and correct!) new ideas are a finite resource.

This isn’t news. Back in 2005, John Ioannidis laid out the case for “most published research findings” being false. It turns out that when any given idea has only a small chance of being correct, even statistical tests designed to screen out false positives can break down.

A quick example. There are approximately 25,000 genes in the human genome. Imagine you are searching for genes that increase the risk of schizophrenia (chosen for this example because it is a complex condition believed to be linked to many genes). If there are 100 genes involved in schizophrenia, the odds of any given gene chosen at random being involved are 1 in 250. You, the investigating scientist, decide that you want about an 80% chance of finding some of the linked genes (this is called study power, and 80% is a common value). You run a bunch of tests, analyze a bunch of DNA, and think you have a candidate. This gene has been “proven” to be associated with schizophrenia at the p=0.05 confidence level.

(A p-value is the probability of observing an event at least as extreme as the observed one, if the null hypothesis is true. This means that if the gene isn’t associated with schizophrenia, there is only a 1 in 20 chance – 5% – we’d see a result as extreme or more extreme than the one we observed.)

At the start, we had a 1 in 250 chance of finding a gene. Now that we have a gene, we think there’s a 19 in 20 chance that it’s actually partially responsible for schizophrenia (technically, if we looked at multiple candidates, we should do something slightly different here, but many scientists still don’t, making this still a valid example). Which probability do we trust?

There’s actually an equation to figure it out. It’s called Bayes’ Rule, and statisticians and scientists use it to update probabilities in response to new information. It goes like this:

P(A|B) = P(A) × P(B|A) / P(B)

(You can sing this to the tune of Hallelujah; take P of A when given B / times P of A a priori / divide the whole thing by B’s expectation / new evidence you may soon find / but you will not be in a bind / for you can add it to your calculation.)

In plain language, it means that probability of something being true after an observation (P(A|B)) is equal to the probability of it being true absent any observations (P(A), 1 in 250 here), times the probability of the observation happening if it is true (P(B|A), 0.8 here), divided by the baseline probability of the observation (P(B), 1 in 20 here).

With these numbers from our example, we can see that the probability of a gene actually being associated with schizophrenia when it has a confidence level of 0.05 is… 6.4%.

I took this long detour to illustrate a very important point: one of the strongest determinants of how likely something is to actually be true is the base chance it has of being true. If we expected 1000 genes to be associated with schizophrenia, then the base chance would be 1 in 25, and the probability our gene actually plays a role would jump up to 64%.
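
The arithmetic above is easy to check. Here’s a minimal sketch (the function name is mine, but the formula is just Bayes’ Rule as laid out above):

```python
def posterior(p_a, p_b_given_a, p_b):
    """Bayes' Rule: P(A|B) = P(A) * P(B|A) / P(B)."""
    return p_a * p_b_given_a / p_b

# 100 schizophrenia genes among 25,000 candidates: prior of 1 in 250
print(round(posterior(1 / 250, 0.8, 0.05), 3))  # 0.064

# 1,000 genes among 25,000 candidates: prior of 1 in 25
print(round(posterior(1 / 25, 0.8, 0.05), 3))   # 0.64
```

The only thing that changed between the two calls is the prior – and the posterior moved by a factor of ten.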

To have ten times the chance of getting a study right, you can be ten times more selective (which probably requires much more than ten times the effort)… or you can investigate something ten times as likely to actually occur. Base rates can be more powerful than statistics, more powerful than arguments, and more powerful than common sense.

This suggests that any community that bases status on producing novel insights will mostly become a community based on producing novel-seeming (but false!) insights once it exhausts all of the available true (and easily attainable) insights it could discover. There isn’t a harsh dividing line, just a gradual trend towards plausible nonsense as the underlying vein of truth is mined out, even as the studies and blog posts continue.

Except the reality is probably even worse, because any competition for status in such a community (tenure, page views) will become an iterative process that rewards those best able to come up with plausible-sounding wrappers around unfortunately false information.

When this happens, we have people publishing studies with terrible analyses but highly shareable titles (anyone remember the himmicanes paper?), with the people at the top calling anyone who questions their shoddy research “methodological terrorists”.

I know I have at least one friend who is rolling their eyes right now, because I always make fun of the reproducibility crisis in psychology.

But I’m just using that because it’s a convenient example. What I’m really worried about is the Effective Altruism community.

(Effective Altruism is a movement that attempts to maximize the good that charitable donations can do by encouraging donation to the charities that have the highest positive impact per dollar spent. One list of highly effective charities can be found on GiveWell; GiveWell has demonstrated a noted trend away from novelty, such that I believe this post does not apply to them.)

We are a group of people with countless forums and blogs, as well as several organizations devoted to analyzing the evidence around charity effectiveness. We have conventional organizations, like GiveWell, coexisting with less conventional alternatives, like Wild-Animal Suffering Research.

All of these organizations need to justify their existence somehow. All of these blogs need to get shares and upvotes from someone.

If you believe (like I do) that the number of good charity recommendations might be quite small, then it follows that a large intellectual ecosystem will quickly exhaust these possibilities and begin finding plausible-sounding alternatives.

I find it hard to believe that this isn’t already happening. We have people claiming that giving your friends cash or buying pizza for community events is the most effective charity. We have discussions of whether there is suffering in the fundamental particles of physics.

Effective Altruism is as much a philosophical movement as an empirical one. It isn’t always the case that we’ll be using p-values and statistics in our assessments. Sometimes, arguments are purely moral (like arguments about how much weight we should give to insect suffering). But both types of arguments can eventually drift into plausible-sounding nonsense if we exhaust all of the real content.

There is no reason to expect that we should be able to tell when this happens. Certainly, experimental psychology wasn’t able to until several years after much-hyped studies more or less stopped replicating, despite a population that many people would have previously described as full of serious-minded empiricists. Many psychology researchers still won’t admit that much of the past work needs to be revisited and potentially binned.

This is a problem of incentives, but I don’t know how to make the incentives any better. As a blogger (albeit one who largely summarizes and connects ideas first broached by others), I can tell you that many of the people who blog do it because they can’t not write. There are always going to be people competing to get their ideas heard, and the people who most consistently provide satisfying insights will most often end up with more views.

Therefore, I suggest caution. We do not know how many true insights we should expect, so we cannot tell how likely anything that feels insightful actually is to be true. Against this, the best defense is highly developed scepticism. Always remember to ask about the implications of new insights and to determine what information would falsify them. Always assume new insights have a low chance of being true. Notice when there seems to be pressure to produce novel insights long after the low-hanging fruit is gone, and be wary of anyone in that ecosystem.

We might not be able to change novelty culture, but we can do our best to guard against it.

[Special thanks to Cody Wild for coming up with most of the lyrics to Bayesian Hallelujah.]

Model

Hidden Disparate Impact

It is against commonly held intuitions that a group can be both over-represented in a profession, school, or program, and discriminated against. The simplest way to test for discrimination is to look at the general population, find the percentage that a group represents, and then expect it to make up exactly that percentage of any endeavour, absent discrimination.

Harvard, for example, is 17.1% Asian-American (foreign students are broken out separately in the statistics I found, so we’re only talking about American citizens or permanent residents in this post). America as a whole is 4.8% Asian-American. Therefore, many people will conclude that there is no discrimination happening against Asian-Americans at Harvard.

This is what would happen under many disparate impact analyses of discrimination, where the first step to showing discrimination is showing one group being accepted (for housing, employment, education, etc.) at a lower rate than another.

I think this naïve view is deeply flawed. First, we have clear evidence that Harvard is discriminating against Asian-Americans. When Harvard assigned personality scores to applicants, Asian-Americans were given the lowest scores of any ethnic group. When actual people met with Asian-American applicants, their personality scores were the same as everyone else’s; Harvard had assigned many of the low ratings without ever meeting the students, in what many suspect is an attempt to keep Asian-Americans below 20% of the student body.

Personality ratings in college admissions have a long and ugly history. They were invented to enforce quotas on Jews in the 1920s. These discriminatory quotas had a chilling effect on Jewish students; Dr. Jonas Salk, the inventor of the polio vaccine, chose the schools he attended primarily because they were among the few which didn’t discriminate against Jews. Imagine how prevalent and all-encompassing the quotas had to be for him to be affected.

If these discriminatory personality scores were dropped (or Harvard stopped fabricating bad results for Asian-Americans), Asian-American admissions at Harvard would rise.

This is because the proper measure of how many Asian-Americans should get into Harvard has little to do with their percentage of the population. It has to do with how many would meet Harvard’s formal admission criteria. Since Asian-Americans have much higher test scores than any other demographic group in America, it only stands to reason that we should expect to see Asian-Americans over-represented among any segment of the population that is selected at least in part by their test scores.

Put simply, Asian-American test scores are so good (on average) that we should expect to see proportionately more Asian-Americans than any other group get into Harvard.

This is the comparison we should be making when looking for discrimination in Harvard’s admissions. We know their criteria and we know roughly what the applicants look like. Given this, what percentage of applicants should get in if the criteria were applied fairly? The answer turns out to be about four times as many Asian-Americans as are currently getting in.

Hence, discrimination.

Unfortunately, this only picks up one type of discrimination – the discrimination that occurs when stated standards are applied in an unequal manner. There’s another type of discrimination that can occur when standards aren’t picked fairly at all; their purpose is to act as a barrier, not to assess suitability. This does come up in formal disparate impact analyses – you have to prove that any standards that lead to disparate impact are necessary – but we’ve already seen how you can avoid triggering those if you pick your standard carefully and your goal isn’t to lock a group out entirely, but instead to reduce their numbers.

Analyzing the necessity of standards that may have disparate impact can be hard and lead to disagreement.

For example, we know that Harvard’s selection criteria must discriminate, which is to say they must differentiate. We want elite institutions to have selection criteria that differentiate between applicants! There is a general agreement, for example, that someone who fails all of their senior year courses won’t get into Harvard and someone who aces them might.

If we didn’t have a slew of records from Harvard backing up the assertion that personality criteria were rigged to keep out Asian-Americans (like they once kept out Jews), evaluating whether discrimination was going on at Harvard would be harder. There’s no prima facie reason to consider personality scores (had they been adopted for a more neutral purpose and applied fairly) to be a bad selector.

It’s a bit old fashioned, but there’s nothing inherently wrong with claiming that you also want to select for moral character and leadership when choosing your student body. The case for this is perhaps clearer at Harvard, which views itself as a training ground for future leaders. Therefore, personality scores aren’t clearly useless criteria and we have to apply judgement when evaluating whether it’s reasonable for Harvard to select its students using them.

Historically, racism has used seemingly valid criteria to cloak itself in a veneer of acceptability. Redlining, the process by which African-Americans were denied mortgage financing, hid its discriminatory impact with clinical language about underwriting risk. In reality, redlining was not based on actual actuarial risk in a neighbourhood (poor whites were given loans, while middle-class African-Americans were denied them), but on the racial composition of the neighbourhood.

As in the Harvard case, it was only the discovery of redlined maps that made it clear what was going on; the criterion was seemingly borderline enough that, absent evidence, there was debate as to whether it existed for a reasonable purpose or not.

(One thing that helped trigger further investigation was the realization that well-off members of the African-American community weren’t getting loans that a neutral underwriter might expect them to qualify for; their income and credit was good enough that we would have expected them to receive loans.)

It is also interesting to note that both of these cases hid behind racial stereotypes. Redlining was defended because of “decay” in urban neighbourhoods (a decay that was in many cases caused by redlining), while Harvard’s admissions relied upon negative stereotypes of Asian-Americans. Many were dismissed with the label “Standard Strong”, implying that they were part of a faceless collective, all of whom had similarly impeccable grades and similarly excellent extracurriculars, but no interesting distinguishing features of their own.

Realizing how hard it is to tell apart valid criteria from discriminatory ones has made me much more sympathetic to points raised by technocrat-skeptics like Dr. Cathy O’Neil, who I have previously been harsh on. When bad actors are hiding the proof of their discrimination, it is genuinely difficult to separate real insurance underwriting (which needs to happen for anyone to get a mortgage) from discriminatory practices, just like it can be genuinely hard to separate legitimate college application processes from discriminatory ones.

While numerical measures, like test scores, have their own problems, they do provide some measure of impartiality. Interested observers can compare metrics to outcomes and notice when they’re off. Beyond redlining and college admissions, I wonder what other instances of potential discrimination a few civic minded statisticians might be able to unearth.

Model, Politics, Science

Science Is Less Political Than Its Critics

A while back, I was linked to this Tweet:

It had sparked a brisk and mostly unproductive debate. If you want to see people talking past each other, snide comments, and applause lights, check out the thread. One of the few productive exchanges centres on bridges.

Bridges are clearly a product of science (and its offspring, engineering) – only the simplest bridges can be built without scientific knowledge. Bridges also clearly have a political dimension. Not only are bridges normally the product of politics, they also are embedded in a broader political fabric. They change how a space can be used and change geography. They make certain actions – like commuting – easier and can drive urban changes like suburb growth and gentrification. Maintenance of bridges uses resources (time, money, skilled labour) that cannot be then used elsewhere. These are all clearly political concerns and they all clearly intersect deeply with existing power dynamics.

Even if no other part of science were political (and I don’t think that position would be defensible; there are many other branches of science that lead to things like bridges existing), bridges prove that science certainly can be political. I can’t deny this. I don’t want to deny this.

I also cannot deny that I’m deeply skeptical of the motives of anyone who trumpets a political view of science.

You see, science has unfortunate political implications for many movements. To give just one example, greenhouse gasses are causing global warming. Many conservative politicians have a vested interest in ignoring this or muddying the water, such that the scientific consensus “greenhouse gasses are increasing global temperatures” is conflated with the political position “we should burn less fossil fuel”. This allows a dismissal of the political position (“a carbon tax makes driving more expensive; it’s just a war on cars”) to also serve (via motivated cognition) as a dismissal of the scientific position.

(Would that carbon in the atmosphere could be dismissed so easily.)

While Dr. Wolfe is no climate change denier, it is hard to square her claim that calling science political is a neutral statement:

With the examples she chooses to demonstrate this:

When pointing out that science is political, we could also say things like “we chose to target polio for a major elimination effort before cancer, partially because it largely affected poor children instead of rich adults (as rich kids escaped polio in their summer homes)”. Talking about the ways that science has been a tool for protecting the most vulnerable paints a very different picture of what its political nature is about.

(I don’t think an argument over which view is more correct is ever likely to be particularly productive, but I do want to leave you with a few examples for my position.)

Dr. Wolfe is able to claim that politics is neutral, despite only using negative examples of its effects, by using a bait and switch between two definitions of “politics”. The bait is a technical and neutral definition, something along the lines of: “related to how we arrange and govern our society”. The switch is a more common definition, like: “engaging in and related to partisan politics”.

I start to feel that someone is being at least a bit disingenuous when they only furnish negative examples, examples that relate to this second meaning of the word political, then ask why their critics view politics as “inherently bad” (referring here to the first definition).

This sort of bait and switch pops up enough in post-modernist “all knowledge is human and constructed by existing hierarchies” places that someone got annoyed enough to coin a name for it: the motte and bailey fallacy.

Image Credit: Hchc2009, Wikimedia Commons.


It’s named after the early-medieval form of castle, pictured above. The motte is the raised mound topped by a keep – the easily defended part – and the bailey is the enclosed courtyard below it, which is valuable but harder to hold. This mirrors the two parts of the motte and bailey fallacy. The “motte” is the easily defensible statement (science is political because all human group activities are political) and the “bailey” is the more controversial belief actually held by the speaker (something like “we can’t trust science because of the number of men in it” or “we can’t trust science because it’s dominated by liberals”).

From Dr. Wolfe’s other tweets, we can see the bailey (sample: “There’s a direct line between scientism and maintaining existing power structures; you can see it in language on data transparency, the recent hoax, and more.”). This isn’t a neutral political position! It is one that a number of people disagree with. Certainly Sokal, the hoax paper writer who inspired the most recent hoaxes, is an old leftist who would very much like to empower labour at the expense of capitalists.

I have a lot of sympathy for the people in the twitter thread who jumped to defend positions that looked ridiculous from the perspective of “science is subject to the same forces as any other collective human endeavour” when they believed they were arguing with “science is a tool of right-wing interests”. There are a great many progressive scientists who might agree with Dr. Wolfe on many issues, but strongly disagree with what her position seems to be here. There are many of us who believe that science, if not necessary for a progressive mission, is necessary for the related humanistic mission of freeing humanity from drudgery, hunger, and disease.

It is true that we shouldn’t uncritically believe science. But the work of being a critical observer of science should not be about running an inquisition into scientists’ political beliefs. That’s how we get climate change deniers doxxing climate scientists. Critical observation of science is the much more boring work of checking theories for genuine scientific mistakes, looking for p-hacking, and double-checking that no one got so invested in their exciting results that they fudged their analyses to support them. Critical belief often hinges on weird mathematical identities, not political views.

But there are real and present dangers to uncritically disbelieving science whenever it conflicts with your political views. The increased incidence of measles outbreaks in vaccine-refusing populations is one such risk. Catastrophic and irreversible climate change is another.

When anyone says science is political and then goes on to emphasize all of the negatives of this statement, they’re giving people permission to believe their political views (like “gas should be cheap” or “vaccines are unnatural”) over the hard truths of science. And that has real consequences.

Saying that “science is political” is also political. And it’s one of those political things that is more likely than not to be driven by partisan politics. No one trumpets this unless they feel one of their political positions is endangered by empirical evidence. When talking with someone making this claim, it’s always good to keep sight of that.