Economics, Model

Why External Debt is so Dangerous to Developing Countries

I have previously written about how to evaluate and think about public debt in stable, developed countries. There, the overall message was that the dangers of debt were often (but not always) overhyped and cynically used by certain politicians. In a throwaway remark, I suggested the case was rather different for developing countries. This post unpacks that remark. It looks at why things go so poorly when developing countries take on debt and lays out a set of policies that I think could help developing countries that have high debt loads.

The first difference between developed- and developing-country debt lies in the terms of credit available; developing countries get much worse terms. This makes sense, as they’re often much more likely to default on their debt. Interest scales with risk, and it just is riskier to lend money to Zimbabwe than to Canada.

But interest payments aren’t the only way in which developing countries get worse terms. They are also given fewer options for the currency they take loans out in. And by fewer, I mean very few. I don’t think many developing countries are getting loans that aren’t denominated in US dollars, Euros, or, if dealing with China, Yuan. Contrast this with Canada, which has no problem taking out loans in its own currency.

When you own the currency your debts are denominated in, you can devalue it in response to high debt loads, making your debts cheaper to pay off in real terms (that is to say, your debt will be equivalent to fewer goods and services than it was before you caused inflation by devaluing your currency). This is bad for lenders. In the event of devaluation, they lose money. Depending on the severity of the inflation, it could be worse for them than a simple default would be, because they cannot even try to recover part of the loan in court proceedings.

(Devaluations don’t have to be large to reduce debt costs; they can also take the form of slightly higher inflation, such that interest is essentially nil on any loans. This is still quite bad for lenders and savers, although less likely to be worse than an actual default. The real risk comes when a country with little economic sophistication tries to engineer slightly higher inflation. It seems likely that it could drastically overshoot, with all of the attendant consequences.)
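To make the arithmetic concrete, here’s a minimal Python sketch, with entirely made-up numbers, of how the inflation that accompanies a devaluation erodes the real value of debt denominated in a country’s own currency:

```python
# A minimal sketch, with entirely made-up numbers: inflation following
# a devaluation erodes the real value of own-currency debt.

debt = 100_000_000_000   # nominal debt, in local currency
price_level = 1.00       # domestic price index when the debt was issued
inflation = 0.40         # 40% inflation after a devaluation

new_price_level = price_level * (1 + inflation)

# Real value: how many goods and services the debt is equivalent to.
real_before = debt / price_level
real_after = debt / new_price_level

print(f"Real debt before: {real_before:,.0f}")  # 100,000,000,000
print(f"Real debt after:  {real_after:,.0f}")   # ~71,428,571,429
```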

Devaluations and inflation are also politically fraught. They are especially hard on pensioners and anyone living on a fixed income – which is exactly the population most likely to make their displeasure felt at the ballot box. Lenders know that many interest groups would oppose a Canadian devaluation, but these sorts of governance controls and civil society pressure groups often just don’t exist (or are easily ignored by authoritarian leaders) in the developing world, which means devaluations can be less politically difficult there [1].

Having the option to devalue isn’t the only reason why you might want your debts denominated in your own currency (after all, it is rarely exercised). Having debts denominated in a foreign currency can be very disruptive to the domestic priorities of your country.

The Canadian dollar is primarily used by Canadians to buy stuff they want [2]. The Canadian government naturally ends up with Canadian dollars when people pay their taxes. This makes the loan repayment process very simple. Canadians just need to do what they’d do anyway, and as long as tax rates are sufficient, loans will be repaid.

When a developing country takes out a loan denominated in foreign currency, it needs some way to turn domestic production into that foreign currency in order to make repayments. This is only possible insofar as its economy produces something that people using the loan currency (often USD) want. Notably, this could be very different from what the people in the country want.

For example, the people of a country could want to grow staple crops, like cassava or maize. Unfortunately, they won’t really be able to sell these staples for USD; there isn’t much market for either in the US. There very well could be room for the country to export bananas to the US, but this means that some of their farmland must be diverted away from growing staples for domestic consumption and towards growing cash crops for foreign consumption. The government will have an incentive to push people towards this type of agriculture, because they need commodities that can be sold for USD in order to make their loan payments [3].

As long as the need for foreign currency persists, countries can be locked into resource extraction and left unable to progress towards more mature manufacturing- or knowledge-based economies.

This is bad enough, but there’s often greater economic damage when a country defaults on its foreign loans – and default many developing countries will, because they take on debt in a highly procyclical way [4].

A variable, indicator, or quantity is said to be procyclical if it is positively correlated with the overall health of an economy. We say that developing-nation debt is procyclical because it tends to grow while economies are booming. Specifically, new developing country debts seem to be correlated with many commodity prices. When commodity prices are high, it’s easier for developing countries that export them to take on debt.

It’s easy to see why this might be the case. Increasing commodity prices make the economies of developing countries look better. Exporting commodities can bring in a lot of money, which can have spillover effects that help the broader economy. As long as taxation isn’t too much of a mess, export revenues push government revenues higher. All of this makes a country look like a safer bet, which makes credit cheaper, which makes a country more likely to take it on.

Unfortunately (for resource-dependent countries; fortunately for consumers), most commodity price increases do not last forever. It is important to remember that prices are a signal – and that high prices are a giant flag that says “here be money”. Persistently high prices lead to increased production, which can eventually lead to a glut and falling prices. This most recently and spectacularly happened in 2014–2015, as American and Canadian unconventional oil and gas extraction led to a crash in the global price of oil [5].

When commodity prices crash, indebted, export-dependent countries are in big trouble. They are saddled with debt that is doubly difficult to pay back. First, their primary source of foreign cash for paying off their debts is gone with the crash in commodity prices (this will look like their currency plummeting in value). Second, their domestic tax base is much lower, starving them of revenue.

Even if a country wants to keep paying its debts, a commodity crash can leave it with no choice but default. A dismal exchange rate and minuscule government revenues mean that the money to pay back dollar-denominated debts just doesn’t exist.

Oddly enough, defaulting can offer some relief from these problems; it often comes bundled with a restructuring, which results in lower debt payments. Unfortunately, this relief tends to be temporary. Unless it’s coupled with strict austerity, it tends to lead to another problem: devastating inflation.

Countries that end up defaulting on external debt are generally not living within their long-term means. Often, they’re providing a level of public services that is unsustainable without foreign borrowing, or they’re seeing so much government money diverted by corrupt officials that foreign debt is the only way to keep the lights on. One inevitable effect of a default is losing access to credit markets. Even when a restructuring can stem the short-term bleeding, there is often a budget hole left behind when the foreign cash dries up [6]. Inflation occurs because many governments with weak institutions fill this budgetary void with the printing press.

There is nothing inherently wrong with printing money, just like there’s nothing inherently wrong with having a shot of whiskey. A shot of whiskey can give you the courage to ask out the cute person at the bar; it can get you nerved up to sing in front of your friends. Or it can lead to ten more shots and a crushing hangover. Printing money is like taking shots: in some circumstances it can really improve your life, and it’s fine in moderation, but if you overdo it, you’re in for a bad time.

When developing countries turn to the printing press, they often do it like a sailor turning to whiskey after six weeks of enforced sobriety.

Teachers need to be paid? Print some money. Social assistance? Print more money. Roads need to be maintained? Print even more money.

The money supply should normally expand only slightly more quickly than the economy grows [7]. When it expands more quickly than that, prices begin to increase in lockstep. People are still paid, but the money is worth less. Savings disappear. Velocity (the speed with which money travels through the economy) increases as people try to spend money as quickly as possible, driving prices ever higher.
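One standard way to write this relationship down is the equation of exchange from the quantity theory of money, MV = PQ. Here’s a minimal sketch with purely illustrative numbers:

```python
# A minimal sketch of the equation of exchange, M * V = P * Q:
# money supply times velocity equals the price level times real output.
# All numbers are purely illustrative.

M = 100.0    # money supply
V = 2.0      # velocity: how quickly money circulates
Q = 1000.0   # real output (goods and services produced)

print(f"Price level: {M * V / Q:.2f}")            # 0.20

# The printing press: the money supply triples while output is flat.
M *= 3
print(f"After printing: {M * V / Q:.2f}")         # 0.60 - prices triple

# People spend faster to outrun inflation, so velocity rises too,
# driving prices higher still.
V *= 1.5
print(f"After velocity rises: {M * V / Q:.2f}")   # 0.90
```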

As the currency becomes less and less valuable, it becomes harder and harder to pay for imports. We’ve already talked about how you can only buy external goods in your own currency to the extent that people outside your country have a use for your currency. No one has a use for a rapidly inflating currency. This is why Venezuela is facing shortages of food and medicine – commodities it formerly imported but now cannot afford.

The terminal state of inflation is hyperinflation, where people need to put their currency in wheelbarrows to do anything with it. Anyone who has read about Weimar Germany in the early 1920s knows that hyperinflation opens the door to demagogues and coups – to anything or anyone who can convince the people that the suffering can be stopped.

Taking into account all of this – the inflation, the banana plantations, the boom and bust cycles – it seems clear that it would often be better if developing countries took on less debt. Why don’t they?

One possible explanation is the IMF (International Monetary Fund). The IMF often acts as a lender of last resort, giving countries bridging loans and negotiating new repayment terms when the prospect of default is raised. The measures that the IMF takes to help countries repay their debts have earned it many critics who rightly note that there can be a human cost to the budget cuts the IMF demands as a condition for aid [8]. Unfortunately, this is not the only way the IMF might make sovereign defaults worse. It also seems likely that the IMF represents a significant moral hazard, one that encourages risky lending to countries that cannot sustain debt loads long-term [9].

A moral hazard is any situation in which someone takes risks knowing that they won’t have to pay the penalty if their bet goes sour. Within the context of international debt and the IMF, a moral hazard arises when lenders know that they can count on an IMF bailout to help them recover their principal in the event of a default.

In a world without the IMF, it is very possible that borrowing costs would be higher for developing countries, which could serve as a deterrent to taking on debt.

(It’s also possible that countries with weak institutions and bad governance will always take on unsustainable levels of debt, absent some external force stopping them. It’s for this reason that I’d prefer some sort of qualified ban on loaning to developing countries that have debt above some small fraction of their GDP over any plan that relies on abolishing the IMF in the hopes of solving all problems related to developing country debt.)

Paired with a qualified ban on new debt [10], I think there are two good arguments for forgiving much of the debt currently held by many developing countries.

First and simplest are the humanitarian reasons. Freed of debt burdens, developing countries might be able to provide more services for their citizens, or invest in infrastructure so that they could grow more quickly. Debt forgiveness would have to be paired with institutional reform and increased transparency, so that newfound surpluses aren’t diverted into the pockets of kleptocrats, which means any forgiveness policy could have the added benefit of acting as a big stick to force much needed governance changes.

Second is the doctrine of odious debts. An odious debt is any debt incurred by a despotic leader for the purpose of enriching themself or their cronies, or repressing their citizens. Under the legal doctrine of odious debts, these debts should be treated as the personal debt of the despot and wiped out whenever there is a change in regime. The logic behind this doctrine is simple: by loaning to a despot and enabling their repression, the creditors committed a violent act against the people of the country. Those people should have no obligation (legal or moral) to pay back their aggressors.

The doctrine of odious debts wouldn’t apply to every indebted developing country, but serious arguments can be made that several countries (such as Venezuela) should expect at least some reduction in their debts, should the local regime change and should international legal scholars (and courts) recognize the odious debt principle.

Until international progress is made on a clear list of conditions under which countries cannot take on new debt and a comprehensive program of debt forgiveness, we’re going to see the same cycle repeat over and over again. Countries will take on debt when their commodities are expensive, locking them into an economy dependent on resource extraction. Then prices will fall, default will loom, and the IMF will protect investors. Countries are left gutted, lenders are left rich, taxpayers the world over hold the bag, and poverty and misery continue – until the cycle starts over once again.

A global economy without this cycle of boom, bust, and poverty might be one of our best chances of providing stable, sustainable growth to everyone in the world. I hope one day we get to see it.

Footnotes

[1] I so wanted to get through this post without any footnotes, but here we are.

There’s one other reason why e.g. Canada is a lower risk for devaluation than e.g. Venezuela: central bank independence. The Bank of Canada is staffed by expert economists and somewhat isolated from political interference. It is unclear just how much it would be willing to devalue the currency, even if that was the desire of the Government of Canada.

Monetary policy is one lever of power that almost no developed country is willing to trust directly to politicians, a safeguard that doesn’t exist in all developing countries. Without it, devaluation and inflation risk are much higher. ^

[2] Secondarily, it’s used to speculatively bet on the health of the resource extraction portion of the global economy, but that’s not, like, too major of a thing. ^

[3] It’s not that the government is directly selling the bananas for USD. It’s that the government collects taxes in the local currency and the local currency cannot be converted to USD unless the country has something that USD holders want. Exchange rates are determined based on how much people want to hold one currency vs. another. A decrease in the value of products produced by a country relative to other parts of the global economy means that people will be less interested in holding that country’s currency and its value will fall. This is what happened in 2015 to the Canadian dollar; oil prices fell (while other commodity prices held steady) and the value of the dollar dropped.

Countries that are heavily dependent on the export of only one or two commodities can see wild swings in their currencies as those underlying commodities change in value. The Russian ruble, for example, is very tightly linked to the price of oil; it lost half its value between 2014 and 2016, during the oil price slump. This is a much larger depreciation than the Canadian dollar (which also suffered, but was buoyed up by Canada’s greater economic diversity). ^

[4] This section is drawn from the research of Dr. Carmen Reinhart and Dr. Kenneth Rogoff, as reported in This Time Is Different, Chapter 5: Cycles of Default on External Debt. ^

[5] This is why peak oil theories ultimately fell apart. Proponents didn’t realize that consistently high oil prices would lead to the exploitation of unconventional hydrocarbons. The initial research and development of these new sources made sense only because of the sky-high oil prices of the day. In an efficient market, profits will always eventually return to 0. We don’t have a perfectly efficient market, but it’s efficient enough that commodity prices rarely stay too high for too long. ^

[6] Access to foreign cash is gone because no one lends money to countries that just defaulted on their debts. Access to external credit does often come back the next time there’s a commodity bubble, but that could be a decade in the future. ^

[7] In some downturns, a bit of extra inflation can help lower sticky wages in real terms and return a country to full employment. My reading suggests that commodity crashes are not one of those cases. ^

[8] I’m cynical enough to believe that there is enough graft in most of these cases that human costs could be largely averted, if only the leaders of the country were forced to see their graft dry up. I’m also pragmatic enough to believe that this will rarely happen. I do believe that one positive impact of the IMF getting involved is that its status as an international institution gives it more power with which to force transparency upon debtor nations and attempt to stop diversion of public money to well-connected insiders. ^

[9] A quick search found two papers that claimed there was a moral hazard associated with the IMF and one article hosted by the IMF (and as far as I can tell, later at least somewhat repudiated by the author in the book cited in [4]) that claims there is no moral hazard. Draw what conclusions from this you will. ^

[10] I’m not entirely sure what such a ban would look like, but I’m thinking some hard cap on amount loaned based on percent of GDP, with the percent able to rise in response to reforms that boost transparency, cut corruption, and establish modern safeguards on the central bank. ^

Model, Philosophy

Against Novelty Culture

So, there’s this thing that happens in certain intellectual communities, like (to give a totally random example) social psychology. This thing is that novel takes are rewarded. New insights are rewarded. Figuring out things that no one has before is rewarded. The high-status people in such a community are the ones who come up with and disseminate many new insights.

On the face of it, this is good! New insights are how we get penicillin and flight and Pad Thai burritos. But there’s one itty bitty little problem with building a culture around it.

Good (and correct!) new ideas are a finite resource.

This isn’t news. Back in 2005, John Ioannidis laid out the case for “most published research findings” being false. It turns out that when you have only a small chance of coming up with a correct idea, even statistical tests designed to screen out false positives can break down.

A quick example. There are approximately 25,000 genes in the human genome. Imagine you are searching for genes that increase the risk of schizophrenia (chosen for this example because it is a complex condition believed to be linked to many genes). If there are 100 genes involved in schizophrenia, the odds of any given gene chosen at random being involved are 1 in 250. You, the investigating scientist, decide that you want about an 80% chance of finding some genes that are linked (this is called study power, and 80% is a common value). You run a bunch of tests, analyze a bunch of DNA, and think you have a candidate. This gene has been “proven” to be associated with schizophrenia at the p = 0.05 significance level.

(A p-value is the probability of observing an event at least as extreme as the observed one, if the null hypothesis is true. This means that if the gene isn’t associated with schizophrenia, there is only a 1 in 20 chance – 5% – that we’d see a result as extreme or more extreme than the one we observed.)

At the start, we had a 1 in 250 chance of finding a gene. Now that we have a gene, we think there’s a 19 in 20 chance that it’s actually partially responsible for schizophrenia (technically, if we looked at multiple candidates, we should do something slightly different here, but many scientists still don’t, making this still a valid example). Which probability do we trust?

There’s actually an equation to figure it out. It’s called Bayes’ rule, and statisticians and scientists use it to update probabilities in response to new information. It goes like this:

P(A|B) = P(B|A) × P(A) / P(B)

(You can sing this to the tune of Hallelujah; take P of A when given B / times P of A a priori / divide the whole thing by B’s expectation / new evidence you may soon find / but you will not be in a bind / for you can add it to your calculation.)

In plain language, it means that the probability of something being true after an observation (P(A|B)) is equal to the probability of it being true absent any observations (P(A), 1 in 250 here), times the probability of the observation happening if it is true (P(B|A), 0.8 here), divided by the baseline probability of the observation (P(B), approximately 1 in 20 here, because false positives dominate when true positives are this rare).

With these numbers from our example, we can see that the probability of a gene actually being associated with schizophrenia when it is significant at p = 0.05 is… 6.4%.

I took this long detour to illustrate a very important point: one of the strongest determinants of how likely something is to actually be true is the base chance it has of being true. If we expected 1000 genes to be associated with schizophrenia, then the base chance would be 1 in 25, and the probability our gene actually plays a role would jump up to 64%.
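If you want to check the arithmetic yourself, here’s a short Python sketch of the calculation (the function names are just labels for this post; the second function adds the step glossed over above, computing P(B) from both true and false positives):

```python
# The post's calculation, P(A|B) = P(B|A) * P(A) / P(B), with P(B)
# approximated as a flat 1 in 20 (the false-positive rate).

def posterior_flat(prior, power=0.8, p_b=0.05):
    return power * prior / p_b

print(posterior_flat(1 / 250))  # 0.064 -> the 6.4% above
print(posterior_flat(1 / 25))   # 0.64  -> the 64% above

# A fuller version computes P(B) from both true positives and false
# positives: P(B) = power * prior + alpha * (1 - prior). It gives
# ~6.0% and ~40%, so the flat approximation is only close when the
# prior is small and false positives dominate.

def posterior_full(prior, power=0.8, alpha=0.05):
    p_b = power * prior + alpha * (1 - prior)
    return power * prior / p_b

print(posterior_full(1 / 250))  # ~0.060
print(posterior_full(1 / 25))   # ~0.40
```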

To have ten times the chance of getting a study right, you can be ten times more selective (which probably requires much more than ten times the effort)… or you can investigate something ten times as likely to actually occur. Base rates can be more powerful than statistics, more powerful than arguments, and more powerful than common sense.

This suggests that any community that bases status around producing novel insights will mostly become a community based around producing novel-seeming (but false!) insights once it exhausts all of the available true (and easily attainable) insights it could discover. There isn’t a harsh dividing line, just a gradual trend towards plausible nonsense as the underlying vein of truth is mined out, but the studies and blog posts continue.

Except the reality is probably even worse, because any competition for status in such a community (tenure, page views) will become an iterative process that rewards those best able to come up with plausible sounding wrappers on unfortunately false information.

When this happens, we have people publishing studies with terrible analyses but highly sharable titles (anyone remember the himmicanes paper?), with the people at the top calling anyone who questions their shoddy research “methodological terrorists”.

I know I have at least one friend who is rolling their eyes right now, because I always make fun of the reproducibility crisis in psychology.

But I’m just using that because it’s a convenient example. What I’m really worried about is the Effective Altruism community.

(Effective Altruism is a movement that attempts to maximize the good that charitable donations can do by encouraging donation to the charities that have the highest positive impact per dollar spent. One list of highly effective charities can be found on GiveWell; GiveWell has demonstrated a notable trend away from novelty, such that I believe this post does not apply to them.)

We are a group of people with countless forums and blogs, as well as several organizations devoted to analyzing the evidence around charity effectiveness. We have conventional organizations, like GiveWell, coexisting with less conventional alternatives, like Wild-Animal Suffering Research.

All of these organizations need to justify their existence somehow. All of these blogs need to get shares and upvotes from someone.

If you believe (like I do) that the number of good charity recommendations might be quite small, then it follows that a large intellectual ecosystem will quickly exhaust these possibilities and begin finding plausible sounding alternatives.

I find it hard to believe that this isn’t already happening. We have people claiming that giving your friends cash or buying pizza for community events is the most effective charity. We have discussions of whether there is suffering in the fundamental particles of physics.

Effective Altruism is as much a philosophy movement as an empirical one. It isn’t always the case that we’ll be using p-values and statistics in our assessment. Sometimes, arguments are purely moral (like arguments about how much weight we should give to insect suffering). But both types of arguments can eventually drift into plausible sounding nonsense if we exhaust all of the real content.

There is no reason to expect that we should be able to tell when this happens. Certainly, experimental psychology wasn’t able to until several years after much-hyped studies more-or-less stopped replicating, despite a population that many people would have previously described as full of serious-minded empiricists. Many psychology researchers still won’t admit that much of the past work needs to be revisited and potentially binned.

This is a problem of incentives, but I don’t know how to make the incentives any better. As a blogger (albeit one who largely summarizes and connects ideas first broached by others), I can tell you that many of the people who blog do it because they can’t not write. There are always going to be people competing to get their ideas heard, and the people who most consistently provide satisfying insights will most often end up with more views.

Therefore, I suggest caution. We do not know how many true insights we should expect, so we cannot tell how likely anything that feels insightful is to actually be true. Against this, the best defense is highly developed scepticism. Always remember to ask what new insights imply and what information would falsify them. Always assume new insights have a low chance of being true. Notice when there seems to be pressure to produce novel insights long after the low-hanging fruit is gone, and be wary of anyone in that ecosystem.

We might not be able to change novelty culture, but we can do our best to guard against it.

[Special thanks to Cody Wild for coming up with most of the lyrics to Bayesian Hallelujah.]

Model

Hidden Disparate Impact

It is against commonly held intuitions that a group can be both over-represented in a profession, school, or program, and discriminated against. The simplest way to test for discrimination is to look at the general population, find the percent that a group represents, then expect them to represent exactly that percentage in any endeavour, absent discrimination.

Harvard, for example, is 17.1% Asian-American (foreign students are broken out separately in the statistics I found, so we’re only talking about American citizens or permanent residents in this post). America as a whole is 4.8% Asian-American. Therefore, many people will conclude that there is no discrimination happening against Asian-Americans at Harvard.

This is what would happen under many disparate impact analyses of discrimination, where the first step to showing discrimination is showing one group being accepted (for housing, employment, education, etc.) at a lower rate than another.

I think this naïve view is deeply flawed. First, we have clear evidence that Harvard is discriminating against Asian-Americans. When Harvard assigned personality scores to applicants, Asian-Americans were given the lowest scores of any ethnic group. When actual people met with Asian-American applicants, their personality scores were the same as everyone else’s; Harvard had assigned many of the low ratings without ever meeting the students, in what many suspect is an attempt to keep Asian-Americans below 20% of the student body.

Personality ratings in college admissions have a long and ugly history. They were invented to enforce quotas on Jews in the 1920s. These discriminatory quotas had a chilling effect on Jewish students; Dr. Jonas Salk, the inventor of the polio vaccine, chose the schools he attended primarily because they were among the few which didn’t discriminate against Jews. Imagine how prevalent and all-encompassing the quotas had to be for him to be affected.

If these discriminatory personality scores were dropped (or Harvard stopped fabricating bad results for Asian-Americans), Asian-American admissions at Harvard would rise.

This is because the proper measure of how many Asian-Americans should get into Harvard has little to do with their percentage of the population. It has to do with how many would meet Harvard’s formal admission criteria. Since Asian-Americans have much higher test scores than any other demographic group in America, it only stands to reason that we should expect to see Asian-Americans over-represented among any segment of the population that is selected at least in part by their test scores.

Put simply, Asian-American test scores are so good (on average) that we should expect to see proportionately more Asian-Americans than any other group get into Harvard.

This is the comparison we should be making when looking for discrimination in Harvard’s admissions. We know their criteria and we know roughly what the applicants look like. Given this, what percentage of applicants should get in if the criteria were applied fairly? The answer turns out to be about four times as many Asian-Americans as are currently getting in.

Hence, discrimination.
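To see why a group with higher average scores should end up over-represented under a purely score-based cutoff, here’s an illustrative simulation. Every number in it is invented – it isn’t Harvard’s data, just a demonstration of the statistical effect:

```python
import random

# An illustrative simulation - every number here is invented, not
# Harvard's data. Group "A" is 5% of applicants but scores half a
# standard deviation higher on average; admission goes purely to the
# top 1% of scores.

random.seed(0)

N = 200_000
share_a = 0.05

applicants = []
for _ in range(N):
    if random.random() < share_a:
        applicants.append(("A", random.gauss(0.5, 1.0)))  # higher mean
    else:
        applicants.append(("B", random.gauss(0.0, 1.0)))

applicants.sort(key=lambda pair: pair[1], reverse=True)
admitted = applicants[: N // 100]  # top 1% by score alone

a_share = sum(1 for group, _ in admitted if group == "A") / len(admitted)
print(f"Group A: {share_a:.0%} of applicants, {a_share:.0%} of admits")
# Expect roughly 15% of admits - about three times the population share.
```

With these made-up numbers, the higher-scoring group takes roughly three times its population share of the admitted spots, even though the cutoff is applied identically to everyone.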

Unfortunately, this only picks up one type of discrimination – the discrimination that occurs when stated standards are being applied in an unequal manner. There’s another type of discrimination that can occur when standards aren’t picked fairly at all; their purpose is to act as a barrier, not assess suitability. This does come up in formal disparate impact analyses – you have to prove that any standards that lead to disparate impact are necessary – but we’ve already seen how you can avoid triggering those if you pick your standard carefully and your goal isn’t to lock a group out entirely, but instead to reduce their numbers.

Analyzing the necessity of standards that may have disparate impact can be hard and lead to disagreement.

For example, we know that Harvard’s selection criteria must discriminate, which is to say they must differentiate. We want elite institutions to have selection criteria that differentiate between applicants! There is a general agreement, for example, that someone who fails all of their senior year courses won’t get into Harvard and someone who aces them might.

If we didn’t have a slew of records from Harvard backing up the assertion that personality criteria were rigged to keep out Asian-Americans (like they once kept out Jews), evaluating whether discrimination was going on at Harvard would be harder. There’s no prima facie reason to consider personality scores (had they been adopted for a more neutral purpose and applied fairly) to be a bad selector.

It’s a bit old fashioned, but there’s nothing inherently wrong with claiming that you also want to select for moral character and leadership when choosing your student body. The case for this is perhaps clearer at Harvard, which views itself as a training ground for future leaders. Therefore, personality scores aren’t clearly useless criteria and we have to apply judgement when evaluating whether it’s reasonable for Harvard to select its students using them.

Historically, racism has used seemingly valid criteria to cloak itself in a veneer of acceptability. Redlining, the process by which African-Americans were denied mortgage financing, hid its discriminatory impact with clinical language about underwriting risk. In reality, redlining was not based on actual actuarial risk in a neighbourhood (poor whites were given loans, while middle-class African-Americans were denied them), but on the racial composition of the neighbourhood.

Like in the Harvard case, it was only the discovery of redlined maps that made it clear what was going on; the criterion was seemingly borderline enough that absent evidence, there was debate as to whether it existed for reasonable purpose or not.

(One thing that helped trigger further investigation was the realization that well-off members of the African-American community weren’t getting loans that a neutral underwriter might expect them to qualify for; their income and credit was good enough that we would have expected them to receive loans.)

It is also interesting to note that both of these cases hid behind racial stereotypes. Redlining was defended because of “decay” in urban neighbourhoods (a decay that was in many cases caused by redlining), while Harvard’s admissions relied upon negative stereotypes of Asian-Americans. Many were dismissed with the label “Standard Strong”, implying that they were part of a faceless collective, all of whom had similarly impeccable grades and similarly excellent extracurriculars, but no interesting distinguishing features of their own.

Realizing how hard it is to tell apart valid criteria from discriminatory ones has made me much more sympathetic to points raised by technocrat-skeptics like Dr. Cathy O’Neil, who I have previously been harsh on. When bad actors are hiding the proof of their discrimination, it is genuinely difficult to separate real insurance underwriting (which needs to happen for anyone to get a mortgage) from discriminatory practices, just like it can be genuinely hard to separate legitimate college application processes from discriminatory ones.

While numerical measures, like test scores, have their own problems, they do provide some measure of impartiality. Interested observers can compare metrics to outcomes and notice when they’re off. Beyond redlining and college admissions, I wonder what other instances of potential discrimination a few civic minded statisticians might be able to unearth.

Link Post

Link Post – November 2018

When a poet writes about his experience of becoming a lawyer after his release from jail, you know it’s going to be a punch in the gut. One thing I noticed: he would have had a much easier time reintegrating into society, finding a job, etc. had he been tried as a juvenile, rather than an adult. Has there been any meaningful study on recidivism rates between these two groups? You could compare 17 year olds and 18 year olds charged with the same crime and look at outcomes fifteen years down the road.

Segway’s patents are now at the core of the new crop of ride-sharing scooters, which may finally bring about the original promise of the Segway. Perhaps one element of Segway’s downfall (beyond how uncool they were) is how proper they were about everything. They worked hard to get laws passed that made it legal to ride Segways on the sidewalk, rather than “innovating on the regulatory side” (read: ignoring the law) like the scooter companies do.

The winner of the 2018 Boston Marathon is a delightfully dedicated oddball.

Housing can’t be both affordable and a good investment. Currently, “good investment” seems to be beating affordable in many cities, and it’s residents’ groups that ostensibly support affordable housing who are fighting to keep it that way with restrictive zoning.

Does Canada even exist? Or is it a made up place Americans use as the homeland when travelling? I must admit, I’ve been convinced almost to the point of Canadagnosticism.

What would happen if you laid out all the contradictory information about rapid transit in Karachi in one place? “Something a bit post-modern and a bit absurd” seems to be the answer.

Dying scientist launches a desperate attempt to prove that his herpes vaccine works. In the movies, he’d be ultimately vindicated. In real life, several people are left with lingering side effects and all of the data he collected is tainted by poor methodology.

The whole “rich kids of Instagram” thing is full of pyramid schemes that advertise risky financial products to impoverished teens.

Political theorist Hannah Arendt once claimed that you must never say “who am I to judge”. A therapist who sees dramatic improvements after teaching their clients to be more judgemental seems to agree.

Whenever I read about bullshit jobs, I feel like economic competition needs to be turned up to 11 so that companies have no slack with which to hire people to do pointless tasks. One thing that progressives might not appreciate: the investor class probably hates bullshit jobs even more than they do; from the perspective of a stockholder, a bullshit job is management stealing their money so that the managers can get off on feeling powerful.