Aspiring author, sometimes blogger. By day, I’m a Software Developer at Alert Labs. By night I write things. Both of these look the exact same to an outside observer, because it’s just me sitting in front of a computer screen, hitting buttons.
This true crime story ticks a lot of my boxes. The villain is created by the slow entropic decay of corruption and temptation, while the hero chose to prosecute white collar crimes because he wanted to go after crimes of greed, not desperation. I continue to believe that as a society, we’re too lenient on crimes of greed and too harsh on crimes of desperation, so it was easy to cheer the prosecution on.
This post claims that the pharmaceutical industry is soon going to fall apart because returns on R&D aren’t keeping up; all the low hanging fruit is gone and none of the harder to reach stuff is profitable. If anyone can give me a sense of how deeply I should be worried by this, I’ll be deeply appreciative.
If your restaurant is failing, or if you want to maximize your chances of success when you open a new location, you can apparently turn to restaurant consultants. I was especially appreciative of their weird specialized vocabulary.
The first commercial flight to circumnavigate the world did so accidentally, soon after the attack on Pearl Harbour made its return flight over the Pacific too dangerous. This is one of the cases where you want to yell at reality for being too unrealistic with its tropes; it features everything from an accidental passenger to a near miss in a minefield.
I’m young enough that I kind of just assumed the food item known as “the wrap” always existed. Turns out this is not the case! This article tracks the rise of wraps and the mania that surrounded them, as well as their inevitable fall and strange afterlife as a bland staple in catered lunches.
In 1994, Paul Krugman wrote the famous “Myth of Asia’s Miracle“, which claimed that Asian countries could not maintain their high growth rates indefinitely, especially because they lacked high productivity growth. 15 years later, another economist revisits this assertion and shows that massive re-investment can more than make up for slow productivity growth and drive strong overall growth. Turns out that in nation-building, quantity can have a quality all of its own.
I found a record of important political events from 1890 and I have to say, I’m glad we’ve come so far since the 19th century. Back then, the rest of the world was ganging up on America for taking a sudden protectionist turn, which doesn’t remind me of anything current at all.
I find I really enjoy it when judges are acerbic, which makes this paper, written by a judge about how annoying lawyers are, like catnip to me. Its introduction contains the line: ‘On mornings when I am scheduled to hear a family case, if someone greets me in the court house hallway with, “Have a good morning, Your Honour,” I typically reply, “Thank you, but I have other plans.” I adhere to the view that a legal system without Family Court is like Christianity without Hell.’, so you can tell right away that it’s going to be good.
I have previously written about how to evaluate and think about public debt in stable, developed countries. There, the overall message was that the dangers of debt were often (but not always) overhyped and cynically used by certain politicians. In a throwaway remark, I suggested the case was rather different for developing countries. This post unpacks that remark. It looks at why things go so poorly when developing countries take on debt and lays out a set of policies that I think could help developing countries that have high debt loads.
The very first difference in debt between developed and developing countries lies in the available terms of credit; developing countries get much worse terms. This makes sense, as they’re often much more likely to default on their debt. Interest scales with risk and it just is riskier to lend money to Zimbabwe than to Canada.
But interest payments aren’t the only way in which developing countries get worse terms. They are also given fewer options for the currency they take loans out in. And by fewer, I mean very few. I don’t think many developing countries are getting loans that aren’t denominated in US dollars, Euros, or, if dealing with China, Yuan. Contrast this with Canada, which has no problem taking out loans in its own currency.
When you own the currency of your debts, you can devalue it in response to high debt loads, making your debts cheaper to pay off in real terms (that is to say, your debt will be equivalent to fewer goods and services than it was before you caused inflation by devaluing your currency). This is bad for lenders. In the event of devaluation, they lose money. Depending on the severity of the inflation, it could be worse for them than a simple default would be, because they cannot even try and recover part of the loan in court proceedings.
(Devaluations don’t have to be large to reduce debt costs; they can also take the form of slightly higher inflation, such that interest is essentially nil on any loans. This is still quite bad for lenders and savers, although less likely to be worse than an actual default. The real risk comes when a country with little economic sophistication tries to engineer slightly higher inflation. It seems likely that they could drastically overshoot, with all of the attendant consequences.)
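To make the real-terms mechanism concrete, here is a toy calculation (the numbers are invented purely for illustration):

```python
# Toy example: a government owes 100 billion in its own currency and
# engineers 25% inflation. All numbers are invented for illustration.
nominal_debt = 100e9  # debt denominated in the local currency
inflation = 0.25      # prices rise 25% after the devaluation

# The nominal debt is unchanged, but each unit of currency now buys
# less, so the debt is equivalent to fewer real goods and services.
real_debt = nominal_debt / (1 + inflation)

print(f"Real value of the debt: {real_debt / 1e9:.0f} billion")  # 80 billion
```

The lenders are still repaid in full on paper, but in real terms they receive a fifth less than they bargained for.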
Devaluations and inflation are also politically fraught. They are especially hard on pensioners and anyone living on a fixed income – which is exactly the population most likely to make their displeasure felt at the ballot box. Lenders know that many interest groups would oppose a Canadian devaluation, but these sorts of governance controls and civil society pressure groups often just don’t exist (or are easily ignored by authoritarian leaders) in the developing world, which means devaluations can be less politically difficult.
Having the option to devalue isn’t the only reason why you might want your debts denominated in your own currency (after all, it is rarely exercised). Having debts denominated in a foreign currency can be very disruptive to the domestic priorities of your country.
The Canadian dollar is primarily used by Canadians to buy stuff they want. The Canadian government naturally ends up with Canadian dollars when people pay their taxes. This makes the loan repayment process very simple. Canadians just need to do what they’d do anyway and as long as tax rates are sufficient, loans will be repaid.
A country that owes US dollars is in a very different position. For example, its people could want to grow staple crops, like cassava or maize. Unfortunately, they won’t really be able to sell these staples for USD; there isn’t much market for either in the US. There very well could be room for the country to export bananas to the US, but this means that some of their farmland must be diverted away from growing staples for domestic consumption and towards growing cash crops for foreign consumption. The government will have an incentive to push people towards this type of agriculture, because it needs commodities that can be sold for USD in order to make its loan payments.
As long as the need for foreign currency persists, countries can be locked into resource extraction and left unable to progress towards more mature manufacturing- or knowledge-based economies.
This is bad enough, but there’s often greater economic damage when a country defaults on its foreign loans – and default many developing countries will, because they take on debt in a highly procyclical way.
A variable, indicator, or quantity is said to be procyclical if it is correlated with the overall health of an economy. We say that developing nation debt is procyclical because it tends to grow while economies are expanding. Specifically, new developing country debts seem to be correlated with many commodity prices. When commodity prices are high, it’s easier for developing countries that export them to take on debt.
It’s easy to see why this might be the case. Increasing commodity prices make the economies of developing countries look better. Exporting commodities can bring in a lot of money, which can have spillover effects that help the broader economy. As long as taxation isn’t too much of a mess, export revenues make government revenues higher. All of this makes a country look like a safer bet, which makes credit cheaper, which makes a country more likely to take it on.
Unfortunately (for resource dependent countries; fortunately for consumers), most commodity price increases do not last forever. It is important to remember that prices are a signal – and that high prices are a giant flag that says “here be money”. Persistently high prices lead to increased production, which can eventually lead to a glut and falling prices. This most recently and spectacularly happened in 2014-2015, as American and Canadian unconventional oil and gas extraction led to a crash in the global price of oil.
When commodity prices crash, indebted, export-dependent countries are in big trouble. They are saddled with debt that is doubly difficult to pay back. First, their primary source of foreign cash for paying off their debts is gone with the crash in commodity prices (this will look like their currency plummeting in value). Second, their domestic tax base is much lower, starving them of revenue.
Even if a country wants to keep paying its debts, a commodity crash can leave them with no choice but a default. A dismal exchange rate and minuscule government revenues mean that the money to pay back dollar denominated debts just doesn’t exist.
Oddly enough, defaulting can offer some relief from these problems; it often comes bundled with a restructuring, which results in lower debt payments. Unfortunately, this relief tends to be temporary. Unless it’s coupled with strict austerity, it tends to lead to another problem: devastating inflation.
Countries that end up defaulting on external debt are generally not living within their long-term means. Often, they’re providing a level of public services that are unsustainable without foreign borrowing, or they’re seeing so much government money diverted by corrupt officials that foreign debt is the only way to keep the lights on. One inevitable effect of a default is losing access to credit markets. Even when a restructuring can stem the short-term bleeding, there is often a budget hole left behind when the foreign cash dries up. Inflation occurs because many governments with weak institutions fill this budgetary void with the printing press.
There is nothing inherently wrong with printing money, just like there’s nothing inherently wrong with having a shot of whiskey. A shot of whiskey can give you the courage to ask out the cute person at the bar; it can get you nerved up to sing in front of your friends. Or it can lead to ten more shots and a crushing hangover. Printing money is like taking shots: in some circumstances it can really improve your life, and it’s fine in moderation, but if you overdo it you’re in for a bad time.
When developing countries turn to the printing press, they often do it like a sailor turning to whiskey after six weeks of enforced sobriety.
Teachers need to be paid? Print some money. Social assistance? Print more money. Roads need to be maintained? Print even more money.
The money supply should normally expand only slightly more quickly than economic growth. When it expands more quickly, prices begin to increase in lockstep. People are still paid, but the money is worth less. Savings disappear. Velocity (the speed with which money travels through the economy) increases as people try and spend money as quickly as possible, driving prices ever higher.
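The dynamic in the last few paragraphs is captured by the classic quantity theory of money, MV = PQ (money supply times velocity equals price level times real output). A minimal sketch, with invented numbers:

```python
# Quantity theory of money: M * V = P * Q, so the price level P = M * V / Q.
# All numbers here are invented, purely for illustration.
def price_level(money_supply, velocity, real_output):
    return money_supply * velocity / real_output

baseline = price_level(money_supply=100, velocity=2.0, real_output=200)  # 1.0

# Double the money supply while output stays flat: prices double.
printing = price_level(money_supply=200, velocity=2.0, real_output=200)  # 2.0

# People start spending as fast as they can, raising velocity,
# and prices climb higher still.
panic = price_level(money_supply=200, velocity=3.0, real_output=200)     # 3.0
```

This is a simplification (real output and velocity interact in messier ways), but it shows why printing to cover a budget hole feeds on itself.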
As the currency becomes less and less valuable, it becomes harder and harder to pay for imports. We’ve already talked about how you can only buy external goods in your own currency to the extent that people outside your country have a use for your currency. No one has a use for a rapidly inflating currency. This is why Venezuela is facing shortages of food and medicine – commodities it formerly imported but now cannot afford.
The terminal state of inflation is hyperinflation, where people need to put their currency in wheelbarrows to do anything with it. Anyone who has read about Germany in the early 1920s knows that hyperinflation opens the door to demagogues and coups – to anything or anyone who can convince the people that the suffering can be stopped.
Taking into account all of this – the inflation, the banana plantations, the boom and bust cycles – it seems clear that it might be better if developing countries took on less debt. Why don’t they?
One possible explanation is the IMF (International Monetary Fund). The IMF often acts as a lender of last resort, giving countries bridging loans and negotiating new repayment terms when the prospect of default is raised. The measures that the IMF takes to help countries repay their debts have earned it many critics who rightly note that there can be a human cost to the budget cuts the IMF demands as a condition for aid. Unfortunately, this is not the only way the IMF might make sovereign defaults worse. It also seems likely that the IMF represents a significant moral hazard, one that encourages risky lending to countries that cannot sustain debt loads long-term.
A moral hazard is any situation in which someone takes risks knowing that they won’t have to pay the penalty if their bet goes sour. Within the context of international debt and the IMF, a moral hazard arises when lenders know that they will be able to count on an IMF bailout to help them recover their principal in the event of a default.
In a world without the IMF, it is very possible that borrowing costs would be higher for developing countries, which could serve as a deterrent to taking on debt.
(It’s also possible that countries with weak institutions and bad governance will always take on unsustainable levels of debt, absent some external force stopping them. It’s for this reason that I’d prefer some sort of qualified ban on loaning to developing countries that have debt above some small fraction of their GDP over any plan that relies on abolishing the IMF in the hopes of solving all problems related to developing country debt.)
Paired with a qualified ban on new debt, I think there are two good arguments for forgiving much of the debt currently held by many developing countries.
First and simplest are the humanitarian reasons. Freed of debt burdens, developing countries might be able to provide more services for their citizens, or invest in infrastructure so that they could grow more quickly. Debt forgiveness would have to be paired with institutional reform and increased transparency, so that newfound surpluses aren’t diverted into the pockets of kleptocrats, which means any forgiveness policy could have the added benefit of acting as a big stick to force much needed governance changes.
Second is the doctrine of odious debts. An odious debt is any debt incurred by a despotic leader for the purpose of enriching themself or their cronies, or repressing their citizens. Under the legal doctrine of odious debts, these debts should be treated as the personal debt of the despot and wiped out whenever there is a change in regime. The logic behind this doctrine is simple: by loaning to a despot and enabling their repression, the creditors committed a violent act against the people of the country. Those people should have no obligation (legal or moral) to pay back their aggressors.
The doctrine of odious debts wouldn’t apply to every indebted developing country, but serious arguments can be made that several countries (such as Venezuela) should expect at least some reduction in their debts should the local regime change and international legal scholars (and courts) recognize the odious debt principle.
Until international progress is made on a clear list of conditions under which countries cannot take on new debt and a comprehensive program of debt forgiveness, we’re going to see the same cycle repeat over and over again. Countries will take on debt when their commodities are expensive, locking them into an economy dependent on resource extraction. Then prices will fall, default will loom, and the IMF will protect investors. Countries are left gutted, lenders are left rich, taxpayers the world over hold the bag, and poverty and misery continue – until the cycle starts over once again.
A global economy without this cycle of boom, bust, and poverty might be one of our best chances of providing stable, sustainable growth to everyone in the world. I hope one day we get to see it.
 I so wanted to get through this post without any footnotes, but here we are.
There’s one other reason why e.g. Canada is a lower risk for devaluation than e.g. Venezuela: central bank independence. The Bank of Canada is staffed by expert economists and somewhat isolated from political interference. It is unclear just how much it would be willing to devalue the currency, even if that was the desire of the Government of Canada.
Monetary policy is one lever of power that almost no developed country is willing to trust directly to politicians, a safeguard that doesn’t exist in all developing countries. Without it, devaluation and inflation risk are much higher.
 It’s not that the government is directly selling the bananas for USD. It’s that the government collects taxes in the local currency and the local currency cannot be converted to USD unless the country has something that USD holders want. Exchange rates are determined based on how much people want to hold one currency vs. another. A decrease in the value of products produced by a country relative to other parts of the global economy means that people will be less interested in holding that country’s currency and its value will fall. This is what happened in 2015 to the Canadian dollar; oil prices fell (while other commodity prices held steady) and the value of the dollar dropped.
Countries that are heavily dependent on the export of only one or two commodities can see wild swings in their currencies as those underlying commodities change in value. The Russian ruble, for example, is very tightly linked to the price of oil; it lost half its value between 2014 and 2016, during the oil price slump. This is a much larger depreciation than the Canadian dollar (which also suffered, but was buoyed up by Canada’s greater economic diversity).
This section is drawn from the research of Dr. Carmen Reinhart and Dr. Kenneth Rogoff, as reported in This Time Is Different, Chapter 5: Cycles of Default on External Debt.
This is why peak oil theories ultimately fell apart. Proponents didn’t realize that consistently high oil prices would lead to the exploitation of unconventional hydrocarbons. The initial research and development of these new sources made sense only because of the sky-high oil prices of the day. In an efficient market, profits will always eventually return to 0. We don’t have a perfectly efficient market, but it’s efficient enough that commodity prices rarely stay too high for too long.
Access to foreign cash is gone because no one lends money to countries that just defaulted on their debts. Access to external credit does often come back the next time there’s a commodity bubble, but that could be a decade in the future.
I’m cynical enough to believe that there is enough graft in most of these cases that human costs could be largely averted, if only the leaders of the country were forced to see their graft dry up. I’m also pragmatic enough to believe that this will rarely happen. I do believe that one positive impact of the IMF getting involved is that its status as an international institution gives it more power with which to force transparency upon debtor nations and attempt to stop diversion of public money to well-connected insiders.
A quick search found two papers that claimed there was a moral hazard associated with the IMF and one article hosted by the IMF (and as far as I can tell, later at least somewhat repudiated by the author in the book cited in ) that claims there is no moral hazard. Draw what conclusions from this you will.
I’m not entirely sure what such a ban would look like, but I’m thinking some hard cap on amount loaned based on percent of GDP, with the percent able to rise in response to reforms that boost transparency, cut corruption, and establish modern safeguards on the central bank.
So, there’s this thing that happens in certain intellectual communities, like (to give a totally random example) social psychology. This thing is that novel takes are rewarded. New insights are rewarded. Figuring out things that no one has before is rewarded. The high-status people in such a community are the ones who come up with and disseminate many new insights.
On the face of it, this is good! New insights are how we get penicillin and flight and Pad Thai burritos. But there’s one itty bitty little problem with building a culture around it.
Good (and correct!) new ideas are a finite resource.
This isn’t news. Back in 2005, John Ioannidis laid out the case for “most published research findings” being false. It turns out that when any given idea has only a small chance of being correct, even statistical tests designed to screen out false positives can break down.
A quick example. There are approximately 25,000 genes in the human genome. Imagine you are searching for genes that increase the risk of schizophrenia (chosen for this example because it is a complex condition believed to be linked to many genes). If there are 100 genes involved in schizophrenia, the odds of any given gene chosen at random being involved are 1 in 250. You, the investigating scientist, decide that you want about an 80% chance of finding some genes that are linked (this is called statistical power, and 80% is a common value). You run a bunch of tests, analyze a bunch of DNA, and think you have a candidate. This gene has been “proven” to be associated with schizophrenia at a p=0.05 confidence level.
(A p-value is the probability of observing an event at least as extreme as the observed one, if the null hypothesis is true. This means that if the gene isn’t associated with schizophrenia, there is only a 1 in 20 chance – 5% – we’d see a result as extreme or more extreme than the one we observed.)
At the start, we had a 1 in 250 chance of finding a gene. Now that we have a gene, we think there’s a 19 in 20 chance that it’s actually partially responsible for schizophrenia (technically, if we looked at multiple candidates, we should do something slightly different here, but many scientists still don’t, making this still a valid example). Which probability do we trust?
There’s actually an equation to figure it out. It’s called Bayes Rule and statisticians and scientists use it to update probabilities in response to new information. It goes like this:
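The rule, written out (with A the hypothesis and B the observation):

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```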
(You can sing this to the tune of Hallelujah; take P of A when given B / times P of A a priori / divide the whole thing by B’s expectation / new evidence you may soon find / but you will not be in a bind / for you can add it to your calculation.)
In plain language, it means that the probability of something being true after an observation (P(A|B)) is equal to the probability of it being true absent any observations (P(A), 1 in 250 here), times the probability of the observation happening if it is true (P(B|A), 0.8 here), divided by the baseline probability of the observation (P(B), 1 in 20 here).
With these numbers from our example, we can see that the probability of a gene actually being associated with schizophrenia when it has a confidence level of 0.05 is… 6.4%.
I took this long detour to illustrate a very important point: one of the strongest determinants of how likely something is to actually be true is the base chance it has of being true. If we expected 1000 genes to be associated with schizophrenia, then the base chance would be 1 in 25, and the probability our gene actually plays a role would jump up to 64%.
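The arithmetic here is easy to check in a few lines of Python (using the approximation above that P(B) is just the 0.05 significance threshold):

```python
# Bayes Rule: P(A|B) = P(B|A) * P(A) / P(B).
# Here A = "gene is truly involved" and B = "test comes back positive".
def posterior(prior, power, p_threshold):
    return power * prior / p_threshold

# 100 true genes out of 25,000 candidates: prior = 1/250.
print(round(posterior(prior=1 / 250, power=0.8, p_threshold=0.05), 3))  # 0.064

# 1,000 true genes: prior = 1/25, and the posterior jumps.
print(round(posterior(prior=1 / 25, power=0.8, p_threshold=0.05), 2))   # 0.64
```

(Strictly speaking, P(B) should also count false positives among the unrelated genes: 0.8 × (1/250) + 0.05 × (249/250) ≈ 0.053, which nudges the first answer down to about 6%. The approximation is close enough for our purposes.)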
To have ten times the chance of getting a study right, you can be 10 times more selective (which probably requires much more than ten times the effort)… or you can investigate something ten times as likely to actually occur. Base rates can be more powerful than statistics, more powerful than arguments, and more powerful than common sense.
This suggests that any community that bases status around producing novel insights will mostly become a community based around producing novel-seeming (but false!) insights once it exhausts all of the available true (and easily attainable) insights it could discover. There isn’t a harsh dividing line, just a gradual trend towards plausible nonsense as the underlying vein of truth is mined out, but the studies and blog posts continue.
Except the reality is probably even worse, because any competition for status in such a community (tenure, page views) will become an iterative process that rewards those best able to come up with plausible sounding wrappers on unfortunately false information.
I know I have at least one friend who is rolling their eyes right now, because I always make fun of the reproducibility crisis in psychology.
But I’m just using that because it’s a convenient example. What I’m really worried about is the Effective Altruism community.
(Effective Altruism is a movement that attempts to maximize the good that charitable donations can do by encouraging donation to the charities that have the highest positive impact per dollar spent. One list of highly effective charities can be found on GiveWell; GiveWell has demonstrated a notable trend away from novelty, such that I believe this post does not apply to them.)
We are a group of people with countless forums and blogs, as well as several organizations devoted to analyzing the evidence around charity effectiveness. We have conventional organizations, like GiveWell, coexisting with less conventional alternatives, like Wild-Animal Suffering Research.
All of these organizations need to justify their existence somehow. All of these blogs need to get shares and upvotes from someone.
If you believe (like I do) that the number of good charity recommendations might be quite small, then it follows that a large intellectual ecosystem will quickly exhaust these possibilities and begin finding plausible sounding alternatives.
Effective Altruism is as much a philosophy movement as an empirical one. It isn’t always the case that we’ll be using P-values and statistics in our assessment. Sometimes, arguments are purely moral (like arguments about how much weight we should give to insect suffering). But both types of arguments can eventually drift into plausible sounding nonsense if we exhaust all of the real content.
There is no reason to expect that we should be able to tell when this happens. Certainly, experimental psychology wasn’t able to until several years after much-hyped studies more-or-less stopped replicating, despite a population that many people would have previously described as full of serious-minded empiricists. Many psychology researchers still won’t admit that much of the past work needs to be revisited and potentially binned.
This is a problem of incentives, but I don’t know how to make the incentives any better. As a blogger (albeit one who largely summarizes and connects ideas first broached by others), I can tell you that many of the people who blog do it because they can’t not write. There’s always going to be people competing to get their ideas heard and the people who most consistently provide satisfying insights will most often end up with more views.
Therefore, I suggest caution. We do not know how many true insights we should expect, so we cannot tell how likely to be true anything that feels insightful actually is. Against this, the best defense is highly developed scepticism. Always remember to ask for the implications of new insights and to determine what information would falsify them. Always assume new insights have a low chance of being true. Notice when there seems to be a pressure to produce novel insights long after the low hanging fruit is gone and be wary of anyone in that ecosystem.
We might not be able to change novelty culture, but we can do our best to guard against it.
[Special thanks to Cody Wild for coming up with most of the lyrics to Bayesian Hallelujah.]
It runs against commonly held intuitions that a group can be both over-represented in a profession, school, or program, and discriminated against. The simplest way to test for discrimination is to look at the general population, find the percentage that a group represents, then expect them to represent exactly that percentage in any endeavour, absent discrimination.
Harvard, for example, is 17.1% Asian-American (foreign students are broken out separately in the statistics I found, so we’re only talking about American citizens or permanent residents in this post). America as a whole is 4.8% Asian-American. Therefore, many people will conclude that there is no discrimination happening against Asian-Americans at Harvard.
This is what would happen under many disparate impact analyses of discrimination, where the first step to showing discrimination is showing one group being accepted (for housing, employment, education, etc.) at a lower rate than another.
I think this naïve view is deeply flawed. First, we have clear evidence that Harvard is discriminating against Asian-Americans. When Harvard assigned personality scores to applicants, Asian-Americans were given the lowest scores of any ethnic group. When actual people met with Asian-American applicants, their personality scores were the same as everyone else’s; Harvard had assigned many of the low ratings without ever meeting the students, in what many suspect is an attempt to keep Asian-Americans below 20% of the student body.
Personality ratings in college admissions have a long and ugly history. They were invented to enforce quotas on Jews in the 1920s. These discriminatory quotas had a chilling effect on Jewish students; Dr. Jonas Salk, the inventor of the polio vaccine, chose the schools he attended primarily because they were among the few which didn’t discriminate against Jews. Imagine how prevalent and all-encompassing the quotas had to be for him to be affected.
If these discriminatory personality scores were dropped (or Harvard stopped fabricating bad results for Asian-Americans), Asian-American admissions at Harvard would rise.
This is because the proper measure of how many Asian-Americans should get into Harvard has little to do with their percentage of the population. It has to do with how many would meet Harvard’s formal admission criteria. Since Asian-Americans have much higher test scores than any other demographic group in America, it only stands to reason that we should expect to see Asian-Americans over-represented among any segment of the population that is selected at least in part by their test scores.
Put simply, Asian-American test scores are so good (on average) that we should expect to see proportionately more Asian-Americans than any other group get into Harvard.
This is the comparison we should be making when looking for discrimination in Harvard’s admissions. We know their criteria and we know roughly what the applicants look like. Given this, what percentage of applicants should get in if the criteria were applied fairly? The answer turns out to be about four times as many Asian-Americans as are currently getting in.
Unfortunately, this only picks up one type of discrimination – the discrimination that occurs when stated standards are being applied in an unequal manner. There’s another type of discrimination that can occur when standards aren’t picked fairly at all; their purpose is to act as a barrier, not assess suitability. This does come up in formal disparate impact analyses – you have to prove that any standards that lead to disparate impact are necessary – but we’ve already seen how you can avoid triggering those if you pick your standard carefully and your goal isn’t to lock a group out entirely, but instead to reduce their numbers.
Analyzing the necessity of standards that may have disparate impact can be hard and lead to disagreement.
For example, we know that Harvard’s selection criteria must discriminate, which is to say they must differentiate. We want elite institutions to have selection criteria that differentiate between applicants! There is general agreement, for example, that someone who fails all of their senior year courses won’t get into Harvard and someone who aces them might.
If we didn’t have a slew of records from Harvard backing up the assertion that personality criteria were rigged to keep out Asian-Americans (like they once kept out Jews), evaluating whether discrimination was going on at Harvard would be harder. There’s no prima facie reason to consider personality scores (had they been adopted for a more neutral purpose and applied fairly) to be a bad selector.
It’s a bit old fashioned, but there’s nothing inherently wrong with claiming that you also want to select for moral character and leadership when choosing your student body. The case for this is perhaps clearer at Harvard, which views itself as a training ground for future leaders. Therefore, personality scores aren’t clearly useless criteria and we have to apply judgement when evaluating whether it’s reasonable for Harvard to select its students using them.
Historically, racism has used seemingly valid criteria to cloak itself in a veneer of acceptability. Redlining, the process by which African-Americans were denied mortgage financing, hid its discriminatory impact with clinical language about underwriting risk. In reality, redlining was based not on actual actuarial risk in a neighbourhood (poor whites were given loans, while middle-class African-Americans were denied them), but on the racial composition of the neighbourhood.
Like in the Harvard case, it was only the discovery of redlined maps that made it clear what was going on; the criterion was seemingly borderline enough that absent evidence, there was debate as to whether it existed for reasonable purpose or not.
(One thing that helped trigger further investigation was the realization that well-off members of the African-American community weren’t getting loans that a neutral underwriter might expect them to qualify for; their income and credit was good enough that we would have expected them to receive loans.)
It is also interesting to note that both of these cases hid behind racial stereotypes. Redlining was defended because of “decay” in urban neighbourhoods (a decay that was in many cases caused by redlining), while Harvard’s admissions relied upon negative stereotypes of Asian-Americans. Many were dismissed with the label “Standard Strong”, implying that they were part of a faceless collective, all of whom had similarly impeccable grades and similarly excellent extracurriculars, but no interesting distinguishing features of their own.
Realizing how hard it is to tell apart valid criteria from discriminatory ones has made me much more sympathetic to points raised by technocrat-skeptics like Dr. Cathy O’Neil, who I have previously been harsh on. When bad actors are hiding the proof of their discrimination, it is genuinely difficult to separate real insurance underwriting (which needs to happen for anyone to get a mortgage) from discriminatory practices, just like it can be genuinely hard to separate legitimate college application processes from discriminatory ones.
While numerical measures, like test scores, have their own problems, they do provide some measure of impartiality. Interested observers can compare metrics to outcomes and notice when they’re off. Beyond redlining and college admissions, I wonder what other instances of potential discrimination a few civic minded statisticians might be able to unearth.
When a poet writes about his experience of becoming a lawyer after his release from jail, you know it’s going to be a punch in the gut. One thing I noticed: he would have had a much easier time reintegrating into society, finding a job, etc. had he been tried as a juvenile, rather than an adult. Has there been any meaningful study on recidivism rates comparing these two groups? You could compare 17 year olds and 18 year olds charged with the same crime and look at outcomes fifteen years down the road.
Segway’s patents are now at the core of the new crop of ride-sharing scooters, which may finally bring about the original promise of the Segway. Perhaps one element of Segway’s downfall (beyond how uncool they were) is how proper they were about everything. They worked hard to get laws passed that made it legal to ride Segways on the sidewalk, rather than “innovating on the regulatory side” (read: ignoring the law) like the scooter companies do.
What would happen if you laid out all the contradictory information about rapid transit in Karachi in one place? “Something a bit post-modern and a bit absurd” seems to be the answer.
Dying scientist launches a desperate attempt to prove that his herpes vaccine works. In the movies, he’d be ultimately vindicated. In real life, several people are left with lingering side effects and all of the data he collected is tainted by poor methodology.
Political theorist Hannah Arendt once claimed that you must never say “who am I to judge”. A therapist who sees dramatic improvements after teaching clients to be more judgemental seems to agree.
Whenever I read about bullshit jobs, I feel like economic competition needs to be turned up to 11 so that companies have no slack with which to hire people to do pointless tasks. One thing that progressives might not appreciate: the investor class probably hates bullshit jobs even more than they do; from the perspective of a stockholder, a bullshit job is management stealing their money so that the managers can get off on feeling powerful.
A friend of mine recently linked to a story about stamp scrip currencies in a discussion about Initiative Q. Stamp scrip currencies are an interesting monetary technology. They’re bank notes that require weekly or monthly stamps in order to be valid. These stamps cost money (normally a few percent of the face value of the note), which imposes a cost on holding the currency. This is supposed to encourage spending and spur economic activity.
This isn’t just theory. It actually happened. In the Austrian town of Wörgl, a scrip currency was used to great effect for several months during the Great Depression, leading to a sudden increase in employment, money for necessary public works, and a general reversal of fortunes that had, until that point, been quite dismal. Several other towns copied the experiment and saw similar gains, until the central bank stepped in and put a stop to the whole thing.
In the version of the story I’ve read, this is held up as an example of local adaptability and creativity crushed by centralization. The moral, I think, is that we should trust local institutions instead of central banks and be on the lookout for similar local currency strategies we could adopt.
If this is all true, it seems like stamp scrip currency (or some modern version of it, perhaps applying the stamps digitally) might be a good idea. Is this the case?
My first, cheeky reaction, is “we already have this now; it’s called inflation.” My second reaction is actually the same as my first one, but has an accompanying blog post. Thus.
Currency arrangements feel natural and unchanging, which can mislead modern readers thinking about the currencies of the 1930s. We’re very used to floating fiat currencies that (in general) have a stable price level except for 1-3% inflation every year.
This wasn’t always the case! Historically, there was very little inflation. Currency was backed by gold at a stable ratio (there were 23.2 grains of gold in a US dollar from 1834 until 1934). For a long time, growth in global gold stocks roughly tracked total growth in economic activity, so there was no long-run inflation or deflation (short-run deflation did cause several recessions, until new gold finds bridged the gap in supply).
During the Great Depression, there was worldwide gold hoarding. Countries saw their currency stocks decline or fail to keep up with the growth rate required for full economic activity (having a gold backed currency meant that the central bank had to decrease currency stocks whenever its gold stocks fell). Existing money increased in value, which meant people hoarded that too. The result was economic ruin.
In this context, a scrip currency accomplished two things. First, it immediately provided more money. The scrip currency was backed by the national currency of Austria, but it was probably using a fractional reserve system – each backing schilling might have been used to issue several stamp scrip schillings. This meant that the town of Wörgl quickly had a lot more money circulating. Perhaps one of the best features of the scrip currency within the context of the Great Depression was that it was localized, which meant that its helpful effects didn’t diffuse.
(Of course, a central bank could have accomplished the same thing by printing vastly more money over a vastly larger area, but there was very little appetite for this among central banks during the Great Depression, much to everyone’s detriment. The localization of the scrip is only an advantage within the context of central banks failing to ensure adequate monetary growth; in a more normal environment, it would be a liability that prevented trade.)
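The arithmetic behind that fractional-reserve guess is simple enough to sketch. The numbers below are entirely hypothetical – nothing in the historical record I’ve cited gives Wörgl’s actual reserve ratio or account balance:

```python
# Toy sketch of fractional-reserve scrip issuance.
# All figures are made-up illustrations, not historical data.

def scrip_in_circulation(backing_schillings: float, reserve_ratio: float) -> float:
    """Maximum scrip issuable when each note is only fractionally backed.

    A reserve ratio of 1.0 means full backing (no expansion);
    lower ratios let the same reserves support more circulating scrip.
    """
    return backing_schillings / reserve_ratio

# If the town held 10,000 schillings in reserve and backed each scrip
# note at 25%, it could circulate 40,000 scrip schillings:
print(scrip_in_circulation(10_000, 0.25))  # 40000.0

# Full backing, by contrast, adds no new money at all:
print(scrip_in_circulation(10_000, 1.0))   # 10000.0
```

This is why footnote 3 below matters: if the scrip was fully backed, the town would have needed an implausibly large bank balance to explain the reported surge in activity.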
Second, the stamp scrip currency provided an incentive to spend money.
Here’s one model of job loss in recessions: people (for whatever reason; deflation is just one cause) want to spend less money (economists call this “a decrease in aggregate demand”). Businesses see the falling demand and need to take action to cut wages or else become unprofitable. Now people generally exhibit “downward nominal wage rigidity” – they don’t like pay cuts.
Furthermore, individuals don’t realize that demand is down as quickly as businesses do. They hold out for jobs at the same wage rate. This leads to unemployment.
Stamp scrip currencies increase aggregate demand by giving people an incentive to spend their money now.
Importantly, there’s nothing magic about the particular method you choose to do this. Central banks targeting 2% inflation year on year (and succeeding for once) should be just as effective as scrip currencies charging 2% of the face value every year. As long as you’re charged some sort of fee for holding onto money, you’re going to want to spend it.
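A toy calculation makes the equivalence concrete. Using an illustrative 2% rate over ten years (both numbers are mine, picked for the example), the cost of sitting on money comes out nearly identical either way:

```python
# Toy comparison: the cost of holding 100 units of money for 10 years
# under 2% annual inflation vs. a 2%-of-face-value annual stamp fee.
# Rates and horizon are illustrative, not drawn from any real currency.

def real_value_after_inflation(face_value: float, rate: float, years: int) -> float:
    # The note keeps its face value, but purchasing power erodes
    # geometrically as prices rise.
    return face_value / (1 + rate) ** years

def face_value_after_stamps(face_value: float, fee: float, years: int) -> float:
    # Each year's stamp removes a fixed fraction of the note's value.
    return face_value * (1 - fee) ** years

print(real_value_after_inflation(100, 0.02, 10))  # ≈ 82.03
print(face_value_after_stamps(100, 0.02, 10))     # ≈ 81.71
```

Either way you lose roughly 18% of your purchasing power by hoarding, which is exactly the nudge toward spending that both mechanisms are meant to provide.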
Central bank backed currencies are ultimately preferable when the central bank is getting things right, because they facilitate longer range commerce and trade, are administratively simpler (you don’t need to go buy stamps ever), and centralization allows for more sophisticated economic monitoring and price level targeting.
Still, in situations where the central bank fails, stamp scrip currencies can be a useful temporary stopgap.
That said, I think a general caution is needed when thinking about situations like this. There are few times in economic history as different from the present day as the Great Depression. The very fact that there was unemployment north of 20% and many empty factories makes it miles away from the economic situation right now. I would suspect that radical interventions that were useful during the Great Depression might be useless or actively harmful right now, simply due to this difference in circumstances.
 My opinion is that their marketing structure is kind of cringey (my Facebook feed currently reminds me of all of the “Paul Allen is giving away his money” chain emails from the 90s and I have only myself to blame) and their monetary policy has two aims that could end up in conflict. On the other hand, it’s fun to watch the numbers go up and idly speculate about what you could do if it was worth anything. I would cautiously recommend Q ahead of lottery tickets but not ahead of saving for retirement. ^
 See “The Midas Paradox” by Scott Sumner for a more in-depth breakdown. You can also get an introduction to monetary theories of the business cycle on his blog, or listen to him talk about the Great Depression on Vimeo. ^
 The size of the effect talked about in the article suggests that one of three things had to be true: 1) the scrip currency was fractionally backed, 2) Wörgl had a huge bank account balance a few years into the recession, or 3) the amount of economic activity in the article is overstated. ^
As long as inflation is happening as it should, there won’t be protracted unemployment, because a slight decline in economic activity is quickly counteracted by a slightly decreased value of money (from the inflation). Note the word “nominal” up there. People are subject to something called a “money illusion”. They think in terms of prices and salaries expressed in dollar values, not in purchasing power values.
There was only a very brief recession after the dot com crash because it did nothing to affect the money supply. Inflation happened as expected and everything quickly corrected to almost full employment. On the other hand, the Great Depression lasted as long as it did because most countries were reluctant to leave the gold standard and so saw very little inflation. ^
Here’s an interesting exercise. Look at this graph of US yearly inflation. Notice how inflation is noticeably higher in the years immediately preceding the Great Recession than it is in the years afterwards. Monetarist economists believe that the recession wouldn’t have lasted as long if there hadn’t been such a long period of relatively low inflation.
 You might wonder if there’s some benefit to both. The answer, unfortunately, is no. Doubling them up should be roughly equivalent to just having higher inflation. There seems to be a natural rate of inflation that does a good job balancing people’s expectations for pay raises (and adequately reduces real wages in a recession) with the convenience of having stable money. Pushing inflation beyond this point can lead to a temporary increase in employment, by making labour relatively cheaper compared to other inputs.
The increase in employment ends when people adjust their expectations for raises to the new inflation rate and begin demanding increased salaries. Labour is no longer artificially cheap in real terms, so companies lay off some of the extra workers. You end up back where you started, but with inflation higher than it needs to be.
It had sparked a brisk and mostly unproductive debate. If you want to see people talking past each other, snide comments, and applause lights, check out the thread. One of the few productive exchanges centres on bridges.
Bridges are clearly a product of science (and its offspring, engineering) – only the simplest bridges can be built without scientific knowledge. Bridges also clearly have a political dimension. Not only are bridges normally the product of politics, they also are embedded in a broader political fabric. They change how a space can be used and change geography. They make certain actions – like commuting – easier and can drive urban changes like suburb growth and gentrification. Maintenance of bridges uses resources (time, money, skilled labour) that cannot be then used elsewhere. These are all clearly political concerns and they all clearly intersect deeply with existing power dynamics.
Even if no other part of science was political (and I don’t think that position would be defensible; many other branches of science lead to things like bridges existing), bridges prove that science certainly can be political. I can’t deny this. I don’t want to deny this.
I also cannot deny that I’m deeply skeptical of the motives of anyone who trumpets a political view of science.
You see, science has unfortunate political implications for many movements. To give just one example, greenhouse gasses are causing global warming. Many conservative politicians have a vested interest in ignoring this or muddying the water, such that the scientific consensus “greenhouse gasses are increasing global temperatures” is conflated with the political position “we should burn less fossil fuel”. This allows a dismissal of the political position (“a carbon tax makes driving more expensive; it’s just a war on cars”) to also serve (via motivated cognition) as a dismissal of the scientific position.
(Would that carbon in the atmosphere could be dismissed so easily.)
While Dr. Wolfe is no climate change denier, it is hard to square her claim that calling science political is a neutral statement with tweets like this:
You are getting warmer. Fascinating how “science” is read as “empirical findings” and “political” as inherently bad.
When pointing out that science is political, we could also say things like “we chose to target polio for a major elimination effort before cancer, partially because it largely affected poor children instead of rich adults (as rich kids escaped polio in their summer homes)”. Talking about the ways that science has been a tool for protecting the most vulnerable paints a very different picture of what its political nature is about.
(I don’t think an argument over which view is more correct is ever likely to be particularly productive, but I do want to leave you with a few examples for my position.)
Dr. Wolfe is able to claim that politics is neutral, despite only offering negative examples of its effects, by relying on a bait and switch between two definitions of “politics”. The bait is a technical and neutral definition, something along the lines of: “related to how we arrange and govern our society”. The switch is a more common definition, like: “engaging in and related to partisan politics”.
I start to feel that someone is being at least a bit disingenuous when they only furnish negative examples, examples that relate to this second meaning of the word political, then ask why their critics view politics as “inherently bad” (referring here to the first definition).
This sort of bait and switch pops up enough in post-modernist “all knowledge is human and constructed by existing hierarchies” places that someone got annoyed enough to coin a name for it: the motte and bailey fallacy.
It’s named after the early-medieval form of castle, pictured above. The motte is the fortified mound, easy to defend; the bailey is the productive courtyard below it, desirable but hard to hold. This mirrors the two parts of the motte and bailey fallacy. The “motte” is the easily defensible statement (science is political because all human group activities are political) and the “bailey” is the more controversial belief actually held by the speaker (something like “we can’t trust science because of the number of men in it” or “we can’t trust science because it’s dominated by liberals”).
I have a lot of sympathy for the people in the twitter thread who jumped to defend positions that looked ridiculous from the perspective of “science is subject to the same forces as any other collective human endeavour” when they believed they were arguing with “science is a tool of right-wing interests”. There are a great many progressive scientists who might agree with Dr. Wolfe on many issues, but strongly disagree with what her position seems to be here. There are many of us who believe that science, if not necessary for a progressive mission, is necessary for the related humanistic mission of freeing humanity from drudgery, hunger, and disease.
It is true that we shouldn’t uncritically believe science. But the work of being a critical observer of science should not be about running an inquisition into scientists’ political beliefs. That’s how we get climate change deniers doxxing climate scientists. Critical observation of science is the much more boring work of checking theories for genuine scientific mistakes, looking for P-hacking, and double-checking that no one got so invested in their exciting results that they fudged their analyses to support them. Critical belief often hinges on weird mathematical identities, not political views.
When anyone says science is political and then goes on to emphasize all of the negatives of this statement, they’re giving people permission to believe their political views (like “gas should be cheap” or “vaccines are unnatural”) over the hard truths of science. And that has real consequences.
Saying that “science is political” is also political. And it’s one of those political things that is more likely than not to be driven by partisan politics. No one trumpets this unless they feel one of their political positions is endangered by empirical evidence. When talking with someone making this claim, it’s always good to keep sight of that.
Theranos was founded in 2003 by Stanford drop-out Elizabeth Holmes. It and its revolutionary blood tests eventually became a Silicon Valley darling, raising $700 million from investors that included Rupert Murdoch and the Walton family. It ultimately achieved a valuation of almost $10 billion on yearly revenues of $100 million. Elizabeth Holmes was hailed as Silicon Valley’s first self-made female billionaire.
In 2015, a series of articles by John Carreyrou published in the Wall Street Journal popped this bubble. Theranos was a fraud. Its blood tests didn’t work and were putting patient lives at risk. Its revenue was one thousand times smaller than reported. It had engaged in a long running campaign of intimidation against employees and whistleblowers. Its board had entirely failed to hold the executives to account – not surprising, since Elizabeth Holmes controlled over 99% of the voting power.
Bad Blood is the story of how this happened. John Carreyrou interviewed more than 140 sources, including 60 former employees to create the clearest possible picture of the company, from its founding to just before it dissolved.
It’s also the story of Carreyrou’s reporting on Theranos, from the first fateful tip he received after winning a Pulitzer for uncovering another medical fraud, to repeated legal threats from Theranos’s lawyers, to the slew of awards his coverage won when it eventually proved correct.
I thought it was one hell of a book and would recommend it to anyone who likes thrillers or anyone who might one day work at a start-up and wants a guide to what sort of company to avoid (pro tip: if your company is faking its demos to investors, leave).
Instead of rehashing the book like I sometimes do in my reviews, I want to discuss three key things I took from it.
Claims that Theranos is “emblematic” of Silicon Valley are overblown
Carreyrou vacillates on this point. He sometimes points out all the ways that Theranos is different from other VC backed companies and sometimes holds it up as a poster child for everything that is wrong with the Valley.
I’m much more in the first camp. For Theranos to be a poster child for the Valley, you’d want to see it raise money from the same sources as other venture-backed companies. This just wasn’t the case.
First of all, Theranos had basically no backing from dedicated biotechnology venture capitalists (VCs). This makes a lot of sense. The big biotech VCs do intense due-diligence. If you can’t explain exactly how your product works to a room full of intensely skeptical PhDs, you’re out of luck. Elizabeth Holmes quickly found herself out of luck.
Next is the list of VCs who did invest. Missing are the big names from the Valley. There’s no Softbank, no Peter Thiel, no Andreessen Horowitz. While these investors may have less ability to judge biotech start-ups than the life sciences focused firms, they are experienced in due diligence and they knew red flags (like Holmes’s refusal to explain how her tech worked, even under NDA) when they saw them. I work at a venture backed company and I can tell you that experienced investors won’t even look at you if you aren’t willing to have a frank discussion about your technology with them.
The people who did invest? Largely dabblers, like Rupert Murdoch and the Walton family, drawn in by a board studded with political luminaries (two former secretaries of state, James friggen’ Mattis, etc.). It perhaps should have been a red flag that Henry Kissinger (who knows nothing about blood testing and would be better placed on Facebook’s board, where his expertise in committing war crimes would come in handy) was on the board, but to the well-connected elites from outside the Valley, it signalled exactly the opposite.
It is hard to deal with people who just lie
I don’t want to blame these dabblers from outside the Valley too much though, because they were lied to like crazy. As America found out in 2016, many institutions struggle when dealing with people who just make shit up.
There is an accepted level of exaggeration that happens when chasing VC money. You put your best foot forward, shove the skeletons deep into your closet, and you try and be the most charming and likable version of you. One founder once described trying to get money from VCs as “basically like dating” to me and she wasn’t wrong.
Much like dating, you don’t want to exaggerate too far. After all, if the suit is fruitful, you’re kind of stuck with each other. The last thing you want to find out after the fact is that your new partner collects their toenail clippings in a jar or overstates their yearly revenue by more than 1000x.
VCs went into Theranos with the understanding that they were probably seeing rosy forecasts. What they didn’t expect was that the forecasts they saw were 5x the internal forecasts, or that the internal forecasts were made by people who had no idea what the current revenue was. This just doesn’t happen at a normal company. I’m used to internal revenue projections being the exact same as the ones shown to investors. And while I’m sure no one would bat an eye if you went back and re-did the projections with slightly more optimistic assumptions, you can’t get to a 5x increase in revenue just by doing that. Furthermore, the whole exercise of doing projections is moot if you are already lying about your current revenue by 1000x.
There is a good reason that VCs expect companies not to do this. I’m no lawyer, but I’m pretty sure that this is all sorts of fraud. The SEC and US attorney’s office seem to agree. It’s easy to call investors naïve for buying into Theranos’s lies. But I would contend that Holmes and Balwani (her boyfriend and Theranos’s erstwhile president) were the naïve ones if they thought they could get away with it without fines and jail time.
(Carreyrou makes a production about how “over-promise, then buy time to fix it later” is business as usual for the Valley. This is certainly true if you’re talking about, say, customers of a free service. But it is not and never has been accepted practice to do this to your investors. You save the rosy projections for the future! You don’t lie about what is going on right now.)
The existence of a crime called “fraud” is really useful for our markets. When lies of the sort that Theranos made are criminalized, business transactions become easier. You expect that people who are scammers will go do their scams somewhere where lies aren’t so criminalized and they mostly do, because investors are very prone to sue or run to the SEC when lied to. Since this mostly works, it’s understandable that a sense of complacency might set in. When everyone habitually tells more or less the truth, everyone forgets to check for lies.
The biotech companies didn’t invest in Theranos because their sweep for general incompetence made it clear that something fishy was going on. The rest of the VCs were less lucky, but I would argue that when the books are as cooked as Theranos’s were, a lack of understanding of biology was not the primary problem with these investors. The primary problem was that they thought they were buying a company that was making $100 million a year when in fact it was making $100,000.
Most VCs (and probably most of the dabblers, who after all made their money in business of some sort) may not understand the nuances of biotech, but they do understand that revenue that low more than a decade into operation represents a serious problem. Conversely, revenues of $100 million are pretty darn good for a decade-old medical device company. With that lie out of the way, the future growth projections looked reasonable; they were just continuing a trend. Had any investors been told the truth, they could have used their long experience as business people or VCs to realize that Theranos was a bad deal. Holmes’s lies prevented that.
I sure wish there was a way to make lies less powerful in areas where people mostly stick near the truth (and that we’d found one before 2016), but absent that, I want to give Theranos’s investors a bit of a break.
Theranos was hardest on ethical people
Did you know that Theranos didn’t have a chief financial officer for most of its existence? Their first CFO confronted Holmes about her blatant lies to investors (she was entirely faking the blood tests that they “took”) and she fired him, then used compromising material on his computer to blackmail him into silence. He was one of the lucky ones.
Bad Blood is replete with stories of idealistic young people who joined Theranos because it seemed to be one of the few start-ups that was actually making a positive difference in normal people’s lives. These people would then collide with Theranos’s horrible management culture and begin to get disillusioned. Seeing the fraud that took place all around them would complete the process. Once cynicism set in, employees would often forward some emails to themselves, so they’d have proof that they only participated in the fraud unknowingly, and immediately hand in their notice.
If they emailed themselves, they’d get a visit from a lawyer. The lawyer would tell them that forwarding emails to themselves was stealing Theranos’s trade secrets (everything was a trade secret with Theranos, especially the fact that they were lying about practically everything). The lawyer would present the employee with an option: delete the emails and sign a new NDA that included a non-disparagement clause that prevented them from criticising Theranos, or be sued by the fiercely talented and amoral lawyer David Boies (who was paid in Theranos stock and had a material interest in keeping the company afloat) until they were bankrupted by the legal fees.
Most people signed the paper.
If employees left without proof, they’d either be painted as deranged and angry about being fired, or they’d be silenced with the threat of lawsuits.
Theranos was a fly trap of a company. Its bait was a chance to work on something meaningful. But then it was set up to be maximally offensive and demoralizing for the very people who would jump at that opportunity. Kept from speaking out, guilt at helping perpetuate the fraud could eat them alive.
One employee, Ian Gibbons, committed suicide when caught between Theranos’s impossible demands for loyalty and an upcoming deposition in a lawsuit against the company.
To me, this makes Theranos much worse than seemingly similar corporate frauds like Enron. Enron didn’t attract bright-eyed idealists, crush them between an impossible situation and their morals, then throw them away to start the process over again. Enron was a few directors enriching themselves at the expense of their investors. It was wrong, but it wasn’t monstrous.
Theranos was monstrous.
Elizabeth Holmes never really made any money from her fraud. She was paid a modest (by Valley standards) salary of $200,000 per year – about what a senior engineer could expect to make. It’s probably about what she could have earned a few years after finishing her Stanford degree, if she hadn’t dropped out. Her compensation was mostly in stock and when the SEC forced her to give up most of it and the company went bankrupt, its value plummeted from $4.5 billion to $0. She never cashed out. She believed in Theranos until the bitter end.
If she’d been in it for the money, I could have understood it, almost. I can see how people would do – and have done – horrible things to get their hands on $4.5 billion. But instead of being motivated by money, she was motivated by some vision. Perhaps of saving the world, perhaps of being admired. In either case, she was willing to grind up and use up anyone and everyone around her in pursuit of that vision. Lying was on the table. Ruining people’s lives was on the table. Callously dismissing a suicide that was probably caused by her actions was on the table. As far as anyone knows, she has never shown remorse for any of these. Never viewed her actions as anything but moral and upright.
And someone who can do that scares me. People who are in it for the money don’t go to bed thinking they’re squeaky clean. They know they’ve made a deal with the devil. Elizabeth Holmes doesn’t know and doesn’t understand.
I think it’s probably for the best that no one will trust Elizabeth Holmes with a fish and chips stand, let alone a billion-dollar company, ever again. Because I tremble to think of what she could do if given another chance to “change the world”.
Or: the simplest ways of killing people tend to be the most effective.
A raft of articles came out during Defcon showing that security vulnerabilities exist in some pacemakers, vulnerabilities which could allow attackers to load a pacemaker with arbitrary code. This is obviously worrying if you have a pacemaker implanted. It is equally self-evident that it is better to live in a world where pacemakers cannot be hacked. But how much worse is it to live in this unfortunately hackable world? Are pacemaker hackings likely to become the latest crime spree?
Electrical grid hackings provide a sobering example. Despite years of warning that the American electrical grid is vulnerable to cyber-attacks, the greatest threat to America’s electricity infrastructure remains… squirrels.
Hacking, whether it’s of the electricity grid or of pacemakers, gets all the headlines. Meanwhile, fatty foods and squirrels do all the real damage.
For all the media attention that novel cyberpunk methods of murder get, they seem to be rather ineffective for actual murder, as demonstrated by the paucity of murder victims. I think this is rather generalizable. Simple ways of killing people are very effective but not very scary and so don’t garner much attention. On the other hand, particularly novel or baroque methods of murder cause a lot of terror, even if almost no one who is scared of them will ever die of them.
I often demonstrate this point by comparing two terrorist organizations: Al Qaeda and Daesh (the so-called Islamic State). Both of these groups are brutally inhumane, think nothing of murder, and are made up of some of the most despicable people in the world. But their methodology couldn’t be more different.
Al Qaeda has a taste for large, complicated, baroque plans that, when they actually work, cause massive damage and change how people see the world for years. 9/11 remains the single deadliest terror attack in recorded history. This is what optimizing for terror looks like.
On the other hand, when Al Qaeda’s plans fail, they seem almost farcical. There’s something grimly amusing about the time that Al Qaeda may have tried to weaponize the bubonic plague and instead lost over 40 members when they were infected and promptly died (the alternative theory, that they caught the plague because of squalid living conditions, looks only slightly better).
(Had Al Qaeda succeeded and killed even a single westerner with the plague, people would have been utterly terrified for months, even though the plague is relatively treatable by modern means and would have trouble spreading in notably flea-free western countries.)
Daesh, on the other hand, prefers simple attacks. When guns are available, their followers use them. When they aren’t, they’ll rent vans and plough them into crowds. Most of Daesh’s violence occurs in Syria and Iraq, where they once controlled territory with unparalleled brutality. This is another difference in strategy (as Al Qaeda is outward facing, focused mostly on attacking “The West”). Focusing on Syria and Iraq, where the government lacks a monopoly on violence and they could originally operate with impunity, Daesh racked up a body count that surpassed Al Qaeda’s.
While Daesh has been effective in terms of body count, they haven’t really succeeded (in the west) in creating the lasting terror that Al Qaeda did. This is perhaps a symptom of their quotidian methods of murder. No one walked around scared of a Daesh attack and many of their murders were lost in the daily churn of the news cycle – especially the ones that happened in Syria and Iraq.
I almost wonder if it is impossible for attacks or murders by “normal” means to cause much terror beyond those immediately affected. Could hacked pacemakers remain terrifying if as many people died of them as gunshots? Does familiarity with a form of death remove terror, or are some methods of death inherently more terrible and terrifying than others?
(It is probably the case that both are true, that terror is some function of surprise, gruesomeness, and brutality, such that some things will always terrify us, while others are horrible, but have long since lost their edge.)
Terror for its own sake (or because people believe it is the best path to some objective) must be a compelling option to some, because otherwise everyone would stick to simple plans whenever they think violence will help them achieve their aims. I don’t want to stereotype too much, but most people who go around being terrorists or murderers typically aren’t the brightest bulbs in the socket. The average killer doesn’t have the resources to hack your pacemaker and the average terrorist is going to have much better luck with a van than with a bomb. There are disadvantages to bombs! The average Pashtun farmer or disaffected mujahedeen is not a very good chemist, and homemade explosives are dangerous even to skilled chemists. Accidental detonations abound. If there weren’t some advantage in terror to be had, no one would mess around with explosives when guns and vans can be easily found.
(Perhaps this advantage is in a multiplier effect of sorts. If you are trying to win a violent struggle directly, you have to kill everyone who stands in your way. Some people might believe that terror can short-circuit this and let them scare away some of their potential opponents. Historically, this hasn’t always worked.)
In the face of actors committed to terror, we should remember that our risk of dying by a particular method is almost inversely related to how terrifying we find it. Notable intimidators like Vladimir Putin or the Mossad kill people with nerve gases, polonium, and motorcycle-delivered magnetic bombs to sow fear. I can see either of them one day adding hacked pacemakers to their arsenal.
If you’ve pissed off the Mossad or Putin and would like to die in some way other than a hacked pacemaker, then by all means, go get a different one. Otherwise, you’re probably fine waiting for a software update. If, in the meantime, you don’t want to die, maybe try ignoring headlines and instead not owning a gun and skipping French fries. Statistically, there isn’t much that will keep you safer.
Our biases make it hard for us to treat things that are easy to remember as uncommon, which no doubt plays a role here. I wrote this post like this – full of rambles, parentheses, and long-winded examples – to try and convey a difficult intuition: that we should treat any method of murder that seems shocking but hard to pull off as unlikely to affect us. Remember that most crimes are crimes of opportunity and most criminals are incompetent, and you’ll never be surprised to hear that the three most common murder weapons are guns, knives, and fists.
[Epistemic Status: I am not an economist. I am fairly confident in my qualitative assessment, but there could be things I’ve overlooked.]
Vox has an interesting article on Elizabeth Warren’s newest economic reform proposal. Briefly, she wants to force corporations with more than $1 billion in revenue to apply for a charter of corporate citizenship.
This charter would make three far-reaching changes to how large companies do business. First, it would require businesses to consider customers, employees, and the community – instead of only their shareholders – when making decisions. Second, it would require that 40% of the seats on the board go to workers. Third, it would require 75% of shareholders and board members to authorize any corporate political activity.
Vox characterizes this as Warren’s plan to “save capitalism”. The idea is that it would force companies to do more to look out for their workers and less to cater to short term profit maximization for Wall Street. Vox suggests that it would also result in a loss of about 25% of the value of the American stock market, which they characterize as no problem for the “vast majority” of people who rely on work, rather than the stock market, for income (more on that later).
Other supposed benefits of this plan include greater corporate respect for the environment, more innovation, less corporate political meddling, and a greater say for workers in their jobs. The whole 25% decrease in the value of the stock market can also be spun as a good thing, depending on your opinions on wealth destruction and wealth inequality.
I think Vox was too uncritical in its praise of Warren’s new plan. There are some good aspects of it – it’s not a uniformly terrible piece of legislation – but I think once a full accounting of the good, the bad, and the ugly is undertaken, it becomes obvious that it’s really good that this plan will never pass congress.
I can see one way this plan might affect normal workers – decreased purchasing power.
As I’ve previously explained when talking about trade, many countries will sell goods to America without expecting any goods in return. Instead, they take the American dollars they get from the sale and invest them right back in America. Colloquially, we call this the “trade deficit”, but it really isn’t a deficit at all. It’s (for many people) a really sweet deal.
Anything that makes American finance more profitable (like say a corporate tax cut) is liable to increase this effect, with the long-run consequence of making the US dollar more valuable and imports cheaper.
It’s these cheap imports that have enabled the incredibly wealthy North American lifestyle. Spend some time visiting middle class and wealthy people in Europe and you’ll quickly realize that everything is smaller and cheaper there. Wealthy Europeans own cars, houses, kitchen appliances and TVs that are all much more modest than what even middle class North Americans are used to.
Weakening shareholder rights and slashing the value of the stock market would make the American financial market generally less attractive. This would (especially if combined with Trump or Sanders style tariffs) lead to increased domestic inflation in the United States – inflation that would specifically target goods that have been getting cheaper as long as anyone can remember.
It’s hard to present this to Warren supporters as a downside, because many of them believe that we need to learn to make do with less – a position that is most common among a progressive class that conspicuously consumes experiences, not material goods. Suffice it to say that many North Americans still derive pleasure and self-worth from the consumer goods they acquire, and that making these goods more expensive is likely to cause a politically expensive backlash – the sort that America has recently become acquainted with, and that progressive America is terrified of.
(There’s of course also the fact that making appliances and cars more expensive would be devastating to anyone experiencing poverty in America.)
Inflation, when used for purposes like this one, is considered an implicit tax by economists. It’s a way for the government to take money from people without the accountability (read: losing re-election) that often comes with tax hikes. Therefore, it is disingenuous to claim that this plan is free, or involves no new taxes. The taxes are hidden, is all.
There are two other problems I see straight away with this plan.
The first is that it will probably have no real impact on how corporations contribute to the political process.
The Vox article echoes a common progressive complaint: that corporate contributions to politics are based on CEO class solidarity, made solely for the benefit of the moneyed elites. I think this model is inaccurate. In practice, companies mostly spend on politics to win narrow advantages for themselves, not for their class.
From a shareholder value model, this makes sense. Lower corporate tax rates might benefit a company, but they really benefit all companies equally. They aren’t going to do much to increase the value of any one stock relative to any other (so CEOs can’t make claims of “beating the market”). Anti-competitive laws, implicit subsidies, or even blatant government aid, on the other hand, are highly localized to specific companies (and so make the CEO look good when profits increase).
When subsidies are impossible, companies can still try and stymie legislation that would hurt their business.
This was the goal of the infamous Lawyers In Cages ad. It was run by an alliance of fast food chains and meat producers, with the goal of drying up donations to the SPCA, which had been running very successful advocacy campaigns that threatened to lead to improved animal cruelty laws, laws that would probably be used against the incredibly inhumane practice of factory farming and thereby hurt industry profits.
Here’s the thing: if you’re one of the worker representatives on the board at one of these companies, you’re probably going to approve political spending that is all about protecting the company.
The market can be a rough place and when companies get squeezed, workers do suffer. If the CEO tells you that doing some political spending will land you allies in congress who will pass laws that will protect your job and increase your paycheck, are you really going to be against it?
The ugly fact is that when it comes to rent-seeking and regulation, the goals of employees are often aligned with the goals of employers. This obviously isn’t true when the laws are about the employees (think minimum wage), but I think this isn’t what companies are breaking the bank lobbying for.
The second problem is that having managers with divided goals tends to go poorly for everyone who isn’t the managers.
Being upper management in a company is a position that provides great temptations. You have access to lots of money and you don’t have that many people looking over your shoulder. A relentless focus on profit does have some negative consequences, but it also keeps your managers on task. Profit represents an easy way to hold a yardstick to management performance. When profit is low, you can infer that your managers are either incompetent, or corrupt. Then you can fire them and get better ones.
Writing in Filthy Lucre, leftist academic Joseph Heath explains how the sort of socially-conscious enterprise Warren envisions has failed before:
The problem with organizations that are owned by multiple interest groups (or “principals”) is that they are often less effective at imposing discipline upon managers, and so suffer from higher agency costs. In particular, managers perform best when given a single task, along with a single criterion for the measurement of success. Anything more complicated makes accountability extremely difficult. A manager told to achieve several conflicting objectives can easily explain away the failure to meet one as a consequence of having pursued some other. This makes it impossible for the principals to lay down any unambiguous performance criteria for the evaluation of management, which in turn leads to very serious agency problems.
In the decades immediately following the Second World War, many firms in Western Europe were either nationalized or created under state ownership, not because of natural monopoly or market failure in the private sector, but out of a desire on the part of governments to have these enterprises serve the broader public interest… The reason that the state was involved in these sectors followed primarily from the thought that, while privately owned firms pursued strictly private interests, public ownership would be able to ensure that these enterprises served the public interest. Thus managers in these firms were instructed not just to provide a reasonable return on the capital invested, but to pursue other, “social” objectives, such as maintaining employment or promoting regional development.
But something strange happened on the road to democratic socialism. Not only did many of these corporations fail to promote the public interest in any meaningful way, many of them did a worse job than regulated firms in the private sector. In France, state oil companies freely speculated against the national currency, refused to suspend deliveries to foreign customers in times of shortage, and engaged in predatory pricing. In the United States, state-owned firms have been among the most vociferous opponents of enhanced pollution controls, and state-owned nuclear reactors are among the least safe. Of course, these are rather dramatic examples. The more common problem was simply that these companies lost staggering amounts of money. The losses were enough, in several cases, to push states like France to the brink of insolvency, and to prompt currency devaluations. The reason that so much money was lost has a lot to do with a lack of accountability.
Heath goes on to explain that basically all governments were forced to abandon these extra goals long before the privatizations of the ’80s. Centre-left or centre-right, no government could tolerate the shit-show that companies with competing goals became.
This is the kind of thing Warren’s plan would bring back. We’d once again be facing managers with split priorities who would plow money into vanity projects, office politics, and their own compensation while using the difficulty of meeting all of the goals in Warren’s charter as a reason to escape shareholder lawsuits. It’s possible that this cover for incompetence could, in the long run, damage stock prices much more than any other change presented in the plan.
The shift in comparative advantage that this plan would precipitate within the American economy won’t come without benefits. Just as Trump’s corporate tax cut makes American finance relatively more appealing and will likely lead to increased manufacturing job losses, a reduction in deeply discounted goods from China will likely lead to job losses in finance and job gains in manufacturing.
This would necessarily have some effect on income inequality in the United States, entirely separate from the large effect on wealth inequality that any reduction in the stock market would spur. You see, finance jobs tend to be very highly paid and go to people with relatively high levels of education (the sorts of people who probably could go do something else if their sector sees problems). Manufacturing jobs, on the other hand, pay decently well and tend to go to people with much less education (and also with correspondingly fewer options).
This all shakes out to an increase in middle class wages and a decrease in the wages of the already rich.
(Isn’t it amusing that Warren is the only US politician with a credible plan to bring back manufacturing jobs, but doesn’t know to advertise it as such?)
As I mentioned above, we would also see fewer attacks on labour laws and organized labour spearheaded by companies. I’ll include this as a positive, although I wonder if these attacks would really stop if deprived of corporate money. I suspect that the owners of corporations would keep them up themselves.
I must also point out that Warren’s plan would certainly be helpful when it comes to environmental protection. Having environmental protection responsibilities laid out as just as important as fiduciary duty would probably make it easy for private citizens and pressure groups to take enforcement of environmental rules into their own hands via the courts, even when their state EPA is slow out of the gate. This would be a real boon to environmental groups in conservative states and probably bring some amount of uniformity to environmental protection efforts.
Looking at the expected yields on these funds makes it pretty clear that they’re invested in the stock market (or something similarly risky). You don’t get 7.5% yearly yields from buying Treasury Bills.
Assuming the 25% decrease in nominal value given in the article is true (I suspect the change in real value would be higher), Warren’s plan would create a pension shortfall of $750 billion – or about 18% of the current US Federal Budget. And that’s just the hit to the 30 largest public-sector pensions. Throw in private sector pensions and smaller pensions and it isn’t an exaggeration to say that this plan could cost pensions more than a trillion dollars.
This shortfall needs to be made up somehow – either delayed retirement, taxpayer bailouts, or cuts to benefits. Any of these will be expensive, unpopular, and easy to track back to Warren’s proposal.
Furthermore, these plans are already in trouble. I calculated the average funding ratio at 78%, meaning that there’s already 22% less money in these pensions than there needs to be to pay out benefits. A 25% haircut would bring the pensions down to about 60% funded. We aren’t talking a small or unnoticeable potential cut to benefits here. Warren’s plan requires ordinary people relying on their pensions to suffer, or it requires a large taxpayer outlay (which, you might remember, it is supposed to avoid).
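The arithmetic here is simple enough to check yourself. A quick back-of-the-envelope sketch in Python – note that the $3 trillion combined asset base is an assumption I’ve inferred from the $750 billion / 25% figures above, not an official statistic:

```python
# Back-of-the-envelope pension arithmetic from the paragraphs above.
# The $3 trillion asset base is an inferred assumption (25% of it matches
# the $750 billion shortfall mentioned earlier), not an official figure.

assets = 3.0e12          # combined assets of the 30 largest public pensions (assumed)
haircut = 0.25           # Vox's estimated decline in stock market value
funding_ratio = 0.78     # average funding ratio (my calculation)

shortfall = assets * haircut
# Crudely assumes the whole portfolio falls with the market.
new_funding_ratio = funding_ratio * (1 - haircut)

print(f"Shortfall: ${shortfall / 1e9:.0f} billion")            # $750 billion
print(f"Post-haircut funding ratio: {new_funding_ratio:.1%}")  # 58.5%, i.e. "about 60%"
```

The funding ratio takes the full 25% hit in this sketch because pensions chasing 7.5% yields are, as noted above, mostly in stocks or similarly risky assets; a pension with a large bond allocation would fare somewhat better.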
This isn’t even getting into the world of municipal pensions, which are appallingly managed and chronically underfunded. If there’s a massive unfunded liability in state pensions caused by federal action, you can bet that the Feds will leave it to the states to sort it out.
And if the states sort it out rather than ignoring it, you can bet that one of the first things they’ll do is cut transfers to municipalities to compensate.
This seems to be how budget cuts always go. It’s unpopular to cut any specific program, so instead you cut your transfers to other layers of governments. You get lauded for balancing the books and they get to decide what to cut. The federal government does this to states, states do it to cities, and cities… cities are on their own.
In a worst-case scenario, Warren’s plan could create unfunded pension liabilities that states feel compelled to plug, paid for by shafting the cities. Cities will then face a double whammy: their own pension liabilities will put them in a deep hole. A drastic reduction in state funding will bury them. City pensions will be wiped out and many cities will go bankrupt. Essential services, like fire-fighting, may be impossible to provide. It would be a disaster.
The best-case scenario, of course, is just that a bunch of retirees see a huge chunk of their income disappear.
It is easy to hate on shareholder protection when you think it only benefits the rich. But that just isn’t the case. It also benefits anyone with a pension. Your pension, possibly underfunded and a bit terrified of that fact, is one of the actors pushing CEOs to make as much money as possible. It has to if you’re to retire someday.
Vox is ultimately wrong about how affected ordinary people are when the stock market declines and because of this, their enthusiasm for this plan is deeply misplaced.
To some extent, Warren’s plan starts out much less appealing if you (like me) don’t have “Wall Street is too focused on the short term” as a foundational assumption.
I am very skeptical of claims that Wall Street is too short-term focused. Matt Levine gives an excellent run-down of why you should be skeptical as well. The very brief version is that complaints about short-termism normally come from CEOs and it’s maybe a bad idea to agree with them when they claim that everything will be fine if we monitor them less. ^
 I’d love to show this in chart form, but in real life the American dollar is also influenced by things like nuclear war worries and trade war realities. Any increase in the value of the USD caused by the GOP tax cut has been drowned out by these other factors. ^
 Canada benefits from a similar effect, because we also have a very good financial system with strong property rights and low corporate taxes. ^
 They also tend to leave international flights out of lists of things that we need to stop if we’re going to handle climate change, but that’s a rant for another day. ^
I largely think that Marxist style class solidarity is a pleasant fiction. To take just one example, someone working a minimum wage grocery store job is just as much a member of the “working class” as a dairy farmer. But when it comes to supply management, a policy that restricts competition and artificially increases the prices of eggs and dairy, these two individuals have vastly different interests. Many issues are about distribution of resources, prestige, or respect within a class and these issues make reasoning that assumes class solidarity likely to fail. ^
 These goals could, of course, be accomplished with tax policy, but this is America we’re talking about. You can never get the effect you want in America simply by legislating for it. Instead you need to set up a Rube Goldberg machine and pray for the best. ^
Any decline in stocks should cause a similar decline in return on bonds over the long term, because bond yields fall when stocks fall. There’s a set amount of money out there being invested. When one investment becomes unavailable or less attractive, similar investments are substituted. If the first investment is big enough, this creates an excess of demand, which allows the seller to get better terms. ^