Model, Politics, Science

Science Is Less Political Than Its Critics

A while back, I was linked to this Tweet:

It had sparked a brisk and mostly unproductive debate. If you want to see people talking past each other, snide comments, and applause lights, check out the thread. One of the few productive exchanges centres on bridges.

Bridges are clearly a product of science (and its offspring, engineering) – only the simplest bridges can be built without scientific knowledge. Bridges also clearly have a political dimension. Not only are bridges normally the product of politics, they are also embedded in a broader political fabric. They change how a space can be used and reshape geography. They make certain actions – like commuting – easier and can drive urban changes like suburb growth and gentrification. Maintenance of bridges uses resources (time, money, skilled labour) that cannot then be used elsewhere. These are all clearly political concerns and they all clearly intersect deeply with existing power dynamics.

Even if no other part of science were political (and I don’t think that position is defensible; there are many other branches of science that lead to things like bridges existing), bridges prove that science certainly can be political. I can’t deny this. I don’t want to deny this.

I also cannot deny that I’m deeply skeptical of the motives of anyone who trumpets a political view of science.

You see, science has unfortunate political implications for many movements. To give just one example, greenhouse gasses are causing global warming. Many conservative politicians have a vested interest in ignoring this or muddying the water, such that the scientific consensus “greenhouse gasses are increasing global temperatures” is conflated with the political position “we should burn less fossil fuel”. This allows a dismissal of the political position (“a carbon tax makes driving more expensive; it’s just a war on cars”) to serve also (via motivated cognition) as a dismissal of the scientific position.

(Would that carbon in the atmosphere could be dismissed so easily.)

While Dr. Wolfe is no climate change denier, it is hard to square her claim that calling science political is a neutral statement:

With the examples she chooses to demonstrate this:

When pointing out that science is political, we could also say things like “we chose to target polio for a major elimination effort before cancer, partially because it largely affected poor children instead of rich adults (as rich kids escaped polio in their summer homes)”. Talking about the ways that science has been a tool for protecting the most vulnerable paints a very different picture of what its political nature is about.

(I don’t think an argument over which view is more correct is ever likely to be particularly productive, but I do want to leave you with a few examples for my position.)

Dr. Wolfe is able to claim that politics is neutral, despite using only negative examples of its effects, by employing a bait and switch between two definitions of “politics”. The bait is a technical and neutral definition, something along the lines of: “related to how we arrange and govern our society”. The switch is a more common definition, like: “engaging in and related to partisan politics”.

I start to feel that someone is being at least a bit disingenuous when they furnish only negative examples – examples that relate to this second meaning of the word political – and then ask why their critics view politics as “inherently bad” (referring here to the first definition).

This sort of bait and switch pops up enough in post-modernist “all knowledge is human and constructed by existing hierarchies” places that someone got annoyed enough to coin a name for it: the motte and bailey fallacy.

Image Credit: Hchc2009, Wikimedia Commons.


It’s named after the early-medieval form of castle, pictured above. The motte is the fortified mound topped by a keep and the bailey is the courtyard below it – pleasant to live in, but hard to defend. This mirrors the two parts of the motte and bailey fallacy. The “motte” is the easily defensible statement (science is political because all human group activities are political) and the bailey is the more controversial belief actually held by the speaker (something like “we can’t trust science because of the number of men in it” or “we can’t trust science because it’s dominated by liberals”).

From Dr. Wolfe’s other tweets, we can see the bailey (sample: “There’s a direct line between scientism and maintaining existing power structures; you can see it in language on data transparency, the recent hoax, and more.“). This isn’t a neutral political position! It is one that a number of people disagree with. Certainly Sokal, the hoax paper writer who inspired the most recent hoaxes, is an old leftist who would very much like to empower labour at the expense of capitalists.

I have a lot of sympathy for the people in the twitter thread who jumped to defend positions that looked ridiculous from the perspective of “science is subject to the same forces as any other collective human endeavour” when they believed they were arguing with “science is a tool of right-wing interests”. There are a great many progressive scientists who might agree with Dr. Wolfe on many issues, but strongly disagree with what her position seems to be here. There are many of us who believe that science, if not necessary for a progressive mission, is necessary for the related humanistic mission of freeing humanity from drudgery, hunger, and disease.

It is true that we shouldn’t uncritically believe science. But the work of being a critical observer of science should not be about running an inquisition into scientists’ political beliefs. That’s how we get climate change deniers doxxing climate scientists. Critical observation of science is the much more boring work of checking theories for genuine scientific mistakes, looking for P-hacking, and double-checking that no one got so invested in their exciting results that they fudged their analyses to support them. Critical belief often hinges on weird mathematical identities, not political views.
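
To make the P-hacking point concrete, here is a minimal sketch (my own illustration, not anything from the post or any particular study) of why that boring checking matters: a simulated researcher who measures twenty unrelated outcomes and reports whichever one crosses p < 0.05 will find a “significant” result in most studies, even when there is no real effect anywhere.

```python
import random

random.seed(0)

def null_study(n_outcomes=20, alpha=0.05):
    """Simulate one study with no real effects.

    Under the null hypothesis each outcome's p-value is uniform on [0, 1];
    return True if at least one outcome looks 'significant' at the alpha
    level, i.e. the study could be p-hacked into a positive finding.
    """
    return any(random.random() < alpha for _ in range(n_outcomes))

trials = 10_000
hackable = sum(null_study() for _ in range(trials))
# Expect roughly 1 - 0.95**20, or about 64%, of null studies to offer
# something "significant" to report.
print(f"Null studies with a reportable result: {hackable / trials:.0%}")
```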

But there are real and present dangers to uncritically disbelieving science whenever it conflicts with your political views. The increased incidence of measles outbreaks in vaccine-refusing populations is one such danger. Catastrophic and irreversible climate change is another.

When anyone says science is political and then goes on to emphasize all of the negatives of this statement, they’re giving people permission to believe their political views (like “gas should be cheap” or “vaccines are unnatural”) over the hard truths of science. And that has real consequences.

Saying that “science is political” is also political. And it’s one of those political things that is more likely than not to be driven by partisan politics. No one trumpets this unless they feel one of their political positions is endangered by empirical evidence. When talking with someone making this claim, it’s always good to keep sight of that.

Literature

Book Review: Bad Blood

Theranos was founded in 2003 by Stanford drop-out Elizabeth Holmes. It and its revolutionary blood tests eventually became a Silicon Valley darling, raising $700 million from investors that included Rupert Murdoch and the Walton family. It ultimately achieved a valuation of almost $10 billion on yearly revenues of $100 million. Elizabeth Holmes was hailed as Silicon Valley’s first self-made female billionaire.

In 2015, a series of articles by John Carreyrou published in the Wall Street Journal popped this bubble. Theranos was a fraud. Its blood tests didn’t work and were putting patient lives at risk. Its revenue was one thousand times smaller than reported. It had engaged in a long-running campaign of intimidation against employees and whistleblowers. Its board had entirely failed to hold the executives to account – not surprising, since Elizabeth Holmes controlled over 99% of the voting power.

Bad Blood is the story of how this happened. John Carreyrou interviewed more than 140 sources, including 60 former employees, to create the clearest possible picture of the company, from its founding to just before it dissolved.

It’s also the story of Carreyrou’s reporting on Theranos, from the first fateful tip he received after winning a Pulitzer for uncovering another medical fraud, to repeated legal threats from Theranos’s lawyers, to the slew of awards his coverage won when it eventually proved correct.

I thought it was one hell of a book and would recommend it to anyone who likes thrillers or anyone who might one day work at a start-up and wants a guide to what sort of company to avoid (pro tip: if your company is faking its demos to investors, leave).

Instead of rehashing the book like I sometimes do in my reviews, I want to discuss three key things I took from it.

Claims that Theranos is “emblematic” of Silicon Valley are overblown

Carreyrou vacillates on this point. He sometimes points out all the ways that Theranos is different from other VC backed companies and sometimes holds it up as a poster child for everything that is wrong with the Valley.

I’m much more in the first camp. For Theranos to be a poster child of the Valley, you’d want to see it raise money from the same sources as other venture-backed companies. This just wasn’t the case.

First of all, Theranos had basically no backing from dedicated biotechnology venture capitalists (VCs). This makes a lot of sense. The big biotech VCs do intense due-diligence. If you can’t explain exactly how your product works to a room full of intensely skeptical PhDs, you’re out of luck. Elizabeth Holmes quickly found herself out of luck.

Next is the list of VCs who did invest. Missing are the big names from the Valley. There’s no Softbank, no Peter Thiel, no Andreessen Horowitz. While these investors may have less ability to judge biotech start-ups than the life-sciences-focused firms, they are experienced in due diligence and they knew red flags (like Holmes’s refusal to explain how her tech worked, even under NDA) when they saw them. I work at a venture-backed company and I can tell you that experienced investors won’t even look at you if you aren’t willing to have a frank discussion about your technology with them.

The people who did invest? Largely dabblers, like Rupert Murdoch and the Walton family, drawn in by a board studded with political luminaries (two former secretaries of state, James friggen’ Mattis, etc.). It perhaps should have been a red flag that Henry Kissinger (who knows nothing about blood testing and would be better placed on Facebook’s board, where his expertise in committing war crimes would come in handy) was on the board, but to the well-connected elites from outside the Valley, this was exactly the opposite.

It is hard to deal with people who just lie

I don’t want to blame these dabblers from outside the Valley too much though, because they were lied to like crazy. As America found out in 2016, many institutions struggle when dealing with people who just make shit up.

There is an accepted level of exaggeration that happens when chasing VC money. You put your best foot forward, shove the skeletons deep into your closet, and you try and be the most charming and likable version of you. One founder once described trying to get money from VCs as “basically like dating” to me and she wasn’t wrong.

Much like dating, you don’t want to exaggerate too far. After all, if the suit is fruitful, you’re kind of stuck with each other. The last thing you want to find out after the fact is that your new partner collects their toenail clippings in a jar or overstates their yearly revenue by more than 1000x.

VCs went into Theranos with the understanding that they were probably seeing rosy forecasts. What they didn’t expect was that the forecasts they saw were 5x the internal forecasts, or that the internal forecasts were made by people who had no idea what the current revenue was. This just doesn’t happen at a normal company. I’m used to internal revenue projections being the exact same as the ones shown to investors. And while I’m sure no one would bat an eye if you went back and re-did the projections with slightly more optimistic assumptions, you can’t get to a 5x increase in revenue just by doing that. Furthermore, the whole exercise of doing projections is moot if you are already lying about your current revenue by 1000x.

There is a good reason that VCs expect companies not to do this. I’m no lawyer, but I’m pretty sure that this is all sorts of fraud. The SEC and US attorney’s office seem to agree. It’s easy to call investors naïve for buying into Theranos’s lies. But I would contend that Holmes and Balwani (her boyfriend and Theranos’s erstwhile president) were the naïve ones if they thought they could get away with it without fines and jail time.

(Carreyrou makes a production of how “over-promise, then buy time to fix it later” is business as usual for the Valley. This is certainly true if you’re talking about, say, customers of a free service. But it is not and never has been accepted practice to do this to your investors. You save the rosy projections for the future! You don’t lie about what is going on right now.)

The existence of a crime called “fraud” is really useful for our markets. When lies of the sort that Theranos made are criminalized, business transactions become easier. You expect that people who are scammers will go do their scams somewhere where lies aren’t so criminalized and they mostly do, because investors are very prone to sue or run to the SEC when lied to. Since this mostly works, it’s understandable that a sense of complacency might set in. When everyone habitually tells more or less the truth, everyone forgets to check for lies.

The biotech VCs passed on Theranos because their intense due diligence – as much a sweep for general incompetence as for anything specific to biology – made it clear that something fishy was going on. The rest of the VCs were less lucky, but I would argue that when the books are as cooked as Theranos’s were, a lack of understanding of biology was not the primary problem with these investors. The primary problem was that they thought they were buying a company that was making $100 million a year when in fact it was making $100,000.

Most VCs (and probably most of the dabblers, who after all made their money in business of some sort) may not understand the nuances of biotech, but they do understand that revenue that low more than a decade into operation represents a serious problem. Conversely, revenues of $100 million are pretty darn good for a decade-old medical device company. With that lie out of the way, the future growth projections looked reasonable; they were just continuing a trend. Had any investors been told the truth, they could have used their long experience as business people or VCs to realize that Theranos was a bad deal. Holmes’s lies prevented that.

I sure wish there was a way to make lies less powerful in areas where people mostly stick near the truth (and that we’d found one before 2016), but absent that, I want to give Theranos’s investors a bit of a break.

Theranos was hardest on ethical people

Did you know that Theranos didn’t have a chief financial officer for most of its existence? Their first CFO confronted Holmes about her blatant lies to investors (she was entirely faking the blood tests that they “took”) and she fired him, then used compromising material on his computer to blackmail him into silence. He was one of the lucky ones.

Bad Blood is replete with stories of idealistic young people who joined Theranos because it seemed to be one of the few start-ups that was actually making a positive difference in normal people’s lives. These people would then collide with Theranos’s horrible management culture and begin to get disillusioned. Seeing the fraud that took place all around them would complete the process. Once cynicism set in, employees would often forward some emails to themselves – so they’d have proof that they had participated in the fraud only unknowingly – and then immediately hand in their notice.

If they emailed themselves, they’d get a visit from a lawyer. The lawyer would tell them that forwarding emails to themselves was stealing Theranos’s trade secrets (everything was a trade secret with Theranos, especially the fact that they were lying about practically everything). The lawyer would present the employee with an option: delete the emails and sign a new NDA that included a non-disparagement clause that prevented them from criticising Theranos, or be sued by the fiercely talented and amoral lawyer David Boies (who was paid in Theranos stock and had a material interest in keeping the company afloat) until they were bankrupted by the legal fees.

Most people signed the paper.

If employees left without proof, they’d either be painted as deranged and angered by being fired, or they’d be silenced with the threat of lawsuits.

Theranos was a fly trap of a company. Its bait was a chance to work on something meaningful. But then it was set up to be maximally offensive and demoralizing for the very people who would jump at that opportunity. Kept from speaking out, they could be eaten alive by guilt at helping perpetuate the fraud.

One employee, Ian Gibbons, committed suicide when caught between Theranos’s impossible demands for loyalty and an upcoming deposition in a lawsuit against the company.

To me, this makes Theranos much worse than seemingly similar corporate frauds like Enron. Enron didn’t attract bright-eyed idealists, crush them between an impossible situation and their morals, then throw them away to start the process over again. Enron was a few directors enriching themselves at the expense of their investors. It was wrong, but it wasn’t monstrous.

Theranos was monstrous.

Elizabeth Holmes never really made any money from her fraud. She was paid a modest (by Valley standards) salary of $200,000 per year – about what a senior engineer could expect to make. It’s probably about what she could have earned a few years after finishing her Stanford degree, if she hadn’t dropped out. Her compensation was mostly in stock and when the SEC forced her to give up most of it and the company went bankrupt, its value plummeted from $4.5 billion to $0. She never cashed out. She believed in Theranos until the bitter end.

If she’d been in it for the money, I could have understood it, almost. I can see how people would do – and have done – horrible things to get their hands on $4.5 billion. But instead of being motivated by money, she was motivated by some vision. Perhaps of saving the world, perhaps of being admired. In either case, she was willing to grind up and use up anyone and everyone around her in pursuit of that vision. Lying was on the table. Ruining people’s lives was on the table. Callously dismissing a suicide that was probably caused by her actions was on the table. As far as anyone knows, she has never shown remorse for any of these. Never viewed her actions as anything but moral and upright.

And someone who can do that scares me. People who are in it for the money don’t go to bed thinking they’re squeaky clean. They know they’ve made a deal with the devil. Elizabeth Holmes doesn’t know and doesn’t understand.

I think it’s probably for the best that no one will trust Elizabeth Holmes with a fish and chips stand, let alone a billion-dollar company, ever again. Because I tremble to think of what she could do if given another chance to “change the world”.

Model

Hacked Pacemakers Won’t Be This Year’s Hot Crime Trend

Or: the simplest ways of killing people tend to be the most effective.

A raft of articles came out during Defcon showing that security vulnerabilities exist in some pacemakers, vulnerabilities which could allow attackers to load a pacemaker with arbitrary code. This is obviously worrying if you have a pacemaker implanted. It is equally self-evident that it is better to live in a world where pacemakers cannot be hacked. But how much worse is it to live in this unfortunately hackable world? Are pacemaker hackings likely to become the latest crime spree?

Electrical grid hackings provide a sobering example. Despite years of warning that the American electrical grid is vulnerable to cyber-attacks, the greatest threat to America’s electricity infrastructure remains… squirrels.

Hacking, whether it’s of the electricity grid or of pacemakers, gets all the headlines. Meanwhile, fatty foods and squirrels do all the real damage.

(Last year, 610,000 Americans died of heart disease and 0 died of hacked pacemakers.)

For all the media attention that novel cyberpunk methods of murder get, they seem to be rather ineffective for actual murder, as demonstrated by the paucity of murder victims. I think this is rather generalizable. Simple ways of killing people are very effective but not very scary and so don’t garner much attention. On the other hand, particularly novel or baroque methods of murder cause a lot of terror, even if almost no one who is scared of them will ever die of them.

I often demonstrate this point by comparing two terrorist organizations: Al Qaeda and Daesh (the so-called Islamic State). Both of these groups are brutally inhumane, think nothing of murder, and are made up of some of the most despicable people in the world. But their methodology couldn’t be more different.

Al Qaeda has a taste for large, complicated, baroque plans that, when they actually work, cause massive damage and change how people see the world for years. 9/11 remains the single deadliest terror attack in recorded history. This is what optimizing for terror looks like.

On the other hand, when Al Qaeda’s plans fail, they seem almost farcical. There’s something grimly amusing about the time that Al Qaeda may have tried to weaponize the bubonic plague and instead lost over 40 members when they were infected and promptly died (the alternative theory, that they caught the plague because of squalid living conditions, looks only slightly better).

(Had Al Qaeda succeeded and killed even a single westerner with the plague, people would have been utterly terrified for months, even though the plague is relatively treatable by modern means and would have trouble spreading in notably flea-free western countries.)

Daesh, on the other hand, prefers simple attacks. When guns are available, their followers use them. When they aren’t, they’ll rent vans and plough them into crowds. Most of Daesh’s violence occurs in Syria and Iraq, where they once controlled territory with unparalleled brutality. This is another difference in strategy (as Al Qaeda is outward facing, focused mostly on attacking “The West”). Focusing on Syria and Iraq, where the government lacks a monopoly on violence and they could originally operate with impunity, Daesh racked up a body count that surpassed Al Qaeda’s.

While Daesh has been effective in terms of body count, they haven’t really succeeded (in the west) in creating the lasting terror that Al Qaeda did. This is perhaps a symptom of their quotidian methods of murder. No one walked around scared of a Daesh attack and many of their murders were lost in the daily churn of the news cycle – especially the ones that happened in Syria and Iraq.

I almost wonder if it is impossible for attacks or murders by “normal” means to cause much terror beyond those immediately affected. Could hacked pacemakers remain terrifying if as many people died of them as gunshots? Does familiarity with a form of death remove terror, or are some methods of death inherently more terrible and terrifying than others?

(It is probably the case that both are true, that terror is some function of surprise, gruesomeness, and brutality, such that some things will always terrify us, while others are horrible, but have long since lost their edge.)

Terror for its own sake (or because people believe it is the best path to some objective) must be a compelling option to some, because otherwise everyone would stick to simple plans whenever they think violence will help them achieve their aims. I don’t want to stereotype too much, but most people who go around being terrorists or murderers aren’t the brightest bulbs in the socket. The average killer doesn’t have the resources to hack your pacemaker and the average terrorist is going to have much better luck with a van than with a bomb. There are disadvantages to bombs! The average Pashtun farmer or disaffected mujahedeen is not a very good chemist, and homemade explosives are dangerous even to skilled chemists. Accidental detonations abound. If there wasn’t some advantage in terror to be had, no one would mess around with explosives when guns and vans can be easily found.

(Perhaps this advantage is in a multiplier effect of sorts. If you are trying to win a violent struggle directly, you have to kill everyone who stands in your way. Some people might believe that terror can short-circuit this and let them scare away some of their potential opponents. Historically, this hasn’t always worked.)

In the face of actors committed to terror, we should remember that our risk of dying by a particular method is almost inversely related to how terrifying we find it. Notable intimidators like Vladimir Putin or the Mossad kill people with nerve gasses, polonium, and motorcycle-delivered magnetic bombs to sow fear. I can see either of them one day adding hacked pacemakers to their arsenal.

If you’ve pissed off the Mossad or Putin and would like to die in some way other than a hacked pacemaker, then by all means, go get a different one. Otherwise, you’re probably fine waiting for a software update. If, in the meantime, you don’t want to die, maybe try ignoring headlines and instead not owning a gun and skipping French fries. Statistically, there isn’t much that will keep you safer.

Coda

Our biases make it hard for us to treat things that are easy to remember as uncommon, which no doubt plays a role here. I wrote this post like this – full of rambles, parentheses, and long-winded examples – to try and convey a difficult intuition: any method of murder that seems shocking but hard is unlikely to actually affect us. Remember that most crimes are crimes of opportunity and most criminals are incompetent, and you’ll never be surprised to hear that the three most common murder weapons are guns, knives, and fists.