Scientism Isn’t a Complete Strategy

Ari Holtzman
5 min read · Dec 20, 2020


My deliciously thoughtful colleague Dallas Card has just written a short piece about the collective resident response to vaccine allocation at Stanford hospital. He does an excellent job of highlighting the kinds of communicative dynamics and implicit assumptions both the spokesperson and the crowd of residents use in this now infamous video.

I recommend you watch the video and read the article before reading this.

Card notes that Stanford reportedly ended up with a “first-come, first-served” system by accident. Perhaps, then, we should ask first: Why didn’t Stanford just start with first-come, first-served? Obviously it is because there is a sense of non-uniform urgency: some people are more likely to contract COVID-19 (largely because of their role in the hospital) and some people are more likely to die of COVID-19 (largely because of their age). These two priorities often point in opposite directions as to who should get the vaccine first.
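
To make that tension concrete, here is a toy sketch of the kind of weighted scoring rule a “pure utility calculus” might use. Everything in it is invented for illustration (the fields, the weights, the numbers); it is emphatically not Stanford’s algorithm. The only point is that the ordering of the queue flips depending on a weight that someone, somewhere, has to choose.

```python
# Hypothetical illustration only: NOT Stanford's algorithm.
# A toy scoring rule that trades off exposure risk against mortality risk.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    exposure_risk: float   # chance of contracting COVID-19 (e.g., role-based)
    mortality_risk: float  # chance of dying if infected (e.g., age-based)

def priority(p: Person, w_exposure: float = 0.5) -> float:
    """Weighted priority score; the weight is a policy choice, not a fact."""
    return w_exposure * p.exposure_risk + (1 - w_exposure) * p.mortality_risk

people = [
    Person("frontline resident", exposure_risk=0.30, mortality_risk=0.01),
    Person("senior faculty (remote)", exposure_risk=0.02, mortality_risk=0.10),
]

# The ordering of the queue flips depending on how the trade-off is weighted.
for w in (0.2, 0.8):
    ranked = sorted(people, key=lambda p: priority(p, w), reverse=True)
    print(f"w_exposure={w}: " + " > ".join(p.name for p in ranked))
```

Whatever the real algorithm did, some choice like this weight was embedded in it, and that choice is a human judgment, not a computational fact.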

Reasoning from a pure utility calculus about how to prevent deaths or maximize preserved life-years is all well and good, but Card points out a bigger issue:

In this case, ProPublica reports that residents did not have an assigned “location”, which sounds exactly like a classic informational failure.

With so little information it seems very hard to justify an algorithm that can’t be described in about 60 seconds on a single PowerPoint slide. Or even to justify the use of an algorithm at all, rather than a guidance policy and a list put together by the relevant parties.

But who are the relevant parties?

Card notes that the spokesperson uses the word “algorithm” to

suggest that the choice was to some extent out of their hands, or that the choice was somehow more principled or fair. Mention of algorithms might still suggest a degree of disinterestedness that perhaps cues “reasonable” for some people (though I expect this is quickly evaporating).

This impression of algorithmic “cleanliness” is oft-noted in current discussions, both academic and informal, but the “hiding factor” seems more important than any such impression. For instance, in the given video clip the people who are actually in charge are never revealed, because all the narrative force is directed at the algorithm itself, which the Stanford spokesperson justifies in three different ways:

Our algorithm, that the ethicists, infectious disease experts, worked on for weeks

First, by the implication of algorithmic “disinterestedness”; second, by the use of expert roles to justify its creation; and third, by time investment (weeks). As Card notes, all of these things serve to deflect responsibility for the unsavory result of such an apparently arduous process. I would argue this bad outcome is evidenced not so much by the dissatisfaction of the residents as by the fact that this dissatisfaction came so close to the date of vaccine distribution. As Card says:

No matter what process is used, there will be people who will be displeased (often justifiably), and, somewhat counter-intuitively, this is all the more assured when a process is made explicit (as someone will inevitably have specific grounds on which to challenge it).

A process that was made explicit in advance would have faced significantly more pushback, but it is the sudden pushback and the spokesperson’s inability to describe what even happened that is worrying. There is nothing to push back on but the words “algorithm”, “ethicists”, “infectious disease experts”, and “weeks”, and nowhere can these be found concretely instantiated.

The issue, then, seems to be that computational solutions and expert designs, as abstract ideas, come to stand in for any kind of concrete justification for the current plan. This is why the explanations feel hollow: they are impossible to really argue with because so little has been tied back to the real world.

The point of avoiding first-come, first-served was to increase the probability that vaccines would be given out in a way that prevented deaths and further transmission. But that is not a responsibility people like to take; indeed, it is clear that these issues will end many people’s careers in 2021. The result is that expert-designed algorithms become a shield against any notion of personal management, and thus liability. The organization is liable for collectively creating an algorithm, but it is this very collectivity that removes responsibility from individuals.

Perhaps this is my biggest fear about algorithms altogether: they will be used wherever they are useful as political deflection, even though there is a vast literature on how awful a thing that is to do. On the one hand, it is clear that many things simply can’t be done well without algorithms, e.g. credit card fraud detection. On the other hand, algorithms are now popularly understood to have quirks that result from their lack of human perspective, providing the ultimate deflection tool. It is easy, now, to say “Our best decision-maker simply couldn’t understand the problem.”

I worry about this most, because I worry about the Scientism that has seeped into every corner of discourse. I do not think we can solve every problem empirically, i.e. with statistical tests on data collected from experiments. The most important reason for this is that most events aren’t repeatable and we’re always hazarding a guess as to how they’re similar to other events we have data about.

To be clear, I think our increasing use of empirical trials and data in policy is one of the things that stands to make decisions more accountable than 20th-century government could ever have hoped to be. But it seems to me that in our nervousness about leaving decisions up to humans, we give humans less and less incentive to take responsibility. That’s very scary.

We have more ways of knowing things than statistical tests. Most arguments in any organization are not resolved through such means, nor should they be. Statistical tests can, however, force us to be honest with ourselves about whether our ideas are represented in the data the way we believe they are before we check.
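
As a concrete (and entirely invented) illustration of what I mean by such an honesty check: a plain two-sample t-test on made-up numbers can tell us whether a difference we expected actually shows up in the sample we collected, and nothing more.

```python
# Illustrative only: a statistical test as an honesty check, not a decision-maker.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented data: some outcome measured under two conditions we believe differ.
group_a = rng.normal(loc=10.0, scale=2.0, size=50)
group_b = rng.normal(loc=10.5, scale=2.0, size=50)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value says the difference we hypothesized is visible in this sample;
# whether that difference should drive the decision is still a human judgment.
```

The test disciplines the claim “the data look the way I think they do”; it does not decide what anyone should do next.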

Sadly, the notion of “algorithm” has taken on some of the sheen of such statistical tests, as if by manipulating data deterministically algorithms could keep us honest to our original intentions. It hardly needs stating that this is not the case. Yet, to make the case stronger, we will have to flesh out the ways in which we feel it is justified to rely on arguments whose evidence is essentially analogical: comparing one situation to another, compensating for differences, and taking action that seems justified in a shared model.

Perhaps this seems obvious when it comes to policy, because that is classically how policy has been done. But I would argue it is no less true when it comes to Science itself. I find it exceedingly rare to read a paper where something is justified by analogy rather than by (often ham-fistedly) trying to test a given idea on benchmarks that don’t quite apply. To me, this is the same problem as in organizational policy: we have become unwilling to make arguments that do not look “data-driven”.

This will be seen as a historic mistake.

I’d like to echo Card’s closing note of gratitude and ask the reader to take a moment of thought and thankfulness for everyone who is working on the front lines, those who are making difficult decisions about risk, and those who are suffering from the secondary effects the pandemic has had on the economy and daily life in this painful and complicated year.
