
Guest post: Stephanie Tai on deference to experts

My colleague Steph Tai at the law school wrote a long, amazing Facebook message to me about the question Cathy and I have been pawing at:  when and in what spirit should we be listening to experts?  It was too good to be limited to Facebook, so, with her permission, I’m reprinting it below.

Steph deals with these issues because her academic specialty is the legal status of scientific knowledge and scientific evidence.  So yes:  in a discussion on whether we should listen to experts I am asking you to listen to the opinions of an expert on expertise.

Also, Steph very modestly doesn’t link to her own paper on this stuff until the very bottom of this post.  I know you guys don’t always read to the bottom, so I’ve got your link to “Comparing Approaches Toward Governing Scientific Advisory Bodies on Food Safety in the United States and the European Union” right here!

And now, Steph:

*****

Some quick thoughts on this very interesting exchange. What might be helpful, perhaps to take everyone out of our own political contexts, is to contrast the discussion you're both having regarding experts and financial models with discussions about experts and climate models, where, it seems, the political dynamics are fairly opposite. There, you have people on the far right making claims similar to Cathy's: that climate scientists are to be distrusted because they're just coming up with scare models, models that are allegedly biased because they're useful to the climate scientists themselves–i.e., they bring money to left-wing causes, generate grants for more research, etc.


So when you apply the claim that Cathy makes at the end of her post–"If you see someone using a model to make predictions that directly benefit them or lose them money – like a day trader, or a chess player, or someone who literally places a bet on an outcome (unless they place another hidden bet on the opposite outcome) – then you can be sure they are optimizing their model for accuracy as best they can. . . . But if you are witnessing someone creating a model which predicts outcomes that are irrelevant to their immediate bottom-line, then you might want to look into the model yourself."–I'm not sure you can totally put climate scientists in that former category (of those who directly benefit from the accuracy of their predictions). This is due to the nature of most climate work: most researchers in the area contribute to only one tiny part of the models, rather than producing the entire models themselves (thus, the incentives to avoid inaccuracies are diffuse rather than direct); the "test time" for the models is often relatively far into the future (again, making the incentives more indirect); and the diffuse reputational gains that an individual climate scientist gets from being part of a team that might partly contribute to an accurate climate model are far less direct than the examples given of day traders, chess players, and "someone who literally places a bet on an outcome."


What that in turn seems to mean is that under Cathy's approach, climate scientists would be viewed as falling into the latter category–those creating models that "predict outcomes that are irrelevant to their immediate bottom-line," and thus deserving of people looking "into the model [themselves]." But at least from what I've seen, there is *so* much inaccurate and misleading information out there about climate models (put out by folks with stakes in the *perception* of those models) that a lay person's inquiry into climate models has a high chance of being shaped by the very forces with which Cathy is (in my view appropriately) concerned. Which in turn makes me concerned about applying this approach.
Disclaimer: I used to fall under this larger umbrella of climate scientists, though I didn’t work on the climate models themselves, just one small input to them—the global warming potentials of chlorofluorocarbon substitutes. So this contrast is not entirely unemotional for me. That said, now that I’m an academic who studies the *use* of science in legal decisionmaking (and no longer really an academic who studies the impact of chlorofluorocarbon substitutes on climate), I don’t want to be driven by these past personal ties, but they’re still there, so I feel like I should lay them out.
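(For readers who haven't run into the term: a "global warming potential" compares the climate impact of emitting a pulse of a gas to that of emitting the same mass of CO2, integrated over some time horizon H. In its standard form–this is the textbook definition, not anything particular to my old work–it's a ratio of time-integrated radiative forcings:

\[
\mathrm{GWP}_i(H) = \frac{\int_0^H a_i \, C_i(t) \, dt}{\int_0^H a_{\mathrm{CO_2}} \, C_{\mathrm{CO_2}}(t) \, dt}
\]

where a_i is the radiative efficiency of gas i and C_i(t) is the abundance of the gas remaining at time t after the initial pulse. In general a gas's GWP depends on the choice of H, since different gases decay in the atmosphere on very different timescales than CO2.)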


So what’s to be done? I absolutely agree with Cathy’s statement that “when independent people like myself step up to denounce a given statement or theory, it’s not clear to the public who is the expert and who isn’t.” It would seem, from what she says at the end of her essay, that her answer to this “expertise ambiguity” is to get people to look into the model when expertise is unclear.[*] But that in turn raises a whole bunch of questions:


(1) What does it take to "look into the model yourself"? That is, how much understanding does it take? Some sociologists of science suggest that translational "experts"–that is, "experts" who aren't necessarily producing new information and research, but instead are "expert" enough to communicate stuff to those not trained in the area–can help bridge this divide without requiring everyone to become "experts" themselves. But that raises the question of whether these translational experts have hidden agendas of their own. Moreover, one can ask whether a partial understanding of the model might in some instances be more misleading than not looking into the model at all–an example would be the various challenges to evolution based on details that seem minor once fully contextualized, but that may pop out to someone doing a less systematic inquiry.


(2) How does a layperson, in attempting to understand the underlying model, avoid the same manipulations by those with financial stakes in the matter–the same stakes that Cathy recognizes might shape the model itself? Because that happens as well: even if one were to try to look into a model oneself, the educational materials it would take to do so may themselves have been developed by those with stakes in the matter. (I think Cathy sort of raises this in a subsequent post about how entire subfields can be regarded as "captured" by particular interests.)


(3) (and to me this is one of the most important questions) Given the high degree of training it takes to understand any of these individual areas of expertise, and given that we encounter so many areas in which this sort of deeper understanding is needed to resolve policy questions, how can any individual actually apply that initial exhortation–to look into the model yourself–in every instance where expertise ambiguity is raised? To me that’s one of the most compelling arguments in favor of deferring to experts to some extent–that lay people (as citizens, as judges, as whatever) simply don’t have time to do the kind of thing that Cathy suggests in every situation where she argues it’s called for. Expert reliance isn’t perfect, sure–but it’s a potentially pragmatic response to an imperfect world with limited time and resources.


Do my thoughts on (3) mean that I think we should blindly defer to experts? Absolutely not. I'm just pointing it out as something that weighs in favor of listening to experts a little more. But that also doesn't mean that the concerns Cathy raises are unwarranted. My friend Wendy Wagner writes about this in her papers on the production of FDA reports and toxic materials testing, and I find her inquiries quite compelling. P.S. I should also plug a work of hers that seems especially relevant to this conversation. It suggests that the part of Nate Silver's book that might raise the most concerns (I dunno, because I haven't read it) is its potential contribution to the vision of models as "truth machines," rather than as just one tool to aid in making decisions–and a tool that must be contextualized (for bias, for meaningfulness, for uncertainty) at that.


So how to address this balance between skepticism and the lack of time to do full inquiries into everything? I totally don't have the answers, though the kind of stuff I explore is procedural ways to address these issues, at least when legal decisions are at issue–for example,
* public participation processes (with questions as to the timing and scope of those processes, the ability and likelihood of these processes even being used, their accessibility, their susceptibility to "abuse," and the weight of those processes in ultimate decisionmaking)
* scientific ombudsman mechanisms (with questions of how ombudsmen are to be selected, the resources they can use to work with citizen groups, and the training of such ombudsmen)
* the formation of independent advisory committees (with questions of the selection of committee members, conflict-of-interest provisions, and the authority accorded to such committees)
* case law requiring certain decisionmaking heuristics in the face of scientific uncertainty, to avoid too much susceptibility to data manipulation (with questions of the incentives those heuristics create for potential funders of scientific research, and the ability of judges to apply such heuristics in a consistent manner)
–as well as legal requirements that exacerbate these problems. Anyway, thanks for an interesting back and forth!


[*] I'm not getting into the question of "what makes someone an expert?" here; instead I focus on "how do we make decisions given the ambiguity of who should be considered an expert?" because that's more relevant to what I study. I should point out, though, that philosophers and sociologists of science have been studying the former question in what's starting to be called the "third wave" of science, technology, and society studies. There's a lot of debate about this, and I have a teensy summary of it here (since Jordan says it's okay for me to plug myself :). (Note: the EFSA advisory committee structure, if anyone cares, has changed since I published this article, so the article's characterizations are no longer accurate.)
