Science-speak and Poor Hermeneutics

I had a conversation with someone last week which was a little epistemologically disorienting. She was making certain arguments about the effectiveness and harm of certain medical treatments. Having followed Scott Alexander on the subjects she was discussing, I was pretty sure most of her conclusions were wrong. But not only had she done a lot of research, she knew a lot of scientific facts that she was using to back up her arguments. She would explain concepts and I would be like “yep, that’s how recessive genes work” and “yep, that’s an accurate history of the development of that medical treatment,” and only occasionally did she seem to get any of the facts wrong — but the way she put them together seemed really wrong. I didn’t bother to argue. I knew it would be incredibly difficult: between the level of research she had done and the impressive level of her (amateur) knowledge, it really sounded like she understood science. And I haven’t done any more research on the topic than she has! Scott Alexander reviews the literature on something, and he seems really reasonable, knowledgeable, and relatively unbiased, so I take his word for it, because I don’t have a medical degree like he does, nor the time or interest to research it myself. But she was doing the same thing, just with a different set of sources she trusted.

But something really bothered me about her arguments, and it was not just her conclusions: it was the way she strung scientific facts together into connections they don’t actually have. As if she knew how to talk science without actually understanding science.

And I realized I’ve seen this in several other domains as well.

There’s a difference between searching the Bible for evidence for a particular view and really trying to understand what it says with an open mind. And it’s often difficult for me to tell which of those two methods someone has used when they are presenting an argument from Biblical evidence on a topic.

I often think of the title of the 2016 essay on economics: “The New Astrology”. Before you take that as a mere knock on economics, how much do you know about the historical practice of astrology? I recently learned from L. M. Sacasas how evidence-based astrology and related disciplines were: how the Chaldeans, for example, kept meticulous charts of the movements of stars, the dreams of kings, and the appearance of animal fetuses, along with the major events that followed, to study their correlations. It was incredibly mathematical and statistical, in the way that is one pillar of modern science. (And here’s your reminder that Newton was as interested in alchemy as physics, and only one of them took off in the following centuries.) Perhaps economics as a science is not held in the scorn that astrology is, but a practitioner of either would offer the same defense: look at all the evidence behind our models! You may knock economics’ predictive power, but is weather forecasting, which is a physical science, much better?

Back to the physical sciences, there’s this great XKCD:

https://imgs.xkcd.com/comics/53_cards_2x.png

The perpetual motion enthusiast may be really good at using physics concepts, but he is drawing an obviously wrong conclusion. But how can an outsider who knows nothing about physics know that the physicist is the one who’s right, when the enthusiast appears so knowledgeable and has so much evidence? The statement “your argument is so bad I don’t even have to address it” may or may not persuade third parties, and definitely conveys no signal as to whether you’re actually right.

This made me realize that these are all variations on the same problem: good science, good Biblical hermeneutics, and good economic modeling, if it exists (I want to give economists the benefit of the doubt!), can sound a lot like their poor equivalents when those are backed by a lot of knowledge of the material to use as evidence.

But how can someone discern which is which?

(1) I guess the main criterion I use is the attitude of the person making the argument. I have never met Scott Alexander, but he really seems like someone who is open to being wrong, and admits it. My stance on understanding the Bible has always been that the reader must approach it with humility, open to whatever it might say. I’m really willing to give a lot more credit to people with an open mind who draw wrong conclusions than to people who happen to be right but are only concerned with finding evidence to prove it.

I think there are two major epistemological temptations, of which everyone faces at least one: the conformist temptation to agree with your in-group, and the non-conformist temptation to reject what you’ve been told and take a view that sounds more interesting. The first is more likely to believe whatever the authorities say, and the second is more likely to be a conspiracy theorist, but one can definitely fall to both temptations at the same time. I think good epistemology involves intentionally recognizing those temptations within yourself and rejecting them. I have a friend whom, more than most of my friends, I consider likely to have an accurate understanding of what the Bible teaches. First, he’s really read the Bible a lot; second, I’ve seen him change his mind; and third, he seems really good at rejecting both temptations, such that he is neither wholly conformist nor someone who adopts suspiciously many distinctive views.

Even so, how well can you really tell if someone has that humble, open mind — even yourself? How do you avoid having an open mind and still being wrong?

(2) The standard scientific answer is testability / replicability. When I said astrology had one pillar of modern science (statistics), this is the obvious one it lacks: if you keep having to add epicycles, that’s evidence that your model is fundamentally wrong. But how do you test your model if you’re not in a lab? You can’t run new experiments on history or biblical hermeneutics — but neither can you, as a layman, singlehandedly run experiments to solve questions of medical science, in any but the simplest cases: your sample size is insufficient, for example, if it’s just you and your friends. Science’s answer of “testability” does not on its face solve the problem.

But I think predictability still has value as a test of truth. In particular, the question, “If you were wrong, what evidence would convince you?”

(3) Even so, certain categories of debate really struggle under the weight of predictability. I recently heard the characterization that the left’s favorite doomsday prediction is drastic climate change, and the right’s favorite doomsday prediction is hyperinflation. (If you’re in the contemporary rationalist community, maybe it’s evil superhuman AI; in stereotypical evangelical circles, every major world event is evidence of approaching end times . . .) The problem with arguments for either is that they’re based on theoretical models that really have a long time horizon. If the earth gets colder, or prices go down, the person making the argument is rightly unpersuaded, because they’re not claiming to predict things on a short time scale; they’re only concerned about the fundamental theory regarding long-term effects, which they’re certain is sound. (Or even just appealing to the uncertainty of it: “we’ve never produced CO2 emissions at this rate in human history: even if the result isn’t certain, you really think we’re safe not doing anything about it and just ‘finding out what happens’?”) These are valid arguments — but then on what grounds can the underlying claim be evaluated at all?

(4) Since my own capacity to dive into every subject out there is limited, I use a proxy for discerning whom I can trust to do it for me: evaluating whether a community really has a culture of debate that is truth-seeking. In such a community, I can rely on any holes in an argument being thoroughly poked at by others, and I can relax a little in assuming that where there was no poking there are probably few holes. I really find that in the readers of Scott Alexander’s blog. But this, too, is far from foolproof. You can always, as Chomsky said, “limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum” — and I can tell even the rationalist community is not fully immune from that. But then I benefit from having my feet in several very different communities simultaneously (on many axes) — though that produces a high degree of epistemic helplessness in myself, which results in struggles like this post.

(5) Then again, I do think there’s an unusually good test of whether someone is earnestly truth-seeking: how willing are they to admit confusion? Eliezer Yudkowsky tells a story of how he failed to notice his own confusion, and concludes: “. . . the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.” Do you write off every apparent piece of counter-evidence to your medical view / theological stance / doomsday prediction as exceptional, or are you ever confused by something? A few years ago I got in the habit of lightly emphasizing confusing verses in Bible discussion. The temptation to ignore them, or to jump to a commentary to explain them away, or to explain them away myself as if that verse might as well not be in the Bible, is incredibly great. I don’t want to be some contrarian who thinks all our doctrines are actually wrong — and I’m not. But I do want to be someone who occasionally reads certain passages and honestly says, “I am surprised and confused.” I think that only then am I in a position to learn.
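To put that quote in concrete terms (this is my own Bayesian gloss, not anything in Yudkowsky’s post), write Bayes’ theorem in odds form:

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
\]

If a hypothesis H “explains” every possible observation E equally well, so that P(E | H) = P(E | ¬H) for all E, then the likelihood ratio is always 1, and no observation can ever move the posterior away from the prior. A model that assigns the same likelihood to every outcome cannot be updated by evidence at all; that is the precise sense in which it carries zero knowledge.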

Overall, I am not very satisfied by my answers here, but I’m increasingly considering sheer quantity of evidence not to be persuasive at all, and trying, little by little, to hone my capacity to distinguish between evidence-speak and an earnest, humble search for truth.