
Bayesian Probability and Intelligent Design: A Beginner’s Guide

Episode: 1828
With: Andrew McDiarmid
Guest: Jonathan McLatchie
Duration: 00:28:30
Download: Audio File (17.7 MB)

If the phrase “Bayesian calculus” makes you want to run for the hills, you’re not alone! Bayesian logic can sound intimidating at first, but if you give it a little time, you’ll understand how useful it can be in evaluating the evidence for design in the natural world. On this ID the Future, Dr. Jonathan McLatchie gives us a beginner’s guide to Bayesian thinking and teaches us how it can be used to build a strong cumulative case for intelligent design, as well as how we can use it in our everyday lives.

It is one of the most important formulas in all of probability, and it has been central to scientific discovery for the last two centuries. At its heart, Bayes’ theorem, first developed by the 18th-century English statistician, philosopher, and minister Thomas Bayes, is a method for quantifying the confidence one should have in a particular belief or hypothesis. The process yields a ratio of the probability that a hypothesis is true to the probability that it is false, given the evidence. Here, Dr. McLatchie explains what the theorem is, the components that comprise it, when it would typically be used, and some useful examples of Bayesian reasoning in action.
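
For readers who want to see the formula itself, the standard textbook statement of Bayes’ theorem (the form being described here, not a quotation from the episode) is:

$$
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
$$

where H is the hypothesis, E is the evidence, P(H) is the prior probability of the hypothesis, and P(H | E) is the updated (posterior) probability once the evidence is taken into account.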

Dr. McLatchie then shows how Bayesian probability can be applied to the evidence for design in nature. First, he argues that the initial prior probability – the intrinsic plausibility of the hypothesis being true given the background information alone – for the design hypothesis is not low: “In the case of intelligent design and our inferences to design in biology, we have independent reasons, I would contend, to already think that a mind is involved in the origin of our cosmos, including the fine-tuning of the laws and constants of our universe…and the prior environmental fitness of nature.” Secondly, when you add in the evidence we’ve discovered of the complexity of living cells, the infusions of new biological information into the biosphere over time, the evidence for the Big Bang, and more, the cumulative case for intelligent design grows stronger. “If we suppose that a mind is involved,” says McLatchie, “then it’s not hugely improbable that we’d find information content in the cell, and that we’d have information processing systems and that we’d have irreducibly complex machines. But, on the other hand, it is overwhelmingly improbable, I would argue, that such information-rich systems and irreducibly complex machinery would exist on the falsity of the design hypothesis. And so you have this overwhelmingly top-heavy likelihood ratio.”


Transcript


[00:00:04] Jonathan McLatchie: ID the Future, a podcast about evolution and intelligent design.

[00:00:12] Andrew McDiarmid: Welcome to ID the Future. I’m your host, Andrew McDiarmid. Today I’m speaking with Dr. Jonathan McLatchie, fellow and resident biologist at the Discovery Institute’s Center for Science and Culture. Jonathan was previously an assistant professor at Sattler College in Boston, where he lectured in biology for four years. He holds a bachelor’s degree in forensic biology, a master’s degree in evolutionary biology, a second master’s degree in medical and molecular bioscience, and a PhD in evolutionary biology. His research interests include the scientific evidence for design in nature, arguments for the existence of God, and New Testament scholarship. Jonathan is also founder and director of TalkAboutDoubts.com. Jonathan, welcome.

[00:00:57] Jonathan McLatchie: Great to be here, Andrew. How are you doing?

[00:00:59] Andrew McDiarmid: I’m doing well, thank you very much. Well, today I’d like your help giving ID the Future listeners a beginner’s guide to Bayesian reasoning and how we can apply that kind of reasoning to the hypothesis of design in the natural world. Now, listeners, don’t run away or turn this off just at the word Bayesian. Don’t be afraid of it. We are going to break it down for you, and you’ll begin to understand how to use it in your own life and in your own arguments. Now, until recently, I myself didn’t know about Bayesian reasoning. I’d heard of it. I’d heard it mentioned here and there by ID theorists and in the literature about design arguments. And it’s an idea that really appeals to me. We know that science is provisional and it progresses as hypotheses get tested again and again by different people at different times and in different settings, until a preponderance of evidence points definitively to a likely conclusion. And given that provisional state of the scientific pursuit, it’s often helpful to have a way to evaluate or quantify the probability P of a hypothesis H in light of the evidence E. So a little historical context as we get going here. Bayes’ theorem originates with the 18th-century English statistician, philosopher, and Presbyterian minister Thomas Bayes, who was the first to develop a specific case of the theorem that bears his name.

Now, interestingly for you and me, Jonathan, Bayes was born in England but went to the University of Edinburgh in Scotland to study logic and theology. Did you know that?

[00:02:34] Jonathan McLatchie: I think I did, but I’d forgotten.

[00:02:36] Andrew McDiarmid: Okay, well, he was elected a fellow of the Royal Society in 1742, and he never published on his theorem during his lifetime. His notes were actually gathered and published posthumously. Later, Bayesian reasoning grew in popularity and use with other scientists, such as Pierre-Simon Laplace, who built on Bayes’ theorem and generalized it. And today it’s one of the most important formulas in all of probability; it’s been central to scientific discovery for the last two centuries. It’s also, interestingly, used as a core tool in machine learning and AI, which brings it all the way up to this current moment that we’re living in now. Jonathan, you’ve written three articles for EvolutionNews.org about applying Bayesian reasoning to the design hypothesis. Let’s take some time to unpack your points.

First, what is Bayes’ theorem and how does it relate to our understanding of evidence?

[00:03:33] Jonathan McLatchie: Sure. So Bayes’ theorem is a way of probabilistically modeling our assessment of evidence and how we update our confidence in conclusions in response to new data. The way that a Bayesian would conceptualize evidence is in terms of a likelihood ratio: the probability of the evidence existing given that the hypothesis is true on the numerator, and on the denominator, the probability that that same evidence exists given that the hypothesis is false. And to the extent that that likelihood ratio is top-heavy, that is the extent to which you have evidence for your hypothesis. The value of that ratio is known as the Bayes factor. So, for example, if you have a ratio of five to one, then the Bayes factor is five. And so that is essentially what Bayes’ theorem is: a way of probabilistically modeling how we update our confidence in our hypotheses in response to new data.
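
In symbols, the likelihood ratio (Bayes factor) described here is usually written:

$$
\mathrm{BF} \;=\; \frac{P(E \mid H)}{P(E \mid \lnot H)}
$$

so a Bayes factor of 5 means the evidence is five times more probable if the hypothesis is true than if it is false.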

[00:04:36] Andrew McDiarmid: Okay. And again, listeners, don’t get scared away by the mathematical concepts here. In fact, I saw one video recently that posed it as a geometrical idea. So think of a one-by-one square filled with little squares, representing all of the possibilities. If you have a hypothesis, that hypothesis is going to fill in some of that square, but not all of it. And within that hypothesis, you’re going to have evidence that matches your hypothesis, which is an even smaller portion of the one-by-one square. So you end up with something between zero and one as your probability. And Jonathan will help us go a little bit deeper into this. So, Jonathan, give us an example just to wrap our heads around Bayes’ theorem.

[00:05:38] Jonathan McLatchie: So, imagine a court scene where the forensic expert or the detective comes forward and presents the murder weapon that was involved in a homicide. And on the handle of that murder weapon are the fingerprints of the accused. Now, does that prove that the defendant is guilty of committing the murder? No, not necessarily. There could be other explanations for how his fingerprints happen to be on the murder weapon, but it is a lot more probable, supposing that he is guilty, that we’d find his fingerprints on the handle of the murder weapon than it would otherwise be. And so it tends to move the needle in the direction of a guilty verdict. And then there might be, of course, other evidences that one brings to bear. Perhaps there are tread marks in the mud close to where the homicide took place that match his pair of shoes, for example, and so forth. That, again, doesn’t necessarily prove that he’s the murderer, but that observation is more probable, given that he is in fact guilty, than it would otherwise be. And so that’s an example to help illustrate what we mean when we say that a particular piece of evidence tends to confirm a hypothesis.

[00:06:51] Andrew McDiarmid: Okay. And remember, in that court case, the jury is reaching a decision based on the evidence and beyond a reasonable doubt, right? So we’re not going for 100% proof here. That’s very difficult to get when you’re studying the natural world or even the human world. You’re going for something that is beyond a reasonable doubt, that is way more likely than unlikely, and you’ll understand that as we go here. Well, what are the basic components, Jonathan, of Bayes’ theorem?

[00:07:26] Jonathan McLatchie: So, Bayes’ theorem involves the probability of the evidence existing given that the hypothesis is true on the numerator, and on the denominator, the probability of the evidence existing given that the hypothesis is false. And then you multiply that by what probability theorists call the prior probability, which relates to the intrinsic plausibility of the hypothesis being true based on the background information alone. That, too, is expressed as a ratio, the prior odds. And so the more intrinsically implausible a hypothesis is, the more evidence you need in order to overcome that intrinsic implausibility. This is called the odds form of Bayes’ theorem, which is the form of Bayes’ theorem that we use for developing cumulative cases.
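
Written out, the odds form of Bayes’ theorem being described is:

$$
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{Bayes factor}}
$$

The prior odds capture the intrinsic plausibility of the hypothesis before the evidence is considered; the Bayes factor captures how strongly the evidence discriminates between the hypothesis and its negation.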

[00:08:12] Andrew McDiarmid: Okay. And I alluded to some of these in my introduction here, but what are some of the letters you’ll find in Bayes’ theorem? You have P, which stands for probability, then H for hypothesis and E for evidence. Okay. So we’re starting to form this in our heads. Now tell us, how does Bayes’ theorem help us build a cumulative case? You might have one piece of evidence here and there that by itself is weak. But how does this help us build a cumulative case for a given hypothesis?

[00:08:43] Jonathan McLatchie: Sure. So let me express this mathematically, and then I’ll give an illustration. Suppose that the probability of a particular piece of evidence, given your hypothesis, is 0.2, but the probability of that same evidence, given the falsity of the hypothesis, is 0.04. Then the ratio of the probability of the evidence given the hypothesis against the probability of the evidence given the falsity of the hypothesis would have a value of five to one, or just five. That is what we call the Bayes factor. If there are multiple pieces of independent evidence, then their power, of course, accumulates exponentially. So five such pieces would yield a cumulative ratio of 3125 to one.

If the initial ratio were two to one, and you had ten pieces of such evidence, all independent, then the cumulative power of that evidence would be more than 1000 to one. And so you can see that lots of different pieces of evidence, each by itself not carrying particularly great weight, can amount cumulatively to a very powerful case for the hypothesis being true. So, to go back to our court scene, imagine that we have evidence that the perpetrator is six feet tall, that there are particular tread marks in the mud near the scene that match his shoes, that there are fabrics that match his sweatshirt, and so forth. Now, a defense attorney might say, well, how many people with that particular sweatshirt do you think there are in the city? Or how many people with that particular set of shoes do you think there are in the city? And so forth. But in order to mount that defense, you have to scotch-tape together a lot of different auxiliary hypotheses to make the data fit. And each of the evidences independently might not be of overwhelming weight. But when you take all of the evidences cumulatively, then you have a very, I would say, overwhelming case that indeed this defendant is guilty of the crime, even if you wouldn’t be able to establish that from each of the individual pieces of evidence taken in isolation.
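
As a rough sketch of the arithmetic in this answer, here is how the cumulative Bayes factors multiply. The 0.2/0.04 example, the five pieces of evidence at 5-to-1, and the ten pieces at 2-to-1 are taken from the discussion; the helper function names are just illustrative.

```python
def bayes_factor(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Ratio of P(E|H) to P(E|not-H) for a single piece of evidence."""
    return p_e_given_h / p_e_given_not_h

def cumulative_bayes_factor(factors: list[float]) -> float:
    """Independent pieces of evidence compound: their Bayes factors multiply."""
    result = 1.0
    for f in factors:
        result *= f
    return result

single = bayes_factor(0.2, 0.04)                       # 5.0, i.e. 5 to 1
five_pieces = cumulative_bayes_factor([single] * 5)    # 5**5 = 3125
ten_weak_pieces = cumulative_bayes_factor([2.0] * 10)  # 2**10 = 1024

print(single, five_pieces, ten_weak_pieces)            # 5.0 3125.0 1024.0
```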

[00:10:44] Andrew McDiarmid: Okay, that’s understandable. Now, does the probability of the evidence given the hypothesis have to be high in order for a piece of evidence to strongly confirm a hypothesis?

[00:10:54] Jonathan McLatchie: No, this is a common misunderstanding, because the probabilities in the likelihood ratio do not need to add up to one. Right? So even if you have a low probability of the evidence existing given the hypothesis, if the probability of that same evidence existing given the falsity of the hypothesis is much lower, then you could still have, in principle, a very strongly top-heavy likelihood ratio, and therefore strong evidence for your hypothesis. So, for example, my colleague Dr. Timothy McGrew, an expert in Bayesian reasoning and a philosophy professor at Western Michigan University, likes to give the following illustration. Imagine that you are hiking in a forest somewhere, far away from roads and civilization, and you come across a cabin in the middle of this forest, and you wonder whether it is inhabited. So you decide to inspect, and you pry open the door, and you find a cup of English breakfast tea that is still steeping, not at room temperature, sitting on the table inside this cabin.

Well, on the hypothesis that the cabin is inhabited, would you have predicted strongly that you would find a cup of specifically English breakfast tea still steeping on this table? Well, no, it’s actually quite a low probability, on the hypothesis that it’s inhabited, that you would make that observation. Nonetheless, it is far more probable on the hypothesis that the cabin is inhabited that you would make that observation than it would otherwise be. If the cabin is not inhabited, then it’s astronomically improbable that you would make that observation. And so, even though the numerator is not particularly high, you nonetheless have, in this case, a very strongly top-heavy likelihood ratio, such that the evidence overwhelmingly confirms the hypothesis.
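
To put illustrative numbers on the cabin example (these particular values are assumptions for the sake of the sketch, not figures from the episode), a small numerator can still yield an enormous Bayes factor:

$$
\frac{P(E \mid H)}{P(E \mid \lnot H)} \;=\; \frac{0.01}{10^{-6}} \;=\; 10{,}000
$$

Even if the tea observation is only one-in-a-hundred probable on the inhabited-cabin hypothesis, it favors that hypothesis by ten thousand to one if it would be one-in-a-million otherwise.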

[00:12:48] Andrew McDiarmid: Okay, I like that example. Makes me want to take a hike in the mountains. Well, when do we typically use Bayes’ theorem? We don’t use it all the time in science. When are some good times that we would want to employ this?

[00:13:01] Jonathan McLatchie: Well, we wouldn’t use it, of course, for propositions that are analytically true, such as “there are no married bachelors.” The statement that there are no married bachelors can be shown to be true just by virtue of the meaning of the constituent terms: to be a bachelor means to be unmarried. Or “two plus two equals four,” which is just true by virtue of the meaning of “two,” “plus,” “equals,” and “four.” But when we’re dealing with probabilistic analysis, I think Bayes’ theorem is an appropriate tool to help us model our assessment of the pertinent evidence.

Notice that you don’t always need to be able to plug in specific and well-justified values into the Bayesian calculus. Oftentimes, it’s just a way of modeling the degree of confidence that is justified by the assumptions that we make about the probability of the evidence given the hypothesis, given the falsity of the hypothesis, and so forth. So long as you’re explicit about what values you’re plugging in and what justification, if any, you would put forward for them, then you don’t need very precise values. I would apply Bayes’ theorem in historical inquiry as well, and there, of course, it’s very difficult to put precise values in. But again, as I said, it’s a way of modeling our assessment of the pertinent evidence.

[00:14:33] Andrew McDiarmid: Okay. And you don’t need to be a scientist to do this. I mean, folks at home can do this. If you have a problem you want to work through and you want to evaluate the likelihood of a hypothesis given the evidence, you can work this calculus out by yourself. You can sit there and you can do this. It is possible. And that’s what makes it quite interesting.

I’ve seen examples where the question is, given a description of somebody that is considered evidence about that person, is this person more likely a librarian or a farmer? And you just have to weigh it, you know, based on how many farmers there are relative to librarians, and just kind of think that through. You don’t have to have exact numbers, necessarily. It’s all about ratios and probability. Well, Jonathan, let’s apply this to the design argument, because, after all, this is ID the Future. We’re talking about intelligent design. How might we apply Bayes’ theorem to arguments for design or the existence of God, including biological design, fine-tuning, and cosmological arguments?

[00:15:38] Jonathan McLatchie: Sure. So the way that I would express it is this. Let’s take the biological design arguments to begin with: the information content of the cell, for example, and the irreducibly complex nanomachines that run the show in life. These observations are not particularly surprising if we suppose that a mind is involved. After all, information content, especially in digital form, is habitually associated with conscious activity and intelligent beings. And so if we suppose that a mind is involved, then it’s not hugely improbable that we’d find information content in the cell, and that we’d have information processing systems, and that we’d have irreducibly complex machines.

But on the other hand, it is overwhelmingly improbable, I would argue, that such information-rich systems and irreducibly complex machinery would exist on the falsity of the design hypothesis. And so you have this overwhelmingly top-heavy likelihood ratio. Likewise with the fine-tuning argument for the existence of God. I would argue that, supposing the God of classical theism exists, it wouldn’t be particularly surprising for him to create a universe inhabited by moral agents such as ourselves, who can interact with each other and mold and shape their character and engage in moral decision making and so forth. On the other hand, it is extremely improbable that such a state of affairs would exist on the falsity of design.

And to demonstrate this, I would appeal to, among other things, the fine-tuning of the laws and constants of our universe for the existence of embodied sentient life such as ourselves: the cosmological constant, the ratio of the strong to the weak nuclear force, and so forth.

And so you have a top-heavy likelihood ratio that favors a theistic hypothesis or a design hypothesis. As for the cosmological argument, I don’t personally use the deductive form of the cosmological argument, which is to say that everything which begins to exist has a cause; the universe began to exist; therefore the universe has a cause of its existence. Philosophers like William Lane Craig, for example, will then try to infer certain features of that cause. That’s not the approach that I take. I take more of a probabilistic or inductive approach to the cosmological argument. I’d make a more modest argument, which is to say that the fact that the universe, as far as we can tell, had a beginning in the finite past is more probable on the supposition of theism than on atheism. And so it tends to confirm a theistic hypothesis. If the steady state theory had turned out to be correct in the 20th century, then atheists would quite appropriately champion that as evidence in favor of atheism, because it sits far more comfortably on an atheistic worldview than on a theistic worldview. And I think the same thing is true of the converse model, namely the Big Bang cosmological model, which maintains that the universe in fact began to exist. Now, a common pushback we get when we put forward the arguments for intelligent design is the God of the gaps objection, right? Which is to say that you are basically putting forward your designer, or God, and using him as a placeholder for that of which we are ignorant. Intelligent design doesn’t really work like that. And the way that I’m expressing the argument in terms of Bayes’ theorem helps to illustrate why it’s not a God of the gaps argument, because we’re actually making a positive inference. And so whatever else may be wrong with the argument, it’s not wrong by virtue of being a God of the gaps argument.

Lydia McGrew, who’s a colleague of mine, published a paper in 2004 in the journal Philo titled “Testability, Likelihoods, and Design.” And she explains that it’s in fact quite easy to conceive of a scenario where we know the likelihood on the hypothesis of chance is very low, but we nonetheless do not have evidence for the likelihood on the hypothesis of design being higher. So, for example, suppose we were to find a small cloud of hydrogen molecules floating in interstellar space in which the molecules were not dispersing, without sufficient mass for the cloud to be held together by gravity. Such an observation would be an anomaly, given our present understanding of physics. But even though such an observation would be seemingly improbable on the hypothesis of natural law, there would be no reason to think that the hypothesis of design is a better explanation. After all, there is no independent reason to think that a designer would likely cause a small cloud of hydrogen molecules to clump together. Right? On the other hand, with the sorts of features that we’re talking about in biology, such as the information content of the cell and the fine-tuning of the laws and constants, it’s quite easy to see why those would be more probable given the hypothesis of design than given its falsity. And so that hopefully helps to illustrate that this is not a God of the gaps argument that we’re putting forward. McGrew further points out that it’d be quite a different story if, in the distant future, we were able to capture high-resolution images of Alpha Centauri, which is the nearest star to the Earth after the sun, and we were to discover that a Volkswagen Beetle was orbiting a planet there. In that case, the probability of a Volkswagen Beetle being there would be much, much higher on the hypothesis of design than on its falsehood. So long as there is a positive reason to think that the evidence is more probable on a design hypothesis than it would otherwise be, you have a justification for design, and to the extent that that likelihood ratio is top-heavy, that is the extent to which the evidence is compelling for your hypothesis.

[00:21:22] Andrew McDiarmid: Okay, very interesting. Well, one of the components of Bayes’ theorem, as we mentioned near the beginning, is prior probability, which is the probability of a hypothesis without considering the evidence. Now, tell us how that figures into design. How would we go about evaluating the prior probability of design?

[00:21:43] Jonathan McLatchie: Yeah, so the prior probability is the intrinsic plausibility of the hypothesis being true, given the background information alone. So we could imagine, to take another illustration, what’s the intrinsic plausibility that any given individual won the lottery in the United States? Well, let’s say there’s a one in 300 million probability of any particular individual winning the lottery. Now, if your neighbor comes to you and says, “I won the lottery last night,” you might be inclined to be a little skeptical, because there’s such an overwhelmingly low intrinsic plausibility of any particular individual winning it. But suppose that you were then to observe that he gave up his job, he bought himself a really fancy new car and home, he is taking lavish and expensive trips to the Caribbean, and he’s even able to show you a certificate of his winnings or the check that he received, or what have you. Well, even though the initial intrinsic plausibility of him actually winning the lottery was extremely low, the evidence that he was able to adduce was sufficient to swamp that intrinsic implausibility. And therefore, even though the intrinsic plausibility was initially low, the posterior odds of him actually having won the lottery are nonetheless high. Furthermore, if you have independent lines of reason to think that the hypothesis under review is intrinsically plausible, then you can make an argument that the initial prior probability is not especially small. So, in the case of intelligent design and our inferences to design in biology, we have independent reasons, I would contend, to already think that a mind is involved in the origins of our cosmos, including the fine-tuning of the laws and constants of our universe and the prior environmental fitness of nature that Michael Denton writes about in his books, such as The Miracle of Man, The Miracle of the Cell, and The Wonder of Water. And so these are all positively relevant to the intrinsic probability that there might be a designer behind our universe who would have a vested interest in creating life such as ourselves. And so I don’t actually think that the prior probability of the design hypothesis, when it comes to biology, is very small. And you have this multidisciplinary argument for design based on both the physical and the life sciences.
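
Here is a minimal sketch of the lottery illustration using the odds form discussed earlier. The one-in-300-million prior comes from the example; the combined Bayes factor assigned to the neighbor’s evidence (new car, winnings certificate, and so on) is purely an assumed value for illustration.

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * Bayes factor.
prior_odds = 1 / 300_000_000        # intrinsic plausibility that the neighbor won
combined_bayes_factor = 1e10        # assumed strength of his combined evidence

posterior_odds = prior_odds * combined_bayes_factor
posterior_probability = posterior_odds / (1 + posterior_odds)

print(f"posterior odds ~ {posterior_odds:.1f} to 1")            # ~33.3 to 1
print(f"posterior probability ~ {posterior_probability:.3f}")   # ~0.971
```

The point of the sketch is simply that even a vanishingly small prior can be swamped when the evidence is vastly more probable on the hypothesis than on its negation.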

[00:24:14] Andrew McDiarmid: Okay, well, there is another objection that I wanted to run by you. What about those who say we don’t have experience of non-human designers, not to mention non-material ones? How would you push back on that when it comes to evaluating probability?

[00:24:29] Jonathan McLatchie: Sure. This is also a very common objection, particularly in debates, where one points out that we don’t have experience with non-human designers, much less non-material ones, which, of course, any plausible contender for the role of designer would have to be. But as Lydia McGrew points out in her 2004 article that I mentioned previously, and I’m quoting, “any attempt to use frequencies either to make a straight inductive inference or to construct a likelihood for a Bayesian inference must confront the problem of induction.”

So, in other words, what she’s saying is that it’s always a possibility that a group which differs in some way from the sample already analyzed differs also with respect to the very quality in which we are interested, right? So she gives an analogy: there is always a possibility that a prehistoric civilization did not have the ability or desire to make arrowheads. Nevertheless, if we discover arrowheads that date to prehistoric times, it becomes irrational to reject the inference that the prehistoric arrowheads were produced by an intelligent agent simply because their maker lived in a time period different from the other makers of arrowheads with whom we are familiar. And so why then, asks McGrew, should there be special hesitation about a similar use of induction when it comes to extrapolating from a known group to an unknown group in the ID debate? Why should the gap between human and non-human agents be of greater epistemic significance than that between human agents living in different time periods? It seems quite arbitrary. And so the same principle may be applied to the objection that we have never observed intelligent agents design living objects, even those which are information-rich and contain irreducibly complex machinery. But again, as McGrew points out in her article, reference class difficulties always occur when induction is used. Indeed, an event or object may always be defined in such a way as to make it unique with respect to some of its properties. So I think that expressing the argument in terms of Bayes’ theorem actually helps to address some of those popular objections to a design perspective.

[00:26:36] Andrew McDiarmid: Makes sense. Well, Jonathan, for those who want to learn more about adopting a Bayesian approach to reasoning, where can they turn?

[00:26:43] Jonathan McLatchie: So I have a few articles at Evolution News & Science Today from a while back, from 2020, so a few years ago, and I’d recommend checking those out. I’m sure you can include those in the show notes for this interview. I’d also recommend the McGrew article that I mentioned from 2004, which basically fleshes out how the design inference can be hashed out in terms of Bayes’ theorem. There’s also another article in the journal Philo, by Tim McGrew, Lydia’s husband, from 2005, on the subject as well. I link to these in my essays, so you can check those out. Another good book is Richard Swinburne’s The Existence of God, where he parses out his arguments for theism in terms of Bayes’ theorem as well. So there are a few resources that you might want to check out.

[00:27:38] Andrew McDiarmid: That’s awesome. And for those of you who have Stephen Meyer’s book Return of the God Hypothesis sitting on your shelf or your coffee table, Meyer’s discussion of Bayesian reasoning is in chapter eleven. And again, as Jonathan mentioned, you’ll find links to these resources in the show notes in the podcast description at idthefuture.com. Well, Jonathan, thanks very much for stopping by and sharing your expertise on this.

[00:28:03] Jonathan McLatchie: Thank you. Great to be here.

[00:28:05] Andrew McDiarmid: To get more of Jonathan’s work, visit his website, jonathanmclatchie.com. For ID the Future, I’m Andrew McDiarmid. Thank you for listening.

[00:28:15] Jonathan McLatchie: Visit us at idthefuture.com and intelligentdesign.org. This program is copyright Discovery Institute and recorded by its Center for Science and Culture.