Episode 304: The Planes Don't Land
Very Bad Wizards · March 11, 2025
01:40:28 · 115.19 MB


What has four thumbs and can effortlessly glide from the a priori to the a posteriori in a single episode? These guys. In the first segment we tackle a brand new paper called "Being Exalted: An A Priori Argument for the Trinity." That's right, the Holy Trinity arrived at through reason alone. Then in the main segment we talk about Richard Feynman's classic 1974 Caltech commencement address "Cargo Cult Science." Does Feynman's metaphor suggest that whole paradigms might be systematically misguided? Or is he just admonishing social scientists to maintain their integrity and use more rigorous methods? As you might imagine, a fight almost breaks out in this one.

Moore, H. J. (2025). Being Exalted: An A Priori Argument for the Trinity. Sophia, 1-23. [link.springer.com]

Cargo Cult Science by Richard Feynman [caltech.edu]

Interrogating the "cargo cult science" metaphor by Andrew Gelman and Megan Higgs [columbia.edu]

[00:00:00] Very Bad Wizards is a podcast with a philosopher, my dad, and psychologist Dave Pizarro, having an informal discussion about issues in science and ethics. Please note that the discussion contains bad words that I'm not allowed to say, and knowing my dad, some very inappropriate jokes. Ooh, that's a bingo!

[00:01:10] Welcome to Very Bad Wizards, I'm Tamler Sommers from the University of Houston. Dave, today the latest in new AI news on the Ezra Klein Show, your boy, your show. Ezra Klein hosts a guest that says that the government knows that AGI, artificial general intelligence is coming and we're not ready.

[00:01:33] These things are coming out every day right now, new warnings about AI. Are we wrong to be contemptuously dismissive of all of this panic? Like I don't want to be smug about this. I could always be wrong, but there's no fucking way. Like there's just no fucking way. Like this is snake oil. It is absolute snake oil.

[00:01:57] Like I think these people, I think they're really convinced, but I actually don't know. Like a lot of them I think are just hyping up their own businesses and their own interests. And given what it takes, like even to conceive of what an artificial general intelligence would do, this is not like let's get a few more graphics cards crunching out numbers and we're going to like finally crack it. It's right on the horizon. In some ways it's already here.

[00:02:25] It's like, I'll believe it when I see fully self-automated driving, which seems like a lower bar, you know? Well, I mean, there are, there is finally some example of that. Like I have friends who have gone in. In those like taxis. Yeah. In those taxis and they seem to work. Yeah. In very constrained environments like in San Francisco or something. Like things that are super duper mapped out well. Are you saying because there are a lot of gays and they're very well organized? I don't know exactly how that works, but yes.

[00:02:55] They're unwilling to drive their own cars because of the gay things they're doing. But they've arranged everything very neatly so that the cars can easily navigate through traffic. Like, yeah. So like I kind of agree. And you know, like obviously Robert Wright is like most of what he does now is on this issue. I don't think he's like hyping it for his own personal benefit, although he is writing a book on that issue.

[00:03:24] But I really don't think he is like, in fact, every time I talk to somebody who's legitimately concerned about it, they look at me shocked that I'm not like I don't see it. It's like it's obvious. Eddy Nahmias was just here. He was like, what do you mean? Like this is we're fine. So, I mean, there are two parts of the argument. One is that artificial general intelligence will, you know, emerge. And the other one is that it's going to fuck us. And like, I feel like that second one is usually not justified by anybody.

[00:03:54] They just say, well, like obviously we're fucked if it emerged. But is the first one even justified? I don't think that it is, but I'm not quite sure why they're convinced that once AGI hits, it's going to be like worse for the world. Is it because of that like whole, you know, paperclip problem? Like they're just going to be very efficient at doing whatever it is they want to do. And that might include killing all humans to do it. Is that? No, I don't think it's like I actually don't think it's that I don't think it's they're going to kill for most of the people who I consider like respectable.

[00:04:23] It's more just that it's going to take everybody's job and it'll be able to do basically anything that anyone can do in front of a computer, you know, and we'll start creeping into all these other areas like nursing and customer service. And, you know, it already is in its own shitty way going into things like customer service and I'm sure dealing with health care. And teaching.

[00:04:50] I mean, people's that's see, that's the thing. Like, that's another way because like soon you'll just take a course taught by AI. I just that's what I don't think like. No, no. Yeah, I think that soon you're just going to get all your papers from students written by AI. And so you have to design like your course to deal with that. But that's a real threat that is already here. Right. And it doesn't require AGI at all. No, no, no. Right.

[00:05:13] Yeah. Look, like in principle, I think that I as a materialist of the kind that I am, I think that at some point you could emulate a human brain and create an intelligence that works. I just don't think that LLMs are the way to do it. And that's just a gut thing. Maybe computers can't do it because maybe it's embodied and maybe like, you know, there's something about our bodies that can't be mimicked by a computer. Why aren't people more worried about like monkeys unlocking the power of language? You know, like that seems closer.

[00:05:42] Or dolphins. They've already unlocked it. They're just, they're in the planning stage right now. Yeah. If we put all the resources we put into stopping evil AI, like into killing dolphins, to just slaughtering as many dolphins as we could. Or like gorillas, you know, gorillas have probably artificial, I mean, have natural general intelligence of some sort. Yeah. We took care of them pretty easily.

[00:06:07] Yeah. Like we fucked up a bunch of gorillas and the dolphins won't even step to us. So bring it fucking graphics cards. Yeah. But, you know, it does feel like we're in the minority, like on Severance episode seven, like everybody else disagrees with us. So, yeah, yeah, I don't, I don't understand it. It's again, one of the things I just don't understand. I'll, I'll eat my words if it happens.

[00:06:34] Yeah. Like, you know, maybe if we get fired and like everyone's like, ha ha, now do you believe it? I'll be like, yeah, I do. But like, I won't regret a thing. It's not in my temperament to worry about something like that. I don't even worry about like my retirement. No. We're going to do this until we die because we're not going to have it. But not that this, this isn't our opening segment, but I'm kind of looking forward to having a robot butler. I got to admit, like I'm waiting for that.

[00:07:00] I don't care if it brings on the apocalypse as long as like the last few years of my life, like I have a robot that can vacuum and make me eggs or whatever. You have a, there is a robot that can vacuum already. It's called Roomba. It's not very smart. It's like so far from artificial general intelligence. That's fine. But it can vacuum your room. It's here. The future is now. I want one to make me eggs too. Like the same thing. You want like the Jetson, you want the Jetsons robot? Yes, Rosie. Rosie. Who was like vaguely African American for some reason.

Was she? I don't think I ever saw her as that. She was coated black, I swear. I swear. All right, listeners, you can weigh in on this and cancel Dave, not me, because I did not see that. I thought she was more like Flo. Flo from Mel's Diner? Yeah. She's kind of sassy. Kiss my grits. Kiss my grits. All right, what are we actually talking about?

[00:08:02] Okay, no, we're talking about something much more that is actually a threat to your kind of atheistic worldview. Your projection of things is like one of my favorite parts of these intro segments. That you're an atheist? By watching Ezra Klein or listening to Ezra Klein. But you are an atheist. You know, I'm one of those agnostics who still loves God just in case. Okay.

[00:08:24] Well, now you can go from agnosticism to full-on believer in Christianity, which is what you already were. But in case you were flirting with Judaism, nope, it was wrong. God, I kind of love this paper. So this is by Harry James Moore.

It appeared recently, just this past November, in Sophia, a journal of the philosophy of religion. And it's called Being Exalted, an a priori argument for the Trinity. I saw this and I was like, this has all of the key words that are going to make Tamler love it. Well, I think that's a little bit of a stretch. But do you want to read the abstract? Yeah, okay. So it says,

This paper presents an original a priori argument for the existence of the Holy Trinity. Capitalized, because it would be blasphemy not to. The argument is based on the notion of exaltation. It will be argued that being exalted is a great making property. And that a divine individual as possessing all such properties must also possess the property of being exalted. For a divine individual to possess this property, a second divine individual must exalt the first. Since only in this way do we avoid both the hubris of mere self-exaltation and avoid the danger of necessitating creation.

It is finally argued that third party recognition of this exaltation renders it a more perfect form of exaltation. The paper begins by considering the most important historical examples of a priori arguments for the Trinity to emphasize the centrality of these arguments within Christian philosophy providing justification for the development. The argument will then be presented before such objections are raised. Problems with perfect being theology, something I didn't know existed. Problems with social Trinitarianism and the problem of anthropomorphism.

[00:10:10] And he does, as promised, start with a history of a priori arguments for the Trinity, which I, I mean, I'm not well versed in this literature, but I didn't know that there was this history. Me neither. And yeah, like the most surprising part of the paper to me was all of this previous work trying to get a priori arguments for the Trinity. Like I thought this was like sideways music. This was something this guy came up with when he was stoned or something. And no, there's a whole like literature for the last like 2000 years on this.

So just to give you an example, a little taste, he talks about Augustine's a priori argument. And it says in book nine of De Trinitate, Augustine dwells on the notion of divine love after reminding us that God is love in case we forgot. 1 John 4:8, for those who didn't know this. Yeah. Augustine proceeds to argue that love contains precisely three moments. Love contains precisely three moments.

That's great. When I who conduct this inquiry love something, then three things are found. I, what I love and the love itself. For I do not love unless I love a lover, for there is no love where nothing is loved. There are therefore three things, the lover, the beloved and the love, Augustine of Hippo. For Augustine, this insight allows us to see ourselves in the image of God. And yeah.

[00:11:35] So to be fair to Augustine, he didn't mean it as an a priori argument for the Trinity. He says such an observation can be clearly reformulated into an a priori argument suggesting that God is love, that he must like love itself, consist of three aspects or moments. So is there a history or can we reformulate things into making a history? I think maybe you got to fast forward to like the 20th century to get the real hard hitting a priori.

You need to get to where this was a professional thing. Yeah. Although it is interesting that Augustine sounds so, so analytical. He could get that way, I think. Yeah. I forget. I've read it before, but I forget. You think it's all about stealing pears and like fucking bitches, but. That's what I remember. Yeah. That's. Can I ask you a question? The great making property. Yeah. Like the structure of those things, like great making. Is that like a recent thing?

[00:12:35] Like noun hyphen verb? Oh, yeah. No, I see what you're saying. Yeah. Like it's a something making property or. Yeah. There is, I'm sure, an analytic philosophy. Something like that. It's like the hot new shit. Well, yeah. I don't know if it's new, but it's been around. Um, he goes through when we get to the premises, like he gives a, you know, an explication of that kind of. Right. But you're right. There is something in just normal analytic philosophy.

[00:13:03] I think that is. It's like a weird construction of just like rather than saying like it makes things great. It's like a great. It's a great making. That's so like analytic philosophy 101. It's like you have to make something sound like, and I wonder what that is. It's like, oh, it has this property. There's something that gives it a kind of objectivity or something like that. Yeah.

[00:13:29] So as an, as just a sort of meta comment, I don't know what you thought of this because even though, you know, like the religious upbringing that I have means that I'm familiar with quoting scripture as a way of making a point, as a way of arguing a point. I'd never really seen this combination of like scripture as like specific input into premises that are then used in this like analytic way. It seems like, like a bringing together of two things that don't, shouldn't really go together.

It's like, well, Isaiah said this specific thing and then like using analytic philosophy to see what follows. The way I read it, I also have never seen just like, this is an a priori argument. So we shouldn't need some other reference point to make it like Descartes didn't quote scripture in making his a priori argument for, you know, like the existence of a soul or whatever.

[00:14:22] Like, so after you go through the whole history, you get to the argument for exaltation. Exaltation. And here's where like you just get flooded with references. He says, the inspiration for the following argument is found in the book of Isaiah, where we come across the four servant songs. These songs have been read by both New Testament authors, Matthew, Matthew, different part of Matthew, Peter, John and Acts and subsequent church fathers,

[00:14:49] including Augustine, Gregory of Nyssa and Cyril of Alexandria as messianic prophecies concerning Christ. So he gives a lot of this and then he, you know, he talks about glorification and that's through the gospel of John. And yeah, you get this whole thing. But then when he gets to the actual argument, which we can go through the eight premise argument, I don't think he uses those references as a way of defending the premises. Yeah.

[00:15:18] Well, that section is what I was referring to. So maybe it's more subtle in the way you're saying, but he does get to like exaltation through scripture. Like that he chose the property exaltation as a feature of God and, you know, God incarnate in Jesus. And then he uses that as the like, okay, so this is where Augustine was talking love. Like I'm going to use my argument as exaltation, inspired by the scriptures discussion of exaltation.

[00:15:43] I guess, well, you could read it both ways, but I think what he's saying is this is where I was inspired. I can derive this, you know, you can reformulate this from the gospels. But, you know, when I actually get to defending all the premises, I'm not going to. But like, what does it matter? Yeah. And I think we're saying kind of the same thing because all I'm saying is like, it's weird for me to see the part where it's like, and we know that God is exalted from all of these texts. And so then now I'm going to get into the pure analytic argument.

[00:16:13] It's true. The pure analytic argument doesn't require this. Yeah. Yeah, that's right. I mean, the only time you see this in regular analytic philosophy sometimes is in history. When you're talking like about a history journal or something like that, where you want to situate your claims within. So maybe it's like that. Maybe it's like, who knows? It's not like, I don't think either of us read this super closely. I gave it a good shot. You did? That's good. All right.

[00:16:38] So let me read the argument and then you can say where, if anywhere, you disagree. So one, being exalted, in quotes, is a great making property. I mean, just like you could spend a whole episode just talking about that idea. Like what has society come to that this is something that, okay, but whatever. Two, a divine individual must possess all great making properties.

[00:17:08] Okay. Yeah. A divine individual must possess the property of being exalted. This comes from one and two. Only another divine individual could exalt the first divine individual. Interesting. There are at least two divine individuals. That comes from four. Right. But does it actually? So if God needs to be exalted and only another divine individual could exalt a divine individual,

[00:17:38] then you're left with a minimum of two. It's required by the way it means. But we haven't established that there are exalted beings. We've just established that if there were, it would be a great making property. Well, that's the thing. He's assuming God exists. No, but even if God exists as a divine individual, that doesn't mean that he has to exalt someone. But this is the argument that he's making. Given that a divine individual, by nature of being divine, has to possess all things that are great making,

[00:18:07] being exalted is great making, so somebody needs to exalt it. Yeah, obviously. Sorry. That was my bad. No, it's important that we work through this so that you're, in the end, residing, you know, just like... Become baptized. Okay. There are at least two divine individuals, I'm convinced now. Exaltation requires third-party recognition. Okay. It's like a voyeur, you know? Yeah. No, it's like when you get married, you need, like, a judge, you know,

[00:18:35] to kind of establish it. Witness. Yeah, witness. The exaltation must be recognized by a third, oh, by a third divine individual. So you couldn't have a non-divine individual recognizing it. So there are three divine individuals. I actually did follow it up till this point, but I have a quick question before we go through the premises. This doesn't show that there are only three divine. No.

[00:19:02] In fact, I think premise eight or conclusion eight should read at least three divine individuals and there needs to be more work into it. He does, like, actually talk about why limit it to three. Oh, okay. I didn't get to that point. Okay. So you seem to be quite dismissive of this argument. So which of these premises do you think, you know, you might put some pressure on?
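For reference, the eight steps as the hosts walk through them might be laid out like this. This is a reconstruction from the conversation, not a verbatim quotation of Moore's formulation:

```latex
% Reconstruction of Moore's eight-step argument as discussed above
\begin{enumerate}
  \item Being exalted is a great-making property.
  \item A divine individual must possess all great-making properties.
  \item Therefore, a divine individual must possess the property of
        being exalted. (from 1, 2)
  \item Only another divine individual could exalt the first divine
        individual.
  \item Therefore, there are at least two divine individuals. (from 3, 4)
  \item Exaltation requires third-party recognition.
  \item The exaltation must be recognized by a third divine individual.
  \item Therefore, there are three divine individuals. (from 5--7)
\end{enumerate}
```

As the hosts note, step 8 arguably licenses only "at least three"; Moore's reason for stopping the count at three comes later in the paper.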

[00:19:28] So in general, like, as I was reading this, the leaps that are required to go through all of these eight and come out, like, feeling good about what you just did are pretty big. Like, he's, like, leaping tall buildings in a single bound sometimes when he's moving from one to another. Yeah. It's like some guy that's on a bender trying to convince himself that he can still do this, that, and that, and he won't lose his family. So, I don't know.

[00:19:57] I'm, like, I kind of am already lost at the divine individual must possess all great making properties. Well, no, no, that's just the definition of God. I mean, yeah, I guess, but it seems as if there are great making properties. And he actually talks about this, but I was convinced by the objection more than by his defense, which is what it means to be great making can be pretty context dependent. So, a piano player is great, like, when he's great at playing the piano. Yeah. And a chef is great as a chef.

[00:20:26] But God doesn't have to have all of the great making properties that anybody would have, like, being really good at talking to your daughter or whatever. Like, it seems as if why require that a divine individual have all those? Why can't you come up with a general set of great things that you could just say a divine being has to possess? Right. So, like, normally people will say that God is all powerful and perfectly benevolent,

[00:20:52] but not, like, perfectly playing of the piano or the best father of a daughter or... Right. Yeah. Right. And it doesn't feel like it, by definition, a divine being has to have all of the things it could possibly be. So, you're saying that God can't play the piano. You put God in front of a piano and he's just going to be doing, like, chopsticks and fucking, you know... I don't think he could even... He can't beat Michael Jordan on one-on-one. Wow. Could he beat LeBron? Of course.

[00:21:21] But LeBron would complain to the Holy Spirit. And he might be a divine individual also. Yeah. But there is, like, this flavor of... And I don't know how much... I've never really studied this stuff, but the ontological argument for the existence of God, like, it has the flavor of that, which is the greatest conceivable thing has to exist and that's God. No. And then also, like, but the greatest conceivable thing would have the property of existence. Of existing. Yeah. Yeah.

[00:21:49] And, uh, therefore that, like, it must exist if there is a greatest possible thing. I forget why, but there's some weird explanation for how, where there has to be a greatest possible thing. Yeah. Yeah. That exists. Because existing is... Well, right. But existing is what the greatest possible thing would have to be. But why did we believe that there has to be a greatest possible thing that actually... Oh, because it exists. Yeah. Yeah. Yeah. I know.

[00:22:18] It's actually kind of clever. It's like... No, it works. Yeah. It's so funny because, like, I thought you were going to be even more dismissive of this than me, but you're clearly, like, tempted. You are... I'm trying to see where my reasoning is pulled. This is the tree of knowledge, the apple that you are trying to resist, but you're... I don't, you know... Yeah. So, I'm also not convinced that being exalted is...

[00:22:47] That it's a great making property and therefore is required. But, like, it seems pretty reasonable to me that you could... I mean, it's sort of the same point that I was making before, that you could have a perfect being that's not, like, being exalted. Yeah. Or that imperfect creatures like us could exalt them and that'd be fine. But part of the argument really has to be that exaltation has to be from another divine being because or else the exaltation isn't perfect.

[00:23:12] And, again, kind of in the ontological flavor, the most perfect kind of exaltation is from a divine being. So, therefore, a divine being is required to do the exalting. So, here's what he says about being exalted is a great making property. He says, One familiar example of exaltation is workplace promotion. When increased responsibilities and opportunities are granted in recognition of hard work and progress, then such exaltation is perceived by all parties as a good and desirable end.

Indeed, the hierarchies which structure civil, corporate, ecclesial, and family life all depend on the processes of promotion and exaltation. Within family life, for example, a married couple are, quote, unquote, exalted to fatherhood and motherhood upon the birth of a child. It thus seems that being exalted is a desirable human good. It is a good which is better to possess than not to possess.

[00:24:08] The property of being exalted ought, thus, to be considered as a great making property or, in Kurt Gödel's language, interesting, by the way, a positive property. So, I mean, this is where it gets so slippery, right? Yeah. It's like, oh, the married couple are, quote, unquote, exalted to fatherhood and motherhood upon the birth of a child. Like, what does that mean? What does exalting even mean at this point if you just get, like, a little, you know, you go to associate professor or you have a kid?

Why is that a form of exaltation? I don't know. Like, I lost the thread about what exaltation is because I always thought of it as, in religious context, as just, like, praise. And it is kind of, like, praise. It's like the highest kind of praise. It's like what, like, a saint gets, right? Yeah. Or a king or whatever. Yeah. So, I don't know. That argument doesn't land for me. Bringing up Kurt Gödel is, like, pretty. Yeah. That had to make you do a little bit of a double take. I was like, oh, shit. Spit out your coffee.

[00:25:08] Okay. So, on your thing about the divine individual must possess all great making properties, although you find it unsatisfying. Apparently, it's less controversial for the majority. Yeah. For the majority of theists all believe this. Yeah. Yeah. There is ample scriptural and—see, this is, again, the mixing. There is ample scriptural and philosophical evidence for this claim, which is reflected in the fact that disagreements between different classes of theists seem to presuppose God's overall greatness and possession of as many great making properties as possible.

[00:25:37] Because if you didn't have—if there is a great making property— And God doesn't possess it. Then it's like, well, are you God or are you just someone that's really good at, you know, like, a lot of things? Yeah. All theists thus presuppose God's maximal greatness and thus his maximally consistent possession of as many great making properties as possible. By the way, just, like, think about what he's saying is that for God to be perfect, he needs to be, like, praised by somebody.

[00:26:03] So he essentially emanates or creates or whatever it is that causes a trinity to exist. Like, he's like, ah, man, I need somebody to, like, witness how dope I am. But he doesn't—it's not them that exalt him. If you're already divine, right? Like, you already have been exalted. No, like, the argument is that he needed another divine being to exalt him or else he wouldn't possess the great making property of being exalted. It's like the love argument. He's basically just stealing Augustine's love argument.

[00:26:32] I see, right. I see what you mean also about that this is, like, the ontological argument. So, like, there's one God, like, as Jews believe. Yeah. And then all of a sudden, for that, God realizes, oh, my God, there's this other possible great making property that could exist. Like, nobody else—it's not that other people have it right now. They don't. Right.

[00:26:57] But it's logically possible that exaltation could be a great making property. And so I need to create something that can exalt me. Yeah. I need to step my game up and, like, be exalted. Right. Again, just because of the possibility of that being a thing. Exactly. Yeah. Because it's a great making—yeah. And so, you know, he gets into, is this anthropomorphic, like, to think that— No. —a god needs to—

[00:27:25] Why would you— I mean, I guess you have to consider all objections, but— Can I just say, but, like, there's a—like, I can't save this to the end because I feel like I'm doing the author a disservice. Because the—I don't know if you got to the conclusion, but he, to me, totally redeems himself in his conclusion. Because here's what he says.

[00:28:00] He says, He is on account of this a priori argument for the Trinity. Wait, I don't get that. So, I guess he's saying, like, look, if you already think that the Trinity is, like, a whack, like, impossible concept— Yeah.

This will just be more fuel for your fire. Exactly. Yeah. So, my argument might actually make people who are already convinced that it's false even more convinced. Yeah. Right. Yeah. Well, he has a good line here where he says, after all, one man's modus ponens is another man's modus tollens. Yeah. But it's not—I don't know where the modus tollens is. I don't know how that works, but it sounds good. Yeah. And he says, despite such an apparently disappointing result, I've hopefully shown that a wide variety

[00:28:59] of such a priori arguments can easily be made, and that new versions will continue to be formulated as part of the broader project of ramified natural theology. So, he's saying, like, I think this one doesn't really work, but I'm just showing you that we might get somewhere if we— Right. We might have more of these arguments that will cement the doubts of people who are skeptical of the Trinity as something that's coherent. Yeah. And it's kind of clever.

[00:29:26] Like, I actually think the argument's kind of clever, where he's like, all right, if you really think that being exalted is necessary, you do need somebody doing the exalting. And it makes sense that you need somebody to, like, see that this is going on. So, this is one, can I just—this was actually the premise that I highlighted as one that, you know, like, he takes seriously the worry. He calls it the weakest point in the argument, as is often the case with similar premises in a priori arguments, which argue for a third divine individual.

[00:29:55] So, he says, look, like, corporate promotion requires the recognition of the promotion by other employees. Yeah, I mean, I'm not sure about that, though. Like, if one of us just became, like, VBW boss, like, we wouldn't need someone else to recognize that. No, but it does remind me—I had a friend once who was never really good at business stuff, but he, at one point, decided that he was going to start a company. I don't think it ever got off the ground. Yeah.

[00:30:25] It was him and his, like, brother-in-law, and they both printed out business cards with the title vice president. And it was like, wait, don't you need people under you to, like, just— —as a necessary condition? Yeah, what would we do? Co-CEOs. Co-CEOs. Can we say founders like the tech bros do? Yeah, founders, co-CEOs. Okay, so, but then here's where he loses me.

Furthermore, if we recall Hegel's idea on recognition in the Phenomenology, then we might be a little more sympathetic towards the premise. That's wrong. I highlighted this as one of my favorites. Yeah. And by the way, you know— I actually kind of agree with this, actually. We got Gödel. We got Hegel. Yeah. You know, like, just dropping names. Hegel, in various ways, emphasizes the vital need for recognition. Anerkennung.

[00:31:20] Well, I put this into a deeply Catholic article. In all civil and familial affairs between human individuals, precisely as the constitution of those very individuals, it is therefore not outrageous to suggest that some—it's not outrageous to suggest that some form of third-party personal recognition would render the exaltation of a divine person a more perfect form of exaltation.

[00:31:46] And here's where I honestly think he hits on it exactly. This can also go into our next segment. Some might argue that this mutual constitution is a non-well-founded house of cards. How could one— In quotes. In quotes. In quotes. Non-well-founded house of cards. Yeah, that's a term that we bandy about all the time. That's the famous phrase. A non-well-founded house of cards. A non-great-making well-founded.

[00:32:15] For how could one individual recognize another if they themselves are only there to recognize this other if they themselves are recognized? That one kind of lost me. That one lost me as I was reading it, actually. And I have it highlighted. However, we could use some metaphysical examples of mutual and interdependency to elucidate this claim. So let's abort. Let's pull the cord here. But yeah, you know, look, here's the thing.

[00:32:45] I think that a lot of papers are like this in a lot of fields, you know, and this is clever and modest. It's kind of like, I don't know the degree to which it's clever, but it seems clever. Yeah, it's kind of clever. Like the abstract pulled me in. Yeah. It is a little pompous. I just don't know why people do this. He just quotes French passages without the translation. Wait, you don't understand French?

[00:33:13] I mean, I understood it all perfectly, but I'm saying I felt bad for others. The dumb people. Yeah. I didn't realize, you're right, that I didn't fully make it to the conclusion that at the end he just completely disavows the argument. He just jumps ship. He's just like, peace. I got a pub. I got a publication out of this. And the worst thing is that he actually thinks it could be damaging to people's belief.

[00:33:40] I feel like in the notion of integrity that we might discuss in the second segment, I feel like he should have ended his abstract and I jumped ship at the end. Yeah. I also think you could do an esoteric reading of this as maybe, oh wait, is this a reductio ad absurdum of a priori arguments? Although he never fully established that anybody is trying to do this at all. Yeah. And I doubt he doesn't believe them.

[00:34:06] There was a section we didn't talk about where he's like, what's to stop you from thinking that you need a fourth divine being? Oh yeah. I want to know what he says to that. Yeah. Yeah. Yeah. And so you might have this, this infinite regress. And so he basically says, so look, that third party exaltation that's going on is a qualitatively different thing than the exalting and the being exalted. So that is like a third kind of thing that's necessary for this whole relationship to work.

[00:34:32] If you add a fourth one, I'm arguing, he says, that this is a quantitative difference. Now you're just adding more of the same. Like now you're just adding another third party to do it and you haven't actually added a qualitatively different thing. And so therefore you don't need four, and you definitely don't need... So it's not Occam's razor. Exactly. It's more like a definitional thing where. Yeah. But it has, you're totally right though. It has the vibe of a parsimony appeal. But yeah.

[00:35:01] So here's what he says. Of course, the all too obvious objection. It's like, oh, thanks. I didn't think it was that obvious. Surely this reasoning leads to an infinity of divine persons. To counter this objection, we might begin by reformulating (he does a lot of reformulating) Swinburne's suggestion to show that there is a qualitative difference between the act of exaltation and the recognition of that act. Right. I see exactly what you said. And so, yeah, maybe it's just the definition of the persons.

[00:35:31] Like, you're not even a person because you already have all the great making qualities. So why is it necessary that you, that there even be, wouldn't it just be folded into the third person or the three people? Right. Yeah. And so see, it's a little clever to say, therefore three is what's needed. Yeah. It's such a challenge. Like to do an a priori argument that there has to be three.

[00:35:55] Like if you just gave that as like, you know, like a contest for people that they had to come up with the best argument, like a priori for why there are only three divine beings. Like it's great. Like, well, and it's funny to think that this all rests on, on obviously you believing that there are three, um, you know, this couldn't possibly convince somebody who's, who doesn't believe this, that there are three. I don't think. No.

[00:36:22] But from my perspective, it's like a historical accident that people believe in three. So to have an a priori argument that happens to support that is kind of funny. There is something about it though. Like it's one thing to do the ontological argument for one being in a, like, I do feel like the argument for some sort of single theistic entity, like you could present me with some stuff where even if I didn't subscribe to it, I would get it.

[00:36:49] But three, like, and that specific number, that is a different kind of thing, right? Because it's three, but it's not exactly three. It can sometimes, it's like one with the, so this is a thing I remember when we did Tolstoy that, uh, the confessions, you know, and he just thought the whole thing was just gobbledygook. It was just a clear contradiction. It was incoherent, but that's also what made like the leap necessary.

[00:37:16] Cause you don't need the leap as much if it's something that you can assemble into your worldview, but like three different things that you don't even understand the nature of them or the difference between them or like, that's something you're just going to have to say, like, take me, you know? Yeah. And you're getting to like, what I'm realizing is the heart of my like vague problem with this, which I don't know if it's what you're saying, but it reminded me like of what I

[00:37:43] feel, which is to me, religion really does require like this leap. And you know me well enough to know that I favor like the sort of mystical, like if you're going to believe in three, I don't want an a priori argument for it. I just want you to say like, I don't know, some guy had a vision that there were three faces, and another guy had one, and that's what we believe about God now. Like I kind of prefer my religion to be faith-based. Yeah.

[00:38:10] Or some sort of experience or way of knowing that isn't, uh, either a priori or just any of our standard ways of knowing, you know? Yeah. And for me, it doesn't have to be three for that. Like, in fact, for me, like the thing I'm attracted to is I don't know what it is. It's a mystery, but it's not something that we have any kind of handle on, uh, that we could try to like prove or even present a compelling argument for.

[00:38:37] It's a different kind of thing that you might come to know. Yeah. He also doesn't get into all of the crazy, like the weird things that you have to believe about Trinity. Like what? So it is blasphemy for some people to say that one being created another being. Um, so, so there's just like weird shit that even, even if you believe in a Trinity, you have to like make some leaps. Also, like what is the Holy Ghost? I have, I don't feel like. He likes to watch. He's the one who likes to watch. He's the one who's watching.

[00:39:06] But on this formulation, the Holy Spirit is the, is the third party on the couch. But he also has to be exalted. Assuming it's a he, a they. Yeah. Interestingly, the spirit is often taken as being the feminine form of God. But yeah, I don't think so. I think because it's three and one and, and one that is three, I think all he needs is one part of the Trinity is being exalted. One is doing the exalting and one is watching the exaltation. Well, no, because they're all divine beings and I didn't think I would have to remind you,

[00:39:36] but divine beings must possess all great-making properties, and exaltation is a great-making property. That's the mystery. It is, it is also one being. Right. Well, but he doesn't address that at all. No, that's what I'm saying. Yeah. That's why like he doesn't get into the nitty gritty of the weird things that you have to believe about what the Trinity is. But you would have to, because like otherwise it's not an argument for that. It's an argument for three, like, equal divine beings, essentially.

[00:40:04] Like we have no reason to differentiate between them. Maybe that's the next paper. Yeah. A priori argument for how the three is also one. Yeah. An a priori argument for once we've agreed that there, that the Trinity is established, how the Trinity is also a mononity or whatever. Yeah. A singularity. Slowly working our way to full blown Christian theology. But this time, this time not through any kind of divine messages. Yeah.

[00:40:33] Just pure reason. Yep. All right. Well, that was the a priori side of this episode. We'll be right back to talk about the a posteriori side, the empirical side, and Richard Feynman's classic speech at Caltech, "Cargo Cult Science."

[00:41:58] Welcome back to Very Bad Wizards. This is the time of the episode where we like to sincerely thank all the listeners who get in touch with us, who reach out either to make fun of us, to yell at us, to thank us. All across the spectrum, we love to hear from you. If you would like to email us, email us at verybadwizards at gmail.com. You can tweet at us: @peez, @tamler, and @verybadwizards.

[00:42:24] Neither of us are all that active on Blue Sky, but we do have accounts there. You can follow us on Instagram. We post info about our episodes there. You can like us on Facebook. You can join the subreddit where there's sometimes good discussions going on about the episodes and other things VBW related and often people making fun of us. And you can give us a five-star review on any of the podcast platforms where you listen.

[00:42:54] That really helps us out. That helps other people find us and grow our audience, people who might like the podcast or they might hate it, you know? And if you're a movie fan and you're on Letterboxd, follow me on Letterboxd. I am trying to post brief reviews of everything I see. I'm in a bad movie drought right now. I think it's been about a week and a half since I saw the last movie, which is a horrible drought. But I'm getting back on the horse, going to see Eraserhead in the theater tomorrow night.

[00:43:23] Very excited for that. Never seen it in the theater. Could be also related to an upcoming episode topic. We'll have to see. If you would like to support us in more tangible ways, you can go to the support page where you'll find some swag. Also the option to give us a one-time or recurring donation on PayPal. But the biggest way you can help us out tangibly is by becoming a Patreon patron.

[00:43:48] And there's a lot of different levels where you'll get various tiers of benefits. At the $5 and up level, you get access to all of our bonus episodes, including, of course, the Ambulators, our episode-by-episode breakdown of the great TV series Deadwood. And also the ongoing Reintegrators, where we do a deep dive on the previous two episodes of Severance. This comes out every two weeks.

[00:44:17] These have been a lot of fun. And, of course, we have the great Paul Bloom joining us for that. At the highest level, you can ask us a question every month. And we will answer the question in video form for you and audio form for everybody else, our monthly AUA episodes. We really enjoy those. And we're about to record one right after I finish this promo. So thank you so much.

[00:44:46] We really couldn't do this without all of you. It's been a special privilege being able to do this for all these years. Now let's get back to the episode. All right, let's get to our main topic, cargo cult science. Like you said in the intro, this was actually a commencement address that Richard Feynman delivered in 1974. So it reads like an actual address, but it's been transcribed. And it's had quite an impact, I think, across the sciences because Feynman does the Feynman-y things that he does, which is very insightful.

[00:45:14] And his target is essentially pseudoscience. And I think in pseudoscience, he includes a lot of social sciences, including psychology. But he outlines in this really informal way a lot of things that, you know, a lot of smart people have said in many more formal ways since then. But he lays out a lot of the things in this that I think had psychologists paid attention to it carefully, we wouldn't have gotten ourselves into this replication crisis.

[00:45:43] Like I think that the things that science reform talks about now as being things that we need to really, really pay attention to or else we might have like bullshit science were, I think, pointed out very early and very clearly by Feynman here. Yeah, and also like by other people even earlier than Feynman. For sure, for sure. But it's sort of weird that a physicist at a commencement address does such a nice job. Totally. Yeah. Yeah, yeah. In a very celebrated speech that probably all of them read. Yeah. Yeah, yeah. Yeah.

[00:46:12] So what did you, had you, you'd read this before, right? I had, yeah. But this was the first time I really dug into it. Like obviously I agree with a lot of the criticisms of the social sciences, which, you know, it is true that he also takes shots at ESP research and at, you know, reflexology or whatever. But I think like the real lesson to these people in the audience at Caltech is you're not doing ESP stuff. You're not doing reflexology.

[00:46:42] But you still might be in this field where you're doing cargo cult science. Yeah. And I hope that the structures will be aligned so that you can call it out or at least not do it in your field. You will have the freedom and opportunity to maintain the integrity. So, yeah. So do you want to say like what it means to be a cargo cult science? Yeah. Yeah.

[00:47:09] So Feynman starts off by talking about like, okay, I kind of believed that we were in this like scientific age where, you know, power of science pervaded all of society and people thought carefully about things. But he says he noticed that tons of people still believe in UFOs or astrology. He mentions Uri Geller, the guy who claimed to do telekinesis and ESP stuff, by name. He says a lot of people still believe a lot of bullshit. But as you said, it's more than just these like weird, crazy beliefs.

[00:47:39] Like a lot of stuff has the feel, the appearance of science, but isn't science at all. And this is where he brings up the cargo cult metaphor. He says, and he's talking about some of the studies that he's read in education, for instance, like people who claim to know the best way to teach people. How to read. Yeah. How to read. Psychotherapy, like studies trying to show what the best kind of psychotherapy is. And he says, I think the educational and psychological studies I mentioned are examples of what I would like to call cargo cult science.

[00:48:08] In the South Seas, there's a cargo cult of people. During the war, they saw airplanes land with lots of good materials and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways and to make a wooden hut for a man to sit in with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas. He's the controller and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before, but it doesn't work. No airplanes land.

[00:48:36] So I call these things cargo cult science because they follow all the apparent precepts and forms of scientific investigation. But they're missing something essential because the planes don't land. Yeah. So what that essential thing is, is what the rest of this essay is about. Right. Yeah. So what the rest of the essay is about is trying to investigate what the essential things that are missing are.

[00:48:58] So psychologists or whatever, you know, educational psychologists or people studying psychotherapy, they're acting as if the things that they do will yield the same results that a rigorous science would yield. But they're doing something wrong. So what is that thing? Yeah. So there's a lot of questions, right? Like if you're trying to use this as a metaphor, it's like, well, in the case of the cargo cults, the planes aren't landing. It's very clear what's not happening. Like I think he knows that in the social sciences you get results.

[00:49:27] So what's the thing that's not happening? And I think what it is like you get with the education example is like kids aren't learning to read better, even though you have all these statistical analyses and models that say that this is the best method of teaching kids to read. And I think in the case of social psychology or, you know, a social science like that or economics, it's like we're not actually learning about the thing that we're supposed to be studying.

[00:49:57] So in social psychology, it's like the mind and the relation of the mind to behavior. We're not making progress on having a kind of deep understanding of that. We're getting a lot of results. We're doing a lot of the things that everybody else is doing with the headphones and the runways. And we're even getting like little things that happen that seem like it would lead to the big thing, which is like the important thing, which is the airplanes landing. But that's the thing that we're not getting.

[00:50:26] But I think that's an open question is what exactly isn't happening in, you know, the social sciences that would correspond to like the airplane not landing. When I first read this, I don't remember when it was, but when I first read it, I was like, well, that's kind of like a wide sweeping critique of all of these social sciences or whatever. You know, these things that he calls pseudoscience because they're not physics.

[00:50:50] And I still get that sometimes where I'm like, all right, well, that was a very facile way of dismissing a bunch of stuff. But the point that like the analogy is making for me, which is just because you include control groups and just because you measure outcomes and do statistics on it doesn't mean that you're actually learning anything in the same way, which is, I think, what you're saying. As you continue to read, I think that he actually believes that you could do it the same way.

[00:51:18] So it's like the metaphor of cargo cults, like the planes will never land for cargo cults. Like they're never, it's not, that's never going to happen. Can we get to the point where something happens? Well, I think we can. And he gives some examples of how you can do it right in some of these sciences. But I think that's sort of an open question because what he ends up focusing on is really not like methodological reform.

[00:51:39] What he ends up focusing on is the feature of what he thinks of the pseudosciences that allow us to think, to continue to think that we're actually making progress is just some sort of self-deception where we construct theories. We then do studies. We discard the results that don't fit our theories. We don't publish them. We publish the ones that do. We only look, you know, it's essentially a treatise on confirmation bias really.

[00:52:07] And because of that, we don't even realize that we're doing cargo culty things. We're kind of seeing planes land. Like we're squinting and seeing like, oh, no, no, we landed a plane. Right. Yeah. Yeah. That's the thing where it's hard to track the analogy is like the results, the effects that you get, that you get to publish and stuff like that. There's no analogy for that in the cargo cult thing. Yeah. It's as if like they threw up some kind of mechanical airplane and they saw it was able to land or something.

[00:52:36] And then sort of like, okay, now we're going to get real airplanes with goods and that will start trades or something like that. Yeah. So what do you think about that? Because I do think that's the most fascinating part of it is that there's something about the social sciences and the pseudosciences that allow for this in a way that maybe some of the other sciences. Well, I mean, it's yeah.

[00:53:02] So that's one way of looking at it is there's something about the social sciences and the pseudosciences that allow for this kind of tricking yourself without being necessarily self-deceiving in a way that you're conscious of. You're just you're actually believe that you're getting something when you're not. That's one way of looking at it.

[00:53:23] And in another way, because he talks about integrity and scientific integrity, it is more like a personal failing or a group failing on the part of social scientists and pseudoscientists that they're focused on getting these results any way they can. And they just want to make it look like they're in big boy scientist clothes when they do it there.

[00:53:45] It's sort of like if you had more virtuous scientists with greater personal integrity, you could fix this. And I'm more like I do think like there are issues with integrity in the social sciences and various areas of academia. But like I do kind of think that this is more methodological. And at times, like you say, this seems like he thinks it's not exactly methodological.

[00:54:14] Yeah, he does focus on it being sort of a personal failing of the scientist. And so he gives an analogy to advertising. He outlines a set of issues that he believes fall under this general theme of integrity. So he says, for instance, if you're doing an experiment, you should report everything that you think might make it not support your theory, not just the things that it does. Yeah. You got to like actually put out everything. Like open science kind of. Exactly. Open.

[00:54:42] He's essentially arguing for what open science reform argues. But he says one of the places he sees clear failures of integrity is in advertising. So he talks about seeing this advertisement for Wesson oil that says, oh, Wesson oil doesn't soak through food. And he's like, well, yeah, that's true. But also no oil soaks through food, at least not in the way that you use it.

[00:55:05] So they are selectively using their arguments in a way that is clearly, I think, intending to deceive in order to sell things. And I don't think that that's going on for most people. Yeah, I agree. I mean, it's clearly going on for some. It has been well documented.

[00:55:24] But I think in like it does seem like the worst you can say about most people is they're not fully taking a hard look at themselves and their methods and the assumptions that their methods rely on to make sure that this is something that really is a truth giving property. These methods, right? Yeah.

[00:55:46] You know, I've said I probably said this a bunch on this podcast, but I was trained in a way that explicitly the practices that we used were considered perfectly fine practices, but they were systematically distorting our ability to find the truth. They were literally: don't report all of the results, report the ones that you need to make your point. Right. Yeah. Like it was never said in such a like terrible way. It was more like, look, the truth is there.

[00:56:15] So report the things that you find that support your truth. And it felt like a personal condemnation because there were times and there were people who are more likely to be, say, open with everything they find, report everything. And some people would like make arguments that you didn't need to do that. And I do think that there was something deep that hit me when I read this that was like, I could be a better scientist, like I should be a better scientist. And I didn't feel like we were all bad people.

[00:56:44] I just feel like we really weren't reflecting on what we were doing nearly as much as we needed to. Here's where I think I really like the cargo cult analogy. But I think here's where we might get into a big fight. It's this idea of there are these rituals and these rituals are they involve the use of statistical models and measurement models and this wide array of things that you're not trained to understand, like the validity of.

[00:57:12] Like you're trained to understand how to employ them, but you're not trained to really dig into like what's the basis for using these models or using these statistical tests. So someone like Gerd Gigerenzer, like he has a whole bunch of papers on significance testing, null hypothesis significance testing. Also Paul Meehl. This stuff has been out there. These criticisms have been out there for like 70 years.

[00:57:39] This is a way to get results as I think like we looked at a Gelman paper on this. You know, I think he refers to it as like a straw man null hypothesis. It's a good way to like disprove a straw man null hypothesis. But it's not a good way to actually learn about like how the mind works and how like behavior operates. And yet it's run rampant and people aren't taught to like examine like what like why are we doing this?

[00:58:09] What are the assumptions behind it? And so it's become just this entrenched ritual that people do. It's not that it's never been examined. It's just that the people who employ it, for the most part, have no idea what the history is behind it and what the justifications are or what the potential problems are.
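The straw-man-null and selective-reporting points being discussed here can be made concrete with a toy simulation (a sketch for illustration, not anything from the episode or from Gelman's paper): if many labs run studies where the true effect is zero and only the results with p < 0.05 get written up, the "published record" looks uniformly successful even though every finding is noise.

```python
import math
import random
import statistics

random.seed(42)

def run_null_experiment(n=50):
    """One study where the null is TRUE: both groups come from the same
    distribution, so any observed 'effect' is pure sampling noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = diff / se
    # Two-sided p-value via the normal approximation to the test statistic
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p_values = [run_null_experiment() for _ in range(1000)]
significant = [p for p in p_values if p < 0.05]

# The "file drawer": only significant results get written up.
print(f"null experiments run: {len(p_values)}")
print(f"reached p < 0.05:     {len(significant)} (roughly 5%, by construction)")
print(f"published record:     {len(significant)} studies, 100% 'significant'")
```

By design about 5% of the experiments clear the significance bar, so a field that files away the other 95% accumulates a literature of effects that were never there, which is one way of reading "the planes don't land."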

[00:58:29] And yet you do it anyway in a mechanical way, in a ritualistic way, in the way that these people put like headphones on and they create their runways of a certain length and stuff like that because that's what they saw that people do who would land the planes. Right.

[00:58:46] And so then the real question, I think, when you're talking about science reform and some of the open science stuff and like pre-registration and reporting all your data and all it is that just making better runways and like improving like the headphones to look more like the real headphones. Is it like that or is it actually something that will lead to the planes actually landing?

[00:59:14] But the cool thing about the metaphor is like to actually get the planes landing, you need to do something completely different. It's not about like the headphones, like how good they look or how long the runway is. It's about like we have to become an imperialistic colonial power. So like it's completely paradigm shifting to think of like what you actually have to do or to put that in a less strong terms.

[00:59:44] At the very least, like what you're doing now, like the whole category of what you're doing now is wrong. Like it's just wrong. It's just like hope. Those are the less strong terms. Well, it's hopeless. But that doesn't mean that like it requires a new paradigm. It might require just like a radical shift in emphasis within the existing paradigm.

[01:00:04] You know, like I think you're right that in the speech, Feynman seems to think this is less of a paradigm shifting thing and more of an improving or reforming practices thing. But the metaphor itself seems to – and I think the popularity of the metaphor is that it seems to suggest that there's something deeply wrong with how you're doing this, not something superficially wrong that you can improve on the edges. Right.

[01:00:35] Right. So, you know, we do disagree fundamentally, like as we always do, because I – well, in two ways in this context, I do think that Feynman was using the metaphor to point out how you can be sort of blind and self-deceiving. Not – the metaphor wasn't that the planes will never land in the sense that like what we're doing is making wooden headphones.

[01:01:03] Because he goes on to give examples of good science, right, that is not a qualitatively or paradigm shifting way. And we can talk about what Feynman thinks, because I think that Feynman wouldn't say the rest of the things that he said if he believed that it was just like witchcraft that we were doing. Right. And I don't think that he would give the examples of how, for instance, physics has had this problem as well. Right.

[01:01:26] When he talks about self-deception and he talks about like how the embarrassment of reporting results that were wrong just because somebody initially got it wrong and they were a famous person who published it and everybody thought that they needed to match the result. He's saying like, yeah, that can be a problem even in physics. And so that's what needs to be fixed. Like the deep thing that needs to be fixed is you need to actually be more rigorous and have more integrity about what you're doing.

[01:01:51] And that's why he uses the example of the good psychologists who did all of the like rat. Yeah. Yeah. No. So I didn't mean to suggest that Feynman himself thinks that this is something that requires a paradigm shift. It wouldn't surprise me if he thought that social sciences or at least some of the social sciences do need that.

[01:02:16] But, yes, the way he talks about it and the examples he uses suggest that this is more about self-deception. But the reason why people still talk about cargo cult sciences is it's not going to be a, you know, a tough, rigorous, but like kind of easy fix like pre-registration. It's something that goes beyond that. I mean, but that's whatever.

[01:02:43] Like, but let me just read from Andrew Gelman's thing on this. So he says, like, why speak of cargo cult science rather than, say, cookbook science or black box science or some other metaphor capturing mechanistic application of procedures that do not meet standards of quality and lack adequate transparency or introspection.

[01:03:03] We argue that the metaphor has had value beyond the perhaps amusing conjuring of images of teams of lab-coated PhDs as members of technologically unsophisticated societies. Cargo cult goes beyond cookbook, et cetera, in conveying that the processes being employed are not just automatic or poorly understood, but also that they don't work at all. Right.

[01:03:28] The cargo cult epithet, despite its problems, may better capture the complexity of the inherent social parts of the problems and the challenges to finding solutions. Yeah. Yeah. I mean, I don't know because I haven't read like a lot of people who talk about this, but I do think that if so, then it kind of removes the power of the rest of this speech.

[01:03:53] Because I take it that Feynman is devoting the rest of what he's talking about to ensuring that we have integrity as scientists and that we don't self-deceive. Right. Yeah. So he says, but this long history of learning how not to fool ourselves of having utter scientific integrity is, I'm sorry to say, something that we haven't specifically included in any particular course that I know of. We just hope you've caught on by osmosis. The first principle is that you must not fool yourself and you are the easiest person to fool.

[01:04:21] So you have to be very careful about that. And after you've not fooled yourself, it's easy to not fool other scientists. You just have to be honest in a conventional way after that. So like, if you just end with the cargo cult metaphor being like, this is all a doomed endeavor that is so deeply flawed that we shouldn't be doing it, then the rest of his advice is not that important. Right. Yeah. Yeah. So then let me try to find a middle ground then, which that might still be consistent with what he's saying.

[01:04:51] So maybe examining, really examining what it is that you're doing with the kind of integrity that he's talking about. And to relate this to one of his examples, the example with the rat, this guy in 1937, a man named Young, that's the only name that he gives him, did an interesting experiment. He tried to train the rats to go in at the third door to get food. But they always went to the door where the food had been the time before.

[01:05:21] So then he created this whole corridor that isolated every single condition to try to figure out how they could tell where the food was. He says he finally found that they could tell the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. And so he covered one after another of all possible clues and finally was able to fool the rats so that they had to go to the third door. If you relaxed any of the conditions, the rats could tell.

[01:05:50] And so then he says he looked into the subsequent history of this research, the subsequent experiment and the one after that, and they never referred to Mr. Young. They never used any of his criteria of putting the corridor on sand or of being very careful. They just went on running rats in the same old way and paid no attention to the great discoveries of Mr. Young. The reason they didn't, he says, is because he didn't discover anything about the rats.

[01:06:14] In fact, he discovered all the things you have to do to discover something about rats. But not paying attention to experiments like that is a characteristic of cargo cult science.

[01:06:26] Now, there's a way to interpret this completely in line with what you're saying, which is that if only we had followed Mr. Young and had that kind of experimental rigor and that kind of integrity, we could have learned something about rats and mazes. Maybe, right?

[01:06:49] But then the other option is: that's just your baseline. If you can learn anything about rats, you would have to do it this way. But it could also be that you're not going to find anything about rats that way.

[01:07:03] Because maybe something's just wrong with the way you're trying to artificially construct mazes, and with whatever overarching theory you have of rats and mazes and how that generalizes. That could also be the case.

[01:07:21] But, you know, we have discovered tons of stuff about rats and mazes, all the way down to how the brain works to let rats navigate. There's a guy in my department who does this as his research, right? He's uncovered the way that the hippocampus encodes for context, the way that spatial navigation works.

[01:07:50] And I guess what I'm saying is that some of the things that he's talking about, the repeatability, right, the rigor, reporting everything, using the proper controls, we've actually come a long way from 1937. And these are things that we have paid attention to. And I think social psychology is a particularly egregious case, right? His next example is the ESP experiments that he's talking about. Social psychology became more like those.

[01:08:17] Whereas with the rats, we know a great deal about how the brain works, because we ran rats in mazes and because we know the right controls to make sure that everything is properly rigorous in order to find this out.

[01:08:33] I think this critique expects that people will be boasting about their successes in certain areas without actually being able to demonstrate anything beyond the results that they got from methods they don't understand well enough.

[01:08:56] I do think that happens quite a bit. Like, we have said we understand the nature of a lot of different kinds of effects that, it turns out, we didn't understand. But we would have had a good reason, we thought, to tell you that ego depletion is real or whatever.

[01:09:17] Yeah. So I think that there is a very real phenomenon that is unfortunate, which is that the closer you get to the physical sciences, the more reproducible things are, the easier it is for people to prove whether or not you got it right or wrong. And that's why I think we have all this physics envy, because that's generally the primary example that we use.

[01:09:40] So when I'm talking about neurobiology, essentially, with rats, I have confidence that these results are real. Whether or not the theories are right, I have confidence that the results are real and that they're making progress there. But wait, that's exactly the thing, though. The point is the theory, not the results, right? Well, you need the results to be reproducible by everybody. And then and only then can you start theorizing and building theories to test.

[01:10:08] Right. You've got to get on that solid ground to even be able to tease apart two theories. And there's a lot of work where there are these two theories that predict very different things, and somebody does an experiment and one of them wins. That happens, you know, but it happens more in rats with mazes than it does in ego depletion studies.

[01:10:27] And that's where the softness of what we do at this level, of self-report and talking to people about what their intentions were and trying to relate that to behavior, all that stuff, those critiques that you raise are really good critiques of a whole bunch of psychology. I'm just saying it's not social psychology all the way down. Right. There really are not just quantitative but qualitative differences in the rigor of certain disciplines.

[01:10:55] So when we talk about the visual illusion research that we talked about a couple of episodes ago, that's just on better ground than most social psychology. And the softer we get, the more I think we're being cargo-culty. That's where Feynman is making me feel something, and everything you said about the ritual stuff. Like, we were using p < 0.05 as a ritual: run the ANOVA, get p < 0.05, report your results.

[01:11:25] That shit's all bad and that shit's all still going on. But I'm of the opinion that we can actually reform these things. And I think that people like Meehl and Gigerenzer and Gelman are saying, do it right. There are very few people who do it right. But if you do it right, then you can get something. But they're not saying do null hypothesis significance testing right. They're saying, don't use that. Think about this in a new way.

[01:11:55] This is not the right way. I mean, this is definitely true explicitly of Gigerenzer and Meehl in a lot of cases, not in all cases, but in a lot of cases. They say it's not designed to do the thing that you think it's doing. And in fact, Gigerenzer has this whole paper asking people what they think the implications of these significance tests are. And they're all over the map. And some of them are just obviously false.

[01:12:24] Like, nobody who actually understood the statistics behind it would agree with that. So that makes it seem like it could be that we don't just need to do better. It's not like it has to be p = .01 instead of .05. It could be that we're just not thinking about this the right way. And that's the thing.
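The Gigerenzer worry about ritual significance testing can be made concrete with a small simulation. This is our own illustration, not anything from the episode or from Gigerenzer's paper, and the test used is a simple z-test on simulated data: even when there is no effect at all, the p < 0.05 ritual will flag "significant" results about 5% of the time, so a lone significant p-value carries far less information than the ritual suggests.

```python
import math
import random

def two_sample_p(xs, ys):
    """Two-sided p-value from a z-test on the difference of means.
    Assumes both samples are draws from N(mu, 1), so the standard
    error of the mean difference is sqrt(2/n). Illustrative only."""
    n = len(xs)
    z = (sum(xs) / n - sum(ys) / n) / math.sqrt(2.0 / n)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

random.seed(0)
n, trials = 30, 2000
false_alarms = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # same distribution: no real effect
    if two_sample_p(xs, ys) < 0.05:
        false_alarms += 1
print(false_alarms / trials)  # roughly 0.05: the ritual "finds" effects anyway
```

Run enough null experiments and the ritual reliably produces publishable-looking results, which is one mechanical reason "p < 0.05, report your results" can't be the whole of scientific integrity.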

[01:12:46] If you talk about the integrity of the whole thing, that's the step that people don't want to make. And I'm not even saying it's definitely true. I'm just saying it's possible. And I think a lot of social scientists agree that it's possible, but their whole jobs depend on it not being true. And so they end up just going back to it.

[01:13:13] Like, they're confronted with something like a Meehl paper, a kind of seminal Meehl paper. And then they're like, yeah, yeah, you know, we've talked about this all the time. But it's not like they try to figure it out, because that's not their job. They're not metascientists. They're not. Yeah. I mean, you're not wrong about a lot of this. I do. Am I wrong? No, you're not wrong.

[01:13:42] Am I wrong? You're not wrong, Walter. You're just an asshole. Oh, OK. The progress has been slow, but there are real things that have changed about the way that we do things. You know, maybe we haven't abandoned null hypothesis testing, but we certainly have made it clear that you need more than p < 0.05 in order for anybody to believe you.
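The "you need more than p < 0.05" point is usually cashed out with effect sizes. As a sketch of why (our own illustration, not something from the episode, with made-up simulated data): with a large enough sample, even a negligible effect yields a vanishingly small p-value, while an effect-size measure like Cohen's d correctly reports the effect as tiny.

```python
import math
import random

def cohens_d(xs, ys):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

random.seed(1)
n = 100_000  # an enormous sample makes even a trivial difference "significant"
xs = [random.gauss(0.05, 1) for _ in range(n)]  # true effect is tiny: d = 0.05
ys = [random.gauss(0.00, 1) for _ in range(n)]
z = (sum(xs) / n - sum(ys) / n) / math.sqrt(2.0 / n)  # known-variance z-test
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
print(p, cohens_d(xs, ys))  # p is essentially zero, but d stays around 0.05
```

A significance test answers "is there any difference at all?"; the effect size answers "is it a difference anyone should care about?", which is why the reforms ask for both.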

[01:14:11] So we do use things like Bayesian statistics. We look at effect sizes. We have other criteria for validity when you're doing it right. If you're trying really hard to have integrity, you are doing it better. I just think that you think that there's no... Well, I think it's possible. I think it's possible that that's true. It's possible that you are trying to chisel the headphones and refine them in such a way that they are...

[01:14:38] They look exactly like the headphones of the air traffic controllers that are bringing in the planes. That's possible. It's possible that you can. But that's not what I'm saying. I'm not saying that we're chiseling the headphones. No, I know you're not. You're saying the opposite. You're actually improving it. You're trying to figure out how we can get planes to actually land, using better methods.

[01:15:02] I'm just saying that the way you're doing that, it's possible that it is, without your knowledge, chiseling the headphones rather than actually making it happen. It's a different category of thing that you have to improve if you're going to make the planes land. And that's the possibility that I think people don't consider. Because it might be that you have to go back to interviewing people, or being like William James. It could be that way.

[01:15:31] It could be that this isn't the right way of going about it. And there's actually probably some reason to think that that's not outlandish, because there haven't been that many breakthroughs in certain fields in the social sciences, where we say, oh yeah, we really learned something there. Especially about something complex. Maybe not rats and mazes, but maybe something as complex as what's the best way to educate students, or what's the best way to understand when people cheat on their wives.

[01:16:00] Yeah, whatever that thing is. Right? So what would the change look like? What would be the criteria for success? That's what really needs to be thought about. And here's where I think we really do think differently. Because I think there is a way to know whether or not our method is not giving results. And there would be a way to know whether there is a better method. Right?

[01:16:29] Because we would have some sort of outcome that we all agreed was, to be a philosopher about it, a right-making property. A truth-making property. Yeah, exactly. And so, but I think you're deeply skeptical that we can know whether or not something worked. I think I'm trying to defend a weaker claim here, which is to just say it's possible. It's a live possibility.

[01:16:55] So one of the things, you know that book that we talked about in the last episode, Trust in Numbers? Yeah. Yeah. One of the things that he says, which I find really interesting, is about this obsession with statistics and this ritualistic application of methods. This is not Feynman. This is, to be clear, the author of Trust in Numbers, Theodore Porter.

[01:17:23] What he says is that the softer you go in the sciences, the more they rely on fancy statistical models and ritualistic application of these methods. Because they don't have laws that everyone agrees on.

[01:17:42] So they need something to tether them and have them talking to each other, and not just, you know, here's Freud, here's Jung, here's William James, here's a Buddhist. It needs something to have you all talking about the same thing. And they don't have stable laws that they can work with, or real solid discoveries. So they lean on these statistical models more than anybody else. All I'm saying is maybe that's the wrong way of going about this. Maybe it is.

[01:18:12] Like, it could be. And if it is, what if the methods of analytic philosophy are not actually going to shed light on knowledge? It's not that we have to refine the theory and get a better theory of knowledge and make sure that when we do experimental philosophy about it we use the – no, it's just that we have to think of this in a different way. I'm saying that that's a live possibility and very much in line with the cargo cult metaphor.

[01:18:41] They're not in the right ballpark for figuring out how to solve this. It's not that they couldn't be. They could be. But improving the things that they're already doing isn't going to fix it. However honest those improvements are, however methodologically sound even those improvements to those methods are, it might not be the right kind of thing to actually get the airplane to land.

[01:19:10] So I feel – okay. In one way you were saying what I was saying, which is that in the softer social sciences, our reliance on fancier statistics grows as you get more and more complex, looking at, say, human behavior or the behavior of groups. That's totally true. And so people who are doing the biology of rat brains, they don't need to do fancy regressions.

[01:19:39] Their results kind of speak for themselves. They can just see: when I did this, this happened. For visual illusions, sometimes it's even silly to run a bunch of studies, because the minute you see the visual illusion, you know that it works. This is what you were saying on the other thing. Yeah, right. The study is superfluous because everyone sees it. Yeah, and the stats are superfluous.

[01:20:02] Biologists sometimes use the most basic of stats, because they don't really need to do fancy things like control for 10 million things, right? And most physicists don't need to; when they run stats, the results are so clear. So, all of that. And I'm with you that there's a live possibility that we're just pursuing it wrongly. The thing that I was trying to get from you is, what would be the way that we discovered that it was wrong?

[01:20:30] Like, how would we know when a method actually worked? Suppose that the next genius comes along and says, psychology, we've been doing it completely wrong. What we need is this completely new method that neither you nor I have ever thought about. How would we know that that thing is right? I think that there is a really limited way in which you could ever know that a scientific theory is right. And that is to in some way do super careful controlled studies to show it, right?

[01:20:58] And so whether or not we're using the right statistics, I don't know. Maybe it's actually intractable, maybe it's not possible to make any claims about super complex systems at this point. Maybe we should be silent about it, or say it in the most basic way: it's probabilistic, we're seeing some sort of pattern, right?

[01:21:24] And here's my theory about that pattern, but be very loose about the word belief. But I mean, I don't know. Going back to the metaphor, and I think you agree with this: just because you don't have an alternative strategy, or even one that you can conceive of on the fly, doesn't mean that the criticisms are invalid.

[01:21:49] But then the other part of your question is, well, how would you know when it does work in something like social psychology? I don't know. The way that it is more modeled on the hard sciences is, all of a sudden you have this understanding of the mind at a level that maybe we can't even fully articulate right now.

[01:22:10] But it is a little bit more like our understanding of the laws of motion or even something like general relativity which is very hard to conceive of. But you can kind of go into it and go deep and really understand it.

[01:22:26] Like something that gives you a more general understanding of what it is that you're dealing with, because that is the goal ultimately of something like social psychology: to understand the human mind. It could be possible to recognize it when it's there even if you can't articulate it now. And in any case, even if that's wrong, it's something you should have to interrogate.

[01:22:52] In the same way, I feel obligated, I'm kind of offended on behalf of my analytic philosophy peers, that people don't take seriously the standard Wittgensteinian critiques of the method that have been around for a long time. Why is that not enough to make everyone be like, holy shit, this isn't right? Like, I feel like I did this, and the stakes are so much lower for me.

[01:23:17] But once I felt like I understood P.F. Strawson's attack on the way people were talking about free will and moral responsibility, I felt like, that's it, okay, so you don't do that anymore. Yeah. I don't know. Yeah. I mean, it's tough. Philosophy is, I think, a different beast, right? I mean, my answer to you would be that a lot of people simply aren't convinced that P.F. Strawson is right. And I don't think that's – you know. So forget about that.

[01:23:47] But just, I mean – Okay. But let me get to your point about the metaphor, where I think the metaphor works in this sense: you say a guy comes along and says, but you're not doing it right. And the other person says, well, how would you even know? It's so clear. The answer is: do you make planes land? That's built into the example. The reason that we know it's a failure is that the planes never landed. Yeah. Right? And so this is what I'm saying.

[01:24:15] It could be that in going about finding natural laws of the mind, we're not doing it right. But I think Feynman would be the first to say that the way you find out whether this new guy, who does some shit that we'd never thought about, is doing it right is to match it with observation in nature. The outcome is clear.

[01:24:39] Did your law work, much like relativity can be observed by taking a clock on the space shuttle and seeing that time dilation works, right? Then if we found something like a law of human behavior at the social level of analysis, the only way that you would know that it worked is if it matched with observations. So that's the thing that I think you're deeply skeptical about, where I think it just matches with reality. That's the way that you know if something works. Right.

[01:25:08] So I guess the disagreement then isn't over that. I agree that ultimately that's the case. That's the case with all the sciences, right? It has to match with reality. But my questioning is at a level down from that, which is all these experiments you do in labs with the methods that you use.

[01:25:26] Improving those methods doesn't mean that you're going to ultimately have some sort of theory that matches with reality, reality as people actually engage with the world. Not in the lab, not on Mechanical Turk, but as they actually engage with the world. And so maybe doing – Well, the lab is real. I mean, it's real, but it's artificial. It's artificially constructed to do the experiment.

[01:25:54] And so we're not trying to figure out how the mind works in these highly artificial environments, ultimately, right? And the way that you know that a theory works is if it can work in areas independent of the environment that you're testing it in. Yeah. I mean, I have views on this, right?

[01:26:19] I think actually the way that you learn about reality is often under laboratory conditions, right? But it's ultimately vindicated by the actual world, not the laboratory. The laboratory is not the final end point of the vindication, right? Well, it's hard, because as you were describing, what we're after is a deep understanding of the laws.

[01:26:46] And the thing is, in the real world there's so much shit going on that you might never observe the specific thing that is actually occurring, because there are 10 other things occurring at the same time. So I think that in many cases you might only ever see it in the lab. So let me give you an example, I don't know if this is worth it. We know how gravity works: it accelerates all objects at the same rate, right?

[01:27:15] So a bowling ball and a feather in a vacuum fall at exactly the same rate. But we never see that in reality. You would never see that happen in the world because there's wind and air resistance. There's an atmosphere. So you just never see it. The only way that you can see it is to create the most artificial environment, where you have a vacuum and you can drop a feather and a bowling ball and actually see that they fall together. So that's the proof right there.

[01:27:43] We're just not at the point in social psychology where we could do that. But just because it's in the lab doesn't mean that it's not real. No, no, no. But there's a whole underlying theory behind why it's okay that we don't see it, because we also have an understanding of how air resistance would affect this stuff, right?

[01:28:06] So the interest of that theory is that it expands beyond the lab even if we can't recreate it beyond the lab. That's the reason why we're excited about that result. It's true. It works. Yeah. Gravity is working. It tells us something about gravity outside the lab. It doesn't just tell us something about gravity inside the lab, even though that's the only place we can run it. Right.

[01:28:31] So the problem is, we don't know whether, when we measure things outside the lab, the thing that we measured in the lab is actually working out there, because we don't have the precise mathematical formulations like gravity. We just don't have that. So I don't know. This is all an aside about whether or not the proof is in the pudding by going out in the real world, because every context is different in the real world. There is no real world. It's all different versions of fake worlds. You're getting a little postmodern.

[01:29:01] I mean, you know, every instance is going to be different. I don't know if, you know, anchoring and adjustment work in this Indian population versus these Germans versus people who are – But those lab experiments led us to be able to launch rockets that we could be confident would land where we hoped they would land. Right? And that's the thing I think that you're looking for at the very least. That's what we're looking for. And we're just not there yet. So it's hard to know.

[01:29:31] Yeah. I don't know. And you could imagine that rocket science was initially gone about the wrong way because of some contingent factors of how certain methods developed sociologically, methods that were misguided in a certain way. I mean, the most modest claim that I absolutely stand by is that it's definitely possible that that's the case.

[01:29:59] And the frustration I have – and I get the frustration with my kind of ill-informed immediate critiques. But the frustration I have is that people just always stop short of that step.

[01:30:13] And it seems like it's a live possibility that should be reckoned with if you're, to get back to the Feynman thing, going to have the kind of integrity of a truth seeker who is trying to find theories with truth-making properties. That does seem like something that should be reckoned with, and it doesn't seem settled at this point. Yeah. Yeah. Yeah. I agree.

[01:30:41] There's a point at which a lot of people stop. But I also think that there are a lot of people who keep going and pushing. And I don't know. I think that if we can agree that there is a reality out there that ought to match our theoretical predictions, and that that reality can be assessed, that we can actually measure it. Yeah. Which you seem like you're doubting. Well, yeah. But I don't. No. I'm just saying you never step in the same river twice is what I'm saying.

[01:31:10] A little Merleau-Ponty from Dave. I'll just end. I will – yeah. I'll just end. I should just stop right there. But so long as – I think the only thing I want, and I think that you believe this, is that when the guy comes along and says, have you tried these wires in this antenna? That we would be able to say, oh, we shouldn't have been making these headphones with wood.

[01:31:36] That there will be some moment where we say, you're right, this is better, and I know this is better because now this is predicting a whole bunch of things that we couldn't predict before. Right? And it's that part of it that I think is sometimes up for debate in these discussions with you: whether or not you think there is an equivalent in the metaphor to getting that antenna to work for the first time.

[01:32:05] There is, but it also requires, and this is what I think that you sometimes don't – that the person admit: oh, all this stuff of me chiseling the wood, that was never the way of going about this. And so all the improvements we made on doing that ultimately were not the thing that would actually help. Yeah. So here's what I just think.

[01:32:30] I think that the real low-level things, like sharing your data and pre-registering and trying to replicate other people's experiments, that's the only way we're going to start knowing whether or not some of the reforms work. Right. Like, when I get your data set, if I get completely different results with my statistics, that's super problematic. And we know that that happens. Yeah. Right? We know that people use different statistics to try to prove something and they completely disagree.

[01:32:57] So I think we're making progress in a way that's slow, but I think that's the only way we're going to make it: when we actually start being more open, sharing, trying to do our experiments more carefully. All of those things are necessary to figure out whether or not we're making any improvements in matching reality. All right. So this is the – yeah, this could be the compromise.

[01:33:24] It's like, okay, well, let us do this and then we'll see if all of this has been a huge waste of time, or maybe if it's – yeah. But I think it's important to think about it in a way that – like, I appreciate your pushback, I hope that comes across. I can yell at you, but it matters that we really think about what it would mean to say that something matched reality.

[01:33:47] That's the part that I worry about the most: what can we say means that this worked, that this is true, that this was a true observation about the world? Yeah. I worry about that too. Especially with something like the mind and consciousness, where we really don't have a great grasp of what it would even mean to solve the problem, never mind finding some method that will solve it, or a promising method of solving it.

[01:34:16] So sometimes I think about the differences between you and me in thinking about this. And I sometimes believe, not always, but sometimes, that this is a house of cards, but there's one real piece of wood in there. Yeah. And we could build around it if we started remembering that. And you might sometimes believe it's all a house of cards, there's no real wood there. But there could be wood. And maybe we have to do something. But there could be. Yeah. Well, yeah. Let me read, because on this exact issue, from the Gelman piece.

[01:34:46] Because I feel like this better articulates the worry: We are concerned that some people, including us, who have criticized science or scientists using the cargo cult analogy, envision a solution in which scientists build better earphones or runways without solving the underlying problems, what I've been saying. Something that looks like an easy solution to an outsider, such as a statistician or scientist in another field, might not actually solve the underlying problem.

[01:35:14] In statistical terms, short-term solutions such as multiple comparisons adjustments can represent potential improvements in carrying out the mechanics of an accepted method, without solving the problem of continued ritualistic use and without interrogation by the researcher of how the method might work or not to help accomplish that goal. The point is, sometimes those first-step improvements are not a useful first step.

[01:35:44] They can be another way of deluding yourself that you're on the right path to progress. And it's so true that it can be used ritualistically. This is the big problem, where I learn, nowadays, what's the statistical procedure that I should use for this kind of data, for these data? And I find out, oh, it used to be that logistic regression was the answer, but now we're using this other thing.

[01:36:14] And so I'll just say, okay, I'll use this other thing. And then the whole field uses this other thing. And we think, oh, we're so much better than we used to be. But we don't really know. You don't interrogate why that's better, or how it's better, and whether it's sufficient. Right. Right. You know, I was just talking to Matt Nock, a friend of the show. We had him on. He does research on suicide: suicidal ideation, self-harm, suicide.

[01:36:41] And they're doing lots of interesting work with automated analysis of posts on social media, where they're doing these large-scale linguistic analyses of words that sound like people might be suicidal, and then giving a message. So sometimes the message could be just the suicide hotline or whatever.

[01:37:07] But there are other kinds of things that you can say to somebody who's currently suicidal that might prevent them from actually trying to kill themselves. And they have a method to know whether or not it works. Right. Either this thing leads to fewer suicides or it doesn't.

[01:37:29] And the trick is, and this is what Gelman, I'm sure, would say: well, how do you know that this method decreased suicides? And there it's like, I don't know. I have these two numbers. On the one hand, there's people who didn't get this language and there were 10 suicides. On the other hand, there's people who did and there's only one suicide. So that's the problem that we're faced with in statistics: how do I know whether or not I can say that it worked?

[01:38:00] And I don't know, in the absence of just doing a bunch of different ways of showing that it worked. I don't know how else – you know, it's a problem. But that's the cleanest kind of data you can imagine. Either it works or it doesn't. And you still don't fully know how to evaluate whether it works. Right. Like, if the number is 200 versus 10, I think I don't need statistics, maybe. Yeah. You know? Right. It's a tough problem to know what the truth of the matter is. Yeah.
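For count data this clean, there is a standard answer to "how do I know whether I can say that it worked": an exact test on the two counts. The sketch below is our own illustration, not anything Nock or the hosts describe; the group sizes (1,000 per group) are made up, since the episode only gives the counts 10 and 1. It uses a one-sided Fisher exact test built from the hypergeometric distribution, with nothing beyond the standard library.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    the probability, with all margins fixed, of seeing a count in the
    first cell at least as large as a (hypergeometric tail sum)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return p

# Hypothetical numbers in the spirit of the example: two groups of 1,000,
# 10 suicides without the message vs. 1 with it.
p_big_gap = fisher_one_sided(10, 990, 1, 999)
p_small_gap = fisher_one_sided(6, 994, 5, 995)
print(p_big_gap)    # well under 0.05: hard to explain 10 vs. 1 by chance
print(p_small_gap)  # large: 6 vs. 5 tells you essentially nothing
```

The test answers the narrow chance-variation question only; whether the groups were comparable, whether the outcome was measured validly, and whether the effect generalizes are exactly the harder questions the conversation is circling.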

[01:38:30] The more you level up in complexity, that problem gets even worse, because now it's not just, what caused the suicide. Now we're doing all the shit that you point out. Yeah. It's like, did you fill out this 10-item I-want-to-kill-myself questionnaire? Right. Right? Right. And now it's, well, is that a questionnaire that's valid? Right? Yeah. I mean, I think the key is whether the methods are conducive to whatever it is that your goal is.

[01:38:58] And sometimes the goal, like in that case, is very well specified: to find out whether this intervention could help people not attempt suicide, or whether this factor leads people to attempt suicide. But in social psychology, it's not even clear what the goal is. Right? Yeah. Because the goal is sometimes just, did you score higher on this same questionnaire that I've been using? You know? And also just what the overarching goal is. Yeah.

[01:39:28] Like, what's the overarching goal? What are we trying to exactly understand? And – Maybe we're done being all over the place. Yeah. Yeah. All right. Join us next time on Very Bad Business.

[01:39:41] Just a very bad –