David and Tamler share a few brief thoughts on the election and then raise some questions about Tucker Carlson being attacked by a demon as he slept in the woods with his wife and four dogs (still don't believe in ghosts, people?). In the main segment we talk about one of the most popular measures in social psychology – the cognitive reflection test (CRT). Originally designed to identify differences in people's ability to employ reflection (system 2) to override their initial intuition (system 1), this three-item measure has mushroomed into its own industry with researchers linking CRT scores to job performance, religious belief, conspiracy theorizing and more. But what psychological attribute is this test supposed to measure exactly, and how can we determine its validity? And has the dual process system 1/system 2 framework outlived its usefulness?
Tucker Carlson was totally mauled by a demon and not scratched by his dogs [youtube.com]
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25-42.
Blacksmith, N., Yang, Y., Ruark, G., & Behrend, T. (2018). A validity analysis of the Cognitive Reflection Test using an item-response-tree model. Academy of Management Proceedings, 2018(1), 18090.
Erceg, N., Galić, Z., & Ružojčić, M. (2020). A reflection on cognitive reflection: Testing convergent/divergent validity of two measures of cognitive reflection. Judgment and Decision Making, 15(5), 741-755.
Meyer, A., & Frederick, S. (2023). The formation and revision of intuitions. Cognition, 240, 105380.
[00:00:00] Very Bad Wizards is a podcast with a philosopher, my dad, and psychologist Dave Pizarro, having an informal discussion about issues in science and ethics. Please note that the discussion contains bad words that I'm not allowed to say, and knowing my dad, some very inappropriate jokes.
[00:00:17] I never understand those people who say I have no regrets. You know? It's incomprehensible to me. Mine are as countless as the sands of the desert.
[00:00:32] The Greatest!
[00:01:15] Very Bad Wizards.
[00:01:17] Welcome to Very Bad Wizards. I'm Tamler Sommers from the University of Houston.
[00:01:21] Dave, Donald Trump has been elected once again as President of the United States, this time with a much bigger victory.
[00:01:29] Political analysts are still poring over the exit polls to figure out why this happened, but there is a general consensus building that it was you when you jinxed Kamala in our recent AUA on Patreon
[00:01:42] with your confident prediction that she'd win handily. How does it feel to be personally responsible for four more years of Trump?
[00:01:50] I so thought about it afterwards. I was like, oh man. I was like, Tamler's for sure convinced that I jinxed the election.
[00:01:59] You're not the only one.
[00:02:01] No.
[00:02:02] After that seltzer poll.
[00:02:04] It just gave everybody the fucking nectar that we were thirsty for.
[00:02:08] The sign that it had really just swept the nation was that you asked me about the seltzer poll.
[00:02:14] Like, that you even knew about it.
[00:02:16] In any case, I hope you didn't bet too much money on Predict It based on the seltzer poll.
[00:02:22] I didn't put any money on the markets, but I'm glad I didn't. I would have lost a lot of money, I think.
[00:02:27] Looking at the results, it's kind of crazy. Like, I almost think we probably did fake the results last time, right?
[00:02:33] Like, give it a miss?
[00:02:35] Yeah.
[00:02:35] I don't know. Different circumstances. You know, like everyone was sick of Trump and there was COVID and like, you know, Biden was somewhat functional.
[00:02:46] You know, I don't know. We don't, we shouldn't engage in our own post-mortem.
[00:02:50] No, the post-mortem.
[00:02:51] But...
[00:02:52] Everybody has a theory.
[00:02:52] I mean, I think basically people don't like Democrats right now. And for good reason, most of the time. I mean, they're really bad at pretending even to care about non-elites.
[00:03:07] Yeah. A little entrenched too.
[00:03:08] A little.
[00:03:09] Yes.
[00:03:09] A little bit of a bubble.
[00:03:13] Did you see the MSNBC clip afterwards where like, Joy Reid, who I guess is one of their things, was saying, she ran a flawless campaign. She had Queen Latifah, who she never endorses anyone on her side.
[00:03:26] She had the Swifties. Like, you can't do any better than that.
[00:03:29] And nothing that was true yesterday about how flawlessly this campaign was run is not true now.
[00:03:36] I mean, this really was an historic, flawlessly run campaign. She had, Queen Latifah never endorses anyone. She came out and endorsed.
[00:03:44] You know, I mean, she had every prominent celebrity voice. She had the Taylor Swifties, she had the Swifties. She had the Beehive. Like, you could not have run a better campaign in that short period of time. And I think that's still true.
[00:03:58] And it's just like, why do you think people don't like Democrats?
[00:04:02] Hey, LeBron endorsed her.
[00:04:05] Yeah.
[00:04:06] Yeah, I know. Some of the least likable people endorsed her. Hillary, LeBron.
[00:04:11] Oh, God.
[00:04:13] Liz and Dick Cheney. We can't forget how much she courted the attentions of Liz and Dick Cheney.
[00:04:19] Yeah.
[00:04:20] Nice work, Libs. Good job, David.
[00:04:23] Don't blame me.
[00:04:24] I mean, it is.
[00:04:26] I voted for Kodos.
[00:04:27] Exactly your fault.
[00:04:29] Not that what's coming is going to be an improvement on that, because I definitely don't think so.
[00:04:34] But man, if Bernie had just been allowed to win in 2016, he would have ended his, I guess, his two-term presidency right now.
[00:04:45] Right now?
[00:04:46] Yeah.
[00:04:47] To distract us, you put something that I loved into the slack.
[00:04:51] Should we say what we're doing for the main segment first?
[00:04:54] Yeah.
[00:04:54] Yeah. For the main segment, we're going to tackle, tackle, maybe.
[00:04:57] We're going to discuss perhaps the most widely used measure in all of social psychology slash judgment decision making, the cognitive reflection test.
[00:05:05] We'll put a link to it, a simple three-item task that if you haven't heard of it, you should take it and score yourself.
[00:05:12] Because we're going to talk about what it is, what it means, if it matters.
[00:05:15] What it means about you.
[00:05:17] What it says about you.
[00:05:19] Yeah.
[00:05:21] You know who would score low on the cognitive reflection test.
[00:05:24] Who?
[00:05:24] Tucker Carlson.
[00:05:26] So, yeah, I put this in the slack.
[00:05:28] This is a YouTube, I guess a preview, like a trailer, but it's really just a clip from a new movie called Christianites.
[00:05:38] Is that right?
[00:05:39] Or Christianities?
[00:05:40] I don't know.
[00:05:42] Christianities, maybe.
[00:05:43] Yeah.
[00:05:44] Which I don't know anything about.
[00:05:45] I don't know what that movie is or what Tucker Carlson's exact role with the movie is.
[00:05:53] But the clip is just an interview with him, I guess at his house, right?
[00:05:58] Or, you know, somewhere out in the woods.
[00:06:00] He seems to live in the woods with, as he says it, I guess, like four dogs.
[00:06:07] We see two dogs.
[00:06:08] We see two dogs.
[00:06:09] But yeah.
[00:06:09] So he describes it as they're talking, and it's definitely staged like, oh, this is just a conversation that we're having,
[00:06:15] I had you over to my house, for a profile or something like that, even though it's not, because it's from this movie.
[00:06:21] But in it, he reveals very casually almost that he was attacked by a demon at night.
[00:06:29] He wrestled with the demon like Jacob at night, although it wasn't.
[00:06:34] Jacob wrestled with an angel.
[00:06:36] An angel.
[00:06:36] Yeah.
[00:06:36] This was, well, it's Tucker Carlson.
[00:06:39] They're not sending him angels probably.
[00:06:41] But in this case, it's demon.
[00:06:43] Yeah.
[00:06:44] So the way he describes it is that he was sleeping in his bed with his wife and four dogs.
[00:06:51] And then at a certain point, a demon came and attacked him, like mauled him.
[00:06:57] He was like fighting it, but I don't think he got it.
[00:07:00] He didn't say that he fought it.
[00:07:01] He said he didn't wake up.
[00:07:02] He woke up and he had already been mauled.
[00:07:06] And it starts off, by the way, with him saying, like, have you had any experience with evil?
[00:07:10] He's like, oh, yeah, directly.
[00:07:11] Yeah.
[00:07:11] Yeah.
[00:07:13] Do you think the presence of evil is kickstarting people to wonder about the good?
[00:07:19] That's what happened to me.
[00:07:19] That's what happened to you?
[00:07:20] Oh, yeah.
[00:07:21] I had a direct experience with it.
[00:07:26] In the milieu of journalism or just?
[00:07:28] In my bed at night and I got attacked while I was asleep with my wife and four dogs in
[00:07:33] the bed and mauled.
[00:07:37] Physically mauled.
[00:07:39] In a spiritual attack by a demon?
[00:07:41] Yeah.
[00:07:41] By a demon.
[00:07:43] Or by something unseen that left.
[00:07:46] Is that right?
[00:07:47] Claw marks on my sides.
[00:07:49] On my.
[00:07:49] So he left physical marks.
[00:07:50] Oh, they're still there.
[00:07:51] Yeah.
[00:07:51] Yeah.
[00:07:52] Year and a half ago.
[00:07:53] And then he tells the story about waking up and he says the claw marks were underneath
[00:07:57] his arms.
[00:07:58] I think everybody is probably wondering maybe the dogs.
[00:08:02] That was my first thought.
[00:08:04] That was my first thought.
[00:08:05] Pretty sure that's the most parsimonious.
[00:08:07] One of the four dogs, you know, like I think I even know which dog you can tell because
[00:08:12] they show like a clip of them by a pond.
[00:08:15] And there was one dog that looks like he feels pretty happy with himself that he's gotten
[00:08:20] away with it this long.
[00:08:21] Oh, I got to the part.
[00:08:22] So here's what he says about the dream.
[00:08:24] Yeah.
[00:08:24] So he had gotten up in the middle of the night and noticed that he had blood on him and he
[00:08:28] had gone to the mirror and he had looked and there was these like claw marks and then he
[00:08:31] went back to sleep.
[00:08:32] And then the next morning he said, I didn't know if it was real.
[00:08:36] Was that just the weirdest dream that I'd ever had in my life?
[00:08:38] And he's referring to like, he didn't know whether he actually had the claw marks or whether him
[00:08:42] like going and looking and seeing the blood was a dream because he never says that he actually
[00:08:47] fought like a, like he saw anything or anything like that.
[00:08:50] Okay. So then that's even weirder that he, his conclusion was that he'd been attacked
[00:08:56] by a demon. Like he says, I knew it was spiritual.
[00:08:59] Yeah.
[00:08:59] And the guy even says, you didn't even try to explain it away, do some bullshit skeptical
[00:09:03] explanation like sleep paralysis or something. And Tucker says, no, no, no, I just
[00:09:08] knew. But then he also keeps saying how he's not from, like, a faith, what does he call it, like a
[00:09:14] faith-based tradition where they talk about demon attacks at night?
[00:09:19] Yeah. So I think what he's saying is that he's from one of, like, the mainline Christian,
[00:09:23] sort of, like, there are these mainline Christian denominations that aren't evangelical or, like,
[00:09:28] all about, like, the demons and the speaking in tongues and that kind of thing.
[00:09:32] He's not.
[00:09:33] Yeah. He's not from those. So he calls up his assistant, who was the only evangelical
[00:09:37] he knew, right? And she's like, oh no, that was totally a demon. That happens all
[00:09:41] the time.
[00:09:41] Oh yeah.
[00:09:43] No.
[00:09:45] Classic, uh, demon attack. Sure.
[00:09:47] Surprised it hasn't happened to me already.
[00:09:49] Yeah.
[00:09:50] There is something about the way that Tucker Carlson tells this story that I kind of love.
[00:09:54] And that is like, by the end he's like, you know, I don't care if anybody believes
[00:09:58] me. Oh, I don't know. Like whatever. You know, I just got attacked by a demon.
[00:10:01] So after this happens, he says he was seized with a sudden urge to read the Bible.
[00:10:06] Yeah.
[00:10:07] And he made sure to get one with no editorializing just the Bible. And he went through it and
[00:10:12] he, and then he's like, I'm not saying you should come to me for theological advice.
[00:10:17] And clearly the interviewer is like, well, I didn't think you were saying that. And that
[00:10:21] would be insane based on you read the Bible once after you think you were attacked by a demon.
[00:10:27] Although I think the, the interviewer is definitely from an evangelical tradition where it must be.
[00:10:32] Must be right.
[00:10:33] Yeah. But he says it was a transformative experience. You know, he gives a
[00:10:37] shout-out to Laurie Paul. So I guess it's his origin story for whatever new religious, I guess, I
[00:10:43] don't know. Maybe he was born again, or evangelical now, when he was Episcopalian before.
[00:10:48] Yeah. I'm not, I'm not sure. I'm not sure either. There are a few people who are having these like
[00:10:53] rebirths into like the more extreme versions of Christianity. And I think Tucker must be one of
[00:10:59] them. That's why I really want to watch this whole movie. But I love how he's just like,
[00:11:03] like you said, I don't want a Bible that editorializes. He's like, I'm going to go straight to the
[00:11:07] Bible. He said he read it and he reread it. And I love how like lacking in information,
[00:11:12] like closing yourself off to whatever anybody else has said is the truest way of assessing truth.
[00:11:20] And in fact, he takes a couple of shots at pastors. Like, he's like, I don't like them. I don't like,
[00:11:26] he's like, it's sad to say, but I don't think they're good people. The whole way he talks about
[00:11:30] it is very strange. Like it almost feels like it just came up. Like he wasn't going to bring it up,
[00:11:35] but now that they've talked about it, but now he's a little embarrassed. He doesn't want to convince
[00:11:39] anybody that it really happened. He doesn't want to.
[00:11:41] To me for theological advice. He doesn't even say, like, just from reading the Bible,
[00:11:45] what he took away from that. Not in the clip anyway, it's just, he did it. Yeah. Like,
[00:11:50] it's a very strange shoot though. Like, it's very eerie. Like, it's in the woods somewhere.
[00:11:55] He's like in his LL Bean, like, hunting gear, and it splices in shots of him,
[00:11:59] like, with a gun shooting at things. Yeah.
[00:12:02] Which I can only assume are demons that are coming for him in that moment.
[00:12:06] He's pretty cagey about it. Like, he's like, I don't understand it. He's like,
[00:12:10] you didn't try to explain it away with some scientific. He's like, no,
[00:12:14] cause it doesn't make any sense. And I, at first you think, Oh,
[00:12:16] like the only thing that makes sense is a demon attack. But then he's like,
[00:12:20] no, I still don't understand it. It's like, so how are you so sure it's a demon?
[00:12:23] But I also love the absolute sincerity with which he opens the story with that
[00:12:27] he's sleeping with four dogs, and that that's not, like, the first hypothesis that jumps to mind.
[00:12:34] He didn't even think like, he said, like, I didn't scratch myself. Yeah. He's like,
[00:12:39] I don't have any nails. And then he just does this weird laugh, you know, like,
[00:12:43] Oh, no, actual claw marks. And I sleep on my side. So I wasn't clawing myself. I don't have long nails.
[00:12:50] Um, and they didn't fit my hands anyway, but yeah, that happened.
[00:12:54] And he's like, the claw marks are still there, which I thought was going to be,
[00:12:58] he's going to lift up his shirt and show us or something.
[00:13:01] No, nothing.
[00:13:02] Yeah. You know what I would be like, I would love for a demon to come and claw me at night.
[00:13:07] You know, it would strengthen my faith. You know,
[00:13:12] it would finally, it would give me something to believe in.
[00:13:14] Well, I'll see what I can do.
[00:13:15] I'm just going to keep my dog next to me every night.
[00:13:18] I mean, the dog thing is almost so obvious. Like there's four of them in the bed. They might not like him. At least one of them probably doesn't like Carlson. And so like, yeah, they probably just secretly scratch him in the middle of the night.
[00:13:35] He also, he says the wife was in the bed, but she plays no other role in the story.
[00:13:41] Yeah. And nobody woke up either. None of the dogs, or him, or his wife. None of them woke up. And I'm like, damn, that was a quiet demon.
[00:13:49] Yeah. Like it just came in with no fanfare.
[00:13:53] So what do you think? You're a prognosticator now. What do you think actually happened? Like, I take it the choices are: one, he's making this up entirely; two, he thinks it really happened,
[00:14:04] but it was sleep paralysis. That's your go-to, sleep-paralysis-of-the-gaps kind of explanation for everything. Three, it's the dogs. And then four, other. Like, what do you think?
[00:14:20] I would go with wholesale made up. Yeah. Like whole cloth made up.
[00:14:25] So that's just a pure performance? Like. Yes. Did you see the clip of him at a grocery store in Russia talking about how amazing it was? No.
[00:14:34] Oh man. He goes to like this grocery store in Russia and he takes like a camera crew with him and he's just like, look, this is only whatever, $2, you know, in the U.S. we'd pay $5 for this.
[00:14:48] That's amazing.
[00:14:50] Well, you laugh at him, but who's president now?
[00:14:53] Yeah.
[00:14:55] All right. Let's get back to talk about the cognitive reflection test and the construct it purports to measure.
[00:15:53] Welcome back to Very Bad Wizards. This is the time where we like to take a moment and thank all of our listeners who reach out to us, who interact with us, who email us, tweet at us, all the various ways you get in touch.
[00:16:06] If you would like to do that, you can email us, verybadwizards at gmail.com, tweet at Tamler for me, at peas for David and at verybadwizards for both of us.
[00:16:18] You can join the subreddit. You can like us on Facebook, follow us on Instagram, and you could give us a five-star review on Apple Podcasts and wherever you rate your podcasts.
[00:16:32] That helps other listeners who might enjoy this podcast find us.
[00:16:37] So thank you. We read all our emails still. We can't respond to very many of them, but we really appreciate getting a chance to know how you feel about the show or a particular episode.
[00:16:47] Or if you just want to say hi. If you would like to support us in more tangible ways, you can go to the support page on our website.
[00:16:57] There you'll find swag. You'll find a bunch of different ways.
[00:17:01] But the main one now, and especially now that we don't do advertisements anymore, is our Patreon.
[00:17:09] And I want to make a brief announcement now, and we'll have a more complete announcement next episode.
[00:17:16] But we're making a few changes, very minor changes for the Patreon.
[00:17:21] The big change that will affect everyone here, whether you're a Patreon supporter or not, is soon the archive of the show,
[00:17:30] the archive being defined as the first 200 episodes, will only be available for our Patreon supporters.
[00:17:37] Everyone will have access to our last 100 episodes.
[00:17:41] But if you want to go further back than that, you will soon need to become a Patreon supporter.
[00:17:48] The other things you'll get by becoming a Patreon supporter is access to all of our bonus episodes,
[00:17:54] access to all the volumes of David's Beats.
[00:17:58] And he just posted another one for everyone on Patreon.
[00:18:01] At $2 and up per episode, you get the bonus tiers.
[00:18:04] You get the ambulators.
[00:18:06] You get Overton windows.
[00:18:07] You get all of the various other miscellaneous bonus episodes that we've done over the years.
[00:18:12] And we have done a ton.
[00:18:14] You also get the David Lynch series.
[00:18:16] Up one more tier you get, in addition, our Brothers Karamazov episodes.
[00:18:21] Five episodes.
[00:18:22] Deep dive on the Brothers Karamazov.
[00:18:24] Karamazov.
[00:18:24] And of course, at the $10 and up tier, you get to ask us a question every month for our monthly Ask Us Anything episode.
[00:18:31] And we will answer them for you in video form and audio form for everybody else.
[00:18:36] One last note for our Patreon supporters.
[00:18:39] Patreon is making us go on a per-month basis rather than on a per-episode basis.
[00:18:44] Basically, this will, with one exception, not really change anything because we always do two episodes per month.
[00:18:50] So basically, if you're at the $2 tier, that would be $4 a month.
[00:18:55] If you're at the $5 tier, that would be $10 a month, etc.
[00:18:59] The one change we are going to make is to bump up the $4 a month tier to $5 a month.
[00:19:07] You'll have plenty of notice about this.
[00:19:09] It won't go into effect at least until January 1st.
[00:19:13] At that level, you will have access to the complete archive of all of our episodes, bonus and non-bonus.
[00:19:21] And, of course, any future subsequent bonus series.
[00:19:25] And we're definitely taking suggestions.
[00:19:27] Now that we're winding down, we're almost at the end of the ambulators.
[00:19:31] We are definitely taking suggestions for what to do next.
[00:19:35] We have a lot of different ideas, books, maybe a noir series, maybe another TV series.
[00:19:44] So please send in your suggestions.
[00:19:47] Thank you so much to all of you for supporting us.
[00:19:49] It really means more now than ever before.
[00:19:53] We're happy with the decision to drop advertisements.
[00:19:56] It just makes us appreciate the support we get from all of you even more.
[00:20:02] All right, let's get back to the episode.
[00:20:04] Okay, let's get to the main segment today.
[00:20:06] All right, just a little bit of background.
[00:20:08] I know that in the past we've talked probably a bunch of times about this very popular family of theories in psychology
[00:20:15] that we often just lump together as dual process theories.
[00:20:19] These are approaches that characterize human cognition as consisting of two kind of different sorts of processes.
[00:20:25] And there's been a lot of these theories, and they have a ton of different names, which is a bit annoying,
[00:20:30] like heuristic versus systematic, intuitive versus analytical, central versus peripheral, associative versus rule-based.
[00:20:37] And weirdly, the ones that have become the most popular, maybe because of Kahneman, are system one and system two,
[00:20:43] where system one is the intuitive and system two is the deliberate, systematic.
[00:20:47] But what all of these theories have in common is that, again, they present one way of thinking as conscious, slow, deliberate, rational, effortful, rule-based,
[00:20:58] and the other as a fast, intuitive, heuristic, associative, sometimes emotional or unconscious way of thinking.
[00:21:06] So, like, we're in two modes.
[00:21:08] And these dual process theories have been dominating the literature in social psychology and judgment and decision-making for, like, 20, 30 years now.
[00:21:17] But by and large, people have thought of these really as descriptions of everybody,
[00:21:22] as, like, these two universal processes that we all use.
[00:21:26] And they've paid less attention to, like, individual differences to rely on one way of thinking versus another.
[00:21:34] And that's where this topic comes in, this cognitive reflection test.
[00:21:37] And I don't know, Tamler, as far as I can remember, I don't think we've ever done an entire main segment on a measure.
[00:21:42] We usually are just shitting on them, like, in the intro.
[00:21:45] Yeah, that's right.
[00:21:46] Yeah, main segment.
[00:21:47] I'm not sure.
[00:21:48] So we're going to try.
[00:21:49] Yeah.
[00:21:49] We're going to try it today.
[00:21:50] And in part, we're focusing on the measure because it's been so popular as a way of assessing these individual differences
[00:21:55] in the tendency to rely on intuitive thinking.
[00:21:58] And partly because Tamler has a big boner for measurement lately.
[00:22:02] Yeah.
[00:22:03] Not lately.
[00:22:03] I mean, I think for the last, like, six or seven years.
[00:22:06] It's a long boner.
[00:22:09] All right.
[00:22:10] So the cognitive reflection test was originally developed by Shane Frederick in 2005 as this three-item measure
[00:22:15] that's supposed to capture this intuitive style of thinking.
[00:22:19] And it's been used by now on, like, hundreds of thousands of participants.
[00:22:25] There are some papers that try to estimate how many people have seen at least one of the items.
[00:22:28] And they say that, like, half of the online pool of people who take the test have some familiarity with it.
[00:22:34] Which would totally invalidate the test for all those people.
[00:22:38] Well, and we can talk about that because people have tried to argue that it doesn't.
[00:22:42] But I don't know how.
[00:22:43] So the idea is pretty straightforward.
[00:22:45] It's just they're questions that are designed to have an intuitive but wrong answer.
[00:22:49] So, like, there's an answer that pops into your head right away, but it's the wrong one.
[00:22:54] And in order to get to the right answer, you have to, like, really think about it.
[00:22:58] So they're, like, trick questions.
[00:22:59] Or they have a lure, as they call it.
[00:23:01] And the original idea was that it would be a measure of the degree to which people can suppress their intuitive response.
[00:23:08] So the idea is everybody has that intuitive, that gut feeling that this is the answer.
[00:23:12] It's just that some people can, like, turn off that little voice and spend the time, whatever, deliberating.
[00:23:19] It's like you see the cookie, but you know you shouldn't eat it before dinner.
[00:23:24] So can you suppress it?
[00:23:25] But this time it's your intuition about what the answer obviously is.
[00:23:30] And you suppress that to get the right answer.
[00:23:32] Yeah.
[00:23:33] Right.
[00:23:34] So I thought it'd be good to just talk about the original three items because these are the ones that are still used.
[00:23:39] So I'm just going to read them out.
[00:23:40] The first is, and the most famous and most used one, is a bat and a ball cost $1.10 in total.
[00:23:46] The bat costs $1 more than the ball.
[00:23:49] How much does the ball cost?
[00:23:51] Oh, well, that's obvious.
[00:23:53] It's 10 cents.
[00:23:55] That's right.
[00:23:56] Totally.
[00:23:57] We're done.
[00:24:00] Next question.
[00:24:01] Next question.
[00:24:02] If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?
[00:24:07] Wow.
[00:24:08] I just saw the two numbers 100 there before, so I'm going to assume it would take 100.
[00:24:13] Five, five, five.
[00:24:14] 100, 100, 100.
[00:24:16] Easy peasy.
[00:24:17] What's the next one?
[00:24:18] Last one.
[00:24:18] In a lake, there's a patch of lily pads.
[00:24:20] Every day, the patch doubles in size.
[00:24:22] If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?
[00:24:29] So I guess the intuitive one here, although it wasn't my intuition, is 24.
[00:24:35] Is that right?
[00:24:36] Yeah, that's supposed to be the intuitive.
[00:24:37] Yeah.
[00:24:37] I guess because it's doubled in size every day.
[00:24:41] Yeah, that one seems the easiest of them all, I think.
[00:24:44] The other two, I had the intuition.
[00:24:47] Oh, like one that popped to mind.
[00:24:48] There was one that popped to mind, but this one, 24, didn't even pop to mind.
[00:24:53] A couple other wrong things popped to mind at first.
[00:24:56] Like, oh my God, I'm going to have to do square roots or something.
[00:25:01] Right.
[00:25:02] That actually, in the longer versions, that happens to me a lot.
[00:25:05] When we might talk about some of the other items, it's less to me that there's some immediate answer that pops into my head.
[00:25:11] It's not clear what the lure is.
[00:25:13] Yeah.
[00:25:13] Right.
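For anyone checking their own answers, the standard solutions to the three items above can be worked out with a few lines of arithmetic. This is a quick sketch of our own (not something from the episode), with each lure answer noted next to the correct one:

```python
# The three original CRT items (Frederick, 2005), worked out directly.

# 1. Bat and ball: bat + ball = 1.10 and bat = ball + 1.00.
#    Substituting: ball + 1.00 + ball = 1.10, so 2 * ball = 0.10.
ball = (1.10 - 1.00) / 2          # correct: $0.05 (lure: $0.10)

# 2. Widgets: 5 machines make 5 widgets in 5 minutes, i.e. each machine
#    makes one widget per 5 minutes, so 100 machines make 100 widgets
#    in that same 5 minutes.
widget_minutes = 5                # correct: 5 minutes (lure: 100)

# 3. Lily pads: the patch doubles daily and covers the lake on day 48,
#    so it covered half the lake exactly one doubling earlier.
half_lake_day = 48 - 1            # correct: day 47 (lure: 24)

print(round(ball, 2), widget_minutes, half_lake_day)  # prints: 0.05 5 47
```

Getting all three scores 3 out of 3 under the usual correct/incorrect scoring discussed below.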
[00:25:13] Right.
[00:25:14] So obviously, each of these, you can respond with the intuitive incorrect answer.
[00:25:20] You could come up with an incorrect answer that's not intuitive.
[00:25:23] Yeah.
[00:25:24] Which is a potential problem because they score it as correct or incorrect, right?
[00:25:29] Yes, that's right.
[00:25:30] So usually the way that it's scored is you either get zero, one, two, or three, right?
[00:25:36] Depending on how many you got correct.
[00:25:37] And that score is then correlated with things.
[00:25:39] As we'll talk a bit later, that actually makes a difference.
[00:25:43] That's how you score.
[00:25:44] Yeah.
[00:25:44] And just to get a better sense of what's going on, what is the psychological attribute or trait that this is supposed to be able to identify and detect individual differences in?
[00:25:59] Yeah.
[00:26:00] Well, that's debated.
[00:26:01] But the idea is, and the reason that this has been used so widely, especially in the judgment and decision-making literature, is because it maps on so nicely to all of these studies, like, you know, the classic Kahneman-Tversky stuff, where you have these heuristics, these intuitive responses that lead to biases, like availability, the gambler's fallacy, the conjunction fallacy, the base rate fallacy.
[00:26:25] All of those things that seem like if you're a bit lazy and you just give whatever answer pops to mind, it's trying to capture how often you do that.
[00:26:34] And it's supposed to be a different measure than just straight-up cognitive ability, which sometimes is just referred to as intelligence or IQ, whatever it is an IQ test measures.
[00:26:46] And the difference is that it could be that you have, you Tamler, say you're reading these and let's say you responded just like you did in this little intro.
[00:26:56] You could probably arrive at the right answer.
[00:26:59] You have the ability to do it.
[00:27:01] You have like the right cognitive stuff to do it.
[00:27:05] You just don't.
[00:27:06] You just like went with the intuitive.
[00:27:08] Went with the gut.
[00:27:10] Exactly.
[00:27:10] And that's why it's supposed to be different.
[00:27:12] You didn't moneyball these.
[00:27:14] But I could have moneyballed them.
[00:27:16] I just.
[00:27:17] Exactly.
[00:27:17] I reject that as a like way of living.
[00:27:20] That's not my conception of the good life.
[00:27:23] So I still don't know if that fully answers the question of what the attribute or construct is supposed to be exactly.
[00:27:30] And I know this is a problem that's sort of flagged with it, but I don't even know what the like leading options are.
[00:27:37] It's not just the disposition to do well, or to avoid the Kahneman and Tversky fallacies, like the gambler's fallacy or base rate fallacy.
[00:27:46] Right.
[00:27:47] That doesn't explain like what it is that just says, well, if you do well on this test, you'll probably be less likely to fall for these things.
[00:27:54] But that's not a property of somebody.
[00:27:58] So like how would you describe just the general thing?
[00:28:00] Is it the ability to override intuition in cases where the intuition is false?
[00:28:07] Like how would you just give a general description of it?
[00:28:09] I would say it's whatever that disposition is that makes somebody more likely to go with their gut.
[00:28:16] So like you like going with your gut.
[00:28:18] You think that that's actually like a good thing to do.
[00:28:21] So going with your gut more times than not is, if they're measuring what they say they're measuring, whatever you want to call it.
[00:28:29] Some people call it cognitive laziness.
[00:28:31] Some people might just refer to it as intuition.
[00:28:33] Like it's just that, on average, are you more likely to pause and take the time to deliberate, or are you willing to stop as soon as a putative answer presents itself?
[00:28:46] And how wide a domain of decisions are we talking about?
[00:28:50] Is it like on tests?
[00:28:52] I assume it's a little bit more than that.
[00:28:54] Is it also supposed to be that I'm more likely to do that when I'm deciding whether to propose, or where to go on vacation, or what job to take? Like, how broad is the construct supposed to be?
[00:29:08] That's a good question.
[00:29:09] And the only real answer is: what has it been shown to be associated with that makes sense under this conception?
[00:29:17] And so, what have people tried to show? Like, I don't know if anybody's done it with your willingness to choose one job over another based on your gut response to it.
[00:29:26] But they've done it for all of those heuristic things like the base rate fallacy and all that.
[00:29:31] They've done it for religious or supernatural beliefs.
[00:29:34] They've done it for like conspiratorial beliefs, beliefs that have a low probability of being true in general.
[00:29:40] They've done it for, you know, that literature on bullshit, like pseudo profound statements.
[00:29:44] Like we ask people, is this a meaningful statement or not?
[00:29:47] I looked into that actually, the Pennycook stuff. That's the literature that Pennycook at least raised.
[00:29:55] He's my colleague, but we laugh a lot because everybody calls him Pennycock.
[00:29:58] It does seem like his name should be Pennycock.
[00:30:00] He should just legally change it.
[00:30:02] I have in my notes Pennycock.
[00:30:05] That's a Pennycock.
[00:30:07] Yeah, the Pennycock.
[00:30:08] But I mean, like with some of those things, the pseudo profound, that doesn't seem like, oh, I'm going with my gut by thinking that that statement is profound.
[00:30:18] And religious belief also not necessarily like I'm going with my gut and being religious.
[00:30:25] So like that makes it kind of more confusing what exactly it is we're talking about.
[00:30:31] It is confusing.
[00:30:32] And Pennycook himself has written papers where he says, look, maybe intuitive isn't the right way of describing religious beliefs.
[00:30:41] Because like there are people who would actually describe many religious beliefs as completely non-intuitive.
[00:30:46] And so what does it mean if the CRT, this test, is predictive of how likely you are to believe that you were scratched by a demon at night?
[00:30:55] Right. That's not like, oh, my gut tells me I was.
[00:30:58] Like, I mean, the way he described it, it's like a rational inference from the fact that he had claw marks on his arm that didn't fit his own hands.
[00:31:06] Or an inference.
[00:31:07] Yeah.
[00:31:08] I have no idea what happened.
[00:31:10] All I know is I was dead asleep with my wife and dogs and I woke up with claw marks on my ribcage underneath my arms.
[00:31:19] So this is why it sort of ends up being important to try to distinguish it.
[00:31:23] So like in the original paper that Frederick published in 05, he tries to at least distinguish it from measures of cognitive ability.
[00:31:32] So he includes a kind of intelligence task.
[00:31:35] And many people have done that since, including in one of the papers that we'll link to, the most recent one by Nikki Blacksmith, where they're looking at these measures.
[00:31:44] So take your traditional IQ test.
[00:31:46] Is this capturing something that's above and beyond?
[00:31:50] It might be related to intelligence.
[00:31:52] But if the ability to suppress the intuitive response is independent of that, then it shouldn't be overlapping with intelligence tests.
[00:32:02] So if you give any two IQ tests, even if different people developed it, chances are they're going to be pretty highly correlated.
[00:32:08] Like the ACT and the SAT are pretty highly correlated with each other.
[00:32:11] But the CRT doesn't have like such a strong correlation with these IQ measures.
[00:32:17] Right. So it could be like you and I have the same exact IQ.
[00:32:20] But because I just trust my gut more, I'm more likely to get these answers wrong in spite of the fact that I'm just as smart as you.
[00:32:29] Yeah.
[00:32:29] That's what he wants to say.
[00:32:31] Yes. Yeah. And that's what makes this a sort of unique and original kind of measure.
[00:32:36] But, you know, one of the things that you're making me think is that I don't know whether or not people have developed this.
[00:32:43] I'm sure by now, with so many papers on this, maybe somebody has.
[00:32:47] But it would be interesting to develop questions in which the intuitive response was the right one.
[00:32:54] Right.
[00:33:24] I've been kind of annoyed with, like, say, the Kahneman-Tversky way of thinking.
[00:33:28] The way that they even started talking about heuristics and biases was that, look, these intuitions, these heuristics, by and large, like the reason we have them is because they yield the right response so often.
[00:33:39] Yeah.
[00:33:40] And it's just in cases that we can create where it yields the wrong response.
[00:33:44] So like the conjunction fallacy, where I say, you know, is Linda a feminist and a bank teller, or whatever that famous Linda problem is.
[00:33:52] Yeah, which is more likely that she's.
[00:33:55] Mathematically, like it's always going to be true that one of them is more likely than both of them together.
[00:33:59] So the question is, Linda goes to a lot of women's marches or something.
[00:34:04] Exactly.
[00:34:04] Do you think it's more likely that she's a feminist and a bank teller or just a bank teller?
[00:34:08] And people would say the feminist and the bank teller.
[00:34:10] Right.
[00:34:11] Yeah, exactly.
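The mathematical point behind the Linda problem is just the conjunction rule: for any events A and B, P(A and B) can never exceed P(A). A quick sketch with made-up numbers (the probabilities here are purely illustrative, not from any study):

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A).
# Made-up probabilities for the Linda problem, purely for illustration.
p_teller = 0.05                # P(Linda is a bank teller)
p_feminist_given_teller = 0.8  # P(feminist | bank teller): even if very high...

# ...the conjunction is still no more likely than "bank teller" alone.
p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller

print(f"P(teller) = {p_teller:.2f}, P(teller and feminist) = {p_both:.2f}")
```

However strongly "feminist" fits Linda's description, multiplying by a conditional probability can only shrink the number, which is why the intuitive ranking is mathematically guaranteed to be wrong.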
[00:34:11] So there is something where you create these scenarios where the intuitive system leads you astray and it got kind of like distorted into a general, like, look how stupid people are all the time.
[00:34:23] Right.
[00:34:23] And those are pretty artificial and with a very clear right answer to and a clear wrong answer.
[00:34:29] But some of the time they use this stuff for stuff that is just not clear, like they'll say it's a mistake to take a dollar now versus a dollar twenty tomorrow, or something like that.
[00:34:41] And there's just normative like assumptions built into what the rational thing to do is and how our intuition leads us astray in these kinds of situations.
[00:34:50] Even, as you know, with the utilitarian-versus-deontological dilemmas, which are riddled with all sorts of issues.
[00:34:57] But like it's kind of implicit that the utilitarian one is the right one and or sometimes explicit.
[00:35:04] Explicit.
[00:35:05] You know, when our emotions are leading us astray in, you know, thinking that we shouldn't push the fat guy off the bridge.
[00:35:12] Right.
[00:35:12] Or fuck the chicken.
[00:35:16] Classic.
[00:35:16] Classic dilemmas.
[00:35:19] You just brought home a chicken from the grocery store.
[00:35:22] On your way to bring the chicken back and fuck it, you see a fat man on the bridge.
[00:35:27] But you're already super hard.
[00:35:30] Yeah, exactly.
[00:35:31] Right.
[00:35:32] So you just walk by.
[00:35:33] You leave him alone.
[00:35:35] Costing the lives of five people.
[00:35:38] So we jumped into the meat of it.
[00:35:40] But like, how much did you know about this stuff before?
[00:35:42] Like the eight million papers that we probably were looking over?
[00:35:46] I mean, I knew about it because people will always talk about how people who perform high on the CRT do this or that, whether it's job performance or being less likely to be religious.
[00:35:56] That's what I've come across.
[00:35:58] But like I never exactly understood.
[00:36:00] I assumed it was trying to measure just how reflective people are.
[00:36:05] Because they called it, Frederick at least, called it cognitive reflection.
[00:36:08] That's the attribute we're talking about.
[00:36:12] But it's just very unclear what the construct is.
[00:36:16] And there hasn't been much attempt to define it or to sort of wrestle with some of the ambiguities.
[00:36:23] And the one I wanted to ask you about is, so she says it confuses the processes or the mechanisms at play with the ability that we're talking about.
[00:36:36] So sometimes it's talked about as an ability, you know, to just override your intuition when it'll lead you astray, I guess, would be the ability.
[00:36:44] But other times it's spoken of in more theoretical terms as part of this process where the intuitions are activated.
[00:36:52] And then sequentially after that, the reflection goes, hey, hold on there, big dog.
[00:36:58] Let's take a second and figure out what's going on here.
[00:37:03] Right.
[00:37:03] I was trying to wrap my head around that specific thing.
[00:37:07] Do you actually have the quote in the paper where she says that?
[00:37:12] So as we noted previously, the conceptualization and content domain of the CRT are ambiguous.
[00:37:18] Frederick defined CR as the ability or disposition to resist reporting the response that first comes to mind.
[00:37:25] And he says that's grounded in the heuristic analytic theory of reasoning.
[00:37:30] The theory posits that people use two types of information processing when reasoning: a quick, intuitive processing and an effortful, reflective processing.
[00:37:39] Intuitive processing leads to biased responses, whereas reflective processes override the intuitive response to reach correct responses.
[00:37:48] A critical assumption of the theory is that the use of the two processes is sequential, not simultaneous.
[00:37:55] That is, people high in CR are first able to recognize the need to override the intuitive process and then engage in reflective thinking to reach a correct conclusion.
[00:38:06] Yeah, I'm not sure if that's the objection that you were referring to.
[00:38:09] But this is at least one of the issues that Blacksmith has, which is you can think of the tendency to rely on intuition and then the ability to reflect.
[00:38:21] And the CRT, one of the problems with this as a measure is that it's sort of measuring two things in one question.
[00:38:30] And whenever that happens, then you have to do some work to try to tease apart what's going on.
[00:38:35] Is it that people are high in reflection, like reflective ability?
[00:38:41] Right.
[00:38:41] Or is it that they are low on intuition?
[00:38:45] Because those two things are independent.
[00:38:47] And that's why they make it a point in their paper to separate into three ways of answering.
[00:38:52] So the intuitive incorrect one, the non-intuitive incorrect one, in which presumably people aren't relying on their intuitive response.
[00:39:00] They just get the math problem wrong.
[00:39:03] They know it's a math problem.
[00:39:04] And then getting it right.
[00:39:05] So they create a score based on how much intuition is being relied on versus how much reflection is being done.
[00:39:14] And so now for any given person that takes a bunch of these items, you have two metrics.
[00:39:21] Right.
[00:39:22] One that's like how high or low are they in their tendency to be intuitive?
[00:39:26] And then one on how high or low are they on their reflective ability?
[00:39:30] And then because they're interested in figuring out if these are two different things.
[00:39:34] And what they end up finding is that those two are pretty highly correlated with each other.
[00:39:38] So even though they're conceptually independent, they're pretty highly correlated.
[00:39:42] Like 0.87.
[00:39:43] It's a really high correlation with each other.
[00:39:45] So then presumably there aren't a lot of people who get the incorrect but non-intuitive answer.
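A toy sketch of that scoring scheme (this is a crude proportion-based version made up for illustration, not the item-response-tree model the Blacksmith paper actually fits): code each response as intuitive-incorrect, non-intuitive-incorrect, or correct, derive the two per-person metrics, and correlate them across people.

```python
from math import sqrt

# Hypothetical response codings for five people across three CRT-style items:
# "II" = intuitive incorrect, "NI" = non-intuitive incorrect, "C" = correct.
people = [
    ["II", "II", "C"],
    ["C",  "C",  "C"],
    ["II", "NI", "II"],
    ["C",  "II", "C"],
    ["NI", "C",  "C"],
]

# Crude per-person metrics: intuition = share of lure responses,
# reflection = share of correct (lure-overriding) responses.
intuition  = [r.count("II") / len(r) for r in people]
reflection = [r.count("C")  / len(r) for r in people]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Conceptually independent metrics, but strongly correlated in practice
# (negatively under this coding: more lure-taking means fewer overrides).
r = pearson(intuition, reflection)
print(f"r = {r:.2f}")
```

With few non-intuitive errors in the data, the two scores are nearly mirror images of each other, which is the pattern behind the roughly 0.87-magnitude correlation discussed above.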
[00:39:52] Yeah.
[00:39:53] But here's an interesting thing.
[00:39:54] So there are all these measures that just straight up ask you, how likely are you to go with your gut?
[00:40:02] Right.
[00:40:02] So they'll ask you a series of questions like almost like a personality test.
[00:40:05] Like when I'm faced with a problem, I like to go with my gut.
[00:40:07] Or like I like to really think carefully about every decision I encounter.
[00:40:11] Right.
[00:40:11] So you ask a bunch of those questions and now you have like a thinking style questionnaire.
[00:40:15] Yeah.
[00:40:17] And those, it turns out, aren't related at all to scores on CRT.
[00:40:20] You would think that the people who go with the intuitive response also would say I am the kind of person who relies on their intuition.
[00:40:30] Right.
[00:40:31] But it doesn't.
[00:40:31] And I'm not sure why.
[00:40:33] Like maybe self-report is just a shitty way of assessing this stuff.
[00:40:36] Or maybe these tests are a shitty way of assessing this stuff too.
[00:40:42] Right.
[00:40:42] Because all they are are measuring how you do on a very specific kind of test.
[00:40:48] And even when you're comparing it to Kahneman and Tversky, it's like those are very similar kinds of fallacies.
[00:40:57] You know, the same kinds of mistakes.
[00:40:59] But like, you know, to the extent that it generalizes to how intuitive you are in general.
[00:41:06] I mean, that's what we were talking about at the beginning.
[00:41:08] Right.
[00:41:09] Like, I think people might be reporting not how likely am I to be careless on a test, but how likely am I to, you know, rely on my gut and decide to leave my wife for this nice, beautiful waitress.
[00:41:24] Yeah.
[00:41:25] To go to French Polynesia.
[00:41:27] To be true.
[00:41:28] Yeah.
[00:41:28] Exactly.
[00:41:29] No, no.
[00:41:29] Yeah, you're right.
[00:41:30] And so I think what are the chances that when you ask somebody, do you go with your gut?
[00:41:34] They're thinking, well, I know that like I go with the first answer on trick questions all the time.
[00:41:40] Right.
[00:41:41] I know that when I go on Mechanical Turk.
[00:41:45] I don't think that somebody who's taking the CRT is thinking, yeah, I went with my gut on these.
[00:41:52] They're thinking I got the right answer.
[00:41:55] The mistake they're making is thinking that this is an easy question.
[00:41:57] Right.
[00:41:57] Not that they like, they were like lazy, you know.
[00:42:00] Right.
[00:42:00] And careless, like on a test.
[00:42:02] How does this just differ from carelessness, you know.
[00:42:06] Like laziness of some sort.
[00:42:07] Yeah, laziness of some sort that just is captured by the fact if there is some kind of trick that you're likely to fall for it because you just don't give a shit or you don't pay too much attention to these kinds of things.
[00:42:19] Right.
[00:42:19] So, yeah.
[00:42:21] And I do think, yeah, it's not surprising that there's not much correlation with how intuitive they think they are because that's part of these, you know, the issue with this literature in general.
[00:42:30] Right.
[00:42:30] It's the extent to which it generalizes because the kinds of scenarios like the framing puzzles and things like that, they're all very specific.
[00:42:39] It's something that only a kind of small sliver of people are exposed to.
[00:42:45] And so, yeah, of course, they'll more likely fall for it.
[00:42:48] But I guess the question is to what extent does this generalize beyond getting the right answers in questions and tests?
[00:42:56] And the question of how you offer evidence for generalizability is tough, too, because what you do is you just give the CRT and then a bunch of other measures, even life decisions, that it might be correlated with.
[00:43:09] Is it because of the processes that you think that you're capturing with the CRT?
[00:43:13] Now, the CRT has this simplicity to it that seems very appealing, where it's like, oh yeah, the intuitive response pops into my head.
[00:43:22] Do I go with it or not? But it could be measuring all kinds of stuff that's hard to put your finger on, including motivation to do well on tasks like this.
[00:43:32] Yeah.
[00:43:33] One of the bigger debates in this is, is this just numeracy? Is this just basic mathematical ability?
[00:43:39] And you certainly need some sort of patience, motivation, mathematical ability, and maybe also that thing that is suppress your first response.
[00:43:47] Yeah, exactly. Like probably to someone who's mathematically sophisticated, they do use their intuition to solve it and they get the right answer, you know, in the same way that like a chess master will intuit the right move, whereas someone else would have to reason their way to it and be lucky if they could get at it.
[00:44:06] But that doesn't mean that they're more intuitive. It just tells us that, you know, they have enough experience with math to just see, oh, like that's the right answer, you know?
[00:44:17] Yeah. Okay. There's a lot of things I want to say about my general beef with dual process theories that you point to, one of which is that it doesn't make much sense to me to try to characterize what a mathematician or a chess player is doing as either intuitive or rational.
[00:44:31] Right. That just doesn't make sense to me. Like, obviously, if the problem is complex enough, they're doing a whole shit ton of deliberation and they're being very careful.
[00:44:42] And they're also building all of their decisions on a host of intuitions that have been learned over time.
[00:44:47] Yeah.
[00:44:48] Due to expertise. And so I don't know what the value really is. And I've gotten in arguments with people in my own department about whether it makes sense to just lump these as two separate things just because you can create problems.
[00:45:01] Right.
[00:45:02] Where you can separate these. Does it actually mean that it's capturing decision making?
[00:45:06] And so what they'll say to me is, but like, obviously, sometimes you think hard and sometimes you don't. And obviously, sometimes you make mistakes in judgment and sometimes you don't.
[00:45:14] So, like, doesn't it make sense that a measure that is trying to measure whether you're thinking hard would be correlated with whether you make a mistake?
[00:45:22] And that just seems to me to be missing a whole bunch of richness.
[00:45:25] Behind the cognitive processes.
[00:45:26] I agree. And it's just a way of artificially kind of simplifying what is actually going on.
[00:45:34] And maybe that's the point of that quote I read earlier is there is this idea that just like, OK, one train goes off the track, the intuition track, and then the other one is now trying to stop that train.
[00:45:47] And then once it realizes, oh, no, that's like that's going to lead me astray.
[00:45:52] Now they set down to do the actual problem.
[00:45:55] And yeah, I mean, that's probably true for a lot of people, but it's definitely not going to be true for everybody.
[00:46:01] Some people just don't care and they'll just write down whatever.
[00:46:04] And it's not like they wouldn't be able to spot that error if this meant anything to them.
[00:46:10] It's just right. And then also like sometimes we just use a combination of both of them at the same time, which I guess is your point.
[00:46:17] To me, like the huge motivation is to not be embarrassed into looking dumb.
[00:46:21] Like that's just like the eager to please like the academic circle is like so ingrained in me.
[00:46:28] Yeah. And of course, we would think that that's then correlated with like high levels of like reflectiveness and bias avoidance, you know,
[00:46:38] because we just are eager to get things right on tests.
[00:46:41] Yeah. So there's so many more wrinkles to this, too.
[00:46:44] So Andrew Meyer and Shane Frederick have a paper that they published recently that I didn't put in the Slack.
[00:46:50] It's a massive data collection of essentially answers to the bat-and-ball problem.
[00:46:56] Like I think in this paper alone, they present 59 studies with 72,000 subjects, and they did variations of how they ask the bat-and-ball problem.
[00:47:08] And they conclude if I'm getting this right. Sorry, Shane Frederick, if I don't.
[00:47:13] That depending on the way that you ask the question and the kinds of questions that you formulate, you have this middle ground that isn't captured by the simplistic way in which it's been used.
[00:47:23] So just changing the wording that you use can lead to better or worse performance on this, meaning, on the most charitable view, that people's intuition is not an on-or-off switch, and neither is their deliberation.
[00:47:40] But rather like there are just gradations of the extent to which people are using both of those things throughout.
[00:47:47] And random triggers that can make you use your intuition more.
[00:47:51] But it doesn't mean that you are an intuitive person.
[00:47:54] It's just if different framings can affect that, it's sort of like, yeah, under some circumstances, give me a few drinks and I'll use my intuition more or whatever it is.
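For reference, the bat-and-ball item they keep re-wording is the standard one: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The lure is 10 cents; the constraint forces 5 cents. A quick check of the arithmetic:

```python
from fractions import Fraction  # exact cents, no float rounding

# bat + ball = $1.10 and bat = ball + $1.00, so:
#   ball + (ball + 1.00) = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05
total = Fraction(110, 100)
difference = Fraction(100, 100)
ball = (total - difference) / 2
bat = ball + difference
assert bat + ball == total and bat - ball == difference
print(f"ball = ${float(ball):.2f}, bat = ${float(bat):.2f}")  # ball = $0.05, bat = $1.05

# The intuitive 10-cent answer violates the stated total:
lure = Fraction(10, 100)
assert lure + (lure + difference) == Fraction(120, 100)  # $1.20, not $1.10
```

The lure survives precisely because "$1.00 and 10 cents" satisfies the salient sum while quietly breaking the difference constraint.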
[00:48:04] Right. And then there's another problem, which is you can set up the problem to test whether people completely endorse their intuition as right, or whether, when prompted, they'll be like, oh, shit.
[00:48:18] No, wait. That's probably not the right answer.
[00:48:20] And that's yet another dimension of what's going on that the CRT doesn't capture.
[00:48:24] So there's a simplicity on the face of this measure that just does not adequately present the richness of all of the things that it might be measuring.
[00:48:34] Yeah, which hasn't stopped it from being used, because, you know, the context that you'll see it in outside of this very specific literature is claims like: more reflective people are likely to be less religious, less prone to believe conspiracy theories, and so on.
[00:48:53] But the thing that is allowing them to say that these people are more reflective or less reflective is just how they do on that three-question test.
[00:49:04] And so this is one of those things where a lot of the time the generalizability beyond the very artificial environment that these things are tested in is just assumed.
[00:49:13] And it's definitely not demonstrated, certainly not in any of the things that I read that you put in.
[00:49:19] But also, like you said, it's not even clear how you would do it, because there's a lot of things where it's correlated with, like, higher job performance.
[00:49:27] Right. But you don't know.
[00:49:29] There's no reason to expect that that means, oh, you know, like they're more reflective at work.
[00:49:35] Right. There could be a lot of reasons.
[00:49:37] Yeah. That's that's been argued for like the marshmallow task as well.
[00:49:40] Yeah. The reason that the marshmallow test is correlated with performance on the SAT all those years later is that what you're getting is a sort of eagerness to please authority in children.
[00:49:50] Slavishness.
[00:49:52] Exactly. But you said generalizability.
[00:49:54] I don't think that's the criticism I was raising at all.
[00:49:58] So like the CRT does predict whatever supernatural beliefs, religiosity, conspiratorial thinking, endorsement of bullshit statements.
[00:50:06] It predicts those.
[00:50:07] The question is more: what is the validity of this as a measure of reflectiveness?
[00:50:14] Well, that's what I mean.
[00:50:15] Yeah, but that's not a question of generalizability. That's a question of validity.
[00:50:17] The criticism that I was laying down, that I thought you were picking up, is that the CRT might be measuring, like, four things.
[00:50:26] Oh, OK. Got it. So I was building on that point by saying that you can't use these other correlations, like belief in conspiracy theories or belief in some sort of supernatural orientation, to validate the measure, because there's no reason to think that the people who are more reflective or more intuitive would be more prone to those dispositions.
[00:50:54] So that doesn't validate. Yeah.
[00:50:57] And that's usually how this thing is used.
[00:50:59] That doesn't validate the measure.
[00:51:01] So how would you validate the measure?
[00:51:03] I guess their one attempt was to do that self-report thing.
[00:51:06] Like, do you consider yourself like prone to intuition?
[00:51:10] But that failed. Right.
[00:51:12] Right. But you're agreeing that it's validity, not generalizability.
[00:51:14] Yeah. But I take validity to mean like the construct validity, that this is a real trait that we have.
[00:51:21] Reflective, that we're either prone to use our gut or prone to be reflective and that that would show up in other areas of life also.
[00:51:29] Right. It's a tricky thing for them.
[00:51:31] It is a tricky thing to figure out what the right step here is to make.
[00:51:35] So if you say the CRT, if it is a valid measure of your reliance on intuition, then it ought to predict how accurate you are at avoiding the conjunction fallacy or the framing problem.
[00:51:48] And so you show that and you could retort like you and I have.
[00:51:54] Well, isn't that so close to the domain of the original questions that it's not like showing that it's true?
[00:52:01] It's not really providing.
[00:52:02] You're good at solving logic problems.
[00:52:03] Congratulations.
[00:52:04] Right. But then when you show that it's related to lower belief in like bullshit statements or in conspiracies, you could say I don't think it's on the face of it so weird to say, yeah, people who are less reflective, who don't really take time to think about a problem are more likely to believe pseudo profound bullshit.
[00:52:22] That seems like on the face of it, a valid thing that it should be related to.
[00:52:26] And so you show that.
[00:52:27] But then you could say, like we have been saying, well, you don't really know what the CRT is actually measuring.
[00:52:32] It might be measuring four things that are related to this.
[00:52:34] Yeah. And also it's not clear to me at all that a more intuitive person, versus a more reflective person, would fall for pseudo-profound statements.
[00:52:45] Because like what's the intuition that's being triggered?
[00:52:49] Yeah.
[00:52:50] Which leads me to honestly believe that what the CRT is mostly measuring, I mean, it's probably measuring at least three things, you know.
[00:52:59] But I think one of the things that it's importantly measuring is just a cognitive ability, like just some sort of smarts.
[00:53:08] And the reason that you might not get the CRT correlating super highly with like IQ, like G, like the general intelligence metric.
[00:53:17] Yeah.
[00:53:18] The reason isn't that it's actually this cool new different measure, but rather that it's just a lower-order cognitive ability.
[00:53:27] It's a subset. So, you know, like general intelligence is calculated using a whole bunch of super low level tests.
[00:53:35] So you have like digit span memory tests and verbal reasoning and all these things.
[00:53:39] And those are all like brought together to make this one general intelligence.
[00:53:43] The more specific tasks are often better at predicting things out there in the world that are super related to it than the general thing.
[00:53:52] Right.
[00:53:53] Than like the combination of all.
[00:53:55] And so I think what Blacksmith is arguing is, look, this might just be like a more specific cognitive ability.
[00:54:00] But nonetheless, it's just another kind of intelligence test.
[00:54:03] It's not, it's no more, no less.
[00:54:04] And it has nothing to do with how intuitive versus how reflective you are.
[00:54:08] Or like overriding or willpower to like, you know.
[00:54:10] Yeah.
[00:54:10] I think that's what she means by process, like disentangling the processes from the ability.
[00:54:17] The ability doesn't say how this thing gets done.
[00:54:19] It just says that it gets done.
[00:54:21] Yeah.
[00:54:21] But if you start thinking of it in terms of, oh, it's the property of being able to override your intuition with reflection, then...
[00:54:30] Yeah.
[00:54:30] And it might not be that at all.
[00:54:32] It just might be like how smart you are in this particular way.
[00:54:35] So here's one piece of evidence that people have used to try to get a process.
[00:54:40] So the idea is, how long did you take to answer the bat and ball question?
[00:54:46] And if the view that you're overriding your intuition is right, then you should be faster when you go with the intuitive incorrect response than when you give the deliberate response.
[00:54:57] If it's just intelligence, though, or some other cognitive ability, like numeracy, which is the big candidate for what's going on, then you could take just as long and still get the wrong answer.
[00:55:09] Still get the intuitive one.
[00:55:10] So like it's not overriding.
[00:55:12] So that's their attempt at measuring process.
[00:55:13] And guess what?
[00:55:14] When I tried to look up what the real answer is, there are results showing that sometimes the people who fall for the lures are faster, and sometimes it doesn't show that at all.
[00:55:23] It's like unclear.
[00:55:23] Yeah.
[00:55:24] It's totally unknown which one is right.
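To make the logic of that process test concrete, here is a simulated sketch (entirely made-up data; as just noted, the real results are mixed). Under the override story, lure responders should be fast and correct responders slow; under a pure-ability story, the gap could vanish.

```python
import random
from statistics import mean

random.seed(1)

# Simulate one world consistent with the "override" account: lure-takers
# answer quickly (no deliberation), correct responders take longer because
# they caught the lure and then worked the problem. Times in seconds.
lure_rts    = [random.gauss(4.0, 1.0) for _ in range(200)]
correct_rts = [random.gauss(7.0, 1.5) for _ in range(200)]

# The override account predicts mean RT(lure) < mean RT(correct); a
# pure-ability account (e.g., low numeracy) makes no such prediction,
# since you can deliberate at length and still land on the lure.
print(f"lure: {mean(lure_rts):.1f}s, correct: {mean(correct_rts):.1f}s")
```

The empirical test is then whether real response-time data look like this simulated world or not, and, per the discussion above, different studies have gone different ways.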
[00:55:26] Yeah.
[00:55:27] So it is very conceptually confused.
[00:55:29] It kind of assumes a model of the mind.
[00:55:34] This is takes us maybe a little far afield.
[00:55:36] But like in the original paper, right, it says it's motivated by the dual process theories that you were talking about.
[00:55:45] So, you know, you would think then, you know, that theory entails X.
[00:55:51] And I'm now trying to test as a way of trying to provide support or falsify that theory.
[00:55:59] But that's not what it does because it doesn't really make a prediction.
[00:56:03] It's just the test, and then interpreting that score.
[00:56:06] So it's also a little unclear how it relates to the overarching theory.
[00:56:11] And I guess precisely because it's like of all the other problems that we've been talking about where we don't even know that intuitions or reflection is at play here.
[00:56:21] Yeah.
[00:56:22] Right.
[00:56:22] What we need is an MRI study.
[00:56:24] Then we could then we could do it because if the front part of the brain is lit up, then they're using their intuition, their amygdala.
[00:56:32] Yeah.
[00:56:33] The last thing I can think of that's relevant is whether it's really the lures part of the CRT that's doing the work, because that intuitive lure seems so inherently part of what this test says it's trying to measure.
[00:56:45] Or if we give people word problems that are like that, but they don't have an intuitive lure.
[00:56:50] Are those just as predictive?
[00:56:52] And it seems as if there's some evidence that they are just as predictive.
[00:56:55] If you just give people these, like, I forget what it's called, the belief bias, something or other.
[00:57:00] But it's giving people syllogisms that are valid, but where the conclusion is empirically false.
[00:57:06] And people do poorly at those.
[00:57:08] Valid but unsound.
[00:57:09] Yeah.
[00:57:09] Yeah, exactly.
[00:57:11] Or you give them the affirming-the-consequent problems, and they do badly at those.
[00:57:17] And those are highly correlated with the CRT as well.
[00:57:20] Again, getting things right in math and logic, you know?
[00:57:24] Yeah.
[00:57:24] Sorry.
[00:57:24] Those are the intuitive ones.
[00:57:25] Those do have intuitive answers.
[00:57:27] But then when you give people just straight-up word problems, those are really highly correlated with both of them.
[00:57:31] So when there is no lure.
[00:57:33] When there's no lure.
[00:57:33] Yeah, that's right.
[00:57:34] So you might just be measuring test-taking ability.
[00:57:37] Or some kind of ability that's just part of general intelligence.
[00:57:41] Right.
[00:57:41] It would be interesting if you, and I'm sure this has been tried, but probably not successfully,
[00:57:47] correlated it with how likely you are to fall for one of those spam Nigerian prince emails or something like that.
[00:57:57] You know?
[00:57:57] So those are the kinds of things that would be interesting.
[00:58:00] But it would have to be how they really are.
[00:58:02] How likely –
[00:58:03] So they really scam people.
[00:58:04] Yeah.
[00:58:05] Yeah, you have to really scam them and keep their money.
[00:58:10] I want to do an episode on construct validity.
[00:58:15] Because it really does seem like, as one of the papers says, this hasn't had a real construct validity demonstration.
[00:58:25] Right.
[00:58:25] But then they do it.
[00:58:26] Right.
[00:58:26] Kind of.
[00:58:27] Yeah.
[00:58:30] You should just get Nicky Blacksmith on the podcast.
[00:58:33] But just generally construct validity.
[00:58:35] Like we could just take a construct, just go through how it's established and what it means.
[00:58:41] I know you're saying that so unironically.
[00:58:43] But it is literally like me pitching to you, let's do first order, like basic logic for an episode or something like that.
[00:58:50] No, but it's practically the most under-analyzed thing that you do in psychology.
[00:58:56] I guess you want to keep it that way.
[00:58:58] Except for those whole branches of psychology that are dedicated.
[00:59:00] Yeah, they're dedicated to it.
[00:59:01] But like, it's not like you wait for them to make sure you can, you know, use the construct for your nefarious purposes.
[00:59:10] So I don't know.
[00:59:11] I find it very interesting.
[00:59:12] And there's a big literature.
[00:59:13] Meehl also has a paper on that.
[00:59:15] He has a paper on anything.
[00:59:16] Who?
[00:59:17] Paul Meehl.
[00:59:18] Oh, Paul Meehl.
[00:59:18] My hero.
[00:59:19] Of course.
[00:59:19] Paul Meehl.
[00:59:20] We could pick a Paul Meehl paper and do it on that.
[00:59:22] But what you're making it sound like is like you just want to take a class on like how to do the specific steps for construct validity.
[00:59:30] I mean, I taught you, you know, Hume's problem of induction, Plato's Cave.
[00:59:35] You could throw me a construct.
[00:59:36] Which I'm very grateful for.
[00:59:38] They're amazing.
[00:59:39] They're amazing.
[00:59:39] That's my construct validity.
[00:59:44] All right.
[00:59:45] Well, we'll talk about this.
[00:59:46] Any of you measurement people who actually listen to our podcast.
[00:59:50] Yeah.
[00:59:50] Tell me if you can make it Very Bad Wizards material.
[00:59:53] Do we have anything to say about this?
[00:59:55] Do you use this at all?
[00:59:56] I don't think I have.
[00:59:58] Not for principled reasons.
[01:00:00] I just.
[01:00:00] Just haven't.
[01:00:01] I just haven't.
[01:00:02] Like, I also just was never sure what it was measuring, to be honest.
[01:00:06] Like, I think in my heart of hearts, I always thought this was just another handy-dandy three-item intelligence test.
[01:00:12] But yeah, I have not.
[01:00:14] I just don't like dual process theories.
[01:00:16] And my boy Gord Pennycook, my colleague, whom I love, uses this, like he publishes a paper every week on this.
[01:00:23] And he's like doing all this misinformation stuff.
[01:00:25] Yeah.
[01:00:26] That's over.
[01:00:27] All this stuff about misinformation.
[01:00:30] It's interesting that you say that because I was talking to him yesterday.
[01:00:33] We have joint lab meetings together.
[01:00:35] And he was like, you know, it really was the hope.
[01:00:38] Like when I started doing this stuff that like we'd be making a difference.
[01:00:41] And he's like, probably we haven't made like a bit of difference.
[01:00:44] Wow.
[01:00:45] That shows a startling self-awareness from a psychologist.
[01:00:49] We're more self-aware than philosophers.
[01:00:52] That's totally true.
[01:00:54] Touche.
[01:00:55] But yeah, I do think that whole, like, "oh, we have to cure the American public
[01:01:01] of their biases and their conspiracies" thing, which is kind of related to this,
[01:01:06] like that's not working at all.
[01:01:09] I think it's safe to say.
[01:01:10] No, because they're dumber than we thought.
[01:01:12] Yeah, that's why they hate all of you.
[01:01:18] And I'm now part of the hated whitino, the Latino.
[01:01:23] Oh, yeah.
[01:01:23] You voted for Trump.
[01:01:25] Demographic now.
[01:01:25] Yeah.
[01:01:26] So now you're out.
[01:01:28] Yeah.
[01:01:29] I'm one of the enemy.
[01:01:30] I was actually, like, personally offended at the shit, the anti-Latino shit
[01:01:35] I saw you whites posting on Twitter after the election.
[01:01:38] It's true.
[01:01:38] There are a lot of Democrats that are just gleeful about like mass deportation.
[01:01:42] You know, I know there are people who are just like, you voted for Trump.
[01:01:46] Your family's going to get deported.
[01:01:47] I'm like, fuck you.
[01:01:48] I didn't vote for Trump.
[01:01:49] It's every four years.
[01:01:50] The shock that Democrats express at the Latino vote going Republican.
[01:01:55] It's just like.
[01:01:56] But we had the Bey Hive and we had Queen Latifah.
[01:02:02] How's that possible?
[01:02:04] Queen Latifah.
[01:02:05] That's the example that even popped into your head.
[01:02:08] It makes me laugh so much.
[01:02:09] Well, that wasn't my example.
[01:02:10] That Queen Latifah popped into her head.
[01:02:13] It's hilarious.
[01:02:15] You know, but she didn't know about your jinx.
[01:02:18] Like your jinx was powerful enough to override the Queen Latifah endorsement.
[01:02:23] I texted you.
[01:02:23] I'm never voting again.
[01:02:25] Just reflect on your power also.
[01:02:30] All right.
[01:02:31] We've devolved past the CRT.
[01:02:33] All right.
[01:02:34] Join us next time on Very Bad Wizards.
[01:02:36] Just a very bad wizard.
