Episode 186: The One with Peter Singer
Very Bad Wizards · April 07, 2020
Episode 186 · 01:29:41 · 82.54 MB


[00:00:00] Very Bad Wizards is a podcast with a philosopher, my dad, and psychologist Dave Pizarro having an informal discussion about issues in science and ethics. Please note that the discussion contains bad words that I'm not allowed to say and, knowing my dad, some very inappropriate jokes.

[00:00:32] Welcome to Very Bad Wizards, I'm Tamler Sommers from the University of Houston. Dave, today in the second segment, the great Peter Singer is finally going to be on the podcast. Who should our next bucket-list guest be after this one?

[00:01:27] I mean, who should it be, or who do we have a shot at getting? Those are two very different questions. Well, both, yeah. I'm David Pizarro from Cornell University. I remember this one. Sam Esmail. So we're just going to bitch about, like, seasons two through four of Mr. Robot?

[00:01:43] Well, you have that other show of his that you swear you're nuts about, so I figured we could. Oh, Homecoming, yeah. So we could watch Homecoming, and he seems like, you know... I can't remember anybody else we've talked about more than having him on.

[00:01:59] I mean, like, I think anybody who is involved in some of the entertainment that we've consumed would be super fun, like we've never had. Charlie Brooker. Exactly. Ted Chiang. Damon Lindelof. Ted Chiang. Well, Damon Lindelof will be after we do our Leftovers bonus episode. What about David Lynch?

[00:02:24] I think we've got a shot. Yeah, we definitely got a shot. He goes on a lot of podcasts. I think I've heard him once on Russell Brand's podcast. They have a transcendental meditation connection. He's great. He's hilarious, but he pretty much doesn't leave his art studio.

[00:02:45] It would be funny if he just spoke gibberish for about an hour with us and then we could just interpret it. I would find it brilliant. He's just a random word generator. We just watched, oh speaking of Patreon bonus episodes, it's not promo time yet but I'm

[00:03:04] going to record tomorrow a bonus episode with Jesse Graham and Natalia Washington on Blue Velvet. Nice. I haven't even told you this, but Barry Lam and I are going to record again, this time on the new Picard Star Trek series that we both watched the first season of.

[00:03:26] Is this a polyamorous relationship right now? Well, I was feeling compersion I think. I felt something in the pit of my stomach. Something kind of growing. I mean at least I'm not having threesomes. That's true.

[00:03:48] Before we get to Peter Singer, I think we needed something very lighthearted. How dare you. Lighthearted? I actually don't know what... Well, we'll get to it. For whatever reason, I saw this on Twitter, and even though I think that it's a discussion

[00:04:07] that's been going on for quite some time and the paper that it's referring to came out in 2018, people were talking about it now. And the topic is... So this is a blog post at a blog called practicaltypography.com, which I assume, Tamler, you're an avid reader of. Yes.

[00:04:26] Called "Are Two Spaces Better Than One? A Response to New Research." And so this blog post is about a journal article called "Are Two Spaces Better Than One? The Effect of Spacing Following Periods and Commas During Reading."

[00:04:42] I sent this to you hoping for some reason that you might actually think that it might be fun to talk about it and you actually did. So I was very happy. I do look at things through a lens of... That's what I said.

[00:04:57] bashing, like, empirical research where it doesn't belong, and it seemed like this did. So I said yes. But to be clear, there's nothing I care less about than the two spaces versus one space debate after a period.

[00:05:14] I do two spaces after a period because that's how I learned it in typing class, when I took my secretary class in high school. And so I do it. I'm happy if people change it or if people do one space. I can't even pretend to care.

[00:05:34] I can't even play the game of caring about it. Yeah. I actually very much enjoy typography and discussions about fonts. I can't really care that much about this either. Like you, I learned to type with two spaces on typewriters, in my administrative assistant class, as we say nowadays.

[00:05:59] I think I automatically put two spaces in. Sometimes I have to go and take it out. And in fact, the way that we do it is you write the show notes and I post them.

[00:06:10] And because two spaces looks weird on the particular display that is used by Fireside, I always take out the two spaces. I'm glad to know that you're not secretly offended by that. But nonetheless, this is a... This is news to me. This is a huge debate amongst nerds.

[00:06:30] But that's not why we found it interesting. We found it interesting because it's a nice sort of takedown of what, at least, some of the press concluded, or at least what they said should be concluded, following the publication of this journal article.

[00:06:48] So what the journal article on the face of it seems to be claiming, and the way that it was talked about, was that this was empirical evidence that two spaces after a sentence is actually better.

[00:07:00] And as the author, I don't know if it's a he or she, mentions in their blog post, the APA, the American Psychological Association, their style guide says to use two spaces. And apparently there's been no kind of empirical research on whether that's a good thing or not.

[00:07:21] And so, the authors say, until now. Apparently the authors have just very clearly said, yeah, that was their motivation, to see if they could provide evidence for the APA. It's very motivated. Right. The Jon Haidt thing. Social intuitionist model. I'm actually dumbfounded.

[00:07:42] I just think two spaces is better. I don't know why. I can't tell you why, maybe because God said so. So yeah, it appears as if that was what they found. But if you dig deeper, which this blog does... They found what? Like, that it was better?

[00:08:02] That it was? They did a test for better fonts? Yeah, they said that putting two spaces after a period delivered a small but statistically detectable improvement in reading speed. And so it's not even about comprehension. It's just about reading speed.

[00:08:21] And one of the things I like about this blog post is that he's not questioning what they found. He's like, I grant that they found what they found.

[00:08:33] But all he did was dig a little bit more into what they found. And then, yes, without being heavy-handed about whether or not this applies to more general trends in our field, let's just stick to talking about this study.

[00:08:50] So this small but statistically detectable improvement in reading speed was about a 3% improvement, but only for people who already type with two spaces. So part of this study had people type, and they just

[00:09:07] categorized people into one-spacers or two-spacers. And so when they sliced the data up, they found this effect in reading speed, 3%, for people who already typed with two spaces. So a very small difference that is entirely limited to people who already do it.

[00:09:28] Let me note, as the Very Bad Wizards official Marxist, that again, once you find this transcendental sanction for the status quo, you find that people who use two spaces are now leading the charge to keep themselves in power by reading faster with two spaces.

[00:09:48] This might not be at the level of conscious awareness. A lot of these things aren't, but this is exactly what a Marxist would expect. System justifying. Yeah, exactly. And they're just trying to make one-spacers feel bad about it. So oppress them, exploit them. That's right.

[00:10:14] It's time for them to rise up. So okay, I guess there's more nuance even to this finding. And there are more caveats in this finding than one would expect. Reading this blog post is like reading the twists and turns of a plot, because

[00:10:33] you think that might be the end of the story. Well, okay, they found this 3%. But when you actually look at how they did the study, it starts to shed some light on whether or not this finding is even interesting if you take it at face value.

[00:10:53] Part of the problem is that when they created their experimental stimuli, they used what's called a monospace font, Courier New, which... From, like, 20 years ago? Yeah, programmers tend to use it. Basically, there is the same amount of space between letters no matter what the letters are.

[00:11:15] And most people don't use fonts like this anymore because it just turns out to be better and easier to read when you vary how much space is after a letter depending on how busy that letter is.

[00:11:31] So this blog post tried to recreate what the display would have looked like, by the way, on a CRT display, like an old-style tube display, from 2002. Because the experiment was in, like, a liberal arts college psychology department, right? That's sad, but it's true.

[00:11:53] It's 60 Skidmore College students who are native speakers. Oh my gosh. How are you not just embarrassed? These are not my people. Can you read this without blushing? No, but it's your field. Well, you know, no, sometimes you... whatever, whatever. Fuck you.

[00:12:15] So by the way, this guy pointed out something that I didn't know was true, but apparently typographers just always recommend only one space after a sentence. I did not know that. It's good to know. But they don't claim to have any empirical support, right?

[00:12:31] Unlike the APA, they don't claim to have empirical support for this recommendation. So I encourage people, we'll put a link, to actually go look at the font. They have a weird amount of space in between the lines. It's a heck of a weird-looking... It's very weird.

[00:12:48] The whole thing is very weird. Yeah, yeah. Using a shitty CRT display from 2002, when everybody's phone has better resolution than that now. And the findings weren't... They didn't get any differences in reading comprehension. Comprehension accuracy was high across all the conditions.

[00:13:11] The authors acknowledge all these limitations, that the paragraphs were presented in monospace fonts and that's not how they're presented nowadays, and that they only found it in one of these conditions, which is very sort of p-hacky, to split up your sample to try

[00:13:28] to search for something, and even then they only found a 3%. I think the big take-home from this guy, Matthew Butterick is the name of the author of the blog post, is he points out very nicely all of the different conditions under which we read things,

[00:13:45] whether it's display size, fonts, the conditions at which we're reading, and how all of that's going to matter. And so he's not surprised at all that they're not going to find any sort of universal finding

[00:13:57] or anything, so that it's misguided to think that you could design an empirical test of whether or not two spaces leads to better comprehension than one space. Because the question is very poorly specified, it's underspecified, and the complexity

[00:14:15] of the environments in which people read, you know... you would have to study every single environment, every kind of font, every kind of display if you were to be able to make a prediction about how this will work in real life.

[00:14:30] So I mean, I would expect, when these confounding factors are pointed out, for you to defend the authors of the study by saying, well, this is just how science works. It's just slow and steady, one step at a time.

[00:14:45] Just, you know, like, people will build on this research, and, you know, maybe after, like, 10 people devote their careers to this question, we'll have made a little progress. Well, let's pretend that this is an interesting question.

[00:15:02] Well, actually, I want to talk about that. So... All right, but let me defend myself from what you just said first, which is, yeah, of course I would say that, because that is true. I mean, but I don't want to confuse that with the possibility

[00:15:18] that some studies actually don't find anything. So assuming that a study finds something, that you have enough power, right, a good sample size, and you don't have confounds, assuming all that stuff, I would say then science is incremental. Here, it's unclear whether they actually found anything.

[00:15:35] I want to just talk about the question here. And there's a great line in the blog post. So he tells this story. He says, I once gave a talk about typography to a group of UCLA law

[00:15:49] professors. Towards the end, one of them, known as the quote-unquote empirical guy on the faculty, said, that's all very interesting, Matthew. But why don't typographers resolve these matters with empirical research? Surely it can't be subjective which font is best.

[00:16:08] And then the author responds: typography, like language and every other form of human expression, doesn't occupy a realm of strict objective truth. But I think that if I had to pick a target, like an enemy of my, you know,

[00:16:25] these last couple of years of me being on my hobby horse on the podcast, it is that rhetorical question: surely it can't be subjective which font is best. It absolutely could be subjective. In fact, it just is subjective. And just the mindset that would

[00:16:49] make that kind of statement, or question, or whatever it is, OK, I think is a real problem. It is what leads to a lot of these studies trying to make headway into a question that ultimately is not amenable to empirical research.

[00:17:12] Now, yes, you can say that it is not subjective which style or which font will make people read faster or make people read with more comprehension. Maybe that's what they're trying to find out. That's exactly what they're asking. But that doesn't make it best, right?

[00:17:34] Like, no, no, no, no. But it's not as if they're saying which font is the prettiest, where it's so obvious that it's subjective. Because, you know, it could be that no font is objectively the best, or that there are

[00:17:49] fonts that are objectively superior for certain kinds of people. But I don't know how it would be subjective, right? There's an answer to whether or not there is a best font. That's where I don't get what you're saying. See, I disagree.

[00:18:03] Like, why? What are the criteria for best font? That's not an objective question. So for speed or comprehension, it is. But that doesn't make it the best font. No, no, no, he's talking specifically about speed, and like

[00:18:20] he was giving a talk about typography, and from the context, it seems what he was giving was a talk about whether or not some of these are more legible, right? Or... right. So he goes on to say, in his hypothetical retort to the professor:

[00:18:35] By the way, have you researched what kind of law review article is most likely to get you tenure? How many words in the first sentence? Average number of vowels per paragraph? But that's a reductio of his question.

[00:18:47] Yeah. But it's not that he's saying that, well, this can never be studied, it's a great mystery. I think there's an answer. Like, it could very well be that there is no font that improves legibility across the board, but that doesn't make it

[00:19:04] subjective. That just means objectively there's no difference. Well, legibility is a different thing. Like, again, how you decide what the best font is, and what the criteria for that is, is just not an empirical question.

[00:19:16] Like, I don't necessarily want a font that I can read the fastest in. Sure. This whole post is about the speed and legibility and comprehension of reading. So if what you're saying is that what you define as best is subjective, then sure, absolutely.

[00:19:35] But I take it that what he was saying was that there's no clear answer to which one would improve whatever goal it is you have, whether it's legibility or whatever. And I think that's right, but not because it's subjective, rather because it's really complicated.

[00:19:49] And I think this is the... But I don't think that's what the author is saying. I think that he is saying two things, and he's not quite sure which. So when the professor asked, surely it can't be subjective which font is best.

[00:20:04] He doesn't say, no, yes, it can be subjective. He points to the complexity of the problem. No, he says it doesn't occupy... He says very explicitly it doesn't occupy the realm of objective truth. You're right. Typography, like language and every other form, doesn't occupy a realm of strict

[00:20:21] objective truth. That doesn't mean typographers are hostile to the idea of research or that legibility can't be tested. On the contrary, many typefaces have emerged from forms of empirical research. He goes on to describe how ink spreads on newsprint and how fonts were developed.

[00:20:38] Right. So as long as you have a clearly defined metric, then there is an empirical answer. That empirical answer might be that it is completely dependent on the individual that's reading it. It could be that. But that doesn't make it subjective.

[00:20:55] No, but what makes it subjective is that there are all sorts of different kinds of considerations to take into account, to weigh, when you're trying to, say, choose a typeface, and how you weigh each of those considerations is not something that you can test for.

[00:21:17] I agree. I totally... Yeah, I totally agree. So suppose that you had, OK, there's legibility, there's aesthetics, there's comprehension rates, there's whatever. You could make a list of these and you could run a bunch

[00:21:30] of empirical tests on all of them and you still wouldn't know which one is the best. All you would know is which ones do which things under what conditions. So what he says later is could we discover the best font for everything? He says it looks hopeless.

[00:21:46] Research by its nature tests narrow questions, as you just said. And, as he said in "what is good typography," typography can't be reduced to a math problem with one right answer. I mean, I think that's kind of obviously true.

[00:22:01] But then I also think it's the mindset that makes someone have to express that obvious point that I find to be a problem: that we're not even comfortable admitting a level of subjectivity to something

[00:22:21] like typography, that we still just want some sort of study to make the decision for us, to just tell us which one to do, because we can't accept that ultimately this is going to come down to things that can't be tested.

[00:22:42] But I think that there is some confusion here about what is being questioned. When he says typography can't be reduced to a math problem with one right answer, I think what he's pointing to is just that there are all these considerations

[00:22:56] that you would take into account when deciding which typeface to use. That's very different than saying you might as well not get started, because there's a mindset that there's an answer to these. I think what he's saying is there is not one right answer.

[00:23:11] There's many answers depending on the question that you ask. And there I don't see why you would not start doing the research. Where I agree with you is that that mindset seems to guide the way that we ask

[00:23:25] our empirical questions and the conclusions that we draw from them. I think we're in agreement there. Like, it seems as if we're afraid to say: what we tested was the comprehension rates of this kind of font, on this kind of computer screen,

[00:23:41] on this kind of person, because that's not interesting to anybody. So let me ask you this then. And I bet there is a good answer to this question, because I don't mean this as, like, a gotcha question. This isn't gotcha journalism. Mostly because we're not journalists.

[00:23:57] What if you're giving a talk on Borges, and then the law professor says, surely it's an empirical question which Borges story is best? And, you know, then set aside that it's silly to think that you can run

[00:24:17] a simple test that will tell us which is the best Borges story. But also, you could make a somewhat analogous counter, which is: well, we can test which is the one people rate the highest,

[00:24:33] which is the one people are most likely to remember a year after they've read it, which are the ones that... and all of that. And I still maintain that that wouldn't tell you which Borges story is the best one.

[00:24:50] I don't think I disagree with anything that you just said. I think we're assuming that the law professor who asked, surely one of these fonts is the best, was specifically making a claim about the best in terms of legibility or comprehension.

[00:25:04] If he was asking the question that's akin to which Borges story is the best, then that's silly. Like, you could test what font people like the best by asking exactly what you said. But there's a lot of interesting things that could be found,

[00:25:19] like, you know... I don't know why I enjoy this so much, but have you seen people who analyze rap lyrics for the size of the vocabulary, the amount of words that they use across their corpus?

[00:25:34] So, like, there is a metric of who uses the most words across all their songs. Like, the most words per song? Or, no, in their whole corpus, like, who has the widest vocabulary. Right. OK. Yeah.

[00:25:48] And, you know, I think it's something like Wu-Tang is up there, Wu-Tang Clan, which could be because there's nine different guys. But that's kind of an interesting question. And people also ask the question, like, what is the rhyme scheme?

[00:26:04] Like, how many rhymes within a verse? So somebody like Eminem is off the charts. I think that captures something that's interesting. When we're making our subjective evaluations, we might say, I like Eminem, but I'm not sure why.

[00:26:16] And the answer might be, well, part of the reason why might be because of these complex rhyme schemes or whatever. And in fact, if you look at his stats, he has, you know, twenty-two percent higher complexity than Jay-Z or something like that. Right. And he's white. So...

[00:26:38] I think we agree more than we disagree on this. It's just less fun when we agree. Like I said, I read this with a lens. Well, that's why I sent it to you because the lens was hard to ignore.

[00:26:53] Like, I do maintain that if you think, say, that Comic Sans is an appropriate font to use for your PowerPoint, you're objectively wrong. I'm objectively wrong. Yeah. But it turns out that fonts like Comic Sans are easier to read.

[00:27:13] At least, this is my understanding of the research: for people who have certain forms of dyslexia, there's something about Comic Sans that makes it easier to read. And that is an interesting finding, right?

[00:27:23] Coming up next, we're going to preempt our Peter Singer segment and just do a deep dive on the documentary Helvetica. Oh, I love that documentary. It is a good documentary. It is really good. But we're actually not going to do that. No, no, that'll be a future Patreon bonus.

[00:27:47] All right. We'll be right back. This episode of Very Bad Wizards is brought to you by our very longtime and favorite sponsor, GiveWell. I think there's no better episode, Tamler, to have GiveWell on than this Peter Singer episode. Definitely. Yeah.

[00:28:08] Coronavirus has gripped the world and is changing our lives in ways few of us anticipated, but with time, intelligence, and cooperation, we'll overcome it together and emerge with a greater sense of shared purpose. But we also know a harsh reality.

[00:28:22] Coronavirus will leave a world with more people in need and fewer people in a position to give. This will make supporting effective charities more important than ever. When you're in a position to give and want to have maximum impact, visit GiveWell.

[00:28:36] GiveWell conducts in-depth investigations to find charities that, dollar for dollar, are saving or improving lives the most. Those donations will be used to distribute things like malaria treatments, insecticide-treated bed nets, or vitamin A supplements: programs that can save a life for every few thousand dollars donated.

[00:28:54] GiveWell uses academic research; as I've said before, they're our favorite spreadsheet nerds. They also use interviews with charity representatives and site visits to estimate which charities can give donors the biggest bang for their buck. They keep their recommendations up to date to make sure that their

[00:29:09] recommended charities can still use additional funds effectively. This means that donations at any point in the year, including now, will be put to good use. Last holiday season alone, podcast listeners like you donated over seven hundred thousand dollars, saving hundreds of lives and treating thousands of

[00:29:27] children for intestinal parasites. Yeah, it's a really great point, too, that as fewer people are in a position to give, it is that much more important to make sure the money that you give will effectively support people in need. I absolutely agree.

[00:29:47] When you're ready to find out what your donation can do, go to GiveWell.org. There you'll find all of GiveWell's research for free, as well as a short list of the most effective charities they've found. You can donate directly through their website, and they charge no fees and

[00:30:02] take no cut. So visit GiveWell.org, and thank you for supporting them. And thanks to GiveWell for supporting us.

[00:31:23] Welcome back to Very Bad Wizards. This is the time in the episode where we love to thank all of our listeners for getting in touch with us, for contacting us, commenting on our episodes, asking questions, raising objections.

[00:31:41] We actually just did for the first time ever a live AMA. Yeah, we live streamed. We're like millennials. We are. Yeah. Next time we'll go on like what is it? Twitch or something? Not Twitch. Yeah, exactly. It is Twitch. It is Twitch. Yeah.

[00:32:00] And maybe we can get, like, a Hello Fresh sponsorship or something. So if you want to email us, you can email us at verybadwizards at gmail.com. You can tweet to us, at tamler or at peez, or the account at verybadwizards. You can like us on Facebook.

[00:32:21] That is back up and running again, thanks to David Lera. You can follow us on Instagram. You can join the discussion on our subreddit, reddit.com/r/VeryBadWizards. There's a big community of people there and some good discussions,

[00:32:47] as well as some bad pictures, often of me and my haircut. Yeah. So thanks to everybody for getting in touch with us. We really appreciate it, and we really enjoy all that you have to say.

[00:33:03] Yeah. And if you like that live stream, I'm super up for doing another one. I think it was fun. I don't know if Tamler is, but let us know. No, no, I liked it.

[00:33:11] I need to get the hang of figuring out how to like look at the questions while at the same time paying attention to like what you're saying. I think you did better than me, actually. I was just staring at my screen.

[00:33:24] So yeah, but if you'd like to support us in more tangible ways, which we always appreciate very, very much, you can go to VeryBadWizards.com. And there just click on our support page and you'll find the various ways to

[00:33:37] support us. One way to do it, which we love, is to go to our Patreon page. We very much appreciate our Patreon supporters. You can go to patreon.com/verybadwizards directly. And if you support us for any amount, you'll get beats from me.

[00:33:55] And if you support us at higher levels, you'll get extra bonus content. Right? What's coming up for bonus content? Yeah, we have a bunch of stuff, right? So we're going to do a leftovers one. We're going to record that very soon.

[00:34:09] I just recorded, with Natalia Washington and Jesse Graham, a long deep dive on Blue Velvet. Awesome. That'll be up soon after this episode is up. And you have some nerd thing coming up, along with Barry Lam of Hi-Phi Nation.

[00:34:30] We're planning to record a bonus episode on Picard, the new Star Trek series with Patrick Stewart, who kicks ass. And as soon as Dark comes out, oh man, we're going to be all over that. I hope it doesn't jump the shark, man, but I'll still do it.

[00:34:46] I trust him. You can also support us through PayPal for a one-time or recurring donation. That's especially helpful if you guys want to support us but you're not in one of those countries where Patreon works. But any way that you support us, we very much appreciate it.

[00:35:01] Thank you very much from the bottom of our hearts. Yes, thank you. So we're going to have Peter Singer on in a second. Just to give a brief introduction: I was actually looking at the introduction

[00:35:15] to my interview with Peter Singer in A Very Bad Wizard, the book. And I'll put a free link to just this interview in the show notes and the links for this episode. The title of the chapter for that interview was

[00:35:33] "A Gadfly for the Greater Good." And I started it by defining gadfly: a fly that bites livestock, especially a horsefly, warble fly, or botfly. What's a botfly? Is that like a robot fly? Those are the ones that lay eggs under your skin. They're terrible.

[00:35:52] I thought it was like some sort of robot. Black Mirror. I am fly. But the second definition: a person who persistently annoys or provokes into action by criticism. And I think Singer is that second one, like Socrates. He is a true philosophical gadfly. He gets under our skin.

[00:36:16] He makes us question how we're living and whether it's justified, and suggests that maybe we're not living as we ought to be living. This started with his very famous 1971 article, "Famine, Affluence, and Morality," which is in almost every intro to ethics anthology and course.

[00:36:41] And that was one of the first, prominent times where he just offers simple, easy-to-understand arguments with conclusions which, if true, would radically change our understanding of what it means to live a moral life. And, not this week, but next week,

[00:37:02] I'm just about to do this article in my intro class, which is obviously online now. And that sucks because it is a really fun and interesting article to teach. Dave, you and I talked about the stages of Singer way back in episode 28. Do you remember that?

[00:37:21] That was episode 20. Yeah. I know. And we were a little time-crunched with Peter Singer. We really wanted to bring this up, but we didn't have time. But I wish that we had been able to tell him about the stages of Singer,

[00:37:33] because as Tamler and I spoke about back then, anytime you try to teach that article from Singer, you get such resistance. Yes, from students, in a super predictable way. Yes, they are provoked in predictable ways. Like, they'll start offering objections,

[00:37:52] like, about the charities, that they're corrupt. And, you know, they need to go to GiveWell. Exactly. They just go to GiveWell.org, that'll take care of that. And then there's kind of a righteous indignation, like, they earned their money, it belongs to them.

[00:38:12] Teach a man to fish. Teach a man to fish. That's a big one. The ad hominem attacks on Singer's other views besides the one that we're talking about, you know. And sometimes after these stages comes a real change.

[00:38:31] I mean, this is the amazing thing about Peter Singer as a philosopher, is that he has made a big practical impact on the world. Animal Liberation. He almost single-handedly created the modern animal welfare movement. And for over 40 years, he's inspired people to increase their charitable giving,

[00:38:51] often significantly. And he's inspired, among other things, the effective altruism movement. So let's get right to the interview. Peter, thank you for joining us. We really appreciate it. You're very welcome.

[00:39:06] I'm happy to be speaking to you and even more happy to be speaking to all of your listeners. You've been, I think, probably the most requested guest. So we're very excited to have you. So we do have a list of questions that I'm going to get through.

[00:39:19] But the first thing I really wanted to ask you, out of curiosity, is what first attracted you to utilitarian philosophy? Was it something that always resonated with you? Was there a particular work that convinced you? Perhaps it always resonated with me,

[00:39:37] but I only found that out when I was an undergraduate at the University of Melbourne. And I took an ethics course with H.J. McCloskey, whose work you can find. He wrote a book called Meta-Ethics and Normative Ethics.

[00:39:54] He was very much in the mold of W.D. Ross. So he was an intuitionist opposed to utilitarianism. I think one of the famous objections might have come from him or certainly he used it, the one about the sheriff in the southern city who is

[00:40:13] faced with a lynch mob. A white woman has been raped, and she says it was an African American. So the lynch mob gets half a dozen African Americans and says, we're going to lynch you all.

[00:40:26] And the only thing the sheriff can do to stop this is to say, no, wait, I've got evidence that that's the one who did it, and just point to one of them. And then hopefully they'll let the others go. So McCloskey thought that this was clearly wrong,

[00:40:41] although a utilitarian would do it, because it's an injustice, and in particular somebody who is in the role of a law officer should not commit an injustice. And that just struck me as wrong. So you could say the fact that his view struck me as wrong

[00:40:58] showed that I was already some sort of utilitarian, at least. And I remember writing a paper in his course in which I argued against some of his objections to utilitarianism. And he was fair-minded enough to like the paper and

[00:41:14] encourage me to continue to work in philosophy and so on. So that's how it got going, I guess. Knocking down what were supposed to be solid objections to utilitarianism eventually convinced me that there weren't really any knock-down objections to utilitarianism.

[00:41:32] Is there one that troubled you early on in your philosophical career? Is there an objection that troubled you more than any other? So early on, I'm not sure. I guess the sort of Dostoevsky stuff about you have to torture this little child

[00:41:48] in front of you in order to produce heaven on earth, basically. You know, utopia forever after, no more wars, no more violence and so on. Of course, it's troubling to think about, could you really torture a small child? And I guess at some point,

[00:42:04] I realized the question isn't could I really torture a small child? That's just a question about my psychology and my abilities. But would it be right to torture the small child? Given that if you don't torture this small child,

[00:42:17] lots of other small children will get tortured over the years and centuries to come, and you can prevent it just by torturing this one. I think it would be right to torture this one. But yeah, so in a way, yes,

[00:42:30] that example emotionally troubled me, and it ought to trouble anybody to think about it, because it's a completely fantastic example. And Dostoevsky doesn't bother explaining how torturing this small child will produce utopia forever afterwards. So it's not that relevant that we can't do it.

[00:42:49] Right. The example became so common that I almost think there must be a clear causal connection between torturing an infant and having utopia. I think David goes around. We have to keep all the infants away from him right now. Right, OK.

[00:43:05] Which is the one that has to be tortured? That's the problem to know which is the one. It's not any of the ten that he tried to do it with. We know that for sure. So I actually want to start by talking about your work in animal welfare.

[00:43:24] One of the things I said in the interview and I believe it sincerely is that your book, Animal Liberation has in my view contributed more good to the world than any work of philosophy in the last century. And maybe more than that because you almost singlehandedly created

[00:43:43] the modern animal welfare movement. Now, with the expansion of factory farming since 1975, when you wrote the book, more countries, as you've noted, are able to create factory farms and eat more meat. And so we're causing suffering on a literally unimaginable scale.

[00:44:07] So the first question I have about this: I take it that factory farms are now your focus, more than laboratory animals, which were a big part of Animal Liberation? Yes, that's correct. And I think it has been for quite a long time.

[00:44:27] I don't remember exactly when. Well, in a sense, even right at the start, when I started getting interested in this, before I wrote Animal Liberation, I became a vegetarian, and I wasn't initially opposed to animal experimentation. But that was because of my ignorance at that stage,

[00:44:49] that I didn't really realize just how many trivial and unimportant experiments were conducted because as a utilitarian, of course, I'm not absolutely opposed to experiments on animals, even harmful ones. If they do a lot more good than they cause harm

[00:45:09] and naively, I had the assumption that that may be true of a lot of animal experimentation. Conversations with Richard Ryder, who wrote a book called Victims of Science and who was in Oxford around the time I was, in the 1970s, convinced me that that wasn't the case.

[00:45:25] Yeah, I mean, what I'm trying to say is that my focus was on factory farming pretty much from the beginning. But certainly in Animal Liberation, I have a long chapter on animal experimentation as well as a long chapter on factory farming. And now there's definitely

[00:45:43] far more suffering caused by factory farming, and that's my principal focus. One of the things I was thinking about: if I were a truly committed utilitarian, and I'm not there yet, I might think that all resources

[00:45:59] should be donated to ending factory farming before any other cause because it would lead so directly and with such certainty to an immense reduction of sentient suffering. I mean, well, would you agree with that? Mm. So there's an assumption there that the more resources you put into ending

[00:46:22] factory farming, the more likely it's going to happen and that we know the best way to invest those resources. If we did know that, if that assumption were true, yeah, that's a reasonable position. I'm not. I'd have to think more before saying, yes,

[00:46:43] I definitely think that's right, but it certainly seems to be a prima facie defensible position. So many of our listeners have asked us to talk about vegetarianism. Tamler is not a vegetarian. I am, but I was raised vegetarian. But Tamler has said this a couple of times,

[00:47:01] which I find intriguing and I want to know what you think of it, that if you can have conditions in which animals don't suffer and you raise them solely for meat, like whatever, free range or whatever, assuming that that's possible.

[00:47:17] As a utilitarian, wouldn't it be a good thing, since those animals would not otherwise have lived? It could be. It depends on, well, let's say we accept the ethical assumptions that go into that, which is that it's good to bring more animals into existence if

[00:47:36] they'll have good lives. And that's obviously a controversial issue, which has its analogues for human population. Is it better if we have a larger population, if we assume that the quality of life remains positive for everyone? So there's those sorts of ethical questions.

[00:47:51] But but the simpler answer here is, well, what's the counterfactual? What would have happened if we hadn't raised those animals? Would, for example, there have been wild animals, more wild animals in those areas where we're raising the animals and would they have had good lives?

[00:48:09] Those are complicated questions in themselves because there's a debate that goes on about wild animal suffering and is the lives of wild animals in general positive or negative? Another factor that you have to take into account now is what animals are we

[00:48:27] talking about and how much will they contribute to climate change? Because if you're talking about ruminants, basically cattle and sheep, then that's a significant factor. If you're talking about chickens, it's much less of a factor. So yeah, those things need to be taken into account.

[00:48:45] But yeah, it's possible that one could defend eating the meat of animals that had good lives and were then humanely slaughtered at the end of that process. You have to assume that you can maintain that system, that it won't lead to a kind

[00:49:02] of corruption where, in order to produce the animals a bit more cheaply, people will start to cut corners and we'll end up back where we are now. But if you make those assumptions, it's a possible view. Is animal suffering something that effective altruists focus on?

[00:49:22] My impression is that they are not as focused on it as they are on improving the lives of human beings who are suffering in areas of the world where there is a lot of poverty and disease.

[00:49:38] So one of our sponsors, and somebody we've worked a lot with, is GiveWell. And when you look at their charities, I don't think I've ever seen an animal welfare charity. Am I correct about that impression?

[00:49:53] And if so, why is their focus not as much on animals? Well, you're correct about GiveWell, because GiveWell has decided to specialize in human issues, and in fact in sort of immediate human issues. So you don't see GiveWell have projects about reducing existential risk,

[00:50:14] or reducing the risk of human extinction. But for that you have to go back to the history of it. There's now an organization called Open Philanthropy, which essentially hived off from GiveWell, separated itself, and is run by Holden Karnofsky, who was, with Elie Hassenfeld, a co-founder of GiveWell.

[00:50:39] And Open Philanthropy, as the name suggests, is much more wide open in the causes that it considers. It doesn't do the same in depth research that Givewell does into particular charities, helping people in extreme poverty. But it does more broad research on a range of different causes.

[00:50:59] And that does include animal welfare issues, as well as a whole wide range of other questions. The animal welfare section is headed up by a guy called Lewis Bollard. And effective altruists more broadly, do you think the emphasis is distributed

[00:51:16] as it should be between reducing animal suffering and reducing human suffering? Well, it's not really just distributed between those two, because longtermism, as effective altruists call it, looking at the long-term future and trying to reduce risks that our species will become extinct, has become

[00:51:36] a significant part of effective altruism. So you could say there are these three things that effective altruists do: animal welfare now, human welfare now, and looking at the long-term future. Is the distribution as it should be? Perhaps not exactly, but certainly effective altruists

[00:51:57] do mostly have concern for animal suffering. And there are quite a few people who've come out of effective altruism who've been doing things relating to it. Another organization that has started is called Animal Charity Evaluators, which tries to do for animal welfare organizations what GiveWell does for

[00:52:15] human suffering. So effective altruists support that and look at that website. So it's difficult to say whether it's the right balance. It may not be, but it's certainly significant, and concern for animal suffering is significant, from my perspective, in the effective altruism movement.

[00:52:37] Tamler, maybe it's time here to ask about some of that futurism stuff, because you brought it up, Peter. And I think both Tamler and I share a kind of worry about the focus, the growing focus, on sort of

[00:52:54] existential threats in the long term, specifically the threat of AI. It seems like that might be distracting us from our current problems in a way that I'm just not sure what to make of it, given the amount of uncertainty that's involved and the amount of

[00:53:13] utility you could always claim that you're building up for future generations. Like, of course, if AI took over, it would be a terrible thing, and then, you know, millions upon billions of people might be oppressed. But it strikes me that the people who work on technology

[00:53:32] feel most threatened by this technology. And there seems to be a lot of interest in the kind of futuristic camp that has attracted a lot of the interest of effective altruists or utilitarians. Yeah, I have some concerns about that too, because of the uncertainty involved.

[00:53:52] Let me just say one little detail. I don't think it's necessarily true that AI taking over would be a bad thing. Let's assume we're talking about AI that is conscious, right? That these are not just non-conscious machines that somehow, you know, one scenario we program

[00:54:10] something wrongly and AI just turns out paper clips and consumes the whole world and all of us in it in order to make more paper clips. And we assume that that's kind of like a robotic, non-conscious AI. That would clearly be a terrible thing.

[00:54:24] But if AI does become conscious and maybe is capable of higher and better conscious states than we are, more enjoyable ones and we somehow get subsumed under that and it's very wise in terms of the best and most ethical way to create more good experiences.

[00:54:44] You know, maybe that's not a bad thing. But getting back to your real question, I agree with you that the uncertainty is such that it's very hard to know at this stage whether doing research into preventing AI taking us over

[00:54:58] is actually going to be useful because depending who you talk to, maybe this is still 50 years off and if it is 50 years off, then maybe AI is going to be so much more advanced in 30 or 40 years over what it is today

[00:55:16] that our attempts to do something to prevent this happening will just seem laughable to people 30 years on. And they'll say, well, that was a complete waste of time. Now we have a better idea of what the risks are of AI taking over and a much

[00:55:30] better idea of how to prevent that happening. So now it's worth putting some time into it, but it was silly of them to do it then. Yeah, I really like the optimism that machines might actually just be better to us than everybody assumes.

[00:55:42] And it made me wonder: suppose that consciousness exists and machines are capable of at least inferring what suffering is, even if they don't experience it directly like we do. I take it that one of the appeals of utilitarianism, and I think we can talk

[00:56:00] about the impartiality principle and everything that comes from that. But it seems as if machines might just naturally become utilitarians. Or would they become Kantian agents? So this depends, I guess, on your foundational theory about ethics. Do you think that ethics is based in reasoning?

[00:56:20] And if ethics is based in reasoning and you have these highly intelligent machines that reason better than we do, then would they in fact reach the right ethical theory? And if you think that's the case, and if like me,

[00:56:34] you think the right ethical theory is utilitarian, then you can conclude that highly intelligent machines would all be utilitarians, which from my perspective would be a good thing. Of course, Kantians might not like it, because they might violate

[00:56:48] the dignity of a small number of humans or use them as means to the ends of much larger numbers. But they would only do so when that was necessary and the only way of achieving a much better outcome. Do you have nightmares about Kantian AIs? No, I don't.

[00:57:07] That would be an interesting nightmare to have. I've never had that one. One of the things we talked about a lot in the interview that we did is your transition from being more of a Humean, when it comes to grounding utilitarianism, to

[00:57:26] more of a rationalist and perhaps an intuitionist. I guess you're drawn to Sidgwick and Parfit on these matters. What led to that transition for you? And when exactly did it happen? Because in your early work, you did give a more Humean defense

[00:57:45] of your normative account than you do now. Yeah, definitely. I guess a number of different things led to it. I was taught at Oxford by R.M. Hare, who you could say was in that Humean camp, because he didn't think that

[00:58:01] there were such things as objective reasons or objective moral judgments. There were only prescriptions that were constrained by the need to universalize them. That's how he got to his kind of preference utilitarianism out of that. But the problem with that was that for him,

[00:58:19] that was part of the logic of moral language. So if you just said, fine, I'm not going to use words like "ought" or any other moral terms, then Hare would admit that the constraint of universalizing disappears. So there's nothing irrational or inconsistent about acting,

[00:58:38] let's say entirely in your own interests and ignoring the universalizability requirement. So I struggled with that. I struggled with ways of trying to somehow make that a more substantive, rational argument rather than something that depends on the logic of the moral

[00:58:56] terms, and eventually I gave up on that. That was, I'm not sure exactly when, but less than 20 years ago, because certainly it was after I came to Princeton, which is now 20 years ago. So I sort of gave up on that, and around the same time,

[00:59:11] I started thinking about sort of a non-naturalistic form of objectivism, influenced by the directions in which people like Thomas Nagel and Derek Parfit were going, and eventually decided that the Humean position wasn't as solid as I thought it was. Parfit's example of the person with future Tuesday

[00:59:35] indifference helped to persuade me of that. I don't know if you want me to go into that example. Yeah, would you actually because you mentioned that in the interview, but we didn't really go into it and I'd like to hear because I confess

[00:59:49] that I don't find that particularly persuasive, but you mentioned that as something that was influential for you. So can you explain why? Before you start, can I just say that I was reading your Utilitarianism: A Very Short Introduction,

[01:00:04] and in the introduction to that you have, I think, one of the nicest passages: just you talking for three or four paragraphs about Derek Parfit. And I think it's just such a lovely way of talking about him. I never knew him.

[01:00:18] I could never talk about him, but that alone made me, I think, more disposed to becoming a utilitarian. That's very nice. Yeah, of course. Yeah, I mean, that book was published just after his death, so it was very fresh in our minds. So: future Tuesday indifference.

[01:00:40] We imagine a person who's just like us in terms of not wanting to suffer in terms of thinking, let's say that a mild headache is a bad thing, but it would be incomparably worse to be tortured for 10 hours and hating the idea

[01:00:56] of being tortured for 10 hours, just as we do, except that he has this one peculiar quirk, which is that he's indifferent to whatever happens to him on any future Tuesday. So if you ask him, for example, would you like to have a headache

[01:01:15] tomorrow or be tortured the day after tomorrow? He wouldn't automatically say, as we would, I'd rather have the mild headache tomorrow. He'd rather say, well, what day of the week is it? Oh, so it's Sunday now.

[01:01:30] So tomorrow is Monday and I wouldn't like to have a headache on Monday. Other things being equal. And the day after tomorrow is Tuesday and Tuesday, I'm completely indifferent to whatever happens to me. So yeah, you know, I'll accept the torture on Tuesday

[01:01:43] rather than have a mild headache on Monday. Now, it's only future Tuesdays he's indifferent to. So, you know, he has a lovely Monday without a mild headache. And then he starts getting tortured on Tuesday. And of course, he hates the torture on Tuesday.

[01:01:58] But, you know, he's accepted it. He can't get out of it now. And he survives the torture, and then the next week comes around and he's offered a similar choice. Even though he hated the torture much more than he would have hated the headache,

[01:02:10] He makes the same choice because it's now a future Tuesday again and he's indifferent to it. So why is this a kind of example to Hume? Well, because for Hume, preferences are non-rational. Reason only starts in telling us how to satisfy our preferences

[01:02:29] and to inform us about facts relevant to satisfying our preferences. So if he now does have this very peculiar preference, he's acting rationally in the decisions that he makes. And I just can't accept that he is acting rationally. Tamler, you can tell me whether you think he is,

[01:02:45] since you weren't convinced by this example. But if we say he isn't acting rationally, then we have to say, well, there's more to acting rationally than just acting so as to fulfill your preferences. So what it does is just open a window into being able to call something

[01:03:04] irrational that isn't grounded in somebody's emotions and motivations. That's the idea. And once you have that little opening, you can then build to stronger, maybe more controversial claims, like the claim that thinking you have greater obligations to your family than to strangers is irrational.

[01:03:26] But what the future Tuesday case does is just open the door to even think in those terms. That's correct. Yeah, whether you would reach that further conclusion that you mentioned is a further step, of course, but it opens the door to it.

[01:03:39] It challenges the basic Humean idea that reason is the slave of the passions, as you put it. I guess, to just answer your question, the reason I don't find that particularly compelling is that it's such an implausible example of a real-life person.

[01:04:00] And so I don't trust whatever intuitions I have about whether they're being rational or not, because I don't believe that this person could really exist, and my intuitions about whether somebody is being rational or irrational are sort of formed around actual real-life people.

[01:04:23] So that's why. I'm sure it's an objection that you're familiar with. But what about, like, happy slaves? Like, I assume that this discussion is related to whether or not, you know, you can have preferences that are just bad, that you reject.

[01:04:41] So it doesn't have to be as weird as the Parfit example. But you, Tamler, don't think that it's rational for somebody to say, like, oh, yes, please enslave me, I love this oppression? Well, I, you know, there is certainly a Humean way of answering that objection,

[01:05:00] which is that were they presented with an alternative view, they would then recognize that that form of life is better. And, you know, you could say this about any kind of false consciousness view that the reason that they have the desires that they currently have is

[01:05:25] that they just have no way of conceptualizing a better alternative. But if they did, then their motivations and their beliefs would change accordingly. So so it's the fully informed and rational preferences that you would have rather than the present ones?

[01:05:45] Yeah, which I take as the Bernard Williams internal and external reasons kind of view. Yeah, and Hare himself had this idea that, you know, what we needed to universalize were the considered preferences, or what we would have if we were fully informed.

[01:06:02] But it's difficult, because it does then introduce questions. So is it just factual information that you need? Or is it actually rational judgments that some things are better than others? So in the case of the person considering,

[01:06:17] so let's say they have a present preference to make themselves a slave. And then you say, well, if you knew all the facts, you would see that it was a better life being free than being a slave.

[01:06:29] And let's say we can then inform him about all the facts. But he still says, I still want to be a slave. Do we then accept that that's OK? Or do we say he's making some kind of mistake here? I guess I'm inclined to bite the bullet there.

[01:06:44] If that's really... And I'd be somewhat surprised if you weren't either. Right. I mean, and correct me if I'm wrong, but you might have a view along the lines of Bentham, about human rights being nonsense, or nonsense on stilts. Would you think that that person is irrational?

[01:07:03] So just to get the quote right, Bentham said that natural rights are nonsense, and natural and imprescriptible rights are nonsense on stilts. He didn't say that human rights were. He didn't actually use that term, but he was quite happy for people to have rights.

[01:07:16] He just thought that they're not natural, that they have to be given by legislatures or social conventions or something of that sort. But coming back to the main point, I think you're right to bite the bullet on this. I think that's the only thing that someone

[01:07:28] wanting to defend a Humean position can do. And I would have done the same when I was in my earlier Hare phase, you know, following his universal prescriptivism. But now I would say, and you're right also that I'm not

[01:07:45] going to say categorically that slavery is always wrong. And Hare wrote an article about what's wrong with slavery. And uniquely, he was able to write one from the point of view of someone who'd actually been a slave, because he was a slave of the Japanese, working

[01:07:59] on the Burma Railroad. I now would think that it's not the preferences that are crucial. It's the states of consciousness that the person will have. And if we can predict that his states of consciousness will be significantly worse if he is enslaved than if he's not enslaved,

[01:08:19] then he would be making a mistake and I wouldn't follow his preferences. Contrariwise, if that's not the case, and let's assume there are no other consequences, he's not setting a bad example that will lead other people to have miserable lives as slaves or anything

[01:08:33] like that, if that's not the case, then it's OK. Then, as a hedonistic utilitarian, I can't object to him deciding to become a slave. What led you to change from a preference utilitarian to a hedonistic utilitarian? Was it considerations like these?

[01:08:54] It was part of that process that I referred to, of recognizing that there can be objective grounds for saying that some things are intrinsically good and others not. And I was certainly influenced by Sidgwick in that. I wrote a book with the same co-author as Utilitarianism:

[01:09:15] A Very Short Introduction, Katarzyna de Lazari-Radek. We wrote a book that started out as a sort of study of Sidgwick. It was called The Point of View of the Universe, which is a famous phrase of

[01:09:24] Sidgwick's, and it looked at Sidgwick and to what extent his views were still defensible in terms of contemporary ethical arguments. And Sidgwick argues that the only things that are intrinsically good or intrinsically bad are states of consciousness, that if there were no

[01:09:40] consciousness, there would be no value in the universe. We came to accept that view, and also came to see some of the objections to preference utilitarianism, or I did; Katarzyna was never a preference utilitarian.

[01:09:56] But I was, before we started the project. We came to see some of those objections as more serious than we thought before: you know, sort of the pointless preferences, the Rawls case of someone whose preference is to count the number of blades

[01:10:11] of grass in various lawns around the place, the thought that there's no value in satisfying that person's preference unless they get pleasure from doing it. If they get pleasure from doing it, and they're unhappy if they're not counting

[01:10:23] blades of grass, OK, then we can see the value in doing that. But if a person just says, no, I'm no happier doing this, but I just have a preference for doing it, it's hard to see the value in satisfying that preference we thought.

[01:10:37] So in the move to hedonism, you can take care of some of the objections to preference utilitarianism, but you are then burdened with something like defining pleasure. And I wanted to hear you talk a little bit about what your view of pleasure

[01:10:56] actually is, and how much behavioral science really informs this. Can we measure it, right? And do you think there's an answer to whether pleasure is a unitary experience, in the sense that we could then collapse them all and do the calculations?

[01:11:18] Yeah, so these are good questions and I don't know that I have all of the answers to them. Katarzyna is currently working on a book on pleasure, although the book she's working on is in Polish, but she is going to produce an English version of it

[01:11:33] when that's done. And we've been discussing a lot of these questions. So we follow Sidgwick's account in terms of a definition, which roughly is that pleasure is a state of consciousness that we immediately apprehend as desirable. So it's something like: when we have it, we experience it and

[01:12:00] we apprehend that as something that is a desirable state of consciousness. And you could say, you know, other things being equal, we want it to continue. And if we're not in that state, we want to be in that state.

[01:12:12] So it is related to desire and you could say, well, isn't that then somehow a preference model, but it's desire about a state of consciousness for its own sake. So whereas the preference utilitarian might say, you know, I desire to

[01:12:25] count blades of grass irrespective of what state of consciousness that creates in me. So the hedonistic utilitarian, defining pleasure as Sidgwick does, will say, well, that's not of intrinsic value. Your judgment of desirability comes into it when you're focusing on states of consciousness, not just on anything.

[01:12:46] But then you asked a question about the science of it and neuroscience and behavioral science. I do think that's relevant and there is some good research going on about, you know, so is pleasure like a gloss that comes on other things or is it a

[01:12:58] separate sensation? I haven't gone far enough into that really to give you a good answer. But I think yeah, I think we're learning things about pleasure. And you also asked about measuring it. I don't think we've got a way of doing that as yet, but who knows?

[01:13:12] Maybe one day we will. Measuring just states of pleasure. Yeah, that's right, directly, right? Because we can measure how much people want things, but not in terms of saying, you know, do I feel pleasure more intensely than you do?

[01:13:26] Let's say, you know, let's say we both eat our favorite food, whatever that might be. And we both sort of lick our lips and say that was delicious. Are we at the same level or do some people get more pleasure out of eating food

[01:13:40] than others? I suppose some do. And other people get more pleasure out of doing philosophy than others and all the rest of it. So do we have a way of measuring that? No, I mean, at the moment we say everybody counts for one and nobody

[01:13:52] for more than one. But I suppose it's possible that that's not the case. It's probably likely that that's not the case really, that some people can get both more pleasure and more misery. And strictly speaking, a hedonistic utilitarian should pay more attention

[01:14:07] to their states than to those of people who are, you know, more just a smaller little up and down. What about the question of if we could measure states of happiness? How would it compare to reduction of suffering?

[01:14:25] And how do you weigh that, given where we are right now? How do you weigh those two things when you're deciding, say, what to donate to? Right. So I tend to think that suffering is more significant for two different reasons.

[01:14:46] One is that I think we could better understand how to reduce suffering than how to increase happiness. You know, there are some very obvious causes of suffering that we can prevent and we know how to prevent that. And it's less obvious what makes people happy.

[01:15:03] So that's part of it. But the other thing is I don't think the scale is the same. That is some, you know, at first glance, you might say, well, there's a neutral state in the middle, and then we're capable of happiness up to

[01:15:17] plus 100 and we're capable of suffering down to minus 100, you know, if the neutral state is zero. But when you think about it, I don't think that's true. If you say to most people, so suppose you could experience for an hour

[01:15:32] the greatest pleasures that you've ever experienced, but you then have to have an hour of the greatest suffering you've ever experienced. Would you make that choice? I've asked various classes about that. There are some hands that go up, but it's a clear minority of the class.

[01:15:48] Maybe it's 20 percent of the class who say that they would accept that bargain. Most people are pretty clear that they wouldn't, or the ones that would haven't really been tortured or suffered badly. I don't know. That sounded like a preference answer, though, right?

[01:16:06] If you're then resolving that question by going by people's preferences. But that's all we can do at the moment, because we don't have any way of saying, oh, look, let's look at their brain states. Ah, you see, this person is getting more

[01:16:21] pleasure because of their particular brain patterns. And even if we did have the brain states, it's not clear whether the brain states would correspond to the sensations to the feelings. So that's all we can do at the moment.

[01:16:34] But I'm not saying that the preference is ultimately what's decisive. I want to be able to measure this stuff just so I can answer the question of what a masochist is really doing. Is a masochist really wanting pain or are they just getting pleasure in their pain?

[01:16:50] I don't know how to deal with that. I feel like dealing with masochists in any ethical theory is a little difficult. Yeah, I assume that they're getting pleasure from their pain. But I don't know. As I understand it, it would be,

[01:17:06] you know, as I'm using the terms, it would be difficult to want pain for its own sake rather than for some pleasure that you get from it. We talked about this a little bit in the existential risk discussion,

[01:17:19] but I'm wondering now that we're talking about comparing happiness and suffering, what you think about present suffering and happiness versus future suffering and happiness. Do you weigh them equally controlling for uncertainty? Or do you think that present suffering and happiness is more important?

[01:17:42] No, I weigh them equally, discounting for uncertainty. And that actually brings me back to a point that I thought about making when you were speaking before, when we were talking about the person with future Tuesday indifference and you were saying how

[01:17:54] weird this is and then you said that can open a door to cases like, well, should I give more weight to my interests or those of my family than those of strangers? But there's an intermediate case which I think is much more familiar

[01:18:10] and less weird, and that's people who do discount the future, and not just for uncertainty. On the basis of the future Tuesday case, I now want to say that that's irrational. And I wonder what you think about that.

[01:18:22] So this is the case of the person who can feel a toothache coming on. And from past experience, they know that if they don't go to the dentist, they'll be in severe pain for several days.

[01:18:32] And moreover, there's a holiday coming up and they know that they won't be able to get a dentist appointment unless they do it now. But nevertheless, they put off making the dentist appointment, and later on they're in much more severe pain and wish that they had made the appointment.

[01:18:48] So I think that that's irrational in just the same way as future Tuesday indifference is irrational. But, you know, you're smiling and I take it that you recognize this kind of trait, that it does exist in the real world. No, that's, I think, a better example.

[01:19:02] I actually find that more compelling. And, you know, I could try to respond that, well, they're not fully informed when they're making their judgments in the present. But given that this has happened, you know, a number of times previously

[01:19:18] in your example, it seems like they are informed. They do have the information, and as somebody who procrastinates a lot and who knows that procrastinating leads me to suffer more than I would otherwise... Yeah, I mean, I think if I was going to come to your side,

[01:19:37] it would be more from a case like that than something so outlandish as the future Tuesday case. That's good to know. I'll use that example more often. I mean, when you were describing that person, I just felt like you were just describing my week.

[01:19:54] I wish that I could limit my indifference to my future suffering to Tuesdays. Now, that would be progress. So I've always been intrigued. And to be honest, at first my resistance to utilitarianism was generally that it's such a bad guide for decision making.

[01:20:14] And I just misinterpreted that this is what utilitarianism was trying to tell us to do. And it is, but now that I know that it's just the right-making criterion, I can fully endorse that it is a bad guide for decision making for many reasons.

[01:20:34] And I've always been just fascinated by the view that Sidgwick and others have expressed, that maybe as utilitarians we should keep this utilitarian stuff to ourselves. And in reading you talk about it in your short introduction,

[01:20:51] you can't answer this, but I think you might be an esoteric morality guy. Oh, yes, yes. Again, we discussed that both in the very short introduction and in the point of view of the universe. We think Sidgwick was right there.

[01:21:07] I think that there are some situations in which it's better. You know, you can do the right action. But if you make it public that you're doing what you're doing, the consequences will be bad.

[01:21:19] And in those circumstances, you should do the right action and keep it secret. You know, if you reliably can keep it secret, of course. So here's a question that relates, then, to a talk in New Zealand that you had, which was recently cancelled.

[01:21:35] Well, yes, it actually is now doubly cancelled, because of course nobody can fly to New Zealand, or for that matter leave Australia, where I am. It was an overdetermined cancellation. It was overdetermined that you never talk. But actually the initial cancellation was a cancellation by a particular venue,

[01:21:52] ironically, a casino, which was taking the high moral ground in refusing me the right to speak there. But we immediately had three other offers of suitable venues. So had it not been for the coronavirus crisis, I would have still been going to Auckland in June.

[01:22:08] So I guess what I was going to ask related to the esoteric morality, it seems that when you are protested, it is generally for your views on disabled infants and what parents should have the right to do. And I wonder if there's any

[01:22:29] times that you regret making that aspect of your work public, because it is a distraction from your other work, which is less controversial and has the potential to impact so many more people. Yeah, I have a few things to say about that.

[01:22:54] In one sense, of course, you're right. It is a distraction and it's led to people protesting as has happened on this occasion and generally that's regrettable. But the two things I would say that go the other way is one, I am a philosopher and I can't avoid

[01:23:13] questions about the implications of my views. And even if I hadn't written about this, even if I had chosen not to write about that topic, somebody would undoubtedly have said, well, it's an implication of your views about, say, abortion and so on that

[01:23:28] they would apply to infants as well. And then they might bring up exactly these cases of parents with severely disabled infants who think that it would be better if the child would not live. So I don't think I could have completely escaped that.

[01:23:38] I certainly could have made it a less prominent part of my views. The second thing is that, at least in a reasonably free society, attempts to suppress ideas actually usually backfire, because then you get a lot of publicity. You know, you heard about this cancellation in New Zealand,

[01:23:58] although you wouldn't even have known that I was planning to speak in Auckland if it hadn't happened. And that's something that's happened in various ways. So the best example is that these protests first started in Germany

[01:24:12] in 1989, when I was invited to speak at a congress organised by parents of children with disabilities. A more militant disability organisation protested that and it became a major thing. And it was in lots of German newspapers.

[01:24:27] Now, practical ethics had been translated into German about five years before that. And if you look at the sales for those five years, they were minimal. They were in the low hundreds per year. You look at the sales after that happened and they go into the thousands.

[01:24:42] It was put out in a small, more popular edition and it's continued to sell reasonably well afterwards. And I know that it's got used in a number of courses in Germany.

[01:24:59] disabled infants, which were controversial, have been reading practical ethics, not just the things I said about infants, but the things I said about global poverty and the things I said about animals. So it may be very hard to calculate,

[01:25:12] but maybe the overall effects have been positive rather than negative. That almost supports a view that's like the opposite of esoteric morality, that you should have something that is going to draw so much attention that that will then publicize your other views.

[01:25:29] If you're in the particular position that I am, yeah, but when we talk about esoteric morality, we're often talking about particular actions like, you know, you torture the terrorists to find out where the nuclear bomb is hidden,

[01:25:41] even though in general it's right to have a rule against torture, because otherwise people will torture all sorts of innocent people for no point, because they're sadists who happen to be prison guards. So yeah, those are the sorts of cases. So very brief final question.

[01:25:58] Do you find that in your interpersonal life, you maybe mask your ethical views in order to seem sort of more in line with the intuitions that other people have? My lab and I have done some work showing that people tend to not like

[01:26:18] people who express being utilitarian, because it seems non-empathic and cold and calculating. Knowing you by reputation, I know that you're not like that, but I wonder, does it influence how you find your interactions with people?

[01:26:34] I don't think it does much on a day to day level. No, I think people, you know, understand and take me for what I am. And as you say, I do have a lot of emotions. And I think, you know, you don't have to be coldly

[01:26:47] calculating in your interactions with individuals. It's more about your general life plan and, as I say, the things that you decide to eat or not eat, the things that you decide to donate money to or not.

[01:27:01] So I think it's not really a problem. Every now and again, somebody tells me that I'm being too calculating about something or other. How many fat men have you shoved off of footbridges? Be honest.

[01:27:12] The thing is, I've never been in that situation where I could save lives by doing that. You know, in some sense perhaps it's regrettable that I haven't had these opportunities to save lives. But instead, I have opportunities, fortunately, by donating to the most effective organizations.

[01:27:31] And let me get in a plug for The Life You Can Save. If people want to know which are the most effective charities, please go to thelifeyoucansave.org, the charity that I founded, and you'll find that out.

[01:27:41] And that's a lot easier to do and causes fewer problems than pushing heavy people off bridges. What was the example? You were just going to give an example of somebody who was saying you were too calculating. All right, so this is actually in this coronavirus crisis.

[01:27:58] Somebody was saying that she was feeling guilty about not being able to see her grandmother, because her grandmother was in isolation, being more at risk, of course, as senior people are. And she was feeling guilty because her grandmother was alone, and

[01:28:12] she was contacting her when she could, but she was not seeing her. And I said, well, you shouldn't feel guilty about that, because obviously it's in your grandmother's best interest that you don't visit, and you're doing everything that you can. And she said, oh, you know, Peter,

[01:28:26] don't be so rational, or something like that. Of course, I still feel guilty. How could I not feel guilty? So maybe I should have simply said, oh, yes, you know, I sympathize with your terrible guilt feelings.

[01:28:38] But I tend to think these feelings should be affected by the knowledge that you actually haven't done anything wrong. All right. Well, thank you so much for joining us. It's been an honor having you on. Good to talk to both of you. Thanks a lot.

[01:29:38] That was it.