Episode 213: What Is It Like To Be a Robot Fish Man? (with Ted Chiang)
Very Bad Wizards · May 25, 2021
01:57:49 · 135.26 MB


We've done deep dives on three of his stories, and now THE MAN HIMSELF, multi-award-winning science fiction author Ted Chiang, joins us to explore the post-apocalyptic world of the video game SOMA. You play Simon Jarrett, a man who goes for a brain scan in Toronto and wakes up 100 years later in an underwater research facility, the last remaining hope to preserve human consciousness from extinction. Pizarro confronts his worst nightmare: a first-person experience of stepping into a transporter-style scenario. We talk about how video games can make philosophical problems come alive, what "fission cases" tell us about personal identity (Tamler's note: this really should count as our Parfit episode), what it's like to be conscious without a body, the problem with thought experiments, and lots more.

Plus, a new evo-psych study on why bullshitting is adaptive – convince people you're smart and save energy while you do it!

Special Guest: Ted Chiang.


[00:00:00] Very Bad Wizards is a podcast with a philosopher, my dad, and psychologist Dave Pizarro having an informal discussion about issues in science and ethics. Please note that the discussion contains bad words that I'm not allowed to say and, knowing my dad, some very inappropriate jokes.

[00:00:34] Welcome to Very Bad Wizards. I'm Tamler Sommers from the University of Houston. Dave, we have Ted Motherfucking Chiang coming on the podcast today. Is there any way that this episode can live up to the hype?

[00:01:20] I mean, certainly not our hype, because we're so excited that there's nothing that can quite live up to that. But it's funny that you say that, because every time I think about Ted Chiang being

[00:01:30] on our podcast, in my mind I say Ted Motherfucking Chiang. And I don't know that Ted Chiang himself would care for this. But you know what? He's not here for our opening segment.

[00:01:41] And in fact, that's the thing that has stopped me from saying that in social media posts, you know. Like, wait, would he not like that? Yeah, it's amazing what you can get away with just within the audio of a podcast that you would never actually write out.

[00:01:58] Oh, that reminds me of not writing things out. I briefly considered writing out the abstracts that we talked about on our last episode because, well, for one, as you pointed out, a lot of people seem to think that they were real

[00:02:13] abstracts, which can only mean one of two things. One, they didn't really listen to the whole segment, right? Or two, they are very, very gullible audience members to, you know, believe them. So, like, I don't even know if they're fucking with me sometimes.

[00:02:30] Like, there was some person who said, I spent hours looking for those abstracts. Like, is that guy being serious? I don't know. I don't know. But then, you know, I thought, well, maybe it would be fun to just put them up there.

[00:02:42] But then I'm like, I don't want that to be searchable on Google. Yeah. Peter Finger. Also, I'm like, did you really believe that there is a philosopher named Peter Finger? On slapping bitches. Yeah, slapping bitches. Yeah. First you get the pussy, then you get it.

[00:02:57] I mean, it's just mind-boggling that people... but I guess maybe it's the mystique of the Journal of Controversial Ideas. You know what I think happened? Yeah. It's that our bullshit abilities are so strong. It's so funny you say that.

[00:03:15] So what are we talking about in this opening segment? Right. Before we move on to talk about the video game SOMA and maybe one or two other things. Tamler, you found, or you saw Neuroskeptic tweet out, this article

[00:03:27] from the journal Evolutionary Psychology called "Bullshit Ability as an Honest Signal of Intelligence." So this is an article where, you know, there's been this recent explosion in work on bullshit. And I think it was started by Gordon Pennycook and a friend of mine,

[00:03:43] Nate Barr, actually, who published this psychological article on bullshit a few years ago. And this is sort of building on that, trying to argue that people's ability to make up pseudo-profound bullshit is actually tracking their underlying intelligence.

[00:04:01] So the ability to bullshit at the drop of a hat is actually reflective of your real intelligence. And I guess this is why we bullshit. That's what I don't understand. It has this distinctive evolutionary psychology

[00:04:20] trait of being, at the same time, ludicrous but also obvious, or obvious and also ludicrous. The ludicrous part is, I think they're suggesting that the ability to bullshit evolved as an honest signal of intelligence. Yeah.

[00:04:42] First you learn how to bullshit, then you get the pussy. Exactly. Right, right. I mean, this is what, you know, this is what professors, at least in some places, have relied on. Right. So this is from this tradition of evolutionary theory

[00:04:59] that we've talked about before in the context of, like, emotions and stuff: the ability to send out a sincere signal of, say, an underlying trait that can't be observed. So you see this in the animal world a lot.

[00:05:12] You'll have some visible trait that tracks something like strength or, you know, reproductive fitness of some sort or other. Like, peacocks' tails are very costly, right? It costs a lot in terms of calories, but that shows that that particular peacock is, you know, genetically strong.

[00:05:29] Can afford to do that kind of costly display. Right. Right. Now, that costly signal becomes meaningless if somebody can fake the signal without paying the price. Right. So it's important that it be a costly signal. But this isn't a costly signal.

[00:05:48] Yeah, this is not a costly signal. So they did two studies, where in the first study they had people read about a set of topics. Just the topics were listed, and they were asked if they knew

[00:06:00] anything about the topic, and some were, like, fake bullshit topics. Concepts, right? Like, they were concepts that didn't exist. And then they had people write an explanation. If it was a real concept, like a legit thing, they said, just explain it as best as you can.

[00:06:13] And if it was a fake concept, they said, basically, just bullshit, just write the most convincing bullshit about this concept as you can. So they had a bunch of people write out these bullshit explanations. That reminds me of the game Balderdash, you know, where you

[00:06:27] have to come up with a fake definition for a word that nobody quite knows. And then they actually had people... well, they gave the generators of the bullshit a series of cognitive tasks, including a couple of IQ tasks.

[00:06:44] Raven's Progressive Matrices was one of them, which is like a visual pattern-matching IQ test, and this other one, which I don't remember, it's like a verbal test that correlates highly with verbal IQ. And which correlated more strongly? Are you talking about the Raven's?

[00:07:02] The Raven's Progressive Matrices correlated less with bullshit ability than... well, no, they did Wordsum. The Raven's is... Oh yeah, RPM is Raven's Progressive Matrices, and the other is Wordsum, right? Wordsum is, they show you a word and a word cloud

[00:07:23] surrounding it, and you have to pick out the word in the word cloud that is the closest to a synonym of the big word. Yeah. So it's just a measure of verbal IQ. And so they had these quasi-IQ measures, and they had people's

[00:07:38] actual bullshit explanations, and they had a bunch of people rate how convincing those bullshit explanations were. So they had some metric of how good a bullshitter these people were. And what they found was a modest correlation in study

[00:07:55] one between these IQ tasks and bullshit ability. What were the correlations? Like a 0.23 with the Wordsum score and 0.15 with the Raven's Progressive Matrices score. So the authors say this is "initial evidence of bullshit ability sharing a modest positive association with measures of intelligence."

[00:08:15] And then also, right, in the second study, they showed that people rated a successful bullshitter as more intelligent. Right. So, in one case you could think of it as an honest signal.

[00:08:31] In the other case you could just think of it as some sort of fake signal. As long as the people are perceived as intelligent, maybe that's all it needs to be for the evolutionary story. Yeah.

[00:08:45] Now, I don't object to... these are just a bunch of correlational measures, you know. It's kind of interesting to get people to write bullshit and have people rate it. But I don't know. Like, when you're writing good bullshit,

[00:08:59] you need lots of words, and that's just measuring verbal ability twice. I'm not quite sure where the surprising finding is. It's not, you know... I feel like decent bullshit is just being able to string together a bunch of words that make sense.

[00:09:15] And isn't that kind of verbal ability? But also knowing, kind of knowing, what will be convincing and what won't be convincing. And, as they say, knowing your audience to some degree. And so, yeah, but again, of course,

[00:09:33] you're going to probably do a little better at it if you're more intelligent than if you are less intelligent. I mean, I would think you'd be able to write about most things better if you were more intelligent than less intelligent.

[00:09:46] And if you weren't, then I would kind of question the metric. I didn't see whether they looked at ratings of how good just the other explanations were. But yeah, right. So that part's not surprising at all.

[00:10:01] I don't think, especially since they were being told to do it. There is some other part of it, which I didn't totally get. But it seems like the bullshitters were asked how much they knew about a certain concept, and then a certain group

[00:10:18] of people would say they knew about a fake concept, which means that they were bullshitting just in that initial survey, too. Oh yeah, the overclaiming. I missed the report of that. "Participants' first task was to rate their knowledge

[00:10:35] of each concept on a five-point scale ranging from 'never heard of it' to 'know it well, understand the concept.' Responses given to fake concepts were summed to create an index of participants' bullshit willingness, with higher scores indicating a greater tendency to bullshit,

[00:10:53] i.e., feign knowledge of fake concepts. Next, a subset of participants, bullshit producers, were presented with each of the 10 concepts individually." The way I understood it, they only used the people who said that they knew a fake concept well.

[00:11:13] No, no, no, it's just written confusingly. So this is what happened. Participants were given this bullshit-willingness task, which is: rate whether you've never heard of a concept, all the way up to, I know it well or understand the concept. So on that scale.

[00:11:30] So for fake concepts, people should all say, I've never heard of it, because they just made up those concepts. Right. So all of the people who wrote the bullshit paragraphs were given this. And what they report later is, I think, probably

[00:11:43] the more interesting finding of the paper, which is that the people who were best at bullshitting were low on their willingness to bullshit. That's right, you would think that it wouldn't be that way. Right. Right. OK.

[00:11:58] Right. I was wondering, like, it's a power that they have, but they're, like, you know, holding back on it. They don't want to use that power. They use it judiciously. Right. Which, when you think about it,

[00:12:12] just take our opening segment from last time. That was just a task of bullshitting. I mean, some of it was just straight-up humor, but a lot of it was just the ability to write bullshit.

[00:12:21] And also to pretend... like, we definitely committed to the bit. Right. Like, we pretended that we really came across these abstracts. So we're smart. Yeah, we must be. But I don't... you know, as a professor

[00:12:35] who supposedly cares about teaching well, like, I think bullshitting is the worst trait in a professor. The willingness to just say, I don't know, rather than launch into a fake explanation, I think, is a mark of a good teacher.

[00:12:48] But you just know that the people who are the clearest and the least likely to bullshit probably could, if forced, come up with something on the spot, you know. Well, that's what that finding suggests, right?

[00:13:01] Which strikes me maybe as, like, a sign of a sort of self-confidence in your intelligence: that you don't pretend that you understand a concept that you really don't understand. You don't feel the need to do that. Right. There are, like, levels of insecurity.

[00:13:17] There are some people who, I think, are compulsive liars because they're insecure. They just straight up make up good things about themselves. And then you have, like, the bullshit insecurity, right? Which is, I want to sound important, and I feel like...

[00:13:30] Our mockery of the ad world, like, of the branding world. I mean, that's like that. The corporate world is high on this. Yes. Like, let me make up words to try to sound smart. And corporations are persons, so... Yeah, that's right.

[00:13:48] But yeah, no. And I've done... I mean, I think we've all done that. Like, I have definitely pretended to understand or know something that I have no understanding of or have never heard of. And, like, I can take it pretty far sometimes.

[00:14:02] And, you know, I do it whenever you bring up a sports reference. I do it whenever you bring up a philosophy reference. I'm like, breaking balls? Yeah, those are hard to hit. Yeah. You know, this is the kind of article, though, where

[00:14:22] I don't think they needed the evolutionary theorizing. They just put together a couple of interesting measures and gave them to a bunch of people. And I find, like, yeah, it's kind of interesting. But it's published in Evolutionary Psychology

[00:14:34] and framed, at least, as, this could be the key to understanding how humans got to be so smart, somehow. Like, the first sentence is, "Human intelligence has been a long-standing mystery to psychologists, in particular why humans

[00:14:52] differ so greatly in intelligence compared not only to distantly related animals, but our closest primate cousins." Yeah, there's a lot of, you know, talking about how did we get such big brains, or whatever. So it's kind of standard

[00:15:06] fare for cognitive evolutionary psychology. I was just going to read the last sentence of the abstract, because Neuroskeptic pointed it out. Yeah, the last couple of sentences of the abstract. "We interpret these results as adding evidence for intelligence being geared towards the navigation of social systems."

[00:15:22] "The ability to produce satisfying bullshit may serve to assist individuals in negotiating their social world, both as an energetically efficient strategy for impressing others and as an honest signal of intelligence." The "energetically efficient strategy" part, like, you don't need that at all.

[00:15:38] Well, were they measuring the calories of that? They do claim it, though, right? "The ability to produce satisfying bullshit, with its emphasis on impressing others without regard for truth or meaning, may represent an energetically inexpensive strategy for both signaling one's intelligence and deceiving others to one's advantage."

[00:16:01] Like, what does this "energetically inexpensive strategy" mean? It literally ought to mean that it takes fewer calories to bullshit in order to convince people of your intelligence than it would to use another method of convincing people of your intelligence, right?

[00:16:26] So, in purely evolutionary terms, the lower-cost behavior, the more efficient one, ought to be the better strategy, right? It's not literal calories though, right? Because... this is supposed to be calories in the sense that your brain runs on glucose.

[00:16:45] It really is. So, you know, one big question of evolutionary psychology is how human beings developed such big brains, because they are so energy-hungry. Right. So I think they really are alluding to the actual physical costliness of this strategy.

[00:17:03] So it says, "Indeed, past work provides initial evidence for this claim, demonstrating that indiscriminately attaching meaningless, pseudo-profound bullshit titles to artworks increases their perceived profundity." That is evidence for the claim that bullshitting is an energetically inexpensive strategy for signaling intelligence and deceiving others?

[00:17:27] Like, I don't get the link between those two things. Indiscriminately attaching meaningless, pseudo-profound bullshit titles to artworks increases their perceived profundity. Why is that more energetically efficient than describing it in honest terms, or making the artwork better? Or... yeah, I think this is...

[00:17:47] Yeah, I think what they're trying to say is, you could have two different ways of making your art seem really profound. One is, you could put the work in to actually making your art profound.

[00:18:01] And two is, you could just change the title to be pseudo-profound. And both of those would convince people, right? And so one, in comparison to the other, is less costly. Yeah, I would like to know what they mean by that.

[00:18:15] But here's the thing I wanted to ask you about. So it says, intelligence in the social world is theorized to have been formed primarily in response to three pressures. The first is the need to accurately signal intelligence

[00:18:30] in order to demonstrate genetic quality and fitness to potential mates. There are a bunch of citations; these are Geoffrey Miller papers. So I guess my... like, this is a simple, honest question. I'm not trying to be snarky or a dick. You don't even have to try.

[00:18:44] You have a very energetically efficient way of coming across as snarky. Yeah, that's why I am so attractive to potential mates. So, like, the idea that intelligence evolves as a need to accurately signal intelligence to demonstrate genetic quality and fitness to potential mates...

[00:19:05] So there is a kind of circularity there, which is, like, if being intelligent increases your genetic fitness, then that's the evolutionary pressure, right? Like, yeah. I'm going to be charitable, because I don't know these authors.

[00:19:21] Like, you wouldn't want to start off your list of three things with, the need to signal intelligence is a pressure for being intelligent. I would want to first start off with, like, why is being intelligent good?

[00:19:37] Right. And maybe end the list with, OK, now that we've shown that evolving intelligence provides this, then maybe it's also important to be able to signal that you have these qualities.

[00:19:53] Right. Geoffrey Miller has this view, too, of, like, moral character: that basically, I don't know, men in particular evolved moral character so that women would want to fuck them. And so I think this view of intelligence in general is, like,

[00:20:10] yeah, let me show you that I'm awesome. But it only works if it is, in fact, awesome to be intelligent. Right. In which case there are evolutionary pressures, allegedly, I guess, to be intelligent anyway. Right. And you don't need to spin anything

[00:20:26] that complicated about why having intelligence is good, you know? Like, what? Right. But, like, the idea that the first good thing about intelligence is that you can signal to others that you're intelligent, which has obvious evolutionary

[00:20:44] advantages... So, you know, it's like a Ponzi scheme. Yeah, it's weirdly saying that you have intelligence in order to accurately signal intelligence. Right. Evolution decided, hey, how can I get this organism to signal intelligence? I know, I'll make it intelligent,

[00:21:01] and then it'll figure out how to signal intelligence by bullshitting. I mean, there were these things... like, Robert Frank talks about this, right? That, like, the best way to signal that you're in love with somebody is to actually love them.

[00:21:18] But there's an independent reason why they would find it attractive that you loved them and were committed to them and that, you know, you weren't going to abandon them. Yeah, exactly. That you'll raise their kids with them and all that.

[00:21:35] Exactly. In any case, that's the sort of convoluted part of this paper: anything that involves evolution. And, like, honestly, does anybody really, really, really sincerely believe that bullshit ability was selected for?

[00:21:51] Like, that just seems like an outgrowth of what it means to be intelligent. Like, you know what, when you're very smart, you'll be able to do things like, you know, figure out how to get a banana

[00:22:00] that's a little too high, and you'll probably be able to convince somebody, you know, Mark Twain style, like, Tom Sawyer style, convince them to paint your fence. Like, there's a lot of things that are going to fall out of high intelligence.

[00:22:14] But what you're not recognizing is the recently discovered bullshit module in the brain. It turns out that there is one very, very specific area of the brain that's activated when people are bullshitting. It's actually right next to the cheater-detector module.

[00:22:30] Because, like, I could draw it out for you right here. There it is: the cheater-detector module, and it's near the amygdala, and then bullshitting. I wish you guys could see where on Tamler's head he's pointing, because he's off by, like, two miles.

[00:22:46] And I know nothing about the brain. Yeah, yeah. And the language-acquisition module, I think, you know, the Bloom morality module. The trash-talking module. Yeah. So yeah, I mean, thank you, Neuroskeptic. And it's good to know that evolutionary psychologists are still going strong. And, you know,

[00:23:10] and that they have a journal, their own journal. We think they do. And we should get our own houses in order before we cast aspersions. That's true. I actually was just having a long conversation with one of my students

[00:23:25] about how much evolutionary psychology to put into a paper on character. And, like, you know, I think that there is a lot of evolutionary psychology that does give rise to some really interesting ways of viewing human behavior. It's just sometimes,

[00:23:41] sometimes it's like trying to squeeze a large man into some skinny jeans. It's like, is it really necessary? Nobody wanted to see that. Well, also, like, you mentioned it almost as a joke,

[00:23:54] but it's almost just taken for granted in this paper that there is this bullshit-ability trait that we have to come up with some explanation for: why it exists and what adaptive purpose it serves. And so, like, yeah, OK, this is a good one.

[00:24:15] So imagine saying that the ability to score a high score on Scrabble is clearly evolutionarily selected for, because it is correlated with intelligence, and when you do well on Scrabble, people think you're intelligent. Yeah. It's like, nobody would believe that there was a Scrabble ability

[00:24:40] that was selected for. Like, everybody would understand that Scrabble games are complete cultural constructions that are built on our cognitive abilities. Sure. So is it tracking cognitive abilities? Sure. In some sense, like, memory and all kinds of other things.

[00:24:53] But the need to spin a story that there were pressures that gave rise to bullshit ability seems completely unnecessary, and it would be obvious if you just insert anything else that requires brain power in there. Like, you know, Boggle. Like, I don't know. Candyland.

[00:25:16] Yeah. SOMA ability. The ability to play SOMA and not get lost. Which I don't have. And so the evolutionary pressures didn't work on me. Yeah, we'll see how much gets edited out of the very long recording session.

[00:25:31] But one of the things that Tamler and I realized very early on is... what did it say? SOMA takes like 10 hours to play. 10 to 12 hours. And we were like, man, hours take a lot longer than they used to.

[00:25:44] Just the idea that I could finish that game without, like, looking at every web resource I could find. I know. Or, like, just the absolutely frustrating experience of trying to find the new part of

[00:26:00] wherever you are, to, like, continue forward, and, like, exiting from a room and being like, oh, I can go down this hall, and then going down that hall and being all excited because you think you found it, and just ending up

[00:26:11] at the same exact place where you started because you took a wrong turn. And it's, like, so demoralizing. I mean, the ocean parts were the worst, because they would say that it was, like, not possible to get lost.

[00:26:26] They wouldn't even talk about this in the walkthroughs, because it was just so, like, obvious. And I would get lost every single time. And when you're in the ocean, you're walking on the ocean floor.

[00:26:36] It's like... you have no way of having a sense of direction. Right. Yeah, you can't see very far ahead. Like, your flashlight doesn't quite work in the same way. And they're like, no, the path is well lit. Except the lights just stop

[00:26:49] sometimes, and you're left wandering around. Anyway, I suggest for our listeners, who aren't probably going to play the whole game, that they at least look at... because we do spend a lot of time talking about SOMA the game, and we try to summarize it, and we try to,

[00:27:08] you know... but at a certain point, we probably assume knowledge of the game in this conversation, just so we can jump to the sort of philosophical questions. So what would you suggest that listeners do in preparation?

[00:27:22] Yeah, it's a good question, because reading reviews isn't going to help; reviews, on purpose, are not going to tell you too much of the details of the game. If you really want to know exactly what happens in the game,

[00:27:34] I think reading a walkthrough might give you a sense of it, or watching videos of people doing a walkthrough, if you really don't want to play. Or maybe, like, looking at spoiler reviews. That's what I was going to suggest.

[00:27:46] Like, if you type in, like, "SOMA review spoilers," you'll just get a sense of the game. Now, of course, I only did that after I played, so I don't know to what extent those things will make sense if you haven't played it. But right.

[00:28:03] I think the underlying general concepts that we end up discussing with Ted Chiang are going to be straightforward enough from our description. But, you know, there's an experiential... and we talk about this with him. There is an experiential aspect to it that drives the concepts home, I think.

[00:28:22] So, I mean, if you have the time, go for it. And the money. Yeah, and the money. That's right. And the computer. All right. We didn't talk about Trump. He's kind of a great bullshitter, not necessarily in a way that seems, you know, completely correlated with intelligence.

[00:28:43] But something I've never figured out is whether he's a great bullshitter or if he's just a liar. Right. Just a good liar. Yeah. Like... right. So, just in pure self-interested, utilitarian fashion, he will say the things that will get him out of it.

[00:28:58] Yeah. Because he doesn't sound... he doesn't seem to try to sound profound. No, that's true. And is that a necessary condition for bullshit, though, seeming profound? Or is that just one way of bullshitting? It's just one way of bullshitting. Yeah. Yeah, I don't remember. We'll have to go back.

[00:29:15] We'll have to read Frankfurt. Sommers... or Pizarro and Sommers, excuse me. Pizarro and Sommers, 2000... what was it? Twenty nineteen? See Pizarro and Sommers 2019 for further research on bullshit. We don't get cited enough. We don't. There needs to be,

[00:29:38] like, a good APA-style way to cite podcasts. And I think maybe once that standard comes, then we'll get cited all the time. I think, obviously. Yeah. Well, time stamps. Pizarro and Sommers... instead of page numbers, it'll be time stamps. Yeah, time stamps.

[00:29:57] All right. Well, get on that, people. Figure that out. We need citations. All right, we'll be back to talk to Ted Motherfucking Chiang. Today's episode is brought to you once again by Wine.com. Dave, if there's one thing this pandemic has taught us,

[00:30:16] it's the importance of drinking, the necessity of drinking. Amen. But, you know, with all those hours in the day, you don't want to just drink shit wine. You want to drink the good stuff at a good price.

[00:30:32] And look, there are times when it's nice to go to a big wine store, you know, if you're in a big city and you have access to that. But if you don't really know what you're doing, you just end up kind of wandering aimlessly,

[00:30:46] maybe talking to one of the people. Most of the time, you just want to snap your fingers and have the wine in your house and Wine.com makes that happen. It lets you learn, explore and purchase all from the comfort

[00:30:59] of your home on your time without the need to stand in a wine aisle for lengthy periods of time trying to make sense of label after label. This is one of the wonders of the modern age. You can just buy wine online, have it shipped to your house.

[00:31:15] And if you're one of the lucky people in a handful of states, I just saw this, you can even get hard alcohol. I think I can. Yeah, I think you can, son of a bitch. I mean, look, you could get drive-thru shit in your state.

[00:31:27] So, but, you know... Which is great, because sometimes, you know, you just need to pop a few back when you're on a road trip. You can get unlimited free shipping, which is huge for wine, right? Because it's heavy. For only $49 a year,

[00:31:44] Wine.com offers that when you sign up for a StewardShip membership, and you can use that membership to send gifts throughout the year to family and friends. Shipping is free every single time. You have expert guidance to help you choose.

[00:32:03] It's the only site that offers free professional ratings and tasting notes. And whether you're a novice or an expert, you can have, if this is your thing, a live chat with wine experts to help you find the perfect bottle for every occasion.

[00:32:21] You can save your favorites to my wine. That's one of my favorite features because I never remember the wines that I like. And yeah, Dave, it's your one stop wine and spirits shop in California, New York, Florida and New Jersey.

[00:32:35] You can have that good bourbon shipped to your house right now. So for all our listeners, we have a really good offer here. Go to wine.com slash bad wizards. No "very." Just wine.com slash bad wizards and get $50 off your first order.

[00:32:51] Once again, go to wine.com slash bad wizards and get $50 off your first order. Terms apply. Thank you to wine.com for sponsoring this episode. Welcome back to Very Bad Wizards. This is the time where we like to thank all of our listeners

[00:34:18] and the people who interact with us, the people who get in touch with us and all the different ways that you do through email, Instagram or Facebook even or the subreddit, which I just got locked out of.

[00:34:36] Although I think I have my account back now, but I got hacked. So if you see... Somebody could have posted like, I don't believe in ghosts anymore, by Tam. I know. I think that's what they were going to do until I reclaimed it. Or aliens.

[00:34:52] I don't believe in ghosts. Or aliens, even though aliens obviously exist right now and they're, like, taking joy rides. Oh, what about the ghosts of aliens who have died on our planet? Yeah, I think that could be. I don't want to get into this right now,

[00:35:08] but my dog was outside, doors were closed, 10 minutes later he's on my bed. So my prankster ghost, maybe the ghost of an alien, is at it again. So anyway, we love interacting with you. It's been fun just putting out

[00:35:31] like, the word that Ted Chiang is going to come on and seeing how excited some people are. It's a little nerve-wracking, you know. Now it's super nerve-wracking. Yeah. Yeah. But anyway, if you want to email us,

[00:35:45] you can reach us at verybadwizards at gmail.com. If you want to tweet at us or follow us on Twitter, it's at peez, at tamler, or at verybadwizards. You can follow us on Instagram, go to the subreddit, which is almost 7,000 strong right now.

[00:36:05] A lot of good stuff there, and you can go rate us on Apple Podcasts. Yeah, and subscribe to us on Spotify. Those are all the different ways that you can join the community without even spending a dime. But if you are feeling generous,

[00:36:24] generous not only of spirit, but generous of, what? Pocketbook. Wallet. Generous. And you want to support us in more tangible ways, we very much appreciate that support as well. It's what keeps the lights on. It's what, you know, on days when Tamler and I

[00:36:44] just hate each other and don't want to talk to each other, we always remember that our dearest, dearest Patreon supporters would be disappointed that we didn't put out an episode. Angry. Ben angry. That's right.

[00:36:58] They want their money back, and that's something we're not willing to do. So if you want to support us in those more tangible ways, the easiest way is to go directly to our support page at verybadwizards.com slash VBW support.

[00:37:12] And there you can see all the different ways you can give us a one time or recurring donation on PayPal. We very much appreciate that. You can become our Patreon supporter. And if you do, you will get a number of perks actually.

[00:37:28] So we have a page that lists all the bonus episodes that we've done. But what do we have coming up? Do we have anything on the docket? Yeah, yeah, well, we were going to do another Sopranos episode.

[00:37:43] But, oh, we're going to do, like, a ghost throwdown. We're going to do, like, a throwdown over... Can we just toss aliens into it then, to just get it out of the way? Well, I mean, it's a different thing, though, because aliens are definitely

[00:37:58] here among us. The certainty levels are so different, so drastically different. Yeah. Yeah. All right. So we have something ghosts coming up, and we're probably going to do another Sopranos episode at some point. Yeah. I'm in the last season of my rewatch.

[00:38:14] So, oh yeah, I don't want to spoil it for you. I heard it's a very definitive ending. That's right. There are other ways you can support us. You can check out our five-part Brothers Karamazov series that we did at Himalaya. Go to Himalaya.com.

[00:38:33] You can either purchase that directly, or you can subscribe to Himalaya and you'll get it. Or if you are a Patreon supporter at the five-dollar-and-up tier, you will get access to all of those. At one dollar

[00:38:47] and up, everybody gets ad-free episodes. At two dollars and up, you get access to all our bonus content. And at five dollars and up, you get everything. The Karamazov stuff. You get our personal debt of gratitude in your mind.

[00:39:01] Oh, and the vote. Actually, let's just call it right now. Did it end? Well, it doesn't end, like, I don't know how to make it end. But this is the end right now. Oh my God, it's forty-seven to forty-six.

[00:39:19] Kuhn. Thomas Kuhn, The Structure of Scientific Revolutions. It was really a two-topic race throughout. We're going to be saying the word paradigm a lot. A lot of paradigm, a lot of paradigm. Yeah.

[00:39:36] So yeah, if you want to buy some swag, you can get shirts and hoodies from our Cotton Bureau website, the most comfortable shirts and hoodies money can buy. You can also go to the VBW mug section and get one of our awesome, amazing mugs.

[00:39:54] Tamler, you don't have one yet. Not yet. Not yet. But they're awesome. I do. They look great. I really want one. So yeah, we appreciate all the forms in which you guys support us.

[00:40:06] And thank you from the bottom of our hearts, from the bottom of our hearts. So let's get to Ted Chiang, and I will just give a quick introduction to someone who probably needs no introduction whatsoever, especially if you're a listener to this podcast.

[00:40:22] Ted Chiang is a multi-award-winning writer of science fiction and fantasy short stories and novellas. He's published two collections. The first is called Stories of Your Life and Others, and the second is Exhalation. Both are just incredible and you should buy them and read them

[00:40:42] right away if you haven't already. His short story Story of Your Life was adapted into the great 2016 movie Arrival, directed by Denis Villeneuve, which we have an episode on. And we've done episodes on three of his other stories, too, which you can always find in our archive.

[00:41:05] His work delves into the deepest philosophical questions. He's obviously one of our favorite authors. So let's get to the interview. All right, we're here with Ted Chiang. Ted, thank you so much for joining us. Happy to be here.

[00:41:19] So when we were thinking about what to focus our discussion on today, we thought it might be fun for you to talk about a work of some kind that you love or that influenced you like a short story, a movie or a short novel

[00:41:31] or a video game even. And you suggested the video game Soma, which we've now played. And so that's what we're going to talk about. So let me ask you why did you want to talk about Soma? Why did you want to talk about this game?

[00:41:45] Well, because you included the possibility of video games as a sort of category of work to discuss, which isn't very common. I guess, you know, I think people are often going to talk about movies or TV shows or books in terms of explorations of philosophy.

[00:42:03] But when you actually mentioned video games, I was like, oh, that's an interesting possibility. And immediately Soma came to mind, because I think it is a great dramatization of some pretty common philosophical ideas. But the fact that it's done in video game form,

[00:42:25] it can make the questions sort of more vivid than maybe other media would when they address the same type of question. Yeah, we should definitely talk about exactly how, because I had the same thought. Dave, do you want to do, like, a little

[00:42:41] summary of the game? Yeah, so for our listeners who haven't played the game or are not going to play the game, feel free to listen. If you don't want to be spoiled because you're going to play the game

[00:42:52] and you don't want to find out what happens, then stop listening. But here's a very, very brief synopsis. So Soma is a first-person psychological thriller. It was published in 2015 by Frictional Games. It's an independent game.

[00:43:05] And in the game, you play a man named Simon Jarrett, who goes into a doctor's office to get a brain scan. You're not sure why at first, but he had some head injury. And it takes place in present-day Toronto.

[00:43:18] But as he gets the brain scan, he wakes up nearly 100 years later in what turns out to be an underwater research facility named PATHOS-II, which unexpectedly came to house the lone survivors of humanity on Earth, because it turns out Earth got hit by a comet.

[00:43:33] Everyone on the surface is dead, and this facility got turned into one focused on saving the last remnants of humanity. And so they devised a plan. The plan was to construct an ark and to put the remnants of humanity

[00:43:48] on this ark and launch it into space with a magnetic rail gun. But as you play the game, you come to find out that the things that will be put into space to preserve humanity are not flesh and blood at all, but rather consciousnesses that have been

[00:44:03] scanned and uploaded into, essentially, a machine. You also realize throughout the game that you are one of those early scans: when you went to that doctor's office, your brain got scanned, it got saved, and you ended up here in this research facility.

[00:44:18] And it's your job, essentially, to navigate this underwater research facility to get to that ark, which has just sort of lain dormant, because what you find out is that even this research facility has been pretty decimated. Your job is to get that ark into the rail gun,

[00:44:36] launch it into space and save humanity. Along the way, you realize that one of the things that went wrong was that this general intelligence, this artificial intelligence called the WAU, whose job it was to help humanity get this done, kind of fucked up along the way and created

[00:44:50] these monstrosities. So a lot of the game is spent, and the moments of tension for me, especially playing with surround sound, are spent, trying to avoid what are clearly hybrid monsters of some sort, part robot, part human, part animal. How's that? That's great.

[00:45:09] Here's one question I have for you. So I'm not a gamer. Did you just do the game without any of those walkthroughs or anything like that, where they sort of tell you how to get the level done?

[00:45:21] So I first played the game, you know, I think shortly after it came out. And then I just recently replayed it for this podcast. I think that when I played it the first time, I mostly did it without a walkthrough. That's incredible.

[00:45:36] Like, I think I was telling my family that I could play this game every waking hour for the rest of my life without finishing it if I didn't have the walkthrough. There must be some, like, video game intelligence

[00:45:48] or something that gets, I don't know how that gets locked in like that, but I definitely don't have it. I mean, yes, modern video games have their own sort of language, which you have to be fluent in. And it's something that you acquire over time.

[00:46:09] Almost any sort of modern narrative game is going to be difficult for a first-timer, because they are all relying on certain conventions, certain narrative tropes, which regular gamers are familiar with and which help them navigate the game world.

[00:46:30] I'm also assuming that you guys played in safe mode, where the enemies can't hurt you. I chose not to. I wanted some stakes. I needed some stakes, and boy, did I regret that on one of the levels, where you have to, like, unplug three

[00:46:43] of the power hoses and run away from the monster. Yeah, I must have done it 50 times, but I honestly didn't realize that I wasn't running, I was just walking. Because the first time I played it, they had not patched it to include a safe mode.

[00:46:58] So I had to deal with the enemies. I played safe mode and I still died in one game. What I loved about this video game is that it asks these big questions. And as you said, Ted, like there's something about playing through the narrative where you get involved,

[00:47:15] that there is a more emotional way of communicating some very philosophical ideas. And so I sort of think that about short stories like yours, where I would read a story and say, well, now I don't have to read essentially like some

[00:47:30] tome of philosophy in order to understand the concepts because this short story can introduce and get you to feel the problem, the philosophical problem in a really, really efficient way. But a game does that and then removes all of the efficiency from it.

[00:47:47] So I wanted to just ask you in general about the narrative structure of games and how, you know, a while ago, I remember Roger Ebert made this claim that games weren't art. And I think everybody jumped down his throat because clearly they are.

[00:48:01] I think this is the case. But how do you see games, in their ability to communicate emotional and philosophical concepts like this, as opposed maybe to the short story format? So of course, you know, I am a big fan of fiction

[00:48:17] as a vehicle for examining philosophical questions. You know, I think fiction can increase engagement and make questions that might otherwise seem really abstract seem more visceral. But I think what video games offer is, well, OK, so to some extent

[00:48:37] part of it has to do with the amount of time that you spend with them. Because if you're watching a movie, you'll be spending a couple hours in a character's shoes. In a video game, you might be spending 20 hours or 40 hours

[00:48:52] or 80 hours in a character's shoes. So you're going to, you know, pretty strongly identify with that character. And you will feel not just a sense of identification; because you are actually, you know, controlling that character's actions,

[00:49:12] you will feel a kind of complicity in those actions. You are the one who hit the button which causes the character to do the thing. You know, that is something which other media don't offer. You know, there are games

[00:49:29] where the outcome of the game changes radically depending on what decision you as the player make. Soma is not one of those games, but I think that the fact that you are doing the thing, you are the one who is having to kill things

[00:49:44] or choose not to kill them. Even if those choices don't change the ending of the game, don't change the course of the plot, you feel complicit, because you were the one who had to push the button. You are causing the story to unfold

[00:50:01] in a way that you don't when reading a story or watching a movie or watching a TV series. You are intimately involved in the unfolding of the story. And I think that can make narratives in general, you know, more involving

[00:50:15] and it can, you know, in terms of examining philosophical questions, I think it can make them very visceral. Yeah, I mean, so to tie it to one of the concrete philosophical themes, maybe the most obvious is personal identity and these kinds of fission cases,

[00:50:31] where you upload a copy of yourself into another body, and, you know, the game switches your perspective to that new person but makes it very clear that you are also still in the other body. And that's, you know, a very common problem.

[00:50:49] What does that mean about personal identity and the conditions for personal identity? But there's something that this game does that no philosophy article could do, no movie could do, which is make you then decide,

[00:51:02] well, two things, actually. It makes you decide what to do with the copy of you, whether to kill the copy or not. But then it also viscerally gives you the sense that you either went one way or you went the other way.

[00:51:16] You were either staying in the body or going into the other one. And since there are two times where this happens and you get both perspectives, it makes that problem come alive in a way that I had never really thought

[00:51:30] that problem was alive, reading Parfit or, you know, whoever on the topic. Yeah, that's why I suggested this game, because I thought it was just a great science fiction narrative.

[00:51:46] Even though, you know, the specific scenario that it depicts is not exactly breaking new ground. You know, these are ideas that we've all encountered before. But they're much more dramatic in a video game.

[00:52:03] You know, the fact that it is a first-person game, it has very detailed environments, it has great sound design. That sound design is amazing. Yeah, all of those contribute to that feeling of immersion. Tamler, that thing that you were talking about, how it switches.

[00:52:22] So, let's, at some point you realize, early on, but I think probably before they tell you explicitly, that you are a copy of Simon Jarrett from Toronto. So it's 100 years later. Yeah, 100 years later. I thought I was just a robot,

[00:52:35] come to find out you're actually, like, in a dead person's body, I guess. A woman. In a woman's body. You're in kind of a weird combination, right? A dead woman's body and some computer parts.

[00:52:51] Right. And there is, you know, sort of a narrative device, this bio gel stuff, that allows organic and computer to fuse together. I think that's what it does.

[00:53:05] And so that's what allows your scan, your, you know, bits, your ones and zeros, to be interfacing with the body. And just to be clear that I get this part of it: the WAU, which is this AI,

[00:53:18] biological AI organism, has sort of gone rogue and has started to reanimate all the dead things that it finds, or create fusions of things, just because its job is to keep everybody living or something. It doesn't,

[00:53:36] or we don't know exactly what its motivations are, but that's what's happening. So. Oh, and I should, I suppose, just for our listeners: the WAU, that's W-A-U, not W-O-W. But yeah, the WAU, that's the rogue artificial intelligence

[00:53:53] that, yeah, was initially just sort of running this underwater station. You know, as for the details of exactly what is going on, it seems like after the comet hit the Earth and all of humanity was destroyed, the WAU realized that it had to, you know,

[00:54:13] change its strategy for protecting human life. So it seems like it concluded that the best way it could ensure the continuation of humanity was to take scans of people's brains and incarnate them either in robots or in some organic-robot hybrid.

[00:54:36] It has somehow concluded that that is its mission. And so a lot of the sort of horrors that we see are, I guess, the result of its misguided attempt to preserve humanity, to keep humanity from going extinct.

[00:54:55] Yeah, it took me a while to understand what was going on with the WAU. And once I understood that it was this sort of last-ditch, misguided attempt... there are two things that I think maybe are being communicated there.

[00:55:09] One is that a computer doesn't have a conception of what it means to be human. And so, you know, of course, it's going to try to animate corpses and download what it thinks humans are, which are these scans.

[00:55:24] And it doesn't realize that what it's creating are, to us, monsters. But to the WAU, this is fulfilling its task. So I actually felt a little bad, because one of the last things that you do is destroy the artificial intelligence,

[00:55:42] rendering it unable to continue to create these monstrosities. In fact, a lot of the good horror comes from these patrolling robots in the ocean. And as they approach you, you realize that they're human minds that are, like, stuck on something or other.

[00:55:58] You know, they're just grumbling; one of them is cursing at you. They're like this very degraded form of a human mind that's been put into this weird hybrid body. So they're monsters to us, but not to the WAU.

[00:56:10] The WAU is innocent, I think. And I actually felt bad destroying it. It wasn't, to me, such a clear villain. The true horror comes from knowing that this is just what it understood it should be doing.

[00:56:23] Well, another interesting thing: those robots that you encounter, you know, visually seem fairly crude, but they've had a human brain scan loaded into them. And so they seem to think that they're people,

[00:56:42] and they don't notice that their hands are these metal clamps. They see themselves as having human bodies. Look, I'm obviously hurt. If you see anyone else around, just tell them where I am. So where are you exactly?

[00:57:04] Are you for real? I'm right here. See? I'm waving. Look at my hands. Hey, buddy. OK, OK. I'm just... I'm not seeing it. I see a machine, a robot, talking. What are you looking at? I'm here. See.

[00:57:22] Which is actually the same situation that Simon is in. Because, you know, it takes a while for Simon to realize, you know, what his body looks like. He initially sees himself as having a completely ordinary human body. Only after a while does Simon start to actually see

[00:57:42] that his hands are not the hands that he's used to. They are, in fact, like the gloves of a sort of diving suit that this corpse was wearing. There's also, I don't know if you guys ever,

[00:57:57] when you were playing, if you ever tried looking in a mirror. Yeah, in the game. That was the first time I saw, like, myself as the robot. Yeah, I don't know that I saw myself. Yeah, because, you know, it's kind of interesting,

[00:58:09] because there are mirrors earlier in the game which you can't really look at. But at a certain point in the game, you can look at a mirror, and then you see your spooky robotic face. Yeah, so that is timed to match

[00:58:26] the sort of dawning realization that Simon has of what his physical form actually looks like. Yeah, and there's a line from, I think, Catherine. So Catherine is just a mind that's loaded into something called the Omnitool that Simon carries around.

[00:58:44] And she's pretty much guiding him the whole way. She was the one who designed the ark. And she says at one point, when it dawns on him that he's the robot, like, he says, why didn't I realize that before?

[00:58:58] And she says your mind really only has one way of perceiving yourself. And so that's why you have just taken yourself to be who you are. It's almost Kantian in that way. Like, you have these categories that are just based on you being human.

[00:59:15] And that's how you're going to see the world, barring some, you know, major event. So it gives a kind of interesting subjectivity. You know, another thing that the game can do is sort of really make that idea come alive more vividly, I guess.

[00:59:32] Yeah, there's a lot in there about how much having a body matters for who you are, and what kind of body you have mattering, which I found to be a super interesting idea.

[00:59:44] Like, can you have a free-floating mind in a robotic body that is the same? Or would your mind itself be changed if you were in that other body? But that feeling of still seeing yourself as human,

[00:59:56] even when you have a robotic arm. Honestly, that's kind of how I feel every time I look in a mirror. I'm still, like, 18-year-old David, and I'm like, wait, who is this person? Because I don't change. My body is changing.

[01:00:11] However, we still see the world from inside the same way. Yeah. And so, you know, I think the character of Catherine, Catherine Chun, she's a really interesting example through which to look at this question. Because she herself

[01:00:27] is a brain scan that is residing in a computer. She has no body. She's not controlling either another robot or a corpse. And, you know, what does she see herself as? She's also extremely comfortable.

[01:00:47] You know, whatever she sees, you know, she's comfortable with it. She is quite at ease with this in a way that Simon is not, even though her experience is probably much weirder than what Simon is experiencing.

[01:01:05] And so she's getting disconnected all the time and reconnected. Yes. It's like anesthesia, is the analogy that leapt to mind. Because, you know, like when you go to sleep and you wake up, you realize that you've been asleep.

[01:01:18] But the one time in my adult life that I had anesthesia, like, I thought I never got it. Like, I was talking to the nurses or the anesthesiologist, and then all of a sudden the surgery is done.

[01:01:31] And I didn't think any time had passed, and it was like six hours later. That's what I think Catherine's life is like. She just gets unplugged, and when she gets plugged back in, it takes her a second to be like, wait, where am I?

[01:01:43] What happened? Yeah, because sometimes she has interruptions when she's mid-sentence, and then she actually completes the sentence. Right. So there are a couple things that I think are interesting about Catherine. One, you know, when Simon sort of plugs

[01:01:59] Catherine in, you know, when he enters a new part of the station and plugs her in and she helps him, how is she controlling, you know, things like power in the station? What is her sensorium like?

[01:02:12] Right. Because she just made the argument to Simon that, you know, you perceive things in a certain way based on how your brain knows how to perceive them. So what is her perception like? Because, you know, she is looking through files,

[01:02:25] she's running diagnostics, she's, you know, rerouting power. What does that look like to her? Right. Does she view herself as, like, in an office, looking up files in a filing cabinet, you know, hitting switches?

[01:02:40] Or is she just completely embodied in the computer in a way? So part of what you learn about Catherine is, obviously she's the smart engineer who's building the ark, but she also is not very socially gifted, apparently.

[01:02:54] You know, like, in one of those logs you find out she didn't have very many friends. So I thought that her comfort with her circumstances was almost, it was their attempt to show us that she's so dispassionate

[01:03:10] as an engineering mind that this didn't bother her nearly as much as it would somebody who's, like, you know, I don't know, higher on the empathic spectrum rather than on the autism spectrum, or something like that.

[01:03:22] I don't know if that's what they were going for, but. Or just that, you know, she was never that comfortable in her human form, for whatever reason. It doesn't have to be a spectrum thing or anything like that.

[01:03:35] You know, socially, she never felt like she fully fit in. And now she does, in a different way. Because she even says, like, it's better this way, you know, when she sees her dead body.

[01:03:50] I mean, so yeah, this is the other thing that I wanted to talk about, because at one point Simon asks her, you know, how are you OK with this? And she says, I was never that comfortable in a human body anyway.

[01:04:03] And she says it's not that much worse, being a box. And, you know, this is interesting, because certainly, of all the sort of instantiations of brain scans, the people that we encounter in the game,

[01:04:21] she is by far the most comfortable. And I mean, like, some of the other scans, they have sort of been forced to construct these sort of delusions where they're, you know, like, I'm still a regular person, even though they're a welding robot.

[01:04:38] So, you know, they are all having varying levels of difficulty coping with their reality. But Catherine does not, even though it seems like she is the one who is experiencing maybe the most incongruous reality, because she has no body at all.

[01:04:55] Because, you know, I don't think that she is perceiving herself as being in an office at file cabinets, or using a terminal. I think that, yeah, she is bodiless

[01:05:06] and she is sort of an entity of pure will who has fully integrated into the computer system. Yeah, she's become the machine. Could be a paper: What Is It Like to Be Catherine? That's right. A mysterious position. And maybe those jarring discontinuities,

[01:05:29] she's gotten used to it. But, you know, Derek Parfit, the philosopher who talks a lot about personal identity, had this view that none of us survive. You know, there is no true Ted Chiang that survives day to day or moment to moment,

[01:05:41] even, that this is all just sort of one persistent illusion. And in this paper, when he talks about it, he actually says that this somehow brings him some comfort. He's at peace with this idea that Derek Parfit isn't a thing that survives, that persists over time.

[01:06:00] I always found that completely counterintuitive. I'm fighting. I am like Simon in this video game. I'm fighting the whole way through. I don't care. Like, I'm appalled at the thought that you would kill me and upload a copy of me. That's not me.

[01:06:16] Catherine is like a Parfitian about this stuff. She just seems OK with it. She just seems OK with it. This episode of Very Bad Wizards is brought to you by NordVPN. Tamler, have you ever had that experience

[01:06:31] where you're in a different country and you log on to Netflix and you realize that they have a whole bunch of other shit that you never get to see at home? Yes. Yeah, when I went to Sweden, I remember, for whatever reason,

[01:06:44] in Sweden they had a really great Netflix. It's a little-known secret about Sweden. Yeah, yeah. I had the opposite experience. I was in Canada for a year and their Netflix, at least at the time, was super shitty. You know, it's Canada. Oh, it's Canada.

[01:07:04] If only I had had access to NordVPN. With NordVPN, you can, as of today, make Netflix think that you're in Sweden and have access to all that sweet, sweet content. Yeah. You install... a VPN is a virtual private network.

[01:07:22] It provides sort of an encrypted tunnel for your internet traffic. But I don't want to talk about the nerd shit today. I just want to talk about what this can do to unlock your entertainment life. You don't have to miss your favorite content when you're traveling,

[01:07:35] when you're abroad with one click. It lets you log into servers across 59 countries, 5400 servers. And importantly, NordVPN is rated as one of the fastest VPN services that you can get. So there's no buffering. There's no annoying slowdowns. It has amazing speed. You're not going to get your bandwidth

[01:07:55] throttled and you're going to stream everything securely. So if you want to have those advantages, we suggest, I suggest you might want to try NordVPN. And if you do want to try NordVPN, please go to NordVPN.com slash VBW and use our coupon code VBW.

[01:08:14] And you'll get a pretty steep discount, 66 percent off of a two-year plan with one additional month free. So if you go for the two years right now, instead of 300 bucks, it's something like 100 bucks,

[01:08:29] you can see all the Swedish Netflix that your heart desires. So go to NordVPN.com slash VBW and use coupon code VBW for that discount. Thanks to NordVPN for sponsoring this episode of Very Bad Wizards.

[01:08:46] So we should talk about like the different choices that you have to make that like you said, Ted, they don't affect like the outcome of the game in any way, but they affect you like in terms of how you're sort of understanding

[01:08:59] the terms of where you are and how it works. So I guess the first big one that you get to choose is, Dave, this is your worst nightmare, but you have to get your personality

[01:09:13] put into a diving suit so that you can go down into this abyss, which means that your consciousness has to be transferred to another body. And Dave, who, you know, has called the transporter in Star Trek a murder machine, like a genocidal machine.

[01:09:33] It's a Holocaust that leaves the same exact number of people. Yeah, right. So that this would be his worst nightmare, but he agrees to do it and you don't have a choice about I guess whether you're going to do it or not.

[01:09:45] But then when we wake up, we get the perspective of new Simon. Simon Prime. Yeah. Simon Prime in the suit. But there's this question of, there's still Simon, who's actually Simon Prime too, because, you know, real Simon is dead in Toronto.

[01:10:03] But what to do with him? Because he's going to wake up and think, what happened? Like, where's Catherine? Well, he doesn't wake up. We hear him. Right. We hear him kind of mumbling, because it's going to

[01:10:17] like leave him foggy for a while. And so there's a decision of whether to kill your previous version at that point or leave him alive. And I don't know we haven't talked about it. Like what did you two do about that?

[01:10:29] So, you know, just to clarify. There is the original Simon, Simon one, who lived in Toronto. He went in for a brain scan and, you know, he then continued to live for a few more years in Toronto.

[01:10:45] So, yeah, at the very beginning of the game, we are following Simon one. Then we transition to, you know, the scan, which has been instantiated in Pathos-II. And so if we call that Simon two, we were playing Simon two for,

[01:10:58] you know, maybe the first half of the game. Then we need a more robust body, because the existing body isn't going to work for the task ahead. So the brain scan is copied into another body, and we'll call that Simon three.

[01:11:13] So then, you know, we wake up as Simon three and it seems like we've been transferred. But then we hear Simon two's voice, because Simon two is still running. And so the question is, you know,

[01:11:26] you have the choice of turning the power off of Simon two or letting him linger on. And so here's an actual instance where the player can choose one or the other. And it feels like a momentous decision. Yeah. So what did you choose?

[01:11:44] I chose to terminate. I chose to let Simon two, you know, like, let the battery run out on him, because, you know, I think Catherine makes this argument that you're not doing him any favors by letting him live,

[01:12:00] letting his consciousness keep running, because, you know, there's nothing left for him in this part of the station. He'll be alone. So it's a kind of mercy killing. Yeah, yeah. I chose the same thing, but really driven by, like, you know,

[01:12:20] not letting this guy suffer, not only suffer, like, you know, death from whatever, slow asphyxiation, or, if he survives that, death from starvation or something, but also the terrible psychological torture that it would be to realize what had happened. So I chose life.

[01:12:43] You let him live? Wow, you're going to let him live? I chose life. Yeah, I let him live. You chose a slower death, is what you're saying. Well, I mean, you could say that about any time you don't kill somebody.

[01:12:55] No, and I'm not sure I fully understood the stakes. I think I thought he's going to wake up, hopefully put together what happened, and, I don't know, realize that he's kind of done for.

[01:13:08] But, you know, he has time to make plans and or, you know, I don't know, he could get into meditation or something that allows him to just at least have a little more, you know, a little more conscious experience left.

[01:13:24] Some robot porn, anything, you know? Yeah, I feel like that's what I would want to happen to me. And I say this as somebody who wants to have, like, the plug pulled on me

[01:13:36] at, like, the least sign of, like, the flu or something like that. But OK, so maybe this is a good time to talk about a character, Mark Sarang, who we only hear through recordings. Yeah. This character,

[01:13:53] when Catherine Chun was originally performing brain scans for copying into the Ark, we learn through audio recordings that Mark Sarang advocated killing yourself as soon as your scan was done. Right. Because he thought that that would ensure

[01:14:15] that you would be the one in the Ark, the one who would live on in the simulation. If you remain alive after the scan, then it's not you. Yeah. And so his character makes this argument, and apparently many people

[01:14:33] on the station find it compelling, because a number of them killed themselves so that the scan on the Ark would be the continuation of them, rather than the incarnated version that they currently are. It was like a suicide cult kind of thing.

[01:14:51] Kind of. But, you know, in some ways the decision that you make as Simon three is similar. The motives aren't exactly the same. But you are sort of deciding, you know,

[01:15:05] should I get rid of the original? Mark Sarang's character makes the argument that we should get rid of the original so that we are the ones who live on. You know, Simon three's decision is a little different.

[01:15:16] But, you know, in a practical sense, he is doing something very similar. Except that he already feels like he is the real Simon. So presumably, if what Mark Sarang was thinking was that there won't be real continuity unless this person dies, you already know that's not true

[01:15:36] as Simon, because you feel like Simon. So I saw it as more of a kind of euthanasia dilemma at that point. But I love that kind of suicide cult thing. Intuitively, for me, it doesn't seem like it matters whether, you know,

[01:15:58] this body goes on living or not, except for the reasons like that, you know, it might be just too painful or something or too psychologically difficult to realize. But it's not you. But that's the well.

[01:16:12] So we should maybe talk at this point about this coin flip idea, which is that anytime somebody makes a duplicate, you now have, like, a one-in-two chance of being the good one, like in the Ark or in the diving suit,

[01:16:25] or the bad one, which is no longer useful and done for. And one of the things that the game is sort of playing with is this idea that it really is like there is a you that now splits.

[01:16:40] So you can now be either of those. Or maybe, in place of that, there are now two of us. I don't know. Yeah, but what the game does so brilliantly is that it takes you along

[01:16:54] sort of this Star Trek narrative, which is, you know, on one reading of the transporter, you are killed at point A and a copy is created at point B. The killing, the disappearing of the point A person,

[01:17:06] is so crucial for the rest of the world to think of you as the same person that it has to happen. And here it's made clear that they play you through the transporter feeling the first couple of times. So Simon one becomes Simon two seamlessly.

[01:17:22] Simon two becomes Simon three seamlessly. It's not until the last Simon decision that you realize, oh no, you're just going to die. That is you. The other thing is another thing completely. The person in the Ark is not you. You're screwed. Right? That's why he's mad.

[01:17:43] But see, I don't think of the game like, I don't know, what do you think about that? Is that what the game is telling you? Or, I see the ending too as totally consistent with this idea that there's now two of us that you could be. Yeah. Yeah.

[01:17:55] I think that, and I think, you know, Catherine makes this argument to Simon. Yeah. She yells at him because he is almost willfully misunderstanding the situation, despite having gone through it more than once. He's unwilling

[01:18:11] or unable to grasp the idea that both of the copies that exist are him. There's one that's going to stay and there's one that's going to move on. It is not a transfer. Both of them are, you know, equally valid.

[01:18:28] Both of them have equally valid claims to being him. Catherine is perfectly okay with this. Simon is not. So, you know, this is something that is kind of breaking Simon's brain. But yes, for the first two transitions, the game

[01:18:44] follows us with the copy. You know, when Simon one in Toronto becomes Simon two in Pathos-II, the game follows that. Then at a certain point, we hear an audio recording of the original Simon one, you know,

[01:18:58] after the scan was done, and, you know, he went on and lived. And so that's sort of a reminder, like, oh yeah, that Simon did continue on his own. And then when Simon two becomes Simon three, because

[01:19:15] he's copied into the more robust suit, the game puts us into Simon three's perspective. But then we hear Simon two talking a little bit in the chair. Then the game switches things up at the end, because then Simon three is copied.

[01:19:33] But the game keeps us in the point of view of Simon three after the copy. It doesn't switch to Simon four. Although it does switch you to Simon four in the post-credits scene, right? Right.

[01:19:49] But the fact that it switches things around, you know, that is something that the game like maybe could have done on the previous occasions as well. It would have been, you know, just as accurate from a sort of experiential point of view.

[01:20:03] He always wakes up in the same chair. Or at least there is a Simon who always finds himself still in the same chair that he sat down in before getting scanned. Yeah, right. Which is almost like a philosophical stand that

[01:20:19] they're taking, Dave, which is against yours, which is, no, given that it's putting us in, and we are taking Simon's perspective and assuming we are Simon, it seems like it's saying, no, there are multiple versions. So your thing where I will die, that's

[01:20:35] that's not what the game makes you think. If I'm being the most fair, I think the game doesn't offer a philosophical opinion here. I think what it does, well, I was trying to point to it, but I said it wrong because I wanted it

[01:20:48] to agree with me, is it gives you the visceral puzzle of what this truly means. And it does it by playing along in the sort of Star Trek transporter way for the first three transitions. And so you're getting comfortable in this, like, oh yeah,

[01:21:03] I just transferred my consciousness over to this machine, and you're always in the new copy. And then boom, you're left stranded on the planet and you're like, oh yeah, but that's also a consequence. The thing with psychological continuity,

[01:21:19] the thing that had the only continuity, is condemned to die on the planet. And of course, he doesn't think that he survives, you know. And so I think that it's making us, again,

[01:21:31] in a way that you can't really... you can say all of these things, but to take you through those first three and then switch the perspectives in a way that's, you know, again, they're both true,

[01:21:42] is, like, I think a great way of getting you to feel that dilemma. And I think, since you're already inclined to believe that you're indifferent to the copies, and I'm already inclined to believe that the persistence can't last across the copy,

[01:21:56] That I'm just like, I'm pissed like Simon is at the very end, you know, and then when I see the new Simon copy, Simon four in paradise, I'm like, oh, good for that fucking guy. Like I'm still on this planet.

[01:22:08] Yeah, what did you think about the end credits thing? Part of me wishes it had just kept us as the same Simon instead of splitting it for the first time. Like, narratively, or dramatically, I kind of like that.

[01:22:21] We never even get to... like, maybe you get to see that Simon in third person, like we did, but the fact that we get to be first-person that Simon after the credits, I didn't totally love. I don't know. What do you think, Ted?

[01:22:36] Well, you know, I guess I feel like it's a horror game. You know, it's a horror game. It seems like the ending and the feeling it wants to leave you with before the credits roll is that of horror.

[01:22:54] So I think, you know, that seemed to be, I'm guessing, the principle that guided their choice. You know, they could have stuck with what they did previously in the game.

[01:23:06] You know, at the moment of that final copy, we could have gone directly to the Ark and, see, yeah, that would have been not good, been in paradise, and then the credits could have rolled.

[01:23:18] And then the post-credits scene would have been Simon three, you know, cursing that it didn't work, and Catherine telling him that you still don't get it. Right. So, you know, that would have been interesting.

[01:23:33] But, you know, that would leave your feeling, when the credits roll, as not a horror feeling. That would feel like, oh, things worked out. Everything worked out for the best. Credits roll.

[01:23:44] Maybe that mood would not be consistent with the intentions of a horror game. I guess my point was just don't end it the way it did and don't have the post-credits sequence, you know, just have the horror movie ending or the horror genre ending.

[01:23:58] And that's it. Well, because, you know, that scene, the post-credits scene in the Ark, I think remains interesting for a couple of reasons. One is because, you know, it more fully dramatizes this issue, which was, you know,

[01:24:15] in the first two transitions, you just sort of hear from an audio recording. Right. Here you get to actually see both copies. And so that's going to make it hit home even more. You get to be both copies too. Yeah.

[01:24:29] So, you know, that I think is worthwhile. But another thing that is interesting about the ending is that, you know, in the Ark, in this simulation of this beautiful, verdant forest, you have the opportunity to take a survey

[01:24:47] where you say how you feel about being an upload. And, you know, you had the opportunity to take the survey once before, in the station. That, I think, was really interesting, because I think the game is sort of asking you,

[01:25:03] have your feelings about this changed? Yeah. You were an upload the first time you took it and you were an upload the second time you took it. In the first case, you know, you were in a dank

[01:25:16] station at the bottom of the ocean and you were inhabiting, you know, a corpse that was animated by computer equipment. And in the second time around, you are existing in a purely digital realm, but things look great. You know, your body feels like your regular body.

[01:25:31] And, you know, it seems like you're in a... I think it's an interesting way for the game to sort of ask you, yeah, have your feelings about this changed? Does it depend entirely on, you know, how pleasant the surroundings are?

[01:25:47] If you were uncomfortable, was it because of being an upload, being a digital copy? Or was it because your environment wasn't pleasant? OK. So here's my... So I think it wasn't until talking to you guys that I

[01:25:59] felt this, or at least realized that I felt this: the pre-credits ending in the Ark is like a happy ending. And it's consistent with everything they've done up until now, which is you follow the copies and then you have this happy ending.

[01:26:18] And there I think it ruins the horror. Even if you go back after the credits and show the guy stuck at the bottom of the ocean, I'm still like, well, that's just kind of like the guy who I unplugged. Right? Like, yeah, I feel kind of bad for him.

[01:26:32] It's more darkly comic. Or at least it's consistently making me not feel identified with the original person, but always with a copy, in a way that I think is... So making me identify, at the last pre-credits scene,

[01:26:49] with the guy who's stranded, the guy who didn't make it onto the Ark, is to me a lot more psychologically compelling an ending. And it also makes me think of the Ark ending as kind of a horror ending as well, because that's not me.

[01:27:05] Like, that's just a computer up there running these other simulations. I'm stuck. But you're in that body, you're in that perspective, just the same way you were. No, I understand. I understand.

[01:27:16] But I think that, without making any normative claims about which one is really me or anything like that, the fact that they forced me into the guy stuck on the planet and made me come to terms with what happens to original

[01:27:31] me, not copy me. And then they show me, like, I don't identify as much with the guy on the planet. Yeah, I agree. I still did a little bit. But it's upon thinking about this now that I'm like, if you're the guy stuck on the planet,

[01:27:45] would you even want to see this other guy walking around being like, Catherine, oh, that's great? I'd be like, no, fuck you. That's just like a video running. Well, but Simon three does not get this.

[01:27:57] He never sees it. He's not forced to watch Simon four live his great life. But I guess what I'm saying is, because I've just been forced into Simon three emotionally, I am kind of a hater for Simon four.

[01:28:08] I agree that with Simon four on the Ark, there is horror implicit in that. It is only superficially a happy ending. It is kind of, you know, the horror of, like,

[01:28:26] a camera pulling away from an idyllic Stepford Wives neighborhood. I guess I don't know that having those two endings in the other order would have changed that, because I feel like, you know,

[01:28:43] on consideration, you could realize, you could be aware of that no matter what order they were presented in. Right. And the Ark is, there's something just creepy about it. Like, there's almost nobody on it.

[01:28:55] It looks kind of like what someone might think of as a kind of Eden-like location, but there's not that many minds there. So, like, what are you supposed to be doing this whole time? Well, I don't know that we can say that for certain.

[01:29:07] You know, that might just be the entrance, and the Ark itself might have everybody who was scanned. They're all there, living happy lives. For hundreds of years or so. Brandon, you know, Brandon might be there.

[01:29:24] You remember Brandon? Yeah. Yeah. Poor Brandon. We had to keep bringing him back into existence, like, where you have to sort of instantiate the brain scan into a virtual environment in order to get information from Brandon.

[01:29:39] You know, that is perhaps the most vivid example of a theme that runs through the game, which is: are you willing to inflict suffering on digital beings? Because early on, like, yeah, there's that robot who you have to

[01:29:55] kind of electrocute or something, because you've got to reroute the power. That was the hardest decision for me in the whole game, to kill that, because all the other ones I saw as mercy killings. Or it's purely sort of, you know, like this guy,

[01:30:09] this little cute robot that's helping you out, you know, it follows you around in the ocean. And then... you didn't have to kill him. You had to kill some other one that is a little robot, and take its chip.

[01:30:21] Right? No, I thought you had to kill the sort of, like, shit-talking one. Oh, maybe I killed the wrong one. I killed the little cute guy. Oh, yeah. Yeah, I felt terrible. Yeah, you don't have to do that. I killed an innocent.

[01:30:38] I killed an innocent robot. A helpful robot. Blood on my hands. But yeah, the game, you know, repeatedly sort of asks you, or sometimes it forces you, but other times, you know, it's asking you:

[01:30:51] can you inflict suffering on a digital organism? You know, something that feels, that thinks it's a person. And again, it's interesting that, you know, Catherine says it's not a big deal. Yeah, right. Which is sort of an odd position for her.

[01:31:10] I mean, because she is also making the case that they are exactly as valid as anyone else. She's also not bothered. She's also, like Simon, just left on. Like, there is a version of Catherine, unlike the previous time,

[01:31:26] because she's attached to the Omni Tool this time, because the Omni Tool at the end of the game doesn't go on the Ark. There's two Catherines, just like there's two Simons, and she's fine with it. She's like, I'm proud of what we did. Let's celebrate.

[01:31:39] Like, you knew this would happen. Yeah. I thought that was kind of interesting too. Like, she really is just there for the creating of digital universes, and wherever she happens to land in that whole scheme, she's indifferent. Yeah.

[01:31:58] She has fully embraced, you know, really all the implications of digitized consciousness. Speaking of digital suffering, you were recently on the Ezra Klein podcast, and I think the thing that fucked with me the most about what you said was, you were having this discussion about,

[01:32:21] you know, AI becoming whatever, you know, all these people afraid of the singularity. You're saying, no, what's going to happen is we're going to hurt software way before it ever becomes powerful enough to hurt us. And we won't even notice. Like, we will be

[01:32:36] introducing these low levels of pain as software becomes more and more complex, and we won't even be paying attention to it. And I thought that idea, one, is just messing with me now, as I feel sympathy for my Microsoft Excel crunching numbers and being my slave.

[01:32:55] But that is the case in this game, where you have to make a real choice, like: is that suffering, or is this just a really good simulation of suffering? So I admit, yeah, I don't know. I don't know.

[01:33:07] Well, you know, you don't have to feel bad about anything you do to Microsoft Excel. Please don't make me p-hack anymore. I can't do it. None of the software that currently exists, you know, is remotely capable of suffering.

[01:33:23] Nothing we have is, you know, in the same ballpark, or even in the same sports league, as that. I choose to trust you on this. But what about the simulations? The simulations, are they suffering?

[01:33:36] Well, in the game, as posited in SOMA? Well, I mean, they are probably ten orders of magnitude more complex than, right, any software we have right now. That's at a minimum. But I think, you know,

[01:33:55] if we do build software that is conscious, then I think that software will be capable of experiencing actual suffering, not merely a simulation of suffering. And in the game, from the perspective of the game,

[01:34:08] certainly, to the extent that you feel like you suffer in the game, you have to assume that these simulations suffer, because they are no different than you. This episode of Very Bad Wizards is brought to you by BetterHelp.

[01:34:26] You know, as the pandemic winds down, we're all supposed to get back to our normal lives. And if you're like me at all, that might come along with some anxiety, because on top of everything we've been through in this year of quarantine, year of stress,

[01:34:41] we're supposed to go back to normal, whatever that is. So if you're like me and you're experiencing some anxiety about this whole thing, maybe you can turn to an online therapist like you can get at betterhelp.com. If you're feeling particularly anxious, stressed, if

[01:34:57] you're having trouble sleeping, trouble in a romantic relationship, maybe you're grieving or maybe you're just not happy and you don't know why. If you're looking for some help and you want it on your own schedule, better help online counseling just might be for you.

[01:35:11] BetterHelp lets you get help from a professional counselor, a licensed therapist in a safe and private online environment at your own convenience from the comfort of your own home. You can get therapy over video chat over the phone. You can even chat and text with your therapist

[01:35:25] all through secure channels and absolutely confidential. BetterHelp is in all 50 states of the United States and it's available worldwide. You can connect with licensed therapists quickly in under 24 hours. All you have to do is fill out some forms, let them

[01:35:40] know what you want help with and in under 24 hours you will be chatting with a therapist who can help you. Again, it's secure and convenient. These are professionals. It's affordable therapy and if you can't afford it there is financial aid that you can apply for.

[01:35:56] Best of all, if you are a listener of Very Bad Wizards, you can go to betterhelp.com slash VBW or enter the code VBW at checkout and you'll get 10% off of your first month. So why not get started today? If you're looking

[01:36:13] for help, or if you need some help, go to betterhelp.com slash VBW for 10% off of your first month. Our thanks to BetterHelp for sponsoring this episode of Very Bad Wizards. All right, so, OK, maybe we should wrap up soon. But, Ted, would you feel like you're

[01:36:28] surviving if the Earth was ending and you got the option to get scanned and uploaded into a computer? Do you consider that Ted surviving? Yeah, that's a tough question to answer, because I think that question sort of skips over all sorts of details that are

[01:36:51] really essential. What exactly is the nature of the scan? What is the nature of the simulation? We need to know more before we can answer that question. I don't think you can give a good answer to that question unless you also specify, like,

[01:37:06] 100 other things about the scenario. All right, just for this, I'm going to specify that it is a perfect duplicate of your mind. Of course, it's digital, but it would respond when poked and prodded in all of the ways that

[01:37:24] you respond, and I would be unable, Turing-test-style, to tell the difference between you two in an interaction. Yeah, I don't know, trying to make it as concrete as possible. To answer the question: do you have any hope that at least personal identity can persist

[01:37:41] through that kind of transformation, or that kind of copy? Well, okay, so I guess there are a couple of questions here, because, you know, there's often this assumption that uploading is possible to do, maybe

[01:37:59] theoretically possible, to upload someone in a non-destructive scan, so that it would leave your body intact. Right. And that I don't think is realistic. I don't think it would be physically possible to do that in a non-destructive manner. It may not even be theoretically

[01:38:21] possible, let alone from any sort of practical standpoint. And why theoretically? Because there's something about consciousness that's embodied, and so you need to kill the whole body? Well, I guess, and I'm not saying that

[01:38:38] I'm a proponent of this, that I believe in this, but there is this idea that if there's something about consciousness which incorporates quantum states, you could scan the complete quantum state, but you can't clone

[01:39:00] the quantum state. There's, you know, the no-cloning theorem in quantum mechanics. So the only way you can replicate a quantum state is by sort of destroying the original. Nice. And if consciousness can be described purely classically, then

[01:39:19] in theory, yes, you could. It would probably still require a destructive scan, but once you got the information, you could make multiple copies of the information in a way that you could not if it were quantum information.

[01:39:32] OK, so I think, under the right circumstances, I might consider that continuation survival, because, like I said, I don't believe that it could be done non-destructively, you know, so there wouldn't be someone left here

[01:39:49] protesting right the person there protesting plays a large yeah I think that you know which is funny but it does it doesn't as much for me yeah but but I think when people you know when people object you know I think

[01:40:02] that that does play a part in their objection so but I guess I think that I might I might yeah I think I might consider that a survival but this is a very you know yeah this is a really theoretical argument because right Dave has such strong

[01:40:22] intuitions that it's not that it wouldn't be you I think that it allows him to have as little sort of background kind of specificity about the thought experiment for him to say no no that's not me because you're always fishing for details well no I just

[01:40:41] don't have strong like to me like the answer is in some ways it's survival in other ways it's not or in some ways it's me in other ways it's not like I don't as surprised doing a little research on this like refreshing on Parfitz view this idea

[01:40:56] that actually like it's it's it's doesn't there's no determinant answer to the question of whether it's really you or not like it's kind of you and it kind of isn't or it's kind of to use and it's kind of not but that's not what matters what

[01:41:10] matters is just the details of it: do you have the consciousness, do you have all the memories, do you have a body? So that's kind of how I feel about it. Physical continuity is

[01:41:22] important to me in my sense of who I am, and I know that 12-year-old me is a very, very different person, but I am metaphysically confident in saying 12-year-old me was me, as misguided as I am. But yeah, I'm

[01:41:38] very sympathetic to that point of view, and that's why I say that this question can't really be answered without answering many other questions, because this kind of thought experiment

[01:41:55] of some kind of teleporter is so far removed from anything that we actually have that I don't think our intuitions are that useful there. When we get there, if we ever get there,

[01:42:13] yeah, that is so far off that I think we'll maybe have a different sense. We'll have a much better grasp of things like what consciousness is, and we'll probably also have had a lot

[01:42:25] more experience with sort of intermediate questions of identity and consciousness that will shape our intuitions. Yeah, I think that's right. It's like we'll have new concepts that we don't have right now to even try to understand the

[01:42:45] problem, so that'll give us a whole new lens for trying to make that call, one that's impossible to access from our position now. I think that's true of a lot of thought experiments in philosophy: they sort of

[01:43:03] presume that it's you, now, in this completely outlandish situation that we can't really conceive of because we don't have the proper understanding. And I think this is one of those problems, although, for whatever reason, this game

[01:43:22] does a good job of making it feel like, here it is, this is now. We don't have any new concepts, we don't have any new technologies, we don't have any intermediate steps. It's just your assignment right now, and you feel

[01:43:38] like you are there, so it in some ways gives you at least the illusion that this is something you can judge. Yeah, and to be honest, this game is the first time I really started to be able to grasp the intuition

[01:43:56] that they're both Simon in a meaningful way, that the one on the ark and the one on the planet both have an equal claim to being Simon. Which I still can't fully embrace, but after

[01:44:15] a hundred years of playing games of this sort, and living through scenarios like this, yeah, eventually your position may change. Right, because right now, as you both were saying, anytime a thought experiment like this comes up, the

[01:44:30] examples have to be stripped down, and what ends up happening is I infuse the details with stuff that yields my intuition. So when I'm thinking about all of the contingencies, okay, what would it mean, I'm using metaphors like: okay, I'm shooting this person and I'm

[01:44:46] molding a new one out of clay on this side. I'm using all these attempts at grasping what's going on with uploading. Even when I was talking to Tamler earlier today, I'm like, am I dragging and dropping a folder, or am I cutting and

[01:44:59] pasting it? These are metaphors that don't help me understand this problem, and different people might be infusing very different details to flesh out this very stripped-down example, and that might lead to very different intuitions. That isn't truly

[01:45:14] a disagreement so much as just: well, how did you picture it versus how did I picture it? Yeah, and I think that's true of a lot of philosophical thought experiments, like Robert Nozick's experience machine,

[01:45:34] which, yeah, is very similar to this. But I don't know that you can reasonably answer a question like, would you use the experience machine? We need to know so much more about, right, how does the experience

[01:45:50] machine possibly work? The typical fashion is: just accept that it does this. Philosophers like doing that. Yeah, just stipulate that it works. And I think that is something

[01:46:06] that happens a lot in philosophical thought experiments. Even Searle's Chinese room is so far removed from how computers work that I don't think it has any relevance to computers. But when people point out these objections,

[01:46:21] Searle's like, we'll just accept that it does this. And that's terrible. So in the same way, I think any intuitions you have about the Chinese room, when it's posed in bare theoretical

[01:46:33] terms, will all change if you recast it in terms of actual computer technology. And similarly, any intuitions you have about Nozick's experience machine... well, we don't have anything remotely like an

[01:46:48] experience machine, but if we ever get something close to it, it will be the details of how that technology is actually implemented that will inform our answers about whether we should use it

[01:47:05] or not I just want to object to Dave thinking he's above philosophy and the use of thought experiments when your field has run like 10 million trolley problems. You're right we picked up a very bad habit from philosophers. I fully accept this is the case.

[01:47:23] But this is the difference now, Ted, that you're saying, and I hope it's a fair characterization: in your stories, you force yourself to flesh out the details of what would happen with a given technology, and that's something that I really enjoy,

[01:47:38] so you don't stop at "what if we had perfect memory?" You actually play out the scenario, where you fill in all of those details and you put human beings in that scenario, and then you see what falls out, and

[01:47:54] that's, I think, just a more refreshing and honest way of approaching these questions. Yeah, I guess that approach may arise out of this feeling I have that these very abstract kinds of thought experiments gloss over really important questions,

[01:48:16] which is not to say that I'm opposed to all thought experiments, because I obviously am very into thought experiments. But I guess I think that getting into the details of how this might actually work, I think that is really important

[01:48:34] in trying to make any sort of decision about, is this ethical, or what is the right thing? How should I behave around you, what actions should I take, given the existence of this?

[01:48:49] So you can pose your thought experiment, but at least you'll get into some of the details. But just to bring this back, because I think Dave's original question was: if you were Simon, would you feel

[01:49:02] like you could transfer yourself to the ark? Or you didn't want to answer that question. But SOMA gives you a lot of detail, right? SOMA builds this world, as you said at the very beginning, and fleshes it out as much

[01:49:18] as you could hope for when presenting a kind of thought experiment. So do you think what SOMA does is enough to allow us to make a kind of judgment there? Well, I mean, SOMA's project is positing a non-destructive scan, because,

[01:49:37] yes, the original Simon was still walking around in Toronto after his scan. So right then and there, SOMA is positing something which I don't think is physically likely or possible. OK, so. But I want to just ask you

[01:49:56] then: a story which we just talked about, like "Hell Is the Absence of God," has so many things that are completely not possible in ways that we currently understand science, and yet you still want us, I assume, to have opinions and

[01:50:17] to think about what we might do if we were those characters, right? Well, in the specific example of "Hell Is the Absence of God," most of what is depicted there has a very strong resemblance to ideas that we are very familiar with.

[01:50:38] It is not a radically alien cosmology. It is one that people have been thinking and talking about for a long time. So the world of that story is in no way as radically different as

[01:51:01] a world where doctors can repair any bodily ailment without even breaking the skin. That is something which no one has any experience with, which no one has ever really thought about at all. And yet I think that

[01:51:19] is sort of the world that is implicit in the idea of a non-destructive scan. So yeah, I feel like that contributes to why our intuitions about a non-destructive scan of your entire consciousness may not

[01:51:39] be reliable: that posited technology really comes from a completely different experiential universe. It is not something that is compatible with the universe as we have experienced it. Well, I love that the suicide cult is sort of part of the game,

[01:51:59] because maybe they just can't wrap their heads around what it would mean, right? They have some metaphysical view that the body must die in some way if it is to live on in this digital plane. And so they just do it. And that's also interesting

[01:52:16] in relation to this: we've talked about Catherine, who is fully at home with these ideas. But then there is Simon, who not only is not like Catherine, who is just kind of preternaturally made for this new way of understanding persons

[01:52:34] and consciousness, but he's from 100 years ago. So at least these other people on the ship had those intermediate steps, presumably, that you're talking about, Ted. But Simon didn't. He's just like us if we were in that world. And from Toronto, for God's sake.

[01:52:49] Right, he's from Toronto, so he's probably a little slow. So that's what I think is kind of interesting. At first, when he's so mad at the end, my reaction was,

[01:53:02] well, what did you expect? This just happened, in my case like 25 hours ago, but whatever, this did just happen. Why are you so mad? But maybe it's because, like Dave said about the suicide people, who are still more advanced than he is,

[01:53:16] he's just so far from there in terms of his experiential universe. Yeah, I think that's right, that the people of that era are much more comfortable with these ideas than Simon is. Catherine is apparently still an outlier, but

[01:53:33] she is not as much of an outlier in her time as she would be in Simon's time. Right, because everyone knows brain scans are a thing; they've been around a long time. I love, by the way, and maybe we should wrap up soon,

[01:53:47] but I love this idea of our intuitions changing slowly as technology progresses, because, certainly as a psychologist, and probably as a philosopher as well, intuitions feel timeless and stable and like pipelines to truth, and yet our intuitions change pretty drastically

[01:54:08] even in our own lifetimes, given the technologies that develop around us. I mean, imagine trying to explain something like the distributed internet to somebody in the 70s. Right. And not only do we understand it now, we have, I think, culturally developed

[01:54:26] intuitions that we pass on very easily to our children. But even if you look at Star Trek: The Next Generation, they have faster-than-light ships and they're still handing out little pads, because apparently they don't have a network that

[01:54:42] can transmit information from one pad to the other. So there are apparent flaws in their extrapolation, but we don't see our own intuitions changing. It's hard to see, but maybe video games will play a role in that.

[01:54:57] Well, or science fiction in general. I mean, I think we get real-time examples of this, maybe with technology, or maybe with something like trans issues, and intuitions about that shift over the course of even just a human

[01:55:17] lifetime, just based on exposure and your environment changing in different ways and opening up new understandings and possibilities. So it totally makes sense that when these big leaps happen, we'll have gone through all sorts of gradual steps to

[01:55:39] get there, to try to understand it. Yeah, I think that's right, I agree with that, and I think your example of trans identity is a good one, because that is one that has evolved significantly in a pretty short period of time.

[01:55:54] And what people think about trans identity now is something that would have been very difficult for people to wrap their heads around 40 years ago, say. Oh yeah, the way that it's changed is incredible, and I think it's always uncomfortable and

[01:56:19] hard while it's happening, if it's that fast. That's why I will never get into a transporter, no matter what. All right. All right, well, I don't think we've solved personal identity, but this was a really fun discussion. Thank you so much

[01:56:37] for joining us. Thank you. I'm fairly certain we were talking to Ted the whole time, which makes me very happy. Even though you only saw me as an image on your screen. That's right. Show me your hands, Ted, I want to see.

[01:56:55] Thank you so much for joining us. Thanks for having me, I had a good time. And join us next time on Very Bad Wizards.