Natural Hazard

It wouldn't be the road to hell if it wasn't paved in good intentions

Interview with Michael Vassar and Cade Metz

(recorded 8/5/2020)

(transcript published 12/31/2021)

Context

Back in mid-2020, Cade Metz of the New York Times was doing research for his article about Scott Alexander, the blogger who wrote Slate Star Codex, which has since become Astral Codex Ten. As part of Metz's research he interviewed Michael Vassar. Michael told me about this interview, and after listening to it I decided it was worth transcribing and posting online.

Why?

The first 20 minutes covers some history of different rationalist organizations, Vassar's involvement, and how various people met each other. All of that holds some interest for me, but it's not the main event. Everything after that portion is what really interests me.

The second half of this conversation (starting here) is an exceptionally clear illustration of what I've recently been calling "interpretive obstinance". I won't define the term now, but I can give an additional pointer to the phenomenon via Agnes Callard's piece, I Don't Want You to 'Believe' Me, I Want You to Listen. Callard's article describes the dynamic more explicitly, but it abstracts away from the gritty details of specific conversations. I love this Metz/Vassar interview because you can watch Vassar's and Metz's interpretive frames collide in real time in a very striking way, as Metz repeatedly and with great tenacity persists in not understanding that Michael is trying to have an interaction that is outside of the standard interviewing script.

I've annotated and highlighted the parts that jump out to me the most.

Usability Notes

You can click on the speaker names to link to any part of the transcript, and click on the timestamps to begin playing the audio from any point. I highly recommend listening to the audio immediately surrounding the parts that interest you. The pauses, sighs, and stammers communicate a lot!

Transcript

Cade Metz
0:00

Okay. So yeah, thanks for doing this. It sounds trite or heard, it's hard not to in your world, but you know that I'm writing a piece that basically looks at Slate Star Codex. And, and, you know, in a lot of ways the, the rationality community. So I'd love to talk about all that. I mean, but, but first, I mean, the question I just always ask anyone I interview is just for them to tell me more about themselves. I'd love to hear, because you and I have never met, you know, more about you and your background and, and eventually how you got involved with this community?

Michael Vassar
0:43

All right, that's fine with me. May I, by the way, have a copy of the recording as well?

Cade Metz
0:47

Sure.

Michael Vassar
0:49

Okay, I would like to have that. Thank you.

Cade Metz
0:51

You're welcome. I'll send it to you.

Michael Vassar
0:54

Great so I got involved with, I mean, I really built the community in that prior to 2007, the Singularity Institute was this guy, Eliezer Yudkowsky, and some people involved in transhumanism. And I had been tracking him and talking with him on and off for about eight years. And I'd been talking to some of the other people like Robin Hanson, and Nick Bostrom. And the conversation that I had with people largely at Columbia University and Harvard while my wife was going to college, led to funding coming in for a follow up to the 2006 Singularity Summit. And that was, you know, quite cool and successful, exciting, I met a lot of people who I hadn't met, and a lot of people met each other for the first time. And in 2007, at the Palace of Fine Arts. And then in 2008, the guy who was running the Singularity Summit, tried to have another follow up conference. And it was kind of a disaster. And the Singularity Institute was in danger of going out of business. So they asked myself, and two of my friends who had been involved in putting together the 2007 funding, to, if any of us could step up and become the new executive director, and one of my friends was in law school, and the other was at an investment bank. So I was the only person whose position in a music rights management startup was relatively tenuous, not a giant loss to my sort of future earnings. So I came in and ran it. And that lasted until 2012. And the rationality community came out of that. And at the end of my tenure there, Singularity Institute basically split into the Machine Intelligence Research Institute, and Center for Applied Rationality and Leverage Research. And we sold the Singularity Summit and brand name to Singularity University, and started actual AI and logic research. And that has been an ongoing research program for the subsequent time.

Cade Metz
3:19

Got it. It does seem, you know, I'm, I'm also coming to the end of a, of a book, basically, it's a narrative, you know, about a nonfiction narrative of, you know, about sort of the last 10 years and what we'll call AI and, and, you know, part of that is your moment at the Singularity Summit in 2010. When Demis Hassabis and Shane Legg presented and they ended up meeting Peter Thiel at at a speaker's party at his his apartment near the Palace of Fine Arts. Were you involved in, you were in 2010? Were you at that party? Do you remember that?

Michael Vassar
4:10

Yes, yes. Yeah, of course.

Cade Metz
4:17

And do you? I mean, do you know Demis and Shane, or did you have you involved with him after that?

Michael Vassar
4:22

Yes, I've known them from long before that. Yep.

Cade Metz
4:25

Got it. Because both of them are are just Shane, Shane sort of goes way back with Yudkowsky.

Michael Vassar
4:34

So I'd say I knew Shane more prior to that. And I know Demis more now. I haven't really seen Shane much since not too long after that. I, Demis, I think, did more of the public facing for the organization.

Cade Metz
4:50

Yes. So when you talk about the rationality community sort of emerging from, you know, the Singularity Institute and the Singularity Summit? What do you mean there? What do you mean specifically?

Michael Vassar
5:08

I mean, Anna Salamon and I put together some summer programs for people who were culturally associated with the Singularity Institute, who'd often into the 2007 Singularity Summit that we put together the funding for and we put together summer programs in 2008. And in 2009, and then in 2010, I moved out there, and we worked there, like, full time for throughout 2010 and 11. And, you know, built, you know, brought together a bunch of people to fill out some, like apartment blocks and have these sorts of discussions, are thinking explore the ideas that we thought were important.

Cade Metz
6:00

And what were those ideas? And what was that work in particular?

Michael Vassar
6:07

In particular, there, okay, there was to a much greater degree back then than there is now, something like a standard neo-liberal media narrative, which people who were on the naive side of things were likely to regard as literally true. So like, there was a strong push by people like the New York Times and people like Steven Pinker, to promote the idea that America was more or less a successful information processing system that processed information by using free market, and therefore allowed an ever growing production of new things. But that the new things that were produced, were not subject to being planned out in advance. And that we were moving towards understanding intelligence better, and building new machines plus biological possibility that would change the underlying intelligence that generated this progress. Does this make sense to you?

Cade Metz
7:20

Yes.

Michael Vassar
7:22

All right. So how long have you been working on the book?

Cade Metz
7:25

Um, years now, it's basically a three or four year project, it's in, it's been copy edited, as we speak, I think I get the final version on Monday.

Michael Vassar
7:39

Wow, so what got you started on that?

Cade Metz
7:41

Well, I mean, this is it, you know, has become, you know, my, one of my if not my primary beat, I was at Wired, Wired Magazine for a while. And now at the Times, this sort of rise of, you know, essentially, you know, deep learning over the past 10 years is a big part of my beat. And I think they do, there's a real interesting narrative story, like I said, to tell there about the people involved and about how how this happened. And you know, I think, I think you're, you're right, I think, you know, what you and Yudkowsky too were doing plays, plays a role in this.

Michael Vassar
8:29

So, I mean, in terms of there, being a community Yudkowsky was, you know, hiring me to run his organization. And Anna was the other primary person who like, brought people in, but like Yudkowsky had been doing his thing for a very long time, he provided the intellectual structure that led to the whole thing.

Cade Metz
8:49

Yeah,

Michael Vassar
8:50

but he didn't like, answer his phone or respond to his emails reliably enough to like, create an ongoing conversation,

Cade Metz
8:58

right

Michael Vassar
8:59

You know, he was a very high quality analytic philosopher, mostly calling out the places where the narrative of our society was basically saying, "Don't think too hard about these issues", and saying, "Wait, no, we should think about things that will kill us if we don't think about them. What?!"

Cade Metz
9:24

right. No, I'm, I'm with you. And, you know, I think, you know, that is, I mean, the point's well taken, and one of the things I want to do is show people, you know, how that happened. And, you know, part of it is, you know, I want to show how those ideas sort of moved into the mainstream through particular events or people so, like I said, you know, one, you know, one event where you can sort of show this is that you know, Yudkowsky introduced Demis and Shane to Peter Thiel. Peter Thiel invested in their company. He also invested in the Singularity Institute and the Singularity Summit. You know, how else do you feel like in those concrete ways this moved into the mainstream?

Michael Vassar
10:22

So, it was a critical transition, where we Jaan, where Demis and Jaan Tallinn, spoke with Elon Musk, about artificial intelligence, potentially destroying humanity, even if we escape into space. And that led to a commitment that he didn't actually deliver on immediately or ever, above a billion dollars, I think, by Elon Musk to Open AI. And which was initially a nonprofit and not a for-profit. And for that, Elon... so Bill Gates had been talking about this actually since the 90s. He is famous, he was the richest man in the world. He was writing on the back of Ray Kurzweil's book that these are like super important issues, and Kurzweil is the best thinking about the future. But Gates didn't have the sense that it was possible to do something about it. Even though he expressed a lot of qualms about Kurzweil's vision, being potentially Pollyanna-istic, even though Kurzweil's vision expressed that more likely than not, humanity destroys itself in the 21st century.

Cade Metz
11:38

Right. Right.

Michael Vassar
11:39

So like Kurzweil always did an amazing job at saying "we're all going to die" and being accused of being a Pollyanna by everyone. It was a, it's fascinating. So Bill Joy didn't think he was a Pollyanna, was like freaked the fuck out. But basically, people heard Kurzweil and said he's being too optimistic, but he wasn't. He was just like, writing in a way that was designed to vibe with optimism, rather than being designed to vibe with pessimism, which in retrospect, I think was a very good idea.

Cade Metz
12:14

And, you know, I've talked to Jaan Tallinn. And I've talked to Demis and Shane. I mean, so I mean, what was your understanding of that, that meeting with Musk and do you have first hand knowledge?

Michael Vassar
12:26

I mean, I was at the Puerto Rico meeting, where the results of that meeting were unveiled, but I wasn't at the meeting with Jaan and Demis and Musk. I mean, I was at a later meeting where they unveiled it, but not at the meeting. But yeah.

Cade Metz
12:42

And I don't know, what did you What did you think of Puerto Rico?

Michael Vassar
12:47

So when I was there, it was fairly immediately clear to me that the proposal for Open AI was not an intelligible response to the ideas and issues that we've been talking about for all these years, that this was a co-option or capture, a term that people have been using with Microsoft is "embrace, expand, extinguish". But where the ideas that we had been promoting, were being replaced with egotistical narratives around competing people, when the ideas were very specifically ideas about not competing, not creating an arms race, so like, something Nick Bostrom had ultimately originated this whole line of inquiry, and from the very beginning, had been calling for reflection and collaboration, and to some degree secrecy, and instead, we were getting the exact opposite of what Bostrom had been calling for, for 20 years, in the name of the narrative that he'd been calling for. So this like led to my, completed, I might say, a transition in my thinking, away from the initial attitudes towards the media, towards media narratives, that had led me to build Singularity Institute up from the point of failure six years earlier.

Cade Metz
14:30

And what was your understanding of how that meeting between Demis in Shane and Elon Musk happened and where it happened?

Michael Vassar
14:40

So, I don't know when the events or where the original meeting happened. You probably know more about that than I do. But certainly, Jaan and Demis came together through my bringing them and other people together in the context of Singularity Summit VIP meetings.

Cade Metz
14:56

Yeah.

Michael Vassar
15:00

And Peter of course knew Musk really well, so it was pretty inevitable that he would introduce Musk to Demis.

Cade Metz
15:09

Got it. Got it. Meaning Peter Thiel, meaning Peter Thiel.

Michael Vassar
15:15

Yes.

Cade Metz
15:16

Got it.

Michael Vassar
15:16

But I think I think it's really important to focus on the question you asked, you asked about what I came away from the meeting thinking. So like, you've seen GPT-3 now, right?

Cade Metz
15:28

Yes.

Michael Vassar
15:30

Have you seen the conversation between GPT-3 and Spencer Greenberg about Elon Musk?

Link: it's a short read and well worth it.

Cade Metz
15:35

*laughs* Yes.

Michael Vassar
15:37

You have. Okay, great. Do you know Spencer, by the way?

Cade Metz
15:40

I don't.

Michael Vassar
15:42

Okay, he's a pretty cool guy. I was talking to him extensively, yesterday, we did a podcast. So Spencer, okay. So, in that conversation between Spencer and Elon Musk, GPT-3, which is doing a sentence completion and like tension minimization process, interrupts the conversation and starts expressing that Open AI is not a reasonable response to the problems that Elon Musk expresses concern about. So you remember that right?

Cade Metz
16:21

Yeah.

Michael Vassar
16:22

So like, it is very interesting. It's gotten to the point where we can have the AIs themselves express that self-evidently, the humans are doing something perverse and dishonest, something fundamentally different from what they claimed to be doing.

Cade Metz
16:40

Got it. Got it. And, you know, what has been, you know, you're involved with Peter Thiel over the years. So he's helping to fund these things, right, meaning the Singularity Institute, Singularity Summit. And then he eventually helped fund MetaMed, correct?

Not the main point, but Cade is lamentably uninterested in the content here. A billionaire coming in and funding the opposite of your mission in the name of your mission is pretty fucking weird and worth digging into.

Michael Vassar
17:01

Yep.

Cade Metz
17:04

And so I mean, how did you get to know him?

Michael Vassar
17:07

So I initially met him at Victor Niederhoffer's Junto in New York. And I had been spending a lot of time with people like Rob Zahra, who was a Harvard student who had done very well over the summer in investment banking, and Nicholas Romero Green, and some other people. We had been talking about singularity, and how do we not make machines that destroy us all. And I had a more like, 20-25 minute conversation with Peter, at Victor Niederhoffer's Junto about the efficient market hypothesis and how it can only possibly be an approximation and how the calculation problem, if human brains can solve it, so can machines in principle, we just may not have institutions for building the right sorts of machines. And then the next day, we met again, we'd like planned that, and he somehow didn't recognize me, after having spoken with me for like, 25 minutes. So I was kind of surprised by that. And we had another like, several hour conversation, we kind of got to know each other, somewhat gradually, for a little while, and then he, and then much more rapidly from around like, 2010 to like 2014, we spent like a lot of time together.

Cade Metz
18:32

I see. So then when did he invest in MetaMed? And what was the idea there?

Michael Vassar
18:39

So why don't you tell me what idea you took away from it before we talk about that?

Cade Metz
18:47

Well, I mean, it sounds like you know, sort of a consulting firm of sorts. So you had this sort of team of doctors who will... you know, you can you can you can call on and it sounds like Scott Alexander, you know, at one point was one of these consulting doctors,

Michael Vassar
19:13

no, he was never a consulting doctor. He won a prize early on when we were reaching out to people to do sample reports, but he never actually did any reports while we were operational.

Cade Metz
19:25

okay, but it sounds like he worked for the company. Like on the website, you

Michael Vassar
19:30

no he was an advisor. He did something at the very beginning, but then he was not available after fairly early on.

Cade Metz
19:37

I see. Got it. Got it. So it was correct to call him an advisor.

Michael Vassar
19:45

Hmm, sure. Yeah.

Cade Metz
19:46

Got it. Got it. And so how would you describe the company?

Michael Vassar
19:52

So have you been tracking Robin Hanson?

Cade Metz
19:55

Yeah, I talked to him recently.

Michael Vassar
19:57

Okay, so you know his stuff on medicine?

Cade Metz
20:02

Yes.

Michael Vassar
20:04

Okay, so MetaMed is the straightforward response to Robin Hanson's stuff on medicine. Robin is pointing out that science says that medical treatments work. And science says that in aggregate medicine does not work. And therefore, like there seems to be an unsolved problem of investigating which treatments are appropriate to a particular individual situation and matching them up with the treatments that will help them, right?

Cade Metz
20:33

Right.

Michael Vassar
20:35

So, okay, um, so it okay, let, let me ask you a couple of questions. First of all. 15-20 years ago, it seems to me like having an interview with a journalist was a more straightforward process. Because people were much more operating within a shared set of narratives, like to assume, you know, it's possible to assume that we were each all like talking from the assumption that things work in America more or less the way people like Steven Pinker say they do. You know what I mean?

Cade Metz
21:28

Uh, yeah, I think I think I know what you're getting at. Yeah.

Michael Vassar
21:34

So like, today, that's less possible. Why would you say that's less possible?

Cade Metz
21:41

Well, I, I mean, I don't necessarily agree with that. I mean, I understand what you mean.

Michael Vassar
21:46

Okay, so why do you think I'm saying it? If you don't agree. Disagreement is a critical part of discourse. People need to express their disagreement or it's not possible to get anywhere.

Cade Metz
22:00

Sure. I mean, I mean, from my point of view, not all journalists are the same. And I think that a lot of people, for instance, might deal with journalists from other outlets or you know, or other backgrounds and, and then apply, apply what they assume, based on their interactions with those journalists, to me. My, my aim with any story is to look at it from all angles. To get as many many people on the phone or better yet in person, talk to them about a situation and really understand it, understand your point of view, understand Scott Alexander's point of view, understand, you know, who everybody, okay, that's, that's my aim. So my aim is not, my aim is not to agree with your worldview, or write it from my worldview. My aim is to look at this from all sides.

This is the first of a several moments where Michael asks a question or makes a point and Cade interprets it in a wildly different frame, different enough that his responses basically become non-sequiturs to anyone who can understand what Michael is saying.

Michael Vassar
23:12

So I think that worldviews are maybe potentially more divergent than what you just said really can embrace. So like, if you're engaged with Americans who are advocating for, for instance, like a, on the one hand, a more libertarian policy, and on the other hand, maybe like a more Andrew Yang, Basic Income type policy, then it seems like it's possible to like engage with, and there are competing worldviews connected around both sides. While like it seems that if you're talking about evangelical Christians, and like, you know, Daoists, then like, looking at things from both sides, it's much less clear what it means, as the distance between the perspectives grows, what it would be to look at them from both sides, you see what I'm saying?

Cade Metz
24:08

I see what you're, I see what you're saying. I mean, I still think that you know, you know, this story is a good example, it becomes much harder to wrap your head around everything that's going on, but you can do the work that allows you to do that.

Cade received Michael's previous "You see what I'm saying?" as a request for validation (many such cases), and is trying to respond with a soothing/placative "Don't worry, I'm validating you, you can relax" response.

Michael Vassar
24:25

So, one way of saying this is: from within the schema of something like neoliberalism, the MIRI and the Singularity Institute before were organizations trying to draw attention to a particular place where the shared narrative was incoherent, where the shared narrative needed more investigation and exploration. Right, right. And from the perspective of like, academic humanities, the Singularity Institute was much more plausibly interpreted as a cult or something. And these are like, very different perspectives such that like, looking at it from both sides is like a hard thing to do. It would require, like, clarifying what the relationship between the academic humanities and the neo-liberal narrative is, which is something that is a practically unsolved challenge, you see?

Cade Metz
25:38

Gotcha.

Michael Vassar
25:41

So like, that's why I'm asking about background because like, we it's not because of specifically something about journalism, it's something about specifically not having, in a postmodern context, a very easy way of sharing a frame so that one knows even in the roughest approximation, that we mean the same thing by words, or that we are talking within the same assumption about what a human is, you know?

Cade Metz
26:10

I hear you.

Michael Vassar
26:13

So... like... Okay.

You can hear Michael recalculating. Michael was trying to make it very clear that he thinks the gap between their worldviews is so different that they can't really have a meaningful interaction until they've talked directly about said gap, and Cade just responded multiple times in a row with placative "validation" responses. I'm guessing Michael's trying to figure out if Cade is fucking with him or actually doesn't understand.

Cade Metz
26:23

Well, I mean, so look, I hear you, and, you know, this stuff is hard. What I try to do is, in addition to getting people's opinions is, is, you know, it's also just stick with, with the facts of things and make sure I, you know, I get that fundamentally right, as well. And, you know, one thing, I'm just trying to understand, you know, the, the nexus of a lot of what's going on here. You know, you did a good job of helping me understand, you know, how you got to know Peter Thiel, how he came to invest in, in your company. And so tell me, like, and you started to describe how you came to know Scott Alexander, it sounded like he, explain that again, so was he actually sent a questionnaire?

Michael Vassar
27:19

No, no, there were sample projects to be done sample medical reviews, that we wanted people to submit to, like, demonstrate the feasibility of investigating the medical literature to form the sorts of integrated consensus that MetaMed was assuming were possible. But, like, we'd been familiar with Scott Alexander from long, from long before that, he had been blogging for many years, and had been the most prolific blogger in our general vicinity of the internet for many years, in fact.

Cade Metz
27:56

Got it. And,

Michael Vassar
28:02

I would like to know what your interpretation of Scott Alexander's cultural role is, that would be really helpful.

Cade Metz
28:10

Um, well, it's interesting how, in some respects, like the, the center of gravity, so to speak, for the, the rationality community kind of moved in his direction, you know, 2014, post 2014. You know, he is, he is read by, you know, a fair number of people. You know, among, let's call it, let's call them the Silicon Valley rank and file, you know, software developers, engineers, but, but also all sorts of other people, you know, across the globe. But also, he's read by some very influential people, including, you know, venture capitalists and company founders. So there's this overlap with, you know, with kind of Silicon Valley as we, as we think of it, and, you know, and you can kind of see that in kind of the reaction of people when he, you know, took down his blog, right. A lot of those people came to his, his defense, but you could see that even before that. So, you know, it's, it's an interesting situation that, you know, he kind of grew out of this, this rationality community, but his influence, you know, goes beyond that, I would argue,

Michael Vassar
29:49

like, so I guess a lot of the issues that, I don't have any idea when you say rationality community, like what sort of a network structure is there in your mind? Like I don't have any understanding of what the words community or rationality mean as individual words, and how they relate to facts and journalism and knowledge. And like there's a...

Cade Metz
30:19

Well, I mean, I think that people of all, people, all sorts of people define it differently. Right. Like you know, you could argue that Scott Aaronson is part of the community, but he will say he is not. You know, some people see the community as smaller than other people. And you know, so I'm not, I don't think,

Michael Vassar
30:47

So there's a question about what its boundaries are, but that's not what I'm asking.

Cade Metz
30:51

Right.

Michael Vassar
30:51

I'm asking like, is it a type of snake? Is it an animal, vegetable or mineral? Does it have, is it bigger than a breadbox? What like, what sort of a thing is it in, like, assuming that someone knows, like, nothing, you know?

Cade Metz
31:08

Well, it's a, I would call it sort of a, a sprawling community that has certain core beliefs, and you can, and you can sort of list those, among them, you know, the belief that you should apply rational thought to every situation. And that's going to include statistics and probabilities and Bayes' theorem. And but then there's also like, there, there are meetups across the globe, I've talked to people across the globe who go to regular rationality meetups, or in some cases, they're called Slate Star Codex meetups. And, you know, you can, you can use that to define, define this group, the AI risk thing is one of those beliefs, that's part of it. There are group houses.

Michael Vassar
32:03

So I guess it would be helpful to know, like, how you see the AI risk thing, relating to the other things, that would be like really helpful.

Cade Metz
32:13

Well, I mean, you know, I think some of this goes back to Yudkowsky. You know, AI risk has been something he's been interested in for a while. And, and is also interested in this, this rationality idea, he helped create that community. And those beliefs, you know, there is sort of a relationship between the two, meaning, you know, in some sense, people, including him, are applying that sort of rational thought to the idea of AI, that there is a non-trivial risk there. So it's best to prepare, prepare for that. You know, but again, you, you talk to all sorts of people about the relationship there, and they have all sorts of different answers.

Michael Vassar
33:22

Sure, of course, but I'm trying to understand what yours, what, okay, how your understanding is, like, I'm trying to understand, for instance, when you say the rationality community believes that people should be rational, and that includes statistics and Bayes Theorem. I am like, curious as to what the alternative to people should be rational is, like whether the word "should" is being questioned or the word "rational"? Like, like, what, what is the, like, you're proposing a proposition, but that proposition is like, a boundary, in a sense, that proposition is a distinction, like, between a position that people should be rational and a position that people shouldn't be rational? A position that "should" is meaningless? You see what I'm saying?

Cade Metz
34:10

Well, I mean, look, everybody should be rational, everybody thinks they are rational.

Michael Vassar
34:16

I'm not sure that's true.

Cade Metz
34:17

Okay, well,

Michael Vassar
34:18

I'm actually sure it's not true.

Cade Metz
34:19

Okay. Well, I mean, I, you know, I've talked to enough people in the community and people who are at least consider themselves very close to community about this and, and, you know, that's the common denominator, right? This this notion that you are applying, applying calm, rational thought to things and, and then the following question is, well, what do you mean there and then, you know, what people would typically do is they say, you know, we're, you know, we're applying Bayes, you know, Bayes rule to this, right? You know, we're thinking about biases, we're thinking about statistics and probability, we're taking that into account. Right? And, you know, part of my job is to explain these types of things in very clear and concise ways. So I'm going to have to boil all that down. And and,

Michael Vassar
35:31

Ah, that's pretty helpful. What would you say are the differences between the norms of reason endorsed by Eliezer Yudkowsky, Scott Alexander, and the journalistic community? This would be a really, like, you can only understand things by understanding differences. So like, if we understand the differences between what norms are being endorsed by Eliezer, by Scott, and by the journalistic community, this would like help us to see whether we were seeing the same thing.

Cade Metz
35:57

Well, I don't you know, I don't think that the comparison is what I'm trying to do here like I'm,

Cade is coming from a frame that can only see "comparison" as "this group good, that one bad". To Cade, any attempt at describing concrete object level differences between Eliezer, Scott, and a journalist would obviously be just a cover for making a value judgment. He expects Michael to be trying to "catch him" making a value judgment, and so he refuses to "compare".

Michael Vassar
36:09

but I'm saying that communication is impossible and is nothing but comparison, like communication is literally empty and meaningless, except as a set of comparisons.

Cade Metz
36:19

Got it. Got it. Well, I mean

Michael Vassar
36:24

*laughs*

Cade Metz
36:24

Yeah, I mean, I look, I hear you, I think we're gonna we're gonna end up going in circles here. I think what I want to do is,

Michael Vassar
36:30

I don't believe you hear me is the thing, if you heard me you would try to make a comparison, that I could understand.

Cade Metz
36:36

*pause* *sigh*

Cade Metz
36:49

Well, I think it is, I think it is very difficult to paint everyone with the same brush. Okay, so you're not gonna want to paint everyone in the rationality community with the same brush, it's hard to define what the rationality community is, it's hard to paint all journalists with the same brush.

Michael Vassar
37:07

So journalism has standards and norms that it can try to apply.

Cade Metz
37:12

Yes

Michael Vassar
37:12

So journalists have to at least be able to perceive a set of constrained behavior. Otherwise, they can't have such a thing as journalistic standards or journalistic ethics, they need to at least loosely agree about what a fact is. And it's not clear to me what the agreement, where there might be agreement or disagreement between, for instance, Eliezer Yudkowsky, and like journalistic norms about, for instance, whether the Many Worlds Theory is a fact, or whether that is like an opinion, that, like whether cryonics should be expected to work is a fact or whether that is an opinion. Like, these are the facts as Eliezer understands them, and I think most people would not understand them as fact. And but like, this is, since we're referring to facts, and in terms of painting people with a broad brush, that's why I'm talking about specifics. I'm talking specifically, we have these three examples of like, Scott Siskind, Eliezer, and either you could make it you, or you could make it your preferred journalist, Malcolm Gladwell level, some person to represent the journalistic.

Cade Metz
38:20

Okay, well, I think we can do that very easily. You bring up the Many Worlds thing right there. I was in a quantum computing lab at a, at a major company recently, and I talked to maybe 15 of the physicists working on what you might call the cutting edge of quantum mechanics. Some of them believe in the multi worlds thing, some of them absolutely do not. So I would argue that that is not a fact. Because there are some people working in the field with very, very different opinions about it. And it's not just two opinions, it's a wide range. So my job is

Michael Vassar
39:01

I'm familiar that there's a range of opinions about it. But it seems like you're then asserting that facts are socially constructed, that whether something is a fact is determined by whether people with some sort of a form of authorization, like a PhD in physics or a job in physics, endorse it or don't endorse it.

Cade Metz
39:26

In that case, yeah. So people who understand the field, I think that they're, talking to them about that is valid and taking all what each of those people say into account as I am relating it to the reader.

Michael Vassar
39:49

So taking is not, people definitely don't agree about what it means to take what someone says into account. They also definitely don't agree about whether facts are defined by social construction, or whether facts are defined by something like a correspondence theory of truth. That's the traditional philosophical view, or whether facts might be defined by power relations. That's the conventional postmodern view. There may be other views.

Cade Metz
40:18

Okay, I'm with you.

Narrator: he was not, in fact, with him.

Michael Vassar
40:21

There's a traditional Hindu view where all facts are false, or Daoist view where all facts are false.

Cade Metz
40:27

Okay. Well, I've, I've told you how I view it and, and that that's the way I would approach a story about quantum mechanics. And it's a way I would approach a story about rationality, meaning, I talk to

Michael Vassar
40:43

So with quantum mechanics, you can have a PhD in quantum mechanics, and you might have to make a decision and judgment about what institutions are authorized to grant PhDs and how many University of Michigan PhDs you need to add up to an MIT PhD, but we at least have a somewhat tight, I think, shared reference frame on who has a PhD. When it comes to rationality, it's, if we were to do an analogous thing, we would be doing something like asking who has an MBA, because an MBA is closer to being a degree in rationality, I think, than anything else, or maybe a finance degree or something, a behavioral economics degree, something like that. But I don't think any of the people in the rationality community more or less have behavioral economics or finance degrees. You know, I have an MBA, Robin has an economics degree, well, sort of a social science degree. But it's extremely uncommon. Someone like Eliezer doesn't. And, you know, yeah, it seems important for Eliezer, and if we're going to socially construct facts, it seems important for Eliezer and Scott, to be socially constructing what the rationality community believes, rather than MBAs socially constructing what the rationality community believes, doesn't it?

Cade Metz
42:02

Sure, of course.

"Sure, of course" is not Cade agreeing with Michael. Not only does Cade not know what Michael is talking about, he doesn't seem to consider it a possibility that Michael could be talking about anything in the first place. Cade's just trying to answer "What does he need me to nod my head along with in order for me to ask him the next question?" which we are watching fail spectacularly given how Michael is not looking for nodding but is trying to have a conversation.

Michael Vassar
42:05

But like, then, you say, of course, but I mean, you see that the exact analogy between what you were doing in quantum computing and what you were doing in rationality would make Andrew Gelman and not Eliezer Yudkowsky, middle school dropouts,

Cade Metz
42:26

No, no, no, no, I'm not doing that. Okay, well, I'm gonna say one more thing about this, okay. Of course, Yudkowsky's view matters here because he helped create this community and is inside it. Of course, Scott Siskind's view applies because he's been in the middle of this. Others I've talked to: Kelsey Piper, her opinion matters. You know, she is in this. Scott Aaronson's opinion matters. He's very close to this, knows a lot of these people, has followed them for a long time. Robin Hanson's matters. I'm not worried about their degrees, I'm not worried about any of that. I'm worried about, you know, their proximity to, to this community.

Great illustration of their disconnect. Cade hears "Eliezer, middle school dropout" and immediately thinks he's recognized a particular social attack that Michael is worried he is making, and rushes to declare he's not trying to call anyone a middle-school dropout (which Eliezer is). Because of course, to Cade, you'd never say that unless you were trying to insult them.

Michael Vassar
43:22

So we've talked about quantum computing. And David Deutsch is, of course, the person who proposed quantum computing originally as a concept, and he's the person who created the first quantum algorithm. And his motive for proposing quantum computing was to give the empirical verification of the many worlds hypothesis. So like, there is a question about to what degree the people who create a concept and its definitions and its underlying mathematics and formal structures get, are determined and determined to, get regarded as determining what the concept means. And to what degree other people who repeat the words determine what it means, and how you weigh the other people who repeat the words, if you see what I'm saying?

Cade Metz
44:13

I do. I do. Okay, so can I just, you know, ask you just a couple more things that I'm trying to understand. I would love to get your view on.

Michael Vassar
44:28

You can but I'm not sure I can answer because I don't know your view on views. Like I don't know. Currently, what, how you relate to words.

Cade Metz
44:41

Okay.

Michael Vassar
44:42

Did you see GPT-3's conversation about itself? GPT-3 speaks to philosophers.

Cade Metz
44:47

Yes, I did.

Michael Vassar
44:49

So you see that GPT-3 relates to words in a very different way from how I relate to words, even though GPT-3 can like compose articulate prose?

Cade Metz
45:01

*pause* Yes.

Michael Vassar
45:05

Right, okay. So like, the range of possible ways of relating to words is very large, like GPT-3, for instance, says that it can't lie. If it does something that would be a lie, if a human did it, it's not a lie when it does it, because words are meaningless to it. And like, I am concerned about the possibility of, in the larger world, that attitude existing, because like, if people have ethical norms against lying, and, and also social pressures to lie, it seems like a natural way in which people can reconcile those is by adopting something more like GPT-3's relationship to language.

Cade Metz
45:50

Okay, well, I can ask you a couple of questions. If you don't want to answer that, that's fine. That's, that's up to you. But you know, there's a couple of them, or if you don't want me to ask, or we can end the call now, it's whatever you feel like,

Michael Vassar
46:01

You may as well ask your questions, I'm just trying to, I would love to have a productive conversation about these topics, I would very much love to have a productive conversation about these topics.

Cade Metz
46:15

Okay

Michael Vassar
46:15

that's why I'm spending my time to have this conversation, it's just that I am trying to lay these sorts of ground that would make it possible to have what I regard as a productive conversation.

Cade Metz
46:24

Okay. You know, I want to have a productive conversation too, you know, I thought that the first part of our conversation was very productive. I just want to ask a couple questions along those lines. So, you know, one of the, the other things that's interesting to me here is, is how Slate Star Codex, in a way is, it's not just a blog, it's a place for people to converse, right? And, some people see it as like a, like a place for really productive intellectual discourse, because you have so many different views represented, like some people see it as a place where, sort of a rare place where people with all sorts of views can, can discuss them, whether they're conservative, liberal, you know, Neo-reactionary, you know, whatever else, you can have, you can sort of have these discussions. And so, as part of that, you know, what's interesting to me is sort of there's this, in a way, this, this overlap, or at least, with, with the Neo-reactionaries, or at least they play a role here. And, and that's what I'm, one of the things I'm, I'm trying to understand, the relationship between what people call the Neo-reactionaries and, and sort of this, this situation where those people are sort of at the table in these, in these discussions.

Michael Vassar
48:09

So you have read Scott's posts about Neo-reaction, right? They're very long.

Cade Metz
48:15

Yes.

Michael Vassar
48:16

So what did you think of those?

Cade Metz
48:19

Well, okay, maybe, maybe I'll get even simpler here. So one thing I mentioned is just sort of the way all this stuff played out. So you had this relationship with Peter Thiel, Peter Thiel has had this relationship with, with Curtis Yarvin. Do you know much about that? Like, what's the overlap between sort of Yarvin's world and Silicon Valley?

Michael Vassar
48:52

Okay, well, Yarvin literally lives in The Castro, so his relationship with Silicon Valley is he is Silicon Valley based. You can clarify your question more and then, you know, give me some thoughts about what you've already, what you already see as his relationship.

Cade Metz
49:11

Right. You know, Thiel also invested in one of his companies you know, did were you involved in that or do you do you know about that? And how did that did he meet Yarvin?

Michael Vassar
49:25

All right so Thiel did in fact invest in a company that Yarvin no longer owns called Urbit. But I'm still trying to understand what you're seeing as the relationship here between Slate Star Codex and Mencius Moldbug because like there is a very long article as I said, in fact two I believe Neo-Reaction In One Giant Sized Nutshell, and if I will upload it Well, I think Why I'm Not a Neo-Reactionary or something like that. I'm asking if you've read them and what you thought of them?

Cade Metz
49:57

Well, you know, honestly, my... Like, what I think of them doesn't matter, what I'm trying to do is understand what's going on, like, and so,

Such a disconnect. Michael has been trying to tell him for 20 minutes why he cares about what Cade thinks; of course it matters.

Michael Vassar
50:06

so then let me rephrase that from what you thought of them to how you understand them.

Cade Metz
50:13

So I'm not, I'm just making sure that I appropriately rep represent the people what the situation is meaning. You know, Scott Alexander does not identify as as a Neo-Reactionary, but he does or has and, you know, sort of promoted neo-reactionary blogs on his blog.

Michael Vassar
50:41

Which blogs?

Cade Metz
50:44

Curtis Yarvin's, Nick Land's.

Michael Vassar
50:47

So I don't know if he ever linked to.

Cade Metz
50:51

No, he has. He has that. So

Michael Vassar
50:56

I don't mean linked to from a blog post. Of course, he would do that. I mean, I don't think he linked it in his blog roll.

Cade Metz
51:01

It is in his blog roll. Yeah. In his blog roll. Yeah.

Michael Vassar
51:04

That's interesting. Okay, I didn't notice that. From when to when was that?

Cade Metz
51:12

I mean, Nick Land was this year, I mean, this year. So

Michael Vassar
51:15

wait, I'm very confident Scott would not put Nick Land in his blog roll, because Scott doesn't understand Nick Land at all.

Cade Metz
51:23

Well, I don't know if he understands that or not, but he has been in his blog roll this year.

I'm making a smidge of a leap, but I get a sense of, "How could understanding something possibly relate to one's decision to signal boost it? What a weird and irrelevant proposition."

Michael Vassar
51:28

I think you're mistaken. I would like citations there.

Cade Metz
51:31

Yeah, definitely everything will be cited and all that I'm I'm just like, I'm just trying to at this point do my research research and so I'm just trying to understand

Michael Vassar
51:41

So it seems to me like you are trying to do research into the boundaries around political coalitions.

Cade Metz
51:51

No, I'm not looking. No, I'm just trying to explain to people

Michael Vassar
51:55

Okay, so do you have a hypothesis as to why it would seem that way to me?

Cade Metz
52:00

Um, do I have a hypothesis? I am trying, I'm gonna, I'm just gonna be completely honest with you, I am trying to gather information, I am trying to gather information. And once I gather all the information that I need, I will write a story. And, and so that, I am, I'm trying as hard as I can with you, and all sorts of other people, to gather information. And that is fundamentally what I do, is I gather information. And then from that information, I try to explain to people as best I can, what is happening.

Feels like the closest Cade comes to saying "Look, I don't know what you're talking about, I don't care what you're talking about."

Michael Vassar
52:44

So I have gathered a lot of information, I guarantee I have a great story. And the ability to explain things less well than Alex Karp would, but still pretty well. I think that the... So there are some important facts that need to be explained. There's, there's this fact about why it would seem threatening to a highly influential psychologist and psychiatrist and author to have a New York Times article written about his blog with his real name, that seems like a very central piece of information that would need to be gathered, and which I imagine you've gathered to some degree, so I'd love to hear your take on that.

Cade Metz
53:42

Well, I mean... *sigh* Well, rest assured, you know, we we will think long and hard about that. And also,

Michael Vassar
53:56

I'm not asking you to do anything, or to not do anything. I'm asking a question about what information you've gathered about the question. It's the opposite of a call to action, it's a request for facts.

Cade Metz
54:06

Yeah, I mean, so you know, I think what I don't know for sure, but I think when it comes time, you know, depending on what the what the decision is, we might even try to explain it in like a separate separate piece. You know, I think there's a lot of misinformation out there about this and and not all the not all the facts are out about this and so it is it is our job as trained journalists who have a lot of experience with this stuff. To to get this right and and we will.

Michael Vassar
54:47

What would getting it right mean?

Cade Metz
54:54

Well, I will send our send you a link whenever, whenever the time comes,

Michael Vassar
55:04

no, I don't mean, "what will you do?" I'm saying what what, okay. That that the link, whenever the time comes, would be a link to what you did. If getting it right means "whatever you end up doing", then it's a recursive definition and therefore provides no information about what you're going to do. The fact that you're going to get it right becomes a non-fact.

Cade Metz
55:26

Right. All right. Well... *pause* let me put it this way. We are journalists with a lot of experience with these things. And and that is,

Michael Vassar
55:57

Who's "we"?

Cade Metz
55:58

Okay, all right. You know, I don't think we're gonna reach common ground on this. So I might just have to, to, to beg off on this. But honestly, I really appreciate all your help on this. I do appreciate it. And I'll send you a copy of this recording. As I said, and I really appreciate you taking all the time. It's, it's been helpful.

Michael Vassar
56:30

All right, I am glad to have had this conversation as well, in that case, I would have to, based on the content of this conversation, I think, recommend that people regard the New York Times as fundamentally postmodern in its epistemology and of the position that facts are socially constructed, based on patterns of authority, the details of which are to be concealed. Does that seem like a fair summary of your perspective?

I expect that Cade can only relate to this description as "you are bad people doing bad things". Michael is actually making a specific and kinda straightforward claim. Cade has dodged tens of repeated questions about the nature of his standards. He has shut down all attempts at having a conversation about how he thinks or reasons, giving only "trust us, we're professionals". That's straightforwardly concealing details. The only substantive thing Cade did venture to say was that the truth of quantum issues is determined by the agreement or disagreement of experts. Which is straightforwardly the idea that what is a fact is constructed by a social body. "Postmodern epistemology" is mostly just a reiteration of that.

Cade Metz
57:09

No, I because because, okay.

Michael Vassar
57:13

Is there any way you could correct that? Because it does seem like the only summary, I mean, I feel like if I gave that to a philosopher, like a philosophy professor who writes articles for the Stanford Encyclopedia of Philosophy or whatever, they would be certain to agree with me, like I would bet 100 to 1 on it. We could make it a bet if you like, we love bets in the rationality community.

Cade Metz
57:37

Okay, I'm just saying you are welcome to characterize it however you like, but you know, I have, but I have told you the way the way I see what I am doing, and

Michael Vassar
57:49

okay,

Cade Metz
57:50

right. So I have told you what, what I, those are not my words, what you just said. But you know, I have told you the way I am approaching this.

Michael Vassar
58:03

Great, I am happy that you've told me how you are approaching this and that those are not your words. I look forward to looking at the transcript. Thank you very much.