The Not Unreasonable Podcast
Subscribe in iTunes, Stitcher, or by RSS feed. Sign up for my newsletter here and also see us on YouTube!
Show notes at notunreasonable.com
Robin Hanson on Distant Futures and Aliens
Social science is brutally hard to do well. I once got a guest to admit there has been no progress on it ever, and another one to say that moral progress is literally impossible. I think there's a deeper link between morality and social science than most people do, so this is depressing stuff for me. This episode was part of an effort of mine to get back to first principles. But, uh... what ARE the first principles of social science? Enter Robin!
Robin Hanson is associate professor of economics at George Mason University and is back for his second appearance to talk about the social science of very distant (in every possible sense of the term) societies. We have aliens, we have future humans, we have simulations. Robin speculates about things he can see today, things he thinks might happen in a century or two, and even what might be going on in 1 million CE and perhaps beyond.
As a frame for the show, I think of it as trying to reason about what is permanent about social life by thinking about things that are weird and distant. Robin immediately disagrees with my premise and we go from there. We always learn from Robin Hanson, people!
Show notes: https://notunreasonable.com/?p=7460
Youtube link: https://youtu.be/oYCEo3LnGFE
Twitter: @davecwright
Surprise, It's Insurance mailing list
Linkedin
Social Science of Insurance Essays
My guest today is Robin Hanson, associate professor of economics at George Mason University, blogger at overcomingbias.com, and author of two books, The Age of Em and The Elephant in the Brain. In an earlier appearance on this show, Robin and I discussed the latter book, and today we'll be covering the former in more detail. The theme of this episode is going to be exploring the core building blocks of social life by imagining strange and distant other societies, and I think Robin is an expert in alien cultures and human successor cultures. Robin, welcome back to the show. Nice to be back. First question: are there any immutable laws of social science? And if so, what are they?
Robin Hanson: So immutable is an odd word to use. Okay, it just means it doesn't change over time, right? But people are usually asking for general laws that would apply in a wide range of circumstances. So if you had a law that only applied in a limited range of circumstances, but those circumstances didn't change, then it would be immutable, even if not very general. And so then your question becomes equivalent to: are there ever immutable social circumstances? Is there ever a social situation that will last forever? And that's kind of a pretty big claim, right? Forever is a really long time. I don't like to make claims about forever if I can avoid it, right?
David Wright:you, you have been suckered into it a few times, not forever. That's a very long time, but at least very, very long periods of time. Right. And what I noticed about you have a couple of really interesting pieces, several interesting pieces. So the age of em the book, which speculates about future societies. There's another paper you wrote, which was the hardscrabble frontier. Right, which is really interesting. And and I think that one of the things that links these another point is your paper on grabby alien. So that's a little less explicitly focused on kind of speculating with the aliens are just that they're there. The interesting, I think, a theme that runs through that, I think, is your use explicitly in the hardscrabble aliens paper, but less explicitly an age of em, of game theory, right? And so game theory gives you things a framework in which to establish immutable laws, at least over the course of the game that you're playing out, I would think that lets you speculate about what might happen in the future. So game theory gives us a way into this, and how do you think,
Robin Hanson: Well, game theory is more a subset of competitive environments. So, you know, we have a long history, billions of years up until this point, in which various organisms were competing. And the earliest, simplest organisms weren't very aware of the other ones, and weren't very much responding to the other ones' strategies. Game theory is about how agents would behave if they had something they wanted, and they had expectations about the other agents' plans, and then you have a match between those. If you just don't have any expectations about the other agent, you can still have competition, and that still gives you predictions. So I would think the larger category is the robustness of competition. Game theory is a sort of thing that happens in competition for agents that have a certain level of capability: an agent needs to be able to foresee the game, think about the consequences, know what it wants, have some guesses about what the other agents plan to do. And then it can use game theory to figure out what best to do in a competition. But we can have competition without agents that are that good.
David Wright: Could we say, then, that competition, and maybe permanent is not the right word, I don't know what you'd call it, is a persistent feature of social life?
Robin Hanson: Competition has been, so far, a pretty robust feature of life, not just social life. And it does seem likely to continue for a pretty long time. There are interesting ways in which it might become limited or lessened, and I discuss some of that in my grabby aliens work, and we can go into that. But mostly, I think competition will remain strong, and that's one of the most robust things we can use to predict outcomes in strange worlds.
David Wright: I definitely want to get into the specifics on grabby aliens and your view of competition in The Age of Em and in the hardscrabble frontier, because I think it's maybe slightly different in each of them. But is there anything else that is as permanent, or general, or immutable as competition that's worth thinking about? Or is that it?
Robin Hanson: Well, you know, once you do have somewhat capable agents, then some sort of computing framework is also a pretty robust framework to have in mind. That is, an agent, in order to choose actions, will have to have some sensors that, you know, update memories, which then feed into algorithms that make choices about behaviors, right? So that, I think, is going to remain a robust way to think about agents in social worlds for a very long time.
David Wright: I am so happy you said that, because that was one of the things I was wondering about. It seems to me that the concept of uncertainty, and information processing, is really critical, because it's something I don't think you've touched on quite as much. And, well, I don't know, we'll see if I can get into it. But I think the idea of prioritizing attention and effort in decision making is really hard. I mean, it's really hard. It's the basis of intelligence, right? Or it's a way of thinking about it, maybe. And so you have competition, and you have people trying to figure out what to focus on. How do those come in?
Robin Hanson: Right. So the general category of competition does sort of produce more and more specific things that I think will remain and last. But, you know, over time they've been added to the base thing of competition, right? So once you just had competition, and then you had organisms that did have inputs and memory and algorithms to make choices, that was added, and pretty much now and forevermore competitive organisms will want to have some of that, right? And then, in addition, there's some degree of decision theory, which I think is a useful framework for thinking about how you would want to construct such algorithms, what you would want them to be approximating: you'd want to have some idea of what you were trying to do and some uncertainty, and merge those into a plan of action. And then, even further, game theory becomes a robust thing you would add to decision theory for an agent who has sufficient, you know, memory and calculation to be able to think about other agents and their plans, in addition to just its own goals and plans. And, you know, even beyond that, the idea of coordination mechanisms becomes important, and coordination institutions. Once you have game theory, you realize there are many ways that we can fail to achieve our ends through a failure of coordination. And then we start to say, how can we coordinate? And that opens up a whole world of institutions and mechanisms by which we might together coordinate to achieve our ends. And that starts to introduce the idea of organizations, governance, law,
David Wright: social life,
Robin Hanson: And language, of course, is another robust concept to add. Once you have this internal processing, and you have these other creatures around you that you need to coordinate with, you might use language to coordinate with them. And then you think about spaces of languages, and how languages could evolve and continue. And so these are all part of a larger robust set. So in some sense, you know, the future will contain a lot of these things for a very long time. But they weren't always there in the past, and humans have in fact innovated some key additions to this library of things that all agents in the future will want to have.
David Wright: So, one real interesting find: reading The Age of Em for this, there was a paper it referenced which I hadn't seen yet, and it was kind of worth all the time I've spent preparing for this, because I was looking for something like it. It showed that groups make more rational decisions than individuals. And I think that's a real deep, important thing, because you could ask yourself, why do we have these groups? Why do we coordinate, right? You know, I had Joe Henrich on this podcast a little while ago, learning about cultural evolution, and there was a real deep, important concept there: we use groups to protect ourselves, to make better decisions and work together, to coordinate, which makes us more powerful. And the point that groups make us more rational was something that just, I mean, just delighted me. And now, coming back to this concept: you made the point about decision theory. We are talking about social science here today, that's kind of the idea, and the idea of coordinating into groups. The trick, though, which you show in your other book, is that once we start working together in groups, we get all kinds of nonsense, right? We get these hidden motives, we get these complexities around how we communicate and how we coordinate. Is that necessary too?
Robin Hanson: Well, that is the interesting question: when our descendants explore a larger space of possible minds, how different will those minds be from ours? You know, in the last 100,000 years, humans have innovated and explored a lot of ways to help coordinate and manage minds like ours, but our minds themselves haven't changed at more fundamental levels. It's certainly possible that our descendants will change their minds at more fundamental levels. And then we start to think about, well, what will a universe of creatures like that be like? What sorts of issues that we have won't they have, and vice versa? So that's one of the more difficult things when you try to think about futures: to try to think about the space of minds, because the space of minds really is enormous. And all minds don't have to be like ours.
David Wright: And that's kind of one of the things we'll get at here: if we can ground some of this analysis in general, or maybe the better word is immutable, concepts, then do we wind up deriving some of this other stuff? Because here's one thing, if we combine two thoughts here: one of them is group coordination, the other one is decisions under uncertainty, that we are more rational in groups than as individuals. I'm just wondering to what degree that's necessary. Like, I've been in the insurance industry for a long time, right, and one of the things I've observed in insurance is that people, no matter how sophisticated their knowledge of insurance is, make very similar kinds of mistakes when thinking about insurance. And more or less, it boils down to cognitive bias. Take availability bias: to use machine learning terms, the training set of my experiences determines the things I've learned are important. Those are the things I really understand, and those are the things I make decisions on. But I kind of know, I haven't read Kahneman necessarily, but I do buy into what he's saying, that I don't really trust myself. And so I'm going to have to use the group to help me make these decisions. And so to me, the group is a pretty stable requirement for individuals to break through their own kind of little local spacetime cone, right, of information that they can process. And then I'm sort of wondering how necessary all the other stuff that comes in from groups would be. It seems like, I bet it's underrated, to kind of think of it that way.
Robin Hanson: I don't know about that. I think if you just take the way we are, and you look closely at how different groups of us are different, and how different ones of us are different, that's just not a very reliable guide to the vast space of possible minds and what may happen when we get out there. So those are just two different conversations, I think. Certainly we want to reflect and notice how we are in groups and how we relate to each other and all the local things. But if the conversation topic is where we can go in the long-term future, then you're not going to get very far there. That is, as you may know, there's a long history of people trying to prove impossibility theorems, and those things just don't work out very well. You know, you have to have really strong abstractions to be able to prove things are impossible, and even in physics they barely have those. So I would more look for examples of, you know, kinds of minds or cases which people have said challenge the usual model. The simplest thing is to say, well, what sorts of mind features have people proposed that future minds could have that humans don't have, such that, you know, they would change their world a lot? And of course, also just to look at worlds of minds like software today, and ask how are those worlds of minds different from human minds, and can we notice some big difference that we might think, yeah, that would generalize, that would be the sort of difference that could continue? That's, I would think, the most useful place to look for ways that might help you see how very different things can be.
David Wright: Yeah, I think speculation and imagination are pretty important human characteristics.
Right. And so we can speculate and imagine, you know, a fair amount.
Robin Hanson: So I think one of the strongest intellectual tools is the ability to know when to go abstract and when to go concrete, right, to go back and forth between abstract and concrete. And so, you know, whenever you're kind of running out of abstraction and not sure what to do with it, just go concrete. That's what I'm suggesting here. There are many concrete proposals for particular ways that future minds could be different. And once you get into those, then you could abstract from them to other interesting generalizations about, you know, the space of minds. But again, if you just look at an abstraction like groups, and you go, I don't know what I can say about that exactly, then, you know, go concrete.
David Wright: Yeah, you know, my mind was actually going in a little different direction. I was thinking, let's find some data on what we've done, on the variety of the space that has been explored, and kind of think a bit about how much data we have, right? So, you know, if you read Henrich, he'll say that for the last 1,000 years or so, that's when cultural evolution has really been biting in, overcoming other more, call them ecological, determinants of human prosperity, right? So if you take that 1,000-ish years of cultural evolution, how much of that space have we explored? I'm not sure of a way to quantify that; presumably there is one. But how much data do you think we have? I know you're comparing it to an infinite quantity, which is the amount of space you could possibly explore, right? But how much have we done? How much do we know about what we've done? I don't know how I would estimate that.
Robin Hanson: I do think, like, the industrial revolution is the main thing that happened in the last 1,000 years. And the industrial revolution is this enormous change, and it would have been pretty hard to anticipate it, you know, 1,000 years ago. And so that has to give you pause about how far you can see things in the future. Now, the main thing that happened in the industrial revolution was larger organizations. And I will have to say, we are still very early in the days of learning how to structure and run large organizations. We are still making very basic mistakes; we still have very simple organizations. So clearly there's a long way we can still go with that.
David Wright: Yes. On the topic of larger organizations, it makes me think of the grabby aliens paper... well, actually, no, sorry, rather of your speculation about aliens that are here, right? So if we look at the information that we have, we're going to have to kind of just accept it, I think, for the purposes of this conversation. You know, in the last couple of years there's been a lot of information released, declassified, by the US government about UFOs, though they call them something else now, I forget what,
Robin Hanson: UAP, I think.
David Wright: UAP, right. And I have to say, Robin, some of the most exciting blog posts I've read in five or six years were when you started digging into that, in a couple of pieces at Overcoming Bias. I was just delighted to see you apply a lot of your thinking to these. I wonder if you might just sort of talk about why you think the aliens are doing what they're doing, here, now.
Robin Hanson: So for a lot of people, as soon as they heard you say UFO, they went, what am I watching? Where are these people going? I'm about to turn this off, right? So we need to address that before really getting into it, okay? I want to say that, basically, when you think about UFOs, there are a bunch of different things you can think about. You could look at very particular videos, or very particular recordings or testimony, and ask in detail about who was looking in what direction when, what sort of evidence you have, and what glint of sunlight or cloud or whatever else could explain that very particular thing you saw. That's not my area of expertise, and I'm not going to claim to know about that. I'll just say, look, on the surface it looks striking and, you know, disconcerting enough that it's not crazy that somebody should be looking into this stuff. I'd say that one of the main reasons I ever hear people dismissing UFOs is a priori: they say, look, it's just crazy to imagine that this sort of thing would even happen, therefore it couldn't possibly be true, it must be a mistake. So that's a judgment about the a priori plausibility of the basic hypothesis being considered, and there I do have expertise I can bring to bear. When you do any sort of Bayesian analysis of a hypothesis, you confront it with evidence: you need both a prior probability for the hypothesis and a likelihood, how likely would this evidence be if that were the true hypothesis? And an awful lot of people are willing to grant the likelihood, but they just think the prior must be crazy low. And I can say, well, no: if I try to work out the prior, it's not crazy low. I might give, say, a prior of maybe one in 1,000, which, if you think about it, is plenty high enough to have a trial. For example, the typical prior in a murder trial would be one in a million, right? What's the chance that any one person murdered any other particular person? That's pretty damn low. Nevertheless, if you say, but this guy did murder someone, we're willing to sit down and listen to your evidence, because that's a prior that, although low, is high enough that concrete evidence can overcome it. And I'm going to say the prior here is, you know, the square root of that, so it's even much better. So you really need to look at the evidence here if you're going to make a judgment. That's what I thought my contribution was: to try to say what the prior probability is here. And that comes down to making up a story that fits with other things we know, that could account for the key puzzling features purported in this evidence. So the hypothesis is: aliens are here, now, wandering around in the skies, and have been for a while. And there are two key striking features that are weird and in need of explanation. One: we know that aliens would not arrive very close in time to us; they would be many millions of years older than us, almost surely. So they would have appeared somewhere else, at our level, millions of years ago. And then they came here, and they've been hanging out, but they didn't go everywhere else.
That is, if they had been anywhere remotely nearby a million years ago, they could have, like, filled the galaxy. They could have completely restructured it, made it completely different, very visible, very obvious that they had made an enormous civilization. A million years is plenty of time to do that; millions of years is plenty of time. So the first puzzle about them is that they didn't do that. And of course, one theory might be: well, they didn't expand, they didn't leave their home, they stayed home. But then, they're here. So why is it that they're here, in addition to being home, but not everywhere else? That's one key puzzle to explain. And the other key puzzle to explain is: okay, they're here, they've got some reason to be here, and they're interested in us. But they could have just made their existence really obvious, showed up on the White House lawn, as they say, right? A civilization millions of years more advanced than ours could easily have done that. Or, on the other hand, they could have easily been completely invisible. There's no reason why joy-riding teenagers of their type would be, you know, making visible things that happen to catch our eye. If they wanted to hide, they could hide; if they wanted to be visible, they could be visible. But instead, they're hanging out at the edge of our visibility, taunting, teasing. What are they doing, right? Those are the two big things to explain. And those are the things where people go, that doesn't make sense, therefore this hypothesis is crazy. That's a little too fast, right? We need to say, okay, what could explain these two puzzles? So part of the first one is also: well, if there was somebody close enough to us to have appeared in the last million years, and civilizations were appearing at that rate, then the whole cosmos would be full of these things. Not just them, but, you know, there'd be a lot more others. So then we say: why aren't there lots and lots of others, even if these particular ones decided not to expand? If they're just 1,000 light years away, then another 1,000, and another 1,000: there's plenty of room for others, and somebody out there would do something, and then we'd see it, right? Okay, so my hypothesis to explain this has, matching the two puzzles, two main elements. The first element is the idea of panspermia siblings. So life is here on Earth, and it was here on Earth very early, and it's possible that it didn't start on Earth: it started in some other place, call it Eden. Life went a long way there, developed a long way there. And there are actually reasons why that would be a likely hypothesis, in the sense that life is really hard to evolve, and that extra time gives life a lot more time to do all the difficult steps it needs to. And then life transferred from that previous planet to the stellar nursery where Earth was formed with the Sun, along with thousands of other stars at the same time, all really close together, all throwing lots of rocks back and forth. So if panspermia, you know, had seeded Earth, it would likely also have seeded those other thousand stars. And then, relatively soon after Earth and the Sun were formed, the stars drift apart, and they spread into a ring around the galaxy.
And then there'd be these others, roughly 1,000 stars out there, each of which had some life on it initially, which was then advancing over the last 4 billion years. And then one of them reached our level before us. But the rest of the universe could be, like, empty for a really long way, right? So there's this key thing: the rare random event was Eden forming life. And then the places it seeded could more reliably reach our level at some point; we just might not be the first. And so this, you know, advanced civilization would be our panspermia sibling. It would be relatively close, in the same galaxy, a few thousand light years away. And it would appear millions of years before us, and it would know about all its siblings. So that's the first assumption. And the second assumption is going to be that, for some reason, they decided they didn't want to expand: they were going to stay home and not allow colonies. And we can talk about all the reasons why that might be an attractive thing for them to do from their point of view. They want to ban, basically, locals expanding out and changing the universe, which is why we don't see the universe around here all changed: they decided to ban that for a bit. But they knew, because the siblings were out there, that those siblings were advancing, and that the siblings could eventually reach an advanced level and then go expand themselves. And this rule that we can't expand, if you want to enforce it on the longer term, you have to go out to the siblings and enforce the rule on them. So the hypothesis is that's why they're here, but not everywhere else. They had to make an exception to their "hey, everybody stay home" rule in order to allow some expedition to come here to enforce that rule on us. So that explains why they're here now and not everywhere else, and why the universe isn't full of other things like them. But we still need to explain the, yeah: why are they hanging out at the edge of our vision?
David Wright: Always at the edge of our vision, even as our technology improves. There's an XKCD about that, as our cameras get better... Yeah.
Robin Hanson: Well, so the hypothesis is, they're here to convince us not to expand. Now, one way to do that would have just been to kill us. Obviously they didn't choose that, for some reason; they have some degree of sympathy or sibling attachment to us. And they didn't want to just tell us, but they do want us to follow their rule. So, on Earth, most social species have status hierarchies; it's a very robust feature of social species. And in a status hierarchy, you defer to higher status animals: you follow their example, you do what they say, and you try not to pick fights with higher status animals. That's what a status hierarchy looks like. They had that too, and they could predict that we would have it. And so they said: our strategy will be to come here and be the highest status animal in the tribe. They're going to come here and just impress the heck out of us, and not threaten us, right? So they have to be here, look like they're local, and be really impressive. And eventually, when we're convinced enough that they are here and they're really impressive, then we can figure out why they're here, we're smart enough for that, and we might follow their lead. Now, to make this work, all they have to do is hang around at the edge of our vision, being really impressive, just being around and not threatening. But they also need to not say too much. Look, we sometimes get the heebie-jeebies about other humans and the weird things they do, and that's all evil, right? And other humans are pretty close to us. These are aliens, really, I mean, alien. So if we knew their history and all their inclinations, we would find out something we hated, and that would be the end of this emulating stuff: they would be the other, the enemy, and we would be, you know, terrified and fight them. They don't want that outcome. So they have to impress us, but not let us know very much about them. They have to just be at the edge of our vision there. You know, they're not going to come and give us their history; they're not going to show us their appendages, how they have sex, how they have their babies. They're just not going to do any of that, not for a very long time, at least until they could be more sure that we would, you know, not take it badly. And so, now, this scenario has a number of pieces, and I did design these pieces to fit the evidence; that's, you know, some degree of looking for the theory that would fit. And that means the prior is not, you know, 50%. I'm saying one in 1,000, which is, you know, a priori unlikely, all else equal. But then you have this evidence, right? So now you have to fold in the evidence with this prior, like in a murder trial, and decide: how strong is the evidence, enough to convict? And I don't know the answer to that; that's not my job, right?
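To make the Bayesian bookkeeping above concrete, here is a minimal sketch in Python. The priors (one in a million for a murder trial, one in 1,000 for the aliens hypothesis) are the ones Robin gives; the likelihood ratios are made-up placeholders, since Robin explicitly declines to judge the strength of the evidence.

```python
# A minimal sketch of the Bayesian bookkeeping Robin describes.
# The priors come from the conversation; the likelihood ratios
# below are hypothetical placeholders.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a prior given LR = P(evidence|H) / P(evidence|not H)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

murder_prior = 1e-6  # chance a given person murdered a given victim
aliens_prior = 1e-3  # Robin's prior: the square root of the murder prior

for lr in (10, 1_000, 100_000):  # hypothetical evidence strengths
    print(f"LR={lr:>7,}: murder -> {posterior(murder_prior, lr):.4f}, "
          f"aliens -> {posterior(aliens_prior, lr):.4f}")
```

With these made-up numbers, even the one-in-a-million prior can be overcome by strong enough evidence, and the one-in-1,000 prior needs far less, which is the sense in which it is "plenty high enough to have a trial."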
David Wright: So what has intrigued me about that whole theory, I guess, is that the story is really very different from the stories you put together both in the hardscrabble frontier and in The Age of Em, in one particular way. In both of those, and we can maybe focus on The Age of Em here, what's explicit is that life, and call emulations life as well, we'll describe what those are in a sec, really pushes hard to fill every niche that it can, right? The expansion is, like, relentless and inexorable and forceful and aggressive.
Robin Hanson: Right. So my usual first tool of analysis is going to be competition and selection. And in addition, there's this key powerful principle: to predict what rich creatures do, you need to know what they want; but to predict what poor creatures do, you just need to know what they need to do to survive. Preferences matter less for predicting poor creatures' behavior. Interesting, okay? And so in The Age of Em and in the hardscrabble frontier, we have poor agents, and we can go into the reasons why. For agents who are near the edge of survival, it doesn't really much matter what they want; what matters is what it takes to survive. And that's why I can say so much about them, and yet in some sense be able to say less about what our world will be 50 years from now. Because our world 50 years from now will still probably be pretty rich, and therefore, to predict what we will do, you need to know how our wants will change. What will we want in 50 years? And that's actually not that easy to do. Now, with these aliens, I'm postulating that they are preventing competition in a certain key way, and that's going to tell us more things about them. To be able to do that is an unusual characteristic, a striking feature, and we can draw more conclusions about them from it. And I think that inspired me to think about that being an issue for our future: that we might also go down a path like that.
David Wright: So let's talk about, I want to get into, kind of why you think we'll be at a subsistence level, right, will be very poor in the future. So maybe, if you don't mind, tell us what the hardscrabble frontier and an emulation are.
Robin Hanson: Okay. So there were these two scenarios; let's spend a couple of minutes explaining them. The hardscrabble frontier scenario is easy to explain: imagine an expanding wave of colonists moving away from Earth as fast as they can. At the frontier of that wave, there's a selection effect: the behavior at the frontier is just whatever behavior it takes to get to the frontier first. If you have some other behavior, then you could enjoy your life and do other things; you just won't be at the frontier. So that's the competition that sets the behavior at the frontier: you are only there if you are doing the most competitive thing to get there. It's a race.
David Wright: It's not about the average one who comes in, the average citizen of that society, right? It's about, yeah, the margin.
Robin Hanson: Exactly right. So if we have an automobile race, we can't say much about the losers of the race: they could be all sorts of things. But the winners kind of have to, like, really find whatever it takes to be at the front of the race, right? And that's why competition is such a powerful analysis tool in the hardscrabble frontier scenario, okay? Now, The Age of Em is a different scenario. The Age of Em is a scenario that stays on Earth; it happens within roughly a century or so, and it's a scenario where brain emulations become possible. A brain emulation is: you take a brain, like mine or yours, and you scan it in fine spatial and chemical detail to see which kinds of cells were connected to what. Then you have computer models of how each kind of cell works, in terms of taking signals in, changing state, sending signals out. And with good enough computer models of all the different types of cells, and a good enough scan of where all the cells are in the brain and connected to what, you could make a computer model of the whole brain. And if you have a good enough scan and good enough models, the model you've made will have the same input-output behavior as the original brain. You could hook it up with artificial eyes, ears, hands, mouth, and then you could talk to it, and it would talk back; you could ask it to do things, and it would do things just like the original would in the same situations. And if we can make such a thing, which we can't now and probably can't for a long time, it would wholesale substitute for humans in the labor market. That is, you as an employer might rather hire one of these, because they're cheaper and flexible and have many other advantages. Okay, so this is a scenario in which we create brain emulations. And once we can create them, we can make a lot of them fast. It takes a while to scan any one person, but we can make billions of copies of any one scan. So the rate at which we could make these emulations would just be limited by the rate at which we could crank out computers in factories, especially once these emulations are helping to run the factories, making new ones, redesigning them. That's what sets the rate at which we could make more emulations. So the key idea is: the reason we are now rich per person is that for the last few centuries, we have been able to grow wealth faster than we've grown people. That wasn't true until a few centuries ago. For all of life, billions of years, and all of human history until a few centuries ago, the rate at which we could increase our overall productive capability was slower than the rate at which we could increase the population. So every time we were able to do more, we just had more babies, and then we were back to the same level of resources per person. In the last few centuries, we've been able to grow wealth faster than we've grown the population, because the technology for improving wealth has just been fantastic on so many fronts, while the technology for making babies hasn't improved much; it's pretty much the same tech, yeah. So if we ever, in the future, find a way to make babies, or substitutes for babies, a lot faster, then it's an open question: will wealth grow faster than babies, or will babies once more outrun wealth?
And The Age of Em is a scenario where, in fact, you can increase the population faster than you can grow wealth. We're no longer limited by the slow, awkward, expensive way that we make people today; you just have to crank out another computer from a factory, and that's enough to make another brain emulation. And the population can just grow really fast, and so the wealth per person falls. And the only thing that can really stop it, unless we coordinate strongly somehow, is subsistence wages. That is, you know, where you try to make another emulation, but it can't survive: it can't earn a wage high enough to pay for the hardware and energy and other things it needs. So that's why the Age of Em is an age of subsistence, an age of competition, and therefore an age where we don't need to know what they want to predict what they do. We just need to figure out what it takes to survive.
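Here's a toy numerical sketch of that Malthusian logic, with invented growth rates (nothing here comes from the book's estimates): when another copy can be "cranked out of a factory" whenever it can pay its own way, income per copy is pushed back toward subsistence however fast wealth grows.

```python
# A toy model of the Malthusian logic above, with invented numbers.

SUBSISTENCE = 1.0  # income an em needs to pay for hardware and energy

wealth, population = 1_000.0, 10.0
for period in range(5):
    wealth *= 1.5  # assume wealth grows 50% per period
    if wealth / population > SUBSISTENCE:
        # copying is cheap, so population expands until income
        # per copy is back at the subsistence floor
        population = wealth / SUBSISTENCE
    print(f"period {period}: income per copy = {wealth / population:.2f}")
```

However large the assumed wealth growth rate, the printed income per copy sits at the subsistence floor every period, which is the point: copying absorbs all the growth.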
David Wright: Probably the thing that most, I don't know, challenges me, challenges my agreement with this scenario... and to a certain extent, as I was analyzing the book and reading and thinking about it, you just sort of have to accept what's there, and then you can ask what's next. That's part of the fun of imagining distant and strange places: you can learn something even if you don't necessarily accept every premise. But one part of it that I have a hard time with is this concept of us really making humanity, can you put it that way, kind of fungible, right? It becomes a commoditization of individuality, which I just have a hard time with. There are a few distinct aspects, really specific ones, that challenge me. One of them was that we will have things called spurs, where you can create a person, and then, you use the word "end", you don't use the word "die", but they are equivalent, right? And to me, if they are genuine, you know, assuming away some tweaks, as you mentioned, I have a hard time seeing us as a society being so callous with life.
Robin Hanson: Well, so I presume you're aware of past societies. Up until recently, and for most societies, humans were pretty fungible commodities. That is, say, subsistence farmers don't actually vary that much in terms of how productive they are on the farm. Humans were grown as a commodity to be subsistence farmers, right? And we were pretty callous about them. That is, we produced a lot of them, even if many would die in a famine or war. Wars callously wiped out large numbers of them, repeatedly, over and over again. That is our human heritage. So I would say we've already demonstrated a willingness, you know, an ability, to treat humans as commodities, to treat humans callously, to make as many of them as, you know, society finds useful.
David Wright: So there's an interesting topic, another one I really wanted to touch on, and I'm very happy we hit it right now: explicit versus implicit priorities. So I think, and maybe I don't know, you tell me, you've definitely thought about this more than me, that except in a case like a Roman Colosseum or something, right, where you're killing people, and I think war is an interesting, strange case, for the most part we didn't mean to be callous. In fact, we've evolved a lot of social behaviors and institutions which are explicitly designed to prevent us from being so. You know, like, we care about each other, our family and all that. Even though I might have had the kids to put them to work on the farm, I do love them; I have that sort of biological attachment to my kin group, right? And I think that meant that although we were perhaps callous with them, it wasn't because we wanted to be. The bit that flips in the Age of Em is the explicit prioritization of many things, right? So you're more directly going after it?
Robin Hanson: Actually, I don't think that's right. So The Age of Em itself, the book, is explicit. But the people in the age of em aren't making their world according to my book. My hypothesis isn't that they read my book and then go build a world that matches it. My hypothesis is that they build a world through other processes, and my book is describing the world they have built; they are not building it according to this recipe or plan or, you know, explicit goals. So the main thing to notice is that all through history, the large-scale structures of society and the large-scale outcomes were just not anything anybody chose or planned or voted on. We have not been running society in that sense, right? Humanity has not been driving the train. You know, there's been this train of progress or change, and it has been a big, fast train, especially lately, and it is making enormous changes all through the world, but it is not what we would choose if we sat down to discuss it or vote on it. It's emergent. We just don't have a process for doing that, and we haven't had a process for doing that. And so my claim is that whatever analysis tools help us predict the past changes, changes that were callous and harsh in many ways, and commodifying of people in many ways, those past processes and changes that happened not through our approval or explicit analysis or assent will continue. And therefore I can continue to use those same analysis tools to predict what will happen, because I also predict that we will continue to not choose, that we will continue to have a world that's the result of many local actions being taken for local reasons, as they usually are. But that's a way to challenge my hypothesis: to say, no, we will, between now and then, acquire an ability to foresee the consequences of such changes, and to talk together and vote together on whether we want them, and we will have the ability to implement such choices. That would be a change in the future that might prevent the age of em.
David Wright: I am totally with you on that. I think the one kind of concept here is rendering a goal explicit. So expansion is, to a degree, an emergent result of culture, right? I mean, cultures do tend to, or they often have, a desire to expand, but I don't think that's top of mind necessarily all the time. Or rather, you know, when a culture will go to war and you can destroy another culture, assimilate it, that's kind of more explicit. But the idea of cultural evolution doesn't require people to go to war and kill each other; you can steal ideas and adopt them, and that still counts as cultural evolution, because now the idea is spreading. It's the ideas and the practices, the features that you're measuring across different cultures, that matter. So my point there is that you can have an implicit goal of expansion and still expand. The thing that's interesting about The Age of Em is that you are deliberately making humans, so many humans that you're making a subsistence economy, right?
Robin Hanson: Well, no one person is making so many of them. The first person makes some, yeah, they make them for some reasons. And then the second person makes the second batch, and then the hundredth person makes a hundred of them, right? But nobody at any point is choosing for there to be a billion of them.
David Wright: Right, well. So one thing that comes into my head here: there's a point, I think it's in this book, where you say, you know, if we would just explicitly pursue our goals of expansion, we'd be a lot better at it, right? Whereas right now we have this implicit, emergent force of expansion. But if we just sort of had it in our heads, you know, what we need, we need a couple billion people, and we're going to try and convince people to do it...
Robin Hanson: I don't actually remember saying that in the book, but I have in certain blog posts.
David Wright: Okay, maybe that's where I got it. Yeah.
Robin Hanson: That is, humans are creatures who follow a lot of impulses and inclinations whose source and meaning we are not very aware of. And that's the kind of creatures we are: we have high-level thoughts and abstractions and plans, but those are a response to lots of lower-level, more opaque inclinations and desires and pushes. And the question is: will that always be the way of future creatures? It would seem that if you could create a creature who had the basic goal, "I want to have many descendants, and I'm going to think about that consciously and plan it out, and that's what I want, and that's what my plans will be targeted toward," that agent would seem to win the competition to have more descendants over somebody who's following a set of ancient inclinations and pushes that once matched them to an ancient world, but don't match the modern world very well anymore, and don't adapt very well when things change. It just seems like that loses compared to having it as an explicit goal. So if evolution continues, and selection continues among the various goals that creatures could have, it would look like that wins. And in fact, somebody who wants to make creatures who win may just jump there and make that on purpose: say, let's just make an agent who's like that, because we know that wins anyway. But that's a longer-run thing than the age of em, so the age of em isn't necessarily the era where that happens, though it might happen during it. Just to be clear, by the way, the age of em probably only lasts a year or two, and then, you know, something else happens. And a lot would happen in that year or two: as much as happened during the entire many centuries of the industrial era, or the 10,000 years of the farming era, or before. So a lot happens in that year or two, but still, it's not the whole future. It's just an important year or two. And after that, change is probably even faster.
David Wright: Yeah. Okay. I mean, I was going to hopefully wait on that, but I can't now, because I feel like people would be like, what? So I suspect... I'm not sure I remember whether you explicitly give an explanation for why the eras are so short, why the dawn of the next era comes when it does. I forget, I don't know how you want to take this: is it brain emulation speed? Is it sort of this trend of doubling with every kind of new generation?
Robin Hanson: So my key argument is to say: we have seen a sequence of eras in the past, and although they have encompassed very different time durations, they have encompassed similar numbers of doublings. The number of doublings of the economy during the industrial era is similar to the number of doublings of the economy during the farming era, which is similar to the number of doublings of the economy in the forager era, which is even actually somewhat similar to the number of doublings of brain sizes in the half billion years before humans arose.
David Wright: That's amazing.
Robin Hanson: I mean, similar to within an order of magnitude. And so that's all I need to say: well, if the next era went through that many doublings, it would only take a year or two. Therefore, that's as far as I'm willing to predict. Now, it could turn out that the next era lasts for far more doublings than previous eras did, but I don't need to make that claim. And so it seems like the safe, cautious thing to say is: I'm going to predict roughly that far, and not further. Now, I do have some other thoughts, which I've had since writing the em book, about what will happen after that; I did try to do some more careful analysis there that we could discuss, but that is, you know, somewhat of a separate topic.
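For concreteness, here's that doublings arithmetic as a sketch. The doubling times are stylized round numbers in the spirit of Robin's growth-modes work, not his exact estimates, and the doubling count per era is an assumed order of magnitude; the point is only that a similar count of doublings, at an em-era doubling time of about a month, fits in a year or two of objective time.

```python
# Back-of-the-envelope version of the doublings argument.
# Doubling times are stylized round numbers, not Robin's exact figures.

doubling_time_years = {
    "forager era":    250_000,
    "farming era":      1_000,
    "industrial era":      15,
    "em era (guess)":  1 / 12,  # roughly a month
}

N_DOUBLINGS = 10  # similar number of doublings per era, to within 10x

for era, dt in doubling_time_years.items():
    print(f"{era:>16}: ~{N_DOUBLINGS * dt:,.1f} years for {N_DOUBLINGS} doublings")
# The em-era line is the punchline: ten doublings at about a month
# each is under a year of objective time.
```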
David Wright: I'm interested in that, but I'm on the edge of another point, because one of the really... maybe the most interesting and kind of, I don't know, chewable insights in this concept of emulation is brain speed, and its effects. It leads you to talk about objective time and subjective time. If you're running twice as fast, your subjective year actually takes six months of objective time, because your brain is going twice as fast. Now, what I wondered about, though, was: if you were able to do some kind of integral of the subjective-time annual growth rate, would that be different? Like, are you kind of just cheating by speeding up time, so you're packing more growth into the same space?
Robin Hanson: Hmm. I often do multiple methods of analysis, and when they lead to similar conclusions, I feel confident in those conclusions. And in fact, I have two different methods of analysis that lead to a similar conclusion here. One method of analysis says that today, with the factories we have today, the value of the factory typically equals the value of the product that goes through the factory within a few months. So if factories could make everything, then the economy could double every few months. That's a direct implication of the way factories work today; it assumes nothing about emulations, except for the idea that the reason we don't double the economy every few months now is that one of the crucial things in a factory is a human, and we can't make those in factories. That's why the economy grows by doubling roughly every 15 years: humans just don't grow that fast. But in the age of em, if you can make a substitute for a human in a factory, that's all you need to draw the conclusion that the economy could double in a few months or less. So that's one line of argument that gives you a doubling time for the economy, which then gives you the prediction that, given how many doublings there would be, the age of em would only last a few years at most. A whole separate line of analysis says that brain emulations can run at different speeds, and that I can actually estimate what speed they would typically run at from a trade-off of two considerations. First of all, there's the range of possible speeds. I think we can safely say they could go up to a million times faster than human speed, or down to a billion times slower than human speed, just from computer technology and the fact that the brain is a very parallel processor. And within that range, the cost would be linear; that is, the cost per subjective minute would be about the same.
David Wright: And so you would choose the speed based on something other than cost? What do we pay for?
Robin Hanson: Energy and the hardware and everything else, right. Therefore, you would choose the speed based on considerations other than the cost per subjective minute, and there are two basic ones. One is that, when an economy doubles, say, every few months, if your career lasts longer than the doubling time of the economy, you basically have to retrain several times during your career, which is expensive. So it makes more sense to fit a career into a duration that's the doubling time of the economy. And so, roughly, if the economy doubles every few months, and you subjectively have a quarter of a century of career, then that's roughly a factor of 1,000. So that says you don't want to go any slower than a factor of 1,000 faster than human speed, but you might want to go much faster, because if the economy doubles every few months, you'd otherwise have to keep retraining during your career, and that would be awkward and difficult. The other consideration is that ems can have much bigger, denser cities, because they don't have to do as much commuting: they can commute virtually, by just sending the bits that represent how they look and what they're saying; they don't have to send their entire minds. So within a city, they can move around and meet a lot without actually moving their brains. But that suffers a delay: the time it takes to send a light signal from one side of the city to the other. With familiar-sized cities, that delay starts to become serious when you're going faster than about 1,000 times human speed. So if you go much faster than 1,000 times human speed, you will notice awkward, difficult delays when you try to talk with other people in the same city. And in the em economy, because it can have bigger cities more easily than ours, most of the world economy would be collected into a small number of dense cities, and so talking across the city would be, you know, a big part of talking with the whole world, from your point of view. So this trade-off picks about 1,000 times human speed as a typical speed. Then if you ask, okay, if you're running at 1,000 times human speed, then in, say, three years you've run 3,000 subjective years. How far are you willing to predict a future civilization? Are you willing to predict it more than a few thousand subjective years out? And I'd go, that's getting a little iffy. And that's another reason you might think I'm going to stop my prediction there, and not go farther than that.
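Here's the rough arithmetic behind that ~1,000x figure, as a sketch. All inputs are illustrative guesses consistent with the conversation (a doubling time of a few months, a quarter-century subjective career, a city roughly 100 km across), not numbers from the book.

```python
# Rough arithmetic behind the ~1,000x "typical speed" trade-off.
# All inputs are illustrative guesses, not figures from the book.

ECON_DOUBLING_YEARS = 0.25    # economy doubles every ~3 months (assumed)
CAREER_SUBJECTIVE_YEARS = 25  # one subjective career

# Lower bound: run fast enough that a whole subjective career fits
# inside one objective economic doubling, so you rarely retrain.
min_speedup = CAREER_SUBJECTIVE_YEARS / ECON_DOUBLING_YEARS
print(f"minimum speedup ~ {min_speedup:.0f}x")  # ~100x with these guesses;
                                                # Robin rounds to ~1,000x

# Upper bound: at high speedups, light-lag across a city becomes
# subjectively noticeable.
C_KM_PER_S = 300_000
one_way_s = 100 / C_KM_PER_S  # ~0.33 ms objective across a 100 km city
for speedup in (1_000, 100_000):
    print(f"{speedup:>7,}x: one-way lag feels like "
          f"{one_way_s * speedup:.2f} subjective seconds")
```

At about 1,000x, a cross-city signal already costs a third of a subjective second per one-way trip; much faster and conversation across the city gets painfully laggy, much slower and you burn subjective career time on retraining.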
David Wright: I just can't help but ask: have I maybe missed a body of work where you speculate beyond the age of em? Maybe you want to touch on that for a sec, whatever you have.
Robin Hanson: So the key point is, ems aren't the only artificial minds. We have artificial minds and brains in other forms, right? We've got deep learning, we've got software. But at the moment, that other stuff isn't remotely as valuable as human minds; it just can't compete. Slowly, over time, it's getting better; slowly, over time, it can do more tasks. Eventually, those things may be able to do pretty much everything that humans can do, and at that point humans are not in that much demand, in which case the ability to make brain emulations of humans is also not in much demand. So this em scenario really only plays out before that happens. That is, we have to be able to make brain emulations before nobody wants humans, or human-like minds, anymore, right? And so that's a deadline. Now, if I look at trends, I'd say that deadline is many centuries away. So if I think the age of em is likely within, say, a century or two, then I think we will reach the point where emulations can be made while humans are still valuable, and therefore while emulations are still valuable. And of course, there's actually a bit of overlap, in the sense that emulations are cheaper than humans, so emulations are valuable even when humans aren't. Still, there's a deadline out there. And so a plausible end of the age of em, from a third, different stream of analysis, is to say: okay, the ems take over this process of writing software, deep learning, etc. That process still continues, even in the age of em. When do they finally reach artificial intelligence that competes with humans? And that, again, I would say, gives a few years. So again, three methods of analysis giving roughly the same estimate there.
David Wright: Right, because the ems figure out artificial general intelligence, or whatever, right?
Robin Hanson:But now, when we do have artificial general intelligence, we will have two kinds of minds available to us to do any given task, right: we'll have human brain emulations, and we'll have these other artificial general intelligences. Both of those types of minds will now be on more of an equal footing in terms of hardware. So today, computers get this artificial hardware, which, like, can go into space, which has many advantages, and we're stuck with this biological hardware, which has its own advantages and also disadvantages. But once we have emulations, then we both get to use the same kinds of computer chips, and we both have the ability to use the same energy sources and live in the same environments. And people will be changing and improving both kinds of software. So then the question is: we have, you know, two classes of technology; which wins where? Today, for example, you might have an internal combustion engine versus a nuclear engine. Those are two kinds of engines, and some win in some places and others win in other places, right? So you want to think about the different characteristics of the two kinds of technologies when you want to ask which one wins where. Now, sometimes one kind of technology just beats the other everywhere, and it's done, right? It just wins. And many people have said that about artificial general intelligence; they are just convinced that once we have it, and it's good enough, it will just win out over humans. And why do they say that? They say, well, look at humans, they have those biases, and you go, like, the other isn't going to have biases? Come on. Or they say, look, humans are, you know, stuck in the past, right? Human minds were designed a million years ago for some ancient environment, but our new systems are going to be designed for this new environment, so human minds will be old-fashioned. But you might counter that human minds have this enormous inherited heritage of all the things that have been built on them. So, today we often have platforms that have a lot of power because a lot of people have been adding tools to those platforms, and they can beat out a brand-new platform that's clean and shiny but doesn't have all those tools supporting it, right? So that's the question. And then I thought of a framework, I think, to help answer that question, a way to think about it. The key thing to think about is: what were the key driving constraints that drove the design of the human mind, and how do those differ from the key driving constraints we have been using in designing artificial software? And I think I found one that really tells me a lot. The key thing is that the human mind never found a key distinction that we have been using very reliably in artificial minds: the distinction between hardware and software. You're just used to the idea that you have computer hardware, and you can put different software on the hardware; that's just the thing we're all used to. But that's not true of our brains. Our brains tie hardware and software together. That is, as brains evolved, in order to add a new capability, the brain had to both invent a new algorithm and add a piece of brain hardware to run that algorithm. They were packaged together that way. And that meant the brain had to search for a long time to figure out how to add new packages.
So the brain couldn't just add new software without also adding hardware. And pretty soon, you have a big pile of hardware that's filling up the head and using a lot of energy, and then you can only add new software if you can find a way to either delete old software, or merge old and new software via abstraction. So the brain spent many millions of years under evolutionary pressure to find ways to add only the things that were really needed, and to abstract: to find ways to do two things at once with the same hardware-software package. That was, you know, a very hard constraint, and the brain spent a long time honing its design under it. Now today, when we have the separation of hardware and software, we sit down at a computer screen with, implicitly, all the old software and systems available. And we can start with a blank screen and just make a new piece of software, and we only need to connect it to the old software when we want to. And we know that every connection we make will make the system a little harder to modify and update, because connections are the thing that violates modularity. Modularity is our shining, powerful principle that lets us constantly add new pieces of software that use the old ones when we want to, and not when we don't. But we don't spend a long time thinking at the higher level about exactly the best organization, because we don't need to. And so the organizations we choose aren't the best, and they rot quickly. That is, over time we find that as we add more things, the structure gets problematic and our software rots, and eventually we throw large systems away and start over from scratch. But we have this enormous advantage of the blank screen, just writing new software, and the brain couldn't do that. So the thing you should expect about the difference between brains and artificial software, including machine learning software and the systems it produces, is that the human mind will be this marvel of modularity and abstraction, where it spent a long, long time working out what the higher-level structure should be such that it could do multiple things with the same modules, and it's done a spectacular job of that. And the software we make, we can make fast, but it's just nowhere near as well abstracted and well organized as the brain. So we have to expect that the artificial software just rots more quickly and has to be thrown away, and that will tell you where it's useful versus not. There are some other things we can say about where each is useful, but that's one of the main ones. Another one is just, you know, there are roles that humans used to be in, and many of those roles were designed for a human in that role, like a lawyer, say. So it'll just be easier to put a mind in that role that's structured much like a human, that thinks much like a human, because those roles were designed for humans. And that will also be a way to identify the roles where ems would last longer.
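As a toy illustration of that modularity point (my example, not Robin's): the cost of changing a system scales with the number of connections you have to re-check, and undisciplined connections pile up quadratically while disciplined ones grow only linearly.

```python
# Toy illustration (not from the conversation): every connection between
# modules is one more thing to re-check when either side changes.

def connections(modules, disciplined):
    """Disciplined modularity keeps roughly one connection per new module;
    ad hoc wiring lets any module touch any other."""
    if disciplined:
        return modules - 1
    return modules * (modules - 1) // 2

for m in (10, 100, 1_000):
    print(f"{m:>5} modules: ad hoc {connections(m, False):>8,} "
          f"vs disciplined {connections(m, True):>5,} connections")
```

The ad hoc count growing roughly as the square of system size is one crude way to see why unplanned structure "rots" and eventually gets thrown away, while a long-honed, well-abstracted design keeps working.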
David Wright:You know, that's really interesting. And I've seen your stuff on rot, so thank goodness I didn't miss all of that, Robin; I was worried I'd missed a whole strand of your thinking. And the connection to AGI in this post-em world I hadn't figured out, so that's pretty interesting. As you were talking about the ability to do multiple things with one innovation, let's say, my mind goes to status-seeking behavior and some of these other kind of puzzling features that you document and talk about frequently. And I think to myself — I just recently recorded a podcast with Stan McChrystal on leadership and uncertainty and risk, and one of the things I've noticed is there's this really interesting connection. Let's call it leadership: leadership is more or less asking the question of who's the highest-status person around, right? You follow the leader, right. And we're always in this competition to assume the mantle of leadership by trying to pursue status. Those are my words, and you can disagree; you've thought about status more than me. But to me, the thing that a leader does for us is resolve uncertainty. It guides us through this kind of world of risk. We were talking a little before we started recording about how we need to focus on things, and that can allow us to actually remove a lot of information and ignore stuff, by the ability to focus. And to me, that's one of the things that leaders do: they actually destroy information about the world around us. But the gift of that is it allows us to focus on things. And we like that, right? We crave that, actually. We want people to tell us that it's all okay, you don't have to worry about all this other stuff. That's something that leadership gives back to us. And yet we have all this other stuff that, you know, guides a lot of our behavior; but underneath it all, we kind of have this real objective of trying to reduce uncertainty in the world around us, and we're kind of doing a lot of stuff with some pretty abstracted behaviors. What do you think about that,
Robin Hanson:So, to make a connection back to the aliens discussion: a lot of people, when they think about the future, think about this competitive world that will evolve minds. And they think about things about humans they cherish, such as love, or our style of leadership, or humor, or music. And they fear, oh, the future competition will destroy that and take it away; that is, our style of these things won't be competitive. Yeah. And they imagine the thing that replaces it as some sort of horror, a Borg, for example, or just something else that isn't us. And that issue comes up over and over again. Every time we talk about something like leadership, we have to ask which elements of leadership are robust for any future creatures, you know, needing to organize and have somebody be in charge, and which are unique features of the way humans have done it, that aren't like how other animals do it or how the future will do it. And that's hard to tell. What we have to do is go into each feature and try to think about how general the functions it's performing could be. That's hard to tell, but many of them are general, and many of them aren't, right? We still have to think about that. But that sort of raises this scenario: what if we could anticipate the changes that were coming, and see how much our descendants would change? And then what if we blanched at that and said no? If we could coordinate to stop it, would we? And that's the scenario by which you might imagine the distant aliens chose not to allow expansion. Yeah, they chose to lock down and prevent competition because they feared changing themselves that much. And so you can start to sympathize more. The key point is, in, say, a planet-wide civilization, or even a solar-system-wide civilization, a central government can actually function to prevent a lot of big kinds of competition. That's a feasible thing on a planet or in a solar system. But once you allow interstellar colonization, it just looks a lot less feasible. So that's the line you might draw. I'm saying, if we let colonists go out willy-nilly and keep going, they will become strange to us, repellent to us; we don't want that; we would even be forced ourselves to become that in competition with them; we don't want that; and we will stop it.
David Wright:Because we'll lose the connection. They'd be little Galapagos Island kind of places, right, because, you know, the limit to coordination is the speed of light, sort of, right? I mean, it's really a limit of time, right. So sending a message to somebody three or four solar systems away is much harder than talking to your neighbor. With my kids, I'm reading Hail Mary, the book, right? Brilliant. And, you know, they're too far away for us to talk to each other. And so, if you had a civilization out there, you're gonna lose touch. And, if only because they're evolving further, culturally or whatever, they're going to become weird.
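For scale, here is a minimal sketch of that light-speed limit, using real approximate distances to a few nearby stars; the star list is just my illustration.

```python
# Round-trip light delay to nearby stars: the hard floor under any
# interstellar coordination. Distances are approximate light-years.
NEARBY_STARS = {
    "Alpha Centauri": 4.4,
    "Sirius": 8.6,
    "Vega": 25.0,
}

for star, light_years in NEARBY_STARS.items():
    round_trip = 2 * light_years
    print(f"{star}: ask a question, wait {round_trip:.0f} years for the reply")
```

At lags of years to decades per exchange, a central authority can't meaningfully supervise a colony; each settlement drifts on its own, which is the Galapagos dynamic being described.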
Robin Hanson:Right. And so you can sympathize with people who would say, we can't let that happen. So consider our distant ancestors: if you could have described us to our ancestors from 300 years, or 3,000, or 30,000 years ago, if you could have really described us in great detail, a great many of them would have been horrified, and said, no; if I could do something to prevent that, yes, I will push that button, I will do what it takes to stop that from happening, because they would not have approved. But we may now have enough foresight and enough coordination to stop such things, and we may choose to do that. And, you know, that's a concern I have. Because if we will allow ourselves to change, there's this enormous potential we have, and by enormous I mean, like, we could fill, you know, a billion light-years, for billions of years, with active, complicated, passionate, lively descendants. That's what I think we could do. Or we could say no, they would be too weird.
David Wright:Yes. And the trade we make — maybe your superpower, Robin, is that you're a little more comfortable with weird than other folks. That's possible. When I was describing your work last night to my wife, right, she was not familiar with it, and she has different interests than I do, let me say. And she didn't want to hear it. I was like, what do you mean? She said, this feels to me like a tale we tell ourselves to throw our failings in our face. Right? So it's like dark science fiction, kind of like a morality play, which is holding up a mirror to you and saying: you're actually awful, all you care about is expansion and, you know, rapacious, hardscrabble kind of stuff. And it horrified her, you know, relatively speaking, right? And what's interesting about that — I mean, many things are interesting about that, I think — but one of them is we kind of don't like the idea of expansion. I think culturally we resist it. Generalizing a little bit, we resist the grasping for power and influence, right? And that's kind of one of the things that I was pondering about the concept of
Robin Hanson:we didn't use to. So that's something that's changed in us in only the last few centuries.
David Wright:Why would you say we didn't use to?
Robin Hanson:Well, because in the past, they did an awful lot of pretty conscious grasping and expansion.
David Wright:Well, some subset of people did, right.
Robin Hanson:Yes. And most of the others were on board.
David Wright:Well, they were forced along, right, and maybe, you know, coming back to this point,
Robin Hanson:other people expanding into you is a lot less popular than you expanding out into others, right?
David Wright:Well, leaders, like, you know, we have the baron or whatever who's going to try to conquer territories. You have a small number of individuals with outsized prominence in our historical record, right. And the thing that is maybe an achievement of our society is we've shut that down, right? We said, you know what, we don't like that anymore.
Robin Hanson:Well, I'm trying to be more neutral about what's an achievement versus not, and just say: look, through human history, up until a few hundred years ago, almost all civilizations, or almost all people in them, were pretty fine with us going out there. And in fact, migration and conquering were a major force in human evolution, human behavior, and history for a very long time, until very recently. In most societies, the idea that some of us would just go out there was a standard thing. It wasn't a strange hypothesis; it was to be expected that some of us would try to do that. In fact, that was often a way to divert internal conflict: take a bunch of people who would otherwise stay around and cause trouble, point them in a direction, and say, you know, we'll help you go out there and start a new place. That was just a thing humans had done for many, many thousands of years. It wasn't at all strange; it wasn't at all horrifying. It could be scary, of course, if you thought about the risk of failure, but it was celebrated in many ways. And so it's a remarkable change if we feel otherwise now.
David Wright:Yeah, I suppose this is a point of discussion, right. Does that trend continue, or does it reverse? You're kind of telling a story about what might happen if it reverses. And my feeling is, if we were to pick something to become even more explicitly focused on, it's actually preventing some of that behavior. The continued pacification of humanity seems to be
Robin Hanson:a very basic question here is how to explain the major changes in values and attitudes over the last few centuries. Yeah, right. It's central to the story here, right? We're trying to predict how our values will continue to change, and we've seen this enormous change in values and attitudes in recent centuries, and it's not obvious why there should be such changes in values. Humans had pretty stable attitudes and values for a really long time; why would they change? So my story there is the story I tell at the beginning of Age of Em about the major past eras, and how the farming era, which started say 10 or 15 thousand years ago, roughly, was a wrenching transition from the prior foraging era. Foragers live in small groups of roughly 30 people; they only ever meet maybe 150 people in their lives. They are very egalitarian; they have a lot of leisure time; they share; they don't allow anyone to dominate their group; they're relatively promiscuous. And they do what feels right much more often than farmers did, in the sense that they don't need to resist their inclinations; they can go along with them, because their inclinations were evolved for their world. Farming became possible, and was only possible, because humans have enormous cultural plasticity; as Henrich would tell you, we are capable of becoming different things via different cultures. It's incredible. But that's only if culture changes to push a different thing. So cultural evolution explored the space of different possible cultures and found cultures that were congenial to farming, and then those cultures won. But those cultures pushed a pretty different set of attitudes and values than foragers had, which were pretty horrifying to foragers, actually. You know, the farmers put up with enormous ranking and domination, war, conflict, marriage, property, not sharing, not leisure: things that foragers just would not like. And in many sort of objective ways, the quality of life just went down for individuals moving to the farming world. But the farming values and attitudes were more conducive to the farming world. For the war, the property, the hard work, the less variety of travel and less variety of food, the new social rankings: the new attitudes were more conducive to and supportive of the new farming behaviors, which is why they won. So that was this enormous wrenching transition. And then, in the last few centuries, we found a way to grow wealth faster than population. And the key idea is that the social pressures that turned foragers into farmers were quite often mediated by poverty. So if you were a poor farming young woman, tempted to have a child out of wedlock, which is a very tempting thing for a forager to naturally do, you would be told, quite credibly, that if you did that, you and your child might starve. And you would see concrete examples of that that could be pointed to. It was a credible, real threat, and the sort of threat that kept people following the farmer norms. As we got rich, those threats became less believable and less compelling. You tell a young woman today that if she has a child out of wedlock, she may suffer... what, exactly? Whatever she may suffer isn't remotely as threatening as what she would have been threatened with 1,000 years ago. And she may well go with what her feelings tell her, what feels natural to her, what would feel natural to a forager. So the cultural pressures that turned foragers into farmers are just not so compelling for us.
And that's why, over the last few centuries, we have been drifting back toward forager attitudes and values outside of work. Work is the goose that lays the golden egg: we have all this freedom to change our attitudes and values because we are so productive at work. So we keep whatever attitudes it takes to remain productive at work, which are kind of hyper-farmer attitudes, and so we're sort of schizophrenic. At work, we put up with more ranking and domination than farmers would put up with; honestly, most farmers wouldn't have put up with it, and we do. But once we leave work, we have had trends toward increasing leisure and art, less overt inequality, less ranking, less domination, less war, less slavery, less religion. These are the major trends over the last few centuries, and they can be explained, I say, by reverting to our forager attitudes outside work, because we're rich. And so, if that's the explanation, then we can predict that as we continue to get rich, this will continue to be the trend: a Culture-novel or Star Trek future, where we get richer, we have to work less, we indulge more in what feels good, and there's less ranking, less domination, less slavery, less war, less religion. Those trends should continue as wealth continues. And, right, even less fertility, for less fertility is also part of the package of predictions: this model explains the demographic transition, the great reduction of fertility, to some degree,
David Wright:But the end must eventually come, right? Which is, we will asymptotically go to zero growth, right? On a per capita basis. And so it seems to me — you call it a dreamtime — maybe it'll last a long time, but
Robin Hanson:So the per capita growth doesn't have to end if we keep shrinking the population. Well, short of extinction: we can keep increasing per capita wealth and decreasing the population until we reach extinction. Sure, that's one of the possibilities.
David Wright:Yeah, that is a possibility. I mean, I would assign a fairly low probability to that myself. We will eventually see that it's a problem.
Robin Hanson:But overall growth could continue for over 1,000 years. And, you know, at recent projections, it would take less than 1,000 years to reach extinction from a declining population,
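As a rough check on that timescale, here is a minimal sketch using my own illustrative inputs (8 billion people, 30-year generations, fertility settling near one child per woman), not projections from the conversation.

```python
import math

# Rough sketch of "extinction in under 1,000 years" from low fertility.
# Assumed, illustrative numbers: 8 billion people, ~30-year generations,
# and fertility stabilizing at the stated children per woman, so each
# generation is (children_per_woman / 2) times the size of the last.

def years_to_extinction(pop=8e9, children_per_woman=1.0, gen_years=30):
    shrink = children_per_woman / 2.0      # population ratio per generation
    if shrink >= 1.0:
        return math.inf                    # stable or growing: never shrinks away
    generations = math.log(pop) / math.log(1.0 / shrink)
    return generations * gen_years

print(f"{years_to_extinction(children_per_woman=1.0):,.0f} years at 1.0 child/woman")
print(f"{years_to_extinction(children_per_woman=1.5):,.0f} years at 1.5 children/woman")
```

With fertility near one child per woman, the population halves every generation and winks out in roughly a millennium; at 1.5 it takes a couple of millennia. The point is only that the order of magnitude matches the "less than 1,000 years" remark, not that these inputs are forecasts.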
David Wright:If we did nothing about it — and we'd have to do nothing. So, I want to start closing this out; we're running out of time here. I brought you up in my conversation with Joe Henrich, and I want to get your reaction to a line of thinking. One of my readings of your body of thought is that our status-seeking institutions, our culture, our other habits, are kind of messy. We don't always pick the right, or the most, I don't know, effective — choose your goal. For the most part, we have a fairly general way of determining status, and it's not that great. But if it is the transmission mechanism for cultural evolution, it's done all right; I mean, it hasn't destroyed us, right? Picking high-status people to mimic is the transmission mechanism for culture, and that's worked all right. What I'm interested in: one of the things Henrich says is that there are two kind of general approaches to status, dominance and prestige, right? Dominance was more important back in the day, and now it's prestige, this other thing. Which is really amazing to me, because if another mechanism evolved or emerged for choosing status, that presents the possibility of yet another one emerging at some point. And I'm wondering if maybe you could close by speculating on what that might be, if you were to pick something: what your prediction might be for what might emerge as another way of assigning status and transmitting culture.
Robin Hanson:So there are different timescales here. In the Age of Em, in the book, I do talk about what would be new status markers. So, for example, today we care about per capita wealth, but we hardly ever care about how many descendants somebody has. I would think that in the age of em, the size of a clan would matter, in addition to the per capita wealth in each clan; that would become a new status marker. I talked in there about how school would become less important, because you could more reliably estimate someone's productivity from the other copies. So, you know. But in the very long run, you know, I have this thing about being explicit about reproduction. So