Few subjects occupy the minds of futurists these days more than automation and its effects on the future of work. Various forms of automation, robotics and artificial intelligence are changing the world of work and threatening to render millions, or perhaps billions, of jobs obsolete. A general shutdown of much of the world’s economy during the current pandemic crisis only accentuates the issue: automation and remote work are the only things keeping many businesses going.
Ever at the forefront of major global issues, the Millennium Project has issued a far-reaching report looking at the possibilities for a greatly altered world of work between now and the year 2050. Millennium Project executive director Jerome Glenn spoke to me recently to discuss the findings:
Jerry, thanks for joining me.
It’s nice to be here.
Mark Sackler 1:31
Your report is entitled ‘Work/Technology 2050: Scenarios and Actions’. What exactly is its purpose, and what went into its making?
Jerome Glenn 1:44
Well, its purpose is to broaden and deepen, and make more global, the discussion about the future of work and technology. We did about half a year’s worth of reading of everybody else’s studies, and there are few that really look at the global situation. It was basically people looking at their own country, at one industry, and maybe five years out. And some of the big changes coming up are cultural changes; we can’t make those in five years easily. So that’s why we picked the year 2050: there you can talk about economic system changes and cultural changes. It gives you more elbow room to talk about new ideas.
One of the things that was a surprise when I read everybody else’s reports (not everybody’s, but as many as I could get hold of) came from doing keyword searches on terms: not one of them mentioned synthetic biology. That’s amazing when you think about the next 25-30 years. Most futurists I know think that synthetic biology and genetic engineering, and a lot of these new industries and activities, are going to be gigantic, and that wasn’t mentioned at all. So the purpose was to broaden it, deepen it, make it more global and long range, and to open up a conversation, because as part of the study we also did workshops around the world. So we really helped to change the conversation.
Mark Sackler 3:29
Synthetic biology and genetic editing are interesting, and I’m going to get to them again toward the end of our talk. But in the introduction to the report, the very introduction, you bring up a very controversial issue: in the past, when new technologies killed off entire old industries, they usually created at least as many jobs, if not more, in new industries. But it may be very different this time. Why is that?
Jerome Glenn 3:56
Now, in there I think I ticked off something like seven reasons – I might not remember every one – but one, of course, is the speed of change. Driving down the road at 25 miles an hour is not the same as driving down the road at 250 miles an hour. It makes a difference; speed by itself makes a difference. Two, the interdependencies of things. Now I’m holding a telephone, which is also a flashlight, which is also a calendar, you know, the whole nine yards. The integration of technologies is far faster and far more complete than before. So the manufacturers of flashlights are knocked out, the manufacturers of catalogs are knocked out, the manufacturers of cameras are knocked out, etc.
So here’s this one little device, taking all these jobs away, and we just take it for granted. So the integration of technologies is another factor. A third one is that when you add in AI, continuous repetition and machine learning go faster and faster, and the products change faster and faster, which means we don’t get a chance to rest on a plateau as easily. For example, I had a laptop in 1992. That’s quite a few years ago, and I still have a laptop today. The laptop looks similar to what it did back in 1992. Obviously, it’s got a lot more capability than it had in ’92, but I got used to the idea of carrying it around, so culture had a chance to adjust to the new technology. But if technological change is so fast, we don’t have a chance to adjust culturally and socially to the technology, because it keeps changing. We’re constantly learning a new this and a new that, and it’s just a constant deluge. So that’s a problem coming up.

Another one is that when we went from the industrial age to the information age, we did not yet have the internet; the internet was part of that change. But now we have the internet, and more than half the world is attached to it, which means that half the world can have instantaneous transfer of technology and information without errors. For example, when I was helping countries get packet switching, which was the backbone for the internet in the third world, I might make a mistake. I might write down something wrong, or I might make some sort of error as I traveled from one place to another. But when you have instantaneous transfer, and you have a global platform for training, you don’t have those errors, and the reduction of errors is a big change as well.
There are some more I listed in the introduction, I’m sure, but this gives you a flavor of why this is different than before. It’s not only a matter of time, but also a matter of degree. Now, that doesn’t mean we’re necessarily going to hit all that unemployment. That’s why we talked about different scenarios. It doesn’t have to be a disaster economically.
Mark Sackler 7:15
Okay, well, you said a magic word there while we’re still on the introduction, and that’s artificial intelligence. It’s a big fear-inducer as far as jobs are concerned. But as you pointed out previously on this podcast, one needs to distinguish between three types of AI. Just briefly reiterate that for the benefit of those who might have missed it.
Jerome Glenn 7:36
Sure. Artificial Narrow Intelligence has just one narrow purpose. So AlphaGo, the AI that wins at Go, the Chinese board game, that got everybody excited; or IBM’s Watson that won at Jeopardy; or the chess champion being beaten; or the AI that drives a car or diagnoses cancer: they’re all single purpose. The AI or machine learning that you put together for driving a car does not diagnose cancer, does not play Go, etc. Furthermore, if you take the software that beat the human Go champion and change the rules of the game, say from a 19-by-19 grid to a 20-by-20 grid, the software wouldn’t work; the human would wipe it out. So narrow AI does have machine learning, it does get smarter, it does all those things that people are talking about, but within a specific category.
Now, Artificial General Intelligence we don’t have right now, as far as I know. The military may be ahead of some of the civilian stuff, I don’t know, but in any case we don’t have it in public. Artificial General Intelligence is a little bit like us. Not the same as us, but like us in the sense that when we’re confronted with a new problem, we call up a friend, we do a Google search, we do all kinds of stuff to figure out what to do. Narrow intelligence, again, stays in one category, but general intelligence can initiate its own research approach to solve a problem. It can draw on the Internet of Things, it can draw on sensor networks, it can draw on historical records. It does all kinds of things. It sort of acts like we do, but is not the same as us. Now, the reason we hear a lot of controversy from guys like Elon Musk and Bill Gates and others is because of the next intelligence, and that’s Artificial Super Intelligence.
The difference between Artificial Super Intelligence and Artificial General Intelligence is that super intelligence sets its own goals independently of us. And that is an issue, because we don’t know how long it will take to go from general to super. It might happen immediately. It might take many, many years. We don’t know.
But first of all, we don’t know if we can do general intelligence. It may take a long time, but I would bet that we’ll eventually get it, and if we get it, then it seems inevitable that we would go on to super. So the big controversy is whether to worry now. Some people argue it’s possible to get general intelligence in as soon as, say, 10 to 20 years; the military is working on these things, and there’s a race between the United States and Beijing and so forth. If it is possible to get it in 10 to 20 years, and if it’s going to take 10 to 20 years to create the international agreements and standards that can prevent general intelligence going into super in a way that we don’t want, then that means we have to start working on it today. And that’s another reason, one of the things that came up in our study, that we have to come up with rules, regulations, audits, treaties, and international governance systems in anticipation of artificial general intelligence. Because if we hit it, and it slides into super before we’re ready, then the warnings of the science fiction writers will come to pass.
Mark Sackler 11:38
Well, indeed, I know there are some efforts in that regard. The IEEE issued its guidelines for the ethics of autonomous systems, but to what extent that’s going to be looked at, particularly by governments, is another issue. But let’s move on to the actual scenarios, because there are two major parts to the report, and the first is three different scenarios for the work outlook in 2050. To me as a futurist, these look very much like a baseline scenario, a collapse scenario and a preferred scenario. But let’s take them one at a time, starting with the first, entitled ‘It’s Complicated – A Mixed Bag’. That strikes me as maybe a baseline scenario, but my reaction to that title is also that any future where we’re all still working and functioning as a society is going to be complicated, and probably more complicated than things are today. But tell us about that scenario and its findings.
Jerome Glenn 12:45
The idea of the first scenario is your baseline, or your projection. Now, a projection under a rapidly accelerating rate of change doesn’t mean that 2050 looks like today. There’s still a whole lot of change; a lot of different stuff happens. It just happens with a couple of assumptions. One, there are good decisions, there are dumb decisions, and there are non-decisions. Two, as I mentioned, I was involved in the early days of the internet spreading around the world, and it was very uneven how it spread and how it was used. Very irregular. So we might assume that a lot of the technological advances, and the decisions about them, are also made irregularly around the world. So you have some countries that have strategies and do okay, and some that don’t. And so you get migrations; you get places with environmental impacts and failed states, and then you have high unemployment rates, loss of jobs, and jobs that would have been created as economies grew but didn’t get there. Then you have a bunch of mass migrations at various points, and so it’s a mixed bag. There are some wonderful things going on in scenario one, but there’s a lot of turmoil in there.
Secondly, corporations are getting larger and stronger. And if it’s a trend projection, then you have to assume that they continue to get larger and stronger. Well, this would mean that corporations have moved beyond government control in many cases; not always, but in many cases. One of the classic things I think about in this kind of transition is a painting of Napoleon when he became emperor. You know, the normal thing is that a religious leader anoints him and gives him the crown. In this painting, Napoleon grabs the crown out of the religious leader’s hands and puts it on his own head. That’s sort of like the transition: you still have religion, but the power moves to the nation state.
And now a lot of the power is moving from the nation state to the corporations, and so corporate control is far more powerful in that sort of world. Now, there’s a lot of good stuff and a lot of good ideas in there. These scenarios are very rich; each one is ten pages. They’re not the little snippets that people often call scenarios when they’re really just describing a view of the future. Here, each one has a lot of different elements, a lot of good ideas all by itself. And scenario one also explains a little bit about synthetic biology and how it creates a lot of new jobs; that ends up being one of the new growth areas in the economy.
And in scenario one you still have slightly more people, so there’s more new work and jobs than in the past. We had about 3 billion in the workforce in 2000. By 2050, you’ve got about 6 billion, maybe more than that. We figure that you’re still going to have a billion people doing jobs, in the sense that you have an employer, you have a salary, etc. So not everything changes: a billion people to run civilization. But then you also have the tremendous growth of the self-employed. You’ve got a lot of people, increasingly, who are self-employed, and you also have the informal economy, which is basically self-employment as well. But now we have the technology that allows a self-employed person to find markets worldwide. Whereas in the informal economy, you know, you’re selling something to somebody down the trail; you can’t get to a world market. But now you can. So you have economic growth, but you still have about a billion unemployed or transitioning in that scenario.
Mark Sackler 16:49
It sounds to me like more of a linear scenario. Now we get to the second one, which you call ‘Future Despair’. That doesn’t sound good. Perhaps it’s a collapse scenario; please elaborate on that.
Jerome Glenn 17:04
Yeah, well, here governments and people didn’t anticipate the shift to Artificial General Intelligence. So when it hits around 2030 or 2040 and then eventually starts to spread, by 2050 you have a shockwave of unemployment. When you have narrow intelligence, let’s say getting rid of truck drivers, you can prepare for it. Not all the truck drivers are going to be automated in one day. You can phase things in, you can invest in your truck, there’s retraining and so forth. Whereas when you hit general intelligence, it hits across many different fields simultaneously. And that’s the real worry about unemployment: a country can have an unemployment rate of, say, 7 or 8 percent and get along, maybe get up to 10 or 15 percent, like Spain and a lot of developing countries that have these problems. But if you jump up to 50, 60, 70 percent over several years, countries can’t absorb that. So you go into social chaos. Local militias start to run things; the Yakuza gets powerful in Japan, the mafia gets powerful in other places. And corporations start to create, in a sense, their own countries, so it’s a very fractionalized world. It’s a very violent world. And organized crime grows tremendously here, because when decisions aren’t being made, someone fills the gap.
In scenario two, one of the assumptions is the ability of the internet to create, in a sense, little bubbles. People listen to their own group, they get deeper into their own group, they stay in their own little world. And so decisions that need to be made across society can’t get made, because you have group B saying group A is no good, and group A saying group B is no good. All this polarization gets worse and worse, and because it gets worse, a lot of things that should be decided don’t get decided through governments and international organizations. So they get decided by others, such as organized crime and corporations, as things get a little rougher. It’s a bad world we don’t want to get into. By the way, when we did workshops on these things around the world, Israel said that scenario two is likely for them. That’s a scary thought.
Mark Sackler 19:45
Yeah, it most certainly is, and in terms of the segmentation and compartmentalization of groups by ideologies and views, I think I’ve seen that politically in a lot of the world, pretty much in the US right now, with alternate news angles and the like. So the third scenario, finally, describes what I would say is the preferred future: self-actualization. What does that entail? How might it unfold? And how realistic a chance do we have of achieving it?
Jerome Glenn
Well, this is where everything works great. Obviously the future is not going to happen exactly like any of the three scenarios; it’ll be a mix, probably of all three. But in any case, the idea here is that countries, governments, and people take seriously anticipating what could be a serious impact on unemployment, and then have strategies in place and implement these sorts of actions. So the transition, when general intelligence eventually hits, becomes quite smooth and welcome.
Now, one of the key elements in here is the artist, the role of art. What runs culture is art, music, TV, movies, you know, all this sort of stuff. This is what tells us how we’re supposed to be, to a large degree, because religion is losing its power. It’s still there, but it’s losing its power, and the media and the arts take over much of that role. But here the arts create alliances, and they start to say we have to get people ready for a post-job-only future. Right now people identify themselves by their work: being a good lawyer, a good plumber. My identity is that I am a futurist. We get these titles often because somebody hires us.
But if we don’t get hired as much in the future, then we have to get ready for the idea that we invent our own future. People don’t think that way. They think: I get an education, I go get a job, I do what I’m supposed to do, and I retire. But if people get laid off, we don’t want to throw them into the streets. So along comes the guaranteed income idea. Now, one of the purposes of writing a scenario, by the way, is to find out what you don’t know that you should know. When I was writing this scenario, I didn’t know that there weren’t any cash flow projections on guaranteed income. Guaranteed income is a perfectly reasonable idea if the arts help people understand that they’re changing their self-identity; right now, if you don’t have a job, your self-identity is bad. That’s why the arts are important in this scenario. But if you create a guaranteed income, it’s got to be sustainable. You don’t want to break the bank, so to speak.
So I immediately started contacting Finland and Switzerland and a bunch of other countries to get their cash flow projections, because they’re experimenting with it. And I found out nobody had a cash flow projection; I didn’t know that there weren’t any of these things. So we used a questionnaire process around the world to collect ideas on how to put together the elements to make it financially sustainable. Imagine two curves. One is the cost of living, which is going up right now. But the things that we have to pay money for will eventually start to come down in cost, like medical diagnostics: once you have good AI for medical diagnostics, you don’t have to pay a doctor, and once you duplicate the software, it doesn’t cost you a whole lot. Transportation is already starting too; Denver and different places around the world are slowly changing it. Education: because we’ve got this disease running around the world, people are starting to realize, hey, we can do our education online. So that starts to take off, and the cost of education starts to go down.
So a lot of the things that we would normally spend money on start to go down in cost. Imagine a bell curve where the cost of living goes up, peaks maybe around 2030 or so, and then slowly starts to come down, so that what you have to pay somebody as a living wage or living income, so they’re not thrown into the streets, is less in the future than it is today. But we’re not there yet.
The second curve is new forms of income. We don’t tax robots, but we probably will. We don’t tax artificial organisms, but we probably will. So there’s a lot of new income coming up. When you take labor out of production, the costs go down, and at the same time the wealth goes up, because you’re getting more income per unit. So it’s reasonable to assume that the new sources of income will go up while the cost of living comes down. Where those two curves cross over, we project, loosely speaking, around 2030 to 2035 or so, which is about when we’ve got to take seriously the idea of Artificial General Intelligence affecting the labor market. So in this scenario, people start making a living by being themselves. For example, Mark, I enjoy talking to people about the future. That’s what I enjoy, that’s what I like to do, and I learn and all that sort of stuff. So I’m making a living out of doing what I want to do. I’m self-actualizing.
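Glenn’s two-curve argument can be sketched numerically. To be clear, every curve and coefficient below is invented for illustration; the report itself only projects the crossover loosely around 2030-2035 and gives no formulas.

```python
# Illustrative sketch of the two-curve crossover argument.
# All coefficients are hypothetical; they are not from the report.

def cost_of_living(year):
    """Cost of a basic living income, in arbitrary index units:
    rises, peaks near 2030, then declines as automation drives
    down the cost of medicine, transport, and education."""
    t = year - 2020
    return 100 + 2 * t - 0.15 * t ** 2

def new_income_sources(year):
    """Public income from new sources (robot taxes, taxes on
    artificial organisms, etc.): assumed to grow steadily."""
    t = year - 2020
    return 40 + 4.5 * t

# The first year the rising income curve crosses above the cost curve
# is the point where a guaranteed income becomes financially sustainable.
crossover = next(
    y for y in range(2020, 2051)
    if new_income_sources(y) >= cost_of_living(y)
)
print(crossover)  # → 2034
```

With these made-up coefficients the crossover lands in the mid-2030s. The point is the shape of the argument (one falling curve, one rising curve, and a crossover year), not the particular numbers.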
Societies go through these evolutionary steps, as you know, and much of the basic needs of the world are being met. In 1980, over half the world was in extreme poverty. Today, even with all the population growth since then, less than 10% are in extreme poverty. So the basic needs of life are being met worldwide. And if the guaranteed income can take care of the basics (you’re not going to make much money, just enough so you’re not thrown into the streets), then you have the elbow room, the flexibility, to start to ask: Who am I? What is my gift, or what are my gifts, to the world? How do I want to play in this world? With a guaranteed income, you’ve got that flexibility, so then you can actually make new income. It’s not that people are going to be poor; you can still make an income, but you’re defining your own life. So in scenario three, we’ve got something like three or more billion people, out of the six, making a living in the self-actualizing economy. It also means they can take on social causes that they might like to take on, which they can’t if they’re working at a job that leaves them dead tired at the end of the day. But now you can make a living out of being yourself, finding markets for yourself around the world, communing with others who are similar to yourself, getting feedback, and becoming more self-actualized as you go and take on those causes. So a lot of the good stuff that we’d like to do in the world will have the space to be done. It’s a very positive scenario, but it depends to a large degree on whether you can make the guaranteed income sustainable, which looks like it will eventually be possible, not now, but maybe by 2030, and on us starting today to take all these issues seriously. And that goes back to the purpose of creating these scenarios to begin with.
Mark Sackler
I must say you killed two birds with one stone, because UBI was going to be my next question. So let’s go on, just very briefly, to the second half of the report, which provides recommendations for action in five major categories: government, business, education, science and technology, and culture and arts. Right? How were these arrived at? And realistically, how useful are they?
Jerome Glenn 28:24
They’re really useful. Going back to the literature review we did before we started this study, most of the reports really didn’t say what to do. They would say something vague like retraining or education. Some would get a little more specific by saying STEM education (science, technology, engineering, and mathematics). Okay, but in 2050, what percent of the world can make a living in those categories? Not everybody. There’s a difference between how many people you need to make civilization work and how many people you need to make civilization worthwhile.
Well, if we just focus on making civilization work, we’ve got an awful lot of people who are going to be unemployed, because you don’t need everybody to make civilization work. So we thought it was important not to just write the scenarios and let them go; we used the scenarios as input to national workshops. We had two in the United States, in Washington, DC, and then workshops in another 29 or so countries around the world. The idea was: read the scenarios before you go to the workshop, think about them, then you can throw them in the trash can, meet with your friends and colleagues, and ask, what should we do? We divided those discussions into different groups, because what business and labor should do is different from what education might do, so education and learning was another group, and what government should do might be different again. So we divided it up, and these categories came from the workshops. As I mentioned, there were about 29 countries but 30 workshops, because some countries, like the United States, had more than one. Then we took the results from these workshops, and because there’s obviously a lot of overlap (the things you recommend for government in the United States might be recommended in France, and so forth), we compressed several hundred recommendations down to about 80 or 90. And because we couldn’t send out that many recommendations for people to evaluate (if I sent you a list of 90 things to evaluate, Mark, you would probably never get around to it), we kept them in those same categories: five different Delphis. So there’s a Delphi on business and labor with those actions to assess, a Delphi on government and governance with those actions to assess, and so on.
I don’t think there have ever been five simultaneous Delphi studies before, by the way; that may have been a landmark in methods. Then we took all the responses from all those Delphis and compressed those down. So what the reader gets in the report is, let’s say, 20 recommendations for business. They get those recommendations and how they were judged in terms of likelihood and impact through the Delphi rating process. But they also get a page of commentary from around the world, distilled down. I mean, there are hundreds of pages of these things, but we had to distill them. So you get a page of analysis for each of the 93 actions. This is the most extensive study on what we should do about all this; the other studies aren’t even close. So it’s a rich menu. If you have a responsibility to say, what should we do in our university about this, or what should we do in our government, this is a rich resource, because not only does it give the advantage of an action, it also says, here’s how you can mess it up, and here’s what you’ve got to consider. So it’s a whole analysis of each action, not just a list of actions.
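The rating step Glenn describes, where panelists score each action on likelihood and impact and the responses are then aggregated, can be sketched roughly as below. The actions, the scores, and the 0-10 scale are all invented here; the study’s actual questionnaire, scales, and panel sizes are documented in the report itself.

```python
# Rough sketch of aggregating Delphi panel ratings. All data is hypothetical.
from statistics import mean

# Panelist ratings per action: list of (likelihood, impact) pairs, 0-10 scale.
ratings = {
    "tax robots": [(7, 9), (6, 8), (8, 9)],
    "guaranteed income pilot": [(5, 9), (4, 8), (6, 7)],
}

# Average each dimension so actions can be compared and ranked.
summary = {
    action: {
        "likelihood": round(mean(l for l, _ in scores), 2),
        "impact": round(mean(i for _, i in scores), 2),
    }
    for action, scores in ratings.items()
}

for action, s in summary.items():
    print(f"{action}: likelihood {s['likelihood']}, impact {s['impact']}")
```

In the real study this quantitative summary was paired with the distilled qualitative commentary Glenn mentions, one page of analysis per action.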
Mark Sackler 32:26
Alright, one last issue I want to hit you with. What makes this so complicated, of course, is that there are so many different potentially disruptive technologies emerging that will interweave and influence the future of work. But to me, transhumanist technologies that might literally change what we are as human beings could be the most disruptive, and you hinted at that a little bit in terms of changing the typical life trajectory. These could change what it means to be human. Some of that is synthetic biology, but these technologies also include brain-computer interfaces, digital twins, and biotech interventions such as genetic enhancement and radical life extension. To what extent has any of this been taken into consideration in the scenarios? Or is it too far out there?
Jerome Glenn 33:16
No, as you probably know, it’s salt-and-peppered throughout all three scenarios. One of the things we talk about, I think in scenario three, is a little bit of the conscious-technology stuff. There, you have an integration of the human with technology. Like right now: I’m looking at you and talking to you, but what I’m really doing is looking at a piece of metal and plastic and talking to a machine that talks to a machine that talks to you. You and I have somehow figured that we’re talking to each other, but we’re really talking through a whole bunch of intermediaries, and those intermediaries seem to disappear. So we have created, in a sense, a mini version of conscious technology. Now imagine being so interconnected with technology, and it so interconnected with you, that where the technology begins and the human consciousness leaves off is not clear, just as it’s not clear as I’m talking to you in this video conference. This is just a mini version of that. Now, I wrote a book about this 30 years ago, I don’t know if you know it, called Future Mind. It was about the merger of consciousness and technology, and one of the variables I had in there was how well our mystic self and our technocratic self can get along, within ourselves as well as in civilization. Because the masters of technology are the technocrats, and the masters of consciousness are the mystics. I don’t mean religion; I mean the mystical experience. The trouble is, these have been the polar prejudices throughout all time, the tool makers and the consciousness sharers. You probably hear it in your futures work: ‘What we have to do is raise consciousness.’ No, no, no, what we have to do is pass a new law. ‘Well, a law without changing consciousness won’t matter.’
Well, this is the argument between consciousness and technology, all the time. And if we can make a synthesis, a harmonious merger of the two, then I think the future civilization of all these technologies will be good. If not, it could be quite bad. Here’s one I think your audience can appreciate: when a great piano maestro plays a Chopin piece and they interview him afterwards, they ask, ‘Well, what was it like playing?’, and he says, ‘The music, the composer, the piano, my fingers, my mind, all merged in one moment of performance.’ One moment of performance. Imagine a civilization being a moment of performance of that relationship of consciousness and technology together. We’re not talking about that one much yet, but I think we’re going to.
Mark Sackler 36:16
Jerry, just to wrap this up, where can our listeners find this report?
Jerome Glenn 36:22
Well, they can do a Google search on millennium, spelled the English way, not the French way. Millennium-project.org; you go there and it’ll give you all the information you want.
Mark Sackler 36:39
So again, I thank you, as always, for your great insights, and I look forward to the next time we have a chance to chat.
Jerome Glenn 36:49
Excellent. Take care of yourself.
Mark Sackler (Postscript)
Of course, none of these scenarios anticipated the black swan that is COVID-19. And even if the world of work, and the global economy with it, emerge with minimal long-term damage, the world has been forever changed. This underscores why futurists prefer scenarios to firm predictions: there are just too many possible futures.