How will artificial intelligence shape our future?

17 January 2020

Artificial Intelligence, or AI, is often talked about as the next technological revolution.

It has the power to completely change the way we live—but many people worry about the pace of change, as well as the risks and ethics involved.

Join us for this fascinating chat with one of the world’s leading experts in the field, Professor Toby Walsh.

Toby’s latest book looks at how the world might look in 2062—and how AI will change the way we live.

From the loss of basic skills like reading a map, to a new golden era of philosophy, Toby makes some bold claims and offers some fascinating insights.

Toby chats to BETA’s resident data science expert, Dr Shea Houlihan, about the future of AI and the major role behavioural science can play. Don’t miss this one.

You can hear the full story, and all BETA’s past episodes, on the website.

Disclaimer: BETA's podcasts discuss behavioural economics and insights with a range of academics, experts and practitioners. The views expressed are those of the individuals interviewed and do not necessarily reflect those of the Australian Government.

Transcript

[music]

Dave:

Hello, and welcome to another episode in BETA's podcast series. In this episode we discuss the future of the world that artificial intelligence could create. We're lucky to be joined by one of the world's leading experts in AI, Professor Toby Walsh of the University of New South Wales, as well as BETA's resident expert on data science and machine learning, Dr Shea Houlihan. Enjoy.

Toby Walsh, professor of artificial intelligence at the University of New South Wales, welcome. And welcome also to BETA's resident data science and machine learning expert, Dr Shea Houlihan. Thanks to both of you for coming along.

Toby:

Pleasure to be here.

Shea:

Appreciate it.

Dave:

Toby, we'll start with you. So you're a professor in artificial intelligence or AI. What does that actually involve? Because as I understand it, AI is really just a collection of a lot of different disciplines?

Toby:

It is. I mean, we always joke that AI is trying to get a computer to do all the things that we can't yet do with a computer. And the mistake that people make is they think AI is just one thing. But our intelligence isn't one thing; we have a bunch of different skills for perceiving, reasoning, and acting in the world. And AI is about trying to get computers to do all those different skills.

So it's a collection of tools and algorithms that we have developed over time to help computers perceive the world. That's recognising images, understanding speech, those sorts of things. Then reasoning about the world, making decisions like we make decisions. Learning from the world. So there's things like machine learning in there. And then acting in the world. So I have colleagues who build robots that actually take actions in the world. So it's a lot of different things brought together, bringing in expertise from computer science, philosophy, linguistics, psychology, neuroscience and lots of other areas.

Dave:

So are these all areas that you practise at the university, or do you build robots one day and teach philosophy another, or is it just a nice, handy umbrella term?

Toby:

It's a handy umbrella term. My background is mathematics and computer science. But increasingly, you have to worry about things like the ethics and the philosophy of what you're doing, because it has consequences, material consequences, on people's lives.

Dave:

And that is sort of the premise of your latest book, is that right? Should we go into a little bit about what 2062 is all about?

Toby:

Yes, sure. I mean, perhaps I should say why the title, why the year 2062. I should point out, I did go and give a talk in a library in North Sydney and a lady came up to me and said, "What's this about my postcode?" I said, "It's not about the postcode 2062. It's about the year 2062." Which I didn't choose; it was chosen, actually, by 300 of my colleagues. I did a survey of other leading experts around the world on AI, and that's when they predicted, on average, that machines would finally be as smart as humans. Now, I should point out, there was huge variance in their answers. 8% of them said never, but 92% of them said sometime in the next 50 or 100 years. And the median estimate was 2062.

Dave:

Okay. And it looks at a lot of the implications for society as a whole, doesn't it? One of the things I found fascinating was that you're saying a lot of skills will just depart humanity, like reading a map or potentially even driving a car, quite soon?

Toby:

Well, we have lost some skills already. I mean, I think we're probably the last generation that knows how to subtract. When I make change in the supermarket, people look at me blankly and wonder why I've given them extra money, because it means I'll get a round number back. So that's a skill that's gone. Remembering telephone numbers. No one remembers telephone numbers. We've outsourced that to our phones. The only telephone number I can remember is the one I had when I was a young boy: 712469. I can tell you what the number is because it's not connected to anything anymore, which is a wonderful number to remember because no one can actually even dial it, not even myself. Machines remember that for us.

And so other skills will go. I mean, the next skill I suspect will be map reading. We're outsourcing that to our apps. It's interesting to think that this will change us. I mean, it's known that reading maps and understanding the topography, the geography of a city, changes the brain. Taxi drivers in London who do The Knowledge, the years-long test to get a black cab licence, you can measure the physical change in the hippocampus, the part of the brain involved in spatial memory. You can actually see the increase in the volume of the hippocampus. So there's a physical, structural change in your brain when you remember all those roads.

And so if we're outsourcing this to machines, that will be changing us. I suspect people won't perhaps understand how the earth goes around the sun so well. I mean, I could still walk out into the street and tell you which direction south is. But if we're relying exclusively on our apps, then maybe those sorts of skills will depart us.

Dave:

It's interesting, isn't it, how as humans, if we can take a shortcut, we will. If we're losing all of these skills ... I mean, are we getting a little bit lazier, or are we actually replacing those with new skills?

Toby:

We may get lazier. Of course, we get something back in return. I mean, we always get something back in return. We spend perhaps less time being lost, which is perhaps a good thing. But not understanding how the earth goes around the sun might be a bad thing. It's always worth remembering: every technology takes something away and typically gives us something back. And a good example I like to give to people, a fantastic example, is literature. When we invented literature, people at the time said, "We're going to lose something. We're going to lose the oral tradition." We used to sit around the campfire and remember stories and pass them down from generation to generation.

And it's true, we don't sit down and remember those long stories anymore. We lost that oral tradition in most societies. But we got something back, and actually what we got back in exchange was far, far better. We got back literature. We got stories that cross time and space. We got the stories of Shakespeare and Wordsworth. What a fantastic return we got for that. We gave up the oral tradition but we got something much better back. So the interesting question, actually, that perhaps we should ask with all technologies is: what do we get and what do we give up with that technology?

Dave:

And the book kind of goes on to predict that possibly this could be a second renaissance, or a golden age of philosophy. So are these the sorts of things that you think, in a sense, technology will give us back: that these other areas might be able to blossom?

Toby:

Yeah, I think there are two things it will give us back. One is that we'll appreciate some of the things that really make us human, the things that distinguish us from the machines. It will give us back an appreciation for those things. And then the other thing it will give us back, of course, is time. If we're careful, we let the machines take the sweat, and we get the thing which is actually the most valuable thing, the thing that even money can't buy, which is our limited time on the planet. And we will get more of that time to do the things, hopefully, that are important to us, and not to do what most of us do, which is spend most of our waking hours feeding ourselves and clothing ourselves and housing ourselves.

The machines can actually generate much of that wealth and take much of that sweat and give us back the time to do the things that we think are more important.

Dave:

And those things that make us human, I guess that's what we in behavioural science do our best to try to understand a little bit more. Shea, what do you see as the crossover between artificial intelligence, if you like, and behavioural science?

Shea:

Well, I'll take the academic route, if I may, and say that it'll be a two-way road, I think. I think that artificial intelligence will help to enrich applied behavioural science. But I think that applied behavioural science also has a little something to offer artificial intelligence as well. And the way I see it, I think that the benefits of machine learning to applied behavioural science may come in the form of personalised interventions. We already see, especially in the private sector, that it's possible to look on social media and see that you're receiving personalised ads. That is to say, ads that go out only to a very, very narrow subset of people, and maybe, in the future, only to you. I also see that machine learning will maybe help us identify what we in behavioural science are very concerned about, and that is cognitive biases, especially cognitive biases that we haven't previously diagnosed.

Daniel Kahneman, the world-famous psychologist and winner of the 2002 Nobel Prize in Economics, points out that the three biases he's most concerned about are optimism bias, i.e. the fact that we are typically wildly optimistic about our future selves; present bias, the idea that we are so focused on the here and now that we neglect to plan for the future, and anyone who has teenage children I expect can appreciate that point; and confirmation bias, the fact that we tend to seek out external validation, that is to say, news and information that affirms our previously held positions. And I think that while we care about these three cognitive biases and a whole host of others, machine learning may help us to identify systematic biases that people tend to exhibit in a range of other areas as well.

But I want to argue that behavioural science may help to translate humans to something like an artificial intelligence, to this quintessential rational actor. Because of course, most mainstream economists have long been concerned with the idea of individuals as perfectly rational, as thinking about their future selves, as being as concerned about the extent to which we can achieve utility in the future as we are about achieving utility now. But we've known for a long time now, and behavioural science has brought this to the fore, that we don't always act perfectly rationally. We don't always act in accordance with our long-term interests. And yet we are getting closer and closer to creating something like the quintessential rational agent. So I think that to the extent behavioural science can educate, can train, can show an AI what it is to be human, as in why we make faulty decisions, why we act the way we do, there's real potential there.

Dave:

And it's something I think that really comes out in the book, Toby: that we're going to need to explain some of our decisions to artificial intelligence in order to make sure it can work around us and work with us properly. Because, as you say, there are a lot of examples of algorithms that exhibit human biases, but also artificial intelligence that ... You give some catastrophic examples of where it could all go wrong if we don't explain how it should interact with humans in an imperfect world?

Toby:

Yeah, there are plenty of markets that we're building where there are humans and AIs interacting, and those AIs need to respect the fact that humans aren't going to behave completely rationally and they need to make rational choices under that working assumption. So there's a huge, great interplay that's going to happen between AI and behavioural science to actually build markets that work fairly and efficiently, given those limitations of human decision-making. And of course, given the limitations of machine decision-making. All of these systems have their own limitations.

Dave:

Yeah, of course. And I think the next big challenge that you raise in the book is, what are going to be the challenges and opportunities for governments? Because this could fundamentally, and will fundamentally, change the way that societies work. And we're going to need to think long and hard about what structures are in place. What do you see as the sort of big challenges for government?

Toby:

I think one of the biggest challenges is the changing nature of work. That's one we're already starting to see impacting upon people's lives. And one where we really need to start making decisions, because kids going to kindergarten today will be working in the second half of this century, and many of them will have careers where they do a dozen or more jobs. So how can we equip those people? How do we re-skill the people already in the workforce? Because it's pretty clear ... I don't think jobs are going to disappear. There's always going to be plenty of work to be done, but the nature of that work is going to change. Just like it has changed.

If you went into an office back in the 1970s, 50 years ago, it looked quite different to an office today. There were typing pools. There were no personal computers. Almost nothing that we were doing back then are we doing today. And equally, because we do seem to be accelerating in these, to use that often misused word, exponential times, in 20 or 30 or, at most, 50 years' time, offices and most workplaces are going to look completely different than they do today.

So how do we allow people to re-skill themselves through that change? And with technologies, many of which have yet to be invented. They're not things that we can necessarily teach people today because we have no idea what they are. I like to remind people, 12 years ago there were no smartphone app developers because we only just invented the smartphone 12 years ago. 20 years ago, there were no social media promoters because social media was yet to be invented. All of these things, all these jobs appeared out of nowhere.

Dave:

And it was an interesting point I thought, that you've raised before, that we do need to make some pretty tough ethical choices about what jobs we do give to machines. I mean, you gave the example of the software that is being used in the court system in the United States. Is that something that we actually want to use machines for? Are you able to talk a little bit about that?

Toby:

Yes. The capabilities of machines will get better and better. But there's always that question: do we necessarily want to give them all of the jobs? You mentioned the example of the legal system, where there are some worrying examples in the United States and elsewhere where we are outsourcing some of that decision making to machines. And we should think, I think, very carefully about that, because those are very fundamental decisions; they are some of the toughest decisions we make as a society. Depriving people of their liberty, it doesn't get much more difficult, much more challenging than that. And if we think carefully, when we make those sorts of decisions we often, in many societies, hand them to a jury of our peers. And we've done that for hundreds and hundreds of years.

And to hand those decisions to machines which don't have our empathy ... and it's not clear to me that they'll ever have our empathy. And certainly, they can't be held accountable for their decisions. They're machines. They don't have our conscience, they can't suffer, they can't be punished. All those sorts of very human characteristics are things that they lack, and are likely to always lack. So it's not clear to me that we should ever hand those really high-stakes decisions to machines. And certainly from a legal perspective, they can't be held accountable. So ultimately, a human has to be held accountable. Only humans can be held accountable for their decisions.

Shea:

If I may, Toby, can I ask, do you see ... This is a simplification, but do you see the role of an AI in government, potentially, as being more one of delivering interventions, or delivering the outcomes of a particular decision, and of aiding in decision making, rather than being at the centre of the decision?

Dave:

And that kind of brings us neatly onto maybe what the opportunities would be for government. And Shea sort of mentioned service delivery obviously, but are there any other opportunities we see for AI?

Toby:

It's worth pointing out, government is perhaps uniquely placed to take advantage of AI, because a lot of it is being driven by technologies like machine learning, which need lots of data. And government is the institution that has perhaps the most data, and the ability to use that data, and has the most to gain from the delivery of better and more efficient services. So there's a huge amount that government can do. Of course, all of those decisions are going to require walking a narrow line on the ethics of what we're doing and making sure that it's done in an equitable, fair way.

Dave:

Yeah, absolutely. It'll be an interesting 50, 60 years or so. And just finally, what's next for you, Toby? Obviously, your first book ended at 2050, this one's now into 2062. What's next on the horizon?

Toby:

So the new book, 2062, was really an answer to all the questions people asked me when I wrote my first book, which just told people about the technology. So the second book was about the impact it has on society. And my next book, which I'm halfway through writing now, is an answer to the questions that people keep asking me now that they've read 2062, which is all around the ethics. It's all around thinking deeply through the ethical choices. As I say in 2062, I think the next 50 years could be the golden age of philosophy. One thing that computers are is frustratingly literal devices. They do exactly, and only, what you tell them to do. And so some decisions and values that we haven't thought through very carefully, or haven't formalised very well in our society, will have to be made very precise if we're going to get computers to be making those sorts of decisions.

Dave:

And so you'd recommend that any budding students out there probably take up philosophy, is that what I'm hearing?

Toby:

Well, yeah, funnily enough, a friend of mine did say to me, "Toby, you're working in a very practical subject." Which I found quite funny, because 10 years ago no one thought what I was doing was very practical. But anyway. And he said, "My daughter's about to go to university. She wants to study philosophy. Can you persuade her to do something more practical?" And I said, "I hate to tell you this, but I'm going to applaud her choice and tell her there's going to be plentiful work for philosophers moving forward." Every company's going to have a CPO, a chief philosophical officer, to help them deal with all the challenging decisions they have to make. And government's going to have to be full of philosophers to help decide these challenging issues. I mean, what does it mean to write a fair algorithm? How can we make sure that we're not discriminating against particular groups?

Dave:

Sounds like it's time for us all to retrain. Toby and Shea, thanks very much for coming on the podcast.

Toby:

Been my pleasure.

Shea:

Thank you.

Dave:

That's it for this episode. Don't forget to follow us on Twitter and read more about BETA's work on our website. Until next time, thanks for listening.

[music]