BETA Podcast: Interview with Kate Glazebrook

14 November 2018

Do people assess job applicants differently at 4:35pm than at 9:15am? Can a web platform smash through biases in recruitment and help target candidates with exactly the right skills for the job?

In BETA’s latest podcast, Kate Glazebrook, CEO of Applied, talks us through some ground-breaking tech.

Kate takes us on her journey from the Australian Government to CEO of the UK’s first behavioural science tech venture, via Harvard and UNESCO.

This is the latest in BETA’s BX2018 podcast series, where we recorded frank and fascinating discussions with speakers at the 2018 Behavioural Exchange Conference.

Disclaimer: BETA's podcasts discuss behavioural economics and insights with a range of academics, experts and practitioners. The views expressed are those of the individuals interviewed and do not necessarily reflect those of the Australian Government.

Transcript

Elaine Ung: Hello, and welcome to another episode in BETA's podcast series. BETA is the Behavioural Economics Team of the Australian Government, and the team sits in the Federal Department of the Prime Minister and Cabinet. BETA's mission is to advance the wellbeing of Australians by applying behavioural insights to public policy and administration. Hi, my name's Elaine, and I'm part of the team. In this podcast series, we interview a whole range of behavioural economics and behavioural insights academics, practitioners and policy makers.

This episode is actually a part of our mini BX series, where we interviewed some of the speakers at the 2018 Behavioural Exchange conference or BX2018. There, speakers travelled from all around the world to share the work they've been doing in the behavioural economics and insights field.

In this episode, I interview Kate Glazebrook, who is the CEO and co-founder of Applied. Applied is the Behavioural Insights Team's first tech venture, and it's a web platform that uses behavioural insights to help organisations find the best person for the job. Before joining Applied, Kate was the team's principal advisor and Head of Growth and Equality. She's also worked for UNESCO in South East Asia, and for the Australian Treasury. She does very interesting work. Hope you enjoy!

Hello, I am at Behavioural Exchange Conference, BX 2018 today, and I have with me Kate Glazebrook from Applied. Thank you so much for taking the time out of your day to join me now. Could you please give us a bit of background, Kate, who you are and where you've come from?

Kate Glazebrook: Hi, Elaine, it's great to be here. I am Kate Glazebrook. I am the CEO and one of the co-founders of Applied. It's the first technology spin-out of the Behavioural Insights Team, and we use behavioural science and data science to remove bias from hiring decisions. We're a technology platform aiming to use the best available evidence to help people to hire better and fairer.

Elaine Ung: Great, and how did you become interested in behavioural economics?

Kate Glazebrook: I became interested in behavioural economics when I was working in the Australian Government, in the Australian Treasury, and I started to read ‘Nudge’ and think about ways that we could be using this burgeoning field of behavioural science for social good in government. I then went to the US and studied at Harvard, and specialised in behavioural science and analytical methods. And had the opportunity to work with some phenomenal academics, including Cass Sunstein, who spoke this morning, and think about ways that we could use this entirely new toolbox to achieve some of the things we'd thought couldn't be achieved using traditional government tools. I then went from there to work for the Behavioural Insights Team in the UK, and from there have spun out Applied.

Elaine Ung: Wow, fantastic. Could you please share with us what you're working on right now in Applied?

Kate Glazebrook: In Applied, we're really focused on how to improve predictive validity in hiring. So how to help organisations select the person that's most likely to be a good fit for the job and stay in the job while, at the same time, removing any risk of bias or noise from that decision. So that you're not influenced by factors like that person's background when you're deciding whether they're going to be great for the job. At the moment, one of the things we're really focused on is how do we leverage this huge corpus of data we have on ways of testing candidates to help organisations make sure they're focusing on the things that really matter.

If you think about getting a job in government, there might be many things you might test a candidate for. You might want to know whether they've got the analytical skills required to crunch through the data you're going to give them. You might also want to know whether or not they've got the EQ to deal with all the stakeholders that inevitably exist in developing any kind of policy well. Then you might also want to know whether or not they have the right approach to user-centred design: do they really have an understanding of who's going to be the end beneficiary of these policies, and how best to elicit the needs of, and responsibilities we have to, those particular groups? We start to look at ways of testing candidates so that you really identify the things that truly matter. What's a day or a week in the life of a policy analyst at PM&C, and how do we use that to get better at identifying the kinds of people who are going to succeed in those jobs?

Elaine Ung: Great! So how do you counter some of these biases through Applied?

Kate Glazebrook: In many ways we actually have tens of nudges, everything from how do we interact with candidates, when do we send nudge emails to remind them to finish their applications, right through to how do we design feedback when candidates don't get selected for the job.

One of the features of Applied is that we make sure they get skill-based feedback that tells them what their strengths and weaknesses are in different areas of their application so that they can learn from them. And of course loads of behavioural science can help us design a page that tells you that information in a way that's helpful and resilience-building rather than demoralising at a moment when you're told this wasn't quite the perfect job for you.

But I guess the part that we're known most for is how we remove bias from shortlisting decisions. There are four key things that we do. The first thing we do is remove names and all demographic details of candidates, because when you're assessing a candidate you don't really need to know that person's name to know whether they're going to be great at the job. We know from decades' worth of research that sometimes names and other details get in the way. Cognitively, they become a distraction, and we can fall prey to affinity biases and stereotype biases. So we remove those kinds of distractions. That's the first thing we do.

Then what we do is we actually slice up applications slightly differently. We flip the default most of us have of reading applications top to bottom, almost vertically. So you'd read Elaine's application, then Michael's application, then Kate's application. We say, "Actually, why don't we take the three of us, think about what our skillsets are in a particular dimension, and look at those back to back without knowing who's who?" Maybe we'd take our ability to get into user-centred design: how good are we at that? We look at each of those candidates back to back, then turn to a different dimension of the application and look at those back to back. Through that, we can account for the halo effect: because you don't know who's who, suddenly you can identify that Kate's really strong in one area but actually didn't do so well in a second area, whereas Elaine, on the other hand, did really well in both. If we'd read the applications top to bottom, we might have got a little bit distracted by who started off well.

The third thing we do is we allow multiple people to score without seeing each other's scores. We might assess candidates quite differently, and in fact a lot of our data suggests we're all quite idiosyncratic in how we understand candidate quality. The platform's designed to leverage that diversity but make sure that no one person's view dominates.

Then fourth and finally, we actually order candidates in a completely randomised way. We try to leverage some famous research about Israeli judges. Many people might have heard of a piece of research that seemed to suggest that Israeli judges were more generous in their sentencing after they'd had food. We wanted to know whether that sort of thing held for recruitment decisions: does it matter whether you're being assessed at 4:35 PM when everyone's tired and fatigued? It turns out in our data that it does. We found that people mostly tended to be more generous with their scores of candidates at the beginning of a process and became harsher over time. We also found all kinds of other variance that was really more about the context of the decision than about the quality of the candidate.

To counter those biases, which are successive contrast and ordering effects and re-referencing, we simply randomise the order of candidates so that everyone gets a fair shot at being seen first, middle, and last. Those are just four of the ways we pack behavioural science into product design so that we can remove bias from shortlisting decisions.
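(To make those four mechanics concrete, here is a minimal illustrative sketch, not Applied's actual code: identifying details are reduced to an internal ID, answers are reviewed skill by skill rather than candidate by candidate, each reviewer scores independently, and the order candidates appear in is re-shuffled for every reviewer and every skill. The data model and the score_fn callback are hypothetical.)

```python
import random
from dataclasses import dataclass
from statistics import mean

@dataclass
class Application:
    candidate_id: str   # internal ID only; reviewers never see names or demographics
    answers: dict       # hypothetical, e.g. {"analysis": "...", "stakeholder EQ": "...", "user-centred design": "..."}

def review_chunked(applications, reviewers, skills, score_fn):
    """Score applications 'horizontally', one skill at a time. Each reviewer scores
    independently, and candidate order is re-randomised per reviewer and per skill
    to counter ordering and contrast effects."""
    scores = {skill: {a.candidate_id: [] for a in applications} for skill in skills}
    for reviewer in reviewers:
        for skill in skills:
            apps = applications[:]
            random.shuffle(apps)  # everyone gets a shot at being seen first, middle, and last
            for app in apps:
                # the reviewer sees only the anonymised answer for this single skill
                scores[skill][app.candidate_id].append(score_fn(reviewer, skill, app.answers[skill]))
    return scores

def shortlist(scores, top_n=3):
    """Pool each candidate's scores across reviewers and skills, then rank by the mean."""
    per_candidate = {}
    for skill_scores in scores.values():
        for cid, s in skill_scores.items():
            per_candidate.setdefault(cid, []).extend(s)
    return sorted(per_candidate, key=lambda cid: mean(per_candidate[cid]), reverse=True)[:top_n]
```

In this sketch, score_fn simply stands in for a human reviewer entering a score against a scoring guide; the design choices that matter are that the answer is anonymised, reviewers never see each other's scores, and the shuffle happens afresh for each reviewer and each skill.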

Elaine Ung: That's really, really excellent. Do you have a favourite behavioural economics concept or behavioural bias that you've come across?

Kate Glazebrook: Oh, being a pragmatist, I guess my favourite has to be the most effective one, which is defaults. Just wildly, wildly more effective than any other type of behavioural nudge we've yet been able to test. Literally thinking about whether the opt-out and opt-in rules are set in the direction we actually want people to be acting is a wildly obvious but nevertheless incredibly powerful technique that we in government and elsewhere can use to best effect. That could even be: is the default that I send you a reminder or not? It doesn't have to be the scale and size of a default around how you save for retirement, which is often the one people think about; it can be just the small defaults we use in our everyday lives.

Elaine Ung: Yes, I think defaults are something we've seen in the marketing world for so long but that haven't really been leveraged by government in actual policies with real practical applications that can change people's lives. How do you BE yourself? How do you apply behavioural economics in your everyday life?

Kate Glazebrook: Well, I wish I was doing it better. My former colleagues from the Behavioural Insights Team Rory Gallagher and Owain Service have written a book specifically on this and I need to take more of their tips from that book. But, I guess some of the ways in which I do it, I think a lot about how I write emails. A lot of people have pointed to the way that women and men write emails slightly differently and that women have a tendency to qualify themselves a little bit more. They start sentences saying, "I wondered if you might think about," et cetera, or they say please and sorry a lot more than they maybe need to, to just get to the point of what they're aiming for.

I BE'd myself by downloading one of those email add-ons that highlights the words you use in an email that may seem a little bit excessive, so you can remove them. Now, it's really tricky to change the natural way in which you write. I think the beauty of these kinds of tools is that it's not about saying "be your best self" every time you're writing 12,000 emails in a given day. You have it as a plug-in in your email: you write as you normally would, it highlights things for you, and you can decide whether or not to take a recommendation to slightly abridge or make more concise the emails that you have. Wherever possible I'm outsourcing my BE to the tools out there that are designed to help me do that as quickly and efficiently as possible.
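(As an aside for the curious: the kind of add-on Kate describes can be approximated very simply. The sketch below is purely illustrative and not any particular product; it flags common qualifying phrases in a draft so the writer can decide whether to keep them. The word list is an assumption, not a researched dictionary.)

```python
import re

# Illustrative list only; a real add-on would use a much richer dictionary or model.
QUALIFIERS = [
    r"\bjust\b", r"\bsorry\b", r"\bI think\b", r"\bmaybe\b",
    r"\bperhaps\b", r"\bI wondered if\b",
]

def flag_qualifiers(draft: str):
    """Return (phrase, position) pairs for qualifying language in a draft email,
    leaving the decision to keep or cut each one to the writer."""
    hits = []
    for pattern in QUALIFIERS:
        for match in re.finditer(pattern, draft, flags=re.IGNORECASE):
            hits.append((match.group(0), match.start()))
    return sorted(hits, key=lambda h: h[1])

# e.g. flag_qualifiers("Sorry for the delay, I just wondered if you might perhaps take a look?")
```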

Elaine Ung: That's a really great tip. I think I'll need to download that as well. Could you share some reflections on how you've seen BE evolve over time? Generally in public policy but also more broadly as a field, and where do you think this is headed?

Kate Glazebrook: It's evolving incredibly quickly if we think about BX 2014, the first of these iterations, just four years ago. It's amazing how the conversations in the room have changed since then. A couple of the threads I've seen, at least in my experience of the behavioural science field: first of all, I think there's a lot more attention now being paid to whether or not the kinds of interventions we're talking about have broad applicability or whether they're still context specific. That's the scalability of the findings we have in behavioural science: how much can I take the findings around defaults being powerful in retirement incomes and think about whether they apply to how people might save in other parts of their lives, or even to a completely different domain like health care or education? I think this idea of expanding and scaling nudges across sectoral or policy lines is definitely something that's happening apace.

I think the other thing that's growing is this interest in the personalisation of behavioural science. Rather than saying cohorts of people operate in a particular way, that women or men in these particular cases are influenced by these types of nudges, we can actually get a lot more specific and say: what is the best kind of behavioural nudge for Elaine? How much do we know about her and what drives her that helps us to design an intervention that's most effective for her as a person? I think that's still in its nascent stages, but it's interesting to hear more and more people thinking about ways of pulling together machine learning and AI techniques and the rich body of personalised data that we all actually have on ourselves, or at least that organisations have on us, to try and use that to design better interventions at the individual level.

The final thing I'd say, though this is more of a plea to all of us working in the sciences, is that we do still generate most of our insights from a relatively small, unique population of people: the WEIRD (Western, educated, industrialised, rich and democratic) population. Mostly these are North American studies, done to some extent on university campuses. Now, we can learn a lot from those things, but if you were to map the world against our real, true understanding of its contextualised nature, I think you'd say we're still in pretty early days in understanding whether these behavioural science techniques have differential impacts on different populations in different settings and at different times.

I think there are lots of organisations, the World Bank and the OECD in particular, working on broadening the base of people we use as our research participants, and that helps us make our insights richer in that context.

Elaine Ung: You've touched on some really great points there. I see scalability as a big question and something we have to address to be able to actually progress this field. The other side is taking all the different data sets and bringing them together. I know this is what's happening in the health field as well. It's a holistic approach: rather than just slicing the data up and looking at single individual sets, we're looking at a personalised approach and at ways we can do that better.

One last question. What's next for you?

Kate Glazebrook: Well, I'm going to be plugging away at Applied and helping to connect the research sciences with real products that real people use in their real daily lives. We love the fact that we get to work very closely with a lot of the world's best academics to take their research and say, "Great, you've found this. Let's see whether or not it applies in these different kinds of settings." And because the tool is technology, it's relatively simple for us to run A/B tests, and they can be field-based studies.

I'm really excited about continuing to do that work not only in the hiring space but thinking about ways in which we could bring that same research, that corpus of research to questions like how do people get promoted? What is the best way to on-board a new team? How do we build better inclusivity in the organisations in which we work? And how do we best understand how to drive more diversity not only at the beginning stages of the pipeline but right the way through organisations?

I'm pretty focused on making sure we do that and we do that as well as we possibly can.

Elaine Ung: That's excellent. All the best with your future work and with Applied.

Kate Glazebrook: Thanks Elaine. Great to be here.

Elaine Ung: Thank you so much for your time.

Hi, again. Thanks for listening. If you want to learn more about Kate's work or Applied, you can find more information on the Applied website. If you haven't heard our previous episodes, listen to them at BETA on the PM&C website. Until next time!