
How John List Revolutionized Economics by Studying People in the Real World (Ep. 28)

Behind the scenes of policymakers reshaping our society, there are researchers supplying them with answers to our most pressing questions. University of Chicago economist John List is one of those people.


Show Notes

If you’ve played Candy Crush, flown on United Airlines, or taken an Uber or Lyft, you’ve been in one of Prof. John List’s experiments without even knowing it. List has revolutionized economics research through his pioneering use of field experiments. A field experiment is conducted in the real world instead of in a lab, testing theories on people in their day-to-day lives.

List’s experiments have changed the world by equipping policymakers with real-world data to address issues like climate change, the gender pay gap, and why inner-city schools fail. But now, he’s warning of a crisis that’s threatening the impact of scientific research: Many studies that claim to tell us something about the world fall apart when you test them on a larger scale. It’s something he calls ‘the scale-up problem.’

Subscribe to Big Brains on Apple Podcasts, Stitcher, and Spotify.
(Episode published August 8, 2019)

Transcript:

John List: When I step back and think about what I want to accomplish with my research career, it's quite simply to change the world.

Paul Rand: Behind the scenes of policymakers passing legislation to reshape our society, there are researchers and academics supplying them with answers to our most pressing questions. University of Chicago economist John List is one of those people.

John List: Now, how are you going to do it? The way that I think about doing it is I want to take on the biggest challenges that we face, and I want to use the economic lens and data to shed insights on those questions.

Paul Rand: In the mid-90s, List revolutionized economics research through his pioneering use of field experiments. A field experiment is conducted in the real world instead of in a lab, testing theories on people in their day-to-day lives. If you’ve ever voted, flown United, taken an Uber or Lyft, you’ve been part of one of List’s experiments without even knowing it.

John List: We need data to make well-informed decisions. And the typical way that people have used data is they wait for the world to give them the data. My way is to go out to the world and generate new data that can help inform important questions and give us important insights on how we should be acting.

Paul Rand: List’s experiments have reshaped our world in ways that affect our everyday lives—by better equipping policymakers with data to address issues like discrimination, the gender pay gap, and how to raise charitable donations. But now, he’s warning of a crisis that’s threatening the impact of scientific research: many studies that claim to tell us something about the world fall apart when you try to test them on a larger scale.

John List: You know, you find a result there in your lab. But does that result scale up? And what interventional specialists have shown us over the last two decades is that a lot of times it doesn't.

Paul Rand: List wants to change the world, but what if the research turns out to be wrong on a large scale? Well… 

John List: Houston we have problems.

Paul Rand: From the University of Chicago, this is Big Brains, a podcast about the stories behind the pivotal research and pioneering breakthroughs reshaping our world. On this episode: John List and the power of field experiments. I'm your host, Paul Rand.

Paul Rand: As far as accomplished economists go, John List is near the top. He's served as chairman of the economics department at the University of Chicago, worked in the White House on the Council of Economic Advisers from 2002 to 2003 and, most recently, served as chief economist at Uber and then Lyft, all while continuing to teach and research. But List’s early years—and humble beginnings—are crucial to understanding how he came to redefine economics.

John List: Sure, you're exactly right. It is very different than how my colleagues have come to be economists. So, I started in a little village called Sun Prairie, Wisconsin, as a kid. My father was a truck driver and my mother was a secretary. And it was sort of an ordinary kind of humble beginning in the sense that I was very loved, but education really wasn't that important when I was a kid. Now, what was important to me was sports and, in particular, I loved golf. When I went to college, I went on a partial golf scholarship. So my first goal was to be a golf professional. That was it.

Paul Rand: Great goal.

John List: Yeah, great goal, fun activity. I arrived in the fall of 1987, and I learned two important things about myself. One was I'm not good enough to play professional golf. The second thing I learned that fall is that I really loved economics. And we would golf every afternoon, and every afternoon I saw three or four econ professors out there golfing too. So I thought, well.

Paul Rand: Good gig.

John List: Yeah. This is a pretty good gig. You can teach in the morning, you can golf in the afternoon. So, I said I want to do what those guys are doing.

Paul Rand: Golf wasn’t the only sport List loved; he was also obsessed with baseball. And just like golf would lead him into economics, it was a collection of tiny memorabilia that would lead him on the path to developing his groundbreaking field experiments: baseball cards.

John List: That's right. That's right. So as a kid I would cut grass. That was my way of earning pocket change back in the 70s. After being paid, I would go down to the local convenience store called the PDQ and I would buy gum and baseball cards. And I amassed this huge collection. I loved the art of the baseball card. But what I loved more is the aspect of trading. But then, through the undergrad years, I also became a baseball card dealer.

Paul Rand: Going to trade shows and so on. 

John List: Absolutely. So what does that mean? What does it mean to be a baseball card dealer? What it means is pretty much every weekend that you're not golfing, you drive down to either Madison, Milwaukee, or Chicago. And you set up what's called a dealer's table, which is a six-foot-long table in a big convention center. So, at the same time, I was learning about the principles of economics and economic theory. So I was always thinking, I wonder if those theories are actually right, and do they give you the correct depiction of what's happening in the baseball card market? So it was always in the back of my mind. I was doing experiments naturally. I would do things like, if a person would come to my table, I would try one form of bargaining with that person, and then when the next person would come, I would try a new form of bargaining, and then I would kind of see what is the best way to bargain.

Paul Rand: List didn’t know it at the time, but he was laying the groundwork for his field experiments at those card shows. But what was so special about what he was doing?

John List: So, in the fall of 1987, when I opened up my first economics textbook, I turned to a page where it said: economists, essentially, can't do experiments because the world is too messy. Why is it messy? Because you have all kinds of people making all kinds of mistakes, zillions of prices, different products, markets, et cetera, et cetera. They thought about it through the lens of a chemistry lab. Back in high school, when I did my chemistry experiments, rule number one was that test tubes had to be pristine. If they were not pristine, you would have a confounding variable, i.e., the speck of dirt that would cause your interpretation of the experiment to be wrong. So they're certainly right, there are lots of specks of dirt in the field and in the world. So you can say, well, how do you get around that problem? The area of field experiments gets around that problem through randomization. Randomization does not get rid of the dirt. The beauty behind randomization is when you randomly put some people in treatment and some people in control, what it does is it balances the dirt across the treatment and control groups. And when you figure out what is the effect of my treatment in this scientific experiment, that dirt ends up differencing out and becomes zero in expectation, and now I can learn something causal in the real world. So that's kind of why I say they actually had it exactly reversed.
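
A minimal simulation sketch (not from the episode; the setup, names, and numbers are purely illustrative) of the point List is making: random assignment does not remove the "dirt," it balances it across treatment and control, so the confounder cancels in expectation when you difference the two group means.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The "dirt": an unobserved confounder that shifts everyone's outcome.
dirt = rng.normal(loc=5.0, scale=2.0, size=n)

# Random assignment: each person lands in treatment or control by a coin flip.
treated = rng.integers(0, 2, size=n).astype(bool)

# True treatment effect we hope to recover.
true_effect = 1.0
outcome = dirt + true_effect * treated + rng.normal(scale=1.0, size=n)

# The confounder is balanced across groups, so it differences out on average.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect: {estimate:.3f} (truth: {true_effect})")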

Paul Rand: So what was the reaction from the field when you began going in this direction? Was it, boy, brilliant, or, you don't know what you're doing, or, that doesn't make any sense?

John List: Yeah, it ranged from you're an idiot, to what are you doing, to just stay in the lab because we can learn everything we need to in the lab, or just use naturally occurring data and make the assumptions. I think field experiments give you something unique, though, in that you're generating the data yourself. You're not waiting for the world to give you those data, and you can go beyond measurement and figure out the whys behind the data patterns. For example, why do people discriminate, why do people give to charitable causes, why do women earn less than men when they do the same job, why do inner-city schools fail?

Paul Rand: List continued to develop his method of economic field experiments. And, in the late ‘90s, he was finally given the opportunity to put his ideas into action.

John List: I was an assistant professor at the University of Central Florida. And one day the dean of the College of Business knocked on my door and he said, John, I want you to help me raise money. So I said, well, let me think about it. I've never raised money before, I've bought and sold sports cards in markets, but I've never raised money. And I went back to him and said, I'll help you on two conditions. One: you will allow me to raise the money by doing it as a field experiment. And two: you will give me five thousand dollars in seed money to help me raise more money for you.

Paul Rand: So did he think, you're a pain in the ass for asking this, or did he think, wow, this is compelling?

John List: A little bit of both, a little bit of both because he had to go back to his coffers. Here he thought he was going to get some free labor from an assistant professor so a little bit of both. But he agreed to it.

Paul Rand: List’s field experiments in philanthropy yielded data about how, why and when people give that standard experiments just couldn’t provide. The answers he uncovered are now regularly used by charitable organizations. For example: the higher the announced seed money, the more people give, and matching grants increase giving whether the match is 1-to-1 or 3-to-1. But List always brings things back to policymakers.

John List: Now, after you establish those facts you can say, well, are you finding anything that can help policymakers? And indeed we do. So one piece of advice that policymakers always need is how we should change the rules around charitable contributions in a way that will help the industry flourish. These are things like marginal tax rates and what charitable deductions are allowed. So when they talk to us they say, you know, we're thinking about changing the rules, what do you think will happen if we do rule A, B, or C, and then we tell them exactly what the predictions from the data would be in terms of how it would affect overall charitable giving. Now, a third kind of contribution is to say, why do people give? For centuries, you have great philosophers arguing that altruism is the most important underlying reason why people give.

Paul Rand: You feel good when you give.

John List: Well, that's a little bit different. So the way that economists define altruism is, you give to help others. Now, you're onto something that is right, though. The other kind of major model for why people might give is you just feel good about giving. That's called warm glow. Now, our data suggest that most givers are giving for that warm glow, which economists would say is sort of selfish, because you give just so you feel good about the act of giving. Now, when I give talks to large groups of charitable organizations, they get very upset with me when I stand in front of five hundred or a thousand people and say, look, my data are saying that altruism is important, but not as important as warm glow. And I show them the data and they get upset because they say, why are you making things dirty, in a way? Why as an economist do you have to bring in selfish motives, etc.? And what I tell them is, I don't care why people give. I want to.

Paul Rand: Take the insight.

John List: Exactly. I want to take the insight and kind of the fourth thing that our data can be used for is to help practitioners. I want to take the insight and say, if I want to grow the pool of, not only the number of donors, but the amount of money that they give, I have to know what makes a person give and what keeps them committed to a cause. Because if I don't know that, I'm not going to know how to fundraise appropriately.

Paul Rand: List’s work with philanthropy made him a world-renowned economist, and it wasn’t long before for-profit companies started to notice. For the last few years, List has been working as the chief economist at Uber and Lyft, reshaping the way those firms operate while researching drivers and riders. What he’s learned about everything from the gender pay gap to how companies should apologize when they fall short, after the break.

John List: So this story about me working with firms goes all the way back to when I arrived at Chicago. Steve Levitt published a book called Freakonomics, which, all of the listeners, if you haven't read it…

Paul Rand: You need to read it and listen to the program.

John List: And listen to the program, exactly, exactly. So, when Freakonomics came out, a bunch of firms started to call Steve. And Steve and I started to fly from firm to firm and talk to them about using economics, using behavioral economics and using field experiments to help their business. Now what we were after, of course, would be data. So then you fast forward to 2010, and I received a phone call from Amazon dot com. And it was Jeff Bezos on the other end. And Jeff wanted me to come out to Amazon and be their first Chief Economist.

Paul Rand: Just calls you and says, John, I got an idea.

John List: I've got an idea. I was very close to going, and then there was one hiccup: the research that I did at Amazon would never see the light of day outside of the firm, because they were trade secrets and they did not want me to use them in scientific publications. I was not ready to give up science at this time. And so I said, I'm sorry, you can give me whatever amount of money in stock options you want—I kind of regret that—but I'm not going to come. But I'll help you hire someone, and I did help them hire Pat Bajari. He's still there, and he sends me Christmas cards and gives his net worth at the bottom of it. (laughs) So then, I receive a call from Travis Kalanick.

Paul Rand: Of Uber.

John List: So Travis is the founder and CEO of Uber.

Paul Rand: Did you say, hey bro?

John List: Hey bro, exactly. You know Travis. So Travis tells me that they're on the market for a chief economist. And I said, you know, well, Travis, unless I can use it to gain scientific insights and write scientific articles and publish the work, I'm still just not interested. He said, John, we can take care of that for you. So I did that for nearly two years. And as you might guess, some things happened at Uber, and I ended up quitting May 15th of 2018, and I became the first chief economist at Lyft on May 21st of 2018. So at Uber and Lyft, I've taken on questions like who earns more money as a driver, men or women?

Paul Rand: Turns out, men make about 7 percent more than women, and it’s because they drive faster and drive in more lucrative but sometimes less safe locations.

John List: If the company messes up, what are the best ways to apologize to its customers?

Paul Rand: List’s team showed that apologies that had a literal cost for the company—discounts on future rides or coupons—were far more effective. It turns out that money does speak louder than words.

John List: I've worked a lot on tipping, how to think about inducing more people to tip, who gets tips, who gives tips.

Paul Rand: According to the data, even though men make more, women do get more tips. Sometimes 10 to 20 percent more.

John List: Really what I'm up to is, I'm trying to insert economic thinking at every decision node in the company.

Paul Rand: The lessons and methods from List’s research in philanthropy and at Uber and Lyft can be applied far beyond those companies, to issues like climate change.

John List: I think a very important component of climate change is how the household is behaving. So, we’ve looked into how to induce households to adopt energy-conserving technologies. And what we find is basically we can do two tricks to get households to adopt more energy-conserving technologies. One is sort of a social norm treatment in that we tell them, you know, 70 percent of your neighbors have one of these gadgets in their home. Would you like one too? That gets people to adopt the first time. Now, if you want them to adopt at deeper levels, you have to use prices. And what I mean by prices is rebates and subsidies. So now we know if we want to get them to adopt the first one, we can use social nudges or social norms. If we want them to adopt at a deeper level, we use price discounts.

Paul Rand: List has even tried to tackle questions like why inner-city schools fail.

John List: That research is essentially going back to pre-K education. I think we pay way too much attention to trying to give kids cognitive skills and not enough attention to giving them executive function skills. If they develop executive function skills in my program, they still have them in the eighth grade, whereas if you push cognition, that tends to deteriorate after they leave the program. What we also find is that the parental component is not largely ignored, but it's not valued at the level it should be. We think parents are the missing component in early childhood education, and we've designed incentive schemes to get parents more involved in their children's education from zero to five.

Paul Rand: When List tells you about his work, it seems as though he’s running hundreds of research studies at once. But lately, he’s been turning a lot of his attention to a credibility crisis in scientific research. List calls it the scale-up problem. That’s after the break.   

Paul Rand: It’s a phrase we hear every time politicians are debating or defending a proposal: “the research shows” or “the data is clear.” We rely on academics and researchers to provide our leaders with information to tackle the issues in our world—and we trust that information is as accurate as possible. But there’s been a growing concern that threatens the credibility of scientific research. List calls it the scale-up problem.

John List: As an example, for the last 10 years, I've been running an early childhood center in Chicago Heights so we can explore what's called the education production function. So we're learning a lot about what works in Chicago Heights. But the question, of course, of the greatest import is: after we find a result in Chicago Heights, if we scale that up to, say, all of Chicago or all of Illinois or all of the Midwest or all of the U.S., should we expect to find the same impact from our program that we reported in Chicago Heights? We've never really gone after that question, but that's the most important question that policymakers should have. They call it the voltage effect. And what that means is, if I find this really big effect in a small-scale experiment, when I go to a larger scale the effect ends up being one tenth or one fifth of what it was in the original experiment. And to that I say, Houston, we have problems.

Paul Rand: What do you do about that? 

John List: So where we started, as any good economist would, is race to your office and write down a model. So, this is work with Dana Suskind, who was a previous guest on your show and happens to be my wife, and one of my graduate students named Omar Al-Ubaydli. And the model is essentially a model of scientific knowledge creation. And we break down this credibility crisis, or let's say the scale-up effect, into three bins. The first bin is what we call a false positive—I ran an experiment and I just got lucky in terms of I found a big effect, but it was unlucky in the sense that it wasn't the truth. What we need to do is make sure that we replicate that original experiment roughly three or four times. So that's different researchers replicating your original work. And if you find the same result continuously, you can be very confident that it's a true result, in that it should be scaled. Inference number two, kind of bin number two, is: well, there could be incentives that the original researcher had, such that when he or she did the original experiment they might have done something that gave them an overly optimistic result. Let me give you an example of that. So, at Chicago Heights we hired roughly 20 to 40 teachers in our program. What researchers typically do is they put out an advertisement for teachers, a bunch of teachers apply, and they take the 20 best teachers. It only makes sense. I'm starting a program, I want to give my program its best shot of being good, so I'm going to hire the 20 best teachers. But now when that result gets scaled up, or when that program gets scaled up, you might have to hire 20,000 teachers. And guess what, those first 20 are probably your best teachers, and the next several thousand are not as good as the first 20. That's a reason for the scale-up problem, because when you did the original experiment your incentives were, try to get the biggest treatment effect. But at scale you can't choose just 20 teachers. Now, as researchers, what can we do? You know, what we can do is get a large pool of teachers and then randomly choose which ones are going to be in my original scientific experiment. Because if you randomly choose which ones come in as teachers, that looks a lot like what the population is going to be if we scale it up. So those are the types of elements that, as an original researcher, you need to think about: how do I want my results to be used? And if I want them to be scaled up, I need to make those kinds of choices in the original research to make sure that my results will scale.
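
A rough sketch (hypothetical numbers, not List's data) of the teacher-selection point above: staffing a pilot with the 20 best applicants inflates the measured effect, while drawing teachers at random from the pool looks much more like what hiring at scale would deliver.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical applicant pool: each teacher adds (or subtracts) some quality
# on top of the program's assumed average effect per classroom.
pool = rng.normal(loc=0.0, scale=0.5, size=2000)
base_effect = 0.2  # assumed true program effect with a typical teacher

# Pilot staffed the way List says researchers often do it: take the 20 best.
best_20 = np.sort(pool)[-20:]
pilot_effect = base_effect + best_20.mean()

# Pilot staffed by random draw, mimicking what hiring at scale looks like.
random_20 = rng.choice(pool, size=20, replace=False)
at_scale_effect = base_effect + random_20.mean()

print(f"effect with the top-20 teachers: {pilot_effect:.2f}")
print(f"effect with random teachers:     {at_scale_effect:.2f}")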

John List: Economics to me is common sense. So I think economics, that way, can be viewed as a great window into the world and a great way to understand how people will respond to incentives, for example. When I first started to do field experiments, my main goal was to go to the real world, generate data that can test economic theory, generate data that can potentially be used by policymakers.
