Could personalizing laws make society more just? with Omri Ben-Shahar (Ep. 86)

Exploring the benefits and pitfalls of using big data to tailor different rules to different people

Show Notes

Big data has created a world of personalization. We have personalized medicine, personalized education, personalized advertising. Now, one University of Chicago Law School scholar is asking: Why not personalized law?

In his new book, Personalized Law: Different Rules For Different People, Prof. Omri Ben-Shahar lays out the case for why our idea of equality under the law actually leads to unequal outcomes, and why we should use data and algorithms to tailor our laws to individual people. As he says: If one size fits all doesn’t work for shoes, why should it work for speed limits?

Subscribe to Big Brains on Apple Podcasts, Stitcher and Spotify.
(Episode first published February 2, 2022 and republished on July 27, 2023)

Transcript:

Paul Rand: AI promises to change every part of our society, but one area that has already started to be affected is law.

Tape: OpenAI unveiling its newest version, which can answer complex questions in seconds. It can write speeches, take tests; we learned it can even now pass the bar exam to become a lawyer, placing in the top 10%.

Tape: Most of the economy is knowledge and information work and that is who is going to be most squarely affected by this. I would put lawyers at the top of the list. 

Paul Rand: But AI may do more than just upend the profession of lawyers. One professor here at the University of Chicago believes that, with its ability to handle massive data sets, AI could be used to entirely change the law itself, making it personal for every individual based on how they live their lives. It’s a radical idea, but one we may need to start confronting sooner rather than later.

Paul Rand: In the last few months AI has become the number one topic on everyone’s mind. This is the second episode of a three-part series on the ways today's researchers think AI will build the world of tomorrow. As we continue our summer break, these first two episodes are re-releases, with the final part of the series featuring a brand-new guest. Given the recent developments in AI, coming back to these episodes has made them all the more fascinating and important. We hope you get as much out of them as we did. Thanks for listening!

Paul Rand: In the eyes of the law, we’re all supposed to be treated equally.

Omri Ben-Shahar: If you think about the goddess of justice, Justitia, or Lady Justice, whose sculpture decorates the entry to many courthouses all over the world.

Tape: Here she stands in her classic Greek dress, blindfolded, weighing facts on her golden scales.

Omri Ben-Shahar: Her eyes are covered with a blindfold, and therefore, she treats everyone as equal. This approach, this paradigm so fundamental and unchallenged, we are challenging.

Paul Rand: That’s Omri Ben-Shahar, a law professor at the University of Chicago and an advocate for a radical new way of thinking about how the law works.

Omri Ben-Shahar: Let’s remove the blindfold from Justitia, and let’s know everything that needs to be known about the individuals so as to apply the proper legal treatment to them. If people are different in their characteristics and their histories, and if you can identify these relevant differences accurately, why ignore them?

Paul Rand: We have personalized medicine, personalized education, so why not personalized law? That’s the idea behind Ben-Shahar’s latest book, Personalized Law: Different Rules For Different People, which he co-wrote with Ariel Porat, the president of Tel Aviv University.

Omri Ben-Shahar: We are asking in the book, Ariel and I are asking, is it justified and efficient for the law to impose different duties of care on those among us who create greater risks to others? Imagine a world in which the drivers who are more reckless must obey tougher traffic laws, drive slower and pay higher fines compared to drivers who are more cautious. Or, on the side of rights: would it be more justified for the law to grant more meaningful and effective protections to those consumers who need them more?

Paul Rand: From the University of Chicago Podcast Network, this is Big Brains, a podcast about the pioneering research and the pivotal breakthroughs that are reshaping our world. On this episode, what it would look like if we personalized the law. I’m your host, Paul Rand.

Paul Rand: Before we get into the incredible opportunities, and frankly terrifying possibilities, of Ben-Shahar’s theory, we have to start by asking: why do we need to change anything in the first place? What is wrong with Justitia’s blindness?

Omri Ben-Shahar: Well, first it’s a myth. The reason we need mythology in front of the courthouses is because we want people to believe that that’s what happens. Let me quote here one of my favorite observations about the law from the great French author, Anatole France. “The law,” he says, “in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets and to steal bread.” You get the gist of this statement?

Paul Rand: Yes.

Omri Ben-Shahar: That the equality of treatment does not impose equality of burdens. Equality of rights does not guarantee equality of benefits. These ideals of equal access or equal opportunity or equal duties often play out in a way that disproportionately affects people who are poorer or weaker in society. We give everybody consumer protections and disclosures, but who enjoys them the most? Those who need them least.

Omri Ben-Shahar: Uniformity of law does not guarantee fair treatment of people. You need to take into account more relevant features of the individual to determine how to treat them. We think of that all the time. When we tailor the penalty for a particular crime, we look at the circumstances of the defendant to try to do justice. We’re not blind to it. The whole notion of blindness is needed for a society that outright discriminates in favor of the elite. There is a sense that in almost everything else we do, we don’t like one-size-fits-all. We don’t think it’s fair to tell everyone to wear the same size shoes. That’s not equality.

Paul Rand: This is probably the point where the audience is listening saying, “Man, I don’t know about this. How in the world would you make this happen?”

Omri Ben-Shahar: Well, we imagine personalized law operating on different levels of intensity. You can do it very intuitively, in the way that parents personalize their parenting towards their children, or you can try to do it more systematically, to try to figure out what is it exactly about each individual that matters for the proper legal treatment. For example, what do we need to know about you to figure out at what age you should be allowed to purchase alcohol? What factors can predict reckless and abusive behavior, drunken driving and things like that?

Omri Ben-Shahar: That can be done in a similar way to how insurers predict risk, namely, using a lot of data, a lot of information about individuals, and training algorithms to identify the best predictors. Using some form of machine learning and massive data could sharpen the accuracy of personalized law.
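To make the insurer-style prediction concrete, here is a minimal sketch in Python. The features (hours slept, past violations), the synthetic data, and the speed-limit formula are all hypothetical illustrations of the approach Ben-Shahar describes, not anything from the book:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical driver features: hours slept last night, violations in past five years
hours = rng.normal(7.0, 1.5, 1000)
violations = rng.poisson(1.0, 1000)

# Synthetic labels: accidents are more likely with less sleep and more violations
risk = -0.8 * (hours - 7.0) + 1.2 * violations
accident = (risk + rng.normal(size=1000) > 1.5).astype(int)

X = np.column_stack([hours, violations])
model = LogisticRegression().fit(X, accident)

def personalized_speed_limit(hours_slept, past_violations, base_limit=65):
    """Scale a base speed limit down as predicted accident risk rises (30% cap)."""
    p = model.predict_proba([[hours_slept, past_violations]])[0, 1]
    return round(base_limit * (1 - 0.3 * p))

print(personalized_speed_limit(8.0, 0))  # well rested, clean record: near 65
print(personalized_speed_limit(4.0, 3))  # tired, repeat offender: noticeably lower
```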

Omri Ben-Shahar: While in the book we are considering the benefits and the problems with this kind of paradigm, namely, law fueled by big data and AI, we are also thinking about personalized law operating on a more intuitive, less intense scale. It could work intuitively, based on assessments that judges or experts make by looking at a very limited number of factors. For example, we can tailor the right warning or disclosure for a borrower based on their education. You don’t need big data for that, and you can get a lot of the benefits of personalized law through this kind of crude personalization.

Paul Rand: What do you mean by that? That example that you just gave, tell me how that would work.

Omri Ben-Shahar: Yeah. The example would be that, as part of the application for a mortgage, there is some information that the applicant already gives. Since we know that aspects of education and maybe income are correlated with the ability to absorb the information in these warnings and disclosures, the way they’re presented could be personalized.

Paul Rand: You’re probably thinking, what does this all look like exactly? How is my world going to be different? Well, in his book, Ben-Shahar paints a picture for us using two made-up characters, David and Abigail, and walks us through a day in the life of their personalized law worlds.

Omri Ben-Shahar: It is a world in which two spouses live in a household. When they enter their car to drive, for example, they get, on the screen, a personalized driving speed, based perhaps on data about how well they slept that night, received from their Fitbit.

Paul Rand: And this isn’t the only way personalized law could affect the rules of the road. If it’s cloudy that day, well, your speed limit might decrease. Or what if your map knows you’re in an area you’ve never been before? Your speed limit may decrease. Car accidents are a leading cause of death, but could we have fewer accidents if the rules of the road were more tailored to the person behind the wheel?

Omri Ben-Shahar: The law imposes duties of care on people. You’re not allowed to put others at risk when you drive, when you engage in a professional activity. Now, what does that mean? There is a level of care that the reasonable person must take, and if you violate that you will be liable to compensate your victim. The reasonable person is a legal artifact that averages the normal, ordinary human being.

Paul Rand: Okay.

Omri Ben-Shahar: But we know that some people are more dangerous than others, more reckless than others. Perhaps these people should be confronted with more burdensome and challenging duties of care. On each one of us, we would impose a duty, not of the reasonable person, but of the reasonable you.

Paul Rand: But what about when David breaks these personalized laws, or Abigail parks illegally? Could the punishments be tailored too?

Omri Ben-Shahar: If we want these fines to act as deterrents, for very poor people they might be over-deterring, they might be crushing, whereas for a wealthy person, they might be nothing.

Paul Rand: And this makes sense: a hundred-dollar ticket may be a huge deterrent for a low-wage worker, but for a Wall Street executive, it’s a snap of the fingers. Although the law is equal, our current deterrents and punishments actually lead to unequal outcomes.

Omri Ben-Shahar: This is an idea that Scandinavian countries have already recognized, and in fact, incorporated into their criminal law. They call it day fines. The fine is not measured by a currency amount, but rather by a portion of your daily income.

Paul Rand: Oh, interesting.

Omri Ben-Shahar: A particular traffic offense is 50% of your daily salary, daily wage.

Paul Rand: Wow.

Omri Ben-Shahar: Those who make more money, who earn more money, pay more. In fact, there are anecdotal stories about a person who committed a small parking offense and had to pay the equivalent of, let’s say, $60,000. I see no problem with this kind of approach, where what is equalized is not the dollar amount of the fine but the burden that it imposes on the offender.

Paul Rand: And so that’s happening in Finland, for example, is that right?

Omri Ben-Shahar: Yes, that’s happening in Finland.
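The arithmetic of a day fine is simple enough to sketch. The incomes below are invented; the 50%-of-daily-wage rate is the one Ben-Shahar mentions, and real Scandinavian systems are more elaborate (they net out living expenses and vary the number of day-fine units by offense severity):

```python
def day_fine(annual_income, fraction_of_daily_income=0.5, days_per_year=365):
    """A fine expressed as a share of the offender's daily income, not a fixed amount."""
    daily_income = annual_income / days_per_year
    return round(fraction_of_daily_income * daily_income, 2)

# The same offense imposes roughly the same burden, at very different dollar amounts:
print(day_fine(30_000))     # low-wage worker:  41.1
print(day_fine(5_000_000))  # high earner:      6849.32
```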

Paul Rand: Okay. Back to David and Abigail: what about when they go to pick up a new prescription from the doctor and want to check on possible side effects, an experience that many of us have had?

Omri Ben-Shahar: Everybody gets the same long insert with the drug. You open the drug container, you unfold the insert, and keep unfolding it, and then you see 10,000 words in tiny font and ignore it.

Paul Rand: Exactly, we throw it away.

Omri Ben-Shahar: Ariel, my co-author, came up with this idea in his earlier work with my other colleague at the University of Chicago, Lior Strahilevitz. Why not personalize these things? Instead of all consumers receiving the full list of all the risks of a product, each consumer would get information only about the risks that are relevant to them, based on what we know about them: possible drug interactions, other drugs they take, genetic problems, preferences, whatever it is, and also their cognitive ability to digest this kind of information.

Paul Rand: This would be no small thing. In the book, they cite a 2014 study that found that every year tens of thousands of people die from drug complications because the warnings were too complicated or too jumbled to follow.

Paul Rand: You’re also getting into a whole other avenue here, because I’ve always assumed that the majority of the time those types of instructions are for the legal protection of the company, not for the protection of the individual. Is that not accurate?

Omri Ben-Shahar: Well, you are a cynic, Paul, but let’s put it this way. Whatever their intended goal is, the bottom line is that all these disclosures do is help companies avoid liability. What they were intended to do, supposedly, namely to help the recipients make better, more careful, more thoughtful decisions before engaging in something that could cause them harm or be regrettable, they do not accomplish. So we throw up a prayer: maybe if they’re personalized, they can budge the meter.

Paul Rand: Okay, so let’s check back in with David and Abigail.

Omri Ben-Shahar: They’re expecting a new child and they’re thinking about how that might change their finances. People sometimes have to think about how to divide their estate in the event of death. There are laws that determine what happens, but you can write a will. In personalized law, they don’t have to write a will, because there will be a personalized legal rule based on a prediction of what someone like you would want if you were to write a will. Of course, one that you can change if you don’t like it. But you would go to a website like inheritance.gov and see what your current default allocation rule is and how it changes when you have another child. It would be based on a whole host of other factors, like wealth, education, years of marriage and number of kids.

Paul Rand: There are other default laws that Ben-Shahar says could be improved with personalized law.

Omri Ben-Shahar: When you start work for an employer, you have to make a choice of how much to set aside for your retirement every month. The law provides a default rule, automatic enrollment into some kind of-

Paul Rand: Yes.

Omri Ben-Shahar: What the law is trying to do there is give people the treatment that suits them, but it does it in a one-size-fits-all way again, the same default rule for everyone. If the goal of these defaults is to try to mimic what people would have chosen for themselves anyway, why not personalize it?

Paul Rand: It’s clear there are some massive benefits and possibilities to this world of personalized law, but there are obviously dangerous implications too. The downsides, and Ben-Shahar’s response to them, after the break.

Paul Rand: Have you ever wondered what goes on inside a black hole, or why time only moves in one direction, or what really is so weird about quantum mechanics? Well, you should listen to Why This Universe. On this podcast, you’ll hear about the strangest and most interesting ideas in physics broken down by physicists Dan Hooper and Shalma Wegsman. If you want to learn about our universe from the quantum to the cosmic, you won’t want to miss Why This Universe, part of the University of Chicago Podcast Network.

Paul Rand: All right, at this part of the segment, if it hasn’t happened long ago already, everybody is dying to say, “Oh, come on. You’re not thinking about all the downsides here. Let me tell you why this isn’t going to work.” I doubt there’s a question that I could bring up that you don’t have an answer for, so I’m going to approach this a little differently. Tell me what kinds of criticisms and pushback you get most often, and how do you answer them?

Omri Ben-Shahar: Yeah. Our book is dedicated primarily to highlighting the risks, the dangers. This is not a polemic. This is an attempt to say: here’s an idea, let’s see what the problems are and how severe they are. One problem that we think would be difficult to resolve is data in the hands of government. One of the biggest challenges of our society is how big private enterprises are collecting and using data to personalize their business with us. Data is useful, but it is also dangerous.

Paul Rand: Very dangerous.

Omri Ben-Shahar: It could be, I call it in a different study, data pollution. The way it is used can pollute many of our public spaces. Now, all of this is augmented when the data is used by the government. That’s scary; that brings to mind the Chinese social credit system.

Paul Rand: Sure.

Omri Ben-Shahar: The only defense that I have is to say that the problem with the Chinese social credit system is not the method, but the goals. The goal is to get people to obey the Communist Party. The goal is not for society to flourish and for individuals to have more freedoms; it is to control people. So yes, Big Brother uses data to do that, but whether or not it uses data, the situation is pretty dire in China in terms of these measures of freedom and thriving. It just got worse when it became personalized.

Omri Ben-Shahar: I would like to imagine our system operating with guarantees that the goals underlying the personalized commands are the goals that a liberal democracy cherishes, rather than those of an autocracy.

Paul Rand: In Ben-Shahar’s system, big data and algorithms would be central components of personalizing our laws, and this creates a concern in and of itself.

Omri Ben-Shahar: Yeah, it’s a tough one. The idea of learning to predict, or to fit people with the proper treatment, is based on looking at the past and asking what factors are correlated with risk, with this kind of outcome or the other. The data that will be used to train the algorithms is full of biases and prejudices from the way it was generated. We are looking at how society operated in the past, and it operated with many imperfections and injustices.

Omri Ben-Shahar: The concern is that the system will just perpetuate these past biases by using this data to predict the risk of people. For example, there could be racial bias because the data that demonstrates which people are likely, for example, to re-offend might be data that was collected based on discriminatory practices, where persons of color were targeted more often by law enforcement.

Omri Ben-Shahar: How do you solve that? Well, it’s a challenge. It seems to have no good solution in our existing practices, where we use humans rather than artificial intelligence; it is very hard to de-bias judges and the police. The problem is not with technology. The problem is with the human mind, the fact that we have all sorts of biases and prejudices that affect us. They are very hard to undo, unless we pull the humans out of the equation, or at least create some kind of algorithmic baseline decision. The algorithm suggests the decision, and the human can override it, but now they will have to have good reasons to say, “No, no. We want to charge this particular defendant higher bail. Even though the algorithm says they are not dangerous, we think they are.” They’ll have to justify it.

Omri Ben-Shahar: Now, I’m not a computer scientist, I do not develop these methods, but the literature is full of discussion about both the risks of algorithms and the promise of doing something that will at last make it possible for society to treat people without the implicit biases that currently afflict it.
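The baseline-plus-override protocol he describes can be captured in a few lines. This is a hypothetical sketch, not a description of any deployed bail system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BailDecision:
    defendant_id: str
    algorithm_recommendation: str  # e.g., "release" or "detain"
    final_decision: str
    override_reason: Optional[str] = None

def decide_bail(defendant_id, algo_recommendation, judge_decision, reason=None):
    """Adopt the algorithmic baseline unless the judge overrides with a stated reason."""
    if judge_decision != algo_recommendation and not reason:
        raise ValueError("Overriding the baseline requires a written justification.")
    return BailDecision(defendant_id, algo_recommendation, judge_decision, reason)

# Overrides remain possible, but each one leaves an auditable record:
print(decide_bail("case-123", "release", "detain",
                  reason="Credible new threat reported after the risk assessment."))
```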

Paul Rand: Okay. Are there then inputs that you think right out of the gate are non-starters, whether it’s race or sex or religion or gender, that just should never be part of the equation?

Omri Ben-Shahar: There are, but it’s not what I think. I mean, the Constitution does not allow treating people based on race, gender, religion and other suspect classifications. The problem, Paul, is that the algorithms can outsmart these prohibitions. They will not look at race; they will look at other factors that correlate with race. If you don’t allow them to know the race of the suspect, then they will look at the zip code. Everything in America is correlated with race. That needs to be fixed. It’s very easy to make these algorithms blind to suspect classifications. It’s very hard to prevent the disparate impact.

Omri Ben-Shahar: We read the literature in economics and computer science to suggest that it is possible, and we propose some efforts in the book by which even that can be avoided, so that there will be no effect of aspects like religion and race, the most important ones that we would worry about.
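The proxy problem is easy to demonstrate on synthetic data: exclude race from the model, keep a zip-code feature that correlates with race, and the disparate impact survives. Every number below is made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

race = rng.integers(0, 2, n)                  # protected attribute, never shown to the model
zipcode = (race + (rng.random(n) < 0.2)) % 2  # proxy that matches race 80% of the time
behavior = rng.normal(size=n)                 # the legitimate predictive signal

# Historical labels reflect biased enforcement against the race == 1 group
y = (behavior + 0.8 * race + rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([zipcode, behavior])      # a "race-blind" feature set
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# The blind model still scores the two groups very differently:
print(scores[race == 0].mean(), scores[race == 1].mean())
```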

Paul Rand: One of the things I wonder about is whether some of these things could be democratically decided rather than representatively decided. In your book, you talked about MIT’s Moral Machine. I wonder if you can give a little bit of context around that.

Omri Ben-Shahar: Yeah. When we are thinking about humans versus robots, the most immediate context in which this currently comes up is in the training of autonomous cars to drive in a safe way.

Paul Rand: Yes.

Omri Ben-Shahar: Part of what the algorithm, the code, for these autonomous cars must do is help them make life-life trade-offs. What if the car is in a situation where a family just jumped into the road and there’s no way to avoid hitting them other than by swerving or turning and causing other damage, potentially to other people?

Paul Rand: Okay.

Omri Ben-Shahar: Cars don’t have instincts; it has to be written into their code. They have to determine where, for example, to cause more or less harm. But how do you determine what is more or less harm? Is hurting two 80-year-old people worse than hurting one 20-year-old person? If we want to train algorithms to do that, we have to give them a way to make these determinations. You can of course ask philosophers and they will decide, or you can poll people. One of the interesting solutions, the MIT project called the Moral Machine, has gotten millions of people to respond to these scenarios. Now you have a democratic kind of survey of how autonomous cars, how the algorithms, ought to respond to these trade-offs and what the right thing is. Whatever the solution methodology is, it has to be transparent so that society can debate it, because this will be our new morality.
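At bottom, the Moral Machine’s mechanism is a very large tally of pairwise judgments. A toy sketch of the aggregation step, with invented scenario names and counts:

```python
from collections import Counter

# Each vote records which outcome the respondent judged less bad (hypothetical data):
votes = Counter({
    "swerve_and_hit_one_adult": 6_200_000,
    "stay_and_hit_two_elderly": 3_800_000,
})

total = sum(votes.values())

# A transparent, publishable result that society can inspect and debate:
for outcome, count in votes.most_common():
    print(f"{outcome}: {count / total:.1%} of respondents preferred this outcome")
```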

Paul Rand: Okay, so let’s unpack both of those things you just talked about, especially as you get into some of these decisions where I may disagree with how I’m treated, or that I can only buy a certain amount of alcohol or only drive a certain speed. The question then becomes: what if I don’t think I’m being treated fairly? How does that happen? Because most people are most worried about their individual rights.

Omri Ben-Shahar: Yes. That could crash the system, if people begin to feel that the fact that they’re not treated well is a systematic form of discrimination against them.

Paul Rand: Right, exactly.

Omri Ben-Shahar: It’s not good. We would like to imagine that people will be treated somewhat more harshly or less forgivingly in some contexts, but better in other contexts. They might be subject to higher standards of care, but get more consumer protections. The overall comparison is not clear-cut. But let me just say one other thing, again in the “compared to what” line of defense. Our rules today are equal, but their enforcement and implementation are subject to so much discretion.

Paul Rand: That’s very true.

Omri Ben-Shahar: You don’t need social scientists to show you the discriminatory patterns, because people can sense them in some contexts. When a police officer stops you on the highway, you know that they treat different people differently, who they let go and who they target. Only, these things are very hard to prove. With the algorithms that we will use to personalize treatment, you can see exactly what factors led to your treatment. They will be explainable, why they were used. They will be easier to modify, change and take out. Because trying to change the discriminatory behavior of judges or police officers is probably impossible; they don’t even think that they are biased. Whereas in the algorithm, you’ll see it’s a line of code. Just work on that, on the technical fix.

Paul Rand: These are huge challenges, but ones Ben-Shahar says we’re inevitably going to face. When you look around the world today, it’s clear that big data is already reshaping our lives every day. That inevitability is what drove Ben-Shahar to take up this work.

Omri Ben-Shahar: The work that Ariel and I did, and he started it even earlier, as I mentioned, with another colleague, Lior Strahilevitz, was a way to prepare our thinking: once these things become available, how should we think about them? Now, I don’t imagine that tomorrow morning we will personalize all of law, but we are already seeing personalized aspects of our law. For example, when criminal courts have to make risk assessments about individuals in order to grant them bail or determine the length of their sanction, they’re increasingly using very imperfect algorithms to do that. But academics have shown that you can build algorithms that are much better and more reliable.

Omri Ben-Shahar: This is what inspired me: the demonstration by colleagues here at the University of Chicago that bail can be granted to the right people in a way that is better than what judges do. You can lock up fewer people in jail, those who are released will re-offend less, and far fewer Black people will be locked up. It’s less jail, less violence and less discrimination, if you only trusted the algorithm. That persuaded me that there is potential here.

Omri Ben-Shahar: Now, I’m not a politician, I’m not even a political scientist. I don’t know whether society is ready to absorb that. I see it happening in criminal law a little bit, and it’s very controversial. It’s happening in a way that’s a bit less controversial in some areas of consumer protection in Europe, so there is some early movement toward this. The extent to which it will take over our legal system is hard to tell.

 
