
How AI is transforming scientific research, with Rebecca Willett (Ep. 117)

Examining the possibilities—and the dangers—of this rapidly developing technology

Rebecca Willett
Big Brains podcast

Show Notes

One of the first areas to be affected by AI has been science. Researchers have already started to integrate these new technologies into their work, but what does it mean to let these systems into our labs? Can they be trusted? And if so, how radically can they push science forward?

Prof. Rebecca Willett is the faculty director of AI at the University of Chicago Data Science Institute. She is at the forefront of thinking through how AI will change science, and in this episode, she examines the opportunities—and the dangers—of this rapidly emerging technology.

Subscribe to Big Brains on Apple Podcasts and Spotify.

(Episode published August 10, 2023)

Subscribe to the Big Brains newsletter.

Please rate and review the Big Brains podcast.


Transcript:

Paul Rand: Big Brains is supported by the University of Chicago Graham School. Are you a lifelong learner with an insatiable curiosity? Join us at Graham and access more than 50 open enrollment courses every quarter in literature, history, religion, science, and more. We open the doors of UChicago to learners everywhere. Expand your mind and advance your leadership. Online and in-person offerings are available. Learn more at graham.uchicago.edu/bigbrains.


Hey ChatGPT.

ChatGPT: Hello Paul. How can I assist you today?

Paul Rand: Tell me everything you know about Professor Rebecca Willett.

ChatGPT: As of my last update in September 2021, Rebecca Willett was a professor of statistics and computer science at the University of Chicago.

Paul Rand: At this point, many of you have probably had an experience just like this.

ChatGPT: Professor Willett’s research has included significant contributions to the theory and methods of machine learning.

Paul Rand: Instead of simply going online when you need to research something, for instance, a guest coming on your podcast, you go to ChatGPT.

ChatGPT: She has also worked on applications of these techniques to a variety of fields including healthcare, astronomy and social science.

Paul Rand: ChatGPT isn’t the first AI in our world.

Tape: ChatGPT, the revolutionary new language model developed by OpenAI with the ability to generate human-like text and hold natural conversations.

Paul Rand: But it did prove that these machine learning systems are capable of things that many people thought were impossible.

Tape: It impersonates what it’s read on the internet. Type in a request and it can write legal documents, software, even school essays. People are predicting it will wipe out whole industries.

Tape: Artificial intelligence tools like ChatGPT have seemingly changed the world overnight, and many are racing right now to find ways to harness what seems like incredible power.

Paul Rand: ChatGPT, are you going to change the world?

ChatGPT: As an AI language model, I don’t have personal agency or the ability to directly change the world. It is ultimately up to humans to utilize technology and knowledge to shape the world for the better.

Paul Rand: One of the ways we shape the world is with what we focus on in this podcast: research. And AI could give us the power to completely revolutionize the way that we do science. But as our ChatGPT friend said, it’s up to us to use these systems to make the world better, even though they have the power to make it worse.

Rebecca Willett: I think there’s an enormous amount of hope that it’s really just going to fundamentally change the way we think of the scientific method.

Paul Rand: That of course, is University of Chicago professor Rebecca Willett.

Rebecca Willett: Just as when computers became widely accessible, it changed the way that we approached science and what we could do with scientific data. I think AI is similarly going to have just an enormous impact on this entire process.

Paul Rand: There isn’t a single thing that the professors and researchers on our show do that couldn’t or won’t be changed by AI.

Rebecca Willett: The way that we formulate hypotheses, design experiments, collect data, and then analyze that data and then kind of close the loop by using that to help us develop new hypotheses.

Paul Rand: Willett is the faculty director of AI at the Data Science Institute, where she is at the forefront of thinking through how AI will change science, both the opportunities and the dangers.

Rebecca Willett: And so the hope is that with these more complicated non-linear models that are being used in AI right now, things like neural networks, that we can take large amounts of data and figure out patterns that as humans we didn’t recognize or even expect.

Paul Rand: There is no one better to help us think through how these systems will fundamentally change how research happens.

Rebecca Willett: And we’ve seen this sort of step up in terms of the capability of these systems, and I think this has really just sparked the imagination of a lot of people. With these kinds of capabilities, what could we do now that we hadn’t even imagined trying to do a few years ago?

Paul Rand: Welcome to Big Brains, where we translate the biggest ideas and complex discoveries into digestible brain food. Big Brains, little bites from the University of Chicago Podcast Network. I’m your host, Paul Rand. On today’s episode: how AI will change science and research. ChatGPT, if you were going to interview Rebecca Willett for a podcast about how AI will change science and research, what would you ask?

ChatGPT: That sounds like a fascinating podcast episode. Here are some potential questions you might consider. One, what are some of the most exciting opportunities that AI offers for advancing scientific research?

Rebecca Willett: So there’s a lot of work in trying to use AI to accelerate the development of pharmaceuticals and drugs. I am especially excited about this because, as we know, there are diseases that predominantly impact underserved communities and that are often under-prioritized for this kind of research, or under-resourced. And so if we can use AI to accelerate this process or reduce the cost of this discovery process, hopefully we’ll see a real leap forward in the treatment of disease worldwide.

Another thing that I think we will see people doing is using AI to design new materials, especially materials that are more sustainable and perhaps more biodegradable or better for the environment, or using AI to design things like microbial communities that can help break down plastics or remove nitrates from water. AI could also be really useful for developing sustainable climate policies. So not only do we want to predict what the climate might look like under different scenarios, but we’d like to have a better sense of the uncertainties associated with those predictions, and to design better economic policies, better tax strategies, better incentive programs.

If we only have forecasting systems that can run on supercomputers, then our ability to do that is somewhat limited. But with AI systems, I think we’ll be able to do this much more effectively and quickly and reliably. And so these are just a few of the things off the top of my head, and this is just in the basic sciences. If we expand our scope to also think about health sciences or healthcare, there’s just a lot of potential there as well, in terms of improving our ability to analyze lab tests or medical imaging data, our ability to understand a patient’s entire case history or even better evaluate how they will respond to different kinds of treatments.

Paul Rand: These are just a few of the incredible ways AI could change science. But what do they look like in practice? There are some basic steps of the scientific process (hypothesis generation, experiment design, data collection) that are going to be revolutionized by AI. But we’ll start with Willett’s specialty: data analysis.

Rebecca Willett: One kind of first-pass thing that’s going to happen is that people are going to start using AI to analyze data being collected in scientific contexts. Many of us have read, for instance, about the James Webb Space Telescope.

Paul Rand: Right, right.

Tape: NASA’s James Webb Space Telescope, the largest and most powerful of its kind, launched last Christmas and released its first image in July, the deepest sharpest view we’ve ever seen of the universe. Since then it has captured far away star nurseries, cosmic cliffs and galactic clusters. Anyone can see that the images carry breathtaking beauty and astonishing scale, but what do they actually tell us about our cosmos?

Rebecca Willett: This instrument and many instruments like it are collecting just huge volumes of data that can’t possibly be looked at by a human, not all of it. And so the hope is that by using these AI tools, we’re going to see patterns that might escape a human or be able to see phenomena or anomalies that kind of countermand our current understanding of the science and lead us to asking new questions that we hadn’t thought about before or questioning where our existing models are starting to break down. And so using AI to just analyze the raw data is the bare minimum of what we’re going to be seeing a lot of in the future.
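The bare-minimum use Willett describes, sifting huge volumes of raw data for patterns and anomalies, can be made concrete with a small example. Here is a minimal sketch using scikit-learn’s IsolationForest; the data are synthetic stand-ins, not real telescope measurements:

```python
# Minimal sketch: flagging anomalous observations in a large survey with
# scikit-learn's IsolationForest. The data here are synthetic stand-ins
# for real telescope measurements (e.g., brightness, color, size).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))  # ordinary sources
weird = rng.normal(loc=6.0, scale=0.5, size=(10, 4))       # rare outliers
observations = np.vstack([normal, weird])

detector = IsolationForest(contamination=0.001, random_state=0)
labels = detector.fit_predict(observations)  # -1 = anomaly, 1 = normal

candidates = np.flatnonzero(labels == -1)
print(f"{len(candidates)} candidate anomalies flagged for human follow-up")
```

The division of labor is the point: the model narrows millions of observations down to a short candidate list, and scientists decide whether the flagged outliers are discoveries, instrument glitches, or nothing at all.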

Paul Rand: This raw power to analyze massive sets of data could solve a problem that’s plagued science forever. Many times, whatever was being studied led to a negative result. For example, we thought these two compounds when mixed would create a new malaria drug, but they didn’t. And because it’s not a positive result, it would get discarded.

Rebecca Willett: Yeah, I think this is a common concern in the sciences. I think people refer to it as the file drawer effect. Right? You get a negative result, you put it in your filing cabinet and forget about it.

Paul Rand: Yes, yes.

Rebecca Willett: That’s just sort of the nature of the field. If I have a positive result, then it’ll generally get more attention. And publishers are most interested in publishing those kinds of results.

Paul Rand: But that doesn’t mean the result is useless. We still learn something. As the famous saying goes, we’re just discovering a thousand ways not to make a light bulb.

Rebecca Willett: And I think perhaps AI will change some of these trends. And I know that there are ongoing efforts to use things like large language models to analyze the scientific literature and to cross-reference different papers that are being published in different journals by different groups around the world, in order to extract higher-level themes or patterns.

Paul Rand: Fascinating.

Rebecca Willett: And I think that’s a setting where these negative results could be enormously impactful and help with the development of those models. And so it’s possible that this kind of file drawer effect that we’ve had in the sciences for decades, we might just change the way we think about it with the development of these AI tools for trying to extract information from the literature. Maybe we’ll see an added value to that that was a little harder to harness in the past.

Paul Rand: But there is a concern when it comes to using AI to analyze data. The founders of ChatGPT have already admitted they’re not quite sure how their AI arrives at any individual result. In the context of an experiment, what if an AI analyzes the data incorrectly? If half the time AI models make predictions that contradict established scientific knowledge but turn out to be correct, how will we know when it’s right or when it’s wrong, especially if we don’t understand how it works?

Rebecca Willett: Real science is about more than detecting patterns. It’s about really understanding what the underlying mechanisms are. It’s just much more than making raw predictions. And it’s not clear to what extent AI tools are really reflecting understanding, as opposed to having recognized different patterns. So let’s just take ChatGPT as an example, because I think a lot of people listening have maybe played around with it a little bit. And when you do, it can almost feel like you’re interacting with a human. It produces very realistic text. But under the hood, what it’s doing is, on the most basic level, very simple. It’s saying: I’m going to build a model of a probability distribution that answers, what is the most likely next word, given the last 400 words that came before?

Paul Rand: Yep.

Rebecca Willett: And then when I want to generate some text, I just start drawing words from this probability distribution. And so of course, building this model is not trivial, but at the end of the day, all it’s doing is it’s generating somewhat random sequences of words from this distribution. That’s a far cry from understanding what the language is telling us or actually being sentient, for instance.
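Stripped of its enormous scale, the mechanism Willett describes fits in a few lines of code. The toy bigram model below conditions on just one previous word instead of hundreds, over a handmade corpus, but the core step is the same: sample the next word from a learned probability distribution.

```python
# Toy version of "draw the next word from a probability distribution
# conditioned on the preceding words." Real models condition on hundreds
# of tokens with a neural network; this bigram sketch conditions on one.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word (wrapping around so
# that every word has at least one successor).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    next_counts[prev][nxt] += 1

def sample_next(prev):
    counts = next_counts[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word, output = "the", ["the"]
for _ in range(8):
    word = sample_next(word)
    output.append(word)

# Locally plausible text, with no understanding anywhere in the process.
print(" ".join(output))
```

Run it a few times and it produces different, locally plausible strings, which is exactly why fluent output is not evidence of understanding.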

And I think it’s the same with science. Right? I think this could be an enormously useful tool, but that’s a far cry from it really understanding science. And I think humans are just going to be an essential part of this process. If you’re trying to use something like ChatGPT for science and having it write a scientific paper for you, you’re going to be in trouble. It’s definitely going to be making stuff up. Like I said, it’s drawing words at random from a very sophisticated probability distribution, but it doesn’t actually know anything. And the more text you have it generate, the more likely it is that it’s going to be inconsistent with itself. I have two feelings about this. On one hand, people already make mistakes in science, innocent mistakes. This is why we form a scientific community. This is why all science isn’t done by a handful of Nobel Prize winners.

Paul Rand: Right.

Rebecca Willett: We have thousands of people all trying to examine each other’s work, find where the potential holes might be, identify real discoveries that change the way we think. And that community is going to play a critical role in analyzing ideas coming out of an AI model, evaluating whether they make any sense at all, whether it’s a fresh take that nobody thought of, or whether it’s just complete BS. Ultimately, that human in the loop is essential: people with rigorous scientific training who can evaluate these systems, and peer review to determine what’s ready for publication versus what’s more or less made up.

Paul Rand: One of the other areas that gets talked about, at least as I’ve read about AI and the sciences, is this idea of hypothesis generation. And I wonder if you can tell us what that is and why that might be particularly compelling.

Rebecca Willett: We’re starting to also see people thinking about using AI for things like even deciding what data to collect in the first place or what experiments to run. So imagine, for instance, that I wanted to design a microbial community that could help somebody with a broken gut microbiome. We could just sort of randomly put a bunch of probiotics in their system and hope for the best, but a lot of the current approaches can be pretty short-lived if they work at all. And so what we’d like to know is what determines a good microbial community versus a bad one. And there are maybe trillions of possibilities. I can’t just build them all and test them all. It would take too many resources.

And so what I’d like to do is to integrate AI into this process: design a small number of communities, run some experiments on them, take that data and somehow narrow down the space of hypotheses I have about what makes a good microbial community versus a bad one, and use that model, and any uncertainties associated with that model, to help design my next set of experiments, or decide which microbial communities I want to test next. And the hope is that by using AI in this process, we’ll be able to use our money and experimental resources much more effectively than if we didn’t have AI helping to suggest the next new experiments to run.
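What Willett sketches here is commonly called active learning, or sequential experimental design: fit a model that reports its own uncertainty, then spend the next experiment where the model looks promising or uncertain. Below is a minimal sketch with a Gaussian process surrogate; the run_experiment function is a made-up stand-in for an actual lab measurement.

```python
# Minimal active-learning loop: use a model's uncertainty to choose which
# experiment to run next. run_experiment is a hypothetical stand-in for a
# real lab measurement (e.g., scoring one candidate microbial community).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    return -(x - 0.7) ** 2 + 0.05 * np.sin(20 * x)  # made-up "community score"

candidates = np.linspace(0, 1, 200).reshape(-1, 1)  # designs we could test
tested_x = [0.1, 0.9]                               # two initial experiments
tested_y = [run_experiment(x) for x in tested_x]

for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6)
    gp.fit(np.array(tested_x).reshape(-1, 1), tested_y)
    mean, std = gp.predict(candidates, return_std=True)
    # Upper confidence bound: favor designs that look promising OR uncertain.
    nxt = float(candidates[np.argmax(mean + 1.5 * std)][0])
    tested_x.append(nxt)
    tested_y.append(run_experiment(nxt))

best = tested_x[int(np.argmax(tested_y))]
print(f"best design found after {len(tested_x)} experiments: x = {best:.3f}")
```

A dozen model-guided experiments home in on the best of 200 candidate designs, which is the resource saving Willett describes, scaled down from trillions of possibilities.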

Paul Rand: But if we become too reliant, is there a concern about a future where our research agendas are becoming driven by AI? Could AI actually lead to a decrease in creative ideas from the scientific community, through path dependency based on the inputs we put into the system?

Rebecca Willett: It depends on the context. So if we go back to my earlier example where I want to find the best microbial community out of trillions of possibilities, and I have a very clear notion of what makes it the best, I can measure that, I have a lot to gain here. I can reduce the amount of resources I have to spend on collecting data. But that approach is not appropriate if I’m really sort of more in an exploratory mode. So if I don’t know exactly what I’m looking for, then using one of these methods might mean that I just never do an experiment on something that’s really interesting, but just not exactly aligned with my overall objective. And so there’s this kind of inherent trade-off between exploration and exploitation.

Paul Rand: What do you mean by that?

Rebecca Willett: Part of good science is just exploring the unknown. Part of what we try to do to make products and services available to people is exploitation, trying to exploit our existing knowledge to design better systems or to guide the way we design experiments.

Paul Rand: Okay.

Rebecca Willett: And so depending on the context, yeah, I think using AI for experimental design would not be the right choice. And relying too heavily on an AI system to make predictions, without a thoughtful human behind the scenes, is possibly a fool’s errand.

Paul Rand: And of course, as our AI co-host mentioned at the beginning, who that human is behind the scenes matters a great deal. How AI could open the ability to do science up to more people and why that may not be a good thing, after the break.

If you’re getting a lot out of the important research shared on Big Brains, there’s another University of Chicago Podcast Network show you should check out. It’s called Entitled, and it’s about human rights. Co-hosted by lawyers and University of Chicago Law School professors Claudia Flores and Tom Ginsburg, Entitled explores the stories around why rights matter and what’s the matter with rights.

Big Brains is supported by the University of Chicago Graham School. Are you a lifelong learner with an insatiable curiosity? Join us at Graham and access more than 50 open enrollment courses every quarter in literature, history, religion, science, and more. We open the doors of UChicago to learners everywhere. Expand your mind and advance your leadership. Online and in-person offerings are available. Learn more at graham.uchicago.edu/bigbrains.

There is this concern that AI will eliminate jobs, but could it be the other way around? There have always been strong barriers to doing science, like needing a deep knowledge of fields, methods and statistics, and let’s be honest, a high level of intelligence. But could these tools open the gates wider to people who may know how to ask the right questions and explore ideas, but don’t have the other skills or time or money to acquire those skills?

Rebecca Willett: I’m not sure about the answer.

I think there’s inherent value to rigorous scientific training. So as we said before, what ChatGPT is doing is generating plausible strings of text that might in no way be true. And I think it’s important for somebody to be able to recognize when a string of words is at all consistent with our understanding of science, or where it might be going awry. And with no background, I think you’re just unequipped to do that. On the other hand, creativity is extremely important in science. We normally associate it more with the arts and humanities, but really thinking of creative explanations for how the world works, and why, is essential. And so to some extent, if these tools allow people to generate more creative ideas, if we can develop AI assistants for scientists that allow them to really harness their creativity, I think it could be exciting.

And there are a lot of people who are really thinking about leveraging or developing creative AI assistants. Another way in which AI might help democratize science is in helping us to produce training data. For instance, one big citizen science initiative that’s been running for many years now is called Galaxy Zoo, where humans do a little bit of training and then they’re presented with images of galaxies and asked to answer some questions about those galaxies. What this is doing is basically producing labels for the training data that might be used to analyze millions of images of galaxies. And I think that having high-quality training data is essential to making a lot of these AI systems work well. And so these kinds of citizen science projects provide a really cool opportunity, I think, for science enthusiasts to play an important role.
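In code, the Galaxy Zoo pattern, volunteers label a manageable sample so that a model trained on those labels can classify the rest of the archive, looks roughly like the sketch below. The features and labels are synthetic stand-ins, not real galaxy data.

```python
# Sketch of the citizen-science pattern: volunteers label a small sample,
# and a model trained on those labels classifies the rest of the archive.
# Features and labels are synthetic stand-ins for real galaxy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.normal(size=(5_000, 3))  # e.g., color, ellipticity, size
labels = (features[:, 0] + 0.5 * features[:, 2] > 0).astype(int)  # 1 = "spiral"

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.5, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# The trained model can now label far more objects than volunteers could.
```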

I think there are also a broader category of risks that we need to think about. For instance, if we place too much trust in these AI systems, we might think, well, we need to train fewer scientists in the United States, because the AI is going to do all this work for us. And I think if we overestimate the capability of those systems, that’s a real risk and a real missed opportunity. We still need those human thinkers.

Paul Rand: But what if those human thinkers are bad actors? We know that news organizations and people on social media will often cite case studies they’ve seen online, but have done very little research into. In a future where AI can generate a thousand fake studies that look legitimate in a matter of minutes, how should the scientific community be thinking about maintaining integrity?

So if you were going to build safeguards in to help advise on protecting against some of these downsides, what kind of safeguards would come top of mind to you?

Rebecca Willett: Yeah, it’s a good question. So first I’ll just tell you some of the things that people might’ve read about already in the news.

Paul Rand: Okay. Rebecca Willett: So they’ll say something like, “Well, I want to know what data that system was trained on.” And on one hand that sounds good. I want to know if your face recognition was only trained on white men and will probably fail on Black women. That seems like a useful thing for me to know. On the other hand, when we look at something like ChatGPT that was trained on trillions of words that no human could possibly read, where no human could possibly read all of them, it’s kind of vacuous. Right? Telling me that doesn’t tell me anything informative about what’s going on under the hood for that ChatGPT system.

Another thing people have called for is building transparent or explainable AI systems, where the AI system can explain the decision it’s making to a layperson. And again, this sounds good in certain contexts. If we’re using AI to decide who’s going to be let out on bail before defending their case in court, it sounds good for us to be able to explain what criteria the AI system is using. On the other hand, there are other tasks that are very difficult to explain, especially to a layperson. Like, how is a CAT scan image constructed from the raw data off the scanner? So there are a variety of things like this that have been proposed that, in the right context, are important and meaningful, but in general are really insufficient.

And I hate to say this, because I don’t have a better solution that I can propose. I think that these are actually open technical questions. How do we build a system that’s going to allow us to somehow certify it, certify that it’s not too biased against vulnerable groups, certify that it’s protecting people’s privacy in very general ways, certify that your autonomous vehicle is not going to kill a bicyclist? Besides just designing tests and trying things out, we don’t really have a good handle on this. And it’s an open question whether we can actually build in hooks or inroads into these systems that will allow us to test and validate and certify them more effectively.

Another risk is science misinformation, if you will. You could imagine someone maliciously trying to generate a bunch of fake scientific articles toward some end, making it very hard for earnest scientists to figure out: What is actually known? What experiments were actually run, and what’s been faked? And that’s going to put a drain on the resources of this whole scientific community.

And so yeah, I think there are definitely several different risks. Some of them, just in terms of what we need to do as academics to make sure that people are using AI in a rigorous and ethical way, and others about outside actors potentially doing malicious things that would have a terrible effect on us all. Right now, human oversight is just essential. Here at the University of Chicago, like most US universities, we have IRBs, institutional review boards. And before I run certain experiments, I need their approval to make sure that there’s no major ethical lapse. Now, for the most part, those boards are for when I’m running experiments on humans or animals. A lot of the research that I do on AI is not covered by those sorts of human oversight boards. So yeah, there certainly are risks.

Paul Rand: Here at the University of Chicago, I’m seeing your name popping up with great frequency on all sorts of different topics involving AI and the sciences.

Rebecca Willett: One of the great things about UChicago is that there’s a huge number of interactions across different departments. And so physicists and chemists, astronomers, ecologists, computer scientists, and statisticians are constantly getting together and talking with each other, and partnering to help advance the rigorous use of AI in the sciences. And I think this is especially exciting, because it’s not like things are somehow pigeonholed, where one little group is thinking about AI and physics, and a totally separate group is thinking about AI and chemistry, with no meeting in between. We’ve really been focused on trying to think about core principles in AI that will influence many of the sciences. And we’re already seeing connections across different disciplines.

Paul Rand: Can you give any examples of some of those?

Rebecca Willett: The Margot and Tom Pritzker Foundation recently supported a joint conference between the University of Chicago and Caltech, bringing in worldwide experts in AI and science across multiple different disciplines for a three-day conference. And this was really an experiment. Most of the conferences in this space are much more narrowly focused on a particular scientific domain, but it turned out to be great. We had a UChicago researcher, Samantha Riesenfeld, talking about how she uses clustering to understand aspects of immune responses in tissues. The idea is, I’ve got a lot of different data points. So for example, I’ve just got lots of different images of dogs. And these data points, these dog images, don’t have any labels. And what I want to do is group them so that somehow everything in a group is similar, and members of different groups are dissimilar.

Paul Rand: Fascinating.

Rebecca Willett: And so she was using these kinds of clustering ideas to analyze data from human tissues, and understanding people’s immune responses to different pathogens. And there was a physicist from MIT who was listening to this talk. And he said, “This is amazing, because it turns out I’m studying particle physics, and I’m facing exactly the same challenge, but in a totally different context.” And some of the specific approaches that Samantha was using turned out to be extremely relevant to the constraints associated with his physics problem.
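The clustering idea at the center of this exchange, grouping unlabeled points so that members of a group are similar and different groups are dissimilar, is domain-agnostic, which is why it transferred from immune profiling to particle physics. A minimal k-means sketch on synthetic two-dimensional data:

```python
# Minimal clustering sketch: group unlabeled points into similar clusters.
# The synthetic 2-D data stand in for immune-cell profiles or physics events.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
groups = [rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 4, 8)]
points = np.vstack(groups)  # no labels, just raw data points

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
for k in range(3):
    members = points[kmeans.labels_ == k]
    center = members.mean(axis=0).round(1)
    print(f"cluster {k}: {len(members)} points, center ~ {center}")
```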

Paul Rand: My goodness. 

Rebecca Willett: And people were thrilled by this. They said, “Yeah, normally I just talk to the same group of people over and over, and see the same ideas in our small little insular community. And by having this conference across different boundaries, I saw a whole different set of methods I could use.”

Paul Rand: It’s clear that AI could be a powerful tool that scientists could use to cure diseases, solve climate change, or even take us to outer space. But as Professor Willett explains, there are all sorts of ways these systems could go wrong, radically wrong, if they get too far ahead of human oversight, judgment and control. And even ChatGPT agrees.

ChatGPT: AI can be a powerful tool. It doesn’t replace the need for human judgment. AI is best used in partnership with human researchers, rather than as a replacement for them.

Matt Hodapp: Big Brains is a production of the University of Chicago Podcast Network. If you like what you heard, please leave us a rating and review. The show is hosted by Paul M. Rand and produced by me, Matt Hodapp, and Lea Ceasrine. Thanks for listening.
