
Overview

Can you imagine a world in which a wearable device, like a smartwatch, could move your fingers to strum the guitar or play the drums? That kind of technology is part of the innovative research coming out of the Human Computer Integration Lab at the University of Chicago, led by renowned computer scientist Pedro Lopes. His lab is developing a new generation of gadgets that use haptics (tactile sensations like the buzz of your smartphone) to move your body, replicate your sense of smell and even make you feel things.

In this episode, Lopes explores the potential of wearable devices to transform our future, as well as brain-computer interfaces, being developed by companies like Elon Musk’s Neuralink, that integrate directly into the body.

Transcript

Paul Rand: Perhaps one of the most popular gadgets of the last decade is something that you wear on your wrist, a smartwatch. These remarkable little devices tell us a lot about our bodies. They can track your sleep cycle, record your workouts, and even measure your resting heart rate.

Tape: That Apple Watch on your wrist may be more useful to your long-term health goals than you think. That’s according to the Wall Street Journal. Some doctors now are recommending it as a medical device.

Paul Rand: But what if they could do a lot more? What if these devices could actually stimulate specific movements in the human body, like moving your finger to do something as intricate as playing the piano?

Pedro Lopes: As you can see, I’m hooked up and connected so my muscles can be stimulated. She’s going to teach me how to play. This finger hit the wrong note; I got repelled from that, and then I got corrected only when I made a mistake. And so not only are they getting smaller, they’re also getting more intimate, closer to the human body.

Paul Rand: That’s Pedro Lopes, associate professor of computer science at the University of Chicago, where he also directs the Human-Computer Integration Lab, which creates devices that integrate directly with the person’s body.

Pedro Lopes: And that’s kind of where my lab kind of jumps in and says, sure, they’re getting smaller, but they’re also getting extremely intimate. And we have to ask ourselves what the next evolutionary step is. As we see computers and bodies getting so much closer and closer.

Paul Rand: Lopes is interested in pushing the limits of the wearable and immersive technologies that exist today. He imagines a future in which computers can actually act through the body itself by stimulating muscles, altering sensations, and even shaping perceptions.

Pedro Lopes: But there’s potentially another generation of nonmedical computers around the corner, with glasses that are now smart and have AI systems listening to us and helping us navigate the environment, potentially even ways to train ourselves in physical skills. Imagine a person practicing a batting-cage swing or a golf swing, but doing it in virtual reality with virtual reality goggles. Again, very intimate, very close to your body.

Paul Rand: And Lopes’s lab has actually made some of these ideas come to life, like VR technology that sends sensations through the user’s body using haptics. Haptics is the scientific term for tactile sensations, like the vibrating buzz of your smartphone, for example.

Pedro Lopes: That’s what we mean when we say haptics, but it is something that is a little bit absent in technology, right? Computers can make sounds and they can shine light through their screens to tell us there’s a new thing happening, but they don’t talk back to our skin except when they vibrate inside of our pockets, which is already helpful, but it’s not as powerful as if they could move our body.

Paul Rand: And Lopes has developed a number of haptic devices, from a backpack that can actually modify one’s sense of jumping to a device that can mimic the feeling of terrain under your feet. While these new devices could be used for fun, they also have a number of practical applications for people with physical, visual and hearing limitations.

Pedro Lopes: And so we’re really curious about what happens when people not just interact with computers but integrate with computers. And vice versa too, when computers not only interact with us but integrate with us to some extent. You can imagine immediately, if you’re a person with accessibility needs, if you’re in a wheelchair, paraplegic, this is kind of game changing.

Paul Rand: Many startups, including Elon Musk’s Neuralink, are making headlines for doing just that: developing implantable devices that connect the brain to external devices.

Pedro Lopes: And one very exciting area that these folks are looking at is can we put those implants close to the parts of your brain that deal with vision, that deal with images that come from your eyes and help maybe restore vision? Neuralink is not the only company trying to do that, but imagine just being able to see lines or a few features of the world if you at the moment cannot see. So that’s very exciting.

Paul Rand: While Lopes says this kind of technology has exciting possibilities, he believes it should also have its limits: not to compete with humans, but to adapt to us.

Pedro Lopes: So I tend to think the human condition is augmenting ourselves with tools but not replacing ourselves, right? We want to be in here making meaning and artistically expressing our thoughts. Not replacing us, just augmenting us to do more and do it in new ways.

Paul Rand: From the University of Chicago Podcast Network, welcome to Big Brains, where we explore the groundbreaking research and discoveries that are transforming our world. I’m your host, Paul Rand. Join me as we meet the minds behind the breakthroughs. On today’s episode: the future of computer integration and wearable devices. Big Brains is supported by the University of Chicago Graham School. Are you a lifelong learner with an insatiable curiosity? Join us at Graham and access more than 50 open enrollment courses every quarter in literature, history, religion, science and more. We open the doors of UChicago to learners everywhere. Expand your mind and advance your leadership; online and in-person offerings are available. Learn more at gram.uchicago.edu/bigbrains. Before we get into all of your work and your interests, I wonder if you could start off with a bit of a history lesson and tell us how the whole field of human-computer interaction has evolved over time. Where did we start, and what steps of progress have we made along the way?

Pedro Lopes: Yeah, one thing that I’d really like to emphasize to the listeners is that we take a lot of computing advances for granted. The fact that computers even respond to us, and that we can touch them and they’ll do the useful things they do, is in big part due to the evolution of computer interface design. Back in the fifties and even before, universities like the University of Chicago had maybe one computer, and everybody would fight to access that one computer. The way people would interact with those large machines was actually to punch holes in cards. And those holes were essentially the way that you would load little programs. You would go overnight, put your little card in the card reader, and then come back the next day and hope everything worked out and you got your computer simulation run overnight. All the listeners are probably surprised by that.

Things have moved at a much faster and very different pace since then. And as you said, Paul, there were a lot of evolution steps that took us from very slow interactions with computers, almost like a trade overnight, to things that respond within milliseconds to any request that we have. So the next evolutionary step that folks in human-computer interaction (that’s the field that studies how we can interface with computers) took was to ask the question: how should we interact with computers? Should we really just load big programs and wait overnight for them? Or could computers talk at human speed? I ask a question, and a few seconds later, like we are doing here together, it responds with something. And so immediately there was a sort of search for ways to talk to computers: using text with keyboards, using the mouse that we use to operate things two-dimensionally on our screens. Even screens weren’t there at the beginning.

So there were many, many, many evolutionary steps needed to go from those really early-day computers to computers that could sit eight hours a day in your office. And the world changed when computers were able to sit in our offices, and then later computers sitting in our laps, and then computers sitting in our pockets, and now computers sitting on our skin, the wearable computers that we all have to track our steps and track our health. And who knows what the next revolution will look like. I think that’s obviously why we’re talking: it’s trying to understand what the next step is.

Paul Rand: So Pedro, with that evolution, when did researchers start thinking about the body as part of the interface?

Pedro Lopes: I think what resonates with most researchers, and even with the public, when they think of computers connected to the body is the idea of wearable devices. It’s this idea that the computer is no longer just in your pocket like your mobile devices, but is attached to your skin, has a little light, and is measuring your pulse or the oxygenation of your blood. It’s measuring how many steps you take. But it really only came into the public sphere in the 2000s and 2010s, with the evolution of all these smartwatches that we now wear and have in our daily lives.

Paul Rand: Tell us if you can, now that you’ve kind of brought us up to date chronologically, what does your lab particularly focus on?

Pedro Lopes: Yeah, we focus on that question that I was mentioning earlier: what happens after we’ve taken all these big evolutionary leaps? You can imagine computing history as a sort of evolution with a couple of pressures, just like biological evolution has evolutionary pressures. Computing evolution also has a couple of pressures, and we jumped from really large mainframe computers, which only existed in universities and research labs that had just one, to computers on our desks, to computers in our laps, to computers in our pockets and on our bodies. One of the evolutionary pressures that I imagine is computers becoming smaller. Many of your listeners have heard of this phenomenon called Moore’s Law, which is an observational law. We call it a law, but it isn’t a law of nature in the sense of other physical laws. It’s a very powerful observation, and it has held true for many, many decades: computers keep getting smaller and smaller, because we can pack in more and more transistors, the little building blocks of computers.

Paul Rand: Is that still true, or have we caught up to Moore’s law?

Pedro Lopes: It depends on who you ask. I think we’re at a point where parts of Moore’s law are about to break or already broke, and there are little tricks that can make it go for another generation, but now it’s all little tricks. You can stack transistors in 3D; back then you could only do it in 2D. So we’re now hacking it to make it still hold true. So you’re absolutely right. That’s one evolutionary pressure: we have computers getting smaller. And not only are they getting smaller, they’re also getting more intimate, closer to the human body.

And that’s kind of where my lab kind of jumps in and says, sure, they’re getting smaller, but they’re also getting extremely intimate, and we have to ask ourselves what the next evolutionary step is, as we see computers and bodies getting so much closer and closer. And this shouldn’t be news to people who are thinking of computers in the medical domain. In that space, there have been prosthetics and cochlear implants and insulin pumps and all these kinds of medical devices that of course have computers inside. But there’s potentially another generation of nonmedical computers around the corner, with glasses that are now smart and have AI systems listening to us and helping us navigate the environment, potentially even ways to train ourselves in physical skills. Imagine a person practicing a batting-cage swing or a golf swing, but doing it in virtual reality with virtual reality goggles. Again, very intimate, very close to your body. So that’s sort of where my lab is at, asking what the next generation of computers will look like.

And of course, because we’re engineers, we like to also try our best guess at inventing some of those and testing them in the lab and seeing how people respond to some of those ideas.

Paul Rand: When your lab uses different terms, one that stood out was that you used the word integration, not necessarily interaction, and I wonder if you can tell me what that means?

Pedro Lopes: We’re trying to provoke, right. It’s probably not advice that I would give-

Paul Rand: I was provoked, by the way.

Pedro Lopes: Yeah, it’s probably not advice that I would give to every scientist out there, but my field is called human-computer interaction. Typically folks call it HCI for short, and I just wanted to provoke and flip that “I” to integration, which is not always something you should do, because it upsets some older folks in the field. But I wanted to ask: what if we stop having these slow dialogues with computers, interactions where I click and it shows me a pop-up, I move and it makes a sound, and really integrate, where I can no longer easily tell the boundaries between what’s the computer and what’s my own body? Say I have a prosthetic device, maybe a hearing implant that helps me listen, and it’s computerized.

So for example, if the noise around me is very loud, it will adjust automatically to help me hear well. I probably won’t say that’s my device; I would say that’s a new part of my body that helps me hear, just like my biological ears also do some degree of noise cancellation. And so we’re really curious about what happens when people not just interact with computers but integrate with computers. And vice versa too, when computers not only interact with us but integrate with us to some extent. And some of it might cause our minds to go to some dystopian views, and that’s important. We should be very critical of this-

Paul Rand: And we’ll talk about that. Yep.

Pedro Lopes: I’m sure we will. But there are also maybe interesting aspects, where maybe we learn in more physical ways, and we are less obsessed with screens and less enveloped in that world that is very distracting right now, which is something that we’re seeing in some of the experiments in our lab. People come to do experiments in our lab that are maybe of a different nature, maybe more physical or maybe more augmented reality. And they come out, and maybe they liked the experiment, maybe they didn’t, but they often tell us, “Oh, that was very different from how I relate to computers. That’s not distracting me. That’s not text messaging, et cetera.” It was actually a very enjoyable experience, whether or not it worked for them. And I’m very interested in that future, where computers might bring us back to being more physical and less disembodied.

Paul Rand: So you are kind of like designing for the human body rather than designing with it, is a simple way of phrasing it.

Pedro Lopes: What would it mean to have a mobile device at the time that would be more physical, that would move my body? I moved my body to touch the screen, but the screen wasn’t talking back in that physical way. And of course, for many things it makes no sense. If I’m just sending emails, well, the emails shouldn’t move my body. But if I’m learning a new physical skill, if I’m learning ... I’m a barista, and my first day at the job is to learn how to tamp the coffee grounds with the right amount of force, then I can watch a YouTube video to learn that. But it’s going to be extremely difficult to absorb that physical knowledge from the YouTube video. So I went home and thought, well, maybe I need to endow computers with a way of moving our body, with consent, obviously. I should tell the computer, yes-

Paul Rand: You’re such a big coffee drinker that the example you use is wanting a perfect cup of coffee; you’re going to get it no matter what.

Pedro Lopes: I want a great cup of coffee, and I want all the baristas in the world to have a great time learning this. And so we went and looked at the history of robotics and understood how people have been moving the human body. And of course, maybe this is a term you’ve heard before, Paul: exoskeletons, these big mechanical devices that you put around your body, almost like you’re donning a suit, and those suits have motors and they can move your body. Now, obviously, I looked around me and I saw no one wearing an exoskeleton. No barista was learning by wearing an exoskeleton. In fact, if I asked them, they were learning either by observing other humans doing it or by watching videos of other humans doing it. So computers are super powerful and they touch almost every aspect of our lives, but when it comes to cognitive assistance, you have a little computer telling you you made a spelling error.

But when you make a movement error, as a new barista, or as a new surgeon learning a physical skill, there’s no computer telling you you made an error and that you could improve that physical skill. And so I got very interested in that. It turns out there’s a whole field studying that inside of robotics called haptics, which is about the sense of touch and the sense of forces in our body. And then I got deeper-

Paul Rand: That word comes up a lot. Can you explain for folks what haptics means and why it’s so important?

Pedro Lopes: Yeah, it’s one of those beautiful terms that sounds a little jargony at the beginning, sort of scientific, but it’s very, very helpful. So I hope everybody starts using it more in everyday life. It’s a term that we can use when we want to talk about a sensation that is neither auditory nor visual. We’re not listening to it, we’re not seeing it, but it’s something that we feel with our skin. And it turns out we feel a lot of things with our skin. It’s actually not just whether we’re touching something or not. We feel forces, we feel pressure, textures. All those things that we couldn’t live without, right? You couldn’t even grab a pen if you didn’t have a sense of touch and a sense of forces. That’s what we mean when we say haptics, but it is something that is a little bit absent in technology, right?

Computers can make sounds, and they can shine light through their screens to tell us there’s a new thing happening, but they don’t talk back to our skin, except when they vibrate inside our pockets, which is already helpful, but it’s not as powerful as if they could move our body and show us that new skill. And so one way that we can get inspired by inventions in medicine is by looking at other ways to move the body, and electrical muscle stimulation is one such way. This is a technology that is extremely old, though it looks very sci-fi and new. It uses little electrodes, little patches that you put on top of your muscles, that look like this, and you pass a little electrical current through the inside of your body, a very, very tiny amount of current. And because your muscles are little electrical machines, once they feel the presence of the electrical energy, they will actually contract.

That is how your brain controls your muscles, by sending those impulses. So we’re just mimicking those impulses and letting the muscles contract. This technology is old, more than a hundred years old, and in medical rehabilitation it is very often used to sort of bring back your muscles. If you had some muscle tissue loss, maybe you broke your leg and had it in a cast for months and your muscles have atrophied, one way to bring you back up to speed is to put the electrodes on and kind of exercise your muscles safely while you’re still inside the cast. Now what we are doing is sort of asking the same questions that roboticists have asked with motors, but we’re asking them with muscle stimulation.

Roboticists said, “Can we help people move with these exoskeletons, by sending computer control to the motors that help the body move, to maybe pick up a box that otherwise you couldn’t, or balance in a situation where you would have a hard time balancing?” Now we’re doing this directly to your muscles rather than indirectly through an exoskeleton. So we’re sending these computerized electrical impulses at the right time, at the right moment, to rebalance your body, to move your body in ways that give you a sense that you’re being controlled, but also a sense that you’re doing the movement. And this is where it gets really different from the exoskeleton, and this is really interesting: if you’re wearing an exoskeleton, you almost have to imagine Pinocchio’s father puppeteering you. You’re being moved by it. Whereas if your body is being controlled by the muscle stimulation, to some degree you’re being puppeteered by this computer, and again, we’ll talk about the dystopian angles in just a second, but it is your body that is moving.

You actually feel your muscles contracting. You immediately know which muscle to move and with how much force, because at the end of the day, it’s exactly what you do when you contract your muscles.

Paul Rand: What is happening between your brain and the muscles at this point?

Pedro Lopes: We move your muscles through these electrical impulses, and your brain now has to wrestle with this command that was never initiated by your own body. So it creates a sense of conflict: you feel the muscle contracting, and your brain is confused by that. Sometimes people call that a sensory conflict, similar to how maybe you’ve felt, Paul, if you’re in a car and you start to feel nauseated because your eyes see movement but your body is not moving. That’s typically how people explain the sense of dizziness with car sickness, right?

This is a type of conflict too, where your body is moving but your brain didn’t instruct it to move. So one thing that we do at an engineering level is ask, can we fix that conflict? Can we, for example, wait a little bit? Instead of moving you very rapidly and very early, almost like it would be surprising you, we actually wait for your brain to generate the wish of moving, and now you have intended to move. We can still help you move, but we waited for you, so you no longer have this sensory conflict of “I never wanted to move and I’m moving.” Now you wanted to move and you were moving, but we are able to help you move maybe with the right force, with the right timing, and with the right balance.

Paul Rand: If you’re getting a lot out of the important research that’s shared on Big Brains, there’s another University of Chicago Podcast Network show that you should check out. It’s called Capitalisn’t. Capitalisn’t uses the latest economic thinking to zero in on the ways that capitalism is, and more often isn’t, working today. From the debate over how to distribute a vaccine to the morality of a wealth tax, Capitalisn’t clearly explains how capitalism can go wrong and what we can do about it. Listen to Capitalisn’t, part of the University of Chicago Podcast Network.

Paul Rand: Tell me what reactions you’ve seen from users, and how do they experience what you’re talking about?

Pedro Lopes: I must say that the most common reaction the very first time you try this is laughter. And I’ve asked the neuroscientists and I’ve asked the psychologists, and they also don’t know why this happens. It’s very, very interesting. People look at their hands, and we do a lot of stimulation on the hands especially, helping people, for example, learn how to operate tools they’ve never seen before. Imagine you’re a carpenter on your first day of the job, and they tell you, please use that chisel, and you’ve never even heard the word chisel before, and you grab the tool and automatically the muscle stimulation starts to perform that smooth movement-

Paul Rand: Holy cow.

Pedro Lopes: And you go like, “Whoa.”

Paul Rand: Wow.

Pedro Lopes: “That’s what a chisel does.” The first thing they do is look at their hands, very surprised, and they laugh. Because it’s haptics, it doesn’t need to be explained to people. We actually do very little talking in these experiments, because people very rapidly understand, “Oh, this is doing X, this is my body doing it.” We’ve actually put people through experiments where we ask them to manipulate tools they’ve never seen before. Really strange tools, like a magnetic sweeper. It’s something you use in factories. If you drop a thousand nails on the floor and you go, “Who’s going to pick those up?”, you have a little sweeper with a magnet inside to pick them up. But if you’ve never worked in a factory, you have no idea what a magnetic sweeper is.

We give this thing to people, we throw a thousand nails on the floor, and they have no idea how to use a magnetic sweeper, but we stimulate their bodies to start performing the right movements very rapidly. Within seconds after the laughter ceases, they really rapidly understand not only what the tool does, but even what the mechanism is: the magnetic sweeper has a little latch that you have to disengage for the magnet to move. They very rapidly understand, my hand is doing the right movements, I now understand how this tool works, and sometimes they even infer the mechanism. And I should really emphasize that we’re not exploring this in opposition to watching videos and learning from mentors. The idea is you could do all of them now, right? You could also have a video telling you this is a magnetic sweeper and it works like that, but imagine having all this extremely rich information entering your body at the moment you’re learning. And so the other reaction that we’ve had is when we do experiments that endow people with an ability to do something they’ve never done before.

For example, we had a long project that just finished where we were helping people learn musical skills. We put their hands on a piano, and we moved their fingers independently to play a melody they’d never played before.

Paul Rand: Unbelievable.

Pedro Lopes: People love learning a melody like that, right? It’s not the same as watching a video. We’ve actually compared it to watching a video, and this is potentially a little bit faster. It’s not the same as copying a teacher. Copying a teacher is very good; we as humans have evolved by copying each other very well, so we have a big brain infrastructure for copying movements. But there are things that are difficult to copy. I don’t know how much force you put into each key of the piano, the intensity of a note, but if you feel it, you immediately know it. So people get a kick out of feeling those sensations, how much force to put into each note. Then you have to play the melody again, unassisted; the muscle stimulation is out the window.

The device isn’t even there. Do you still remember? The group that trained with muscle stimulation remembers it better. And in fact, we even found the same thing that you find in education studies, which is that not only should the muscle stimulation constantly tell you what the melody is, it should actually help you where you make mistakes. So we actually had a group that also experienced the muscle stimulation, but it only intervened when you made a mistake. If you’re good, it just lets you play. And then the moment you make a mistake, it says it was actually not that finger, it’s not the index in this part, it’s the ring finger. It only kicks in there and corrects that mistake. That group remembers it even better than the other.

Paul Rand: Okay, extraordinary. Tell me about devices like Jump Mod or this whole idea of chemical haptics and how that impacts our understanding of perception.

Pedro Lopes: So Jump Mod is a device that helps you feel a realistic sense of jumping. If you’re practicing basketball throws in virtual reality, we can actually give you a sense that you’re jumping higher, your jumps become more useful, and you can come back out to the outside world and feel like those jumps were meaningful. Chemical haptics, much like that, is an attempt to make you feel physical things in the virtual reality world: touch is there, vibrations are there, even pain is there. We can simulate pain in a safe way, so that if you’re practicing, say, what to do in an emergency situation in a laboratory, if you spill a chemical on yourself, it becomes really immersive to feel a sensation on your skin. It’s not real pain, we simulate pain, but now it’s much more serious than if you’re just watching a video on YouTube, one of those training videos that we watch every year at universities about what to do in case of a fire emergency.

And you’re just watching the video laid back. Yeah, sure, sure. But we’ve actually put people in these immersive situations where they feel the heat of the fire.

Paul Rand: I’m sure people are hearing about this, but I think the perception, the idea, and arguably the ethics of Neuralink are very different than what you’re talking about. What is Neuralink, and how do you think about it?

Pedro Lopes: Yeah, the number one distinction that listeners should start to make as they think about all these new technologies in the body is: are they surface level? Are they outside of my body, things that I strap on, like my smartwatch, or maybe a little EKG with little leads to measure your heartbeat that your doctor might send you home with? Those are surface-level devices. Or is it an implanted device, does it live primarily inside of my body? Unlike the things we investigate in my lab, where we’re probing the future of these externalized devices that you can take on and off very rapidly, Neuralink investigates a different future, and many other companies like Neuralink exist, I should note, of implanted devices, devices that live inside of your body. So you can imagine immediately, if you’re a person with accessibility needs, if you’re in a wheelchair, paraplegic, this is kind of game changing. And many companies are trying to do the same.

Now more recently, and Paul, maybe that’s why you’re mentioning Neuralink being so much in the news, Neuralink is also in the news for trying to do not just reading, but writing to your brain. So far I’ve described devices inside the body reading your electrical activity, but neurons, like most cells in your body, go both ways. So if I can send a little current, just like we did with the electrical muscle stimulators, and it moves your body, we can also send a little current to your brain, and that maybe causes something to happen in your brain. And one very exciting area that these folks are looking at is: can we put those implants close to the parts of your brain that deal with vision, that deal with images that come from your eyes, and help maybe restore vision? Neuralink is not the only company trying to do that, but imagine just being able to see lines or a few features of the world if you at the moment cannot see.

So that’s very exciting, and I agree that brain-computer interfaces for writing to the brain are such a powerful tool. Other things are interesting in that realm too. In my lab, we do research using those same ideas, but for helping you learn to move faster or helping you learn new movements. We don’t do it internally like Neuralink does, because we don’t want our participants to have to undergo surgery to put the electrodes inside. We just do it externally. You can use big magnets to stimulate the inside of your brain in a safe and external manner, but it all falls into this category of what we could do if we could interface with you by writing to your body, not just reading from your body.

Paul Rand: You’ve used the word “dystopian” a few times in our conversation about a potential future, so let’s break apart what that word could mean, whether it’s eroding autonomy, giving up a different sense of agency, or messing with somebody’s head in a completely unexpected way. When you think about this potential dystopian threat or barrier, how do you think about it?

Pedro Lopes: Yeah, there are so many parallels to what we’ve been seeing on the robotics side that I think it’s a helpful analogy, right? In self-driving cars, and now we’re having this trend of humanoid robots, little robots that almost look like humans, that are going to come into our homes and help us do the dishes. But what happens when they have a glitch, when something technically goes wrong and, rather than putting the dish in the dishwasher, they slap us? We need to deal with the same type of threats in this human-computer integration work, right? If we’re going to design a technology that can help a barista, a golfer, or a surgeon learn the correct amount of force, we need to design it with all the fail-safes possible. So the words that you used, Paul, agency and sense of control, that’s something that in maybe the last five years became the most prevalent research topic in my lab.

Most of the papers we publish these days, most of the experiments we do these days, are not so much about how to control the muscles, because we did that about 10 years ago. They’re about: while the muscles are being controlled, how does the human override that control?

Paul Rand: Got it.

Pedro Lopes: What dials do we give the human to turn the system on or off, right? So for example, with the split body, how do I say, “Stop stirring the caramel, I want to do it myself, I’m the one in control now”? That type of control is something that folks in robotics are looking at as well. So we approach it in many forms and shapes, with some of our colleagues in neuroscience, who look at what the sense of control actually means in our brains. What part of our brain might be responsible for the sense of control? Because if we can know where it is, we can also monitor that part, and anytime that part says I shouldn’t be in control, the system turns off. Sometimes we just have a control system inside our muscle stimulation that, if it detects that you’re moving against it, turns off immediately. That’s very similar to what these robotics companies are going to try to put in your home: if the robot moves against you, it will automatically stop.

But again, and I know you’ve had many security people on this show as well, if you know a little bit about computer security, you know that no matter how many layers of fail-safes you put in, there’s always one more hack, one more way to attack the system.

Paul Rand: Yes, of course.

Pedro Lopes: So ultimately, there’s also a societal discussion to be had about these technologies. Where should they exist? Who has access to them, and how should they be regulated? Which agency will regulate them? Is it the FDA? Is it some other agency that we need to create, one that doesn’t even exist yet, that fundamentally understands these are not just drugs and food but new kinds of technologies? We might need an “FAI,” a federal agency of AI regulation, to regulate these kinds of things.

Paul Rand: The level of accomplishment that maybe a barista feels, or certainly somebody playing the piano, that sense of accomplishment is a pretty human condition. As you experiment, and as we think about how machines and bodies share control, how does that change your thinking about an evolution of what being human actually means?

Pedro Lopes: I think you just touched on the most important question of them all, and that has always been the most important question with technology, right? When a new technology innovates a sport, that sport goes into a little crisis and asks, “What is running, if we now have new running shoes? What is baseball, if we have carbon fiber something?” So I find this a beautiful human moment, and you’re right that body-integrated technology can push this to a deeper level and make us question even more deeply what it means to be human. Something very funny, just to tell an anecdote of something that happened here in the lab in 2019: the Guinness Book of World Records, which is this association that tracks crazy feats of performance in games and other activities of life, contacted us because of a work we did where we accelerated human reaction time. With muscle stimulation, we can make a human move a little bit faster than they normally would.

They could maybe catch a falling object that they would normally not catch. And so they contacted us, did some measurements, and determined that we had enabled a human reaction time to be artificially assisted by 50 milliseconds. That is still in the book, I believe, as the fastest artificially assisted reaction time.

Paul Rand: Wow, cool.

Pedro Lopes: It’s very fun as a scientist to have a record like that. But what it did in my lab was instigate the question that you raised. We said, “Wait a second. We didn’t really help a human, did we? This is all artificial. Or is it? Or is it not?” And we started debating that ourselves. That’s actually what led us to thinking about human values and agency and the sense of control, all because the Guinness World Records folks came and asked us whether this was real or not. And you’re right. What happened then is that most of the work in our lab stopped being about helping people do something while the electrical muscle stimulation is on.

The question became: when you remove it, did you learn something? So we became very interested in this as a learning technology, one that maybe will supercharge people to be better at jumping, like with JumpMod, or better at starting a hundred-meter dash, because they have muscle stimulation telling them when to move. Not pushing them over the edge electronically, but pushing them over the edge biologically, training them to be better humans. This led us to question the work we did with piano learning, too. I don’t want to go to a bar and see the pianist wearing an exoskeleton playing virtuoso pieces. Is that fun for anyone? I don’t think so. I think that’s not core human expression. That’s not art making, meaning making.

I want that person to maybe have new ideas as they’re playing at home. What if, as the movement comes about, they have a new artistic intuition? That could be interesting, but not to automate, not to replace humans. So I tend to think the human condition is augmenting ourselves with tools, but not replacing ourselves, right? We want to be in here making meaning and artistically expressing our thoughts. Not replacing us, just augmenting us to do more and to do it in new ways. I think that’s something we enjoy doing as animals on this planet.

Lea Ceasrine: Big Brains is a production of the University of Chicago Podcast Network. We’re sponsored by the Graham School. Are you a lifelong learner with an insatiable curiosity? Access more than 50 open enrollment courses every quarter. Learn more at graham.uchicago.edu/bigbrains. And if you like what you heard in our podcast, please leave us a rating and review. The show is hosted by Paul M. Rand, and produced by me, Lea Ceasrine, with help from Eric Fey. Thanks for listening.