Jason Riggle’s research on American Sign Language, or ASL, borrows ideas from computational, experimental, and traditional theoretical linguistics and is “very clearly at the intersection of several disciplines,” he says. “It involves bringing very new tools to bear on very old questions—and bringing old expertise to new questions. It’s the kind of thing we’re seeing more and more these days.”
Riggle, an assistant professor of linguistics, a graduate of UCLA, and director of the Chicago Language Modeling Lab, is trying to reveal the hidden similarities in languages that seem to make different demands on their users. With colleagues at the Toyota Technological Institute at Chicago and Purdue University, he hopes to shed light on the processes behind language change.
In a pilot study, Riggle brought native signers—including a third-generation deaf signer—into his lab for a marathon fingerspelling session. (In ASL, fingerspelling is used for emphasis and for words that do not have a sign.) The subjects sat before a high-speed camera, spelling a series of words that flashed onto a screen in front of them.
When Riggle’s team reviewed the six hours of footage they had collected, they were immediately struck by the errors the signers made. Because both typing and ASL involve the fingers, they had expected to see the kinds of errors commonly observed in typing—reversing or omitting letters, for instance. Instead, Riggle says, they saw anomalies more commonly observed in speech.
In speech, we are constantly readjusting our lips, tongue, and teeth in order to speak quickly and fluidly. As we articulate the word “warmth,” for instance, we very often unconsciously produce a puff of air that sounds like a “p” between the “m” and the “th.” This “p” is the accidental result of preparing to articulate the next sound in the word. In linguistics, this is known as “coarticulation.”
Riggle observed the same phenomenon among the signers he studied. For example, when they encountered a word with an “i”—a letter articulated with the pinky finger, which is used relatively rarely in ASL—the signers’ pinkies would begin to drift up. When asked to spell an unfamiliar word like “Felkelni,” a town in Ireland, one of the signers rendered it “Felkeklini.”
This might look like a careless mistake, but in a larger sense “the hand is doing a smart thing,” Riggle says. “It’s getting the pinky up because it’s going to be needed later. So the things we’re calling ‘errors’ are motivated by something we see all over the place in language—that is, getting the articulators that aren’t currently needed into the position they need to be in later.” The fingers are trying to take the easiest, most efficient path from Point A to Point B.
Over time, these optimizing strategies can become ingrained in a language, contributing to long-term changes. Studying signers offers powerful insight into language change, Riggle says, because “it allows us to pull apart what is an accidental property of having a mouth, and what is a deep property of language.”
Understanding fingerspelling anomalies yields more than theoretical insight. Riggle and several colleagues at the Toyota Technological Institute who study computational vision hope to build a computer system that can recognize ASL. By identifying how and when common errors occur, they can teach the computer to anticipate mistakes. If they can get a computer to recognize hand shapes, “they might be able to scale up to other kinds of learning from visual cues,” says Chris Kennedy, chair of linguistics.