For people who communicate using a single switch, a new interface learns how they make selections and then automatically adjusts accordingly.
In 1995, Jean-Dominique Bauby, the editor-in-chief of a French fashion magazine, suffered a massive stroke while driving, which left him with locked-in syndrome, a neurological condition in which the patient is almost completely paralyzed and can move only the muscles that control the eyes.
Bauby, who had signed a book deal shortly before his accident, wrote the memoir ‘The Diving Bell and the Butterfly’ using a dictation system in which his speech therapist recited the alphabet and he blinked when she said the correct letter. They wrote the 130-page book one blink at a time.
Technology has come a long way since Bauby’s accident. Many people with severe motor impairments caused by locked-in syndrome, cerebral palsy, amyotrophic lateral sclerosis, or other conditions can communicate using computer interfaces in which they select letters or words from an on-screen grid by activating a single switch, often by pressing a button, releasing a puff of air, or blinking.
But these row-column scanning systems are rigid. Much like the technique used by Bauby’s speech therapist, they highlight each option one at a time, making them extremely slow for some users. And they aren’t suitable for tasks where the options can’t be arranged in a grid, such as drawing, browsing the web, or gaming.
A more flexible system developed by MIT researchers places individual selection indicators next to each option on a computer screen. The indicators can be placed anywhere – next to anything someone might click with a mouse – so a user doesn’t have to cycle through a grid of choices to make selections. The system, called Nomon, uses probabilistic reasoning to learn how users make selections, then adjusts the interface to improve their speed and accuracy.
Participants in a user study were able to type faster with Nomon than with a row-column scanning system. Users also performed better on an image selection task, demonstrating how Nomon could be used for more than typing.
“It’s so cool and exciting to be able to develop software that has the potential to really help people. Being able to find those signals and turn them into communication like we’re used to is a really interesting problem,” says senior author Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society.
Joining Broderick on the paper are lead author Nicholas Bonaker, an EECS graduate student; Emli-Mari Nel, head of innovation and machine learning at Averly and a visiting lecturer at the University of the Witwatersrand in South Africa; and Keith Vertanen, an associate professor at Michigan Tech. The research is being presented at the ACM Conference on Human Factors in Computing Systems.
On the clock
In the Nomon interface, a small analog clock is placed next to every option the user can select. (A gnomon is the part of a sundial that casts a shadow.) The user looks at one option, then clicks their switch when that clock’s hand passes a red “noon” line. After each click, the system rotates the clocks to new phases so that the most likely next targets are spread far apart. The user clicks repeatedly until their target is selected.
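To illustrate the phase-separation idea, here is a minimal sketch; the function name and the stride heuristic are hypothetical stand-ins, not Nomon’s actual algorithm. Given a probability for each on-screen option, it spreads the most likely options far apart around the dial so that one mistimed click is less likely to land on the wrong high-probability target:

```python
import math

def assign_phases(probs):
    """Assign each option a clock phase in [0, 2*pi).

    Options are ranked by probability, then placed on evenly spaced
    dial positions visited with a large stride, so the most probable
    options end up far apart rather than adjacent.
    Hypothetical sketch only, not the Nomon implementation.
    """
    n = len(probs)
    # Evenly spaced starting positions around the dial.
    slots = [2 * math.pi * k / n for k in range(n)]
    # A stride coprime with n visits every slot while keeping
    # consecutively ranked options on opposite sides of the dial.
    stride = max(1, n // 2)
    while math.gcd(stride, n) != 1:
        stride += 1
    ranked = sorted(probs, key=probs.get, reverse=True)
    return {opt: slots[(i * stride) % n] for i, opt in enumerate(ranked)}
```

With four options, for example, the two most probable ones are assigned positions a quarter-turn or more apart instead of neighboring slots.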
When used as a keyboard, Nomon’s machine learning algorithms attempt to guess the next word based on previous words and each new letter as the user makes selections.
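As a toy illustration of that kind of prediction (a stand-in only; Nomon’s actual language model is more sophisticated and also conditions on previous words), next-letter probabilities can be estimated from a small vocabulary:

```python
from collections import Counter

def next_letter_probs(prefix, vocab):
    """Estimate P(next letter | word typed so far) from a word list.

    Toy stand-in for a language model: count which letter follows
    `prefix` among vocabulary words, then normalize the counts.
    """
    counts = Counter(
        word[len(prefix)]
        for word in vocab
        if word.startswith(prefix) and len(word) > len(prefix)
    )
    total = sum(counts.values())
    return {letter: c / total for letter, c in counts.items()} if total else {}
```

With a vocabulary like `["queen", "quick", "quote"]`, the prefix `"q"` gives probability 1.0 to `"u"` – exactly the situation in which an interface can safely make one option much easier to hit.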
Broderick developed a simplified version of Nomon several years ago, but decided to revise it to make the system easier to use for people with limited mobility. She enlisted Bonaker, then an undergrad, to redesign the interface.
They first consulted with nonprofit organizations that work with people with mobility issues, as well as a mobility-impaired switch user, to gather feedback on Nomon’s design.
Next, they designed a user study that would better represent the abilities of people with reduced mobility. They wanted to vet the system thoroughly before taking up much of the valuable time of users with motor impairments, so they first tested on non-switch participants, Broderick says.
To gather more representative data, Bonaker designed a webcam-based switch that was harder to use than a simple click of a key. Non-switch users had to lean their body to one side of the screen, then back to the other side to register a click.
“And they have to do it at precisely the right time, so it really slows them down. We did empirical studies that showed they were much closer to the response times of people with reduced mobility,” says Broderick.
They conducted a 10-session user study with 13 non-switch participants and one single-switch user with an advanced form of spinal muscular dystrophy. In the first nine sessions, participants used Nomon and a row-column scanning interface for 20 minutes each to perform text entry; in the 10th session, they used both systems for a picture-selection task.
Non-switch users typed 15% faster with Nomon, while the single-switch user typed even faster than the non-switch users did. When typing unfamiliar words, users were 20% faster overall and made half as many errors. In their final session, they completed the picture-selection task 36% faster using Nomon.
“Nomon is much more forgiving than row-column scanning. With row-column scanning, even though you’re slightly off, you’ve now chosen B instead of A and that’s a mistake,” says Broderick.
Adapting to noisy clicks
With its probabilistic reasoning, Nomon incorporates everything it knows about where a user is likely to click to make the process faster, easier, and less error-prone. For example, if the user selects “Q,” Nomon makes it as easy as possible for the user to select “U” next.
Nomon also learns how a user clicks. If the user always clicks a little after the clock hand strikes noon, the system adapts to that in real time. It adapts to noise, too: if a user’s clicks are often imprecise, the system requires additional clicks to ensure accuracy.
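The adaptation described above can be sketched with a simple online model of click timing – a hypothetical illustration, not Nomon’s actual inference. It tracks the user’s mean offset from noon along with its spread, and requires extra confirming clicks when the spread is large:

```python
class ClickModel:
    """Online Gaussian-style model of when a user clicks vs. 'noon'.

    Illustrative sketch only: the running mean captures a user who
    consistently clicks a bit late, and a large spread (a noisy
    clicker) raises the number of confirming clicks required.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0   # average offset from noon, in seconds
        self._m2 = 0.0    # sum of squared deviations (Welford's method)

    def observe(self, offset):
        """Update the running mean and spread with one click offset."""
        self.n += 1
        delta = offset - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (offset - self.mean)

    @property
    def std(self):
        return (self._m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

    def clicks_required(self):
        # Noisier clickers confirm with extra clicks; the 50 ms
        # threshold here is an arbitrary illustrative choice.
        return 1 + int(self.std / 0.05)
```

A user who reliably clicks about 110 ms late, for instance, yields a mean near 0.11 s with a small spread, so the system can simply shift its expectation rather than demand extra clicks.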
This probabilistic reasoning makes Nomon powerful, but it also imposes a higher click load than row-column scanning systems, and clicking multiple times can be a daunting task for users with severe motor impairments.
Broderick hopes to reduce the click load by incorporating gaze tracking into Nomon, which would give the system more robust information about what a user might choose next based on the part of the screen they’re looking at. The researchers also want to find a better way to automatically adjust the clock speeds to help users be more accurate and efficient.
They are working on a new series of studies in which they plan to partner with more users with reduced mobility.
“So far, feedback from users with reduced mobility has been invaluable to us; we are very grateful to the switch user who commented on our initial interface and to the separate switch user who participated in our study. We are currently expanding our study to work with a larger and more diverse group from our target population. With their help, we are already making further improvements to our interface and working to better understand Nomon’s performance,” she says.
“Non-speaking individuals with motor disabilities currently lack effective communication solutions for interacting with either speaking partners or computer systems. This ‘communication gap’ is a known unsolved problem in human-computer interaction, and so far there are no good solutions. This paper demonstrates that a highly creative approach underpinned by a statistical model can deliver tangible performance gains to the users who need them most: non-speaking individuals who rely on a single switch to communicate,” says Per Ola Kristensson, professor of interactive systems engineering at the University of Cambridge, who was not involved in this research. “The paper also demonstrates the value of complementing insights from computational experiments with the involvement of end users and other stakeholders in the design process. I find this paper highly creative and important, in an area where it is notoriously difficult to make significant progress.”
This research was supported, in part, by the Seth Teller Memorial Fund to Advance Technology for People with Disabilities, a Peter J. Eloranta Summer Undergraduate Research Fellowship, the MIT Quest for Intelligence, and the National Science Foundation.