MUSTE: Multimodal semantic text editing
Peter Ljunglöf
University of Gothenburg
Over the last 10–20 years, several new modes of human-computer interaction have emerged, such as speech recognition, touch screens and eye tracking. During the same time, the number of text interactions has increased enormously – e.g., over 50 billion text messages are sent every day, counting only SMS and chat clients. But the full potential of the new modalities remains largely unexploited: text authoring is still viewed conceptually as an incremental left-to-right process, where a text is authored by adding new words at the end. This view has some problems, especially when it comes to new modalities such as touch screens:
- A virtual touch-screen keyboard is cognitively demanding, since the user gets no haptic feedback and instead has to constantly look at the virtual keys.
- Letter-by-letter text authoring is demanding in itself for cognitively disabled users, since it requires so many interactions with the device.
- The incremental view focuses on how to enter new text, and does not give much help when it comes to editing existing text.
The basic problem that we want to solve in this project is how to reduce the cognitive load when authoring and editing text on devices with non-traditional input modalities. Our approach is that the user should be able to modify any word or phrase in the text at any time, and that the system should be helpful and suggest good alternative formulations.
MUSTE is a 5-year research project financed by Vetenskapsrådet (Swedish research council), running in the years 2015–2019, and led by Peter Ljunglöf.