Posts tagged Multimodal User Interface
Master Thesis: Modality-independent Exchange of Information Across Devices Using the Pick-and-Drop Concept
I have submitted my master’s thesis to the University of Ulm and received my master’s degree. The task of my thesis was to develop and evaluate a multimodal user interaction concept for an existing prototypical system that allows drag-and-drop-like interaction across device borders.
To make things more interesting, this is no ordinary drag-and-drop operation in which the dragged information retains its original representation. Depending on the context, the dragged information can be dropped and represented as a different media type. An example would be dragging the text “apple” and dropping it into a droppable area for images: the dragged information would adapt by switching to an image of an apple, presenting the same information visually. One of the main achievements of my thesis was to devise such an interaction concept and to discuss the implications and problems inherent to this kind of interaction.
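The adaptation step described above could be sketched roughly as follows. This is only an illustrative sketch, not the thesis prototype; all names (`Payload`, `drop`, the converter registry) are hypothetical, and the text-to-image conversion is a stand-in for whatever lookup the real system would perform.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Payload:
    media_type: str   # e.g. "text", "image"
    content: object   # the actual data or a reference to it

# (source type, target type) -> conversion function
converters: Dict[Tuple[str, str], Callable[[Payload], Payload]] = {}

def register(src: str, dst: str):
    def wrap(fn):
        converters[(src, dst)] = fn
        return fn
    return wrap

@register("text", "image")
def text_to_image(p: Payload) -> Payload:
    # Stand-in for a real lookup, e.g. querying an image service for "apple"
    return Payload("image", f"<image of {p.content}>")

def drop(payload: Payload, target_type: str) -> Payload:
    """Adapt the dragged payload to the media type the drop area accepts."""
    if payload.media_type == target_type:
        return payload
    converter = converters.get((payload.media_type, target_type))
    if converter is None:
        raise ValueError(f"no conversion from {payload.media_type} to {target_type}")
    return converter(payload)

result = drop(Payload("text", "apple"), "image")
print(result.media_type, result.content)  # image <image of apple>
```

The registry makes the set of supported adaptations explicit, which is one way to confront the problem the thesis discusses: not every media type can be meaningfully converted into every other.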
Another goal was the ability to spontaneously switch input modalities and devices on demand. For example, a user entering information on a computer with mouse and keyboard should be able to leave the computer, pick up a tablet, and continue working in the next room, entering the text via the tablet’s touch or voice input.
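The idea of modality independence could be sketched like this: every input modality is reduced to a common, modality-independent event feeding a shared session, so the device and modality can change mid-task. Again, this is only a hedged illustration; the class and method names are my own invention, not the actual prototype’s API.

```python
class Session:
    """A shared editing session that any device/modality can contribute to."""

    def __init__(self):
        self.buffer = []

    def input_text(self, text: str, modality: str, device: str):
        # Keyboard, touch, and voice input all arrive as plain text here,
        # so the session does not care which modality produced it.
        self.buffer.append(text)
        print(f"[{device}/{modality}] appended {text!r}")

session = Session()
session.input_text("Hello ", modality="keyboard", device="desktop")
# The user walks away and picks up a tablet; the same session continues.
session.input_text("world", modality="voice", device="tablet")
print("".join(session.buffer))  # Hello world
```

The essential design point is that the session state lives independently of any one device, so picking up a different device only swaps the input front end.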
Alas, I cannot share the source code this time, as my prototype depends heavily on the existing prototypical system and is interwoven with its code, which is the property of the University of Ulm and part of a larger research project. But since the focus was the interaction concept and its theoretical elaboration, the most interesting parts of this work can be found in the documentation.
The documentation was written to the best of my knowledge, though I can’t guarantee it is entirely free of errors. Furthermore, the disclaimer of this blog applies.