New forms of Human-Computer Interaction for Visualizing Information
Title | New forms of Human-Computer Interaction for Visualizing Information |
Publication Type | Journal Article |
Year of Publication | 2010 |
Authors | Reiterer H, Kerren A, Plaisant C, Stasko JT |
Journal | Information Visualization |
Date Published | 2010 |
Abstract | The Graphical User Interface (GUI), although developed in research laboratories in the late 1970s, is still the dominant interaction paradigm in Information Visualization. We propose a new interaction paradigm called Blended Interaction. It combines ideas of Embodied Cognition, Multimodal Interaction, Reality-Based Interaction, and Ubiquitous Computing. This is intended to stress that merely increasing the realism of the interaction cannot go far enough. The particular challenge, and from the user's standpoint the key advantage, lies in a meaningful marriage between the tried-and-tested options of the real world and those of the digital world. At a minimum, this marriage must exist at the levels of interaction, of communication, of the way we solve problems with conventional tools (workflows), and of the design of the space or the architecture of buildings and places. The digital world often offers entirely new possibilities and takes the form of interactive devices of various shapes, but also of intelligent everyday objects (e.g. the 'Internet of Things'). In our view, interaction concepts can indeed offer a new quality of interaction, but only when the design of the interaction includes all these domains at the same time and with equal weighting. We test the suitability of our Blended Interaction concepts using specific application examples that are being worked on as part of current research projects. Our experience shows that this new interaction paradigm also has great potential for interacting with visualizations. For example, we have developed multi-touch scatter plots and facet maps for tangible user interfaces that support searching and browsing in Digital Libraries. We have embedded different visualizations into a Zoomable Object-oriented Information Landscape (ZOIL), which supports our vision of using visualizations on multiple displays of different sizes at the same time. We have also developed a specific kind of search token that supports collaborative search activities. In particular, we address the following research questions:
* What could future interactive InfoVis tools look like, especially in light of the idea of Blended Interaction?
* How can future interactive InfoVis tools benefit from multi-display and multimodal environments used by multiple users?
* What are the specific design requirements for multi-touch visualizations?
* How can we support the collaborative use of visualization tools? |