
    Establishing the design knowledge for emerging interaction platforms

    Anticipating a variety of innovative interactive products and services that will soon appear on the market, such as interactive tabletops, interactive TVs, public multi-touch walls, and other embedded appliances, this paper calls for preparing for the arrival of such interaction platforms by building design knowledge around their interactivity. We advocate studying, understanding and establishing the foundations for the interaction characteristics, affordances and design implications of these platforms, which we know will soon emerge and penetrate our everyday lives. We review some of the archetypal interaction platform categories of the future and highlight the current status of the design knowledge-base accumulated to date and its rate of growth for each of them. We illustrate design issues and considerations with example designs drawn from the authors’ 12-year experience in pioneering novel applications in various forms and styles.

    Interaction platform-orientated perspective in designing novel applications

    The lack of HCI involvement in the invention of novel software applications, and the bias of design knowledge towards the desktop GUI, make it difficult to design for novel scenarios and applications that leverage emerging computational technologies. These include new media platforms such as mobiles, interactive TV, tabletops and large multi-touch walls, on which many of our future applications will operate. We argue that novel application design should proceed not from user-centred requirements engineering, as in developing a conventional application, but from understanding the interaction characteristics of the new platforms. Ensuring general usability for a particular interaction platform, without rigorously specifying envisaged usage contexts, helps us to design an artifact that does not restrict the possible application contexts and yet is usable enough to help brainstorm its more exact place for future exploitation.

    Designing novel applications inspired by emerging media technologies

    The field of Human-Computer Interaction provides a number of useful tools and methods for obtaining information about end-users and their usage contexts to inform the design of computer systems, yet relatively little is known about how to design a completely novel application for which there is no user base and no existing practice of use at the start. The currently available HCI methodology, which focuses on understanding users’ needs and establishing requirements, has deservedly succeeded in making computing applications usable by fitting them to end-users’ usage contexts. However, too much emphasis on identifying user needs tends to stifle more exploratory design activities in which new types of applications are invented in order to discover or create activities not currently practiced. In this paper, we argue that a great starting point for novel application design is not the problem space (trying to rigorously define the user requirements) but the solution space (trying to leverage emerging computational technologies and the growing design knowledge for various interaction platforms), and we build a foundation for a pragmatic design methodology supported by the authors’ extensive experience in designing novel applications inspired by emerging media technologies.

    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where users can bring multiple devices together in-air - with 2-5 armatures poseable in 7DoF within the same workspace - to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.

    Emerging technologies for learning report (volume 3)

    Cross-display attention switching in mobile interaction with large displays

    Mobile devices equipped with a rich set of features (e.g., camera, network connectivity and media player) are increasingly used for tasks such as web browsing, document reading and photography. While the portability of mobile devices makes them desirable for pervasive access to information, their small screen real-estate often restricts the amount of information that can be displayed and manipulated on them. Large displays, on the other hand, have become commonplace in many outdoor as well as indoor environments. While they provide an efficient way of presenting and disseminating information, they offer little support for digital interactivity or physical accessibility. Researchers argue that mobile phones provide an efficient and portable way of interacting with large displays, and that the latter can overcome the limitations of small mobile screens by providing a larger presentation and interaction space. However, distributing user interface (UI) elements across a mobile device and a large display can cause switching of visual attention, which may affect task performance. This thesis explores how switching visual attention between a handheld mobile device and a vertical large display affects a single user's task performance during mobile interaction with large displays. It introduces a taxonomy based on the factors associated with the visual arrangement of Multi Display User Interfaces (MDUIs) that can influence visual attention switching during interaction with MDUIs. It presents an empirical analysis of the effects of different distributions of input and output across mobile and large displays on the user's task performance, subjective workload and preference in a multiple-widget selection task and in visual search tasks with maps, texts and photos. Experimental results show that selecting multiple widgets replicated on both the mobile device and the large display is faster than selecting widgets shown only on the large display, despite the cost of the initial attention switch in the former configuration. In contrast, a hybrid UI configuration in which the visual output is distributed across the mobile and large displays is the worst, or equivalent to the worst, configuration in all the visual search tasks. A mobile-device-controlled large display configuration performs best in the map search task and equal to best (i.e., tied with a mobile-only configuration) in the text- and photo-search tasks.

    Personalized Interaction with High-Resolution Wall Displays

    Falling hardware prices and an increasing openness towards more diverse interaction modalities have made wall-sized interactive displays feasible in recent years, and applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently suited to multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will remain essential in most use cases. Current desktop and mobile user interfaces regulate access via an initial login and then personalize the entire interface to that user: access to personal data, configurations and communications all assumes a single user per screen. When several people share one large screen, this is not a feasible solution and alternatives must be found. This thesis therefore addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as the input modality) and further away (using additional mobile devices). Technical solutions that identify the user behind each interaction can replace logins and enable personalized interaction for multiple users at once; this thesis explores two alternatives: tracking users with RGB+depth-based cameras and locating the users' mobile devices via ultrasound positioning. Building on this, interaction techniques that support personalized interaction using personal mobile devices are proposed: in the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose an arm-worn device that facilitates access to private data and personalized interface elements. The work also contributes insights into the practical implications of the output and interaction modalities used for personalized interaction: a qualitative study analyses user behaviour in the cooperative multi-user game Miners, revealing awareness and occlusion issues. The final contribution is GIAnT, an analysis toolkit that visualizes users' movements, touch interactions and gaze points when interacting with wall displays, and thus allows fine-grained investigation of the interactions.

    CurationSpace: Cross-Device Content Curation Using Instrumental Interaction

    For digital content curation of historical artefacts, curators collaboratively collect, analyze and edit documents, images, and other digital resources in order to display and share new representations of that information to an audience. Despite their increasing reliance on digital documents and tools, current technologies provide little support for these specific collaborative content curation activities. We introduce CurationSpace – a novel cross-device system – to provide more expressive tools for curating and composing digital historical artefacts. Based on the concept of Instrumental Interaction, CurationSpace allows users to interact with digital curation artefacts on shared interactive surfaces using personal smartwatches as selectors for instruments or modifiers (applied to either the whole curation space, individual documents, or fragments). We introduce a range of novel interaction techniques that allow individuals or groups of curators to more easily create, navigate and share resources during content curation. We report insights from our user study about people’s use of instruments and modifiers for curation activities.

    Children's Acceptance of a Collaborative Problem Solving Game Based on Physical Versus Digital Learning Spaces

    Collaborative problem solving (CPS) is an essential soft skill that should be fostered from a young age. Research shows that video games are a good way of teaching such skills; however, the success and viability of this method may be affected by the technological platform used. In this work we propose a gameful approach to training CPS skills in the form of the CPSbot framework and describe a study involving 80 primary school children on the user experience and acceptance of a game, Quizbot, delivered on three different technological platforms: two purely digital (tabletop and handheld tablets) and one based on tangible interfaces and physical spaces. The results show that the physical space proved to be more effective than the screen-based platforms in several ways, as well as being considered more fun and easier to use by the children. Finally, we propose a set of design considerations for future gameful CPS systems based on the observations made during this study.
    Funding: Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (project TIN2014-60077-R); Spanish Ministry of Education, Culture and Sport (fellowship FPU14/00136); Conselleria d'Educacio, Cultura i Esport (Generalitat Valenciana, Spain) (grant ACIF/2014/214).
    Jurdi, S., García Sanjuan, F., Nácher-Soler, V. E., & Jaén Martínez, F. J. (2018). Children's Acceptance of a Collaborative Problem Solving Game Based on Physical Versus Digital Learning Spaces. Interacting with Computers, 30(3), 187-206. https://doi.org/10.1093/iwc/iwy006