24 research outputs found

    The Richest Possible Interaction at the Lowest Cost

    No full text
    Cognitive stimulation games are of major importance in slowing the decline of people with cognitive disorders. Some of these games belong to the GUI domain and show limitations, particularly since the emergence of NUIs (user passivity, less rich interaction, etc.). Games also exist in the NUI domain, but they often rely on expensive technology or are specialized for one specific problem, and it is often impossible to modify the exercises they offer. This article proposes a low-cost approach to developing stimulation games in the NUI domain. The principle is to use the digital devices already at hand to build a game that is attractive and reusable in other domains. The article presents StimCards, an interactive card game. Users can create their own cards and thus have an unlimited base of questions, so the game adapts to any domain and any application. An experiment showed that StimCards is stimulating and well accepted by users.

    Understanding Visual Feedback in Large-Display Touchless Interactions: An Exploratory Study

    Get PDF
    Touchless interactions synthesize input and output from physically disconnected motor and display spaces without any haptic feedback. In the absence of haptic feedback, touchless interactions rely primarily on visual cues, but the properties of visual feedback remain unexplored. This paper systematically investigates how large-display touchless interactions are affected by (1) types of visual feedback (discrete, partial, and continuous); (2) alternative forms of touchless cursors; (3) approaches to visualizing target selection; and (4) persistent visual cues to support out-of-range and drag-and-drop gestures. Results suggest that continuous visual feedback was more effective than partial; users disliked opaque cursors, and efficiency did not increase when cursors were larger than the display artifacts. Semantic visual feedback located at the display border improved users' efficiency in returning within the display range; however, echoing the path of movement in drag-and-drop operations decreased efficiency. Our findings contribute key ingredients for designing suitable visual feedback for large-display touchless environments. This work was partially supported by an IUPUI Research Support Funds Grant (RSFG).
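The three feedback granularities the study compares can be illustrated with a minimal sketch. The function and its update model are ours, not the paper's; it simply makes concrete what "discrete", "partial", and "continuous" mean in terms of how often a touchless cursor refreshes:

```python
from enum import Enum

class Feedback(Enum):
    DISCRETE = "discrete"      # visual change only at key events (e.g., selection)
    PARTIAL = "partial"        # coarse, intermittent updates
    CONTINUOUS = "continuous"  # cursor echoes every tracked frame

def cursor_updates(mode: Feedback, frames: int, interval: int = 10) -> list:
    """Frame indices at which the touchless cursor visualization refreshes."""
    if mode is Feedback.CONTINUOUS:
        return list(range(frames))               # every frame
    if mode is Feedback.PARTIAL:
        return list(range(0, frames, interval))  # every `interval` frames
    return [frames - 1]                          # only at the final (selection) event
```

For a 30-frame gesture, continuous feedback yields 30 refreshes, partial yields 3 (at the hypothetical 10-frame interval), and discrete yields only the selection event.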

    Facilitating data exploration in casual mobile settings with multi-device interaction

    Get PDF
    Big data is the new buzzword among computer professionals. Governments and industry increasingly look to benefit from exploring immense data sets with powerful new tools. Large amounts of data are generated through our daily activities: commuting, eating lunch, using mobile phones, and reading bedtime stories to children. In a truly democratized society, we should have access to the data we generate, along with the tools needed to gain insight from it. Consequently, there is an emerging need for aggregating data from different sources and presenting it in forms that make it accessible to different stakeholders within social entities. Pervasive computing systems will soon be required to provide opportunities for users to rapidly explore big data in ad-hoc, casual settings. This work focuses on how we can transform everyday spaces into data-rich environments where citizens can interactively explore data sets. Specifically, it investigates how we can turn table surfaces into interactive spaces by augmenting currently available mobile devices. Using multiple mobile devices, for both single users and groups, is the focal theme, and new interaction techniques are explored. The project builds on past research from the t2i Interaction Laboratory and explores new sensing techniques, communication protocols, and navigation patterns.

    Collaborative Human-Computer Interaction with Big Wall Displays - BigWallHCI 2013 3rd JRC ECML Crisis Management Technology Workshop

    Get PDF
    The 3rd JRC ECML Crisis Management Technology Workshop on Human-Computer Interaction with Big Wall Displays in Situation Rooms and Monitoring Centres was co-organised by the European Commission Joint Research Centre and the University of Applied Sciences St. Pölten, Austria. It took place in the European Crisis Management Laboratory (ECML) of the JRC in Ispra, Italy, from 18 to 19 April 2013. 40 participants from stakeholders in the EC, civil protection bodies, academia, and industry attended the workshop. Large-display hardware has, on the one hand, been mature for many years and is, on the other hand, changing rapidly and improving constantly. This fast pace of development promises impressive new setups with respect to, e.g., pixel density or touch interaction. On the software side, there are two components with room for improvement: 1. the software provided by the display manufacturers to operate their video walls (source selection, windowing system, layout control), and 2. dedicated ICT systems developed for the specific needs of crisis management practitioners and monitoring centre operators. While industry is already starting to focus more on the collaborative aspects of its operating software, the customized and tailored ICT applications needed are still missing, unsatisfactory, or very expensive, since they have to be developed from scratch many times over. The main challenges identified for enhancing big wall display systems in crisis management and situation monitoring contexts include: 1. Interaction: overcoming static layouts and/or passive information consumption. 2. Participatory Design & Development: software needs to meet users' needs. 3. Development and/or application of Information Visualisation & Visual Analytics principles to support the transition from data to information to knowledge. 4. Information Overload: proper methods for attention management, automatic interpretation, incident detection, and alarm triggering are needed to deal with the ever-growing amount of data to be analysed. JRC.G.2 - Global security and crisis management

    Proceedings of Cross-Surface 2016: Workshop on Challenges and Opportunities for 'Bring-Your-Own-Device' (BYOD) in the Wild

    Get PDF
    In this workshop, we reviewed and discussed challenges and opportunities for Human-Computer Interaction in relation to cross-surface interaction in the wild, based on the bring-your-own-device (BYOD) practice. We brought together researchers and practitioners working on technical infrastructures for cross-surface computing, studies of cross-surface computing in particular domains, and interaction challenges for introducing cross-surface computing in the wild, all with a particular focus on BYOD. Example application domains are cultural institutions, workplaces, public libraries, schools, and education. More details about the workshop can be found in the submitted proposal [1]. The workshop was held in conjunction with the 2016 ACM Conference on Human Factors in Computing Systems (CHI), which took place from May 7 to 12 in San Jose, USA. [1] Steven Houben, Nicolai Marquardt, Jo Vermeulen, Johannes Schöning, Clemens Klokmose, Harald Reiterer, Henrik Korsgaard, and Mario Schreiner. 2016. Cross-Surface: Challenges and Opportunities for 'bring your own device' in the wild.

    Personalized Interaction with High-Resolution Wall Displays

    Get PDF
    An increasing openness toward more diverse interaction modalities, as well as falling hardware prices, has made wall-sized interactive displays feasible in recent years, and consequently their application in settings such as visualization, education, and meeting support has been demonstrated successfully. Their size makes wall displays inherently suited to multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will remain essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login, and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assume a single user per screen. When multiple people use one screen, this is not a feasible solution and alternatives must be found. This thesis therefore addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as the input modality) and further away (using mobile devices). 
    Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking users with RGB+depth cameras and locating the users' mobile devices via ultrasound positioning. Building on this, techniques that support personalized interaction using personal mobile devices are proposed; these devices give access to personal data and allow interaction at some distance from the display wall. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose an arm-worn device to facilitate access to private data and personalized interface elements. Additionally, the work contributes insights into the practical implications of the output and interaction modalities used for personalized interaction: we present a qualitative study that analyses user behaviour in the multi-user cooperative game Miners, finding awareness and occlusion issues. The final contribution concerns the analysis process itself: GIAnT, an analysis toolkit for wall interactions, visualizes users' movements, touch interactions, and gaze points, and thus greatly simplifies fine-grained investigation of the interactions.
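Both identification approaches in the thesis ultimately attribute each touch to whichever tracked person (or device) is nearest to it. A minimal, hypothetical sketch of that nearest-body heuristic follows; the function name, one-dimensional wall coordinate, and data shapes are ours, not the thesis's:

```python
def assign_touch(touch_x: float, user_positions: dict) -> str:
    """Attribute a touch at horizontal wall coordinate `touch_x` (metres)
    to the tracked user standing closest to it.

    `user_positions` maps a user id to that user's tracked horizontal
    position along the wall, e.g. from a depth camera or ultrasound fix.
    """
    if not user_positions:
        raise ValueError("no tracked users to attribute the touch to")
    return min(user_positions, key=lambda uid: abs(user_positions[uid] - touch_x))
```

With users tracked at 1.0 m and 3.0 m, a touch at 1.2 m is attributed to the first user; a real system would add a distance threshold and handle ambiguity, but the principle is the same.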

    Designing for Cross-Device Interactions

    Get PDF
    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and in the availability of devices and data, many tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways. First, it conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work; these case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that supports interaction analysis of spatial measures and video recordings to facilitate such evaluations of cross-device work. 
    Overall, the work in this thesis advances the field of cross-device computing with a taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into, and tools for, the effective evaluation of cross-device systems.

    A Survey of Software Frameworks for Cluster-Based Large High-Resolution Displays

    Full text link

    Supporting Situation Awareness and Workspace Awareness in Co-located Collaborative Systems Involving Dynamic Data

    Get PDF
    Co-located technologies can provide digital functionality to support collaborative work for multiple users in the same physical space. For example, digital tabletop computers (large interactive tables that allow users to interact directly with the content) can provide the most up-to-date map information while users work together face-to-face. Combinations of interactive devices, large and small, can also be used together in a multi-device environment to support the collaborative work of large groups, allowing individuals to use different networked devices. In some co-located group work, integrating automation into the available technologies can provide benefits, such as automatically switching between data views or updating map information based on underlying changes in deployed field agents' locations. However, dynamic changes in system state can confuse users and lead to low situation awareness. Furthermore, given the large size of a tabletop system, or with multiple devices in use in the workspace, users may not be able to observe collaborators' actions due to physical separation between users. Consequently, workspace awareness (knowledge of collaborators' up-to-the-moment actions) can be difficult to maintain. As a result, users may become frustrated, and the collaboration may become inefficient or ineffective. Current tabletop applications involving dynamic data focus on interaction and information-sharing techniques for collaboration rather than on providing situation awareness support. Moreover, the situation awareness literature focuses primarily on single-user applications, whereas the workspace awareness literature focuses primarily on remote collaborative work. The aim of this dissertation was to support situation awareness of system-automated dynamic changes and workspace awareness of collaborators' actions. 
    The first study (Timeline Study) presented in this dissertation used tabletop systems to investigate support for situation awareness of automated changes and for workspace awareness; the second study (Callout Bubble Study) followed up by further investigating workspace awareness support in the context of multi-device classrooms. Digital tabletop computers are increasingly used in complex domains involving dynamic data, such as coastal surveillance and emergency response. Maintaining situation awareness of system-driven changes is crucial for quick and appropriate responses when problems arise. However, distractors in the environment, e.g., the large size of the table and conversations with team members, can make users miss changes and negatively impact their situation awareness. As interactive event timelines have been shown to improve response time and decision accuracy after interruptions, this dissertation adapts them to the context of collaborative tabletop applications to address the lack of situation awareness caused by dynamic changes. A user study was conducted to understand design factors related to this adaptation and their impact on situation awareness and workspace awareness. The Callout Bubble Study investigated workspace awareness support for multi-device classrooms, where students were co-located with their personal devices and connected through a large shared virtual canvas. This context was chosen because the environment supports work in large groups and because individual devices are increasingly prevalent in co-located collaborative workspaces. By studying another co-located context, this research also sought to combine the lessons learned and provide a set of more general design recommendations for co-located technologies. Existing work on workspace awareness focuses on remote collaboration; however, co-located users may not need all the information that is beneficial for remote work. 
    This study aimed to balance awareness and distraction, improving students' maintenance of workspace awareness while minimizing distraction from their learning. A Callout Bubble was designed to augment students' interactions in the shared online workspace, and a field study was conducted to understand how it affected students' collaboration behaviour. Overall, the research presented in this dissertation investigated information visualizations for supporting situation awareness and workspace awareness in co-located collaborative environments. The contributions include the design of an interactive event timeline and an investigation of how control placement (how many timelines and where they should be located) and feedback location (whether to display feedback to the group or to individuals when users interact with timelines) affect situation awareness. The empirical results revealed that individual timelines were more effective in facilitating the maintenance of situation awareness, and that the timelines were used mainly for perceiving new changes. Furthermore, this dissertation contributed the design of a workspace awareness cue, the Callout Bubble. The field study revealed that Callout Bubbles were effective in improving students' coordination and self-monitoring behaviours, which in turn reduced teachers' workloads. The dissertation provides overall design lessons for supporting awareness in co-located collaborative environments.
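The core idea of an interactive event timeline, as described above, is a per-user log of system-driven changes that can be queried for events missed since the user last reviewed it. A minimal sketch of that data structure follows; the class and method names are ours, not the dissertation's:

```python
from dataclasses import dataclass, field

@dataclass
class EventTimeline:
    """Per-user log of system-driven changes, queryable for missed events."""
    events: list = field(default_factory=list)   # (timestamp, description) pairs
    last_seen: float = 0.0                       # when the user last reviewed

    def record(self, timestamp: float, description: str) -> None:
        """Log a system-driven change (e.g., an automatic view switch)."""
        self.events.append((timestamp, description))

    def missed(self) -> list:
        """Changes that occurred since the user last reviewed the timeline."""
        return [e for e in self.events if e[0] > self.last_seen]

    def review(self, timestamp: float) -> None:
        """Mark the timeline as reviewed up to `timestamp`."""
        self.last_seen = timestamp
```

Giving each user an individual instance of such a timeline, rather than one shared per table, mirrors the study's finding that individual timelines better supported the maintenance of situation awareness.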