
    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers still examine these models with traditional 2D visualizations, one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models, but the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that combines stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, with multi-touch interaction, to allow for natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated by a highly interactive toolset that leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
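
    To make the particle-based dye tool concrete, here is a minimal Python sketch of how a dye-emitting tool might seed particles at a user-placed 3D position and advect them through a flow field with simple Euler integration. The velocity_at lookup, the emitter API, and all parameter values are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def velocity_at(pos, t):
        """Hypothetical lookup of the ocean-model velocity at a 3D position and time.
        A real system would interpolate the simulation's gridded flow field."""
        x, y, z = pos
        return np.array([np.sin(y + t), np.cos(x - t), 0.05 * np.sin(z)])

    class DyeEmitter:
        """Seeds 'dye' particles at a user-placed 3D position and advects them."""
        def __init__(self, position, rate=50):
            self.position = np.asarray(position, dtype=float)
            self.rate = rate                      # particles emitted per step
            self.particles = np.empty((0, 3))

        def step(self, t, dt=0.1):
            # Emit new particles at the emitter's position (with tiny jitter).
            new = self.position + 0.01 * np.random.randn(self.rate, 3)
            self.particles = np.vstack([self.particles, new])
            # Advect all particles one Euler step through the flow field.
            vel = np.array([velocity_at(p, t) for p in self.particles])
            self.particles = self.particles + dt * vel

    emitter = DyeEmitter(position=[0.0, 0.0, -10.0])
    for step in range(100):
        emitter.step(t=step * 0.1)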

    Experts evaluation of usability for digital solutions directed at older adults: a scoping review of reviews

    Background: It is important to standardize the evaluation and reporting procedures across usability studies to guide researchers, facilitate comparisons, and promote high-quality studies. A first step toward standardization is to have an overview of how expert-based usability evaluation studies are reported across the literature. Objectives: To describe and synthesize the reported procedures for usability evaluation by experts when conducting inspection-based usability assessments of digital solutions relevant to older adults. Methods: A scoping review of reviews was performed using a five-stage methodology to identify and describe relevant literature published between 2009 and 2020: i) identification of the research question; ii) identification of relevant studies; iii) selection of studies for review; iv) charting of data from the selected literature; and v) collation, summary, and reporting of results. The search was conducted in five electronic databases: PubMed, ACM Digital Library, IEEE, Scopus, and Web of Science. The articles that met the inclusion criteria were identified, and data were extracted for further analysis, including the evaluators, current usability inspection methods, and instruments to support usability inspection methods. Results: A total of 3958 articles were identified. After detailed screening, 12 reviews matched the eligibility criteria. Conclusion: Overall, we found a variety of unstandardized procedures and a lack of detail on some important aspects of the assessment, including a thorough description of the evaluators and of the instruments used to facilitate the inspection evaluation, such as heuristics checklists. These findings suggest the need for a consensus framework on experts' assessment of usability that informs researchers and allows standardization of procedures.

    Gsi demo: Multiuser gesture/speech interaction over digital tables by wrapping single user applications

    Most commercial software applications are designed for a single user working with a keyboard and mouse on an upright monitor. Our interest is in exploiting these systems so they work on a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application and with each other via rich speech and hand gestures; in earlier work we illustrated such multi-user gesture and speech interaction on a digital table for geospatial applications such as Google Earth, Warcraft III, and The Sims. In this paper, we describe our underlying architecture, GSI Demo. First, GSI Demo creates a run-time wrapper around existing single-user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration, instead of programming, to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do (one-finger gesture), you do (mouse drag)". Similarly, discrete speech commands can be trained by saying "Computer, when I say (layer bars), you do (keyboard and mouse macro)". The end result is that end users can rapidly transform single-user commercial applications into a multi-user, multimodal digital tabletop system.
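
    The following is a small Python sketch of the wrapper idea described above: recognized speech or gesture events from any user are looked up in a table of demonstrated bindings and replayed as a single serialized stream of keyboard/mouse actions. The class and method names (DemoWrapper, InputMacro, on_event) are hypothetical stand-ins, not the actual GSI Demo architecture.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class InputMacro:
        """A recorded sequence of keyboard/mouse actions, e.g. captured from a demonstration."""
        actions: List[str] = field(default_factory=list)  # e.g. ["mouse_down", "drag", "mouse_up"]

    class DemoWrapper:
        """Toy run-time wrapper: maps multi-user gesture/speech events onto the
        single stream of keyboard/mouse input the wrapped application expects."""
        def __init__(self, send_input: Callable[[str], None]):
            self.send_input = send_input          # injects one action into the app
            self.bindings: Dict[str, InputMacro] = {}

        def train(self, trigger: str, macro: InputMacro):
            # "Computer, when I say/do <trigger>, you do <macro>"
            self.bindings[trigger] = macro

        def on_event(self, user: str, trigger: str):
            macro = self.bindings.get(trigger)
            if macro is None:
                return
            for action in macro.actions:          # serialize into one input stream
                self.send_input(action)

    wrapper = DemoWrapper(send_input=print)
    wrapper.train("layer bars", InputMacro(["key:L", "key:B"]))
    wrapper.on_event(user="alice", trigger="layer bars")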

    User-Evaluated Gestures for Touchless Interactions from a Distance

    Very large displays are now commonplace, but interactions with them are limited and poorly understood. Touch-based interactions have recently received a great deal of attention due to the popularity and low cost of such displays; their direct extension, touchless interactions, has not. In this paper we evaluate gesture-based interactions with very large interactive screens to learn which gestures are suited to them and why. In other words, did 'Minority Report' get it right? We aim to discover to what extent these gesture interfaces are technology-driven and influenced by prototyped, commercial, and fictional interfaces. A qualitative evaluation of a gesture interface for wall-sized displays is presented in which subjects experienced the interface while completing several simple puzzle tasks. We found that simple gestures based on the act of pressing buttons were the most intuitive.

    Understanding Visual Feedback in Large-Display Touchless Interactions: An Exploratory Study

    Touchless interactions synthesize input and output from physically disconnected motor and display spaces without any haptic feedback. In the absence of haptic feedback, touchless interactions primarily rely on visual cues, but the properties of visual feedback remain unexplored. This paper systematically investigates how large-display touchless interactions are affected by (1) types of visual feedback (discrete, partial, and continuous); (2) alternative forms of touchless cursors; (3) approaches to visualizing target selection; and (4) persistent visual cues to support out-of-range and drag-and-drop gestures. Results suggest that continuous visual feedback was more effective than partial feedback; users disliked opaque cursors, and efficiency did not increase when cursors were larger than the display artifacts. Semantic visual feedback located at the display border improved users' efficiency in returning within the display range; however, echoing the path of movement in drag-and-drop operations decreased efficiency. Our findings contribute key ingredients for designing suitable visual feedback for large-display touchless environments. This work was partially supported by an IUPUI Research Support Funds Grant (RSFG).
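
    As a rough illustration of the continuous feedback and out-of-range cues discussed above, the Python sketch below maps normalized hand-tracker coordinates to a clamped on-screen cursor and reports which display border the hand has crossed. The coordinate convention and function signature are assumptions made for illustration, not the study's apparatus.

    from dataclasses import dataclass

    @dataclass
    class Display:
        width: int = 3840
        height: int = 1080

    def cursor_feedback(hand_x, hand_y, display: Display):
        """Return a continuously updated cursor position plus a semantic out-of-range cue.
        Inputs are normalized hand-tracker coordinates in [0, 1]; values outside that
        range mean the hand has left the interaction volume."""
        in_range = 0.0 <= hand_x <= 1.0 and 0.0 <= hand_y <= 1.0
        # Clamp so the cursor stays pinned to the display border when out of range.
        cx = min(max(hand_x, 0.0), 1.0) * display.width
        cy = min(max(hand_y, 0.0), 1.0) * display.height
        cue = None
        if not in_range:
            if hand_x < 0:
                cue = "left"
            elif hand_x > 1:
                cue = "right"
            elif hand_y < 0:
                cue = "top"
            else:
                cue = "bottom"
        return (cx, cy), cue

    print(cursor_feedback(1.2, 0.5, Display()))   # cursor pinned to right edge + "right" cue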

    Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-mediated Communication

    Recent trends in computer-mediated communication (CMC) have not only expanded instant messaging through the use of images and videos, but have also enriched traditional text messaging with so-called visual communication markers (VCMs) such as emoticons, emojis, and stickers. VCMs can prevent the loss of the subtle emotional content in CMC that is otherwise delivered by nonverbal cues conveying affective and emotional information. However, as the number of VCMs in the selection set grows, the problem of VCM entry needs to be addressed. Additionally, conventional ways of accessing VCMs continue to rely on input methods that are not directly and intimately tied to expressive nonverbal cues. One well-studied form of expressive nonverbal cue is the hand gesture. In this work, I propose a user-defined hand gesture set that is highly representative of VCMs, and a two-stage hand gesture recognition system (trajectory-based, shape-based) that distinguishes the user-defined hand gestures. While the trajectory-based recognizer distinguishes gestures based on the movements of the hands, the shape-based recognizer classifies gestures based on the shapes of the hands. The goal of this research is to allow users to be more immersed, natural, and quick in generating VCMs through gestures. The idea is for users to retain the lower-bandwidth online communication of text messaging, with its convenient and discreet properties, while also incorporating the advantages of the higher-bandwidth online communication of video messaging by naturally gesturing their emotions, which are then closely mapped to VCMs. Results show that user-dependent recognition accuracy is approximately 86% and user-independent accuracy is about 82%.
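
    A toy Python sketch of such a two-stage pipeline is given below: the first stage matches the hand's movement trajectory against resampled templates, and the second stage classifies the hand shape with a nearest-centroid rule. The templates, features, and matching rules here are placeholders, not the recognizers used in this work.

    import numpy as np

    def resample(points, n=32):
        """Resample a 2D trajectory to n evenly spaced points (simplified)."""
        points = np.asarray(points, dtype=float)
        idx = np.linspace(0, len(points) - 1, n)
        return np.array([points[int(round(i))] for i in idx])

    def trajectory_stage(path, templates):
        """Stage 1: nearest-template match over hand-movement trajectories."""
        p = resample(path)
        scores = {name: np.linalg.norm(p - resample(t)) for name, t in templates.items()}
        return min(scores, key=scores.get)

    def shape_stage(hand_shape_features, shape_classes):
        """Stage 2: toy hand-shape classifier (nearest centroid over features)."""
        f = np.asarray(hand_shape_features, dtype=float)
        return min(shape_classes, key=lambda c: np.linalg.norm(f - shape_classes[c]))

    def recognize(path, hand_shape_features, templates, shape_classes):
        motion = trajectory_stage(path, templates)
        shape = shape_stage(hand_shape_features, shape_classes)
        return motion, shape   # e.g. map ("wave", "open_palm") to a waving-hand emoji

    templates = {"wave": [(0, 0), (1, 1), (2, 0), (3, 1)], "heart": [(0, 0), (1, 2), (2, 0), (1, -1)]}
    shape_classes = {"open_palm": np.array([1.0, 0.9]), "fist": np.array([0.1, 0.2])}
    print(recognize([(0, 0), (1, 1), (2, 0), (3, 1)], [0.95, 0.85], templates, shape_classes))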

    A comparison of surface and motion user-defined gestures for mobile augmented reality.

    Augmented Reality (AR) technology permits interaction between the virtual and physical worlds. Recent advancements in mobile devices allow for a better mobile AR experience, which in turn improves the user adoption rate and increases the number of mobile AR applications across a wide range of disciplines. Nevertheless, the majority of the mobile AR applications we surveyed adopt surface gestures as the default interaction method for the AR experience and do not utilise the three-dimensional (3D) spatial interaction that AR interfaces support. This research investigates two types of gestures for interacting with mobile AR applications: surface gestures, which have been deployed by mainstream applications, and motion gestures, which take advantage of the 3D movement of the handheld device. Our goal is to find out whether there exists a gesture-based interaction suitable for handheld devices that can utilise the 3D interaction space of mobile AR applications. We conducted two user studies: an elicitation study and a validation study. In the elicitation study, we elicited two sets of gestures, surface and motion, for mobile AR applications. We recruited twenty-one participants to perform twelve common mobile AR tasks, which yielded a total of 504 gestures. We classified and illustrated the two sets of gestures and compared them in terms of goodness, ease of use, and engagement. The elicitation process yielded two separate sets of user-defined gestures: legacy surface gestures, which were familiar and easy for the participants to use, and motion gestures, which were found to be more engaging. From the design patterns of the motion gestures, we propose a novel interaction technique for mobile AR called TMR (Touch-Move-Release). To validate the elicited gestures in an actual application, we conducted a second study: we developed a mobile AR game similar to Pokémon GO and implemented the selected gestures from the elicitation study. The study was conducted with ten participants, and we found that the motion gestures provided more engagement and a better game experience, whereas surface gestures were more accurate and easier to use. We discuss the implications of our findings and give design recommendations on the usage of the elicited gestures, which can serve as a starting point for designing better gesture-based interaction techniques for different tasks in various mobile AR applications.
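
    To illustrate the Touch-Move-Release pattern, the Python sketch below models it as a small state machine: touching the screen grabs a virtual object, moving the handheld device translates it in 3D, and releasing the touch drops it. The event names and pose representation are assumptions made for this sketch, not the paper's implementation.

    from enum import Enum, auto

    class TMRState(Enum):
        IDLE = auto()
        HOLDING = auto()   # finger down: object "grabbed" on screen

    class TouchMoveRelease:
        """Toy Touch-Move-Release controller: touch grabs a virtual object,
        moving the handheld device moves it in 3D, releasing the touch drops it."""
        def __init__(self):
            self.state = TMRState.IDLE
            self.grab_pose = None
            self.object_offset = (0.0, 0.0, 0.0)

        def on_touch_down(self, device_pose):
            self.state = TMRState.HOLDING
            self.grab_pose = device_pose            # remember where the grab started

        def on_device_move(self, device_pose):
            if self.state is not TMRState.HOLDING:
                return
            # Translate the object by the device's displacement since the grab.
            self.object_offset = tuple(c - g for c, g in zip(device_pose, self.grab_pose))

        def on_touch_up(self):
            self.state = TMRState.IDLE               # release: object stays where it was moved
            return self.object_offset

    tmr = TouchMoveRelease()
    tmr.on_touch_down((0.0, 0.0, 0.0))
    tmr.on_device_move((0.1, 0.0, 0.3))
    print(tmr.on_touch_up())                        # (0.1, 0.0, 0.3)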

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and industrial settings. The rising demand for digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces, and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.