
    Multi-party Interaction in a Virtual Meeting Room

    This paper presents an overview of the work carried out by the HMI group at the University of Twente in the domain of multi-party interaction. The process from automatic observation of behavioral aspects, through interpretation, to recognized behavior is discussed for various modalities and levels. We show how a virtual meeting room can be used both for the visualization and evaluation of behavioral models and as a research tool for studying the effect of modified stimuli on the perception of behavior.

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information is required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.

    Eye-movements in real curve driving: pursuit-like optokinesis in vehicle frame of reference, stability in an allocentric reference coordinate system

    Looking at the future path and/or the tangent point (TP) has been identified as car drivers' gaze target in many studies on curve driving. Yet little is known in detail about these "fixations to the road". We quantitatively analyze gaze behavior at the level of individual fixations in real on-road data. We find that while gaze tracks the TP area, this pattern consists of optokinetic movements: smooth pursuit interspersed with fast resetting saccades. Gaze is not "fixed" to the TP. We also relate eye movements to a reference direction fixed to a point on the vehicle's trajectory (the curve exit), showing that fixations lose their pursuit-like character in this rotating system. The findings are discussed in terms of steering models and neural levels of oculomotor control.
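    As a rough illustration of this frame-of-reference effect, the sketch below simulates gaze azimuth in both frames. This is not the paper's analysis code; the signal shapes and the 12 deg/s yaw rate are assumptions for illustration:

    ```python
    # Hypothetical sketch: re-express gaze azimuth recorded in the rotating
    # vehicle frame relative to a reference direction anchored to a fixed
    # point on the trajectory (e.g. the curve exit).
    import numpy as np

    t = np.linspace(0.0, 5.0, 1000)        # seconds spent in the curve
    yaw_rate = np.deg2rad(12.0)            # assumed vehicle rotation, rad/s
    vehicle_yaw = yaw_rate * t

    # Vehicle-frame gaze: slow pursuit drift opposing the rotation, with a
    # fast resetting saccade every 0.5 s (optokinetic nystagmus near the TP).
    gaze_in_vehicle = -yaw_rate * (t % 0.5)

    # Trajectory-anchored frame: the drift cancels, leaving a staircase of
    # stable fixations separated by saccadic jumps, i.e. the pursuit-like
    # character disappears once the rotating reference is factored out.
    gaze_anchored = gaze_in_vehicle + vehicle_yaw
    ```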

    Scan path visualization and comparison using visual aggregation techniques

    We demonstrate the use of different visual aggregation techniques to obtain uncluttered visual representations of scanpaths. First, fixation points are clustered using the mean-shift algorithm. Second, saccades are aggregated using the Attribute-Driven Edge Bundling (ADEB) algorithm, which can use a saccade's direction, onset timestamp, magnitude, or a combination of these as the edge compatibility criterion. Flow direction maps, computed during bundling, can be visualized separately (vertical or horizontal components) or as a single image using the Oriented Line Integral Convolution (OLIC) algorithm. Furthermore, the cosine similarity between two flow direction maps provides a similarity map for comparing two scanpaths. Finally, we provide examples on basic patterns, a visual search task, and art perception. Used together, these techniques provide valuable insights into scanpath exploration and informative illustrations of eye movement data.
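    A minimal sketch of the first and last of these steps, assuming scikit-learn's MeanShift for fixation clustering and a per-pixel cosine similarity between flow direction maps (the bandwidth and the synthetic fixations are illustrative, not values from the paper):

    ```python
    # Sketch: cluster fixations with mean-shift, then compare two flow
    # direction maps via per-pixel cosine similarity. Illustrative only.
    import numpy as np
    from sklearn.cluster import MeanShift

    # Synthetic fixation points (x, y) in screen coordinates.
    fixations = np.random.default_rng(0).uniform(0, 1024, size=(200, 2))

    # Aggregate fixations; the bandwidth controls cluster granularity.
    ms = MeanShift(bandwidth=80.0)
    labels = ms.fit_predict(fixations)
    centers = ms.cluster_centers_          # one point per fixation cluster

    def flow_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Per-pixel cosine similarity of two H x W x 2 flow direction
        maps, yielding a similarity map for comparing two scanpaths."""
        dot = (a * b).sum(axis=-1)
        norms = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
        return dot / np.maximum(norms, 1e-9)
    ```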

    StreamingHub: Interactive Stream Analysis Workflows

    Reusable data/code and reproducible analyses are foundational to quality research. This aspect, however, is often overlooked when designing interactive stream analysis workflows for time-series data (e.g., eye-tracking data). A mechanism to transmit informative metadata alongside data would allow such workflows to intelligently consume data, propagate metadata to downstream tasks, and thereby auto-generate reusable, reproducible analytic outputs with zero supervision. Moreover, a visual programming interface to design, develop, and execute such workflows allows rapid prototyping for interdisciplinary research. Capitalizing on these ideas, we propose StreamingHub, a framework for building metadata-propagating, interactive stream analysis workflows using visual programming. We conduct two case studies to evaluate the generalizability of our framework, and use two heuristics to evaluate the workflows' computational fluidity and data growth. Results show that our framework generalizes to multiple tasks with minimal performance overhead.
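    The metadata-propagation idea might be sketched as follows; the Sample type and the task functions are hypothetical illustrations, not StreamingHub's actual API:

    ```python
    # Hypothetical sketch of metadata propagating through a stream workflow.
    from dataclasses import dataclass, field
    from typing import Any, Dict, Iterator

    @dataclass
    class Sample:
        data: Dict[str, float]                              # e.g. gaze x/y
        meta: Dict[str, Any] = field(default_factory=dict)  # units, device, rate

    def annotate(stream: Iterator[Sample], **meta: Any) -> Iterator[Sample]:
        """Attach metadata once; every downstream task sees it per sample."""
        for s in stream:
            s.meta.update(meta)
            yield s

    def downsample(stream: Iterator[Sample], factor: int) -> Iterator[Sample]:
        """A downstream task that consumes metadata and propagates it, updated."""
        for i, s in enumerate(stream):
            if i % factor == 0:
                s.meta["rate_hz"] = s.meta.get("rate_hz", 0) / factor
                yield s

    # Tasks compose into a pipeline, and the metadata travels with the data:
    # stream = downsample(annotate(raw_gaze, device="tracker", rate_hz=60), 2)
    ```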

    From Industry to Practice: Can Users Tackle Domain Tasks with Augmented Reality?

    Augmented Reality (AR) is a cutting-edge interactive technology. While Virtual Reality (VR) is based on completely virtual, immersive environments, AR superimposes virtual objects onto the real world. The value of AR has been demonstrated in numerous industrial application areas thanks to its capability of providing interactive interfaces to visualized digital content. AR can provide functional tools that support users in domain-related tasks, in particular facilitating data visualization and interaction by jointly augmenting physical space and user perception. Making effective use of the advantages of AR, especially its ability to augment human vision to help users perform different domain-related tasks, is the central part of my PhD research.

    Industrial process tomography (IPT), a non-intrusive and commonly used imaging technique, has been effectively harnessed in many manufacturing components for inspection, monitoring, product quality control, and safety. IPT underpins and facilitates the extraction of qualitative and quantitative data about the related industrial processes, which is usually visualized in various ways so that users can understand its nature, measure critical process characteristics, and implement process control in a complete feedback network. The adoption of AR in IPT and its related fields is still scarce, leaving a gap between AR techniques and industrial applications. This thesis builds a bridge between AR practitioners and IPT users in four stages. The first is a need-finding study of how IPT users can harness AR techniques. The second is a conceptual AR framework, together with a mobile AR application implemented on an optical see-through (OST) head-mounted display (HMD). The third is a complete approach for IPT users to interact with tomographic visualizations, together with the accompanying user study.

    Building on the technologies shared from industry, the fourth stage proposes and examines an AR approach for visual search tasks that provides visual hints, audio hints, and gaze-assisted instant post-task feedback. The target case was a book-searching task, in which we explored the effect of the hints and the feedback under two hypotheses: that both visual and audio hints positively affect AR search tasks, with their combination outperforming either alone; and that instant post-task feedback positively affects AR search tasks. The proof of concept was demonstrated by an AR app on an HMD with a two-stage user evaluation. The first stage was a pilot study (n=8) that identified the benefit of the visual hint for search task performance. The second was a comprehensive user study (n=96) consisting of two sub-studies, Study I (n=48) and Study II (n=48). Following quantitative and qualitative analysis, our results partially verified the first hypothesis and fully verified the second, allowing us to conclude that the combination of visual and audio hints conditionally improves AR search task efficiency when coupled with task feedback.

    Assisted Viewpoint Interaction for 3D Visualization

    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To control the viewpoint effectively, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look for relevant information, while mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to one of two extremes: simplified controls or direct presentation. This research instead promotes hybrid interfaces that offer supportive yet unscripted exploration of a virtual environment.

    Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves excessively intrusive, leading viewers to occasionally struggle with the automation for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute these shortcomings and enhance the effectiveness of the initial attentive navigation design.

    The implications of this research generalize to the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation.
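    The hybrid control at the core of attentive navigation can be pictured as a per-frame blend between the viewer's intended view direction and an automation-suggested target. The slerp-based weighting below is a hypothetical sketch, not the dissertation's formulation; a coordination protocol would modulate the gain, for example lowering it while the viewer actively steers:

    ```python
    # Hypothetical sketch: blend the user's view direction with an
    # automation-suggested target on the view sphere (slerp).
    import numpy as np

    def blend_view(user_dir: np.ndarray, target_dir: np.ndarray,
                   gain: float) -> np.ndarray:
        """gain=0 leaves the viewer in full control; gain=1 fully
        redirects attention to the automation's target."""
        u = user_dir / np.linalg.norm(user_dir)
        v = target_dir / np.linalg.norm(target_dir)
        dot = np.clip(u @ v, -1.0, 1.0)
        theta = np.arccos(dot)             # angle between the two directions
        if theta < 1e-6:                   # already aligned
            return u
        w_user = np.sin((1.0 - gain) * theta) / np.sin(theta)
        w_auto = np.sin(gain * theta) / np.sin(theta)
        return w_user * u + w_auto * v     # spherical interpolation
    ```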

    Big Archives and Small Collections: Remarks on the Archival Mode in Contemporary Australian Art and Visual Culture


    Jointly structuring triadic spaces of meaning and action: book sharing from 3 months on

    This study explores the emergence of triadic interactions through the example of book sharing. As part of a naturalistic study, 10 infants were visited in their homes from 3 to 12 months of age. We report that (1) book sharing as a form of infant-caregiver-object interaction occurred from as early as 3 months. Using qualitative micro-level video analysis, adapting methodologies from conversation and interaction analysis, we demonstrate that caregivers and infants practiced book sharing in a highly coordinated way: caregivers carved out interaction units and shaped actions into action arcs, while infants actively participated and coordinated their attention between mother and object from the beginning. We also (2) sketch a developmental trajectory of book sharing over the first year, showing that the quality and dynamics of book sharing interactions changed considerably as the ecological situation was transformed in parallel with the infants' developing attention and motor skills. Social book sharing interactions reached an early peak at 6 months, with infants becoming more active in coordinating attention between caregiver and book. From 7 to 9 months, infants shifted their interest largely to solitary object exploration, in parallel with newly emerging postural and object manipulation skills, disrupting the social coordination and the cultural frame of book sharing. From 9 to 12 months, social book interactions resurfaced as infants began to effectively integrate object actions into the socially shared activity. In conclusion, to fully understand the development and qualities of triadic cultural activities such as book sharing, we need to look especially at the hitherto overlooked early period from 4 to 6 months and investigate how shared spaces of meaning and action are structured jointly in and through interaction, creating the substrate for continuing cooperation and cultural learning.