    Assisted Viewpoint Interaction for 3D Visualization

    Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To effectively control the viewpoint, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted exploration of a virtual environment. Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that this technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves to be excessively intrusive, leading viewers to occasionally struggle for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute the shortcomings and enhance the effectiveness of the initial attentive navigation design. The implications of this research generalize to inform the broader requirements for human-automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences.

    An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

    Visual Analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [70]. The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state of the art. Our analysis has uncovered key patterns of design hinging on human- and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

    The Effects of Mixed-Initiative Visualization Systems on Exploratory Data Analysis

    The primary purpose of information visualization is to act as a window between a user and the data. Historically, this has been accomplished via a single-agent framework: the only decision-maker in the relationship between visualization system and analyst is the analyst herself. Yet this framework arose not from first principles, but from necessity. Before this decade, computers were limited in their decision-making capabilities, especially in the face of large, complex datasets and visualization systems. This paper aims to present the design and evaluation of a mixed-initiative system that aids the user in handling large, complex datasets and dense visualization systems. We demonstrate this system with a between-groups, two-by-two study measuring the effects of this mixed-initiative system on user interactions and system usability. We find little to no evidence that the adaptive system designed here has a statistically significant impact on user interactions or system usability. We discuss the implications of this lack of evidence and examine how the data suggests a promising avenue for further research.

    The Effects of Mixed-Initiative Visualization Systems on Exploratory Data Analysis

    The main purpose of information visualization is to act as a window between a user and data. Historically, this has been accomplished via a single-agent framework: the only decision-maker in the relationship between visualization system and analyst is the analyst herself. Yet this framework arose not from first principles, but from necessity: prior to this decade, computers were limited in their decision-making capabilities, especially in the face of large, complex datasets and visualization systems. This thesis aims to present the design and evaluation of a mixed-initiative system that aids the user in handling large, complex datasets and dense visualization systems. We demonstrate this system with a between-groups, two-by-two study measuring the effects of this mixed-initiative system on user interactions and system usability. We find little to no evidence that the adaptive system designed here has a statistically significant effect on user interactions or system usability. We discuss the implications of this lack of evidence, and examine how the data suggests a promising avenue of further research.

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet, the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with a mixed-initiative approach combining human preferences and automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions on how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive states of a human are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics. Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia.

    Human-Computer Collaboration for Visual Analytics: an Agent-based Framework

    The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, each is specific to a particular aspect of the visual analytic process. Furthermore, with an ever-expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent-based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed-initiative, guided, and collaborative analysis. Furthermore, it will allow us to characterize analysts, visual analytic settings, and guidance from the lenses of human agents, environments, and artificial agents, respectively.

    What you see is what you feel: on the simulation of touch in graphical user interfaces

    This study introduces a novel method of simulating touch with merely visual means. Interactive animations are used to create an optical illusion that evokes haptic percepts like stickiness, stiffness and mass within a standard graphical user interface. The technique, called optically simulated haptic feedback, exploits the domination of the visual over the haptic modality and the general human tendency to integrate the various senses. The study began with an aspiration to increase the sensorial qualities of the graphical user interface. With the introduction of the graphical user interface – and in particular the desktop metaphor – computers have become accessible to almost anyone; all over the world, people from various cultures use the same icons, folders, buttons and trashcans. However, from a sensorial point of view this computing paradigm is still extremely limited. Touch can play a powerful role in communication. It can offer an immediacy and intimacy unparalleled by words or images. Although few doubt this intrinsic value of touch perception in everyday life, examples in modern technology where human-machine communication utilizes the tactile and kinesthetic senses as additional channels of information flow are scarce. Hence, it has often been suggested that improvements in the sensorial qualities of computers could lead to more natural interfaces. Various researchers have been creating scenarios and technologies that should enrich the sensorial qualities of our digital environment. Some have developed mechanical force feedback devices that enable people to experience haptics while interacting with a digital display. Others have suggested that the computer should ‘disappear’ into the environment and proposed tangible objects as a means to connect the digital and the physical environment. While the scenarios of force feedback, tangible interactions and the disappearing computer are maturing, millions of people are still working with a desktop computer interface every day. In spite of its obvious drawbacks, the desktop computing model has penetrated deeply into our society and cannot be expected to disappear overnight. Radically different computing paradigms will require the development of radically different hardware. This takes time, and it is as yet unclear whether, and when, other computing paradigms will replace the current desktop computing setup. It is for that reason that we pursued another approach towards physical computing. Inspired by Renaissance painters, who centuries ago already invented illusionary techniques like perspective and trompe l'oeil to increase the presence of their paintings, we aim to improve the physicality of the graphical user interface without resorting to special hardware. Optically simulated haptic feedback, described in this thesis, has a lot in common with mechanical force-feedback systems, except that in mechanical force-feedback systems the location of the cursor is manipulated as a result of the force sent to the haptic device (force-feedback mouse, trackball, etc.), whereas in our system the cursor location is directly manipulated, resulting in a purely visual force feedback. By applying tiny displacements upon the cursor's movement, tactile sensations like stickiness, touch, or mass can be simulated (a minimal code sketch of this idea follows this abstract). In chapter 2 we suggest that the active cursor technique can be applied to create richer interactions without the need for special hardware.
The cursor channel is transformed from an input-only channel into an input/output channel. The active cursor displacements can be used to create various (dynamic) slopes as well as textures and material properties, which can provide the user with feedback while navigating the on-screen environment. In chapter 3 the perceptual illusion of touch, resulting from the domination of the visual over the haptic modality, is described in the larger context of prior research and tested experimentally. Using both the active cursor technique and a mechanical force feedback device, we generated bump and hole structures. In a controlled experiment the perception of the slopes was measured, comparing the optical and the mechanical simulation. Results show that people can recognize optically simulated bump and hole structures, and that active cursor displacements influence the haptic perception of bumps and holes. Depending on the simulated strength of the force, optically simulated haptic feedback can take precedence over mechanically simulated haptic feedback, but also the other way around. When optically simulated and mechanically simulated haptic feedback counteract each other, however, the weight attributed to each source of haptic information differs between users. It is concluded that active cursor displacements can be used to optically simulate the operation of mechanical force feedback devices. An obvious application of optically simulated haptic feedback in graphical user interfaces is to assist the user in pointing at icons and objects on the screen. Given the pervasiveness of pointing in graphical interfaces, every small improvement in a target-acquisition task represents a substantial improvement in usability. Can active cursor displacements be applied to help users reach their goal? In chapter 4 we test the usability of optically simulated haptic feedback in a pointing task, again in comparison with the force feedback generated by a mechanical device. In a controlled Fitts'-law type experiment, subjects were asked to point and click at targets of different sizes and distances. Results show that rendering hole-type structures underneath the targets improves the effectiveness, efficiency and satisfaction of the target-acquisition task. Optically simulated haptic feedback results in lower error rates, more satisfaction, and a higher index of performance, which can be attributed to the shorter movement times realized for the smaller targets. For larger targets, optically simulated haptic feedback resulted in movement times comparable to mechanically simulated haptic feedback. Since current graphical interfaces are not designed with tactility in mind, the development of novel interaction styles should also be an important research path. Before optically simulated haptic feedback can be fully brought into play in more complex interaction styles, designers and researchers need to experiment further with the technique. In chapter 5 we describe a software prototyping toolkit, called PowerCursor, which enables designers to create interaction styles using optically simulated haptic feedback without having to do elaborate programming. The software engine consists of a set of ready-made force field objects – holes, hills, ramps, rough and slick objects, walls, whirls, and more – that can be added to any Flash project, as well as force behaviours that can be added to custom-made shapes and objects. These basic building blocks can be combined to create more complex and dynamic force objects.
This setup should allow the users of the toolkit to creatively design their own interaction styles with optically simulated haptic feedback. The toolkit is implemented in Adobe Flash and can be downloaded at www.powercursor.com. Furthermore, in chapter 5 we present a preliminary framework of the expected applicability of optically simulated haptic feedback. Illustrated with examples that have been created with the beta version of the PowerCursor toolkit so far, we discuss some of the ideas for novel interaction styles. Besides being useful in assisting the user while navigating, optically simulated haptic feedback might be applied to create so-called mixed-initiative interfaces – one can, for instance, think of an installation wizard that guides the cursor towards the recommended next step. Furthermore, since optically simulated haptic feedback can be used to communicate material properties of textures or 3D objects, it can be applied to create aesthetically pleasing interactions – which, with the migration of computers into domains other than the office environment, are becoming more relevant. Finally, we discuss the opportunities for applications outside the desktop computer model. We discuss how, in principle, optically simulated haptic feedback can play a role in any graphical interface where the input and output channels are decoupled. In chapter 6 we draw conclusions and discuss future directions. We conclude that optically simulated haptic feedback can increase the physicality and quality of our current graphical user interfaces without resorting to specialized hardware. Users are able to recognize haptic structures simulated by applying active cursor displacements upon the user's mouse movements. Our technique of simulating haptic feedback optically opens up an additional communication channel with the user that can enhance the usability of the graphical interface. However, the active cursor technique is not expected to replace mechanical haptic feedback altogether, since it can be applied only in combination with a visual display and thus will not work for visually impaired people. Rather, we expect that the ability to employ tactile interaction styles in a standard graphical user interface could catalyze the development of novel physical interaction styles and, in the long term, might instigate the acceptance of haptic devices. With this research we hope to have contributed to a more sensorial and richer graphical user interface. Moreover, we have aimed to increase our awareness and understanding of media technology and simulations in general. Therefore, our scientific research results are deliberately presented within a social-cultural context that reflects upon the dominance of the visual modality in our society and the ever-increasing role of media and simulations in people's everyday lives.
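
    The active cursor mechanism described above is compact enough to sketch in code. The following is a minimal, hypothetical Python illustration, not the actual PowerCursor/Flash implementation; the function names, the quadratic "bowl" profile, and the radius and depth values are assumptions chosen for the example. It shows how a "hole" force field rendered underneath a target could nudge the pointer toward the target's centre on every mouse-move event, so that the pointer appears to slide in and to resist leaving.

        import math

        def hole_displacement(x, y, cx, cy, radius=40.0, depth=0.6):
            """Offset (dx, dy) added to the pointer for one mouse-move event.

            Assumed model: a shallow quadratic bowl centred on (cx, cy).
            Inside the radius the pointer is nudged towards the centre, so it
            falls in easily and has to climb back out, which the eye reads as
            depth or stickiness.
            """
            dx, dy = cx - x, cy - y
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist >= radius:
                return 0.0, 0.0                   # outside the force field: no effect
            pull = depth * (dist / radius)        # slope of the bowl: zero at the centre
            return pull * dx / dist, pull * dy / dist

        def displaced_cursor(raw_x, raw_y, fields):
            """Combine the raw mouse position with every active force field."""
            off_x = off_y = 0.0
            for field in fields:
                fx, fy = field(raw_x, raw_y)
                off_x += fx
                off_y += fy
            return raw_x + off_x, raw_y + off_y

        # Example: a hole rendered underneath a button centred at (200, 150).
        hole = lambda x, y: hole_displacement(x, y, 200.0, 150.0)
        print(displaced_cursor(120.0, 150.0, [hole]))   # outside the field: unchanged
        print(displaced_cursor(180.0, 150.0, [hole]))   # inside: nudged towards the centre

    In a real interface the raw position would come from the windowing system's mouse events and the displaced position would be used to draw the cursor; a larger assumed depth makes the hole feel deeper, at the price of moving the pointer further from where the hand put it.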