
    Combining Multiple View Components for Exploratory Visualization

    The analysis of structured complex data, such as clustered graph-based datasets, usually draws on a variety of visual representation techniques and formats. Most currently available tools and approaches to exploratory visualization are built on integrated schemes that simultaneously display multiple aspects of the objects and processes under study. Such schemes typically partition the screen space into multiple views and adopt interaction patterns that focus on data-driven items. Widely known concepts such as overview-plus-detail and focus-plus-context are ambiguous when interpreted in technical terms. Their implementation by UI design practitioners therefore calls for a review and classification of the basic approaches to the visual composition of graphical representation modules. We propose a description of the basic components of view and focus, and an overview of their multiple combinations.
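
    The view compositions the abstract alludes to can be modeled minimally. Below is a hedged Python sketch of an overview-plus-detail partition of screen space; the `View` and `Composition` names and the normalized-coordinate layout are our own illustration, not the paper's model:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class View:
        """A single graphical representation module occupying a screen region."""
        name: str    # e.g. "overview", "detail"
        x: float     # normalized top-left corner (0..1)
        y: float
        w: float     # normalized width/height (0..1)
        h: float

    @dataclass
    class Composition:
        """A partition of screen space into coordinated views."""
        views: List[View] = field(default_factory=list)

    def overview_plus_detail(split: float = 0.3) -> Composition:
        """Side-by-side partition: a small overview pane next to a large detail pane."""
        return Composition(views=[
            View("overview", 0.0, 0.0, split, 1.0),
            View("detail", split, 0.0, 1.0 - split, 1.0),
        ])
    ```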

    Child-display interaction: exploring avatar-based touchless gestural interfaces

    During the last decade, touchless gestural interfaces have been widely studied as one of the most promising interaction paradigms in the context of pervasive displays. In particular, avatars and silhouettes have proven effective in communicating the touchless gestural interactivity supported by displays. In this paper, we take a child-display interaction perspective by exploring avatar-based touchless gestural interfaces. We believe that large displays offer an opportunity to stimulate children's experience and engagement, for instance when learning about art, while also raising a number of challenges. The purpose of this study is twofold: 1) identifying the relevant aspects of children's interactions with a large display based on a touchless avatar-based interface, and 2) understanding the impact of the interaction on recall of the content. We engaged 107 children over a period of five days during a public event at the university premises. The collected data were analyzed, and the outcomes distilled into three lessons learnt to inform future design.

    SymbolDesign: A User-centered Method to Design Pen-based Interfaces and Extend the Functionality of Pointer Input Devices

    A method called "SymbolDesign" is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily designed and easily used computer input mechanism for people without physical limitations and, with some modifications, has the potential to become a computer access tool for people with severe paralysis.
    National Science Foundation (IIS-0093367, IIS-0308213, IIS-0329009, EIA-0202067)
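
    As an illustration of the stroke-to-command idea, here is a minimal Python sketch, not the paper's implementation: it resamples a single-stroke pattern into a fixed-length feature vector and trains a small neural-network classifier. The paper's system additionally grows the network with task complexity, which this sketch omits; `resample_stroke` and the fixed hidden-layer size are our assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def resample_stroke(points: np.ndarray, n: int = 32) -> np.ndarray:
        """Resample a (k x 2) stroke to n equidistant points, then normalize
        for translation and scale so similar symbols map to similar vectors."""
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
        t = np.linspace(0.0, d[-1], n)
        pts = np.c_[np.interp(t, d, points[:, 0]), np.interp(t, d, points[:, 1])]
        pts -= pts.mean(axis=0)
        scale = np.abs(pts).max()
        return (pts / scale if scale > 0 else pts).ravel()

    def train_classifier(strokes, commands):
        """strokes: list of (k x 2) arrays; commands: parallel list of labels
        such as "back" or "scroll-down"."""
        X = np.stack([resample_stroke(s) for s in strokes])
        # Fixed-size network for simplicity; SymbolDesign adapts the architecture.
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        return clf.fit(X, commands)
    ```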

    Gaze modulated disambiguation technique for gesture control in 3D virtual objects selection

    © 2017 IEEE. Multimodal inputs provide more natural ways to interact with a virtual 3D environment. An emerging technique that integrates gaze-modulated pointing with mid-air gesture control enables fast target acquisition and rich control expressions. The performance of this technique depends on eye-tracking accuracy, which is not yet comparable with traditional pointing techniques (e.g., the mouse). This causes problems when fine-grained interactions are required, such as selection in a dense virtual scene where proximity and occlusion are prone to occur. This paper proposes a coarse-to-fine solution that compensates for the degradation introduced by eye-tracking inaccuracy, using a gaze cone to detect ambiguity and then a gaze probe for decluttering. It was tested in a comparative experiment involving 12 participants and 3,240 runs. The results show that the proposed technique enhanced selection accuracy and user experience, though its efficiency can still be improved. This study contributes a robust multimodal interface design supported by both eye tracking and mid-air gesture control.
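
    A rough reading of the coarse-to-fine idea can be sketched in Python. This is our interpretation, not the authors' code; the `half_angle_deg` value and the placeholder fine-grained step are assumptions:

    ```python
    import numpy as np

    def cone_candidates(eye, gaze_dir, targets, half_angle_deg=3.0):
        """Indices of targets inside a cone around the gaze ray.
        eye: (3,) position; gaze_dir: (3,) unit vector; targets: (n, 3) centres."""
        v = targets - eye
        v = v / np.linalg.norm(v, axis=1, keepdims=True)
        return np.where(v @ gaze_dir >= np.cos(np.radians(half_angle_deg)))[0]

    def select(eye, gaze_dir, targets):
        """Coarse-to-fine selection: a unique hit is selected directly; multiple
        hits inside the cone signal ambiguity and trigger a refinement step."""
        hits = cone_candidates(eye, gaze_dir, targets)
        if len(hits) == 1:
            return int(hits[0])
        if len(hits) > 1:
            return refine(hits)  # stand-in for the paper's "gaze probe"
        return None

    def refine(candidates):
        # Placeholder: a real system would declutter the candidate objects
        # and let a mid-air gesture pick among them.
        return int(candidates[0])
    ```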

    Sistemas de Recomendação para Grupos (Group Recommender Systems)

    This work surveys the concepts and the state of the art of the main approaches and methodologies involved in the development of recommender systems for individuals and for groups. The survey is a preliminary study of great importance as a starting point for future work aimed at developing new approaches in the area of group recommendation.
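
    For context, two classic aggregation strategies from the group-recommendation literature can be sketched in a few lines of Python; this is a generic illustration, not tied to this survey's specific contributions:

    ```python
    import numpy as np

    def aggregate_group(ratings: np.ndarray, strategy: str = "average") -> np.ndarray:
        """Combine individual predicted ratings (users x items) into group scores.
        "average" maximizes overall satisfaction; "least_misery" avoids items
        that any single member strongly dislikes."""
        if strategy == "average":
            return ratings.mean(axis=0)
        if strategy == "least_misery":
            return ratings.min(axis=0)
        raise ValueError(strategy)

    # e.g. predicted ratings for 3 users x 4 items
    R = np.array([[5, 3, 1, 4],
                  [4, 4, 2, 2],
                  [1, 5, 3, 4]])
    print(aggregate_group(R, "average"))       # [3.33 4.   2.   3.33]
    print(aggregate_group(R, "least_misery"))  # [1 3 1 2]
    ```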

    A resource-adaptive mobile navigation system


    A Multi-scale colour and Keypoint Density-based Approach for Visual Saliency Detection.

    In the first seconds of observing an image, several visual attention processes are involved in identifying the visual targets that pop out from the scene. Saliency is the quality that makes certain regions of an image stand out from the visual field and grab our attention. Saliency detection models, inspired by visual cortex mechanisms, employ both colour and luminance features. Furthermore, both the locations of pixels and the presence of objects influence visual attention processes. In this paper, we propose a new saliency method that combines the distribution of interest points in the image with multi-scale analysis, a centre-bias module, and a machine learning approach. We use perceptually uniform colour spaces to study how colour affects the extraction of saliency. To investigate eye movements and assess the performance of saliency methods on object-based images, we conduct experimental sessions on our dataset ETTO (Eye Tracking Through Objects). Experiments show our approach to be accurate in detecting saliency compared with state-of-the-art methods on publicly available eye-movement datasets. Performance on object-based images is excellent and remains consistent on generic pictures. Our work also reveals interesting relationships between saliency and perceptually uniform colour spaces.
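
    To make the keypoint-density intuition concrete, here is a simplified Python sketch using OpenCV. It keeps only two ingredients of the paper's pipeline, keypoint density and a centre bias, and omits the multi-scale colour analysis and the learned combination; the ORB detector and the Gaussian parameters are our substitutions:

    ```python
    import cv2
    import numpy as np

    def keypoint_density_saliency(image_bgr: np.ndarray, sigma: float = 25.0) -> np.ndarray:
        """Rough saliency map: keypoint density smoothed by a Gaussian,
        multiplied by a centre-bias map, normalized to [0, 1]."""
        h, w = image_bgr.shape[:2]
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        keypoints = cv2.ORB_create(nfeatures=2000).detect(gray, None)

        # Accumulate keypoint hits, then blur into a smooth density field.
        density = np.zeros((h, w), np.float32)
        for kp in keypoints:
            x, y = map(int, kp.pt)
            density[y, x] += 1.0
        density = cv2.GaussianBlur(density, (0, 0), sigma)

        # Centre bias: viewers tend to fixate near the image centre.
        yy, xx = np.mgrid[0:h, 0:w]
        centre = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2)
                        / (2 * (0.3 * min(h, w)) ** 2))
        sal = density * centre
        return sal / sal.max() if sal.max() > 0 else sal
    ```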

    ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models

    Exploring alternative ideas by rewriting text is integral to the writing process. State-of-the-art large language models (LLMs) can simplify the generation of writing variations. However, current interfaces pose challenges for the simultaneous consideration of multiple variations: creating new versions without overwriting text can be difficult, and pasting them sequentially can clutter documents, increasing workload and disrupting writers' flow. To tackle this, we present ABScribe, an interface that supports rapid, yet visually structured, exploration of writing variations in human-AI co-writing tasks. With ABScribe, users can swiftly produce multiple variations using LLM prompts, which are auto-converted into reusable buttons. Variations are stored adjacently within text segments for rapid in-place comparisons using mouse-over interactions on a context toolbar. Our user study with 12 writers shows that ABScribe significantly reduces task workload (d = 1.20, p < 0.001) and enhances user perceptions of the revision process (d = 2.41, p < 0.001) compared to a popular baseline workflow, and provides insights into how writers explore variations using LLMs.
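
    One plausible way to model "variations stored adjacently within text segments" is sketched below in Python; this is our guess at a minimal underlying data structure, not ABScribe's actual implementation:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Segment:
        """A span of the document holding multiple variations in place, so
        alternatives never overwrite each other or clutter the draft."""
        variations: List[str]
        active: int = 0  # which variation is currently rendered

        def add(self, text: str) -> None:
            # e.g. text produced by a reusable LLM prompt button
            self.variations.append(text)

        def switch(self, i: int) -> None:
            # mouse-over on the context toolbar picks a variation in place
            if 0 <= i < len(self.variations):
                self.active = i

        def render(self) -> str:
            return self.variations[self.active]

    @dataclass
    class Document:
        segments: List[Segment] = field(default_factory=list)

        def render(self) -> str:
            return "".join(s.render() for s in self.segments)
    ```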

    Supporting Sensemaking of Complex Objects with Visualizations: Visibility and Complementarity of Interactions

    Making sense of complex objects is difficult and typically requires the use of external representations to support cognitive demands while reasoning about the objects. Visualizations are one type of external representation that can support sensemaking activities. In this paper, we investigate the role of two design strategies in making the interactive features of visualizations more supportive of users' exploratory needs when trying to make sense of complex objects: visibility and complementarity of interactions. We employ a theoretical framework concerned with human–information interaction and complex cognitive activities to inform, contextualize, and interpret the effects of the design strategies. The two strategies are incorporated in the design of Polyvise, a visualization tool that supports making sense of complex four-dimensional geometric objects. A mixed-methods study was conducted to evaluate the design strategies and the overall usability of Polyvise. We report the findings of the study, discuss some implications for the design of visualization tools that support sensemaking of complex objects, and propose five design guidelines. We anticipate that our results are transferable to other contexts and that these two design strategies can be used broadly in visualization tools intended to support activities with complex objects and information spaces.