
    Multilayer haptic feedback for pen-based tablet interaction

    We present a novel, multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues such as texture feedback. Through two user studies, we examine how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels and can perceive object contour characteristics. The results, also demonstrated through an art application, show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool, and surface properties, and to support user guidance.

    ePlant: Visualizing and Exploring Multiple Levels of Data for Hypothesis Generation in Plant Biology

    A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized using separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools that are presented using a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an “app” on Araport. Because it builds on readily available web services, the freely available ePlant code can be adapted for research on other biological species.
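
    As a rough sketch of the kind of integration described above, the following Python fragment fetches two data types for one gene and merges them into a single record. The endpoint URLs and response fields are hypothetical placeholders, not the actual BAR or Araport APIs.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoints standing in for the public web services that
# ePlant aggregates; the real BAR API paths and response schemas differ.
EXPRESSION_URL = "https://example.org/api/expression?gene={gene}"
INTERACTIONS_URL = "https://example.org/api/interactions?gene={gene}"

def fetch_json(url: str) -> dict:
    """Download and decode one JSON response from a web service."""
    with urlopen(url) as resp:
        return json.load(resp)

def gene_summary(gene: str) -> dict:
    """Combine several data types for one gene into a single record,
    mirroring ePlant's integration of separate data sources."""
    return {
        "gene": gene,
        "expression": fetch_json(EXPRESSION_URL.format(gene=gene)),
        "interactions": fetch_json(INTERACTIONS_URL.format(gene=gene)),
    }

print(gene_summary("AT3G24650"))  # an Arabidopsis thaliana gene identifier
```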

    An Algorithm for Automated Fractal Terrain Deformation

    Fractal terrains provide an easy way to generate realistic landscapes. There are several methods to generate fractal terrains, but none of these algorithms gives the user much flexibility in controlling the shape or properties of the final outcome. A few methods to modify fractal terrains have been proposed previously, both algorithm-based and via hand editing, but none provides a general solution. In this work, we present a new algorithm for fractal terrain deformation, a general solution that can be applied to a wide variety of deformations. Our approach employs stochastic local search to identify a sequence of local modifications that deform the fractal terrain to conform to a set of specified constraints. The presented results show that the new method can incorporate multiple constraints simultaneously while still preserving the natural look of the fractal terrain. Keywords (according to ACM CCS): I.3.7 [Computer Graphics, Three-Dimensional Graphics and Realism]: Fractals; I.2.8 [Problem Solving, Control Methods, and Search]: Graph and tree search strategies
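
    The core idea, stochastic local search over small terrain modifications, can be illustrated as follows. This is a minimal hill-climbing sketch under assumed representations (a heightmap as a nested list, constraints as target heights at grid cells), not the paper's actual move set or naturalness criteria.

```python
import random

def constraint_error(terrain, constraints):
    """Sum of squared differences between the terrain heights and the
    user-specified target heights at the constrained grid cells."""
    return sum((terrain[r][c] - h) ** 2 for (r, c), h in constraints.items())

def deform(terrain, constraints, iterations=10000, radius=2, step=0.05):
    """Greedy stochastic local search: repeatedly propose a small, smooth
    local bump and keep it only if it moves the terrain closer to the
    constraints; otherwise undo it."""
    rows, cols = len(terrain), len(terrain[0])
    error = constraint_error(terrain, constraints)
    for _ in range(iterations):
        r0, c0 = random.randrange(rows), random.randrange(cols)
        delta = random.uniform(-step, step)
        touched = []
        for r in range(max(0, r0 - radius), min(rows, r0 + radius + 1)):
            for c in range(max(0, c0 - radius), min(cols, c0 + radius + 1)):
                # Linear falloff keeps each modification spatially smooth,
                # so the deformation avoids unnatural discontinuities.
                d = delta * (1.0 - (abs(r - r0) + abs(c - c0)) / (2.0 * radius + 1.0))
                terrain[r][c] += d
                touched.append((r, c, d))
        new_error = constraint_error(terrain, constraints)
        if new_error < error:
            error = new_error              # keep the improving move
        else:
            for r, c, d in touched:        # revert the non-improving move
                terrain[r][c] -= d
    return terrain

# Flat 16x16 heightmap; pull the centre cell toward height 1.0.
terrain = deform([[0.0] * 16 for _ in range(16)], {(8, 8): 1.0})
```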

    Semantic Constraints for Scene Manipulation

    The creation of object models for computer graphics applications, such as interior design or the generation of animations, is a labour-intensive process. Today's computer-aided design (CAD) programs address the problem of creating geometric object models quite well, but almost all users find common tasks, such as quickly furnishing a room, hard to accomplish. One of the basic reasons is that manipulating objects often does not yield the expected results. This paper presents a new system that exploits knowledge about the natural behavior of objects to provide simple and intuitive interaction techniques for object manipulation. Semantic constraints are introduced, which encapsulate such "common knowledge" about objects. Furthermore, we present a new way to automatically infer a scene hierarchy by dynamically grouping objects according to their constraint relationships.
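
    One way such dynamic grouping can induce a hierarchy is sketched below: a toy "on top of" constraint makes moving an object also move everything it transitively supports. The class and method names are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class Scene:
    """Toy model of a semantic 'on top of' constraint: moving an object
    also moves everything transitively supported by it, so the constraint
    relationships implicitly define a scene hierarchy."""

    def __init__(self):
        self.positions = {}               # object name -> (x, y, z)
        self.supports = defaultdict(set)  # supporter -> supported objects

    def place_on(self, obj, supporter, offset):
        """Place obj on supporter at the given relative offset."""
        sx, sy, sz = self.positions[supporter]
        dx, dy, dz = offset
        self.positions[obj] = (sx + dx, sy + dy, sz + dz)
        self.supports[supporter].add(obj)

    def move(self, obj, delta):
        """Translate an object together with its constraint-based group."""
        dx, dy, dz = delta
        x, y, z = self.positions[obj]
        self.positions[obj] = (x + dx, y + dy, z + dz)
        for child in self.supports[obj]:  # recurse down the hierarchy
            self.move(child, delta)

scene = Scene()
scene.positions["desk"] = (0.0, 0.0, 0.0)
scene.place_on("lamp", "desk", (0.2, 0.0, 0.75))
scene.move("desk", (1.0, 0.0, 0.0))       # the lamp follows the desk
print(scene.positions["lamp"])            # (1.2, 0.0, 0.75)
```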

    Unconstrained vs. Constrained 3D Scene Manipulation

    Content creation for computer graphics applications is a very time-consuming process that requires skilled personnel. Many people find the manipulation of 3D objects with 2D input devices non-intuitive and difficult. We present a system that restricts the motion of objects in a 3D scene with constraints. In this publication we discuss an experiment that compares two different interfaces for 3D manipulation via 2D input devices. The results show clearly that the new constraint-based interface performs significantly better than previous work. 1. Introduction: Computer graphics applications, such as physical simulations, architectural walkthroughs, and animations, require realistic three-dimensional (3D) scenes. A scene usually consists of many different objects. Objects and scenes are usually created with a 3D modeling system. Many different commercial systems are available for this purpose, but in general these products are difficult to use and require significant amounts of training…
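
    A common building block of such constraint-based interfaces is mapping the 2D pointer to a 3D position by intersecting the cursor ray with the surface an object is constrained to (e.g. the floor for a chair). The sketch below shows that mapping under this assumption; the paper's actual constraint system is richer.

```python
import numpy as np

def constrained_drag(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the pointer ray with the constraint plane and return the
    new 3D position, or None if the ray is parallel to the plane or the
    intersection lies behind the camera."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None                       # ray parallel to the plane
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:
        return None                       # intersection behind the camera
    return ray_origin + t * ray_dir

# Example: drag an object across the floor plane y = 0.
pos = constrained_drag(
    ray_origin=np.array([0.0, 2.0, 0.0]),   # camera position
    ray_dir=np.array([0.3, -1.0, 0.5]),     # ray through the cursor
    plane_point=np.array([0.0, 0.0, 0.0]),
    plane_normal=np.array([0.0, 1.0, 0.0]),
)
print(pos)   # the object lands where the cursor ray hits the floor
```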

    WidgetLens: A System for Adaptive Content Magnification of Widgets

    Due to limitations in current graphical user interface toolkits, content on displays with high pixel densities or on mobile devices can appear too small and be hard to interact with. We present WidgetLens, a novel adaptive widget magnification system, which improves access to and interaction with graphical user interfaces. It is designed for use with unmodified applications on screens with high pixel densities and in remote desktop scenarios, and it may also help users with visual impairments in some situations. It includes a comprehensive set of adaptive magnification lenses for standard widgets, each adjusted to the properties of that type of widget. These lenses enable full interaction with content that appears too small. We also present several extensions.
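
    The adaptive part of such per-widget lenses can be sketched as choosing a magnification factor from the widget's type and current size. The minimum sizes below are assumed placeholder values, not WidgetLens's actual parameters.

```python
from dataclasses import dataclass

# Hypothetical minimum comfortable sizes per widget type (in pixels);
# the real WidgetLens lenses adapt far more specifically to each widget.
MIN_SIZE = {"button": (80, 28), "checkbox": (20, 20), "slider": (120, 24)}

@dataclass
class Widget:
    kind: str
    width: int
    height: int

def lens_scale(widget: Widget) -> float:
    """Pick a magnification factor that enlarges the widget to its
    minimum usable size, adapted to the widget's type."""
    min_w, min_h = MIN_SIZE.get(widget.kind, (48, 48))
    return max(1.0, min_w / widget.width, min_h / widget.height)

print(lens_scale(Widget("button", 40, 14)))   # 2.0: doubled for usability
```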

    Automatic generation of user interface layouts for alternative screen orientations

    Creating multiple layout alternatives for graphical user interfaces to accommodate different screen orientations on mobile devices is labor intensive. Here, we investigate how such layout alternatives can be generated automatically from an initial layout. Providing good layout alternatives can inspire developers in their design work and support them in creating adaptive layouts. We analyzed layout alternatives in existing apps and identified common real-world layout transformation patterns. Based on these patterns, we developed a prototype that generates landscape and portrait layout alternatives for an initial layout. In general, there is a very large number of ways in which widgets can be rearranged. For this reason, we developed a classification method to identify and evaluate “good” layout alternatives automatically. From this set of “good” layout alternatives, designers can choose suitable layouts for their applications. In a questionnaire study, we verified that our method generates layout alternatives that appear well structured and are easy to use.
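
    The generate-and-evaluate idea can be sketched as enumerating rearrangements for the target orientation and ranking them with a quality score. The score below (preserving reading order) is an assumed stand-in for the paper's classification method, and exhaustive enumeration only works for small widget counts.

```python
from itertools import permutations

def score(candidate, original):
    """Toy quality measure: reward alternatives that keep widgets in their
    original reading order (the paper's classification method evaluates
    much richer structural criteria)."""
    return sum(1 for a, b in zip(candidate, original) if a == b)

def alternatives(widgets, columns, top_k=3):
    """Enumerate row-major rearrangements for the target orientation and
    keep the best-scoring candidates. Enumeration is factorial in the
    widget count, so this only illustrates the idea for small layouts."""
    ranked = sorted(permutations(widgets),
                    key=lambda cand: score(cand, widgets), reverse=True)
    return [[list(cand[i:i + columns]) for i in range(0, len(cand), columns)]
            for cand in ranked[:top_k]]

# Four portrait widgets rearranged into two columns for landscape.
for alt in alternatives(["title", "image", "text", "button"], columns=2):
    print(alt)
```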

    The hedgehog: a novel optical tracking method for spatially immersive displays

    Existing commercial technologies do not adequately meet the requirements for tracking in fully-enclosed Virtual Reality displays. We present a novel six degree of freedom tracking system, the Hedgehog, which overcomes several limitations inherent in existing sensors and tracking technology. The system reliably estimates the pose of the user’s head with high resolution and low spatial distortion. Light emitted from an arrangement of lasers projects onto the display walls. An arrangement of cameras images the walls, and the two-dimensional centroids of the projections are tracked to estimate the pose of the device. The system is able to handle ambiguous laser projection configurations and static and dynamic occlusions of the lasers, and it incorporates an auto-calibration mechanism through its use of the SCAAT (Single Constraint At A Time) algorithm. A prototype system was evaluated relative to a state-of-the-art motion tracker and showed comparable positional accuracy (1-2 mm RMS) and significantly better absolute angular accuracy (0.1 deg RMS).
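
    The SCAAT idea is to fold each individual measurement into a Kalman filter as it arrives instead of waiting for a complete observation set. A heavily simplified sketch follows: a 2D position state updated one scalar observation at a time, whereas the actual Hedgehog estimates a full 6-DOF pose from nonlinear laser-spot projections.

```python
import numpy as np

def scaat_update(state, cov, h, z, meas_var):
    """One SCAAT-style Kalman update: incorporate a single scalar
    measurement z with linearized measurement row h into the state
    estimate, one constraint at a time."""
    h = np.asarray(h, dtype=float)
    innovation = z - h @ state            # observed minus predicted
    s = h @ cov @ h + meas_var            # innovation variance (scalar)
    k = cov @ h / s                       # Kalman gain
    state = state + k * innovation
    cov = cov - np.outer(k, h) @ cov
    return state, cov

# 2D position state, refined one laser-spot coordinate at a time.
state, cov = np.zeros(2), np.eye(2)
for h, z in [([1.0, 0.0], 0.52), ([0.0, 1.0], 1.31), ([1.0, 0.0], 0.49)]:
    state, cov = scaat_update(state, cov, h, z, meas_var=0.01)
print(state)   # converges toward roughly (0.5, 1.3)
```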

    Distributed display environments
