162 research outputs found

    Space-Time Kernel Density Estimation for Real-Time Interactive Visual Analytics

    We present a GPU-based implementation of Space-Time Kernel Density Estimation (STKDE) that provides a massive speedup in analyzing spatial-temporal data. In our work we achieve sub-second performance for data sizes transferable over the Internet in realistic time. We have integrated this into web-based interactive visual analytics tools for analyzing spatial-temporal data. The resulting integrated visual analytics (VA) system permits new analyses of spatial-temporal data from a variety of sources. Novel, interlinked interface elements permit efficient, meaningful analyses.
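    For context, here is a minimal CPU reference of the kind of space-time kernel density estimate such a system evaluates over an (x, y, t) grid. The separable Epanechnikov kernels, the bandwidth values, and the omission of the kernel normalization constants are illustrative assumptions, not details of the authors' GPU implementation.

        import numpy as np

        def stkde(grid_xyt, events_xyt, hs=500.0, ht=7.0):
            # grid_xyt: (n_voxels, 3) query points; events_xyt: (n_events, 3) observed events.
            # hs, ht: spatial and temporal bandwidths (hypothetical units, e.g. metres and days).
            x, y, t = grid_xyt[:, 0:1], grid_xyt[:, 1:2], grid_xyt[:, 2:3]
            dx = (x - events_xyt[:, 0]) / hs          # broadcasts to (n_voxels, n_events)
            dy = (y - events_xyt[:, 1]) / hs
            dt = (t - events_xyt[:, 2]) / ht
            r2 = dx**2 + dy**2
            ks = np.where(r2 < 1.0, 1.0 - r2, 0.0)                 # spatial Epanechnikov kernel
            kt = np.where(np.abs(dt) < 1.0, 1.0 - dt**2, 0.0)      # temporal Epanechnikov kernel
            n = events_xyt.shape[0]
            return (ks * kt).sum(axis=1) / (n * hs**2 * ht)

    A GPU implementation parallelizes the same per-voxel sum over events, which is where the interactive, sub-second performance comes from.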

    Evaluating the relationship between user interaction and financial visual analysis

    It has been widely accepted that interactive visualization techniques enable users to more effectively form hypotheses and identify areas for more detailed investigation. There have been numerous empirical user studies testing the effectiveness of specific visual analytical tools. However, there has been limited effort to connect a user's interactions with their reasoning in order to extract the relationship between the two. In this paper, we present an approach for capturing and analyzing user interactions in a financial visual analytical tool and describe an exploratory user study that examines these interaction strategies. To achieve this goal, we created two visual tools to analyze raw interaction data captured during the user session. The results of this study demonstrate one possible strategy for understanding the relationship between interaction and reasoning both operationally and strategically. Index Terms: H.5.2 [Information Interfaces and Presentation]

    The Analytic Distortion Induced by False-Eye Separation in Head-Tracked Stereoscopic Displays

    Stereoscopic display is a fundamental part of virtual reality systems such as the virtual workbench, the CAVE, and HMD systems. A common practice in stereoscopic systems is deliberately incorrect modeling of the user's eye separation. Underestimating eye separation can help the human visual system fuse stereo image pairs into single 3D images, while overestimating eye separation enhances image depth. Unfortunately, false eye separation modeling also distorts the perceived 3D image in undesirable ways. We present a novel analytic expression and quantitative analysis of this distortion for eyes at an arbitrary location and orientation.
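    As a hedged illustration, the familiar one-dimensional, on-axis special case already shows the effect; the paper's contribution is the general expression for eyes at arbitrary location and orientation. With viewing distance d to the screen, modeled eye separation e_m, true separation e_t, and a point rendered at depth z behind the screen, the on-screen disparity p and the perceived depth z' are

        p = \frac{e_m\, z}{d + z}, \qquad
        z' = \frac{d\, p}{e_t - p} = \frac{e_m\, d\, z}{e_t\, d + (e_t - e_m)\, z}

    so under-estimated separation (e_m < e_t) compresses perceived depth (z' < z) while over-estimated separation stretches it, consistent with the trade-off described above.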

    Instant Architecture

    This paper presents a new method for the automatic modeling of architecture. Building designs are derived using split grammars, a new type of parametric set grammar based on the concept of shape. The paper also introduces an attribute matching system and a separate control grammar, which offer the flexibility required to model buildings using a large variety of different styles and design ideas. Through the adaptive nature of the design grammar used, the created building designs can either be generic or adhere closely to a specified goal, depending on the amount of data available.
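    As a rough sketch of the split-grammar idea (the shape labels, split axes, and fractions below are hypothetical, and the paper's attribute matching and control grammar are not shown):

        # Shapes are labelled axis-aligned boxes; a split rule replaces one shape with labelled slices.
        def split(shape, axis, parts):
            _label, origin, size = shape
            out, offset = [], 0.0
            for child_label, fraction in parts:
                child_size = list(size)
                child_size[axis] = size[axis] * fraction
                child_origin = list(origin)
                child_origin[axis] = origin[axis] + offset
                offset += child_size[axis]
                out.append((child_label, tuple(child_origin), tuple(child_size)))
            return out

        # facade -> ground floor + three upper floors; each upper floor -> four window tiles
        facade = ("facade", (0.0, 0.0, 0.0), (12.0, 16.0, 0.3))
        floors = split(facade, axis=1, parts=[("ground", 0.25)] + [("floor", 0.25)] * 3)
        windows = [tile for f in floors if f[0] == "floor"
                   for tile in split(f, axis=0, parts=[("window", 0.25)] * 4)]

    Repeatedly applying such rules from a start shape derives a full building; in the paper, rule selection is additionally guided by the attribute matching system and the control grammar.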

    The state of the art in integrating machine learning into visual analytics

    Visual analytics systems combine machine learning or other analytic techniques with interactive data visualization to promote sensemaking and analytical reasoning. It is through such techniques that people can make sense of large, complex data. While progress has been made, the tactful combination of machine learning and data visualization is still under-explored. This state-of-the-art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances. Further, it presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions.

    From Urban Terrain Models to Visible Cities

    We are now faced with the possibility, and in some cases the results, of acquiring accurate digital representations of our cities. But these cities will not be capable of interactive visualization unless we meet some fundamental challenges. The first challenge is to take data from multiple sources, which are often accurate in themselves but incomplete, and weave them together into comprehensive models. Because of the size and extent of the data that can now be obtained, this modeling task is daunting and must be accomplished in a semi-automated manner. Once we have comprehensive models, and especially if we can build them rapidly and extend them at will, the next question is what to do with them. Thus the second challenge is to make the models visible. In particular they must be made interactively visible so they can be explored, inspected, and analyzed. In this article, we discuss the nature of the acquired urban data and how we are beginning to meet the challenges and produce visually navigable models. These models provide the basis for building virtual environments for a variety of applications.

    Direct Manipulation on the Virtual Workbench: Two Hands Aren't Always Better Than One

    This paper investigates the differential effectiveness of various interaction techniques on a simple rotation-and-translation task on the virtual workbench. Manipulation time and number of collisions were measured for subjects using four device sets (unimanual glove, bimanual glove, unimanual stick, and bimanual stick). Participants were also asked to subjectively judge each device's effectiveness. Performance results indicated a main effect for device (better performance for users of the stick(s)), but not for number of hands. Subjective results supported these findings, as users expressed a preference for the stick(s).

    Legible Simplification of Textured Urban Models


    Supervised Domain Adaptation using Graph Embedding

    Getting deep convolutional neural networks to perform well requires a large amount of training data. When the available labelled data is small, it is often beneficial to use transfer learning to leverage a related larger dataset (source) in order to improve the performance on the small dataset (target). Among transfer learning approaches, domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them. In this paper, we consider the domain adaptation problem from the perspective of dimensionality reduction and propose a generic framework based on graph embedding. Instead of solving the generalised eigenvalue problem, we formulate the graph-preserving criterion as a loss in the neural network and learn a domain-invariant feature transformation in an end-to-end fashion. We show that the proposed approach leads to a powerful Domain Adaptation framework; a simple LDA-inspired instantiation of the framework leads to state-of-the-art performance on two of the most widely used Domain Adaptation benchmarks, the Office31 and MNIST to USPS datasets.
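    A minimal sketch of what a graph-preserving criterion expressed as a network loss can look like, assuming simple 0/1 intrinsic and penalty graphs and a hypothetical margin; the paper's LDA-inspired instantiation uses its own graph weights and is not reproduced here.

        import torch

        def graph_embedding_loss(z, labels, domains, margin=1.0):
            # z: (n, d) embeddings from the shared feature extractor.
            # Intrinsic graph: same-class pairs across source/target are pulled together.
            # Penalty graph: different-class pairs are pushed apart up to a margin.
            dist = torch.cdist(z, z)                                  # pairwise distances
            same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
            cross_domain = domains.unsqueeze(0) != domains.unsqueeze(1)
            attract = dist.pow(2)[same_class & cross_domain].mean()
            repel = (margin - dist).clamp(min=0).pow(2)[~same_class].mean()
            return attract + repel

    In training, such a term would be added to the usual classification loss and back-propagated through the feature extractor, giving the end-to-end, domain-invariant transformation described above.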