    Immersive ExaBrick: Visualizing Large AMR Data in the CAVE

    Rendering large adaptive mesh refinement (AMR) data in real-time in virtual reality (VR) environments is a complex challenge that demands sophisticated techniques and tools. The proposed solution harnesses the ExaBrick framework and integrates it as a plugin in COVISE, a robust visualization system equipped with the VR-centric OpenCOVER render module. This setup enables direct navigation and interaction within the rendered volume in a VR environment. The user interface incorporates rendering options and functions, ensuring a smooth and interactive experience. We show that high-quality volume rendering of AMR data in VR environments at interactive rates is possible using GPUs.
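
    The brick idea at the heart of ExaBrick can be illustrated with a much-simplified, CPU-side sketch: the AMR domain is covered by bricks of same-level cells, and a sample point is resolved by the finest brick containing it. The class and function names below are illustrative only, not the actual ExaBrick or COVISE API, and a real renderer would interpolate across brick boundaries on the GPU.

```python
import numpy as np

class Brick:
    """A block of same-level AMR cells: origin, cell width, and a 3D value array."""
    def __init__(self, origin, cell_width, values):
        self.origin = np.asarray(origin, dtype=float)
        self.cell_width = float(cell_width)  # finer refinement levels -> smaller width
        self.values = values                 # shape (nx, ny, nz)

    def contains(self, p):
        extent = self.origin + self.cell_width * np.array(self.values.shape)
        return np.all(p >= self.origin) and np.all(p < extent)

    def sample(self, p):
        # nearest-cell lookup; a real renderer interpolates across brick boundaries
        idx = np.floor((p - self.origin) / self.cell_width).astype(int)
        return self.values[tuple(idx)]

def sample_amr(bricks, p):
    """Sample the finest (smallest cell width) brick containing point p."""
    hits = [b for b in bricks if b.contains(p)]
    return min(hits, key=lambda b: b.cell_width).sample(p) if hits else None

# Two overlapping levels: a coarse 4x4x4 brick and a refined patch inside it.
coarse = Brick((0, 0, 0), 1.0, np.zeros((4, 4, 4)))
fine   = Brick((1, 1, 1), 0.5, np.ones((4, 4, 4)))
print(sample_amr([coarse, fine], np.array([1.25, 1.25, 1.25])))  # 1.0 (fine brick wins)
print(sample_amr([coarse, fine], np.array([3.5, 3.5, 3.5])))     # 0.0 (coarse only)
```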

    A Multi-Resolution Interactive Previewer for Volumetric Data on Arbitrary Meshes

    In this paper we describe a rendering method suitable for interactive previewing of large-scale arbitrary-mesh volume data sets. A data set to be visualized is represented by a "point cloud," i.e., a set of points and associated data values without known connectivity between the points. The method uses a multi-resolution approach to achieve interactive rendering rates of several frames per second for arbitrarily large data sets. Lower-resolution approximations of an original data set are created by iteratively applying a point-decimation operation to higher-resolution levels. The goal of this method is to provide the user with an interactive navigation and exploration tool to determine good viewpoints and transfer functions to pass on to a high-quality volume renderer that uses a standard algorithm.
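
    A minimal sketch of the described multi-resolution pyramid, assuming a grid-clustering decimation operator (the paper's actual point-decimation operation may differ): each level is produced by averaging the points, and their data values, that fall into the same grid cell, with the cell size doubling per level.

```python
import numpy as np

def decimate(points, values, cell):
    """One decimation pass: average all points (and values) falling in each grid cell."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    pts = np.zeros((counts.size, points.shape[1]))
    vals = np.zeros(counts.size)
    np.add.at(pts, inv, points)
    np.add.at(vals, inv, values)
    return pts / counts[:, None], vals / counts

def build_pyramid(points, values, base_cell=0.01, levels=5):
    """Iteratively coarsen: each level halves the resolution of the previous one."""
    pyramid = [(points, values)]
    cell = base_cell
    for _ in range(levels):
        points, values = decimate(points, values, cell)
        pyramid.append((points, values))
        cell *= 2.0
    return pyramid  # pyramid[0] is full resolution; pick a level by point budget

rng = np.random.default_rng(0)
pts = rng.random((100_000, 3))
pyr = build_pyramid(pts, rng.random(100_000))
print([len(p) for p, _ in pyr])  # decreasing point counts per level
```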

    The Persint visualization program for the ATLAS experiment

    The Persint program is designed for the three-dimensional representation of objects and for the interfacing and access to a variety of independent applications, in a fully interactive way. Facilities are provided for spatial navigation and the definition of visualization properties, in order to interactively set the viewing and viewed points and obtain the desired perspective. In parallel, applications may be launched through the use of dedicated interfaces, such as the interactive reconstruction and display of physics events. Recent developments have focused on the interfacing to the XML ATLAS General Detector Description AGDD, making it a widely used tool for XML developers. The graphics capabilities of this program were exploited in the context of the ATLAS 2002 Muon Testbeam, where it was used as an online event display, integrated in the online software framework and participating in the commissioning and debugging of the detector system.
    Comment: 9 pages, 10 figures, proceedings of CHEP200
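
    Interfacing to AGDD essentially means turning an XML detector description into displayable volumes. The toy sketch below parses an AGDD-like snippet; the element and attribute names are illustrative only and do not follow the actual AGDD schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical AGDD-like snippet; the real AGDD schema differs.
AGDD_XML = """
<detector>
  <box name="MDT_chamber" dx="120.0" dy="30.0" dz="500.0"/>
  <box name="RPC_layer"   dx="120.0" dy="2.0"  dz="500.0"/>
</detector>
"""

def load_volumes(xml_text):
    """Collect (name, half-dimensions) for each box volume in the description."""
    root = ET.fromstring(xml_text)
    return [(b.get("name"), tuple(float(b.get(k)) for k in ("dx", "dy", "dz")))
            for b in root.iter("box")]

for name, dims in load_volumes(AGDD_XML):
    print(f"{name}: half-dimensions {dims} (to be handed to the 3D renderer)")
```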

    A Time Comparison of Computer-Assisted and Manual Bathymetric Processing

    We describe an experiment designed to determine the time required to process Multibeam Echosounder (MBES) data using the CUBE (Combined Uncertainty and Bathymetry Estimator) [Calder & Mayer, 2003; Calder, 2003] and Navigation Surface [Smith et al., 2002; Smith, 2003] algorithms. We collected data for a small (22.3×10⁶ soundings) survey in Valdez Narrows, Alaska, and monitored person-hours expended on processing for a traditional MBES processing stream and the proposed computer-assisted method operating on identical data. The analysis shows that the vast majority of time expended in a traditional processing stream is spent in subjective hand-editing of data, followed by line planning and quality control, and that the computer-assisted method is significantly faster than the traditional process through its elimination of human interaction time. The potential improvement in editing time is shown to be on the order of 25-37:1 over traditional methods.
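
    For background, the flavor of CUBE's uncertainty-weighted estimation can be conveyed by a scalar Bayesian update at a single grid node; this toy sketch omits the multiple-hypothesis tracking and disambiguation that the real algorithm performs.

```python
def update_node(depth, var, sounding, sounding_var):
    """Kalman-style scalar update: blend the node estimate with one new sounding,
    weighting by the relative uncertainties."""
    gain = var / (var + sounding_var)
    new_depth = depth + gain * (sounding - depth)
    new_var = (1.0 - gain) * var
    return new_depth, new_var

# A node starting at 10 m with high uncertainty, refined by three soundings.
depth, var = 10.0, 4.0
for z, v in [(10.4, 0.25), (10.5, 0.25), (9.9, 0.30)]:
    depth, var = update_node(depth, var, z, v)
    print(f"depth={depth:.2f} m, variance={var:.3f}")
```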

    Navigation and interaction in a real-scale digital mock-up using natural language and user gesture

    This paper presents a new real-scale 3D system and summarizes first results on multi-modal navigation and interaction interfaces. This work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, and 2) natural language (speech processing) combined with user gesture. An evaluation of the system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induce less cybersickness than device-based interfaces. In this respect, gesture-based interfaces are more efficient than device-based ones.
    FUI CALLISTO-SARI
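
    For context on the subjective measure, SSQ responses are conventionally scored by summing 16 symptom ratings (0-3) into three weighted subscales (Kennedy et al., 1993); a minimal sketch, assuming the standard weighting factors:

```python
# Each symptom is rated 0-3 and contributes to one or more subscales.
WEIGHTS = {"nausea": 9.54, "oculomotor": 7.58, "disorientation": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(nausea_sum, oculomotor_sum, disorientation_sum):
    """Convert raw subscale sums of 0-3 symptom ratings into standard SSQ scores."""
    return {
        "nausea": nausea_sum * WEIGHTS["nausea"],
        "oculomotor": oculomotor_sum * WEIGHTS["oculomotor"],
        "disorientation": disorientation_sum * WEIGHTS["disorientation"],
        "total": (nausea_sum + oculomotor_sum + disorientation_sum) * TOTAL_WEIGHT,
    }

print(ssq_scores(3, 2, 1))  # e.g. mild symptoms reported after a navigation session
```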

    Video browsing interfaces and applications: a review

    We present a comprehensive review of the state of the art in video browsing and retrieval systems, with special emphasis on interfaces and applications. There has been a significant increase in activity (e.g., storage, retrieval, and sharing) employing video data in the past decade, both for personal and professional use. The ever-growing amount of video content available for human consumption and the inherent characteristics of video data—which, if presented in its raw format, is rather unwieldy and costly—have become driving forces for the development of more effective solutions to present video contents and allow rich user interaction. As a result, there are many contemporary research efforts toward developing better video browsing solutions, which we summarize. We review more than 40 different video browsing and retrieval interfaces and classify them into three groups: applications that use video-player-like interaction, video retrieval applications, and browsing solutions based on video surrogates. For each category, we present a summary of existing work, highlight the technical aspects of each solution, and compare them against each other.

    HoME: a Household Multimodal Environment

    We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
    Comment: Presented at NIPS 2017's Visually-Grounded Interaction and Language Workshop
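
    Since HoME is OpenAI Gym-compatible, driving it should follow the familiar reset/step loop; in this minimal sketch the environment id and the random-action agent are placeholders, not taken from the HoME codebase:

```python
import gym

# Hypothetical environment id; consult the HoME repository for the real one.
env = gym.make("HoME-Navigation-v0")

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()          # random agent as a placeholder policy
    # classic (pre-0.26) Gym API: step returns (obs, reward, done, info);
    # for HoME, obs may bundle vision, audio, and semantic channels
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```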

    Online Permaculture Resources: An Evaluation of a Selected Sample

    As a newly emerging, sustainable approach to landscape management, permaculture seeks to integrate knowledge from several disciplines into a holistic system with emphasis on ecological and social responsibility. Online resources on permaculture appear to represent a promising direction in the movement by supplementing existing printed sources, serving to update and diversify existing content, and increasing access to permaculture information and praxis among the general public. This study evaluated a sample of online resources on permaculture using a framework of parameters reflecting website usability and content quality. Best practice for website usability, as well as diversity of information and applicability, was addressed. The evaluation revealed, overall, good quality and usability in the majority of cases, and suggests a strong online presence among the existing permaculture community, and accessible support for those with an interest in joining the movement.