    Ambient point clouds for view interpolation

    Interactive visualization tool for multi-channel confocal microscopy data in neurobiology research

    Confocal microscopy is widely used in neurobiology for studying the three-dimensional structure of the nervous system. Confocal image data are often multi-channel, with each channel resulting from a different fluorescent dye or fluorescent protein; one channel may have dense data while another is sparse; and there are often structures at several spatial scales: subneuronal domains, neurons, and large groups of neurons (brain regions). Even qualitative analysis can therefore require visualization using techniques and parameters fine-tuned to a particular dataset. Despite the plethora of volume rendering techniques that have been available for many years, the techniques commonly used in neurobiological research are somewhat rudimentary, such as viewing image slices or maximum intensity projections. There is thus a real demand from neurobiologists, and biologists in general, for a flexible visualization tool that allows interactive visualization of multi-channel confocal data, with rapid fine-tuning of parameters to reveal the three-dimensional relationships of structures of interest. Together with neurobiologists, we have designed such a tool, choosing visualization methods to suit the characteristics of confocal data and a typical biologist's workflow. We use interactive volume rendering with intuitive settings for multidimensional transfer functions, multiple render modes and multiple views for multi-channel volume data, and embedding of polygon data into volume data for rendering and editing. As an example, we apply this tool to visualize confocal microscopy datasets of the developing zebrafish visual system.
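
    As an illustration of the kind of rendering described above (a sketch only, not the paper's implementation), the snippet below applies a per-channel 1D transfer function and front-to-back alpha compositing to a two-channel volume, alongside the maximum-intensity-projection baseline; the channel data, colors, opacity scales, and function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical channels of a confocal stack, shape (depth, height, width):
ch_dense  = rng.random((32, 64, 64))                          # dense dye channel
ch_sparse = (rng.random((32, 64, 64)) > 0.98).astype(float)   # sparse channel

def classify(channel, color, alpha_scale):
    """1D transfer function: map intensity to RGBA, tinted per channel."""
    rgb = channel[..., None] * np.asarray(color, float)   # (D, H, W, 3)
    alpha = np.clip(channel * alpha_scale, 0.0, 1.0)      # (D, H, W)
    return rgb, alpha

def composite_front_to_back(rgb, alpha):
    """Alpha-composite slices along the depth axis, front to back."""
    out_rgb = np.zeros(rgb.shape[1:])
    out_a = np.zeros(alpha.shape[1:])
    for z in range(rgb.shape[0]):
        w = (1.0 - out_a[..., None]) * alpha[z][..., None]
        out_rgb += w * rgb[z]
        out_a += (1.0 - out_a) * alpha[z]
    return out_rgb

# Per-channel transfer functions keep a faint, sparse channel visible next
# to a dense one; summing the composited channels merges the two views.
img  = composite_front_to_back(*classify(ch_dense,  (0.0, 1.0, 0.0), 0.05))
img += composite_front_to_back(*classify(ch_sparse, (1.0, 0.0, 1.0), 0.9))

# The common baseline for comparison: a maximum intensity projection,
# which discards depth ordering entirely.
mip = ch_dense.max(axis=0)
```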

    VIOLA - A multi-purpose and web-based visualization tool for neuronal-network simulation output

    Neuronal network models and corresponding computer simulations are invaluable tools for interpreting the relationship between neuron properties, connectivity, and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface, as observed experimentally, can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. Interpreting the resulting stream of multi-modal, multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation, VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based, and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. As a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed ahead of a detailed quantitative analysis of specific aspects of the data. Furthermore, ongoing efforts, including the European Human Brain Project, aim at providing online user portals for integrated model development, simulation, analysis, and provenance tracking, in which interactive visual analysis tools are one component. Browser-compatible, web-technology-based solutions are therefore required; within this scope, VIOLA provides a first prototype.
    Comment: 38 pages, 10 figures, 3 tables
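
    To make the data flow concrete, here is a minimal sketch (not VIOLA's code) of the kind of pre-processing such views rely on: binning spike times and neuron positions into a spatiotemporal grid that can be played back as an activity movie. The spike data, bin sizes, and layer extent are invented for illustration.

```python
import numpy as np

# Hypothetical spike data: one row per spike, with its time (ms) and the
# emitting neuron's (x, y) position on the layer, as a layered point-neuron
# model with distance-dependent connectivity would produce.
rng = np.random.default_rng(1)
n_spikes = 10_000
times = rng.uniform(0.0, 1000.0, n_spikes)         # spike times in ms
xy = rng.uniform(0.0, 4.0, (n_spikes, 2))          # neuron positions in mm

def bin_activity(times, xy, t_bin=10.0, grid=(20, 20), extent=4.0):
    """Histogram spikes into a (time, x, y) grid for image-like views."""
    t_idx = (times / t_bin).astype(int)
    x_idx = np.clip((xy[:, 0] / extent * grid[0]).astype(int), 0, grid[0] - 1)
    y_idx = np.clip((xy[:, 1] / extent * grid[1]).astype(int), 0, grid[1] - 1)
    frames = np.zeros((t_idx.max() + 1, *grid))
    np.add.at(frames, (t_idx, x_idx, y_idx), 1.0)  # accumulate spike counts
    return frames                                   # one "movie frame" per bin

frames = bin_activity(times, xy)
print(frames.shape)   # (100, 20, 20): a spatiotemporal activity movie
```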

    AROMA: Automatic Generation of Radio Maps for Localization Systems

    WLAN localization has recently become an active research field. Thanks to wide WLAN deployment, WLAN localization provides ubiquitous coverage and adds value to the wireless network by locating its users without any additional hardware. However, WLAN localization systems usually require constructing a radio map, which is a major barrier to their deployment. The radio map stores information about the signal strength from different signal-strength streams at selected locations in the site of interest. Constructing a radio map typically involves measurements and calibrations, making it a tedious and time-consuming operation. In this paper, we present the AROMA system, which automatically constructs accurate active and passive radio maps for both device-based and device-free WLAN localization systems. AROMA has three main goals: high accuracy, low computational requirements, and minimum user overhead. To achieve high accuracy, AROMA uses 3D ray tracing enhanced with the uniform theory of diffraction (UTD) to model the electric field behavior and the human shadowing effect. AROMA also automates a number of routine tasks, such as importing building models and automatically sampling the area of interest, to reduce the user's overhead. Finally, AROMA uses a number of optimization techniques to reduce the computational requirements. We present our system architecture and describe the details of its different components that allow AROMA to achieve its goals. We evaluate AROMA in two different testbeds. Our experiments show that the predicted signal strength differs from the measurements by a maximum average absolute error of 3.18 dBm, achieving a maximum localization error of 2.44 m for both the device-based and device-free cases.
    Comment: 14 pages, 17 figures
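
    For a sense of what a radio map is, the sketch below predicts received signal strength on a uniform grid using a simple log-distance path-loss model. AROMA itself uses 3D ray tracing with UTD rather than this empirical model; the access-point positions and propagation parameters here are hypothetical.

```python
import numpy as np

def log_distance_rss(p_tx_dbm, d, n=3.0, d0=1.0, pl0=40.0):
    """Received signal strength (dBm) under a log-distance path-loss model:
    transmit power minus reference loss minus 10*n*log10(d/d0)."""
    d = np.maximum(d, d0)               # clamp inside the reference distance
    return p_tx_dbm - pl0 - 10.0 * n * np.log10(d / d0)

# Automatically sample the area of interest on a uniform grid (metres) and
# predict the RSS from each access point at every point: that table of
# per-location signal strengths is the radio map.
xs, ys = np.meshgrid(np.linspace(0, 30, 61), np.linspace(0, 20, 41))
aps = np.array([[5.0, 5.0], [25.0, 15.0]])   # hypothetical AP positions

radio_map = np.stack([
    log_distance_rss(20.0, np.hypot(xs - ax, ys - ay))
    for ax, ay in aps
])                                            # shape (n_aps, ny, nx), in dBm
```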

    3D Time-Based Aural Data Representation Using D4 Library’s Layer Based Amplitude Panning Algorithm

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016).
    The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D4 library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and support a broad array of configurations, with a particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools are designed to leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through sound in HDLA scenarios. The ensuing D4 ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data, and offers unique opportunities for furthering research in spatial data audification and sonification, as well as in transportable and scalable spatial media creation and production.
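
    The paper's exact formulation is not reproduced here, but the general layer-based idea can be sketched as a constant-power crossfade between the two loudspeaker layers that bracket the source elevation (a hypothetical three-layer array; panning within a layer is omitted):

```python
import numpy as np

def layer_gains(src_elev, layer_elevs):
    """Split energy between the two loudspeaker layers bracketing the source
    elevation (constant-power crossfade). A generic layer-based-panning
    sketch, not the D4 library's exact formulation."""
    layer_elevs = np.asarray(layer_elevs, float)
    gains = np.zeros(len(layer_elevs))
    if src_elev <= layer_elevs[0]:        # below the lowest layer
        gains[0] = 1.0
        return gains
    if src_elev >= layer_elevs[-1]:       # above the highest layer
        gains[-1] = 1.0
        return gains
    hi = int(np.searchsorted(layer_elevs, src_elev))
    lo = hi - 1
    t = (src_elev - layer_elevs[lo]) / (layer_elevs[hi] - layer_elevs[lo])
    gains[lo] = np.cos(t * np.pi / 2)     # constant-power crossfade keeps
    gains[hi] = np.sin(t * np.pi / 2)     # summed energy at unity
    return gains

# Three layers of a hypothetical high-density array at 0, 40 and 90 degrees:
print(layer_gains(25.0, [0.0, 40.0, 90.0]))   # energy shared by layers 0 and 1
```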

    Using Opaque Image Blur for Real-Time Depth-of-Field Rendering

    State of the art 3D technologies and MVV end to end system design

    The subject of this thesis is the analysis and review of all 3D technologies, both existing and under development, for home environments, taking multiview video (MVV) technologies as the point of reference. All sections of the chain, from capture to playback, are analyzed. The aim is to design a possible satellite architecture for a future MVV television system, under two possible scenarios: broadcast or interactive. The analysis covers technical considerations as well as commercial limitations.

    From Capture to Display: A Survey on Volumetric Video

    Volumetric video, which offers immersive viewing experiences, is gaining prominence. With its six degrees of freedom, it provides viewers with greater immersion and interactivity than traditional video. Despite this potential, volumetric video services pose significant challenges. This survey conducts a comprehensive review of the existing literature on volumetric video. We first provide a general framework for volumetric video services, followed by a discussion of prerequisites for volumetric video, encompassing representations, open datasets, and quality assessment metrics. We then delve into the current methodologies for each stage of the volumetric video service pipeline, detailing capturing, compression, transmission, rendering, and display techniques. Lastly, we explore the applications enabled by this pioneering technology and present an array of research challenges and opportunities in the domain of volumetric video services. This survey aspires to provide a holistic understanding of this burgeoning field and to shed light on potential future research trajectories, aiming to bring the vision of volumetric video to fruition.
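
    As a toy illustration of the rendering stage and of six-degrees-of-freedom viewing (not a method from the survey), the sketch below projects a colored point cloud through a pinhole camera at an arbitrary pose; real volumetric-video renderers use splatting, meshing, or neural techniques, and all names and parameters here are invented.

```python
import numpy as np

def render_points(points, colors, R, t, f, size=(240, 320)):
    """Project a colored point cloud through a pinhole camera at pose (R, t).
    A minimal stand-in for volumetric-video rendering: every 6-DoF viewpoint
    is just another choice of rotation R and translation t."""
    h, w = size
    cam = (points - t) @ R.T                 # world -> camera coordinates
    front = cam[:, 2] > 0.1                  # keep points in front of camera
    cam, col = cam[front], colors[front]
    u = (f * cam[:, 0] / cam[:, 2] + w / 2).astype(int)
    v = (f * cam[:, 1] / cam[:, 2] + h / 2).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    order = np.argsort(-cam[ok, 2])          # paint far points first (z-order)
    img = np.zeros((h, w, 3))
    img[v[ok][order], u[ok][order]] = col[ok][order]
    return img

# A random white point cloud viewed from the identity pose:
pts = np.random.default_rng(2).uniform(-1, 1, (5000, 3)) + [0, 0, 3]
img = render_points(pts, np.ones((5000, 3)), np.eye(3), np.zeros(3), f=300.0)
```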