    Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences

    Results: We present an application that enables the quantitative analysis of multichannel 5-D (x, y, z, t, channel) and large-montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of the automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach: an inference-based approach that uses user-provided edits to automatically correct related mistakes runs interactively on the system CPU while the GPU handles the 3-D visualization tasks. Conclusions: By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. There is a pressing need for visualization and analysis tools for 5-D live cell image data. We combine accurate unsupervised processes with an intuitive visualization of the results. Our validation interface allows each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by using validation information from stereo visualization to improve the low-level image processing tasks.
    Comment: BioVis 2014 conference
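
    As an illustration of the interactive correction step described above, here is a minimal Python sketch (all names hypothetical, not the authors' implementation) of how a single user edit to a lineage tree could be propagated to related automated mistakes:

        # Hypothetical sketch: propagate one user edit through a cell-lineage
        # tree so that downstream frames stay consistent with the correction.
        from dataclasses import dataclass, field

        @dataclass
        class LineageNode:
            cell_id: int
            frame: int      # time index t in the 5-D (x, y, z, t, channel) data
            track_id: int
            children: list = field(default_factory=list)

        def reassign_track(node: LineageNode, new_track_id: int) -> int:
            """User edit: move `node` onto `new_track_id`, then propagate the
            change to all descendants. Returns the number of nodes corrected."""
            corrected = 0
            stack = [node]
            while stack:
                n = stack.pop()
                if n.track_id != new_track_id:
                    n.track_id = new_track_id
                    corrected += 1
                stack.extend(n.children)
            return corrected

    In the hybrid design the abstract describes, a routine of this kind would run interactively on the CPU while the GPU continues to render the 3-D image sequence.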

    Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described. It has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of the cameras and lighting elements, displayed surrounding the television image, to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display on each monitor; and when the controller coordinates for the robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.
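
    The last item, remapping controller coordinates into a camera's frame, is the kind of transform sketched below in Python (a hypothetical illustration, not the system's actual code): an operator's hand-controller displacement is rotated into the selected camera's viewing frame, so that "forward" on the controller always means "into the screen" for the image being viewed.

        # Hypothetical sketch: express a hand-controller displacement in the
        # frame of the currently selected camera.
        import numpy as np

        def controller_to_camera(delta: np.ndarray, cam_rotation: np.ndarray) -> np.ndarray:
            """delta: 3-vector of commanded motion in the controller's frame.
            cam_rotation: 3x3 orientation matrix of the selected camera in the
            world frame. Returns the command expressed along the camera's axes."""
            return cam_rotation @ delta

        # Example: a camera yawed 90 degrees about the vertical axis.
        yaw90 = np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]])
        print(controller_to_camera(np.array([1.0, 0.0, 0.0]), yaw90))  # -> [0. 1. 0.]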

    ScannerS: Constraining the phase diagram of a complex scalar singlet at the LHC

    We present the first version of a new tool to scan the parameter space of generic scalar potentials, ScannerS. The main goal of ScannerS is to help distinguish between different patterns of symmetry breaking for each scalar potential. In this work we use it to investigate the possibility of excluding regions of the phase diagram of several versions of a complex singlet extension of the Standard Model with future LHC results. We find that if another scalar is found, one can exclude a phase with a dark matter candidate in definite regions of the parameter space, while predicting whether a third scalar, yet to be found, must be lighter or heavier. The first version of the code is publicly available and contains various generic core routines for tree-level vacuum stability analysis, as well as implementations of collider bounds, dark matter constraints, electroweak precision constraints, and tree-level unitarity.
    Comment: 24 pages, 4 figures, 3 tables. Project development webpage - http://gravitation.web.ua.pt/Scanner
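
    The scan-and-constrain workflow the abstract describes can be pictured with a short generic Python sketch (placeholder constraint functions and parameter names, not the actual ScannerS routines): draw random points in the potential's parameter space and keep only those passing every constraint.

        # Generic sketch of a parameter-space scan with placeholder constraints;
        # the real tool implements tree-level vacuum stability, unitarity,
        # collider, dark matter, and electroweak precision checks.
        import math
        import random

        def stable(p):      # placeholder vacuum-stability check
            return p["lam_h"] > 0 and p["lam_s"] > 0

        def unitary(p):     # placeholder tree-level unitarity bound
            return abs(p["lam_h"]) < 8 * math.pi and abs(p["lam_s"]) < 8 * math.pi

        def scan(n_points):
            accepted = []
            for _ in range(n_points):
                p = {
                    "lam_h": random.uniform(-1.0, 5.0),    # Higgs quartic coupling
                    "lam_s": random.uniform(-1.0, 5.0),    # singlet quartic coupling
                    "m_s":   random.uniform(10.0, 1000.0), # singlet mass [GeV]
                }
                if stable(p) and unitary(p):
                    accepted.append(p)
            return accepted

        print(len(scan(10000)), "points survive the placeholder constraints")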

    The JetCurry Code. I. Reconstructing Three-Dimensional Jet Geometry from Two-Dimensional images

    We present a reconstruction of jet geometry models using numerical methods based on Markov Chain Monte Carlo (MCMC) and a limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm. Our aim is to model the three-dimensional geometry of an AGN jet from observations, which are inherently two-dimensional. Many AGN jets display complex hotspots and bends on kiloparsec scales. The structure of these bends in the jet's frame may be quite different from what we see in the sky frame, which is transformed by our particular viewing geometry. Knowledge of the intrinsic structure is helpful in understanding the magnetic field, and hence the emission and particle acceleration processes, over the length of the jet. We present the method used, as well as a case study based on a region of the M87 jet.
    Comment: Submitted to ApJ on Feb 01, 201
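
    The core geometric difficulty, recovering intrinsic structure from foreshortened sky-plane measurements, can be illustrated with a simple Python deprojection (an assumption-laden sketch, not the JetCurry code itself): for a jet axis inclined at angle theta to the line of sight, projected distances along the jet are compressed by a factor of sin(theta).

        # Illustrative sketch: deproject knot separations measured in the sky
        # plane, assuming the jet axis makes angle `theta` with the line of sight.
        import numpy as np

        def deproject(s_projected: np.ndarray, theta: float) -> np.ndarray:
            """s_projected: distances along the jet in the sky plane.
            theta: angle between jet axis and line of sight, in radians.
            Returns the corresponding intrinsic distances in the jet frame."""
            return s_projected / np.sin(theta)

        # M87's jet is commonly estimated to lie roughly 17 degrees from the
        # line of sight, so 1 kpc of projected separation is ~3.4 kpc intrinsic.
        print(deproject(np.array([1.0]), np.radians(17.0)))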

    Exploring the Design Space of Immersive Urban Analytics

    Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices such as the HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting an appropriate view under given circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing work, possible future research opportunities are explored and discussed.
    Comment: 23 pages, 11 figures

    Convolutional Neural Networks Applied to Neutrino Events in a Liquid Argon Time Projection Chamber

    We present several studies of convolutional neural networks applied to data from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single-particle images, the localization of single-particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification and event detection in simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.
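
    For readers unfamiliar with the approach, a minimal Python/PyTorch sketch of a single-particle image classifier follows (a toy architecture for illustration, not the network used in the paper):

        # Toy CNN for single-particle classification on LArTPC-style images;
        # hypothetical layer sizes, not the MicroBooNE configuration.
        import torch
        import torch.nn as nn

        class ParticleClassifier(nn.Module):
            def __init__(self, n_classes: int = 5):  # e.g. e, gamma, mu, pi, proton
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):  # x: (batch, 1, H, W) wire-plane image
                return self.classifier(self.features(x).flatten(1))

        # Smoke test on a dummy 256x256 single-channel event image.
        logits = ParticleClassifier()(torch.zeros(1, 1, 256, 256))
        print(logits.shape)  # torch.Size([1, 5])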