
    Geant4 Applications for Modeling Molecular Transport in Complex Vacuum Geometries

    We discuss a novel use of the Geant4 simulation toolkit to model molecular transport in a vacuum environment in the molecular-flow regime. The Geant4 toolkit was originally developed by the high-energy physics community to simulate the interactions of elementary particles within complex detector systems. Here its capabilities are applied to molecular vacuum transport in geometries where other techniques are impractical. The techniques are verified with an application representing a simple vacuum geometry that has previously been studied both analytically and by basic Monte Carlo simulation. We then discuss an application with a very complicated geometry, that of the Large Synoptic Survey Telescope camera cryostat, used to determine the probabilities of transport of contaminant molecules to optical surfaces where contamination control is crucial.
    Comment: 7 pages, 4 figures, 2 tables, to appear in IJMSSC; updated to accepted version
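    In the molecular-flow regime, transport of this kind is commonly modeled as test-particle Monte Carlo with diffuse (cosine-law) re-emission at every wall collision. As an illustrative sketch only, not the paper's Geant4 implementation, the kind of simple benchmark geometry mentioned above, a cylindrical tube, can be simulated directly; all function names and parameters here are hypothetical:

    ```python
    import math
    import random

    def cosine_emit(normal, t1, t2, rng):
        # Lambertian (cosine-law) emission about the surface normal, the
        # standard assumption for diffuse wall scattering in molecular flow.
        u1, u2 = rng.random(), rng.random()
        sin_t, cos_t = math.sqrt(u1), math.sqrt(1.0 - u1)
        phi = 2.0 * math.pi * u2
        return tuple(cos_t * n + sin_t * (math.cos(phi) * a + math.sin(phi) * b)
                     for n, a, b in zip(normal, t1, t2))

    def transmission_probability(radius, length, n_particles, seed=0):
        """Monte Carlo estimate of the transmission (Clausing) factor of a
        cylindrical tube: the fraction of molecules entering at z = 0 that
        leave through z = length after diffuse wall collisions."""
        rng = random.Random(seed)
        transmitted = 0
        for _ in range(n_particles):
            # Entry point uniform over the inlet disk, cosine-law direction about +z.
            r = radius * math.sqrt(rng.random())
            ang = 2.0 * math.pi * rng.random()
            x, y, z = r * math.cos(ang), r * math.sin(ang), 0.0
            dx, dy, dz = cosine_emit((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), rng)
            while True:
                a = dx * dx + dy * dy
                if a < 1e-12:                 # moving (nearly) parallel to the axis
                    if dz > 0.0:
                        transmitted += 1
                    break
                # Distance along the ray to the wall x^2 + y^2 = radius^2.
                b = x * dx + y * dy
                c = x * x + y * y - radius * radius
                t_hit = (-b + math.sqrt(b * b - a * c)) / a
                z_hit = z + dz * t_hit
                if z_hit >= length:           # escapes through the far end
                    transmitted += 1
                    break
                if z_hit <= 0.0:              # returns through the inlet
                    break
                # Diffuse re-emission from the wall about the inward normal.
                x, y, z = x + dx * t_hit, y + dy * t_hit, z_hit
                nx, ny = -x / radius, -y / radius
                dx, dy, dz = cosine_emit((nx, ny, 0.0), (-ny, nx, 0.0), (0.0, 0.0, 1.0), rng)
        return transmitted / n_particles
    ```

    For a tube with length twice its radius this estimate reproduces the classical Clausing factor of about 0.51, the kind of analytically known result the paper uses for verification.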

    A note on the depth-from-defocus mechanism of jumping spiders

    Jumping spiders are capable of estimating the distance to their prey using only the information from one of their main eyes. Recently, it has been shown that jumping spiders perform this estimation based on image-defocus cues. To gain insight into the mechanisms behind this blur-to-distance mapping as performed by the spider, and to judge whether depth-from-defocus computer vision algorithms can draw inspiration from spider vision, we constructed a three-dimensional (3D) model of the anterior median eye of Metaphidippus aeneolus, a well studied species of jumping spider. We were able to study images of the environment as the spider would see them and to measure the performance of a well-known depth-from-defocus algorithm on this dataset. We found that the algorithm performs best on images averaged over the considerable thickness of the spider's receptor layers, pointing towards a possible functional role of receptor thickness in the spider's depth estimation capabilities.
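    The blur-to-distance mapping itself follows from thin-lens geometry: a lens of focal length f and aperture diameter A focused at distance s images an object at distance u with a blur-circle diameter c = A f |u - s| / (u (s - f)), which is monotonic on the near side (u < s) and therefore invertible. A minimal sketch with illustrative numbers, not the actual optics of the spider eye model:

    ```python
    def blur_diameter(u, f, A, s):
        """Thin-lens blur-circle diameter (same units as u, f, A, s) for an
        object at distance u, with the lens focused at distance s."""
        return A * f * abs(u - s) / (u * (s - f))

    def depth_from_blur(c, f, A, s):
        """Invert the thin-lens relation on the near side (u < s), where
        blur grows monotonically as the object approaches the lens."""
        return A * f * s / (c * (s - f) + A * f)
    ```

    The inversion is only unambiguous once the near/far side is known, which is one reason the spider's stacked receptor layers (each sampling a different defocus) are interesting for this problem.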

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers still use traditional 2D visualizations to examine these models one slice at a time. Properly designed 3D visualization tools can be highly effective at revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that combines stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, with multi-touch interaction, to allow natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated by a highly interactive toolset that leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
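    At the core of dye-emitting tools like these is particle advection: integrating particle positions through the velocity field, typically with a Runge-Kutta scheme for accuracy. A minimal 2D sketch using an illustrative solid-rotation field (hypothetical names, not the system's actual code):

    ```python
    def rk4_step(velocity, p, dt):
        """One classical fourth-order Runge-Kutta step of dp/dt = velocity(p)."""
        x, y = p
        k1 = velocity((x, y))
        k2 = velocity((x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1]))
        k3 = velocity((x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1]))
        k4 = velocity((x + dt * k3[0], y + dt * k3[1]))
        return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
                y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

    def advect(velocity, p, dt, steps):
        """Trace one dye particle through a steady flow field."""
        for _ in range(steps):
            p = rk4_step(velocity, p, dt)
        return p
    ```

    In an interactive system the same integrator runs for thousands of particles per frame, with the velocity field interpolated from the ocean model grid rather than given analytically.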

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems have traditionally been solved in separate tracks. In this paper, we present a unified learning-based technique that uses both types of cues simultaneously for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal-stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended-depth-of-field (EDoF) image from a focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net that produces high-quality depth maps. Comprehensive experiments show that our approach outperforms the state of the art in both accuracy and speed and effectively emulates the human visual system.
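    For context, the classical non-learned baseline that a network like Focus-Net replaces is depth-from-focus by focus-measure maximization: evaluate a sharpness measure, such as the modified Laplacian, at every pixel of every slice, and take the per-pixel argmax over the stack as the depth index. A self-contained sketch on a synthetic stack (illustrative only, not the paper's method):

    ```python
    def box_blur(img):
        """One pass of a 3x3 box filter with clamped borders."""
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc = 0.0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        acc += img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                out[y][x] = acc / 9.0
        return out

    def modified_laplacian(img, x, y):
        """Focus measure: sum of absolute second differences in x and y."""
        return (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
                + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))

    def depth_from_focus(stack):
        """Per-pixel argmax of the focus measure across the focal stack."""
        h, w = len(stack[0]), len(stack[0][0])
        depth = [[0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                depth[y][x] = max(range(len(stack)),
                                  key=lambda k: modified_laplacian(stack[k], x, y))
        return depth
    ```

    This per-pixel argmax is noisy in textureless regions, which is precisely the weakness that learned approaches and the added stereo cue aim to overcome.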