
    The Double Sphere Camera Model

    Vision-based motion estimation and 3D reconstruction, which have numerous applications (e.g., autonomous driving, navigation systems for airborne devices, and augmented reality), are receiving significant research attention. To increase accuracy and robustness, several researchers have recently demonstrated the benefit of using large field-of-view cameras for such applications. In this paper, we provide an extensive review of existing models for large field-of-view cameras. For each model we provide projection and unprojection functions and the subspace of points that result in valid projection. Then, we propose the Double Sphere camera model, which fits large field-of-view lenses well, is computationally inexpensive, and has a closed-form inverse. We evaluate the model using a calibration dataset with several different lenses and compare the models using metrics relevant for Visual Odometry, i.e., reprojection error, as well as computation time for projection and unprojection functions and their Jacobians. We also provide qualitative results and discuss the performance of all models.
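The projection function referred to in the abstract is commonly stated in closed form for this model: a point is projected through two unit spheres whose centers are offset by xi, and a blending parameter alpha interpolates toward a pinhole-like projection. A minimal Python sketch of that projection step, with all intrinsics in the example chosen purely for illustration:

```python
import math

def ds_project(x, y, z, fx, fy, cx, cy, xi, alpha):
    """Double Sphere projection: map a 3D point to pixel coordinates.

    xi offsets the center of the second sphere; alpha blends between
    the sphere-based and pinhole-like projections. Intrinsics
    (fx, fy, cx, cy) are the usual focal lengths and principal point.
    """
    d1 = math.sqrt(x * x + y * y + z * z)
    d2 = math.sqrt(x * x + y * y + (xi * d1 + z) ** 2)
    denom = alpha * d2 + (1.0 - alpha) * (xi * d1 + z)
    u = fx * x / denom + cx
    v = fy * y / denom + cy
    return u, v
```

With xi = 0 and alpha = 0 the denominator reduces to z, recovering the standard pinhole projection, which is one way to sanity-check an implementation.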

    Gravitational Lensing by Spinning Black Holes in Astrophysics, and in the Movie Interstellar

    Interstellar is the first Hollywood movie to attempt depicting a black hole as it would actually be seen by somebody nearby. For this we developed a code called DNGR (Double Negative Gravitational Renderer) to solve the equations for ray-bundle (light-beam) propagation through the curved spacetime of a spinning (Kerr) black hole, and to render IMAX-quality, rapidly changing images. Our ray-bundle techniques were crucial for achieving IMAX-quality smoothness without flickering. This paper has four purposes: (i) to describe DNGR for physicists and CGI practitioners; (ii) to present the equations we use, when the camera is in arbitrary motion at an arbitrary location near a Kerr black hole, for mapping light sources to camera images via elliptical ray bundles; (iii) to describe new insights, from DNGR, into gravitational lensing when the camera is near the spinning black hole, rather than far away as in almost all prior studies; and (iv) to describe how the images of the black hole Gargantua and its accretion disk, in the movie Interstellar, were generated with DNGR. There are no new astrophysical insights in the accretion-disk section of the paper, but disk novices may find it pedagogically interesting, and movie buffs may find its discussions of Interstellar interesting.
    Comment: 46 pages, 17 figures
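DNGR itself integrates elliptical ray bundles through Kerr spacetime, which is far beyond a short sketch. As a much simpler illustration of the underlying light-bending physics (Schwarzschild rather than Kerr, a single ray rather than a bundle, and in no way DNGR's method), the bending angle of a photon can be computed by numerically evaluating the standard deflection integral, with a substitution that removes the turning-point singularity:

```python
import math

def deflection_angle(M, r0, n=20000):
    """Bending angle of a light ray with closest approach r0 to a
    Schwarzschild mass M (geometric units, G = c = 1; n must be even).

    With u = 1/r, the photon orbit equation gives
    (du/dphi)^2 = 1/b^2 - u^2 + 2*M*u^3, which factors as
    (u0 - u) * g(u) with g(u) = u0 + u - 2*M*(u^2 + u*u0 + u0^2).
    The substitution u = u0 - t^2 makes the integrand regular.
    """
    u0 = 1.0 / r0

    def g(u):
        return u0 + u - 2.0 * M * (u * u + u * u0 + u0 * u0)

    # Simpson's rule for delta_phi = 4 * int_0^sqrt(u0) dt / sqrt(g(u0 - t^2))
    tmax = math.sqrt(u0)
    h = tmax / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w / math.sqrt(g(u0 - t * t))
    delta_phi = 4.0 * (h / 3.0) * total
    return delta_phi - math.pi  # deflection relative to a straight line
```

In the weak-field limit this reproduces the classic result: for r0 much larger than M, the deflection approaches 4M/r0.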

    Visualizing Interstellar's Wormhole

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) at the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie; (ii) at the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie; (iii) combining the proper reference frame of a camera with solutions of the geodesic equation to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres; (iv) implementing this map, for example in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole; (v) with the student's implementation, exploring how the wormhole's three parameters influence what the camera sees, which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole; and (vi) using the student's implementation, exploring the wormhole's Einstein ring, and particularly the peculiar motions of star images near the ring, and exploring what it looks like to travel through a wormhole.
    Comment: 14 pages and 13 figures. In press at American Journal of Physics. Minor revisions; primarily insertion of a new, long reference 15 at the end of Section II.
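The embedding-diagram exercise of point (ii) has a closed form for the simplest case: the one-parameter Ellis wormhole with radius function r(l) = sqrt(l^2 + rho^2) (not the movie's three-parameter metric, which generalizes it). A sketch of that construction:

```python
import math

def ellis_embedding(rho, ell):
    """Embedding-diagram point (r, z) for the Ellis wormhole at proper
    distance ell from the throat; rho is the throat radius.

    An equatorial slice ds^2 = dl^2 + r(l)^2 dphi^2 embeds in flat
    3-space as a surface of revolution z(r) with
    dz/dl = sqrt(1 - (dr/dl)^2). For r(l) = sqrt(l^2 + rho^2) this
    integrates in closed form to z = rho * asinh(l / rho).
    """
    r = math.sqrt(ell * ell + rho * rho)
    z = rho * math.asinh(ell / rho)
    return r, z
```

Sampling (r, z) over a range of ell and revolving the curve about the z-axis produces the familiar two-mouthed funnel picture; the throat sits at ell = 0, where r = rho and z = 0.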

    bRing: An observatory dedicated to monitoring the β Pictoris b Hill sphere transit

    Aims. We describe the design and first light observations from the β Pictoris b Ring ("bRing") project. The primary goal is to detect photometric variability of the young star β Pictoris due to circumplanetary material surrounding the directly imaged young extrasolar gas giant planet β Pictoris b. Methods. Over a nine-month period centred on September 2017, the Hill sphere of the planet will cross in front of the star, providing a unique opportunity to directly probe the circumplanetary environment of a directly imaged planet through photometric and spectroscopic variations. We have built and installed the first of two bRing monitoring stations (one in South Africa and the other in Australia) that will measure the flux of β Pictoris with a photometric precision of 0.5% over 5 minutes. Each station uses two wide-field cameras to cover the declination of the star at all elevations. Detection of photometric fluctuations will trigger spectroscopic observations with large-aperture telescopes in order to determine the gas and dust composition in a system at the end of the planet-forming era. Results. The first three months of operation demonstrate that bRing can obtain better than 0.5% photometry on β Pictoris in five minutes and is sensitive to nightly trends, enabling the detection of any transiting material within the Hill sphere of the exoplanet.
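A toy sketch of the kind of binned dip detection that 0.5% photometry over 5-minute bins enables. This is not the bRing pipeline; the cadence, bin size, and 3-sigma threshold are all illustrative assumptions:

```python
import statistics

def flag_dips(flux, bin_size=60, precision=0.005, nsigma=3.0):
    """Toy transit-dip detector (hypothetical, not the bRing pipeline).

    flux: normalized per-exposure flux samples; bin_size: samples per
    bin (e.g. one 5-minute bin, under an assumed cadence). A bin is
    flagged when its mean departs from the median bin level by more
    than nsigma times the quoted photometric precision.
    """
    bins = [flux[i:i + bin_size] for i in range(0, len(flux), bin_size)]
    means = [statistics.fmean(b) for b in bins if b]
    baseline = statistics.median(means)
    return [i for i, m in enumerate(means)
            if abs(m - baseline) > nsigma * precision]
```

A 2% dip (roughly what a substantial circumplanetary structure might produce) in one bin of an otherwise flat light curve clears the assumed 3 x 0.5% threshold and is flagged.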

    Couette-Poiseuille flow experiment with zero mean advection velocity: Subcritical transition to turbulence

    We present a new experimental set-up that creates a shear flow with zero mean advection velocity, achieved by counterbalancing the nonzero streamwise pressure gradient with moving boundaries, which generates plane Couette-Poiseuille flow. We report the first experimental results in the transitional regime for this flow. Using flow visualization, we characterize the subcritical transition to turbulence in Couette-Poiseuille flow and show the existence of turbulent spots generated by a permanent perturbation. Due to the zero mean advection velocity of the base profile, these turbulent structures are nearly stationary. We distinguish two regions of the turbulent spot: the active, turbulent core, which is characterized by waviness of the streaks similar to traveling waves, and the surrounding region, which additionally includes the weak undisturbed streaks and oblique waves at the laminar-turbulent interface. We also study the dependence of the size of these two regions on the Reynolds number. Finally, we show that the traveling waves move in the downstream (Poiseuille) direction.
    Comment: 17 pages, 15 figures
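The zero-mean-advection condition fixes the wall speed relative to the pressure gradient. A sketch under an assumed configuration (gap normalized to y in [-1, 1], one stationary wall at y = -1, one belt-driven wall at y = +1; the actual experiment's geometry may differ): the laminar profile is a parabola plus a linear Couette component, and requiring zero bulk velocity pins the belt speed to U = -4G/3 for Poiseuille amplitude G.

```python
def cp_profile(G, y):
    """Laminar Couette-Poiseuille base profile u(y) on y in [-1, 1]
    (assumed geometry: stationary wall at y = -1, belt at y = +1).

    G is the pressure-gradient (Poiseuille) amplitude. The belt speed
    U = -4*G/3 is chosen so that the mean of u over the gap vanishes,
    i.e. the flow has zero mean advection velocity.
    """
    U = -4.0 * G / 3.0
    return G * (1.0 - y * y) + U * (1.0 + y) / 2.0
```

Averaging the parabolic part over the gap gives 2G/3 and the linear part gives U/2, so 2G/3 + U/2 = 0 yields U = -4G/3, which is how the wall motion cancels the pressure-driven bulk flow.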

    Asteroid modeling for testing spacecraft approach and landing


    Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction

    State-of-the-art methods for large-scale 3D reconstruction from RGB-D sensors usually reduce drift in camera tracking by globally optimizing the estimated camera poses in real-time, without simultaneously updating the reconstructed surface on pose changes. We propose an efficient on-the-fly surface correction method for globally consistent dense 3D reconstruction of large-scale scenes. Our approach uses a dense Visual RGB-D SLAM system that estimates the camera motion in real-time on a CPU and refines it in a global pose graph optimization. Consecutive RGB-D frames are locally fused into keyframes, which are incorporated into a sparse voxel-hashed Signed Distance Field (SDF) on the GPU. On pose graph updates, the SDF volume is corrected on-the-fly using a novel keyframe re-integration strategy with reduced GPU-host streaming. We demonstrate in an extensive quantitative evaluation that our method is up to 93% more runtime-efficient than the state-of-the-art and requires significantly less memory, with only negligible loss of surface quality. Overall, our system requires only a single GPU and allows for real-time surface correction of large environments.
    Comment: British Machine Vision Conference (BMVC), London, September 2017
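The re-integration idea rests on SDF fusion being an invertible weighted average: a keyframe integrated under a stale pose can be de-integrated exactly and fused again under the corrected pose. A single-voxel toy sketch of that arithmetic (an illustration of the principle, not the paper's GPU voxel-hashing implementation):

```python
def integrate(tsdf, weight, d, w=1.0):
    """Fuse one truncated signed-distance observation d (with fusion
    weight w) into a voxel's running weighted average."""
    new_w = weight + w
    return (tsdf * weight + d * w) / new_w, new_w

def deintegrate(tsdf, weight, d, w=1.0):
    """Exactly undo a previous integrate() of observation d, so the
    observation can later be re-integrated under a corrected pose."""
    new_w = weight - w
    if new_w <= 0.0:
        return 0.0, 0.0  # voxel reverts to the unobserved state
    return (tsdf * weight - d * w) / new_w, new_w
```

Because de-integration recovers the pre-fusion state exactly, only the keyframes affected by a pose-graph update need to be removed and re-fused, which is what keeps the correction cheap enough for on-the-fly use.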