
    3D Visibility Representations of 1-planar Graphs

    We prove that every 1-planar graph G has a z-parallel visibility representation, i.e., a 3D visibility representation in which the vertices are isothetic disjoint rectangles parallel to the xy-plane, and the edges are unobstructed z-parallel visibilities between pairs of rectangles. In addition, the constructed representation is such that there is a plane that intersects all the rectangles, and this intersection defines a bar 1-visibility representation of G. Comment: Appears in the Proceedings of the 25th International Symposium on Graph Drawing and Network Visualization (GD 2017).
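
    A minimal sketch of the underlying geometric test, assuming a rectangle is encoded as (x1, x2, y1, y2, z) with sides parallel to the axes; the grid-sampled check below only approximates visibility and is illustrative, not the paper's construction:

        def xy_overlap(a, b):
            """Open intersection of the xy-projections of rectangles a, b."""
            x1, x2 = max(a[0], b[0]), min(a[1], b[1])
            y1, y2 = max(a[2], b[2]), min(a[3], b[3])
            return (x1, x2, y1, y2) if x1 < x2 and y1 < y2 else None

        def z_visible(a, b, others, n=64):
            """True if some z-parallel sightline joins a and b without
            crossing a rectangle in `others` (excluding a and b) that
            lies strictly between their z-levels; approximated by
            sampling an n-by-n grid over the common projection."""
            ov = xy_overlap(a, b)
            if ov is None:
                return False
            zlo, zhi = sorted((a[4], b[4]))
            blockers = [c for c in others if zlo < c[4] < zhi]
            for i in range(n):
                for j in range(n):
                    x = ov[0] + (i + 0.5) * (ov[1] - ov[0]) / n
                    y = ov[2] + (j + 0.5) * (ov[3] - ov[2]) / n
                    # A sightline at (x, y) survives if no intermediate
                    # rectangle covers that point.
                    if not any(c[0] <= x <= c[1] and c[2] <= y <= c[3]
                               for c in blockers):
                        return True
            return False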

    Visibility Constrained Generative Model for Depth-based 3D Facial Pose Tracking

    In this paper, we propose a generative framework that unifies depth-based 3D facial pose tracking and on-the-fly face model adaptation in unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Specifically, we introduce a statistical 3D morphable model that flexibly describes the distribution of points on the surface of the face model, with an efficient switchable online adaptation that gradually captures the identity of the tracked subject and rapidly constructs a suitable face model when the subject changes. Moreover, unlike prior art that employed ICP-based facial pose estimation, we propose a ray visibility constraint that regularizes the pose based on the face model's visibility with respect to the input point cloud, improving robustness to occlusions. Ablation studies and experimental results on the Biwi and ICT-3DHP datasets demonstrate that the proposed framework is effective and outperforms competing state-of-the-art depth-based methods.
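
    A hedged sketch of what such a ray visibility term can look like for a depth camera with intrinsics (fx, fy, cx, cy); the residual form and all names are assumptions for illustration, not the paper's implementation:

        import numpy as np

        def ray_visibility_penalty(model_pts, depth_map, fx, fy, cx, cy,
                                   margin=0.01):
            """Penalize posed model points (N x 3, camera frame, meters)
            that float in measured free space: the sensor ray through
            their pixel reported a strictly larger depth."""
            X, Y, Z = model_pts[:, 0], model_pts[:, 1], model_pts[:, 2]
            u = np.round(fx * X / Z + cx).astype(int)
            v = np.round(fy * Y / Z + cy).astype(int)
            h, w = depth_map.shape
            valid = (Z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            d_obs = depth_map[v[valid], u[valid]]
            d_mod = Z[valid]
            ok = d_obs > 0  # skip pixels without a depth return
            # Violation: model point closer than the observed surface by
            # more than `margin`, contradicting the measured free space.
            viol = np.maximum(d_obs[ok] - d_mod[ok] - margin, 0.0)
            return float(np.sum(viol ** 2))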

    Three-dimensional hydrodynamical simulations of red giant stars: semi-global models for the interpretation of interferometric observations

    Context. Theoretical predictions from models of red giant branch stars are a valuable tool for various applications in astrophysics, ranging from galactic chemical evolution to studies of exoplanetary systems. Aims. We use the radiative transfer code OPTIM3D and realistic 3D radiative-hydrodynamical (RHD) surface convection simulations of red giants to explore the impact of granulation on interferometric observables. Methods. We compute intensity maps for the 3D simulation snapshots in two filters: in the optical at 5000 ± 300 Å and in the K-band FLUOR filter at 2.14 ± 0.26 μm, corresponding to the wavelength range of instruments mounted on the CHARA interferometer. From the intensity maps, we construct images of the stellar disks, accounting for center-to-limb variations. We then derive interferometric visibility amplitudes and phases and study their behavior with position angle and wavelength. Results. We provide average limb-darkening coefficients for different metallicities and wavelength ranges. We detail the prospects for the detection and characterization of granulation and center-to-limb variations of red giant stars with today's interferometers. We find that the effect of convection-related surface structures depends on metallicity and surface gravity. We provide theoretical closure phases that should be incorporated into the analysis of closure phase signals from red giants with planetary companions. We estimate 3D-1D corrections to stellar radius determinations: 3D models are ~3.5% smaller to ~1% larger in the optical with respect to 1D, and roughly 0.5 to 1.5% smaller in the infrared. Even though these corrections are small, they are important for properly setting the zero point of the effective temperature scale derived by interferometry and for strengthening the confidence in existing catalogues of red giant calibrator stars for interferometry. Comment: Accepted for publication in Astronomy & Astrophysics, 14 pages, 13 figures.
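
    The link between an intensity map and these observables is the van Cittert-Zernike theorem: the complex visibility is the normalized 2D Fourier transform of the sky brightness. A minimal sketch (function name and normalization choices are illustrative; the paper's OPTIM3D pipeline is far more detailed):

        import numpy as np

        def visibility_from_image(intensity, pixel_rad):
            """Complex visibilities of a square intensity map whose pixels
            subtend pixel_rad radians. Frequencies are in cycles/radian,
            so a baseline B observed at wavelength lam samples B / lam."""
            n = intensity.shape[0]
            vis = np.fft.fftshift(np.fft.fft2(intensity))
            vis = vis / vis[n // 2, n // 2]   # normalize by total flux
            freq = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_rad))
            # np.abs(vis) gives visibility amplitudes, np.angle(vis) the
            # phases; the closure phase on a baseline triangle is the sum
            # of the three phases around it.
            return freq, vis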

    Search-based 3D Planning and Trajectory Optimization for Safe Micro Aerial Vehicle Flight Under Sensor Visibility Constraints

    Safe navigation of Micro Aerial Vehicles (MAVs) requires not only obstacle-free flight paths according to a static environment map, but also the perception of and reaction to previously unknown and dynamic objects. This implies that the onboard sensors must cover the current flight direction. Due to the limited payload of MAVs, full sensor coverage of the environment has to be traded off against flight time; thus, often only a part of the environment is covered. We present a combined allocentric complete planning and trajectory optimization approach that takes these sensor visibility constraints into account. The optimized trajectories yield flight paths within the apex angle of a Velodyne Puck LITE 3D laser scanner, so that obstacles in the flight direction are perceived and low-level collision avoidance is possible. Furthermore, the optimized trajectories take the flight dynamics into account and contain the velocities and accelerations along the path. We evaluate our approach with a DJI Matrice 600 MAV and in hardware-in-the-loop simulation. Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019.
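
    A minimal sketch of the kind of per-segment check such a constraint implies, assuming a horizontally mounted scanner whose vertical field of view spans about 30 degrees (the Puck LITE's nominal apex angle); the function and its names are illustrative, not the paper's formulation:

        import math

        def within_apex(vel, apex_deg=30.0):
            """True if the flight direction's elevation relative to the
            sensor's horizontal scan plane stays inside +/- apex_deg/2,
            i.e., the scanner covers the direction of travel."""
            vx, vy, vz = vel
            horiz = math.hypot(vx, vy)
            if horiz < 1e-6 and abs(vz) < 1e-6:
                return True  # hovering: no flight direction to cover
            elevation = math.degrees(math.atan2(vz, horiz))
            return abs(elevation) <= apex_deg / 2.0

    A trajectory optimizer can use such a test (or a smooth penalty built from the same angle) to reject or push velocity profiles whose climb or descent direction leaves the sensor's coverage.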

    Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality

    Real-time occlusion handling is a major problem in outdoor mixed reality systems because it incurs a high computational cost, mainly due to the complexity of the scene. Using only segmentation, it is difficult to accurately render a virtual object occluded by complex objects such as trees, bushes, etc. In this paper, we propose a novel occlusion handling method for a real-time, outdoor, omni-directional mixed reality system that uses only the information from a monocular image sequence. We first present a semantic segmentation scheme for predicting the amount of visibility for different types of objects in the scene. We simultaneously compute a foreground probability map using depth estimation derived from optical flow. Finally, we combine the segmentation result and the probability map to render the computer-generated object and the real scene with a visibility-based rendering method. Our results show a great improvement in occlusion handling compared to existing blending-based methods.
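
    A hedged sketch of how two such per-pixel maps can drive the final composition; the multiplicative weighting below is an assumption for illustration, not the paper's exact blending rule:

        import numpy as np

        def compose(camera_rgb, virtual_rgb, virtual_mask, visibility, fg_prob):
            """Blend a rendered virtual object into the camera image.
            virtual_mask: [0,1] where the virtual object is drawn;
            visibility:   [0,1] per-pixel class visibility from semantic
                          segmentation;
            fg_prob:      [0,1] probability that the real pixel belongs
                          to a foreground occluder (from optical flow)."""
            # The virtual object shows through only where it is drawn,
            # scaled down where its class is hard to see through and
            # where a real occluder is likely in front of it.
            alpha = virtual_mask * visibility * (1.0 - fg_prob)
            alpha = alpha[..., None]  # broadcast over the RGB channels
            return alpha * virtual_rgb + (1.0 - alpha) * camera_rgb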