
    3D Pose Estimation and 3D Model Retrieval for Objects in the Wild

    We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50% of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.
    Comment: Accepted to Conference on Computer Vision and Pattern Recognition (CVPR) 201
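The retrieval step, matching a learned RGB descriptor against descriptors of rendered depth images, can be sketched as follows (a minimal illustration only; cosine similarity and the descriptor shapes are assumptions here, not the paper's exact network or metric-learning loss):

```python
import numpy as np

def retrieve_model(rgb_desc, depth_descs):
    """Return the index of the best-matching depth rendering.

    rgb_desc    : (d,) learned descriptor of the query RGB image
    depth_descs : (n, d) descriptors of depth renderings, one per 3D model

    Matching here uses cosine similarity in the embedding space
    (an assumption; the actual learned metric may differ).
    """
    rgb = rgb_desc / np.linalg.norm(rgb_desc)
    depth = depth_descs / np.linalg.norm(depth_descs, axis=1, keepdims=True)
    sims = depth @ rgb  # cosine similarity against each candidate model
    return int(np.argmax(sims))
```

In the full pipeline, `depth_descs` would come from rendering each candidate 3D model under the estimated pose and passing the renderings through the descriptor CNN.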

    Comparing CFSR and conventional weather data for discharge and soil loss modelling with SWAT in small catchments in the Ethiopian Highlands

    Accurate rainfall data are the key input parameter for modelling river discharge and soil loss. Remote areas of Ethiopia often lack adequate precipitation data, and where these data are available, there might be substantial temporal or spatial gaps. To counter this challenge, the Climate Forecast System Reanalysis (CFSR) of the National Centers for Environmental Prediction (NCEP) readily provides weather data for any geographic location on earth between 1979 and 2014. This study assesses the applicability of CFSR weather data to three watersheds in the Blue Nile Basin in Ethiopia. To this end, the Soil and Water Assessment Tool (SWAT) was set up to simulate discharge and soil loss, using CFSR and conventional weather data, in three small-scale watersheds ranging from 112 to 477 ha. Calibrated simulation results were compared to observed river discharge and observed soil loss over a period of 32 years. The conventional weather data resulted in very good discharge outputs for all three watersheds, while the CFSR weather data resulted in unsatisfactory discharge outputs for all three gauging stations. Soil loss simulation with conventional weather inputs yielded satisfactory outputs for two of the three watersheds, while the CFSR weather input yielded unsatisfactory results for all three. Overall, the simulations with the conventional data produced far better results for discharge and soil loss than simulations with CFSR data. The simulations with CFSR data were unable to adequately represent the specific regional climate for the three watersheds, performing even worse in climatic areas with two rainy seasons. Hence, CFSR data should not be used lightly in remote areas lacking conventional weather data, where no prior analysis is possible.
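The abstract grades simulations as "very good", "satisfactory" or "unsatisfactory"; SWAT studies conventionally score discharge against observations with the Nash-Sutcliffe efficiency (NSE). A minimal sketch of that statistic (assuming NSE is the measure used; the study's exact criteria are not stated in the abstract):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency.

    1.0 means a perfect match; 0.0 means the model predicts no better
    than the mean of the observations; negative values are worse still.
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Common rating thresholds in the SWAT literature place, for example, NSE > 0.75 as "very good" and NSE below roughly 0.5 as "unsatisfactory", though cutoffs vary between studies.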

    Sensitivity Kernels for Flows in Time-Distance Helioseismology: Extension to Spherical Geometry

    We extend an existing Born approximation method for calculating the linear sensitivity of helioseismic travel times to flows from Cartesian to spherical geometry. This development is necessary for using the Born approximation for inferring large-scale flows in the deep solar interior. In a first sanity check, we compare two f-mode kernels from our spherical method and from an existing Cartesian method. The horizontal and total integrals agree to within 0.3 %. As a second consistency test, we consider a uniformly rotating Sun and a travel distance of 42 degrees. The analytical travel-time difference agrees with the forward-modelled travel-time difference to within 2 %. In addition, we evaluate the impact of different choices of filter functions on the kernels for a meridional travel distance of 42 degrees. For all filters, the sensitivity is found to be distributed over a large fraction of the convection zone. We show that the kernels depend on the filter function employed in the data analysis process. If modes of higher harmonic degree (90 ≲ l ≲ 170) are permitted, a noisy pattern of a spatial scale corresponding to l ≈ 260 appears near the surface. When mainly low-degree modes are used (l ≲ 70), the sensitivity is concentrated in the deepest regions and it visually resembles a ray-path-like structure. Among the different low-degree filters used, we find the kernel for phase-speed filtered measurements to be best localized in depth.
    Comment: 17 pages, 5 figures, 2 tables, accepted for publication in ApJ. v2: typo in arXiv author list corrected
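The sensitivity kernels discussed above relate travel-time shifts linearly to the flow field. Schematically, in the Born approximation (generic notation for illustration, not necessarily the paper's):

```latex
\delta\tau(\mathbf{r}_1, \mathbf{r}_2)
  = \int_{\odot} \mathbf{K}(\mathbf{r};\, \mathbf{r}_1, \mathbf{r}_2)
    \cdot \mathbf{u}(\mathbf{r}) \, \mathrm{d}V
```

where $\delta\tau$ is the travel-time difference between surface points $\mathbf{r}_1$ and $\mathbf{r}_2$, $\mathbf{u}$ is the flow field, and $\mathbf{K}$ is the vector-valued sensitivity kernel integrated over the solar volume.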

    GP2C: Geometric Projection Parameter Consensus for Joint 3D Pose and Focal Length Estimation in the Wild

    We present a joint 3D pose and focal length estimation approach for object categories in the wild. In contrast to previous methods that predict 3D poses independently of the focal length or assume a constant focal length, we explicitly estimate and integrate the focal length into the 3D pose estimation. For this purpose, we combine deep learning techniques and geometric algorithms in a two-stage approach: First, we estimate an initial focal length and establish 2D-3D correspondences from a single RGB image using a deep network. Second, we recover 3D poses and refine the focal length by minimizing the reprojection error of the predicted correspondences. In this way, we exploit the geometric prior given by the focal length for 3D pose estimation. This results in two advantages: First, we achieve significantly improved 3D translation and 3D pose accuracy compared to existing methods. Second, our approach finds a geometric consensus between the individual projection parameters, which is required for precise 2D-3D alignment. We evaluate our proposed approach on three challenging real-world datasets (Pix3D, Comp, and Stanford) with different object categories and significantly outperform the state-of-the-art by up to 20% absolute in multiple different metrics.
    Comment: Accepted to International Conference on Computer Vision (ICCV) 201
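The second stage minimizes the reprojection error of the predicted 2D-3D correspondences over pose and focal length. The objective can be sketched as follows (a simplified pinhole model with a known principal point; function and parameter names are illustrative, not taken from the paper):

```python
import numpy as np

def reprojection_error(points_3d, points_2d, R, t, f, c=(0.0, 0.0)):
    """Mean reprojection error of 3D points under pose (R, t) and focal length f.

    points_3d : (n, 3) model points
    points_2d : (n, 2) predicted image correspondences
    R, t      : object-to-camera rotation (3x3) and translation (3,)
    f         : focal length in pixels
    c         : principal point (assumed known here)
    """
    cam = (R @ points_3d.T).T + t            # transform into the camera frame
    proj = f * cam[:, :2] / cam[:, 2:3]      # pinhole projection
    proj += np.asarray(c)
    return float(np.mean(np.linalg.norm(proj - points_2d, axis=1)))
```

A refinement loop would perturb R, t and f to drive this error down, for example with a nonlinear least-squares solver, which is how the pose and focal length reach a geometric consensus.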

    Location Field Descriptors: Single Image 3D Model Retrieval in the Wild

    We present Location Field Descriptors, a novel approach for single image 3D model retrieval in the wild. In contrast to previous methods that directly map 3D models and RGB images to an embedding space, we establish a common low-level representation in the form of location fields from which we compute pose invariant 3D shape descriptors. Location fields encode correspondences between 2D pixels and 3D surface coordinates and, thus, explicitly capture 3D shape and 3D pose information without appearance variations which are irrelevant for the task. This early fusion of 3D models and RGB images results in three main advantages: First, the bottleneck location field prediction acts as a regularizer during training. Second, major parts of the system benefit from training on a virtually infinite amount of synthetic data. Finally, the predicted location fields are visually interpretable and unblackbox the system. We evaluate our proposed approach on three challenging real-world datasets (Pix3D, Comp, and Stanford) with different object categories and significantly outperform the state-of-the-art by up to 20% absolute in multiple 3D retrieval metrics.
    Comment: Accepted to International Conference on 3D Vision (3DV) 2019 (Oral)
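A location field, per-pixel 3D surface coordinates in the object frame, can be constructed from a depth rendering by back-projection (a hypothetical construction for illustration; the paper predicts location fields directly with a network rather than deriving them from depth):

```python
import numpy as np

def location_field(depth, f, R, t):
    """Back-project a depth map into object-space surface coordinates.

    depth : (h, w) depth per pixel, with 0 marking background
    f     : focal length in pixels; principal point assumed at the image centre
    R, t  : object-to-camera rotation (3x3) and translation (3,)

    Returns an (h, w, 3) field of 3D surface coordinates in the object
    frame (zeros for background pixels).
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(float)
    x = (u - w / 2) * depth / f
    y = (v - h / 2) * depth / f
    cam = np.stack([x, y, depth], axis=-1)   # camera-frame surface points
    obj = (cam - t) @ R                      # row-wise R^T (p - t)
    obj[depth == 0] = 0.0
    return obj
```

Because the field lives in object coordinates, two images of the same shape under different poses yield fields related by a rigid transform, which is what makes pose-invariant descriptors computable from them.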