
    Direct imaging of extra-solar planets in star forming regions: Lessons learned from a false positive around IM Lup

    Most exoplanet imagers are ground-based adaptive optics coronagraphic cameras, currently limited in contrast, sensitivity, and astrometric precision, but with the advantage of observing in the near-IR (1-5 μm). Because of these practical limitations, the goal of detecting and characterizing planets puts heavy constraints on target selection, observing strategies, data reduction, and follow-up. Most surveys so far have therefore targeted young systems (1-100 Myr) to catch the putative remnant thermal radiation of giant planets, which peaks in the near-IR. They also favor systems in the solar neighborhood (d < 80 pc), which eases angular resolution requirements and ensures good knowledge of the distance and proper motion, both critical to secure the planet status and to enable subsequent characterization. Because of their youth, it is tempting to target the nearby star forming regions, which are typically twice as far as the bulk of objects usually combed for planets by direct imaging. Probing these interesting reservoirs sets additional constraints that we review in this paper by presenting the planet search that we initiated in 2008 around the disk-bearing T Tauri star IM Lup (Lupus star forming region, 140-190 pc). We show and discuss why age determination, the choice of evolutionary model for the central star and the planet, precise knowledge of the host star proper motion, relative or absolute astrometric accuracy, and patience are the key ingredients of exoplanet searches around more distant young stars. Unfortunately, precision and perseverance do not always pay off: we discovered a candidate companion around IM Lup in 2008, which we report here to be an unbound background object. We nevertheless review in detail the lessons learned from our endeavor, and additionally present the best detection limits ever calculated for IM Lup.
    Comment: 8 pages, 3 figures, 3 tables, accepted to A&
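    The background rejection described above boils down to a common-proper-motion test: a bound companion moves with its host star, while a static background object appears to drift by the negative of the star's proper motion between epochs. A minimal sketch of that test follows; the proper-motion values, epoch baseline, and tolerance in the usage are illustrative assumptions, not the measurements from the paper.

```python
import math

def background_track(pm_ra, pm_dec, dt_years):
    """Expected apparent displacement (mas) of a static background object
    relative to a host star with proper motion (pm_ra, pm_dec) in mas/yr,
    after dt_years: the background source drifts by minus the star's motion."""
    return (-pm_ra * dt_years, -pm_dec * dt_years)

def is_comoving(measured_offset, pm_ra, pm_dec, dt_years, tol_mas):
    """Compare the measured change in companion-host separation (mas) with
    the track predicted for an unbound background object. True means the
    measurement is inconsistent with a background object (i.e. comoving)."""
    bg_ra, bg_dec = background_track(pm_ra, pm_dec, dt_years)
    d_ra, d_dec = measured_offset
    return math.hypot(d_ra - bg_ra, d_dec - bg_dec) > tol_mas

# Illustrative numbers only: a comoving companion shows ~zero relative motion,
# whereas matching the background track rules out a bound companion.
comoving = is_comoving((0.0, 0.0), -10.0, -23.0, 3.0, 10.0)       # True
background = is_comoving((30.0, 69.0), -10.0, -23.0, 3.0, 10.0)   # False
```

    The tolerance `tol_mas` stands in for the combined astrometric uncertainty of both epochs, which is why the abstract stresses relative or absolute astrometric accuracy.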

    Scale invariant line matching on the sphere

    This paper proposes a novel approach to line matching across images captured by different types of cameras, from perspective to omnidirectional. Based on a spherical mapping, the method uses spherical SIFT point features to boost line matching and searches for line correspondences with an affine-invariant similarity measure. It unifies the most common camera models and processes heterogeneous images with minimal distortion of visual information.
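    The spherical mapping that unifies these camera models amounts to back-projecting each pixel to a unit-length ray on the sphere. A minimal sketch for the perspective case, with made-up intrinsics (the paper's actual calibration and omnidirectional models are not reproduced here):

```python
import numpy as np

def pixel_to_sphere(u, v, K):
    """Back-project a perspective pixel (u, v) through intrinsics K and
    normalize the ray to unit length, mapping the pixel onto the unit sphere."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

# Illustrative pinhole intrinsics (focal 500 px, principal point (320, 240)).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

p = pixel_to_sphere(320.0, 240.0, K)  # the principal point maps to the optical axis
```

    Once every image, whatever its camera, is expressed as samples on the sphere, point and line features from heterogeneous sources live in a common domain and can be matched directly.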

    Unsupervised Learning of Depth and Ego-Motion from Video

    We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. We achieve this by simultaneously training depth and camera pose estimation networks using the task of view synthesis as the supervisory signal. The networks are thus coupled via the view synthesis objective during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performing comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performing favorably compared with established SLAM systems under comparable input settings.
    Comment: Accepted to CVPR 2017. Project webpage: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner
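    The geometric core of the view-synthesis supervision is the warp from a target pixel into a source view: lift the pixel with the predicted depth, apply the predicted relative pose, and reproject. A minimal per-pixel sketch under an assumed pinhole model (the intrinsics below are illustrative, and the real method warps whole images differentiably rather than single pixels):

```python
import numpy as np

# Illustrative pinhole intrinsics (focal 500 px, principal point (320, 240)).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_to_source(u, v, depth, K, R, t):
    """Lift target pixel (u, v) with predicted depth to 3D, transform by the
    predicted relative pose (R, t), and project into the source camera.
    A photometric loss then compares target intensities with source
    intensities sampled at the returned coordinates."""
    X_target = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    X_source = R @ X_target + t
    uvw = K @ X_source
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# With an identity pose the pixel maps to itself; with a lateral translation
# it shifts by focal * baseline / depth, i.e. inverse-depth parallax.
same = project_to_source(100.0, 50.0, 2.0, K, np.eye(3), np.zeros(3))
```

    Because the reprojection error is differentiable in both `depth` and `(R, t)`, minimizing it trains the depth and pose networks jointly without any ground-truth labels, which is exactly why view synthesis can serve as the supervisory signal.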

    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of multiple camera calibration in the presence of a homogeneous scene, and without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs featuring a long focal length analysis camera as well as a short focal length registration camera. Thus, we are able to propose an accurate solution which does not require intrinsic variation models as in the case of zooming cameras. Moreover, the availability of the two views simultaneously in each rig allows for pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
    Comment: 13 pages, 6 figures, submitted to Machine Vision and Application
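    The rationale for pairing a long and a short focal length camera in each rig is the standard pinhole trade-off: a long focal length narrows the field of view but raises angular resolution, while a short one keeps distant salient features in frame for registration. A sketch of that relation, with illustrative sensor and focal values not taken from the paper:

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal field of view of a pinhole camera:
    FOV = 2 * atan(sensor_width / (2 * focal_length)).
    Longer focal lengths give a narrower, higher-magnification view."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# Illustrative full-frame (36 mm wide) sensor:
wide = horizontal_fov_deg(18.0, 36.0)    # short-focal registration camera, 90 degrees
narrow = horizontal_fov_deg(100.0, 36.0) # long-focal analysis camera, much narrower
```

    The registration camera's wide view supplies enough scene overlap to re-estimate rig poses, while the analysis camera concentrates its pixels on the region of interest.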

    From Calibration to Large-Scale Structure from Motion with Light Fields

    Classic pinhole cameras project the multi-dimensional information of the light flowing through a scene onto a single 2D snapshot. This projection limits the information that can be reconstructed from the 2D acquisition. Plenoptic (or light field) cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the "light field". These cameras provide both spatial and angular information on the light flowing through a scene; multiple views are captured in a single photographic exposure, facilitating various applications. This thesis is concerned with the modelling of light field (or plenoptic) cameras and the development of structure from motion pipelines using such cameras. Specifically, we develop a geometric model for a multi-focus plenoptic camera, followed by a complete pipeline for the calibration of the suggested model. Given a calibrated light field camera, we then remap the captured light field to a grid of pinhole images. We use these images to obtain metric 3D reconstruction through a novel framework for structure from motion with light fields. Finally, we suggest a linear and efficient approach for absolute pose estimation for light fields.
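    The remapping step described here, from a calibrated 4D light field to a grid of pinhole images, can be pictured with the common two-plane parameterization L[s, t, u, v]: fixing one angular coordinate (s, t) yields one "sub-aperture" perspective view. A toy sketch under that assumed indexing convention (the thesis's actual multi-focus remapping is more involved):

```python
import numpy as np

def subaperture_views(lf):
    """Remap a 4D light field L[s, t, u, v] (angular indices s, t; spatial
    indices u, v) into a grid of pinhole 'sub-aperture' images: each fixed
    angular coordinate (s, t) yields one perspective view of the scene."""
    S, T, _, _ = lf.shape
    return {(s, t): lf[s, t] for s in range(S) for t in range(T)}

# A toy 2x2 angular grid of 3x4-pixel views:
lf = np.arange(2 * 2 * 3 * 4).reshape(2, 2, 3, 4)
views = subaperture_views(lf)
```

    Once the light field is expressed as such a grid of views with known relative geometry, standard multi-view machinery (feature matching, triangulation, bundle adjustment) applies, which is what makes a structure-from-motion pipeline with light fields tractable.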

    Computer vision for advanced driver assistance systems
