
    The WISDOM Radar: Unveiling the Subsurface Beneath the ExoMars Rover and Identifying the Best Locations for Drilling

    The search for evidence of past or present life on Mars is the principal objective of the 2020 ESA-Roscosmos ExoMars Rover mission. If such evidence is to be found anywhere, it will most likely be in the subsurface, where organic molecules are shielded from the destructive effects of ionizing radiation and atmospheric oxidants. For this reason, the ExoMars Rover mission has been optimized to investigate the subsurface and to identify, understand, and sample those locations where conditions for the preservation of evidence of past life are most likely to be found. The Water Ice Subsurface Deposit Observation on Mars (WISDOM) ground-penetrating radar has been designed to provide information about the nature of the shallow subsurface over depths ranging from 3 to 10 m (with a vertical resolution of up to 3 cm), depending on the dielectric properties of the regolith. This depth range is critical to understanding the geologic evolution, stratigraphy, and the distribution and state of subsurface H2O, which provide important clues in the search for life and in the identification of optimal drilling sites for investigation and sampling by the Rover's 2-m drill. WISDOM will help ensure the safety and success of drilling operations by identifying potential hazards that might interfere with the retrieval of subsurface samples.
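    The sounding depth quoted above depends on the dielectric properties of the regolith because the radar wave slows down in the medium. A minimal sketch of the standard two-way travel-time conversion used in ground-penetrating radar (the permittivity and echo-time values below are purely illustrative assumptions, not WISDOM calibration data):

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def echo_depth_m(two_way_time_ns: float, eps_r: float) -> float:
    """Depth of a reflector from its two-way travel time, for a
    medium of relative permittivity eps_r (standard GPR relation)."""
    v = C / math.sqrt(eps_r)                    # wave speed in the medium
    return v * (two_way_time_ns * 1e-9) / 2.0   # halve: down and back

# Illustrative only: eps_r = 4 (a plausible dry-regolith value) and a
# 100 ns echo place the reflector at roughly 7.5 m.
print(round(echo_depth_m(100.0, 4.0), 2))
```

    A higher permittivity slows the wave, so the same echo time maps to a shallower reflector; this is why the abstract gives a depth range rather than a single figure.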

    Periodic sub-micrometric structures using a 3D laser interference pattern

    A method to obtain three-dimensional sub-micrometric periodic structures is presented. The experimental setup consists of a pulsed UV laser source (λ = 355 nm) feeding an interferometer that generates four beams converging inside a chamber. Depending on the directions, relative intensities, and polarizations of these four beams, a 3D interference pattern can be obtained inside their overlapping volume; the characteristics of the four laser beams have been optimized to maximize the intensity contrast. To visualize the interference pattern, its contrast, and its stability at each laser pulse, a video camera coupled to an oil-immersion microscope objective has been installed above the interferometer. By suppressing the central beam, it is also possible to generate a two-dimensional interference pattern defining a hexagonal structure in the (1 1 1) plane with a period of 377 nm. This optical setup has been used to obtain 3D sub-micrometric periodic structures in negative photoresists. The experiments consist of single- or multi-pulse irradiation of the photoresist followed by a development procedure, which leads to a sub-micrometric face-centred cubic structure cut in a (1 1 1) plane with a cell parameter of 650 nm. The optimization of the experimental conditions is presented for two kinds of photoresists; the role of the substrate, through its reflectivity at the laser wavelength and its influence on the interference pattern, is also discussed.
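    The period of any such pattern is set by the wavelength and the crossing angle of the beams. As a hedged illustration of that scaling only, here is the textbook two-beam case (not the four-beam geometry of the paper; the angles below are assumed values chosen for the example):

```python
import math

def two_beam_period_nm(wavelength_nm: float, half_angle_deg: float) -> float:
    """Fringe period of two plane waves crossing at 2 * half_angle:
    the textbook relation Lambda = lambda / (2 sin theta)."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

# At lambda = 355 nm the period shrinks toward lambda/2 as the crossing
# angle opens up; counter-propagating beams (theta = 90 deg) give 177.5 nm.
print(round(two_beam_period_nm(355.0, 90.0), 1))
print(round(two_beam_period_nm(355.0, 30.0), 1))
```

    Multi-beam geometries such as the hexagonal pattern above follow different but analogous period formulas; the point of the sketch is only that sub-wavelength periods arise naturally from the crossing angle.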

    Vision-based control for space applications

    This paper presents the work performed in the context of the VIMANCO ESA project, whose objective is to improve the autonomy, safety, and robustness of robotic systems using vision. The approach we propose is based on an up-to-date recognition and 3D tracking method that makes it possible to determine whether a known object is visible in a single image, to compute its pose, and to track it in real time along the image sequence acquired by the camera, even in the presence of varying lighting conditions, partial occlusions, and aspect changes. The robustness of the proposed method has been achieved by combining an efficient low-level image processing step, statistical techniques to account for potential outliers, and a formulation of the registration step as a closed-loop minimization scheme. This approach is valid when a single camera observes the object, but it can also be applied to a multi-camera system. Finally, this approach provides all the data necessary for the manipulation of non-cooperative objects using the general formalism of visual servoing, which is a closed-loop control scheme on visual data expressed either in the image, in 3D, or in both spaces simultaneously. This formalism can be applied whatever the configuration of the vision sensor (one or several cameras) with respect to the robot arms (eye-in-hand or eye-to-hand systems). The global approach has been integrated and validated on the Eurobot testbed located at ESTEC.

    VIMANCO: Vision manipulation of non-cooperative objects

    This paper presents the work performed in the context of the ongoing VIMANCO project, whose objective is to improve the autonomy, safety, and robustness of robotic systems using vision. Vision is certainly the most adequate exteroceptive sensor for dealing with complex and varying environments and for manipulation tasks involving non-cooperative objects. The approach we propose is based on an up-to-date recognition and 3D tracking method that offers many advantages over other approaches. First of all, it makes it possible to determine whether a known object is visible in a single image. It also allows the object's pose to be computed and the object to be tracked in real time along the image sequence acquired by the camera, even in the presence of varying lighting conditions, partial occlusions, and aspect changes. The robustness of the proposed method has been achieved by combining an efficient low-level image processing step, statistical techniques to account for potential outliers, and a formulation of the registration step as a closed-loop minimization scheme. This approach is valid when a single camera observes the object, but it can also be applied to a multi-camera system. Finally, this approach provides all the data necessary for the manipulation of non-cooperative objects using the general formalism of visual servoing, which is a closed-loop control scheme on visual data expressed either in the image, in 3D, or in both spaces simultaneously. This formalism can be applied whatever the configuration of the vision sensor (one or several cameras) with respect to the robot arms (eye-in-hand or eye-to-hand systems).
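    The visual-servoing formalism mentioned above drives the error between current and desired visual features to zero with a closed-loop law; its classic form is v = -λ L⁺ e, where L is the interaction matrix and e the feature error. A minimal sketch for an invertible 2×2 case (the matrix and error values below are illustrative assumptions, not data from VIMANCO):

```python
def servo_velocity(L, e, gain=0.5):
    """Classic image-based visual servoing law v = -gain * L^-1 * e,
    for a square, invertible 2x2 interaction matrix L given as nested
    lists and a 2-vector feature error e."""
    a, b = L[0]
    c, d = L[1]
    det = a * d - b * c                     # must be nonzero (invertible L)
    inv = [[d / det, -b / det],
           [-c / det, a / det]]
    # v = -gain * inv(L) @ e, written out for the 2x2 case
    return [-gain * (inv[i][0] * e[0] + inv[i][1] * e[1]) for i in range(2)]

# With an identity interaction matrix the commanded velocity is just
# the negated, scaled feature error, so the error decays exponentially.
print(servo_velocity([[1.0, 0.0], [0.0, 1.0]], [2.0, -4.0]))
```

    In practice L is rectangular and a Moore-Penrose pseudo-inverse replaces the inverse, but the closed-loop structure — command a velocity proportional to the negative feature error — is the same in the image-space, 3D, and hybrid variants named in the abstract.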