
    In-flight calibration of STEREO-B/WAVES antenna system

    The STEREO/WAVES (SWAVES) experiment on board the two STEREO spacecraft (Solar Terrestrial Relations Observatory), launched on 25 October 2006, is dedicated to the measurement of the radio spectrum at frequencies between a few kilohertz and 16 MHz. The SWAVES antenna system consists of 6 m long orthogonal monopoles designed to measure the electric component of radio waves. With this configuration, direction finding of radio sources and polarimetry (analysis of the polarization state) of incident radio waves are possible. For the evaluation of the SWAVES data, the receiving properties of the antennas, distorted by radiation coupling with the spacecraft body and other onboard devices, have to be known accurately. In the present context, these properties are described by the antenna effective length vectors. We present the results of an in-flight calibration of the SWAVES antennas using observations of the nonthermal terrestrial auroral kilometric radiation (AKR) during STEREO roll maneuvers in an early stage of the mission. A least squares method combined with a genetic algorithm was applied to find the effective length vectors of the STEREO Behind (STEREO-B)/WAVES antennas in the quasi-static frequency range ($L_{antenna} \ll \lambda_{wave}$) that best fit the model and observed AKR intensity profiles. The obtained results confirm the former SWAVES antenna analysis by rheometry and numerical simulations. A final set of antenna parameters is recommended as a basis for evaluations of the SWAVES data.
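    The fitting step lends itself to a brief illustration. Below is a minimal Python sketch (not the SWAVES calibration code) of the general technique the abstract names: a least-squares objective minimized by a simple genetic algorithm to recover an antenna effective length vector from intensity profiles recorded during a roll. The forward model, geometry, parameter bounds, and all numbers are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def model_intensity(h_eff, roll_angles):
            # Hypothetical forward model: in the quasi-static regime the
            # received power of a short antenna goes as |h_eff . E|^2. Here
            # the AKR electric field is assumed to rotate in the spacecraft
            # frame as the spacecraft rolls (purely illustrative geometry).
            e_field = np.stack([np.zeros_like(roll_angles),
                                np.cos(roll_angles),
                                np.sin(roll_angles)], axis=1)
            return (e_field @ h_eff) ** 2

        def neg_sq_error(h_eff, roll_angles, observed):
            # Least-squares objective (higher is better for the GA).
            return -np.sum((model_intensity(h_eff, roll_angles) - observed) ** 2)

        # Synthetic "observations" generated from a known vector.
        roll = np.linspace(0.0, 2.0 * np.pi, 200)
        h_true = np.array([0.3, 4.2, 1.1])  # metres, hypothetical
        obs = model_intensity(h_true, roll) + rng.normal(0.0, 0.05, roll.size)

        # Plain genetic algorithm: tournament selection, blend crossover,
        # Gaussian mutation, best candidate tracked across generations.
        pop = rng.uniform(-6.0, 6.0, size=(60, 3))
        best, best_f = pop[0], -np.inf
        for _ in range(300):
            f = np.array([neg_sq_error(ind, roll, obs) for ind in pop])
            if f.max() > best_f:
                best, best_f = pop[f.argmax()].copy(), f.max()
            duels = rng.integers(0, len(pop), size=(len(pop), 2))
            parents = pop[np.where(f[duels[:, 0]] > f[duels[:, 1]],
                                   duels[:, 0], duels[:, 1])]
            alpha = rng.uniform(size=(len(pop), 1))
            pop = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
            pop += rng.normal(0.0, 0.1, pop.shape)

        # The sign is unobservable because received power is quadratic in
        # h_eff, so the result converges to +h_true or -h_true.
        print("recovered effective length vector:", best)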

    On Real-Time Synthetic Primate Vision

    The primate vision system exhibits numerous capabilities. Some important basic visual competencies include: 1) a consistent representation of visual space across eye movements; 2) egocentric spatial perception; 3) coordinated stereo fixation upon and pursuit of dynamic objects; and 4) attentional gaze deployment. We present a synthetic vision system that incorporates these competencies. We hypothesize that similarities between the underlying synthetic system model and that of the primate vision system elicit correspondingly similar gaze behaviors. Psychophysical trials were conducted to record human gaze behavior when free-viewing a reproducible, dynamic, 3D scene. Identical trials were conducted with the synthetic system. A statistical comparison of synthetic and human gaze behavior has shown that the two are remarkably similar.
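    The abstract does not spell out the statistic used, but one simple, hypothetical way to quantify how similar two sets of gaze recordings are is to histogram each observer's fixation positions over the scene and compare the resulting distributions. The Python sketch below (all data and parameters invented for illustration) does this with the Jensen-Shannon distance.

        import numpy as np
        from scipy.spatial.distance import jensenshannon

        def fixation_density(fixations, bins=32):
            # 2D histogram of fixation (x, y) positions in normalized screen
            # coordinates, flattened into a probability distribution.
            hist, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                                        bins=bins, range=[[0, 1], [0, 1]])
            return (hist / hist.sum()).ravel()

        # Hypothetical fixation data for a human and a synthetic observer.
        rng = np.random.default_rng(1)
        human = rng.normal([0.5, 0.4], 0.10, size=(500, 2)).clip(0, 1)
        synthetic = rng.normal([0.52, 0.42], 0.12, size=(500, 2)).clip(0, 1)

        # Jensen-Shannon distance: 0 for identical gaze maps, 1 for disjoint.
        d = jensenshannon(fixation_density(human), fixation_density(synthetic))
        print(f"JS distance between gaze maps: {d:.3f}")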

    Tele-media-art: web-based inclusive teaching of body expression

    International conference held in Olhão, Algarve, 26-28 April 2018. The Tele-Media-Art project aims to improve the online distance learning and artistic teaching process in two test scenarios, the doctorate in digital art-media and the lifelong learning course "The Experience of Diversity", by exploiting multimodal telepresence facilities encompassing diverse visual, auditory, and sensory channels, as well as rich forms of gestural/body interaction. To this end, a telepresence system was developed and installed at Palácio Ceia in Lisbon, Portugal, headquarters of the Portuguese Open University, from which methodologies of artistic teaching in a mixed regime (face-to-face and online distance) that are inclusive of blind and partially sighted students can be delivered. The system has already been tested with a group of subjects, including blind people. Although positive results were achieved, more development and further tests will be carried out in the future. This project was financed by the Calouste Gulbenkian Foundation under grant number 142793.

    Towards Visual Ego-motion Learning in Robots

    Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to the type of camera optics or the underlying motion manifold observed. We envision robots being able to learn and perform these tasks, in a minimally supervised setting, as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics. We propose a visual ego-motion learning architecture that maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion induced scene flow. Additionally, our proposed model is especially amenable to bootstrapped ego-motion learning in robots, where the supervision in ego-motion estimation for a particular camera sensor can be obtained from standard navigation-based sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of our proposed approach in enabling the concept of self-supervised learning for visual ego-motion estimation in autonomous robots.
    Comment: Conference paper; submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures, 2 tables
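    To make the core architecture concrete, here is a minimal PyTorch sketch (not the authors' released code) of a Mixture Density Network that maps a flattened optical-flow feature vector to a Gaussian mixture density over the 6-DoF ego-motion; the layer sizes, mixture count, and training data are illustrative assumptions.

        import torch
        import torch.nn as nn

        class EgoMotionMDN(nn.Module):
            # Maps optical-flow features to a diagonal Gaussian mixture
            # over 6-DoF ego-motion (translation + rotation).
            def __init__(self, flow_dim, K=5, pose_dim=6, hidden=128):
                super().__init__()
                self.K, self.pose_dim = K, pose_dim
                self.trunk = nn.Sequential(
                    nn.Linear(flow_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU())
                self.pi = nn.Linear(hidden, K)                    # mixture weights
                self.mu = nn.Linear(hidden, K * pose_dim)         # component means
                self.log_sigma = nn.Linear(hidden, K * pose_dim)  # log std-devs

            def forward(self, flow):
                h = self.trunk(flow)
                log_pi = torch.log_softmax(self.pi(h), dim=-1)
                mu = self.mu(h).view(-1, self.K, self.pose_dim)
                sigma = self.log_sigma(h).view(-1, self.K, self.pose_dim).exp()
                return log_pi, mu, sigma

        def mdn_nll(log_pi, mu, sigma, pose):
            # Negative log-likelihood of the true ego-motion under the mixture:
            # sum log-probs over pose dimensions, log-sum-exp over components.
            comp = torch.distributions.Normal(mu, sigma)
            log_prob = comp.log_prob(pose.unsqueeze(1)).sum(-1)  # (B, K)
            return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

        # Toy training step on random stand-ins for flow/pose pairs; in the
        # bootstrapped setting the pose labels would come from GPS/INS or
        # wheel-odometry fusion rather than ground truth.
        model = EgoMotionMDN(flow_dim=64)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        flow = torch.randn(32, 64)
        pose = torch.randn(32, 6)
        loss = mdn_nll(*model(flow), pose)
        loss.backward()
        opt.step()
        print(f"NLL: {loss.item():.3f}")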