    Towards vision-based control of cable-driven parallel robots

    This paper deals with the vision-based control of cable-driven parallel robots. First, a 3D pose visual servoing scheme is proposed, in which the end-effector pose is indirectly measured and used for regulation. This method is illustrated and validated on a cable-driven parallel robot prototype. Second, to take the platform dynamics into account, a vision-based computed torque control using a Cartesian pose and velocity estimator is developed and validated in simulation.
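
    As a rough illustration of the regulation step in the first contribution, the sketch below implements the textbook pose-based visual servoing rule v = -λe in Python; the gain value and the axis-angle error representation are assumptions for illustration, not details taken from the paper.

        import numpy as np

        def pbvs_velocity(t, t_star, rot_err_axis_angle, lam=0.5):
            """Classical pose-based visual servoing: drive the measured pose
            toward the desired one with exponential decay of rate `lam`
            (a hypothetical gain)."""
            e_t = np.asarray(t) - np.asarray(t_star)    # translation error (m)
            e_r = np.asarray(rot_err_axis_angle)        # rotation error (rad, axis-angle)
            return np.hstack([-lam * e_t, -lam * e_r])  # commanded twist [v, w]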

    Self-timed rings as low-phase noise programmable oscillators

    Self-timed rings are promising for designing high-speed serial links and system clock generators: their architecture is well suited to digital frequency control, and their phase noise can easily be adapted by design. Unlike conventional inverter ring oscillators, the oscillation frequency of a self-timed ring depends not only on the number of stages but also on its initial state, a feature that is essential for making it programmable. Moreover, such ring oscillators allow the phase noise to be set by design: thanks to this programmability, a 3 dB phase noise reduction is obtained, at the cost of higher power consumption, when the number of stages is doubled while keeping the same oscillation frequency. In this paper, we describe a complete method for designing self-timed rings so that they are programmable and generate phase noise in accordance with the specifications. Test chips have been designed and fabricated in AMS 0.35 μm and STMicroelectronics 65 nm CMOS technologies to verify our models and theoretical claims.
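
    The 3 dB-per-doubling claim lends itself to a one-line calculation. A minimal Python sketch, assuming only the scaling law stated above (the example figures are made up):

        import math

        def phase_noise_dbc(l0_dbc, n0_stages, n_stages):
            """Phase noise versus stage count at constant oscillation frequency,
            assuming 3 dB of reduction per doubling of the number of stages."""
            return l0_dbc - 3.0 * math.log2(n_stages / n0_stages)

        # A hypothetical ring at -100 dBc/Hz with 16 stages:
        print(phase_noise_dbc(-100.0, 16, 32))   # -> -103.0 dBc/Hz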

    Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction

    This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behavior.
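
    A minimal PyTorch sketch of the kind of recurrent Q-network the abstract describes; the feature dimension, hidden size, and action count are placeholders, and the paper's actual architecture may differ.

        import torch
        import torch.nn as nn

        class RecurrentQNet(nn.Module):
            """LSTM over audio-visual feature vectors followed by a linear
            head producing one Q-value per gaze action."""
            def __init__(self, feat_dim=32, hidden_dim=64, n_actions=5):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
                self.q_head = nn.Linear(hidden_dim, n_actions)

            def forward(self, obs_seq, state=None):
                # obs_seq: (batch, time, feat_dim) observation sequence
                out, state = self.lstm(obs_seq, state)
                return self.q_head(out[:, -1]), state  # Q-values at last step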

    On Leader Following and Classification

    Service and assistance robots that move in human environments must address the difficult issue of navigating among dynamic obstacles. As shown in previous work, in such situations robots can take advantage of the motion of persons by following them, managing to move together with humans through difficult situations. The problem to be solved is then how to choose a human leader to follow. This work proposes an innovative method for leader selection based on human experience. A learning framework is developed in which data is acquired, labeled, and used to train an AdaBoost classification algorithm to determine whether a candidate is a good or a bad leader, and also to study the contribution of each feature to the classification process.
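
    A minimal sketch of such a learning framework with scikit-learn's AdaBoostClassifier; the features and data below are invented for illustration, not taken from the paper.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        # One row of hand-crafted features per candidate leader
        # (hypothetical examples: relative speed, heading difference,
        # distance); y = 1 labels a good leader.
        rng = np.random.default_rng(0)
        X = rng.random((200, 3))
        y = (X[:, 0] > 0.5).astype(int)

        clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
        print(clf.predict(X[:5]))          # good/bad leader decisions
        print(clf.feature_importances_)    # per-feature contribution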

    Probabilistic Integration of Intensity and Depth Information for Part-Based Vehicle Detection

    In this paper, an object class recognition method is presented. The method uses local image features and follows the part-based detection approach, fusing intensity and depth information in a probabilistic framework. The depth of each local feature is used to weight the probability of finding the object at a given distance. To train the system for an object class, only a database of images annotated with bounding boxes is required, thus automating the extension of the system to new object classes. We apply our method to the problem of detecting vehicles from a moving platform. Experiments on a data set of stereo images in an urban environment show a significant improvement in performance when both information modalities are used.
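
    A toy version of the fusion idea, assuming independent per-feature votes (almost certainly simpler than the paper's actual model): each local feature's appearance probability is weighted by a prior on finding the object at that feature's measured depth.

        import numpy as np

        def fused_object_score(p_appearance, feature_depths, depth_prior):
            """Combine part-based appearance probabilities with a depth prior,
            multiplying the (assumed independent) votes in log space."""
            w = np.array([depth_prior(d) for d in feature_depths])
            return float(np.sum(np.log(np.asarray(p_appearance) * w + 1e-12)))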

    Design and Development of the Biped Prototype ROBIAN

    Proceedings of the 2002 IEEE International Conference on Robotics & Automation, Washington, DC, May 2002.

    Proxemics models for human-aware navigation in robotics: Grounding interaction and personal space models in experimental data from psychology

    In order to navigate in a social environment, a robot must be aware of social spaces, which include proximity- and interaction-based constraints. Previous models of interaction and personal spaces have been inspired by studies in social psychology, but they have not been systematically grounded in and validated against experimental data. We propose to implement personal and interaction space models in order to replicate a classical psychology experiment, so that our robotic simulations can be compared with experimental data from humans. Through this comparison, we first show the validity of our models, then examine the necessity of the interaction and personal spaces and discuss their geometric shape. Our experiments suggest that human-like robotic behavior can be obtained using only correctly calibrated personal spaces, i.e., without an explicit representation of interaction spaces and therefore without the need to detect interactions between humans in the environment.
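
    A common way to encode a personal space is an asymmetric 2D Gaussian cost around the person, larger in front than behind; the Python sketch below uses placeholder sigmas, not the calibrated values from the paper.

        import numpy as np

        def personal_space_cost(dx, dy, theta,
                                sig_front=0.45, sig_back=0.25, sig_side=0.3):
            """Asymmetric Gaussian cost at offset (dx, dy) from a person
            facing direction `theta` (sigmas are hypothetical)."""
            cx = np.cos(theta) * dx + np.sin(theta) * dy   # along gaze
            cy = -np.sin(theta) * dx + np.cos(theta) * dy  # lateral
            sig_x = sig_front if cx >= 0 else sig_back
            return np.exp(-0.5 * ((cx / sig_x) ** 2 + (cy / sig_side) ** 2))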

    Sensorimotor learning in a Bayesian computational model of speech communication

    Although sensorimotor exploration is a basic process in child development, a clear view of the underlying computational processes remains challenging. We propose to compare eight algorithms for sensorimotor exploration, built from three components: "accommodation", which strikes a compromise between goal babbling and social guidance by a master; "local extrapolation", which simulates local exploration of the sensorimotor space to achieve motor generalizations; and "idiosyncratic babbling", which favors already-explored motor commands when they are efficient. We show that a mix of these three components offers a good compromise, enabling efficient learning while reducing exploration as much as possible.
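
    A toy Python sketch of how the three components could interact when picking the next motor command; every name and constant here is illustrative, not from the paper.

        import random

        def next_motor_command(goal, inverse_model, good_commands, eps=0.2):
            """Accommodation proposes a command for the goal; idiosyncratic
            babbling sometimes reuses an already-efficient command instead;
            local extrapolation perturbs the result to explore nearby space."""
            m = inverse_model(goal)                    # accommodation
            if good_commands and random.random() > eps:
                m = random.choice(good_commands)       # idiosyncratic babbling
            return [mi + random.gauss(0.0, 0.05) for mi in m]  # local extrapolation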

    Co-Localization of Audio Sources in Images Using Binaural Features and Locally-Linear Regression

    This paper addresses the problem of localizing audio sources using binaural measurements. We propose a supervised formulation that simultaneously localizes multiple sources at different locations. The approach is intrinsically efficient because, contrary to prior work, it relies neither on source separation nor on monaural segregation. The method starts with a training stage that establishes a locally-linear Gaussian regression model between the directional coordinates of all the sources and the auditory features extracted from binaural measurements. While fixed-length wide-spectrum sounds (white noise) are used for training to reliably estimate the model parameters, we show that testing (localization) can be extended to variable-length sparse-spectrum sounds such as speech, thus enabling a wide range of realistic applications. Indeed, we demonstrate that the method can be used for audio-visual fusion, namely to map speech signals onto images and hence spatially align the audio and visual modalities, making it possible to discriminate between speaking and non-speaking faces. We release a novel corpus of real-room recordings that allows quantitative evaluation of the co-localization method in the presence of one or two sound sources. Experiments demonstrate increased accuracy and speed relative to several state-of-the-art methods.
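
    A crude stand-in for the locally-linear regression stage, assuming hard cluster assignments where the paper uses soft Gaussian responsibilities: cluster the binaural feature space, then fit one linear map from features to source directions per cluster.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        def fit_locally_linear(features, directions, k=8):
            """Fit one local linear feature-to-direction map per cluster."""
            km = KMeans(n_clusters=k, n_init=10).fit(features)
            maps = [LinearRegression().fit(features[km.labels_ == i],
                                           directions[km.labels_ == i])
                    for i in range(k)]
            return km, maps

        def predict_direction(km, maps, feature):
            """Predict with the map of the nearest cluster (hard assignment)."""
            i = km.predict(feature.reshape(1, -1))[0]
            return maps[i].predict(feature.reshape(1, -1))[0]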