17 research outputs found

    Visual navigation in an open environment: recognition of panoramic views

    We present a navigation system for an autonomous robot in an open environment. The robot reaches a goal by associating movements with the visual information coming from the environment. It uses simple, online learning and builds no complex map of its environment. The mechanism proves efficient and robust, and moreover appears consistent with observations of animal behaviour. Finally, our implementation in a real environment withstands significant perturbations.

    Inertio-elastic focusing of bioparticles in microchannels at high throughput

    Controlled manipulation of particles from very large volumes of fluid at high throughput is critical for many biomedical, environmental and industrial applications. One promising approach is to use microfluidic technologies that rely on fluid inertia or elasticity to drive lateral migration of particles to stable equilibrium positions in a microchannel. Here, we report on a hydrodynamic approach that enables deterministic focusing of beads, mammalian cells and anisotropic hydrogel particles in a microchannel at extremely high flow rates. We show that on addition of micromolar concentrations of hyaluronic acid, the resulting fluid viscoelasticity can be used to control the focal position of particles at Reynolds numbers up to Re≈10,000, with corresponding flow rates and particle velocities up to 50 ml min⁻¹ and 130 m s⁻¹. This study explores a previously unattained regime of inertio-elastic fluid flow and demonstrates bioparticle focusing at flow rates that are the highest yet achieved.
    Funding: National Institute for Biomedical Imaging and Bioengineering (U.S.) (P41 BioMicroElectroMechanical Systems Resource Center); National Institute for Biomedical Imaging and Bioengineering (U.S.) (P41 EB002503); National Science Foundation (U.S.) Graduate Research Fellowship; United States Army Research Office (Institute for Collaborative Biotechnologies Grant W911NF-09-0001).
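    As a back-of-the-envelope check on the regime quoted above, the sketch below estimates the channel Reynolds number from a volumetric flow rate. The 80 µm × 80 µm cross-section and water-like fluid properties are illustrative assumptions, not values taken from the abstract, though they happen to reproduce both the ~130 m s⁻¹ mean velocity and Re ≈ 10,000 reported at 50 ml min⁻¹.

```python
# Rough estimate of the channel Reynolds number Re = rho * U * D_h / mu for a
# rectangular microchannel. Channel dimensions and fluid properties below are
# illustrative assumptions, not values taken from the paper.

def hydraulic_diameter(width_m, height_m):
    """Hydraulic diameter D_h = 4*A/P of a rectangular cross-section."""
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

def reynolds_number(flow_rate_ml_min, width_m, height_m,
                    density=1000.0, viscosity=1e-3):
    """Re based on the mean velocity U = Q/A (water-like fluid by default)."""
    q = flow_rate_ml_min * 1e-6 / 60.0     # ml/min -> m^3/s
    area = width_m * height_m
    mean_velocity = q / area               # m/s
    d_h = hydraulic_diameter(width_m, height_m)
    return density * mean_velocity * d_h / viscosity

# Assumed 80 µm x 80 µm channel at the reported 50 ml/min flow rate.
print(f"Re ≈ {reynolds_number(50.0, 80e-6, 80e-6):.0f}")
```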

    Understanding jokes: a neural approach to content-based information retrieval

    This paper addresses the problem of accessing the content of documents. Drawing on similarities between vision and language, we devise a neural adaptive architecture that can detect and use context information for the ‘understanding’ of content. The functioning of this architecture is illustrated by the problem of understanding jokes.

    Living in a partially structured environment: How to bypass the limitations of classical reinforcement techniques

    In this paper, we propose an unsupervised neural network allowing a robot to learn sensory-motor associations with a delayed reward. The robot's task is to learn the "meaning" of pictograms in order to "survive" in a maze. First, we introduce a new neural conditioning rule (PCR: Probabilistic Conditioning Rule) that allows hypotheses (associations between visual categories and movements) to be tested over a given time span. Second, we describe a real maze experiment with our mobile robot and propose a neural architecture that overcomes the difficulty of building visual categories dynamically while associating them with movements. Third, we use our algorithm in simulation in order to test it exhaustively, and we give results for different kinds of mazes. Finally, we conclude by showing the limitations of approaches that do not take into account the intrinsic complexity of reasoning based on image recognition. Keywords: Neural Networks, Unsupervised Learning, Topological Maps.
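    The abstract does not give the PCR equations, so the following is only a hedged sketch of the idea as described: each (visual category, movement) association carries a selection probability, one association is tested as a hypothesis during a trial, and the delayed reward then strengthens or weakens that association. The class name, update rule and pictogram labels are illustrative assumptions, not the paper's formulation.

```python
import random

# Illustrative sketch of a probabilistic conditioning scheme inspired by the
# abstract above (NOT the paper's actual PCR equations): each visual category
# keeps a probability distribution over candidate movements, one association
# is tested per trial, and the delayed reward adjusts only that association.

class ProbabilisticConditioner:
    def __init__(self, categories, movements, learning_rate=0.2):
        self.movements = list(movements)
        self.lr = learning_rate
        # Uniform initial preferences for every (category, movement) pair.
        self.pref = {c: {m: 1.0 / len(self.movements) for m in self.movements}
                     for c in categories}

    def choose(self, category):
        """Sample a movement hypothesis for this category (held for one trial)."""
        probs = self.pref[category]
        r, acc = random.random(), 0.0
        for m, p in probs.items():
            acc += p
            if r <= acc:
                return m
        return self.movements[-1]

    def reinforce(self, category, movement, reward):
        """Delayed reward in [-1, 1] strengthens or weakens the tested pair."""
        probs = self.pref[category]
        probs[movement] = max(1e-3, probs[movement] + self.lr * reward)
        total = sum(probs.values())
        for m in probs:                      # renormalise to a distribution
            probs[m] /= total

# Usage: one maze trial under an assumed pictogram category 'arrow_left'.
agent = ProbabilisticConditioner(["arrow_left", "arrow_right"], ["left", "right"])
move = agent.choose("arrow_left")            # hypothesis tested this trial
agent.reinforce("arrow_left", move, reward=1.0 if move == "left" else -0.5)
```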

    Visual navigation in an open environment without a map

    In this paper we describe how a mobile robot controlled only by visual information can reach a particular goal location in an open environment. Our model needs neither a precise map nor learning of every possible position in the environment. The system is a neural architecture, inspired by neurobiological studies, that uses the recognition of visual patterns called landmarks. The robot merges this visual information with the landmarks' azimuths to build a plastic representation of its location. This representation is used to learn the best movement to reach the goal. Simple and fast online learning of a few places located near the goal allows the robot to reach the goal from anywhere in its neighborhood. The system uses only an egocentric representation of the robot's environment and exhibits very high generalization capabilities. We describe an efficient implementation tested on our robot in two real indoor environments. We show the limitations of the model and its possible extensions to create ..
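    The model itself is only described qualitatively in this abstract, so the snippet below is a hedged sketch of the general idea rather than the paper's architecture: a learned place is stored as a set of (landmark, azimuth) pairs together with the goal-directed movement learned there; at run time the current panorama is compared with every stored place and the movement of the best-matching one is executed. The similarity function, the 30° angular tuning width and the landmark names are all illustrative assumptions.

```python
import math

# Hedged sketch of landmark-based place recognition (not the paper's exact
# architecture): a learned place is a {landmark_id: azimuth_deg} signature
# associated with the movement that led toward the goal from that place.
# At run time the robot picks the movement of the most similar stored place.

def place_similarity(stored, current):
    """Similarity based on azimuth agreement of landmarks seen in both views."""
    shared = set(stored) & set(current)
    if not shared:
        return 0.0
    score = 0.0
    for lm in shared:
        diff = abs(stored[lm] - current[lm]) % 360.0
        diff = min(diff, 360.0 - diff)       # wrap-around angular error
        score += math.exp(-diff / 30.0)      # assumed 30-degree tuning width
    return score / len(stored)

def choose_movement(learned_places, current_view):
    """learned_places: list of (signature, movement) pairs learned near the goal."""
    best = max(learned_places, key=lambda p: place_similarity(p[0], current_view))
    return best[1]

# Usage with two hypothetical learned places and the current panorama.
places = [
    ({"door": 10.0, "plant": 95.0, "window": 200.0}, "turn_left"),
    ({"door": 80.0, "plant": 170.0, "window": 290.0}, "go_straight"),
]
print(choose_movement(places, {"door": 15.0, "plant": 100.0, "window": 205.0}))
```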