
    Analyse a posteriori d'une démarche d'observatoire dans un contexte conflictuel : cas de l'irrigation en Charente

    Irrigation in Poitou-Charentes exemplifies the tensions that exist between agriculture and other societal sectors. To overcome these conflicts, a Community Information System (CIS) serving both data management and community development has been set up in Charente by a group of policymakers, researchers and agricultural advisers. This paper describes our analysis of the differences among participants in the development of this CIS in terms of their points of view and expectations. Drawing on a set of interviews conducted with these stakeholders in 2006, we propose a framework for use during the initial design phase of a CIS to make stakeholder expectations explicit and to promote a shared understanding prior to setting up a CIS.

    Two-handed gesture recognition and fusion with speech to command a robot

    Assistance is currently a pivotal research area in robotics, with huge societal potential. Since assistant robots interact directly with people, finding natural and easy-to-use user interfaces is of fundamental importance. This paper describes a flexible multimodal interface based on speech and gesture modalities for controlling our mobile robot, named Jido. The vision system uses a stereo head mounted on a pan-tilt unit and a bank of collaborative particle filters, devoted to the upper human-body extremities, to track and recognize pointing and symbolic gestures, both one- and two-handed. This framework constitutes our first contribution: it is shown to properly handle natural artifacts (self-occlusion, hands leaving the camera's field of view, hand deformation) when 3D gestures are performed with either hand or with both. A speech recognition and understanding system based on the Julius engine is also developed and embedded in order to process deictic and anaphoric utterances. The second contribution is a probabilistic, multi-hypothesis interpreter framework that fuses results from the speech and gesture components. This interpreter is shown to improve the classification rates of multimodal commands compared to using either modality alone. Finally, we report on successful live experiments in human-centered settings. Results are reported in the context of an interactive manipulation task, where users specify local motion commands to Jido and perform safe object exchanges.
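    The probabilistic fusion step described in the abstract can be illustrated with a minimal decision-level sketch: each modality emits a posterior over candidate commands, and the interpreter combines them before picking the most likely command. The function name, the naive-Bayes-style product rule, and the example commands below are illustrative assumptions, not the paper's actual interpreter.

```python
# Hypothetical sketch of decision-level fusion of speech and gesture
# hypotheses. Each modality supplies a dict mapping command labels to
# posterior probabilities; labels and values here are made up.

def fuse_modalities(speech_hyps, gesture_hyps):
    """Multiply per-modality posteriors (naive-Bayes-style), renormalize,
    and return commands ranked by fused probability."""
    commands = set(speech_hyps) | set(gesture_hyps)
    eps = 1e-6  # floor so a command missing from one modality is not zeroed out
    joint = {c: speech_hyps.get(c, eps) * gesture_hyps.get(c, eps)
             for c in commands}
    total = sum(joint.values())
    return sorted(((c, p / total) for c, p in joint.items()),
                  key=lambda cp: cp[1], reverse=True)

speech = {"give_object": 0.6, "move_left": 0.3, "stop": 0.1}
gesture = {"give_object": 0.7, "move_left": 0.2}
best_command, confidence = fuse_modalities(speech, gesture)[0]
```

    In this toy example the two modalities agree on "give_object", so its fused probability exceeds what either modality assigned alone, which mirrors the abstract's claim that fusion improves classification over a single modality.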