77 research outputs found

    From Charity to Politics: The Itinerary of Jeanne Kœhler-Lumière

    Get PDF
    Jeanne Kœhler-Lumière’s journey took her from philanthropy to collaboration with public authorities as she contributed to the elaboration of social policy in Lyon after World War I. As the daughter and sister of industrialists, she participated in the charities set up for the personnel of the family factory and then extended her work on behalf of children to the municipal level. However, the Lumière family’s strong identification with the secular republicans isolated Jeanne Kœhler-Lumière from traditional conservative and Catholic philanthropic circles. The war constituted a turning point during which she became involved in health services, working with the leading experts of Lyon’s medical community. Her experience and her respectability proved valuable in the 1920s, when she was called upon by the radical city council, which was looking for skilled individuals to establish new social policies. Jeanne Kœhler-Lumière’s itinerary from a lady of charity to an ambassadress for social services illustrates a new kind of participation of women in political life.

    Multi-label Annotation for Visual Multi-Task Learning Models

    Full text link
    Deep learning requires large amounts of data and a well-defined pipeline for labeling and augmentation. Current solutions support numerous computer vision tasks with dedicated annotation types and formats, such as bounding boxes, polygons, and key points. These annotations can be combined into a single data format to benefit approaches such as multi-task models. However, to our knowledge, no available labeling tool supports export to such a combined benchmark format, and no augmentation library supports transformations for the combination of all of them. In this work, these functionalities are presented, with visual data annotation and augmentation used to train a multi-task model (object detection, segmentation, and key point extraction). The tools are demonstrated in two robot perception use cases.
    Comment: 5 pages, accepted to the IEEE International Conference on Robotic Computing
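    A combined multi-task record of the kind described above could, for instance, carry box, polygon, and keypoint labels for one object instance side by side. The following Python sketch is purely illustrative; the field names and layout are assumptions, not the tool's actual export schema.

        # Minimal sketch of a combined multi-task annotation record.
        # Field names and layout are illustrative assumptions, not the
        # tool's actual export schema.
        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class MultiTaskAnnotation:
            image_id: str
            label: str                                 # object class
            bbox: Tuple[float, float, float, float]    # x, y, width, height
            polygon: List[Tuple[float, float]]         # segmentation outline
            keypoints: List[Tuple[float, float, int]]  # x, y, visibility flag

        # One record serves three tasks at once: detection (bbox),
        # segmentation (polygon), and keypoint extraction (keypoints).
        ann = MultiTaskAnnotation(
            image_id="frame_0001",
            label="gripper",
            bbox=(120.0, 64.0, 80.0, 40.0),
            polygon=[(120, 64), (200, 64), (200, 104), (120, 104)],
            keypoints=[(130.0, 70.0, 2), (190.0, 98.0, 2)],
        )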

    Instructing Hierarchical Tasks to Robots by Verbal Commands

    Full text link
    Natural language is an effective tool for communication, as information can be expressed in different ways and at different levels of complexity. Verbal commands used to instruct robot tasks can therefore replace traditional robot programming techniques and provide a more expressive means to assign actions and enable collaboration. However, the challenge of using speech for robot programming is how actions and targets can be grounded to physical entities in the world. In addition, to be time-efficient, a balance needs to be found between fine- and coarse-grained commands and natural language phrases. In this work we provide a framework for instructing tasks to robots by verbal commands. The framework includes functionalities for mapping single commands to actions and targets, as well as for longer-term sequences of actions, thereby providing a hierarchical structure to the robot tasks. Experimental evaluation demonstrates the functionalities of the framework through human collaboration with a robot in different tasks with different levels of complexity. The tools are provided open-source at https://petim44.github.io/voice-jogger/
    Comment: 7 pages, accepted to the 16th IEEE/SICE International Symposium on System Integration
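    As a rough illustration of the grounding problem, the sketch below maps a recognized utterance to a robot action and a scene object, then chains grounded steps into a longer sequence. The command vocabulary, matching rule, and function names are hypothetical, not the framework's actual API.

        # Hypothetical sketch of grounding verbal commands to actions and
        # targets; the framework's actual command set and API may differ.
        ACTIONS = {"pick": "pick_up", "place": "place_on", "move": "move_to"}

        def ground_command(utterance, scene_objects):
            """Map a recognized utterance to an (action, target) pair."""
            words = utterance.lower().split()
            action = next((ACTIONS[w] for w in words if w in ACTIONS), None)
            target = next((o for o in scene_objects if o in words), None)
            if action is None or target is None:
                raise ValueError(f"could not ground: {utterance!r}")
            return action, target

        # A longer-term sequence gives the task its hierarchical structure:
        # each step is grounded individually, then executed in order.
        task = ["pick bolt", "move fixture", "place bolt"]
        scene = ["bolt", "fixture", "rail"]
        plan = [ground_command(step, scene) for step in task]
        # plan == [("pick_up", "bolt"), ("move_to", "fixture"), ("place_on", "bolt")]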

    Co-speech gestures for human-robot collaboration

    Full text link
    Collaboration between human and robot requires effective modes of communication to assign robot tasks and coordinate activities. As communication can utilize different modalities, a multi-modal approach can be more expressive than single-modal models alone. In this work we propose a co-speech gesture model that can assign robot tasks for human-robot collaboration. Human gestures and speech, detected by computer vision and speech recognition, can thus refer to objects in the scene and apply robot actions to them. We present an experimental evaluation of the multi-modal co-speech model in a real-world industrial use case. Results demonstrate that multi-modal communication is easy to achieve and can provide benefits for collaboration compared with single-modal tools.
    Comment: 5 pages, accepted to the IEEE International Conference on Robotic Computing
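    A deictic reference such as "pick up the bolt" accompanied by a pointing gesture only resolves once both modalities are fused. The sketch below selects the scene object closest to a pointing direction, optionally filtered by a spoken class label; all names and data layouts are illustrative assumptions, not the proposed model.

        import math

        # Hypothetical fusion of speech and a pointing gesture to resolve
        # an object reference; names and thresholds are illustrative.
        def resolve_reference(point_dir, objects, spoken_label=None):
            """Pick the object closest to the pointing ray (point_dir is a
            unit vector), optionally filtered by a spoken class label."""
            candidates = [o for o in objects
                          if spoken_label is None or o["label"] == spoken_label]
            def angle_to(obj):
                dx, dy = obj["pos"]
                norm = math.hypot(dx, dy) or 1e-9
                dot = (point_dir[0] * dx + point_dir[1] * dy) / norm
                return math.acos(max(-1.0, min(1.0, dot)))
            return min(candidates, key=angle_to, default=None)

        scene = [{"label": "bolt", "pos": (1.0, 0.1)},
                 {"label": "rail", "pos": (0.2, 1.0)}]
        # "pick up the bolt" while pointing roughly along +x:
        target = resolve_reference((1.0, 0.0), scene, spoken_label="bolt")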

    Balancing Exploration and Exploitation : A Neurally Inspired Mechanism to Learn Sensorimotor Contingencies

    Get PDF
    The learning of sensorimotor contingencies is essential for the development of early cognition. Here, we investigate how such a process takes place at the neural level. We propose a theoretical concept for learning sensorimotor contingencies based on motor babbling with a robotic arm and dynamic neural fields. The robot learns to perform sequences of motor commands in order to perceive visual activation from a baby mobile toy. First, the robot explores the different sensorimotor outcomes, then autonomously decides whether to utilize the experience already gathered. Moreover, we introduce a neural mechanism, inspired by recent neuroscience research, that supports the switch between exploration and exploitation. The complete model relies on dynamic field theory, which consists of a set of interconnected dynamical systems. Over time, the robot shifts toward exploiting previously learned sensorimotor contingencies, selecting actions that induce high visual activation.
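    For reference, dynamic field theory builds on the Amari neural field equation; one standard form of the field dynamics (not necessarily the exact variant used in this model) is

        \tau \, \dot{u}(x,t) = -u(x,t) + h + s(x,t) + \int w(x - x')\, f\big(u(x',t)\big)\, dx'

    where u(x,t) is the field activation, h the resting level, s(x,t) the external input, w the lateral interaction kernel, and f a sigmoidal output nonlinearity.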

    Squeeze-in Functionality for a Soft Parallel Robot Gripper

    Get PDF
    Securely grasping parts of inconsistent shapes, sizes, and weights requires accurate part models and custom gripper fingers. Compliant grippers are a potential solution; however, each design approach poses unique problems. In this case, the challenge is the durability and reliability of the half-lips (at least 1400 cycles) in performing consistently as springs of a specified stiffness (0.5 N/mm) and displacement (5 mm). Moreover, the challenge of low-profile and small objects (a 3 mm, 0.01 kg bolt or Allen key) is addressed through vertical squeeze-in, implemented using an incline, a lip, and a flex limiter as part of a 3D-printed TPC spring. The squeeze-in behavior is also verified on a larger object, a 30 mm, 1.66 kg common rail. Experimental results demonstrate reliable grasping when given a human-specified gripping location, without the need for jigs or fixtures. Finally, the tested design is assessed for its potential to contribute to 7 of the United Nations Sustainable Development Goals.
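    For orientation, the stated spring specification implies a modest maximum restoring force per half-lip, assuming linear (Hookean) behavior over the full displacement:

        F = k x = 0.5\ \mathrm{N/mm} \times 5\ \mathrm{mm} = 2.5\ \mathrm{N}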

    SingleDemoGrasp : Learning to Grasp From a Single Image Demonstration

    Get PDF
    Learning-based grasping models typically require large amounts of training data and long training times to generate an effective grasping model. Alternatively, small non-generic grasp models have been proposed that are tailored to specific objects, for example by directly predicting the object's location in 2D/3D space and determining suitable grasp poses by post-processing. In both cases, data generation is a bottleneck, as data needs to be separately collected and annotated for each individual object and image. In this work, we tackle these issues and propose a grasping model that is developed in four main steps: (1) visual object grasp demonstration, (2) data augmentation, (3) grasp detection model training, and (4) robot grasping action. Four different vision-based grasp models are evaluated with industrial and 3D-printed objects, a robot, and a standard gripper, in both simulated and real environments. The grasping model is implemented in the OpenDR toolkit at: https://github.com/opendr-eu/opendr/tree/master/projects/control/single_demo_grasp
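    The single-demonstration bottleneck is typically worked around by synthesizing many labeled views from the one annotated image. The sketch below illustrates that idea with random geometric transforms applied identically to the image and to the grasp annotation; the parameter ranges and function names are assumptions, not the OpenDR implementation.

        import math
        import random

        # Rough sketch of the single-demonstration augmentation idea:
        # random geometric transforms are applied to the image and, with
        # the same parameters, to the grasp annotation, so every synthetic
        # sample stays correctly labeled. Parameter ranges and names are
        # illustrative, not the OpenDR implementation.
        def transform_point(p, angle_deg, scale, shift, center=(320, 240)):
            """Rotate/scale a 2D point about the image center, then translate."""
            a = math.radians(angle_deg)
            x, y = p[0] - center[0], p[1] - center[1]
            xr = scale * (x * math.cos(a) - y * math.sin(a)) + center[0] + shift[0]
            yr = scale * (x * math.sin(a) + y * math.cos(a)) + center[1] + shift[1]
            return xr, yr

        def augment_grasp(grasp_px, n_samples=500):
            """Generate labeled transform parameters from one demonstration."""
            samples = []
            for _ in range(n_samples):
                angle = random.uniform(-180, 180)
                scale = random.uniform(0.8, 1.2)
                shift = (random.uniform(-30, 30), random.uniform(-30, 30))
                # The same (angle, scale, shift) would be applied to the
                # image by any image library; here we track only the label.
                samples.append(((angle, scale, shift),
                                transform_point(grasp_px, angle, scale, shift)))
            return samples

        demo_grasp = (350.0, 260.0)   # grasp point from the single demo
        dataset = augment_grasp(demo_grasp)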

    Monolithic vs. hybrid controller for multi-objective Sim-to-Real learning

    Get PDF
    Simulation-to-real (Sim-to-Real) transfer is an attractive approach for constructing controllers for robotic tasks that are easier to simulate than to solve analytically. Working Sim-to-Real solutions have been demonstrated for tasks with a clear single objective, such as "reach the target". Real-world applications, however, often consist of multiple simultaneous objectives, such as "reach the target" while "avoiding obstacles". A straightforward solution in the context of reinforcement learning (RL) is to combine the multiple objectives into a multi-term reward function and train a single monolithic controller. Recently, a hybrid solution based on pre-trained single-objective controllers and a switching rule between them was proposed. In this work, we compare these two approaches in the multi-objective setting of a robot manipulator that must reach a target while avoiding an obstacle. Our findings show that training a hybrid controller is easier and obtains a better success-failure trade-off than a monolithic controller. The controllers trained in simulation were verified on a real setup.
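    The hybrid approach can be pictured as pre-trained single-objective policies plus a hand-crafted switching rule between them. The sketch below is schematic: the policy stubs and the distance threshold are illustrative assumptions, not the controllers compared in the paper.

        import numpy as np

        # Schematic sketch of the hybrid controller idea: two pre-trained
        # single-objective policies plus a switching rule. The threshold
        # and the policy stubs are illustrative assumptions.
        def reach_policy(obs):
            """Stand-in for the pre-trained 'reach the target' policy."""
            return obs["target"] - obs["ee_pos"]      # step toward target

        def avoid_policy(obs):
            """Stand-in for the pre-trained 'avoid the obstacle' policy."""
            return obs["ee_pos"] - obs["obstacle"]    # step away from obstacle

        def hybrid_controller(obs, safe_dist=0.15):
            """Switching rule: avoid whenever the obstacle is too close."""
            if np.linalg.norm(obs["ee_pos"] - obs["obstacle"]) < safe_dist:
                return avoid_policy(obs)
            return reach_policy(obs)

        obs = {"ee_pos": np.array([0.4, 0.0, 0.3]),
               "target": np.array([0.6, 0.2, 0.3]),
               "obstacle": np.array([0.5, 0.05, 0.3])}
        action = hybrid_controller(obs)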