
    A Multi-modal Perception based Architecture for a Non-intrusive Domestic Assistant Robot

    We present a multi-modal perception based architecture to realize a non-intrusive domestic assistant robot. The realized robot is non-intrusive in that it only starts interaction with a user when it automatically detects the user's intention to do so. All the robot's actions are based on multi-modal perceptions, which include: user detection based on RGB-D data, detection of the user's intention for interaction with RGB-D and audio data, and communication via speech recognition. The utilization of multi-modal cues in different parts of the robotic activity paves the way to successful robotic runs.
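    The abstract above does not spell out how the visual and audio cues are combined; the following minimal sketch illustrates one plausible intention-for-interaction gate over such multi-modal percepts. All names, thresholds, and the fusion rule itself are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a multi-modal "intention-for-interaction" gate.
# The cue sources, thresholds and fusion rule are assumptions; the paper
# does not specify its actual decision logic.

from dataclasses import dataclass

@dataclass
class Percepts:
    user_detected: bool    # from an RGB-D person detector
    facing_robot: bool     # head/torso orientation from RGB-D skeleton data
    distance_m: float      # user-robot distance from depth data
    speech_detected: bool  # voice activity from the audio channel

def wants_interaction(p: Percepts, max_distance_m: float = 2.5) -> bool:
    """Return True only when several cues agree, so the robot stays
    non-intrusive and never starts dialogue unprompted."""
    if not p.user_detected:
        return False
    close_enough = p.distance_m <= max_distance_m
    # Require either an explicit audio cue or a clear visual one.
    return close_enough and (p.speech_detected or p.facing_robot)

# Example: a user 1.8 m away, facing the robot but silent, triggers interaction.
print(wants_interaction(Percepts(True, True, 1.8, False)))  # True
```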

    A multi-modal perception based assistive robotic system for the elderly

    In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interaction with a user when it detects the user's intention to do so. All the robot's actions are based on multi-modal perceptions, which include user detection based on RGB-D data, detection of the user's intention for interaction with RGB-D and audio data, and communication via user-distance-mediated speech recognition. The utilization of multi-modal cues in different parts of the robotic activity paves the way to successful robotic runs (94% success rate). Each presented perceptual component is systematically evaluated using appropriate datasets and evaluation metrics. Finally, the complete system is fully integrated on the PR2 robotic platform and validated through system sanity-check runs and user studies with the help of 17 volunteer elderly participants.
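    One detail specific to this system is the "user-distance-mediated" speech recognition. As a rough illustration only (the abstract does not give the actual mechanism or parameters), a recogniser could vary its acceptance threshold with the measured user distance, as in the hypothetical sketch below.

```python
# Hypothetical sketch of distance-mediated speech recognition: the ASR
# acceptance threshold is relaxed when the user is close (clean audio) and
# tightened when they are far, where recognition is less reliable.
# Thresholds and the recogniser interface are assumptions, not the paper's.

def accept_hypothesis(text: str, confidence: float, distance_m: float) -> bool:
    """Accept an ASR hypothesis only if its confidence exceeds a
    distance-dependent threshold."""
    if distance_m < 1.0:
        threshold = 0.4   # near field: be permissive
    elif distance_m < 2.5:
        threshold = 0.6
    else:
        threshold = 0.8   # far field: demand high confidence
    return confidence >= threshold

print(accept_hypothesis("bring me water", confidence=0.65, distance_m=1.5))  # True
print(accept_hypothesis("bring me water", confidence=0.65, distance_m=3.0))  # False
```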

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance. It further discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    ENRICHME: Perception and Interaction of an Assistive Robot for the Elderly at Home

    Recent technological advances have enabled modern robots to become part of our daily life. In particular, assistive robotics has emerged as an exciting research topic that can provide solutions to improve the quality of life of elderly and vulnerable people. This paper introduces the robotic platform developed in the ENRICHME project, with particular focus on its innovative perception and interaction capabilities. The project's main goal is to enrich the day-to-day experience of elderly people at home with technologies that enable health monitoring, complementary care, and social support. The paper presents several modules created to provide cognitive stimulation services for elderly users with mild cognitive impairments. The ENRICHME robot was tested in three pilot sites around Europe (Poland, Greece, and the UK) and proved to be an effective assistant for the elderly at home.

    Exploring Task-agnostic, ShapeNet-based Object Recognition for Mobile Robots

    This position paper presents an attempt to improve the scalability of existing object recognition methods, which largely rely on supervision and presuppose the availability of a huge number of manually-labelled data points. Moreover, in the context of mobile robotics, data sets and experimental settings are often handcrafted for the specific task the object recognition is aimed at, e.g. object grasping. In this work, we argue instead that publicly available open data such as ShapeNet can be used for object classification first, and then to link objects to their related concepts, leading to task-agnostic knowledge acquisition practices. To this aim, we evaluated five pipelines for object recognition, where target classes were all entities collected from ShapeNet and matching was based on: (i) shape-only features, (ii) RGB histogram comparison, (iii) a combination of shape and colour matching, (iv) image feature descriptors, and (v) inexact, normalised cross-correlation, resembling the deep, Siamese-like NN architecture of Subramaniam et al. (2016). We discussed the relative impact of shape-derived and colour-derived features, as well as the suitability of all tested solutions for future application to real-life use cases.
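    As an illustration of the simpler end of these pipelines, the sketch below shows how RGB histogram comparison (pipeline ii) could be implemented with OpenCV; the bin count, the correlation metric, and the reference-view dictionary are illustrative assumptions rather than the paper's exact setup.

```python
# Illustrative sketch of RGB-histogram-based matching against reference views,
# e.g. renderings of ShapeNet entities. Bin count and metric are assumptions.

import cv2
import numpy as np

def rgb_histogram(image_bgr: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated, normalised per-channel histogram used as the descriptor."""
    hists = [cv2.calcHist([image_bgr], [c], None, [bins], [0, 256])
             for c in range(3)]
    hist = np.concatenate(hists).astype(np.float32)
    return cv2.normalize(hist, hist).flatten()

def best_match(query_bgr: np.ndarray, reference_views: dict) -> str:
    """Return the class label whose reference view has the most similar
    colour histogram (correlation metric, higher is better)."""
    q = rgb_histogram(query_bgr)
    scores = {label: cv2.compareHist(q, rgb_histogram(view), cv2.HISTCMP_CORREL)
              for label, view in reference_views.items()}
    return max(scores, key=scores.get)
```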

    Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching

    To assist humans with their daily tasks, mobile robots are expected to navigate complex and dynamic environments, presenting unpredictable combinations of known and unknown objects. Most state-of-the-art object recognition methods are unsuitable for this scenario because they require that: (i) all target object classes are known beforehand, and (ii) a vast number of training examples is provided for each class. This evidence calls for novel methods to handle unknown object classes, for which fewer images are initially available (few-shot recognition). One way of tackling the problem is learning how to match novel objects to their most similar supporting example. Here, we compare different (shallow and deep) approaches to few-shot image matching on a novel data set, consisting of 2D views of common object types drawn from a combination of ShapeNet and Google. First, we assess whether the object similarities learned from a combination of ShapeNet and Google can scale up to new object classes, i.e., categories unseen at training time. Furthermore, we show how normalising the learned embeddings can impact the generalisation abilities of the tested methods, in the context of two novel configurations: (i) where the weights of a Convolutional two-branch Network are imprinted and (ii) where the embeddings of a Convolutional Siamese Network are L2-normalised.
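    A minimal sketch of the matching step with L2-normalised embeddings is given below: a query embedding is assigned the label of its most similar support example by cosine similarity. The encoder producing the embeddings is abstracted away, and the toy data are purely illustrative.

```python
# Minimal sketch of few-shot matching with L2-normalised embeddings.
# Any Siamese / two-branch encoder producing fixed-length vectors would fit;
# here random vectors stand in for real embeddings.

import numpy as np

def l2_normalise(x: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def match_query(query_emb: np.ndarray,
                support_embs: np.ndarray,
                support_labels: list) -> str:
    """Nearest-neighbour matching in the normalised embedding space."""
    q = l2_normalise(query_emb)
    s = l2_normalise(support_embs)
    similarities = s @ q  # cosine similarity after normalisation
    return support_labels[int(np.argmax(similarities))]

# Toy usage with random 128-D embeddings for three one-shot support classes.
rng = np.random.default_rng(0)
support = rng.normal(size=(3, 128))
query = support[1] + 0.05 * rng.normal(size=128)  # a perturbed view of class 1
print(match_query(query, support, ["mug", "chair", "lamp"]))  # "chair"
```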

    ACII 2009: Affective Computing and Intelligent Interaction. Proceedings of the Doctoral Consortium 2009


    Collaborative human-machine interfaces for mobile manipulators.

    The use of mobile manipulators in service industries, both as agents in physical Human-Robot Interaction (pHRI) and for social interaction, has been on the increase in recent times, driven by necessities such as compensating for workforce shortages and enabling safer and more efficient operations, among other reasons. Collaborative robots, or co-bots, are robots developed for interaction with humans through direct contact or close proximity in a shared space. The work presented in this dissertation focuses on the design, implementation and analysis of components for the next-generation collaborative human-machine interfaces (CHMIs) needed for mobile manipulator co-bots that can be used in various service industries. The particular components of these CHMIs considered in this dissertation are: robot control, with a Neuroadaptive Controller (NAC)-based admittance control strategy for pHRI applications with a co-bot; robot state estimation, with a novel methodology and placement strategy for using arrays of IMUs that can be embedded in robot skin for pose estimation in complex robot mechanisms; and user perception of co-bot CHMIs, with an evaluation of human perceptions of the usefulness and ease of use of a mobile manipulator co-bot in a nursing assistant application scenario.
    To facilitate advanced control for the Adaptive Robotic Nursing Assistant (ARNA) mobile manipulator co-bot that was designed and developed in our lab, we describe and evaluate an admittance control strategy that features a Neuroadaptive Controller (NAC). The NAC has been specifically formulated for pHRI applications such as patient walking. The controller continuously tunes the weights of a neural network to cancel robot non-linearities, including drive-train backlash, kinematic or dynamic coupling, variable patient pushing effort, and sloped surfaces with unknown inclines. The advantages of our control strategy are Lyapunov stability guarantees during interaction, less need for parameter tuning, and better performance across a variety of users and operating conditions. We conduct simulations and experiments with 10 users to confirm that the NAC outperforms a classic Proportional-Derivative (PD) joint controller in terms of resulting interaction jerk, user effort, and trajectory tracking error during patient walking.
    To tackle complex mechanisms of these next-generation robots, in which the use of encoders or other classic pose-measuring devices is not feasible, we present a study of the effects of design parameters on methods that use data from Inertial Measurement Units (IMUs) in robot skins to provide robot state estimates. These parameters include the number of sensors, their placement on the robot, and their noise properties, and we assess their effect on the quality of robot pose estimation and its signal-to-noise ratio (SNR). The results of that study facilitate the creation of robot skins, and to enable their use in complex robots, we propose a novel pose estimation method, the Generalized Common Mode Rejection (GCMR) algorithm, for estimating joint angles in robot chains containing composite joints. The placement study and GCMR are demonstrated using both Gazebo simulation and experiments with a 3-DoF robotic arm containing 2 non-zero link lengths, 1 revolute joint and a 2-DoF composite joint.
    In addition to yielding insights into the predicted usage of co-bots, the design of control and sensing mechanisms in their CHMIs benefits from evaluating the perceptions of the eventual users of these robots. With co-bots being increasingly developed and used, there is a need for studies of these user perceptions using existing models that have been applied to predict the usage of comparable technology. To this end, we use the Technology Acceptance Model (TAM) to evaluate the CHMI of the ARNA robot in a nursing assistant scenario, via analysis of quantitative and questionnaire data collected during experiments with eventual users. The results of the work conducted in this dissertation constitute contributions to the realization of the control and sensing systems that are part of CHMIs for next-generation co-bots.
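    For readers unfamiliar with admittance control, the sketch below illustrates the basic idea behind such a strategy: the measured interaction force is mapped to a commanded velocity through a virtual mass-damper. The gains and timestep are arbitrary, and the neuroadaptive term that cancels non-linearities in the actual NAC is deliberately omitted, so this is a simplification rather than the dissertation's controller.

```python
# Very simplified admittance-control step: one Euler update of the virtual
# dynamics M*dv/dt + D*v = F, mapping a measured force to a commanded velocity.
# Gains, timestep and the missing neuroadaptive compensation are assumptions.

def admittance_step(force: float, velocity: float,
                    virtual_mass: float = 10.0,
                    virtual_damping: float = 25.0,
                    dt: float = 0.01) -> float:
    """Return the new commanded velocity after one control cycle."""
    acceleration = (force - virtual_damping * velocity) / virtual_mass
    return velocity + acceleration * dt

# A user pushing with a constant 20 N gradually accelerates the robot
# towards the steady-state velocity F/D = 0.8 m/s.
v = 0.0
for _ in range(500):  # 5 s at 100 Hz
    v = admittance_step(force=20.0, velocity=v)
print(round(v, 3))
```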
