
    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential of service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people and of advances in technology that will make new uses possible, and it suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    The Usage and Evaluation of Anthropomorphic Form in Robot Design

    There are numerous examples of the application of human shape in everyday products. The use of anthropomorphic form has long been a basic design strategy, particularly in the design of intelligent service robots. As such, it is desirable to use anthropomorphic form not only in aesthetic design but also in interaction design. Proceeding from how anthropomorphism in various domains affects human perception, we assumed that anthropomorphic form used in the appearance and interaction design of robots enriches the explanation of a robot's function and creates familiarity with it. In many cases we have found that misused anthropomorphic form leads to user disappointment or negative impressions of the robot. To use anthropomorphic form effectively, it is necessary to measure the similarity of an artifact to the human form (humanness) and then evaluate whether the usage of anthropomorphic form fits the artifact. The goal of this study is to propose a general evaluation framework of anthropomorphic form for robot design. We suggest three major steps for framing the evaluation: 'measuring anthropomorphic form in appearance', 'measuring anthropomorphic form in Human-Robot Interaction', and 'evaluating the accordance of the two former measurements'. This evaluation process endows a robot with a degree of humanness in appearance equivalent to its degree of humanness in interaction ability, and thereby ultimately facilitates user satisfaction. Keywords: Anthropomorphic Form; Anthropomorphism; Human-Robot Interaction; Humanness; Robot Design
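
    The three-step framework could be operationalized, very roughly, as in the sketch below: humanness is scored separately for appearance and for interaction, and an accordance step flags mismatches between the two. The 0-1 humanness scale, the scoring inputs, and the tolerance threshold are assumptions made for illustration; the abstract does not prescribe a concrete metric.

```python
# Illustrative sketch of the three-step evaluation framework described above.
# The 0-1 humanness scale and the tolerance threshold are assumptions for
# illustration only; the paper does not specify a concrete scoring scheme.

from dataclasses import dataclass


@dataclass
class RobotProfile:
    name: str
    appearance_humanness: float   # step 1: humanness of physical form, 0 (machine-like) to 1 (human-like)
    interaction_humanness: float  # step 2: humanness of HRI capability, same 0-1 scale


def accordance(profile: RobotProfile, tolerance: float = 0.15) -> str:
    """Step 3: compare the two measurements and flag mismatches."""
    gap = profile.appearance_humanness - profile.interaction_humanness
    if abs(gap) <= tolerance:
        return "accordant: appearance matches interaction ability"
    if gap > 0:
        return "over-anthropomorphized: appearance promises more than interaction delivers"
    return "under-anthropomorphized: interaction ability exceeds what the appearance suggests"


if __name__ == "__main__":
    for robot in [
        RobotProfile("guide robot", appearance_humanness=0.8, interaction_humanness=0.3),
        RobotProfile("vacuum robot", appearance_humanness=0.1, interaction_humanness=0.1),
    ]:
        print(robot.name, "->", accordance(robot))
```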

    Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    Deep networks thrive when trained on large-scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging as well as to dataset-bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments, which must train on the objects they encounter there. To make this possible, it is important to break free from the need for manual annotators. Recent work has begun to investigate how to use the massive amount of images available on the Web in place of manual image annotations. We contribute to this research thread with two findings: (1) a study correlating a given level of label noise to the expected drop in accuracy, for two deep architectures and two different types of noise, which clearly identifies GoogLeNet as a suitable architecture for learning from Web data; (2) a recipe for the creation of Web datasets with minimal noise and maximum visual variability, based on a visual and natural language processing concept-expansion strategy. By combining these two results, we obtain a method for learning powerful deep object models automatically from the Web. We confirm the effectiveness of our approach through object categorization experiments using our Web-derived version of ImageNet on a popular robot vision benchmark database, and on a lifelong object discovery task on a mobile robot. Comment: 8 pages, 7 figures, 3 tables
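
    As a purely illustrative sketch of the kind of controlled study described in finding (1), the snippet below flips a fraction of training labels uniformly at random and records the resulting drop in test accuracy. The synthetic data, the 1-nearest-neighbour classifier, and the uniform noise model are stand-ins chosen for brevity; the paper's actual experiments use deep architectures such as GoogLeNet on Web image data.

```python
# Illustrative noise-vs-accuracy study: inject uniform label noise into a
# training set and measure the resulting drop in test accuracy. Synthetic
# two-class Gaussian data and a 1-nearest-neighbour classifier stand in for
# the paper's deep networks and Web images; only the methodology is sketched.

import numpy as np

rng = np.random.default_rng(0)


def make_data(n_per_class=300, dim=8, sep=1.5):
    """Two Gaussian blobs, one per class, separated along every dimension."""
    x0 = rng.normal(0.0, 1.0, size=(n_per_class, dim))
    x1 = rng.normal(sep, 1.0, size=(n_per_class, dim))
    x = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n_per_class, dtype=int), np.ones(n_per_class, dtype=int)])
    return x, y


def flip_labels(y, noise_level):
    """Simulate Web-label noise: flip a `noise_level` fraction of labels."""
    y_noisy = y.copy()
    flip = rng.random(len(y)) < noise_level
    y_noisy[flip] = 1 - y_noisy[flip]
    return y_noisy


def one_nn_accuracy(x_train, y_train, x_test, y_test):
    """1-nearest-neighbour accuracy; sensitive to label noise by construction."""
    dists = np.linalg.norm(x_test[:, None, :] - x_train[None, :, :], axis=2)
    pred = y_train[dists.argmin(axis=1)]
    return float((pred == y_test).mean())


x_train, y_train = make_data()
x_test, y_test = make_data()
clean_acc = one_nn_accuracy(x_train, y_train, x_test, y_test)

for noise in (0.0, 0.1, 0.2, 0.4):
    acc = one_nn_accuracy(x_train, flip_labels(y_train, noise), x_test, y_test)
    print(f"label noise {noise:.0%}: accuracy {acc:.3f} (drop {clean_acc - acc:.3f})")
```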

    A systematic comparison of affective robot expression modalities
