
    Social Roles and Baseline Proxemic Preferences for a Domestic Service Robot

    © The Author(s) 2014. Published with open access at Springerlink.com under the terms of the Creative Commons Attribution License. The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under contract number FP7 215554, and partly funded by the ACCOMPANY project, part of the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 287624. The goal of our research is to develop socially acceptable behavior for domestic robots in a setting where a user and the robot share the same physical space and interact with each other in close proximity. Specifically, our research focuses on approach distances and directions in the context of a robot handing over an object to a user. Peer reviewed.

    Perception-aware Path Planning

    In this paper, we give a double twist to the problem of planning under uncertainty. State-of-the-art planners seek to minimize localization uncertainty by considering only the geometric structure of the scene. We argue that motion planning for vision-controlled robots should be perception aware: the robot should also favor texture-rich areas in order to minimize localization uncertainty during a goal-reaching task. Thus, we describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric one, to compute the uncertainty of vision-based localization during path planning. To avoid the drawbacks of feature-based localization systems (i.e., dependence on feature type and user-defined thresholds), we use dense, direct methods, which allow us to compute the localization uncertainty directly from the intensity values of every pixel in the image. We also describe how to compute trajectories online, including in scenarios with no prior knowledge of the map. The proposed framework is general and can easily be adapted to different robotic platforms and scenarios. The effectiveness of our approach is demonstrated with extensive experiments in both simulated and real-world environments using a vision-controlled micro aerial vehicle. Comment: 16 pages, 20 figures, revised version. Conditionally accepted for IEEE Transactions on Robotics.
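
    The abstract above only sketches the idea; as a rough illustration of how a planner might trade path length against the texture available for vision-based localization, the minimal Python sketch below scores candidate paths with an image-gradient proxy for photometric information. The weights, the patch model, and all function names are assumptions made for illustration, not the paper's formulation, which propagates localization uncertainty from dense, direct image alignment.

        import numpy as np

        def texture_information(patch):
            """Proxy for photometric information: mean squared image-gradient
            magnitude of the patch a waypoint would observe (an assumption for
            illustration, not the paper's dense photometric-uncertainty model)."""
            gy, gx = np.gradient(patch.astype(float))
            return float(np.mean(gx**2 + gy**2))

        def path_cost(waypoints, visible_patches, w_length=1.0, w_uncertainty=5.0):
            """Trade off path length against expected localization uncertainty:
            low-texture views contribute little information, so they inflate the
            uncertainty term and make the path less attractive."""
            length = sum(np.linalg.norm(b - a)
                         for a, b in zip(waypoints[:-1], waypoints[1:]))
            uncertainty = sum(1.0 / (1e-6 + texture_information(p))
                              for p in visible_patches)
            return w_length * length + w_uncertainty * uncertainty

        # Usage: prefer the candidate path whose views are texture-rich.
        rng = np.random.default_rng(0)
        wps_a = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]
        wps_b = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([2.0, 2.0])]
        textured = [rng.normal(size=(32, 32)) for _ in wps_a]  # texture-rich views
        flat = [np.ones((32, 32)) for _ in wps_b]              # texture-poor views
        costs = {"A": path_cost(wps_a, textured), "B": path_cost(wps_b, flat)}
        print("preferred path:", min(costs, key=costs.get))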

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability over a wide field of view that allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable, long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, and address issues such as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment. Published version.
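
    One ingredient named above is uncertainty-based measurement selection. The sketch below, assuming a generic EKF-SLAM parameterization rather than the paper's actual filter or stereo-head model, scores candidate features by the determinant of their predicted innovation covariance and fixates the one with the most to gain from a measurement; every matrix and value is hypothetical.

        import numpy as np

        def innovation_covariance(P, H, R):
            """Predicted innovation covariance S = H P H^T + R for one candidate
            feature measurement (standard EKF quantities, illustrative values)."""
            return H @ P @ H.T + R

        def select_feature_to_fixate(P, candidate_jacobians, R):
            """Uncertainty-based measurement selection: point the active head at
            the feature whose predicted innovation covariance has the largest
            volume (determinant), i.e. where a measurement has the most to gain."""
            scores = [np.linalg.det(innovation_covariance(P, H, R))
                      for H in candidate_jacobians]
            return int(np.argmax(scores))

        # Toy 5-state filter: 3-DoF robot pose plus one 2-D feature (hypothetical).
        P = np.diag([0.2, 0.2, 0.05, 0.5, 0.5])        # state covariance
        R = 0.01 * np.eye(2)                           # measurement noise
        H1 = np.hstack([np.eye(2), np.zeros((2, 3))])  # observes well-known states
        H2 = np.hstack([np.zeros((2, 3)), np.eye(2)])  # observes the uncertain feature
        print("fixate candidate:", select_feature_to_fixate(P, [H1, H2], R))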

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Enhanced lensing rate by clustering of massive galaxies: newly discovered systems in the SLACS fields

    [Abridged] We exploit the clustering of massive galaxies to perform a high-efficiency imaging search for gravitational lenses. Our dataset comprises 44 fields imaged by the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS), each of which is centered on a lens discovered by the Sloan Lens ACS Survey (SLACS). We compare four different search methods: 1) automated detection with the HST Archive Galaxy-scale Gravitational Lens Survey (HAGGLeS) robot, 2) examining cutout images of bright galaxies (BGs) after subtraction of a smooth galaxy light distribution, 3) examining the unsubtracted BG cutouts, and 4) performing a full-frame visual inspection of the ACS images. We compute purity and completeness, and consider investigator time, for the four algorithms, using the main SLACS lenses as a testbed. The first and second algorithms perform best. We present the four new lens systems discovered during this comprehensive search, as well as one other likely candidate. For each new lens, we use the fundamental plane to estimate the lens velocity dispersion and predict, from the resulting lens geometry, the redshifts of the lensed sources. Two of these new systems are found in galaxy clusters, which include the SLACS lenses in the two respective fields. Overall, we find that the enhanced lens abundance (30^{+24}_{-8} lenses/degree^2) is higher than expected for random fields (12^{+4}_{-2} lenses/degree^2 for the COSMOS survey). Additionally, we find that the gravitational lenses we detect are qualitatively different from those in the parent SLACS sample: this imaging survey largely probes higher-redshift, lower-mass early-type galaxies. Comment: submitted to ApJ; 19 pages, 12 figures.
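
    For readers unfamiliar with the quoted abundances, the sketch below shows how a raw lens count and an effective survey area translate into lenses per square degree with a Poisson confidence interval. The count and area in the usage line are hypothetical placeholders, not the inputs behind the 30^{+24}_{-8} lenses/degree^2 figure, which rests on the paper's own counts, effective areas, and selection corrections.

        from scipy import stats

        def lens_abundance(n_lenses, area_deg2, cl=0.683):
            """Convert a raw lens count and an effective survey area into an
            abundance in lenses/deg^2 with a central Poisson (Garwood) interval.
            Purely illustrative; the paper's quoted abundances rest on its own
            counts, effective areas, and selection corrections."""
            alpha = 1.0 - cl
            lo = 0.5 * stats.chi2.ppf(alpha / 2.0, 2 * n_lenses) if n_lenses else 0.0
            hi = 0.5 * stats.chi2.ppf(1.0 - alpha / 2.0, 2 * (n_lenses + 1))
            return n_lenses / area_deg2, lo / area_deg2, hi / area_deg2

        # Usage with hypothetical numbers: 4 lenses over an assumed 0.13 deg^2.
        rate, lo, hi = lens_abundance(n_lenses=4, area_deg2=0.13)
        print(f"~{rate:.0f} (+{hi - rate:.0f}/-{rate - lo:.0f}) lenses/deg^2")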

    Towards a Domain Specific Language for a Scene Graph based Robotic World Model

    Robot world model representations are a vital part of robotic applications. However, there is no support for such representations in model-driven engineering tool chains. This work proposes a novel Domain Specific Language (DSL) for robotic world models that are based on the Robot Scene Graph (RSG) approach. The RSG-DSL can express (a) application-specific scene configurations, (b) semantic scene structures, and (c) the inputs and outputs of the computational entities that are loaded into an instance of a world model. Comment: Presented at DSLRob 2013 (arXiv:cs/1312.5952).
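
    As a rough illustration of what a scene-graph based world model expresses, namely application-specific scene configurations plus semantic structure, here is a minimal Python sketch of a tree of named nodes carrying semantic attributes. It is not the RSG-DSL syntax or the actual RSG types; every class, attribute, and node name is invented for illustration.

        from dataclasses import dataclass, field
        from typing import Dict, List, Optional

        @dataclass
        class SceneNode:
            """One node in a toy scene-graph world model: a named entity with
            semantic attributes and child nodes (not the actual RSG/RSG-DSL types)."""
            name: str
            attributes: Dict[str, str] = field(default_factory=dict)
            children: List["SceneNode"] = field(default_factory=list)

            def add(self, child: "SceneNode") -> "SceneNode":
                self.children.append(child)
                return child

            def find(self, name: str) -> Optional["SceneNode"]:
                """Depth-first lookup, the kind of query a computational entity
                attached to the world model might issue."""
                if self.name == name:
                    return self
                for child in self.children:
                    hit = child.find(name)
                    if hit is not None:
                        return hit
                return None

        # An application-specific scene configuration with semantic annotations.
        world = SceneNode("world")
        kitchen = world.add(SceneNode("kitchen", {"type": "room"}))
        table = kitchen.add(SceneNode("table_1", {"type": "furniture"}))
        table.add(SceneNode("cup_1", {"type": "object", "class": "Cup"}))
        print(world.find("cup_1").attributes["class"])  # -> Cup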

    Robot rights? Towards a social-relational justification of moral consideration

    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on the ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that, to develop this argument further, we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.