
    Attribution of Autonomy and its Role in Robotic Language Acquisition

    © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
    The false attribution of autonomy and related concepts to artificial agents that lack the attributed levels of the respective characteristic is problematic in many ways. In this article we contrast this view with a positive viewpoint that emphasizes the potential role of such false attributions in the context of robotic language acquisition. By adding emotional displays and congruent body behaviors to a child-like humanoid robot's behavioral repertoire, we were able to bring naïve human tutors to engage in so-called intent interpretations. In developmental psychology, intent interpretations are hypothesized to play a central role in the acquisition of emotion, volition, and similar autonomy-related words. The aforementioned experiments originally targeted the acquisition of linguistic negation. However, participants also produced other affect- and motivation-related words with high frequency and, as a consequence, these entered the robot's active vocabulary. We analyze participants' non-negative emotional and volitional speech and contrast it with participants' speech in a non-affective baseline scenario. Implications of these findings for robotic language acquisition in particular, and for artificial intelligence and robotics more generally, are also discussed.
    Peer reviewed. Final published version.

    Integrating Elastic Bands to Enhance Performance for Textile Robotics

    The elastic bands integrated using the ruffles technique proved effective in enhancing the performance of the soft robotic structures. In the actuator application, the elastic bands greatly increased the bending and force capabilities of the structure, while in the eversion robot cap application they improved performance slightly by maintaining the sensory payload at the tip without restricting the eversion process. These findings demonstrate the potential of using elastic bands and textile techniques in soft robotics to create more efficient and adaptable structures.

    Novel Tactile-SIFT Descriptor for Object Shape Recognition

    Using a tactile array sensor to recognize an object often requires multiple touches at different positions. This process is prone to move or rotate the object, which inevitably increases the difficulty of object recognition. To cope with unknown object movement, this paper proposes a new tactile-SIFT descriptor that extracts features from gradients in the tactile image to represent objects, making the features invariant to object translation and rotation. Tactile-SIFT segments a tactile image into overlapping subpatches, each of which is represented using a dn-dimensional gradient vector, similar to the classic SIFT descriptor. Tactile-SIFT descriptors obtained from multiple touches form a dictionary of k words, and the bag-of-words method is then used to identify objects. The proposed method has been validated by classifying 18 real objects with data from an off-the-shelf tactile sensor. The parameters of the tactile-SIFT descriptor, including the dimension size dn and the number of subpatches sp, are studied. The optimal performance is obtained using an 8-D descriptor with three subpatches, taking both classification accuracy and time efficiency into consideration. By employing tactile-SIFT, a recognition rate of 91.33% has been achieved with a dictionary size of 50 clusters using only 15 touches.
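    The descriptor-plus-bag-of-words pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the subpatch layout, binning, and dictionary assignment are assumptions chosen to mirror the 8-D, three-subpatch configuration the abstract reports.

```python
import numpy as np

def tactile_sift_descriptors(tactile_image, n_subpatches=3, n_bins=8):
    """Sketch: split a tactile image into overlapping horizontal subpatches
    and describe each with an 8-D gradient-orientation histogram weighted
    by gradient magnitude, in the spirit of the classic SIFT descriptor."""
    gy, gx = np.gradient(tactile_image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    h = tactile_image.shape[0]
    step = max(1, h // (n_subpatches + 1))
    descs = []
    for i in range(n_subpatches):
        r0 = i * step
        r1 = min(h, r0 + 2 * step)  # rows overlap between subpatches
        hist, _ = np.histogram(ang[r0:r1], bins=n_bins,
                               range=(0, 2 * np.pi), weights=mag[r0:r1])
        n = np.linalg.norm(hist)
        descs.append(hist / n if n > 0 else hist)
    return np.array(descs)  # shape (n_subpatches, n_bins)

def bag_of_words_histogram(descriptors, dictionary):
    """Assign each descriptor to its nearest dictionary word (e.g. a
    k-means centroid) and return the normalised word-count histogram
    that a classifier would consume."""
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

    In use, descriptors from all training touches would be clustered into the k-word dictionary, and each object is then recognized from the histogram of its touch descriptors.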

    Fingertip Fiber Optical Tactile Array with Two-Level Spring Structure

    Tactile perception is a feature benefiting reliable grasping and manipulation. This paper presents the design of an integrated fingertip force sensor employing an optical fiber based approach in which applied forces modulate light intensity. The proposed sensor system is developed to support grasping of a broad range of objects, including those that are hard as well as those that are soft. The sensor system comprises four sensing elements forming a tactile array integrated with the tip of a finger. We investigate the design configuration of a single force sensing element with the aim of improving its measurement range. The force measurement of a single tactile element is based on a two-level displacement, achieved thanks to a hybrid sensing structure made up of a stiff linear spring and a flexible ortho-planar spring. An important outcome of this paper is a miniature tactile fingertip sensor capable of perceiving light contact, typically occurring during the initial stages of a grasp, as well as measuring the higher forces commonly present during tight grasps.
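    The two-level displacement idea amounts to a piecewise-linear stiffness: the compliant ortho-planar spring deflects first, giving fine resolution at light contact, and the stiff spring takes over at larger displacements, extending the range. A minimal sketch, with all stiffness values and the transition point as illustrative assumptions rather than the sensor's calibrated parameters:

```python
def force_from_displacement(d, k_soft=0.2, k_stiff=5.0, d_transition=1.0):
    """Hypothetical two-level spring model (forces in N, displacement in mm).
    Below d_transition only the flexible ortho-planar spring deflects
    (low stiffness k_soft); beyond it the stiff linear spring dominates
    (high stiffness k_stiff), extending the measurement range."""
    if d <= d_transition:
        return k_soft * d
    return k_soft * d_transition + k_stiff * (d - d_transition)
```

    The same displacement is what the optical intensity measurement ultimately encodes, so a lookup of this kind converts the optical reading into force across both regimes.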

    Miniaturized triaxial optical fiber force sensor for MRI-guided minimally invasive surgery

    Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA'10), May 3-8, 2010, Anchorage, Alaska, USA.
    This paper describes the design and construction of a miniaturized triaxial force sensor which can be applied inside a magnetic resonance imaging (MRI) machine. The sensing principle is based on an optical intensity modulation mechanism that utilizes bent-tip optical fibers to measure the deflection of a compliant platform when it is exposed to a force. By measuring the deflection of the platform with this optical approach, the magnitude and direction of three orthogonal force components (Fx, Fy, and Fz) can be determined. The sensor prototype described in this paper performs force measurements in the axial and radial directions with working ranges of +/- 2 N. Since the sensor is small and entirely made of nonmetallic materials, it is compatible with minimally invasive surgery (MIS) and safe to deploy within magnetic resonance (MR) environments.
    Funded by the European Community's Seventh Framework Programme.
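    Recovering (Fx, Fy, Fz) from several fiber intensity readings is typically done with a linear calibration fitted by least squares. The sketch below assumes four fiber channels and a linear intensity-to-force map; the channel count, shapes, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def calibrate(intensities, forces):
    """Least-squares calibration: find the matrix C (3x4) mapping four
    fiber intensity changes to (Fx, Fy, Fz). `intensities` is (N, 4)
    samples of baseline-subtracted readings, `forces` is (N, 3) reference
    forces from a calibration rig."""
    X, *_ = np.linalg.lstsq(intensities, forces, rcond=None)  # (4, 3)
    return X.T  # C, shape (3, 4)

def measure(C, intensity_sample):
    """Recover the three orthogonal force components from one reading."""
    return C @ intensity_sample
```

    With a well-conditioned set of calibration loads, the fitted matrix lets a single 4-channel reading be converted to a force vector in real time.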

    A Non-linear Model for Predicting Tip Position of a Pliable Robot Arm Segment Using Bending Sensor Data

    Using pliable materials for the construction of robot bodies presents new and interesting challenges for the robotics community. Within the EU project entitled STIFFness controllable Flexible & Learnable manipulator for surgical Operations (STIFF-FLOP), a bendable, segmented robot arm has been developed. The exterior of the arm is composed of a soft material (silicone) encasing an internal structure that contains air-chamber actuators and a variety of sensors for monitoring applied force, position, and shape of the arm as it bends. Due to the physical characteristics of the arm, a proper model of robot kinematics and dynamics is difficult to infer from the sensor data. Here we propose a non-linear approach to predicting the robot arm posture, by training a feed-forward neural network with a structured series of pressure values applied to the arm's actuators. The model is developed across a set of seven different experiments. Because the STIFF-FLOP arm is intended for use in surgical procedures, traditional methods for position estimation (based on visual information or electromagnetic tracking) cannot be implemented. The ability to estimate pose from the data of a custom fiber-optic bending sensor and an accompanying model is therefore a valuable contribution. Results are presented which demonstrate the utility of our non-linear modelling approach across a range of data collection procedures.
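    The regression step, mapping actuator pressures to a predicted tip position through a feed-forward network, can be sketched as below. This is a minimal one-hidden-layer network trained by gradient descent; the layer sizes, activation, and training scheme are assumptions for illustration, not the STIFF-FLOP setup.

```python
import numpy as np

class TipPoseMLP:
    """Minimal feed-forward network sketch (one tanh hidden layer)
    mapping actuator pressure values to a predicted tip position."""

    def __init__(self, n_in=3, n_hidden=16, n_out=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2        # predicted (x, y, z)

    def train(self, X, Y, lr=0.05, epochs=2000):
        """Plain batch gradient descent on mean squared error."""
        for _ in range(epochs):
            err = self.forward(X) - Y
            gW2 = self.h.T @ err / len(X)
            gb2 = err.mean(0)
            dh = (err @ self.W2.T) * (1 - self.h ** 2)  # backprop through tanh
            gW1 = X.T @ dh / len(X)
            gb1 = dh.mean(0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
```

    Trained on recorded (pressure, tip-position) pairs, a network of this shape stands in for the analytic kinematics that the soft arm does not readily admit.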

    Robotic surface exploration with vision and tactile sensing for cracks detection and characterisation

    This paper presents a novel algorithm for crack localisation and detection based on visual and tactile analysis via fibre optics. A finger-shaped fibre-optic sensor is employed to collect the data for the analysis and the experiments. To detect possible crack locations, a camera scans the environment while running an object detection algorithm. Once a crack is detected, a fully-connected graph is created from a skeletonised version of the crack. A minimum spanning tree is then employed to calculate the shortest path for exploring the crack, which is used to develop the motion planner for the robotic manipulator. The motion planner divides the crack into multiple nodes, which are then explored individually. The manipulator then performs the exploration and classifies the tactile data to confirm whether there is indeed a crack at that location or merely a false positive from the vision algorithm. If a crack is detected, its length, width, orientation, and number of branches are also calculated. This is repeated until all nodes of the crack are explored. To validate the complete algorithm, various experiments are performed: comparison of crack exploration through a full scan versus the motion planning algorithm, implementation of frequency-based features for crack classification, and geometry analysis using a combination of vision and tactile data. The results show that the proposed algorithm is able to detect cracks and improve the vision results to correctly classify cracks and their geometry with minimal cost, thanks to the motion planning algorithm.
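    The graph step described above, connecting skeleton nodes and spanning them with a minimum spanning tree to get an efficient visiting order, can be sketched as follows. This is a generic Prim's-algorithm illustration under the assumption of Euclidean edge weights; it is not the paper's exact planner.

```python
import numpy as np

def minimum_spanning_tree(points):
    """Prim's algorithm on a fully connected graph of skeleton points,
    with Euclidean distances as edge weights. Returns MST edges as
    (i, j) index pairs."""
    pts = np.asarray(points, float)
    n = len(pts)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = np.linalg.norm(pts[i] - pts[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

def exploration_order(edges, start=0):
    """Depth-first walk over the MST: the sequence of crack nodes the
    manipulator would visit for tactile verification."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    order, stack, seen = [], [start], set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        order.append(v)
        stack.extend(reversed(adj.get(v, [])))
    return order
```

    Visiting nodes along the tree rather than raster-scanning the whole surface is what keeps the tactile exploration cost low.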

    A two party haptic guidance controller via a hard rein

    In disaster response operations such as indoor firefighting, thick smoke, noise in the oxygen masks, and clutter not only limit the environmental perception of human responders but also cause distress. An intelligent agent (man or machine) with full environmental perception is an alternative to enhance navigation in such unfavorable environments. Since haptic communication is the least affected mode of communication in such cases, we consider human demonstrations in which a guider uses a hard rein to lead blindfolded followers under auditory distraction to be a good paradigm for extracting the salient features of guiding with hard reins. Based on numerical simulations and experimental systems identification of demonstrations from eight pairs of human subjects, we show that the relationship between the orientation difference between the follower and the guider and the lateral swing patterns of the hard rein by the guider can be explained by a novel 3rd-order autoregressive predictive controller. Moreover, by modeling the two-party voluntary movement dynamics using a virtual damped inertial model, we were able to model the mutual trust between the two parties. In the future, the novel controller extracted from human demonstrations can be tested in a human-robot interaction scenario to guide a visually impaired person in applications such as firefighting, search and rescue, and medical surgery.
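    The systems-identification step, fitting an autoregressive model that predicts the guider's rein swing from the recent history of the orientation difference, can be sketched as a least-squares fit. The model structure below (a 3rd-order AR map from orientation difference to swing) follows the abstract; the variable names and fitting details are illustrative assumptions.

```python
import numpy as np

def fit_ar_controller(theta, u, order=3):
    """Fit u[t] = sum_{k=1..order} a_k * theta[t-k] by least squares.
    theta: orientation difference between follower and guider over time;
    u: lateral swing of the hard rein applied by the guider."""
    rows = [theta[t - order:t][::-1] for t in range(order, len(u))]
    A = np.array(rows)                       # each row: theta[t-1..t-order]
    a, *_ = np.linalg.lstsq(A, u[order:], rcond=None)
    return a

def predict(a, theta_history):
    """One-step prediction of the rein swing from the most recent
    `order` samples (theta_history[-1] is the latest)."""
    k = len(a)
    return float(np.dot(a, theta_history[-k:][::-1]))
```

    Once identified from the human demonstrations, coefficients of this form could drive a robot guider's rein motion directly.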

    An Optimal State Dependent Haptic Guidance Controller via a Hard Rein

    The aim of this paper is to improve the optimality and accuracy of techniques to guide a human under limited visibility and auditory conditions, such as fire-fighting in warehouses or similar environments. At present, fire-fighters wearing breathing apparatus (BA) move in teams following walls. Due to limited visibility and high noise in the oxygen masks, they depend predominantly on haptic communication through reins. An intelligent agent (man or machine) with full environmental perception is an alternative to enhance navigation in such unfavorable environments, just as a dog guides a blind person. This paper proposes an optimal state-dependent control policy by which an intelligent and environmentally perceptive agent guides a follower with limited environmental perception. Based on experimental systems identification and numerical simulations of human demonstrations from eight pairs of participants, we show that the guiding agent and the follower converge to stable state-dependent control policies, captured by novel 3rd-order autoregressive predictive and 2nd-order reactive control policies respectively. Our findings provide a novel theoretical basis for designing advanced human-robot interaction algorithms in a variety of cases where a robot must assist a human counterpart in perceiving the environment.