2,565 research outputs found

    Learning efficient haptic shape exploration with a rigid tactile sensor array

    Full text link
    Haptic exploration is a key skill for both robots and humans to discriminate and handle unknown objects or to recognize familiar objects. Its active nature is evident in humans, who from early on reliably acquire sophisticated sensory-motor capabilities for active exploratory touch and directed manual exploration that associates surfaces and object properties with their spatial locations. This is in stark contrast to robotics. In this field, the relative lack of good real-world interaction models - along with very restricted sensors and a scarcity of suitable training data to leverage machine learning methods - has so far rendered haptic exploration a largely underdeveloped skill. In the present work, we connect recent advances in recurrent models of visual attention with previous insights about the organisation of human haptic search behavior, exploratory procedures and haptic glances, in a novel architecture that learns a generative model of haptic exploration in a simulated three-dimensional environment. The proposed algorithm simultaneously optimizes the main components of the perception-action loop - feature extraction, integration of features over time, and the control strategy - while continuously acquiring data online. We train a multi-module neural network comprising a feature extractor and a recurrent neural network module that aids pose control by storing and combining sequential sensory data. The resulting haptic meta-controller for the rigid 16 × 16 tactile sensor array moving in a physics-driven simulation environment, called the Haptic Attention Model, performs a sequence of haptic glances and outputs the corresponding force measurements. The resulting method has been successfully tested with four different objects, achieving results close to 100% while performing object contour exploration optimized for its own sensor morphology.
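    The abstract above describes a perception-action loop in which a feature extractor encodes each tactile glance, a recurrent module integrates glances over time, and a control head proposes the next probe pose. The minimal sketch below only illustrates that kind of loop; the layer sizes, the (x, y, yaw) pose encoding and the use of PyTorch are assumptions for illustration, not the paper's actual Haptic Attention Model, and training objectives are omitted.

```python
# Minimal sketch of a haptic-glance perception-action loop. The layer sizes,
# the (x, y, yaw) pose encoding and the use of PyTorch are illustrative
# assumptions, not the paper's actual Haptic Attention Model.
import torch
import torch.nn as nn

class HapticGlanceNet(nn.Module):
    """One glance = a 16x16 pressure image plus the pose at which it was taken."""
    def __init__(self, n_classes=4, hidden=128):
        super().__init__()
        self.feat = nn.Sequential(                 # feature extractor for the tactile array
            nn.Flatten(), nn.Linear(16 * 16, 128), nn.ReLU())
        self.pose_enc = nn.Linear(3, 32)           # encodes the assumed (x, y, yaw) glance pose
        self.rnn = nn.GRUCell(128 + 32, hidden)    # integrates glances over time
        self.next_pose = nn.Linear(hidden, 3)      # control head: where to touch next
        self.classify = nn.Linear(hidden, n_classes)

    def forward(self, pressure, pose, h):
        x = torch.cat([self.feat(pressure), torch.relu(self.pose_enc(pose))], dim=-1)
        h = self.rnn(x, h)
        return torch.tanh(self.next_pose(h)), self.classify(h), h

# Unroll a short sequence of glances; real force readings would come from the simulator.
net = HapticGlanceNet()
h, pose = torch.zeros(1, 128), torch.zeros(1, 3)
for _ in range(5):                                 # five haptic glances
    pressure = torch.rand(1, 16, 16)               # stand-in for simulated force measurements
    pose, logits, h = net(pressure, pose, h)
print(logits.softmax(-1))                          # class belief after the glance sequence
```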

    Experimental verification of a completely soft gripper for grasping and classifying beam members in truss structures

    Full text link
    © 2018 IEEE. Robotic object exploration and identification methods to date have attempted to mimic human Exploratory Procedures (EPs) using complex, rigid robotic hands with multifaceted sensory suites. For applications where the target objects may have different or unknown cross-sectional shapes and sizes (e.g. beam members in truss structures), rigid grippers are not a good option, as they are unable to adapt to the target objects. This can make it very difficult to recognise the shape and size of a beam member and the approach angles that would result in a secure grasp. To best meet the requirements of adaptability and compliance, a soft robotic gripper with simple exteroceptive force sensors has been designed. This paper experimentally verifies the gripper design by assessing its performance in grasping and adapting to a variety of target beam members in a truss structure. The sensor arrangement is also assessed by verifying that sufficient data is extracted during a grasp to recognise the approach angle of the gripper. Firstly, the gripper is used to grasp each beam member from various angles of approach, and readings from the force sensors are collected. Secondly, the collected sensor data is used to train and then test a range of commonly used classifiers for classification of the angle of approach. Thirdly, the classification results are analysed. Through this process, it is found that the gripper is proficient in grasping the variety of target beam members. Despite the uncertainty in the gripper pose, the sensor data collected from the soft gripper during a grasp is sufficient for classification of the angles of approach.
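    As a companion to the classification step described above (force readings collected during grasps, then a range of commonly used classifiers trained and tested on the angle of approach), the sketch below shows one way such a comparison could be set up. The eight-sensor feature layout, the four angle classes and the specific classifiers are assumptions for illustration; the paper's exact sensor arrangement and models are not reproduced.

```python
# Hedged sketch of the angle-of-approach classification step: per-grasp force
# features, several commonly used classifiers, cross-validated. The
# eight-sensor layout and four angle classes are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))        # one row per grasp: e.g. peak force from each sensor
y = rng.integers(0, 4, size=120)     # hypothetical approach-angle classes

for name, clf in [("SVM", SVC()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Decision tree", DecisionTreeClassifier(random_state=0))]:
    pipe = make_pipeline(StandardScaler(), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```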

    Active haptic perception in robots: a review

    Get PDF
    In the past few years a new scenario for robot-based applications has emerged. Service and mobile robots have opened new market niches. Also, new frameworks for shop-floor robot applications have been developed. In all these contexts, robots are requested to perform tasks within open-ended, possibly dynamically varying conditions. These new requirements also call for a change of paradigm in the design of robots: online and safe feedback motion control becomes the core of modern robot systems. Future robots will learn autonomously, interact safely and possess qualities like self-maintenance. Attaining these features would have been relatively easy if a complete model of the environment were available, and if the robot actuators could execute motion commands perfectly relative to this model. Unfortunately, a complete world model is not available, and robots have to plan and execute tasks in the presence of environmental uncertainties, which makes sensing an important component of new-generation robots. For this reason, today's new-generation robots are equipped with more and more sensing components, and consequently they are ready to actively deal with the high complexity of the real world. Complex sensorimotor tasks such as exploration require coordination between the motor system and the sensory feedback. For robot control purposes, sensory feedback should be adequately organized in terms of relevant features and the associated data representation. In this paper, we propose an overall functional picture linking sensing to action in closed-loop sensorimotor control of robots for touch (hands, fingers). Basic qualities of haptic perception in humans inspire the models and categories comprising the proposed classification. The objective is to provide a reasoned, principled perspective on the connections between different taxonomies used in the robotics and human haptics literature. The specific case of active exploration is chosen to ground interesting use cases. Two reasons motivate this choice. First, in the literature on haptics, exploration has been treated only to a limited extent compared to grasping and manipulation. Second, exploration involves specific robot behaviors that exploit distributed and heterogeneous sensory data.

    A Robotic Haptic System Architecture

    Get PDF
    In order to carry out a given task in an unstructured environment, a robotic system must extract physical and geometric properties of the environment and the objects therein. We are interested in the question of which elements are necessary to integrate a robotic system able to carry out a task, i.e., to pick up and transport objects in an unknown environment. One of the major concerns is to ensure adequate data throughput and fast communication between modules within the system, so that haptic tasks can be adequately carried out. We also discuss the communication issues involved in the development of such a system.
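    The abstract stresses adequate data throughput and fast communication between modules as a major concern but gives no implementation, so the sketch below only illustrates one generic way to decouple a sensing module from a control module with a bounded queue, so that a slow consumer cannot stall the high-rate sensing loop. Module names, data rates and message contents are hypothetical.

```python
# Hedged illustration only: one generic way to decouple a sensing module from a
# control module with a bounded queue, so a slow consumer cannot stall the
# high-rate sensing loop. Not the architecture described in the paper.
import queue
import threading
import time

tactile_q = queue.Queue(maxsize=16)           # bounded buffer between the two modules
stop = threading.Event()

def sensing_module():
    t = 0
    while not stop.is_set():
        sample = {"t": t, "force": 0.1 * t}   # stand-in for a tactile/force reading
        try:
            tactile_q.put_nowait(sample)      # drop samples rather than block the sensor loop
        except queue.Full:
            pass
        t += 1
        time.sleep(0.001)                     # roughly 1 kHz sensing rate (hypothetical)

def control_module():
    while not stop.is_set():
        try:
            sample = tactile_q.get(timeout=0.1)
        except queue.Empty:
            continue
        _ = sample["force"]                   # a grasp/transport controller would act on this

threads = [threading.Thread(target=sensing_module), threading.Thread(target=control_module)]
for th in threads:
    th.start()
time.sleep(0.05)                              # let the modules exchange a few samples
stop.set()
for th in threads:
    th.join()
```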

    E-TRoll: Tactile sensing and classification via a simple robotic gripper for extended rolling manipulations

    Get PDF
    Robotic tactile sensing provides a means of recognizing objects and their properties where vision fails. Prior work on tactile perception in robotic manipulation has frequently focused on exploratory procedures (EPs). However, in-hand manipulation, another human-inspired technique, can glean rich data in a fraction of the time required by EPs. We propose a simple 3-DOF robotic hand design, optimized for object rolling tasks via a variable-width palm and an associated control system. This system dynamically adjusts the distance between the finger bases in response to object behavior. Compared to fixed finger bases, this technique significantly increases the area of the object that is exposed to finger-mounted tactile arrays during a single rolling motion (an increase of over 60% was observed for a cylinder with a 30-millimeter diameter). In addition, this paper presents a feature extraction algorithm for the collected spatiotemporal dataset, which focuses on object corner identification, analysis, and compact representation. This technique drastically reduces the dimensionality of each data sample from 10×1500 time-series values to 80 features, further reduced by Principal Component Analysis (PCA) to 22 components. An ensemble subspace k-nearest neighbors (kNN) classification model was trained with 90 observations of rolling three different geometric objects, resulting in a three-fold cross-validation accuracy of 95.6% for object shape recognition.
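    The pipeline described above (80 corner-based features per sample, PCA to 22 components, an ensemble subspace kNN classifier, 90 observations, three-fold cross-validation) maps directly onto standard tooling. The sketch below mirrors that pipeline on synthetic stand-in data; the hyperparameters (number of ensemble members, subspace fraction, k) are assumptions, and the paper's reported 95.6% accuracy is not reproduced here.

```python
# Sketch mirroring the pipeline described above (80 features -> PCA to 22
# components -> ensemble subspace kNN, 90 observations, 3-fold CV) on synthetic
# stand-in data; hyperparameters are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 80))          # 90 rolls x 80 corner-based features (synthetic)
y = np.repeat([0, 1, 2], 30)           # three geometric object shapes

subspace_knn = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=3),
    n_estimators=30,
    max_features=0.5,                  # each kNN member sees a random feature subspace
    bootstrap=False,
    random_state=1)

pipe = make_pipeline(StandardScaler(), PCA(n_components=22), subspace_knn)
print(cross_val_score(pipe, X, y, cv=3).mean())   # three-fold cross-validation
```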

    Feeling the Shape: Active Exploration Behaviors for Object Recognition With a Robotic Hand

    Get PDF
    Autonomous exploration is a crucial feature for achieving robust and safe robotic systems capable of interacting with and recognizing their surrounding environment. In this paper, we present a method for object recognition using a three-fingered robotic hand that actively explores interesting object locations to reduce uncertainty. We present a novel probabilistic perception approach with a Bayesian formulation to iteratively accumulate evidence from robot touch. Exploration of better locations for perception is performed by familiarity and novelty exploration behaviors, which intelligently control the robot hand to move toward locations with low and high levels of interestingness, respectively. These are active behaviors that, similar to the exploratory procedures observed in humans, allow robots to autonomously explore locations they believe contain interesting information for recognition. The active behaviors are validated with object recognition experiments in both offline and real-time modes. Furthermore, the effects of inhibiting the active behaviors are analyzed with a passive exploration strategy. The results from the experiments demonstrate the accuracy of our proposed methods, as well as their benefits for active robot control to intelligently explore and interact with the environment.
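    The recognition scheme above rests on a Bayesian formulation that iteratively accumulates evidence from touch while an exploration behavior picks the next location to probe. The sketch below shows a generic version of that idea: a class posterior updated with a Gaussian observation likelihood and a novelty-style rule that probes where the hypotheses still supported by the posterior disagree most. The observation model, the number of locations and the stopping threshold are illustrative assumptions, not the paper's models.

```python
# Generic sketch of sequential Bayesian evidence accumulation with a
# novelty-style choice of the next contact location; the Gaussian observation
# model, the six locations and the stopping threshold are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_objects, n_locations, sigma = 4, 6, 0.1
# Assumed observation model: expected contact intensity per (object, location).
means = rng.uniform(0.2, 0.8, size=(n_objects, n_locations))
true_object = 2

posterior = np.full(n_objects, 1.0 / n_objects)           # uniform prior over objects
for step in range(10):
    # Novelty behavior: probe where the posterior-weighted hypotheses disagree most.
    expected = posterior @ means
    disagreement = posterior @ (means - expected) ** 2
    loc = int(np.argmax(disagreement))
    z = rng.normal(means[true_object, loc], sigma)         # simulated tactile reading
    likelihood = np.exp(-0.5 * ((z - means[:, loc]) / sigma) ** 2)
    posterior = posterior * likelihood
    posterior /= posterior.sum()                           # Bayes update, normalised
    if posterior.max() > 0.99:                             # stop once the belief is confident
        break
print("estimated object:", int(posterior.argmax()), "belief:", round(posterior.max(), 3))
```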

    Textile Taxonomy and Classification Using Pulling and Twisting

    Full text link
    Identification of textile properties is an important milestone toward advanced robotic manipulation tasks that involve interaction with clothing items, such as assisted dressing, laundry folding, automated sewing, and textile recycling and reuse. Despite the abundance of work considering this class of deformable objects, many open problems remain. These relate to the choice and modelling of the sensory feedback as well as the control and planning of the interaction and manipulation strategies. Most importantly, there is no structured approach for studying and assessing different methods that may bridge the gap between the robotics community and the textile production industry. To this end, we outline a textile taxonomy based on fiber types and production methods commonly used in the textile industry. We devise datasets according to the taxonomy, and study how robotic actions, such as pulling and twisting of the textile samples, can be used for classification. We also provide important insights from the perspective of visualization and interpretability of the gathered data.
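    The study above classifies textile samples from robotic pulling and twisting actions. The sketch below illustrates one plausible shape such a pipeline could take: compact descriptors (peak force, loading slope, a simple energy-like sum) extracted from each force trace and fed to a classifier. The chosen features, the random-forest model, the synthetic traces and the three taxonomy classes are assumptions for illustration only, not the paper's pipeline.

```python
# Illustrative sketch only: simple descriptors from synthetic pull and twist
# force traces, fed to a random-forest classifier; features, model and classes
# are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def features(trace):
    """Compact descriptors of one force-vs-time trace from a pull or a twist."""
    slope = np.polyfit(np.arange(trace.size), trace, 1)[0]
    return [trace.max(), slope, np.abs(trace).sum()]

# 60 textile samples x 2 actions (pull, twist), each a 200-step synthetic trace.
X = np.array([features(rng.normal(c, 0.2, 200).cumsum() * 0.01) +
              features(rng.normal(0.5 * c, 0.2, 200).cumsum() * 0.01)
              for c in np.repeat([0.5, 1.0, 1.5], 20)])
y = np.repeat([0, 1, 2], 20)          # three hypothetical taxonomy classes

clf = RandomForestClassifier(n_estimators=100, random_state=3)
print(cross_val_score(clf, X, y, cv=5).mean())
```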

    Exploring the potential of physical visualizations

    Get PDF
    The goal of an external representation of abstract data is to provide insights and convey information about the structure of the underlying data, thereby helping people execute tasks and solve problems more effectively. Apart from the popular and well-studied digital visualization of abstract data, there are other, scarcely studied perceptual channels for representing data, such as taste, sound or touch. My thesis focuses on the latter and explores in which ways human knowledge and the ability to sense and interact with the physical, non-digital world can be used to enhance the way in which people analyze and explore abstract data. Emerging technological progress in digital fabrication allows easy, fast and inexpensive production of physical objects. Machines such as laser cutters and 3D printers enable accurate fabrication of physical visualizations with different form factors as well as materials. This creates, for the first time, the opportunity to study the potential of physical visualizations in a broad range of settings. The thesis starts with the description of six prototypes of physical visualizations, from static examples to digitally augmented variations to interactive artifacts. Based on these explorations, three promising areas of potential for physical visualizations were identified and investigated in more detail: perception & memorability, communication & collaboration, and motivation & self-reflection. The results of two studies in the area of information recall showed that participants who used a physical bar chart retained more information than those who used the digital counterpart. In particular, facts about maximum and minimum values were remembered more efficiently when they were perceived from a physical visualization. Two explorative studies dealt with the potential of physical visualizations regarding communication and collaboration. The observations revealed the importance of the design and aesthetics of physical visualizations and indicated a great potential for their use by audiences with less interest in technology. The results also exposed the current limitations of physical visualizations, especially in contrast to their well-researched digital counterparts. In the area of motivation, we present the design and evaluation of the Activity Sculptures project. We conducted a three-week field study in which we investigated physical visualizations of personal running activity. It was discovered that these sculptures generated curiosity and experimentation regarding personal running behavior and evoked social dynamics such as discussions and competition. Based on the findings of the aforementioned studies, this thesis concludes with two theoretical contributions on the design and potential of physical visualizations. On the one hand, it proposes a conceptual framework for material representations of personal data by describing a production and a consumption lens. The goal is to encourage artists and designers working in the field of personal informatics to harness the interactive capabilities afforded by digital fabrication and the potential of material representations. On the other hand, we give a first classification and performance rating of physical variables, including 14 dimensions grouped into four categories.
This complements the undertaking of providing researchers and designers with guidance and inspiration to uncover alternative strategies for representing data physically and for building effective physical visualizations.