Grasping With Mechanical Intelligence
Many robotic hands have been designed and a number have been built. Because of the difficulty of controlling and using complex hands, which usually have nine or more degrees of freedom, the simple one- or two-degree-of-freedom gripper is still the most common robotic end effector. This thesis presents a new category of device: a medium-complexity end effector. With three to five degrees of freedom, such a tool is much easier to control and use, as well as more economical, compact, and lightweight than complex hands. To increase versatility, it was necessary to identify grasping primitives and to implement them in the mechanism. In addition, power and enveloping grasps are stressed over fingertip and precision grasps. The design is based upon an analysis of object apprehension types, requisite characteristics for active sensing, and a determination of necessary environmental interactions. Contained in this thesis are the general concepts necessary to the design of a medium-complexity end effector, an analysis of typical performance, and a computer simulation of a grasp planning algorithm specific to this type of mechanism. Finally, some details concerning the UPenn Hand - a tool designed for the research laboratory - are presented.
Grasp-sensitive surfaces
Grasping objects with our hands allows us to skillfully move and manipulate them.
Hand-held tools further extend our capabilities by adapting precision, power, and shape of our hands to the task at hand.
Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities.
Many other tools may be augmented with small, energy-efficient digital sensors and processors.
This allows graspable objects to learn about the user grasping them - and to support the user's goals.
For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action.
A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case.
And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter.
This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces.
It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction:
For nearly a century, scientists have analyzed human grasping.
My literature review gives an overview of definitions, classifications, and models of human grasping.
A small number of studies have investigated grasping in everyday situations.
They found a much greater diversity of grasps than described by existing taxonomies.
This diversity makes it difficult to directly associate certain grasps with users' goals.
To structure related work and my own research, I formalize a generic workflow for grasp sensing.
It comprises *capturing* of sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp.
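The three-stage workflow can be sketched as a minimal pipeline; all sensor values, grasp names, and actions below are invented placeholders, not part of the dissertation:

```python
# Sketch of the capture -> identify -> interpret workflow described above.
# Sensor values, grasp labels, and actions are illustrative stand-ins.
from typing import Dict, List

def capture() -> List[float]:
    # Stand-in for reading raw values from grasp sensors.
    return [0.9, 0.8, 0.1, 0.2]

def identify(values: List[float]) -> str:
    # Toy classifier: map raw sensor values to a named grasp.
    return "one-handed-portrait" if values[0] > 0.5 else "two-handed-landscape"

def interpret(grasp: str, context: Dict[str, str]) -> str:
    # Combine the identified grasp with other usage context to pick an action.
    if grasp == "one-handed-portrait" and context.get("app") == "home":
        return "open-dialer"
    return "no-action"

action = interpret(identify(capture()), {"app": "home"})
```

Separating the stages this way keeps sensor hardware, grasp classification, and interaction semantics independently replaceable.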
A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention.
In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques:
**HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors.
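To illustrate the kind of classification a small number of capacitive sensors enables, here is a hedged nearest-centroid sketch over four readings; the centroid values and grasp labels are invented and are not HandSense's actual classifier:

```python
# Hypothetical nearest-centroid grasp classifier over four capacitive
# readings. Centroids and labels are made up for this sketch.
import math

CENTROIDS = {
    "right-hand-hold": [0.9, 0.2, 0.7, 0.1],
    "left-hand-hold":  [0.2, 0.9, 0.1, 0.7],
    "two-handed-hold": [0.8, 0.8, 0.6, 0.6],
}

def classify(reading):
    # Assign the grasp whose centroid is closest in Euclidean distance.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda g: dist(reading, CENTROIDS[g]))

label = classify([0.85, 0.25, 0.65, 0.15])
```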
**FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known.
**TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing. TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces.
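The physics behind TDR-based localization is compact: a touch changes the wire's local impedance, part of the injected pulse reflects there, and the round-trip delay gives the position. A back-of-the-envelope sketch, assuming a typical velocity factor (the real value depends on the wire used):

```python
# Locating a touch along a wire from a TDR reflection's round-trip delay.
# The velocity factor is an assumed typical value, not TDRtouch's calibration.

C = 299_792_458.0        # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66   # typical for coaxial cable; wire-dependent

def touch_position(round_trip_s: float) -> float:
    """Distance (m) from the pulse source to the touch point."""
    # Signal travels to the touch and back, hence the division by two.
    return C * VELOCITY_FACTOR * round_trip_s / 2.0

# A reflection arriving 10 ns after the pulse sits roughly 0.99 m along the wire.
pos = touch_position(10e-9)
```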
I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects.
Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information.
For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object:
*Goal* -- what we want to do with the object,
*Relationship* -- what we know and feel about the object we want to grasp,
*Anatomy* -- hand shape and learned movement patterns,
*Setting* -- surrounding and environmental conditions, and
*Properties* -- texture, shape, weight, and other intrinsics of the object.
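The five GRASP categories can be written down as a minimal data structure; the field contents below are illustrative examples, since the model itself only names the categories:

```python
# Data-structure sketch of the GRASP model's five factor categories.
from dataclasses import dataclass

@dataclass
class GraspFactors:
    goal: str          # what the user wants to do with the object
    relationship: str  # what the user knows and feels about the object
    anatomy: str       # hand shape and learned movement patterns
    setting: str       # surrounding and environmental conditions
    properties: str    # texture, shape, weight, other intrinsics

example = GraspFactors(
    goal="take a photo",
    relationship="own phone, familiar",
    anatomy="adult right hand",
    setting="standing, daylight",
    properties="rigid slab, 180 g",
)
```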
I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
Intuitive Human-Machine Interfaces for Non-Anthropomorphic Robotic Hands
As robots become more prevalent in our everyday lives, both in our workplaces and in our homes, it becomes increasingly likely that people who are not experts in robotics will be asked to interface with robotic devices. It is therefore important to develop robotic controls that are intuitive and easy for novices to use. Robotic hands, in particular, are very useful, but their high dimensionality makes creating intuitive human-machine interfaces for them complex. In this dissertation, we study the control of non-anthropomorphic robotic hands by non-roboticists in two contexts: collaborative manipulation and assistive robotics.
In the field of collaborative manipulation, the human and the robot work side by side as independent agents. Teleoperation allows the human to assist the robot when autonomous grasping is not able to deal sufficiently well with corner cases or cannot operate fast enough. Using the teleoperator’s hand as an input device can provide an intuitive control method, but finding a mapping between a human hand and a non-anthropomorphic robot hand can be difficult, due to the hands’ dissimilar kinematics. In this dissertation, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users.
We propose a low-dimensional and continuous teleoperation subspace which can be used as an intermediary for mapping between different hand pose spaces. We first propose the general concept of the subspace, its properties and the variables needed to map from the human hand to a robot hand. We then propose three ways to populate the teleoperation subspace mapping. Two of our mappings use a dataglove to harvest information about the user's hand. We define the mapping between joint space and teleoperation subspace with an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and with an algorithmic definition, which is kinematically independent, and uses objects to define the subspace. Our third mapping for the teleoperation subspace uses forearm electromyography (EMG) as a control input.
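The subspace idea can be sketched in its simplest, linear form: project the human hand's joint angles into a low-dimensional point, then map that point into the robot hand's joint space. The matrices, dimensions, and joint counts below are invented stand-ins; the dissertation populates these mappings empirically, algorithmically, or from EMG, not with random matrices:

```python
# Linear sketch of a teleoperation subspace as an intermediary between two
# dissimilar hand pose spaces. All matrices and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_human, n_robot, n_sub = 20, 9, 3   # assumed joint counts and subspace size

A = rng.standard_normal((n_sub, n_human))   # human joint space -> subspace
B = rng.standard_normal((n_robot, n_sub))   # subspace -> robot joint space

def teleoperate(human_joints: np.ndarray) -> np.ndarray:
    z = A @ human_joints   # low-dimensional pose representation
    return B @ z           # robot hand configuration

q_robot = teleoperate(rng.standard_normal(n_human))
```

The point of the intermediary is that only `A` depends on the input device and only `B` on the robot hand, so either side can be swapped without redefining the other.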
Assistive orthotics is another area of robotics where human-machine interfaces are critical, since, in this field, the robot is attached to the hand of the human user. In this case, the goal is for the robot to assist the human with movements they would not otherwise be able to achieve. Orthotics can improve the quality of life of people who do not have full use of their hands. Human-machine interfaces for assistive hand orthotics that use EMG signals from the affected forearm as input are intuitive, and repeated use can strengthen the muscles of the user's affected arm. In this dissertation, we seek to create an EMG-based control for an orthotic device used by people who have had a stroke. We would like our control to enable functional motions when used in conjunction with an orthosis and to be robust to changes in the input signal.
We propose a control for a wearable hand orthosis which uses an easy-to-don, commodity forearm EMG band. We develop a supervised algorithm to detect a user's intent to open and close their hand, and pair this algorithm with a training protocol which makes our intent detection robust to changes in the input signal. We show that this algorithm, when used in conjunction with an orthosis over several weeks, can improve distal function in users. Additionally, we propose two semi-supervised intent detection algorithms designed to keep our control robust to changes in the input data while reducing the length and frequency of our training protocol.
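A minimal sketch of supervised open/close intent detection from multichannel EMG: extract a mean-absolute-value feature per channel, then classify with a nearest-centroid rule learned from labeled examples. Channel counts, sample values, and the feature choice are assumptions for illustration, not the dissertation's actual control:

```python
# Toy supervised intent detection from windowed EMG: MAV features per
# channel, nearest-centroid classification. All data is invented.

def mav(window):
    """Mean absolute value of one EMG window (a list of samples)."""
    return sum(abs(s) for s in window) / len(window)

def features(channels):
    return [mav(w) for w in channels]   # one MAV feature per channel

def train(examples):
    """examples: list of (channels, label). Returns per-label mean features."""
    sums, counts = {}, {}
    for channels, label in examples:
        f = features(channels)
        acc = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, channels):
    f = features(channels)
    def d2(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda lab: d2(centroids[lab]))

open_ex  = ([[0.8, -0.9, 0.7], [0.1, -0.1, 0.1]], "open")
close_ex = ([[0.1, -0.1, 0.1], [0.9, -0.8, 0.7]], "close")
model = train([open_ex, close_ex])
```

Retraining the centroids from a short calibration session is one simple way such a control could absorb day-to-day drift in the EMG signal.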
Biomimetic Based EEG Learning for Robotics Complex Grasping and Dexterous Manipulation
There have been tremendous efforts to understand the biological nature of human grasping, so that it can be learned and transferred to prosthetic and robotic dexterous-grasping applications. Several biomimetic methods and techniques have been adopted to analytically comprehend how humans perform grasping and to duplicate that knowledge. A major topic for further study is decoding the EEG brainwaves produced while moving the fingers and other parts of the hand. This involves a number of phases, including recording, pre-processing, filtering, and interpretation of the waves. Two phases in particular have received substantial research attention: the classification and the decoding of such massive and complex brain waves, as they are two important steps towards understanding patterns during grasping. In this respect, the fundamental objective of this research is to demonstrate how advanced pattern-recognition methods, such as fuzzy c-means clustering, can be employed to interpret the resulting EEG brainwaves, so that a prosthetic or robotic hand can be controlled from sets of detected EEG signals. Among the many available decoding and classification methods, we examine fuzzy clustering blended with principal component analysis (PCA) as the decoding mechanism. EEG brainwaves recorded during grasping and manipulation were used for this analysis, covering the movement of five fingers during a defined grasping task. The study found that decoding all finger motions is not a straightforward task, owing to the complexity of grasping; however, the adopted analysis was able to classify and identify the distinct fundamental events occurring during a simple grasping task.
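Fuzzy c-means, the clustering technique named above, assigns each sample a graded membership in every cluster rather than a hard label. A minimal sketch with fuzzifier m = 2 follows; the 2-D data is synthetic, not EEG, and a real pipeline would first reduce the recorded waves with PCA as the abstract describes:

```python
# Minimal fuzzy c-means (fuzzifier m = 2) on synthetic 2-D data.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # membership rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated blobs -> memberships should split them cleanly.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, U = fuzzy_c_means(X, c=2)
```

The soft memberships in `U` are what make the method attractive for EEG, where events blur into each other rather than switching crisply.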
Differentiable Robot Neural Distance Function for Adaptive Grasp Synthesis on a Unified Robotic Arm-Hand System
Grasping is a fundamental skill for robots to interact with their environment. While grasp execution requires coordinated movement of the hand and arm to achieve a collision-free and secure grip, many grasp synthesis studies address arm and hand motion planning independently, leading to potentially unreachable grasps in practical settings. The challenge of determining integrated arm-hand configurations arises from its computational complexity and high-dimensional nature. We address this challenge by presenting a novel differentiable robot neural distance function. Our approach excels in capturing intricate geometry across various joint configurations while preserving differentiability. This innovative representation proves instrumental in efficiently addressing downstream tasks with stringent contact constraints. Leveraging this, we introduce an adaptive grasp synthesis framework that exploits the full potential of the unified arm-hand system for diverse grasping tasks. Our neural joint-space distance function achieves an 84.7% error reduction compared to baseline methods. We validated our approaches on a unified robotic arm-hand system that consists of a 7-DoF robot arm and a 16-DoF multi-fingered robotic hand. Results demonstrate that our approach empowers this high-DoF system to generate and execute various arm-hand grasp configurations that adapt to the size of the target objects while ensuring whole-body movements to be collision-free.
Comment: Under review.
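The core idea, a network mapping a joint configuration and a query point to an approximate distance to the robot's surface while staying differentiable, can be sketched with a tiny MLP. The architecture and weights below are random, untrained stand-ins, not the paper's model; the point is only that the analytic gradient is available for gradient-based grasp synthesis:

```python
# Conceptual sketch of a joint-space neural distance function: an MLP maps
# (joint configuration q, query point p) to a scalar distance estimate, with
# an analytic gradient w.r.t. p. Weights are random, untrained placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_hidden = 23, 32          # 7-DoF arm + 16-DoF hand, as in the text
W1 = rng.standard_normal((n_hidden, n_joints + 3)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal(n_hidden) * 0.1

def distance(q, p):
    """Predicted distance from point p (shape (3,)) at configuration q."""
    x = np.concatenate([q, p])
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h)

def grad_p(q, p):
    """Analytic gradient of the prediction with respect to p."""
    x = np.concatenate([q, p])
    h = np.tanh(W1 @ x + b1)
    # chain rule: d/dz tanh(z) = 1 - tanh(z)^2; p occupies the last 3 inputs
    return (W2 * (1.0 - h ** 2)) @ W1[:, -3:]

q = rng.standard_normal(n_joints)
p = np.array([0.3, 0.1, 0.4])
g = grad_p(q, p)
```

Because the gradient is exact and cheap, contact constraints of the form "keep this point at distance zero" can be pushed into a standard gradient-based optimizer.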