Completely Automated Public Physical test to tell Computers and Humans Apart: A usability study on mobile devices
A very common approach to fighting the increasing sophistication and danger of malware and hacking is to introduce more complex authentication mechanisms. This approach, however, places an additional cognitive burden on users and lowers the acceptability of the whole authentication mechanism, to the point of making it unusable. On the contrary, what is really needed to fight the onslaught of automated attacks on users' data and privacy is to first tell humans and computers apart, and only then distinguish among humans to guarantee correct authentication. Such an approach completely thwarts any automated attempt to gain unwarranted access, while keeping the mechanism dedicated to recognizing the legitimate user simple. This is the idea behind the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA); yet CAPTCHA leverages cognitive capabilities, so the increasing sophistication of computers calls for ever more difficult cognitive tasks, which become either very long to solve or very prone to false negatives. We argue that this problem can be overcome by substituting the cognitive component of CAPTCHA with a different property that programs cannot mimic: the physical nature of the user. In past work we introduced the Completely Automated Public Physical test to tell Computers and Humans Apart (CAPPCHA) as a way to enhance the PIN authentication method for mobile devices, and we provided a proof-of-concept implementation. Like CAPTCHA, this mechanism can also be used to prevent automated programs from abusing online services. However, evaluating the real efficacy of the proposed scheme requires an extended empirical assessment of CAPPCHA as well as a comparison of its performance with the existing state of the art. To this aim, in this paper we carry out an extensive experimental study of both the performance and the usability of CAPPCHA involving a large number of physical users, and we compare CAPPCHA with existing flavors of CAPTCHA.
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
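For the familiar-object case, the similarity-matching idea can be illustrated with a minimal sketch: previously encountered objects are stored as feature descriptors together with grasps that worked for them, and a new object inherits the grasp of its nearest neighbor in descriptor space. The descriptor values and grasp labels below are hypothetical placeholders, not taken from any particular surveyed system.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical database: one shape descriptor per known object,
# paired with a grasp that previously succeeded on that object.
descriptors = np.array([
    [0.12, 0.80, 0.30],   # e.g. elongation, curvature, size (illustrative)
    [0.75, 0.20, 0.55],
    [0.40, 0.40, 0.90],
])
stored_grasps = ["top-down pinch", "side power grasp", "spherical grasp"]

# Index the descriptors for similarity matching.
index = NearestNeighbors(n_neighbors=1).fit(descriptors)

def transfer_grasp(new_descriptor):
    """Return the grasp of the most similar previously encountered object."""
    dist, idx = index.kneighbors([new_descriptor])
    return stored_grasps[idx[0][0]], dist[0][0]

grasp, distance = transfer_grasp([0.70, 0.25, 0.50])
print(grasp, distance)  # expected: the grasp stored for the closest object
```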
Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data
Object manipulation actions represent an important share of the Activities of
Daily Living (ADLs). In this work, we study how to enable service robots to use
human multi-modal data to understand object manipulation actions, and how they
can recognize such actions when humans perform them during human-robot
collaboration tasks. The multi-modal data in this study consists of videos,
hand motion data, applied forces as represented by the pressure patterns on the
hand, and measurements of the bending of the fingers, collected as human
subjects performed manipulation actions. We investigate two different
approaches. In the first one, we show that the multi-modal signal (motion, finger
bending and hand pressure) generated by the action can be decomposed into a set
of primitives that can be seen as its building blocks. These primitives are
used to define 24 multi-modal primitive features. The primitive features can in
turn be used as an abstract representation of the multi-modal signal and
employed for action recognition. In the second approach, the visual features
are extracted from the data using a pre-trained image classification deep
convolutional neural network. The visual features are subsequently used to
train the classifier. We also investigate whether adding data from other
modalities produces a statistically significant improvement in the classifier
performance. We show that both approaches produce comparable performance.
This implies that image-based methods can successfully recognize human actions
during human-robot collaboration. On the other hand, in order to provide
training data for the robot so it can learn how to perform object manipulation
actions, multi-modal data provides a better alternative.
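A minimal sketch of the second, image-based pipeline is given below: frames are passed through a pre-trained image classification network used as a fixed feature extractor, and the resulting feature vectors train a conventional classifier. The choice of ResNet-18 and a linear SVM, as well as the frame paths and labels, are illustrative assumptions; the abstract does not specify these components.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

# Pre-trained CNN used as a fixed feature extractor (illustrative choice).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep 512-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def visual_features(image_path):
    """Return a 512-d feature vector for one video frame."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()

# Hypothetical training data: frame paths and their manipulation-action labels.
X = [visual_features(p) for p in ["frame_pour.png", "frame_cut.png"]]
y = ["pour", "cut"]

clf = LinearSVC().fit(X, y)            # train the action classifier on visual features
print(clf.predict([visual_features("frame_new.png")]))
```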
Behavior-specific proprioception models for robotic force estimation: a machine learning approach
Robots that support humans in physically demanding tasks require accurate force sensing capabilities. A common way to achieve this is by monitoring the interaction with the environment directly with dedicated force sensors. Major drawbacks of such special-purpose sensors are the increased costs and the reduced payload of the robot platform. Instead, this thesis investigates how the functionality of such sensors can be approximated by utilizing force estimation approaches. Most of today's robots are equipped with rich proprioceptive sensing capabilities, where even a robotic arm, e.g., the UR5, provides access to more than a hundred sensor readings. Following this trend, it is becoming feasible to utilize a wide variety of sensors for force estimation purposes. Human proprioception allows estimating forces, such as the weight of an object, from prior experience of sensory-motor patterns. Applying a similar approach to robots enables them to learn from previous demonstrations without the need for dedicated force sensors.
This thesis introduces Behavior-Specific Proprioception Models (BSPMs), a novel concept for enhancing robotic behavior with estimates of the expected proprioceptive feedback. A main methodological contribution is the operationalization of the BSPM approach using data-driven machine learning techniques. During a training phase, the behavior is continuously executed while recording proprioceptive sensor readings. The training data acquired from these demonstrations represents ground truth about behavior-specific sensory-motor experiences, i.e., the influence of performed actions and environmental conditions on the proprioceptive feedback. This data acquisition procedure does not require expert knowledge about the particular robot platform, e.g., kinematic chains or mass distribution, which is a major advantage over analytical approaches. The training data is then used to learn BSPMs, e.g., using lazy learning techniques or artificial neural networks. At runtime, the BSPMs provide estimates of the proprioceptive feedback that can be compared to actual sensations. The BSPM approach thus extends classical programming by demonstration methods, where only movement data is learned, and enables robots to accurately estimate forces during behavior execution.
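A minimal sketch of the BSPM idea, under assumed data shapes: during demonstrations of a behavior, joint states are recorded together with the proprioceptive readings they produced; a lazy learner (here k-nearest-neighbors regression, one of the model families mentioned above) then predicts the expected readings at runtime, and the deviation of the actual readings from that prediction serves as a force cue. The variable names, array shapes, and the choice of joint currents as the proprioceptive signal are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training phase: repeatedly execute the behavior without external contact and
# log joint states (positions, velocities) together with the proprioceptive
# readings (here: joint currents) they produce. Shapes are illustrative only.
joint_states = np.random.rand(500, 12)     # 6 positions + 6 velocities per sample
joint_currents = np.random.rand(500, 6)    # recorded proprioceptive feedback

# Behavior-Specific Proprioception Model: lazy learner mapping state -> expected feedback.
bspm = KNeighborsRegressor(n_neighbors=5).fit(joint_states, joint_currents)

def proprioceptive_residual(current_state, measured_currents):
    """Deviation between measured and expected proprioception, used as a force cue."""
    expected = bspm.predict(current_state.reshape(1, -1))[0]
    return measured_currents - expected

residual = proprioceptive_residual(np.random.rand(12), np.random.rand(6))
print(residual)   # large residuals indicate unexpected external forces
```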
On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation
Biological and robotic grasp and manipulation are undeniably similar at the
level of mechanical task performance. However, their underlying fundamental
biological vs. engineering mechanisms are, by definition, dramatically
different and can even be antithetical. Even our approach to each is
diametrically opposite: inductive science for the study of biological systems
vs. engineering synthesis for the design and construction of robotic systems.
The past 20 years have seen several conceptual advances in both fields and the
quest to unify them. Chief among them is the reluctant recognition that their
underlying fundamental mechanisms may actually share limited common ground,
while exhibiting many fundamental differences. This recognition is particularly
liberating because it allows us to resolve and move beyond multiple paradoxes
and contradictions that arose from the initial reasonable assumption of a large
common ground. Here, we begin by introducing the perspective of neuromechanics,
which emphasizes that real-world behavior emerges from the intimate
interactions among the physical structure of the system, the mechanical
requirements of a task, the feasible neural control actions to produce it, and
the ability of the neuromuscular system to adapt through interactions with the
environment. This allows us to articulate a succinct overview of a few salient
conceptual paradoxes and contradictions regarding under-determined vs.
over-determined mechanics, under- vs. over-actuated control, prescribed vs.
emergent function, learning vs. implementation vs. adaptation, prescriptive vs.
descriptive synergies, and optimal vs. habitual performance. We conclude by
presenting open questions and suggesting directions for future research. We
hope this frank assessment of the state-of-the-art will encourage and guide
these communities to continue to interact and make progress in these important
areas.
Voronoi Features for Tactile Sensing: Direct Inference of Pressure, Shear, and Contact Locations
There are a wide range of features that tactile contact provides, each with
different aspects of information that can be used for object grasping,
manipulation, and perception. In this paper, inference of some key tactile
features (tip displacement, contact location, shear direction and magnitude) is
demonstrated by introducing a novel method of transducing a third dimension to
the sensor data via Voronoi tessellation. The inferred features are displayed
throughout the work in a new visualisation mode derived from the Voronoi
tessellation; these visualisations create easier interpretation of data from an
optical tactile sensor that measures local shear from displacement of internal
pins (the TacTip). The output values of tip displacement and shear magnitude
are calibrated to appropriate mechanical units and validate the direction of
shear inferred from the sensor. We show that these methods can infer the
direction of shear to 2.3° without the need for training a
classifier or regressor. The approach demonstrated here will increase the
versatility and generality of the sensors and thus allow them to be used in
more unstructured and unknown environments, as well as improve the use of these
tactile sensors in more complex systems such as robot hands.
Comment: Presented at ICRA 201
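The core transduction step can be sketched as follows: the 2D pin positions imaged by the sensor are tessellated, and each pin's Voronoi cell area provides a third dimension, since local pressure spreads the pins apart and enlarges their cells. This is an illustrative reconstruction using SciPy under synthetic pin coordinates, not the authors' released code.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def cell_areas(pin_xy):
    """Area of each pin's Voronoi cell; np.nan for unbounded boundary cells."""
    vor = Voronoi(pin_xy)
    areas = np.full(len(pin_xy), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue                      # unbounded cell at the border, skip
        areas[i] = ConvexHull(vor.vertices[region]).volume  # 2D hull "volume" = area
    return areas

# Synthetic pin layout (illustrative): a regular grid standing in for the sensor pins.
xx, yy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
rest = np.column_stack([xx.ravel(), yy.ravel()])

# Simulated contact: pins near the centre are pushed apart slightly.
pressed = rest + 0.02 * (rest - 0.5) * np.exp(-((rest - 0.5) ** 2).sum(1, keepdims=True) / 0.05)

# The change in cell area acts as the transduced third dimension: growth indicates
# local pressure, and its spatial peak gives the contact location.
delta = cell_areas(pressed) - cell_areas(rest)
print(np.nanargmax(delta), np.nanmax(delta))
```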