
    The Anthropomorphic Hand Assessment Protocol (AHAP)

    The progress in the development of anthropomorphic hands for robotic and prosthetic applications has not been matched by a parallel development of objective methods to evaluate their performance. The need for benchmarking in grasping research has been recognized by the robotics community as an important topic. In this study we present the Anthropomorphic Hand Assessment Protocol (AHAP) to address this need by providing a measure for quantifying the grasping ability of artificial hands and comparing hand designs. To this end, the AHAP uses 25 objects from the publicly available Yale-CMU-Berkeley Object and Model Set, thereby enabling replicability. It comprises 26 postures/tasks: grasping with the eight most relevant human grasp types, plus two non-grasping postures. The AHAP quantifies the anthropomorphism and functionality of artificial hands through a numerical Grasping Ability Score (GAS). The AHAP was tested with different hands: the first version of the hand of the humanoid robot ARMAR-6, in three configurations resulting from the attachment of pads to the fingertips and palm, as well as two versions of the KIT Prosthetic Hand. The benchmark was used to demonstrate the improvements of these hands in aspects such as grasping surface, grasp force and finger kinematics. The reliability, consistency and responsiveness of the benchmark were statistically analyzed, indicating that the AHAP is a powerful tool for evaluating and comparing artificial hand designs.
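    The paper defines the exact AHAP scoring rubric; the sketch below is only an illustrative assumption of how such a benchmark score might be aggregated, scoring each posture/task trial as 0 (fail), 0.5 (partial) or 1 (success) and normalizing the mean to a percentage. The function name, data and rubric here are hypothetical, not the published protocol.

```python
# Hypothetical sketch of aggregating a Grasping Ability Score (GAS).
# The real AHAP rubric differs; scores per trial are assumed to be
# 0 (fail), 0.5 (partial) or 1 (success).

def grasping_ability_score(trials):
    """trials: dict mapping (posture, object) -> score in {0, 0.5, 1}.
    Returns the GAS as a percentage of the maximum attainable score."""
    if not trials:
        raise ValueError("no trials recorded")
    return 100.0 * sum(trials.values()) / len(trials)

# Example: two grasp types tested on two (invented) objects each.
trials = {
    ("cylindrical", "can"): 1.0,
    ("cylindrical", "bottle"): 0.5,
    ("pinch", "key"): 1.0,
    ("pinch", "coin"): 0.0,
}
print(grasping_ability_score(trials))  # 62.5
```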

    A Survey of League Championship Algorithm: Prospects and Challenges

    The League Championship Algorithm (LCA) is a sport-inspired optimization algorithm introduced by Ali Husseinzadeh Kashan in 2009. It has since drawn enormous interest among researchers because of its potential efficiency in solving many optimization problems and real-world applications. The LCA has also shown great potential for solving non-deterministic polynomial-time (NP-complete) problems. This survey presents a brief synopsis of the LCA literature in peer-reviewed journals, conferences and book chapters. These research articles are then categorized according to their indexing in the major academic databases (Web of Science, Scopus, IEEE Xplore and Google Scholar). An analysis was also performed to explore the prospects and challenges of the algorithm and its acceptability among researchers. This systematic categorization can be used as a basis for future studies. Comment: 10 pages, 2 figures, 2 tables, Indian Journal of Science and Technology, 201
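    The LCA models candidate solutions as teams in a sports league whose weekly match outcomes drive new "team formations". The sketch below is a heavily simplified, assumption-laden illustration of that loop: teams are paired each week, the fitter team "wins", and the loser moves its formation toward the winner's with some noise. It deliberately omits the SWOT-based formation update and schedule structure of Kashan's actual algorithm.

```python
import random

def lca_minimize(fitness, dim, teams=8, seasons=50, step=0.3, seed=0):
    """Toy league-style search: each week teams play pairwise matches;
    the loser nudges its formation toward the winner's, plus noise.
    Illustrative only -- not Kashan's full LCA."""
    rng = random.Random(seed)
    league = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(teams)]
    for _ in range(seasons):
        order = list(range(teams))
        rng.shuffle(order)
        for a, b in zip(order[::2], order[1::2]):  # weekly pairings
            winner, loser = (a, b) if fitness(league[a]) <= fitness(league[b]) else (b, a)
            league[loser] = [
                x + step * (w - x) + rng.gauss(0, 0.05)
                for x, w in zip(league[loser], league[winner])
            ]
    return min(league, key=fitness)

# Minimizing the sphere function as a toy benchmark.
best = lca_minimize(lambda v: sum(x * x for x in v), dim=2)
print(round(sum(x * x for x in best), 3))
```

    Because the incumbent best team never loses a match, the best fitness in the league is non-increasing over seasons, which is the minimal invariant any such champion-preserving scheme should satisfy.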

    Spatial Perception and Robot Operation: The Relationship between Visual Spatial Ability and Performance under Direct Line of Sight and Teleoperation

    This dissertation investigated the relationship between the spatial perception abilities of operators and robot operation under direct-line-of-sight and teleoperation viewing conditions. The study was an effort to determine whether spatial ability testing may be a useful tool in the selection of human-robot interaction (HRI) operators. Participants completed eight cognitive ability measures and operated one of four types of robots on tasks of low and high difficulty. Each participant's performance was tested under both direct line of sight and teleoperation. The results provide additional evidence that spatial perception abilities are reliable predictors of direct-line-of-sight and teleoperation performance. Participants with higher spatial abilities performed faster, with fewer errors and with less variability, and were more successful in accumulating points. Applications of these findings are discussed in terms of teleoperator selection tools and HRI training and design recommendations within a human-centered design approach.

    Haptic Shared Control in Tele-Manipulation: Effects of Inaccuracies in Guidance on Task Execution

    Haptic shared control is a promising approach to improve tele-manipulated task execution by making safe and effective control actions tangible through guidance forces. In current research, these guidance forces are most often generated from pre-built, errorless models of the remote environment; hence such guidance forces are exempt from the inaccuracies that can be expected in practical implementations. The goal of this research is to quantify the extent to which task execution is degraded by inaccuracies in the model on which haptic guidance forces are based. In a human-in-the-loop experiment, subjects (n = 14) performed a realistic tele-manipulated assembly task in a virtual environment. Operators were provided with various levels of haptic guidance: no haptic guidance (conventional tele-manipulation), haptic guidance without inaccuracies, and haptic guidance with translational inaccuracies (one large inaccuracy, on the order of magnitude of the task, and a second smaller one). The quality of natural haptic feedback (i.e., haptic transparency) was varied between high and low to identify the operator's ability to detect and cope with inaccuracies in the haptic guidance. The results indicate that haptic guidance is beneficial for task execution when no inaccuracies are present in the guidance. When inaccuracies are present, they may degrade task execution, depending on the magnitude and direction of the inaccuracy. The effect of inaccuracies on overall task performance is dominated by effects found for the Constrained Translational Movement, due to its potential for jamming. No evidence was found that a higher quality of haptic transparency helps operators to detect and cope with inaccuracies in the haptic guidance.

    Robotic bin-picking: Benchmarking robotics grippers with modified YCB object and model set

    Robotic bin-picking is increasingly important in the order-picking process in intralogistics. However, many aspects of the robotic bin-picking process (object detection, grasping, manipulation) still require the research community's attention. Established test methods for robotic grippers enable comparability of the research community's results. This study presents a modified YCB Robotic Gripper Assessment Protocol that was used to evaluate the performance of four robotic grippers (two-fingered, vacuum, gecko, and soft gripper). During testing, 45 objects from the modified YCB Object and Model Set, drawn from the packaging, tools, small objects, spherical objects, and deformable objects categories, were grasped and manipulated. The evaluation shows that while some robotic grippers performed substantially well, there is considerable variation in grasp success across objects. The results indicate that selecting the object grasp point, in addition to selecting the most suitable robotic gripper, is critical for successful grasping. We therefore propose determining grasp points through mechanical software simulation with a model of a two-fingered gripper in an ADAMS/MATLAB co-simulation. Performing software simulations for this task can save time and gives results comparable to real-world experiments.
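    To make the reported per-object variation concrete, the snippet below sketches one way to aggregate raw trial logs into per-gripper, per-category success rates. The gripper and category names follow the abstract, but the data and helper function are invented for illustration.

```python
from collections import defaultdict

# Hypothetical trial log: (gripper, object_category, success) tuples.
trials = [
    ("two-fingered", "tools", True), ("two-fingered", "tools", False),
    ("vacuum", "packaging", True), ("vacuum", "packaging", True),
    ("gecko", "spherical", False), ("soft", "deformable", True),
]

def success_rates(trials):
    """Return {(gripper, category): fraction of successful grasps}."""
    counts = defaultdict(lambda: [0, 0])  # [successes, attempts]
    for gripper, category, ok in trials:
        counts[(gripper, category)][0] += ok
        counts[(gripper, category)][1] += 1
    return {key: s / n for key, (s, n) in counts.items()}

rates = success_rates(trials)
print(rates[("two-fingered", "tools")])  # 0.5
```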

    Real-Time Supervision for Human Robot Teams in Complex Task Domains

    Ongoing research on multi-robot teams focuses on methods and systems for dynamic and dangerous environments such as search and rescue missions, often with a human operator in the loop to supervise the system and make critical decisions. To increase the size of the team controlled by an operator, and to reduce the operator's mental workload, the robots will have to be more autonomous and reliable so that tasks can be issued at a higher level. Typically in these domains, such high-level tasks are composed of smaller tasks with dependencies and constraints. Assigning suitable robot platforms to execute these tasks is a combinatorial optimization problem. Operations Research and AI techniques can handle large numbers of robot allocations in real time; however, most of these algorithms are opaque to humans: they provide no explanation or insight into how the solution is produced. Recent studies suggest that interaction between the human operator and the robot team requires human-centric approaches to collaborative planning and task allocation, since black-box solutions are often too complex to examine under stressful conditions and are often discarded by experts. The main contribution of this thesis is a methodology to help operators make decisions about complex task allocation in real time for high-stress missions. First, a novel human-centric graphical model, TAG, is described to analyze and predict the complexity of task assignment and scheduling problem instances, taking into account the spatial distribution of resources and tasks. The TAG model is then extended to dynamic environments as the MAP model. Two user studies were conducted, first in static and then in dynamic environments, to identify and empirically verify the key factors, derived from the graphical model, that affect the decision making of human supervisors during task assignment for a team of robots.
    In these user studies, participants used software tools developed for this work. One of these tools offers two levels of autonomy for the interaction scheme: manual control and collaborative control, with an option to invoke an automated assignment tool. Findings on the impact of decision support functionality on the mental workload and performance of the supervisor are presented. Finally, steering of the common algorithms utilized by decision support tools, using the strategies employed by user study participants and related to the TAG and MAP model parameters, is discussed.
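    As a concrete instance of the combinatorial optimization problem mentioned above, the sketch below solves a toy robot-to-task assignment by exhaustive search over permutations. The cost matrix is invented and this is not the thesis's solver; brute force is only viable for small teams, which is exactly why real-time decision support needs faster, and often more opaque, algorithms.

```python
from itertools import permutations

# Toy instance: cost[i][j] = cost of robot i executing task j (invented).
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def optimal_assignment(cost):
    """Exhaustively find the min-cost one-robot-per-task assignment.
    Returns (perm, total) where perm[i] is the task given to robot i."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

perm, total = optimal_assignment(cost)
print(perm, total)  # (1, 0, 2) 5
```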

    Self-organization of robotic devices through demonstrations

    The AMAS (Adaptive Multi-Agent Systems) theory proposes to solve, through self-organization, complex problems for which no algorithmic solution is known. The self-organizing behaviour of cooperative agents enables the system to adapt to a dynamic environment and to keep itself in a functionally adequate state. In this thesis, we apply the theory to the problem of control in ambient systems, and more particularly to service robotics. Service robotics is increasingly being integrated into ambient environments; we then speak of ambient robotics. Ambient systems have challenging characteristics, such as openness and heterogeneity, which make the task of control particularly complex. This complexity is increased if we take into account the specific, changing and sometimes contradictory needs of users. This thesis proposes to use the principles of self-organization to design a multi-agent system able to learn in real time to control a robotic device from demonstrations made by a tutor; this is learning from demonstration. By observing the activity of users and learning the context in which they act, the system learns a control policy that satisfies them. We propose a new paradigm for the design of robotic systems under the name Extreme Sensitive Robotics. Its core idea is to distribute control among the different functionalities that compose a system and to give each functionality the capacity to adapt autonomously to its environment. To evaluate the benefits of this paradigm, we designed ALEX (Adaptive Learner by EXperiments), an adaptive multi-agent system whose function is to learn, in ambient environments, to control a robotic device from demonstrations.
    The AMAS approach enables the design of software with emergent functionalities: the solution to a problem emerges from the cooperative interactions between a set of autonomous agents, each agent having only a partial perception of its environment. Applying this approach leads us to isolate the different agents involved in the control problem and to describe their local behaviours. We then identify a set of non-cooperative situations liable to disturb those behaviours, and propose a set of cooperation mechanisms to resolve and anticipate them. Experiments have shown the capacity of the system to learn in real time from observation of the user's activity, and have highlighted the benefits, limitations and perspectives offered by our approach to the problem of controlling ambient systems.
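    The thesis's agent-based mechanics are not reproduced here, but the core idea of learning a control policy from tutor demonstrations can be sketched in its simplest form: record (context, action) pairs, then act by recalling the action whose recorded context is nearest to the current one. All names and data below are invented for illustration.

```python
# Minimal learning-from-demonstration sketch (nearest-neighbour recall),
# in the spirit, not the mechanics, of the ALEX system described above.

def nearest_action(demos, context):
    """demos: list of (context_vector, action) pairs from the tutor.
    Returns the action recorded in the most similar context."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(demos, key=lambda d: sq_dist(d[0], context))[1]

demos = [
    ((0.0, 0.0), "stop"),       # tutor stopped near the dock
    ((1.0, 0.2), "forward"),    # tutor drove forward in open space
    ((0.1, 0.9), "turn_left"),  # tutor turned at a corner
]
print(nearest_action(demos, (0.9, 0.1)))  # forward
```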

    Integrating Perception, Prediction and Control for Adaptive Mobile Navigation

    Mobile robots capable of navigating seamlessly and safely in pedestrian-rich environments promise to bring robotic assistance closer to our daily lives. A key limitation of existing navigation policies is the difficulty of predicting and reasoning about the environment, including static obstacles and pedestrians. In this thesis, I explore three aspects of navigation to improve crowd-based navigation: prediction of occupied spaces, prediction of pedestrians, and measurement of uncertainty. The hypothesis is that improving prediction and uncertainty estimation will increase robot navigation performance, resulting in fewer collisions, faster speeds and more socially compliant motion in crowds. Specifically, this thesis focuses on techniques that allow mobile robots to predict occupied spaces that extend beyond the line of sight of the sensor. This is accomplished through the development of novel generative neural network architectures that enable map predictions exceeding the limitations of the sensor. Further, I extend these architectures to predict multiple hypotheses and use the variance of the hypotheses as a measure of uncertainty to formulate an information-theoretic map exploration strategy. Control algorithms that leverage the predicted occupancy map were also developed to demonstrate more robust, high-speed navigation on a physical small-form-factor autonomous car. I further extend the prediction and uncertainty approaches to modeling pedestrian motion for dynamic crowd navigation, including novel techniques that model human intent to predict the future motion of pedestrians. I show that this approach improves on state-of-the-art results in pedestrian prediction, and that errors in prediction can be used as a measure of uncertainty to adapt the risk sensitivity of the robot controller in real time.
    Finally, I show that the crowd navigation algorithm extends to socially compliant behavior in groups of pedestrians. This research demonstrates that combining obstacle and pedestrian prediction with uncertainty estimation yields more robust navigation policies, resulting in improved map exploration efficiency, faster robot motion, fewer collisions and more socially compliant robot motion within crowds.
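    The idea of adapting the controller's risk sensitivity to prediction uncertainty can be illustrated in miniature: treat disagreement among multiple prediction hypotheses as an uncertainty signal and scale down the commanded speed cap when it is high. The function, parameters and 1-D setup below are invented for illustration and are not the thesis's controller.

```python
import statistics

def risk_adapted_speed(hypotheses, v_max=2.0, sensitivity=4.0):
    """hypotheses: predicted 1-D pedestrian positions, one per prediction
    head. Higher variance (disagreement) -> lower commanded speed cap."""
    var = statistics.pvariance(hypotheses)
    return v_max / (1.0 + sensitivity * var)

print(risk_adapted_speed([3.0, 3.0, 3.0]))  # 2.0 (heads agree: full speed)
print(round(risk_adapted_speed([2.0, 3.0, 4.0]), 3))  # heads disagree: slower
```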

    Third Conference on Artificial Intelligence for Space Applications, part 1

    The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics covered.

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration.