54 research outputs found
Autonomous dexterous end-effectors for space robotics
The development of a knowledge-based controller is summarized for the Belgrade/USC robot hand, a five-fingered end effector designed for maximum autonomy. The biological principles underlying the hand and its architecture are presented. The conceptual and software aspects of the grasp-selection system are discussed, including the effects of both the geometry of the target object and the task to be performed. Some current research issues are also presented.
Artificial Intelligence and the Ethics of Self-learning Robots
The convergence of robotics technology with the science of artificial intelligence (or AI) is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced significant gains in the ability of artificial agents to perform, or even excel in, activities formerly thought to be the exclusive province of human intelligence, including abstract problem-solving, perceptual recognition, social interaction, and natural language use. These developments raise a host of new ethical concerns about the responsible design, manufacture, and use of robots enabled with artificial intelligence, particularly those equipped with self-learning capacities.
The potential public benefits of self-learning robots are immense. Driverless cars promise to vastly reduce human fatalities on the road while boosting transportation efficiency and reducing energy use. Robot medics with access to a virtual ocean of medical case data might one day be able to diagnose patients with far greater speed and reliability than even the best-trained human counterparts. Robots tasked with crowd control could predict the actions of a dangerous mob well before the signs are recognizable to law enforcement officers. Such applications, and many more that will emerge, have the potential to serve vital moral interests in protecting human life, health, and well-being.
Yet as this chapter will show, the ethical risks posed by AI-enabled robots are equally serious, especially since self-learning systems behave in ways that cannot always be anticipated or fully understood, even by their programmers. Some warn of a future where AI escapes our control, or even turns against humanity (Standage 2016); but other, far less cinematic dangers are much nearer to hand and are virtually certain to cause great harms if not promptly addressed by technologists, lawmakers, and other stakeholders. The task of ensuring the ethical design, manufacture, use, and governance of AI-enabled robots and other artificial agents is thus as critically important as it is vast.
Autonomous mobile robot teams
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which the robots use to transform their sensory information into appropriate action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
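The abstract describes a mapping from sensed entities to actions via tropisms (attractions and aversions). A minimal sketch of how such a sensor-to-action mapping might look is below; the entity names, actions, and weights are illustrative assumptions, not the paper's actual architecture or code.

```python
import random

# Hypothetical tropism table: each sensed entity maps to candidate
# (action, weight) pairs. Entities, actions, and weights are invented
# for illustration only.
TROPISMS = {
    "object":   [("gather", 5), ("decompose", 3)],
    "predator": [("flee", 8), ("defend", 2)],
    "obstacle": [("avoid", 10)],
}

def select_action(sensed_entity, rng=random):
    """Pick an action for a sensed entity, with probability
    proportional to the tropism weights."""
    actions, weights = zip(*TROPISMS[sensed_entity])
    return rng.choices(actions, weights=weights, k=1)[0]
```

Because the selection is weighted rather than deterministic, different robots (or the same robot over time) can exhibit varied but biased behavior from the same table, which is one way distributed group behavior can emerge without a central leader.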
Recursive Learning for Deformable Object Manipulation
©1997 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Presented at the 1997 8th International Conference on Advanced Robotics (ICAR '97), Hyatt Regency, Monterey, California, U.S.A., July 7-9, 1997. DOI: 10.1109/ICAR.1997.620294
This paper presents a generalized approach to the handling of 3-D deformable objects. Our task is to learn robotic grasping characteristics for a non-rigid object represented by a physically based model. The model is derived by discretizing the object into a network of interconnected particles and springs. Using Newtonian equations, we model the particle motion of a deformable object and thus calculate its deformation characteristics. These deformation characteristics allow us to learn the minimum forces necessary to successfully grasp the object; by storing these parameters in a learning table, we can subsequently retrieve the forces needed to grasp an object presented to the system at run time. This new learning method is presented, and the results of a virtual simulation are shown.
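The physically based model described above discretizes the object into particles connected by springs and integrates their motion with Newtonian equations. A minimal one-dimensional sketch of that idea follows; the integration scheme (explicit Euler), constants, and function names are assumptions for illustration, not the paper's implementation.

```python
def step(positions, velocities, springs, mass=1.0, k=50.0, damping=0.5, dt=0.01):
    """Advance a 1-D chain of spring-connected particles by one time step.

    positions, velocities: per-particle scalars.
    springs: list of (i, j, rest_length) tuples connecting particle indices.
    Constants (mass, stiffness k, damping, dt) are illustrative.
    """
    n = len(positions)
    forces = [0.0] * n
    for i, j, rest in springs:
        stretch = (positions[j] - positions[i]) - rest
        f = k * stretch                      # Hooke's law along the chain
        forces[i] += f                       # pulled toward j when stretched
        forces[j] -= f                       # equal and opposite reaction
    # Newton's second law with simple velocity damping, explicit Euler.
    new_v = [v + (f - damping * v) / mass * dt
             for v, f in zip(velocities, forces)]
    new_x = [x + v * dt for x, v in zip(positions, new_v)]
    return new_x, new_v
```

Stepping such a model under applied fingertip forces is how deformation characteristics could be computed and then tabulated against the minimum grasp force, along the lines the abstract describes.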
Intelligent learning for deformable object manipulation
©1999 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Presented at the 1999 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Monterey Bay, CA, November 1999. DOI: 10.1109/CIRA.1999.809935
The majority of manipulation systems are designed with the assumption that the objects being handled are rigid and do not deform when grasped. This paper addresses the problem of robotic grasping and manipulation of 3-D deformable objects, such as rubber balls or bags filled with sand. Specifically, we have developed a generalized learning algorithm for the handling of 3-D deformable objects in which prior knowledge of object attributes is not required, so it can be applied to a large class of object types. Our methodology relies on two main tasks. The first is to calculate deformation characteristics for a non-rigid object represented by a physically based model. Using nonlinear partial differential equations, we model the particle motion of the deformable object in order to calculate the deformation characteristics. The second is to calculate the minimum force required to successfully lift the deformable object. This minimum lifting force can be learned using a technique called 'iterative lifting'. Once the deformation characteristics and the associated lifting-force term are determined, they are used to train a neural network that extracts the minimum force required for subsequent deformable-object manipulation tasks. The developed algorithm is validated with two sets of experiments. The first set of results is derived from an implementation of the algorithm in a simulated environment. The second set involves a physical implementation of the technique, whose outcome is compared with the simulation results to test the real-world validity of the developed methodology.
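The core of 'iterative lifting' is to increase the applied force in small steps until a lift attempt succeeds, yielding the minimum lifting force for that object. A hedged sketch of that loop, with a toy threshold model standing in for the physical or simulated lift test (the step size, limits, and names are assumptions):

```python
def iterative_lift(lift_succeeds, start=0.0, step=0.1, max_force=100.0):
    """Return the smallest tested force for which lift_succeeds(force)
    is True, stepping upward from `start` in increments of `step`."""
    force = start
    while force <= max_force:
        if lift_succeeds(force):
            return force
        force += step
    raise RuntimeError("no lifting force found within the force limit")

# Toy stand-in for the lift test: succeeds once the applied force exceeds
# a required minimum that is unknown to the learner.
required = 2.35
min_force = iterative_lift(lambda f: f >= required)
```

In the paper's pipeline, the minimum force found this way, together with the object's deformation characteristics, would form one training pair for the neural network that later predicts lifting forces directly.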
Robots in War: Issues of Risk and Ethics
War robots clearly hold tremendous advantages, from saving the lives of our own soldiers, to safely defusing roadside bombs, to operating in inaccessible and dangerous environments such as mountainside caves and underwater. Without emotions and other liabilities on the battlefield, they could conduct warfare more ethically and effectively than human soldiers, who are susceptible to overreactions, anger, vengeance, fatigue, low morale, and so on. But the use of robots, especially autonomous ones, raises a host of ethical and risk issues. This paper offers a survey of such emerging issues in this new but rapidly advancing area of technology.
Altruistic Task Allocation despite Unbalanced Relationships within Multi-Robot Communities
Typical Multi-Robot Systems consist of robots cooperating to maximize global fitness functions. However, in some scenarios, the set of interacting robots may not share common goals, and thus the concept of a global fitness function becomes invalid. This work examines Multi-Robot Communities (MRC), in which individual robots have independent goals. Within the MRC context, we present a task allocation architecture that optimizes individual robot fitness functions over long time horizons using reciprocal altruism.
Previous work has shown that reciprocating altruistic relationships can evolve between two willing robots, using market-based task auctions, while still protecting against selfish robots aiming to exploit altruism. As these relationships grow, robots are increasingly likely to perform tasks for one another without any reward or promise of payback. This work furthers this notion by considering cases where an imbalance exists in the altruistic relationship. The imbalance occurs when one robot can perform another robot's task, thereby exhibiting altruism, but the other robot cannot reciprocate because it is physically unable (e.g. it lacks adequate sensors or actuators). A new altruistic controller to deal with such imbalances is presented. The controller permits a robot to build altruistic relationships with the community as a whole (one-to-many), instead of just with single robots (one-to-one). The controller is proven stable and guarantees that altruistic relationships will grow, if robots are willing, while still minimizing the effects of selfish robots. Results indicate that the one-to-many controller performs comparably to the one-to-one controller on most problems, but excels in the case of an unbalanced altruistic relationship.
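The one-to-many idea can be illustrated with a single altruism ledger kept against the community as a whole, rather than one ledger per peer. The sketch below is an assumption-laden simplification (the class, its update rules, and the acceptance test are invented for illustration; the paper's actual controller and stability proof are not reproduced here).

```python
class CommunityAltruismController:
    """Toy one-to-many altruism ledger: a single balance of favors
    with the whole community, not a per-robot account."""

    def __init__(self, willingness=1.0):
        self.willingness = willingness       # how readily goodwill is spent
        self.community_balance = 0.0         # favors received minus favors given

    def should_help(self, task_cost, offered_payment=0.0):
        """Accept an auctioned task if payment plus accumulated
        community goodwill covers its cost."""
        goodwill = self.willingness * self.community_balance
        return offered_payment + goodwill >= task_cost

    def record_help_received(self, benefit):
        self.community_balance += benefit

    def record_help_given(self, cost):
        self.community_balance -= cost
```

Because the balance is community-wide, a robot that can never be repaid by one particular peer can still "cash in" goodwill earned from others, which is exactly the unbalanced-relationship case the abstract highlights.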