300 research outputs found

    Envisioning the qualitative effects of robot manipulation actions using simulation-based projections

    Autonomous robots that are to perform complex everyday tasks such as making pancakes have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but assumes that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections, thereby allowing a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based, first-order, symbolic qualitative representations called timelines. The result of the envisioning is a set of detailed narratives, represented by timelines, which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks.
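    The simulate-log-abstract pipeline described above can be illustrated compactly. The following Python fragment is a minimal sketch, assuming a hypothetical fluent function and a dense state log sampled at a fixed rate; it is not the framework's actual API, only an illustration of collapsing a sub-symbolic trace into an interval-based timeline.

```python
# Minimal sketch: convert a logged simulation trace into an
# interval-based qualitative timeline. The state format, fluent
# names, and threshold are illustrative assumptions.
from itertools import groupby

def qualitative(state):
    # Hypothetical abstraction: map a numeric pancake height
    # to a qualitative fluent.
    return "on_pan" if state["pancake_z"] < 0.02 else "airborne"

def to_timeline(trace, dt=0.01):
    """Collapse a dense state log into maximal intervals over
    which the qualitative fluent stays constant."""
    timeline, t = [], 0.0
    for fluent, group in groupby(trace, key=qualitative):
        n = len(list(group))
        timeline.append((round(t, 3), round(t + n * dt, 3), fluent))
        t += n * dt
    return timeline

# Example: a flipping pancake logged at 100 Hz.
trace = ([{"pancake_z": 0.01}] * 30 + [{"pancake_z": 0.15}] * 20
         + [{"pancake_z": 0.01}] * 50)
print(to_timeline(trace))
# [(0.0, 0.3, 'on_pan'), (0.3, 0.5, 'airborne'), (0.5, 1.0, 'on_pan')]
```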

    Do robots outperform humans in human-centered domains?

    The incessant progress of robotic technology and the rationalization of human manpower induce high expectations in society, but also resentment and even fear. In this paper, we present a quantitative normalized comparison of performance to shed light on the pressing question, "How close is the current state of humanoid robotics to outperforming humans in their typical functions (e.g., locomotion, manipulation) and their underlying structures (e.g., actuators/muscles) in human-centered domains?" This is the most comprehensive comparison in the literature so far. Most state-of-the-art robotic structures required for visual, tactile, or vestibular perception outperform human structures at the cost of slightly higher mass and volume. Electromagnetic and fluidic actuation outperform human muscles with respect to speed, endurance, force density, and power density, excluding components for energy storage and conversion. Artificial joints and links can compete with the human skeleton. In contrast, the comparison of locomotion functions shows that robots trail behind in energy efficiency, operational time, and transportation costs. Robots are capable of obstacle negotiation, object manipulation, swimming, playing soccer, and vehicle operation. Despite the impressive advances of humanoid robots in the last two decades, current robots do not yet reach the dexterity and versatility needed to cope with more complex manipulation and locomotion tasks (e.g., in confined spaces). We conclude that state-of-the-art humanoid robotics is far from matching the dexterity and versatility of human beings. Despite the outperforming technical structures, robot functions are inferior to human ones, even with tethered robots that could place heavy auxiliary components off-board. The persistent advances in robotics let us anticipate a diminishing of this gap.
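    The normalization underlying such a comparison can be made concrete with a small sketch. The function and all numbers below are illustrative placeholders, not the paper's data or methodology: each robot metric is divided by the corresponding human value so that scores become dimensionless and comparable across functions and structures.

```python
# Illustrative sketch of a normalized robot-vs-human comparison.
# A score > 1 means the robot outperforms the human on that metric;
# < 1 means it trails behind. All numbers are made up for illustration.
def normalized_score(robot_value, human_value, higher_is_better=True):
    ratio = robot_value / human_value
    return ratio if higher_is_better else 1.0 / ratio

# Cost of transport (dimensionless, lower is better): humans ~0.2,
# a hypothetical humanoid ~0.6 -> score ~0.33, i.e., trailing behind.
print(normalized_score(0.6, 0.2, higher_is_better=False))
```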

    From walking to running: robust and 3D humanoid gait generation via MPC

    Humanoid robots are platforms that can succeed in tasks conceived for humans. From locomotion in unstructured environments, to driving cars, to working in industrial plants, these robots have a potential that is yet to be unlocked in systematic everyday applications. Such a perspective, however, is opposed by the need to solve complex engineering problems on both the hardware and the software side. In this thesis, we focus on the software side of the problem, and in particular on locomotion control. The operation of a legged humanoid hinges on its capability to realize reliable locomotion. In many settings, perturbations may undermine balance and make the robot fall. Moreover, complex and dynamic motions might be required by the context: for instance, the robot may need to run or climb stairs to reach a certain location in the shortest time. We present gait generation schemes based on Model Predictive Control (MPC) that tackle both the problem of robustness and that of three-dimensional dynamic motions. The proposed control schemes adopt the typical paradigm of centroidal MPC for reference motion generation, enforcing dynamic balance through the Zero Moment Point condition, plus a whole-body controller that maps the generated trajectories to joint commands. Each of the described predictive controllers also features a so-called stability constraint, preventing the generation of Center of Mass trajectories that diverge with respect to the Zero Moment Point. Robustness is addressed by modeling the humanoid as a Linear Inverted Pendulum and devising two types of strategies. For persistent perturbations, a way to use a disturbance observer and a technique for constraint tightening (to ensure robust constraint satisfaction) are presented. In the case of impulsive pushes, techniques for footstep and timing adaptation are introduced. The underlying approach is to interpret robustness as an MPC feasibility problem, thus aiming at ensuring the existence of a solution for the constrained optimization problem to be solved at each iteration in spite of the perturbations. This perspective allows us to devise simple solutions to complex problems, favoring a reliable real-time implementation. For three-dimensional locomotion, on the other hand, the humanoid is modeled as a Variable Height Inverted Pendulum. Based on it, a two-stage MPC is introduced, with particular emphasis on the implementation of the stability constraint. The overall result is a gait generation scheme that allows the robot to traverse relatively complex environments with non-flat terrain, and to realize running gaits. The proposed methods are validated in different settings: from conceptual simulations in Matlab, to validations in the DART dynamic simulation environment, up to experimental tests on the NAO and OP3 platforms.
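    The stability constraint mentioned above can be made concrete on the Linear Inverted Pendulum. The following Python sketch illustrates the boundedness condition on the divergent component of motion under a piecewise-constant ZMP plan; the parameter values and discretization are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch of the LIP boundedness (stability) condition behind the
# stability constraint: the divergent component xi = x + xdot/omega
# stays bounded only if it matches the omega-discounted average of
# the future ZMP trajectory. Parameters are illustrative.
import numpy as np

g, h = 9.81, 0.8           # gravity, CoM height (illustrative)
omega = np.sqrt(g / h)     # natural frequency of the LIP
dt, N = 0.1, 20            # MPC sampling time and horizon length

def boundedness_rhs(zmp_plan):
    """Discounted sum of a piecewise-constant ZMP plan:
    omega * integral_0^T exp(-omega*t) * z(t) dt (tail ignored)."""
    k = np.arange(N)
    weights = np.exp(-omega * k * dt) - np.exp(-omega * (k + 1) * dt)
    return float(np.sum(weights * zmp_plan))

x, xdot = 0.0, 0.1         # current CoM position and velocity
xi = x + xdot / omega      # divergent component (capture point)
zmp_plan = np.full(N, xi)  # simplest feasible plan: hold ZMP at xi
print(xi, boundedness_rhs(zmp_plan))  # rhs approaches xi as N grows
```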

    RFID Technology in Intelligent Tracking Systems in Construction Waste Logistics Using Optimisation Techniques

    Construction waste disposal is an urgent issue for protecting our environment. This paper proposes a waste management system and illustrates the work process using plasterboard waste as an example, which creates hazardous gas when landfilled with household waste, and for which the recycling rate is less than 10% in the UK. The proposed system integrates RFID technology, Rule-Based Reasoning, Ant Colony Optimization, and knowledge technology for auditing and tracking plasterboard waste, guiding operational staff, arranging vehicles, and planning schedules, and it also provides evidence to verify disposal. It relies on RFID equipment for collecting logistical data and uses digital imaging equipment to provide further evidence; the reasoning core in the third layer is responsible for generating schedules, route plans, and guidance, and the last layer delivers the results to inform users. The paper first introduces the current plasterboard disposal situation and addresses the logistical problem that is now the main barrier to a higher recycling rate, followed by a discussion of the proposed system in terms of both system-level structure and process structure. Finally, an example scenario illustrates the system's utilization.
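    The route-planning role of Ant Colony Optimization in such a system can be sketched on a toy instance. The following Python fragment is a generic, self-contained ACO sketch for routing a single collection vehicle through pickup sites; the site coordinates and parameter values are made up and do not reflect the paper's implementation.

```python
# Toy Ant Colony Optimization sketch: route one collection vehicle
# from a depot through waste pickup sites and back. Illustrative only.
import math
import random

sites = [(0, 0), (2, 4), (5, 1), (6, 5), (1, 7)]  # depot + 4 pickups
n = len(sites)
dist = [[math.dist(a, b) for b in sites] for a in sites]
tau = [[1.0] * n for _ in range(n)]               # pheromone levels
alpha, beta, rho, ants, iters = 1.0, 2.0, 0.5, 10, 50

def build_tour():
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i, cand = tour[-1], list(unvisited)
        # Probability ~ pheromone^alpha * (1/distance)^beta
        w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
        j = random.choices(cand, weights=w)[0]
        tour.append(j)
        unvisited.remove(j)
    return tour + [0]                              # return to depot

def length(tour):
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

best = min((build_tour() for _ in range(ants)), key=length)
for _ in range(iters):
    tours = [build_tour() for _ in range(ants)]
    best = min(tours + [best], key=length)
    for row in tau:                                # evaporation
        row[:] = [rho * t for t in row]
    for a, b in zip(best, best[1:]):               # reinforce best route
        tau[a][b] += 1.0 / length(best)
print(best, round(length(best), 2))
```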

    Intuitive Instruction of Industrial Robots : A Knowledge-Based Approach

    With more advanced manufacturing technologies, small and medium-sized enterprises can compete with low-wage labor by providing customized and high-quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to be able to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution, and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary, and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts, and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots. The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The two following papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.
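    The core idea of combining constrained motions into a skill and storing it with a semantic description can be sketched with simple data structures. The class names, fields, and constraint syntax below are illustrative assumptions, not the thesis's actual ontology or knowledge-base API.

```python
# Minimal sketch: a constraint-based skill stored in a knowledge base
# together with a machine-readable semantic description for reuse.
from dataclasses import dataclass, field

@dataclass
class ConstrainedMotion:
    frame: str         # task frame the constraint is expressed in
    constraint: str    # e.g. "force_z <= 5 N" (illustrative syntax)

@dataclass
class Skill:
    name: str
    motions: list                                   # ordered motion segments
    semantics: dict = field(default_factory=dict)   # semantic description

knowledge_base = {}

def register(skill: Skill):
    # Storing the skill with its semantics enables retrieval and
    # reasoning, independent of the platform it was taught on.
    knowledge_base[skill.name] = skill

snap_fit = Skill(
    name="snap_fit_assembly",
    motions=[ConstrainedMotion("part_frame", "velocity_x = 5 mm/s"),
             ConstrainedMotion("part_frame", "force_z <= 5 N")],
    semantics={"task": "assembly", "objects": ["housing", "lid"]},
)
register(snap_fit)
print(knowledge_base["snap_fit_assembly"].semantics)
```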

    Towards gestural understanding for intelligent robots

    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012.
    A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make it easier and more enjoyable. Nowadays, smartphones are probably the most typical instances of such systems. Another class of systems that is receiving increasing attention is intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots. This book deals with vision-based approaches to gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps – hand detection, hand tracking, and trajectory-based gesture recognition – a separate chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures when incorporating context, e.g., what object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for gesture understanding and is addressed explicitly in a separate chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches that incorporate context for gestural understanding are reviewed. Example approaches for both context types provide a deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized human-robot interaction quality. The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. Promising research targeted at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last chapter, completing this book, as this research direction may be highly influential for creating future gesture understanding systems.
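    The step from trajectory-based recognition to context-aware understanding can be illustrated with a small sketch. The toy classifier and scene below are stand-ins for real vision components and are assumptions for illustration, not methods from the book.

```python
# Sketch of the staged pipeline: trajectory-based gesture recognition
# followed by grounding in situational context. Functions are toy
# stand-ins for real hand detection/tracking/recognition components.
import math

def recognize_trajectory(points):
    """Toy classifier: a mostly straight, extended hand motion
    is labeled 'pointing'."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return "pointing" if direct > 0.1 and direct / path > 0.9 else "unknown"

def resolve_context(gesture, hand_pos, scene_objects):
    """Situational context: ground a 'pointing' gesture to the nearest
    object, turning mere recognition into understanding."""
    if gesture != "pointing":
        return gesture, None
    name, _ = min(scene_objects, key=lambda o: math.dist(hand_pos, o[1]))
    return gesture, name

trajectory = [(0.0, 0.0), (0.1, 0.01), (0.2, 0.02), (0.3, 0.02)]
scene = [("cup", (0.35, 0.05)), ("book", (0.9, 0.8))]
gesture = recognize_trajectory(trajectory)
print(resolve_context(gesture, trajectory[-1], scene))  # ('pointing', 'cup')
```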

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in the control of manipulation task behaviors and in collaboration on them. However, this remains a significant challenge given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
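    Feature (2), specifying a task as an unordered list of goal predicates, can be sketched in a few lines. The predicate names and Blocks World state below are illustrative assumptions, not the system's actual representation.

```python
# Sketch: a pick-and-place goal as an unordered set of predicates.
# Order is irrelevant, so users can state goals in any sequence;
# the planner only needs the predicates that are not yet satisfied.
goal = {("on", "red_block", "blue_block"),
        ("at", "blue_block", "table_zone_A")}

def unmet(goal_predicates, world_state):
    """Predicates the planner still has to achieve."""
    return goal_predicates - world_state

world = {("at", "blue_block", "table_zone_A"),
         ("on", "red_block", "table_zone_B")}
print(unmet(goal, world))  # {('on', 'red_block', 'blue_block')}
```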

    Acquiring and Maintaining Knowledge by Natural Multimodal Dialog
