
    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Human factors and a human-centred design philosophy are highly desirable in today’s robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills not only makes them more likeable but also improves their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer programming skills. Besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs) that takes both the positional and the contact-force profiles into consideration for human-robot skill transfer. In contrast to conventional methods that involve only motion information, the proposed framework combines two sets of DMPs, built to model the motion trajectory and the force variation of the robot manipulator, respectively. A hybrid force/motion control approach is then taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force manipulation skills in a broader variety of tasks, the generalization of these DMP models in actual situations is also considered. Comparative experiments have been conducted using a Baxter robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
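
    As background for the formulation this abstract builds on, the following is a minimal one-dimensional DMP sketch in Python; the gain values, the Gaussian basis functions and the integration scheme are illustrative assumptions rather than details taken from the paper.

        import numpy as np

        class DMP:
            """Minimal one-dimensional discrete Dynamic Movement Primitive.

            tau * dv/dt = K*(g - x) - D*v + (g - x0)*f(s)
            tau * dx/dt = v
            tau * ds/dt = -alpha_s * s      (canonical system)
            """

            def __init__(self, n_basis=20, K=100.0, alpha_s=4.0):
                self.K, self.D = K, 2.0 * np.sqrt(K)                        # critically damped
                self.alpha_s = alpha_s
                self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # basis centres in s
                self.h = 1.0 / np.gradient(self.c) ** 2                     # basis widths
                self.w = np.zeros(n_basis)            # weights, fitted from a demonstration

            def forcing(self, s):
                psi = np.exp(-self.h * (s - self.c) ** 2)
                return s * psi.dot(self.w) / (psi.sum() + 1e-10)

            def rollout(self, x0, g, tau=1.0, dt=0.01, duration=1.0):
                x, v, s, traj = x0, 0.0, 1.0, []
                for _ in range(int(duration / dt)):
                    dv = (self.K * (g - x) - self.D * v + (g - x0) * self.forcing(s)) / tau
                    v += dv * dt
                    x += (v / tau) * dt
                    s += (-self.alpha_s * s / tau) * dt
                    traj.append(x)
                return np.array(traj)

    In a framework of this kind, one such primitive would typically be fitted per Cartesian position dimension and, separately, per contact-force dimension, with the hybrid controller tracking the position primitives along free directions and the force primitives along constrained ones.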

    Relational Approach to Knowledge Engineering for POMDP-based Assistance Systems as a Translation of a Psychological Model

    Assistive systems for persons with cognitive disabilities (e.g. dementia) are difficult to build due to the wide range of different approaches people can take to accomplishing the same task, and the significant uncertainties that arise both from the unpredictability of clients' behaviours and from noise in sensor readings. Partially observable Markov decision process (POMDP) models have been used successfully as the reasoning engine behind such assistive systems for small multi-step tasks such as hand washing. POMDP models are a powerful yet flexible framework for modelling assistance that can deal with uncertainty and utility. Unfortunately, POMDPs usually require a very labour-intensive, manual procedure for their definition and construction. Our previous work has described a knowledge-driven method for automatically generating POMDP activity recognition and context-sensitive prompting systems for complex tasks. We call the resulting POMDP a SNAP (SyNdetic Assistance Process). The spreadsheet-like result of the analysis does not correspond directly to the POMDP model, and a translation to a formal POMDP representation is required. To date, this translation had to be performed manually by a trained POMDP expert. In this paper, we formalise and automate this translation process using a probabilistic relational model (PRM) encoded in a relational database. We demonstrate the method by eliciting three assistance tasks from non-experts. We validate the resulting POMDP models using case-based simulations to show that they are reasonable for the domains. We also show a complete case study of a designer specifying one database, including an evaluation in a real-life experiment with a human actor.
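
    For readers unfamiliar with the target representation of that translation, the sketch below shows the kind of tables a POMDP model consists of and the standard belief update an assistance system runs at each step; the two-state prompting example and all probabilities are illustrative assumptions, not values from the SNAP models.

        import numpy as np

        # A POMDP is a tuple (S, A, O, T, Z, R): states, actions, observations,
        # transition model T[a][s, s'], observation model Z[a][s', o], rewards R[s, a].
        # Generating an assistance POMDP ultimately amounts to populating tables like these.

        def belief_update(b, a, o, T, Z):
            """Bayesian belief update: b'(s') proportional to Z[a][s', o] * sum_s T[a][s, s'] * b(s)."""
            b_pred = T[a].T @ b                 # predict the next-state distribution
            b_new = Z[a][:, o] * b_pred         # weight by the observation likelihood
            return b_new / b_new.sum()          # normalise

        # Illustrative two-state task step: the client has / has not completed the step.
        T = {"prompt": np.array([[0.3, 0.7],    # prompting makes completion likely
                                 [0.0, 1.0]]),
             "wait":   np.array([[0.8, 0.2],
                                 [0.0, 1.0]])}
        Z = {"prompt": np.array([[0.9, 0.1],    # sensors observe completion noisily
                                 [0.2, 0.8]]),
             "wait":   np.array([[0.9, 0.1],
                                 [0.2, 0.8]])}

        b = np.array([1.0, 0.0])                 # initially: step not completed
        b = belief_update(b, "prompt", 1, T, Z)  # observed "completed" after a prompt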

    Multi Agent Systems in Logistics: A Literature and State-of-the-art Review

    Based on a literature survey, we aim to answer our main question: “How should we plan and execute logistics in supply chains that aim to meet today’s requirements, and how can we support such planning and execution using IT?” Today’s requirements in supply chains include inter-organizational collaboration and more responsive and tailored supply to meet specific demand. Enterprise systems fall short in meeting these requirements. The focus of planning and execution systems should move towards an inter-enterprise and event-driven mode. Inter-organizational systems may support planning, evolving from supporting information exchange, through enabling synchronized planning within the organizations, towards the capability to perform network planning based on information available throughout the network. We provide a framework for planning systems, constituting a rich landscape of possible configurations, in which the centralized and the fully decentralized approaches are two extremes. We define and discuss agent-based systems and in particular multi-agent systems (MAS). We emphasize the role of MAS coordination architectures, and then explain that transportation is, next to production, an important domain in which MAS can be, and actually are, applied. However, implementation is not widespread, and some implementation issues are explored. We conclude that planning problems in transportation have characteristics that comply with the specific capabilities of agent systems. In particular, these systems are capable of dealing with inter-organizational and event-driven planning settings, hence meeting today’s requirements in supply chain planning and execution. Keywords: supply chain; MAS; multi-agent systems.

    Spoken dialogue systems: architectures and applications

    Technology and technological devices have become habitual and omnipresent. Humans need to learn to communicate with all kinds of devices. Until recently, humans needed to learn how the devices express themselves in order to communicate with them. But in recent times the tendency has been to make communication with these devices more intuitive. The ideal way to communicate with devices would be the natural way of communication between humans: speech. Humans have long been investigating and designing systems that use this type of communication, giving rise to the so-called Spoken Dialogue Systems. In this context, the primary goal of the thesis is to show how these systems can be implemented. Additionally, the thesis serves as a review of the state of the art regarding architectures and toolkits. Finally, the thesis is intended to serve future system developers as a guide for their construction.
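
    As context for the architectures the thesis reviews, here is a minimal sketch of the classic spoken dialogue pipeline (speech recognition, understanding, dialogue management, generation, synthesis); the stub components and the alarm-setting example are illustrative assumptions, not the interfaces of any toolkit discussed in the thesis.

        # Minimal pipeline skeleton for a spoken dialogue system:
        # speech recognition -> language understanding -> dialogue management
        # -> language generation -> speech synthesis. Each stage is a stub here.

        def asr(audio: bytes) -> str:
            """Speech recogniser stub: audio in, text hypothesis out."""
            return "set an alarm for seven"

        def nlu(text: str) -> dict:
            """Understanding stub: map the text to an intent and slots."""
            return {"intent": "set_alarm", "slots": {"time": "07:00"}}

        def dialogue_manager(state: dict, frame: dict) -> tuple[dict, str]:
            """Track the dialogue state and choose the next system act."""
            state.update(frame["slots"])
            act = "confirm_alarm" if "time" in state else "request_time"
            return state, act

        def nlg(act: str, state: dict) -> str:
            """Generation stub: turn the system act into a sentence."""
            return (f"Setting an alarm for {state['time']}." if act == "confirm_alarm"
                    else "What time should the alarm be?")

        def tts(text: str) -> bytes:
            """Synthesis stub: text in, audio out."""
            return text.encode()

        state: dict = {}
        text = asr(b"...")                        # one user turn through the pipeline
        state, act = dialogue_manager(state, nlu(text))
        audio_out = tts(nlg(act, state))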

    Grounding robot motion in natural language and visual perception

    The current state of the art in military and first-responder ground robots involves heavy physical and cognitive burdens on the human operator while taking little to no advantage of the potential autonomy of robotic technology. The robots currently in use are rugged remote-controlled vehicles. Their interaction modalities, usually utilizing a game controller connected to a computer, require a dedicated operator who has limited capacity for other tasks. I present research that aims to ease these burdens by incorporating multiple modes of robotic sensing into a system which allows humans to interact with robots through a natural-language interface. I conduct this research on a custom-built six-wheeled mobile robot. First I present a unified framework which supports grounding natural-language semantics in robotic driving. This framework supports learning the meanings of nouns and prepositions from sentential descriptions of paths driven by the robot, as well as using such meanings both to generate a sentential description of a path and to perform automated driving of a path specified in natural language. One limitation of this framework is that it requires as input the locations of the (initially nameless) objects in the floor plan. Next I present a method to automatically detect, localize, and label objects in the robot’s environment using only the robot’s video feed and corresponding odometry. This method produces a map of the robot’s environment in which objects are differentiated by abstract class labels. Finally, I present work that unifies the previous two approaches. This method detects, localizes, and labels objects, as the previous method does. However, this new method integrates natural-language descriptions to learn actual object names, rather than abstract labels.
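
    As a toy illustration of what grounding a spatial word in a driven path can look like, the sketch below scores a 2-D path against a simple “towards” model and picks the most likely referent; the geometric feature and the hard decision rule are illustrative assumptions, not the models used in this work.

        import numpy as np

        def towards_score(path: np.ndarray, obj: np.ndarray) -> float:
            """Fraction of steps along a 2-D path that reduce distance to an object.

            path: (T, 2) array of robot positions, obj: (2,) object location.
            A value near 1.0 grounds 'towards <object>', near 0.0 'away from <object>'.
            """
            d = np.linalg.norm(path - obj, axis=1)      # distance to the object at each step
            return float(np.mean(np.diff(d) < 0))       # share of distance-decreasing steps

        # Illustrative use: decide which named object a described path refers to.
        path = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4], [1.5, 0.6]])
        objects = {"chair": np.array([2.0, 0.8]), "table": np.array([-1.0, 0.0])}
        referent = max(objects, key=lambda name: towards_score(path, objects[name]))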

    Symbolic representation of scenarios in Bologna airport on virtual reality concept

    This paper is part of a larger project, the Retina Project, which focuses on reducing the workload of air traffic control officers (ATCOs) by using the latest technological advances, such as the virtual reality concept. The work consisted of studying the different situational-awareness scenarios that occur daily at Bologna Airport. One scenario with good visibility, dominated by sunshine, and two scenarios with poor visibility, dominated by rain and fog, have been analysed. From the visibility study of the three scenarios, the conclusion is that the overlay must be displayed at a constant size, regardless of the aircraft's position, so that it remains readable by the ATCO, and that the frame and the flight strip should be coloured in a conspicuous colour (such as red) for better control by the ATCO.
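
    Below is a minimal sketch of the distance-compensated scaling that keeps a world-anchored overlay at a constant apparent size; the pinhole-style 1/distance model and the reference distance are illustrative assumptions rather than the Retina Project implementation.

        import numpy as np

        def constant_size_scale(label_pos: np.ndarray, eye_pos: np.ndarray,
                                reference_distance: float = 10.0) -> float:
            """Scale factor that cancels perspective shrinking of a world-anchored label.

            Under a pinhole-style projection the apparent size falls off as 1/distance,
            so scaling the label by distance/reference_distance keeps its on-screen
            size constant regardless of how far away the aircraft is.
            """
            distance = float(np.linalg.norm(label_pos - eye_pos))
            return distance / reference_distance

        # Illustrative use: a flight strip anchored to an aircraft about 3 km from the
        # tower is drawn roughly 300x larger in world units than one at the 10 m
        # reference distance, so both appear the same size to the ATCO.
        scale = constant_size_scale(np.array([3000.0, 0.0, 50.0]),
                                    np.array([0.0, 0.0, 20.0]))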

    Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR

    3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because, while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs. In this thesis, we aim to generate approaches that better exploit the high potential of human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction. We specifically focus on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation, and (4) low accuracy in 3D mid-air selection. Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 addresses space limitations in VR navigation. Chapter 6 describes an analysis and a correction method for the drift effects involved in scale-adaptive VR navigation, and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines for designing more natural 3DUIs.

    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.