
    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists at controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require at least superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nevertheless gained momentum in recent years: robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, since autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning toward recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification as an unordered list of goal predicates, and (3) guidance of task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
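
    As a rough illustration of the task-specification idea above, the sketch below encodes a 'pick-and-place' goal as an unordered set of grounded predicates; the predicate names, symbols, and the satisfaction check are illustrative assumptions, not the interface's actual representation.

```python
# Minimal sketch of specifying a 'pick-and-place' task as an unordered
# set of goal predicates in a Blocks World. Predicate names, symbols,
# and the satisfaction check are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Predicate:
    name: str      # e.g. "on", "at"
    args: tuple    # grounded symbols, e.g. ("red_block", "blue_block")

# Unordered goal specification: the user states *what* must hold,
# not the order in which the manipulator achieves it.
goal = {
    Predicate("on", ("red_block", "blue_block")),
    Predicate("at", ("blue_block", "table_zone_A")),
}

def satisfied(state: set, goal: set) -> bool:
    """A goal is reached when every goal predicate holds in the world state."""
    return goal <= state

# Example world state after the robot finishes the task.
state = {
    Predicate("at", ("blue_block", "table_zone_A")),
    Predicate("on", ("red_block", "blue_block")),
    Predicate("clear", ("red_block",)),
}
print(satisfied(state, goal))  # True
```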

    OCRA – An ontology for collaborative robotics and adaptation

    Industrial collaborative robots will be used in unstructured scenarios and on a large variety of tasks in the near future. These robots shall collaborate with humans, who add uncertainty and safety constraints to the execution of industrial robotic tasks. Hence, trustworthy collaborative robots must be able to reason about their collaboration's requirements (e.g., safety), as well as about the adaptation of their plans in unexpected situations. A common approach to such reasoning is to represent the knowledge of interest in a logic-based formalism, such as an ontology. However, there is not yet an established ontology defining notions such as collaboration or adaptation. In this article, we propose an Ontology for Collaborative Robotics and Adaptation (OCRA), which is built around two main notions: collaboration and plan adaptation. OCRA ensures reliable human-robot collaboration, since robots can formalize, and reason about, their plan adaptations and collaborations in unstructured collaborative robotic scenarios. Furthermore, our ontology enhances the reusability of the domain's terminology, allowing robots to represent their knowledge about different collaborative and adaptive situations. We validate our formal model, first, by demonstrating that a robot can answer a set of competency questions using OCRA, and second, by studying the formalization's performance in limit cases that include instances with incongruent and incomplete axioms. For both validations, the example use case consists of a human and a robot collaborating on the filling of a tray.
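
    The competency-question validation described above can be pictured with a small sketch; the OCRA namespace IRI, class names, property names, and data file below are hypothetical placeholders, since the abstract does not give the ontology's actual terminology.

```python
# Sketch of answering a competency question over an ontology such as OCRA,
# using rdflib. The namespace and class/property names are illustrative
# assumptions; the actual OCRA terminology may differ.
from rdflib import Graph

g = Graph()
g.parse("ocra_tray_filling.ttl", format="turtle")  # hypothetical ABox file

# Competency question: "Which agents collaborate in the tray-filling task,
# and which plan adaptations were triggered during it?"
query = """
PREFIX ocra: <http://example.org/ocra#>
SELECT ?agent ?adaptation WHERE {
    ?collab a ocra:Collaboration ;
            ocra:hasParticipant ?agent ;
            ocra:aboutTask ?task .
    OPTIONAL { ?task ocra:hasPlanAdaptation ?adaptation . }
}
"""
for row in g.query(query):
    print(row.agent, row.adaptation)
```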

    Neuro-fuzzy techniques to optimize an FPGA embedded controller for robot navigation

    This paper describes how low-cost embedded controllers for robot navigation can be obtained by using a small number of if-then rules (exploiting the connection of rule bases in cascade) that apply the Takagi-Sugeno fuzzy inference method and employ fuzzy sets represented by normalized triangular functions. The rules combine heuristic and fuzzy knowledge with numerical data obtained from a geometric analysis of the control problem that considers the kinematic and dynamic constraints of the robot. The numerical data allow tuning the fuzzy symbols used in the rules to optimize controller performance. From the implementation point of view, very few computational and memory resources are required: standard logical, addition, and multiplication operations, and a few data that can be represented by integer values. This is illustrated with the design of a controller for the safe navigation of an autonomous car-like robot among possible obstacles toward a goal configuration. Implementation results for an FPGA embedded system based on a general-purpose soft processor confirm a drastic percentage reduction in clock cycles thanks to the proposed neuro-fuzzy techniques. Simulation and experimental results obtained with the robot confirm the efficiency of the designed controller. The design methodology has been supported by the CAD tools of the Xfuzzy 3 environment and by the Embedded System Tools from Xilinx.
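
    A minimal sketch of the inference scheme mentioned above, zero-order Takagi-Sugeno rules over normalized triangular membership functions, is given below; the breakpoints and rule consequents are illustrative placeholders rather than the paper's tuned values.

```python
# Minimal sketch of a zero-order Takagi-Sugeno fuzzy controller using
# normalized triangular membership functions. Breakpoints and rule
# consequents are illustrative placeholders, not the paper's tuned values.

def triangular(x, a, b, c):
    """Normalized triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy partition of the input (e.g. heading error to the goal, in radians).
mfs = {
    "negative": (-2.0, -1.0, 0.0),
    "zero":     (-1.0,  0.0, 1.0),
    "positive": ( 0.0,  1.0, 2.0),
}

# Singleton (zero-order) consequents: steering command for each rule.
rules = {"negative": 0.5, "zero": 0.0, "positive": -0.5}

def ts_infer(error):
    """Weighted average of rule consequents (Takagi-Sugeno defuzzification)."""
    num = den = 0.0
    for label, (a, b, c) in mfs.items():
        w = triangular(error, a, b, c)
        num += w * rules[label]
        den += w
    return num / den if den > 0 else 0.0

print(ts_infer(0.4))  # small corrective steering command
```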

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
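
    As a simplified illustration of the tool-to-organ collision-detection problem discussed above, the sketch below performs a point-cloud proximity check; real surgical systems rely on calibrated tracking and mesh-level collision queries, so the threshold test, clearance value, and coordinates here are assumptions for illustration only.

```python
# Simplified sketch of tool-to-organ proximity checking. A real surgical
# system would use calibrated tracking and mesh-level collision queries;
# this point-cloud threshold test is only an illustrative assumption.
import numpy as np

def min_tool_organ_distance(tool_tip: np.ndarray, organ_points: np.ndarray) -> float:
    """Distance from the tool tip (3,) to the closest organ surface point (N, 3)."""
    return float(np.min(np.linalg.norm(organ_points - tool_tip, axis=1)))

def collision_warning(tool_tip, organ_points, clearance_mm=5.0) -> bool:
    """Flag a warning when the tool enters the safety clearance margin."""
    return min_tool_organ_distance(tool_tip, organ_points) < clearance_mm

# Example: a tool tip 3 mm from the nearest surface point triggers a warning.
organ = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
tip = np.array([3.0, 0.0, 0.0])
print(collision_warning(tip, organ))  # True
```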

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to estimate user emotion using a software-only method.
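
    A minimal, software-only sketch in the spirit of the fuzzy emotion estimation described above is shown below; the membership functions, input variables, and appraisal rules are illustrative assumptions and not the FLAME rules used in the paper.

```python
# Minimal sketch of software-only player-emotion estimation from in-game
# variables. Membership functions and appraisal rules are illustrative
# assumptions, not the FLAME rules used in the paper.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_satisfaction(kill_death_ratio: float, damage_taken_rate: float) -> float:
    """Fuzzy appraisal: performing well while taking little damage -> satisfied."""
    performing_well = tri(kill_death_ratio, 0.5, 1.5, 3.0)
    under_pressure = tri(damage_taken_rate, 0.3, 0.7, 1.0)
    # Min/max aggregation of two toy rules:
    satisfied = min(performing_well, 1.0 - under_pressure)
    frustrated = min(1.0 - performing_well, under_pressure)
    return satisfied - frustrated  # in [-1, 1]

print(estimate_satisfaction(kill_death_ratio=2.0, damage_taken_rate=0.2))
```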

    Energy-based control approaches in human-robot collaborative disassembly
