
    Learning robotic milling strategies based on passive variable operational space interaction control

    This paper addresses the problem of robotic cutting during disassembly of products for materials separation and recycling. Waste handling applications differ from milling in manufacturing processes, as they engender considerable variety and uncertainty in the parameters (e.g. hardness) of the materials which the robot must cut. To address this challenge, we propose a learning-based approach incorporating elements of interaction control, in which the robot can adapt key parameters, such as feed rate, depth of cut, and mechanical compliance, during task execution. We show how a mathematical model of cutting mechanics, embedded in a simulation environment, can be used to rapidly train the system without needing large amounts of data from physical cutting trials. The simulation approach was validated on a real robot setup based on four case study materials with varying structural and mechanical properties. We demonstrate that the proposed method minimises process force and path deviations to a level similar to offline optimal planning methods, while the average time to complete a cutting task is within 25% of the optimum, at the expense of a reduced volume of material removed per pass. A key advantage of our approach over similar works is that no prior knowledge about the material is required. Comment: 15 pages, 14 figures, accepted for publication in IEEE Transactions on Automation Science and Engineering (T-ASE)
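
    The abstract does not give the cutting-mechanics model or the learned policy, but the core idea of adapting feed rate and depth of cut from measured process force can be sketched as follows. The toy force model, gains, and limits below are illustrative assumptions, not the paper's.

```python
# Minimal sketch of force-adaptive cutting-parameter control, assuming a
# simplified proportional cutting-force model; the paper's actual mechanics
# model, learning method, and parameter values are not reproduced here.
import numpy as np

def simulated_cutting_force(feed_rate, depth_of_cut, hardness):
    """Toy stand-in for a cutting mechanics model: force grows with
    feed rate, depth of cut, and the (unknown) material hardness."""
    return hardness * feed_rate * depth_of_cut

def adapt_parameters(feed_rate, depth_of_cut, force, force_target,
                     gain=0.1, feed_limits=(0.5, 10.0), depth_limits=(0.1, 3.0)):
    """Interaction-control-style adaptation: back off when the measured
    force exceeds the target, speed up when it falls below it."""
    error = (force_target - force) / force_target
    feed_rate = np.clip(feed_rate * (1.0 + gain * error), *feed_limits)
    depth_of_cut = np.clip(depth_of_cut * (1.0 + 0.5 * gain * error), *depth_limits)
    return feed_rate, depth_of_cut

# Example run against a material of unknown hardness (hidden from the controller).
hardness = 8.0
feed, depth = 5.0, 1.0  # initial guesses; units are illustrative
for step in range(50):
    force = simulated_cutting_force(feed, depth, hardness)
    feed, depth = adapt_parameters(feed, depth, force, force_target=20.0)
print(f"settled feed rate: {feed:.2f}, depth of cut: {depth:.2f}")
```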

    A Novel Approach to an Autonomous and Dynamic Satellite Control System Using On-Orbit Machine Learning

    Classical control methods require a deep analytical understanding of the system to be controlled. This can be particularly difficult to achieve in space systems, where it is difficult, if not impossible, to truly replicate the operational environment in a laboratory. As a result, many missions, especially in the CubeSat form factor, fly with control systems that regularly fail to meet their operational requirements. Failure of a control system might result in diminished science collection or, in severe circumstances, total loss of the mission. Additionally, future SmallSat use cases (such as orbital debris collection, repair missions, or deep space prospecting) will place autonomous spacecraft in situations where mission operations cannot be fully simulated prior to deployment and a more dynamic control scheme is required. This paper explores the use of a student/teacher machine learning model to train an Artificial Intelligence to fly a spacecraft in much the same way a human pilot might be taught. With dedicated Artificial Intelligence & Machine Learning hardware onboard the satellite, it is also hypothesized that deploying an active learning algorithm in space may allow it to rapidly adapt to unforeseen circumstances without direct human intervention. Full development of a magnetorquer-only control scheme was conducted, with testing ranging from a software-in-the-loop 3D physics engine to a hemispherical air bearing, and finally a planned on-orbit demonstration. Further work is planned to expand this research to translational operations in future missions
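
    As a rough illustration of the student/teacher idea, a classical B-dot detumbling law can serve as the teacher for a small network distilled from it. The teacher law, network architecture, and synthetic training data below are assumptions for illustration; the paper's actual setup is not specified here.

```python
# Hedged sketch of student/teacher distillation for magnetorquer control:
# a B-dot law generates dipole commands, and a small network is trained to
# imitate them. Gains, sizes, and data are illustrative placeholders.
import torch
import torch.nn as nn

def bdot_teacher(b_dot, k=1e4):
    """Classical B-dot law: command a magnetic dipole opposing the rate of
    change of the measured field to bleed off angular momentum."""
    return -k * b_dot

class StudentPolicy(nn.Module):
    """Tiny network mapping the measured field derivative to dipole commands."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 3))

    def forward(self, b_dot):
        return self.net(b_dot)

student = StudentPolicy()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    b_dot = torch.randn(64, 3) * 1e-6      # synthetic field-rate samples (T/s)
    target = bdot_teacher(b_dot)           # teacher dipole commands
    loss = nn.functional.mse_loss(student(b_dot), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```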

    Beyond task-space exploration: On the role of variance for motor control and learning.

    This conceptual analysis of the role of variance for motor control and learning should be taken as a call to: (a) overcome the classic motor-action controversy by identifying converging lines and mutual synergies in the explanation of motor behavior phenomena, and (b) design more empirical research on low-level operational aspects of motor behavior rather than on high-level theoretical terms. Throughout the paper, claim (a) is exemplified by deploying the well-accepted task-space landscape metaphor. This approach provides an illustration not only of a dynamical sensorimotor system but also of a structure of internal forward models, as they are used in more cognitively rooted frameworks such as the theory of optimal feedback control. Claim (b) is put into practice by, mainly theoretically, substantiating a number of predictions for the role of variance in motor control and learning that can be derived from a convergent perspective. From this standpoint, it becomes obvious that variance is neither generally "good" nor generally "bad" for sensorimotor learning. Rather, the predictions derived suggest that specific forms of variance cause specific changes in permanent performance. In this endeavor, Newell's concept of task-space exploration is identified as a fundamental learning mechanism. Beyond this, we highlight further predictions regarding the optimal use of variance for learning from a converging view. These predictions concern, on the one hand, additional learning mechanisms based on the task-space landscape metaphor, namely task-space formation, task-space differentiation, and task-space (de-)composition, and, on the other hand, mechanisms of meta-learning that refer to handling noise as well as learning-to-learn and learning-to-adapt. Given the character of a conceptual-analysis paper, we grant ourselves the right to be highly speculative on some issues. Thus, we would like readers to see our call mainly as an effort to stimulate both a meta-theoretical discussion on the chances for convergence between classically separated lines of thought and, on an empirical level, future research on the role of variance in motor control and learning
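
    One common way to make "specific forms of variance" concrete is to split trial-to-trial deviations into task-relevant and task-irrelevant components using a task Jacobian, in the spirit of uncontrolled-manifold analysis. The toy pointing task and numbers below are illustrative assumptions, not the authors' own formalization.

```python
# Hedged illustration: decompose joint-space deviations into components that
# change the task outcome (range space of the task Jacobian) and components
# that leave it unchanged (null space). Task and data are made up.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-joint pointing task: the task variable is the fingertip x-position.
J = np.array([[1.0, 0.5]])                               # 1 task dim x 2 joint dims
deviations = rng.normal(size=(200, 2)) * [0.05, 0.15]    # joint-space trial deviations

# Orthogonal projectors onto the task-relevant and task-irrelevant subspaces.
J_pinv = np.linalg.pinv(J)
P_task = J_pinv @ J                  # deviations here change the task outcome
P_null = np.eye(2) - P_task          # deviations here leave the outcome unchanged

var_task = np.mean(np.sum((deviations @ P_task.T) ** 2, axis=1))
var_null = np.mean(np.sum((deviations @ P_null.T) ** 2, axis=1))
print(f"task-relevant variance: {var_task:.4f}, task-irrelevant variance: {var_null:.4f}")
```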

    Probabilistic Learning of Torque Controllers from Kinematic and Force Constraints

    When learning skills from demonstrations, one is often required to think in advance about the appropriate task representation (usually in either operational or configuration space). Here we propose a probabilistic approach for simultaneously learning and synthesizing torque control commands that take into account task space, joint space and force constraints. We treat the problem by considering different torque controllers acting on the robot, whose relevance is learned probabilistically from demonstrations. This information is used to combine the controllers by exploiting the properties of Gaussian distributions, generating new torque commands that satisfy the important features of the task. We validate the approach in two experimental scenarios using 7-DoF torque-controlled manipulators, with tasks that require the consideration of different controllers to be properly executed
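
    Combining controllers "by exploiting the properties of Gaussian distributions" suggests a precision-weighted product of Gaussian torque commands. The sketch below shows that fusion rule with invented controllers and covariances; it is an assumption about the mechanism, not the paper's actual formulation.

```python
# Minimal sketch of fusing per-controller torque distributions N(mu_i, Sigma_i)
# with the Gaussian product rule: precisions add, means are precision-weighted.
import numpy as np

def fuse_gaussian_torques(means, covariances):
    """Return the mean and covariance of the product of Gaussian torque commands."""
    precisions = [np.linalg.inv(S) for S in covariances]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(P @ m for P, m in zip(precisions, means))
    return fused_mean, fused_cov

# Two hypothetical 7-DoF controllers: a task-space tracker (confident) and a
# joint-space posture controller (less confident); covariances express how
# much each command matters at the current instant.
tau_task = np.array([1.0, 0.2, -0.5, 0.8, 0.0, 0.1, -0.3])
tau_posture = np.zeros(7)
cov_task = 0.01 * np.eye(7)      # low variance -> high influence
cov_posture = 1.0 * np.eye(7)    # high variance -> low influence

tau, _ = fuse_gaussian_torques([tau_task, tau_posture], [cov_task, cov_posture])
print(np.round(tau, 3))  # fused command stays close to the confident controller
```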

    Autonomous Payload Thermal Control

    In small satellites there is less room for heat control equipment, scientific instruments, and electronic components. Furthermore, the close proximity of the electronics makes power dissipation difficult, with the risk of not being able to control the temperature appropriately, reducing component lifetime and mission performance. To address this challenge, and taking advantage of the increasing intelligence available on board satellites, a deep reinforcement learning based framework that uses the Soft Actor-Critic algorithm is proposed for learning the thermal control policy onboard. The framework is evaluated both in a naive simulated environment and on a real space edge processing computer that will be shipped on the future IMAGIN-e mission and hosted on the ISS. The experimental results show that the proposed framework is able to learn to control the payload processing power so as to keep the temperature within operational ranges, complementing traditional thermal control systems
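
    As a rough sketch of the setup, a toy first-order thermal model can be wrapped as an environment where the action is the payload processing power and the reward trades throughput against staying inside an operational temperature band, then trained with an off-the-shelf Soft Actor-Critic implementation. The dynamics, limits, and reward shaping below are illustrative assumptions, not the IMAGIN-e configuration.

```python
# Hedged sketch: toy thermal environment + off-the-shelf SAC (stable-baselines3).
# All constants (heating rate, cooling rate, temperature band) are made up.
import numpy as np
import gymnasium as gym
from stable_baselines3 import SAC

class ToyThermalEnv(gym.Env):
    def __init__(self):
        self.action_space = gym.spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = gym.spaces.Box(-50.0, 150.0, shape=(1,), dtype=np.float32)
        self.temp, self.steps = 20.0, 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.temp, self.steps = 20.0, 0
        return np.array([self.temp], dtype=np.float32), {}

    def step(self, action):
        power = float(action[0])
        # First-order thermal model: heating from dissipated power,
        # passive cooling towards a 10 C sink.
        self.temp += 2.0 * power - 0.05 * (self.temp - 10.0)
        in_band = 0.0 <= self.temp <= 45.0
        reward = power - (0.0 if in_band else 5.0)   # throughput vs. overheating
        self.steps += 1
        truncated = self.steps >= 200
        return np.array([self.temp], dtype=np.float32), reward, False, truncated, {}

env = ToyThermalEnv()
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)
```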