
    Model learning for trajectory tracking of robot manipulators

    Model-based controllers have drastically improved robot performance, increasing task accuracy while reducing control effort. Nevertheless, this was achieved under a very strong assumption: exact knowledge of the physical properties of both the robot and the environment that surrounds it. This assumption is often unrealistic: modern robots are modeled only approximately and, more importantly, the environment is almost never static and completely known. Even for very simple systems, such as robot manipulators, these assumptions are too strong and must be relaxed. Many methods have been developed that exploit previous experience to refine the nominal model, ranging from classic identification techniques to more modern machine-learning-based approaches. The topic of this thesis is the investigation of these data-driven techniques in the context of robot control for trajectory tracking. The first two chapters provide preliminary knowledge on both model-based controllers, used in robotics to ensure precise trajectory tracking, and model learning techniques. The following three chapters present the novelties introduced by the author with respect to the state of the art: three works sharing the same premise (an inaccurate system model) and an identical goal (accurate trajectory tracking control), but differing according to the specific platform of application (fully actuated, underactuated, and redundant robots). In all the considered architectures, an online learning scheme is introduced to correct the nominal feedback linearization control law. The method was first introduced for fully actuated systems, where it showed its efficacy in accurately tracking joint-space trajectories even with an inaccurate dynamic model. The main novelty of the technique is the use of kinematic information only, instead of torque measurements (which are generally very noisy), to retrieve and compensate the dynamic mismatches online. The method was then extended to underactuated robots. This new architecture combines an online learning correction of the controller, acting on the actuated part of the system (the nominal partial feedback linearization), with an offline planning phase, required to obtain a trajectory that is dynamically feasible also for the zero dynamics of the system. The scheme is iterative: after each trial, both phases are improved according to the collected information and repeated until the task is achieved. Also in this case the method showed its capability, both in numerical simulations and in real experiments on a robotic platform. Finally, the method was applied to redundant systems: differently from before, the task here consists in the accurate tracking of a Cartesian end-effector trajectory. Although in principle very similar to the fully actuated case, the presence of redundancy drastically slows down the convergence of the learning machinery, worsening the performance. To cope with this, a redundancy resolution is proposed that, exploiting an approximation of the learning algorithm (Gaussian process regression), locally maximizes the information and thus selects the most convenient self-motion for the system; moreover, all of this is achieved by solving a single quadratic programming problem. Also in this case the method showed its performance, realizing accurate online tracking while reducing both the control effort and the joint velocities, thus obtaining a natural behaviour. The thesis concludes with summary considerations on the proposed approach and with possible future directions of research.
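    A minimal sketch of the core mechanism in the fully actuated case is given below: a Gaussian-process regressor learns the acceleration mismatch of a nominal feedback-linearization law from kinematic signals only (no torque measurements) and feeds its prediction back as an online correction of the commanded acceleration. The 1-DoF toy dynamics, gains, kernel, and refit schedule are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: online GP correction of a nominal feedback-linearization law,
# driven by kinematic information only. Hypothetical 1-DoF toy system.
import numpy as np

def rbf(Xa, Xb, ell=0.5, sf=1.0):
    d = np.sum(Xa**2, axis=1)[:, None] + np.sum(Xb**2, axis=1)[None, :] - 2.0 * Xa @ Xb.T
    return sf**2 * np.exp(-0.5 * d / ell**2)

class GP:
    """Minimal Gaussian-process regressor (RBF kernel, fixed hyperparameters)."""
    def __init__(self, noise=1e-2):
        self.X, self.alpha, self.noise = None, None, noise
    def fit(self, X, y):
        self.X = X
        K = rbf(X, X) + self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)
    def predict(self, Xs):
        if self.X is None:
            return np.zeros(len(Xs))
        return rbf(Xs, self.X) @ self.alpha

# True vs. nominal (mismatched) dynamics: m*qdd + c*qd + g*sin(q) = tau
m_true, c_true, g_true = 1.2, 0.3, 9.0     # "real" robot (unknown to the controller)
m_nom,  c_nom,  g_nom  = 1.0, 0.0, 7.0     # inaccurate nominal model

gp, data_X, data_y = GP(), [], []
dt, q, qd = 0.002, 0.0, 0.0
Kp, Kd = 100.0, 20.0

for k in range(5000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)   # joint trajectory
    e, ed = q_des - q, qd_des - qd
    # nominal feedback linearization + GP correction of the commanded acceleration
    corr = gp.predict(np.array([[q, qd]]))[0]
    v = qdd_des + Kd * ed + Kp * e + corr
    tau = m_nom * v + c_nom * qd + g_nom * np.sin(q)
    # plant response (ground truth, used only to simulate measurements)
    qdd = (tau - c_true * qd - g_true * np.sin(q)) / m_true
    # kinematic-only learning signal: commanded vs. measured acceleration
    data_X.append([q, qd]); data_y.append(v - qdd)
    if k % 500 == 499:                      # periodic re-fit on recent data
        gp.fit(np.array(data_X[-200:]), np.array(data_y[-200:]))
    qd += qdd * dt; q += qd * dt

print("final tracking error:", abs(np.sin(t) - q))
```

    The same principle is what the thesis extends to partial feedback linearization for underactuated robots and, through a quadratic-programming redundancy resolution driven by the Gaussian-process approximation, to Cartesian tracking with redundant arms.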

    Whole-Body Impedance Control of Wheeled Humanoid Robots


    State Estimation, Covariance Estimation, and Economic Optimization of Semi-Batch Bioprocesses

    One of the most critical skills of any chemical process engineer is the ability to gather, analyze, and trust incoming process data, as it is often required in control and process monitoring applications. In real processes, online data can be unreliable due to factors such as poor tuning, calibration drift, or mechanical drift. Beyond these sources of noise, it may not be economically viable to directly measure all process states of interest (e.g., component concentrations). While process models can help validate incoming process data, models are often subject to plant-model mismatch, unmodeled disturbances, or lack enough detail to track all process states (e.g., dissolved oxygen in a bioprocess). As a result, relying exclusively on either the process data or the process model in these applications is often not possible or simply results in suboptimal performance. To address these challenges and achieve a higher level of confidence in the process states, estimation theory is used to blend online measurements and process models together to derive a series of state estimates. By utilizing both sources, it is possible to filter out the noise and derive a state estimate close to the true process conditions. This work deviates from the traditional state estimation literature, which mostly addresses continuous processes, and examines how techniques such as the extended Kalman filter (EKF) and moving horizon estimation (MHE) can be applied to semi-batch processes. Additionally, this work considers how plant-model mismatch can be overcome through parameter-based estimation algorithms such as the dual EKF and a novel parameter-MHE (P-MHE) algorithm. A galacto-oligosaccharide (GOS) process is selected as the motivating example because some of its process states cannot be independently measured online and therefore require state estimation. Moreover, this process is representative of the broader bioprocess field, as it is subject to high amounts of noise and less rigorous models, and is traditionally operated in batch/semi-batch reactors. In conjunction with employing estimation approaches, this work also explores how to effectively tune these algorithms. The estimation algorithms selected here require careful tuning of the model and measurement covariance matrices to balance the uncertainties between the process models and the incoming measurements. Traditionally, this is done via ad-hoc manual tuning by process control engineers. This work modifies and employs techniques such as direct optimization (DO) and autocovariance least-squares (ALS) to accurately estimate the covariance values; poor approximation of the covariances often results in poor estimation of the states or drives the estimation algorithm to failure. Finally, this work develops a semi-batch-specific dynamic real-time optimization (DRTO) algorithm and poses a novel costing methodology for this type of problem. As part of this costing methodology, an enzyme-specific cost scaling correlation is proposed to provide a realistic approximation of these costs in industrial contexts. The semi-batch DRTO is combined with the GOS process to provide an economic analysis using Kluyveromyces lactis (K. lactis) β-galactosidase enzyme. An extensive literature review is carried out to support the conclusions of the economic analysis and to motivate application to other bioprocesses.
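    As a concrete point of reference for the estimation machinery discussed above, the sketch below runs a plain extended Kalman filter on a toy two-state fed-batch model (substrate and product, with only the product measured). The kinetics, feed profile, and covariance matrices Q and R are illustrative stand-ins, not the GOS model or the DO/ALS-tuned covariances of this work.

```python
# Hedged sketch: extended Kalman filter on a toy fed-batch (semi-batch) process.
import numpy as np

dt = 0.1                                   # time step [h]

def f(x, u):
    """Discrete-time process model: substrate S consumed, product P formed,
    with feed rate u adding substrate (Monod-like kinetics, assumed)."""
    S, P = x
    r = 0.5 * S / (0.2 + S)
    return np.array([S + dt * (-r + u), P + dt * 0.8 * r])

def F_jac(x, u):
    """Analytical Jacobian of f with respect to the state."""
    S, _ = x
    drdS = 0.5 * 0.2 / (0.2 + S) ** 2
    return np.array([[1.0 - dt * drdS, 0.0],
                     [dt * 0.8 * drdS, 1.0]])

H = np.array([[0.0, 1.0]])                 # only the product concentration is measured
Q = np.diag([1e-4, 1e-4])                  # process-noise covariance (to be tuned/estimated)
R = np.array([[1e-2]])                     # measurement-noise covariance

x_hat, P_cov = np.array([1.0, 0.0]), np.eye(2) * 0.1
x_true = np.array([1.2, 0.0])
rng = np.random.default_rng(0)

for k in range(200):
    u = 0.02 if k < 100 else 0.0                       # feeding phase, then batch phase
    x_true = f(x_true, u) + rng.multivariate_normal([0, 0], Q)
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # prediction step
    Fk = F_jac(x_hat, u)
    x_hat = f(x_hat, u)
    P_cov = Fk @ P_cov @ Fk.T + Q
    # measurement update
    S_inn = H @ P_cov @ H.T + R
    K = P_cov @ H.T @ np.linalg.inv(S_inn)
    x_hat = x_hat + (K @ (y - H @ x_hat)).ravel()
    P_cov = (np.eye(2) - K @ H) @ P_cov

print("estimated vs. true substrate:", x_hat[0], x_true[0])
```

    In the dual-EKF and P-MHE variants described in the thesis, uncertain model parameters would be estimated alongside the states, and Q and R would themselves be estimated (via DO or ALS) rather than fixed a priori.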

    Reduced Order Modeling of Geophysical Flows Using Physics-Based and Data-Driven Modeling Techniques

    The growing advancements in computational power, algorithmic innovation, and the availability of data resources have started shaping the way we numerically model physical problems, now and for years to come. Many physical phenomena, whether in the natural sciences, engineering disciplines, or social sciences, are described by a set of ordinary or partial differential equations, which is referred to as the mathematical model of the physical system. High-fidelity numerical simulations provide valuable information about the flow behavior of the physical system by solving these sets of equations with suitable numerical schemes and modeling tools. However, despite the progress in software engineering and processor technologies, the computational burden of high-fidelity simulation is still a limiting factor for many practical problems, especially for large-scale physical problems with high spatio-temporal variability such as atmospheric and geophysical flows. Therefore, the development of efficient and robust algorithms that aim at achieving the maximum attainable quality of numerical simulation at optimal computational cost has become an active research question in the computational fluid dynamics community. As an alternative to existing techniques for computational cost reduction, reduced order modeling (ROM) strategies have proven successful in reducing computational costs significantly with little compromise in physical accuracy. In this thesis, we utilize state-of-the-art physics-based and data-driven modeling tools to develop efficient and improved ROM frameworks for large-scale geophysical flows, addressing the issues associated with conventional ROM approaches. We first develop an improved physics-based ROM framework by considering the analogy between the dynamic eddy-viscosity large eddy simulation (LES) model and truncated modal projection; we then present a hybrid modeling approach combining projection-based ROM with an extreme learning machine (ELM) neural network; and finally, we devise a fully data-driven ROM framework utilizing a long short-term memory (LSTM) recurrent neural network architecture. As a representative benchmark test case, we consider a two-dimensional quasi-geostrophic (QG) ocean circulation model, which displays an enormous range of fluctuating spatial and temporal scales. Throughout the thesis, we demonstrate our findings in terms of the time series evolution of field values and mean flow patterns, which suggest that the proposed ROM frameworks are robust and capable of predicting such fluid flows far more efficiently than the conventional projection-based ROM framework.
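    The projection-based ROMs referenced above are built on a proper orthogonal decomposition (POD) of a snapshot matrix; the sketch below shows that building block on a synthetic one-dimensional field standing in for the QG snapshots. The data, mode count, and noise level are assumptions for illustration only.

```python
# Hedged sketch: POD of a snapshot matrix via SVD, the core of projection-based ROM.
import numpy as np

nx, nt = 256, 200
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 10, nt)
# synthetic snapshots: two travelling structures plus small-scale noise (stand-in data)
snapshots = (np.sin(x[:, None] - 0.5 * t[None, :])
             + 0.3 * np.cos(3 * x[:, None] + t[None, :])
             + 0.01 * np.random.default_rng(1).standard_normal((nx, nt)))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

r = 4                                    # number of retained POD modes
Phi = U[:, :r]                           # spatial basis
a = Phi.T @ (snapshots - mean_field)     # temporal coefficients (projection onto the basis)
recon = mean_field + Phi @ a             # rank-r reconstruction

energy = np.cumsum(s**2) / np.sum(s**2)
print(f"{r} modes capture {energy[r - 1]:.4f} of the snapshot energy")
print("relative reconstruction error:",
      np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots))
```

    In the hybrid and fully data-driven frameworks developed in the thesis, the temporal coefficients would be advanced in time by an ELM or LSTM network rather than by a Galerkin projection of the governing equations.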

    Image-Based Force Estimation and Haptic Rendering For Robot-Assisted Cardiovascular Intervention

    Clinical studies have indicated that the loss of haptic perception is the prime limitation of robot-assisted cardiovascular intervention technology, hindering its global adoption. It causes compromised situational awareness for the surgeon during the intervention and may lead to health risks for the patients. This doctoral research was aimed at developing technology to address the limitation of robot-assisted intervention technology in the provision of haptic feedback. The literature review showed that sensor-free force estimation (the haptic cue) on endovascular devices, intuitive surgeon interface design, and haptic rendering within the surgeon interface were the major knowledge gaps. For sensor-free force estimation, an image-based force estimation method based on the inverse finite-element method (iFEM) was first developed and validated. Next, to address the limited real-time performance of the iFEM method, an inverse Cosserat rod model (iCORD) with a computationally efficient solution for endovascular devices was developed and validated. The iCORD was then adopted for analytical tip force estimation on steerable catheters. Experimental studies confirmed the accuracy and real-time performance of the iCORD for sensor-free force estimation. Afterward, a wearable drift-free rotation measurement device (MiCarp) was developed to facilitate the design of an intuitive surgeon interface by decoupling the rotation measurement from the insertion measurement. Validation studies showed that MiCarp had superior performance for spatial rotation measurement compared to other modalities. Finally, a novel haptic feedback system based on smart magnetoelastic elastomers was developed, analytically modeled, and experimentally validated. The proposed haptics-enabled surgeon module had an unbounded workspace for interventional tasks and provided an intuitive interface. Experimental validation, at both component and system levels, confirmed the usability of the proposed methods for robot-assisted intervention systems.
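    To make the inverse-model idea concrete, the sketch below estimates a catheter tip force by fitting a simple forward deflection model to an image-derived shape. A linear cantilever beam stands in for the iFEM/iCORD forward models of this work, and all geometric and stiffness values are hypothetical.

```python
# Hedged sketch: "inverse model" tip-force estimation from an observed device shape.
import numpy as np

L, E, I = 0.10, 2.0e9, 1.0e-13          # length [m], modulus [Pa], area inertia [m^4] (assumed)
s = np.linspace(0, L, 20)               # arc-length samples where the image gives deflection

def forward_deflection(F_tip):
    """Euler-Bernoulli deflection of a cantilever under a tip load F_tip (stand-in model)."""
    return F_tip * s**2 * (3 * L - s) / (6 * E * I)

# "measured" shape extracted from images (simulated here with noise)
F_true = 0.05                           # N
rng = np.random.default_rng(2)
w_meas = forward_deflection(F_true) + 2e-6 * rng.standard_normal(s.size)

# inverse problem: choose F minimizing the shape mismatch (closed form for a linear model)
basis = forward_deflection(1.0)          # deflection per unit force
F_est = (basis @ w_meas) / (basis @ basis)

print(f"estimated tip force: {F_est:.4f} N (true {F_true} N)")
```

    The actual iCORD formulation handles large deflections and distributed loads along the device, which a linear beam cannot; the sketch only illustrates the estimate-by-shape-matching principle.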