Efficient Deep Learning of Robust Policies from MPC using Imitation and Tube-Guided Data Augmentation
Imitation Learning (IL) has been increasingly employed to generate
computationally efficient policies from task-relevant demonstrations provided
by Model Predictive Control (MPC). However, commonly employed IL methods are
often data- and computationally-inefficient, as they require a large number of
MPC demonstrations, resulting in long training times, and they produce policies
with limited robustness to disturbances not experienced during training. In
this work, we propose an IL strategy to efficiently compress a computationally
expensive MPC into a Deep Neural Network (DNN) policy that is robust to
previously unseen disturbances. By using a robust variant of the MPC, called
Robust Tube MPC (RTMPC), and leveraging properties from the controller, we
introduce a computationally-efficient Data Aggregation (DA) method that enables
a significant reduction of the number of MPC demonstrations and training time
required to generate a robust policy. Our approach opens the possibility of
zero-shot transfer of a policy trained from a single MPC demonstration
collected in a nominal domain, such as a simulation or a robot in a
lab/controlled environment, to a new domain with previously-unseen bounded
model errors/perturbations. Numerical and experimental evaluations performed
using linear and nonlinear MPC for agile flight on a multirotor show that our
method outperforms strategies commonly employed in IL (such as DAgger and DR)
in terms of demonstration-efficiency, training time, and robustness to
perturbations unseen during training.
Comment: Under review. arXiv admin note: text overlap with arXiv:2109.0991
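The core of the tube-guided data augmentation described above can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: a box-shaped approximation of the tube cross-section, a linear ancillary feedback gain K, and hypothetical function and variable names. The key property it shows is that a single MPC demonstration (a nominal state/input trajectory) can be expanded into many state-action training pairs by sampling inside the tube and labeling each sample with the ancillary-controller action.

```python
import numpy as np

def tube_guided_augmentation(z_traj, v_traj, K, tube_radius, n_samples, rng):
    """Sketch of tube-guided DA: for each nominal state z_t with nominal
    input v_t, sample extra states x inside the robust tube and label them
    with the ancillary-controller action u = v_t + K @ (x - z_t)."""
    states, actions = [], []
    for z_t, v_t in zip(z_traj, v_traj):
        for _ in range(n_samples):
            # Uniform sample in a box approximation of the tube cross-section.
            x = z_t + rng.uniform(-tube_radius, tube_radius, size=z_t.shape)
            states.append(x)
            actions.append(v_t + K @ (x - z_t))
    return np.stack(states), np.stack(actions)
```

With a horizon of T nominal states and n_samples per state, one demonstration yields T * n_samples supervised pairs for policy training, which is the mechanism behind the claimed demonstration-efficiency gains.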
Tube-NeRF: Efficient Imitation Learning of Visuomotor Policies from MPC using Tube-Guided Data Augmentation and NeRFs
Imitation learning (IL) can train computationally-efficient sensorimotor
policies from a resource-intensive Model Predictive Controller (MPC), but it
often requires many samples, leading to long training times or limited
robustness. To address these issues, we combine IL with a variant of robust MPC
that accounts for process and sensing uncertainties, and we design a data
augmentation (DA) strategy that enables efficient learning of vision-based
policies. The proposed DA method, named Tube-NeRF, leverages Neural Radiance
Fields (NeRFs) to generate novel synthetic images, and uses properties of the
robust MPC (the tube) to select relevant views and to efficiently compute the
corresponding actions. We tailor our approach to the task of localization and
trajectory tracking on a multirotor, by learning a visuomotor policy that
generates control actions using images from the onboard camera as the only
source of horizontal position. Numerical evaluations show an 80-fold increase in
demonstration efficiency and a 50% reduction in training time over current IL
methods. Additionally, our policies successfully transfer to a real multirotor,
achieving low tracking errors despite large disturbances, with an onboard
inference time of only 1.5 ms.
Video: https://youtu.be/_W5z33ZK1m4
Comment: Evolved paper from our previous work: arXiv:2210.1012
Efficient Deep Learning of Robust, Adaptive Policies using Tube MPC-Guided Data Augmentation
The deployment of agile autonomous systems in challenging, unstructured
environments requires adaptation capabilities and robustness to uncertainties.
Existing robust and adaptive controllers, such as the ones based on MPC, can
achieve impressive performance at the cost of heavy online onboard
computations. Strategies that efficiently learn robust and onboard-deployable
policies from MPC have emerged, but they still lack fundamental adaptation
capabilities. In this work, we extend an existing efficient IL algorithm for
robust policy learning from MPC with the ability to learn policies that adapt
to challenging model/environment uncertainties. The key idea of our approach
is to modify the IL procedure by conditioning the policy on a learned
lower-dimensional model/environment representation that can be efficiently
estimated online. We tailor our approach to the task of learning an adaptive
position and attitude control policy to track trajectories under challenging
disturbances on a multirotor. Our evaluation is performed in a high-fidelity
simulation environment and shows that a high-quality adaptive policy can be
obtained in about hours. We additionally empirically demonstrate rapid
adaptation to in- and out-of-training-distribution uncertainties, achieving a
cm average position error under a wind disturbance that corresponds to
about of the weight of the robot and that is larger than the
maximum wind seen during training.
Comment: 8 pages, 6 figures
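The conditioning mechanism this abstract describes, a policy that takes as extra input a learned low-dimensional model/environment representation estimated online, can be illustrated structurally. This is a sketch under assumptions (the encoder and policy are toy single-layer maps, and all names, shapes, and weights are hypothetical, not the paper's architecture); it only shows how the latent estimate is fed to the policy alongside the observation.

```python
import numpy as np

def estimate_latent(history, W_enc):
    """Hypothetical online estimator: compress a flattened window of recent
    (state, action) pairs into a low-dimensional environment representation."""
    return np.tanh(history.reshape(-1) @ W_enc)

def adaptive_policy(obs, latent, W_pi):
    """Policy conditioned on both the current observation and the latent:
    the two are concatenated before the (toy) policy map."""
    return np.tanh(np.concatenate([obs, latent]) @ W_pi)
```

At deployment, only the encoder pass runs online; when the disturbance changes, the latent estimate shifts and the same policy weights produce adapted actions, which is what makes the approach adaptive without online re-optimization.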
Scan-to-BIM: Efficient Approach to Extract BIM Models from High Productive Indoor Mobile Mapping Survey
Building Information Modeling (BIM) represents one of the most interesting developments in the construction field of the last 20 years. The BIM process supports the creation of intelligent data that can be used throughout the life cycle of a construction project. Where a project involves a pre-existing structure, reality capture can provide the most critical information. The purpose of this paper is to describe an efficient approach to extract 3D models using highly productive indoor Mobile Mapping Systems (iMMS) and an optimized scan-to-BIM workflow. The scan-to-BIM procedure allows reconstructing several elements within a digital environment, preserving their features and reusing them in the development of the BIM project. The elaboration of the raw data acquired from the iMMS starts with the HERON® Desktop software, where a SLAM algorithm runs and a 3D point cloud model is produced. The model is then imported into the Gexcel Reconstructor® point cloud post-processing software, where a number of deliverables such as orthophotos, blueprints, and a filtered and optimized point cloud are obtained. In the proposed processing workflow, the data are then introduced to Autodesk ReCap®, where the model can be edited and the final texturized point cloud model extracted. The identification and modeling of the 3D objects that compose the BIM model is realized in ClearEdge3D EdgeWise™ and optimized in Autodesk Revit®. The implemented data elaboration workflow shows how an optimized data processing pipeline makes the scan-to-BIM procedure automatic and economically sustainable.
Dynamic facial expressions of emotions are discriminated at birth
The ability to discriminate between different facial expressions is fundamental from the first stages of postnatal life. The aim of this study is to investigate whether 2-day-old newborns are capable of discriminating facial expressions of emotions as they naturally take place in everyday interactions, that is, in motion. When two dynamic displays depicting a happy and a disgusted facial expression were simultaneously presented (i.e., visual preference paradigm), newborns did not manifest any visual preference (Experiment 1). Nonetheless, after being habituated to a happy or disgusted dynamic emotional expression (i.e., habituation paradigm), newborns successfully discriminated between the two (Experiment 2). These results indicate that at birth newborns are sensitive to dynamic faces expressing emotions.
Real-time prediction of breast lesions displacement during Ultrasound scanning using a position-based dynamics approach.
Although ultrasound (US) images represent the most popular modality for guiding breast biopsy, they are sometimes unable to render malignant regions, thus preventing the accurate lesion localization that is essential for a successful procedure. Biomechanical models can support the localization of suspicious areas identified on a pre-operative image during US scanning, since they are able to account for anatomical deformations resulting from US probe pressure. We propose a deformation model that relies on a position-based dynamics (PBD) approach to predict the displacement of internal targets induced by probe interaction during US acquisition. The PBD implementation available in NVIDIA FleX is exploited to create an anatomical model capable of deforming in real time. In order to account for each patient's specificities, model parameters are selected as those minimizing the localization error of a US-visible landmark of the anatomy of interest (in our case, a realistic breast phantom). The updated model is used to estimate the displacement of other internal lesions due to probe-tissue interaction. The proposed approach is compared to a finite element model (FEM), generally used in breast biomechanics, and to a rigid one. The localization error obtained when applying the PBD model remains below 11 mm for all the tumors, even for input displacements in the order of 30 mm. The proposed method obtains results aligned with FE models with faster computational performance, suitable for real-time applications. In addition, it outperforms the rigid model used to track lesion position in US-guided breast biopsies, at least halving the localization error for all the displacement ranges considered. The position-based dynamics approach has proved successful in modeling breast tissue deformations during US acquisition. Its stability, accuracy, and real-time performance make such a model suitable for tracking lesion displacement during US-guided breast biopsy.
A position-based framework for the prediction of probe-induced lesion displacement in Ultrasound-guided breast biopsy
Although ultrasound (US) images represent the most popular modality for guiding breast biopsy, they are sometimes unable to render malignant regions, thus preventing the accurate lesion localization that is essential for a successful procedure. Biomechanical models can support the localization of suspicious areas identified on a pre-operative image during US scanning, since they are able to account for anatomical deformations resulting from US probe pressure. We propose a deformation model that relies on a position-based dynamics (PBD) approach to predict the displacement of internal targets induced by probe interaction during US acquisition. The PBD implementation available in NVIDIA FleX is exploited to create an anatomical model capable of deforming online. Simulation parameters are initialized on a calibration phantom under different levels of probe-induced deformation, and are then fine-tuned by minimizing the localization error of a US-visible landmark of a realistic breast phantom. The updated model is used to estimate the displacement of other internal lesions due to probe-tissue interaction. The localization error obtained when applying the PBD model remains below 11 mm for all the tumors, even for input displacements in the order of 30 mm. This approach outperforms the rigid model used to track lesion position in US-guided breast biopsies, at least halving the localization error for all the displacement ranges considered. The position-based dynamics approach has proved successful in modeling breast tissue deformations during US acquisition. Its stability, accuracy, and real-time performance make such a model suitable for tracking lesion displacement during US-guided breast biopsy.
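The solver at the core of the position-based dynamics approach iteratively projects geometric constraints directly on particle positions, which is what gives PBD its stability and real-time performance. The canonical distance-constraint projection (a textbook PBD step, not the internals of NVIDIA FleX or of this paper's breast model) can be sketched as:

```python
import numpy as np

def project_distance_constraint(p1, p2, w1, w2, rest_len, stiffness=1.0):
    """One PBD position correction for a distance constraint: move both
    particles along their connecting axis, weighted by their inverse masses
    w1 and w2, so that |p2 - p1| approaches rest_len."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9 or (w1 + w2) == 0.0:
        return p1, p2          # degenerate direction, or both particles fixed
    n = d / dist               # unit vector from p1 toward p2
    corr = stiffness * (dist - rest_len) / (w1 + w2)
    return p1 + w1 * corr * n, p2 - w2 * corr * n
```

A full PBD time step sweeps such projections over all constraints (distance, volume, shape-matching) for a few iterations per frame; setting an inverse mass to zero pins a particle, which is how boundary conditions such as the chest wall can be modeled.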
Heregulin β1 induces the down regulation and the ubiquitin-proteasome degradation pathway of p185HER2 oncoprotein
Analysis of the fate of the p185HER2 oncoprotein following activation by heregulin β1 revealed the induction of tyrosine phosphorylation, down-modulation, and polyubiquitination of p185HER2. Receptor ubiquitination was suppressed in cells treated with heregulin β1 in the presence of sodium azide, an inhibitor of ATP-dependent reactions, or genistein, a protein tyrosine kinase inhibitor, indicating the requirement for kinase activity and ATP in p185HER2 polyubiquitination. Ubiquitinated p185HER2 was degraded by the 26S proteasome proteolytic pathway. Kinetics and inhibition experiments indicated that endocytosis of the receptor occurs downstream of the initiation of the degradation process.