Uncertainty-Induced Transferability Representation for Source-Free Unsupervised Domain Adaptation
Source-free unsupervised domain adaptation (SFUDA) aims to learn a target
domain model using unlabeled target data and the knowledge of a well-trained
source domain model. Most previous SFUDA works focus on inferring the
semantics of target data from the source knowledge. Without measuring the
transferability of that knowledge, these methods exploit it insufficiently
and cannot assess the reliability of the inferred target semantics. However,
existing transferability measurements require either source data or target
labels, neither of which is available in SFUDA. To this end,
firstly, we propose a novel Uncertainty-induced Transferability Representation
(UTR), which leverages uncertainty as the tool to analyse the channel-wise
transferability of the source encoder in the absence of the source data and
target labels. The domain-level UTR unravels how transferable the encoder
channels are to the target domain and the instance-level UTR characterizes the
reliability of the inferred target semantics. Secondly, based on the UTR, we
propose a novel Calibrated Adaption Framework (CAF) for SFUDA, including i) a
source knowledge calibration module that guides the target model to learn the
transferable source knowledge and discard the non-transferable knowledge, and
ii) a target semantics calibration module that calibrates the unreliable
semantics. With the help of the calibrated source knowledge and target
semantics, the model adapts to the target domain safely and, ultimately, more
effectively. Experiments verify the effectiveness of our method and show that
it achieves state-of-the-art performance on three SFUDA benchmarks. Code is
available at https://github.com/SPIresearch/UTR
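
As a rough illustration of the idea (not the paper's actual UTR formulation), a channel-wise transferability score can be derived from uncertainty alone, with no source data or target labels. The sketch below uses Monte Carlo dropout variance as the uncertainty proxy; the encoder interface, loader, and scoring function are hypothetical stand-ins.

```python
import torch

# Hedged sketch (not the paper's exact method): score how transferable each
# encoder channel is to the target domain using only unlabeled target data.
# Uncertainty proxy: variance of mean channel activations across Monte Carlo
# dropout passes; low-variance channels are scored as more transferable.

@torch.no_grad()
def channel_transferability(encoder, target_loader, n_passes=8, device="cpu"):
    encoder.to(device).train()  # .train() keeps dropout stochastic
    pass_means = []
    for _ in range(n_passes):
        batch_means = []
        for x in target_loader:            # assumes the loader yields tensors
            feats = encoder(x.to(device))  # shape: (batch, num_channels)
            batch_means.append(feats.mean(dim=0))
        pass_means.append(torch.stack(batch_means).mean(dim=0))
    stacked = torch.stack(pass_means)      # (n_passes, num_channels)
    uncertainty = stacked.var(dim=0)       # channel-wise MC-dropout variance
    return 1.0 / (1.0 + uncertainty)       # higher score = more transferable
```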
Closing the Loop on Runtime Monitors with Fallback-Safe MPC
When we rely on deep-learned models for robotic perception, we must recognize
that these models may behave unreliably on inputs dissimilar from the training
data, compromising the closed-loop system's safety. This raises fundamental
questions on how we can assess confidence in perception systems and to what
extent we can take safety-preserving actions when external environmental
changes degrade our perception model's performance. Therefore, we present a
framework to certify the safety of a perception-enabled system deployed in
novel contexts. To do so, we leverage robust model predictive control (MPC) to
control the system using the perception estimates while maintaining the
feasibility of a safety-preserving fallback plan that does not rely on the
perception system. In addition, we calibrate a runtime monitor using recently
proposed conformal prediction techniques to certifiably detect when the
perception system degrades beyond the tolerance of the MPC controller,
resulting in an end-to-end safety assurance. We show that this control
framework and calibration technique allow us to certify the system's safety
with orders of magnitude fewer samples than would be required to retrain the
perception network when deploying in a novel context, using a photo-realistic
aircraft taxiing simulator. Furthermore, we illustrate the safety-preserving
behavior of the MPC
on simulated examples of a quadrotor. We open-source our simulation platform
and provide videos of our results at our project page:
https://tinyurl.com/fallback-safe-mpc.
Comment: Accepted to the 2023 IEEE Conference on Decision and Control
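
To make the calibration step concrete, here is a minimal sketch of split conformal prediction for setting a runtime-monitor alarm threshold. The score definition, function names, and miscoverage level are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of split conformal calibration for a runtime monitor.
# Assumptions (not from the paper): nonconformity scores are perception
# error magnitudes on a held-out calibration set, exchangeable with
# deployment-time scores.

def conformal_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """Threshold a fresh in-distribution score exceeds with prob. <= alpha."""
    n = len(cal_scores)
    # Finite-sample-corrected quantile level: ceil((n + 1)(1 - alpha)) / n.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(cal_scores, min(level, 1.0), method="higher"))

def monitor_alarm(score: float, threshold: float) -> bool:
    # Trigger the safety-preserving fallback plan when perception
    # degradation exceeds the calibrated tolerance.
    return score > threshold

# Illustrative usage with synthetic calibration scores.
rng = np.random.default_rng(0)
cal = rng.exponential(scale=1.0, size=200)
tau = conformal_threshold(cal, alpha=0.05)
print(monitor_alarm(score=3.2, threshold=tau))
```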