WLST: Weak Labels Guided Self-training for Weakly-supervised Domain Adaptation on 3D Object Detection
In the field of domain adaptation (DA) on 3D object detection, most of the
work is dedicated to unsupervised domain adaptation (UDA). Yet, without any
target annotations, the performance gap between UDA approaches and the
fully-supervised approach remains noticeable, which limits their use in
real-world applications. On the other hand, weakly-supervised domain adaptation
(WDA) is an underexplored yet practical task that requires only a small amount
of labeling effort on the target domain. To improve DA performance in a cost-effective
way, we propose a general weak labels guided self-training framework, WLST,
designed for WDA on 3D object detection. By incorporating an autolabeler, which
can generate 3D pseudo labels from 2D bounding boxes, into the existing
self-training pipeline, our method is able to generate more robust and
consistent pseudo labels that would benefit the training process on the target
domain. Extensive experiments demonstrate the effectiveness, robustness, and
detector-agnosticism of our WLST framework. Notably, it outperforms previous
state-of-the-art methods on all evaluation tasks.
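For illustration, below is a minimal sketch of one weak-label-guided self-training round in the spirit of the framework above; `detector`, `autolabeler`, `iou_3d`, and the fusion rule are hypothetical stand-ins for the sketch, not the paper's actual interfaces or algorithm.

```python
# Hedged sketch: fuse detector pseudo labels with autolabeler boxes, then retrain.
def wlst_round(detector, autolabeler, target_scenes, iou_3d,
               score_thresh=0.6, match_iou=0.5):
    pseudo_labeled = []
    for scene in target_scenes:
        # Detector proposes 3D pseudo labels on unlabeled target-domain point clouds.
        det_boxes = [b for b in detector.predict(scene.points) if b.score > score_thresh]
        # Autolabeler lifts the cheap 2D weak labels (image boxes) into 3D boxes.
        weak_boxes = autolabeler.lift_to_3d(scene.points, scene.boxes_2d)
        # Keep detector boxes confirmed by a weak label; add unmatched weak-label
        # boxes so target-domain recall is not lost.
        confirmed = [b for b in det_boxes
                     if any(iou_3d(b, w) >= match_iou for w in weak_boxes)]
        extra = [w for w in weak_boxes
                 if all(iou_3d(w, b) < match_iou for b in det_boxes)]
        pseudo_labeled.append((scene.points, confirmed + extra))
    detector.train(pseudo_labeled)  # retrain on the fused, more consistent pseudo labels
    return detector
```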
Fair Robust Active Learning by Joint Inconsistency
Fair Active Learning (FAL) utilizes active learning techniques to achieve
high model performance with limited data and to reach fairness across
sensitive groups (e.g., genders). However, the impact of adversarial
attacks, which is vital for various safety-critical machine learning
applications, has not yet been addressed in FAL. Observing this, we introduce a novel
task, Fair Robust Active Learning (FRAL), integrating conventional FAL and
adversarial robustness. FRAL requires ML models to leverage active learning
techniques to jointly achieve equalized performance on benign data and
equalized robustness against adversarial attacks between groups. In this new
task, previous FAL methods generally suffer from a prohibitive
computational burden and ineffectiveness. Therefore, we develop a simple yet
effective FRAL strategy by Joint INconsistency (JIN). To efficiently find
samples that can boost the performance and robustness of disadvantaged groups
for labeling, our method exploits the prediction inconsistency between benign
and adversarial samples as well as between standard and robust models.
Extensive experiments under diverse datasets and sensitive groups demonstrate
that our method not only achieves fairer performance on benign samples but also
obtains fairer robustness under white-box PGD attacks compared with existing
active learning and FAL baselines. We are optimistic that FRAL would pave a new
path for developing safe and robust ML research and applications, such as facial
attribute recognition in biometric systems.
Comment: 11 pages, 3 figures
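As a rough illustration of the acquisition idea, the snippet below scores unlabeled samples by summing two inconsistency terms: benign vs. adversarial predictions of the standard model, and standard vs. robust model predictions on benign inputs. The KL-based scoring rule and all names are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np

def jin_scores(p_std_benign, p_std_adv, p_rob_benign):
    """Illustrative joint-inconsistency score; inputs are softmax matrices
    of shape (n_samples, n_classes)."""
    def kl(p, q, eps=1e-12):
        return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)
    attack_inconsistency = kl(p_std_benign, p_std_adv)   # benign vs. adversarial
    model_inconsistency = kl(p_std_benign, p_rob_benign) # standard vs. robust model
    return attack_inconsistency + model_inconsistency

# Toy pool of 5 samples, 3 classes: label the most inconsistent ones.
rng = np.random.default_rng(0)
def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

p_std_benign = softmax(rng.normal(size=(5, 3)))
p_std_adv = softmax(rng.normal(size=(5, 3)))
p_rob_benign = softmax(rng.normal(size=(5, 3)))
to_label = np.argsort(-jin_scores(p_std_benign, p_std_adv, p_rob_benign))[:2]
print("indices selected for labeling:", to_label)
```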
Regulation of CLC-1 chloride channel biosynthesis by FKBP8 and Hsp90β.
Mutations in the human CLC-1 chloride channel are associated with the skeletal muscle disorder myotonia congenita. The disease-causing mutant A531V manifests enhanced proteasomal degradation of CLC-1. We recently found that CLC-1 degradation is mediated by the cullin 4 ubiquitin ligase complex. It is currently unclear how quality control and protein degradation systems coordinate with each other to process the biosynthesis of CLC-1. Herein we aim to ascertain the molecular nature of the protein quality control system for CLC-1. We identified three CLC-1-interacting proteins that are well-known heat shock protein 90 (Hsp90)-associated co-chaperones: FK506-binding protein 8 (FKBP8), activator of Hsp90 ATPase homolog 1 (Aha1), and Hsp70/Hsp90 organizing protein (HOP). These co-chaperones promote both the protein level and the functional expression of wild-type CLC-1 and the A531V mutant. CLC-1 biosynthesis is also facilitated by the molecular chaperones Hsc70 and Hsp90β. The protein stability of CLC-1 is notably increased by FKBP8 and by the Hsp90β inhibitor 17-allylamino-17-demethoxygeldanamycin (17-AAG), which substantially suppresses cullin 4 expression. We further confirmed that cullin 4 may interact with Hsp90β and FKBP8. Our data are consistent with the idea that FKBP8 and Hsp90β play an essential role in the late phase of CLC-1 quality control by dynamically coordinating protein folding and degradation.
Indoor Depth Completion with Boundary Consistency and Self-Attention
Depth estimation features are helpful for 3D recognition. Commodity-grade
depth cameras are able to capture depth and color images in real time. However,
glossy, transparent, or distant surfaces cannot be scanned properly by the
sensor. As a result, enhancing and restoring the sensed depth is an
important task. Depth completion aims at filling the holes that sensors fail to
detect, which is still a complex task for machines to learn. Traditional
hand-tuned methods have reached their limits, while neural-network-based
methods tend to copy and interpolate the output from surrounding depth values,
which leads to blurred boundaries and loss of the depth map's structure.
Consequently, our main work is to design an end-to-end network that improves
completed depth maps while maintaining edge clarity. We utilize a self-attention
mechanism, previously used in image inpainting, to extract more useful
information in each convolutional layer so that the completed depth map is
enhanced. In addition, we propose a boundary consistency concept to enhance the
depth map's quality and structure. Experimental results validate the
effectiveness of our self-attention and boundary consistency schema, which
outperforms previous state-of-the-art depth completion work on the Matterport3D
dataset. Our code is publicly available at
https://github.com/patrickwu2/Depth-Completion
Comment: Accepted by ICCVW (RLQ) 2019
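As one plausible reading of the boundary consistency idea, the sketch below penalizes mismatches between edge maps of the predicted and reference depth so that sharp boundaries are preserved; the Sobel-based loss is an illustrative stand-in assumed for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sobel_edges(depth):
    """Edge magnitude of a (N, 1, H, W) depth map via fixed Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(depth, kx, padding=1)
    gy = F.conv2d(depth, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def boundary_consistency_loss(pred_depth, ref_depth):
    """Match edge maps of predicted and reference depth to discourage blur."""
    return F.l1_loss(sobel_edges(pred_depth), sobel_edges(ref_depth))

# Toy usage: combine a standard reconstruction term with the boundary term.
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
gt = torch.rand(2, 1, 64, 64)
loss = F.l1_loss(pred, gt) + 0.5 * boundary_consistency_loss(pred, gt)
loss.backward()
```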
Self-correcting LLM-controlled Diffusion Models
Text-to-image generation has witnessed significant progress with the advent
of diffusion models. Despite the ability to generate photorealistic images,
current text-to-image diffusion models still often struggle to accurately
interpret and follow complex input text prompts. In contrast to existing models
that aim to generate images only with their best effort, we introduce
Self-correcting LLM-controlled Diffusion (SLD). SLD is a framework that
generates an image from the input prompt, assesses its alignment with the
prompt, and performs self-corrections on the inaccuracies in the generated
image. Steered by an LLM controller, SLD turns text-to-image generation into an
iterative closed-loop process, ensuring correctness in the resulting image. SLD
is not only training-free but can also be seamlessly integrated with diffusion
models behind API access, such as DALL-E 3, to further boost the performance of
state-of-the-art diffusion models. Experimental results show that our approach
can rectify a majority of incorrect generations, particularly in generative
numeracy, attribute binding, and spatial relationships. Furthermore, by simply
adjusting the instructions to the LLM, SLD can perform image editing tasks,
bridging the gap between text-to-image generation and image editing pipelines.
We will make our code available for future research and applications.
Comment: 16 pages, 10 figures
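A schematic sketch of the closed-loop process described above; `generate_image`, `llm_assess_alignment`, and `apply_corrections` are hypothetical placeholders for this sketch rather than the SLD API.

```python
# Hedged sketch of the generate -> assess -> correct cycle (hypothetical callables).
def self_correcting_generation(prompt, generate_image, llm_assess_alignment,
                               apply_corrections, max_rounds=3):
    image = generate_image(prompt)                    # initial text-to-image pass
    for _ in range(max_rounds):
        issues = llm_assess_alignment(prompt, image)  # LLM checks counts, attributes, layout
        if not issues:                                # prompt and image already agree
            return image
        image = apply_corrections(image, issues)      # fix only the flagged inaccuracies
    return image
```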
Flowtable-Free Routing for Data Center Networks: A Software-Defined Approach
The paradigm shift toward SDN has exhibited the following trends: (1) relying on a centralized and more powerful controller to make intelligent decisions, and (2) allowing a set of relatively dumb switches to route packets. Therefore, efficiently looking up the flowtables in forwarding switches to guarantee low latency becomes a critical issue. In this paper, following this paradigm, we propose a new routing scheme called KeySet which is flowtable-free and enables constant-time switching at the forwarding switches. Instead of looking up long flowtables, KeySet relies on a residual system to quickly calculate routing paths. A switch needs only to perform simple modular arithmetic to obtain a packet's forwarding output port. Moreover, KeySet has a nice fault-tolerant capability because in many cases the controller does not need to update flowtables at switches when a failure occurs. We validate KeySet through extensive simulations by using general as well as Facebook fat-tree topologies. The results show that KeySet outperforms the KeyFlow scheme [1] by at least 25% in terms of the length of the forwarding label. Moreover, we show that KeySet is very efficient when applied to fat-trees.
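To make the residue-based idea concrete, the self-contained sketch below encodes a whole path in a single label via the Chinese Remainder Theorem, so that each switch recovers its output port with one modulo operation and no flowtable lookup. The specific keys and ports are made-up examples for illustration, not values from the paper.

```python
def crt_label(keys, ports):
    """Smallest label L with L mod keys[i] == ports[i] (keys pairwise coprime)."""
    M = 1
    for k in keys:
        M *= k
    label = 0
    for k, p in zip(keys, ports):
        m = M // k
        label += p * m * pow(m, -1, k)  # modular inverse of m mod k
    return label % M

switch_keys = [5, 7, 9, 11]   # pairwise-coprime keys, one per switch on the path
out_ports = [2, 4, 1, 3]      # desired output port at each switch
label = crt_label(switch_keys, out_ports)

# Each switch performs a single modular reduction to recover its port.
assert [label % k for k in switch_keys] == out_ports
print(label, [label % k for k in switch_keys])
```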