DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection
The Mean-Teacher (MT) scheme is widely adopted in semi-supervised object
detection (SSOD). In MT, sparse pseudo labels, obtained from the final
predictions of the teacher (e.g., after Non-Maximum Suppression (NMS)
post-processing), are converted into dense supervision for the student via
hand-crafted label assignment. However, this sparse-to-dense paradigm
complicates the SSOD pipeline and neglects the powerful direct, dense
supervision the teacher can provide. In this paper, we attempt to directly
leverage the teacher's dense guidance to supervise student training, i.e., the
dense-to-dense paradigm. Specifically, we propose the Inverse NMS Clustering
(INC) and Rank Matching (RM) to instantiate the dense supervision, without the
widely used sparse pseudo labels. INC leads the student to group candidate
boxes into the same clusters that NMS forms for the teacher, implemented by
learning the grouping information revealed in the teacher's NMS procedure.
After obtaining the same grouping scheme as the teacher via INC, the student
further imitates the teacher's rank distribution over the clustered candidates
through Rank Matching. With the proposed INC and RM, we integrate Dense Teacher
Guidance into Semi-Supervised Object Detection (termed DTG-SSOD), successfully
abandoning sparse pseudo labels and enabling more informative learning on
unlabeled data. On COCO benchmark, our DTG-SSOD achieves state-of-the-art
performance under various labelling ratios. For example, under 10% labelling
ratio, DTG-SSOD improves the supervised baseline from 26.9 to 35.9 mAP,
outperforming the previous best method, Soft Teacher, by 1.9 points.
Comment: Technical report
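The grouping signal that Inverse NMS Clustering learns from already exists inside standard NMS: each suppressed box is absorbed by the kept box that suppressed it. A minimal pure-Python sketch of greedy NMS that records this grouping (the box coordinates, scores, and `iou_thresh` default are illustrative, not values from the paper):

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms_with_groups(boxes, scores, iou_thresh=0.5):
    """Greedy NMS that also returns, for every candidate box, the index
    of the kept box whose cluster absorbed it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep, group, suppressed = [], {}, set()
    for i in order:
        if i in suppressed:
            continue
        keep.append(i)
        group[i] = i  # a kept box heads its own cluster
        for j in order:
            if j == i or j in suppressed:
                continue
            if iou(boxes[i], boxes[j]) >= iou_thresh:
                suppressed.add(j)
                group[j] = i  # j joins i's cluster
    return keep, group
```

In DTG-SSOD the student is trained to reproduce the teacher's cluster assignment (INC) and then to match the teacher's score ranking within each cluster (RM); the sketch only shows where that grouping signal comes from, not the training losses themselves.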
Integrating Stock Features and Global Information via Large Language Models for Enhanced Stock Return Prediction
The remarkable achievements and rapid advancements of Large Language Models
(LLMs) such as ChatGPT and GPT-4 have showcased their immense potential in
quantitative investment. Traders can effectively leverage these LLMs to analyze
financial news and predict stock returns accurately. However, integrating LLMs
into existing quantitative models presents two primary challenges: the
insufficient utilization of semantic information embedded within LLMs and the
difficulties in aligning the latent information within LLMs with pre-existing
quantitative stock features. We propose a novel framework consisting of two
components to surmount these challenges. The first component, the Local-Global
(LG) model, introduces three distinct strategies for modeling global
information. These approaches are grounded respectively on stock features, the
capabilities of LLMs, and a hybrid method combining the two paradigms. The
second component, Self-Correlated Reinforcement Learning (SCRL), focuses on
aligning the embeddings of financial news generated by LLMs with stock features
within the same semantic space. By implementing our framework, we have
demonstrated superior performance in Rank Information Coefficient and returns,
particularly compared to models relying only on stock features in the China
A-share market.
Comment: 8 pages, International Joint Conferences on Artificial Intelligence
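The Rank Information Coefficient used for evaluation is the Spearman rank correlation between predicted and realized stock returns over a cross-section of stocks. A small pure-Python sketch (the helper names are ours, and ties are ignored for simplicity):

```python
def ranks(xs):
    # rank values 1..n; ties are not handled (fine for illustration)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def rank_ic(predicted, realized):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rp, rr = ranks(predicted), ranks(realized)
    n = len(rp)
    mp, mr = sum(rp) / n, sum(rr) / n
    cov = sum((a - mp) * (b - mr) for a, b in zip(rp, rr))
    var_p = sum((a - mp) ** 2 for a in rp)
    var_r = sum((b - mr) ** 2 for b in rr)
    return cov / (var_p * var_r) ** 0.5
```

A higher Rank IC means the model orders stocks more consistently with their realized returns, which is why it is a standard metric alongside raw returns in quantitative investment studies.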
GPT Understands, Too
While GPTs with traditional fine-tuning fail to achieve strong results on
natural language understanding (NLU), we show that GPTs can be better than or
comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning --
which employs trainable continuous prompt embeddings. On the knowledge probing
(LAMA) benchmark, the best GPT recovers 64% (P@1) of world knowledge without
any additional text provided during test time, which substantially improves the
previous best by 20+ percentage points. On the SuperGLUE benchmark, GPTs
achieve performance comparable, and sometimes superior, to similar-sized BERTs in
supervised learning. Importantly, we find that P-tuning also improves BERTs'
performance in both few-shot and supervised settings while largely reducing the
need for prompt engineering. Consequently, P-tuning outperforms the
state-of-the-art approaches on the few-shot SuperGLUE benchmark.
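The core mechanism of P-tuning is to prepend trainable continuous vectors to the model's input embeddings, so that gradient descent updates only the prompt vectors while the backbone can stay frozen. A shape-level pure-Python sketch (the dimensions, prompt length, and initialization are illustrative; a real implementation would use a deep-learning framework and learn the prompts by backpropagation):

```python
import random

EMBED_DIM = 8    # illustrative hidden size, not the real model's
PROMPT_LEN = 3   # number of trainable continuous prompt vectors

def init_prompt(prompt_len=PROMPT_LEN, dim=EMBED_DIM, seed=0):
    """Trainable continuous prompt: dense vectors that would be updated
    by gradient descent while the backbone weights stay fixed."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 0.02) for _ in range(dim)] for _ in range(prompt_len)]

def prepend_prompt(prompt, token_embeddings):
    """The model consumes [prompt ; token embeddings] as one sequence."""
    return prompt + token_embeddings

# toy "token embeddings" for a 5-token input
tokens = [[0.0] * EMBED_DIM for _ in range(5)]
inputs = prepend_prompt(init_prompt(), tokens)
```

Because the prompt lives in continuous embedding space rather than the discrete vocabulary, it can be optimized directly, which is what reduces the need for hand-crafted prompt engineering.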
Protecting P-Glycoprotein at the Blood-Brain Barrier from Degradation in an Alzheimer's Disease Mouse Model
BACKGROUND: Failure to clear Aβ from the brain is partly responsible for Aβ brain accumulation in Alzheimer's disease (AD). A critical protein for clearing Aβ across the blood-brain barrier is the efflux transporter P-glycoprotein (P-gp). In AD, P-gp levels are reduced, which contributes to impaired Aβ brain clearance. However, the mechanism responsible for decreased P-gp levels is poorly understood and there are no strategies available to protect P-gp. We previously demonstrated in isolated brain capillaries ex vivo that human Aβ40 (hAβ40) triggers P-gp degradation by activating the ubiquitin-proteasome pathway. In this pathway, hAβ40 initiates P-gp ubiquitination, leading to internalization and proteasomal degradation of P-gp, which then results in decreased P-gp protein expression and transport activity levels. Here, we extend this line of research and present results from an in vivo study using a transgenic mouse model of AD (human amyloid precursor protein (hAPP)-overexpressing mice; Tg2576).
METHODS: In our study, hAPP mice were treated with vehicle, nocodazole (NCZ, microtubule inhibitor to block P-gp internalization), or a combination of NCZ and the P-gp inhibitor cyclosporin A (CSA). We determined P-gp protein expression and transport activity levels in isolated mouse brain capillaries and Aβ levels in plasma and brain tissue.
RESULTS: Treating hAPP mice with 5 mg/kg NCZ for 14 days increased P-gp levels to levels found in WT mice. Consistent with this, P-gp-mediated hAβ42 transport in brain capillaries was increased in NCZ-treated hAPP mice compared to untreated hAPP mice. Importantly, NCZ treatment significantly lowered hAβ40 and hAβ42 brain levels in hAPP mice, whereas hAβ40 and hAβ42 levels in plasma remained unchanged.
CONCLUSIONS: These findings provide in vivo evidence that microtubule inhibition maintains P-gp protein expression and transport activity levels, which in turn helps to lower hAβ brain levels in hAPP mice. Thus, protecting P-gp at the blood-brain barrier may provide a novel therapeutic strategy for AD and other Aβ-based pathologies.