Few-Shot Image Recognition by Predicting Parameters from Activations
In this paper, we are interested in the few-shot learning problem. In
particular, we focus on a challenging scenario where the number of categories
is large and the number of examples per novel category is very limited, e.g. 1,
2, or 3. Motivated by the close relationship between the parameters and the
activations in a neural network associated with the same category, we propose a
novel method that can adapt a pre-trained neural network to novel categories by
directly predicting the parameters from the activations. Zero training is
required in adaptation to novel categories, and fast inference is realized by a
single forward pass. We evaluate our method on few-shot image recognition on
the ImageNet dataset, where it achieves state-of-the-art classification
accuracy on novel categories by a significant margin while maintaining
comparable performance on the large-scale categories. We also test our method
on the MiniImageNet dataset, where it strongly outperforms the previous
state-of-the-art methods.
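The core idea of predicting classifier parameters from activations can be illustrated with a minimal sketch. All names here are our own, and the "parameter predictor" below is deliberately simplified to the normalized mean of the support activations, standing in for the learned mapping the paper describes; it is an illustration of the adaptation-free workflow, not the paper's method.

```python
import numpy as np

def predict_category_weights(support_activations):
    """Predict a classifier weight vector for a novel category from the
    activations of its few support examples (1, 2, or 3 shots). As a
    stand-in for the learned parameter predictor in the paper, this
    sketch uses the normalized mean activation."""
    mean_act = support_activations.mean(axis=0)
    return mean_act / np.linalg.norm(mean_act)

def classify(activation, weight_matrix):
    """Adaptation-free inference: a single forward pass produces the
    activation, then an inner product with the predicted weights of
    every category yields the label."""
    return int(np.argmax(weight_matrix @ activation))

rng = np.random.default_rng(0)
# Two toy novel categories with orthogonal "prototype" activations.
proto_a = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
proto_b = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
shots_a = proto_a + 0.05 * rng.normal(size=(3, 8))  # 3-shot support set
shots_b = proto_b + 0.05 * rng.normal(size=(3, 8))

# Zero training: weights for the novel categories are predicted directly.
W = np.stack([predict_category_weights(shots_a),
              predict_category_weights(shots_b)])
query = proto_a + 0.05 * rng.normal(size=8)
print(classify(query, W))  # recognizes the query as category 0
```

The key property the sketch preserves is that adapting to a novel category requires no gradient updates: the new row of the weight matrix is computed directly from a handful of activations.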
Association of lactate-to-albumin ratio with in-hospital and intensive care unit mortality in patients with intracerebral hemorrhage
Background: Intracerebral hemorrhage (ICH) is a severe stroke subtype with a high mortality rate; the lactate-to-albumin ratio (LAR) is a new biomarker for predicting clinical outcomes in patients with ICH. However, the relationship between LAR and mortality in patients with ICH treated in the intensive care unit (ICU) remains controversial. Therefore, in this study, we aimed to investigate the association between LAR and in-hospital and ICU mortality in patients with ICH.
Methods: Patients with ICH were selected from the Medical Information Mart for Intensive Care III (MIMIC-III) database; their clinical information, including baseline characteristics, vital signs, comorbidities, laboratory test results, and scoring systems, was extracted. Univariate and multivariate Cox proportional hazards analyses were used to investigate the association of LAR with in-hospital and ICU mortality. The maximum selection statistical method and subgroup analysis were used to investigate these relationships further. Kaplan–Meier (KM) analysis was used to draw survival curves.
Results: This study enrolled 237 patients with ICH whose lactate and albumin levels, with median values of 1.975 and 3.6 mg/dl, respectively, were measured within the first 24 h after ICU admission. LAR was associated with an increased risk of in-hospital mortality [unadjusted hazard ratio (HR), 1.79; 95% confidence interval (CI), 1.32–2.42; p < 0.001] and ICU mortality (unadjusted HR, 1.88; 95% CI, 1.38–2.55; p < 0.001). A cut-off value of 0.963 was used to classify patients into high LAR (≥0.963) and low LAR (<0.963) groups, and the survival curves showed significant differences between the two groups (p = 0.0058 and 0.0048, respectively).
Furthermore, the high LAR group had a significantly increased risk of in-hospital and ICU mortality compared to the low LAR group.
Conclusion: Our study suggests that a high LAR is associated with an increased risk of in-hospital and ICU mortality in patients with ICH. Thus, the LAR is a useful prognostic predictor of clinical outcomes in patients with ICH.
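The biomarker itself and the reported cut-off can be made concrete with a minimal sketch. The function names are our own, the values are the medians and cut-off quoted in the abstract, and no units are asserted beyond those in the source:

```python
def lactate_albumin_ratio(lactate, albumin):
    """Lactate-to-albumin ratio (LAR), the biomarker studied here."""
    return lactate / albumin

# Cut-off reported in the abstract for stratifying patients.
LAR_CUTOFF = 0.963

def lar_group(lactate, albumin, cutoff=LAR_CUTOFF):
    """Assign a patient to the high-LAR (>= cutoff) or low-LAR group."""
    if lactate_albumin_ratio(lactate, albumin) >= cutoff:
        return "high LAR"
    return "low LAR"

# Using the median levels reported in the abstract (lactate 1.975, albumin 3.6):
print(lar_group(1.975, 3.6))  # 1.975 / 3.6 ~= 0.549 -> "low LAR"
```

A patient at the reported median levels falls well below the 0.963 threshold, so the high-LAR group identifies patients whose lactate is markedly elevated relative to albumin.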
Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers
Although vision transformers (ViTs) have shown promising results in various
computer vision tasks recently, their high computational cost limits their
practical applications. Previous approaches that prune redundant tokens have
demonstrated a good trade-off between performance and computation costs.
Nevertheless, errors caused by pruning strategies can lead to significant
information loss. Our quantitative experiments reveal that the impact of pruned
tokens on performance is noticeable. To address this issue, we propose a
novel joint Token Pruning & Squeezing module (TPS) for compressing vision
transformers with higher efficiency. First, TPS applies pruning to obtain the
reserved and pruned token subsets. Second, TPS squeezes the information of pruned
tokens into partial reserved tokens via the unidirectional nearest-neighbor
matching and similarity-based fusing steps. Our approach outperforms
state-of-the-art methods at all token pruning intensities. In particular, when
shrinking the computational budgets of DeiT-tiny and DeiT-small to 35%, it
improves accuracy by 1%-6% over the baselines on ImageNet classification. The
proposed method accelerates the throughput of DeiT-small beyond that of
DeiT-tiny, while its accuracy surpasses DeiT-tiny by 4.78%. Experiments on
various transformers demonstrate the effectiveness of our method, and analysis
experiments show its higher robustness to errors of the token pruning policy.
Code is available at https://github.com/megvii-research/TPS-CVPR2023.
Comment: Accepted to CVPR 2023.
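The prune-then-squeeze pipeline can be sketched in a few lines of numpy. This is a simplified illustration under our own assumptions (importance scores are given as an input, and the fusing weights are clipped cosine similarities), not the paper's module, which operates inside a vision transformer:

```python
import numpy as np

def token_prune_and_squeeze(tokens, scores, keep_ratio=0.35):
    """Simplified sketch of the TPS idea.
    tokens: (N, D) token embeddings; scores: (N,) importance scores.
    Step 1 (prune): keep the top-k tokens by score.
    Step 2 (squeeze): match each pruned token to its most similar
    reserved token (unidirectional nearest neighbor under cosine
    similarity) and fuse its information in with a similarity weight."""
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    order = np.argsort(scores)[::-1]
    reserved = tokens[order[:n_keep]].copy()
    pruned = tokens[order[n_keep:]]
    if len(pruned) == 0:
        return reserved
    # Cosine similarity between every pruned and reserved token.
    r = reserved / np.linalg.norm(reserved, axis=1, keepdims=True)
    p = pruned / np.linalg.norm(pruned, axis=1, keepdims=True)
    sim = p @ r.T                         # (num_pruned, num_reserved)
    nearest = sim.argmax(axis=1)          # unidirectional NN matching
    for j in range(n_keep):
        mask = nearest == j
        if mask.any():
            w = np.clip(sim[mask, j], 0.0, None)  # similarity-based weights
            reserved[j] = (reserved[j] + w @ pruned[mask]) / (1.0 + w.sum())
    return reserved

rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))   # 8 tokens, embedding dim 4
scores = rng.random(8)             # e.g. attention-derived importance
out = token_prune_and_squeeze(tokens, scores, keep_ratio=0.5)
print(out.shape)  # (4, 4): half the tokens remain after pruning + squeezing
```

The point the sketch captures is why squeezing improves robustness: a token dropped by a bad pruning decision is not discarded outright but folded into its nearest reserved host, so some of its information survives the compression.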