
    SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data

    The recent success of fine-tuning large models pretrained on broad data at scale has led to a significant paradigm shift in deep learning, from task-centric model design to task-agnostic representation learning followed by task-specific fine-tuning. As the representations of pretrained models are used as a foundation for many downstream tasks, this paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data. We establish a reference from the theoretically derived robustness-accuracy tradeoff of a class-conditional Gaussian mixture. Given a pretrained model, the representations of data synthesized from this Gaussian mixture are compared against the reference to infer representation quality. By taking the ratio of the areas under the robustness-accuracy curves of the representations and of the raw data, SynBench offers a quantifiable score for robustness-accuracy benchmarking. Our framework applies to a wide range of pretrained models that take continuous data as input and is independent of downstream tasks and datasets. Evaluated on several pretrained vision transformers, the experimental results show that the SynBench score matches well with the actual linear probing performance of the pretrained model on downstream tasks. Moreover, our framework can be used to inform the design of robust linear probing on pretrained representations to mitigate the robustness-accuracy tradeoff in downstream tasks.
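
    To make the reference construction concrete, the following is a minimal numpy sketch along the lines of the abstract: it samples a two-class isotropic Gaussian mixture, uses the Bayes-optimal robust accuracy Phi(snr - eps/sigma) as the idealized raw-data curve, certifies a least-squares linear probe on the encoded samples, and reports the ratio of the areas under the two curves. The encoder, signal-to-noise ratio, and certification rule here are placeholder assumptions, not the paper's exact protocol.

    # Sketch of a SynBench-style robustness-accuracy reference, assuming an isotropic
    # class-conditional Gaussian mixture N(+/- mu, sigma^2 I) with equal priors and
    # l2-bounded perturbations of radius eps. `encoder` is a placeholder for any
    # pretrained model mapping raw inputs to representations.
    import numpy as np
    from scipy.stats import norm

    def reference_robust_accuracy(eps_over_sigma, snr):
        """Robust accuracy of the Bayes-optimal classifier on the Gaussian mixture."""
        return norm.cdf(snr - eps_over_sigma)

    def empirical_robust_accuracy(z, y, eps_grid):
        """Certified accuracy of a least-squares linear probe on representations z.

        A sample counts as robust at radius eps if its normalized margin
        y * (w . z) / ||w||_2 exceeds eps (the standard l2 certificate)."""
        w, *_ = np.linalg.lstsq(z, y, rcond=None)
        margins = y * (z @ w) / np.linalg.norm(w)
        return np.array([(margins > e).mean() for e in eps_grid])

    def synbench_style_score(encoder, d=32, n=2000, sigma=1.0, snr=3.0, n_eps=50):
        rng = np.random.default_rng(0)
        mu = np.zeros(d); mu[0] = snr * sigma                       # class mean at the desired SNR
        y = rng.choice([-1.0, 1.0], size=n)
        x = y[:, None] * mu + sigma * rng.standard_normal((n, d))   # synthetic inputs
        eps_grid = np.linspace(0.0, 2.0 * snr * sigma, n_eps)
        ref = reference_robust_accuracy(eps_grid / sigma, snr)      # idealized raw-data curve
        emp = empirical_robust_accuracy(encoder(x), y, eps_grid)    # curve for the representations
        return np.trapz(emp, eps_grid) / np.trapz(ref, eps_grid)    # area-under-curve ratio

    # With an identity "encoder" the score should be close to 1.
    print(synbench_style_score(lambda x: x))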

    Fastened CROWN: Tightened Neural Network Robustness Certificates

    The rapid growth of real-world deep learning applications is accompanied by severe safety concerns. To address these concerns, much research has been devoted to reliably evaluating the fragility of deep neural networks. Apart from devising adversarial attacks, methods that certify safeguarded regions have also been developed over the past five years. The work of Salman et al. unifies a family of existing verifiers under a convex relaxation framework. We draw inspiration from that work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions for a given linear programming problem under mild constraints. Given this theoretical result, the computationally expensive linear-programming-based method is shown to be unnecessary. We then propose an optimization-based approach, FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks. Extensive experiments on various individually trained networks verify the effectiveness of FROWN in safeguarding larger robust regions.
    Comment: Zhaoyang Lyu and Ching-Yun Ko contributed equally; accepted to AAAI 202
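
    To make the certificate-tightening idea concrete, here is a small PyTorch sketch for a one-hidden-layer ReLU network: it computes a CROWN-style certified lower bound on a scalar objective c^T f(x) over an l_inf input box via linear ReLU relaxations, then tightens the bound by projected gradient ascent over the slopes of the lower-relaxation lines. The toy network, objective, and optimizer settings are illustrative assumptions, not the authors' FROWN implementation.

    import torch

    def crown_lower_bound(W1, b1, W2, b2, c, x0, eps, alpha):
        # Pre-activation interval bounds for z = W1 x + b1 over the l_inf ball of radius eps.
        z0 = W1 @ x0 + b1
        rad = eps * W1.abs().sum(dim=1)
        l, u = z0 - rad, z0 + rad

        a = W2.T @ c                                    # effective per-neuron output weights
        unstable = (l < 0) & (u > 0)

        # Relaxation lines: ReLU(z) >= s_low * z and ReLU(z) <= s_up * z + t_up.
        s_low = (l >= 0).float() + unstable.float() * alpha
        s_up = torch.where(unstable, u / (u - l), (l >= 0).float())
        t_up = torch.where(unstable, -u * l / (u - l), torch.zeros_like(l))

        # Use the lower line where a_j >= 0 and the upper line where a_j < 0.
        pos = (a >= 0).float()
        lam = a * (pos * s_low + (1 - pos) * s_up)
        const = ((1 - pos) * a * t_up).sum() + c @ b2

        # Back-substitute through the input layer and take the worst case over the box.
        w_in = W1.T @ lam
        return lam @ b1 + const + w_in @ x0 - eps * w_in.abs().sum()

    def tighten(W1, b1, W2, b2, c, x0, eps, steps=200, lr=0.05):
        alpha = torch.full((W1.shape[0],), 0.5, requires_grad=True)
        opt = torch.optim.Adam([alpha], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            (-crown_lower_bound(W1, b1, W2, b2, c, x0, eps, alpha)).backward()
            opt.step()
            with torch.no_grad():
                alpha.clamp_(0.0, 1.0)                  # keep the slopes in the valid range
        with torch.no_grad():
            return crown_lower_bound(W1, b1, W2, b2, c, x0, eps, alpha).item()

    # Toy usage: certify that logit 0 beats logit 1 (c = e_0 - e_1) around x0.
    torch.manual_seed(0)
    W1, b1, W2, b2 = torch.randn(8, 4), torch.randn(8), torch.randn(2, 8), torch.randn(2)
    print(tighten(W1, b1, W2, b2, torch.tensor([1.0, -1.0]), torch.zeros(4), eps=0.05))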

    Hypolipidemic and antioxidant activity of enoki mushrooms (Flammulina velutipes)

    According to the literature, Flammulina velutipes contains biologically active components such as dietary fiber, polysaccharides, and mycosterol, whose effects in reducing blood sugar, blood pressure, and cholesterol have been demonstrated. This study used the active components of Flammulina velutipes powder (FVP) and Flammulina velutipes extract (FVE) to investigate their impact on lipid metabolism in hamsters. The results show that the total dietary fiber content of FVP and FVE is 29.34 mg/100 g and 15.08 mg/100 g, respectively, and the total mycosterol content is 46.57 ± 0.37 mg/100 g and 9.01 ± 0.17 mg/100 g, respectively. Male hamsters were subjected to lipid metabolism monitoring by adding 1, 2, or 3% FVP or FVE to their diets for a period of 8 weeks. The animal assay results show that the 3% FVP and FVE groups had the lowest serum and liver concentrations of total cholesterol (TC), triacylglycerol (TG), and low-density lipoprotein cholesterol (LDL), as well as the lowest LDL to high-density lipoprotein cholesterol (HDL) ratio (P < 0.05). Our results demonstrate that the addition of 3% FVP or FVE has a significant effect on lipid metabolism in hamsters fed a high-fat diet, including an increased level of HDL in the serum.

    Sample-Specific Debiasing for Better Image-Text Models

    Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval. One common approach involves contrasting semantically similar (positive) and dissimilar (negative) pairs of data points. Drawing negative samples uniformly from the training dataset introduces false negatives, i.e., samples that are treated as dissimilar but belong to the same class. In healthcare data, the underlying class distribution is nonuniform, implying that false negatives occur at a highly variable rate. To improve the quality of learned representations, we develop a novel approach that corrects for false negatives. Our method can be viewed as a variant of debiased contrastive learning that uses estimated sample-specific class probabilities. We provide a theoretical analysis of the objective function and demonstrate the proposed approach on both image and paired image-text datasets. Our experiments demonstrate the empirical advantages of sample-specific debiasing.
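
    As a minimal sketch of the kind of correction described above, the following PyTorch loss replaces the single global class prior of standard debiased contrastive learning with a per-sample estimate tau_i of the false-negative rate. The temperature, the clamping constant, and how tau_i would be estimated in practice are assumptions rather than the paper's exact formulation.

    import math
    import torch
    import torch.nn.functional as F

    def sample_specific_debiased_loss(anchor, positive, negatives, tau, t=0.1):
        """anchor, positive: (B, D); negatives: (B, K, D); tau: (B,), the estimated
        probability that a uniformly drawn 'negative' shares the anchor's class."""
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)

        pos = torch.exp((anchor * positive).sum(-1) / t)                    # (B,)
        neg = torch.exp(torch.einsum('bd,bkd->bk', anchor, negatives) / t)  # (B, K)
        k = negatives.shape[1]

        # Debiased estimate of the true-negative term: subtract the expected
        # contribution of false negatives using the per-sample rate tau_i.
        g = (neg.mean(dim=1) - tau * pos) / (1.0 - tau)
        g = torch.clamp(g, min=math.exp(-1.0 / t))                          # keep the estimate valid

        return -torch.log(pos / (pos + k * g)).mean()

    # Toy usage with random embeddings and per-sample false-negative rates.
    B, K, D = 8, 16, 32
    loss = sample_specific_debiased_loss(
        torch.randn(B, D), torch.randn(B, D), torch.randn(B, K, D),
        tau=torch.full((B,), 0.2))
    print(loss.item())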