2 research outputs found

    "Understanding Robustness Lottery": A Geometric Visual Comparative Analysis of Neural Network Pruning Approaches

    Full text link
    Deep learning approaches have provided state-of-the-art performance in many applications by relying on large and overparameterized neural networks. However, such networks have been shown to be very brittle and are difficult to deploy on resource-limited platforms. Model pruning, i.e., reducing the size of the network, is a widely adopted strategy that can lead to a more robust and compact model. Many heuristics exist for model pruning, but empirical studies show that some heuristics improve performance whereas others can make models more brittle or have other side effects. This work aims to shed light on how different pruning methods alter the network's internal feature representation and the corresponding impact on model performance. To facilitate a comprehensive comparison and characterization of the high-dimensional model feature space, we introduce a visual geometric analysis of feature representations. We decomposed and evaluated a set of critical geometric concepts from the commonly adopted classification loss, and used them to design a visualization system to compare and highlight the impact of pruning on model performance and feature representation. The proposed tool provides an environment for in-depth comparison of pruning methods and a comprehensive understanding of how models respond to common data corruption. By leveraging the proposed visualization, machine learning researchers can reveal similarities between pruning methods and redundancy in robustness evaluation benchmarks, identify samples that are robust or fragile to model pruning and common data corruption, and obtain geometric insights and explanations of how some pruned models achieve superior robustness performance.
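    As a hedged illustration (not the authors' tool, and with hypothetical variable names), one common geometric reading of the classification loss decomposes each logit of a linear classifier into a feature-norm term and an angular term, logit_c = ||W_c||·||f(x)||·cos(theta_c) + b_c; comparing these per-sample quantities between a dense and a pruned model is the kind of geometry such a visualization can expose:

    # Hedged sketch: decompose classifier logits into feature norms and
    # cosines to each class-weight vector, then compare dense vs. pruned.
    # All model stand-ins below are synthetic and purely illustrative.
    import numpy as np

    def geometric_decomposition(features, class_weights):
        """Return per-sample feature norms and cosines to each class weight.

        features:      (n_samples, d) penultimate-layer activations
        class_weights: (n_classes, d) rows of the final linear layer
        """
        feat_norm = np.linalg.norm(features, axis=1, keepdims=True)      # (n, 1)
        w_norm = np.linalg.norm(class_weights, axis=1, keepdims=True)    # (c, 1)
        cosines = (features @ class_weights.T) / (feat_norm * w_norm.T)  # (n, c)
        return feat_norm.squeeze(1), cosines

    # Usage sketch: how does pruning shift the geometry for the same inputs?
    rng = np.random.default_rng(0)
    feats_dense = rng.normal(size=(8, 16))   # stand-in for dense-model features
    feats_pruned = 0.7 * feats_dense         # stand-in for pruned-model features
    W = rng.normal(size=(4, 16))             # stand-in classifier weights

    norms_d, cos_d = geometric_decomposition(feats_dense, W)
    norms_p, cos_p = geometric_decomposition(feats_pruned, W)
    print(norms_d.mean(), norms_p.mean())    # this toy pruning only shrinks norms,
    print(np.abs(cos_d - cos_p).max())       # ...so the angular part is unchanged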

    Effects of cholesteryl ester transfer protein inhibition on apolipoprotein A-II-containing HDL subspecies and apolipoprotein A-II metabolism

    No full text
    This study was designed to establish the mechanism responsible for the increased apolipoprotein (apo) A-II levels caused by the cholesteryl ester transfer protein (CETP) inhibitor torcetrapib. Nineteen subjects with low HDL cholesterol (<40 mg/dl), nine of whom were also treated with 20 mg of atorvastatin daily, received placebo for 4 weeks, followed by 120 mg of torcetrapib daily for the next 4 weeks. Six subjects in the nonatorvastatin cohort participated in a third phase, in which they received 120 mg of torcetrapib twice daily for 4 weeks. At the end of each phase, subjects underwent a primed-constant infusion of [5,5,5-²H₃]L-leucine to determine the kinetics of HDL apoA-II. Relative to placebo, torcetrapib significantly increased apoA-II concentrations by reducing HDL apoA-II catabolism in the atorvastatin (−9.4%, P < 0.003) and nonatorvastatin once- (−9.9%, P = 0.02) and twice- (−13.2%, P = 0.02) daily cohorts. Torcetrapib significantly increased the amount of apoA-II in the α-2-migrating subpopulation of HDL when given as monotherapy (27%, P < 0.02; 57%, P < 0.003) or on a background of atorvastatin (28%, P < 0.01). In contrast, torcetrapib reduced concentrations of apoA-II in α-3-migrating HDL, with mean reductions of −14% (P = 0.23), −18% (P < 0.02), and −18% (P < 0.01) noted during the atorvastatin and nonatorvastatin 120 mg once- and twice-daily phases, respectively. Our findings indicate that CETP inhibition increases plasma concentrations of apoA-II by delaying HDL apoA-II catabolism and significantly alters the remodeling of apoA-II-containing HDL subpopulations.
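    For context, the direction of the concentration change follows from standard steady-state tracer kinetics: plasma pool size equals production rate divided by the fractional catabolic rate (FCR), so a lower FCR with unchanged production raises the apoA-II concentration. The sketch below only illustrates this relationship using the reported percentage FCR reductions; the absolute rates are hypothetical and not taken from the study.

    # Hedged sketch of steady-state tracer kinetics (illustrative numbers only):
    # pool size = production rate / FCR, so reducing catabolism alone raises
    # the plasma pool proportionally.
    production_rate = 1.0          # arbitrary units/day, assumed unchanged
    fcr_placebo = 0.20             # pools/day (hypothetical baseline FCR)

    for fcr_drop in (0.094, 0.099, 0.132):   # reported ~9-13% FCR reductions
        fcr_torcetrapib = fcr_placebo * (1 - fcr_drop)
        pool_placebo = production_rate / fcr_placebo
        pool_torcetrapib = production_rate / fcr_torcetrapib
        rise = pool_torcetrapib / pool_placebo - 1
        print(f"FCR -{fcr_drop:.1%} -> apoA-II pool +{rise:.1%}")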