
    Data Dependent Randomized Smoothing

    Randomized smoothing is a recent technique that achieves state-of-the-art performance in training certifiably robust deep neural networks. While the smoothing family of distributions is often connected to the choice of the norm used for certification, the parameters of these distributions are always set as global hyperparameters independent of the input data on which a network is certified. In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius for the construction of the smoothed classifier. This new approach is generic, parameter-free, and easy to implement. In fact, we show that our data-dependent framework can be seamlessly incorporated into three randomized smoothing approaches, leading to consistently improved certified accuracy. When this framework is used in the training routine of these approaches, followed by data-dependent certification, we achieve 9% and 6% improvements over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet. Comment: First two authors contributed equally to this work.
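    The per-input idea can be illustrated with a rough sketch (not the paper's actual optimization procedure): grid-search, for each input, a noise level σ that maximizes a Cohen-style certified radius R(σ) = σ·Φ⁻¹(p_A), where p_A is the estimated top-class probability of the smoothed classifier. The `model.predict` interface, the candidate σ grid, and the Monte-Carlo sample size below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def top_class_probability(model, x, sigma, n_samples=1000, num_classes=10):
    """Monte-Carlo estimate of the smoothed classifier's top class and its
    probability under isotropic Gaussian noise N(0, sigma^2 I).
    Assumes `model.predict` returns integer labels for a batch of inputs."""
    noise = sigma * np.random.randn(n_samples, *x.shape)
    labels = model.predict(x[None, ...] + noise)
    counts = np.bincount(labels, minlength=num_classes)
    top = int(counts.argmax())
    return top, counts[top] / n_samples

def data_dependent_sigma(model, x, sigmas=(0.12, 0.25, 0.5, 1.0), **kwargs):
    """For this single input, pick the sigma that maximizes the certified
    radius R(sigma) = sigma * Phi^{-1}(p_A); a grid search stands in for
    the paper's per-input optimization."""
    best_sigma, best_radius, best_class = None, 0.0, None
    for sigma in sigmas:
        top, p_a = top_class_probability(model, x, sigma, **kwargs)
        if p_a <= 0.5:                        # smoothed classifier abstains at this noise level
            continue
        p_a = min(p_a, 1.0 - 1e-6)            # keep Phi^{-1} finite
        radius = sigma * norm.ppf(p_a)
        if radius > best_radius:
            best_sigma, best_radius, best_class = sigma, radius, top
    return best_sigma, best_radius, best_class
```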

    The Influence of KDT501, a Novel Isohumulone, on Adipocyte Function in Humans

    Objective: In a phase II clinical trial in nine obese, insulin-resistant humans, we observed that treatment with KDT501, a novel isohumulone drug, increased total and high-molecular weight (HMW) adiponectin in plasma. The objective was to determine whether KDT501 increased adiponectin secretion from subcutaneous white adipose tissue (SC WAT) and the underlying mechanism(s). Methods: Nine obese participants with either prediabetes or with normal glucose tolerance plus three features of metabolic syndrome were part of the study. SC WAT biopsies were performed before and after 28 days of KDT501 treatment in a clinical research setting. In addition, a cold stimulus was used to induce thermogenic gene expression. Adiponectin secretion was measured, and gene expression of 130 genes involved in adipose tissue function was determined. The effect of KDT501 on adipocyte mitochondrial function was analyzed in vitro. Results: SC WAT explants secreted more total and HMW adiponectin after KDT501 treatment (P < 0.05). After KDT501 treatment, a number of genes involved in thermogenesis and lipolysis were induced by cold (P < 0.05). KDT501 also potentiated β-adrenergic signaling (P < 0.001) and enhanced mitochondrial function in adipocytes (P < 0.001). Conclusion: KDT501 induced adiponectin secretion posttranscriptionally and increased gene expression of thermogenic and lipolytic genes in response to cold stimulation. These beneficial effects on SC WAT may be explained by the ability of KDT501 to potentiate β-adrenergic signaling and enhance mitochondrial function in adipocytes. Clinical Trial Registration: https://www.ClinicalTrials.gov, ID number: NCT02444910.

    Statistical modeling for selecting housekeeper genes

    There is a need for statistical methods to identify genes that have minimal variation in expression across a variety of experimental conditions. These 'housekeeper' genes are widely employed as controls for quantification of test genes using gel analysis and real-time RT-PCR. Using real-time quantitative RT-PCR, we analyzed 80 primary breast tumors for variation in expression of six putative housekeeper genes (MRPL19 (mitochondrial ribosomal protein L19), PSMC4 (proteasome (prosome, macropain) 26S subunit, ATPase, 4), SF3A1 (splicing factor 3a, subunit 1, 120 kDa), PUM1 (pumilio homolog 1 (Drosophila)), ACTB (actin, beta) and GAPD (glyceraldehyde-3-phosphate dehydrogenase)). We present appropriate models for selecting the best housekeepers to normalize quantitative data within a given tissue type (for example, breast cancer) and across different types of tissue samples.
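    As a simplified, hedged illustration of the underlying idea (ranking candidate housekeepers by how little their expression varies across samples), the sketch below scores toy Ct values; the numbers and the plain standard-deviation criterion are stand-ins for the paper's statistical models, not a reimplementation of them.

```python
import numpy as np
import pandas as pd

# Toy Ct values (PCR cycles) for the six candidate housekeeper genes across
# 80 samples. Real data would come from real-time RT-PCR; these are made up.
rng = np.random.default_rng(0)
ct = pd.DataFrame(
    rng.normal(loc=[22, 24, 25, 23, 18, 19],
               scale=[0.3, 0.4, 0.9, 0.5, 1.2, 1.5],
               size=(80, 6)),
    columns=["MRPL19", "PSMC4", "SF3A1", "PUM1", "ACTB", "GAPD"],
)

# Rank candidates by expression variability across samples: the smaller the
# standard deviation of Ct, the more stable (and useful) the housekeeper.
stability = ct.std(axis=0).sort_values()
print(stability)

# A simple normalizer: average of the two most stable candidates, usable as
# the reference in a delta-Ct normalization of a test gene.
reference = ct[stability.index[:2]].mean(axis=1)
```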

    Correction: Statistical modeling for selecting housekeeper genes

    A correction to Statistical modeling for selecting housekeeper genes by Aniko Szabo, Charles M Perou, Mehmet Karaca, Laurent Perreard, John F Quackenbush, and Philip S Bernard. Genome Biology 2004, 5:R5

    From Categories to Classifier: Name-Only Continual Learning by Exploring the Web

    Continual Learning (CL) often relies on the availability of extensive annotated datasets, an assumption that is unrealistically time-consuming and costly in practice. We explore a novel paradigm, termed name-only continual learning, where time and cost constraints prohibit manual annotation. In this scenario, learners adapt to new category shifts using only category names, without the luxury of annotated training data. Our proposed solution leverages the expansive and ever-evolving internet to query and download uncurated webly-supervised data for image classification. We investigate the reliability of our web data and find it comparable, and in some cases superior, to manually annotated datasets. Additionally, we show that by harnessing the web we can create support sets that surpass state-of-the-art name-only classification methods that build their support sets with generative models or image retrieval from LAION-5B, achieving up to a 25% boost in accuracy. When applied across varied continual learning contexts, our method consistently exhibits a small performance gap in comparison to models trained on manually annotated datasets. We present EvoTrends, a class-incremental dataset made from the web to capture real-world trends, created in just minutes. Overall, this paper underscores the potential of using uncurated webly-supervised data to mitigate the challenges associated with manual data labeling in continual learning.
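    As a hedged sketch of how such a name-only loop can be wired together (not the authors' implementation), the snippet below treats web image search and fine-tuning as placeholders to be filled in with a real search-engine API and a standard classifier; the function names and example category names are illustrative assumptions.

```python
def search_web_images(query, n):
    """Placeholder: download up to n uncurated images for a text query
    (e.g. via a search-engine API). Not part of the paper's code."""
    raise NotImplementedError

def finetune(model, images, labels):
    """Placeholder: one round of supervised fine-tuning on (images, labels)."""
    raise NotImplementedError

def name_only_continual_learning(model, category_stream, images_per_class=100):
    """Each step of the stream reveals only new class *names*; the training
    set is uncurated, webly-supervised imagery queried for those names."""
    seen = []
    for new_names in category_stream:        # e.g. [["air fryer"], ["fidget spinner"], ...]
        seen.extend(new_names)
        images, labels = [], []
        for label, name in enumerate(seen):
            webly = search_web_images(name, n=images_per_class)   # noisy, no manual labels
            images.extend(webly)
            labels.extend([label] * len(webly))
        finetune(model, images, labels)      # no human annotation anywhere in the loop
    return model
```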

    Radio Astronomy

    Contains reports on research objectives and eight research projects. Sponsored by: National Science Foundation (Grant AST79-25075); National Science Foundation (Grant AST79-20984); National Science Foundation (Grant AST79-19553); U.S. Navy - Office of Naval Research (Contract N00014-80-C-0348); National Aeronautics and Space Administration (Grant NAG2-50); M.I.T. Sloan Fund for Basic Research; Joint Services Electronics Program (Contract DAAG29-78-C-0020); Joint Services Electronics Program (Contract DAAG29-80-C-0104); National Aeronautics and Space Administration (Grant NAG5-10); National Aeronautics and Space Administration (Contract NAS5-25091); National Aeronautics and Space Administration (Contract NAS5-22929); U.S. Department of Commerce - National Oceanic and Atmospheric Administration (Grant 04-8-MOl-1

    Image, brand and price info: do they always matter the same?

    We study attention processes to brand, price and visual information about products in online retailing websites, simultaneously considering the effects of consumers’ goals, purchase category and consumers’ statements. We use an intra-subject experimental design, simulated web stores and a combination of observational eye-tracking data and declarative measures. Image information about the product is the most important stimulus, regardless of the task at hand or the store involved. The roles of brand and price information depend on the product category and the purchase task involved. Declarative measures of relative brand importance are found to be positively related to its observed importance.

    Real-Time Evaluation in Online Continual Learning: A New Hope

    Current evaluations of Continual Learning (CL) methods typically assume that there is no constraint on training time and computation. This is an unrealistic assumption for any real-world setting, which motivates us to propose a practical real-time evaluation of continual learning, in which the stream does not wait for the model to complete training before revealing the next data for predictions. To do this, we evaluate current CL methods with respect to their computational costs. We conduct extensive experiments on CLOC, a large-scale dataset containing 39 million time-stamped images with geolocation labels. We show that a simple baseline outperforms state-of-the-art CL methods under this evaluation, questioning the applicability of existing methods in realistic settings. In addition, we explore various CL components commonly used in the literature, including memory sampling strategies and regularization approaches. We find that all considered methods fail to be competitive against our simple baseline. This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical. We hope that the evaluation we provide will be the first step towards a paradigm shift to consider the computational cost in the development of online continual learning methods. Comment: Accepted at CVPR'23 as Highlight (Top 2.5%).
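    A rough sketch of what such a delay-aware evaluation loop can look like is given below: the stream keeps advancing while the model trains, so a slow learner must predict new batches with stale weights and may never train on batches that arrive while it is busy. `model.predict`, `model.update`, and the single `train_cost_in_steps` knob are illustrative assumptions, not the paper's code.

```python
def real_time_evaluation(model, stream, train_cost_in_steps=1):
    """Delay-aware (real-time) online evaluation of a continual learner.
    `train_cost_in_steps` is how many stream steps one model update takes;
    while the model is busy, incoming batches are evaluated but not trained on."""
    correct = total = 0
    busy_until = -1                       # stream step at which the current update finishes
    for step, (x, y) in enumerate(stream):
        # The stream never waits: predictions use whatever weights exist right now.
        correct += int((model.predict(x) == y).sum())
        total += len(y)
        if step > busy_until:             # model is idle, so it can train on this batch
            model.update(x, y)            # hypothetical single training step
            busy_until = step + train_cost_in_steps - 1
        # otherwise this batch is skipped for training: it arrived during an update
    return correct / total
```

    With `train_cost_in_steps=1` this reduces to the usual online setting; larger values model expensive methods that fall behind the stream, which is exactly the regime the paper argues current evaluations ignore.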