
    Network Sketching: Exploiting Binary Structure in Deep CNNs

    Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks. However, deep networks are typically resource-intensive and thus difficult to deploy on mobile devices. Recently, CNNs with binary weights have shown compelling efficiency, whereas the accuracy of such models is usually unsatisfactory in practice. In this paper, we introduce network sketching as a novel technique for pursuing binary-weight CNNs, targeting more faithful inference and a better trade-off for practical applications. Our basic idea is to exploit binary structure directly in pre-trained filter banks and produce binary-weight models via tensor expansion. The whole process can be treated as a coarse-to-fine model approximation, akin to the pencil drawing steps of outlining and shading. To further speed up the generated models, namely the sketches, we also propose an associative implementation of binary tensor convolutions. Experimental results demonstrate that a proper sketch of AlexNet (or ResNet) outperforms existing binary-weight models by large margins on the ImageNet large-scale classification task, while requiring only slightly more memory for network parameters. Comment: To appear in CVPR 2017
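
    A minimal numpy sketch of the coarse-to-fine idea behind network sketching: greedily approximating a real-valued filter tensor by a sum of scaled binary tensors, where each new term fits the residual left by the previous ones. The function names, the fixed number of expansion terms, and the closed-form scale are illustrative assumptions, not the authors' exact algorithm or its associative convolution implementation.

```python
import numpy as np

def sketch_filter(w, num_terms=3):
    """Greedily approximate a real-valued filter `w` as a sum of scaled
    binary tensors: w ~ sum_i a_i * b_i with b_i in {-1, +1}.
    (Illustrative sketch, not the authors' exact algorithm.)"""
    residual = w.astype(np.float64)
    scales, binaries = [], []
    for _ in range(num_terms):
        b = np.sign(residual)          # binary direction of the current residual
        b[b == 0] = 1.0                # break ties so entries stay in {-1, +1}
        a = np.abs(residual).mean()    # least-squares optimal scale for +/-1 codes
        scales.append(a)
        binaries.append(b)
        residual = residual - a * b    # next term refines what is left over
    return np.array(scales), np.stack(binaries)

def reconstruct(scales, binaries):
    """Rebuild the approximated filter from its binary expansion."""
    return np.tensordot(scales, binaries, axes=1)

# Example: sketch a random 3x3x64 filter and check the approximation error.
w = np.random.randn(3, 3, 64)
scales, binaries = sketch_filter(w, num_terms=3)
w_hat = reconstruct(scales, binaries)
print("relative error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
```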

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved success in advanced robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has achieved great progress, but still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme which goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic time-critical maneuvers, complex contact control, and the handling of partly soft, partly rigid objects. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.

    Physics Inspired Optimization on Semantic Transfer Features: An Alternative Method for Room Layout Estimation

    In this paper, we propose an alternative method to estimate room layouts of cluttered indoor scenes. This method enjoys the benefits of two novel techniques. The first is semantic transfer (ST), which is: (1) a formulation to integrate the relationship between scene clutter and room layout into convolutional neural networks; (2) an architecture that can be trained end-to-end; (3) a practical strategy to initialize weights for very deep networks under an unbalanced training data distribution. ST allows us to extract highly robust features under various circumstances, and to address the computational redundancy hidden in these features we develop a principled and efficient inference scheme named physics inspired optimization (PIO). The basic idea of PIO is to formulate phenomena observed in ST features as mechanics concepts. Evaluations on the public datasets LSUN and Hedau show that the proposed method is more accurate than state-of-the-art methods. Comment: To appear in CVPR 2017. Project Page: https://sites.google.com/view/st-pio

    Number of Repetitions in Re-randomization Tests

    In covariate-adaptive or response-adaptive randomization, the treatment assignment and outcome can be correlated. In this situation, re-randomization tests are a straightforward and attractive method to provide valid statistical inference. In this paper, we investigate the number of repetitions in re-randomization tests. This is motivated by the group sequential design in clinical trials, where the nominal significance bound can be very small at an interim analysis. Accordingly, re-randomization tests lead to a very large number of required repetitions, which may be computationally intractable. To reduce the number of repetitions, we propose an adaptive procedure and compare it with multiple approaches under pre-defined criteria. Monte Carlo simulations are conducted to show the performance of the different approaches with limited sample sizes. We also suggest strategies to reduce total computation time and provide practical guidance on preparing, executing, and reporting before and after data are unblinded at an interim analysis, so one can complete the computation within a reasonable time frame.
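
    A minimal Monte Carlo sketch of a re-randomization test, assuming a two-arm comparison with a difference-in-means statistic and a user-supplied randomization procedure. The fixed number of repetitions, the add-one p-value correction, and the example's complete randomization are illustrative assumptions; the paper's adaptive choice of the number of repetitions is not implemented here.

```python
import numpy as np

def rerandomization_pvalue(y, treat, randomize, num_reps=10_000, rng=None):
    """Monte Carlo re-randomization test for a two-arm comparison.

    y         : observed outcomes (1-D array)
    treat     : observed 0/1 treatment assignments
    randomize : callable(rng) -> a fresh 0/1 assignment vector drawn from the
                same (possibly covariate-adaptive) randomization procedure
    num_reps  : number of re-randomizations (illustrative fixed value; the
                paper studies how large this must be and how to adapt it)
    """
    rng = np.random.default_rng() if rng is None else rng
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    count = 0
    for _ in range(num_reps):
        t = randomize(rng)
        stat = y[t == 1].mean() - y[t == 0].mean()
        count += abs(stat) >= abs(observed)
    # add-one correction keeps the Monte Carlo p-value valid and positive
    return (count + 1) / (num_reps + 1)

# Example with simple 1:1 complete randomization of 2n units.
n = 50
rng = np.random.default_rng(0)
treat = rng.permutation(np.repeat([0, 1], n))
y = 0.3 * treat + rng.normal(size=2 * n)

def complete_randomization(rng, n=n):
    return rng.permutation(np.repeat([0, 1], n))

print(rerandomization_pvalue(y, treat, complete_randomization, num_reps=5_000, rng=rng))
```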

    Potentiation of Recombinant NP and M1-Induced Cellular Immune Responses and Protection by Physical Radiofrequency Adjuvant

    Nucleoprotein (NP) and matrix protein 1 (M1) are highly conserved among influenza A viruses and have been attractive targets for developing vaccines that elicit cross-reactive cytotoxic T lymphocytes (CTLs). Yet external antigens are often presented on major histocompatibility complex class II molecules and elicit humoral immune responses. In this study, we present a physical radiofrequency adjuvant (RFA) to assist recombinant NP and M1 in eliciting potent CTL responses. We found that recombinant NP/M1 immunization in the presence of RFA could elicit potent anti-NP CTLs and confer significant protection against homologous viral challenges, while NP/M1 immunization alone failed to elicit significant CTL responses or confer significant protection. Interestingly, RFA failed to elicit potent anti-M1 CTL responses or anti-NP or anti-M1 antibody responses. In contrast to RFA, the AddaVax adjuvant was found to significantly increase NP-specific antibody responses but not CTLs. NP/M1 immunization in the presence of RFA or AddaVax similarly reduced body weight loss, while only the former significantly increased survival. We further found that NP/M1 immunization in the presence of RFA did not significantly increase serum IL-6 release (a systemic inflammatory mediator) and rather reduced serum IL-6 release after boost immunization. NP/M1 immunization in the presence of RFA did not induce significant local reactions or increase the body temperature of mice. The high potency and safety strongly support further development of the RFA-based recombinant NP/M1 vaccine to elicit cross-protective immunity.

    Trust, Perceived Benefit, and Purchase Intention in C2C E-Commerce: An Empirical Examination in China

    How trust and perceived benefit affect consumers' purchase intentions is a classic research question. This research examines the relationship in a very different context: consumer-to-consumer (C2C) e-commerce in China. Specifically, this research empirically assesses the differences in effect size due to the change of context. First, a theoretical model linking trust, perceived benefit, and their antecedents to purchase intention is developed based on the literature. The model is then evaluated using empirical data collected at Taobao, the largest C2C e-commerce website in China. Partial least squares based structural equation modeling (PLS-SEM) results strongly support the model and research hypotheses. A developing-country context can indeed affect the strength of effects. These results contribute to the literature in that they provide new insights toward a more in-depth theoretical understanding. Meanwhile, they can also provide useful guidance for managers.

    Understanding Programs by Exploiting (Fuzzing) Test Cases

    Semantic understanding of programs has attracted great attention in the community. Inspired by recent successes of large language models (LLMs) in natural language understanding, tremendous progress has been made by treating programming language as another sort of natural language and training LLMs on corpora of program code. However, programs are essentially different from texts, in the sense that they are normally heavily structured and syntax-strict. In particular, programs and their basic units (i.e., functions and subroutines) are designed to demonstrate a variety of behaviors and/or provide possible outputs given different inputs. The relationship between inputs and possible outputs/behaviors represents the functions/subroutines and profiles the program as a whole. Therefore, we propose to incorporate such a relationship into learning, to achieve a deeper semantic understanding of programs. To obtain inputs that are representative enough to trigger the execution of most of the code, we resort to fuzz testing and propose fuzz tuning to boost the performance of program understanding and code representation learning, given a pre-trained LLM. The effectiveness of the proposed method is verified on two program understanding tasks, code clone detection and code classification, and it outperforms the current state of the art by large margins. Code is available at https://github.com/rabbitjy/FuzzTuning. Comment: Findings of the Association for Computational Linguistics: ACL 2023
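
    A minimal sketch of the data-construction idea behind fuzz tuning: pairing a function's source code with fuzzer-generated input/output examples and serializing both into one sequence for fine-tuning a pre-trained LLM on a downstream task such as code classification. The prompt format, the tag tokens, and the `run_fuzzer` stub are hypothetical assumptions; see the linked repository for the authors' actual pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProgramExample:
    source: str                      # function/subroutine source code
    io_pairs: List[Tuple[str, str]]  # (input, output/behavior) pairs from fuzzing
    label: int                       # downstream label, e.g. a code-classification class

def run_fuzzer(source: str, budget: int = 8) -> List[Tuple[str, str]]:
    """Hypothetical stand-in for a fuzz-testing harness that executes `source`
    on generated inputs and records the observed outputs."""
    raise NotImplementedError

def build_fuzz_tuning_text(example: ProgramExample, max_pairs: int = 4) -> str:
    """Serialize code plus its fuzz-generated input/output behavior into a
    single text sequence for LLM fine-tuning (format is illustrative)."""
    parts = ["<code>", example.source.strip(), "</code>"]
    for inp, out in example.io_pairs[:max_pairs]:
        parts.append(f"<input> {inp} <output> {out}")
    return "\n".join(parts)

# Example: one serialized training instance for code classification.
ex = ProgramExample(
    source="def add(a, b):\n    return a + b",
    io_pairs=[("add(2, 3)", "5"), ("add(-1, 1)", "0")],
    label=0,
)
print(build_fuzz_tuning_text(ex))
```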

    Improving immunogenicity and safety of flagellin as vaccine carrier by high-density display on virus-like particle surface

    Flagellin is a protein-based adjuvant that activates toll-like receptor (TLR) 5. Flagellin has been actively explored as a vaccine adjuvant and carrier. Preclinical and clinical studies find that flagellin-based vaccines carry a risk of inducing systemic adverse reactions, potentially due to overt activation of TLR5. To improve the safety and immunogenicity of flagellin as a vaccine carrier, FljB was displayed at high density on the hepatitis B core (HBc) virus-like particle (VLP) surface upon c/e1 loop insertion. FljB-HBc (FH) VLPs showed a significantly reduced ability to activate TLR5 or induce systemic interleukin-6 release as compared to FljB. FH VLPs also failed to significantly increase the rectal temperature of mice, whereas FljB could. These data indicated that the systemic safety of FljB could be significantly improved by high-density display on the HBc VLP surface. Besides improved safety, FH VLPs and FljB similarly boosted co-administered ovalbumin immunization, and FH VLPs were found to induce a two-fold higher anti-FljB antibody titer than FljB. These data indicated preserved adjuvant potency and improved immunogenicity after high-density display of FljB on the HBc VLP surface. Consistent with the high immunogenicity, FH VLPs were found to be more efficiently taken up by bone marrow-derived dendritic cells and to stimulate more potent dendritic cell maturation than FljB. Lastly, FH VLPs were found to be a more immunogenic carrier than FljB, HBc VLPs, or the widely used keyhole limpet hemocyanin for nicotine vaccine development, with good local and systemic safety. Our data support FH VLPs as a potentially safer and more immunogenic carrier than FljB for vaccine development.

    ProMix: Combating Label Noise via Maximizing Clean Sample Utility

    The ability to train deep neural networks under label noise is appealing, as imperfectly annotated data are relatively cheap to obtain. State-of-the-art approaches are based on semi-supervised learning (SSL), which selects small-loss examples as clean and then applies SSL techniques for boosted performance. However, the selection step mostly provides a medium-sized and decent-enough clean subset, which overlooks a rich set of clean samples. In this work, we propose a novel noisy label learning framework, ProMix, that attempts to maximize the utility of clean samples for boosted performance. Key to our method, we propose a matched high-confidence selection technique that selects those examples whose predictions have high confidence and match their given labels. Combined with small-loss selection, our method is able to achieve a precision of 99.27 and a recall of 98.22 in detecting clean samples on the CIFAR-10N dataset. Based on such a large set of clean data, ProMix improves the best baseline method by +2.67% on CIFAR-10N and +1.61% on CIFAR-100N. The code and data are available at https://github.com/Justherozen/ProMix. Comment: Winner of the 1st Learning and Mining with Noisy Labels Challenge in IJCAI-ECAI 2022 (an informal technical report)
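
    A minimal PyTorch sketch of the matched high-confidence selection described above, combined with small-loss selection. The confidence threshold, the small-loss ratio, and the union of the two masks are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def select_clean(logits, given_labels, conf_threshold=0.95, small_loss_ratio=0.5):
    """Return a boolean mask over the batch marking samples treated as clean.

    Matched high-confidence selection: keep samples whose predicted class
    matches the given (possibly noisy) label with confidence above a
    threshold. Small-loss selection: additionally keep the fraction of
    samples with the smallest cross-entropy loss. (Threshold and ratio are
    illustrative, not the paper's exact settings.)
    """
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    matched_high_conf = (pred == given_labels) & (conf >= conf_threshold)

    losses = F.cross_entropy(logits, given_labels, reduction="none")
    k = max(1, int(small_loss_ratio * len(losses)))
    small_loss = torch.zeros_like(matched_high_conf)
    small_loss[losses.topk(k, largest=False).indices] = True

    return matched_high_conf | small_loss

# Example on a random batch of 8 samples with 10 classes.
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(select_clean(logits, labels))
```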