
    Data Dropout in Arbitrary Basis for Deep Network Regularization

    An important problem in training high-capacity deep networks is ensuring that the trained network generalizes to new inputs outside the training dataset. Dropout is an effective regularization technique for boosting network generalization, in which a random subset of the elements of the given data and the extracted features are set to zero during training. In this paper, a new randomized regularization technique is proposed in which we withhold a random part of the data without necessarily turning off the neurons/data-elements. In the proposed method, of which conventional dropout is shown to be a special case, random data dropout is performed in an arbitrary basis, hence the designation Generalized Dropout. We also present a framework whereby the proposed technique can be applied efficiently to convolutional neural networks. The presented numerical experiments demonstrate that the proposed technique yields notable performance gains. Generalized Dropout provides new insight into the idea of dropout, shows that different basis matrices achieve different performance gains, and opens up a new research question of how to choose optimal basis matrices that achieve the maximal performance gain.
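    The basis-transform idea described in the abstract can be sketched in a few lines: project the data onto an arbitrary orthonormal basis, zero a random subset of the coefficients, and map back. This is an illustrative reconstruction, not the authors' code; the QR-generated basis and the inverted-dropout rescaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_dropout(x, basis, drop_prob=0.5):
    """Drop random coefficients of x expressed in an arbitrary orthonormal
    basis, then map back to data space. With basis = I this reduces to
    conventional dropout on the raw elements."""
    coeffs = basis.T @ x                         # analysis: project onto the basis
    mask = rng.random(coeffs.shape) >= drop_prob
    coeffs = coeffs * mask / (1.0 - drop_prob)   # inverted-dropout rescaling
    return basis @ coeffs                        # synthesis: map back

x = rng.standard_normal(8)
identity = np.eye(8)                             # conventional dropout as the special case
arbitrary, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # some orthonormal basis
y = generalized_dropout(x, arbitrary)
```

    With `drop_prob=0` no coefficients are removed, so the orthonormal round trip returns the input unchanged in any basis.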

    Dropout Training as Adaptive Regularization

    Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset. Comment: 11 pages. Advances in Neural Information Processing Systems (NIPS), 2013
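    The stated first-order equivalence can be illustrated for logistic regression: the quadratic approximation of the dropout regularizer is an L2 penalty whose per-feature weights come from the diagonal of the Fisher information. The sketch below is a reconstruction from the abstract's description; the scaling constant in delta is an assumption and may differ from the paper's exact derivation.

```python
import numpy as np

def dropout_penalty_quadratic(X, beta, delta=0.5):
    """Quadratic (first-order) approximation of the dropout regularizer for
    logistic regression: an L2 penalty on beta weighted per feature by the
    diagonal Fisher information. Illustrative constants, not the paper's."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))      # model probabilities
    fisher_diag = (p * (1 - p)) @ (X ** 2)   # diag of X^T diag(p(1-p)) X
    return 0.5 * delta / (1 - delta) * np.sum(beta ** 2 * fisher_diag)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
beta = rng.standard_normal(5)
pen = dropout_penalty_quadratic(X, beta)
```

    Features with large Fisher-information entries are penalized more heavily, which is the sense in which the regularizer adapts to the data.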

    Methods for Evaluating Respondent Attrition in Web-Based Surveys

    Background: Electronic surveys are convenient, cost-effective, and increasingly popular tools for collecting information. While the online platform allows researchers to recruit and enroll more participants, there is an increased risk of participant dropout in Web-based research. Often, these dropout trends are simply reported, adjusted for, or ignored altogether. Objective: To propose a conceptual framework that analyzes respondent attrition and demonstrates the utility of these methods with existing survey data. Methods: First, we suggest visualization of attrition trends using bar charts and survival curves. Next, we propose a generalized linear mixed model (GLMM) to detect or confirm significant attrition points. Finally, we suggest applications of existing statistical methods to investigate the effect of internal survey characteristics and patient characteristics on dropout. To apply this framework, we conducted a case study using a seventeen-item Informed Decision-Making (IDM) module addressing how and why patients make decisions about cancer screening. Results: Using the framework, we found significant attrition points at Questions 4, 6, 7, and 9, and also identified participant responses and characteristics associated with dropout at these points and overall. Conclusions: When these methods were applied to survey data, significant attrition trends were revealed, both visually and empirically, that can inspire researchers to investigate the factors associated with survey dropout, address whether survey completion is associated with health outcomes, and compare attrition patterns between groups. The framework can be used to extract information beyond simple responses, can be useful during survey development, and can help determine the external validity of survey results.
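    The first step of the framework, visualizing attrition trends per question, can be sketched by computing, for each item, how many respondents are still active and the conditional (discrete-hazard) dropout rate at that item. The cohort below is hypothetical; the 17-item length mirrors the IDM module, and the GLMM confirmation step is omitted.

```python
import numpy as np

def attrition_summary(last_item_answered, n_items):
    """For each question q (1-indexed), count respondents still active at q
    and the conditional dropout rate (discrete hazard) at q.
    `last_item_answered` holds each respondent's last completed question."""
    last = np.asarray(last_item_answered)
    remaining = np.array([(last >= q).sum() for q in range(1, n_items + 1)])
    dropped = np.array([(last == q).sum() for q in range(1, n_items + 1)])
    hazard = np.where(remaining > 0, dropped / np.maximum(remaining, 1), 0.0)
    return remaining, hazard

# toy cohort: 50 complete all 17 items, 10 drop out at Question 4, 5 at Question 9
last_item = [17] * 50 + [4] * 10 + [9] * 5
remaining, hazard = attrition_summary(last_item, 17)
```

    Plotting `remaining` gives the survival-curve view and `hazard` the bar-chart view the abstract describes; spikes in `hazard` are candidate attrition points.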

    Supercapacitor assisted LDO (SCALDO) technique: an extra low frequency design approach to high efficiency DC-DC converters and how it compares with the classical switched capacitor converters

    Supercapacitor assisted low dropout regulators (SCALDO) were proposed as an alternative design approach to DC-DC converters, where the supercapacitor circulation frequency (switching frequency) is on the order of a few Hz to a few tens of Hz, with an output stage based on a low dropout regulator. For converters such as 12–5 V, 5–3.3 V and 5–1.5 V, the technique provides efficiency improvement factors of 2, 1.33 and 3 respectively, compared to linear converters with the same input-output combinations. In a 5–1.5 V SCALDO regulator using thin-profile supercapacitors in the range of fractions of a farad to a few farads, this translates to an approximate end-to-end efficiency of nearly 90%. However, there were concerns that this patented technique is merely a variation of the well-known switched capacitor (charge pump) converters. This paper aims to provide a broad overview of the SCALDO technique with generalized theory, indicating its capabilities and limitations, and to compare its practical performance with a typical switched capacitor converter of similar current capability.
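    The quoted figures can be checked arithmetically: an ideal linear regulator's efficiency is Vout/Vin, and multiplying by the stated improvement factor reproduces the ~90% end-to-end figure for the 5–1.5 V case. A quick sketch:

```python
def linear_efficiency(v_in, v_out):
    """Ideal linear (LDO) regulator efficiency at equal input/output
    current: output power over input power, i.e. Vout / Vin."""
    return v_out / v_in

# Improvement factors quoted in the abstract for each conversion
cases = {
    (12, 5): 2,      # 12 V -> 5 V
    (5, 3.3): 1.33,  # 5 V -> 3.3 V
    (5, 1.5): 3,     # 5 V -> 1.5 V
}

scaldo_eff = {pair: linear_efficiency(*pair) * f for pair, f in cases.items()}
# e.g. the 5 V -> 1.5 V case: 0.30 * 3 = 0.90, matching the ~90% figure
```

    The 12–5 V case works out to 5/12 × 2 ≈ 0.83, illustrating why the factor-of-2 improvement matters most where the raw LDO efficiency is low.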

    Methods for evaluating dropout attrition in survey data

    As researchers increasingly use web-based surveys, the ease of dropping out in the online setting is a growing issue in ensuring data quality. One theory is that dropout, or attrition, occurs in phases that can be generalized to phases of high dropout and phases of stable use. Several methods for detecting these phases are explored. First, existing methods and user-specified thresholds are applied to survey data, where a significant change in the dropout rate between two questions is interpreted as the start or end of a high-dropout phase. Next, survey dropout is considered as a time-to-event outcome, and tests within change-point hazard models are introduced. The performance of these change-point hazard models is compared. Finally, all methods are applied to survey data on patient cancer screening preferences, testing the null hypothesis of no phases of attrition (no change-points) against the alternative hypothesis that distinct attrition phases exist (at least one change-point).
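    The threshold-based approach mentioned first, flagging a significant change in the dropout rate between adjacent questions, can be sketched as follows. The threshold value and hazard sequence are illustrative, and the change-point hazard-model tests are beyond this sketch.

```python
import numpy as np

def flag_changepoints(hazard, threshold=0.05):
    """Return 0-indexed question positions where the discrete dropout
    hazard jumps by more than `threshold` relative to the previous
    question -- the user-specified-threshold approach described above."""
    hazard = np.asarray(hazard, dtype=float)
    jumps = np.abs(np.diff(hazard))
    return [i + 1 for i, j in enumerate(jumps) if j > threshold]

# hypothetical hazards: stable use, a high-dropout phase at questions 4-5, stable again
hazard = [0.01, 0.01, 0.02, 0.15, 0.14, 0.02, 0.01]
points = flag_changepoints(hazard)
```

    Here the flagged positions bracket the high-dropout phase: one jump into it and one back out, matching the "start or end of a phase" interpretation.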