    A review of R-packages for random-intercept probit regression in small clusters

    Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random effects distributions, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R-packages for random-intercept probit regression relying on: the Laplace approximation, adaptive Gaussian quadrature (AGQ), Penalized Quasi-Likelihood (PQL), an MCMC implementation, and integrated nested Laplace approximation within the GLMM framework, and a robust diagonally weighted least squares estimation within the SEM framework. In terms of bias for the fixed and random effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
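    The random-intercept probit model the review targets can be made concrete with a small simulation. The sketch below (in Python rather than R, with assumed parameter values) generates binary outcomes from many clusters of size two, as in the small-cluster setting of the review, and checks the empirical event rate against the closed-form marginal probability Phi(beta0 / sqrt(1 + sigma^2)) implied by the probit link:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

beta0, sigma = -0.5, 1.0              # assumed fixed intercept and random-intercept SD
n_clusters, cluster_size = 20000, 2   # many small clusters, as in the review

# Cluster-level random intercepts u_j ~ N(0, sigma^2)
u = rng.normal(0.0, sigma, size=n_clusters)
# Linear predictor for every observation in every cluster
eta = beta0 + np.repeat(u[:, None], cluster_size, axis=1)
# Probit link: P(y_ij = 1 | u_j) = Phi(eta_ij)
y = rng.random((n_clusters, cluster_size)) < norm.cdf(eta)

# For a probit link the marginal (population-averaged) probability has a
# closed form: Phi(beta0 / sqrt(1 + sigma^2)).
marginal = norm.cdf(beta0 / np.sqrt(1.0 + sigma**2))
print(round(float(y.mean()), 3), round(float(marginal), 3))
```

    With 40,000 simulated observations, the empirical rate agrees with the closed-form marginal probability to well within Monte Carlo error, which is a useful sanity check before comparing the estimators the review benchmarks.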

    Coz: Finding Code that Counts with Causal Profiling

    Improving performance is a central concern for software developers. To locate optimization opportunities, developers rely on software profilers. However, these profilers only report where programs spent their time: optimizing that code may have no impact on performance. Past profilers thus both waste developer time and make it difficult for them to uncover significant optimization opportunities. This paper introduces causal profiling. Unlike past profiling approaches, causal profiling indicates exactly where programmers should focus their optimization efforts, and quantifies their potential impact. Causal profiling works by running performance experiments during program execution. Each experiment calculates the impact of any potential optimization by virtually speeding up code: inserting pauses that slow down all other code running concurrently. The key insight is that this slowdown has the same relative effect as running that line faster, thus "virtually" speeding it up. We present Coz, a causal profiler, which we evaluate on a range of highly-tuned applications: Memcached, SQLite, and the PARSEC benchmark suite. Coz identifies previously unknown optimization opportunities that are both significant and targeted. Guided by Coz, we improve the performance of Memcached by 9%, SQLite by 25%, and accelerate six PARSEC applications by as much as 68%; in most cases, these optimizations involve modifying under 10 lines of code. Published at SOSP 2015 (Best Paper Award).
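    The virtual-speedup insight can be illustrated with a toy accounting model. The sketch below is purely illustrative (Coz itself instruments concurrently running threads at runtime; the line names and timings here are hypothetical): pausing all other code by a fraction d of the time spent in a line changes the runtime by exactly the same amount that really speeding that line up by d would, so the relative impact can be measured without making the line any faster:

```python
# Toy single-thread accounting of Coz's "virtual speedup" (illustrative only;
# line names and durations are hypothetical, not from the paper).
line_times = {"parse": 3.0, "hash": 5.0, "io": 2.0}
T = sum(line_times.values())  # baseline runtime: 10.0

def real_speedup(line, d):
    """Runtime if `line` really ran a fraction d faster."""
    return T - d * line_times[line]

def virtual_speedup(line, d):
    """Coz's trick: instead of making `line` faster, pause all OTHER code
    by d times the time spent in `line`."""
    return T + d * line_times[line]

d = 0.2
saved = T - real_speedup("hash", d)          # runtime saved by a real speedup
added = virtual_speedup("hash", d) - T       # runtime added by the virtual one
print(saved, added)  # 1.0 1.0 -- same magnitude, so the impact is measurable
```

    Because the two deltas are equal in magnitude, comparing an experiment run (with pauses) against the baseline predicts what the real optimization would buy, which is how Coz ranks lines by potential impact.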

    A probabilistic prediction model for window opening during transition seasons in office buildings

    Window operation by occupants in buildings has a close relationship with indoor air quality, the indoor thermal environment, and building energy performance. The objective of this study was to understand occupants' interaction with window opening in transition seasons, considering the influence of subject type (e.g. active and passive respondents), and to develop corresponding predictive models. An investigation was carried out in a non-air-conditioned building in the UK covering the period from September to November. Outdoor temperature was determined to be a good predictor of window operation in this study. The differences in window opening probabilities between active and passive subjects were significant. Active occupants preferred to open windows for fresh air or to adjust the indoor thermal conditions, even when the outdoor air temperature was below 12 °C. Proper utilization of windows in transition seasons contributes significantly to building energy saving and can further improve energy efficiency in buildings.
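    A probabilistic window-opening model of the kind the study develops is typically a logistic curve in outdoor temperature. The sketch below uses illustrative coefficients (assumed values, not the paper's fitted estimates) to show the shape of such a model: the probability of the window being open rises smoothly with outdoor temperature.

```python
import math

def p_window_open(t_out, b0=-3.0, b1=0.25):
    """Logistic window-opening probability as a function of outdoor
    temperature (degrees C). Coefficients b0, b1 are illustrative only."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * t_out)))

# Probability rises with outdoor temperature; with these assumed
# coefficients the 50% point falls at 12 degrees C.
for t in (8, 12, 16, 20):
    print(t, round(p_window_open(t), 3))
```

    In practice the two subject types identified in the study (active vs. passive) would get separate coefficient sets, shifting the curve left or right.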

    QuantiMus: A Machine Learning-Based Approach for High Precision Analysis of Skeletal Muscle Morphology.

    Skeletal muscle injury provokes a regenerative response, characterized by the de novo generation of myofibers that are distinguished by central nucleation and re-expression of developmentally restricted genes. In addition to these characteristics, myofiber cross-sectional area (CSA) is widely used to evaluate muscle hypertrophic and regenerative responses. Here, we introduce QuantiMus, a free software program that uses machine learning algorithms to quantify muscle morphology and molecular features with high precision and quick processing time. The ability of QuantiMus to define and measure myofibers was compared to manual measurement or other automated software programs. QuantiMus rapidly and accurately defined total myofibers and measured CSA with comparable performance but quantified the CSA of centrally-nucleated fibers (CNFs) with greater precision compared to other software. It additionally quantified the fluorescence intensity of individual myofibers of human and mouse muscle, which was used to assess the distribution of myofiber type, based on the myosin heavy chain isoform that was expressed. Furthermore, analysis of entire quadriceps cross-sections of healthy and mdx mice showed that dystrophic muscle had an increased frequency of Evans blue dye+ injured myofibers. QuantiMus also revealed that the proportion of centrally nucleated, regenerating myofibers that express embryonic myosin heavy chain (eMyHC) or neural cell adhesion molecule (NCAM) was increased in dystrophic mice. Our findings reveal that QuantiMus has several advantages over existing software. The unique self-learning capacity of the machine learning algorithms provides superior accuracy and the ability to rapidly interrogate the complete muscle section. These qualities increase rigor and reproducibility by avoiding methods that rely on the sampling of representative areas of a section. This is of particular importance for the analysis of dystrophic muscle given the "patchy" distribution of muscle pathology. QuantiMus is an open source tool, allowing customization to meet investigator-specific needs, and provides novel analytical approaches for quantifying muscle morphology.
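    Once fibers have been segmented, the CSA measurement itself reduces to counting pixels in each labeled fiber region and scaling by the pixel area. The sketch below illustrates that step on a synthetic binary mask with an assumed pixel size (QuantiMus's actual pipeline uses machine-learning classification of real histology images; this only shows the area accounting):

```python
import numpy as np
from scipy import ndimage

# Synthetic binary fiber mask (illustrative, not real histology data).
mask = np.zeros((8, 8), dtype=bool)
mask[1:4, 1:4] = True   # "fiber" 1: 3x3 = 9 pixels
mask[5:7, 4:8] = True   # "fiber" 2: 2x4 = 8 pixels

# Label connected components, then sum pixels per label.
labels, n_fibers = ndimage.label(mask)
pixel_area_um2 = 0.25   # assumed pixel size in um^2/pixel
pixel_counts = np.asarray(ndimage.sum(mask, labels, index=range(1, n_fibers + 1)))
csa = pixel_counts * pixel_area_um2

print(n_fibers, csa.tolist())  # 2 [2.25, 2.0]
```

    Per-fiber fluorescence intensity (used in the paper for fiber typing) follows the same pattern, summing or averaging the intensity image over each label instead of the mask.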

    Trust Value of a Dividend: Evidence from Indonesia

    Even though there are many issues surrounding dividend policy, the dividend remains one of the main goals for investors. The aim of this study is to identify the determinants of dividend policy in Indonesia. Most of the sampled firms in the observed period exhibit a variety of dividend policies. Data for this study were collected from 258 business entities over the period between 2009 and 2012. For hypothesis testing, binary logistic regression and factor analysis were used. The results from the binary logistic regression showed that share price, earnings per share, and current ratio are significant factors for dividend policy, while debt to equity ratio and corporate tax are insignificant. The insignificance of debt and tax was probably due to the current ratio being affected by accounting adjustments. Even though debt and tax are insignificant, they cannot be ignored. Using factor analysis, it is confirmed that most companies in this study pursue a similar objective through dividend policy: to maximize their share value in the stock market by considering profitability and liquidity in terms of cash availability, as well as debt and tax. Dividends as a form of “trust value” offered by companies to their shareholders stimulate the trust of investors or shareholders, resulting in an increase in the share price.
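    A binary logistic regression of the kind used for the hypothesis tests can be sketched on synthetic data. Everything below is an illustrative stand-in (simulated firms and assumed coefficients, not the study's 258-firm dataset); the fit uses plain Newton-Raphson, which is the standard maximum-likelihood routine for this model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic firm data: two predictors loosely mimicking earnings per
# share and current ratio (values and coefficients are assumptions).
n = 2000
eps = rng.normal(1.0, 0.5, n)        # "earnings per share"
cr = rng.normal(1.5, 0.4, n)         # "current ratio"
true_logit = -2.0 + 1.2 * eps + 0.8 * cr
pays_dividend = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Fit the logistic regression by Newton-Raphson (iteratively
# reweighted least squares).
X = np.column_stack([np.ones(n), eps, cr])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (pays_dividend - p)             # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])    # Fisher information
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # estimates land near the generating coefficients
```

    With real data, the significance tests reported in the abstract come from the standard errors on the diagonal of the inverse Fisher information at the fitted coefficients.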