    State-Space Models for Binomial Time Series with Excess Zeros

    Count time series with excess zeros are frequently encountered in practice. In characterizing a time series of counts with excess zeros, two types of models are commonplace: models that assume a Poisson mixture distribution and models that assume a binomial mixture distribution. Extensive work has been published on modeling frameworks based on Poisson-type approaches, yet comparatively little has focused on binomial-type methods. To handle such data, we propose two general classes of zero-inflated binomial (ZIB) time series models: a class of observation-driven ZIB (ODZIB) models and a class of parameter-driven ZIB (PDZIB) models. The ODZIB model is formulated in the partial likelihood framework, which facilitates model fitting using standard statistical software for ZIB regression models. The PDZIB model is conveniently formulated in the state-space framework. For parameter estimation, we devise a Monte Carlo Expectation Maximization (MCEM) algorithm, with particle filtering and particle smoothing methods employed to approximate the intractable conditional expectations in the E-step of the algorithm. We investigate the efficacy of the proposed methodology in a simulation study, which compares the performance of the proposed ZIB models with their counterpart zero-inflated Poisson (ZIP) models in characterizing zero-inflated count time series. We also present a practical application pertaining to disease coding.
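
    As a concrete illustration of the state-space formulation sketched above, the Python code below (NumPy/SciPy) combines a zero-inflated binomial log-pmf with a bootstrap particle filter for a simplified parameter-driven model in which a latent Gaussian AR(1) state drives the success probability through a logit link. The model form, the parameter names (omega, phi, sigma), and the filter settings are illustrative assumptions rather than the authors' PDZIB specification or MCEM algorithm; the filter only shows the kind of particle approximation an MCEM E-step would rely on.

        import numpy as np
        from scipy import stats
        from scipy.special import expit

        def zib_logpmf(y, n, p, omega):
            # Zero-inflated binomial log-pmf: a zero arises either from the
            # inflation component (probability omega) or from the binomial itself.
            binom_lp = stats.binom.logpmf(y, n, p)
            zero_lp = np.logaddexp(np.log(omega), np.log1p(-omega) + binom_lp)
            return np.where(y == 0, zero_lp, np.log1p(-omega) + binom_lp)

        def bootstrap_filter(y, n, omega, phi, sigma, n_particles=2000, seed=0):
            # Bootstrap particle filter for a simplified PDZIB-style model with a
            # latent AR(1) state alpha_t and logit(p_t) = alpha_t (an assumption
            # made here for illustration). Returns a log-likelihood estimate and
            # the filtered state means, quantities an MCEM E-step approximates.
            rng = np.random.default_rng(seed)
            T = len(y)
            alpha = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2), n_particles)
            loglik, filt_mean = 0.0, np.empty(T)
            for t in range(T):
                alpha = phi * alpha + rng.normal(0.0, sigma, n_particles)
                logw = zib_logpmf(y[t], n[t], expit(alpha), omega)
                shift = logw.max()
                w = np.exp(logw - shift)
                loglik += shift + np.log(w.mean())
                w /= w.sum()
                filt_mean[t] = np.sum(w * alpha)
                alpha = alpha[rng.choice(n_particles, n_particles, p=w)]  # resample
            return loglik, filt_mean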

    A New Bootstrap Goodness-of-Fit Test for Normal Linear Regression Models

    In this work, the distributional properties of the goodness-of-fit term in likelihood-based information criteria are explored. These properties are then leveraged to construct a novel goodness-of-fit test for normal linear regression models that relies on a non-parametric bootstrap. Several simulation studies are performed to investigate the properties and efficacy of the developed procedure; these studies demonstrate that the bootstrap test offers distinct advantages compared with other methods of assessing the goodness-of-fit of a normal linear regression model.
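
    To make the goodness-of-fit term concrete, the Python sketch below computes -2 log(Lhat) for a fitted normal linear model (the term that enters AIC/BIC) and calibrates it with a residual-resampling bootstrap. The particular statistic, resampling scheme, and one-sided p-value are generic illustrations under assumed settings, not the authors' test.

        import numpy as np

        def neg2_loglik(y, X):
            # Goodness-of-fit term -2 log(Lhat) of the normal linear model
            # y = X beta + eps, i.e. the term appearing in AIC/BIC.
            n = len(y)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = resid @ resid / n
            return n * (np.log(2.0 * np.pi * sigma2) + 1.0), beta, resid

        def bootstrap_gof(y, X, n_boot=999, seed=0):
            # Residual-resampling (non-parametric) bootstrap reference
            # distribution for the statistic, with a simple exceedance p-value.
            rng = np.random.default_rng(seed)
            t_obs, beta, resid = neg2_loglik(y, X)
            centred = resid - resid.mean()
            t_boot = np.empty(n_boot)
            for b in range(n_boot):
                y_star = X @ beta + rng.choice(centred, size=len(y), replace=True)
                t_boot[b] = neg2_loglik(y_star, X)[0]
            return t_obs, np.mean(t_boot >= t_obs)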

    Bootstrap Exploration of the Duration of Surface Electromyography Sampling in Relation to the Precision of Exposure Estimation

    Objectives: This study examined the effect of sampling duration, in units of work cycles, on the precision of estimates of exposure to forceful exertion obtained with surface electromyography (EMG). Methods: Recordings of the activity of the flexor digitorum superficialis, extensor digitorum, and upper trapezius muscles over 30 consecutive work cycles were obtained for a random sample of 25 manufacturing workers, each of whom was performing a unique production task representing a portion of the whole job. The mean root-mean-square amplitude and the 10th, 50th, and 90th percentiles of the amplitude probability distribution function were calculated for each cycle. Bootstrap analyses were used to examine the precision of the summary measures as the sampling duration increased incrementally from 1 to 30 work cycles. Precision was estimated by calculating the coefficient of variation (CV) of the bootstrap distributions at each sampling duration increment. Results: The average minimum sampling duration for a bootstrap distribution CV of 15% ranged from 2.0 (SD 1.5) cycles to 7.5 (SD 9.6) cycles, depending on muscle and summary measure. For a 5% CV, the average minimum sampling duration ranged from 11.9 (SD 9.0) to 20.9 (SD 10.5) cycles. Conclusions: The results suggest that sampling as few as three work cycles was sufficient to obtain a bootstrap distribution CV of 15% for some of the muscles and summary measures examined in this study. While limited to machine-paced, cyclic manufacturing work, these results will assist the development of exposure assessment strategies in future epidemiologic studies of physical risk factors and musculoskeletal disorders.
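
    The bootstrap precision analysis can be sketched in a few lines: for each candidate duration k, resample k cycles with replacement, summarize them, and take the CV of the resulting bootstrap distribution. The Python code below assumes one worker's per-cycle RMS amplitudes with the mean as the summary measure; the simulated input and the exact resampling details are illustrative assumptions, not the study's data or protocol.

        import numpy as np

        def bootstrap_cv_by_duration(cycle_values, n_boot=2000, seed=0):
            # For each sampling duration k (number of work cycles), bootstrap the
            # mean of k resampled cycles and report the coefficient of variation
            # (CV) of that bootstrap distribution.
            rng = np.random.default_rng(seed)
            n_cycles = len(cycle_values)
            cv = np.empty(n_cycles)
            for k in range(1, n_cycles + 1):
                means = rng.choice(cycle_values, size=(n_boot, k), replace=True).mean(axis=1)
                cv[k - 1] = means.std(ddof=1) / means.mean()
            return cv

        def min_duration_for_cv(cv, target=0.15):
            # Smallest number of cycles whose bootstrap CV meets the target.
            idx = np.nonzero(cv <= target)[0]
            return int(idx[0]) + 1 if idx.size else None

        # Illustrative use with simulated per-cycle RMS amplitudes (not study data).
        rms = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.3, size=30)
        cv = bootstrap_cv_by_duration(rms)
        print(min_duration_for_cv(cv, 0.15), min_duration_for_cv(cv, 0.05))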

    Youth Football Injuries: A Prospective Cohort

    Background: There are approximately 2.8 million youth football players between the ages of 7 and 14 years in the United States. Rates of injury in this population are poorly described. Recent studies have reported injury rates between 2.3% and 30.4% per season and between 8.5 and 43 per 1000 exposures. Hypothesis: Youth flag football has a lower injury rate than youth tackle football, and concussion rates in flag football are lower than in tackle football. Study Design: Cohort study; Level of evidence, 3. Methods: Three large youth (grades 2-7) football leagues with a total of 3794 players were enrolled. Research personnel partnered with the leagues to provide electronic attendance and injury reporting systems. Researchers had access to deidentified player data and injury information. Injury rates for both the tackle and flag leagues were calculated and compared using Poisson regression with a log link. The probabilities that an injury was severe and that an injury resulted in a concussion were modeled using logistic regression. For these 2 responses, best subset model selection was performed, and the model with the minimum Akaike information criterion value was chosen as best. Kaplan-Meier curves were examined to compare time loss due to injury for various subgroups of the population. Finally, time loss was modeled using Cox proportional hazards regression models. Results: A total of 46,416 exposures and 128 injuries were reported. The mean age at injury was 10.64 years. The hazard ratio for tackle football (compared with flag football) was 0.45 (95% CI, 0.25-0.80; P = .0065). The rate of severe injuries per exposure for tackle football was 1.1 (95% CI, 0.33-3.4; P = .93) times that of the flag league. The rate of concussions per exposure in tackle football was 0.51 (95% CI, 0.16-1.7; P = .27) times that of the flag league. Conclusions: Injury is more likely to occur in youth flag football than in youth tackle football. Severe injury and concussion rates did not differ significantly between leagues. Concussion was more likely to occur during games than during practice. Players in the sixth or seventh grade were more likely to suffer a concussion than were younger players.
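
    The league comparison described in the Methods can be illustrated with a Poisson generalized linear model using a log link and athlete-exposures as an offset. The statsmodels sketch below uses a tiny hypothetical data set (the names injuries, exposures, and tackle are invented for the example) and omits the study's covariates, best-subset selection, and survival models.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical per-player records: injury count, athlete-exposures, league.
        df = pd.DataFrame({
            "injuries":  [0, 1, 0, 2, 0, 1],
            "exposures": [24, 30, 18, 40, 22, 35],
            "tackle":    [1, 1, 1, 0, 0, 0],   # 1 = tackle league, 0 = flag league
        })

        X = sm.add_constant(df[["tackle"]])
        fit = sm.GLM(df["injuries"], X,
                     family=sm.families.Poisson(),    # log link is the default
                     exposure=df["exposures"]).fit()  # enters as a log-offset
        print(np.exp(fit.params["tackle"]))           # injury rate ratio, tackle vs flag
        print(np.exp(fit.conf_int().loc["tackle"]))   # 95% CI for the rate ratio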

    The CMS Integration Grid Testbed

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Grid-wide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonaLisa. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two-month span in the fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids. Comment: CHEP 2003, MOCT01.

    The Effect of a Translating Research into Practice (TRIP)‐Cancer Intervention on Cancer Pain Management in Older Adults in Hospice

    Background. Pain is a major concern for individuals with cancer, particularly older adults, who make up the largest segment of individuals with cancer and who have some of the most unique pain challenges. One of the priorities of hospice is to provide a pain-free death, and while outcomes are better in hospice, patients still die with poorly controlled pain. Objective. This article reports the results of a Translating Research into Practice intervention designed to promote the adoption of evidence-based pain practices for older adults with cancer in community-based hospices. Setting. This Institutional Human Subjects Review Board-approved study was a cluster randomized controlled trial implemented in 16 Midwestern hospices. Methods. Retrospective medical records from newly admitted patients were used to determine the intervention effect. Additionally, survey and focus group data gathered from hospice staff at the completion of the intervention phase were analyzed. Results. Improvement on the Cancer Pain Practice Index, an overall composite outcome measure of evidence-based practices, was not significantly greater at the experimental sites than at the control sites. The decrease in patient pain severity from baseline to post-intervention was greater in the experimental group, but the difference was not statistically significant (P = 0.1032). Conclusions. Findings indicate a number of factors that may impact implementation of multicomponent interventions, including the unique characteristics and culture of the setting, the level of involvement with the change processes, competing priorities and confounding factors, and the complexity of the innovation (practice change). Our results suggest that future study is needed on specific factors to target when implementing a community-based hospice intervention, including determining and measuring intervention fidelity prospectively.

    Influence of smoking on gingival crevicular fluid cytokines in severe chronic periodontitis

    The aim of this study was to compare the expression of 22 chemokines and cytokines in gingival crevicular fluid (GCF) from smokers and non-smokers with periodontitis and from periodontally healthy control subjects.

    Movement Behavior of High-Heeled Walking: How Does the Nervous System Control the Ankle Joint during an Unstable Walking Condition?

    The human locomotor system is flexible and enables humans to move without falling even under less than optimal conditions. Walking with high-heeled shoes constitutes an unstable condition, and here we ask how the nervous system controls the ankle joint in this situation. We investigated the movement behavior of high-heeled and barefooted walking in eleven female subjects. Movement variability was quantified by calculating the approximate entropy (ApEn) of the ankle joint angle and the standard deviation (SD) of the stride time intervals. Electromyography (EMG) of the soleus (SO) and tibialis anterior (TA) muscles and the soleus Hoffmann (H-) reflex were measured at 4.0 km/h on a motor-driven treadmill to reveal the underlying motor strategies in each walking condition. The ApEn of the ankle joint angle was significantly higher (p<0.01) during high-heeled (0.38±0.08) than during barefooted walking (0.28±0.07). During high-heeled walking, coactivation between the SO and TA muscles increased towards heel strike, and the H-reflex was significantly increased in terminal swing by 40% (p<0.01). These observations show that high-heeled walking is characterized by a more complex and less predictable pattern than barefooted walking. Increased coactivation about the ankle joint, together with increased excitability of the SO H-reflex in terminal swing phase, indicates that the motor strategy was changed during high-heeled walking. Although the participants were young, healthy, and accustomed to high-heeled walking, the results demonstrate that walking on high heels needs to be controlled differently from barefooted walking. We suggest that the higher variability reflects an adjusted neural strategy of the nervous system to control the ankle joint during high-heeled walking.
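
    Approximate entropy, the variability measure used above, can be computed with a short routine. The Python sketch below follows the standard ApEn(m, r) definition (template length m, tolerance r, self-matches counted); the choices m = 2 and r = 0.2 SD, and the synthetic test signal, are assumptions for illustration rather than the authors' settings.

        import numpy as np

        def approximate_entropy(x, m=2, r=None):
            # Approximate entropy ApEn(m, r): the negative average log conditional
            # probability that runs matching within tolerance r at length m still
            # match at length m + 1. Lower values indicate a more regular signal.
            x = np.asarray(x, dtype=float)
            n = len(x)
            if r is None:
                r = 0.2 * x.std()   # a commonly used tolerance (assumption here)

            def phi(m):
                # Overlapping templates of length m and pairwise Chebyshev distances.
                templates = np.array([x[i:i + m] for i in range(n - m + 1)])
                dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                counts = np.mean(dist <= r, axis=1)   # self-matches included, as in ApEn
                return np.mean(np.log(counts))

            return phi(m) - phi(m + 1)

        # Synthetic demonstration signal (not study data): a noisy periodic angle trace.
        t = np.linspace(0.0, 10.0, 1000)
        angle = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
        print(approximate_entropy(angle, m=2))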