
    Exponential Random Graph Modeling for Complex Brain Networks

    Exponential random graph models (ERGMs), also known as p* models, have been used extensively in the social science literature to study complex networks and how their global structure depends on underlying structural components. However, the literature on their use in biological networks (especially brain networks) has remained sparse. Descriptive models based on a specific feature of the graph (clustering coefficient, degree distribution, etc.) have dominated connectivity research in neuroscience, and corresponding generative models have been developed to reproduce individual features. However, the complexity inherent in whole-brain network data necessitates the development and use of tools that allow the systematic exploration of several features simultaneously and of how they interact to form the global network architecture. ERGMs provide a statistically principled approach to assessing how a set of interacting local brain network features gives rise to the global structure. We illustrate the utility of ERGMs for modeling, analyzing, and simulating complex whole-brain networks with network data from normal subjects. We also provide a foundation for the selection of important local features through the implementation and assessment of three selection approaches: a traditional p-value based backward selection approach, an information criterion approach (AIC), and a graphical goodness-of-fit (GOF) approach. The graphical GOF approach proves to be the best method, given the scientific interest in capturing and reproducing the structure of fitted brain networks.
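
    As a concrete illustration of the workflow the abstract describes (fitting an ERGM, comparing candidate models by AIC, and checking graphical goodness of fit), here is a minimal R sketch using the ergm package from the statnet suite. It uses the package's stock Florentine marriage network rather than brain-network data, and the two local features chosen are purely illustrative.

        library(ergm)                                 # ERGM fitting, part of the statnet suite
        data(florentine)                              # stock example network (flomarriage), not brain data
        fit <- ergm(flomarriage ~ edges + triangle)   # two interacting local features
        summary(fit)                                  # coefficient estimates and p-values (input to backward selection)
        AIC(fit)                                      # information-criterion comparison across candidate models
        gof_fit <- gof(fit)                           # graphical goodness-of-fit diagnostics
        plot(gof_fit)                                 # observed vs. simulated network statistics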

    Assessing methods for dealing with treatment switching in clinical trials: A follow-up simulation study

    When patients randomised to the control group of a randomised controlled trial are allowed to switch onto the experimental treatment, intention-to-treat analyses of the treatment effect are confounded because the separation of the randomised groups is lost. Previous research has investigated statistical methods that aim to estimate the treatment effect that would have been observed had this treatment switching not occurred, and has demonstrated their performance in a limited set of scenarios. Here, we investigate these methods in a new range of realistic scenarios, allowing conclusions to be drawn from a broader evidence base. We simulated randomised controlled trials incorporating prognosis-related treatment switching and investigated the impact of sample size, reduced switching proportions, disease severity, and alternative data-generating models on the performance of adjustment methods, assessed by comparing bias, mean squared error, and coverage in estimating the true restricted mean survival that would have been observed in the control group in the absence of switching. Rank-preserving structural failure time models, inverse probability of censoring weights, and two-stage methods consistently produced less bias than the intention-to-treat analysis. The switching proportion was confirmed to be a key determinant of bias; sample size and censoring proportion were relatively less important. It is critical to determine the size of the treatment effect in terms of an acceleration factor (rather than a hazard ratio) to provide information on the likely bias associated with rank-preserving structural failure time model adjustments. In general, inverse probability of censoring weight methods are more volatile than the other adjustment methods.
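
    To make the switching problem concrete, the following base-R sketch (not the authors' simulation code) generates survival times under an accelerated failure time model, lets a fraction of control patients switch onto treatment partway through follow-up, and compares the intention-to-treat contrast in restricted mean survival with the true contrast. All numbers (event rate, acceleration factor, switching proportion, restriction time) are arbitrary, and switching is simplified to occur at random rather than being prognosis-related.

        set.seed(1)
        n      <- 5000
        accel  <- 1.5                                  # true acceleration factor of treatment
        t0     <- rexp(n, rate = 0.1)                  # counterfactual untreated survival times
        arm    <- rbinom(n, 1, 0.5)                    # 1 = experimental, 0 = control
        switch <- arm == 0 & runif(n) < 0.4            # 40% of controls switch (simplified: at random)
        tswi   <- runif(n) * t0                        # switching time, before the untreated event time
        tobs   <- ifelse(arm == 1, accel * t0,                              # treated arm: fully accelerated
                  ifelse(switch, tswi + accel * (t0 - tswi), t0))           # switchers: accelerated after tswi
        rmst <- function(x, tau = 20) mean(pmin(x, tau))                    # restricted mean survival to tau
        c(true_effect = rmst(accel * t0) - rmst(t0),
          itt_effect  = rmst(tobs[arm == 1]) - rmst(tobs[arm == 0]))        # ITT estimate is attenuated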

    Semiparametric Multivariate Accelerated Failure Time Model with Generalized Estimating Equations

    The semiparametric accelerated failure time model is not as widely used as the Cox relative risk model, mainly because of computational difficulties. Recent developments in least squares estimation and induced smoothing estimating equations provide promising tools to make accelerated failure time models more attractive in practice. For semiparametric multivariate accelerated failure time models, we propose a generalized estimating equation approach to account for the multivariate dependence through working correlation structures. The marginal error distributions can be either identical, as in sequential event settings, or different, as in parallel event settings. Some regression coefficients can be shared across margins as needed. The initial estimator is a rank-based estimator with Gehan's weight, obtained from an induced smoothing approach for computational ease. The resulting estimator is consistent and asymptotically normal, with a variance estimated through a multiplier resampling method. In a simulation study, our estimator was up to three times as efficient as the initial estimator, especially with stronger multivariate dependence and heavier censoring. Two real examples demonstrate the utility of the proposed method.
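
    For readers unfamiliar with the rank-based starting point the abstract mentions, the base-R sketch below fits the classical (non-smoothed) Gehan rank estimator for a univariate AFT model on simulated data. It illustrates that kind of initial estimator only, not the paper's induced-smoothing GEE estimator, and all data and parameter values are made up.

        set.seed(2)
        n      <- 200
        X      <- cbind(x1 = rnorm(n), x2 = rbinom(n, 1, 0.5))
        beta0  <- c(0.5, -1)
        time   <- as.vector(exp(X %*% beta0 + rnorm(n)))   # log-linear AFT model
        cens   <- rexp(n, rate = 0.05)                     # independent censoring
        status <- as.numeric(time <= cens)                 # 1 = event observed
        obs    <- pmin(time, cens)                         # observed follow-up time

        gehan_loss <- function(beta, X, obs, status) {
          e <- as.vector(log(obs) - X %*% beta)            # AFT residuals on the log scale
          d <- outer(e, e, "-")                            # d[i, j] = e_i - e_j
          sum(status * pmax(-d, 0))                        # sum over pairs of delta_i * (e_j - e_i)^+
        }
        fit <- optim(c(0, 0), gehan_loss, X = X, obs = obs, status = status)
        fit$par                                            # rank-based estimate of beta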

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity.
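
    A standard textbook case (not drawn from the paper itself) may help fix ideas: for a normal mean theta with known variance, the confidence measure given the sample mean has cumulative form

        C_{\bar{x}}(\theta) \;=\; \Phi\!\left(\frac{\theta - \bar{x}}{\sigma/\sqrt{n}}\right),

    so the probability it assigns to any interval equals the coverage rate of the corresponding confidence interval; its central 95% region, for example, is the familiar interval \bar{x} \pm 1.96\,\sigma/\sqrt{n}.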

    A Relational Event Approach to Modeling Behavioral Dynamics

    This chapter provides an introduction to the analysis of relational event data (i.e., actions, interactions, or other events involving multiple actors that occur over time) within the R/statnet platform. We begin by reviewing the basics of relational event modeling, with an emphasis on models with piecewise constant hazards. We then discuss estimation for dyadic and more general relational event models using the relevent package, with an emphasis on hands-on applications of the methods and interpretation of results. Statnet is a collection of packages for the R statistical computing system that supports the representation, manipulation, visualization, modeling, simulation, and analysis of relational data. Statnet packages are contributed by a team of volunteer developers, and are made freely available under the GNU General Public License. These packages are written for the R statistical computing environment, and can be used with any computing platform that supports R (including Windows, Linux, and Mac).
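
    A minimal R sketch of the kind of dyadic relational event model the chapter walks through, using the relevent package; the toy event list and the specific effect names here are illustrative, and exact argument names should be checked against the package documentation (?rem.dyad).

        library(relevent)                              # relational event modeling, part of statnet
        # one row per event: (time, sender, receiver), actors numbered 1..n
        edgelist <- cbind(time     = 1:6,
                          sender   = c(1, 2, 1, 3, 2, 1),
                          receiver = c(2, 1, 3, 1, 3, 2))
        fit <- rem.dyad(edgelist, n = 3,
                        effects = c("RRecSnd", "RSndSnd"),   # recency-of-receipt and repetition effects
                        ordinal = TRUE, hessian = TRUE)      # ordinal timing; Hessian needed for standard errors
        summary(fit)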

    Pneumococcal carriage in sub-Saharan Africa--a systematic review.

    BACKGROUND: Pneumococcal epidemiology varies geographically and few data are available from the African continent. We assess pneumococcal carriage from studies conducted in sub-Saharan Africa (sSA) before and after the pneumococcal conjugate vaccine (PCV) era. METHODS: A search for pneumococcal carriage studies published before 2012 was conducted to describe carriage in sSA. The review also describes pneumococcal serotypes and assesses the impact of vaccination on carriage in this region. RESULTS: Fifty-seven studies were included in this review, with the largest share (40.3%) from South Africa. There was considerable variability in the prevalence of carriage between studies (I-squared statistic = 99%). Carriage was higher in children and decreased with increasing age: 63.2% (95% CI: 55.6-70.8) in children less than 5 years, 42.6% (95% CI: 29.9-55.4) in children 5-15 years, and 28.0% (95% CI: 19.0-37.0) in adults older than 15 years. There was no difference in the prevalence of carriage between males and females in 9/11 studies. Serotypes 19F, 6B, 6A, 14, and 23F were the five most common isolates. A meta-analysis of four randomized trials of PCV vaccination in children aged 9-24 months showed that carriage of vaccine-type (VT) serotypes decreased with PCV vaccination; however, overall carriage remained the same because of a concomitant increase in non-vaccine-type (NVT) serotypes. CONCLUSION: Pneumococcal carriage is generally high on the African continent, particularly in young children. The five most common serotypes in sSA are among the top seven serotypes that cause invasive pneumococcal disease in children globally. These serotypes are covered by the two PCVs recommended for routine childhood immunization by the WHO. The distribution of serotypes found in the nasopharynx is altered by PCV vaccination.
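
    As a generic illustration of how carriage prevalences of this kind can be pooled (this is not the authors' analysis code, and the counts are invented), a random-effects meta-analysis on the logit scale with the metafor package in R looks like:

        library(metafor)
        dat <- data.frame(carriers = c(120, 85, 240),        # hypothetical study-level counts
                          sampled  = c(200, 150, 400))
        es  <- escalc(measure = "PLO", xi = carriers, ni = sampled, data = dat)  # logit proportions and variances
        res <- rma(yi, vi, data = es)                        # random-effects model; I-squared reported in the output
        predict(res, transf = transf.ilogit)                 # pooled prevalence back on the proportion scale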

    A Measurement of Rb using a Double Tagging Method

    The fraction of Z to bbbar events in hadronic Z decays has been measured by the OPAL experiment using the data collected at LEP between 1992 and 1995. The Z to bbbar decays were tagged using displaced secondary vertices, and high momentum electrons and muons. Systematic uncertainties were reduced by measuring the b-tagging efficiency using a double tagging technique. Efficiency correlations between opposite hemispheres of an event are small, and are well understood through comparisons between real and simulated data samples. A value of Rb = 0.2178 +/- 0.0011 +/- 0.0013 was obtained, where the first error is statistical and the second systematic. The uncertainty on Rc, the fraction of Z to ccbar events in hadronic Z decays, is not included in the errors. The dependence on Rc is Delta(Rb)/Rb = -0.056*Delta(Rc)/Rc, where Delta(Rc) is the deviation of Rc from the value 0.172 predicted by the Standard Model. The result for Rb agrees with the value of 0.2155 +/- 0.0003 predicted by the Standard Model.
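
    For readers unfamiliar with the double-tag technique, the standard relations (written here schematically, not quoted from the paper) connect the single-tag and double-tag rates to Rb and the b-tagging efficiency:

        \begin{aligned}
        f_{s} &= \epsilon_b R_b + \epsilon_c R_c + \epsilon_{uds}\,(1 - R_b - R_c),\\
        f_{d} &= C_b\,\epsilon_b^{2} R_b + \epsilon_c^{2} R_c + \epsilon_{uds}^{2}\,(1 - R_b - R_c),
        \end{aligned}

    where f_s and f_d are the fractions of tagged hemispheres and of events with both hemispheres tagged, and C_b is the hemisphere efficiency correlation. Solving the two equations simultaneously yields both epsilon_b and R_b from the data, so only the small charm and light-quark tagging efficiencies and C_b need to be taken from simulation, which is why the hemisphere correlations mentioned in the abstract matter.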

    Measurement of the B+ and B0 lifetimes and search for CP(T) violation using reconstructed secondary vertices

    The lifetimes of the B+ and B0 mesons, and their ratio, have been measured in the OPAL experiment using 2.4 million hadronic Z0 decays recorded at LEP. Z0 --> bbbar decays were tagged using displaced secondary vertices and high momentum electrons and muons. The lifetimes were then measured using well-reconstructed charged and neutral secondary vertices selected in this tagged data sample. The results are tau(B+) = 1.643 +/- 0.037 +/- 0.025 ps, tau(B0) = 1.523 +/- 0.057 +/- 0.053 ps, and tau(B+)/tau(B0) = 1.079 +/- 0.064 +/- 0.041, where in each case the first error is statistical and the second systematic. A larger data sample of 3.1 million hadronic Z0 decays has been used to search for CP and CPT violating effects by comparison of inclusive b and bbar hadron decays. No evidence for such effects is seen. The CP violation parameter Re(epsilon_B) is measured to be Re(epsilon_B) = 0.001 +/- 0.014 +/- 0.003, and the fractional difference between b and bbar hadron lifetimes is measured to be (Delta tau/tau)_b = (tau(b hadron) - tau(bbar hadron))/tau(average) = -0.001 +/- 0.012 +/- 0.008.

    The Risk of Virologic Failure Decreases with Duration of HIV Suppression, at Greater than 50% Adherence to Antiretroviral Therapy

    Background: We hypothesized that the percent adherence to antiretroviral therapy necessary to maintain HIV suppression would decrease with longer duration of viral suppression. Methodology: Eligible participants were identified from the REACH cohort of marginally housed, HIV-infected adults in San Francisco. Adherence to antiretroviral therapy was measured through pill counts obtained at unannounced visits by research staff to each participant's usual place of residence. Marginal structural models and targeted maximum likelihood estimation methodologies were used to determine the effect of adherence to antiretroviral therapy on the probability of virologic failure during early and late viral suppression. Principal Findings: A total of 221 subjects were studied (median age 44.1 years; median CD4+ T cell nadir 206 cells/mm3). Most subjects were taking the following types of antiretroviral regimens: non-nucleoside reverse transcriptase inhibitor based (37%), ritonavir-boosted protease inhibitor based (28%), or unboosted protease inhibitor based (25%). Comparing the probability of failure just after achieving suppression vs. after 12 consecutive months of suppression, there was a statistically significant decrease in the probability of virologic failure for each range of adherence proportions we considered, as long as adherence was greater than 50%. The estimated risk difference, comparing the probability of virologic failure after 1 month vs. after 12 months of continuous viral suppression, was 0.47 (95% CI 0.23–0.63) at 50–74% adherence, 0.29 (CI 0.03–0.50) at 75–89% adherence, and 0.36 (CI 0.23–0.48) at 90–100% adherence. Conclusions: The risk of virologic failure at adherence greater than 50% declines with longer duration of continuous suppression. While high adherence is required to maximize the probability of durable viral suppression, the range of adherence capable of sustaining viral suppression is wider after prolonged periods of viral suppression.

    Comparison of dynamic monitoring strategies based on CD4 cell counts in virally suppressed, HIV-positive individuals on combination antiretroviral therapy in high-income countries: a prospective, observational study

    BACKGROUND: Clinical guidelines vary with respect to the optimal monitoring frequency of HIV-positive individuals. We compared dynamic monitoring strategies based on time-varying CD4 cell counts in virologically suppressed HIV-positive individuals. METHODS: In this observational study, we used data from prospective studies of HIV-positive individuals in Europe (France, Greece, the Netherlands, Spain, Switzerland, and the UK) and North and South America (Brazil, Canada, and the USA) in The HIV-CAUSAL Collaboration and The Centers for AIDS Research Network of Integrated Clinical Systems. We compared three monitoring strategies that differ in the threshold used to measure CD4 cell count and HIV RNA viral load every 3–6 months (when below the threshold) or every 9–12 months (when above the threshold). The strategies were defined by the threshold CD4 counts of 200 cells per μL, 350 cells per μL, and 500 cells per μL. Using inverse probability weighting to adjust for baseline and time-varying confounders, we estimated hazard ratios (HRs) of death and of AIDS-defining illness or death, risk ratios of virological failure, and mean differences in CD4 cell count. FINDINGS: 47 635 individuals initiated an antiretroviral therapy regimen between Jan 1, 2000, and Jan 9, 2015, and met the eligibility criteria for inclusion in our study. During follow-up, CD4 cell count was measured on average every 4·0 months and viral load every 3·8 months. 464 individuals died (107 in threshold 200 strategy, 157 in threshold 350, and 200 in threshold 500) and 1091 had AIDS-defining illnesses or died (267 in threshold 200 strategy, 365 in threshold 350, and 459 in threshold 500). Compared with threshold 500, the mortality HR was 1·05 (95% CI 0·86–1·29) for threshold 200 and 1·02 (0·91–1·14) for threshold 350. Corresponding estimates for death or AIDS-defining illness were 1·08 (0·95–1·22) for threshold 200 and 1·03 (0·96–1·12) for threshold 350. Compared with threshold 500, the 24 month risk ratios of virological failure (viral load more than 200 copies per mL) were 2·01 (1·17–3·43) for threshold 200 and 1·24 (0·89–1·73) for threshold 350, and 24 month mean CD4 cell count differences were 0·4 (−25·5 to 26·3) cells per μL for threshold 200 and −3·5 (−16·0 to 8·9) cells per μL for threshold 350. INTERPRETATION: Decreasing monitoring to annually when CD4 count is higher than 200 cells per μL compared with higher than 500 cells per μL does not worsen the short-term clinical and immunological outcomes of virally suppressed HIV-positive individuals. However, more frequent virological monitoring might be necessary to reduce the risk of virological failure. Further follow-up studies are needed to establish the long-term safety of these strategies. FUNDING: National Institutes of Health.
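
    To illustrate the inverse probability weighting step the abstract mentions, here is a minimal base-R sketch (not the authors' analysis code) of stabilized weights for a time-varying binary "frequent monitoring" indicator A, followed by a weighted pooled logistic model for a binary outcome Y. The data frame `long` (one row per person-interval, with columns id, month, baseline_cd4, cd4, A, Y, sorted by id and month) is entirely hypothetical.

        num <- glm(A ~ baseline_cd4,       family = binomial, data = long)   # numerator: baseline covariates only
        den <- glm(A ~ baseline_cd4 + cd4, family = binomial, data = long)   # denominator: plus time-varying CD4
        p_num <- ifelse(long$A == 1, fitted(num), 1 - fitted(num))           # probability of the strategy actually followed
        p_den <- ifelse(long$A == 1, fitted(den), 1 - fitted(den))
        long$sw <- ave(p_num / p_den, long$id, FUN = cumprod)                # cumulative product within each subject
        msm <- glm(Y ~ A + month, family = quasibinomial, data = long, weights = sw)   # weighted pooled logistic model
        summary(msm)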