
    Using practice effects for targeted trials or sub-group analysis in Alzheimer's disease: How practice effects predict change over time

    OBJECTIVE: To describe the presence of practice effects in persons with Alzheimer disease (AD) or mild cognitive impairment (MCI) and to evaluate how practice effects affect cognitive progression and the outcomes of clinical trials. METHODS: Using data from a meta-database consisting of 18 studies including participants from the Alzheimer's Disease Cooperative Study (ADCS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI) with ADAS-Cog11 as the primary outcome, we defined practice effects based on the improvement between the first two ADAS-Cog11 scores, estimated the prevalence of practice effects, and compared cognitive progression between participants with and without practice effects. The robustness of practice effects was investigated using CDR-SB, an outcome independent of the definition itself. Furthermore, we evaluated how practice effects can affect sample size estimation. RESULTS: The overall percentage of practice effects was 39.0% for AD participants and 53.3% for MCI participants. For AD studies, the mean change from baseline to 2 years was 12.8 points for the non-practice-effects group vs 7.4 for the practice-effects group; for MCI studies, it was 4.1 for the non-practice-effects group vs 0.2 for the practice-effects group. On CDR-SB, AD participants without practice effects progressed 0.9 points faster than those with practice effects over a period of 2 years; for MCI participants, the difference was 0.7 points. Estimated sample sizes can differ by over 35% depending on whether they are based on participants with or without practice effects. CONCLUSION: Practice effects were prevalent and robust in persons with AD or MCI and affected both cognitive progression and sample size estimation. Planning of future AD or MCI clinical trials should account for practice effects to avoid being underpowered, or should consider targeted trials or stratified analyses based on practice effects.
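    As a rough illustration of how the grouping and the trial-planning claim fit together, the sketch below classifies participants by strict improvement between the first two ADAS-Cog11 scores and compares per-arm sample sizes with a standard two-sample normal approximation. The improvement threshold, the assumed 8-point standard deviation, and the goal of halving the 2-year decline are hypothetical choices for illustration, not the study's settings.

    import numpy as np
    from scipy.stats import norm

    def practice_effect(first_score, second_score):
        # Flag a practice effect when the second ADAS-Cog11 score improves on
        # the first (lower ADAS-Cog11 scores indicate better cognition).
        return second_score < first_score

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        # Two-sample normal-approximation sample size per arm for detecting a
        # mean difference delta with common standard deviation sd.
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return int(np.ceil(2 * (z * sd / delta) ** 2))

    # Hypothetical first-two-visit scores: improvement -> practice effect.
    print(practice_effect(np.array([22.0, 18.0]), np.array([20.5, 19.5])))

    # Hypothetical trial aiming to halve the 2-year decline, SD of 8 points:
    # the group-specific declines reported above imply very different n's.
    print(n_per_arm(delta=12.8 / 2, sd=8.0))  # planning on non-practice-effects decline
    print(n_per_arm(delta=7.4 / 2, sd=8.0))   # planning on practice-effects decline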

    Detecting multiple generalized change-points by isolating single ones

    We introduce a new approach, called Isolate-Detect (ID), for the consistent estimation of the number and location of multiple generalized change-points in noisy data sequences. Examples of signal changes that ID can deal with are changes in the mean of a piecewise-constant signal and changes, continuous or not, in the linear trend. The number of change-points can increase with the sample size. Our method is based on an isolation technique, which prevents the consideration of intervals that contain more than one change-point. This isolation enhances ID’s accuracy as it allows for detection in the presence of frequent changes of possibly small magnitudes. In ID, model selection is carried out via thresholding, an information criterion, SDLL, or a hybrid involving the first two. The hybrid model selection leads to a general method with very good practical performance and minimal parameter choice. In the scenarios tested, ID is at least as accurate as the state-of-the-art methods, and most of the time it outperforms them. ID is implemented in the R packages IDetect and breakfast, available from CRAN.
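    The isolation idea can be caricatured in a few lines: grow an interval until the CUSUM contrast inside it crosses a universal threshold, so the interval is likely to contain exactly one change-point, record the detection, and restart just past it. The sketch below is a simplified, right-expanding variant for changes in a piecewise-constant mean only; the real algorithm alternates left- and right-expanding intervals and supports the model-selection variants listed above, and the step size and threshold constant here are illustrative.

    import numpy as np

    def cusum_contrast(x):
        # CUSUM contrast sqrt(b(n-b)/n) * |mean(x[:b]) - mean(x[b:])| for
        # every candidate split b; large values indicate a change in mean.
        n = len(x)
        b = np.arange(1, n)
        csum = np.cumsum(x)[:-1]
        mean_left = csum / b
        mean_right = (x.sum() - csum) / (n - b)
        return np.sqrt(b * (n - b) / n) * np.abs(mean_left - mean_right)

    def isolate_detect_mean(x, step=10, c=1.1):
        # Grow an interval to the right until its CUSUM maximum crosses the
        # threshold c * sigma * sqrt(2 log n), record the maximizer as a
        # change-point, then restart the search just after it.
        n = len(x)
        sigma = np.median(np.abs(np.diff(x))) / 0.954  # robust noise scale
        thresh = c * sigma * np.sqrt(2 * np.log(n))
        cps, start = [], 0
        while start < n - 2:
            end, found = start + step, False
            while end <= n:
                stat = cusum_contrast(x[start:end])
                if stat.max() > thresh:
                    cp = start + int(stat.argmax()) + 1
                    cps.append(cp)
                    start, found = cp, True
                    break
                end += step
            if not found:
                break
        return cps

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
    print(isolate_detect_mean(x))  # approximately [100]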

    High-dimensional change point detection for mean and location parameters

    Change point inference refers to the detection of structural breaks in a sequence of observations, which may exhibit one or more distributional shifts under models such as mean or covariance changes. In this dissertation, we consider the offline multiple change point problem, in which the sample size is fixed in advance or after observation. In particular, we concentrate on the high-dimensional setup, where the dimension p can be much larger than the sample size n and traditional distributional assumptions can easily fail. The goal is to employ non-parametric approaches to identify change points without intermediate estimation of the cross-sectional dependence. In the first part, we consider cumulative sum (CUSUM) statistics, which are widely used in change point inference and identification. We study two problems for high-dimensional mean vectors based on the \ell^{\infty}-norm of the CUSUM statistics. For the problem of testing for the existence of a change point in an independent sample generated from the mean-shift model, we introduce a Gaussian multiplier bootstrap to calibrate critical values of the CUSUM test statistics in high dimensions. The proposed bootstrap CUSUM test is fully data-dependent and has strong theoretical guarantees under arbitrary dependence structures and mild moment conditions. Specifically, we show that with a boundary removal parameter the bootstrap CUSUM test enjoys uniform validity in size under the null and achieves the minimax separation rate under sparse alternatives when p \gg n. Once a change point is detected, we estimate the change point location by maximizing the \ell^{\infty}-norm of the generalized CUSUM statistics at two different weighting scales. The first estimator is based on the covariance-stationary CUSUM statistics, and we prove its consistency in estimating the location at the nearly parametric rate n^{-1/2} for sub-exponential observations. The second estimator is based on non-stationary CUSUM statistics, assigning less weight to the boundary data points; in this case, we show that it achieves the nearly best possible rate of convergence, on the order of n^{-1}. In both cases, the dimension impacts the rate of convergence only through logarithmic factors, so consistency of the CUSUM location estimators is possible when p is much larger than n. In the presence of multiple change points, we propose a principled bootstrap-assisted binary segmentation (BABS) algorithm to dynamically adjust the change point detection rule and recursively estimate the locations. We derive its rate of convergence under suitable signal separation and strength conditions. The results derived are non-asymptotic, and we provide extensive simulation studies to assess the finite sample performance. The empirical evidence shows encouraging agreement with our theoretical results. In the second part, we analyze the problem of change point detection for high-dimensional distributions in a location family. We propose a robust, tuning-free (i.e., fully data-dependent), and easy-to-implement change point test formulated in the multivariate U-statistics framework with anti-symmetric and nonlinear kernels. It achieves robustness in a non-parametric setting where CUSUM statistics are sensitive to outliers and heavy-tailed distributions.
    Specifically, the within-sample noise is canceled out by the anti-symmetry of the kernel, while the signal distortion under certain nonlinear kernels can be controlled such that the between-sample change point signal is magnitude-preserving. A (half) jackknife multiplier bootstrap (JMB), tailored to the change point detection setting, is proposed to calibrate the distribution of our \ell^{\infty}-norm aggregated test statistic. Subject to mild moment conditions on the kernels, we derive the uniform rates of convergence for the JMB to approximate the sampling distribution of the test statistic, and analyze its size and power properties. Extensions to multiple change point testing and estimation are discussed with illustrations from numerical studies.
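    For the first part, the test's two ingredients can be sketched directly from the description above: an \ell^{\infty}-norm CUSUM statistic with boundary removal, and a Gaussian multiplier bootstrap that applies the same functional to the centered data reweighted by i.i.d. N(0,1) multipliers. The trimming fraction, number of bootstrap draws, and simulated data below are illustrative choices, not the dissertation's settings.

    import numpy as np

    def linf_cusum(X, trim=0.05):
        # sup over split points b and coordinates j of the CUSUM statistic
        # sqrt(b(n-b)/n) * |mean(X[:b, j]) - mean(X[b:, j])|, with the
        # boundary fraction trim removed at both ends.
        n = X.shape[0]
        cut = max(1, int(trim * n))
        b = np.arange(cut, n - cut)[:, None]
        csum = np.cumsum(X, axis=0)
        mean_left = csum[b[:, 0] - 1] / b
        mean_right = (csum[-1] - csum[b[:, 0] - 1]) / (n - b)
        return (np.sqrt(b * (n - b) / n) * np.abs(mean_left - mean_right)).max()

    def bootstrap_critical_value(X, n_boot=500, alpha=0.05, trim=0.05, seed=0):
        # Gaussian multiplier bootstrap: recompute the sup-statistic on the
        # centered data multiplied row-wise by i.i.d. standard normals.
        rng = np.random.default_rng(seed)
        Xc = X - X.mean(axis=0)
        draws = [linf_cusum(Xc * rng.standard_normal((X.shape[0], 1)), trim)
                 for _ in range(n_boot)]
        return np.quantile(draws, 1 - alpha)

    rng = np.random.default_rng(1)
    n, p = 200, 500                        # p >> n regime
    X = rng.standard_normal((n, p))
    X[n // 2:, :5] += 1.0                  # sparse mean shift in 5 coordinates
    print(linf_cusum(X) > bootstrap_critical_value(X))  # True: change detected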

    Essay on Dynamic Matching

    In the first chapter, I study the two-sided, dynamic matching problem that occurs in the United States (US) foster care system. In this market, foster parents and foster children can form reversible foster matches, which may disrupt, continue in a reversible state, or transition into permanency via adoption. I first present an empirical analysis that yields four new stylized facts related to match transitions of children in foster care and their exit through adoption. Thereafter, I develop a two-sided dynamic matching model with five key features: (a) children are heterogeneous (with and without a disability), (b) children must be foster matched before being adopted, (c) children search for parents while foster matched to another parent, (d) parents receive a smaller per-period payoff when adopting than fostering (capturing the presence of a financial penalty on adoption), and (e) matches differ in their quality. I use the model to derive conditions for the stylized facts to arise in equilibrium and to derive predictions regarding match quality. The main insight is that the intrinsic disadvantage (being less preferred by foster parents) faced by children with a disability is exacerbated by the penalty. Moreover, I show that foster parents in high-quality matches (relative to foster parents in low-quality matches) might have fewer incentives to adopt. In the second chapter, I study Minnesota's 2015 Northstar Care Program, which eliminated the adoption penalty (i.e., the decrease in fostering-based financial transfers associated with adoption) for children aged six and older, while maintaining it for children under age six. Using a difference-in-differences estimation strategy that controls for a rich set of covariates, I find that parents were responsive to the change in direct financial payments; the annual adoption rate of older foster children (aged six to eleven) increased by approximately 8 percentage points (24% at the mean) as a result of the program. I additionally find evidence of strategic adoption behavior, as the adoption rate of younger children temporarily increased by 9 percentage points (23% at the mean) while the adoption rate of the oldest children (aged fifteen) temporarily decreased by 9 percentage points (65% at the mean) in the year prior to the program's implementation.
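    A minimal sketch of a difference-in-differences specification of the kind described above, on fabricated data: treated marks children aged six and older (whose adoption penalty the program removed), post marks years from 2015 on, and the coefficient on their interaction recovers the program effect. The variable names, the single age covariate, and the assumed 8-percentage-point effect are illustrative; the chapter controls for a much richer covariate set.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Fabricated child-year data for illustration (not the Minnesota data).
    rng = np.random.default_rng(0)
    n = 5_000
    df = pd.DataFrame({
        "age": rng.integers(0, 16, n),
        "post": rng.integers(0, 2, n),      # observed in 2015 or later
    })
    df["treated"] = (df["age"] >= 6).astype(int)  # penalty removed for 6+
    base = 0.30 - 0.01 * df["age"] + 0.01 * df["post"]
    effect = 0.08 * df["treated"] * df["post"]    # assumed 8 pp program effect
    df["adopted"] = (rng.random(n) < base + effect).astype(float)

    # Difference-in-differences: the treated:post coefficient estimates the
    # program's effect on the annual adoption rate.
    model = smf.ols("adopted ~ treated * post + age", data=df).fit(cov_type="HC1")
    print(model.params["treated:post"])           # approximately 0.08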

    A comparison between tests for changes in the adjustment coefficients in cointegrated systems

    In this paper we examine several approaches to detecting changes in the adjustment coefficients in cointegrated VARs. We adopt recursive and rolling techniques as mis-specification tests for the detection of non-constancy and the estimation of the breakpoints. We find that inspection of the recursive eigenvalues is not useful for detecting a break in the adjustment coefficients, whilst recursive estimation of the coefficients can only indicate non-constancy, not the exact breakpoint. Rolling estimation is found to perform better at detecting non-constancy of the parameters and estimating their true values after the breakpoint; however, it only detects a region where the break is likely to occur. To overcome the drawbacks of these techniques, we use an OLS-based sequential test. To assess its performance, we derive its critical values for different sample sizes. Monte Carlo evidence shows that the test has reasonably good power even in moderately sized samples and that it can be used as a graphical device, as it shows a kink at the breakpoint. As a benchmark we use the Kalman filter, whose performance we analyse on the same data generating processes (DGPs).
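    The rolling approach can be illustrated on a simulated bivariate cointegrated system whose adjustment coefficient breaks mid-sample: re-fitting the model on a sliding window and tracking the estimated coefficient shows drift around the break but, consistent with the finding above, pins down only a region in which the break occurred. The DGP, window length, and step size below are illustrative choices; the sketch uses the VECM estimator from statsmodels, not the paper's exact procedures.

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM

    # Simulated system: y2 is a random walk and y1 error-corrects toward it
    # with adjustment coefficient -0.5 before t = 250 and -0.1 afterwards.
    rng = np.random.default_rng(0)
    T = 500
    y2 = np.cumsum(rng.standard_normal(T))
    y1 = np.empty(T)
    y1[0] = y2[0]
    for t in range(1, T):
        alpha = -0.5 if t < 250 else -0.1
        y1[t] = y1[t - 1] + alpha * (y1[t - 1] - y2[t - 1]) + rng.standard_normal()
    data = np.column_stack([y1, y2])

    # Rolling estimation: the path of the estimated adjustment coefficient
    # drifts from about -0.5 to about -0.1 in the windows spanning the break.
    window = 100
    for start in range(0, T - window + 1, 50):
        res = VECM(data[start:start + window], k_ar_diff=0, coint_rank=1).fit()
        print(start, round(float(res.alpha[0, 0]), 3))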