
    A unified analysis of regression adjustment in randomized experiments

    Regression adjustment is broadly applied in randomized trials under the premise that it usually improves the precision of a treatment effect estimator. However, previous work has shown that this is not always true. To further understand this phenomenon, we develop a unified comparison of the asymptotic variance of a class of linear regression-adjusted estimators. Our analysis is based on the classical theory for linear regression with heteroscedastic errors and thus does not assume that the postulated linear model is correct. For a completely randomized binary treatment, we provide sufficient conditions under which some regression-adjusted estimators are guaranteed to be more asymptotically efficient than others. We explore other settings such as general treatment assignment mechanisms and generalized linear models, and find that the variance dominance phenomenon no longer occurs. Comment: 17 pages, 1 figure, 2 tables.
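    As a rough, self-contained illustration of the kind of estimators being compared (not the paper's analysis), the sketch below contrasts the unadjusted difference-in-means with a fully interacted (Lin-style) regression adjustment under a completely randomized binary treatment; the data-generating process, sample size, and effect sizes are arbitrary assumptions.

        # Illustrative sketch only: compares the empirical variance of the unadjusted
        # difference-in-means with a fully interacted regression adjustment under
        # complete randomization. The DGP below is an arbitrary assumption.
        import numpy as np

        rng = np.random.default_rng(0)

        def one_draw(n=500, p_treat=0.5):
            x = rng.normal(size=n)                       # baseline covariate
            tau = 2.0                                    # true treatment effect
            y0 = 1.0 + 1.5 * x + rng.normal(size=n)      # potential outcome under control
            y1 = y0 + tau + 0.5 * x                      # heterogeneous treatment effect
            idx = rng.permutation(n)[: int(n * p_treat)] # complete randomization
            z = np.zeros(n); z[idx] = 1.0
            y = z * y1 + (1 - z) * y0

            # Unadjusted difference in means
            dm = y[z == 1].mean() - y[z == 0].mean()

            # Fully interacted adjustment: regress y on z, centered x, and z * centered x
            xc = x - x.mean()
            X = np.column_stack([np.ones(n), z, xc, z * xc])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            return dm, beta[1]                           # beta[1] estimates the effect

        draws = np.array([one_draw() for _ in range(2000)])
        print("var(unadjusted) :", draws[:, 0].var())
        print("var(interacted) :", draws[:, 1].var())

    With a covariate this prognostic, the interacted estimator should show the smaller empirical variance; the paper characterizes when such dominance relations hold or fail in general.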

    Accelerated Nonconvex ADMM with Self-Adaptive Penalty for Rank-Constrained Model Identification

    The alternating direction method of multipliers (ADMM) has been widely adopted in low-rank approximation and low-order model identification tasks; however, the performance of nonconvex ADMM is highly reliant on the choice of penalty parameter. To accelerate ADMM for solving rank-constrained identification problems, this paper proposes a new self-adaptive strategy for automatic penalty update. Guided by a first-order analysis of the increment of the augmented Lagrangian, the self-adaptive penalty update enables effective and balanced minimization of both primal and dual residuals and thus ensures stable convergence. Moreover, improved efficiency can be obtained within the Anderson acceleration scheme. Numerical examples show that the proposed strategy significantly accelerates the convergence of nonconvex ADMM while alleviating the critical reliance on tedious tuning of penalty parameters. Comment: 7 pages, 4 figures. Submitted to the 62nd IEEE Conference on Decision and Control (CDC 2023).
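    For orientation only, the sketch below shows how an adaptive penalty enters a nonconvex ADMM on a toy rank-constrained approximation problem, min ||X - D||_F^2 subject to rank(X) <= r. It uses the generic residual-balancing heuristic rather than the paper's first-order self-adaptive rule, and the problem instance and parameters are assumptions.

        # Illustrative sketch: nonconvex ADMM for  min ||X - D||_F^2  s.t. rank(X) <= r,
        # with the generic residual-balancing penalty update (NOT the paper's
        # first-order self-adaptive rule). Problem data and parameters are assumed.
        import numpy as np

        rng = np.random.default_rng(1)
        m, n, r = 60, 40, 3
        D = rng.normal(size=(m, r)) @ rng.normal(size=(r, n)) + 0.05 * rng.normal(size=(m, n))

        def project_rank(M, r):
            """Truncated SVD projection onto the set of matrices with rank <= r."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r]

        X = np.zeros((m, n)); Z = np.zeros((m, n)); U = np.zeros((m, n))
        rho, mu, tau = 1.0, 10.0, 2.0

        for k in range(200):
            # X-update: quadratic data term plus augmented Lagrangian penalty
            X = (2.0 * D + rho * (Z - U)) / (2.0 + rho)
            Z_old = Z
            # Z-update: projection onto the rank constraint (the nonconvex step)
            Z = project_rank(X + U, r)
            U = U + X - Z
            # Primal and dual residuals
            r_norm = np.linalg.norm(X - Z)
            s_norm = rho * np.linalg.norm(Z - Z_old)
            # Residual balancing: adapt rho and rescale the scaled dual variable
            if r_norm > mu * s_norm:
                rho *= tau; U /= tau
            elif s_norm > mu * r_norm:
                rho /= tau; U *= tau
            if max(r_norm, s_norm) < 1e-8:
                break

        print("relative fit error:", np.linalg.norm(Z - D) / np.linalg.norm(D))
        print("rank of iterate   :", np.linalg.matrix_rank(Z))

    Keeping the primal and dual residuals of comparable size is the general goal that any adaptive-penalty rule, including the one proposed in the paper, is designed to serve.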

    The influence and mechanism exploration of hydration environment on the stability of natural clay crude oil emulsion

    The study investigated the effects and mechanisms of clay content, emulsion water content, pH, and metal cations on clay-crude oil emulsions. The results indicate the following: 1) At a water content of 50 V/V%, montmorillonite can form emulsions with crude oil at different concentrations, with the highest stability observed at 5 wt% content. In contrast, chlorite, illite, and kaolinite cannot form emulsions at low concentrations. 2) Under acidic conditions, montmorillonite, illite, and chlorite cannot form emulsions with crude oil, or the emulsions are highly unstable; kaolinite, however, forms more stable emulsions under acidic conditions. In alkaline environments, emulsions formed by all clay minerals exhibit increased stability. 3) The order of effectiveness of different metal cations in reducing the stability of montmorillonite-crude oil emulsions is K+ > Na+ > Mg2+ > Ca2+, while for chlorite, illite, and kaolinite it is Mg2+ > Ca2+ > K+ > Na+. 4) The stability of clay-crude oil emulsions is governed by the arrangement of clay particles in water and by their dispersion capability in water, with the particle arrangement being the more significant of the two factors.

    Characterizing and Understanding Development of Social Computing Through DBLP: A Data-Driven Analysis

    During the past decades, the term 'social computing' has become a promising interdisciplinary area at the intersection of computer science and social science. In this work, we conduct a data-driven study to understand the development of social computing using data collected from the Digital Bibliography and Library Project (DBLP), a representative computer science bibliography website. We have observed a series of trends in the development of social computing, including the evolution of the number of publications, popular keywords, top venues, international collaborations, and research topics. Our findings will be helpful for researchers and practitioners working in relevant fields.
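    As a minimal sketch of one such trend statistic (publications per year whose title mentions the term), the snippet below streams a local copy of the DBLP dump. It assumes lxml, a dblp.xml/dblp.dtd pair in the working directory, and the public DBLP record tags; it is not the authors' pipeline.

        # Minimal sketch: count DBLP records per year whose title mentions
        # "social computing". Assumes a local dblp.xml next to its dblp.dtd
        # (load_dtd=True lets lxml resolve DBLP's character entities).
        from collections import Counter
        from lxml import etree

        RECORD_TAGS = {"article", "inproceedings", "proceedings", "book",
                       "incollection", "phdthesis", "mastersthesis", "www"}
        counts = Counter()

        for _, elem in etree.iterparse("dblp.xml", events=("end",), load_dtd=True):
            if elem.tag in RECORD_TAGS:
                t = elem.find("title")
                title = "".join(t.itertext()).lower() if t is not None else ""
                year = elem.findtext("year")
                if year and "social computing" in title:
                    counts[int(year)] += 1
                elem.clear()  # keep memory bounded while streaming the large dump

        for year in sorted(counts):
            print(year, counts[year])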

    First-line treatment with chemotherapy plus cetuximab in Chinese patients with recurrent and/or metastatic squamous cell carcinoma of the head and neck: Efficacy and safety results of the randomised, phase III CHANGE-2 trial.

    Background: The EXTREME regimen (chemotherapy [CT; cisplatin/carboplatin and 5-fluorouracil]) plus cetuximab is a standard-of-care first-line (1L) treatment for patients with recurrent and/or metastatic squamous cell carcinoma of the head and neck (R/M SCCHN), as supported by international guidelines. The phase III CHANGE-2 trial assessed the efficacy and safety of a modified CT regimen (with a reduced dose of both components) plus cetuximab versus CT alone for the 1L treatment of Chinese patients with R/M SCCHN. Methods: Patients were randomised to receive up to six cycles of CT plus cetuximab followed by cetuximab maintenance until progressive disease, or CT alone. The primary end-point was the progression-free survival (PFS) time assessed by the independent review committee (IRC). Results: Overall, 243 patients were randomised (164 to CT plus cetuximab; 79 to CT). The hazard ratios for PFS by IRC and overall survival (OS) were 0.57 (95% CI: 0.40–0.80; median: 5.5 versus 4.2 months) and 0.69 (95% CI: 0.50–0.93; median: 11.1 versus 8.9 months), respectively, in favour of CT plus cetuximab. The objective response rates (ORR) by IRC were 50.0% and 26.6% with CT plus cetuximab and CT treatment, respectively. Treatment-emergent adverse events of maximum grade 3 or 4 occurred in 61.3% (CT plus cetuximab) and 48.7% (CT) of patients. Conclusions: CHANGE-2 showed an improved median PFS, median OS and ORR with the addition of cetuximab to a modified platinum/5-fluorouracil regimen, with no new or unexpected safety findings, thereby confirming CT plus cetuximab as an effective and safe 1L treatment for Chinese patients with R/M SCCHN. Clinical trial registration number: NCT02383966.

    Belle II Pixel Detector Commissioning and Operational Experience


    Status of the BELLE II Pixel Detector

    The Belle II experiment at the super KEK B-factory (SuperKEKB) in Tsukuba, Japan, has been collecting e+e− collision data since March 2019. With SuperKEKB operating at a record-breaking luminosity of up to 4.7×10^34 cm^−2 s^−1, data corresponding to 424 fb^−1 have since been recorded. The Belle II VerteX Detector (VXD) is central to the Belle II detector and its physics program and plays a crucial role in reconstructing precise primary and decay vertices. It consists of the outer four-layer Silicon Vertex Detector (SVD), using double-sided silicon strips, and the inner two-layer PiXel Detector (PXD), based on the Depleted P-channel Field Effect Transistor (DePFET) technology. The PXD DePFET structure combines signal generation and amplification within pixels with a minimum pitch of (50×55) μm^2. A high gain and a high signal-to-noise ratio allow thinning the pixels to 75 μm while retaining a high pixel hit efficiency of about 99%. As a consequence, the material budget of the full detector is kept low at ≈0.21% X/X_0 per layer in the acceptance region. This also includes contributions from the control, Analog-to-Digital Converter (ADC), and data processing Application Specific Integrated Circuits (ASICs), as well as from cooling and support structures. This article presents the experience gained from four years of operating the PXD, the first full-scale detector employing the DePFET technology in high-energy physics. Overall, the PXD has met expectations. Operating in the intense SuperKEKB environment poses many challenges, which are also discussed. The current PXD system remains incomplete, with only 20 out of 40 modules installed. A full replacement has been constructed and is currently in its final testing stage before it is installed into Belle II during the ongoing long shutdown, which will last throughout 2023.

    Toward Better Practice of Covariate Adjustment in Analyzing Randomized Clinical Trials

    In randomized clinical trials, adjustments for baseline covariates at both design and analysis stages are highly encouraged by regulatory agencies. A recent trend is to use a model-assisted approach for covariate adjustment to gain credibility and efficiency while producing asymptotically valid inference even when the model is incorrect. In this article we present three considerations for better practice when model-assisted inference is applied to adjust for covariates under simple or covariate-adaptive randomized trials: (1) guaranteed efficiency gain: a model-assisted method should often gain but never hurt efficiency; (2) wide applicability: a valid procedure should be applicable, and preferably universally applicable, to all commonly used randomization schemes; (3) robust standard error: variance estimation should be robust to model misspecification and heteroscedasticity. To achieve these, we recommend a model-assisted estimator under an analysis-of-heterogeneous-covariance working model that includes all covariates utilized in randomization. Our conclusions are based on an asymptotic theory that provides a clear picture of how covariate-adaptive randomization and regression adjustment alter statistical efficiency. Our theory is more general than existing ones in terms of studying arbitrary functions of response means (including linear contrasts, ratios, and odds ratios), multiple arms, guaranteed efficiency gain, optimality, and universal applicability.
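    In the spirit of that recommendation (covariates entered centered and fully interacted with the arm indicators, paired with a heteroscedasticity-robust variance), the sketch below computes adjusted arm contrasts with an HC0 sandwich standard error on simulated placeholder data; it is not the paper's exact procedure or code.

        # Minimal sketch of a model-assisted adjusted estimate with a
        # heteroscedasticity-robust (sandwich) standard error: centered covariates
        # fully interacted with arm indicators. Simulated placeholder data.
        import numpy as np

        rng = np.random.default_rng(2)
        n, n_arms = 600, 3
        arm = rng.integers(0, n_arms, size=n)            # simple randomization stand-in
        X = rng.normal(size=(n, 2))                      # baseline covariates
        y = 0.5 * arm + X @ np.array([1.0, -0.5]) + rng.normal(size=n)

        Xc = X - X.mean(axis=0)                          # center covariates
        arm_ind = np.column_stack([(arm == a).astype(float) for a in range(1, n_arms)])
        D = np.column_stack([np.ones(n), arm_ind, Xc] +
                            [arm_ind[:, [a]] * Xc for a in range(n_arms - 1)])

        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        resid = y - D @ beta

        # HC0 sandwich variance: (D'D)^{-1} D' diag(e^2) D (D'D)^{-1}
        bread = np.linalg.inv(D.T @ D)
        meat = D.T @ (D * resid[:, None] ** 2)
        V = bread @ meat @ bread

        for a in range(1, n_arms):
            est, se = beta[a], np.sqrt(V[a, a])
            print(f"arm {a} vs arm 0: estimate {est:.3f}, robust SE {se:.3f}")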