
    Estimating correlation between multivariate longitudinal data in the presence of heterogeneity

    Background: Estimating correlation coefficients among outcomes is one of the most important analytical tasks in epidemiological and clinical research. The availability of multivariate longitudinal data presents a unique opportunity to assess the joint evolution of outcomes over time. The bivariate linear mixed model (BLMM) provides a versatile tool for assessing correlation. However, BLMMs often assume that all individuals are drawn from a single homogeneous population in which individual trajectories are distributed smoothly around the population average. Methods: Using longitudinal mean deviation (MD) and visual acuity (VA) from the Ocular Hypertension Treatment Study (OHTS), we demonstrated strategies to better understand the correlation between multivariate longitudinal outcomes in the presence of potential heterogeneity. Conditional correlation (i.e., marginal correlation given the random effects) was calculated to describe how the association between the longitudinal outcomes evolved over time within specific subpopulations. The impact of heterogeneity on correlation was also assessed using simulated data. Results: There was a significant positive correlation in both the random intercepts (ρ = 0.278, 95% CI: 0.121–0.420) and the random slopes (ρ = 0.579, 95% CI: 0.349–0.810) between longitudinal MD and VA, and the strength of the correlation increased steadily over time. However, conditional correlation and simulation studies revealed that the correlation was induced primarily by participants with rapidly deteriorating MD, who accounted for only a small fraction of the total sample. Conclusion: Conditional correlation given the random effects provides a robust estimate of the correlation between multivariate longitudinal outcomes in the presence of unobserved heterogeneity (NCT00000125).
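    To make the idea concrete, here is a minimal Python sketch (not the authors' OHTS analysis): it simulates bivariate longitudinal outcomes whose subject-level intercepts and slopes are correlated, then shows how the marginal correlation between the two outcomes grows over time; all sample sizes and correlation values are illustrative assumptions.

```python
# Minimal simulation sketch (illustrative, not the authors' OHTS analysis):
# bivariate longitudinal outcomes sharing correlated random intercepts/slopes.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_visits = 500, 6
times = np.arange(n_visits)

# Random-effects covariance: intercepts of the two outcomes correlate at 0.3,
# slopes at 0.6 (values chosen for illustration only).
sd = np.array([1.0, 0.2, 1.0, 0.2])          # [b0_y1, b1_y1, b0_y2, b1_y2]
corr = np.eye(4)
corr[0, 2] = corr[2, 0] = 0.3                 # intercept-intercept correlation
corr[1, 3] = corr[3, 1] = 0.6                 # slope-slope correlation
cov = np.outer(sd, sd) * corr
b = rng.multivariate_normal(np.zeros(4), cov, size=n_subjects)

# Subject-level trajectories plus independent residual noise.
y1 = b[:, [0]] + b[:, [1]] * times + rng.normal(0, 0.5, (n_subjects, n_visits))
y2 = b[:, [2]] + b[:, [3]] * times + rng.normal(0, 0.5, (n_subjects, n_visits))

# Marginal correlation between the two outcomes at each visit: it grows with
# time because the correlated slopes dominate the variance at later visits.
for t in range(n_visits):
    print(t, np.corrcoef(y1[:, t], y2[:, t])[0, 1].round(3))
```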

    Pre-Peak Deformation and Damage Features of Sandstone under Cyclic Loading

    In this paper, several sandstone specimens were prepared and subjected to uniaxial compression and cyclic loading. For each specimen, the loading segment of the stress-strain curve was fitted, and the peak slope of this segment was taken as the elastic modulus of the specimen in that cycle. Under cyclic loading, the elastic modulus of each specimen increased with the growing number of load cycles and then tended to stabilize; meanwhile, strain hardening was observed in all specimens. Moreover, the specimens were similar in the corresponding stress, although they varied in the corresponding strain. Within the same cycle, the tangent modulus of the loading phase was smaller than that of the unloading phase at the same stress. Finally, the damage variables of the sandstone specimens under cyclic loading were defined from an energy perspective, revealing that the damage variables grew logarithmically with the number of load cycles in the later stage.
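    A small Python sketch of the two computations described above, using hypothetical data arrays: the elastic modulus approximated as the slope of a linear fit to one loading segment (the paper takes the peak slope of the fitted segment), and a logarithmic fit of the damage variable against cycle number.

```python
# Sketch of the two computations described above (hypothetical data arrays):
# (1) elastic modulus from the loading segment of one cycle,
# (2) logarithmic growth of the damage variable with cycle number.
import numpy as np

def elastic_modulus(strain, stress):
    """Slope of a linear fit to one loading segment (stress vs. strain).
    Simplification: the paper uses the peak slope of the fitted segment."""
    slope, _intercept = np.polyfit(strain, stress, deg=1)
    return slope

# Hypothetical loading-segment readings (dimensionless strain, stress in MPa).
strain_seg = np.array([0.0005, 0.0008, 0.0011, 0.0014])
stress_seg = np.array([10.0, 18.0, 26.5, 35.0])
print(f"E ≈ {elastic_modulus(strain_seg, stress_seg) / 1000:.1f} GPa")

# Hypothetical damage variables for cycles 1..10 in the later loading stage.
cycles = np.arange(1, 11)
damage = np.array([0.02, 0.08, 0.12, 0.15, 0.17, 0.19, 0.20, 0.21, 0.22, 0.23])

# Fit D = a*ln(n) + b to check the reported logarithmic growth trend.
a, b = np.polyfit(np.log(cycles), damage, deg=1)
print(f"D ≈ {a:.3f}·ln(n) + {b:.3f}")
```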

    Genome-wide identification and evolution of ATP-binding cassette transporters in the ciliate Tetrahymena thermophila: A case of functional divergence in a multigene family

    Background: In eukaryotes, ABC transporters that utilize the energy of ATP hydrolysis to expel cellular substrates into the environment are responsible for most of the efflux from cells. Many members of the superfamily of ABC transporters have been linked with resistance to multiple drugs or toxins. Owing to their medical and toxicological importance, members of the ABC superfamily have been studied in several model organisms and warrant examination in newly sequenced genomes. Results: A total of 165 ABC transporter genes, constituting a highly expanded superfamily relative to its size in other eukaryotes, were identified in the macronuclear genome of the ciliate Tetrahymena thermophila. Based on ortholog comparisons, phylogenetic topologies and intron characterizations, each highly expanded ABC transporter family of T. thermophila was classified into several distinct groups, and hypotheses about their evolutionary relationships are presented. A comprehensive microarray analysis revealed divergent expression patterns among the members of the ABC transporter superfamily during different states of physiology and development. Many of the relatively recently formed duplicate pairs within individual ABC transporter families exhibit significantly different expression patterns. Further analysis showed that multiple mechanisms have led to the functional divergence that is responsible for the preservation of the duplicated genes. Conclusion: Gene duplications have resulted in an extensive expansion of the superfamily of ABC transporters in the Tetrahymena genome, making it the largest example of its kind reported in any organism to date. Multiple independent duplications and subsequent divergence contributed to the formation of different families of ABC transporter genes. Many of the members within a gene family exhibit different expression patterns. The combination of gene duplication followed by both sequence divergence and the acquisition of new expression patterns likely plays a role in the adaptation of Tetrahymena to its environment.
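    As an illustration of the expression-divergence idea (not the authors' pipeline), a short Python sketch: the similarity of a duplicated gene pair's expression profiles across conditions can be summarized by a Pearson correlation; the profile values below are hypothetical.

```python
# Illustrative sketch (not the authors' pipeline): quantify expression
# divergence between a duplicated gene pair as the Pearson correlation of
# their expression profiles across physiological/developmental conditions.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical normalized microarray expression profiles for one duplicate pair.
gene_a = np.array([8.1, 7.9, 6.5, 3.2, 3.0, 9.4])
gene_b = np.array([8.0, 8.2, 8.1, 7.8, 7.7, 8.3])

r, p = pearsonr(gene_a, gene_b)
print(f"expression correlation r = {r:.2f} (p = {p:.3f})")
# A low or non-significant r is consistent with divergent expression,
# one signature of functional divergence after duplication.
```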

    Hysteresis Characteristics of Brittle Rock Deformation under Constant Load Cyclic Loading and Unloading

    This paper mainly explores the deformation characteristics of limestone specimens under constant-load cyclic loading. For limestone specimens under uniaxial compression, the stress-strain curve can be divided into three stages: a compaction stage, an elastic stage and a sudden-failure stage. Under cyclic loading, the hysteresis loops on the stress-strain curve are long and thin, taking the shape of "toothpicks". The axial strain and the radial strain both change with the stress amplitude and the cycle number, but in different variation patterns. There is a stress-amplitude "threshold" for radial deformation, indicating that the radial deformation is more sensitive to stress amplitude than the axial deformation. Calculations show that the incremental deformation between successive peaks includes both plastic deformation and deformation recoverable after unloading, and that the recoverable deformation is positively correlated with the load amplitude of the cyclic loading.
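    A brief Python sketch of the peak-to-peak decomposition described above, with hypothetical strain readings: the increment of residual strain after unloading is taken as the plastic part, and the remainder of the peak-to-peak increment as the recoverable part.

```python
# Sketch of the decomposition described above (hypothetical strain readings):
# the incremental deformation between successive peaks is split into a plastic
# part (growth of residual strain after unloading) and a recoverable part.
import numpy as np

# Hypothetical axial strain (%) at each load peak and after the following unload.
strain_at_peak = np.array([0.210, 0.224, 0.233, 0.240])
strain_after_unload = np.array([0.060, 0.068, 0.073, 0.077])

plastic = np.diff(strain_after_unload)        # permanent increment per cycle
total_increment = np.diff(strain_at_peak)     # peak-to-peak increment
recoverable = total_increment - plastic       # increment of the recoverable part

for i, (p, r) in enumerate(zip(plastic, recoverable), start=2):
    print(f"cycle {i}: plastic Δε = {p:.3f}%, recoverable Δε = {r:.3f}%")
```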

    Measuring Overall Heterogeneity in Meta-Analyses: Application to CSF Biomarker Studies in Alzheimer’s Disease

    The interpretation of statistical inferences from meta-analyses depends on the degree of heterogeneity in the meta-analyses. Several new indices of heterogeneity in meta-analyses are proposed, and the variation/difference among these indices is assessed through a large simulation study. The proposed methods are applied to biomarkers of Alzheimer’s disease.
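    The abstract does not specify the proposed indices, so the sketch below only shows the standard baseline they would be compared against: Cochran's Q and Higgins' I² computed from hypothetical study effect sizes and variances.

```python
# Baseline heterogeneity measures (not the abstract's proposed indices):
# Cochran's Q and Higgins' I^2 for a fixed-effect meta-analysis.
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q and Higgins' I^2 (%) from study effects and their variances."""
    e = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)     # inverse-variance weights
    theta_fixed = np.sum(w * e) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (e - theta_fixed) ** 2)
    df = len(e) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical CSF-biomarker effect sizes (e.g., standardized mean differences).
effects = [0.80, 1.10, 0.65, 1.40, 0.95]
variances = [0.04, 0.06, 0.05, 0.09, 0.03]
q, i2 = q_and_i2(effects, variances)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```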

    Comparing statistical methods in assessing the prognostic effect of biomarker variability on time-to-event clinical outcomes

    BACKGROUND: In recent years there has been increasing interest in modeling the effect of early longitudinal biomarker data on future time-to-event or other outcomes. Investigators are sometimes also interested in whether the variability of biomarkers is independently predictive of clinical outcomes. In most applications this question is addressed via a two-stage approach in which summary statistics such as the variance are calculated in the first stage and then used as covariates to predict the clinical outcome in the second stage. The objective of this study is to compare the relative performance of various methods in estimating the effect of biomarker variability. METHODS: A joint model and four different two-stage approaches (naïve, landmark analysis, time-dependent Cox model, and regression calibration) were illustrated using data from a large multi-center randomized phase III trial, the Ocular Hypertension Treatment Study (OHTS), regarding the association between the variability of intraocular pressure (IOP) and the development of primary open-angle glaucoma (POAG). Model performance was also evaluated in terms of bias using data simulated from the joint model of longitudinal IOP and time to POAG. The simulation parameters were chosen based on the OHTS data, and the association between the longitudinal and survival data was introduced via underlying, unobserved, and error-free parameters including the subject-specific variance. RESULTS: In the OHTS data, joint modeling and the two-stage methods reached the consistent conclusion that IOP variability showed no significant association with the risk of POAG. In the simulated data with no association between IOP variability and time to POAG, all the two-stage methods (except the naïve approach) provided reliable estimates. When a moderate effect of IOP variability on POAG was imposed, all the two-stage methods underestimated the true association compared with joint modeling, while the model-based two-stage method (regression calibration) resulted in the least bias. CONCLUSION: Regression calibration and joint modeling are the preferred methods for assessing the effect of biomarker variability. Two-stage methods with sample-based measures should be used with caution unless there is a relatively long series of longitudinal measurements and/or a strong effect size (NCT00000125).
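    As a minimal sketch of the naïve two-stage approach only (not the joint model), the Python code below computes each subject's IOP sample variance and uses it as a baseline covariate in a Cox model via lifelines; the simulated data and column names are illustrative assumptions.

```python
# Sketch of the naive two-stage approach only (not the joint model):
# per-subject biomarker variance used as a baseline covariate in a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200

# Simulated inputs: 6 IOP readings per subject with a subject-specific spread,
# plus follow-up time and a POAG event indicator (no built-in association).
iop_sd = rng.uniform(0.5, 4.0, n)
iop = rng.normal(24.0, iop_sd[:, None], size=(n, 6))
time_to_poag = rng.exponential(8.0, n)
poag = rng.integers(0, 2, n)

# Stage 1: per-subject sample variance of the IOP series ("variability" summary).
iop_variance = iop.var(axis=1, ddof=1)

# Stage 2: Cox proportional hazards model with the stage-1 summary as covariate.
df = pd.DataFrame({"time_to_poag": time_to_poag, "poag": poag,
                   "iop_variance": iop_variance})
cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_poag", event_col="poag")
cph.print_summary()   # with no built-in association, the coefficient should be near zero
```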

    Deep Industrial Image Anomaly Detection: A Survey

    The recent rapid development of deep learning has laid a milestone in industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques from the perspectives of neural network architectures, levels of supervision, loss functions, metrics and datasets. In addition, we extract a new setting from industrial manufacturing and review the current IAD approaches under our proposed new setting. Moreover, we highlight several open challenges for image anomaly detection. The merits and downsides of representative network architectures under varying levels of supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection.

    Improved Hybrid Layered Image Compression using Deep Learning and Traditional Codecs

    Recently, deep learning-based methods have been applied to image compression and have achieved many promising results. In this paper, we propose an improved hybrid layered image compression framework that combines deep learning and traditional image codecs. At the encoder, we first use a convolutional neural network (CNN) to obtain a compact representation of the input image, which is losslessly encoded by the FLIF codec as the base layer of the bit stream. A coarse reconstruction of the input is obtained by another CNN from the reconstructed compact representation. The residual between the input and the coarse reconstruction is then obtained and encoded by the H.265/HEVC-based BPG codec as the enhancement layer of the bit stream. Experimental results on the Kodak and Tecnick datasets show that the proposed scheme outperforms the state-of-the-art deep learning-based layered coding scheme and traditional codecs, including BPG, in both PSNR and MS-SSIM metrics across a wide range of bit rates when the images are coded in the RGB444 domain. (Comment: Submitted to Signal Processing: Image Communication)
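    A schematic PyTorch-style sketch of the layered pipeline described above; the toy networks are not the paper's architectures, and encode_flif/encode_bpg are hypothetical placeholders standing in for calls to the external FLIF and BPG codecs.

```python
# Schematic sketch of the layered pipeline (not the paper's exact networks).
# encode_flif / encode_bpg are hypothetical placeholders for the external codecs.
import torch
import torch.nn as nn

class CompactEncoder(nn.Module):
    """Toy CNN producing a small compact representation of the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, 5, stride=2, padding=2),
        )
    def forward(self, x):
        return self.net(x)

class CoarseDecoder(nn.Module):
    """Toy CNN reconstructing a coarse image from the compact representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

def compress(image, encoder, decoder, encode_flif, encode_bpg):
    """Base layer: lossless coding of the compact representation.
    Enhancement layer: lossy coding of the residual."""
    z = encoder(image)                        # compact representation
    base_layer = encode_flif(z)               # losslessly coded base layer (placeholder)
    coarse = decoder(z)                       # coarse reconstruction of the input
    residual = image - coarse                 # what the base layer misses
    enhancement_layer = encode_bpg(residual)  # lossy residual layer (placeholder)
    return base_layer, enhancement_layer
```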

    IM-IAD: Industrial Image Anomaly Detection Benchmark in Manufacturing

    Image anomaly detection (IAD) is an emerging and vital computer vision task in industrial manufacturing (IM). Many advanced algorithms have been published recently, but their performance deviates greatly. We believe that the lack of actual IM settings most probably hinders the development and use of these methods in real-world applications. To the best of our knowledge, IAD methods have not been evaluated systematically, which makes it difficult for researchers to analyze them because they are designed for different or special cases. To solve this problem, we first propose a uniform IM setting to assess how well these algorithms perform, covering several aspects, i.e., various levels of supervision (unsupervised vs. semi-supervised), few-shot learning, continual learning, noisy labels, memory usage, and inference speed. Moreover, we build a comprehensive image anomaly detection benchmark (IM-IAD) that includes 16 algorithms on 7 mainstream datasets under uniform settings. Our extensive experiments (17,017 in total) provide in-depth insights for IAD algorithm redesign or selection under the IM setting. The proposed IM-IAD benchmark also highlights challenges and directions for future work. To foster reproducibility and accessibility, the source code of IM-IAD is available at https://github.com/M-3LAB/IM-IAD.
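    A minimal sketch of what a uniform evaluation loop could look like (placeholder names, not the IM-IAD API): each algorithm assigns anomaly scores to every test image, and image-level AUROC is tallied per algorithm-dataset pair.

```python
# Minimal sketch of a uniform evaluation loop (placeholder names, not the
# IM-IAD API): image-level AUROC tallied per (algorithm, dataset) pair.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(algorithms, datasets):
    """algorithms: {name: score_fn(images) -> anomaly scores};
    datasets: {name: (test_images, binary_labels)}."""
    results = {}
    for algo_name, score_fn in algorithms.items():
        for data_name, (images, labels) in datasets.items():
            scores = score_fn(images)                        # higher = more anomalous
            results[(algo_name, data_name)] = roc_auc_score(labels, scores)
    return results

# Toy usage with a random "scorer" and random labels.
rng = np.random.default_rng(0)
datasets = {"toy": (rng.normal(size=(100, 3, 64, 64)), rng.integers(0, 2, 100))}
algorithms = {"random_baseline": lambda imgs: rng.uniform(size=len(imgs))}
print(evaluate(algorithms, datasets))
```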