
    Cox Model Analysis with the Dependently Left Truncated Data

    A truncated sample consists of realizations of a pair of random variables (L, T) subject to the constraint that L ≤ T. The main interest with a truncated sample is to estimate the marginal distributions of L and T. Many studies have been carried out under the assumption that L and T are independent. We introduce a new way to specify a Cox model for a truncated sample, assuming that the truncation time is a predictor of T, which induces dependence between L and T. We develop an algorithm to obtain the adjusted risk sets and use the Kaplan-Meier estimator to estimate the marginal distribution of L. We further extend our method to the more practical situation in which the Cox model includes other covariates associated with T. Simulation studies have been conducted to investigate the performance of the Cox model and the new estimators.
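
    The risk-set adjustment is easiest to see in the classical (quasi-independent) case: at each event time, only subjects whose truncation time has already passed are counted as at risk. The sketch below is a minimal NumPy illustration of that product-limit construction; it does not implement the paper's Cox-model-based adjustment for dependent truncation, and the toy data-generating mechanism is purely illustrative.

```python
import numpy as np

def truncation_adjusted_km(L, T, event):
    """Product-limit estimate of P(T > t) from left-truncated data.

    At each distinct event time t, the risk set contains only subjects
    whose truncation time satisfies L_i <= t <= T_i.
    """
    L, T, event = map(np.asarray, (L, T, event))
    times = np.sort(np.unique(T[event == 1]))
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum((L <= t) & (T >= t))   # adjusted risk set
        d = np.sum((T == t) & (event == 1))     # events at t
        if at_risk > 0:
            s *= 1.0 - d / at_risk
        surv.append(s)
    return times, np.array(surv)

# toy left-truncated data: (L, T) pairs satisfying L <= T by construction
rng = np.random.default_rng(0)
L = rng.uniform(0, 1, 500)
T = L + rng.exponential(1.0, 500)
event = np.ones(500, dtype=int)      # no censoring in this toy example
times, surv = truncation_adjusted_km(L, T, event)
print(times[:5], surv[:5])
```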

    Impact of dependent left truncation in semiparametric competing risks methods: A simulation study

    In this study, we investigated the robustness of methods that account for independent left truncation when applied to competing risks settings with dependent left truncation. We specifically focused on methods for the proportional cause-specific hazards model and the Fine–Gray model. Simulation experiments showed that these methods are not, in general, robust against dependent left truncation. The magnitude of the bias depended on the strength of the association between the left truncation and failure times, the effect of the covariate on the competing cause of failure, and the baseline hazard of the left truncation time.
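
    As a rough sketch of the kind of experiment such a study involves, the snippet below simulates dependent left truncation through a Gaussian copula and fits a cause-specific Cox model that treats the entry time as if truncation were independent (using lifelines' CoxPHFitter with its entry_col argument, assumed available in a recent lifelines version). The data-generating mechanism and parameter values are illustrative assumptions, not the design used by the authors.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, rho, beta = 20000, 0.6, 0.5          # rho controls the L-T dependence (assumed setup)

Z = rng.binomial(1, 0.5, n)             # binary covariate
# correlated uniforms via a Gaussian copula -> dependent truncation and failure times
u = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)
U1, U2 = norm.cdf(u[:, 0]), norm.cdf(u[:, 1])
T1 = -np.log(U1) / np.exp(beta * Z)     # cause-1 failure time, proportional hazards in Z
T2 = rng.exponential(2.0, n)            # competing cause of failure
L = 2.0 * U2                            # truncation time, correlated with T1

T = np.minimum(T1, T2)
cause = np.where(T1 <= T2, 1, 2)
keep = L < T                            # left truncation: only pairs with L < T are observed
df = pd.DataFrame({"entry": L[keep], "time": T[keep], "Z": Z[keep],
                   "event": (cause[keep] == 1).astype(int)})  # cause-specific: censor cause 2

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", entry_col="entry")
# the estimate can drift from the true beta when truncation is dependent
print(cph.params_["Z"], "vs true", beta)
```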

    Semiparametric methods for survival analysis of case‐control data subject to dependent censoring

    Case‐control sampling can be an efficient and cost‐saving study design, wherein subjects are selected into the study based on the outcome of interest. It was established long ago that proportional hazards regression can be applied to case‐control data. However, each of the various estimation techniques available assumes that failure times are independently censored. Since independent censoring is often violated in observational studies, we propose methods for Cox regression analysis of survival data obtained through case‐control sampling but subject to dependent censoring. The proposed methods are based on weighted estimating equations, with separate inverse weights used to account for the case‐control sampling and to correct for dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. Finite‐sample properties of the proposed estimators are examined through simulation studies. The methods are illustrated through an analysis of pre‐transplant mortality among end‐stage liver disease patients obtained from a national organ failure registry. The Canadian Journal of Statistics 42: 365–383; 2014 © 2014 Statistical Society of Canada. Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/108283/1/cjs11218-sm-0001-SuppData_S1.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/108283/2/cjs11218.pd
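
    A minimal sketch of the double-weighting idea is given below, assuming the lifelines library: known case-control sampling probabilities supply one set of inverse weights, a reverse Kaplan-Meier fit of the censoring distribution supplies rough inverse-probability-of-censoring weights, and the product is passed to a weighted Cox fit. This is a simplification of the proposed estimating equations (the paper's censoring weights are covariate-dependent), and all variable names and parameter values are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
time = rng.exponential(1 / np.exp(0.7 * x))          # failure times, true coefficient 0.7
cens = rng.exponential(1.5, n)                       # independent censoring here, just to
obs = np.minimum(time, cens)                         # exercise the weighting machinery
event = (time <= cens).astype(int)
samp_prob = np.where(event == 1, 1.0, 0.3)           # cases always sampled, controls subsampled
sampled = rng.uniform(size=n) < samp_prob
df = pd.DataFrame({"time": obs, "event": event, "x": x, "samp_prob": samp_prob})[sampled]

# censoring weights from a reverse Kaplan-Meier fit (marginal, covariate-free simplification)
kmf_c = KaplanMeierFitter()
kmf_c.fit(df["time"], event_observed=1 - df["event"])
G = kmf_c.survival_function_at_times(df["time"]).to_numpy()
ipcw = np.where(df["event"] == 1, 1.0 / np.clip(G, 1e-8, None), 1.0)

# combined weight: inverse sampling probability times inverse censoring probability
df["w"] = ipcw / df["samp_prob"]

cph = CoxPHFitter()
cph.fit(df[["time", "event", "x", "w"]], duration_col="time", event_col="event",
        weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```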

    In Vitro Investigation on Therapeutic Potential of Juglone, a Naphthoquinone from Walnuts against Pancreatic Cancer

    Juglone, a naphthoquinone found in the Juglandaceae family, which includes black walnut, European walnut, and butternut, possesses various biological activities. The anti-cancer properties of juglone have been reported; however, the effect of juglone in pancreatic cancer (PC) has not yet been elucidated. PC is an aggressive, lethal, highly metastatic disease associated with poor prognosis and a high mortality rate. PC is usually diagnosed at an advanced stage, and chemotherapy is provided as the first line of treatment. The de novo chemoresistance that develops with chemotherapeutic treatment creates a critical need to identify novel therapeutic agents that effectively target the disease. The effects of juglone on PC cell proliferation, the level of reactive oxygen species (ROS) production, and the expression of various oncogenic signal transduction molecules were investigated in MIA PaCa-2 pancreatic carcinoma cells. The major findings indicate that treatment with juglone dose-dependently suppressed the in vitro proliferation of rapidly dividing human PC cells and induced cell death, with an IC50 value of 5 μM at 24 h. The long-term colony-forming ability of PC cells was also significantly inhibited. The molecular mechanisms behind juglone-induced apoptosis of PC cells involved activation of caspase-3, cleavage of PARP, upregulation of Bax, downregulation of Akt, ERK, HER-2, Cox-2, and Bcl-2, and very high production of ROS leading to chromatin condensation, DNA damage, and cell death. Changes in the morphological features of cells treated with juglone were examined by confocal microscopy using Hoechst staining, which revealed apoptotic features in treated cells. The results also revealed the anti-angiogenic and anti-metastatic potential of juglone. PC cell migration and invasion were significantly reduced with juglone treatment, and the potential of endothelial cells to form tubes was also limited when treated with juglone. Key angiogenic regulators such as HIF-1α and VEGF were also downregulated with juglone treatment. Taken together, our data suggest that the ROS-inducing agent juglone could provide a novel therapeutic approach for PC treatment.

    Utilization of research technologies within a local community hospital in Ann Arbor

    Technology has the ability to change the way clinical trials are conducted. Technology utilization has expanded into research in the form of handheld smartphones, wearables, and social media. This project explored such technologies and assessed which of them are being utilized at a community hospital. A survey was designed, developed, and disseminated to principal investigators and co-investigators of research within the hospital. The results showed that few of the technologies included in the assessment are being utilized by researchers at the hospital; the most widely used category is smartphone technology. This research could contribute to broader knowledge about the utilization of research technologies and could help the operational directors of research within the community hospital identify which technologies are most useful. It could also aid in assessing technology utilization over time within the same hospital.

    Optimal non-linear transformations for large scale structure statistics

    Recently, several studies proposed non-linear transformations, such as a logarithmic or Gaussianization transformation, as efficient tools to recapture information about the (Gaussian) initial conditions. During non-linear evolution, part of the cosmologically relevant information leaks out of the second moment of the distribution. This information is accessible only through complex higher-order moments or, in the worst case, becomes inaccessible to the hierarchy. The focus of this work is to investigate these transformations in the framework of Fisher information using cosmological perturbation theory of the matter field with Gaussian initial conditions. We show that at each order in perturbation theory there is a polynomial of corresponding order exhausting the information on a given parameter. This polynomial can be interpreted as the Taylor expansion of the maximally efficient "sufficient" observable in the non-linear regime. We determine this maximally efficient observable explicitly for local transformations. Remarkably, the optimal transform is essentially the simple power transform with an exponent related to the slope of the power spectrum; when this slope is -1, it is indistinguishable from the logarithmic transform. This transform Gaussianizes the distribution and recovers the linear density contrast. Thus a direct connection is revealed between undoing the non-linear dynamics and efficiently capturing Fisher information. Our analytical results were compared with measurements from the Millennium Simulation density field. We found that our transforms remain very close to optimal even in the deeply non-linear regime with σ² ~ 10.
    Comment: 11 pages, matches version accepted for publication in MNRAS
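
    A toy illustration of the Gaussianizing effect discussed above (not the paper's Fisher-information derivation): apply Box-Cox style power transforms to a lognormal stand-in for the non-linear density field and watch the skewness shrink as the exponent approaches zero, where the transform reduces to log(1 + δ). The field construction and parameter choices are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(3)

# toy "non-linear" density field: lognormal 1 + delta with mean 1 and variance sigma^2
sigma2 = 1.0
g = rng.normal(0.0, np.sqrt(np.log(1 + sigma2)), size=100_000)
one_plus_delta = np.exp(g - 0.5 * np.log(1 + sigma2))

def power_transform(field, p):
    """Box-Cox style power transform; p -> 0 recovers log(1 + delta)."""
    return np.log(field) if p == 0 else (field**p - 1.0) / p

for p in (1.0, 0.5, 0.25, 0.0):
    y = power_transform(one_plus_delta, p)
    print(f"p = {p:4.2f}  skewness = {skew(y):+.3f}")
# the skewness shrinks toward 0 as p -> 0: the log transform Gaussianizes this toy field
```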

    Mean Field description of and propagation of chaos in recurrent multipopulation networks of Hodgkin-Huxley and Fitzhugh-Nagumo neurons

    We derive the mean-field equations arising as the limit of a network of interacting spiking neurons as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law, which depends only on the population they belong to, we prove that a propagation-of-chaos phenomenon takes place, namely that in the mean-field limit any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is the solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations, or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of these equations, i.e. the existence and uniqueness of a solution. We also show the results of some preliminary numerical experiments indicating that the mean-field equations are a good representation of the mean activity of a finite-size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g., a bifurcation analysis.
    Comment: 55 pages, 9 figures
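
    As a purely illustrative companion to the preliminary experiments mentioned above (not the authors' setup), the sketch below runs an Euler-Maruyama simulation of N noisy FitzHugh-Nagumo neurons with electrical coupling to the population mean and random maximal conductances, then compares two independent runs of the empirical mean voltage. If the empirical mean converges to a deterministic mean-field trajectory, the discrepancy between the runs should shrink as N grows. All parameter values are toy assumptions.

```python
import numpy as np

def simulate_fhn_network(N, T=100.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of N noisy FitzHugh-Nagumo neurons with
    all-to-all electrical (gap-junction) coupling to the population mean."""
    rng = np.random.default_rng(seed)
    a, b, eps, I_ext, sigma = 0.7, 0.8, 0.08, 0.0, 0.3   # toy parameters, not the paper's
    g = rng.normal(0.5, 0.1, size=N)                     # random maximal conductances
    v = rng.normal(0.0, 0.5, size=N)                     # i.i.d. initial conditions
    w = rng.normal(0.0, 0.5, size=N)
    mean_v = np.empty(int(T / dt))
    for k in range(mean_v.size):
        coupling = g * (v.mean() - v)                    # electrical coupling term
        dv = v - v**3 / 3.0 - w + I_ext + coupling
        dw = eps * (v + a - b * w)
        v = v + dt * dv + sigma * np.sqrt(dt) * rng.normal(size=N)
        w = w + dt * dw
        mean_v[k] = v.mean()
    return mean_v

# two independent finite-size runs tend to agree better as N grows, consistent with the
# empirical mean approaching a deterministic mean-field trajectory
for N in (10, 100, 1000):
    d = np.abs(simulate_fhn_network(N, seed=1) - simulate_fhn_network(N, seed=2))
    print(f"N = {N:4d}   max |difference between runs| = {d.max():.3f}")
```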