
    Inference for a General Class of Models for Recurrent Events with application to cancer data

    Survival analysis arises when we are interested in studying the statistical properties of a variable that describes the time until a single event occurs. In some situations we may observe that the event of interest occurs repeatedly in the same individual, as when a patient diagnosed with cancer relapses over time or when a person is repeatedly readmitted to a hospital. In this case we speak of survival analysis with recurrent events. The recurrent nature of the events makes it necessary to use techniques other than those used to analyze survival times for a single event. In this dissertation we deal with this type of analysis, mainly motivated by two studies in cancer research that were created specially for this work. One concerns hospital readmissions in patients diagnosed with colorectal cancer, while the other concerns patients diagnosed with non-Hodgkin's lymphoma. The latter study is especially relevant because we include information about the effect of treatment after relapses, and some authors have stated the need for a specific model for relapsing patients in cancer settings.

    Our first contribution to univariate analysis is a method to construct confidence intervals for the median survival time in recurrent event settings. Two approaches are developed. One is based on the asymptotic variances derived from two existing estimators of the survival function, while the other uses bootstrap techniques. The latter approach is useful because one of the estimators does not yet have a closed form for its variance. The new contribution of this work is the examination of how to bootstrap in the presence of recurrent event data arising from a sum-quota accrual scheme with an informative right-censoring mechanism. Weak convergence is proved, and asymptotic confidence intervals are built from this result.

    Multivariate analysis, on the other hand, addresses the problem of how to incorporate more than one covariate into the analysis. In recurrent event settings we also need to take into account that, apart from covariates, the heterogeneity, the number of occurrences and, especially, the effect of interventions after reoccurrences may modify the probability of observing a new event in a patient. This last point is important because it has not yet been considered in biomedical studies. To address it, we base our work on a new model for recurrent events proposed by Peña and Hollander (2004). Our contribution here is to accommodate cancer relapses within this model, in which the effect of interventions is represented by an effective age process acting on the baseline hazard function. We call this the dynamic cancer model.

    We also address the problem of estimating the parameters of the general class of models for recurrent events proposed by Peña and Hollander, of which the dynamic cancer model may be seen as a special case. Two general approaches are developed. The first is based on semiparametric inference, where the baseline hazard function is specified nonparametrically and the EM algorithm is used. The second is a penalized likelihood approach, in which two strategies are adopted. One penalizes the partial likelihood, with the penalization bearing on the regression coefficients. The other penalizes the full likelihood and gives a nonparametric estimate of the baseline hazard function using a continuous estimator, with the solution approximated by splines. The main advantage of this method is that we can easily obtain a smooth estimate of the hazard function as well as an estimate of the variance of the frailty variance, which is not possible with the other approaches; it also has a considerably lower computational cost. Results obtained with the dynamic cancer model on real data sets indicate that the flexibility of this method provides a safeguard for analyzing data in which patients relapse over time and interventions are performed after tumoral reoccurrences.

    Computation is another important contribution of this work to recurrent event settings. We have developed three R packages, survrec, gcmrec and frailtypack, available from CRAN (http://www.r-project.org/). These packages allow users, respectively, to compute the median survival time and its confidence intervals, to estimate the parameters of the Peña and Hollander model (in particular the dynamic cancer model) using the EM algorithm, and to estimate these parameters using the penalized approach.
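    As a rough illustration of the univariate contribution, the percentile bootstrap behind the median survival confidence intervals can be sketched in a few lines. This is only a generic subject-level bootstrap on hypothetical, fully observed gap times; the thesis's actual procedure accounts for the sum-quota accrual scheme and informative censoring, and the function and data below are illustrative, not the survrec implementation.

```python
import random
import statistics

def bootstrap_median_ci(gap_times_by_subject, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the median inter-event (gap) time.

    Subjects (not individual gaps) are resampled, so the within-subject
    correlation of recurrent gap times is preserved."""
    rng = random.Random(seed)
    subjects = list(gap_times_by_subject)
    medians = []
    for _ in range(n_boot):
        resample = [rng.choice(subjects) for _ in subjects]
        pooled = [g for s in resample for g in gap_times_by_subject[s]]
        medians.append(statistics.median(pooled))
    medians.sort()
    return (medians[int(alpha / 2 * n_boot)],
            medians[int((1 - alpha / 2) * n_boot) - 1])

# Hypothetical readmission gap times (days) for five patients
gaps = {
    "p1": [30, 45, 60],
    "p2": [120],
    "p3": [15, 20, 25, 35],
    "p4": [90, 100],
    "p5": [50, 55],
}
lo, hi = bootstrap_median_ci(gaps)
```

    Resampling whole subjects rather than pooled gap times is the standard way to respect the clustered structure of recurrent event data.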


    Distributed lighting monitoring: an affordable proposal

    In theaters and the filmmaking industry, video streams, images, audio streams and scalar data are commonly used. In these fields, one of the most important magnitudes to be collected and controlled is the light intensity at different spots of a scene, so it is extremely important to be able to deploy a network of light sensors, usually integrated in a more general Wireless Multimedia Sensor Network (WMSN). If many light measurements have to be acquired, the simpler and cheaper the sensor, the more affordable the WMSN will be. In this paper we propose using a set of very cheap light sensors (photodiodes) and correcting their measurements spectrally and directionally by mathematical methods. A real test of the proposed solution has been carried out, obtaining quite accurate light measurements. Test results are presented throughout the paper. Telefonica Chair "Intelligence in Networks" of the University of Seville (Spain).
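    The spectral correction idea can be sketched as a one-parameter least-squares calibration of raw photodiode output against a reference luxmeter. The calibration pairs below are invented for illustration; the paper's actual correction also handles directional effects.

```python
def correction_factor(photodiode, luxmeter):
    """Scalar k minimizing sum((k * d - l)**2) over calibration pairs,
    which gives k = sum(d * l) / sum(d * d)."""
    num = sum(d * l for d, l in zip(photodiode, luxmeter))
    den = sum(d * d for d in photodiode)
    return num / den

# Hypothetical calibration pairs: raw photodiode counts vs. reference lux
raw = [100.0, 200.0, 400.0]
ref = [150.0, 310.0, 590.0]
k = correction_factor(raw, ref)
corrected = [k * d for d in raw]  # corrected photodiode estimates, in lux
```

    Once k is fixed at calibration time, each cheap sensor only needs one multiplication per reading, which is what keeps the measuring nodes simple.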

    Enhanced manufacturing storage management using data mining prediction techniques

    Efficient storage management is a key issue for reducing costs in the manufacturing process, and the first step in accomplishing this task is to have good estimates of the consumption of every storage component. Two main approaches are possible for making accurate consumption estimates: using past utilization values (time series) and/or considering other external factors that affect spending rates. Time series forecasting is the most common approach, because the causes affecting consumption are not always clear. Several classical methods have been used extensively, mainly ARIMA models. As an alternative, this paper proposes prediction techniques from the data mining realm. The use of consumption prediction algorithms clearly increases storage management efficiency, and predictors based on data mining can offer enhanced solutions in many cases. Telefónica, through the “Cátedra de Telefónica Inteligencia en la Red”. Paloma Luna Garrido.
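    To give a flavor of what a data-mining predictor adds over a purely classical smoother, the sketch below contrasts a moving average with a tiny nearest-neighbor predictor over lagged consumption vectors. The demand series, lag and neighbor counts are hypothetical; a real deployment would use a proper data mining toolkit and out-of-sample validation.

```python
def moving_average_forecast(series, window=3):
    """Classical baseline: average of the last `window` observations."""
    return sum(series[-window:]) / window

def knn_lag_forecast(series, lag=3, k=2):
    """One-step forecast: find the k historical lag-vectors closest to the
    most recent one and average their successors (a minimal
    data-mining-style predictor; parameters are illustrative)."""
    target = series[-lag:]
    candidates = []
    for i in range(len(series) - lag):
        window = series[i:i + lag]
        dist = sum((a - b) ** 2 for a, b in zip(window, target))
        candidates.append((dist, series[i + lag]))
    candidates.sort()
    nearest = [succ for _, succ in candidates[:k]]
    return sum(nearest) / len(nearest)

# Hypothetical monthly consumption of one storage component
demand = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16]
print(moving_average_forecast(demand))  # 15.0
print(knn_lag_forecast(demand))         # 15.0
```

    On this toy series both predictors agree; the nearest-neighbor scheme starts to pay off when consumption shows repeating nonlinear patterns that a plain average smooths away.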

    Low cost multimedia sensor networks for obtaining lighting maps

    In many applications, video streams, images, audio streams and scalar data are commonly used. In these fields, one of the most important magnitudes to be collected and controlled is the light intensity at different spots, so it is extremely important to be able to deploy a network of light sensors, usually integrated in a more general Wireless Multimedia Sensor Network (WMSN). Light control systems have increasing applications in many places such as streets, roads, buildings and theaters. In these situations, having a dense grid of sensing spots significantly enhances measuring precision and control performance. When a great number of measuring spots is required, the cost of the sensor becomes a very important concern. In this paper the use of very low cost light sensors is proposed, and it is shown how to overcome their limited performance by directionally correcting their results. A correction factor is derived for several lighting conditions. The proposed method is first applied to measure light at a single spot. Additionally, a prototype sensor network is employed to draw the lighting map of a surface. Finally, the sensor grid is employed to estimate the position and power of a set of light sources in a certain region of interest (street, building, …). These three applications show that using low cost sensors instead of luxmeters is a feasible approach to estimating illuminance levels in a room and deriving light source maps. The errors obtained when measuring spot illuminance or estimating lamp emittances are quite acceptable in many practical applications. Telefonica Chair "Intelligence in Networks" of the University of Seville (Spain).
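    The lamp-power estimation application can be sketched under an idealized inverse-square propagation model. The positions, readings and single-lamp assumption are illustrative; estimation over a real sensor grid with several sources and noise is necessarily richer.

```python
def estimate_lamp_power(sensor_positions, readings, lamp_pos):
    """Least-squares estimate of a single lamp's power P under the ideal
    model E_i = P / d_i**2: minimizing sum((P / d_i**2 - E_i)**2) gives
    P = sum(E_i / d_i**2) / sum(1 / d_i**4)."""
    num = den = 0.0
    for (x, y), e in zip(sensor_positions, readings):
        d2 = (x - lamp_pos[0]) ** 2 + (y - lamp_pos[1]) ** 2
        num += e / d2
        den += 1.0 / d2 ** 2
    return num / den

# Synthetic noise-free grid generated from a lamp of power 1000 at (2, 2)
lamp = (2.0, 2.0)
grid = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0), (2.0, 0.0)]
readings = [1000.0 / ((x - lamp[0]) ** 2 + (y - lamp[1]) ** 2) for x, y in grid]
p_hat = estimate_lamp_power(grid, readings, lamp)  # recovers 1000.0
```

    With noisy readings the same closed-form least-squares estimate averages out sensor errors, which is exactly why a dense grid of cheap sensors can substitute for a few expensive luxmeters.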

    Eco-Holonic 4.0 Circular Business Model to Conceptualize Sustainable Value Chain Towards Digital Transition

    The purpose of this paper is to conceptualize a circular business model based on an Eco-Holonic Architecture, through the integration of circular economy and holonic principles. A conceptual model is developed to manage the complexity of integrating circular economy principles, digital transformation, and tools and frameworks for sustainability into business models. The proposed architecture is multilevel and multiscale in order to achieve the instantiation of the sustainable value chain in any territory. The architecture promotes the incorporation of circular economy and holonic principles into new circular business models. This integrated perspective on business models can support the design and upgrading of manufacturing companies in their respective industrial sectors. The proposed conceptual model is based on activity theory, which considers the interactions between technical and social systems and allows mitigation of the metabolic rift that exists between natural and social metabolism. This study contributes to the existing literature on circular economy, circular business models and activity theory by considering holonic paradigm concerns, which have not been explored yet. This research also offers a unique holonic architecture of a circular business model by considering different levels, relationships, dynamism and contextualization (territory) aspects.

    Nanotecnología y nanoquímica

    Issue devoted to chemical and environmental engineering. The steady improvement of products in recent times is a consequence of research and of the development of new techniques that enable innovations complementary to those obtained through design. Nanotechnology is a field of knowledge that makes it possible to develop new products derived from an understanding of properties and processes at the nanometric scale. This paper presents an overview of this technology from the perspective of its application to the chemical and environmental sector, a discipline known as nanochemistry.

    A flexible count data model to fit the wide diversity of expression profiles arising from extensively replicated RNA-seq experiments

    Background: High-throughput RNA sequencing (RNA-seq) offers unprecedented power to capture the real dynamics of gene expression. Experimental designs with extensive biological replication present a unique opportunity to exploit this feature and distinguish expression profiles with higher resolution. RNA-seq data analysis methods so far have been mostly applied to data sets with few replicates, and their default settings try to provide the best performance under this constraint. These methods are based on two well-known count data distributions: the Poisson and the negative binomial. The way to properly calibrate them with large RNA-seq data sets is not trivial for the non-expert bioinformatics user. Results: Here we show that expression profiles produced by extensively replicated RNA-seq experiments lead to a rich diversity of count data distributions beyond the Poisson and the negative binomial, such as the Poisson-Inverse Gaussian or the Pólya-Aeppli, which can be captured by a more general family of count data distributions called the Poisson-Tweedie. The flexibility of the Poisson-Tweedie family enables a direct fitting of emerging features of large expression profiles, such as heavy tails or zero-inflation, without the need to alter a single configuration parameter. We provide a software package for R called tweeDEseq implementing a new test for differential expression based on the Poisson-Tweedie family. Using simulations on synthetic and real RNA-seq data, we show that tweeDEseq yields P-values that are equally or more accurate than those of competing methods under different configuration parameters. By surveying the tiny fraction of sex-specific gene expression changes in human lymphoblastoid cell lines, we also show that tweeDEseq accurately detects differentially expressed genes in a real large RNA-seq data set, with improved performance and reproducibility over the previously compared methodologies. Finally, we compared the results with those obtained from microarrays in order to check reproducibility. Conclusions: RNA-seq data with many replicates lead to a handful of count data distributions which can be accurately estimated with the statistical model illustrated in this paper. This method provides a better fit to the underlying biological variability; this may be critical when comparing groups of RNA-seq samples with markedly different count data distributions. The tweeDEseq package forms part of the Bioconductor project and is available for download at http://www.bioconductor.org.
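    The kind of overdispersion the Poisson-Tweedie family is designed to absorb can be illustrated by simulating a Poisson-gamma mixture, i.e. a negative binomial, one member of that family. The mean, dispersion and sample size below are arbitrary; this is a conceptual sketch, not the tweeDEseq fitting code.

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's multiplication method for a Poisson draw; fine for moderate lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def negbin_counts(rng, mean, dispersion, n):
    """Poisson-gamma mixture: lambda ~ Gamma(shape=1/dispersion,
    scale=mean*dispersion), so Var = mean + dispersion * mean**2."""
    shape = 1.0 / dispersion
    scale = mean * dispersion
    return [poisson_draw(rng, rng.gammavariate(shape, scale)) for _ in range(n)]

rng = random.Random(42)
counts = negbin_counts(rng, mean=20.0, dispersion=0.5, n=2000)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
# v greatly exceeds m: the overdispersion a plain Poisson model cannot capture
```

    A plain Poisson fit forces variance equal to the mean; mixing the Poisson rate over a gamma is the simplest way to see why count models with an extra shape parameter fit replicated expression profiles better.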

    Redundancy analysis allows improved detection of methylation changes in large genomic regions

    Background: DNA methylation is an epigenetic process that regulates gene expression. Methylation can be modified by environmental exposures, and changes in methylation patterns have been associated with diseases. Methylation microarrays measure methylation levels at more than 450,000 CpGs in a single experiment, and the most common analysis strategy is to perform a single probe analysis to find methylation probes associated with the outcome of interest. However, methylation changes usually occur at the regional level: for example, genomic structural variants can affect methylation patterns in regions up to several megabases in length. Existing DMR methods provide lists of Differentially Methylated Regions (DMRs) of up to only a few kilobases in length, and cannot check whether a target region is differentially methylated. Therefore, these methods are not suitable to evaluate methylation changes in large regions. To address these limitations, we developed a new DMR approach based on redundancy analysis (RDA) that assesses whether a target region is differentially methylated. Results: Using simulated and real datasets, we compared our approach to three common DMR detection methods (Bumphunter, blockFinder, and DMRcate). We found that Bumphunter underestimated methylation changes and blockFinder showed poor performance. DMRcate showed poor power on the simulated datasets and low specificity in the real data analysis. Our method showed very high performance in all simulation settings, even with small sample sizes and subtle methylation changes, while controlling type I error. Other advantages of our method are: 1) it estimates the degree of association between the DMR and the outcome; 2) it can analyze a targeted region of interest; and 3) it can evaluate the simultaneous effects of different variables. The proposed methodology is implemented in MEAL, a Bioconductor package designed to facilitate the analysis of methylation data. Conclusions: We propose a multivariate approach to decipher whether an outcome of interest alters the methylation pattern of a region of interest. The method is designed to analyze large target genomic regions and outperforms the three most popular methods for detecting DMRs. Our method can evaluate factors with more than two levels, or the simultaneous effect of more than one continuous variable, which is not possible with the state-of-the-art methods.
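    The core quantity behind an RDA-based test, the fraction of a region's total methylation variance explained by the outcome, can be sketched directly. The CpG values and outcome below are invented, and the sketch omits the permutation or F-type inference a real analysis (such as the MEAL package) would add on top.

```python
def region_variance_explained(methylation, outcome):
    """Fraction of the total variance across a region's CpGs explained by a
    single explanatory variable -- the constrained-variance ratio at the
    heart of redundancy analysis (RDA) with one constraint."""
    n = len(outcome)
    ybar = sum(outcome) / n
    yc = [y - ybar for y in outcome]
    syy = sum(v * v for v in yc)
    explained = total = 0.0
    for cpg in methylation:  # each cpg holds n per-sample methylation values
        xbar = sum(cpg) / n
        xc = [x - xbar for x in cpg]
        sxx = sum(v * v for v in xc)
        sxy = sum(a * b for a, b in zip(xc, yc))
        total += sxx
        explained += (sxy * sxy) / syy if syy > 0 else 0.0
    return explained / total if total > 0 else 0.0

# Hypothetical region: 3 CpGs measured in 6 samples, case/control outcome
outcome = [0, 0, 0, 1, 1, 1]
region = [
    [0.2, 0.25, 0.22, 0.6, 0.62, 0.58],   # strongly associated CpG
    [0.5, 0.52, 0.49, 0.51, 0.5, 0.53],   # flat CpG
    [0.3, 0.33, 0.31, 0.55, 0.5, 0.52],   # associated CpG
]
r2 = region_variance_explained(region, outcome)
```

    Because the statistic pools variance over all CpGs in the target region at once, a coordinated shift across a large region registers even when each individual probe effect is subtle, which is what single-probe analyses miss.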