
    Survival Analysis of Microarray Data With Microarray Measurement Subject to Measurement Error

    Microarray technology is essentially a measurement tool for measuring gene expression, and this measurement is subject to measurement error. Gene expressions can serve as predictors of patient survival, yet the measurement error in gene expression is often ignored in analyses of microarray data in the literature. Statistical methods are needed for analyzing microarray data without ignoring the error in gene expression. A typical microarray data set has a number of genes far exceeding the sample size, so proper selection of survival-relevant genes contributes to an accurate prediction model. We study the effect of measurement error on survival-relevant gene selection under the accelerated failure time (AFT) model by regularizing a weighted least squares estimator with the adaptive LASSO penalty. Simulation results and a real data analysis show that ignoring measurement error affects survival-relevant gene selection. The simulation-extrapolation (SIMEX) method is investigated to adjust for the impact of measurement error on gene selection, and the resulting model is more accurate than the model selected while ignoring measurement error. Microarray experiments are often performed over a long period of time, and samples can be prepared and collected under different conditions; moreover, different protocols or methodologies may be applied. All these factors contribute to the possibility of heteroscedastic measurement error in a microarray data set, and it is of practical importance to combine microarray data from different labs or platforms. We construct a predictive AFT model using data with heterogeneous covariate measurement error. Two variations of the SIMEX algorithm are investigated to adjust for the effect of the mismeasured covariates, and simulation results show that the proposed method achieves better prediction accuracy than the naive method. Throughout this dissertation, the SIMEX method is used to adjust for the effects of covariate measurement error. This method is superior to conventional alternatives in that it is not only more robust to distributional assumptions for error-prone covariates but also markedly simple and flexible for practical use. To implement the method, we developed an R package for general users.
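    A minimal sketch of the naive selection step described above is given below; it is an illustration rather than the dissertation's code, fitting the AFT model by Kaplan-Meier weighted least squares on log survival times with an adaptive LASSO penalty while ignoring measurement error. The helper names, the use of scikit-learn, and the tuning values (lam, gamma) are assumptions.

```python
# Naive survival-relevant gene selection under an AFT model:
# Kaplan-Meier weighted least squares + adaptive LASSO (measurement error ignored).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def kaplan_meier_weights(obs_time, event):
    """Stute-type jump weights of the Kaplan-Meier estimator (event=1: failure)."""
    n = len(obs_time)
    order = np.argsort(obs_time)
    d = np.asarray(event, dtype=float)[order]
    w = np.zeros(n)
    surv = 1.0
    for i in range(n):
        w[order[i]] = surv * d[i] / (n - i)
        surv *= ((n - i - 1) / (n - i)) ** d[i]
    return w

def adaptive_lasso_aft(X, log_time, weights, lam=0.05, gamma=1.0):
    """Weighted least squares + adaptive LASSO, implemented by column rescaling."""
    sw = np.sqrt(weights)
    Xw, yw = X * sw[:, None], log_time * sw             # fold KM weights into the rows
    beta_init = Ridge(alpha=1.0, fit_intercept=False).fit(Xw, yw).coef_
    scale = np.abs(beta_init) ** gamma + 1e-8           # adaptive penalty weights are 1/scale
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(Xw * scale, yw)
    return fit.coef_ * scale                            # transform back to the original scale

# Toy usage: 50 samples, 200 "genes", three of them truly survival relevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
beta_true = np.zeros(200)
beta_true[:3] = [1.0, -1.0, 0.5]
log_t = X @ beta_true + rng.normal(scale=0.5, size=50)
event = (rng.random(50) > 0.3).astype(int)              # toy censoring indicators
beta_hat = adaptive_lasso_aft(X, log_t, kaplan_meier_weights(np.exp(log_t), event))
print("selected genes:", np.nonzero(beta_hat)[0])
```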

    A Novel Admission Control Model in Cloud Computing

    With the rapid development of Cloud computing technologies and the wide adoption of Cloud services and applications, QoS provisioning in Clouds has become an important research topic. In this paper, we propose an admission control mechanism for Cloud computing. In particular, we consider the high volume of simultaneous requests for Cloud services and develop admission control for aggregated traffic flows to address this challenge. By employing network calculus, we determine the effective bandwidth of the aggregate flow, which is used for making admission control decisions. In order to improve network resource allocation while achieving Cloud service QoS, we investigate the relationship between effective bandwidth and equivalent capacity. We have also conducted extensive experiments to evaluate the performance of the proposed admission control mechanism.
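    As a rough illustration of effective-bandwidth-based admission (a simplified sketch, not the mechanism derived in the paper): if each flow is constrained by a leaky-bucket arrival curve with sustained rate rho and burst sigma, and the aggregate must meet a delay bound D on a constant-rate server, the aggregate needs a rate of at least max(sum rho, sum sigma / D), which is never more than the sum of the per-flow requirements; a new flow is admitted only if this requirement still fits the available capacity. All parameter values below are invented.

```python
# Admission control for leaky-bucket constrained flows via a
# network-calculus style effective bandwidth for the aggregate flow.
from dataclasses import dataclass

@dataclass
class Flow:
    rho: float    # sustained rate (Mb/s)
    sigma: float  # burst size (Mb)

def effective_bandwidth(flows, delay_bound):
    """Constant service rate that guarantees the delay bound for the aggregate.

    The aggregate arrival curve is a(t) = sum(sigma) + sum(rho) * t.  Served at a
    constant rate C, the worst-case delay is sum(sigma) / C, so we need
    C >= max(sum(rho), sum(sigma) / delay_bound)."""
    total_rho = sum(f.rho for f in flows)
    total_sigma = sum(f.sigma for f in flows)
    return max(total_rho, total_sigma / delay_bound)

def admit(new_flow, admitted, capacity, delay_bound):
    """Admit the new flow only if the aggregate still fits within the capacity."""
    return effective_bandwidth(admitted + [new_flow], delay_bound) <= capacity

# Usage: a 100 Mb/s link, 20 ms delay target, 30 flows already admitted.
admitted = [Flow(rho=2.0, sigma=0.05) for _ in range(30)]
print(admit(Flow(rho=5.0, sigma=0.1), admitted, capacity=100.0, delay_bound=0.02))
```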

    Strategic and economic aspects of network sharing in FTTH/PON architectures

    Due to the high costs associated with deploying the passive infrastructure of FTTH networks, operators are considering co-investments based on a network sharing model. This article describes the strategic and economic aspects of network sharing in FTTH/PON architectures. The capabilities of present and future versions of PON architectures and the cost implications of a network sharing model are described, and the minimum access-line price necessary to recover the investment is derived.
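    A hedged numeric sketch of the break-even reasoning (not the article's cost model or figures; every input below is invented): treat the per-subscriber share of the passive-infrastructure CAPEX as an annuity over the payback horizon, add monthly OPEX, and compare a sole deployment with a co-investment that splits the CAPEX.

```python
# Break-even monthly access-line price as an annuity over the payback horizon.
def min_monthly_price(capex_per_home_passed, take_rate, opex_per_sub_month,
                      annual_wacc, years, operators_sharing=1):
    """Monthly price per subscriber that just recovers the passive-infrastructure
    investment over the horizon, with CAPEX split among co-investing operators."""
    capex_per_subscriber = capex_per_home_passed / take_rate / operators_sharing
    i, n = annual_wacc / 12, years * 12
    capital_charge = capex_per_subscriber * i / (1 - (1 + i) ** -n)  # monthly annuity
    return capital_charge + opex_per_sub_month

# Sole deployment versus two operators sharing the passive network.
print(round(min_monthly_price(800, 0.4, 5.0, 0.08, 15), 2))
print(round(min_monthly_price(800, 0.4, 5.0, 0.08, 15, operators_sharing=2), 2))
```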

    Looking Deeper into Deep Learning Model: Attribution-based Explanations of TextCNN

    Layer-wise Relevance Propagation (LRP) and saliency maps have recently been used to explain the predictions of deep learning models, specifically in the domain of text classification. Given different attribution-based explanations that highlight the words relevant to a predicted class label, experiments based on word-deletion perturbations are a common evaluation method. This word-removal approach, however, disregards any linguistic dependencies that may exist between words or phrases in a sentence, which could semantically guide a classifier to a particular prediction. In this paper, we present a feature-based evaluation framework for comparing the two attribution methods on customer reviews (public data sets) and Customer Due Diligence (CDD) extracted reports (corporate data set). Instead of removing words based on their relevance scores, we investigate perturbations based on removing embedded features from intermediate layers of convolutional neural networks. Our experimental study is carried out on embedded-word, embedded-document, and embedded-ngram explanations. Using the proposed framework, we provide a visualization tool to assist analysts in reasoning toward the model's final prediction. Comment: NIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy, Montréal, Canada
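    One way such a feature-removal perturbation could be implemented is sketched below (an assumption about the mechanics, not the authors' code): zero out the convolutional feature maps of a TextCNN that an attribution method ranks as most relevant and measure how much the predicted class probability drops. The model architecture, the PyTorch usage, and the random relevance scores are all illustrative.

```python
# Feature-removal perturbation for a TextCNN: ablate the top-k most relevant
# convolutional feature maps and report the change in predicted probability.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab=5000, dim=100, n_filters=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, classes)

    def forward(self, tokens, drop_features=None):
        h = F.relu(self.conv(self.emb(tokens).transpose(1, 2)))  # (B, filters, T)
        if drop_features is not None:                            # embedded-feature removal
            h[:, drop_features, :] = 0.0
        return self.fc(h.max(dim=2).values)                      # max-over-time pooling

def probability_drop(model, tokens, relevance_per_filter, k=8):
    """Drop the k filters ranked most relevant by an attribution method
    (e.g., LRP or saliency) and report the drop in predicted probability."""
    model.eval()
    with torch.no_grad():
        p_full = F.softmax(model(tokens), dim=1)
        cls = p_full.argmax(dim=1)
        top = torch.topk(relevance_per_filter, k).indices
        p_pert = F.softmax(model(tokens, drop_features=top), dim=1)
    return (p_full[0, cls] - p_pert[0, cls]).item()

# Toy usage with random weights and a stand-in relevance ranking.
model = TextCNN()
tokens = torch.randint(0, 5000, (1, 40))
relevance = torch.rand(64)            # placeholder for LRP / saliency scores per filter
print(probability_drop(model, tokens, relevance))
```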

    SIMEX R Package for Accelerated Failure Time Models with Covariate Measurement Error

    It has been well documented that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature, with the focus typically centered on proportional hazards models. The impact of measurement error on inference under accelerated failure time (AFT) models has received relatively little attention, although these models are very useful in survival data analysis. He et al. (2007) discussed accelerated failure time models with error-prone covariates and studied the bias induced by the naive approach of ignoring measurement error in covariates. To adjust for the effects of covariate measurement error, they described a simulation-and-extrapolation (SIMEX) method. This method has theoretical advantages, such as robustness to distributional assumptions for error-prone covariates, and it enjoys simplicity and flexibility for practical use, which makes it appealing to analysts who would like to accommodate covariate measurement error in their analyses. To implement this method, in this paper we develop an R package for general users. Two data sets arising from clinical trials are employed to illustrate the use of the package.
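    As a rough illustration of how the simulation and extrapolation steps fit together (a generic sketch in Python rather than the R package's actual interface; the assumed known error variance sigma_u2, the quadratic extrapolant, and names such as naive_fit are illustrative assumptions):

```python
# Generic SIMEX: add extra measurement error at several levels lambda, refit the
# naive estimator, then extrapolate the trend back to lambda = -1 (no error).
import numpy as np

def simex(naive_fit, W, extra_args, sigma_u2, lambdas=(0.5, 1.0, 1.5, 2.0),
          B=100, seed=0):
    """naive_fit(W_contaminated, *extra_args) must return a coefficient vector.

    Simulation step: refit after adding extra error of variance lambda * sigma_u2.
    Extrapolation step: fit a quadratic in lambda and evaluate it at lambda = -1."""
    rng = np.random.default_rng(seed)
    grid = np.concatenate(([0.0], lambdas))
    means = []
    for lam in grid:
        reps = [naive_fit(W + rng.normal(scale=np.sqrt(lam * sigma_u2), size=W.shape),
                          *extra_args)
                for _ in range(B if lam > 0 else 1)]
        means.append(np.mean(reps, axis=0))
    coefs = np.polyfit(grid, np.asarray(means), deg=2)    # one quadratic per coefficient
    return coefs[0] * (-1.0) ** 2 + coefs[1] * (-1.0) + coefs[2]

# Toy check: simple linear regression with an error-prone covariate.  The naive
# slope is attenuated toward zero; SIMEX pushes it back toward the true value 2.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=0.3, size=500)
W = x + rng.normal(scale=0.5, size=500)                   # covariate observed with error
naive_slope = lambda w, yy: np.array([np.polyfit(w, yy, 1)[0]])
print(naive_slope(W, y), simex(naive_slope, W, (y,), sigma_u2=0.25))
```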

    Experimental Observation of Classical Sub-Wavelength Interference with Thermal-Like Light

    We show the experimental observation of classical sub-wavelength double-slit interference with a pseudo-thermal light source. The experimental results are in agreement with the recent theoretical prediction in quant-ph/0404078 (to appear in Phys. Rev. A). Comment: 4 pages, 6 figures

    Spatial Interference: From Coherent To Incoherent

    It is well known that direct observation of an interference or diffraction pattern in the intensity distribution requires a spatially coherent source. Optical waves emitted from portions of a source beyond the coherence area possess statistically independent phases and will degrade the interference pattern. In this paper we report an optical interference experiment, seemingly contrary to this common knowledge, in which the formation of the interference pattern relies on a spatially incoherent light source. Our experimental scheme is very similar to Gabor's original proposal of holography [1], with an incoherent source replacing the coherent one. In the statistical ensemble of the incoherent source, each sample field produces a sample interference pattern between the object wave and the reference wave. These patterns differ completely from one another owing to fluctuations of the source field distribution. Surprisingly, the sum of a great number of sample patterns explicitly exhibits an interference pattern that contains all the information about the object and is equivalent to a hologram in the coherent-light case. In this sense, our approach would be valuable for holography and other interference techniques when a coherent source is unavailable, as with x-ray and electron sources. Comment: 8 pages, 5 figures