
    Time-to-event overall survival prediction in glioblastoma multiforme patients using magnetic resonance imaging radiomics

    Purpose: Glioblastoma multiforme (GBM) is the most common aggressive primary brain tumor, with a short overall survival (OS) time. We aimed to assess the potential of radiomic features for predicting the time-to-event OS of patients with GBM using machine learning (ML) algorithms.
    Materials and methods: One hundred nineteen patients with GBM who had T1-weighted contrast-enhanced and T2-FLAIR MRI sequences, along with clinical data and survival times, were enrolled. Image preprocessing comprised 64-bin discretization, Laplacian of Gaussian (LoG) filters with three sigma values, and eight variations of the wavelet transform. Images were then segmented, followed by the extraction of 1212 radiomic features. Seven feature selection (FS) methods and six time-to-event ML algorithms were utilized. The combinations of preprocessing, FS, and ML algorithms (12 × 7 × 6 = 504 models) were evaluated by multivariate analysis.
    Results: Our multivariate analysis showed that the best prognostic FS/ML combinations were Mutual Information (MI)/CoxBoost, MI/Generalized Linear Model Boosting (GLMB), and MI/Generalized Linear Model Network (GLMN), all applied with the LoG (sigma = 1 mm) preprocessing method (C-index = 0.77). The LoG filter with sigma = 1 mm, MI, GLMB, and GLMN achieved significantly higher C-indices than the other preprocessing, FS, and ML methods (all p values < 0.05; mean C-indices of 0.65, 0.70, and 0.64, respectively).
    Conclusion: ML algorithms can predict the time-to-event OS of patients using MRI-based radiomic and clinical features. MRI-based radiomics analysis combined with clinical variables appears promising for assisting clinicians in survival prediction for patients with GBM. Further research is needed to establish the applicability of radiomics to the clinical management of GBM.
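The C-index reported above measures how well predicted risk scores order patients by survival time under right-censoring. As an illustrative sketch of the standard definition (not the authors' pipeline, which used CoxBoost/GLMB/GLMN models), a plain-Python Harrell's concordance index:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the patient with the shorter
    follow-up time actually had the event (events[i] == 1); the pair
    is concordant when that patient also has the higher predicted
    risk, and ties in risk count as half-concordant.
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the 0.77 reported above indicates substantially better-than-chance risk ranking.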

    DiscoverySpace: an interactive data analysis application

    DiscoverySpace is a graphical application for bioinformatics data analysis. Users can seamlessly traverse references between biological databases and draw together annotations in an intuitive tabular interface. Datasets can be compared using a suite of novel tools to aid in the identification of significant patterns. DiscoverySpace is of broad utility, and its particular strength lies in the analysis of serial analysis of gene expression (SAGE) data. The application is freely available online.

    Impact of feature harmonization on radiogenomics analysis: Prediction of EGFR and KRAS mutations from non-small cell lung cancer PET/CT images

    Objective: To investigate the impact of harmonization on the performance of CT, PET, and fused PET/CT radiomic features for predicting mutation status of the epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene (KRAS) genes in non-small cell lung cancer (NSCLC) patients.
    Methods: Radiomic features were extracted from tumors delineated on CT, PET, and wavelet-fused PET/CT images obtained from 136 histologically proven NSCLC patients. Univariate and multivariate predictive models were developed using radiomic features before and after ComBat harmonization to predict EGFR and KRAS mutation status. Multivariate models were built using minimum-redundancy maximum-relevance feature selection and a random forest classifier. Patient datasets were split 70%/30% for training/testing, respectively, and the procedure was repeated 10 times. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were used to assess model performance. The performance of the univariate and multivariate models before and after ComBat harmonization was compared using statistical analyses.
    Results: While the performance of most features in univariate modeling was significantly improved for EGFR prediction, most features showed no significant difference in performance after harmonization for KRAS prediction. Average AUCs of all multivariate predictive models for both EGFR and KRAS were significantly improved (q-value < 0.05) following ComBat harmonization. The mean ranges of AUCs increased following harmonization from 0.87-0.90 to 0.92-0.94 for EGFR, and from 0.85-0.90 to 0.91-0.94 for KRAS. The highest performance was achieved by the harmonized F_R0.66_W0.75 model, with AUCs of 0.94 and 0.93 for EGFR and KRAS, respectively.
    Conclusion: Regarding univariate modeling, ComBat harmonization generally had a better impact on features for EGFR than for KRAS status prediction, but its effect is feature-dependent; no systematic effect was observed. Regarding the multivariate models, ComBat harmonization significantly improved the performance of all radiomics models toward more successful prediction of EGFR and KRAS mutation status in lung cancer patients. By eliminating batch effects in multi-centric radiomic feature sets, harmonization is thus a promising tool for developing robust and reproducible radiomics from large and heterogeneous datasets.
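ComBat-style harmonization removes scanner/site batch effects by re-centering and re-scaling each feature per batch. The sketch below shows only the location-scale core of that idea (full ComBat additionally applies empirical-Bayes shrinkage of the batch parameters, which is omitted here); the function name and simplifications are illustrative, not the study's implementation:

```python
import numpy as np

def simple_harmonize(features, batches):
    """Per-batch location-scale adjustment of a feature matrix.

    Each batch is standardized to zero mean / unit variance and then
    mapped back onto the pooled mean and standard deviation, so that
    batch-specific offsets and scalings are removed. This is the core
    of ComBat without its empirical-Bayes shrinkage step.
    """
    features = np.asarray(features, dtype=float)
    batches = np.asarray(batches)
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0)
    out = np.empty_like(features)
    for b in np.unique(batches):
        idx = batches == b
        mu = features[idx].mean(axis=0)
        sd = features[idx].std(axis=0)
        sd[sd == 0] = 1.0  # guard against constant features in a batch
        out[idx] = (features[idx] - mu) / sd * grand_std + grand_mean
    return out
```

After this adjustment, per-batch feature means coincide, which is exactly the batch-effect removal that allowed the pooled multivariate models above to improve.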

    iNucs: Inter-nucleosome interactions

    [Motivation] Deciphering nucleosome–nucleosome interactions is an important step toward a mesoscale description of chromatin organization, but computational tools to perform such analyses are not publicly available. [Results] We developed iNucs, a user-friendly and efficient Python-based bioinformatics tool to compute and visualize nucleosome-resolved interactions using standard .pairs-format input generated by pairtools.
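The .pairs text format produced by pairtools lists one ligation pair per line (readID, chrom1, pos1, chrom2, pos2, strand1, strand2, with `#`-prefixed header lines). As a toy illustration of aggregating such a stream into pairwise interaction counts: iNucs maps positions onto called nucleosome intervals, whereas this sketch uses fixed-width bins as a simplifying stand-in, and the function name is hypothetical:

```python
from collections import Counter

def count_nucleosome_contacts(pairs_lines, nuc_size=200):
    """Count interactions between fixed-width genomic bins from a
    .pairs-format stream (readID chrom1 pos1 chrom2 pos2 strand1
    strand2; '#' lines are headers). Fixed bins stand in for the
    nucleosome intervals that iNucs itself would use.
    """
    counts = Counter()
    for line in pairs_lines:
        if line.startswith("#"):
            continue  # skip header lines
        fields = line.split()
        chrom1, pos1 = fields[1], int(fields[2])
        chrom2, pos2 = fields[3], int(fields[4])
        n1 = (chrom1, pos1 // nuc_size)
        n2 = (chrom2, pos2 // nuc_size)
        # sort so (a, b) and (b, a) accumulate in one counter entry
        counts[tuple(sorted((n1, n2)))] += 1
    return counts
```

The resulting counter is the sparse analogue of the nucleosome-resolved interaction matrix such a tool would visualize.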

    Belief Change and Base Dependence

    Get PDF
    The AGM paradigm of belief change studies the dynamics of belief states in light of new information. For theoretical simplification, AGM idealizes a belief state as a belief set: a set of logical formulas that is closed under implication. A variant to the original AGM approach generalizes belief sets into belief bases which are not necessarily deductively closed. Many authors have argued that, compared to belief sets, belief bases are easier to represent in computers, more expressive and more inconsistency-tolerant. A strong intuition for belief change operations, Gärdenfors suggests, is that formulas that are independent of a change should remain intact. Linking belief change and dependence is significant because, for example, it can narrow the number of formulas considered during a belief change operation. Then, based on Gärdenfors’ intuition, Fariñas and Herzig axiomatize a dependence relation, and formalize the connection between dependence and belief change. The work in this thesis is also based on Gärdenfors’ intuition. We first introduce the notion of base dependence as a relation between formulas with respect to some belief base (instead of a belief set). After an axiomatization of base dependence, we present a formalization of the connection between base dependence and a particular belief base change operation, saturated kernel contraction. We also prove that base dependence is a reversible generalization of Fariñas and Herzig’s dependence. That is, in the special case when the underlying belief base is deductively closed (i.e., it is a belief set), base dependence reduces to dependence. Finally, an intriguing feature of Fariñas and Herzig’s formalism is that it meets other criteria for dependence, namely, Keynes’ conjunction criterion for dependence (CCD) and Gärdenfors’ conjunction criterion for independence (CCI). We show that our base dependence formalism also meets these criteria. 
More interestingly, we offer a new, more specific conjunction criterion for dependence that implies both CCD and CCI, and show that our base dependence formalism also meets this new criterion.

    MFP-Unet: A novel deep learning based approach for left ventricle segmentation in echocardiography

    Segmentation of the left ventricle (LV) is a crucial step for quantitative measurements such as area, volume, and ejection fraction. However, automatic LV segmentation in 2D echocardiographic images is a challenging task due to ill-defined borders and operator-dependence issues (insufficient reproducibility). U-net, a well-known architecture in medical image segmentation, addresses this problem through an encoder-decoder path. Despite outstanding overall performance, U-net ignores the contribution of all semantic strengths in the segmentation procedure. In the present study, we propose a novel architecture to tackle this drawback. Feature maps at all levels of the decoder path of U-net are concatenated, their depths are equalized, and they are up-sampled to a fixed dimension. This stack of feature maps serves as the input to the semantic segmentation layer. The performance of the proposed model was evaluated on two sets of echocardiographic images: one public dataset and one prepared dataset. The proposed network yielded significantly improved results compared with U-net, dilated U-net, Unet++, ACNN, SHG, and DeepLabv3. An average Dice Metric (DM) of 0.953, Hausdorff Distance (HD) of 3.49, and Mean Absolute Distance (MAD) of 1.12 were achieved on the public dataset. The correlation graph, Bland-Altman analysis, and box plot showed strong agreement between automatically and manually calculated volume, area, and length.
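The Dice Metric reported above quantifies overlap between automatic and manual segmentation masks. A minimal NumPy sketch of the standard definition (an illustration, not the study's evaluation code):

```python
import numpy as np

def dice_metric(pred, truth):
    """Dice Metric (DM) = 2 * |A ∩ B| / (|A| + |B|) for binary masks.

    Returns 1.0 for identical masks and 0.0 for disjoint ones.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A DM of 0.953, as reported for the public dataset, means the predicted and manual LV masks share over 95% of their combined area.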