2 research outputs found

    Predicting breast cancer progression by using cell-free DNA

    Thesis submitted to the Faculty of Information in partial fulfillment of the requirements for the award of Master of Science in Information Technology. Cancer is among the leading causes of death in Kenya, after infectious and cardiovascular diseases. Among the various forms of cancer, breast cancer accounts for a significant percentage of all new cancer incidences in the country and has a high mortality rate. Globally, breast cancer is considered the most common cancer. Treatment varies from patient to patient due to factors such as stage, age, and general health. Methods such as surgery, radiotherapy, and chemotherapy, alone or in combination, have been used with varying degrees of success and are not always effective; however, these modalities have been employed successfully when the disease is detected early. This research applied deep neural networks coupled with genetic algorithms to build a learning model that evaluated biomarkers obtained from cell-free DNA and predicted the progression of breast cancer. The research also employed an agile, data-driven methodology whose recursive nature produced a model with a higher degree of accuracy and specificity. The developed model attained an accuracy of 94% in predicting breast cancer progression.
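
    The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: a small genetic algorithm that evolves the hyperparameters of a neural-network classifier trained on a synthetic stand-in for a cell-free DNA biomarker matrix. The data, search space, and scikit-learn model choice are assumptions for illustration, not the thesis's actual pipeline.

        # Hypothetical sketch: a genetic algorithm tuning an MLP classifier.
        # The synthetic data below stands in for cfDNA biomarker features.
        import random
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPClassifier

        rng = random.Random(0)
        X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                                   random_state=0)

        def random_genome():
            # A genome encodes architecture and training hyperparameters.
            return {"hidden": rng.choice([(32,), (64,), (64, 32), (128, 64)]),
                    "lr": 10 ** rng.uniform(-4, -2),
                    "alpha": 10 ** rng.uniform(-5, -2)}

        def fitness(genome):
            # Mean cross-validated accuracy is the GA's objective.
            model = MLPClassifier(hidden_layer_sizes=genome["hidden"],
                                  learning_rate_init=genome["lr"],
                                  alpha=genome["alpha"],
                                  max_iter=500, random_state=0)
            return cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

        def mutate(genome):
            # Resample one gene at random from a fresh genome.
            child, fresh = dict(genome), random_genome()
            key = rng.choice(list(child))
            child[key] = fresh[key]
            return child

        population = [random_genome() for _ in range(8)]
        for generation in range(5):
            scored = sorted(population, key=fitness, reverse=True)
            print(f"generation {generation}: best CV accuracy = {fitness(scored[0]):.3f}")
            survivors = scored[: len(scored) // 2]          # keep the top half
            population = survivors + [mutate(rng.choice(survivors)) for _ in survivors]

        print("best hyperparameters found:", max(population, key=fitness))

    In the thesis the fitness function would presumably be evaluated on the real cfDNA biomarker data rather than synthetic features, and the genome could also encode which biomarkers to include.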

    Methodological conduct of prognostic prediction models developed using machine learning in oncology: a systematic review.

    BACKGROUND: To describe and evaluate the methodological conduct of prognostic prediction models developed using machine learning methods in oncology. METHODS: We conducted a systematic review in MEDLINE and Embase between 01/01/2019 and 05/09/2019 for studies developing a prognostic prediction model using machine learning methods in oncology. We used the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement, the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) to assess the methodological conduct of included publications. Results were summarised by modelling type: regression-based, non-regression-based and ensemble machine learning models. RESULTS: Sixty-two publications met the inclusion criteria, developing 152 models in total. Forty-two models were regression-based, 71 were non-regression-based and 39 were ensemble models. A median of 647 individuals (IQR: 203 to 4059) and 195 events (IQR: 38 to 1269) were used for model development, and 553 individuals (IQR: 69 to 3069) and 50 events (IQR: 17.5 to 326.5) for model validation. A higher number of events per predictor was used for developing regression-based models (median: 8, IQR: 7.1 to 23.5), compared to alternative machine learning (median: 3.4, IQR: 1.1 to 19.1) and ensemble models (median: 1.7, IQR: 1.1 to 6). Sample size was rarely justified (n = 5/62; 8%). Some or all continuous predictors were categorised before modelling in 24 studies (39%). Of the models reporting predictor selection before modelling, 46% (n = 24/62) used univariable analyses, a common method across all modelling types. Ten out of 24 models for time-to-event outcomes accounted for censoring (42%). A split-sample approach was the most popular method for internal validation (n = 25/62, 40%). Calibration was reported in 11 studies. Fewer than half of the models were reported or made available. CONCLUSIONS: The methodological conduct of machine learning based clinical prediction models is poor. Guidance is urgently needed, with increased awareness and education of minimum prediction modelling standards. Particular focus is needed on sample size estimation, development and validation analysis methods, and ensuring the model is available for independent validation, to improve the quality of machine learning based clinical prediction models.
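
    Two of the quantities the review audits, events per predictor and split-sample internal validation with a calibration check, can be made concrete with a short sketch. The synthetic cohort and logistic-regression model below are illustrative assumptions only; they do not reproduce any analysis from the review.

        # Illustrative sketch: events per predictor and a split-sample validation
        # with a simple calibration summary, on synthetic data.
        from sklearn.calibration import calibration_curve
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in cohort: 647 individuals, 20 candidate predictors.
        X, y = make_classification(n_samples=647, n_features=20, n_informative=8,
                                   weights=[0.7, 0.3], random_state=0)

        events = int(y.sum())                          # number of outcome events
        print(f"events = {events}, events per predictor = {events / X.shape[1]:.1f}")

        # Split-sample internal validation, the approach most reviewed studies used.
        X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3,
                                                      stratify=y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        pred = model.predict_proba(X_val)[:, 1]

        # Discrimination plus a coarse calibration check on the held-out split.
        print(f"validation AUC = {roc_auc_score(y_val, pred):.3f}")
        observed, predicted = calibration_curve(y_val, pred, n_bins=5)
        for o, p in zip(observed, predicted):
            print(f"mean predicted risk {p:.2f} -> observed event rate {o:.2f}")

    The calibration_curve output compares mean predicted risk with observed event rates within probability bins, one way of reporting the calibration that only 11 of the reviewed studies included.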
