
    Development of a clinical decision model for thyroid nodules

    Background: Thyroid nodules represent a common problem brought to medical attention. Four to seven percent of the United States adult population (10–18 million people) has a palpable thyroid nodule; however, the majority (>95%) of thyroid nodules are benign. While fine needle aspiration (FNA) remains the most cost-effective and accurate diagnostic tool for thyroid nodules in current practice, over 20% of patients undergoing FNA of a thyroid nodule have indeterminate cytology (follicular neoplasm), with an associated malignancy risk prevalence of 20–30%. These patients require thyroid lobectomy/isthmusectomy purely for the purpose of attaining a definitive diagnosis. Given that the majority (70–80%) of these patients have benign surgical pathology, thyroidectomy in these patients is conducted principally with diagnostic intent. Clinical models predictive of malignancy risk are needed to support treatment decisions in patients with thyroid nodules in order to reduce morbidity associated with unnecessary diagnostic surgery. Methods: Data were analyzed from a completed prospective cohort trial conducted over a 4-year period involving 216 patients with thyroid nodules undergoing ultrasound (US), electrical impedance scanning (EIS) and fine needle aspiration cytology (FNA) prior to thyroidectomy. A Bayesian model was designed to predict malignancy in thyroid nodules based on multivariate dependence relationships between independent covariates. Ten-fold cross-validation was performed to estimate classifier error, wherein the data set was randomized into ten separate and unique train/test splits, each consisting of a training set (90% of records) and a test set (10% of records). A receiver operating characteristic (ROC) curve of these predictions and the area under the curve (AUC) were calculated to determine model robustness for predicting malignancy in thyroid nodules. Results: Thyroid nodule size, FNA cytology, and US and EIS characteristics were highly predictive of malignancy. Cross-validation of the model created with Bayesian network analysis effectively predicted malignancy [AUC = 0.88 (95% CI: 0.82–0.94)] in thyroid nodules. The positive and negative predictive values of the model are 83% (95% CI: 76%–91%) and 79% (95% CI: 72%–86%), respectively. Conclusion: An integrated predictive decision model using Bayesian inference incorporating readily obtainable thyroid nodule measures is clinically relevant, as it effectively predicts malignancy in thyroid nodules. This model warrants further validation testing in prospective clinical trials.
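
    The validation scheme above can be summarized in a few lines. The sketch below is a minimal illustration, not the study's actual model: scikit-learn's GaussianNB stands in for the paper's Bayesian network, the predictions are pooled across folds for a single AUC, and the feature names and data are hypothetical placeholders.

    ```python
    # Ten-fold cross-validation with a Bayesian classifier and pooled ROC/AUC.
    # GaussianNB is a stand-in for the paper's Bayesian network; the features
    # are illustrative placeholders, not the study's actual variables.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Hypothetical stand-in data: nodule size, cytology, US and EIS scores.
    X = rng.normal(size=(216, 4))
    y = rng.integers(0, 2, size=216)  # 1 = malignant, 0 = benign

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores, labels = [], []
    for train_idx, test_idx in cv.split(X, y):
        model = GaussianNB().fit(X[train_idx], y[train_idx])
        scores.append(model.predict_proba(X[test_idx])[:, 1])
        labels.append(y[test_idx])

    auc = roc_auc_score(np.concatenate(labels), np.concatenate(scores))
    print(f"pooled cross-validated AUC = {auc:.2f}")
    ```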

    Role of Artificial Intelligence in Radiogenomics for Cancers in the Era of Precision Medicine

    Radiogenomics, a combination of “Radiomics” and “Genomics” using Artificial Intelligence (AI), has recently emerged as the state-of-the-art science in precision medicine, especially in oncology care. Radiogenomics integrates large-scale quantifiable data extracted from radiological medical images with personalized genomic phenotypes. It builds a prediction model through various AI methods to stratify the risk of patients, monitor therapeutic approaches, and assess clinical outcomes. It has recently shown tremendous achievements in prognosis, treatment planning, survival prediction, heterogeneity analysis, recurrence, and progression-free survival in human cancer studies. Although AI has shown immense performance in oncology care across various clinical aspects, it has several challenges and limitations. The proposed review provides an overview of radiogenomics with viewpoints on the role of AI in terms of its promises for computational as well as oncological aspects, and offers achievements and opportunities in the era of precision medicine. The review also presents various recommendations to diminish these obstacles.
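
    The core fusion idea, combining image-derived radiomic features with genomic phenotypes in one risk model, can be sketched as follows. Everything here is a hypothetical placeholder (feature counts, data, and the logistic-regression choice), not the review's actual pipeline.

    ```python
    # Concatenate radiomic and genomic features and fit one risk model.
    # All names and data are invented for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_patients = 120
    radiomic = rng.normal(size=(n_patients, 50))   # e.g. texture/shape features
    genomic = rng.normal(size=(n_patients, 30))    # e.g. expression signatures
    X = np.hstack([radiomic, genomic])             # fused radiogenomic features
    y = rng.integers(0, 2, size=n_patients)        # 1 = high-risk, 0 = low-risk

    risk_model = LogisticRegression(max_iter=1000)
    print(cross_val_score(risk_model, X, y, cv=5, scoring="roc_auc"))
    ```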

    Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment

    Motivation: The price of medical treatment continues to rise due to (i) an increasing population; (ii) an aging population; (iii) disease prevalence; (iv) a rise in the frequency of patients who utilize health care services; and (v) an increase in prices. Objective: Artificial Intelligence (AI) is already well-known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economical solutions compared to conventional methods; (ii) AI-based treatment offers stronger economics compared to AI-based diagnosis. This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and then compare it to the traditional or non-AI-based approaches. Methodology: PRISMA was used to select the best 200 studies for AI in healthcare with a primary focus on cost reduction, especially towards diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles that AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of different assumptions by integrating AI and then comparing it against conventional costs. Lastly, we dwell on three powerful future concepts of AI, namely pruning, bias, and explainability, as well as regulatory approvals of AI systems. Conclusions: The model shows tremendous cost savings using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reduction in AI bias, explainability, and regulatory approvals. © 2022 by the authors.
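
    The comparison the study describes, conventional cost versus AI-assisted cost under varying assumptions, reduces to a simple parameter sweep. The sketch below uses entirely made-up numbers and an assumed uniform per-case cost reduction; it illustrates the shape of the analysis, not the paper's figures.

    ```python
    # Toy cost comparison: conventional vs. AI-assisted pathway, swept
    # over assumed cost-reduction factors. Every number is a placeholder.
    def pathway_cost(n_cases: int, cost_per_diagnosis: float,
                     cost_per_treatment: float) -> float:
        """Total cost of diagnosing and treating n_cases patients."""
        return n_cases * (cost_per_diagnosis + cost_per_treatment)

    baseline = pathway_cost(n_cases=1000, cost_per_diagnosis=500.0,
                            cost_per_treatment=4000.0)

    for reduction in (0.05, 0.10, 0.20):  # assumed AI-driven reductions
        with_ai = pathway_cost(n_cases=1000,
                               cost_per_diagnosis=500.0 * (1 - reduction),
                               cost_per_treatment=4000.0 * (1 - reduction))
        print(f"{reduction:.0%} reduction -> savings {baseline - with_ai:,.0f}")
    ```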

    White learning methodology: a case study of cancer-related disease factors analysis in real-time PACS environment

    A Bayesian network is a probabilistic model whose prediction accuracy may not be among the highest in the machine learning family. Deep learning (DL), on the other hand, possesses higher predictive power than many other models, but how reliable its results are, how they are deduced, and how interpretable its predictions are to users remain obscure: DL functions like a black box. As a result, many medical practitioners are reluctant to use deep learning as the only tool for critical machine learning applications, such as aiding tools for cancer diagnosis. In this paper, a framework of white learning (WL) is proposed which takes advantage of both black-box learning and white-box learning. Usually, black-box learning gives a high standard of accuracy, and white-box learning provides an explainable directed acyclic graph. In our design, there are three stages of white learning, based on the degree of fusion of the white-box and black-box learning: loosely coupled WL, semi-coupled WL and tightly coupled WL. A case of loosely coupled WL is tested on a breast cancer dataset. This approach uses deep learning and an incremental version of a naïve Bayes network. White learning is broadly defined as a systemic fusion of machine learning models which yields both an explainable Bayes network, which can uncover hidden relations between features and class, and a deep learning model, which gives higher prediction accuracy than other algorithms. We designed a series of experiments for this loosely coupled WL model. The simulation results show that, compared to standard black-box deep learning, WL can enhance accuracy and kappa statistics by up to 50%. The performance of WL also appears more stable in extreme conditions such as noise and high-dimensional data, and the relations found by WL's Bayesian network are more concise and stronger in affinity. The experimental results deliver positive signals that WL can output both high classification accuracy and an explainable relations graph between features and class. [Abstract copyright: Copyright © 2020. Published by Elsevier B.V.]
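
    One plausible reading of "loosely coupled" is sketched below: train an explainable naive Bayes model and a black-box neural network independently on the same data, and use the network for accuracy while keeping the Bayes model's feature-level view for explanation. This is an illustration under that assumption, not the paper's exact algorithm (which uses an incremental naïve Bayes variant).

    ```python
    # Loosely coupled white learning sketch: independent white-box and
    # black-box models over the same breast cancer data.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    white_box = GaussianNB().fit(X_tr, y_tr)  # explainable component
    black_box = make_pipeline(                # high-accuracy component
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
    ).fit(X_tr, y_tr)

    print("naive Bayes accuracy :", white_box.score(X_te, y_te))
    print("neural net accuracy  :", black_box.score(X_te, y_te))
    # The Bayes model's per-class feature means give a crude explanation layer.
    print("per-class feature means shape:", white_box.theta_.shape)
    ```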

    Empirical chemosensitivity testing in a spheroid model of ovarian cancer using a microfluidics-based multiplex platform.

    The use of biomarkers to infer drug response in patients is being actively pursued, yet significant challenges with this approach, including the complicated interconnection of pathways, have limited its application. Direct empirical testing of tumor sensitivity would arguably provide a more reliable predictive value, although it has garnered little attention, largely due to the technical difficulties associated with this approach. We hypothesize that the application of recently developed microtechnologies, coupled to more complex 3-dimensional cell cultures, could provide a model to address some of these issues. As a proof of concept, we developed a microfluidic device where spheroids of the serous epithelial ovarian cancer cell line TOV112D are entrapped and assayed for their chemoresponse to carboplatin and paclitaxel, two therapeutic agents routinely used for the treatment of ovarian cancer. In order to index the chemoresponse, we analyzed the spatiotemporal evolution of the mortality fraction, as judged by vital dyes and confocal microscopy, within spheroids subjected to different drug concentrations and treatment durations inside the microfluidic device. To reflect microenvironment effects, we tested the effect of exogenous extracellular matrix and serum supplementation during spheroid formation on their chemotherapeutic response. Spheroids displayed augmented chemoresistance in comparison to monolayer cultures. This resistance was further increased by the simultaneous presence of both extracellular matrix and high serum concentration during spheroid formation. Following exposure to chemotherapeutics, cell death profiles were not uniform throughout the spheroid. The highest cell death fraction was found at the center of the spheroid and the lowest at the periphery. Collectively, the results demonstrate the validity of the approach, and provide the basis for further investigation of chemotherapeutic responses in ovarian cancer using microfluidics technology. In the future, such microdevices could provide the framework to assay drug sensitivity in a timeframe suitable for clinical decision making.
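
    The spatial read-out described above, mortality fraction as a function of position within the spheroid and drug concentration, can be reduced to a simple binning analysis. The sketch below uses synthetic placeholder data (with a trend matching the reported center-versus-periphery gradient), not measurements from the study.

    ```python
    # Mortality fraction per radial shell and drug concentration, given a
    # per-cell table of radial position (0 = centre, 1 = periphery) and a
    # live/dead label from vital-dye imaging. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    n_cells = 5000
    radius = rng.uniform(0, 1, n_cells)            # normalized radial position
    conc = rng.choice([0.0, 10.0, 50.0], n_cells)  # drug concentration (uM)
    # Synthetic trend matching the paper's observation: more death at the core.
    p_dead = np.clip(0.1 + 0.01 * conc * (1 - radius), 0, 1)
    dead = rng.random(n_cells) < p_dead

    shells = np.linspace(0, 1, 5)  # four radial shells
    for c in np.unique(conc):
        mask = conc == c
        fractions = [dead[mask & (radius >= lo) & (radius < hi)].mean()
                     for lo, hi in zip(shells[:-1], shells[1:])]
        print(f"{c:5.1f} uM, mortality by shell (centre->edge):",
              np.round(fractions, 2))
    ```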

    Utilizing artificial intelligence in perioperative patient flow: systematic literature review

    Abstract. The purpose of this thesis was to map the existing landscape of artificial intelligence (AI) applications used in secondary healthcare, with a focus on perioperative care. The goal was to find out what systems have been developed, and how capable they are of controlling perioperative patient flow. The review was guided by the following research question: How is AI currently utilized in patient flow management in the context of perioperative care? This systematic literature review examined the current evidence regarding the use of AI in perioperative patient flow. A comprehensive search was conducted in four databases, resulting in 33 articles meeting the inclusion criteria. Findings demonstrated that AI technologies, such as machine learning (ML) algorithms and predictive analytics tools, have shown somewhat promising outcomes in optimizing perioperative patient flow. Specifically, AI systems have proven effective in predicting surgical case durations, assessing risks, planning treatments, supporting diagnosis, improving bed utilization, reducing cancellations and delays, and enhancing communication and collaboration among healthcare providers. However, several challenges were identified, including the need for accurate and reliable data sources, ethical considerations, and the potential for biased algorithms. Further research is needed to validate and optimize the application of AI in perioperative patient flow. The contribution of this thesis is a summary of the current state and characteristics of AI applications in perioperative patient flow. This systematic literature review provides information about the features of perioperative patient flow and the clinical tasks of AI applications previously identified.
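
    One of the tasks the review highlights, predicting surgical case duration, is typically framed as supervised regression over scheduling features. The sketch below is illustrative only: the feature set, data, and gradient-boosting choice are assumptions, not drawn from any reviewed system.

    ```python
    # Case-duration prediction from scheduling features via regression.
    # Features and data are invented placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(3)
    n = 2000
    # Hypothetical features: procedure code, surgeon id, ASA class, age.
    X = np.column_stack([
        rng.integers(0, 20, n),   # procedure code
        rng.integers(0, 10, n),   # surgeon id
        rng.integers(1, 5, n),    # ASA physical status class
        rng.integers(18, 90, n),  # patient age
    ])
    minutes = 30 + 8 * X[:, 0] + 5 * X[:, 2] + rng.normal(0, 15, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, minutes, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("MAE (minutes):",
          round(mean_absolute_error(y_te, model.predict(X_te)), 1))
    ```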

    A Bayesian framework for statistical signal processing and knowledge discovery in proteomic engineering

    Thesis (Ph.D.), Harvard-MIT Division of Health Sciences and Technology, February 2006. Includes bibliographical references (leaves 73-85). Proteomics has been revolutionized in the last couple of years through the integration of new mass spectrometry technologies such as Surface-Enhanced Laser Desorption/Ionization (SELDI) mass spectrometry. As data is generated in an increasingly rapid and automated manner, novel and application-specific computational methods will be needed to deal with all of this information. This work seeks to develop a Bayesian framework in mass-based proteomics for protein identification. Using the Bayesian framework in a statistical signal processing manner, mass spectrometry data is filtered and analyzed in order to estimate protein identity. This is done by a multi-stage process which compares probabilistic networks generated from mass spectrometry-based data with a mass-based network of protein interactions. In addition, such models can provide insight into features of existing models by identifying relevant proteins. This work finds that the search space of potential proteins can be reduced such that simple antibody-based tests can be used to validate protein identity. This is done with real proteins as a proof of concept. Regarding protein interaction networks, the largest human protein interaction meta-database was created as part of this project, containing over 162,000 interactions. A further contribution is the implementation of the massome network database of mass-based interactions, which is used in the protein identification process. This network is explored in terms of its potential usefulness for protein identification. The framework provides an approach to a number of core issues in proteomics. Besides providing these tools, it yields a novel way to approach statistical signal processing problems in this domain in a way that can be adapted as proteomics-based technologies mature. By Gil Alterovitz, Ph.D.
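
    The Bayesian identification step can be illustrated with a toy posterior update: score each candidate protein by how many observed mass peaks it explains within an instrument tolerance. The candidates, peak values, tolerance, and likelihood model below are all invented for illustration and do not reflect the thesis's actual algorithm.

    ```python
    # Toy Bayesian protein identification from mass peaks.
    import numpy as np

    candidates = {"PROT_A": [1200.5, 980.2, 1534.7],
                  "PROT_B": [1101.1, 980.2, 1723.9],
                  "PROT_C": [1534.7, 1723.9, 2001.3]}
    observed_peaks = np.array([980.3, 1534.6, 2001.4])  # measured m/z values
    tolerance = 0.5  # Da; assumed instrument accuracy

    prior = {name: 1 / len(candidates) for name in candidates}  # uniform
    posterior = {}
    for name, masses in candidates.items():
        matched = sum(np.any(np.abs(observed_peaks - m) < tolerance)
                      for m in masses)
        # Toy likelihood: each matched peak strengthens the evidence.
        likelihood = (0.9 ** matched) * (0.1 ** (len(masses) - matched))
        posterior[name] = prior[name] * likelihood

    z = sum(posterior.values())
    for name, p in posterior.items():
        print(f"{name}: posterior = {p / z:.3f}")
    ```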

    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical imaging processing pipelines. Variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to try to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects. However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on the contribution of overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance in lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys attempted in this specific research area.
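
    The normative-learning idea, training only on healthy data and flagging deviations, is commonly realized with reconstruction-based models. The sketch below is a compact stand-in using a dense autoencoder on random vectors rather than the thesis's generative ultrasound models; the 95th-percentile threshold rule is an assumption for illustration.

    ```python
    # Anomaly detection by reconstruction error: train on normal samples
    # only, flag inputs the model reconstructs poorly.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    normal_train = rng.normal(0, 1, size=(1000, 32))  # "healthy" training data

    # MLPRegressor with input == target acts as a simple autoencoder.
    autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                               random_state=0)
    autoencoder.fit(normal_train, normal_train)

    def anomaly_score(x: np.ndarray) -> np.ndarray:
        """Per-sample mean squared reconstruction error."""
        recon = autoencoder.predict(x)
        return ((x - recon) ** 2).mean(axis=1)

    threshold = np.percentile(anomaly_score(normal_train), 95)  # assumed cutoff
    test_normal = rng.normal(0, 1, size=(5, 32))
    test_anomalous = rng.normal(3, 1, size=(5, 32))   # shifted distribution
    print("normal flagged:   ", anomaly_score(test_normal) > threshold)
    print("anomalous flagged:", anomaly_score(test_anomalous) > threshold)
    ```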