670 research outputs found

    Documenting and predicting topic changes in Computers in Biology and Medicine: A bibliometric keyword analysis from 1990 to 2017

    The Computers in Biology and Medicine (CBM) journal promotes the use of computing machinery in the fields of bioscience and medicine. Since the first volume in 1970, the importance of computers in these fields has grown dramatically; this is evident in the diversification of topics and an increase in the publication rate. In this study, we quantify both the change and the diversification of topics covered in CBM. This is done by analysing the author-supplied keywords, which have been electronically captured since 1990. The analysis starts by selecting 40 keywords, related to Medical (M) (7), Data (D) (10), Feature (F) (17), and Artificial Intelligence (AI) (6) methods. Automated keyword clustering shows the statistical connections between the selected keywords. We found that the three most popular topics in CBM are: Support Vector Machine (SVM), Electroencephalography (EEG), and IMAGE PROCESSING. In a separate analysis step, we bagged the selected keywords into sequential one-year time slices and calculated the normalized appearance. The results were visualised with graphs that indicate the CBM topic changes. These graphs show that there was a transition from Artificial Neural Network (ANN) to SVM; in 2006, SVM replaced ANN as the most important AI algorithm. Our investigation helps the editorial board to manage and embrace topic change. Furthermore, our analysis is interesting for the general reader, as the results can help them to adjust their research directions.
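
    The normalized-appearance measure described above is straightforward to reproduce. The Python fragment below is a minimal sketch under stated assumptions: the per-paper records, the tracked keyword list, and the normalisation (keyword occurrences divided by the number of papers in that one-year slice) are illustrative guesses, not the authors' actual pipeline.

    from collections import Counter

    # Illustrative records: (publication_year, author-supplied keywords).
    papers = [
        (1991, ["artificial neural network", "eeg"]),
        (2006, ["support vector machine", "image processing"]),
        (2007, ["support vector machine", "eeg"]),
    ]

    tracked = ["support vector machine", "artificial neural network", "eeg"]

    def normalized_appearance(papers, tracked):
        """For each one-year slice, count tracked-keyword occurrences
        and divide by the number of papers published that year."""
        per_year_counts = {}          # year -> Counter of tracked keywords
        papers_per_year = Counter()   # year -> number of papers
        for year, keywords in papers:
            papers_per_year[year] += 1
            counts = per_year_counts.setdefault(year, Counter())
            for kw in keywords:
                if kw in tracked:
                    counts[kw] += 1
        return {
            year: {kw: counts[kw] / papers_per_year[year] for kw in tracked}
            for year, counts in per_year_counts.items()
        }

    print(normalized_appearance(papers, tracked))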

    Multimodality carotid plaque tissue characterization and classification in the artificial intelligence paradigm: a narrative review for stroke application

    Cardiovascular disease (CVD) is one of the leading causes of morbidity and mortality in the United States of America and globally. Carotid arterial plaque, both a cause and a marker of such CVD, can be detected by various non-invasive imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US). Characterization and classification of carotid plaque type in these imaging modalities, especially into symptomatic and asymptomatic plaque, helps in the planning of carotid endarterectomy or stenting. It can be challenging to characterize plaque components due to (I) the partial volume effect in MRI, (II) varying Hounsfield values in plaque regions in CT, and (III) attenuation of the echoes reflected by the plaque during US, causing acoustic shadowing. Artificial intelligence (AI) methods have become an indispensable part of healthcare, and their applications to non-invasive imaging technologies such as MRI, CT, and US are growing. In this narrative review, three main types of AI models (machine learning, deep learning, and transfer learning) are analyzed as applied to MRI, CT, and US. A link between carotid plaque characteristics and the risk of coronary artery disease is presented. With regard to characterization, we review tools and techniques that use AI models to distinguish carotid plaque types based on signal processing and feature strengths. We conclude that AI-based solutions offer an accurate and robust path for tissue characterization and classification of carotid artery plaque in all three imaging modalities. Owing to its cost, user-friendliness, and clinical effectiveness, US has seen the most widespread adoption of AI.

    Automatic CDR Estimation for Early Glaucoma Diagnosis


    Towards PACE-CAD Systems

    Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, where misdiagnosis can have far-reaching and serious repercussions. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD system. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for the development of various methods that may enrich an ordinary CAD system with PACE. However, depending on the specific requirements of different methods, other application areas in ophthalmology and dermatology have also been explored.

    Pragmatic CAD systems refer to solutions that can perform reliably in a day-to-day clinical setting. In this research, two of possibly many aspects of a pragmatic CAD are addressed. Firstly, observing that existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, recognising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases is successfully performed on one of the largest publicly available datasets of skin lesions.

    Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is systematically utilised in an attempt to improve the accuracy. For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which set a new state of the art for glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained classifiers with fine-grained classifiers and grades the disease in four stages with respect to severity. Lastly, different methods of modelling and incorporating metadata are examined and their effect on a model's classification performance is studied.

    Confidence in a diagnosis is as important as the diagnosis itself. One of the biggest reasons hampering the successful deployment of CAD in the real world is that medical diagnosis cannot be readily decided based on an algorithm's output. Therefore, a hybrid CNN architecture is proposed, with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy, and it also provides an estimate of uncertainty for every prediction.

    Explainability of AI-driven algorithms has become a legal requirement since Europe's General Data Protection Regulation (GDPR) came into effect. This research presents a framework for easy-to-understand textual explanations of skin lesion diagnosis, called ExAID (Explainable AI for Dermatology), which relies upon two fundamental modules. The first module uses any deep skin lesion classifier and performs a detailed analysis of its latent space to map human-understandable disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating the regions that correspond to a learned concept in the latent space of a trained image classifier.

    This thesis probes many viable solutions to equip a CAD system with PACE. However, it is noted that some of these methods require specific attributes in datasets and, therefore, not all methods may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
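
    A minimal sketch of the hybrid architecture idea, assuming PyTorch and using Monte Carlo dropout as a stand-in for the Bayesian dense classifier (the thesis's actual Bayesian estimator is not specified in the abstract); the HybridCNN name and layer sizes are illustrative.

    import torch
    import torch.nn as nn

    class HybridCNN(nn.Module):
        """Deterministic convolutional feature extractor followed by a
        dense head kept stochastic at test time (MC dropout), so that
        repeated forward passes yield an uncertainty estimate."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.classifier = nn.Sequential(
                nn.Dropout(p=0.5),        # left active at inference
                nn.Linear(16, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    def predict_with_uncertainty(model, x, passes=20):
        model.train()  # keep dropout active for MC sampling
        with torch.no_grad():
            probs = torch.stack(
                [model(x).softmax(dim=-1) for _ in range(passes)]
            )
        return probs.mean(0), probs.std(0)  # prediction and spread

    mean, std = predict_with_uncertainty(HybridCNN(), torch.randn(1, 3, 64, 64))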

    Automatic detection of the presence of ocular pathology in retinal fundus images using image processing techniques

    Sight is one of the most important senses for human life. In recent years the number of ocular diseases has increased, and scientists predict that it will continue to rise in the coming years. Some ocular diseases have become major causes of vision loss worldwide, such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), and cataracts. These ocular diseases usually cause alterations in the human eye that can be detected by observation. One of the most widespread techniques for observing the back of the eye is retinography, a digital colour image of the retina. This image is very useful for diagnosing diseases that affect the eye, such as DR and AMD, among others. However, the growing incidence of some ocular diseases and the shortage of specialist ophthalmologists make the analysis of retinal images a complex and laborious task. The aim of this Bachelor's thesis was the design and development of an automatic method to differentiate between pathological and non-pathological retinal images. This method would help in the diagnosis and screening of patients with ocular diseases and reduce the workload of ophthalmologists. The starting point was a database of 1044 images of adequate quality for automatic processing. Of these, 326 belonged to healthy subjects and 819 to patients with some type of pathology. These images were divided into a training set (559 images) and a test set (585 images). In all cases, a specialist ophthalmologist indicated whether the images were normal or pathological.
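
    A minimal sketch of the screening setup described above, assuming scikit-learn; the fixed 559/585 train/test split follows the abstract, but the random feature vectors and the logistic-regression classifier are placeholders for the thesis's actual image-processing pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Illustrative stand-in: feature vectors extracted from fundus images.
    # The thesis's actual image-processing features are not reproduced here.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(559, 10)), rng.integers(0, 2, 559)
    X_test, y_test = rng.normal(size=(585, 10)), rng.integers(0, 2, 585)

    # 1 = pathological, 0 = normal, as labelled by the ophthalmologist.
    model = LogisticRegression().fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))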

    Aerospace Medicine and Biology: A continuing bibliography with indexes

    This bibliography lists 253 reports, articles, and other documents introduced into the NASA scientific and technical information system in October 1975.

    Diabetic retinopathy detection with texture features

    Diabetic retinopathy is one of the leading causes of visual impairment and blindness in the world, and its prevalence keeps increasing. It is a vascular disorder of the retina and a symptom of diabetes mellitus. The health of the retina is studied with non-invasive retinal imaging. However, the analysis of the retinal images is laborious and subjective, and the number of images to be reviewed is increasing. In this master's thesis, a computer-aided detection system for diabetic retinopathy, microaneurysms, and small hemorrhages was designed and implemented. The purpose of this study was to find out whether texture features can produce descriptive and efficient information for retinal image classification, and whether the implemented system is accurate. The process included image preprocessing, extraction of 21 texture features, feature selection, and classification with a support vector machine. The retinal image datasets used for testing were Messidor, DIARETDB1, and e-ophtha. The texture features were not successful when classifying whole retinal images into diabetic retinopathy or normal. The best average accuracy was 69%, with 72% average sensitivity and 66% average specificity. Texture features are not as descriptive when used as global features over a whole retinal image. Additionally, the varying size of the images and the variation within a class affected the classification. The second experiment studied the classification of images into microaneurysm or normal by dividing the retinal images into blocks. The texture features were successful when the images were divided into small blocks of size 50×50 pixels. The best average accuracy was 96%, with 96% average sensitivity and 96% average specificity. Texture features are more descriptive in a local setting, since they can then extract finer details. By lowering the amount of manual labor, the computer-aided detection system can ease the clinical workflow of ophthalmologists and other experts and make retinal image analysis more efficient, accurate, and precise. To develop the system further, optic disc and image-quality detectors are needed.
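
    A rough sketch of the block-based texture pipeline, assuming grey-level co-occurrence (GLCM) statistics from scikit-image stand in for the 21 texture features (which the abstract does not enumerate) and random tiles stand in for fundus image blocks.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def block_texture_features(image, block=50):
        """Split a grayscale uint8 image into block x block tiles and
        compute a few GLCM statistics per tile."""
        feats = []
        for r in range(0, image.shape[0] - block + 1, block):
            for c in range(0, image.shape[1] - block + 1, block):
                tile = image[r:r + block, c:c + block]
                glcm = graycomatrix(tile, distances=[1], angles=[0],
                                    levels=256, symmetric=True, normed=True)
                feats.append([graycoprops(glcm, p)[0, 0]
                              for p in ("contrast", "homogeneity", "energy")])
        return np.array(feats)

    # Illustrative training data: random tiles with dummy labels.
    rng = np.random.default_rng(0)
    X = block_texture_features(rng.integers(0, 256, (200, 200), dtype=np.uint8))
    y = rng.integers(0, 2, len(X))  # 1 = microaneurysm block (dummy labels)

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:3]))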

    OFSET_mine: an integrated framework for cardiovascular diseases risk prediction based on retinal vascular function

    As cardiovascular disease (CVD) represents a spectrum of disorders that often manifest for the first time through an acute life-threatening event, early identification of seemingly healthy subjects with various degrees of risk is a priority. More recently, traditional scores used for early identification of CVD risk are slowly being replaced by more sensitive biomarkers that assess individual, rather than population, risks for CVD. Among these, retinal vascular function, as assessed by the retinal vessel analysis (RVA) method, has been proven an accurate reflection of subclinical CVD in groups of participants without overt disease but with certain inherited or acquired risk factors. Furthermore, in order to correctly detect individual risk at an early stage, specialized machine learning methods and feature selection techniques that can cope with the characteristics of the data need to be devised.

    The main contribution of this thesis is an integrated framework, OFSET_mine, that combines novel machine learning methods to produce a bespoke solution for cardiovascular risk prediction based on RVA data that is also applicable to other medical datasets with similar characteristics. The three identified essential characteristics are 1) an imbalanced dataset, 2) high dimensionality, and 3) overlapping feature ranges with the possibility of acquiring new samples. The thesis proposes FiltADASYN as an oversampling method that deals with imbalance, DD_Rank as a feature selection method that handles high dimensionality, and GCO_mine as a method for individual-based classification, all three integrated within the OFSET_mine framework.

    The new oversampling method FiltADASYN extends Adaptive Synthetic Oversampling (ADASYN) with an additional step that filters the generated samples and improves the reliability of the resultant sample set. The feature selection method DD_Rank is based on the Restricted Boltzmann Machine (RBM) and ranks features according to their stability and discrimination power. GCO_mine is a lazy learning method based on Graph Cut Optimization (GCO), which considers both the local arrangements and the global structure of the data.

    OFSET_mine compares favourably to well-established composite techniques. It exhibits high classification performance when applied to a wide range of benchmark medical datasets with variable sample sizes, dimensionality, and imbalance ratios. When applying OFSET_mine to our RVA data, an accuracy of 99.52% is achieved. In addition, using OFSET, the hybrid solution of FiltADASYN and DD_Rank, with Random Forest on our RVA data produces risk-group classifications with an accuracy of 99.68%. This not only reflects the success of the framework but also establishes RVA as a valuable cardiovascular risk predictor.
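
    A rough sketch of the oversample-then-filter idea behind FiltADASYN, assuming imbalanced-learn's ADASYN; the filtering rule shown (keep a synthetic point only if its original-data neighbourhood is dominated by its own class) is an illustrative guess, not the thesis's actual criterion.

    import numpy as np
    from imblearn.over_sampling import ADASYN
    from sklearn.datasets import make_classification
    from sklearn.neighbors import NearestNeighbors

    X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=0)

    X_res, y_res = ADASYN(random_state=0).fit_resample(X, y)
    n_orig = len(X)  # ADASYN appends synthetic samples after the originals

    # Filtering step (illustrative): keep a synthetic minority sample only
    # if most of its nearest original neighbours share its class.
    nn = NearestNeighbors(n_neighbors=5).fit(X)
    keep = []
    for i in range(n_orig, len(X_res)):
        _, idx = nn.kneighbors(X_res[i:i + 1])
        if (y[idx[0]] == y_res[i]).mean() >= 0.5:
            keep.append(i)

    X_filt = np.vstack([X, X_res[keep]])
    y_filt = np.concatenate([y, y_res[keep]])
    print(len(X_res) - n_orig, "synthetic generated,", len(keep), "kept")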