352 research outputs found

    Biomedical time series analysis based on bag-of-words model

    This research proposes a number of new methods for biomedical time series classification and clustering based on a novel Bag-of-Words (BoW) representation. It is anticipated that the objective, automatic biomedical time series clustering and classification technologies developed in this work will benefit a wide range of applications, such as biomedical data management, archiving and retrieval, and disease diagnosis and prognosis.
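A minimal sketch of such a BoW pipeline for a 1-D series follows. The window length, stride, codebook size, and the tiny k-means quantizer are illustrative assumptions, not the configuration used in the work itself:

```python
import numpy as np

def bow_representation(series, window=32, stride=16, n_words=8, seed=0):
    """Represent a 1-D time series as a histogram over a learned codebook."""
    # Extract overlapping subsequences (the local "patterns").
    windows = np.array([series[i:i + window]
                        for i in range(0, len(series) - window + 1, stride)])
    # Z-normalize each window so the codebook captures shape, not offset.
    windows = (windows - windows.mean(axis=1, keepdims=True)) / (
        windows.std(axis=1, keepdims=True) + 1e-8)
    # Tiny k-means to learn the codebook ("vocabulary" of local shapes).
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), n_words, replace=False)]
    for _ in range(10):
        d = ((windows[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_words):
            if (labels == k).any():
                centers[k] = windows[labels == k].mean(axis=0)
    # Bag-of-words: normalized histogram of codeword occurrences.
    hist = np.bincount(labels, minlength=n_words).astype(float)
    return hist / hist.sum()

sig = np.sin(np.linspace(0, 20 * np.pi, 1000))  # toy biomedical-like signal
h = bow_representation(sig)
```

The resulting fixed-length histogram can then be fed to any standard classifier or clustering algorithm, which is what makes the representation convenient for series of varying length.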

    A history and theory of textual event detection and recognition


    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical imaging processing pipelines. The variability of human anatomy makes it virtually impossible to build large labelled and annotated datasets for each disease for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained using only normal/healthy subjects. However, despite significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it remains an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task.
A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance in lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep-learning-based methods, and is one of the first literature surveys attempted in this specific research area.
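As a toy illustration of the normative-learning idea (fit a model on healthy data only, then score deviations at test time), the sketch below substitutes a linear PCA reconstruction for the generative models used in the thesis; the dimensions and data are hypothetical:

```python
import numpy as np

def fit_normative_model(normal_data, n_components=2):
    """Fit a linear reconstruction model on healthy samples only.

    Stand-in for a generative model: PCA captures the "normal" subspace,
    and deviation from it is scored at test time.
    """
    mu = normal_data.mean(axis=0)
    # Principal directions of the normal population.
    _, _, vt = np.linalg.svd(normal_data - mu, full_matrices=False)
    return mu, vt[:n_components]

def anomaly_score(x, mu, components):
    """Reconstruction error: large when x lies off the normal subspace."""
    z = (x - mu) @ components.T      # project onto the learned subspace
    recon = mu + z @ components      # map back to data space
    return float(np.linalg.norm(x - recon))

# Synthetic "healthy" population living on a 2-D subspace of a 5-D space.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))
normal = rng.normal(size=(200, 2)) @ basis
mu, comps = fit_normative_model(normal)

s_norm = anomaly_score(normal[0], mu, comps)                   # on-manifold
s_anom = anomaly_score(normal[0] + 3 * np.ones(5), mu, comps)  # shifted off it
```

The same score-by-reconstruction pattern carries over when the linear model is replaced by a variational autoencoder or other generative model, as in the fetal ultrasound case study.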

    Representation learning for histopathology image analysis

    Automatic methods for image representation and analysis have been successfully applied in several medical imaging problems, leading to the emergence of novel research areas like digital pathology and bioimage informatics. The main challenge of these methods is dealing with the high visual variability of biological structures present in the images, which widens the semantic gap between their visual appearance and their high-level meaning. In particular, the visual variability in histopathology images is also related to the noise added by acquisition stages such as magnification, sectioning and staining, among others. Many efforts have focused on the careful selection of image representations to capture such variability. This approach requires expert knowledge as well as hand-engineered design to build good feature detectors that represent the relevant visual information. Current approaches to classical computer vision tasks have replaced such design with a new learning stage for the image representation itself, called representation learning. This paradigm has outperformed state-of-the-art results in many pattern recognition tasks such as speech recognition, object detection, and image scene classification. The aim of this research was to explore and define a learning-based histopathology image representation strategy with interpretative capabilities. The main contribution is a novel approach to learning the image representation for cancer detection. The proposed approach learns the representation directly from a basal-cell carcinoma image collection in an unsupervised way and was extended to extract more complex features from low-level representations. Additionally, this research proposes the digital staining module, a complementary interpretability stage that supports diagnosis through visual identification of discriminant and semantic features.
Experimental results showed a performance of 92% F-score, improving on the state-of-the-art representation by 7%. This research concluded that representation learning improves both feature-detector generalization and performance on the basal-cell carcinoma detection task. As additional contributions, a bag-of-features image representation was extended and evaluated for Alzheimer's detection, obtaining a 95% equal error classification rate, and a novel perspective for learning morphometric measures in cervical cells based on bag of features was presented and evaluated, obtaining promising results in predicting nuclei and cytoplasm areas.
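A minimal bag-of-features encoder in this spirit might look like the following, where the dictionary is simply sampled from the image's own patches, a crude stand-in for the unsupervised feature learning described above; patch size, stride, and dictionary size are all illustrative:

```python
import numpy as np

def extract_patches(img, size=4, stride=4):
    """Collect flattened square patches from a 2-D grayscale image."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def bag_of_features(img, dictionary):
    """Histogram of nearest-dictionary-atom assignments over all patches."""
    patches = extract_patches(img)
    d = ((patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.random((32, 32))                       # stand-in for a tissue image
# Unsupervised "learning": sample dictionary atoms from the data itself.
atoms = extract_patches(img)[rng.choice(64, 16, replace=False)]
rep = bag_of_features(img, atoms)                # fixed-length image descriptor
```

In the actual work the dictionary is learned (and stacked into deeper features) rather than sampled, but the encoding step, assign each patch to its closest atom and histogram the assignments, is the core of the representation.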

    Search for patterns of functional specificity in the brain: A nonparametric hierarchical Bayesian model for group fMRI data

    Functional MRI studies have uncovered a number of brain areas that demonstrate highly specific functional patterns. In the case of visual object recognition, small, focal regions have been characterized with selectivity for visual categories such as human faces. In this paper, we develop an algorithm that automatically learns patterns of functional specificity from fMRI data in a group of subjects. The method does not require spatial alignment of functional images from different subjects. The algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to learn the patterns of functional specificity shared across the group, which we call functional systems, and to estimate the number of these systems. Inference based on our model enables automatic discovery and characterization of dominant and consistent functional systems. We apply the method to data from a visual fMRI study comprising 69 distinct stimulus images. The discovered system activation profiles correspond to selectivity for a number of image categories such as faces, bodies, and scenes. Among the systems found by our method, we identify new areas that are deactivated by face stimuli. In empirical comparisons with previously proposed exploratory methods, our results appear superior in capturing the structure in the space of visual categories of stimuli.
Funding: McGovern Institute for Brain Research at MIT, Neurotechnology (MINT) Program; National Institutes of Health (U.S.) grants NIBIB NAMIC U54-EB005149 and NCRR NAC P41-RR13218; National Eye Institute grant 13455; National Science Foundation (U.S.) CAREER grant 0642971 and grant IIS/CRCNS 0904625; Harvard University--MIT Division of Health Sciences and Technology (Catalyst Grant); American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship.
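The two-layer idea can be caricatured in a few lines. In this simplified sketch the paper's Hierarchical Dirichlet Process prior, which also infers the number of systems, is replaced by a tiny k-means with a fixed number of systems; everything here (threshold, data, cluster count) is an illustrative assumption:

```python
import numpy as np

def discover_systems(responses, threshold=0.5, n_systems=3, seed=0):
    """Cluster voxels by their binary stimulus-activation profiles.

    Clustering operates on activation profiles rather than voxel
    coordinates, which is why no spatial alignment across subjects
    is needed before pooling the data.
    """
    active = (responses > threshold).astype(float)   # lower layer: binarize
    rng = np.random.default_rng(seed)
    centers = active[rng.choice(len(active), n_systems, replace=False)]
    for _ in range(10):
        d = ((active[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_systems):
            if (labels == k).any():
                centers[k] = active[labels == k].mean(axis=0)
    return labels, centers

# Three synthetic "functional systems", 20 voxels each, 4 stimuli.
base = np.array([[1.0, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]])
responses = np.repeat(base, 20, axis=0)
labels, centers = discover_systems(responses)
```

The nonparametric prior in the paper is precisely what removes the fixed `n_systems` assumption made here, letting the data determine how many shared systems exist.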

    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Improving lifestyles, encouraging healthy behaviours, and reducing chronic disease are urgently required, yet tracking and monitoring critical conditions of the elderly and patients remains a great challenge, and healthcare services for these people are crucial to achieving a high level of safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and patients. The main aim of this review is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
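The recognition chain described above (sensing, segmentation, feature extraction, classification) can be sketched end to end on simulated data. The window size, features, and nearest-centroid classifier below are illustrative placeholders for the many alternatives the review surveys:

```python
import numpy as np

def segment(signal, window, stride):
    """Sliding-window segmentation of a raw sensor stream."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, stride)]

def features(window):
    """Classic time-domain features computed per segment."""
    return np.array([window.mean(), window.std(), window.min(), window.max()])

def nearest_centroid_classify(x, centroids):
    """Minimal classification stage: nearest class centroid in feature space."""
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

# Simulated accelerometer traces: periodic "active" vs. flat "resting".
rng = np.random.default_rng(0)
t = np.arange(200)
active = np.sin(t / 3.0) + 0.1 * rng.normal(size=200)
resting = 0.05 * rng.normal(size=200)

active_f = np.array([features(w) for w in segment(active, 50, 25)])
resting_f = np.array([features(w) for w in segment(resting, 50, 25)])
centroids = [active_f.mean(axis=0), resting_f.mean(axis=0)]   # class 0, class 1

pred_active = nearest_centroid_classify(features(np.sin(t[:50] / 3.0)), centroids)
pred_rest = nearest_centroid_classify(features(0.05 * rng.normal(size=50)), centroids)
```

Real systems replace each stage with richer options (multi-sensor fusion, frequency-domain features, deep classifiers), but the chain structure is the same.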

    Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future

    Mining algorithms for Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance the current state of the art in computer-aided detection and analysis of breast tumours observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford-algebra-based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised and self-supervised deep learning strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. To address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology applicable to tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease become possible.
The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201