
    An automated wound identification system based on image segmentation and artificial neural networks

    Chronic wounds are a global, ongoing health challenge that afflicts a large number of people. Effective diagnosis and treatment of these wounds rely largely on precise identification and measurement of the wounded tissue; however, in current clinical practice, wound evaluation is based on subjective visual inspection and manual measurements, which are often inaccurate. An automatic computer-based system for fast and accurate segmentation and identification of wounds is desirable, both for improving health outcomes in chronic wound care and management and for making clinical practice more efficient and cost-effective. As presented in this thesis, we design such a system that uses color wound photographs taken from patients and is capable of automatic image segmentation and wound region identification. Several commonly used segmentation methods are utilized to obtain a collection of candidate wound areas, and the parameters of each method are fine-tuned through an optimization procedure. Two different types of Artificial Neural Networks (ANNs), the Multi-Layer Perceptron (MLP) and the Radial Basis Function (RBF) network, with parameters decided by a cross-validation approach, are then applied with supervised learning in the prediction procedure, and their results are compared. The satisfactory results of this system suggest a promising tool to assist in clinical wound evaluation. M.S., Biomedical Engineering -- Drexel University, 201
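    As a rough illustration of the supervised prediction stage described above, the sketch below tunes an MLP over placeholder candidate-region features with cross-validated grid search in scikit-learn; the feature matrix, labels and parameter grid are assumptions rather than the thesis' actual setup, and since scikit-learn has no built-in RBF network, only the MLP branch is shown.

    # Minimal sketch of the supervised prediction stage: an MLP classifier whose
    # hyper-parameters are selected by cross-validation. X and y are placeholders
    # for features extracted from candidate wound regions and wound/non-wound labels.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.random((200, 12))              # placeholder region features (e.g. colour statistics)
    y = rng.integers(0, 2, size=200)       # placeholder labels: 1 = wound, 0 = non-wound

    param_grid = {
        "mlpclassifier__hidden_layer_sizes": [(16,), (32,), (32, 16)],
        "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2],
    }
    pipeline = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
    search = GridSearchCV(pipeline, param_grid, cv=5)   # 5-fold cross-validation
    search.fit(X, y)
    print(search.best_params_, search.best_score_)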

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Burning Skin Detection System in Human Body

    Early, accurate burn depth diagnosis is crucial for selecting appropriate clinical intervention strategies and assessing burn patient prognosis. However, with limited diagnostic accuracy, the current burn depth diagnosis approach still relies primarily on the empirical, subjective assessment of clinicians. With the rapid development of artificial intelligence technology, integrating deep learning algorithms with image analysis technology can more accurately identify and evaluate the information in medical images. The objective of this work is to detect and classify burn areas in medical images using an unsupervised deep learning algorithm. The main contribution is the development of computations using one of the deep learning algorithms. To demonstrate the effectiveness of the proposed framework, experiments are performed on a benchmark to evaluate system stability. The results indicate that the proposed system is simple and suits real-life applications. The system accuracy was 75% when compared with some of the state-of-the-art techniques.

    A Comparative Study of Segmentation Algorithms in the Classification of Human Skin Burn Depth

    A correct first assessment of skin burn depth is essential, as it determines the correct first burn treatment provided to patients. The objective of this paper is to conduct a comparative study of different segmentation algorithms for the classification of different burn depths. Eight different hybrid segmentation algorithms were studied on a skin burn dataset comprising skin burn images categorized into three burn classes by medical experts: superficial partial thickness burn (SPTB), deep partial thickness burn (DPTB) and full thickness burn (FTB). Different sequences of the algorithms were experimented with, as each algorithm segmented differently, leading to different segmentations in the final output. The performance of the segmentation algorithms was evaluated by calculating the number of correctly segmented images for each burn depth. The empirical results showed that the best-performing segmentation algorithm achieved 40.24%, 60.42% and 6.25% of correctly segmented images for SPTB, DPTB and FTB respectively. Most of the segmentation algorithms could not segment FTB images well because of the different nature of the burn wounds, as some of the FTB images contained dark brown and black colors. It can be concluded that a good segmentation algorithm is required to ensure that the representative features of each burn depth can be extracted, contributing to higher accuracy in the classification of skin burn depth.
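    The per-class evaluation described above (the share of images in each burn-depth class whose segmentation was judged correct) amounts to a simple tally; the following sketch assumes a list of (burn class, correct/incorrect) judgements and is illustrative only.

    # Sketch of the per-class evaluation: the fraction of images of each burn depth
    # (SPTB, DPTB, FTB) whose segmentation output was judged correct.
    from collections import defaultdict

    # Placeholder judgements; in the study these come from comparing each
    # algorithm's output against the expert-labelled burn regions.
    results = [("SPTB", True), ("SPTB", False), ("DPTB", True), ("FTB", False)]

    totals, correct = defaultdict(int), defaultdict(int)
    for burn_class, ok in results:
        totals[burn_class] += 1
        correct[burn_class] += int(ok)

    for burn_class in sorted(totals):
        rate = 100.0 * correct[burn_class] / totals[burn_class]
        print(f"{burn_class}: {rate:.2f}% correctly segmented")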

    Imparting 3D representations to artificial intelligence for a full assessment of pressure injuries.

    During recent decades, researchers have shown great interest in machine learning techniques for extracting meaningful information from the large amounts of data being collected every day. In the medical field especially, images play a significant role in the detection of several health issues. Hence, medical image analysis contributes remarkably to the diagnosis process and is considered a suitable environment for interaction with the technology of intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient in detecting underlying features in the data and has outperformed classical machine learning methods. The main objective of this dissertation is to prove the efficiency of Deep Learning techniques in tackling, through medical imaging, one of the important health issues facing our society. Pressure injuries are a dermatology-related health issue associated with increased morbidity and health care costs. Managing pressure injuries appropriately is increasingly important for all professionals in wound care. Using 2D photographs and 3D meshes of these wounds, collected from collaborating hospitals, our mission is to create intelligent systems for a full non-intrusive assessment of these wounds. Five main tasks have been achieved in this study: a literature review of wound imaging methods using machine learning techniques, the classification and segmentation of the tissue types inside the pressure injury, the segmentation of these wounds, the design of an end-to-end system which measures all the necessary quantitative information from 3D meshes for an efficient assessment of PIs, and the integration of the assessment imaging techniques in a web-based application.
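    To give a concrete sense of the kind of quantitative information that can be read off a 3D wound mesh, here is a minimal sketch using the trimesh library; the library choice, file name and measurements are assumptions for illustration, not the dissertation's actual pipeline.

    # Illustrative only: load a cropped 3D wound mesh and derive simple quantitative
    # measurements. trimesh and the file name are assumptions, not the study's tooling.
    import trimesh

    mesh = trimesh.load("wound_region.ply")   # hypothetical cropped wound mesh
    surface_area = mesh.area                  # total surface area (in the scan's units)
    length, width, height = mesh.extents      # axis-aligned bounding-box extents
    hull_volume = mesh.convex_hull.volume     # rough volume proxy via the convex hull

    print(f"area={surface_area:.2f}, extents=({length:.2f}, {width:.2f}, {height:.2f}), "
          f"hull volume={hull_volume:.2f}")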

    Neural Network for Papaya Leaf Disease Detection

    The scientific name of papaya is Carica papaya, an herbaceous perennial in the family Caricaceae grown for its edible fruit. The papaya plant is tree-like, usually unbranched, and has hollow stems and petioles. It originates from Costa Rica, Mexico and the USA. Common names for papaya are pawpaw and tree melon. In the East Indies and Southern Asia, it is known as tapaya, kepaya, lapaya and kapaya. In Brazil, it is known as Mamao. Papayas are a soft, fleshy fruit that can be used in a wide variety of culinary ways. The possible health benefits of consuming papaya include a reduced risk of heart disease, diabetes and cancer, aiding digestion, improving blood glucose control in people with diabetes, lowering blood pressure, and improving wound healing. Disease identification at an early stage can increase crop productivity and hence lead to economic growth. This work deals with the leaf rather than the fruit. Images of papaya leaf samples, together with image compression, image filtering and several image generation techniques, are used to obtain trained data image sets and hence provide a better product. This paper focuses on the power of neural networks for detecting diseases in papaya. Image segmentation is done with the help of the k-medoid clustering algorithm, which is a partitioning-based clustering method.
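    As a minimal sketch of the k-medoid segmentation step mentioned above (not the authors' implementation), the following clusters pixel colours with a simple alternating k-medoids routine; the image, the number of clusters and the colour-space choice are assumptions.

    # Minimal alternating k-medoids on pixel colours, as a stand-in for the
    # partitioning-based segmentation step described above (illustrative only).
    import numpy as np

    def kmedoids(points, k, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        medoids = points[rng.choice(len(points), k, replace=False)]
        labels = np.zeros(len(points), dtype=int)
        for _ in range(n_iter):
            # assign each point to its nearest medoid
            dists = np.linalg.norm(points[:, None, :] - medoids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # move each medoid to the cluster member with the smallest total distance
            for j in range(k):
                members = points[labels == j]
                if len(members) == 0:
                    continue
                within = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
                medoids[j] = members[within.sum(axis=1).argmin()]
        return labels, medoids

    # Example: partition the pixels of a (downsampled) RGB leaf photo into k=3 regions,
    # e.g. healthy tissue, diseased spots and background (an assumed setup).
    image = np.random.rand(32, 32, 3)        # placeholder for a loaded leaf image
    pixels = image.reshape(-1, 3)
    labels, medoids = kmedoids(pixels, k=3)
    segmented = labels.reshape(image.shape[:2])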

    Risk prediction analysis for post-surgical complications in cardiothoracic surgery

    Cardiothoracic surgery patients are at risk of developing surgical site infections (SSIs), which cause hospital readmissions, increase healthcare costs and may lead to mortality. The first 30 days after hospital discharge are crucial for preventing this kind of infection. As an alternative to a hospital-based diagnosis, an automatic digital monitoring system can help with the early detection of SSIs by analyzing daily images of patients' wounds. However, analyzing a wound automatically is one of the biggest challenges in medical image analysis. The proposed system is integrated into a research project called CardioFollow.AI, which developed a digital telemonitoring service to follow up the recovery of cardiothoracic surgery patients. The present work aims to tackle the problem of SSIs by predicting the existence of worrying alterations in wound images taken by patients, with the help of machine learning and deep learning algorithms. The developed system is divided into a segmentation model, which detects the wound region and categorizes the wound type, and a classification model, which predicts the occurrence of alterations in the wounds. The dataset consists of 1337 images with chest wounds (WC), drainage wounds (WD) and leg wounds (WL) from 34 cardiothoracic surgery patients. For segmenting the images, an architecture with a MobileNet encoder and a U-Net decoder was used to obtain the regions of interest (ROI) and attribute the wound class. The following model was divided into three sub-classifiers, one per wound type, in order to improve performance. Color and textural features were extracted from the wounds' ROIs to feed one of three machine learning classifiers (Random Forest, Support Vector Machine and K-Nearest Neighbors), which predict the final output. The segmentation model achieved a final mean IoU of 89.9%, a Dice coefficient of 94.6% and a mean average precision of 90.1%, showing good results. As for the classification algorithms, the WL classifier exhibited the best results with an 87.6% recall and 52.6% precision, while the WC classifier achieved a 71.4% recall and 36.0% precision. The WD classifier had the worst performance, with a 68.4% recall and 33.2% precision. The obtained results demonstrate the feasibility of this solution, which can be a starting point for preventing SSIs through image analysis with artificial intelligence.
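    As a rough sketch of the classification stage described above, the snippet below extracts simple colour statistics plus GLCM texture properties from a wound ROI and feeds them to a Random Forest; the specific descriptors, the ROI placeholders and the labels are assumptions, not the exact features used in the work.

    # Sketch of the wound-alteration classification stage: colour + texture features
    # from each wound ROI fed to a Random Forest (feature choices are illustrative).
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import RandomForestClassifier

    def roi_features(roi_rgb):
        """Per-channel colour means/stds plus a few GLCM texture properties."""
        colour = np.concatenate([roi_rgb.mean(axis=(0, 1)), roi_rgb.std(axis=(0, 1))])
        grey = (rgb2gray(roi_rgb) * 255).astype(np.uint8)
        glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        texture = [graycoprops(glcm, p)[0, 0]
                   for p in ("contrast", "homogeneity", "energy", "correlation")]
        return np.concatenate([colour, texture])

    # Placeholder ROIs and labels (1 = worrying alteration, 0 = normal healing).
    rois = [np.random.rand(64, 64, 3) for _ in range(40)]
    labels = np.random.randint(0, 2, size=40)

    X = np.stack([roi_features(r) for r in rois])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    print(clf.predict(X[:5]))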

    Medical Image Segmentation with Deep Convolutional Neural Networks

    Medical imaging is the technique and process of creating visual representations of a patient's body for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images are time-consuming, and inaccurate when the interpreter is not well trained. Fully automatic segmentation of the region of interest from medical images has been researched for years to enhance the efficiency and accuracy of understanding such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and sparked research interest in medical image segmentation using deep learning. We propose three convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks and the full implementations are published.
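    The three frameworks themselves are not detailed in this abstract; purely to illustrate the kind of fully convolutional encoder-decoder commonly used for such segmentation tasks, here is a small generic Keras sketch (an assumption, not one of the proposed models).

    # Generic tiny encoder-decoder for binary medical-image segmentation, shown only
    # to illustrate the class of models discussed above (not the proposed frameworks).
    import tensorflow as tf
    from tensorflow.keras import layers

    def tiny_seg_net(input_shape=(128, 128, 1)):
        inputs = tf.keras.Input(shape=input_shape)
        # encoder: two strided downsampling stages
        x1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
        x2 = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x1)
        x3 = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x2)
        # decoder: upsample back to input resolution with skip connections
        u2 = layers.Concatenate()([layers.Conv2DTranspose(32, 3, strides=2, padding="same")(x3), x2])
        u1 = layers.Concatenate()([layers.Conv2DTranspose(16, 3, strides=2, padding="same")(u2), x1])
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(u1)  # per-pixel tissue probability
        return tf.keras.Model(inputs, outputs)

    model = tiny_seg_net()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()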