
    Syn3DWound: A Synthetic Dataset for 3D Wound Bed Analysis

    Wound management poses a significant challenge, particularly for bedridden and elderly patients. Diagnosis and healing monitoring can benefit significantly from modern image analysis, which provides accurate and precise wound measurements. Despite several existing techniques, the shortage of large and diverse training datasets remains a significant obstacle to constructing machine-learning-based frameworks. This paper introduces Syn3DWound, an open-source dataset of high-fidelity simulated wounds with 2D and 3D annotations. We propose baseline methods and a benchmarking framework for automated 3D morphometry analysis and 2D/3D wound segmentation.
    Comment: In the IEEE International Symposium on Biomedical Imaging (ISBI) 202

    Taking aim at moving targets in computational cell migration

    Cell migration is central to the development and maintenance of multicellular organisms. A fundamental understanding of cell migration can, for example, direct novel therapeutic strategies to control invasive tumor cells. However, the study of cell migration yields an overabundance of experimental data that require demanding processing and analysis to extract results. Computational methods and tools have therefore become essential in the quantification and modeling of cell migration data. We review computational approaches for the key tasks in the quantification of in vitro cell migration: image pre-processing, motion estimation and feature extraction. Moreover, we summarize the current state of the art for in silico modeling of cell migration. Finally, we provide a list of available software tools for cell migration to assist researchers in choosing the most appropriate solution for their needs.
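The feature-extraction step the review describes can be illustrated with a minimal sketch: two common migration metrics (mean speed and directionality ratio) computed from a tracked cell's centroids. The function name, the chosen metrics and the sample track are illustrative, not taken from the review.

```python
import math

def migration_features(track, dt=1.0):
    """Basic migration metrics from a list of (x, y) centroids
    sampled at a fixed time interval dt (e.g. minutes between frames)."""
    # Per-frame displacement vectors between consecutive centroids.
    steps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    path_length = sum(math.hypot(dx, dy) for dx, dy in steps)
    net_displacement = math.hypot(track[-1][0] - track[0][0],
                                  track[-1][1] - track[0][1])
    total_time = dt * (len(track) - 1)
    return {
        "mean_speed": path_length / total_time,
        # Directionality ratio: 1.0 for a straight path, near 0 for a random walk.
        "directionality": net_displacement / path_length if path_length else 0.0,
    }

# A cell moving in a straight line at 1 unit per frame:
print(migration_features([(0, 0), (1, 0), (2, 0), (3, 0)]))
# -> {'mean_speed': 1.0, 'directionality': 1.0}
```

In practice these metrics are computed per cell over hundreds of tracks produced by the motion-estimation stage, and fed into downstream statistics or models.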

    Risk prediction analysis for post-surgical complications in cardiothoracic surgery

    Cardiothoracic surgery patients are at risk of developing surgical site infections (SSIs), which cause hospital readmissions, increase healthcare costs and may lead to mortality. The first 30 days after hospital discharge are crucial for preventing these infections. As an alternative to hospital-based diagnosis, an automatic digital monitoring system can help with the early detection of SSIs by analyzing daily images of patients' wounds. However, analyzing a wound automatically is one of the biggest challenges in medical image analysis. The proposed system is integrated into a research project called CardioFollow.AI, which developed a digital telemonitoring service to follow up the recovery of cardiothoracic surgery patients. The present work aims to tackle the problem of SSIs by predicting the existence of worrying alterations in wound images taken by patients, with the help of machine learning and deep learning algorithms. The developed system is divided into a segmentation model, which detects the wound region and categorizes the wound type, and a classification model, which predicts the occurrence of alterations in the wounds. The dataset consists of 1337 images with chest wounds (WC), drainage wounds (WD) and leg wounds (WL) from 34 cardiothoracic surgery patients. For segmenting the images, an architecture with a MobileNet encoder and a U-Net decoder was used to obtain the regions of interest (ROIs) and assign the wound class. The classification model was divided into three sub-classifiers, one per wound type, to improve performance. Color and textural features were extracted from the wound ROIs to feed one of three machine learning classifiers (random forest, support vector machine and k-nearest neighbors), which predict the final output. The segmentation model achieved a final mean IoU of 89.9%, a Dice coefficient of 94.6% and a mean average precision of 90.1%, showing good results.
As for the classification algorithms, the WL classifier exhibited the best results with an 87.6% recall and 52.6% precision, while the WC classifier achieved a 71.4% recall and 36.0% precision. The WD classifier had the worst performance, with a 68.4% recall and 33.2% precision. The obtained results demonstrate the feasibility of this solution, which can be a starting point for preventing SSIs through image analysis with artificial intelligence.
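The mean IoU and Dice coefficient reported for the segmentation model can be computed directly from binary masks. The sketch below illustrates the metrics themselves on flattened 0/1 masks; it is not the authors' evaluation code, and the sample masks are made up.

```python
def iou_and_dice(pred, target):
    """Overlap metrics between two binary masks given as flat 0/1 lists."""
    intersection = sum(p & t for p, t in zip(pred, target))
    pred_area, target_area = sum(pred), sum(target)
    union = pred_area + target_area - intersection
    # Empty prediction and target count as a perfect match by convention.
    iou = intersection / union if union else 1.0
    denom = pred_area + target_area
    dice = 2 * intersection / denom if denom else 1.0
    return iou, dice

pred   = [1, 1, 1, 0, 0, 0]
target = [0, 1, 1, 1, 0, 0]
iou, dice = iou_and_dice(pred, target)
print(round(iou, 3), round(dice, 3))  # intersection 2, union 4 -> 0.5 0.667
```

The paper's "mean IoU" would average this per-image (or per-class) score over the test set.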

    3D bioprinting on moving surfaces using computer vision

    Abstract: 3D bioprinting is a rapidly developing field with many potential applications in medicine, bioengineering and possibly other areas. However, one of the main challenges in bioprinting is maintaining accuracy and precision during the printing process, especially when printing on a moving surface. Here, a moving surface means one that is not rigidly fixed or is unstable during printing, for example a human body, as when damaged tissue is treated directly in an ambulance on the way to the hospital or in another emergency setting. This paper proposes a novel method for 3D bioprinting on moving surfaces using computer vision and machine learning. The approach uses video processing techniques to analyze the bioprinting process in real time, enabling the identification of potential issues and the adjustment of the printing process to improve the quality of the printed tissue. This is an important step toward the development of scalable and reproducible bioprinting technologies for tissue engineering applications.
    Presentation given at the international congress held jointly by the Canadian Society for Mechanical Engineering (CSME) and the Computational Fluid Dynamics Society of Canada (CFD Canada) at the Université de Sherbrooke (Quebec), May 28-31, 2023
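The abstract does not detail its vision algorithms, but tracking a moving print surface in video typically reduces to estimating a displacement between frames. Below is a minimal, purely illustrative sketch of the underlying block-matching idea on 1-D intensity profiles (the function name and sample data are assumptions, not from the paper); real systems apply the same search per 2-D image patch.

```python
def estimate_shift(reference, frame, max_shift=5):
    """Estimate the integer pixel shift of `frame` relative to `reference`
    (two 1-D intensity profiles) by minimising the mean squared difference
    over a window of candidate shifts -- the core of block-matching
    motion estimation."""
    best_shift, best_cost = 0, float("inf")
    n = len(reference)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:  # compare only the overlapping region
                cost += (reference[i] - frame[j]) ** 2
                count += 1
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

ref   = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
moved = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]  # same profile shifted right by 2
print(estimate_shift(ref, moved))  # -> 2
```

The recovered offset would then feed a correction to the printer's toolpath so the nozzle follows the surface.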

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to let users specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To view wounds interactively in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
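The core step of projective texture mapping is projecting each 3D mesh vertex through a pinhole camera model to find where it lands in the wound photo, yielding texture coordinates. The sketch below shows only this projection step; the function name, camera parameters and sample values are illustrative assumptions, not the paper's implementation.

```python
def project_to_texture(point, focal, image_size):
    """Map a 3-D point in camera coordinates to normalised (u, v) texture
    coordinates via a pinhole projection -- the core of projective texture
    mapping, where the wound photo acts as the 'projector' image."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; no texture contribution
    w, h = image_size
    # Perspective divide, then shift the principal point to the image centre.
    px = focal * x / z + w / 2
    py = focal * y / z + h / 2
    if not (0 <= px < w and 0 <= py < h):
        return None  # projects outside the photo
    return px / w, py / h  # normalised UV in [0, 1)

# A vertex 0.1 units right of the optical axis, 1 unit in front of a
# 500 px focal-length camera, textured from a 640x480 photo:
print(project_to_texture((0.1, 0.0, 1.0), focal=500, image_size=(640, 480)))
```

A full implementation would also test visibility (depth buffering) so that occluded parts of the anatomy model do not receive the wound texture.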

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Characterising live cell behaviour: traditional label-free and quantitative phase imaging approaches

    Label-free imaging uses inherent contrast mechanisms within cells to create image contrast without introducing dyes or labels, which may confound results. Quantitative phase imaging is label-free and offers higher content and contrast than traditional techniques. High-contrast images facilitate the generation of individual cell metrics via more robust segmentation and tracking, enabling the formation of a label-free dynamic phenotype describing cell-to-cell heterogeneity and temporal changes. Compared to population-level averages, individual cell-level dynamic phenotypes have greater power to differentiate between cellular responses to treatments, which has clinical relevance, e.g. in the treatment of cancer. Furthermore, as the data are obtained label-free, the same cells can be used for further assays or expansion, of potential benefit for the fields of regenerative and personalised medicine.
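The per-cell metrics mentioned above can be sketched as a simple aggregation over a segmented phase image: for each labeled cell, accumulate its area and integrated phase (which, in quantitative phase imaging, is proportional to the cell's dry mass). The label-matrix format, function name and sample values are assumptions for illustration, not from the review.

```python
def cell_metrics(phase_image, mask, pixel_area=1.0):
    """Per-cell metrics from a quantitative phase image.

    phase_image: 2-D list of phase values (radians).
    mask: 2-D list of integer cell labels (0 = background).
    Returns {label: (area, integrated_phase)}; the integrated phase
    is proportional to the cell's dry mass."""
    metrics = {}
    for row_phase, row_mask in zip(phase_image, mask):
        for phi, label in zip(row_phase, row_mask):
            if label:  # skip background pixels
                area, total = metrics.get(label, (0.0, 0.0))
                metrics[label] = (area + pixel_area, total + phi)
    return metrics

phase  = [[0.0, 0.2, 0.0],
          [0.0, 0.4, 0.3]]
labels = [[0, 1, 0],
          [0, 1, 2]]
print(cell_metrics(phase, labels))
```

Tracking these per-cell values across frames is what builds the dynamic phenotype the text describes.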