
    Analysis of images of cerebral cortex cells in vitro using a deep learning method

    The article presents a method for analyzing images of cultured cortical cells for quantitative analysis of the parameters of development of biological neural networks using machine learning approaches. We have developed software modules for segmenting images into cells, clusters, and neurites using a neural network model and deep learning; a training set of images of cultivated neurons with corresponding segmentation masks has been generated. The results were validated by analyzing the development of cultivated neurons in vitro based on measuring neurite length at different growth stages of the culture. The developed methods for monitoring the formation of biological neuronal networks, based on analysis of neuronal growth under different conditions and on different substrates, make it possible to monitor stem cell differentiation in the neurogenic direction. The results can be used to monitor the formation of organoids in bioengineering applications, as well as in modeling nerve tissue regeneration.
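    The neurite-length counts mentioned above can be sketched as a simple mask-based measurement. The function below is a hypothetical illustration only: the paper's CNN would produce the segmentation mask, and `um_per_px` is an assumed calibration parameter, not a value from the paper.

    ```python
    import numpy as np

    def neurite_length_um(skeleton_mask, um_per_px=1.0):
        # Rough neurite-length proxy: count foreground (skeleton) pixels
        # in the binary mask and convert to micrometres using the
        # microscope's pixel calibration.
        return float(np.count_nonzero(skeleton_mask)) * um_per_px

    # A toy 5x5 mask with a 5-pixel "neurite" along one row.
    mask = np.zeros((5, 5), dtype=bool)
    mask[2, :] = True
    length = neurite_length_um(mask, um_per_px=0.5)  # 5 px * 0.5 um/px
    ```

    Tracking this quantity across culture ages is what allows growth on different substrates to be compared quantitatively.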

    Enhancing Knee Osteoarthritis severity level classification using diffusion augmented images

    This research paper explores the classification of knee osteoarthritis (OA) severity levels using advanced computer vision models and augmentation techniques. The study investigates the effectiveness of data preprocessing, including Contrast-Limited Adaptive Histogram Equalization (CLAHE), and data augmentation using diffusion models. Three experiments were conducted: training models on the original dataset, training models on the preprocessed dataset, and training models on the augmented dataset. The results show that data preprocessing and augmentation significantly improve the accuracy of the models. The EfficientNetB3 model achieved the highest accuracy of 84% on the augmented dataset. Additionally, attention visualization techniques, such as Grad-CAM, are utilized to provide detailed attention maps, enhancing the understanding and trustworthiness of the models. These findings highlight the potential of combining advanced models with augmented data and attention visualization for accurate knee OA severity classification. Comment: The paper has been accepted for presentation at ICACECS 2023; the final version will be published in Atlantis Highlights in Computer Science (AHCS), Atlantis Press (part of Springer Nature).
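    CLAHE builds on plain histogram equalization by adding per-tile histograms with clip-limiting and interpolation of the resulting mappings. As a rough, library-free sketch of the underlying idea (not the paper's exact preprocessing), global histogram equalization can be written as:

    ```python
    import numpy as np

    def hist_equalize(img, levels=256):
        # Global histogram equalization: map each intensity through the
        # normalized cumulative distribution function (CDF), stretching
        # contrast so intensities spread over the full range.
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = hist.cumsum()
        cdf = cdf / cdf[-1]                      # normalize to [0, 1]
        lut = np.round(cdf * (levels - 1)).astype(np.uint8)
        return lut[img]

    # CLAHE (as used in the paper) additionally splits the image into
    # tiles, clips each tile's histogram at a limit before computing the
    # CDF, and bilinearly interpolates the per-tile mappings.
    img = np.array([[50, 50, 52], [51, 200, 52], [50, 51, 53]], dtype=np.uint8)
    out = hist_equalize(img)
    ```

    In practice a library routine such as OpenCV's `createCLAHE` would be used; the sketch only shows why equalization increases local contrast in the radiographs.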

    Versatile and Economical Acquisition Setup for Dorsa Palm Vein Authentication

    Various biometrics are employed in many applications for security purposes; among them, palm vein biometrics is one of the best methods for uniquely identifying a person, owing to the indestructible quality of the inner vein structures. In this paper, we propose our own setup for capturing the vein structure of the human dorsal palm using a web camera modified into a near-infrared camera. Illumination for capturing images is provided by 30 infrared LEDs. The objective of this paper is to present a versatile and economical way of obtaining vein images, rather than using a high-priced near-infrared camera, that can easily be deployed in small-scale applications. The setup can also be used to acquire finger veins simultaneously. We modified the web camera by removing its infrared filter and replacing it with a visible-light filter. The quality and performance of the newly acquired samples are tested with two different feature extraction methods, namely a correlation filter and the Speeded Up Robust Features (SURF) algorithm. The correlation method obtained much better results than SURF in identifying genuine samples.
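    The correlation-based matching step can be sketched with a simple normalized cross-correlation score and a decision threshold. This is a minimal illustration under assumptions, not the paper's exact filter: the threshold value and toy arrays are hypothetical.

    ```python
    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation between two aligned vein images:
        # close to 1.0 for identical patterns, negative for inverted ones.
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    def authenticate(probe, gallery, threshold=0.6):
        # Accept the probe if its best score against any enrolled
        # template exceeds the (hypothetical) decision threshold.
        scores = [ncc(probe, g) for g in gallery]
        return max(scores) >= threshold, max(scores)

    template = np.array([[1.0, 2.0], [3.0, 4.0]])
    accepted, best = authenticate(template, [-template, template])
    ```

    A real correlation filter would also handle translation (e.g. via FFT-based correlation); the score-and-threshold structure is the same.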

    Detection and classification of underwater animals in cabled observatories

    The oceans are sensitive ecosystems of great importance due to the biodiversity and resources they offer. Although the seabed is virtually inaccessible to humans, it can be explored using automated guided vehicles (AGVs) or fixed cameras provided by cabled observatories. These devices can capture images of fauna and flora in environments where natural light is scarce and pressure is high, in order to monitor the environment and the species that inhabit it. However, manually analyzing and classifying large numbers of images quickly is an impossible task. We therefore propose a process for the analysis of visual data that integrates techniques for the detection and classification of underwater animals with enhancement of seabed images and/or videos, in order to detect and classify marine species automatically.

    COVID-19 Detection in Chest X-ray Images using a Deep Learning Approach

    The coronavirus disease (COVID-19) is an infectious disease caused by a new virus that had not previously been detected in humans. The virus causes a respiratory illness like the flu, with symptoms such as cough or fever that, in severe cases, may lead to pneumonia. COVID-19 spreads quickly between people, having affected 1,200,000 people worldwide at the time of writing (April 2020). Because the numbers of infections and deaths are growing day by day, the aim of this study is to develop a quick method to detect COVID-19 in chest X-ray images using deep learning techniques. For this purpose, an object detection architecture is proposed, trained and tested on a publicly available dataset composed of 1500 images of non-infected patients and of patients infected with COVID-19 and pneumonia. The main goal of our method is to classify a patient's status as either a negative or a positive COVID-19 case. In our experiments using the SSD300 model we achieve 94.92% sensitivity and 92.00% specificity in COVID-19 detection, demonstrating the usefulness of deep learning models for classifying COVID-19 in X-ray images.
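    The sensitivity and specificity figures reported above are simple ratios over the confusion-matrix counts. A minimal sketch (the counts used below are hypothetical, not the paper's):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        # Sensitivity (recall): fraction of truly positive (COVID-19)
        # cases the detector flags. Specificity: fraction of truly
        # negative cases it correctly rules out.
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # Hypothetical counts for illustration only.
    sens, spec = sensitivity_specificity(tp=90, fn=10, tn=92, fp=8)
    ```

    High sensitivity matters most for a screening tool, since a false negative sends an infected patient away undetected.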

    MAP: Domain Generalization via Meta-Learning on Anatomy-Consistent Pseudo-Modalities

    Deep models suffer from limited generalization capability to unseen domains, which has severely hindered their clinical applicability. Specifically for the retinal vessel segmentation task, although the model is supposed to learn the anatomy of the target, it can be distracted by confounding factors like intensity and contrast. We propose Meta learning on Anatomy-consistent Pseudo-modalities (MAP), a method that improves model generalizability by learning structural features. We first leverage a feature extraction network to generate three distinct pseudo-modalities that share the vessel structure of the original image. Next, we use the episodic learning paradigm by selecting one of the pseudo-modalities as the meta-train dataset, and perform meta-testing on a continuous augmented image space generated through Dirichlet mixup of the remaining pseudo-modalities. Further, we introduce two loss functions that facilitate the model's focus on shape information by clustering the latent vectors obtained from images featuring identical vasculature. We evaluate our model on seven public datasets of various retinal imaging modalities and we conclude that MAP has substantially better generalizability. Our code is publicly available at https://github.com/DeweiHu/MAP
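    The Dirichlet mixup step described above amounts to sampling convex weights over the remaining pseudo-modalities and blending them, which yields a continuous space of meta-test images. A minimal NumPy illustration (the `alpha` value and toy inputs are assumptions, not the paper's settings):

    ```python
    import numpy as np

    def dirichlet_mixup(pseudo_modalities, alpha=1.0, rng=None):
        # Sample mixing weights w ~ Dirichlet(alpha, ..., alpha) and
        # blend the pseudo-modalities. The weights sum to 1, so every
        # mix lies in the convex hull of the inputs while the shared
        # vessel structure is preserved.
        rng = rng or np.random.default_rng()
        w = rng.dirichlet([alpha] * len(pseudo_modalities))
        mixed = sum(wi * m for wi, m in zip(w, pseudo_modalities))
        return mixed, w

    # Three toy "pseudo-modalities" standing in for images that share
    # one vasculature but differ in appearance.
    rng = np.random.default_rng(0)
    mods = [np.full((2, 2), v) for v in (0.2, 0.5, 0.8)]
    mixed, w = dirichlet_mixup(mods, alpha=1.0, rng=rng)
    ```

    Resampling `w` each episode is what turns three discrete pseudo-modalities into a continuous augmented image space.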

    Improving Lesion Segmentation for Diabetic Retinopathy using Adversarial Learning

    Diabetic Retinopathy (DR) is a leading cause of blindness in working-age adults. DR lesions can be challenging to identify in fundus images, and automatic DR detection systems can offer strong clinical value. Of the publicly available labeled datasets for DR, the Indian Diabetic Retinopathy Image Dataset (IDRiD) presents retinal fundus images with pixel-level annotations of four distinct lesions: microaneurysms, hemorrhages, soft exudates and hard exudates. We utilize the HEDNet edge detector to solve a semantic segmentation task on this dataset, and then propose an end-to-end system for pixel-level segmentation of DR lesions by incorporating HEDNet into a Conditional Generative Adversarial Network (cGAN). We design a loss function that adds adversarial loss to segmentation loss. Our experiments show that the addition of the adversarial loss improves the lesion segmentation performance over the baseline. Comment: Accepted at the International Conference on Image Analysis and Recognition, ICIAR 2019. Published at https://doi.org/10.1007/978-3-030-27272-2_29 Code: https://github.com/zoujx96/DR-segmentatio
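    The combined objective described above adds a weighted adversarial term to the pixel-wise segmentation loss. A framework-free NumPy sketch of that structure, where the weight `lam` is a hypothetical hyperparameter and the tiny arrays stand in for per-pixel predictions:

    ```python
    import numpy as np

    def bce(pred, target, eps=1e-7):
        # Binary cross-entropy, the usual per-pixel segmentation loss.
        pred = np.clip(pred, eps, 1 - eps)
        return float(-(target * np.log(pred)
                       + (1 - target) * np.log(1 - pred)).mean())

    def combined_loss(seg_pred, seg_target, disc_on_fake, lam=0.1):
        # Generator objective: segmentation loss plus a weighted
        # adversarial term that rewards fooling the discriminator
        # (discriminator output 1 = "looks like a real annotation").
        seg_loss = bce(seg_pred, seg_target)
        adv_loss = bce(disc_on_fake, np.ones_like(disc_on_fake))
        return seg_loss + lam * adv_loss

    seg_pred = np.array([0.9, 0.1])
    seg_target = np.array([1.0, 0.0])
    seg_only = bce(seg_pred, seg_target)
    total = combined_loss(seg_pred, seg_target, disc_on_fake=np.array([0.5]))
    ```

    The adversarial term pushes predicted lesion maps toward the statistics of real annotations, which is the mechanism the paper credits for the improvement over the plain segmentation baseline.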