
    Automated Quantification of White Blood Cells in Light Microscopic Images of Injured Skeletal Muscle

    Muscle regeneration tracking and analysis aim to monitor an injured muscle tissue section over time and to characterize the healing process. During this process, white blood cells (WBCs), one of the most diverse cell types observed, exhibit dynamic cellular responses and undergo multiple changes in protein expression. The characteristics, number, location, and distribution of the cells describe their activity, which may change over time. Their activity and relationships over the whole healing process can be analyzed by processing microscopic images taken at different time points after injury. Previous studies of muscle regeneration usually employ manual counting or basic intensity-based processing to detect and count WBCs. In comparison, computer vision methods are more promising in terms of accuracy, processing speed, and labor cost, and can extract features such as cell/cluster size and eccentricity quickly and accurately. In this thesis, we propose an automated quantification and analysis framework to analyze WBCs in light microscope images of uninjured and injured skeletal muscle. The proposed framework features a hybrid image segmentation method that combines a localized iterative Otsu thresholding method, assisted by neural network classifiers, with muscle edge detection. Specifically, both neural network and convolutional neural network based classifiers are studied and compared. Using this framework, CD68-positive and 7/4-positive WBC quantification and density distribution results are analyzed to demonstrate the effectiveness of the proposed method.
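
    As a rough illustration of the localized Otsu thresholding idea mentioned above (not the thesis implementation), the sketch below applies scikit-image's threshold_otsu independently to each tile of a grayscale micrograph; the tile size is an assumption, and the iterative refinement and classifier-assisted steps of the framework are omitted for brevity.

```python
# Minimal sketch of localized Otsu thresholding (illustration only, not the thesis code).
import numpy as np
from skimage.filters import threshold_otsu

def localized_otsu(image, tile=64):
    """Threshold each tile of a 2D grayscale image with its own Otsu level."""
    mask = np.zeros_like(image, dtype=bool)
    h, w = image.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = image[y:y + tile, x:x + tile]
            if block.size == 0 or block.min() == block.max():
                continue  # skip empty or flat tiles where Otsu is undefined
            t = threshold_otsu(block)
            mask[y:y + tile, x:x + tile] = block > t
    return mask
```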

    Use of Image Processing Techniques to Automatically Diagnose Sickle-Cell Anemia Present in Red Blood Cells Smear

    Sickle cell anemia is a blood disorder that results from abnormalities of red blood cells and shortens life expectancy to about 42 years for males and 48 years for females. It also causes pain, jaundice, shortness of breath, and other symptoms. Sickle cell anemia is characterized by the presence of abnormal cells such as sickle cells, ovalocytes, and anisopoikilocytes. Sickle cell disease usually presents in childhood and occurs more commonly in people from tropical and subtropical regions where malaria is or was very common. A healthy RBC is usually round in shape, but it can change shape into a sickle-like structure; this is called sickling of the RBC. The majority of the sickle cells found, whose shape resembles a crescent moon, are attributed to low haemoglobin content. An image processing algorithm to automate the diagnosis of sickle cells present in thin blood smears is developed. Images are acquired using a charge-coupled device camera connected to a light microscope. Clustering-based segmentation techniques are used to identify erythrocytes (red blood cells) and sickle cells present on the microscopic slides. Image features based on colour, texture, and the geometry of the cells are generated, as well as features that make use of a priori knowledge of the classification problem and mimic features used by human technicians. The red blood cell smears were obtained from IG Hospital, Rourkela. The proposed image processing based identification of sickle cells in anemic patients will be very helpful for automatic, streamlined, and effective diagnosis of the disease.
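
    The clustering-based segmentation and geometric features described above could look roughly like the following sketch, which clusters pixel colours with OpenCV's k-means and flags low-circularity (crescent-like) contours. The number of clusters, the minimum area, and the circularity cut-off are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: colour clustering plus a circularity feature to flag sickle-like cells.
import cv2
import numpy as np

def segment_and_flag(img_bgr, k=3, circularity_cutoff=0.6):
    pixels = img_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    # Assumption: the darkest colour cluster corresponds to stained cells.
    cell_cluster = np.argmin(centers.sum(axis=1))
    mask = (labels.reshape(img_bgr.shape[:2]) == cell_cluster).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    sickle_like = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if area < 50 or perim == 0:
            continue  # ignore noise specks
        circularity = 4 * np.pi * area / (perim ** 2)  # 1.0 for a perfect circle
        if circularity < circularity_cutoff:
            sickle_like.append(c)  # elongated/crescent shapes score low
    return mask, sickle_like
```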

    Hierarchically Clustered Adaptive Quantization CMAC and Its Learning Convergence

    No abstract available.

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Skin cancer ranks among the most dangerous cancers, and its most serious form is melanoma. Melanoma is caused by genetic faults or mutations in skin cells arising from unrepaired deoxyribonucleic acid (DNA) damage. It is essential to detect skin cancer in its infancy, since it is more curable in its initial stages; left untreated, it typically spreads to other regions of the body. Owing to the disease's increasing frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because these disorders are so hazardous, researchers have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and to distinguish benign skin lesions from melanoma. An in-depth investigation of deep learning techniques for the early detection of melanoma is provided in this study. The study also discusses traditional feature extraction based machine learning approaches for the segmentation and classification of skin lesions, and a comparative analysis is conducted to demonstrate the significance of various deep learning based segmentation and classification approaches.
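
    For orientation only, a minimal transfer-learning classifier of the kind the reviewed works apply to dermoscopic images might look like the sketch below; the ResNet-18 backbone, the frozen feature layers, and the two-class head are assumptions for illustration, not a method taken from the review.

```python
# Illustrative-only sketch of a transfer-learning lesion classifier (benign vs melanoma).
import torch
import torch.nn as nn
from torchvision import models

def build_lesion_classifier(num_classes=2):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                               # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head
    return model

model = build_lesion_classifier()
dummy = torch.randn(1, 3, 224, 224)   # one RGB lesion image at ImageNet input size
logits = model(dummy)                 # shape (1, 2): benign vs melanoma scores
```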

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronics.

    A Review on Data Fusion of Multidimensional Medical and Biomedical Data

    Data fusion aims to provide a more accurate description of a sample than any single source of data alone. At the same time, data fusion minimizes the uncertainty of the results by combining data from multiple sources. Both aspects improve the characterization of samples and may improve clinical diagnosis and prognosis. In this paper, we present an overview of the advances achieved over the last decades in data fusion approaches in the context of the medical and biomedical fields. We collected approaches for interpreting multiple sources of data in different combinations: image to image, image to biomarker, spectra to image, spectra to spectra, spectra to biomarker, and others. We found that the most prevalent combination is image-to-image fusion and that most data fusion approaches were applied together with deep learning or machine learning methods.
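
    A minimal sketch of feature-level (early) fusion, one of the combinations surveyed, is given below: image-derived features and tabular biomarkers are simply concatenated before a single classifier. The random arrays and the random-forest model are placeholders, not from the reviewed studies.

```python
# Minimal early-fusion sketch: concatenate features from two sources, then classify.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image_features = rng.normal(size=(100, 64))   # e.g. image embeddings per patient (placeholder)
biomarkers = rng.normal(size=(100, 8))        # e.g. lab values per patient (placeholder)
labels = rng.integers(0, 2, size=100)

fused = np.hstack([image_features, biomarkers])  # early fusion by concatenation
clf = RandomForestClassifier(n_estimators=200).fit(fused, labels)
print(clf.score(fused, labels))
```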

    Medical Image Segmentation: Thresholding and Minimum Spanning Trees

    In image segmentation, an image is divided into separate objects or regions. It is an essential step in image processing to define areas of interest for further processing or analysis. The segmentation process reduces the complexity of an image to simplify the analysis of the attributes obtained after segmentation. It changes the representation of the information in the original image and presents the pixels in a way that is more meaningful and easier to understand. Image segmentation has various applications. For medical images, the segmentation process aims to extract from the image data set the areas of the anatomy relevant to a particular study or diagnosis of the patient. For example, one can locate affected or abnormal parts of the body. Segmentation of follow-up data and baseline lesion segmentation are also very important for assessing the treatment response. There are different methods used for image segmentation. They can be classified based on how they are formulated and how the segmentation process is performed. The methods include threshold-based, edge-based, cluster-based, model-based, and hybrid methods, as well as methods based on machine learning and deep learning. Other methods are based on growing, splitting, and merging regions, finding discontinuities in edges, watershed segmentation, active contours, and graph-based approaches. In this thesis, we have developed methods for segmenting different types of medical images. We tested the methods on datasets of white blood cells (WBCs) and magnetic resonance images (MRI). The developed methods and the analysis performed on the image data sets are presented in three papers. In Paper A, we propose a method for segmenting the nuclei and cytoplasm of white blood cells. The method estimates the threshold for nucleus segmentation automatically based on local minima, and segments the WBCs before segmenting the cytoplasm, depending on the complexity of the objects in the image. For images where the WBCs are well separated from the red blood cells (RBCs), the WBCs are segmented by taking the average of n images that were already thresholded. For images where RBCs overlap the WBCs, the entire WBCs are segmented using simple linear iterative clustering (SLIC) and watershed methods. The cytoplasm is obtained by subtracting the segmented nucleus from the segmented WBC. The method is tested on two different publicly available datasets, and the results are compared with state-of-the-art methods. In Paper B, we propose a method for segmenting brain tumors based on minimum spanning tree (MST) concepts. The method performs interactive segmentation based on the MST: the image is loaded into an interactive window, and the region of interest and the background are selected by clicking, which splits the MST into two trees. One of these trees represents the region of interest and the other represents the background. The proposed method was tested by segmenting two different 2D T1-weighted brain magnetic resonance image data sets. The method is simple to implement, and the results indicate that it is accurate and efficient. In Paper C, we propose a method that processes a 3D MRI volume and partitions it into brain, non-brain tissue, and background segments. It is a graph-based method that uses an MST to separate the 3D MRI into these three region types. The graph is built from a preprocessed 3D MRI volume, and its MST is then constructed. The segmentation process produces three labeled connected components, which are reshaped back to the shape of the 3D MRI volume, and the labels are used to segment the brain, non-brain tissue, and background. The method was tested on three different publicly available data sets, and the results were compared with various state-of-the-art methods.
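
    A simplified, non-interactive sketch of the MST idea behind Papers B and C is shown below: pixels become graph nodes, edge weights are intensity differences, and removing the heaviest MST edge splits the image into two connected regions. The thesis instead splits the tree at user-selected seeds (Paper B) or into three labeled components in 3D (Paper C); the small epsilon on the weights and the heaviest-edge cut here are assumptions made for brevity.

```python
# Hedged sketch of MST-based two-region splitting on a 2D grayscale image.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_two_region_split(image):
    """Split a 2D grayscale image into two labeled regions via its MST."""
    img = image.astype(float)
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = img.ravel()
    rows, cols = [], []
    # 4-connected grid graph: horizontal and vertical neighbour pairs
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(a.ravel())
        cols.append(b.ravel())
    rows = np.concatenate(rows)
    cols = np.concatenate(cols)
    weights = np.abs(flat[rows] - flat[cols]) + 1e-6  # epsilon keeps zero-difference edges
    g = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
    mst = minimum_spanning_tree(g).tocoo()
    keep = mst.data < mst.data.max()   # cut the heaviest MST edge (ties cut more edges)
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    _, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w)        # region labels per pixel
```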

    Computational Intelligence in Healthcare

    The volume of patient health data has been estimated to have reached 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such a vast quantity of data. Thus, intelligent data analysis methods that combine human expertise and computational models for accurate and in-depth analysis are necessary. The technological revolution and medical advances enabled by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at a relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for the development and application of intelligent systems in healthcare practice. CI-based systems can learn from data and evolve according to changes in their environment while taking into account the uncertainty characterizing health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. The present Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches has been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities, and contrast (staining) mechanisms, yet few approaches generalise well across multiple image types, contrasts, or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t, and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography, and two-photon microscopy. The pixel-level segmentation scheme involves: (i) constructing a phase-invariant orientation field of the local spatial neighbourhood; (ii) combining local feature maps with intensity-based measures in a structural patch context; and (iii) using a complex supervised learning process to interpret the combination of all the elements in the patch in order to reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D. To process the temporal components in non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames; at the joint level, we construct a hierarchical approach so that each individual frame is registered to the global reference both intra- and inter-sequence. We then take a non-training approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy. In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations. Unlike retinal segmentation problems where the vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance, each with different geometric properties, to be differentiated both from the background and from other structures. Notably, cellular structures such as Purkinje cells, neural dendrites, and interneurons all display a certain elongation along their medial axes, yet each class has a characteristic shape captured by an orientation field that distinguishes it from other structures. To take this into account, we introduce a 5D orientation mapping that captures these orientation properties. This mapping is incorporated into the local feature map description prior to a learning machine. Extensive performance evaluation and validation of each of the techniques presented in this thesis is carried out. For retinal fundus images, we compute receiver operating characteristic (ROC) curves on existing public databases (DRIVE and STARE) to assess and compare our algorithms with other benchmark methods. For 2D+t retinal angiography sequences, we compare the "Centreline Error" metrics of our scheme with those of other benchmark methods. For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground truth and experimental rat cerebellar cortex two-photon microscopic tissue stacks.
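
    As a rough sketch of the pairwise registration step (not the thesis code), the snippet below matches ORB features between two frames and estimates a RANSAC-filtered homography with OpenCV. Note that cv2.findHomography fits a standard projective homography, whereas the thesis uses a quadratic homography transformation, and the hierarchical joint registration across sequences is not reproduced here.

```python
# Hedged sketch of pairwise frame registration: ORB matching + RANSAC homography.
import cv2
import numpy as np

def register_pair(frame_a, frame_b):
    """Estimate a homography mapping grayscale frame_a onto frame_b."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects outlier matches
    return H, inliers
```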