Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, a comprehensive checklist that summarises common practices for data harmonisation studies is proposed to guide researchers to report their findings more effectively. Finally, flowcharts presenting possible ways to select methodologies and metrics are proposed, and the limitations of different methods are surveyed to inform future research.
Characterisation of a preclinical tumour xenograft model using multiparametric MRI
Introduction: In small-animal studies, multiple imaging modalities can be combined to complement each other in providing information on anatomical structure and function. Non-invasive imaging studies on animal models are used to monitor progressive tumor development, which helps to better understand the efficacy of new medicines and to predict the clinical outcome. The aim was to construct a framework based on a longitudinal, multi-modal, parametric in vivo imaging approach to perform tumor tissue characterization in mice. Materials and Methods: The multi-parametric in vivo MRI dataset consisted of T1-, T2-, diffusion- and perfusion-weighted images. An image set of mice (n=3) imaged weekly for 6 weeks was used. Multimodal image registration was performed by maximizing mutual information. The tumor region of interest was delineated in weeks 2 to 6. These regions were stacked together, and all modalities combined were used in unsupervised segmentation. Clustering methods such as K-means and fuzzy C-means, together with the blind source separation technique of non-negative matrix factorization (NMF), were tested. Results were visually compared with histopathological findings. Results: Clusters obtained with the K-means and fuzzy C-means algorithms coincided with the intensity levels observed in the T2 and ADC maps. Fuzzy C-means clusters and NMF abundance maps gave the most promising results compared with the histological findings and appear to be a complementary way to assess the tumor microenvironment. Conclusions: A workflow for multimodal MR parametric map generation, image registration and unsupervised tumor segmentation was constructed. Good segmentation results were achieved but need further extensive histological validation.
Introduction
One of the important pillars of scientific research in medical diagnostics is animal experimentation within preclinical studies. In these studies, experiments are carried out to discover and test new therapeutic methods for treating human diseases. Ovarian cancer is one of the leading causes of cancer-related death, and new, more effective methods need to be developed to combat this disease more successfully. The time window in which a new therapeutic is administered is a key factor in the success of the investigated therapy, since tumour physiology evolves as the disease progresses. One of the goals of preclinical studies is to monitor the development of the tumour microenvironment and thereby determine the optimal time window for administering the developed therapeutic so as to achieve maximal efficacy.
Imaging modalities have become extremely popular as a research tool in biomedical and pharmacological research owing to their non-invasive nature. Preclinical imaging modalities have many advantages over the traditional approach: in line with research regulations, animals do not have to be sacrificed at intermediate time points in order to monitor tumour development over a longer period. At the same time, thanks to their non-destructive and non-invasive nature, they can provide a molecular and functional description of the studied subject in addition to anatomical information. Different imaging modalities are typically used to achieve the latter, and a combination of several modalities is often employed, as they complement each other in providing the desired information. This work presents a framework for processing different magnetic resonance modalities of preclinical models for the purpose of tumour tissue characterisation.
Methodology
In the study by Belderbos, Govaerts, Croitor Sava et al. [1], magnetic resonance imaging was used to determine the optimal time window for the successful administration of a newly developed therapeutic. In addition to conventional MR imaging methods (T1- and T2-weighted imaging), perfusion- and diffusion-weighted techniques were also used. Images were acquired weekly over a period of six weeks. The datasets used in the presented work were obtained as part of that study. The processing framework was built in Matlab (MathWorks, version R2019b) and supports both automatic and manual processing of the image data.
In the first step, before the parametric maps of the used modalities are generated, the protocol parameters must be extracted from the accompanying text files and the acquired images correctly sorted according to the given anatomy. At this stage the images are also filtered and masked. Filtering is useful for improving the ratio between the useful signal (the imaged animal model) and the background, since the acquisition scanner is usually subject to various sources of image noise; the non-local means filter from the Matlab image processing toolbox was used. The benefit of masking becomes apparent in the next step, parametric map generation, since with a properly masked subject the procedure is substantially accelerated by fitting maps only in the region of interest.
Parametric maps are produced with the method of nonlinear least squares. By modelling the physical phenomena underlying the used modalities, the examined animal model is described with biological parameters, which complement each other in describing the physiological properties of the studied model at the level of individual voxels.
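The nonlinear least-squares fitting described above can be sketched for a monoexponential T2 decay model, a common case in quantitative MRI. The signal model, echo times, and synthetic voxel values below are illustrative assumptions, not the exact protocol of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    # Monoexponential T2 signal model: S(TE) = S0 * exp(-TE / T2)
    return s0 * np.exp(-te / t2)

# Synthetic multi-echo signal for one voxel (echo times TE in ms)
te = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
true_s0, true_t2 = 1000.0, 45.0
signal = t2_decay(te, true_s0, true_t2)

# Nonlinear least-squares fit; in a full pipeline this runs per voxel,
# so masking the subject first sharply reduces the number of fits
(p_s0, p_t2), _ = curve_fit(t2_decay, te, signal, p0=(signal[0], 30.0))
print(round(p_t2, 1))  # recovered T2 in ms
```

Repeating this fit over every voxel inside the mask yields the T2 parametric map; the other modalities follow the same pattern with their own signal models.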
A key building block in successfully combining the information of the individual modalities is the correct registration of the parametric maps. The individual modalities are acquired sequentially, at different times, and scanning all modalities of a single animal takes more than an hour in total. Despite anaesthesia, small movements of the animal therefore occur during acquisition; if these movements are not properly accounted for, the joint information from multiple modalities is misinterpreted. Animal movements within a modality were modelled as rigid transformations, and those between modalities as affine transformations. Image registration was performed with in-house Matlab functions or with functions from the open-source image processing framework Elastix.
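Registration by maximising mutual information, as used for the multimodal alignment here, can be illustrated with a deliberately minimal sketch: a brute-force search over integer translations only (real pipelines such as Elastix optimise rigid or affine parameters with gradient-based methods; the synthetic images are assumptions for the demo):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint histogram -> joint and marginal probability estimates
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def register_translation(fixed, moving, max_shift=5):
    # Exhaustive search over integer shifts, keeping the MI-maximising one
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best:
                best, best_shift = mi, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(np.roll(fixed, 3, axis=0), -2, axis=1)
print(register_translation(fixed, moving))  # (-3, 2) undoes the applied shift
```

Mutual information is preferred over intensity difference here because the modalities being aligned (T1, T2, diffusion, perfusion maps) have different intensity scales but statistically dependent structure.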
Unsupervised segmentation methods were used to characterise the tumour tissue. The essence of segmentation is grouping individual pixels into segments: elements must be sufficiently similar to one another by the chosen criterion while differing from the elements of other segments. Three methods were selected: K-means, as one of the simplest; fuzzy C-means, with the advantage of a soft partition; and, finally, non-negative matrix factorisation. The latter views the tissue decomposition as a product of typical multi-modal signatures and their abundances for each individual pixel. To validate the segmentations obtained with these methods, a visual comparison with the results of histopathological analysis was performed.
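The segmentation setup above can be sketched on a small synthetic feature matrix (voxels as rows, modalities as columns; the class centres and noise level are invented for illustration). K-means gives a hard partition, while NMF factorises the data into typical multi-modal signatures and per-voxel abundances:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Synthetic "stacked parametric maps": 3 tissue classes, 4 modalities per voxel
centres = np.array([[1.0, 0.2, 0.1, 0.8],
                    [0.1, 1.0, 0.7, 0.2],
                    [0.5, 0.5, 1.0, 0.1]])
labels_true = rng.integers(0, 3, size=600)
X = centres[labels_true] + 0.05 * rng.random((600, 4))

# Hard partition with K-means
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# NMF: X ~ W @ H, where rows of H are typical multi-modal signatures
# and rows of W are the per-voxel abundances of those signatures
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
print(W.shape, km.labels_.shape)
```

Reshaping the columns of W back to the image grid yields the abundance maps that were compared with histology; fuzzy C-means (not in scikit-learn) similarly yields per-voxel membership maps rather than hard labels.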
Results
Image registration within the individual modalities had a large impact on the generated parametric maps. Owing to the long acquisition of T1-weighted images, animal movement occurs frequently, and without proper registration it negatively affects parametric mapping and the subsequent image segmentation. The generated maps deviate only slightly from those produced with commonly used open-source tools. Clusters obtained with the K-means and fuzzy C-means methods coincide well with intensity-based partitions of the T2 and ADC maps. Compared with the histological findings, fuzzy C-means and non-negative matrix factorisation gave the most promising results, and their segmentations complement each other in explaining the tumour microenvironment.
Conclusion
With the construction of a framework for processing magnetic resonance images and segmenting tumour tissue, the goal of the master's thesis was achieved. The framework's design allows other modalities to be added freely and other animal models to be used. The tumour tissue segmentation results are promising, but further comparison with histopathological results is needed. A possible upgrade is to improve the robustness of image registration by using a non-rigid (elastic) transformation model. It would also be worthwhile to test additional unsupervised segmentation methods and compare their results with those presented here.
A Comprehensive Overview of Computational Nuclei Segmentation Methods in Digital Pathology
In the cancer diagnosis pipeline, digital pathology plays an instrumental
role in the identification, staging, and grading of malignant areas on biopsy
tissue specimens. High resolution histology images are subject to high variance
in appearance, sourcing either from the acquisition devices or the H\&E
staining process. Nuclei segmentation is an important task, as it detects the
nuclei cells over background tissue and gives rise to the topology, size, and
count of nuclei which are determinant factors for cancer detection. Yet, it is
a fairly time consuming task for pathologists, with reportedly high
subjectivity. Computer Aided Diagnosis (CAD) tools empowered by modern
Artificial Intelligence (AI) models enable the automation of nuclei
segmentation. This can reduce the subjectivity in analysis and reading time.
This paper provides an extensive review, beginning from earlier works that use
traditional image processing techniques and reaching up to modern approaches
following the Deep Learning (DL) paradigm. Our review also focuses on the weak
supervision aspect of the problem, motivated by the fact that annotated data is
scarce. At the end, the advantages of different models and types of supervision
are thoroughly discussed. Furthermore, we try to extrapolate and envision how
future research lines will potentially be, so as to minimize the need for
labeled data while maintaining high performance. Future methods should
emphasize efficient and explainable models with a transparent underlying
process so that physicians can trust their output.
Comment: 47 pages, 27 figures, 9 tables
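The traditional image-processing end of the spectrum surveyed above can be illustrated with a minimal classical pipeline: Otsu thresholding followed by connected-component labelling to detect and count nuclei. The synthetic blob image stands in for a real H&E patch, and this is a simplification of the methods the review covers (no watershed splitting of touching nuclei):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, bins=256):
    # Classic Otsu: choose the threshold maximising between-class variance
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centres = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # background class weight
    w1 = 1.0 - w0                          # foreground class weight
    cum_mu = np.cumsum(p * centres)
    mu0 = cum_mu / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mu[-1] - cum_mu) / np.where(w1 > 0, w1, 1)
    return centres[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

# Synthetic "nuclei": bright Gaussian blobs on a dark background
img = np.zeros((100, 100))
yy, xx = np.ogrid[:100, :100]
for cy, cx in [(20, 20), (50, 70), (80, 30)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 40.0)

mask = img > otsu_threshold(img)
labels, n_nuclei = ndimage.label(mask)  # connected components = nuclei
print(n_nuclei)
```

From the labelled components, the topology, size, and count of nuclei mentioned above follow directly (e.g. via `ndimage.sum` or per-label pixel counts).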
Three-dimensional reconstruction of peripheral nerve internal fascicular groups
Peripheral nerves are important pathways for receiving afferent sensory impulses and sending out efferent motor instructions, carried by sensory and motor nerve fibers. It has remained a great challenge to functionally reconnect internal nerve fiber bundles (fascicles) in nerve repair. One possible solution may be to establish a 3D nerve fascicle visualization system. This study describes the key technology of 3D peripheral nerve fascicle reconstruction. First, fixed nerve segments were embedded with position lines, cryostat-sectioned continuously, stained and imaged histologically. Position-line cross-sections were identified using a trained support vector machine, and the coordinates of their central pixels were obtained. Then, nerve section images were registered using the bilinear method, and the edges of fascicles were extracted using an improved gradient vector flow snake method. Subsequently, fascicle types were identified automatically using the multi-directional gradient and second-order gradient method. Finally, a 3D virtual model of the internal fascicles was obtained after the section images were processed. This technique was successfully applied to 3D reconstruction of the median nerve in the hand-wrist and cubital fossa regions and of the gastrocnemius nerve. This nerve internal fascicle 3D reconstruction technology should be helpful in aiding peripheral nerve repair and virtual surgery.
Yingchun Zhong, Liping Wang, Jianghui Dong, Yi Zhang, Peng Luo, Jian Qi, Xiaolin Liu and Cory J. Xia
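The landmark-based registration step above, using the detected position-line centres, can be sketched as a least-squares transform fit. The study uses a bilinear method; for brevity this sketch fits an affine transform instead, and the landmark coordinates and rotation are invented for the demo:

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine transform mapping src -> dst:
    # [x', y'] = A @ [x, y] + t, solved via a homogeneous design matrix
    n = len(src)
    M = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return params  # 3x2 matrix: rows of A^T stacked on the translation t

# Position-line centres in two consecutive sections (hypothetical values)
src = np.array([[10.0, 12.0], [80.0, 15.0], [45.0, 90.0], [70.0, 60.0]])
theta = 0.05  # small inter-section rotation
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ A.T + np.array([2.0, -3.0])

P = fit_affine(src, dst)
aligned = np.hstack([src, np.ones((4, 1))]) @ P
print(np.allclose(aligned, dst))  # True: landmarks map exactly
```

With four or more non-collinear landmarks the system is (over)determined and the least-squares solution recovers the section-to-section transform; the fitted transform is then applied to the whole section image before stacking the sections into the 3D model.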
Automatic gridding of DNA microarray images.
Microarray (DNA chip) technology is having a significant impact on genomic studies. Many fields, including drug discovery and toxicological research, will certainly benefit from the use of DNA microarray technology. Microarray analysis is replacing traditional biological assays based on gels, filters and purification columns with small glass chips containing tens of thousands of DNA and protein sequences, in agricultural and medical sciences. Microarrays function like biological microprocessors, enabling the rapid and quantitative analysis of gene expression patterns, patient genotypes, drug mechanisms and disease onset and progression on a genomic scale. Image analysis and statistical analysis are two important aspects of microarray technology. Gridding is necessary to accurately identify the location of each of the spots while extracting spot intensities from the microarray images, and automating this procedure permits high-throughput analysis. Because of deficiencies in the equipment used to print the arrays, rotations, misalignments, and high contamination with noise and artifacts, solving the grid segmentation problem in an automatic system is not trivial. Existing techniques for automatic grid segmentation cover only limited aspects of this challenging problem and require the user to specify, or make assumptions about, the spot size, the rows and columns in the grid, and the boundary conditions. An automatic gridding and spot quantification technique is proposed which takes a matrix of pixels or a microarray image as input, makes no assumptions about the spot size or the rows and columns in the grid, and is found to be effective on datasets from GEO and the Stanford genomic laboratories, as well as on images obtained from private repositories. Source: Masters Abstracts International, Volume: 43-03, page: 0891. Adviser: Luis Rueda.
Thesis (M.Sc.)--University of Windsor (Canada), 2004
A Colour Wheel to Rule them All: Analysing Colour & Geometry in Medical Microscopy
Personalized medicine is a rapidly growing field in healthcare that aims to customize
medical treatments and preventive measures based on each patient’s unique characteristics,
such as their genes, environment, and lifestyle factors. This approach
acknowledges that people with the same medical condition may respond differently
to therapies and seeks to optimize patient outcomes while minimizing the risk
of adverse effects.
To achieve these goals, personalized medicine relies on advanced technologies,
such as genomics, proteomics, metabolomics, and medical imaging. Digital
histopathology, a crucial aspect of medical imaging, provides clinicians with valuable
insights into tissue structure and function at the cellular and molecular levels. By
analyzing small tissue samples obtained through minimally invasive techniques, such
as biopsy or aspirate, doctors can gather extensive data to evaluate potential diagnoses
and clinical decisions. However, digital analysis of histology images presents
unique challenges, including the loss of 3D information and stain variability, which
is further complicated by sample variability. Limited access to data exacerbates
these challenges, making it difficult to develop accurate computational models for
research and clinical use in digital histology.
Deep learning (DL) algorithms have shown significant potential for improving the
accuracy of Computer-Aided Diagnosis (CAD) and personalized treatment models,
particularly in medical microscopy. However, factors such as limited generalizability,
lack of interpretability, and bias sometimes hinder their clinical impact. Furthermore,
the inherent variability of histology images complicates the development of robust DL
methods. Thus, this thesis focuses on developing new tools to address these issues.
Our essential objective is to create transparent, accessible, and efficient methods
based on classical principles from various disciplines, including histology, medical
imaging, mathematics, and art, to tackle microscopy image registration and colour
analysis successfully. These methods can contribute significantly to the advancement
of personalized medicine, particularly in studying the tumour microenvironment
for diagnosis and therapy research.
First, we introduce a novel automatic method for colour analysis and non-rigid
histology registration, enabling the study of heterogeneity morphology in tumour
biopsies. This method achieves accurate tissue-cut registration, drastically reducing
landmark distance and achieving excellent border overlap.
Second, we introduce ABANICCO, a novel colour analysis method that combines
geometric analysis, colour theory, fuzzy colour spaces, and multi-label systems
for automatically classifying pixels into a set of conventional colour categories.
ABANICCO outperforms benchmark methods in accuracy and simplicity. It is
computationally straightforward, making it useful in scenarios involving changing
objects, limited data, unclear boundaries, or when users lack prior knowledge of
the image or colour theory. Moreover, results can be modified to match each
particular task.
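The idea of classifying pixels into conventional colour categories via geometric analysis of colour space can be sketched very simply: map RGB to HSV and partition the hue wheel into named sectors. This is a heavily simplified stand-in for ABANICCO (no fuzzy colour spaces or multi-label refinement), with sector boundaries chosen arbitrarily for the demo:

```python
import colorsys

# Conventional colour categories as hue-wheel sectors (degrees);
# the boundaries below are illustrative assumptions
SECTORS = [("red", 345, 15), ("orange", 15, 45), ("yellow", 45, 75),
           ("green", 75, 165), ("blue", 165, 255), ("purple", 255, 345)]

def classify_pixel(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.15:                      # too dark to carry hue information
        return "black"
    if s < 0.15:                      # achromatic axis
        return "white" if v > 0.85 else "grey"
    deg = h * 360
    for name, lo, hi in SECTORS:
        # A sector may wrap around 360 degrees (e.g. red)
        if (lo <= deg < hi) or (lo > hi and (deg >= lo or deg < hi)):
            return name
    return "red"

print(classify_pixel(200, 30, 40), classify_pixel(40, 60, 210))
```

Classifying every pixel this way and counting category frequencies gives the kind of stained-area quantification the thesis applies to biopsy images, though crisp sector boundaries are exactly the limitation that fuzzy colour spaces address.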
Third, we apply the acquired knowledge to create a novel pipeline of rigid
histology registration and ABANICCO colour analysis for the in-depth study of
triple-negative breast cancer biopsies. The resulting heterogeneity map and tumour
score provide valuable insights into the composition and behaviour of the tumour,
informing clinical decision-making and guiding treatment strategies.
Finally, we consolidate the developed ideas into an efficient pipeline for tissue
reconstruction and multi-modality data integration on Tuberculosis infection data.
This enables accurate element distribution analysis to understand better interactions
between bacteria, host cells, and the immune system during the course of infection.
The methods proposed in this thesis represent a transparent approach to computational
pathology, addressing the needs of medical microscopy registration and
colour analysis while bridging the gap between clinical practice and computational
research. Moreover, our contributions can help develop and train better, more
robust DL methods.

In an era in which personalized medicine is revolutionizing healthcare, it is increasingly important to tailor treatments and preventive measures to each patient's genetic make-up, environment, and lifestyle. By employing advanced technologies such as genomics, proteomics, metabolomics, and medical imaging, personalized medicine strives to streamline treatment so as to improve outcomes and reduce side effects.

Medical microscopy, a crucial aspect of personalized medicine, allows clinicians to collect and analyze large amounts of data from small tissue samples. This is particularly relevant in oncology, where cancer therapies can be optimized according to the specific tissue appearance of each tumour. Computational pathology, a subfield of computer vision, seeks to create algorithms for the digital analysis of biopsies. However, before a computer can analyze medical microscopy images, several steps are needed to obtain images of the samples.

The first stage consists of collecting and preparing a tissue sample from the patient. So that it can easily be observed under the microscope, the sample is cut into ultra-thin sections. This delicate procedure is not without difficulties: the fragile tissue can become distorted, torn, or perforated, compromising the overall integrity of the sample.

Once the tissue is properly prepared, it is usually treated with characteristic coloured stains. These stains accentuate different cell and tissue types with specific colours, making it easier for medical professionals to identify particular features. However, this improvement in visualization comes at a cost: the stains can sometimes hinder computational analysis of the images by mixing improperly, bleeding into the background, or altering the contrast between different elements.

The final step of the process is digitizing the sample. High-resolution images of the tissue are taken at different magnifications, enabling analysis by computer. This stage also has its obstacles: factors such as incorrect camera calibration or inadequate lighting conditions can distort or blur the images. Moreover, the resulting whole-slide images are of considerable size, further complicating the analysis.

In general, while the preparation, staining, and digitization of medical microscopy samples are fundamental to digital analysis, each of these steps can introduce additional challenges that must be addressed to guarantee accurate analysis. Furthermore, converting a complete tissue volume into a few stained sections drastically reduces the available 3D information and introduces great uncertainty.

Deep learning (DL) solutions hold great promise in the field of personalized medicine, but their clinical impact is sometimes hindered by factors such as limited generalizability, overfitting, opacity, and lack of interpretability, in addition to ethical concerns and, in some cases, private incentives. Furthermore, the variability of histology images complicates the development of robust DL methods. To overcome these challenges, this thesis presents a series of highly robust and interpretable methods, based on classical principles of histology, medical imaging, mathematics, and art, for aligning microscopy sections and analyzing their colours.

Our first contribution is ABANICCO, an innovative colour analysis method that offers objective, unsupervised colour segmentation and allows subsequent refinement through user-friendly tools. ABANICCO's accuracy and efficiency have been shown to be superior to those of existing colour classification and segmentation methods, and it even excels at whole-object detection and segmentation. ABANICCO can be applied to microscopy images to detect stained areas for biopsy quantification, a crucial aspect of cancer research.

The second contribution is an automatic, unsupervised tissue segmentation method that identifies and removes the background and artefacts from microscopy images, thereby improving the performance of more sophisticated image analysis techniques. This method is robust across diverse images, stains, and acquisition protocols, and requires no training.

The third contribution is the development of novel methods for registering histopathology images effectively, striking the right balance between accurate registration and preservation of local morphology, depending on the intended application.

As a fourth contribution, the three methods above are combined into efficient procedures for the complete integration of volumetric data, creating highly interpretable visualizations of all the information present in consecutive tissue biopsy sections. This data integration can have a major impact on the diagnosis and treatment of various diseases, particularly breast cancer, by enabling early detection, accurate clinical testing, effective treatment selection, and improved communication and engagement with patients.

Finally, we apply our findings to multi-modal data integration and tissue reconstruction for the accurate analysis of the distribution of chemical elements in tuberculosis, shedding light on the complex interactions between bacteria, host cells, and the immune system during tuberculous infection. This method also addresses problems such as acquisition damage, typical of many imaging modalities.

In summary, this thesis demonstrates the application of classical computer vision methods to medical microscopy registration and colour analysis to address the unique challenges of this field, with an emphasis on effective, accessible visualization of complex data. We aspire to keep refining our work through extensive technical validation and better data analysis. The methods presented in this thesis are characterized by their clarity, accessibility, effective data visualization, objectivity, and transparency. These characteristics make them well suited to building robust bridges between artificial intelligence researchers and clinicians, and thus to advancing computational pathology in medical practice and research.

Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. Committee: President: María Jesús Ledesma Carbayo; Secretary: Gonzalo Ricardo Ríos Muñoz; Member: Estíbaliz Gómez de Marisca
Hematological image analysis for acute lymphoblastic leukemia detection and classification
Microscopic analysis of peripheral blood smears is a critical step in the detection of leukemia. However, this type of light microscopic assessment is time consuming, inherently subjective, and governed by the hematopathologist's clinical acumen and experience. To circumvent such problems, an efficient computer-aided methodology for the quantitative analysis of peripheral blood samples needs to be developed. In this thesis, efforts are therefore made to devise methodologies for the automated detection and subclassification of Acute Lymphoblastic Leukemia (ALL) using image processing and machine learning methods.

The choice of an appropriate segmentation scheme plays a vital role in the automated disease recognition process. Accordingly, novel schemes have been proposed to segment normal mature lymphocyte and malignant lymphoblast images into their constituent morphological regions. To make the proposed schemes viable from a practical and real-time standpoint, the segmentation problem is addressed in both supervised and unsupervised frameworks. The proposed methods are based on neural networks, feature-space clustering, and Markov random field modeling, where the segmentation problem is formulated as pixel classification, pixel clustering, and pixel labeling, respectively. A comprehensive validation analysis is presented to evaluate the performance of the four proposed lymphocyte image segmentation schemes against manual segmentation results provided by a panel of hematopathologists. It is observed that the morphological components of normal and malignant lymphocytes differ significantly.

To automatically recognize lymphoblasts and detect ALL in peripheral blood samples, an efficient methodology is proposed. Morphological, textural and color features are extracted from the segmented nucleus and cytoplasm regions of the lymphocyte images. An ensemble of classifiers, denoted EOC3 and comprising three classifiers, shows the highest classification accuracy of 94.73% in comparison with its individual members. The subclassification of ALL according to the French-American-British (FAB) and World Health Organization (WHO) criteria is essential for prognosis and treatment planning. Accordingly, two independent methodologies are proposed for the automated classification of malignant lymphocyte (lymphoblast) images based on morphology and phenotype. These methods include lymphoblast image segmentation, nucleus and cytoplasm feature extraction, and efficient classification
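A three-member ensemble like the EOC3 described above can be sketched with majority voting. The abstract does not name the base classifiers, so the three below (logistic regression, naive Bayes, k-NN) and the synthetic feature vectors are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for morphological/textural/colour feature vectors
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Hard (majority-vote) ensemble over three heterogeneous base classifiers
eoc3 = VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                         ("nb", GaussianNB()),
                         ("knn", KNeighborsClassifier())],
                        voting="hard")
eoc3.fit(X_tr, y_tr)
print(round(eoc3.score(X_te, y_te), 2))
```

The appeal of such an ensemble, as the thesis reports, is that the majority vote can outperform each member: individual classifiers make different errors, and disagreements are resolved in favour of the majority.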
A review of the quantification and classification of pigmented skin lesions: from dedicated to hand-held devices
In recent years, the incidence of skin cancer cases has risen worldwide, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through improvements in instrument and detection technology and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase in data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.