Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations
With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques for providing online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras have only a limited field-of-view, and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed in which a random binary descriptor built from Haar-like features feeds a random forest classifier. For robust retargeting, we also propose a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.
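The cascade above pairs a random binary descriptor computed from Haar-like features with a random forest classifier. As an illustrative sketch (not the authors' implementation), the toy example below builds such a descriptor from random rectangle-pair comparisons on an integral image and trains a scikit-learn random forest on it; the patch size, bit count, and synthetic data are all invented for demonstration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def haar_binary_descriptor(patch, pairs):
    """Binary descriptor: each bit compares the box sums of a random
    Haar-like rectangle pair, computed quickly via an integral image."""
    ii = patch.cumsum(0).cumsum(1)           # integral image

    def box_sum(r0, c0, r1, c1):             # rectangle [r0:r1, c0:c1]
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0:
            total -= ii[r0 - 1, c1 - 1]
        if c0 > 0:
            total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += ii[r0 - 1, c0 - 1]
        return total

    return np.array([box_sum(*a) > box_sum(*b) for a, b in pairs], dtype=np.uint8)

def random_rect(size, rng):
    """A random rectangle inside a size x size patch (exclusive ends)."""
    r0, c0 = rng.integers(0, size - 2, 2)
    r1 = rng.integers(r0 + 1, size)
    c1 = rng.integers(c0 + 1, size)
    return (r0, c0, r1 + 1, c1 + 1)

SIZE, N_BITS = 16, 64
pairs = [(random_rect(SIZE, rng), random_rect(SIZE, rng)) for _ in range(N_BITS)]

# Toy data: "target" patches have a bright centre, negatives are pure noise.
def make_patch(is_target):
    p = rng.normal(0.0, 1.0, (SIZE, SIZE))
    if is_target:
        p[4:12, 4:12] += 3.0
    return p

X = np.array([haar_binary_descriptor(make_patch(i % 2 == 0), pairs) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
probe = haar_binary_descriptor(make_patch(True), pairs)
print(clf.predict([probe])[0])
```

The binary comparisons make the descriptor cheap to compute online, which is what lets the cascade run during a live examination.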
SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image registration framework using stable sampling and regularized transformation
Image registration is a fundamental medical image analysis task. Ideally,
registration should focus on aligning semantically corresponding voxels, i.e.,
the same anatomical locations. However, existing methods often optimize
similarity measures computed directly on intensities or on hand-crafted
features, which lack anatomical semantic information. These similarity measures
may lead to sub-optimal solutions where large deformations, complex anatomical
differences, or cross-modality imagery exist. In this work, we introduce a fast
and accurate method for unsupervised 3D medical image registration building on
top of a Self-supervised Anatomical eMbedding (SAM) algorithm, which is capable
of computing dense anatomical correspondences between two images at the voxel
level. We name our approach SAM-Enhanced registration (SAME++), which
decomposes image registration into four steps: affine transformation, coarse
deformation, deep non-parametric transformation, and instance optimization.
Using SAM embeddings, we enhance these steps by finding more coherent
correspondence and providing features with better semantic guidance. We
extensively evaluated SAME++ using more than 50 labeled organs on three
challenging inter-subject registration tasks of different body parts. As a
complete registration framework, SAME++ markedly outperforms leading methods in
terms of Dice score while being orders of magnitude faster
than numerical optimization-based methods. Code is available at
\url{https://github.com/alibaba-damo-academy/same}.
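SAME++'s decomposition into successive transformation stages relies on composing the stage outputs into a single warp so the moving image is resampled only once. The sketch below is a toy 2D illustration of that mechanism, not the released code; the stage contents (a translation and a small refinement) are stand-ins:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def identity_grid(shape):
    """Pixel-coordinate grid of the fixed image, shape (ndim, *shape)."""
    return np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")).astype(float)

def compose(disp_outer, disp_inner):
    """Compose displacement fields: x -> x + inner(x) + outer(x + inner(x))."""
    grid = identity_grid(disp_inner.shape[1:])
    warped_pts = grid + disp_inner
    sampled = np.stack([
        map_coordinates(disp_outer[d], warped_pts, order=1, mode="nearest")
        for d in range(disp_outer.shape[0])
    ])
    return disp_inner + sampled

def warp(image, disp):
    """Resample the moving image once through the composed field."""
    grid = identity_grid(image.shape)
    return map_coordinates(image, grid + disp, order=1, mode="nearest")

shape = (32, 32)
moving = np.zeros(shape)
moving[4:12, 4:12] = 1.0                      # a bright square near the corner

# Stage 1, "affine": here just a global translation expressed as a field.
affine_disp = np.full((2, *shape), -8.0)
# Stage 2, "coarse deformation": a further small vertical refinement.
coarse_disp = np.zeros((2, *shape))
coarse_disp[0] -= 2.0

total = compose(coarse_disp, affine_disp)     # one field for both stages
registered = warp(moving, total)
print(np.argwhere(registered > 0.5).min(axis=0))   # prints [14 12]
```

Composing fields instead of warping repeatedly avoids accumulating interpolation blur across the four stages.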
Adaptive Quantification and Subtyping of Pulmonary Emphysema on Computed Tomography
Pulmonary emphysema contributes to the chronic airflow limitation characteristic of chronic obstructive pulmonary disease (COPD), which is a leading cause of morbidity and mortality worldwide. Computed tomography (CT) has enabled in vivo assessment of pulmonary emphysema at the macroscopic level, and is commonly used to identify and assess the extent of the disease.
During the past decade, the availability of CT imaging data has increased rapidly, while the image quality has continued to improve. High-resolution CT is extremely valuable both for patient diagnosis and for studying diseases at the population level. However, visual assessment of these large data sets is subjective, inefficient, and expensive. This has increased the demand for objective, automatic, and reproducible image analysis methods.
For the assessment of pulmonary emphysema on CT, computational models usually aim either to give a measure of the extent of the disease, or to categorize the emphysema subtypes apparent in a scan. The standard methods for quantitating emphysema extent are widely used, but they remain sensitive to changes in imaging protocols and patient inspiration level. For computational subtyping of emphysema, the methods remain at a developmental stage, and one of the main challenges is the lack of reliable label data. Furthermore, the classic emphysema subtypes were defined on autopsy before the availability of CT and could be considered outdated. There is also no consensus on how to match the subtypes on autopsy to the varying emphysema patterns present on CT.
This work presents two methodological improvements for analyzing emphysema on CT. For the assessment of emphysema extent, a novel probabilistic approach is introduced and evaluated on a longitudinal data set with varying imaging protocols. The presented model is shown to improve significantly on standard methods, particularly in the presence of differing noise levels. The approach is also applied to quantifying emphysema on a large data set of cardiac CT scans, and is shown to improve the prediction of emphysema extent on subsequent full-lung CT scans.
The second major contribution of this work applies unsupervised learning to recognizing patterns of emphysema on CT. Instead of trying to reproduce the classic subtypes, the novel approach aims to capture the most dominant variations of lung structure pertaining to emphysema. While the approach removes the reliance on visually assigned labels, the learned patterns are shown to represent different manifestations of emphysema with distinct appearances and regular spatial distributions. The clinical significance of the patterns is also demonstrated, along with strong performance in the application of content-based image retrieval.
The contributions of this work advance the analysis of emphysema on CT by applying novel machine learning approaches to increase the value of the available imaging data. Probabilistic methods improve on the crude standard methods currently used to quantitate emphysema, and the value of learning disease patterns directly from image data is demonstrated. The common framework that relies on replicating visually assigned labels of outdated subtypes has not achieved widespread acceptance. The methodology presented in this work may have a substantial impact on how emphysema subtypes on CT are recognized and defined in the future.
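For context, the crude standard measure referred to above is typically the low-attenuation area percentage: the fraction of lung voxels below -950 HU (LAA%-950). The sketch below contrasts that hard threshold with a simple sigmoid "soft count"; the soft variant only illustrates the probabilistic idea, it is not the model developed in the thesis, and the distribution parameters are invented:

```python
import numpy as np

def laa_950(hu_values):
    """Standard emphysema extent: % of lung voxels below -950 HU (LAA%-950)."""
    return 100.0 * np.mean(hu_values < -950)

def soft_laa(hu_values, threshold=-950.0, scale=20.0):
    """Soft count: sigmoid membership instead of a hard threshold."""
    return 100.0 * np.mean(1.0 / (1.0 + np.exp((hu_values - threshold) / scale)))

rng = np.random.default_rng(0)
lung = rng.normal(-870.0, 40.0, 100_000)            # mostly normal lung tissue
lung[:20_000] = rng.normal(-980.0, 25.0, 20_000)    # 20% emphysema-like voxels

noisy = lung + rng.normal(0.0, 60.0, lung.size)     # same lung, noisier protocol
print(laa_950(lung), laa_950(noisy))    # the hard score drifts upward with noise
print(soft_laa(lung), soft_laa(noisy))
```

Running the same lung through a noisier "protocol" inflates the hard-threshold score, which is exactly the protocol sensitivity the thesis's probabilistic approach targets.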
Differently stained whole slide image registration technique with landmark validation
Abstract. One of the most significant capabilities in digital pathology is to visually compare and fuse successive, differently stained tissue sections, also called slides. Doing so requires aligning the different images to a common frame of reference, the ground truth. Current sample scanning tools enable the creation of images full of informative layers of digitalized tissue, stored at high resolution in whole slide images. However, only a limited number of automatic alignment tools handle such large images precisely within an acceptable processing time. The idea of this study is to propose a deep learning solution for histopathology image registration. The main focus is on understanding landmark validation and the impact of stain augmentation on differently stained histopathology images. The developed registration method is also compared with state-of-the-art algorithms that utilize whole slide images in the field of digital pathology.
Previous studies on histopathology, digital pathology, whole slide imaging and image registration, color staining, data augmentation, and deep learning are referenced in this study. The goal is to develop a learning-based registration framework specifically for high-resolution histopathology image registration. Different whole slide tissue sample images are used, with magnifications of up to 40x. The images are organized into sets of consecutive, differently dyed sections, and the aim is to register the images based only on the visible tissue, ignoring the background. Significant structures in the tissue are marked with landmarks.
The quality measurements include, for example, the relative target registration error, the structural similarity index metric, visual evaluation, landmark-based evaluation, matching points, and image details. These results are comparable and can also be used in future research and in the development of new tools. Moreover, the results are expected to show how theory and practice are combined in whole slide image registration challenges. The DeepHistReg algorithm is studied to better understand the development of this study's stain color feature augmentation-based image registration tool. Matlab and Aperio ImageScope are used to annotate and validate the images, and Python is used to develop the algorithm of the new registration tool.
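Of the quality measures listed above, the relative target registration error (rTRE) is normally defined as the post-registration landmark distance divided by the image diagonal, the convention used in the ANHIR whole-slide registration challenge. A minimal sketch, with invented landmark coordinates:

```python
import numpy as np

def relative_tre(warped_landmarks, target_landmarks, image_shape):
    """rTRE: post-registration landmark distance, normalised by the
    image diagonal so scores are comparable across image sizes."""
    diag = np.hypot(*image_shape)
    dists = np.linalg.norm(warped_landmarks - target_landmarks, axis=1)
    return dists / diag

warped = np.array([[105.0, 200.0], [410.0, 310.0]])   # landmarks after warping
target = np.array([[100.0, 205.0], [400.0, 300.0]])   # expert annotations
rtre = relative_tre(warped, target, (1000, 1000))
print(rtre.mean())   # 0.0075
```

Normalising by the diagonal is what makes rTRE usable across slides scanned at very different resolutions.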
As cancer is globally a serious disease regardless of age or lifestyle, it is important to find ways to develop the systems experts can use while working with patients' data. There is still much to improve in the field of digital pathology, and this study is one step toward it.
Registration of pre-operative lung cancer PET/CT scans with post-operative histopathology images
Non-invasive imaging modalities used in the diagnosis of lung cancer, such as Positron Emission Tomography (PET) or Computed Tomography (CT), currently provide insufficient information about the cellular make-up of the lesion microenvironment, unless they are compared against the gold standard of histopathology. The aim of this retrospective study was to build a robust imaging framework for registering in vivo and post-operative scans from lung cancer patients, in order to have a global, pathology-validated multimodality map of the tumour and its surroundings. Initial experiments were performed on tissue-mimicking phantoms, to test different shape reconstruction methods. The choice of interpolator and the slice thickness were found to affect the algorithm's output, in terms of overall volume and local feature recovery. In the second phase of the study, nine lung cancer patients referred for radical lobectomy were recruited. Resected specimens were inflated with agar, sliced at 5 mm intervals, and each cross-section was photographed. The tumour area was delineated on the block-face pathology images and on the preoperative PET/CT scans. Airway segments were also added to the reconstructed models, to act as anatomical fiducials. Binary shapes were pre-registered by aligning their minimal bounding box axes, and subsequently transformed using rigid registration. In addition, histopathology slides were matched to the block-face photographs using a moving least squares algorithm. A two-step validation process was used to evaluate the performance of the proposed method against manual registration carried out by experienced consultants.
In two out of three cases, experts rated the results generated by the algorithm as the best output, suggesting that the developed framework outperforms the current standard practice.
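The pre-registration step above aligns minimal bounding box axes before the rigid transform. A closely related and simpler-to-sketch approach aligns the shapes' centroids and principal axes; the code below is that stand-in, not the study's method, and the toy masks are invented:

```python
import numpy as np

def principal_frame(mask):
    """Centroid and principal axes (eigenvectors of the point covariance)
    of a binary shape."""
    pts = np.argwhere(mask).astype(float)
    centroid = pts.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((pts - centroid).T))
    return centroid, vecs

def prealign(moving_mask, fixed_mask):
    """Rigid pre-alignment: rotate the moving shape's principal axes onto
    the fixed shape's axes, then match centroids. Note: in practice the
    sign and ordering of principal axes are ambiguous and must be resolved."""
    cm, vm = principal_frame(moving_mask)
    cf, vf = principal_frame(fixed_mask)
    rotation = vf @ vm.T
    translation = cf - rotation @ cm
    return rotation, translation

# Toy example: an identical elongated bar, translated between the two masks.
fixed = np.zeros((64, 64), dtype=bool)
fixed[20:24, 10:50] = True
moving = np.zeros((64, 64), dtype=bool)
moving[40:44, 14:54] = True

R, t = prealign(moving, fixed)
mapped = R @ np.argwhere(moving).mean(axis=0) + t
print(mapped)   # coincides with the fixed shape's centroid
```

Such a coarse alignment gives the subsequent rigid registration a sensible starting pose, which matters when the two shapes start far apart.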
Automated Stabilization, Enhancement and Capillaries Segmentation in Videocapillaroscopy
Oral capillaroscopy is a critical and non-invasive technique used to evaluate microcirculation. Its ability to observe small vessels in vivo has generated significant interest in the field. Capillaroscopy serves as an essential tool for diagnosing and prognosing various pathologies, with anatomic-pathological lesions playing a crucial role in their progression. Despite its importance, the utilization of videocapillaroscopy in the oral cavity encounters limitations due to the acquisition setup, encompassing the spatial and temporal resolutions of the video camera, objective magnification, and physical probe dimensions. Moreover, the operator's influence during the acquisition process, particularly how the probe is maneuvered, further affects its effectiveness. This study aims to address these challenges and improve data reliability by developing a computerized support system for microcirculation analysis. The designed system performs stabilization, enhancement and automatic segmentation of capillaries in oral mucosal video sequences. The stabilization phase was performed by means of a method based on the coupling of seed points in a classification process. The enhancement process implemented was based on the temporal analysis of the capillaroscopic frames. Finally, an automatic segmentation phase of the capillaries was implemented with the additional objective of quantitatively assessing the signal improvement achieved through the developed techniques. Specifically, transfer learning of the renowned U-net deep network was implemented for this purpose. The proposed method underwent testing on a database with ground truth obtained from expert manual segmentation. The obtained results demonstrate an achieved Jaccard index of 90.1% and an accuracy of 96.2%, highlighting the effectiveness of the developed techniques in oral capillaroscopy.
In conclusion, these promising outcomes encourage the use of this method to assist in the diagnosis and monitoring of conditions that affect microcirculation, such as rheumatologic or cardiovascular disorders.
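The Jaccard index and accuracy reported above are standard overlap measures between a predicted mask and the expert ground truth; a minimal sketch on invented toy masks:

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index (intersection over union) of binary masks."""
    return np.logical_and(pred, truth).sum() / np.logical_or(pred, truth).sum()

def accuracy(pred, truth):
    """Fraction of pixels on which the two masks agree."""
    return np.mean(pred == truth)

truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:8] = True            # 24 "capillary" pixels in the expert mask
pred = np.zeros((10, 10), dtype=bool)
pred[3:6, 2:8] = True             # the prediction misses the top row

print(jaccard(pred, truth), accuracy(pred, truth))   # 0.75 0.94
```

Note how accuracy stays high even when the overlap drops, because background pixels dominate; reporting both, as the study does, guards against that bias.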
Image analysis-based framework for adaptive and focal radiotherapy
It is estimated that more than 60% of cancer patients will receive radiotherapy (RT). Medical
images acquired from different imaging modalities are used to guide the entire RT process
from the initial treatment plan to fractionated radiation delivery. Accurate identification of
the gross tumor volume (GTV) on computed tomography (CT), acquired at different time
points, is crucial for the success of RT. In addition, complementary information from magnetic
resonance imaging (MRI), positron emission tomography (PET), cone-beam computed
tomography (CBCT) and electronic portal imaging device (EPID) is often used to obtain better
definition of the target, track disease progression and update the radiotherapy plan. However,
identifying tumor volumes on medical image data requires significant clinical experience and is
extremely time consuming. Computer-based methods have the potential to assist with this task
and improve radiotherapy. In this thesis a method was developed for automatically identifying
the tumor volume on medical images. The method consists of three main parts: (1) a novel
rigid image registration method based on scale invariant feature transform (SIFT) and mutual
information (MI); (2) a non-rigid registration (deformable registration) method based on the
cubic B-spline and a novel similarity function; (3) a gradient-based level set method that used
the registered information as prior knowledge for further segmentation to detect changes in the
patient from disease progression or regression and to account for the time difference between
image acquisitions. Validation was carried out by a clinician and by using objective methods that
measure the similarity between the anatomy defined by a clinician and by the method proposed.
With this automatic approach it was possible to identify the tumor volume on different images
acquired at different time points in the radiotherapy workflow. Specifically, for lung cancer
a mean error of 3.9% was found; clinically acceptable results were found for 12 of the 14
prostate cancer cases; and a similarity of 84.44% was achieved for the nasal cancer data. This
framework has the potential to track the shape variation of tumor volumes over time and in
response to radiotherapy, and could therefore, with further validation, be used for adaptive
radiotherapy.
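The rigid step of the framework above combines SIFT keypoints with mutual information (MI). A common histogram estimator of MI between two images, sketched here with synthetic data (the images and bin count are invented, and this is an illustration of the similarity term, not the thesis code):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram (plug-in) estimate of mutual information between two
    images: KL divergence of the joint intensity distribution from the
    product of its marginals."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
ct = rng.normal(0.0, 1.0, (64, 64))
aligned = 2.0 * ct + rng.normal(0.0, 0.1, ct.shape)   # same anatomy, new contrast
shuffled = rng.permutation(aligned.ravel()).reshape(ct.shape)  # alignment destroyed
print(mutual_information(ct, aligned) > mutual_information(ct, shuffled))
```

Because MI depends only on the joint intensity statistics, not on matching intensities directly, it remains a usable similarity measure across the CT, MRI, PET, CBCT and EPID modalities listed above.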
Unveiling healthcare data archiving: Exploring the role of artificial intelligence in medical image analysis
Digital health archives can be considered modern databases designed to store and manage vast amounts of medical information, from patient records and clinical studies to medical images and genomic data. The structured and unstructured data that make up health archives are subject to scrupulous and rigorous validation procedures to guarantee accuracy, reliability, and standardization for clinical and research purposes.
In the context of a continuously and rapidly evolving healthcare sector, artificial intelligence (AI) presents itself as a transformative force, capable of reshaping digital health archives by improving the management, analysis, and retrieval of vast sets of clinical data, in order to achieve more informed and repeatable clinical decisions, timely interventions, and improved patient outcomes.
Among the various archived data, the management and analysis of medical images in digital archives present numerous challenges, due to data heterogeneity, variability in image quality, and the lack of annotations. The use of AI-based solutions can help solve these problems effectively, improving the accuracy of image analysis, standardizing data quality, and facilitating the generation of detailed annotations.
This thesis aims to use AI algorithms for the analysis of medical images stored in digital health archives. The present work proposes to investigate various medical imaging techniques, each of which is characterized by a specific application domain and therefore presents a unique set of challenges, requirements, and potential outcomes. In particular, this thesis examines the diagnostic assistance provided by AI algorithms for three different imaging techniques, in specific clinical scenarios:
i) Endoscopic images obtained during laryngoscopy examinations; this includes an in-depth exploration of techniques such as keypoint detection for estimating vocal fold motility and the segmentation of tumors of the upper aerodigestive tract;
ii) Magnetic resonance images for the segmentation of intervertebral discs, for the diagnosis and treatment of spinal diseases, as well as for performing image-guided surgical interventions;
iii) Ultrasound images in rheumatology, for the assessment of carpal tunnel syndrome through segmentation of the median nerve.
The methodologies presented in this work highlight the effectiveness of AI algorithms in analyzing archived medical images. The methodological advances achieved underline the remarkable potential of AI in revealing information implicitly present in digital health archives.