53 research outputs found

    Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations

    No full text
    With recent advances in biophotonics, techniques such as narrow-band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography can be combined with conventional white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantage of providing optical biopsy online and in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations, because optical biopsy leaves no mark on the tissue. Furthermore, typical endoscopic cameras have only a limited field of view, and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed in which a random binary descriptor built from Haar-like features is classified by a random forest. For robust retargeting, a RANSAC-based location verification component that incorporates shape context is also proposed. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the advantage of the proposed method over the current state of the art
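The detection cascade described above (a random binary descriptor built from Haar-like responses, fed to a random forest) can be sketched as a toy. This is a minimal illustrative example, not the authors' implementation: the two-rectangle tests, patch size, and training data are all fabricated here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
P = 16  # patch side length (hypothetical)

def make_tests(n_tests=64):
    # Each test compares the mean intensity of two randomly placed
    # rectangles inside the patch (a crude two-rectangle Haar-like response).
    tests = []
    for _ in range(n_tests):
        y0, y1 = rng.integers(0, P - 4, size=2)
        x0, x1 = rng.integers(0, P - 4, size=2)
        h, w = rng.integers(2, 5, size=2)
        tests.append((int(y0), int(x0), int(y1), int(x1), int(h), int(w)))
    return tests

def binary_descriptor(patch, tests):
    # Thresholding each Haar-like response at zero yields one bit per test.
    bits = [patch[y0:y0 + h, x0:x0 + w].mean() > patch[y1:y1 + h, x1:x1 + w].mean()
            for y0, x0, y1, x1, h, w in tests]
    return np.array(bits, dtype=np.uint8)

tests = make_tests()
# Toy data: "target" patches are bright on the left half, negatives are noise.
pos = [np.hstack([np.ones((P, P // 2)), np.zeros((P, P // 2))])
       + 0.1 * rng.standard_normal((P, P)) for _ in range(40)]
neg = [rng.standard_normal((P, P)) for _ in range(40)]
X = np.array([binary_descriptor(p, tests) for p in pos + neg])
y = np.array([1] * 40 + [0] * 40)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
probe = np.hstack([np.ones((P, P // 2)), np.zeros((P, P // 2))])
pred = clf.predict([binary_descriptor(probe, tests)])[0]
print(pred)
```

In the paper's setting the classifier would be trained online on the tracked biopsy site rather than on a fixed batch as here.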

    SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image registration framework using stable sampling and regularized transformation

    Full text link
    Image registration is a fundamental medical image analysis task. Ideally, registration should focus on aligning semantically corresponding voxels, i.e., the same anatomical locations. However, existing methods often optimize similarity measures computed directly on intensities or on hand-crafted features, which lack anatomical semantic information. These similarity measures may lead to sub-optimal solutions where large deformations, complex anatomical differences, or cross-modality imagery exist. In this work, we introduce a fast and accurate method for unsupervised 3D medical image registration building on top of a Self-supervised Anatomical eMbedding (SAM) algorithm, which is capable of computing dense anatomical correspondences between two images at the voxel level. We name our approach SAM-Enhanced registration (SAME++), which decomposes image registration into four steps: affine transformation, coarse deformation, deep non-parametric transformation, and instance optimization. Using SAM embeddings, we enhance these steps by finding more coherent correspondences and providing features with better semantic guidance. We extensively evaluated SAME++ using more than 50 labeled organs on three challenging inter-subject registration tasks of different body parts. As a complete registration framework, SAME++ markedly outperforms leading methods by 4.2% to 8.2% in terms of Dice score while being orders of magnitude faster than numerical optimization-based methods. Code is available at https://github.com/alibaba-damo-academy/same
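The embedding-driven affine step can be illustrated with a toy sketch: dense per-voxel embeddings (a fabricated stand-in for SAM, not the real learned embeddings) are matched by mutual nearest neighbours, and an affine transform is fitted to the matched points by least squares. This is not the SAME++ code; all names and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for SAM: corresponding voxels in the two images share a
# (slightly noisy) embedding vector.
n, d = 60, 8
emb_fixed = rng.standard_normal((n, d))
emb_moving = emb_fixed + 0.01 * rng.standard_normal((n, d))

pts_fixed = rng.uniform(0, 100, (n, 3))
A_true = np.array([[1.1, 0.0, 0.0],
                   [0.0, 0.9, 0.05],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])
pts_moving = pts_fixed @ A_true.T + t_true   # ground-truth affine motion

# Step 1: mutual nearest-neighbour matching in embedding space keeps
# only correspondences that agree in both directions.
d2 = ((emb_fixed[:, None, :] - emb_moving[None, :, :]) ** 2).sum(-1)
fwd = d2.argmin(1)
bwd = d2.argmin(0)
mutual = np.array([i for i in range(n) if bwd[fwd[i]] == i])

# Step 2: least-squares affine fit in homogeneous coordinates.
src = np.c_[pts_fixed[mutual], np.ones(len(mutual))]
dst = pts_moving[fwd[mutual]]
M, *_ = np.linalg.lstsq(src, dst, rcond=None)
A_est, t_est = M[:3].T, M[3]
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))
```

The mutual-consistency check is what makes embedding matches usable for a robust fit: one-directional nearest neighbours alone admit many spurious pairs.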

    Differently stained whole slide image registration technique with landmark validation

    Get PDF
    Abstract. One of the most important tasks in digital pathology is to visually compare and fuse successive, differently stained tissue sections, also called slides. Doing so requires aligning the different images to a common frame of reference, a ground truth. Current sample-scanning tools can create images full of informative layers of digitised tissue, stored at high resolution as whole slide images. However, only a limited number of automatic alignment tools can handle such large images precisely within an acceptable processing time. The idea of this study is to propose a deep learning solution for histopathology image registration. The main focus is on understanding landmark validation and the impact of stain augmentation on differently stained histopathology images. The developed registration method is also compared with state-of-the-art algorithms that utilise whole slide images in the field of digital pathology. Previous studies on histopathology, digital pathology, whole slide imaging and image registration, colour staining, data augmentation, and deep learning are referenced in this study. The goal is to develop a learning-based registration framework specifically for high-resolution histopathology image registration. Different whole slide tissue sample images are used, with a resolution of up to 40x magnification. The images are organised into sets of consecutive, differently dyed sections, and the aim is to register the images based only on the visible tissue, ignoring the background. Significant structures in the tissue are marked with landmarks. The quality measurements include, for example, the relative target registration error, the structural similarity index metric, visual evaluation, landmark-based evaluation, matching points, and image details. These results are comparable and can also be used in future research and in the development of new tools. 
Moreover, the results are expected to show how theory and practice are combined in whole slide image registration challenges. The DeepHistReg algorithm is studied to better understand the development of this study's stain colour feature augmentation-based registration tool. Matlab and Aperio ImageScope are the tools used to annotate and validate the images, and Python is used to develop the algorithm of this new registration tool. As cancer is globally a serious disease regardless of age or lifestyle, it is important to find ways to develop the systems experts can use while working with patients' data. There is still much to improve in the field of digital pathology, and this study is one step toward it
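One of the quality measures listed above, the relative target registration error (rTRE), is the landmark error normalised by the image diagonal, the convention used in whole-slide registration benchmarks. A minimal sketch, with toy landmarks rather than real annotations:

```python
import numpy as np

def relative_tre(warped, target, image_shape):
    # Landmark error in pixels, normalised by the image diagonal so that
    # values are comparable across slides of different resolution.
    diag = np.hypot(image_shape[0], image_shape[1])
    return np.linalg.norm(warped - target, axis=1) / diag

# Toy data: two landmarks, each displaced by a 5-pixel error after warping.
target = np.array([[100.0, 200.0], [400.0, 80.0]])
warped = target + np.array([3.0, 4.0])
vals = relative_tre(warped, target, (1000, 1000))
print(vals)
```

Normalising by the diagonal is what makes the metric meaningful for whole slide images, whose pixel dimensions vary by orders of magnitude across magnification levels.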

    Registration of pre-operative lung cancer PET/CT scans with post-operative histopathology images

    Get PDF
    Non-invasive imaging modalities used in the diagnosis of lung cancer, such as Positron Emission Tomography (PET) or Computed Tomography (CT), currently provide insufficient information about the cellular make-up of the lesion microenvironment, unless they are compared against the gold standard of histopathology. The aim of this retrospective study was to build a robust imaging framework for registering in vivo and post-operative scans from lung cancer patients, in order to obtain a global, pathology-validated multimodality map of the tumour and its surroundings. Initial experiments were performed on tissue-mimicking phantoms to test different shape reconstruction methods. The choice of interpolator and slice thickness were found to affect the algorithm's output, in terms of overall volume and local feature recovery. In the second phase of the study, nine lung cancer patients referred for radical lobectomy were recruited. Resected specimens were inflated with agar, sliced at 5 mm intervals, and each cross-section was photographed. The tumour area was delineated on the block-face pathology images and on the preoperative PET/CT scans. Airway segments were also added to the reconstructed models to act as anatomical fiducials. Binary shapes were pre-registered by aligning their minimal bounding box axes, and subsequently transformed using rigid registration. In addition, histopathology slides were matched to the block-face photographs using a moving least squares algorithm. A two-step validation process was used to evaluate the performance of the proposed method against manual registration carried out by experienced consultants. 
In two out of three cases, experts rated the results generated by the algorithm as the best output, suggesting that the developed framework outperforms the current standard practice
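The rigid-registration step applied after bounding-box pre-alignment can be sketched with a standard Kabsch least-squares fit. This is not the study's actual pipeline: the point correspondences and shapes below are synthetic, and the real method aligned binary volumes rather than matched points.

```python
import numpy as np

rng = np.random.default_rng(2)

def rigid_register(src, dst):
    # Kabsch: least-squares rotation + translation mapping src onto dst,
    # given point correspondences.
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Toy "shape": a flattened point cloud, plus a rotated and shifted copy.
pts = rng.uniform(-1, 1, (200, 3)) * np.array([3.0, 1.5, 0.5])
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([10.0, -4.0, 2.0])

R_est, t_est = rigid_register(pts, moved)
print(np.allclose(R_est, R_true), np.allclose(pts @ R_est.T + t_est, moved))
```

A coarse pre-alignment (here implicit, in the study via bounding-box axes) matters because least-squares rigid fitting assumes correspondences are already roughly correct.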

    Automated Stabilization, Enhancement and Capillaries Segmentation in Videocapillaroscopy

    Get PDF
    Oral capillaroscopy is a critical and non-invasive technique used to evaluate microcirculation. Its ability to observe small vessels in vivo has generated significant interest in the field. Capillaroscopy serves as an essential tool for diagnosing and prognosing various pathologies, with anatomic–pathological lesions playing a crucial role in their progression. Despite its importance, the utilization of videocapillaroscopy in the oral cavity encounters limitations due to the acquisition setup, encompassing the spatial and temporal resolution of the video camera, objective magnification, and physical probe dimensions. Moreover, the operator’s influence during the acquisition process, particularly how the probe is maneuvered, further affects its effectiveness. This study aims to address these challenges and improve data reliability by developing a computerized support system for microcirculation analysis. The designed system performs stabilization, enhancement and automatic segmentation of capillaries in oral mucosal video sequences. The stabilization phase was performed with a method based on coupling seed points in a classification process. The enhancement process was based on the temporal analysis of the capillaroscopic frames. Finally, an automatic segmentation phase of the capillaries was implemented, with the additional objective of quantitatively assessing the signal improvement achieved through the developed techniques. Specifically, transfer learning with the well-known U-Net deep network was employed for this purpose. The proposed method underwent testing on a database with ground truth obtained from expert manual segmentation. The obtained results demonstrate an achieved Jaccard index of 90.1% and an accuracy of 96.2%, highlighting the effectiveness of the developed techniques in oral capillaroscopy. 
In conclusion, these promising outcomes encourage the utilization of this method to assist in the diagnosis and monitoring of conditions that impact microcirculation, such as rheumatologic or cardiovascular disorders
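The two reported metrics, Jaccard index and accuracy, are computed directly from binary segmentation masks. A minimal sketch with toy masks (not the study's data):

```python
import numpy as np

def jaccard_and_accuracy(pred, gt):
    # Jaccard index (intersection over union) and pixel accuracy
    # between two binary segmentation masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jac = inter / union if union else 1.0
    acc = (pred == gt).mean()
    return jac, acc

# Toy masks: a 6x6 "capillary" ground truth and a prediction missing one row.
gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True
pred = np.zeros_like(gt)
pred[3:8, 2:8] = True
jac, acc = jaccard_and_accuracy(pred, gt)
print(jac, acc)
```

Note how accuracy (0.94 here) flatters the result relative to Jaccard (about 0.83): the background dominates the pixel count, which is why segmentation studies report both.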

    Image analysis-based framework for adaptive and focal radiotherapy

    Get PDF
    It is estimated that more than 60% of cancer patients will receive radiotherapy (RT). Medical images acquired from different imaging modalities are used to guide the entire RT process, from the initial treatment plan to fractionated radiation delivery. Accurate identification of the gross tumor volume (GTV) on computed tomography (CT), acquired at different time points, is crucial for the success of RT. In addition, complementary information from magnetic resonance imaging (MRI), positron emission tomography (PET), cone-beam computed tomography (CBCT) and electronic portal imaging devices (EPID) is often used to obtain a better definition of the target, track disease progression and update the radiotherapy plan. However, identifying tumor volumes on medical image data requires significant clinical experience and is extremely time consuming. Computer-based methods have the potential to assist with this task and improve radiotherapy. In this thesis a method was developed for automatically identifying the tumor volume on medical images. The method consists of three main parts: (1) a novel rigid image registration method based on the scale-invariant feature transform (SIFT) and mutual information (MI); (2) a non-rigid (deformable) registration method based on the cubic B-spline and a novel similarity function; (3) a gradient-based level set method that uses the registered information as prior knowledge for further segmentation, to detect changes in the patient from disease progression or regression and to account for the time difference between image acquisitions. Validation was carried out by a clinician and with objective methods that measure the similarity between the anatomy defined by a clinician and that identified by the proposed method. With this automatic approach it was possible to identify the tumor volume on different images acquired at different time points in the radiotherapy workflow. 
Specifically, for lung cancer a mean error of 3.9% was found; clinically acceptable results were found for 12 of the 14 prostate cancer cases; and a similarity of 84.44% was achieved for the nasal cancer data. This framework has the potential to track the shape variation of tumor volumes over time and in response to radiotherapy, and could therefore, with more validation, be used for adaptive radiotherapy
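The mutual-information term in the rigid-registration step can be sketched from a joint intensity histogram. This is not the thesis's SIFT+MI implementation; the images below are synthetic, and the histogram bin count is an arbitrary choice for illustration.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # MI from the joint intensity histogram: high when one image's
    # intensities predict the other's -- the quantity maximised when
    # searching for the aligning transform.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(3)
img = np.add.outer(np.arange(64.0), np.arange(64.0))  # smooth ramp "anatomy"
shifted = np.roll(img, 5, axis=1)                     # misaligned copy
noise = rng.random((64, 64)) * img.max()              # unrelated image

mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, shifted)
mi_noise = mutual_information(img, noise)
print(mi_aligned > mi_shifted > mi_noise)
```

MI peaks at alignment and degrades with misalignment without assuming a linear intensity relationship, which is why it is a standard similarity measure for multi-modality registration.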

    Unveiling healthcare data archiving: Exploring the role of artificial intelligence in medical image analysis

    Get PDF
    Digital health archives can be regarded as modern databases designed to store and manage vast amounts of medical information, from patients' clinical records to clinical studies, medical images, and genomic data. The structured and unstructured data that make up health archives undergo careful and rigorous validation procedures to guarantee accuracy, reliability, and standardisation for clinical and research purposes. In the context of a continuously and rapidly evolving healthcare sector, artificial intelligence (AI) presents itself as a transformative force capable of reshaping digital health archives by improving the management, analysis, and retrieval of vast sets of clinical data, with the aim of achieving more informed and repeatable clinical decisions, timely interventions, and improved patient outcomes. Among the various archived data, the management and analysis of medical images in digital archives present numerous challenges due to data heterogeneity, variability in image quality, and the lack of annotations. AI-based solutions can help address these problems effectively, improving the accuracy of image analysis, standardising data quality, and facilitating the generation of detailed annotations. This thesis aims to apply AI algorithms to the analysis of medical images stored in digital health archives. The present work investigates various medical imaging techniques, each characterised by a specific application domain and therefore presenting a unique set of challenges, requirements, and potential outcomes. 
In particular, this thesis examines the diagnostic assistance provided by AI algorithms for three different imaging techniques, in specific clinical scenarios: i) endoscopic images acquired during laryngoscopy examinations, including an in-depth exploration of techniques such as keypoint detection for estimating vocal fold motility and segmentation of tumours of the upper aerodigestive tract; ii) magnetic resonance images for the segmentation of intervertebral discs, for the diagnosis and treatment of spinal diseases, as well as for image-guided surgical procedures; iii) ultrasound images in rheumatology, for the assessment of carpal tunnel syndrome through segmentation of the median nerve. The methodologies presented in this work demonstrate the effectiveness of AI algorithms in analysing archived medical images. The methodological advances obtained underline the considerable potential of AI in revealing information implicitly present in digital health archives