
    Comparative Analysis of Tissue Reconstruction Algorithms for 3D Histology

    Motivation: Digital pathology enables new approaches that expand beyond storage, visualization, or analysis of histological samples in digital format. One novel opportunity is 3D histology, where a three-dimensional reconstruction of the sample is formed computationally based on serial tissue sections. This allows examining tissue architecture in 3D, for example, for diagnostic purposes. Importantly, 3D histology enables joint mapping of cellular morphology with spatially resolved omics data in the true 3D context of the tissue at microscopic resolution. Several algorithms have been proposed for the reconstruction task, but a quantitative comparison of their accuracy is lacking. Results: We developed a benchmarking framework to evaluate the accuracy of several free and commercial 3D reconstruction methods using two whole slide image datasets. The results provide a solid basis for further development and application of 3D histology algorithms and indicate that methods capable of compensating for local tissue deformation are superior to simpler approaches.
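The abstract does not specify the accuracy metric; as a minimal sketch of how section-to-section registration accuracy is commonly quantified in such benchmarks, the snippet below computes a target registration error (TRE) over corresponding landmarks. The landmark data and noise model are hypothetical stand-ins, not the paper's benchmark.

```python
# Minimal sketch: quantifying reconstruction accuracy with target
# registration error (TRE). Landmarks here are synthetic stand-ins for
# the fiducials a benchmark would annotate on adjacent sections.
import numpy as np

def target_registration_error(fixed_pts: np.ndarray, moved_pts: np.ndarray) -> np.ndarray:
    """Euclidean distance between corresponding landmarks (N x 2 arrays)."""
    return np.linalg.norm(fixed_pts - moved_pts, axis=1)

rng = np.random.default_rng(0)
fixed = rng.uniform(0, 1000, size=(50, 2))              # pixel coordinates on section i
registered = fixed + rng.normal(0, 3.0, size=(50, 2))   # residual misalignment after registration

tre = target_registration_error(fixed, registered)
print(f"mean TRE: {tre.mean():.1f} px, 90th percentile: {np.percentile(tre, 90):.1f} px")
```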

    Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration

    Virtual reality (VR) enables data visualization in an immersive and engaging manner and can be used to create new ways of exploring scientific data. Here, we use VR for visualization of 3D histology data, creating a novel interface for digital pathology. Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features and full-resolution serial section patches, and implementing the virtual reality application. Our VR application is multi-scale in nature, covering two object levels representing different ranges of detail, namely the organ level and the sub-organ level. In addition, the application includes several data layers, including the measured histology image layer and multiple representations of quantitative features computed from the histology. In this interactive VR application, the user can set visualization properties, select different samples and features, and interact with various objects. In this work, we used whole mouse prostates (organ level) with prostate cancer tumors (sub-organ objects of interest) as example cases and included quantitative histological features relevant for tumor biology in the VR model. Owing to automated processing of the histology data, our application can easily be adapted to visualize other organs and pathologies from various origins. Our application enables a novel way of exploring high-resolution, multidimensional data for biomedical research purposes and can also be used in teaching and researcher training.
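As a purely illustrative sketch (not the authors' implementation), the data model below captures the two object levels and selectable feature layers the abstract describes; all class names, file paths, and layer names are hypothetical.

```python
# Hypothetical data model for a two-level, multi-layer VR scene: an
# organ-level mesh containing sub-organ objects of interest, each with
# selectable quantitative feature layers.
from dataclasses import dataclass, field

@dataclass
class FeatureLayer:
    name: str                 # e.g. "histology" or a computed feature
    values: list[float]       # one value per mesh vertex or patch

@dataclass
class SubOrganObject:
    name: str                 # e.g. "tumor 1"
    mesh_path: str            # high-detail mesh for close-up inspection
    layers: list[FeatureLayer] = field(default_factory=list)

@dataclass
class OrganModel:
    name: str                 # e.g. "mouse prostate"
    mesh_path: str            # coarse mesh for organ-level viewing
    objects: list[SubOrganObject] = field(default_factory=list)

prostate = OrganModel("mouse prostate", "prostate_organ.stl")
prostate.objects.append(
    SubOrganObject("tumor 1", "tumor1_highres.stl",
                   [FeatureLayer("histology", []), FeatureLayer("cancer probability", [])])
)
```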

    Spa-RQ: an Image Analysis Tool to Visualise and Quantify Spatial Phenotypes Applied to Non-Small Cell Lung Cancer

    To facilitate analysis of spatial tissue phenotypes, we created an open-source tool package named 'Spa-RQ' for 'Spatial tissue analysis: image Registration & Quantification'. Spa-RQ contains software for image registration (Spa-R) and quantitative analysis of DAB staining overlap (Spa-Q). It provides an easy-to-implement workflow for serial sectioning and staining as an alternative to multiplexed techniques. To demonstrate Spa-RQ's applicability, we analysed the spatial aspects of oncogenic KRAS-related signalling activities in non-small cell lung cancer (NSCLC). Using Spa-R in conjunction with ImageJ/Fiji, we first performed annotation-guided tumour-by-tumour phenotyping using multiple signalling markers. This analysis showed histopathology-selective activation of PI3K/AKT and MAPK signalling in Kras-mutant murine tumours, as well as high p38MAPK stress signalling in p53-null murine NSCLC. Subsequently, Spa-RQ was applied to measure the co-activation of MAPK, AKT, and their mutual effector mTOR pathway in individual tumours. Both murine and clinical NSCLC samples could be stratified into 'MAPK/mTOR', 'AKT/mTOR', and 'Null' signature subclasses, suggesting mutually exclusive MAPK and AKT signalling activities. Spa-RQ thus provides a robust, easy-to-use tool for identifying spatially distributed tissue phenotypes.
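Spa-Q's exact computation is not given in the abstract; the sketch below shows one plausible way to quantify DAB staining overlap between two registered binary positivity masks from adjacent serial sections. The marker names and thresholded masks are hypothetical.

```python
# Minimal sketch of the overlap idea behind Spa-Q (not the published code):
# given two registered binary DAB-positivity masks, measure how much of
# each marker's positive area is shared with the other.
import numpy as np

def dab_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> dict:
    both = np.logical_and(mask_a, mask_b).sum()
    return {
        "frac_a_also_b": both / max(mask_a.sum(), 1),  # e.g. pMAPK+ pixels that are also pmTOR+
        "frac_b_also_a": both / max(mask_b.sum(), 1),
    }

rng = np.random.default_rng(1)
pmapk = rng.random((512, 512)) > 0.7   # hypothetical thresholded DAB masks
pmtor = rng.random((512, 512)) > 0.6
print(dab_overlap(pmapk, pmtor))
```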


    Development of Microstructural Segmentation and 3D Reconstruction Method Using Serial Section of Tissue: 3D Educational Model of Human Hypothalamus

    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :์˜๊ณผ๋Œ€ํ•™ ์˜ํ•™๊ณผ,2019. 8. ํ™ฉ์˜์ผ์ตœํ˜•์ง„.INTRODUCTION: The 3D reconstruction technique of tissue staining images is very valuable in that it visualizes the microstructure information that Magnetic resonance imaging (MRI) and Computed tomography (CT) data cannot provide and is widely used for pathological diagnosis. Organizational 3D reconstruction needs the latest devices and software for each phase. However, in reality, it is not easy to equip all of the most brand-new equipment, and software, so the existing research has been done by only a limited number of people. For this reason, in this study, we tried to develop a 3D reconstruction method of the organization by using available laboratory equipment and highly accessible software. The human hypothalamus is relatively small compared to other brain structures, but it is the backbone of homeostasis regulation and an important structure directly linked to survival. It consists of more than 13 nuclei and microstructures, and many attempts have been made to identify them. However, most reported histology images were based on the 2D map, that cause researchers are experiencing difficulties in understanding spatial structure perception. In addition, most of the currently reported hypothalamic 3D maps are based on MRI data. This DICOM based medical image has the disadvantage that it is difficult to understand the detailed microstructure of the nucleus. In order to overcome these drawbacks, this study aims to develop a detailed 3D model of the human hypothalamus by using easily accessible devices and software. Since the 3D map of the human hypothalamus has not been reported so far, we have developed a method that allows a wider range of researchers to perform the 3D reconstruction of the tissue, which had previously been done by a limited number of people. We also tried to make the model created by the researchers easily accessible to the field. Methods: Nissl staining of human brain hypothalamus obtained by autopsy was converted to a digital image using a tissue slide scanner. The whole slide image was converted by using image processing software to adjust the resolution and extension. After that, segmentation of the hypothalamic microstructure was performed in the Adobe Photoshop software, and the missing slide images were prepared by manual interpolation. All the structure segmented images were transformed into black and white to produce a mask suitable for 3D reconstruction, and they were classified by structure. Then, the whole image was subjected to bit number correction and extension conversion suitable for 3D reconstruction using ImageJ software. Then 3D reconstruction software reconstructs the segmented structures into three dimensions and attempts 3D rendering. After transforming them into STL extensions, we tried to edit them using MeshMixer software. Through this process, 3D map was created with WebGL, and the 3D map education model of the human hypothesis was created. Results: A total of 100 staining images were obtained by Nissl staining using human brain hypothalamus. To make our results more clearly, hypothalamic 2D maps obtained in this study were compared with Allen atlas. A total of 23 segmentations were carried out including hypothalamic surrounding structures and nucleus distribution patterns. A total of 11 excluded slides were supplemented by manual interpolation. 
The hypothalamus 2D map was used to reconstruct the human hypothalamus as a 3D reconstructed volume model and a 3D reconstructed surface model. The 3D reconstruction surface model was obtained by using MeshMixer to complement the smoothing and the outlier point of each structure. Then, I created a hypothalamus 3D reconstruction education model using WebGL service to make possible for anyone to easily access and learn without the constraint of time and space. Discussion: In this study, I developed a method for producing 2D map and 3D reconstructed images of Nissl stained using hypothalamus tissue. This is the first 3D reconstruction model based on the hypothalamus, which is meant to help other researchers and medical personnel in education and research. Previous studies have shown that the spacing of the slices of the hypothalamus tissue was not constant, but this study succeeded in acquiring the results of the staining of the hypothalamus tissue at 100 ใŽ› intervals as the basic data for 3D reconstruction. Many other types of missing images were found due to the lack of consideration of various variables that occurred during the reconstruction process. The anatomical structure and various parameters were considered and corrected for more satisfactory results. In addition, existing image-based software provides automatic segmentation function considering only the distinctive features of shaded images, so it is very inappropriate to classify subtle clustering patterns such as nucleus and structures in the human hypothalamus. It is significant that the progress process is segmented, and the separate software suitable for each process is applied, and the process of working with them is compatible with each other. Most software is free, low cost and easy to learn and use, so it provides a way to easily create an organization 3D image without expensive software or equipment. The existing hypothalamus training data were mostly 2D illustrations, but the 3D reconstructed images produced in this study are easy to grasp the positional relationship of structures more space. In particular, since the hypothalamus does not contain data showing nuclear reconstruction as a 3D reconstructed image, the educational model of this study will be of great help to many hypothalamus researchers. And the 3D WebGL education model has pedagogical value because it enables free access and access through users personal device, enabling ubiquitous learning that is not restricted by time and space. Conclusions: Through this study, I have established a method for producing 2D map and 3D reconstruction using human hypothalamus. Through the 3D reconstruction image and the education model, the positional relation of the human hypothalamus can be recognized by spatial perception. This result is pedagogically worthy because it can be used as U-learning material to help researchers self-directed learning by opening it to open source WebGL for easy use by anyone.์„œ๋ก : ์กฐ์ง ์—ผ์ƒ‰ ์˜์ƒ์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ๊ธฐ์ˆ ์€ MRI์™€ CT๊ฐ€ ์ œ๊ณตํ•˜์ง€ ๋ชปํ•˜๋Š” ๋ฏธ์„ธ๊ตฌ์กฐ๋ฅผ ์‹œ๊ฐํ™” ํ•˜์—ฌ 3์ฐจ์› ์กฐ์งํ•™์— ํ™œ์šฉ๋œ๋‹ค๋Š” ์ ์—์„œ ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค. ์กฐ์ง 3์ฐจ์› ์žฌ๊ตฌ์„ฑ์—๋Š” ๊ฐ ๋‹จ๊ณ„์— ์ ํ•ฉํ•œ ๊ธฐ๊ธฐ์™€ ์†Œํ”„ํŠธ์›จ์–ด๊ฐ€ ์‚ฌ์šฉ๋œ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ณ ๊ฐ€์˜ ๊ธฐ๊ธฐ์™€ ์†Œํ”„ํŠธ์›จ์–ด๋ฅผ ์ „๋ถ€ ๊ฐ–์ถ”๊ธฐ๋ž€ ์‰ฝ์ง€ ์•Š์œผ๋ฏ€๋กœ ๊ธฐ์กด์˜ ์—ฐ๊ตฌ๋Š” ์ œํ•œ์ ์ธ ์†Œ์ˆ˜์— ์˜ํ•ด ์ด๋ฃจ์–ด์ ธ ์™”๋‹ค. 
์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€๋Š” ๋‹ค๋ฅธ ๋‡Œ ๊ตฌ์กฐ๋ฌผ์— ๋น„ํ•ด ์ƒ๋Œ€์ ์œผ๋กœ ํฌ๊ธฐ๋Š” ์ž‘์ง€๋งŒ ํ•ญ์ƒ์„ฑ ์กฐ์ ˆ์˜ ์ค‘์ถ”์ด๋ฉฐ ์ƒ์กด๊ณผ ์ง๊ฒฐ๋œ ์‹ ์ฒด ํ™œ๋™์„ ์กฐ์ ˆํ•˜๋Š” ์ค‘์š”ํ•œ ๊ธฐ๊ด€์ด๋‹ค. ๊ธฐ์กด ์‹œ์ƒํ•˜๋ถ€์— ๋Œ€ํ•œ ์‹œ๊ฐ์  ์—ฐ๊ตฌ๋Š” ์กฐ์งํ•™ ์˜์ƒ ๊ธฐ๋ฐ˜์˜ 2์ฐจ์› ์ง€๋„ ์ค‘์‹ฌ์œผ๋กœ ์ด๋ฃจ์–ด์ ธ ๊ตฌ์กฐ๋ฅผ ๊ณต๊ฐ„์ง€๊ฐ์ ์œผ๋กœ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋งŽ์€ ์–ด๋ ค์›€์ด ์žˆ์—ˆ๋‹ค. ๋˜ํ•œ ํ˜„์žฌ ๋ฐœํ‘œ๋œ ๋Œ€๋ถ€๋ถ„์˜ ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€ 3์ฐจ์› ์ง€๋„๋Š” MRI ๋ฐ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ž‘์„ฑ๋˜์–ด ์žˆ์–ด ์‹ ๊ฒฝํ•ต ๋‹จ์œ„์˜ ๋ฏธ์„ธ๊ตฌ์กฐ ์ •๋ณด๋ฅผ ์ถฉ๋ถ„ํžˆ ์ „๋‹ฌํ•˜์ง€ ๋ชปํ•œ๋‹ค. ๋”ฐ๋ผ์„œ ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ๋‘๊ฐ€์ง€ ๋ชฉํ‘œ๋ฅผ ๋‹ฌ์„ฑํ•˜๊ณ ์ž ํ•˜์˜€๋‹ค. ์ฒซ์งธ๋กœ, ์ ‘๊ทผ์„ฑ์ด ์ข‹์€ ์žฅ๋น„์™€ ์†Œํ”„ํŠธ์›จ์–ด๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋ณด๋‹ค ๋„“์€ ๋ฒ”์œ„์˜ ์—ฐ๊ตฌ์ž๋“ค์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์กฐ์ง์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ๋ฐฉ๋ฒ•์„ ๊ฐœ๋ฐœํ•˜๊ณ ์ž ํ•˜์˜€๋‹ค. ๋‘˜์งธ๋กœ, ์œ„์˜ ๊ณผ์ •์„ ํ†ตํ•ด ํ™•๋ฆฝํ•œ ๋ฐฉ๋ฒ•์„ ์‹œ์ƒํ•˜๋ถ€์— ์ ์šฉํ•˜์—ฌ ํ•ด๋‹น ๋ถ„์•ผ ์—ฐ๊ตฌ์ž๋“ค์ด ํ•™์Šต ๋ฐ ๊ต์œก์— ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” 3์ฐจ์› ๋ชจ๋ธ์„ ์ œ์ž‘ํ•˜๊ณ ์ž ํ•˜์˜€๋‹ค. ์—ฐ๊ตฌ ๋Œ€์ƒ ๋ฐ ๋ฐฉ๋ฒ•: ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€ ์ „์ฒด์™€ ์‹œ๊ฐ๋กœ๊ฐ€ ํฌํ•จ๋œ ์กฐ์ง์„ ๋Œ€์ƒ์œผ๋กœ ์กฐ์ง 3์ฐจ์› ์žฌ๊ตฌ์„ฑ์„ ์‹œ๋„ํ•˜์˜€๋‹ค. ์—ผ์ƒ‰๋œ ๊ฐ ์กฐ์ง์ ˆํŽธ์„ ์Šฌ๋ผ์ด๋“œ ์Šค์บ๋„ˆ๋ฅผ ์ด์šฉํ•ด ๋””์ง€ํ„ธ ์˜์ƒ์œผ๋กœ ๋ณ€ํ™˜ํ•˜์˜€๊ณ  ZEN์„ ์‚ฌ์šฉํ•˜์—ฌ ํ•ด์ƒ๋„ ์กฐ์ ˆ๊ณผ ํ™•์žฅ์ž ๋ณ€ํ™˜์„ ์‹œํ–‰ํ•˜์˜€๋‹ค. ์ „์ฒด ์˜์ƒ์„ Adobe Photoshop์„ ์ด์šฉํ•˜์—ฌ ์‹œ๊ฐ๋กœ์™€ ๋ฏธ์„ธํ˜ˆ๊ด€, ์•ˆ์ชฝํ›„๊ฐ๊ฒ‰์งˆ์˜ ์™ธ๊ณฝ์„ ์„ ๊ธฐ์ค€์œผ๋กœ ์ •ํ•ฉ ํ•˜์˜€๋‹ค. ์ด ํ›„ ๋ฏธ์„ธ์กฐ์ง ๊ตฌ์—ญํ™”, ์†Œ์‹ค๋œ ์Šฌ๋ผ์ด๋“œ ์˜์ƒ์˜ ์ˆ˜๋™ ๋ณด๊ฐ„๋ฒ• ์ ์šฉ, ์ „์ฒด ๊ตฌ์—ญํ™” ์˜์ƒ์˜ ํ‘๋ฐฑ ๋ณ€ํ™˜, ๋งˆ์Šคํฌ ์ œ์ž‘, ๊ตฌ์กฐ๋ฌผ ๋ณ„ ๋ถ„๋ฅ˜๋ฅผ ์‹œํ–‰ํ•˜์˜€๋‹ค. ๋˜ํ•œ ์ „์ฒด ์˜์ƒ์„ ImageJ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ bit์ˆ˜ ๊ต์ • ๋ฐ ํ™•์žฅ์ž ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•˜์˜€๋‹ค. ์ด ํ›„ MEDIP์—์„œ ๊ตฌ์—ญํ™” ํ•œ ๊ตฌ์กฐ๋ฌผ ์˜์ƒ์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ, STL ํ™•์žฅ์ž ๋ณ€ํ™˜ ๋ฐ ๋‚ด๋ณด๋‚ด๊ธฐ๋ฅผ ์‹œํ–‰ํ•˜์˜€๋‹ค. ์ด ํ›„ MeshMixer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‘œ๋ฉด์˜ ์š”์ฒ ๊ณผ ์ด์ƒ์ ์„ ๊ต์ •ํ•œ ๋’ค webGL ๊ต์œก๋ชจ๋ธ๋กœ ์ œ์ž‘ํ•˜์˜€๋‹ค. ์ด๋ ‡๊ฒŒ ์ˆ˜๋ฆฝ๋œ protocol์„ ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€์— ์ ์šฉํ•˜์—ฌ ๋‚ด๋ถ€ ์‹ ๊ฒฝํ•ต๊ณผ ๋ฏธ์„ธ๊ตฌ์กฐ๋ฅผ 3์ฐจ์›์œผ๋กœ ์žฌ๊ตฌ์„ฑํ•˜์˜€๋‹ค. ๊ฒฐ๊ณผ: Zen, Adobe Photoshop, ImageJ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์กฐ์ง ์—ผ์ƒ‰ ์˜์ƒ, ์‹œ๊ฐ๋กœ 2์ฐจ์› ์ง€๋„, ์ƒ‰๋ฉด ๋ ˆ์ด์–ด, ํŒจ์Šค์˜์—ญ ๋ ˆ์ด์–ด, ํ‘๋ฐฑ๋ณ€ํ™˜ ์˜์ƒ, ํ‘๋ฐฑ ๋ฐ˜์ „ ์˜์ƒ, Raw data mask๋ฅผ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์ด ํ›„ MEDIP๊ณผ Meshmixer ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด์กฐ์ง๊ณผ ์‹œ๊ฐ๋กœ ๋ถ€ํ”ผ๋ชจ๋ธ, ํ‘œ๋ฉด๋ชจ๋ธ, ๊ต์œก๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์ด๋ฅผ ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€ ๋ฏธ์„ธ๊ตฌ์กฐ์˜ ์‹œ๊ฐํ™”์— ์ ์šฉํ•˜์—ฌ ์กฐ์ง ์—ผ์ƒ‰ ์˜์ƒ, 2์ฐจ์› ์ง€๋„, ๋ฏธ์„ธ๊ตฌ์กฐ์™€ ์‹ ๊ฒฝํ•ต ๊ตฌ์—ญํ™” ์ƒ‰๋ฉด ๋ ˆ์ด์–ด, ํŒจ์Šค์˜์—ญ ๋ ˆ์ด์–ด, ํ‘๋ฐฑ ๋ณ€ํ™˜ ์˜์ƒ, ํ‘๋ฐฑ ๋ฐ˜์ „ ์˜์ƒ, 3์ฐจ์› Raw data mask, 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ๋ถ€ํ”ผ ๋ชจ๋ธ, ํ‘œ๋ฉด ๋ชจ๋ธ, 3์ฐจ์› ๊ต์œก๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์˜€๋‹ค. ๊ณ ์ฐฐ: ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” Zen, Adobe Photoshop, ImageJ, MEDIP, Meshmixer๋กœ ์ด์–ด์ง€๋Š” ์กฐ์ง์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ์ œ์ž‘ protocol์„ ๊ฐœ๋ฐœํ•˜์˜€๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋Š” ๊ธฐ์กด์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ๋ฐฉ๋ฒ•๋ก ๊ณผ ๋น„๊ตํ–ˆ์„ ๋•Œ ์™ธ๊ณฝ์„ ์ด ๋šœ๋ ทํ•˜์ง€ ์•Š์€ ๊ตฌ์กฐ๋ฌผ์˜ ์˜์ƒ ์œ„์— ์ˆ˜๋™์œผ๋กœ ๊ตฌ์—ญํ™”๋ฅผ ์ˆ˜ํ–‰ํ–ˆ๋‹ค๋Š” ์ ์—์„œ ์ฐจ๋ณ„์„ฑ์ด ์žˆ๋‹ค. ๋˜ํ•œ ๊ธฐ์กด์˜ ์กฐ์ง 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ๋ฐฉ๋ฒ•๋ก ๊ณผ ๋น„๊ตํ–ˆ์„ ๋•Œ ๊ฐ ๋‹จ๊ณ„์— ์‚ฌ์šฉ๋˜๋Š” ๊ณ ๊ฐ€์˜ ์†Œํ”„ํŠธ์›จ์–ด์™€ ๊ธฐ๊ธฐ๋ฅผ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ ‘๊ทผ์„ฑ์ด ๋†’์€ ์†Œํ”„ํŠธ์›จ์–ด๋กœ ๋ถ„ํ•  ๋ฐ ์ ์šฉํ–ˆ๋‹ค๋Š” ์ ์—์„œ ๊ธฐ์กด ์—ฐ๊ตฌ์™€ ๋‹ค๋ฅด๋‹ค. 
๋ณธ ์—ฐ๊ตฌ์˜ 2์ฐจ์› ์ง€๋„๋Š” Allen atlas๊ฐ€ ์ œ๊ณตํ•˜๋Š” ์‚ฌ๋žŒ ๋‡Œ 2์ฐจ์› ์ง€๋„์™€ ๋น„๊ตํ–ˆ์„ ๋•Œ ๋ณด๋‹ค ์ด˜์ด˜ํ•œ 100 ใŽ›์˜ ์˜์ƒ์„ ์ผ์ •ํ•œ ๊ฐ„๊ฒฉ์œผ๋กœ ํš๋“ํ–ˆ๋‹ค๋Š” ์ ์—์„œ ์ฐจ๋ณ„์„ฑ์ด ์žˆ๋‹ค. ๋˜ํ•œ MRI ๊ธฐ๋ฐ˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ๋ชจ๋ธ์ด ์ œ๊ณตํ•˜์ง€ ๋ชปํ•˜๋Š” ์—ฌ๋Ÿฌ ๋ฏธ์„ธ๊ตฌ์กฐ์™€ ์‹ ๊ฒฝํ•ต์„ ์‹œ๊ฐํ™” ํ–ˆ๋‹ค๋Š” ์ ์—์„œ ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค. ๊ธฐ์กด์˜ 2์ฐจ์› ์‹ ๊ฒฝํ•ด๋ถ€ํ•™ ๊ต์œก์ž๋ฃŒ์™€ ๋น„๊ตํ–ˆ์„ ๋•Œ ๋ณธ ์—ฐ๊ตฌ์—์„œ ์ œ์ž‘ํ•œ 3์ฐจ์› ๊ต์œก๋ชจ๋ธ์€ ๊ตฌ์กฐ๋ฌผ๋“ค์˜ ์œ„์น˜๊ด€๊ณ„๋ฅผ ๊ณต๊ฐ„์ง€๊ฐ์ ์œผ๋กœ ๋ณด๋‹ค ์‰ฝ๊ฒŒ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์žฅ์ ์ด ์žˆ๋‹ค. ๋˜ํ•œ ๋ณธ ์—ฐ๊ตฌ์˜ ๊ฒฐ๊ณผ๋ฌผ์€ ๊ธฐ์กด์˜ 3์ฐจ์› ์‹ ๊ฒฝํ•ด๋ถ€ํ•™ ๊ต์œก์ž๋ฃŒ์™€ ๋‹ฌ๋ฆฌ ์‹คํ—˜์„ ํ†ตํ•ด ์–ป์€ ์‹ค๋ฌผ ์ž๋ฃŒ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ œ์ž‘๋˜์—ˆ๋‹ค๋Š” ์ ์—์„œ ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค. ๊ฒฐ๋ก : ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€ ์กฐ์ง์„ ๋งค๊ฐœ๋กœ ์กฐ์ง์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ protocol์„ ํ™•๋ฆฝํ•˜์˜€๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€ ๋‚ด๋ถ€์˜ ๋ฏธ์„ธ๊ตฌ์กฐ์™€ ์‹ ๊ฒฝํ•ต์˜ ์œ„์น˜๊ด€๊ณ„๋ฅผ ๊ณต๊ฐ„์ง€๊ฐ์ ์œผ๋กœ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ๋Š” ๋ถ€ํ”ผ๋ชจ๋ธ๊ณผ ํ‘œ๋ฉด๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. ์ด ๊ฒฐ๊ณผ๋ฌผ๋“ค์€ ์˜๋ฃŒ์ธ ๊ต์œก์— ์ ํ•ฉํ•œ ๊ต์œก๋ชจ๋ธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๊ณต๊ฐœ ์ž๋ฃŒ๋กœ ์ œ๊ณต๋˜์—ˆ๋‹ค.์ดˆ ๋ก โ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ…ฐ ๋ชฉ ์ฐจ โ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ โ…ณ ํ‘œ ๋ชฉ๋ก โ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ v ๊ทธ๋ฆผ ๋ชฉ๋กโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ vi ์„œ ๋ก โ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ 9 ๋ณธ ๋ก  โ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ 15 Chapter 1. ์กฐ์ง์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ ์ œ์ž‘ protocol ๊ฐœ๋ฐœ...... 15 Chapter 2. ์‚ฌ๋žŒ ์‹œ์ƒํ•˜๋ถ€ ์กฐ์ง์˜ 3์ฐจ์› ์žฌ๊ตฌ์„ฑ.............. 52 ๊ฒฐ ๋ก โ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ 95 ์ฐธ๊ณ ๋ฌธํ—Œโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ96 Abstractโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆโ€ฆ101Maste

    Artificial intelligence in histopathology image analysis for cancer precision medicine

    In recent years, there have been rapid advances in the field of computational pathology, enabled by the adoption of digital pathology workflows that generate digital images of histopathological slides, the publication of large data sets of such images, and improvements in computing infrastructure. Objectives in computational pathology fall into two categories: first, automating routine workflows that would otherwise be performed by pathologists, and second, adding novel capabilities. This thesis focuses on the development, application, and evaluation of methods in the second category, specifically the prediction of gene expression from pathology images and the registration of pathology images to each other. In Study I, we developed a computationally efficient cluster-based technique to perform transcriptome-wide prediction of gene expression in prostate cancer from H&E-stained whole-slide images (WSIs). The method outperforms several baseline methods and is non-inferior to single-gene CNN predictions while reducing the computational cost by a factor of approximately 300. We included 15,586 protein-coding transcripts in the analysis and predicted their expression from the WSIs with different modelling approaches. In cross-validation, 6,618 of these predictions were significantly associated with the RNA-seq expression estimates (FDR-adjusted p-values <0.001); upon validation in a held-out test set, the association was confirmed for 5,419 (81.9%) of them. Furthermore, we demonstrated that it is feasible to predict the prognostic cell-cycle progression score, with a Spearman correlation to the RNA-seq score of 0.527 [0.357, 0.665]. Study II investigates attention layers in the context of multiple-instance learning for regression tasks, exemplified by a simulation study and gene expression prediction. For gene expression prediction, the compared methods are not distinguishable in performance, which indicates that attention mechanisms may not be superior to weakly supervised learning in this context. Study III describes the results of the ACROBAT 2022 WSI registration challenge, which we organised in conjunction with the MICCAI 2022 conference. Participating teams were ranked on the median 90th percentile of distances between registered and annotated target landmarks. For the eight teams eligible for ranking, the median 90th percentiles on the test set of 303 WSI pairs ranged from 60.1 µm to 15,938.0 µm; the best-performing method therefore scored slightly below 67.0 µm, the median 90th percentile of distances between the first and second annotator. Study IV describes the data set that we published to facilitate the ACROBAT challenge; it is publicly available through the Swedish National Data Service (SND) and consists of 4,212 WSIs from 1,153 breast cancer patients. Study V is an example of applying WSI registration in computational pathology: we investigate whether invasive cancer annotations can be registered from H&E to KI67 WSIs and then used to train cancer detection models, comparing models optimised with registered annotations to models optimised with annotations generated directly for the KI67 WSIs. The data set consists of 272 female breast cancer cases, including an internal test set of 54 cases. In this test set, the two models are not distinguishable in performance, while there are small differences in model calibration.
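The abstract gives enough detail about Study III's ranking metric for a small sketch: take the 90th percentile of landmark errors within each WSI pair, then the median across pairs. The data below are synthetic stand-ins for the challenge submissions.

```python
# Sketch of the ACROBAT-style ranking score as described in the abstract:
# median over WSI pairs of the per-pair 90th percentile of distances
# between registered and annotated target landmarks.
import numpy as np

def ranking_score(errors_per_pair: list) -> float:
    """Median over image pairs of the per-pair 90th percentile error (um)."""
    return float(np.median([np.percentile(e, 90) for e in errors_per_pair]))

rng = np.random.default_rng(2)
# e.g. 303 WSI pairs, each with a variable number of landmark errors in um
pairs = [np.abs(rng.normal(40, 25, size=rng.integers(5, 30))) for _ in range(303)]
print(f"score: {ranking_score(pairs):.1f} um")
```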

    Hyperspectral imaging for resection margin assessment during breast cancer surgery

    Complete tumor removal during breast-conserving surgery remains challenging due to the lack of optimal intraoperative margin assessment techniques. This thesis investigates the potential of hyperspectral imaging to assess the resection margin during surgery. Hyperspectral imaging is a non-invasive optical imaging technique that measures differences in the optical properties of tissue. These differences are measured in the form of diffuse reflectance spectra and can be used to differentiate tumor from healthy tissue. By imaging and analyzing the resection margin of a specimen during surgery, direct feedback can be given to the surgeon. We started our research by imaging breast tissue slices obtained after gross-sectioning of lumpectomy specimens. We developed a registration method to obtain a high correlation between these optical measurements and histopathology, thereby creating an extensive hyperspectral database that was used to investigate how well hyperspectral imaging can differentiate tissue types. The highest classification results were obtained using both the visible and near-infrared wavelength ranges. For hyperspectral signals representing a single tissue type, we report a sensitivity and specificity above 98%, which indicates that optical differences in tissue composition and morphology can be used to distinguish tumor from healthy breast tissue. For hyperspectral signals representing a mixture of tissue classes, the sensitivity and specificity decrease to 80% and 93%, respectively; this is related to the percentage of a specific tissue class in the measured volume. The next step was to image lumpectomy specimens during surgery to verify the feasibility of intraoperative hyperspectral imaging. Hyperspectral imaging was fast and could provide feedback over the entire resection surface of one side of the specimen within 3 minutes. In combination with the classification performance on the tissue slices, these findings support that hyperspectral imaging can become a powerful tool for margin assessment during breast-conserving surgery. Original promotion date: April 24, 2020 (COVID-19).
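As an illustrative sketch only (not the thesis pipeline), the following trains a simple classifier on synthetic diffuse reflectance spectra and reports sensitivity and specificity in the same form as the abstract; the spectra, class separation, and classifier choice are all assumptions.

```python
# Sketch: classify per-pixel reflectance spectra as tumor vs. healthy and
# report sensitivity/specificity. Spectra are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
wavelengths = 200                      # e.g. visible + near-infrared bands
healthy = rng.normal(0.60, 0.05, (500, wavelengths))
tumor = rng.normal(0.55, 0.05, (500, wavelengths)) + np.linspace(0, 0.05, wavelengths)

X = np.vstack([healthy, tumor])
y = np.array([0] * 500 + [1] * 500)    # 1 = tumor
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(Xtr, ytr)
tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
print(f"sensitivity: {tp / (tp + fn):.2f}, specificity: {tn / (tn + fp):.2f}")
```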