33 research outputs found

    Method for coregistration of optical measurements of breast tissue with histopathology : the importance of accounting for tissue deformations

    For the validation of optical diagnostic technologies, experimental results need to be benchmarked against the gold standard. Currently, the gold standard for tissue characterization is assessment of hematoxylin and eosin (H&E)-stained sections by a pathologist. When tissue is processed into H&E sections, its shape deforms with respect to the initial shape at the time of the optical measurement. We demonstrate the importance of accounting for these tissue deformations when correlating optical measurements with routinely acquired histopathology. We propose a method to register the tissue in the H&E sections to the optical measurements that corrects for these deformations. We compare the registered H&E sections to H&E sections registered with an algorithm that does not account for tissue deformations, evaluating both the shape and the composition of the tissue and using micro-computed tomography data as an independent measure. The proposed method, which accounted for tissue deformations, was more accurate than the method that did not. These results emphasize the need for a registration method that accounts for tissue deformations, such as the one presented in this study, which can aid in validating optical techniques for clinical use. (C) The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License.
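    The abstract does not specify the registration algorithm itself. As a purely illustrative sketch of a deformation-correcting registration, the following is a minimal 2D thin-plate-spline warp in NumPy that maps landmarks on an H&E section onto corresponding landmarks in the optical measurement; the function names and the landmark-based formulation are assumptions for this sketch, not the authors' pipeline.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline that maps src landmarks exactly onto dst landmarks."""
    n = len(src)
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    # radial basis U(r) = r^2 log r^2 (defined as 0 at r = 0)
    K = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])  # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    sol = np.linalg.solve(A, b)  # nonsingular for distinct, non-collinear landmarks
    return sol[:n], sol[n:]      # RBF weights, affine coefficients

def tps_apply(pts, src, w, a):
    """Warp arbitrary 2D points with a fitted spline."""
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    return a[0] + pts @ a[1:] + U @ w
```

    By construction the fitted spline interpolates the landmark correspondences exactly while bending the space smoothly in between, which is the basic property a deformation-aware method exploits over a rigid or affine alignment.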

    Three-dimensional histopathology of a tumor : a model using tongue squamous cell carcinoma

    In medical imaging, three-dimensional (3D) modeling is often used so that the object under study can be visualized less ambiguously. In histopathology, by contrast, a two-dimensional (2D) presentation remains the prevailing way of reporting, for example, the resection margins of an excised tumor. The aim of our study was to present in 3D the dimensions of a tumor within a surgically removed soft tissue resection specimen, together with the locations of the histological slices taken from it, by creating a digital 3D model of the specimen and its slices. We developed the method using commonly available instruments, focusing on modeling squamous cell carcinoma of the tongue. We created the method by identifying and solving problems related to the choice of sectioning directions for the histological slices, which, according to the previous literature, has been a central challenge in creating a 3D model of a soft tissue resection specimen. Compared with conventional specimen handling, the only additional steps were scanning the resection specimen before collecting the histopathological slices and digitally modeling the carcinoma itself. These additional steps required only a 3D desktop scanner and 3D modeling software. We present the challenges involved in modeling the resection specimen and the histopathological slices, along with the solutions we developed for them. As a result, we present a finished 3D model of a tongue squamous cell carcinoma resection specimen and the actual tumor within it, both as a digital model and as a semi-transparent cast model (3D print). We also describe the steps required to create the 3D model. To our knowledge, at the time of publication this work is the first attempt to present the histopathological margins of a tongue tumor in 3D form, where previously only 2D has been available. Creating a 3D model with our method does not require predetermined sectioning directions. Our method offers a less ambiguous and clearer way to visualize tumor margins, topography, and orientation, and could in the future serve as a tool for postoperative assessment and for planning adjuvant therapies.

    Additive Manufacturing of Resected Oral and Oropharyngeal Tissue : A Pilot Study

    Better visualization of tumor structure and orientation is needed in the postoperative setting. We aimed to assess the feasibility of a system in which oral and oropharyngeal tumors are resected, photographed, 3D modeled, and printed using additive manufacturing techniques. Three patients diagnosed with oral/oropharyngeal cancer were included. All patients underwent preoperative magnetic resonance imaging followed by resection. In the operating room (OR), the resected tissue block was photographed using a smartphone. The digital photos were imported into Agisoft Photoscan to produce a digital 3D model of the resected tissue. Physical models were then printed using binder jetting techniques. This process was applied in pilot cases including carcinomas of the tongue and larynx. The number of photographs taken per case ranged from 63 to 195. The printing time for the physical models ranged from 2 to 9 h, with costs ranging from 25 to 141 EUR (28 to 161 USD). Digital photography may be used to additively manufacture models of resected oral/oropharyngeal tumors in an easy, accessible, and efficient fashion. The model may be used in interdisciplinary discussion of postoperative care to improve understanding and collaboration, but further investigation in prospective studies is required.

    Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM)

    Inspecting a 3D object whose shape has elastic manufacturing tolerances in order to find defects is a challenging and time-consuming task. This task usually involves humans, either in the specification stage followed by some automatic measurements, or at other points along the process. Even when a detailed inspection is performed, the measurements are limited to a few dimensions rather than a complete examination of the object. In this work, a probabilistic method to evaluate 3D surfaces is presented. The algorithm relies on a training stage to learn the shape of the object by building a statistical shape model. Using this model, any inspected object can be evaluated, obtaining a probability that the whole object, or any of its dimensions, is compatible with the model, making it easy to identify defective objects. Results in simulated and real environments are presented and compared against two alternatives. This work was partially funded by Generalitat Valenciana through IVACE (Valencian Institute of Business Competitiveness), distributed nominatively to Valencian technological innovation centres under project expedient IMAMCN/2020/1. Pérez, J.; Guardiola Garcia, J. L.; Pérez Jiménez, A. J.; Perez-Cortes, J. (2020). Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM). Sensors, 20(22), 1-16. https://doi.org/10.3390/s20226554
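    The core idea of scoring an inspected part against a learned statistical shape model can be sketched with PCA over aligned landmark vectors. This is a minimal illustration, not the paper's implementation; the names `build_ssm` and `mahalanobis2` are assumptions for the sketch.

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model from aligned landmark vectors, shape (N, d)."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)   # variance explained by each mode
    keep = var > 1e-10                 # drop numerically null modes
    return mean, Vt[keep], var[keep]

def mahalanobis2(shape, mean, modes, var):
    """Squared Mahalanobis distance of a shape in the model's mode space."""
    b = modes @ (shape - mean)         # mode coefficients
    return float(np.sum(b ** 2 / var))
```

    Under a Gaussian shape model this statistic follows a chi-square distribution with one degree of freedom per retained mode, so a probability or a defect threshold can be derived from it; a complete inspection system would also score the residual lying outside the mode subspace.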

    External surface anatomy of the postfolding human embryo: Computer-aided, three-dimensional reconstruction of printable digital specimens

    Opportunities for clinicians, researchers, and medical students to become acquainted with the three-dimensional (3D) anatomy of the human embryo have historically been limited. This work was aimed at creating a collection of digital, printable 3D surface models demonstrating major morphogenetic changes in the embryo's external anatomy, including typical features used for external staging. Twelve models were digitally reconstructed based on optical projection tomography, high-resolution episcopic microscopy and magnetic resonance imaging datasets of formalin-fixed specimens of embryos of developmental stages 12 through 23, that is, stages following longitudinal and transverse embryo folding. The reconstructed replicas reproduced the external anatomy of the actual specimens in great detail, and the progress of development over stages was recognizable in a variety of external anatomical features and bodily structures, including the general layout and curvature of the body, the pharyngeal arches and cervical sinus, the physiological gut herniation, and the external genitalia. In addition, surface anatomy features commonly used for embryo staging, such as distinct steps in the morphogenesis of facial primordia and limb buds, were also apparent. These digital replicas, which are all provided for 3D visualization and printing, can serve as a novel resource for teaching and learning embryology and may contribute to a better appreciation of human embryonic development.

    Mixed methodology in human brain research: integrating MRI and histology

    Postmortem magnetic resonance imaging (MRI) can provide a bridge between histological observations and the in vivo anatomy of the human brain. Approaches aimed at the co-registration of data derived from the two techniques are gaining interest. Optimal integration of the two research fields requires detailed knowledge of the tissue property requirements of the individual research techniques, as well as a detailed understanding of the consequences of tissue fixation steps on the imaging quality outcomes for both MRI and histology. Here, we provide an overview of existing studies that bridge state-of-the-art imaging modalities, and discuss the background knowledge incorporated into the design, execution and interpretation of postmortem studies. A subset of the discussed challenges transfers to animal studies as well. This insight can contribute to furthering our understanding of the normal and diseased human brain, and can facilitate discussions between researchers from the individual disciplines.

    Three-Dimensional Presentation of Tumor Histopathology: A Model Using Tongue Squamous Cell Carcinoma

    Medical imaging often presents objects in three-dimensional (3D) form to provide better visual understanding. In contrast, histopathology is typically presented in two dimensions (2D). Our objective was to present the tumor dimensions in 3D by creating a 3D digital model of the tumor, and thus demonstrate the location of the tumor and the histological slices within the surgical soft tissue resection specimen. We developed a novel method for modeling a tongue squamous cell carcinoma using commonly available instruments. We established our 3D-modeling method by recognizing and solving challenges concerning the selection of the direction of histological slices. The only additions to standard handling were scanning the specimen prior to grossing and modeling the carcinoma, which required only a table scanner and modeling software. We present challenges and their solutions in modeling the resection specimen and its histological slices. We introduce a finished 3D model of a soft tissue resection specimen and the actual tumor, as well as its histopathological grossing sites, in 3D digital and printed form. Our novel method provides the steps to create a digital model of a soft tissue resection specimen and the tumor within it. To our knowledge, this is the first attempt to present the histopathological margins of a tongue tumor in 3D form, whereas previously, only 2D has been available. The creation of the 3D model does not call for predetermined grossing directions for the pathologist. In addition, it provides an important step toward enhancing oncological management. The method allows a better visual understanding of tumor margins, topography, and orientation. It thus provides a tool for improved postoperative assessment and aids, for example, in the discussion of the need for additional surgery and adjuvant therapy.

    Deep active learning for suggestive segmentation of biomedical image stacks via optimisation of Dice scores and traced boundary length

    Manual segmentation of stacks of 2D biomedical images (e.g., histology) is a time-consuming task that can be sped up with semi-automated techniques. In this article, we present a suggestive deep active learning framework that seeks to minimise the annotation effort required to achieve a certain level of accuracy when labelling such a stack. The framework suggests, at every iteration, a specific region of interest (ROI) in one of the images for manual delineation. Using a deep segmentation neural network and a mixed cross-entropy loss function, we propose a principled strategy to estimate class probabilities for the whole stack, conditioned on heterogeneous partial segmentations of the 2D images, as well as on weak supervision in the form of image indices that bound each ROI. Using the estimated probabilities, we propose a novel active learning criterion based on predictions of the estimated segmentation performance and delineation effort, measured with average Dice scores and total delineated boundary length, respectively, rather than common surrogates such as entropy. The query strategy suggests the ROI that is expected to maximise the ratio between performance and effort, while considering the adjacency of structures that may have already been labelled, which decreases the length of the boundary to trace. We provide quantitative results on synthetically deformed MRI scans and real histological data, showing that our framework can reduce labelling effort by up to 60-70% without compromising accuracy.
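    The abstract's selection criterion, a ratio of expected segmentation-performance gain to expected tracing effort, can be sketched in a few lines. In the paper both quantities are predicted by the segmentation network; here they are simply given as inputs, and the function names and candidate tuples are assumptions for the illustration.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 for two empty masks)."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def select_roi(candidates):
    """Pick the ROI maximising expected Dice gain per unit of boundary traced.

    candidates: list of (roi_id, expected_dice_gain, expected_boundary_length).
    """
    return max(candidates, key=lambda c: c[1] / c[2])[0]
```

    Note that a small expected gain can still win if the boundary left to trace is short, which is exactly why accounting for already-labelled adjacent structures (shorter remaining boundary) changes the ranking.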