
    Review of photoacoustic imaging plus X

    Photoacoustic imaging (PAI) is a novel biomedical imaging modality that combines rich optical contrast with the deep penetration of ultrasound. To date, PAI technology has found applications in various biomedical fields. In this review, we present an overview of the emerging research frontiers that combine PAI with other advanced technologies, termed PAI plus X, which includes but is not limited to PAI plus treatment, PAI plus new circuit designs, PAI plus accurate positioning systems, PAI plus fast scanning systems, PAI plus novel ultrasound sensors, PAI plus advanced laser sources, PAI plus deep learning, and PAI plus other imaging modalities. We discuss each technology's current state, technical advantages, and prospects for application, focusing on work reported in the past three years. Lastly, we summarize the challenges and potential future work in the PAI plus X area.

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for the information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the fusion methods for different modalities have evolved along separate paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough and discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.
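    As a rough illustration of the 4D view described above, the sketch below stacks a set of co-registered multitemporal, multiband rasters into a single array indexed by (time, band, row, column). The shapes, dates, and random arrays are hypothetical stand-ins for real co-registered acquisitions, not any dataset from the reviewed literature.

        import numpy as np

        # Hypothetical co-registered acquisitions: 5 dates, 4 spectral bands, 256x256 pixels
        n_dates, n_bands, height, width = 5, 4, 256, 256
        acquisitions = [np.random.rand(n_bands, height, width) for _ in range(n_dates)]

        # Stack along a new leading time axis -> a 4D (time, band, row, col) data cube
        cube = np.stack(acquisitions, axis=0)
        print(cube.shape)                      # (5, 4, 256, 256)

        # Per-pixel temporal statistics become simple reductions over the time axis
        temporal_mean = cube.mean(axis=0)      # (4, 256, 256): average image per band
        temporal_std = cube.std(axis=0)        # crude proxy for temporal variability per band/pixel

    In practice the acquisitions would first be co-registered and resampled to a common grid; the random arrays above only stand in for that preprocessing step.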

    Review on Active and Passive Remote Sensing Techniques for Road Extraction

    Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging (LiDAR) data. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 outlines the main road extraction methods based on the four data sources; road extraction methods based on different data sources are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies.

    A Systematic Review of Convolutional Neural Network-Based Structural Condition Assessment Techniques

    With recent advances in non-contact sensing technologies such as cameras and unmanned aerial and ground vehicles, the structural health monitoring (SHM) community has witnessed prominent growth in deep learning-based condition assessment techniques for structural systems. These deep learning methods rely primarily on convolutional neural networks (CNNs). The networks are trained on large datasets covering various types of damage and anomaly detection and post-disaster reconnaissance, and the trained networks are then used to analyse new data to detect the type and severity of damage, enhancing the capabilities of non-contact sensors in developing autonomous SHM systems. In recent years, a broad range of CNN architectures has been developed by researchers to accommodate varying lighting and weather conditions, image quality, amounts of background and foreground noise, and multiclass damage in structures. This paper presents a detailed literature review of existing CNN-based techniques in the context of infrastructure monitoring and maintenance. The review is organised into multiple classes depending on the specific application and the development of CNNs applied to data obtained from a wide range of structures. The challenges and limitations of the existing literature are discussed in detail at the end, followed by a brief conclusion on potential future research directions for CNNs in structural condition assessment.
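    As a minimal, generic sketch of the kind of CNN classifier such studies build on (not any specific architecture from the reviewed literature), the PyTorch snippet below maps an RGB image patch to damage-class scores; the class names, patch size, and layer sizes are assumptions chosen only for illustration.

        import torch
        import torch.nn as nn

        class DamageCNN(nn.Module):
            """Tiny CNN mapping a 3x224x224 patch to scores over assumed damage classes."""
            def __init__(self, num_classes=3):    # e.g. crack / spalling / undamaged (assumed)
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1),      # global pooling keeps the head size-agnostic
                )
                self.classifier = nn.Linear(32, num_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = DamageCNN()
        logits = model(torch.randn(8, 3, 224, 224))   # a batch of 8 dummy patches
        print(logits.shape)                           # torch.Size([8, 3])

    Real systems in this literature typically start from much deeper pretrained backbones and large labelled datasets; the sketch only shows the input-to-class-score structure shared by such classifiers.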

    HBIM MODELLING FOR AN HISTORICAL URBAN CENTRE

    Research on geospatial data structuring and format interoperability is a crucial task for creating a 3D geodatabase at the urban scale. Both geometric and semantic data structuring should be considered, mainly regarding the interoperability of objects and formats generated outside the geographical space. Current reflections on 3D database generation based on geospatial data are mostly related to visualisation issues and context-related applications. The purposes and scale of representation according to LoDs require some reflection, particularly for the transmission of semantic information. This contribution adopts and develops the integration of several tools to derive object-oriented modelling in the HBIM environment, at both the urban and architectural scales, from point clouds obtained by UAV (Unmanned Aerial Vehicle) photogrammetry. One of the paper's objectives is to retrace the analysis phases of the point clouds acquired by the UAV photogrammetry technique and to assess their suitability for multiscale modelling. Starting from UAV clouds, through optimisation and segmentation, the proposed workflow aims to trigger the modelling of objects according to LoDs, comparing the scheme coming from CityGML with the one in use in the BIM community. The experimentation focuses on the case study of the city of Norcia, which, like many other historic centres spread across the territory of central Italy, was deeply damaged by the 2016-17 earthquake.
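    The optimisation step in such workflows usually involves thinning the dense UAV photogrammetric cloud before segmentation and modelling. The sketch below shows a generic voxel-grid downsampling in plain NumPy (one centroid kept per occupied voxel); the voxel size and the random cloud are assumptions for illustration, not the authors' actual processing chain.

        import numpy as np

        def voxel_downsample(points, voxel_size):
            """Keep one point (the centroid) per occupied voxel of edge length voxel_size."""
            keys = np.floor(points / voxel_size).astype(np.int64)        # voxel index per point
            _, inverse = np.unique(keys, axis=0, return_inverse=True)    # group points by voxel
            inverse = inverse.ravel()
            counts = np.bincount(inverse).astype(float)
            centroids = np.zeros((counts.size, 3))
            for dim in range(3):
                centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
            return centroids

        cloud = np.random.rand(100_000, 3) * 50.0        # stand-in for a UAV photogrammetric cloud (metres)
        thinned = voxel_downsample(cloud, voxel_size=0.5)
        print(cloud.shape[0], "->", thinned.shape[0], "points")

    Dedicated point cloud libraries provide equivalent (and faster) operations; the NumPy version is shown only to make the voxelisation idea explicit.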

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges for the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new, stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields; on the other hand, it targets machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.
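    As an illustrative sketch of how a spectral–spatial network might consume a hyperspectral patch (a generic example, not a specific architecture from the reviewed literature), the PyTorch snippet below applies 3D convolutions jointly over the band and spatial axes; the band count, patch size, and number of classes are assumptions.

        import torch
        import torch.nn as nn

        class SpectralSpatialCNN(nn.Module):
            """3D-convolutional classifier over a (1, bands, h, w) hyperspectral patch."""
            def __init__(self, num_classes=9):               # assumed number of land-cover classes
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
                    nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),                  # pool over bands and space
                )
                self.head = nn.Linear(16, num_classes)

            def forward(self, x):                             # x: (batch, 1, bands, h, w)
                return self.head(self.features(x).flatten(1))

        patches = torch.randn(4, 1, 103, 9, 9)                # e.g. 103 bands, 9x9 spatial window
        print(SpectralSpatialCNN()(patches).shape)            # torch.Size([4, 9])

    The 3D kernels mix neighbouring bands and pixels in a single operation, which is the basic mechanism behind many spectral–spatial architectures surveyed in this kind of review.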

    Karpinski Score under Digital Investigation: A Fully Automated Segmentation Algorithm to Identify Vascular and Stromal Injury of Donors’ Kidneys

    In kidney transplantation, the evaluation of vascular structures and stromal areas is crucial for determining kidney acceptance, which is currently based on the pathologist's visual evaluation. In this context, an accurate assessment of vascular and stromal injury is fundamental to assessing the nephron status. In the present paper, the authors present a fully automated algorithm, called RENFAST (Rapid EvaluatioN of Fibrosis And vesselS Thickness), for the segmentation of kidney blood vessels and fibrosis in histopathological images. The proposed method employs a novel deep learning-based strategy to accurately segment blood vessels, while interstitial fibrosis is assessed using an adaptive stain separation method. The RENFAST algorithm is developed and tested on 350 periodic acid-Schiff (PAS) images for blood vessel segmentation and on 300 Masson's trichrome (TRIC) stained images for the detection of renal fibrosis. On the test set, the algorithm exhibits excellent segmentation performance for both blood vessels (accuracy: 0.8936) and fibrosis (accuracy: 0.9227) and outperforms all compared methods. To the best of our knowledge, the RENFAST algorithm is the first fully automated method capable of detecting both blood vessels and fibrosis in digital histological images. Being very fast (average computational time 2.91 s), this algorithm paves the way for automated, quantitative, and real-time kidney graft assessments.
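    The stain-separation idea behind the fibrosis measurement can be illustrated with classic Ruifrok–Johnston colour deconvolution. The NumPy sketch below is a generic version with hypothetical stain vectors and threshold; it is not the adaptive stain separation actually used in RENFAST.

        import numpy as np

        def color_deconvolution(rgb, stain_matrix):
            """Unmix an RGB image into per-stain concentrations (Ruifrok-Johnston).

            rgb: float array in [0, 1], shape (H, W, 3).
            stain_matrix: (3, 3) array, rows = unit-norm stain OD vectors (assumed known).
            """
            od = -np.log10(np.clip(rgb, 1e-6, 1.0))          # optical density per channel
            pixels = od.reshape(-1, 3)
            # Solve pixels ~= concentrations @ stain_matrix for the concentrations
            conc, *_ = np.linalg.lstsq(stain_matrix.T, pixels.T, rcond=None)
            return conc.T.reshape(rgb.shape[:2] + (3,))

        # Hypothetical stain vectors for a trichrome-like stain (illustrative only)
        stains = np.array([[0.65, 0.70, 0.29],
                           [0.07, 0.99, 0.11],
                           [0.27, 0.57, 0.78]], dtype=float)
        stains /= np.linalg.norm(stains, axis=1, keepdims=True)

        img = np.random.rand(64, 64, 3)                      # stand-in for a stained tissue tile
        # Channel 1 is treated here as the fibrosis-associated stain; threshold is arbitrary
        fibrosis_map = color_deconvolution(img, stains)[..., 1] > 0.15
        print("fibrotic fraction:", fibrosis_map.mean())

    An adaptive method such as the one described in the abstract would estimate the stain vectors from each image rather than fixing them in advance, which is what the hard-coded matrix above deliberately simplifies away.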