
    Methodological and terminological issues in animal-assisted interventions: An umbrella review of systematic reviews

    Recently, animal-assisted interventions (AAIs), which are defined as psychological, educational, and rehabilitation support activities, have become widespread in different contexts. For many years they have been a subject of interest in the international scientific community and are at the center of an important discussion regarding their effectiveness and the most appropriate practices for their realization. We carried out an umbrella review (UR) of systematic reviews (SRs) to explore the literature and examine the terminological and methodological aspects of AAIs. The UR was built by searching the online databases PubMed, Google Scholar, and the Cochrane Library, and by selecting SRs indexed in the high-impact search engines Web of Science and Scopus. After screening, we selected 15 SRs that met the inclusion criteria. All the papers lamented the poor methodological quality of AAI studies; some included articles whose interventions did not always correspond to the terminology under which they were reported and whose operating practices were not always comparable. This underlines the need to develop and disseminate not only operational protocols, but also research protocols that ensure the consistent use of universally recognized terminology, thus facilitating the study of, deeper insight into, and comparison between the numerous experiences described.

    MULTI-TEMPORAL ANALYSIS OF LANDSCAPES AND URBAN AREAS

    This article presents a 4D modelling approach that employs multi-temporal and historical aerial images to derive spatio-temporal information for scenes and landscapes. Such imagery represents a unique data source which, combined with photo interpretation and reality-based 3D reconstruction techniques, offers a more complete modelling procedure: it adds the fourth dimension of time to the 3D geometrical representation and thus allows urban planners, historians, and others to identify, describe, and analyse changes in individual scenes and buildings as well as across landscapes. Particularly important to this approach are historical aerial photos, which provide data about the past that can be collected, processed, and then integrated into a database. The proposed methodology employs both historical (1945) and more recent (1973 and 2000s) aerial images of the Trentino region in North-eastern Italy to create a multi-temporal database of information that assists researchers in disciplines such as topographic mapping, geology, geography, architecture, and archaeology as they work to reconstruct building phases and to understand landscape transformations (Fig. 1).
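
    One way such a multi-temporal database can support change analysis is by differencing digital surface models (DSMs) derived from the different epochs. The sketch below only illustrates this general idea and is not taken from the paper; the file names, the assumption that the two DSMs are already co-registered on the same grid, and the 1 m change threshold are all hypothetical.

```python
# Hedged sketch: compare two co-registered DSMs (e.g. 1945 vs 2000s epochs)
# to highlight elevation changes. File names and the 1 m threshold are assumptions.
import numpy as np
import rasterio

with rasterio.open("dsm_1945.tif") as src:
    dsm_old = src.read(1).astype(float)
    nodata = src.nodata

with rasterio.open("dsm_2000s.tif") as src:
    dsm_new = src.read(1).astype(float)

# Mask out nodata pixels before differencing
valid = np.ones(dsm_old.shape, dtype=bool)
if nodata is not None:
    valid &= (dsm_old != nodata) & (dsm_new != nodata)

diff = np.where(valid, dsm_new - dsm_old, np.nan)

# Flag cells whose elevation changed by more than 1 m (new buildings, demolitions, erosion, ...)
changed = np.abs(diff) > 1.0
n_changed = np.count_nonzero(changed & valid)
print(f"Changed cells: {n_changed} "
      f"({100 * n_changed / np.count_nonzero(valid):.1f}% of valid area)")
```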

    IMAGE ORIENTATION WITH A HYBRID PIPELINE ROBUST TO ROTATIONS AND WIDE-BASELINES

    The extraction of reliable and repeatable interest points among images is a fundamental step for automatic image orientation (Structure-from-Motion). Despite recent progress, open issues remain in challenging conditions such as wide baselines and strong light variations. Over the years, traditional hand-crafted methods have been joined by learning-based approaches, progressively updating the state of the art according to recent benchmarks. Notwithstanding these advancements, learning-based methods are often not suitable for real photogrammetric surveys due to their lack of rotation invariance, a fundamental requirement for these specific applications. This paper proposes a novel hybrid image matching pipeline which employs both hand-crafted and deep-based components to extract reliable rotation-invariant keypoints optimized for wide-baseline scenarios. The proposed hybrid pipeline was compared with other hand-crafted and learning-based state-of-the-art approaches on several photogrammetric datasets using metric ground-truth data. Results show that the proposed hybrid matching pipeline achieves high accuracy and was the only evaluated method able to register images in the most challenging wide-baseline scenarios.
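
    The rotation-invariance issue mentioned above is often tackled by normalising each keypoint patch to its dominant orientation before it is described. The sketch below illustrates that general idea with a hand-crafted detector (OpenCV SIFT) and a placeholder deep descriptor; it is not the pipeline proposed in the paper, and `deep_describe` is a hypothetical function standing in for any learned descriptor network.

```python
# Illustrative sketch (not the paper's pipeline): detect keypoints with a hand-crafted
# detector, then rotate each patch to its dominant orientation so that a learned
# descriptor (placeholder `deep_describe`) sees rotation-normalised input.
import cv2
import numpy as np

PATCH = 32  # patch size fed to the (hypothetical) deep descriptor

def rotation_normalised_patches(gray):
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)          # each keypoint carries a dominant angle
    half = PATCH // 2
    kept, patches = [], []
    for kp in keypoints:
        x, y = kp.pt
        # Rotate the image around the keypoint by its dominant orientation so the
        # extracted patch is orientation-normalised (slow but clear for a sketch).
        M = cv2.getRotationMatrix2D((x, y), kp.angle, 1.0)
        rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
        xi, yi = int(round(x)), int(round(y))
        patch = rotated[yi - half:yi + half, xi - half:xi + half]
        if patch.shape == (PATCH, PATCH):        # skip keypoints too close to the border
            kept.append(kp)
            patches.append(patch)
    return kept, np.stack(patches) if patches else np.empty((0, PATCH, PATCH))

# Usage sketch:
# gray = cv2.imread("img.jpg", cv2.IMREAD_GRAYSCALE)
# kps, patches = rotation_normalised_patches(gray)
# descriptors = deep_describe(patches)   # hypothetical learned descriptor network
```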

    Underwater calibration of dome port pressure housings.

    Underwater photogrammetry using consumer-grade photographic equipment can be feasible for different applications, e.g. archaeology, biology, industrial inspections, etc. Using a camera underwater can be very different from its terrestrial use because of the optical phenomena involved: the water and the camera pressure housing in front of the lens act as additional optical elements. Spherical dome ports are difficult to manufacture and consequently expensive, but at the same time they are the most useful for underwater photogrammetry because they keep the main geometric characteristics of the lens unchanged. Nevertheless, the manufacturing and alignment of the dome port pressure housing components can introduce unexpected changes in radial and decentering distortion, a source of systematic errors that can influence the final 3D measurements. The paper provides a brief introduction to the optical phenomena involved in underwater photography, then presents the main differences between flat and dome ports, and finally discusses the effect of manufacturing on 3D measurements in two case studies.
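
    Residual radial and decentering (tangential) distortion introduced by the housing is typically quantified by running a standard camera calibration on images of a known pattern acquired through the port underwater. The sketch below shows such a calibration with OpenCV's Brown-model solver; the checkerboard size, square size and image folder are placeholders, and the calibration procedure actually used in the paper may differ.

```python
# Hedged sketch: estimate intrinsics and Brown distortion coefficients (k1, k2, p1, p2, k3)
# from checkerboard images taken underwater through the dome port.
import glob
import cv2
import numpy as np

BOARD = (9, 6)    # inner corners of the checkerboard (assumption)
SQUARE = 0.025    # square size in metres (assumption)

# 3D coordinates of the board corners in the board reference system
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("underwater_calib/*.jpg"):      # placeholder image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Radial (k1, k2, k3):", dist[0][0], dist[0][1], dist[0][4])
print("Decentering (p1, p2):", dist[0][2], dist[0][3])
```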

    JOINT ALIGNMENT OF UNDERWATER AND ABOVE-THE-WATER PHOTOGRAMMETRIC 3D MODELS BY INDEPENDENT MODELS ADJUSTMENT

    The surveying and 3D modelling of objects that extend both below and above the water level, such as ships, harbour structures and offshore platforms, are still an open issue. Commonly, a combined and simultaneous survey is the adopted solution, with acoustic sensors underwater and optical sensors in air (the most common setup) or optical sensors both below and above the water level. In both cases the system must be calibrated, and a ship must be used and properly equipped, including a navigation system for the alignment of sequential 3D point clouds. Such a system is usually very expensive and has been proven to work with static structures; for free-floating objects, however, it does not provide a very practical solution. In this contribution, a flexible, low-cost alternative for surveying floating objects is presented. The method is essentially based on photogrammetry, employed for surveying and modelling both the emerged and submerged parts of the object. Special targets, named Orientation Devices, are specifically designed and adopted for the successive alignment of the two photogrammetric models (underwater and in air). A typical scenario where the proposed procedure can be particularly suitable and effective is that of a ship whose damaged part lies underwater after an accident and needs to be measured (Figure 1). The details of the mathematical procedure are provided in the paper, together with a critical discussion of the results obtained by applying the method to the survey of a small pleasure boat in floating condition.
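
    Aligning the two photogrammetric blocks essentially amounts to estimating a 3D similarity (Helmert) transformation from points visible in both models, which is the role the Orientation Devices play. The sketch below shows a generic least-squares estimate of scale, rotation and translation from a few corresponding target coordinates (Umeyama closed-form solution); the coordinates and the solution method shown here are illustrative, not the adjustment described in the paper.

```python
# Illustrative sketch: estimate the 3D similarity transform (scale s, rotation R,
# translation t) mapping the underwater model onto the above-water model from
# corresponding target points measured in both blocks (Umeyama closed-form solution).
import numpy as np

def similarity_transform(src, dst):
    """src, dst: (N, 3) arrays of corresponding points, N >= 3, not collinear."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    src0, dst0 = src - src_c, dst - dst_c
    # Cross-covariance and its SVD give the optimal rotation
    U, S, Vt = np.linalg.svd(dst0.T @ src0)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:       # avoid reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(src0 ** 2)
    t = dst_c - s * R @ src_c
    return s, R, t

# Hypothetical target coordinates (metres) measured in the two photogrammetric models
underwater = np.array([[0.0, 0.0, -1.2], [2.1, 0.1, -1.0], [2.0, 1.5, -0.8], [0.2, 1.4, -1.1]])
above      = np.array([[10.0, 5.0, 0.3], [12.1, 5.2, 0.5], [11.9, 6.6, 0.7], [10.1, 6.4, 0.4]])

s, R, t = similarity_transform(underwater, above)
aligned = (s * (R @ underwater.T)).T + t
print("residuals [m]:", np.linalg.norm(aligned - above, axis=1))
```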

    DEEP LEARNING TO SUPPORT 3D MAPPING CAPABILITIES OF A PORTABLE VSLAM-BASED SYSTEM

    The use of vision-based localization and mapping techniques, such as visual odometry and SLAM, has become increasingly prevalent in the field of Geomatics, particularly in mobile mapping systems. These methods provide real-time estimation of the 3D scene as well as of the sensor's position and orientation using images or LiDAR sensors mounted on a moving platform. While visual odometry primarily focuses on the camera's position, SLAM also creates a 3D reconstruction of the environment. Both conventional (geometric) and learning-based approaches are used in visual SLAM, with deep learning networks being integrated to perform semantic segmentation, object detection and depth prediction. The goal of this work is to report ongoing developments that extend the GuPho stereo-vision SLAM-based system with deep learning networks for tasks such as crack detection, obstacle detection and depth estimation. Our findings show how a neural network can be coupled with SLAM sequences in order to support 3D mapping applications with semantic information.
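
    A common way to couple a segmentation network with a SLAM map is to project the 3D map points into each keyframe and let them inherit the class predicted for the corresponding pixel. The sketch below shows that projection step with a pinhole model; the pose convention, intrinsics and the `mask`/`points_w` arrays are assumptions for illustration, not the GuPho implementation.

```python
# Hedged sketch: label SLAM map points with classes from a per-keyframe segmentation
# mask by projecting the points into the keyframe (pinhole model, world-to-camera pose).
import numpy as np

def label_map_points(points_w, K, R_cw, t_cw, mask):
    """points_w: (N, 3) world points; K: 3x3 intrinsics; R_cw, t_cw: world-to-camera pose;
    mask: (H, W) integer class image predicted by the segmentation network."""
    pts_c = (R_cw @ points_w.T).T + t_cw            # transform into the camera frame
    labels = np.full(len(points_w), -1, dtype=int)  # -1 = not visible in this keyframe
    in_front = pts_c[:, 2] > 0
    uv = (K @ pts_c[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective division
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    visible = np.flatnonzero(in_front)[inside]
    labels[visible] = mask[v[inside], u[inside]]    # inherit the pixel's class
    return labels
```

    Labels gathered from several keyframes can then be fused per point, e.g. by majority vote, to attach a stable semantic class to each element of the 3D map.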

    Deep-image-matching: A toolbox for multiview image matching of complex scenarios

    Finding corresponding points between images is a fundamental step in photogrammetry and computer vision tasks. Traditionally, image matching has relied on hand-crafted algorithms such as SIFT or ORB. However, these algorithms face challenges when dealing with multi-temporal images, varying radiometry and content, as well as significant viewpoint differences. Recently, the computer vision community has proposed several deep learning-based approaches that are trained for challenging illumination and wide viewing-angle scenarios. However, they suffer from certain limitations, such as sensitivity to rotations, and they are not applicable to high-resolution images due to computational constraints. In addition, they are not widely used by the photogrammetric community due to limited integration with standard photogrammetric software packages. To overcome these challenges, this paper introduces Deep-Image-Matching, an open-source toolbox designed to match images using different matching strategies, ranging from traditional hand-crafted to deep learning methods (https://github.com/3DOM-FBK/deep-image-matching). The toolbox accommodates high-resolution datasets, e.g. data acquired with full-frame or aerial sensors, and addresses the known rotation-related problems of learned features. It provides image correspondences that are directly compatible with commercial and open-source software packages, such as COLMAP and openMVG, for bundle adjustment. The paper also includes a series of cultural heritage case studies that present challenging conditions where traditional hand-crafted approaches typically fail.
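
    Conceptually, the toolbox wraps different matching strategies behind a common interface so that their correspondences can feed the same downstream bundle adjustment. The sketch below illustrates that idea with a hand-crafted strategy (OpenCV SIFT plus Lowe's ratio test) and a placeholder for a learned matcher; it is a simplified illustration of the concept, not the actual API of deep-image-matching, which is documented in the repository.

```python
# Conceptual sketch (not the toolbox's API): different matching strategies behind one
# interface, all returning index pairs usable by a downstream bundle adjustment.
import cv2
import numpy as np

def match_sift_ratio(img0, img1, ratio=0.8):
    """Hand-crafted strategy: SIFT keypoints + Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp0, des0 = sift.detectAndCompute(img0, None)
    kp1, des1 = sift.detectAndCompute(img1, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des0, des1, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    matches = np.array([[m.queryIdx, m.trainIdx] for m in good], dtype=int)
    return kp0, kp1, matches                     # match rows index into kp0 / kp1

def match_learned(img0, img1):
    """Placeholder for a learned matcher (e.g. a detector-free network);
    hypothetical function, shown only to illustrate the common interface."""
    raise NotImplementedError

# Correspondences from either strategy can then be exported in a format accepted by
# SfM packages such as COLMAP or openMVG for bundle adjustment.
```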

    Knowledge and valorization of historical sites through 3D documentation and modeling

    The paper presents the first results of an interdisciplinary project related to the 3D documentation, dissemination, valorization and digital access of archaeological sites. Beyond the mere 3D documentation aim, the project has two goals: (i) to easily explore and share via the web the references and results of the interdisciplinary work, including the interpretative process and the final reconstruction of the remains; (ii) to promote and valorize archaeological areas using reality-based 3D data and Virtual Reality devices. The method has been verified on the ruins of the archaeological site of Pausilypon, a maritime villa of the Roman period (Naples, Italy). Using Unity3D, the virtual tour of the heritage site was integrated and enriched with the surveyed 3D data, text documents, CAAD reconstruction hypotheses, drawings, photos, etc. In this way, starting from the actual appearance of the ruins (panoramic images), passing through the 3D digital surveying models and other historical information, the user is able to access virtual contents and reconstructed scenarios, all in a single virtual, interactive and immersive environment. These contents and scenarios allow users to derive documentation and geometrical information, understand the site, perform analyses, follow interpretative processes, communicate historical information and valorize the heritage location.

    Geometric calibration and radiometric correction of the MAIA multispectral camera

    Multispectral imaging is a widely used remote sensing technique whose applications range from agriculture to environmental monitoring, and from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.
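
    Radiometric correction of multispectral frames like these is often based on reference panels of known reflectance imaged in the scene: for each band, a linear (empirical line) relation between digital numbers and reflectance is fitted and then applied to the whole image. The sketch below illustrates that general technique with hypothetical panel values; it is not necessarily the procedure adopted for the MAIA camera in the paper.

```python
# Hedged sketch of an empirical-line radiometric correction: fit reflectance = gain*DN + offset
# for one band from reference panels, then convert a raw band image to reflectance.
import numpy as np

# Hypothetical mean digital numbers of dark and bright reference panels for one band,
# and the panels' known reflectances (from the panel certificate)
panel_dn          = np.array([4200.0, 52000.0])
panel_reflectance = np.array([0.03, 0.56])

gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)

def dn_to_reflectance(band_dn):
    """Apply the per-band empirical line to a raw image (2D array of digital numbers)."""
    return np.clip(gain * band_dn + offset, 0.0, 1.0)

raw_band = np.random.randint(0, 65535, size=(1280, 960)).astype(float)  # stand-in image
reflectance = dn_to_reflectance(raw_band)
print("reflectance range:", reflectance.min(), reflectance.max())
```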