12,012 research outputs found

    Categorization of indoor places by combining local binary pattern histograms of range and reflectance data from laser range finders

    This paper presents an approach to categorize typical places in indoor environments using 3D scans provided by a laser range finder. Examples of such places are offices, laboratories, or kitchens. In our method, we combine the range and reflectance data from the laser scan for the final categorization of places. Range and reflectance images are transformed into histograms of local binary patterns and combined into a single feature vector. This vector is later classified using support vector machines. The results of the presented experiments demonstrate the capability of our technique to categorize indoor places with high accuracy. We also show that the combination of range and reflectance information improves the final categorization results in comparison with a single modality.
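
    The pipeline described above (per-modality local binary pattern histograms concatenated into one feature vector and classified with a support vector machine) can be sketched as follows; scikit-image and scikit-learn are illustrative stand-ins rather than the authors' implementation, and the scan data and labels are hypothetical placeholders.

        # Minimal sketch of the described pipeline: LBP histograms of the range and
        # reflectance images are concatenated into one feature vector and classified
        # with an SVM. Libraries are stand-ins, not the authors' code.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(img, points=8, radius=1):
            """Uniform LBP histogram of a single-channel image."""
            lbp = local_binary_pattern(img, points, radius, method="uniform")
            n_bins = points + 2  # uniform patterns plus one "non-uniform" bin
            hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
            return hist

        def place_descriptor(range_img, reflectance_img):
            """Combined feature vector for one scan (both inputs are 2D arrays)."""
            return np.concatenate([lbp_histogram(range_img),
                                   lbp_histogram(reflectance_img)])

        # Hypothetical training data: pairs of (range, reflectance) images with
        # place labels such as "office", "laboratory", or "kitchen".
        X = np.stack([place_descriptor(r, i) for r, i in train_scans])
        clf = SVC(kernel="rbf").fit(X, train_labels)
        prediction = clf.predict([place_descriptor(test_range, test_reflectance)])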

    Matterport3D: Learning from RGB-D Data in Indoor Environments

    Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.
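
    As one illustration of the kind of RGB-D scene understanding model such 2D semantic labels can supervise, the sketch below widens a standard segmentation network's first convolution to accept a 4-channel RGB-D input; the model choice, the 40-class label count, and the tensors are illustrative assumptions, not part of the Matterport3D release.

        # Hedged sketch: adapting an off-the-shelf segmentation network to
        # 4-channel RGB-D input for 2D semantic segmentation.
        import torch
        import torch.nn as nn
        from torchvision.models.segmentation import fcn_resnet50

        num_classes = 40  # assumed label-set size, not the dataset's exact count
        model = fcn_resnet50(weights=None, num_classes=num_classes)

        # Replace the 3-channel RGB stem with a 4-channel RGB-D stem.
        model.backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                         padding=3, bias=False)

        rgbd = torch.randn(2, 4, 256, 320)                    # placeholder RGB-D batch
        target = torch.randint(0, num_classes, (2, 256, 320))  # placeholder labels

        logits = model(rgbd)["out"]            # (N, num_classes, H, W)
        loss = nn.CrossEntropyLoss()(logits, target)
        loss.backward()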

    The survey of the Basilica di Collemaggio in L’Aquila with a system of terrestrial imaging and most proven techniques

    The proposed work evaluates a series of surveys carried out within a campaign of studies begun in 2015, with the objective of comparing the accuracies obtainable with terrestrial imaging systems against unmanned aerial vehicle imaging and laser scanner surveys. In particular, the authors test the applicability of an imaging rover (IR), an innovative terrestrial imaging system consisting of a multi-camera unit with an integrated global positioning system (GPS)/global navigation satellite system (GNSS) receiver; the technique has been released only very recently, and few literature references exist on the subject. In detail, the IR consists of a total of 12 calibrated cameras, seven “panorama” and five downward-looking, providing complete site documentation that can potentially be used to make photogrammetric measurements. The data acquired in this experiment were then processed with various software packages in order to obtain point clouds and a three-dimensional model in the different cases, and the various results obtained were compared. Finally, the case study of the Basilica di Santa Maria di Collemaggio in L’Aquila is reported; the basilica is a UNESCO World Heritage Site, it was damaged during the seismic event of 2009, and its restoration is still in progress.
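
    The comparison between point clouds derived from the different systems can be quantified, for example, with cloud-to-cloud distances; the sketch below uses Open3D as an illustrative stand-in for the (unspecified) software packages used in the study, and the file names are placeholders.

        # Hedged sketch of a cloud-to-cloud comparison between two survey products,
        # e.g. an imaging-rover-derived cloud and a laser-scanner reference.
        import numpy as np
        import open3d as o3d

        reference = o3d.io.read_point_cloud("laser_scan.ply")     # placeholder path
        candidate = o3d.io.read_point_cloud("imaging_rover.ply")  # placeholder path

        # Optionally refine alignment with point-to-point ICP before measuring distances.
        icp = o3d.pipelines.registration.registration_icp(
            candidate, reference, max_correspondence_distance=0.05)
        candidate.transform(icp.transformation)

        # Nearest-neighbour distance from every candidate point to the reference cloud.
        d = np.asarray(candidate.compute_point_cloud_distance(reference))
        print(f"mean {d.mean():.3f} m, RMS {np.sqrt((d**2).mean()):.3f} m, max {d.max():.3f} m")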

    Digital Urban - The Visual City

    Nothing in the city is experienced by itself, for a city’s perspicacity is the sum of its surroundings. To paraphrase Lynch (1960), at every instant, there is more than we can see and hear. This is the reality of the physical city, and thus in order to replicate the visual experience of the city within digital space, the space itself must convey to the user a sense of place. This is what we term the “Visual City”, a visually recognisable city built out of the digital equivalent of bricks and mortar: polygons, textures, and most importantly data. Recently there has been a revolution in the production and distribution of digital artefacts which represent the visual city. Digital city software that was once the domain of high-powered personal computers, research labs, and professional packages is now in the domain of the public at large through both the web and low-end home computing. These developments have gone hand in hand with the re-emergence of geography and geographic location as a way of tagging information to non-proprietary web-based software such as Google Maps, Google Earth, Microsoft’s Virtual Earth, ESRI’s ArcExplorer, and NASA’s World Wind, amongst others. The move towards ‘digital earths’ for the distribution of geographic information has, without doubt, opened up a widespread demand for the visualization of our environment, where the emphasis is now on the third dimension. While the third dimension is central to the development of the digital or visual city, it is not the only way the city can be visualized, for a number of emerging tools and ‘mashups’ are enabling visual data to be tagged geographically using a cornucopia of multimedia systems. We explore these social, textual, geographical, and visual technologies throughout this chapter.