19 research outputs found

    Vernacular architecture in Brazilian semiarid region: survey and memory in the state of Sergipe

    [EN] Buildings that incorporate earth have been common since the beginning of the settlement of Brazilian territory, and wattle-and-daub homes remain frequent in the Northeast region to this day. The technique uses a structural cage woven from wooden laths whose voids are filled with thrown wet clay. Because these buildings are now associated with sheltering insects that may carry Trypanosoma cruzi, the parasite that causes Chagas disease, numerous public policies call for their eradication and replacement with masonry construction. In response to the resulting destruction of these vernacular earthen buildings, this research surveys those that still stand in the semiarid region of Sergipe state. A literature review was carried out on architecture in the semiarid region and on building-investigation techniques using digital tools. Because the SARS-CoV-2 pandemic made field data collection impractical, methods suitable for a remote survey had to be found. An exploratory analysis was then conducted with freely available digital tools, through which popular earthen buildings in the legal semiarid region could be observed. Data was first collected from the latest demographic censuses of the Brazilian Institute of Geography and Statistics (IBGE), together with socioeconomic records of Brazilian families in poverty registered with the government. This initial data, however, contained no geographic positioning of the dwellings, so a survey was conducted with Google Street View, whose ground-level imagery proved effective for locating wattle-and-daub residences. From these data, a catalog of the constructions found was generated and, by georeferencing these dwellings, the documentation produced may contribute to preserving the vernacular constructive memory of the study area.

    Felix Andrade, D.; Penido De Rezende, M.; Araújo Lima Bessa, S. (2022). Vernacular architecture in Brazilian semiarid region: survey and memory in the state of Sergipe. Editorial Universitat Politècnica de València. 31-38. https://doi.org/10.4995/HERITAGE2022.2022.15127313
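The catalog step pairs each dwelling spotted in Street View with its geographic position. A minimal sketch of such a record in Python; the field names and the sample values are entirely illustrative, not the authors' schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class DwellingRecord:
    """One catalog entry for a wattle-and-daub dwelling spotted in
    Street View imagery (illustrative schema, not the authors' own)."""
    municipality: str
    latitude: float       # WGS84 decimal degrees (negative = south)
    longitude: float      # WGS84 decimal degrees (negative = west)
    capture_year: int     # year of the Street View capture used
    notes: str = ""

# a hypothetical entry for a municipality in Sergipe's semiarid zone
record = DwellingRecord("Poço Redondo", -9.81, -37.68, 2019,
                        notes="partially rendered facade")
```

Serialising such records (e.g. with `asdict`) makes them easy to georeference in any GIS tool later.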

    A Hierarchical Urban Forest Index Using Street-Level Imagery and Deep Learning

    We develop a method based on computer vision and a hierarchical multilevel model to derive an Urban Street Tree Vegetation Index that quantifies the amount of vegetation visible from the point of view of a pedestrian. Our approach unfolds in two steps. First, areas of vegetation are detected within street-level imagery using a state-of-the-art deep neural network model. Second, information from several images is combined into an aggregated area-level indicator using a hierarchical multilevel model. The comparative performance of the proposed approach is demonstrated against a widely used image segmentation technique on a pre-labelled dataset. The approach is then deployed in a real-world scenario for the city of Cardiff, Wales, using Google Street View imagery. Based on more than 200,000 street-level images, a street-level urban tree indicator is derived to measure the spatial distribution of tree cover, accounting for obstructing objects present in the images, at the Lower Layer Super Output Area (LSOA) level, the administrative unit most commonly used for policy-making in the United Kingdom. The results show a high degree of correspondence between our street-level tree score and aerial tree cover estimates. The tree score also yields more accurate estimates from the pedestrian perspective by better capturing tree cover in areas with large burial, woodland, and formal or informal open spaces where shallow trees are abundant, in high-density residential areas with backyard trees, and along street networks with a high density of tall trees. The proposed approach is scalable and automatable. It can be applied to cities across the world and provides robust estimates of urban trees to advance our understanding of the links between mental health, well-being, green space and air pollution.
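The second step, aggregating per-image detections into an area-level score, can be sketched as a two-level partial-pooling estimate: each area's mean is shrunk toward the global mean, with more shrinkage for areas that contribute few images. This is an illustrative stand-in for the paper's hierarchical multilevel model; the between-area variance `tau2` is a fixed assumption here rather than being estimated from the data:

```python
import numpy as np

def area_vegetation_index(image_scores, area_ids, tau2=0.01):
    """Aggregate per-image vegetation fractions (0..1) into area-level
    scores with partial pooling (random-intercept style shrinkage)."""
    image_scores = np.asarray(image_scores, dtype=float)
    mu = image_scores.mean()                 # global mean across all images
    sigma2 = image_scores.var() + 1e-9       # crude proxy for within-area variance
    out = {}
    for a in set(area_ids):
        x = image_scores[[i == a for i in area_ids]]
        n = len(x)
        # shrinkage weight: more images in an area -> trust its mean more
        w = tau2 / (tau2 + sigma2 / n)
        out[a] = w * x.mean() + (1 - w) * mu
    return out
```

Areas covered by only a handful of Street View images thus borrow strength from the city-wide average instead of reporting a noisy raw mean.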

    DeepClean : a robust deep learning technique for autonomous vehicle camera data privacy

    Autonomous Vehicles (AVs) are equipped with several sensors which produce various forms of data, such as geo-location, distance, and camera data. The volume and utility of these data, especially camera data, have contributed to the advancement of high-performance self-driving applications. However, these vehicles and their collected data are prone to security and privacy attacks. One of the main attacks against AV-generated camera data is location inference, in which camera data is used to extract knowledge for tracking the users. A few research studies have proposed privacy-preserving approaches for analysing AV-generated camera data using powerful generative models, such as the Variational Auto-Encoder (VAE) and the Generative Adversarial Network (GAN). However, the related work assumes a weak geo-localisation attack model, which leads to weak privacy protection against stronger attacks. This paper proposes DeepClean, a robust deep-learning model that combines a VAE with a private clustering technique. DeepClean learns distinct labelled object structures of the image data as clusters and generates a faithful visual representation of the non-private object clusters, e.g., roads. It then distorts the private object areas using a private Gaussian Mixture Model (GMM) fitted to the cluster structures of the labelled object areas. The synthetic images generated by our model guarantee privacy and hold a robust location inference attack to less than 4% localisation accuracy. This result implies that using DeepClean for synthetic data generation makes it less likely for a subject to be localised by an attacker, even one mounting a robust geo-localisation attack. The overall image utility of the synthetic images generated by DeepClean is comparable to the benchmark studies.
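The distortion step can be illustrated as follows: pixels flagged as private are replaced with draws from a two-component Gaussian mixture. In DeepClean itself the mixture is learned privately from the labelled object clusters, so the fixed component means and standard deviation below are assumptions made purely for the sketch:

```python
import numpy as np

def distort_private_regions(image, private_mask, means=(0.3, 0.7), std=0.1, rng=None):
    """Replace pixels flagged as private with draws from a two-component
    Gaussian mixture; a stand-in for DeepClean's private-GMM distortion.
    `image` holds intensities in [0, 1]; `private_mask` is boolean."""
    rng = np.random.default_rng(rng)
    out = image.astype(float).copy()
    n = int(private_mask.sum())
    comp = rng.integers(0, len(means), size=n)        # pick a mixture component per pixel
    samples = rng.normal(np.take(means, comp), std)   # draw a value from that component
    out[private_mask] = np.clip(samples, 0.0, 1.0)    # keep a valid intensity range
    return out
```

Non-private regions pass through untouched, which is what preserves the utility of the synthetic image for downstream tasks.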

    Line-based deep learning method for tree branch detection from digital images

    The final publication is available at Elsevier via https://doi.org/10.1016/j.jag.2022.102759. © 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license.

    Preventive maintenance of power lines, including cutting and pruning of tree branches, is essential to avoid interruptions in the energy supply. Automatic methods can support this risky task and also reduce the time it consumes. Here, we propose a method that estimates the orientation and grasping positions of tree branches. The method first predicts the straight line representing the tree branch extension using a convolutional neural network (CNN). Second, a Hough transform is applied to estimate the direction and position of the line. Finally, we estimate the grip point as the pixel with the highest probability of belonging to the line. We generated a dataset from internet searches and annotated 1,868 images covering challenging scenarios with different tree branch shapes, capture devices, and environmental conditions. Ten-fold cross-validation was adopted, with 90% of the data for training and 10% for testing. We also assessed the method under image corruptions (Gaussian and shot noise) at different severity levels. The experimental analysis showed the effectiveness of the proposed method, which reported an F1-score of 96.78% and outperformed the state-of-the-art Deep Hough Transform (DHT) and Fully Convolutional Line Parsing (F-Clip) methods.

    This research was funded by CNPq (p: 433783/2018–4, 310517/2020–6, 314902/2018–0, 304052/2019–1 and 303559/2019–5), FUNDECT (p: 59/300.066/2015, 071/2015) and CAPES PrInt (p: 88881.311850/2018–01). The authors acknowledge the support of the UFMS (Federal University of Mato Grosso do Sul) and CAPES (Finance Code 001). This research was also partially supported by the Emerging Interdisciplinary Project of the Central University of Finance and Economics.

    Integrating aerial and street view images for urban land use classification

    Urban land use is key to rational urban planning and management. Traditional land use classification methods rely heavily on domain experts, which is both expensive and inefficient. In this paper, deep neural network-based approaches are presented to label urban land use at the pixel level using high-resolution aerial images and ground-level street view images. We use a deep neural network to extract semantic features from sparsely distributed street view images and interpolate them in the spatial domain to match the spatial resolution of the aerial images; the two are then fused through a deep neural network that classifies land use categories. Our methods are tested on a large publicly available dataset of aerial and street view images of New York City. The results show that aerial images alone achieve relatively high classification accuracy, that ground-level street view images contain useful information for urban land use classification, and that fusing street-image features with aerial images improves classification accuracy. Moreover, we present experiments showing that street view images add more value when the resolution of the aerial images is lower, and case studies illustrating how street view images provide useful auxiliary information that boosts performance over aerial images alone.
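The interpolation step, spreading sparse street-view feature vectors over the dense aerial grid before fusion, can be illustrated with inverse-distance weighting. IDW is an assumption made for this sketch; the abstract does not commit to a specific spatial interpolator:

```python
import numpy as np

def idw_interpolate(points, feats, grid_xy, power=2.0, eps=1e-6):
    """Spread sparse street-view feature vectors over a dense grid with
    inverse-distance weighting, so they can be stacked channel-wise with
    aerial-image features at each grid cell.

    points  : (P, 2) image capture locations
    feats   : (P, D) feature vector per street-view image
    grid_xy : (G, 2) coordinates of the aerial-grid cells
    """
    points, feats, grid_xy = map(np.asarray, (points, feats, grid_xy))
    d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # nearer images weigh more
    w /= w.sum(axis=1, keepdims=True)       # normalise weights per grid cell
    return w @ feats                        # (G, D) interpolated features
```

The resulting (G, D) array has one feature vector per aerial-grid cell and can be concatenated with the aerial features before the fusion network.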

    Automatic Annotation of Subsea Pipelines using Deep Learning

    Regulatory requirements for sub-sea oil and gas operators mandate the frequent inspection of pipeline assets to ensure that their degradation and damage are maintained at acceptable levels. The inspection process is usually sub-contracted to surveyors who utilize sub-sea Remotely Operated Vehicles (ROVs), launched from a surface vessel and piloted over the pipeline. ROVs capture data from various sensors and instruments which are subsequently reviewed and interpreted by human operators, creating a log of event annotations; a slow, labor-intensive and costly process. The paper presents an automatic image annotation framework that identifies and classifies key events of interest in the video footage, viz. exposure, burial, field joints, anodes, and free spans. The methodology utilizes transfer learning with a deep convolutional neural network (ResNet-50), fine-tuned on real-life, representative data from challenging sub-sea environments with low lighting, sand agitation, sea-life and vegetation. The network outputs are configured to perform multi-label image classification for the critical events. Annotation performance varies between 95.1% and 99.7% accuracy and between 90.4% and 99.4% F1-score depending on event type. These per-frame results corroborate the potential of the algorithm to serve as the foundation for an intelligent decision-support framework that automates the annotation process. The solution can execute annotations in real-time and is significantly more cost-effective than human-only approaches.
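The multi-label output stage can be sketched as an independent sigmoid per event type, so a single frame may carry several annotations at once (e.g. exposure and a field joint). The five event names follow the abstract; the threshold and function names are illustrative:

```python
import numpy as np

# the five event types listed in the abstract
EVENTS = ["exposure", "burial", "field_joint", "anode", "free_span"]

def multilabel_events(logits, threshold=0.5):
    """Turn per-event logits (e.g. from a fine-tuned ResNet-50 head with
    five outputs) into a list of event annotations for one frame.
    Unlike softmax classification, each event gets an independent
    sigmoid, so any subset of labels can fire simultaneously."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return [e for e, p in zip(EVENTS, probs) if p >= threshold]
```

In a per-frame evaluation like the paper's, accuracy and F1 would then be computed label-by-label against the human annotation log.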