926 research outputs found

    Urban greenery and mental wellbeing in adults: Cross-sectional mediation analyses on multiple pathways across different greenery measures

    Multiple mechanisms have been proposed to explain how greenery enhances people's mental wellbeing. Mediation studies, however, have focused on a limited number of mechanisms and relied on remotely sensed greenery measures, which do not accurately capture how neighborhood greenery is perceived on the ground. We examined: 1) how streetscape and remote sensing-based greenery affect people's mental wellbeing in Guangzhou, China; 2) whether, and if so to what extent, the associations are mediated by physical activity, stress, air quality and noise, and social cohesion; and 3) whether the mediation differs between the streetscape greenery and NDVI exposure metrics. Mental wellbeing was quantified with the WHO-5 wellbeing index. Greenery measures were extracted at the neighborhood level: 1) streetscape greenery from street view data via a convolutional neural network, and 2) NDVI from remote sensing images. Single and multiple mediation analyses with multilevel regressions were conducted. Streetscape and NDVI greenery were weakly and positively, but not significantly, correlated. Our regression results revealed that streetscape greenery and NDVI were, individually and jointly, positively associated with mental wellbeing. Significant partial mediators for streetscape greenery were physical activity, stress, air quality and noise, and social cohesion; together, they explained 62% of the association. For NDVI, only physical activity and social cohesion were significant partial mediators, accounting for 22% of the association. Mental wellbeing thus appears to be both directly associated with streetscape and satellite-derived greenery and indirectly linked through mediators. Our findings signify that the two greenery measures capture different aspects of natural environments and may contribute to people's wellbeing through different mechanisms.
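    The "proportion of the association explained" by mediators (62% and 22% above) comes from decomposing a total effect into direct and indirect parts. A minimal single-mediator sketch on synthetic data (the coefficients and variable names are illustrative, not the study's, and the paper's multilevel structure is omitted):

```python
import numpy as np

# Synthetic data: greenery -> physical activity -> wellbeing,
# plus a direct greenery -> wellbeing path. Coefficients are made up.
rng = np.random.default_rng(0)
n = 500
greenery = rng.normal(size=n)                                     # exposure
activity = 0.4 * greenery + rng.normal(size=n)                    # mediator
wellbeing = 0.3 * greenery + 0.5 * activity + rng.normal(size=n)  # outcome

def ols(y, cols):
    """Least-squares fit with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(wellbeing, [greenery])[1]            # c: total effect
a = ols(activity, [greenery])[1]                 # a: exposure -> mediator
b = ols(wellbeing, [greenery, activity])[2]      # b: mediator -> outcome
prop_mediated = (a * b) / total                  # indirect / total
print(f"proportion mediated ~ {prop_mediated:.2f}")
```

    With the coefficients above the true proportion mediated is 0.2/0.5 = 0.4; the estimate lands near that value. The study instead uses multilevel regressions with several simultaneous mediators, but the decomposition logic is the same.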

    Treepedia 2.0: Applying Deep Learning for Large-scale Quantification of Urban Tree Cover

    Recent advances in deep learning have made it possible to quantify urban metrics at fine resolution and over large extents using street-level images. Here, we focus on measuring urban tree cover using Google Street View (GSV) images. First, we provide a small-scale labelled validation dataset and propose standard metrics to compare the performance of automated estimations of street tree cover using GSV. We apply state-of-the-art deep learning models and compare their performance to a previously established benchmark of an unsupervised method. Our training procedure for deep learning models is novel; we utilize the abundance of openly available and similarly labelled street-level image datasets to pre-train our model. We then perform additional training on a small training dataset consisting of GSV images. We find that deep learning models significantly outperform the unsupervised benchmark method. Our semantic segmentation model increased mean intersection-over-union (IoU) from 44.10% to 60.42% relative to the unsupervised method, and our end-to-end model decreased Mean Absolute Error from 10.04% to 4.67%. We also employ a recently developed method called gradient-weighted class activation mapping (Grad-CAM) to interpret the features learned by the end-to-end model. This technique confirms that the end-to-end model has accurately learned to identify tree cover areas as the key features for predicting percentage tree cover. Our paper provides an example of applying advanced deep learning techniques to a large-scale, geo-tagged, image-based dataset to efficiently estimate important urban metrics. The results demonstrate that deep learning models are highly accurate, can be interpretable, and can also be efficient in terms of data-labelling effort and computational resources.
    Comment: Accepted and will appear in the IEEE BigData Congress 2018 conference proceedings.
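    The mean IoU figure reported above averages per-class intersection-over-union between predicted and ground-truth label maps. A minimal sketch with toy labels (the class IDs and arrays are illustrative):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                    # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 segmentation maps: 1 = tree, 0 = non-tree
pred   = np.array([[0, 0, 1], [1, 1, 1]])
target = np.array([[0, 1, 1], [1, 1, 0]])
print(mean_iou(pred, target, num_classes=2))  # (1/3 + 3/5) / 2
```

    The paper's end-to-end model is evaluated differently, by Mean Absolute Error on the predicted percentage tree cover rather than per-pixel IoU.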

    Efficient assessment of window views in high-rise, high-density urban areas using 3D color City Information Models

    Urban-scale quantification of window views can inform housing selection and valuation, landscape management, and urban planning. However, window views are numerous in high-rise, high-density urban areas, and current automatic assessments of window views are inaccurate and time-consuming. Accurate and efficient assessment of window views is therefore essential for automating urban-scale window view applications. This paper presents an automatic, accurate, and efficient assessment of window view indices (WVIs) of greenery, sky, waterbody, and construction using 3D color City Information Models (CIMs). The workflow includes: i) 3D semantic segmentation of the photorealistic CIM and Digital Surface Model (DSM), and ii) batch computation of WVIs. Experimental results showed the estimated WVIs were more accurate (RMSE < 0.01), and the proposed method more efficient (3.68 times faster), than Li et al.'s (2022) 2D semantic segmentation approach. Thus, the proposed method can facilitate large-scale WVI assessment and updating in healthy high-rise, high-density urban development.
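    A window view index of the kind described is typically the share of a rendered view occupied by each semantic class. A minimal sketch, assuming the view has already been rendered from the 3D CIM as a per-pixel class-ID map (the class IDs and toy array are illustrative):

```python
import numpy as np

# Illustrative class IDs; the paper's four WVI classes.
CLASSES = {0: "greenery", 1: "sky", 2: "waterbody", 3: "construction"}

def window_view_indices(view):
    """view: 2D array of per-pixel class IDs for one rendered window view.
    Returns the fraction of pixels in each class (each WVI in [0, 1])."""
    total = view.size
    return {name: float((view == cid).sum()) / total
            for cid, name in CLASSES.items()}

view = np.array([[1, 1, 0],
                 [3, 3, 0]])   # toy 2x3 rendered view
print(window_view_indices(view))
```

    Batch computation over many windows then amounts to repeating this per rendered viewpoint, which is where the reported speedup over 2D image segmentation comes from: the 3D model is segmented once, not once per view.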

    An AI-based framework for studying visual diversity of urban neighborhoods and its relationship with socio-demographic variables

    This study presents a framework for quantitatively studying the geographical visual diversity of urban neighborhoods from a large collection of street-view images using an Artificial Intelligence (AI)-based image segmentation technique. A variety of diversity indices are computed from the extracted visual semantics and used to discover relationships between urban visual appearance and socio-demographic variables. The study also validates the reliability of the method against human evaluators. The methodology and results obtained from this study can potentially be used to study urban features, locate houses, establish services, and better operate municipalities.
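    One common diversity index over segmentation output is Shannon entropy of the per-class pixel shares; the abstract does not name its indices, so this is a representative sketch with made-up class shares:

```python
import numpy as np

def shannon_diversity(proportions):
    """Shannon diversity H = -sum(p * ln p) over nonzero class proportions."""
    p = np.asarray(proportions, dtype=float)
    p = p[p > 0]                       # 0 * ln 0 is taken as 0
    return float(-(p * np.log(p)).sum())

# Toy per-class pixel shares for one neighborhood's street-view imagery
# (e.g. building, tree, road, sky) -- illustrative values only.
shares = [0.4, 0.3, 0.2, 0.1]
print(round(shannon_diversity(shares), 3))
```

    Higher values indicate that a neighborhood's visual scene is spread more evenly across semantic classes; a view dominated by a single class scores near zero.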