93 research outputs found

    Streetscore -- Predicting the Perceived Safety of One Million Streetscapes

    Social science literature has shown a strong connection between the visual appearance of a city's neighborhoods and the behavior and health of its citizens. Yet, this research is limited by the lack of methods that can be used to quantify the appearance of streetscapes across cities or at high enough spatial resolutions. In this paper, we describe 'Streetscore', a scene understanding algorithm that predicts the perceived safety of a streetscape, using training data from an online survey with contributions from more than 7000 participants. We first study the predictive power of commonly used image features using support vector regression, finding that Geometric Texton and Color Histograms along with GIST are the best performers when it comes to predicting the perceived safety of a streetscape. Using Streetscore, we create high resolution maps of perceived safety for 21 cities in the Northeast and Midwest of the United States at a resolution of 200 images/square mile, scoring ~1 million images from Google Streetview. These datasets should be useful for urban planners, economists and social scientists looking to explain the social and economic consequences of urban perception.
    Funders: MIT Media Lab Consortium; Google (Firm); Living Labs; Tides Foundation.
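
    A minimal sketch of the regression setup described in this abstract, assuming pre-extracted GIST, geometric texton, and color-histogram descriptors saved to disk; the file names, feature dimensions, and hyperparameters below are illustrative assumptions, not the paper's own pipeline.

```python
# Sketch: support vector regression from image features to perceived-safety scores.
# Assumes descriptors were already extracted and saved as NumPy arrays aligned
# with a vector of crowd-sourced safety scores (all file names hypothetical).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

gist = np.load("gist.npy")               # e.g. (n_images, 512)
textons = np.load("textons.npy")         # e.g. (n_images, 256)
color_hist = np.load("color_hist.npy")   # e.g. (n_images, 96)
scores = np.load("safety_scores.npy")    # perceived-safety scores from the survey

# Concatenate the descriptors, since the abstract reports combining them.
X = np.hstack([gist, textons, color_hist])
X_train, X_test, y_train, y_test = train_test_split(X, scores, test_size=0.2, random_state=0)

# A linear-kernel SVR keeps the sketch simple; the paper may use other kernels.
model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
model.fit(X_train, y_train)
print("R^2 on held-out images:", r2_score(y_test, model.predict(X_test)))
```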

    Cities Are Physical Too: Using Computer Vision to Measure the Quality and Impact of Urban Appearance

    For social scientists, developing an empirical connection between the physical appearance of a city and the behavior and health of its inhabitants has proved challenging due to a lack of data on urban appearance. Can we use computers to quantify urban appearance from street-level imagery? We describe Streetscore: a computer vision algorithm that measures the perceived safety of streetscapes. Using Streetscore to evaluate 19 American cities, we find that average perceived safety has a strong positive correlation with population density and household income, and that the variation in perceived safety has a strong positive correlation with income inequality.
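
    A rough sketch of the kind of city-level correlation reported above, assuming a summary table with one row per city; the column names are placeholders I introduce for illustration.

```python
# Sketch: correlating city-level perceived safety with census variables.
# Assumes a CSV with one row per city; all column names are hypothetical.
import pandas as pd

cities = pd.read_csv("city_summary.csv")  # columns: city, mean_streetscore,
                                          # score_variance, pop_density,
                                          # median_income, income_gini

# Average perceived safety vs. density and household income.
print(cities[["mean_streetscore", "pop_density", "median_income"]].corr())

# Variation in perceived safety vs. income inequality.
print(cities[["score_variance", "income_gini"]].corr())
```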

    Assessing verticalization effects on urban safety perception

    We describe an experiment modeling the effects of urban verticalization on perceived safety scores obtained with computer vision from Google Streetview data for New York City. Preliminary results suggest that for smaller buildings (between one and seven floors), perceived safety increases with building height, but that for high-rise buildings, perceived safety decreases with increased height. We also determined that while height contributes to this relation, other zonal aspects also influence the perceived safety scores, suggesting that spatial structure shapes these scores as well.
    Comment: 2017 SIGSPATIAL Student Research Competition.
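
    One way to probe the non-monotonic height effect described above is a piecewise-linear regression with a breakpoint around seven floors; the data file, column names, and specification below are my own illustrative assumptions, not the authors' model.

```python
# Sketch: piecewise-linear fit of perceived safety on building height (floors),
# with a breakpoint at 7 floors as suggested by the preliminary results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nyc_safety_heights.csv")  # hypothetical columns: floors, safety_score

BREAK = 7
df["low_rise"] = np.minimum(df["floors"], BREAK)       # slope below the breakpoint
df["high_rise"] = np.maximum(df["floors"] - BREAK, 0)  # additional slope above it

# A positive coefficient on low_rise and a negative one on high_rise would match
# safety rising with height for small buildings and falling for high-rises.
model = smf.ols("safety_score ~ low_rise + high_rise", data=df).fit()
print(model.summary())
```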

    Computer vision uncovers predictors of physical urban change

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements—an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements—an observation that is consistent with “tipping” theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods—an observation that is consistent with the “invasion” theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.
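
    The three predictors named above could be examined with a simple cross-sectional regression like the sketch below; the dataframe and column names are assumptions, and the paper's actual specification (controls, standard errors, spatial terms) is richer.

```python
# Sketch: regressing the change in appearance score on the three predictors above.
import pandas as pd
import statsmodels.formula.api as smf

tracts = pd.read_csv("tract_panel.csv")  # hypothetical columns:
# delta_streetscore    change in score between the two image vintages
# share_college        share of adults with a college degree
# initial_streetscore  score in the first time period
# dist_cbd_km          distance to the central business district
# neighbor_score       average initial score of adjacent tracts

model = smf.ols(
    "delta_streetscore ~ share_college + initial_streetscore"
    " + dist_cbd_km + neighbor_score",
    data=tracts,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```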

    Street-Frontage-Net: urban image classification using deep convolutional neural networks

    Quantifying aspects of urban design on a massive scale is crucial to help develop a deeper understanding of the urban design elements that contribute to the success of a public space. In this study, we further develop the Street-Frontage-Net (SFN), a convolutional neural network (CNN) that can successfully evaluate the quality of street frontage as either active (frontage containing windows and doors) or blank (frontage containing walls, fences and garages). Small-scale studies have indicated that the more active the frontage, the livelier and safer a street feels. However, collecting the city-level data necessary to evaluate street frontage quality is costly. The SFN model uses a deep CNN to classify the frontage of a street. This study expands on the previous research via five experiments. We find robust results in classifying frontage quality for an out-of-sample test set, achieving an accuracy of up to 92.0%. We also find that active frontage in a neighbourhood has a significant link with increased house prices. Lastly, we find that active frontage is associated with greater scenicness compared to blank frontage. While further research is needed, the results indicate the great potential of deep learning methods for geographic information extraction and urban design.
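
    A compact sketch of a binary frontage classifier in the spirit of SFN, fine-tuning a pretrained backbone; this is not the authors' architecture, and the directory layout (frontage/active, frontage/blank) and training settings are assumed for illustration.

```python
# Sketch: fine-tuning a pretrained CNN to label street frontage as active or blank.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: frontage/{active,blank}/*.jpg
data = datasets.ImageFolder("frontage", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: active vs. blank

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(3):  # short demo run; real training needs more epochs and a validation split
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```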

    Investigating the impacts of street environment on pre-owned housing price in Shanghai using street-level images

    Studies considering the impact of street environment quality on housing value have been limited to top-down variables such as the green ratio measured from satellite maps. In contrast, this study quantified the impact of street views on the value of second-hand commodity residential properties in Shanghai based on analysis of street view imagery. (1) It applied computer vision to objectively measure street features from widely accessible street view imagery. (2) Based on classical urban design measurement frameworks, it applied machine learning to systematically evaluate human-perceived street quality as street scores, in contrast to the common practice of doing so in a more intuition-based fashion. (3) It further identified important indicators, from both the human-centered street scores and the more objective street feature measures, with positive or adverse effects on property values, based on a hedonic modeling method. The estimation suggested that both street scores and street features are significant and nonnegligible. For the perceived street scores (on a 0-10 scale), neighborhoods with a unit increase in their “enclosure” or “safety” score enjoy a price premium of 0.3% to 0.6%. Meanwhile, 10% greater tree canopy exposure along a street is associated with a 0.2% increase in property value. This study enriches our understanding, at a micro level, of the built-environment factors that affect property values. It introduces human-centered street scores and objective measures of street features as spatial variables into the analysis of neighborhood attribute vectors.
    Qiu, W.; Huang, X.; Li, X.; Li, W.; Zhang, Z. (2020). Investigating the impacts of street environment on pre-owned housing price in Shanghai using street-level images. Editorial Universitat Politècnica de València. 29-39. https://doi.org/10.4995/CARMA2020.2020.11410
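
    The hedonic specification can be illustrated roughly as below, regressing log price on perceived street scores, tree-canopy exposure, and a few structural controls; every column name here is a placeholder, and the published model includes many more covariates.

```python
# Sketch: hedonic model of second-hand housing prices on street-view measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

homes = pd.read_csv("shanghai_listings.csv")  # hypothetical columns:
# price_per_sqm, safety_score, enclosure_score (0-10 scales),
# tree_canopy (share 0-1), floor_area, building_age, dist_metro_km

homes["log_price"] = np.log(homes["price_per_sqm"])

model = smf.ols(
    "log_price ~ safety_score + enclosure_score + tree_canopy"
    " + floor_area + building_age + dist_metro_km",
    data=homes,
).fit(cov_type="HC1")

# With a log-price outcome, a coefficient of 0.003-0.006 on a 0-10 score maps to
# roughly a 0.3%-0.6% premium per one-point increase, the magnitude the abstract reports.
print(model.params)
```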