
    ACCURACY ASSESSMENT OF 3D MODELS GENERATED FROM GOOGLE STREET VIEW IMAGERY

    Google Street View is a technology implemented in several Google services/applications (e.g. Google Maps, Google Earth) which provides the user with street-level panoramic images (represented in equirectangular projection) of a chosen location on the map. Consecutive panoramas are generally acquired at an average spacing of 5–10 m; they can therefore be treated like a traditional photogrammetric strip and processed to reconstruct portions of a city at nearly zero cost. Most photogrammetric software packages available today implement spherical camera models and can directly process images in equirectangular projection. Although many authors have published relevant work involving the use of Google Street View imagery, mainly for 3D city model reconstruction, very few references address the actual accuracy that can be obtained with such data. The goal of the present work is to present preliminary tests (at the time of writing, just three case studies have been analysed) of the accuracy and reliability of the 3D models obtained from Google Street View panoramas.
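    The spherical camera model mentioned in this abstract reduces to a simple mapping between equirectangular pixels and viewing rays. The sketch below illustrates that mapping; the axis convention (x forward, z up) and the 2:1 image layout are assumptions for illustration, not details taken from the paper:

    ```python
    import math

    def pixel_to_ray(u, v, width, height):
        """Map a pixel (u, v) in an equirectangular panorama to a unit
        viewing ray in the camera frame (x forward, y left, z up)."""
        # Longitude spans [-pi, pi] across the image width,
        # latitude spans [pi/2, -pi/2] from top to bottom.
        lon = (u / width - 0.5) * 2.0 * math.pi
        lat = (0.5 - v / height) * math.pi
        x = math.cos(lat) * math.cos(lon)
        y = math.cos(lat) * math.sin(lon)
        z = math.sin(lat)
        return (x, y, z)

    def ray_to_pixel(x, y, z, width, height):
        """Inverse mapping: a unit ray back to equirectangular pixel
        coordinates, as used when projecting 3D points into a panorama."""
        lon = math.atan2(y, x)
        lat = math.asin(max(-1.0, min(1.0, z)))
        u = (lon / (2.0 * math.pi) + 0.5) * width
        v = (0.5 - lat / math.pi) * height
        return (u, v)
    ```

    With two or more panoramas a few metres apart, intersecting such rays is what lets the strip be processed photogrammetrically.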

    An Approach Of Automatic Reconstruction Of Building Models For Virtual Cities From Open Resources

    Along with the ever-increasing popularity of virtual reality technology in recent years, 3D city models have been used in different applications, such as urban planning, disaster management, tourism, entertainment, and video games. Currently, those models are mainly reconstructed from access-restricted data sources such as LiDAR point clouds, airborne images, satellite images, and UAV (uncrewed aerial vehicle) images, with a focus on the structural illustration of buildings’ contours and layouts. To bring 3D models closer to their real-life counterparts, this thesis research proposes a new approach for the automatic reconstruction of building models from open resources. In this approach, first, building shapes are reconstructed using the structural and geographic information retrievable from the open repository of OpenStreetMap (OSM). Later, images available from the street view of Google Maps are used to extract information on the exterior appearance of buildings for texture mapping onto their boundaries. The constructed 3D environment is used as prior knowledge for navigation purposes in a self-driving car. The static objects from the 3D model are compared with real-time images of static objects to reduce computation time by eliminating them from the detection process.
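    The first step described here, turning an OSM footprint into a building shape, can be sketched as a simple prism extrusion. This is a minimal illustration of the idea, not the thesis's actual pipeline; the footprint format and the per-wall quad output are assumptions:

    ```python
    def extrude_footprint(footprint, height):
        """Extrude a 2D building footprint (a list of (x, y) vertices,
        implicitly closed) into a simple 3D prism.

        Returns one quad face per wall as a list of four (x, y, z)
        points, a shape onto which a street-view facade image could be
        texture-mapped."""
        walls = []
        n = len(footprint)
        for i in range(n):
            x0, y0 = footprint[i]
            x1, y1 = footprint[(i + 1) % n]  # wrap to close the polygon
            walls.append([(x0, y0, 0.0), (x1, y1, 0.0),
                          (x1, y1, height), (x0, y0, height)])
        return walls
    ```

    In practice the height would come from OSM tags such as `height` or `building:levels` where available, with a default otherwise.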

    Sky View Factors from Synthetic Fisheye Photos for Thermal Comfort Routing—A Case Study in Phoenix, Arizona

    The Sky View Factor (SVF) is a dimension-reduced representation of urban form and one of the major variables in radiation models that estimate outdoor thermal comfort. Common ways of retrieving SVFs in urban environments include capturing fisheye photographs or creating a digital 3D city or elevation model of the environment. Such techniques have previously been limited by a lack of imagery or a lack of full-scale detailed models of urban areas. We developed a web-based tool that automatically generates synthetic hemispherical fisheye views from Google Earth at arbitrary spatial resolution and calculates the corresponding SVFs through equiangular projection. SVF results were validated using Google Maps Street View and compared to results from other SVF calculation tools. We generated 5-meter resolution SVF maps for two neighborhoods in Phoenix, Arizona to illustrate fine-scale variations of intra-urban horizon limitations due to urban form and vegetation. To demonstrate the utility of our synthetic fisheye approach for heat stress applications, we automated a radiation model to generate outdoor thermal comfort maps for Arizona State University’s Tempe campus for a hot summer day using synthetic fisheye photos and on-site meteorological data. Model output was tested against mobile transect measurements of the six-directional radiant flux density. Based on the thermal comfort maps, we implemented a pedestrian routing algorithm that is optimized for distance and thermal comfort preferences. Our synthetic fisheye approach can help planners assess urban design and tree planting strategies to maximize thermal comfort outcomes and can support heat hazard mitigation in urban areas.
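    Computing an SVF from an equiangular fisheye image reduces to a weighted count of sky pixels: in an equiangular projection the distance from the image centre is proportional to the zenith angle, and each sky direction contributes cos(θ)·sin(θ) to the view factor of a horizontal surface. The sketch below shows this for a binary sky mask; it is an illustrative implementation under those assumptions, not the authors' tool:

    ```python
    import math

    def sky_view_factor(sky_mask):
        """Estimate the SVF from a square, equiangular hemispherical
        fisheye image given as a binary mask
        (sky_mask[row][col] == 1 for sky, 0 for obstruction)."""
        n = len(sky_mask)
        cx = cy = (n - 1) / 2.0
        radius = n / 2.0
        weighted_sky = 0.0
        weight_total = 0.0
        for row in range(n):
            for col in range(n):
                r = math.hypot(col - cx, row - cy)
                if r > radius:  # pixel lies outside the fisheye circle
                    continue
                # Equiangular projection: radius maps linearly to zenith angle.
                theta = (r / radius) * (math.pi / 2.0)
                w = math.cos(theta) * math.sin(theta)
                weight_total += w
                if sky_mask[row][col]:
                    weighted_sky += w
        return weighted_sky / weight_total if weight_total else 0.0
    ```

    A fully open sky yields an SVF of 1, a fully obstructed hemisphere 0; intermediate values quantify how much of the radiating sky dome a street point actually "sees".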

    Seeing the invisible: from imagined to virtual urban landscapes

    Urban ecosystems consist of infrastructure features working together to provide services for inhabitants. Infrastructure functions akin to an ecosystem, having dynamic relationships and interdependencies. However, with age, urban infrastructure can deteriorate and stop functioning. Additional pressures on infrastructure include urbanizing populations and a changing climate that exposes vulnerabilities. To manage the urban infrastructure ecosystem in a modernizing world, urban planners need to integrate a coordinated management plan for these co-located and dependent infrastructure features. To implement such a management practice, an improved method for communicating how these infrastructure features interact is needed. This study aims to define urban infrastructure as a system, identify the systematic barriers preventing implementation of a more coordinated management model, and develop a virtual reality tool to provide visualization of the spatial system dynamics of urban infrastructure. Data was collected from a stakeholder workshop that highlighted a lack of appreciation for the system dynamics of urban infrastructure. An urban ecology VR model was created to highlight the interconnectedness of infrastructure features. VR proved to be useful for communicating spatial information to urban stakeholders about the complexities of infrastructure ecology and the interactions between infrastructure features. Published version: https://doi.org/10.1016/j.cities.2019.102559

    Virtual cities management and organisation

    This paper presents a recent overview of the increasing use of Virtual Reality (VR) technologies for the simulation of urban environments. It builds on previous research conducted on the identification of three-dimensional (3D) city models and offers an analysis of the development, utilization and construction of VR city models. Issues pertaining to advantages, barriers and ownership are identified. The paper describes a case study of the development of a VR model for the city of Newcastle upon Tyne in the UK and outlines the role that academic institutions can play in both the creation and utilization of urban models. The study offers a new approach for the creation, management and update of urban models and reflects on issues which are emerging. Areas for future research are discussed.

    An overview of virtual city modelling : emerging organisational issues

    This paper presents a recent overview of the increasing use of Virtual Reality (VR) technologies for the simulation of urban environments. It builds on previous research conducted on the identification of three-dimensional (3D) city models and offers an analysis of the development, utilization and construction of VR city models. Issues pertaining to advantages, barriers and ownership are identified. The paper describes a case study of the development of a VR model for the city of Newcastle upon Tyne in the UK and outlines the role that academic institutions can play in both the creation and utilization of urban models. The study offers a new approach for the creation, management and update of urban models and reflects on issues which are emerging. Areas for future research are discussed.

    Digital Urban - The Visual City

    Nothing in the city is experienced by itself, for a city’s perspicacity is the sum of its surroundings. To paraphrase Lynch (1960), at every instant, there is more than we can see and hear. This is the reality of the physical city, and thus in order to replicate the visual experience of the city within digital space, the space itself must convey to the user a sense of place. This is what we term the “Visual City”: a visually recognisable city built out of the digital equivalent of bricks and mortar — polygons, textures, and most importantly data. Recently there has been a revolution in the production and distribution of digital artefacts which represent the visual city. Digital city software that was once the domain of high-powered personal computers, research labs and professional packages is now in the hands of the public-at-large through both the web and low-end home computing. These developments have gone hand in hand with the re-emergence of geography and geographic location as a way of tagging information to non-proprietary web-based software such as Google Maps, Google Earth, Microsoft’s Virtual Earth, ESRI’s ArcExplorer, and NASA’s World Wind, amongst others. The move towards ‘digital earths’ for the distribution of geographic information has, without doubt, opened up a widespread demand for the visualization of our environment, where the emphasis is now on the third dimension. While the third dimension is central to the development of the digital or visual city, it is not the only way the city can be visualized, for a number of emerging tools and ‘mashups’ are enabling visual data to be tagged geographically using a cornucopia of multimedia systems. We explore these social, textual, geographical, and visual technologies throughout this chapter.