
    Visual communication in urban planning and urban design

    This report documents the current status of visual communication in urban design and planning. Visual communication is examined through discussion of standalone and network media, concentrating specifically on visualisation on the World Wide Web (WWW). Firstly, we examine the use of solid and geometric modelling for visualising urban planning and urban design. The report documents and compares examples of the use of Virtual Reality Modelling Language (VRML) and proprietary WWW-based virtual reality modelling software. Examples include the modelling of Bath and Glasgow using both VRML 1.0 and 2.0. A review is carried out on the use of Virtual Worlds and their role in visualising urban form within multi-user environments. The use of Virtual Worlds is developed into a case study of the possibilities and limitations of Virtual Internet Design Arenas (ViDAs), an initiative undertaken at the Centre for Advanced Spatial Analysis, University College London. The use of Virtual Worlds and their development towards ViDAs is seen as one of the most important developments in visual communication for urban planning and urban design since the development plan. Secondly, photorealistic media in the process of communicating plans is examined. The process of creating photorealistic media is documented, and examples of the Virtual Streetscape and the Wired Whitehall Virtual Urban Interface System are provided. The conclusion is drawn that although the use of photorealistic media on the WWW provides a way to visually communicate planning information, its use is limited. The merging of photorealistic media and solid geometric modelling is reviewed in the creation of Augmented Reality, which is seen to provide an important step forward in the ability to quickly and easily visualise urban planning and urban design information. Thirdly, the role of visual communication of planning data through GIS is examined in terms of desktop, three-dimensional and Internet-based GIS systems. The evolution to Internet GIS is seen as a critical component in the development of virtual cities, which will allow urban planners and urban designers to visualise and model the complexity of the built environment in networked virtual reality. Finally, a viewpoint is put forward of the Virtual City, linking Internet GIS with photorealistic multi-user Virtual Worlds. At present there are constraints on how far virtual cities can be developed, but a view is provided on how these networked virtual worlds are developing to aid visual communication in urban planning and urban design.

    MusA: Using Indoor Positioning and Navigation to Enhance Cultural Experiences in a museum

    In recent years there has been growing interest in the use of multimedia mobile guides in museum environments. Mobile devices can detect the user's context and provide information that helps visitors discover and follow the logical and emotional connections that develop during the visit. In this scenario, location-based services (LBS) currently represent an asset, and the choice of technology to determine users' position, combined with the definition of methods that can effectively convey information, become key issues in the design process. In this work, we present MusA (Museum Assistant), a general framework for the development of multimedia interactive guides for mobile devices. Its main feature is a vision-based indoor positioning system that enables the provision of several LBS, from way-finding to the contextualised communication of cultural content, aimed at providing a meaningful exploration of exhibits according to visitors' personal interests and curiosity. Starting from a thorough description of the system architecture, the article presents the implementation of two mobile guides, developed to address adults and children respectively, and discusses the evaluation of the user experience and the visitors' appreciation of these applications.

    A content-based retrieval system for UAV-like video and associated metadata

    In this paper we provide an overview of a content-based retrieval (CBR) system that has been specifically designed for handling UAV video and associated metadata. Our emphasis in designing this system is on managing large quantities of such information and providing intuitive and efficient access mechanisms to this content, rather than on analysis of the video content. The retrieval unit in our system is termed a "trip". At capture time, each trip consists of an MPEG-1 video stream and a set of time-stamped GPS locations. An analysis process automatically selects and associates GPS locations with the video timeline. The indexed trip is then stored in a shared trip repository. The repository forms the backend of an MPEG-21-compliant Web 2.0 application for subsequent querying, browsing, annotation and video playback. The system interface allows users to search and browse across the entire archive of trips and, depending on their access rights, to annotate other users' trips with additional information. Interaction with the CBR system is via a novel interactive map-based interface. This interface supports content access by time, date, region of interest on the map, previously annotated specific locations of interest, and combinations of these. To develop such a system and investigate its practical usefulness in real-world scenarios, a significant amount of appropriate data is clearly required. In the absence of a large volume of UAV data with which to work, we have simulated UAV-like data using GPS-tagged video content captured from moving vehicles.
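The indexing step above associates time-stamped GPS fixes with the video timeline. A minimal sketch of one way to do this, by linearly interpolating the GPS track at a given video time, is shown below; the data layout (sorted `(t, lat, lon)` tuples) is an assumption for illustration, not the system's actual format.

```python
# Hypothetical sketch: estimate a (lat, lon) position for any video
# timestamp by linear interpolation between surrounding GPS fixes.

def interpolate_gps(fixes, t):
    """fixes: list of (time, lat, lon) sorted by time; t: video time."""
    if t <= fixes[0][0]:
        return fixes[0][1:]          # clamp before the first fix
    if t >= fixes[-1][0]:
        return fixes[-1][1:]         # clamp after the last fix
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # fraction of the way through the segment
            return (la0 + a * (la1 - la0), lo0 + a * (lo1 - lo0))

fixes = [(0.0, 53.30, -6.22), (10.0, 53.31, -6.21)]
print(interpolate_gps(fixes, 5.0))  # midpoint of the two fixes
```

In a real pipeline the same lookup would run once per indexed video frame, so each frame in the trip repository carries an estimated location for the map-based interface.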

    A procedure for developing an acceptance test for airborne bathymetric lidar data: application to NOAA charts in shallow waters

    National Oceanic and Atmospheric Administration (NOAA) hydrographic data is typically acquired using sonar systems, with a small percentage acquired via airborne lidar bathymetry for near-shore areas. This study investigated an integrated approach for meeting NOAA's hydrographic survey requirements for near-shore areas of NOAA charts, using the existing topographic-bathymetric lidar data from USACE's National Coastal Mapping Program (NCMP). Because these existing NCMP bathymetric lidar datasets were not collected to NOAA hydrographic surveying standards, it is unclear if, and under what circumstances, they might aid in meeting certain hydrographic surveying requirements. The NCMP's bathymetric lidar data are evaluated through a comparison to NOAA's Office of Coast Survey hydrographic data derived from acoustic surveys. As a result, it is possible to assess whether NCMP's bathymetry can be used to fill in the data gap shoreward of the navigable area limit line (0 to 4 meters) and whether there is potential for applying NCMP's bathymetric lidar data to near-shore areas deeper than 10 meters. Based on the study results, recommendations will be provided to NOAA for the site conditions where this data will provide the most benefit. Additionally, this analysis may allow the development of future operating procedures and workflows using other topographic-bathymetric lidar datasets to help update near-shore areas of the NOAA charts.
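The comparison the abstract describes amounts to checking whether lidar depths agree with matched acoustic reference depths to within an allowable vertical uncertainty. The sketch below illustrates that kind of acceptance check using an IHO-style tolerance of the form sqrt(a² + (b·d)²); the constants and sample depths are illustrative assumptions, not NOAA's actual acceptance criteria.

```python
import math

# Hedged sketch of an acceptance-style check: is each lidar depth within
# an allowable total vertical uncertainty of its acoustic reference?
# a, b below follow the IHO-style form sqrt(a^2 + (b*d)^2) but the
# specific values are assumptions for illustration only.

def within_tolerance(lidar_d, acoustic_d, a=0.25, b=0.0075):
    tol = math.sqrt(a ** 2 + (b * acoustic_d) ** 2)  # depth-dependent limit
    return abs(lidar_d - acoustic_d) <= tol

# (lidar depth, matched acoustic depth) pairs in metres -- made-up data
pairs = [(3.1, 3.0), (9.8, 10.0), (4.9, 4.0)]
passed = sum(within_tolerance(l, r) for l, r in pairs)
print(f"{passed}/{len(pairs)} soundings within tolerance")
```

An actual acceptance test would aggregate such comparisons over gridded survey areas and report systematic bias as well as per-sounding agreement.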

    USE OF UNMANNED AERIAL VEHICLES (UAV) FOR URBAN TREE INVENTORIES

    In contrast to standard aerial imagery, unmanned aerial systems (UAS) utilize recent technological advances to provide an affordable alternative for imagery acquisition. Increased value can be realized through clarity and detail, with higher resolution (2-5 cm) than traditional products. Many natural resource disciplines, such as urban forestry, will benefit from UAS. Tree inventories for risk assessment, biodiversity, planning, and design can be efficiently achieved with the UAS. Recent advances in photogrammetric processing have provided automated methods for three-dimensional rendering of aerial imagery. Point clouds can be generated from images, providing additional benefits. The spatial location information within the point cloud can be used to produce elevation models, i.e. digital elevation, digital terrain and digital surface models. Taking advantage of this point cloud data, additional information such as tree heights can be obtained. Several software applications developed for LiDAR data can be adapted to utilize UAS point clouds. This study examines solutions to provide tree inventory and heights from UAS imagery. Imagery taken with a micro-UAS was processed to produce a seamless orthorectified image. This image provided an accurate way to obtain a tree inventory within the study boundary. Utilizing several methods, tree height models were developed with variations in spatial accuracy. Model parameters were modified to offset spatial inconsistencies, providing statistical equality of means. A statistical comparison of measured and modeled tree height means (p = 0.756 at a level of significance of α = 0.01) showed that accurate tree heights were obtained for 82% of tree species. Within this study, the UAS has proven to be an efficient tool for urban forestry, providing a cost-effective and reliable system to obtain remotely sensed data.
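The elevation-model step described above typically grids the point cloud into a digital surface model (top of canopy) and a digital terrain model (ground), with tree height taken as their difference. The sketch below shows that idea in miniature, using the per-cell minimum as a crude ground proxy; real workflows use proper ground classification, and the cell size and points here are invented for illustration.

```python
# Minimal sketch: grid (x, y, z) points into DSM (max z per cell) and
# DTM (min z per cell), then take the canopy height model as DSM - DTM.
# Using min-z as "ground" is a simplification of real ground filtering.

def height_models(points, cell=1.0):
    dsm, dtm = {}, {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))      # grid cell index
        dsm[key] = max(dsm.get(key, z), z)          # highest return
        dtm[key] = min(dtm.get(key, z), z)          # lowest return
    return {k: dsm[k] - dtm[k] for k in dsm}        # canopy height per cell

# made-up returns: two points in one cell (ground + treetop), one bare cell
pts = [(0.2, 0.3, 100.0), (0.7, 0.4, 108.5), (1.5, 0.2, 101.0)]
chm = height_models(pts)
print(chm[(0, 0)])  # 8.5 -- treetop return minus ground return
```

Per-tree heights then come from sampling this canopy height model at the crown locations identified in the orthorectified image.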

    USE OF ASSISTED PHOTOGRAMMETRY FOR INDOOR AND OUTDOOR NAVIGATION PURPOSES

    Nowadays, devices and applications that require navigation solutions are continuously growing: consider, for instance, the increasing demand for mapping information or the development of applications based on users' location. In some cases an approximate solution (e.g. at room level) may be sufficient, but in most cases a better solution is required. The navigation problem has long been solved using Global Navigation Satellite Systems (GNSS). However, GNSS can be useless in obstructed areas, such as urban canyons or inside buildings. An interesting low-cost solution is photogrammetry, assisted by additional information to scale the photogrammetric problem and to recover a solution even in situations that are critical for image-based methods (e.g. poorly textured surfaces). In this paper, the use of assisted photogrammetry has been tested for both outdoor and indoor scenarios. The outdoor navigation problem has been addressed by developing a positioning system that uses Ground Control Points extracted from urban maps as constraints, together with tie points automatically extracted from the images acquired during the survey. The proposed approach has been tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m. For indoor navigation, a solution has been devised that integrates the data delivered by a Microsoft Kinect, identifying interesting features on the RGB images and re-projecting them onto the point clouds generated from the delivered depth maps. These points have then been used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with a few centimetres of error.
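The indoor step above estimates the rigid motion between successive point clouds from matched features; in 3D this is usually solved with the SVD-based Kabsch algorithm. As a self-contained illustration, the sketch below solves the 2D analogue in closed form (rotation angle from a single atan2 after centring both point sets); the point correspondences are made up, and a real system would work in 3D with outlier rejection.

```python
import math

# Illustrative 2D analogue of estimating the rigid motion between two
# matched point sets (the least-squares rotation + translation that
# maps src onto dst). The 3D version of this step uses Kabsch/SVD.

def rigid_2d(src, dst):
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    s = c = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cxs; ys -= cys; xd -= cxd; yd -= cyd   # centre both sets
        c += xs * xd + ys * yd                        # cos accumulator
        s += xs * yd - ys * xd                        # sin accumulator
    theta = math.atan2(s, c)                          # least-squares angle
    tx = cxd - (cxs * math.cos(theta) - cys * math.sin(theta))
    ty = cyd - (cxs * math.sin(theta) + cys * math.cos(theta))
    return theta, (tx, ty)

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (2, 4), (1, 3)]   # src rotated 90 degrees, shifted by (2, 3)
theta, t = rigid_2d(src, dst)
print(round(math.degrees(theta)))  # 90
```

Chaining such frame-to-frame estimates reconstructs the trajectory, which is why small per-step errors (a few centimetres here) accumulate and matter.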