70 research outputs found

    Understanding the Impact of the New Aesthetics and New Media Works on Future Curatorial Resource Responsibilities for Research Collections

    The author examines the emerging impact of the works of the “New Aesthetic,” along with other works that have their genesis in the rapid technological changes of the last fifty-plus years. Consideration is given to the history of digital audio/visual works that will eventually be held by repositories of cultural heritage, and to how this history has, or has not, been documented. These creations have developed out of an environment of networked, shared, re-usable, and re-purposed data. The article briefly examines how these works are used and considers the future impact of the growing creation and use of complex, compound multimedia digital research and cultural collections, as evidenced by augmented and virtual reality environments such as smartphone apps and Second Life.

    Extracting curbside storm drain locations from street-level images

    This thesis presents a machine vision procedure to identify and extract storm drain locations from natural images along surface street curbsides. Existing storm drain infrastructure information is commonly held by managing agencies in either paper or digital format. Access to these data for urban hydrologic and hydraulic modeling purposes may be limited by security protocols and/or the format in which the data are available. The procedure described in this work uses a novel vision algorithm with Google Street View imagery to identify and extract the locations of curbside storm drains. Results are converted into a tabular format that can in turn be converted into geometric input files for modeling purposes. This fast approximation approach to assembling storm drain data could be of interest to public works managers, urban hydrology and hydraulics practitioners and researchers, and citizen scientists, to improve general understanding of civil and environmental infrastructure.
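    The thesis's final step, converting detections into a tabular format suitable for later conversion to geometric model inputs, can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline; the record fields and confidence threshold are hypothetical.

```python
import csv
import io

# Hypothetical detection records: Street View images the vision algorithm
# flagged as containing a curbside storm drain, with each image's geotag.
# Field names are illustrative, not the thesis's actual schema.
detections = [
    {"image_id": "gsv_0001", "lat": 29.6516, "lon": -82.3248, "confidence": 0.91},
    {"image_id": "gsv_0002", "lat": 29.6519, "lon": -82.3251, "confidence": 0.84},
]

def to_table(records, min_confidence=0.5):
    """Filter detections and serialize them to CSV, one storm drain per row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["image_id", "lat", "lon", "confidence"])
    writer.writeheader()
    for rec in records:
        if rec["confidence"] >= min_confidence:
            writer.writerow(rec)
    return buf.getvalue()

table = to_table(detections)
```

    A table like this (one drain per row, with coordinates) is straightforward to re-project into the geometric input files that hydraulic modeling tools expect.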

    Classification and mapping of urban canyon geometry using Google Street View images and deep multitask learning

    Urban canyon classification plays an important role in analyzing the impact of urban canyon geometry on urban morphology and microclimates. Existing classification methods using aspect ratios require a large number of field surveys, which are often expensive and laborious. Moreover, it is difficult for these methods to handle the complex geometry of street canyons, which specific applications often require. To overcome these difficulties, we develop a street canyon classification approach using publicly available Google Street View (GSV) images. Our method is inspired by the latest advances in deep multitask learning based on densely connected convolutional networks (DenseNets) and is tailored for multiple street canyon classifications: H/W-based (Level 1), symmetry-based (Level 2), and complex-geometry-based (Level 3). We conducted a series of experiments to verify the proposed method. First, taking the Hong Kong area as an example, the method achieved accuracies of 89.3%, 86.6%, and 86.1% for the three levels, respectively. Even using the field survey data as the ground truth, it achieved approximately 80% across levels. Then, we tested our pretrained model in five other cities and compared the results with traditional methods, demonstrating the transferability and effectiveness of the scheme. Finally, to enrich the representation of more complicated street geometry, the approach can separately generate thematic maps of street canyons at multiple levels to better facilitate microclimatic studies in high-density built environments. The developed techniques for the classification and mapping of street canyons provide a cost-effective tool for studying the impact of complex and evolving urban canyon geometry on microclimate change.
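    The Level-1 (H/W-based) classification that the deep model replaces can be made concrete with a simple aspect-ratio rule. The class names and cut-points below are illustrative assumptions; the paper's own Level-1 taxonomy may use different bins, and the point of the paper is that the DenseNet predicts such classes directly from GSV images without field-measured H and W.

```python
def classify_level1(height_m, width_m, thresholds=(0.5, 1.0, 2.0)):
    """Assign a Level-1 (H/W-based) canyon class from mean building height
    and street width. Thresholds and labels are hypothetical examples."""
    ratio = height_m / width_m
    lo, mid, hi = thresholds
    if ratio < lo:
        return "shallow"
    elif ratio < mid:
        return "regular"
    elif ratio < hi:
        return "deep"
    return "very deep"
```

    A conventional survey-based workflow would measure H and W in the field for every street segment and apply a rule like this; the image-based classifier sidesteps that labor cost.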

    Scalable Methods to Collect and Visualize Sidewalk Accessibility Data for People with Mobility Impairments

    Poorly maintained sidewalks pose considerable accessibility challenges for people with mobility impairments. Despite comprehensive civil rights legislation such as the Americans with Disabilities Act, many city streets and sidewalks in the U.S. remain inaccessible. The problem is not just that sidewalk accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine accessible areas of a city a priori. To address this problem, my Ph.D. dissertation introduces and evaluates new scalable methods for collecting data about street-level accessibility using a combination of crowdsourcing, automated methods, and Google Street View (GSV). My dissertation has four research threads. First, we conduct a formative interview study to establish a better understanding of how people with mobility impairments currently assess accessibility in the built environment and the role of emerging location-based technologies therein. The study uncovers the existing methods for assessing the accessibility of the physical environment and identifies useful features for future assistive technologies. Second, we develop and evaluate scalable crowdsourced accessibility data collection methods. We show that paid crowd workers recruited from an online labor marketplace can find and label accessibility attributes in GSV with an accuracy of 81%. This accuracy improves to 93% with quality control mechanisms such as majority vote. Third, we design a system that combines crowdsourcing and automated methods to increase data collection efficiency. Our work shows that by combining crowdsourcing and automated methods, we can increase data collection efficiency by 13% without sacrificing accuracy. Fourth, we develop and deploy a web tool that lets volunteers help us collect street-level accessibility data from Washington, D.C. As of writing this dissertation, we have collected accessibility data from 20% of the streets in D.C., and we conduct a preliminary evaluation of how the web tool is used. Finally, we implement proof-of-concept accessibility-aware applications using the accessibility data collected with the help of volunteers. My dissertation contributes to the accessibility, computer science, and HCI communities by: (i) extending the knowledge of how people with mobility impairments interact with technology to navigate in cities; (ii) introducing the first work demonstrating that GSV is a viable source for learning about the accessibility of the physical world; (iii) introducing the first method that combines crowdsourcing and automated methods to remotely collect accessibility information; (iv) deploying interactive web tools that allow volunteers to help populate the largest dataset about street-level accessibility in the world; and (v) demonstrating accessibility-aware applications that empower people with mobility impairments.
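    The majority-vote quality control mentioned above (lifting labeling accuracy from 81% to 93%) can be sketched as a simple plurality aggregation over per-worker labels. This is a generic illustration of the mechanism, not the dissertation's exact aggregation code; the tie-breaking rule is an assumption added here to make the function deterministic.

```python
from collections import Counter

def majority_vote(labels_per_worker):
    """Aggregate one attribute's labels from several crowd workers into a
    single label by plurality; ties go to the alphabetically first label
    so the result is deterministic (a hypothetical tie-break rule)."""
    counts = Counter(labels_per_worker)
    top = max(counts.values())
    return sorted(label for label, c in counts.items() if c == top)[0]

# Three workers label the same GSV location; two agree.
votes = ["curb_ramp", "curb_ramp", "obstacle"]
label = majority_vote(votes)  # "curb_ramp"
```

    Plurality voting filters out individual workers' mistakes as long as errors are uncorrelated and a majority of workers label correctly, which is why it raises aggregate accuracy well above any single worker's.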

    Detecting disparities in police deployments using dashcam data

    Large-scale policing data is vital for detecting inequity in police behavior and policing algorithms. However, one important type of policing data remains largely unavailable within the United States: aggregated police deployment data capturing which neighborhoods have the heaviest police presence. Here we show that disparities in police deployment levels can be quantified by detecting police vehicles in dashcam images of public street scenes. Using a dataset of 24,803,854 dashcam images from rideshare drivers in New York City, we find that police vehicles can be detected with high accuracy (average precision 0.82, AUC 0.99) and identify 233,596 images that contain police vehicles. There is substantial inequality across neighborhoods in police vehicle deployment levels: the neighborhood with the highest deployment levels has almost 20 times the level of the neighborhood with the lowest. Two strikingly different types of areas experience high police vehicle deployments: 1) dense, higher-income, commercial areas, and 2) lower-income neighborhoods with higher proportions of Black and Hispanic residents. We discuss the implications of these disparities for policing equity and for algorithms trained on policing data.
    Comment: To appear in ACM Conference on Fairness, Accountability, and Transparency (FAccT) '2
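    The neighborhood-level disparity the abstract reports can be computed from detection counts normalized by sampling intensity. The sketch below uses toy numbers and an assumed definition of "deployment level" (fraction of a neighborhood's dashcam images containing a detected police vehicle); the paper's exact normalization may differ.

```python
def deployment_levels(images_with_police, images_total):
    """Per-neighborhood deployment level: the fraction of dashcam images
    sampled in that neighborhood that contain a detected police vehicle.
    Both inputs map neighborhood name -> image count."""
    return {n: images_with_police[n] / images_total[n] for n in images_total}

def disparity_ratio(levels):
    """Ratio of the highest to the lowest neighborhood deployment level."""
    return max(levels.values()) / min(levels.values())

# Toy counts, not the paper's data:
levels = deployment_levels(
    {"Midtown": 400, "Outer": 20},
    {"Midtown": 10_000, "Outer": 10_000},
)
print(disparity_ratio(levels))  # roughly 20: highest vs. lowest neighborhood
```

    Normalizing by the number of images sampled per neighborhood matters: rideshare drivers do not cover the city uniformly, so raw detection counts alone would confound police presence with dashcam coverage.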

    Commercial Satellite Imagery as an Evolving Open-Source Verification Technology: Emerging Trends and Their Impact for Nuclear Nonproliferation Analysis

    One evolving and increasingly important means of verifying a State’s compliance with its international security obligations involves the application of publicly available commercial satellite imagery. The International Atomic Energy Agency (IAEA) views commercial satellite imagery as “a particularly valuable open source of information.” In 2001, the IAEA established an in-house Satellite Imagery Analysis Unit (SIAU) to provide an independent capability for “the exploitation of satellite imagery which involves imagery analysis, including correlation/fusion with other sources (open source, geospatial, and third party).” Commercial satellite imagery not only supports onsite inspection planning and verification of declared activities, but perhaps its most important role is that it also “increases the possibility of detecting proscribed nuclear activities.” Analysis of imagery derived from low-earth-orbiting observation satellites has a long history dating to the early 1960s, in the midst of the Cold War era. That experience provides a sound basis for effectively exploiting the flood of commercial satellite imagery data now publicly available and within reach of anyone with Internet access. This paper provides insights into the process of imagery analysis, together with the use of modern geospatial tools like Google Earth, and highlights a few of the potential pitfalls that can lead to erroneous analytical conclusions. A number of exemplar cases are reviewed to illustrate how academic researchers (including those within the European Union’s Joint Research Centre) and others in non-governmental organizations are now applying commercial satellite imagery in combination with other open source information in innovative and effective ways for various verification purposes.
    The international constellation of civil imaging satellites is rapidly growing larger, thereby improving temporal resolution (reducing the time between image acquisitions), and the satellites are also improving significantly in both spatial and spectral resolution. The significant increase in both the volume and type of raw imagery data that these satellites can provide, and the ease of access to it, will likely lead to a concomitant increase in new nonproliferation-relevant knowledge as well. Many of these developments were previously unanticipated, and they have already had profound effects beyond what anyone would have thought possible just a few years ago. These include multi-satellite, multi-sensor synergies deriving from the diversity of sensors and satellites now available, which are exemplified in a few case studies. This paper also updates the author’s earlier work on the subject and explains how the many recent developments in the commercial satellite imaging domain will play an increasingly valuable role in open source nuclear nonproliferation monitoring and verification in the future.
    JRC.E.8 – Nuclear Security

    Data Collection and Machine Learning Methods for Automated Pedestrian Facility Detection and Mensuration

    Large-scale collection of pedestrian facility (crosswalks, sidewalks, etc.) presence data is vital to the success of efforts to improve pedestrian facility management, safety analysis, and road network planning. However, this kind of data is typically not available on a large scale due to the high labor and time costs of manual data collection methods. Therefore, researchers are currently exploring methods for automating this process using techniques such as machine learning. In our work, we focus mainly on machine learning methods for the detection of crosswalks and sidewalks from both aerial and street-view imagery. We test data from these two viewpoints individually and with an ensemble method that we refer to as our “dual-perspective prediction model”. To obtain this data, we developed a data collection pipeline that combines crowdsourced pedestrian facility location data with aerial and street-view imagery from Bing Maps. In addition to the convolutional neural network used to perform pedestrian facility detection with this data, we also trained a segmentation network to measure the length and width of crosswalks from aerial images. In our tests with a dual-perspective image dataset that was heavily occluded in the aerial view but relatively clear in the street view, our dual-perspective prediction model increased prediction accuracy, recall, and precision by 49%, 383%, and 15%, respectively, compared to a single-perspective model based on only aerial-view images. In our tests with satellite imagery provided by the Mississippi Department of Transportation, we achieved accuracies as high as 99.23%, 91.26%, and 93.7% for aerial crosswalk detection, aerial sidewalk detection, and aerial crosswalk mensuration, respectively.
    The final system that we developed packages all of our machine learning models into an easy-to-use tool that enables users to process large batches of imagery or examine individual images in a directory through a graphical interface. Our data collection and filtering guidelines can also guide future research in this area by establishing standards for data quality and labelling.
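    The intuition behind the dual-perspective gains, especially the large recall improvement on aerially occluded facilities, can be sketched with a simple fusion rule. The max-based rule and threshold below are assumptions for illustration, not necessarily the ensemble the thesis actually uses.

```python
def dual_perspective_predict(p_aerial, p_street, threshold=0.5):
    """Fuse per-image crosswalk probabilities from the aerial-view and
    street-view models. Taking the max lets a clear street-level view
    rescue a facility that tree cover or shadow occludes from above.
    Fusion rule and threshold are illustrative assumptions."""
    return max(p_aerial, p_street) >= threshold

# A crosswalk hidden under tree canopy in the aerial image but visible
# from the street is still predicted present:
present = dual_perspective_predict(0.1, 0.9)
```

    An OR-like fusion such as this raises recall at some cost in precision, which is consistent with the reported pattern of a much larger recall gain (383%) than precision gain (15%) on the heavily occluded dataset.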

    LOCATIVE MEDIA, AUGMENTED REALITIES AND THE ORDINARY AMERICAN LANDSCAPE

    This dissertation investigates the role of annotative locative media in mediating experiences of place. The overarching impetus for this research is the need to bring the theoretical and substantive concerns of cultural landscape studies to bear on the development of a methodological framework for interrogating the ways in which annotative locative media reconfigure experiences of urban landscapes. I take as my empirical cases i) Google Maps, with its associated Street View and locational placemark interface, and ii) Layar, an augmented reality platform combining digital mapping and real-time locational augmentation. In the spirit of landscape studies’ longstanding and renewed interest in what may be termed “ordinary” residential landscapes, and reflecting the increasing imbrication of locative media technologies in everyday lives, the empirical research is based in Kenwick, a middle-class, urban residential neighborhood in Lexington, Kentucky. Overall, I present an argument for the need to consider the digital, code (i.e. software), and specifically locative media, in the intellectual context of critical geographies in general and cultural landscape studies in particular.