
    The role of earth observation in an integrated deprived area mapping “system” for low-to-middle income countries

    Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term urban slums. Two major knowledge gaps undermine efforts to monitor progress towards the corresponding sustainable development goal (SDG 11: Sustainable Cities and Communities). First, the data available for cities worldwide are patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed; individually, they often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived-area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived-area maps. In this review, we first assess the strengths and limitations of existing deprived-area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy, supporting SDG 11 monitoring and the needs of different interest groups.
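
    As a loose illustration of the integration idea, the sketch below fuses a top-down, EO-derived deprivation probability for each grid cell with bottom-up community reports, flagging disagreements for local review. The cell identifiers, probabilities, and the 0.5 threshold are invented for this example; this is not the IDeAMapS specification.

    # Hypothetical sketch of EO/community data fusion; cell IDs, probabilities,
    # and the 0.5 decision threshold are invented, not the IDeAMapS design.
    eo_probability = {"cell_01": 0.92, "cell_02": 0.15, "cell_03": 0.55}   # top-down EO model output
    community_label = {"cell_01": "deprived", "cell_03": "not_deprived"}   # bottom-up reports

    for cell, p in eo_probability.items():
        eo_label = "deprived" if p >= 0.5 else "not_deprived"
        ground = community_label.get(cell)
        if ground is None:
            status = f"EO-only estimate: {eo_label}"            # no community input yet
        elif ground == eo_label:
            status = f"confirmed: {eo_label}"                   # both sources agree
        else:
            status = f"conflict (EO: {eo_label}, community: {ground}) -> send for review"
        print(cell, status)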

    An Agent-based Modelling Framework for Driving Policy Learning in Connected and Autonomous Vehicles

    Due to the complexity of the natural world, a programmer cannot foresee all possible situations a connected and autonomous vehicle (CAV) will face during its operation; hence, CAVs will need to learn to make decisions autonomously. By sensing its surroundings and exchanging information with other vehicles and road infrastructure, a CAV will have access to large amounts of useful data. While different control algorithms have been proposed for CAVs, the benefits brought about by the connectedness of autonomous vehicles to other vehicles and to the infrastructure, and its implications for policy learning, have not been investigated in the literature. This paper investigates a data-driven driving policy learning framework through an agent-based modelling approach. The contributions of the paper are two-fold. First, a dynamic programming framework is proposed for in-vehicle policy learning with and without connectivity to neighboring vehicles. The simulation results indicate that while a CAV can learn to make autonomous decisions, vehicle-to-vehicle (V2V) communication of information improves this capability. Second, to overcome the limitations of sensing in a CAV, the paper proposes a novel concept for infrastructure-led policy learning and communication with autonomous vehicles. In infrastructure-led policy learning, road-side infrastructure senses and captures successful vehicle maneuvers and learns an optimal policy from those temporal sequences; when a vehicle approaches the road-side unit, the policy is communicated to the CAV. A deep imitation learning methodology is proposed to develop such an infrastructure-led policy learning framework.
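
    The dynamic-programming side of such a framework can be pictured with a toy tabular value iteration over a small driving MDP. The states, actions, transition probabilities, and rewards below are invented for illustration and are not the paper's actual framework; connectivity would, in effect, enrich the state the vehicle can condition on.

    # Illustrative tabular value iteration for a toy driving MDP. States,
    # actions, transitions, and rewards are invented for this sketch.
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        """P[a][s, s'] = transition probabilities, R[s, a] = immediate reward."""
        n_states, n_actions = R.shape
        V = np.zeros(n_states)
        while True:
            Q = np.array([R[:, a] + gamma * P[a] @ V for a in range(n_actions)]).T
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)   # state values and greedy policy
            V = V_new

    # Three states (clear lane, slow leader ahead, obstacle) and two actions
    # (0 = keep lane, 1 = change lane); V2V data would refine these estimates.
    P = [np.array([[0.9, 0.1, 0.0], [0.3, 0.6, 0.1], [0.0, 0.2, 0.8]]),   # keep lane
         np.array([[0.8, 0.2, 0.0], [0.7, 0.2, 0.1], [0.5, 0.3, 0.2]])]   # change lane
    R = np.array([[ 1.0, 0.5],    # clear lane: keeping is rewarded
                  [-0.5, 0.5],    # slow leader: changing pays off
                  [-2.0, 0.0]])   # obstacle: changing avoids the penalty
    V, policy = value_iteration(P, R)
    print("state values:", V.round(2), "greedy policy:", policy)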

    New England StatNet: A Community of Practice in Performance Measurement

    One way public organizations improve services is through the implementation of performance measurement programs. To help public managers address the challenges of performance measurement, knowing how others are using data in decision-making is valuable. Communities of practice are one way for public managers to access such information. StatNet began in 2008 as a network of municipal officials using data-driven performance management approaches. The group gathers three times per year for in-depth discussions of municipal governance, focusing on topics such as police, fire, budgets, constituent relations, and public works departments (DPWs).

    LEI: Livestock Event Information Schema for Enabling Data Sharing

    Data-driven advances have resulted in significant improvements in dairy production. However, the meat industry has lagged behind in adopting data-driven approaches, underscoring the crucial need for data standardisation that facilitates seamless data transmission, maximises productivity, saves costs, and increases market access. To address this gap, we propose a novel data schema, the Livestock Event Information (LEI) schema, designed to accurately and uniformly record livestock events. LEI complies with the International Committee for Animal Recording (ICAR) and Integrity System Company (ISC) schemas to deliver this data standardisation and enable data sharing between producers and consumers. To validate the superiority of LEI, we conducted a structural metrics analysis and a comprehensive case study. The analysis demonstrated that LEI outperforms the ICAR and ISC schemas in terms of design, while the case study confirmed its superior ability to capture livestock event information. Our findings lay the foundation for the implementation of the LEI schema, unlocking the potential for data-driven advances in livestock management. Moreover, LEI's versatility opens avenues for future expansion into other agricultural domains, encompassing poultry, fisheries, and crops. The adoption of LEI promises substantial benefits, including improved data accuracy, reduced costs, and increased productivity, heralding a new era of sustainability in the meat industry.
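
    To make the idea of a uniform event record concrete, here is a hypothetical sketch of what a schema-conformant livestock event might look like as a Python dataclass. The field names and values are assumptions for illustration, not the published LEI schema.

    # Hypothetical event record; field names are illustrative assumptions,
    # not the published LEI schema.
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class LivestockEvent:
        animal_id: str                  # e.g., an ICAR-style animal identifier
        event_type: str                 # e.g., "weighing", "treatment", "movement"
        event_time: datetime            # timezone-aware timestamp of the event
        location: str                   # where the event was recorded
        details: dict = field(default_factory=dict)   # event-specific payload

    event = LivestockEvent(
        animal_id="AU123456789",
        event_type="weighing",
        event_time=datetime(2023, 7, 1, 8, 30, tzinfo=timezone.utc),
        location="yard-3",
        details={"weight_kg": 412.5, "device": "scale-07"},
    )
    print(asdict(event))   # dict form, ready for serialisation and sharing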

    OpenET: filling a critical data gap in water management for the western United States

    The lack of consistent, accurate information on evapotranspiration (ET) and consumptive use of water by irrigated agriculture is one of the most important data gaps for water managers in the western United States (U.S.) and other arid agricultural regions globally. The ability to easily access information on ET is central to improving water budgets across the West, advancing the use of data-driven irrigation management strategies, and expanding incentive-driven conservation programs. Recent advances in remote sensing of ET have led to the development of multiple approaches for field-scale ET mapping that have been used for local and regional water resource management applications by U.S. state and federal agencies. The OpenET project is a community-driven effort that is building upon these advances to develop an operational system for generating and distributing ET data at a field scale using an ensemble of six well-established satellite-based approaches for mapping ET. Key objectives of OpenET include: increasing access to remotely sensed ET data through a web-based data explorer and data services; supporting the use of ET data for a range of water resource management applications; and developing use cases and training resources for agricultural producers and water resource managers. Here we describe the OpenET framework, including the models used in the ensemble, the satellite, meteorological, and ancillary data inputs to the system, and the OpenET data visualization and access tools. We also summarize an extensive intercomparison and accuracy assessment conducted using ground measurements of ET from 139 flux tower sites instrumented with open path eddy covariance systems. Results calculated for 24 cropland sites from Phase I of the intercomparison and accuracy assessment demonstrate strong agreement between the satellite-driven ET models and the flux tower ET data. For the six models that have been evaluated to date (ALEXI/DisALEXI, eeMETRIC, geeSEBAL, PT-JPL, SIMS, and SSEBop) and the ensemble mean, the weighted average mean absolute error (MAE) values across all sites range from 13.6 to 21.6 mm/month at a monthly timestep, and 0.74 to 1.07 mm/day at a daily timestep. At seasonal time scales, for all but one of the models the weighted mean total ET is within ±8% of both the ensemble mean and the weighted mean total ET calculated from the flux tower data. Overall, the ensemble mean performs as well as any individual model across nearly all accuracy statistics for croplands, though some individual models may perform better for specific sites and regions. We conclude with three brief use cases to illustrate current applications and benefits of increased access to ET data, and discuss key lessons learned from the development of OpenET.
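
    The core of the accuracy assessment can be illustrated with a toy computation of an ensemble-mean ET and its mean absolute error against flux tower measurements. All numbers below are fabricated and OpenET's actual weighting scheme is not reproduced; this only shows the shape of the calculation.

    # Toy accuracy assessment: ensemble-mean ET versus flux tower ET.
    # All numbers are fabricated; OpenET's weighting is not reproduced here.
    import numpy as np

    # Monthly ET (mm/month) from three hypothetical models at one site.
    model_et = np.array([
        [110.0, 118.0, 105.0],   # month 1: models A, B, C
        [150.0, 142.0, 155.0],   # month 2
        [ 90.0,  95.0,  88.0],   # month 3
    ])
    tower_et = np.array([112.0, 148.0, 92.0])   # flux tower reference values

    ensemble_mean = model_et.mean(axis=1)            # simple ensemble mean per month
    mae = np.abs(ensemble_mean - tower_et).mean()    # mean absolute error vs. tower
    print(f"ensemble ET: {ensemble_mean.round(1)} mm/month, MAE: {mae:.1f} mm/month")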

    Neural Distributed Compressor Discovers Binning

    We consider lossy compression of an information source when the decoder has lossless access to a correlated one. This setup, also known as the Wyner-Ziv problem, is a special case of distributed source coding. To this day, practical approaches for the Wyner-Ziv problem have neither been fully developed nor heavily investigated. We propose a data-driven method based on machine learning that leverages the universal function approximation capability of artificial neural networks. We find that our neural network-based compression scheme, based on variational vector quantization, recovers some principles of the optimum theoretical solution of the Wyner-Ziv setup, such as binning in the source space as well as optimal combination of the quantization index and side information, for exemplary sources. These behaviors emerge although no structure exploiting knowledge of the source distributions was imposed. Binning is a widely used tool in information-theoretic proofs and methods, and to our knowledge, this is the first time it has been explicitly observed to emerge from data-driven learning.
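
    Classical binning, the behavior the learned compressor rediscovers, can be demonstrated with a tiny hand-built example rather than a neural network: the encoder sends only a bin index, and the decoder resolves the remaining ambiguity using the correlated side information. The alphabet, noise model, and modulo-4 binning below are illustrative choices, not the paper's learned scheme.

    # Hand-built Wyner-Ziv binning on a toy source (not the neural scheme):
    # the encoder sends 2 bits (X mod 4) instead of 3; the decoder uses the
    # correlated side information Y to pick the right member of the bin.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(0, 8, size=10_000)           # source symbols in {0, ..., 7}
    Y = X + rng.integers(-1, 2, size=X.shape)     # side info: X plus noise in {-1, 0, 1}

    bins = X % 4                                  # 2-bit bin index sent to the decoder
    candidates = np.stack([bins, bins + 4])       # each bin holds two candidate symbols
    X_hat = candidates[np.abs(candidates - Y).argmin(axis=0), np.arange(X.size)]

    print("recovery rate with side info:", (X_hat == X).mean())   # 1.0 at this noise level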

    xNet+SC: Classifying Places Based on Images by Incorporating Spatial Contexts

    With recent advancements in deep convolutional neural networks, researchers in geographic information science gained access to powerful models to address challenging problems such as extracting objects from satellite imagery. However, as the underlying techniques are essentially borrowed from other research fields, e.g., computer vision or machine translation, they are often not spatially explicit. In this paper, we demonstrate how utilizing the rich information embedded in spatial contexts (SC) can substantially improve the classification of place types from images of their facades and interiors. By experimenting with different types of spatial contexts, namely spatial relatedness, spatial co-location, and spatial sequence pattern, we improve the accuracy of state-of-the-art models such as ResNet (which are known to outperform humans on the ImageNet dataset) by over 40%. Our study raises awareness of the value of leveraging spatial context and domain knowledge in advancing deep learning models, thereby also demonstrating that theory-driven and data-driven approaches are mutually beneficial.
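
    One simple way to see how spatial context can sharpen an image-only classifier is to rescale the CNN's softmax scores by a co-location prior derived from neighbouring place types. The place types, scores, and prior below are invented, and the paper's models are more sophisticated; this only illustrates the general mechanism.

    # Illustrative fusion of image-only softmax scores with a co-location
    # prior; place types, scores, and the prior are invented for the sketch.
    import numpy as np

    place_types = ["restaurant", "bar", "bank"]
    cnn_scores = np.array([0.40, 0.35, 0.25])          # image-only softmax output

    # Hypothetical prior from neighbouring place types (e.g., a nightlife strip).
    colocation_prior = np.array([0.50, 0.45, 0.05])

    posterior = cnn_scores * colocation_prior          # Bayesian-style rescaling
    posterior /= posterior.sum()                       # renormalise to sum to 1
    print(dict(zip(place_types, posterior.round(3))))  # context sharpens the ranking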

    GeneDistiller—Distilling Candidate Genes from Linkage Intervals

    Background: Linkage studies often yield intervals containing several hundred positional candidate genes. Different manual or automatic approaches exist for the determination of the gene most likely to cause the disease. While the manual search is very flexible and takes advantage of the researchers' background knowledge and intuition, it may be very cumbersome to collect and study the relevant data. Automatic solutions, on the other hand, usually focus on certain models, remain "black boxes", and do not offer the same degree of flexibility. Methodology: We have developed a web-based application that combines the advantages of both approaches. Information from various data sources such as gene-phenotype associations, gene expression patterns, and protein-protein interactions was integrated into a central database. Researchers can select which information shall be displayed for the genes within a candidate interval or for single genes. Genes can also be interactively filtered, sorted, and prioritised according to criteria derived from the background knowledge and preconception of the disease under scrutiny. Conclusions: GeneDistiller provides knowledge-driven, fully interactive and intuitive access to multiple data sources. It displays as much relevant information as possible while saving the user from drowning in a flood of data. A typical query takes less than two seconds, thus allowing an interactive and explorative approach to the hunt for the candidate gene.
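
    The filter-sort-prioritise workflow can be sketched in a few lines of Python. The gene records, scoring criteria, and weights below are invented stand-ins for the kinds of integrated annotations the application exposes; they are not GeneDistiller's actual data model.

    # Invented gene records and weights, standing in for the integrated
    # annotations the application exposes; not GeneDistiller's data model.
    genes = [
        {"symbol": "GENE_A", "expression_match": 0.9, "ppi_links": 4, "phenotype_hit": True},
        {"symbol": "GENE_B", "expression_match": 0.4, "ppi_links": 1, "phenotype_hit": False},
        {"symbol": "GENE_C", "expression_match": 0.7, "ppi_links": 6, "phenotype_hit": True},
    ]

    def priority(gene, w_expr=1.0, w_ppi=0.2, w_pheno=2.0):
        """Weighted score encoding the researcher's preconception of the disease."""
        return (w_expr * gene["expression_match"]
                + w_ppi * gene["ppi_links"]
                + w_pheno * gene["phenotype_hit"])

    # Filter to genes with a phenotype association, then rank by priority.
    shortlist = sorted((g for g in genes if g["phenotype_hit"]), key=priority, reverse=True)
    for g in shortlist:
        print(g["symbol"], round(priority(g), 2))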