281 research outputs found

    Integration of Satellite Data, Physically-based Model, and Deep Neural Networks for Historical Terrestrial Water Storage Reconstruction

    Terrestrial water storage (TWS) is an essential part of the global water cycle. Long-term monitoring of observed and modeled TWS is fundamental to analyzing droughts, floods, and other meteorological extreme events caused by the effects of climate change on the hydrological cycle. Over the past several decades, hydrologists have applied physically-based global hydrological models (GHMs) and land surface models (LSMs) to simulate TWS and the water components (e.g., groundwater storage) that compose it. However, the reliability of these physically-based models is often affected by uncertainties in climatic forcing data, model parameters, model structure, and the mechanisms used to represent physical processes. Launched in March 2002, the Gravity Recovery and Climate Experiment (GRACE) satellite mission applies remote sensing techniques exclusively to measure variations in TWS on a global scale. The mission length of GRACE, however, is too short to meet the requirements for analyzing long-term TWS. Therefore, much effort has been devoted to reconstructing GRACE-like TWS data for the pre-GRACE era. Data-driven methods, such as multilinear regression and machine learning, exhibit great potential to improve TWS assessments by integrating GRACE observations and physically-based simulations. Advances in artificial intelligence enable adaptive learning of correlations between variables in complex spatiotemporal systems. For GRACE reconstruction, however, the applicability of various deep learning techniques has not been well studied. Thus, in this study, three deep learning-based models are developed from the LSM-simulated TWS to reconstruct historical TWS over the Canadian landmass from 1979 to 2002. The performance of the models is evaluated against GRACE-observed TWS anomalies from 2002 to 2004 and from 2014 to 2016. The trained models achieve a mean correlation coefficient of 0.96 with a mean RMSE of 53 mm. The results show that the LSM-based deep learning models significantly improve the match between the original LSM simulations and GRACE observations.
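
    A minimal sketch of the kind of grid-cell evaluation reported above (mean correlation coefficient and RMSE between reconstructed and GRACE-observed TWS anomalies), assuming simple (months × grid cells) arrays; the array names, shapes and synthetic data are illustrative, not the study's own code or data.

```python
import numpy as np

def evaluate_reconstruction(recon, grace):
    """Per-grid-cell Pearson correlation and RMSE (mm) between reconstructed
    and GRACE-observed TWS anomalies, averaged over all cells.

    recon, grace: arrays of shape (n_months, n_cells); the layout is an
    illustrative assumption, not the study's actual data format.
    """
    r_cells, rmse_cells = [], []
    for k in range(recon.shape[1]):
        x, y = recon[:, k], grace[:, k]
        r_cells.append(np.corrcoef(x, y)[0, 1])
        rmse_cells.append(np.sqrt(np.mean((x - y) ** 2)))
    return float(np.mean(r_cells)), float(np.mean(rmse_cells))

# Example with synthetic data (240 months, 500 grid cells):
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 80.0, size=(240, 500))           # "observed" anomalies, mm
recon = truth + rng.normal(0.0, 25.0, size=truth.shape)  # imperfect reconstruction
mean_r, mean_rmse = evaluate_reconstruction(recon, truth)
print(f"mean r = {mean_r:.2f}, mean RMSE = {mean_rmse:.1f} mm")
```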

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller, to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient, with a greater concern for the environment and equity.

    Citizen Science and Geospatial Capacity Building

    This book is a collection of the articles published in the Special Issue of the ISPRS International Journal of Geo-Information on “Citizen Science and Geospatial Capacity Building”. The articles cover a wide range of topics regarding the applications of citizen science from a geospatial technology perspective. Several applications show the importance of Citizen Science (CitSci) and volunteered geographic information (VGI) in various stages of geodata collection, processing, analysis and visualization, and demonstrate the capabilities covered in the book. Particular emphasis is given to various problems encountered in CitSci and VGI projects with a geospatial aspect, such as platform, tool and interface design, ontology development, spatial analysis and data quality assessment. The book also points out the needs and future research directions in these subjects, such as: (a) data quality issues, especially in the light of big data; (b) ontology studies for geospatial data suited for diverse user backgrounds, data integration, and sharing; (c) development of machine learning and artificial intelligence based online tools for pattern recognition and object identification using existing repositories of CitSci and VGI projects; and (d) open science and open data practices for increasing efficiency, decreasing redundancy, and acknowledging all stakeholders.

    Design and validation of novel methods for long-term road traffic forecasting

    Road traffic management is a critical aspect of the design and planning of complex urban transport networks, for which vehicle flow forecasting is an essential component. As a testimony to its paramount relevance in transport planning and logistics, thousands of scientific research works have covered the traffic forecasting topic during the last 50 years. In the beginning, most approaches relied on autoregressive models and other analysis methods suited for time series data. During the last two decades, the development of new technology, platforms and techniques for massive data processing under the Big Data umbrella, the availability of data from multiple sources fostered by the Open Data philosophy, and an ever-growing need of decision makers for accurate traffic predictions have shifted the spotlight to data-driven procedures. Even in this convenient context, with an abundance of open data to experiment with and advanced techniques to exploit them, most predictive models reported in the literature aim for short-term forecasts, and their performance degrades when the prediction horizon is increased. Long-term forecasting strategies are scarcer and are commonly based on the detection of recurring traffic patterns and the assignment of new observations to them. These approaches can perform reasonably well unless an unexpected event provokes unpredictable changes, or if the allocation to a pattern is inaccurate. The main core of the work in this Thesis has revolved around data-driven traffic forecasting, ultimately pursuing long-term forecasts. This has broadly entailed a deep analysis and understanding of the state of the art, and dealing with incompleteness of data, among other lesser issues. Besides, the second part of this dissertation presents an application outlook of the developed techniques, providing methods and unexpected insights into the local impact of traffic on pollution. The obtained results reveal that the impact of vehicular emissions on pollution levels is overshadowed
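
    As a rough illustration of the pattern-based long-term forecasting the abstract refers to, the sketch below clusters historical daily flow profiles and uses the centroid of the assigned pattern as the forecast; the data, cluster count and day-type assignment are assumptions, not the thesis's actual method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic history: 365 days x 24 hourly flow values (weekday vs weekend shapes).
rng = np.random.default_rng(1)
hours = np.arange(24)
weekday = 400 + 300 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)    # morning peak
weekend = 250 + 150 * np.exp(-0.5 * ((hours - 13) / 3.0) ** 2)   # midday peak
days = np.array([weekday if d % 7 < 5 else weekend for d in range(365)])
history = days + rng.normal(0, 30, days.shape)

# 1) Pattern detection: cluster the historical daily profiles.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)

# 2) Pattern assignment: map day types to clusters using a reference week
#    (a crude stand-in for calendar-based assignment rules).
labels_by_daytype = {
    "weekday": np.bincount(km.labels_[:5]).argmax(),
    "weekend": np.bincount(km.labels_[5:7]).argmax(),
}

# 3) Long-term forecast: the centroid of the assigned pattern.
forecast_next_monday = km.cluster_centers_[labels_by_daytype["weekday"]]
print(forecast_next_monday.round(0))
```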

    Quantification and attribution of urban fossil fuel emissions through atmospheric measurements

    Background: Fossil fuel combustion causes an increase in atmospheric carbon dioxide (CO2) levels and is one of the major causes of climate change. Therefore, efforts are made to reduce CO2 emissions from fossil fuel combustion through (inter)national agreements, the most famous example being the Paris Agreement. Each member state that ratified the agreement has to aim for pre-set emission reduction targets. In this collaborative effort it is important to keep track of the progress made towards these targets, but also to gain insight into which emission reduction policies are most effective, to support future decision-making. Therefore, scholars have started developing atmospheric monitoring techniques, mainly focused on urban areas. Since about 70% of anthropogenic CO2 emissions take place in urban areas, the largest emission reductions will take place there. This causes large atmospheric signals that are relatively easy to measure. However, scholars have faced some major challenges. For example, the transport within a built-up area is complex, making the interpretation of atmospheric observations difficult. Moreover, emission reduction policies often target specific source sectors (such as road traffic or industry). Hence, these sectors should be monitored separately to understand the effectiveness of individual measures. This source attribution is impossible with only CO2 observations when source sectors are not spatially isolated.
    Aim: The overall aim of this thesis is to improve our understanding of the monitoring requirements to constrain urban fossil fuel CO2 emissions per source sector. A key feature of a monitoring system is a network of observation sites. Therefore, the first research objective is to identify the most useful monitoring sites and network configurations. Besides CO2 we also included measurements of trace gases that are co-emitted with CO2 during fossil fuel combustion. This happens in a ratio that is specific to a source sector, and therefore these tracers have the potential to identify the source of a CO2 signal. We examined this opportunity to use co-emitted species to attribute CO2 signals to specific source sectors. Besides observations, a good model representation of atmospheric transport is needed to interpret the observations. Therefore, the second research objective is to better understand the possibilities and limitations of atmospheric transport models in reproducing observed mixing ratios within or close to a city and to find a useful modelling approach. The third objective is to predict high-resolution emissions in an urban area using proxy data and to gain insight into the uncertainties related to these emissions. Finally, we combine our insights related to measurements, models and emission modelling into an inversion framework to estimate how well we can constrain urban CO2 emissions per source sector (objective 4).
    Results and conclusions: In Chapter 2 we examined the effectiveness of two observation sites close to the city border of Rotterdam, providing a gradient in the CO2 mixing ratio over the city from the upwind to the downwind site. The two sites provide one year of hourly mixing ratio gradients, which are used to make a first estimate of the urban emissions. For this purpose we first examined whether the upwind site was representative of the composition of the background signal, which proved to be the case for specific wind directions.
    On average, we found large enhancements at the downwind site compared to the upwind site for three major source areas: the city, the port and the glasshouse area. From the selected gradients we calculated emissions, accounting only for average biospheric fluxes, footprints, and boundary layer height. Although this approach is very simplified, it yields reasonable flux estimates compared to the reported emissions. Nevertheless, we found that the estimates can be heavily influenced by local emissions and by transport processes that we could not take into account. For example, the presence of elevated stack emissions complicates the estimate of the emissions without detailed knowledge of the atmospheric transport. Finally, the results show that CO can potentially be used to attribute a CO2 signal to industrial or residential source areas. We conclude that observed mixing ratio gradients can be used to make a rough estimate of the urban emissions, with CO being of added value to identify dominant source types. In Chapter 3 we compared two atmospheric transport models: the Eulerian WRF-Chem model (1×1 km² resolution) and the Lagrangian OPS model. Atmospheric transport models are useful to account for the impact of transport, mixing, entrainment, and biospheric fluxes on the observed mixing ratios and can help interpret the observed signals. We examined the ability of these models to reproduce the observed mixing ratios at several measurement sites along a transect from an urban (Rotterdam) to a rural location. On average, WRF-Chem gives good results, reproducing meso-scale features with the correct order of magnitude for the observed CO2 mixing ratios. However, the timing of CO2 mixing ratio enhancements is often incorrect, which is mainly the result of an incorrect representation of the wind direction, causing the model to sample the wrong source area. Moreover, we found that the representation of point sources is problematic. In a Eulerian model, emissions are instantly mixed throughout the grid box, which causes a large underestimation of local and downwind mixing ratios for sources with a small horizontal extent. Using the OPS model improves the representation of point sources, because it has no spatial discretization. The difference between OPS and WRF-Chem is only visible up to approximately 15 km from major stack emissions, such that point sources further away from observation sites can be represented by WRF-Chem as well. An additional advantage of the OPS model is that it can be driven by locally observed meteorological data, which overcomes the wind direction issue of WRF-Chem. However, the OPS model is sub-optimal for area source emissions over a large domain, and therefore we conclude that a combination of both models is the best option in Rotterdam. Finally, the results in Chapter 3 show that urban sites are well exposed to urban fossil fuel fluxes and can be used to separate different source areas (such as the residential and industrial areas), especially if CO is included in addition to CO2. Sites that are further removed from the city (semi-urban) provide a better constraint on the total flux. Chapter 4 explored the potential of several data streams to predict high-resolution emissions. These data were combined in a dynamic fossil fuel emission model that estimates emissions based on additional knowledge about the emission landscape.
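
    The sketch below illustrates, under strong simplifying assumptions, a one-box mass-balance estimate of the kind of gradient-based emission calculation described above; all numerical values are invented for illustration, and biospheric fluxes and footprint weighting are ignored, so this is not the thesis's actual procedure.

```python
# One-box mass balance: the CO2 enhancement (downwind minus upwind) advected
# through the downwind face of the urban "box" approximates the emission rate.
R = 8.314        # J mol-1 K-1, universal gas constant
M_CO2 = 0.044    # kg mol-1

def box_emission_kg_per_s(delta_c_ppm, wind_ms, blh_m, width_m,
                          pressure_pa=101_325.0, temp_k=288.0):
    """Emission rate from an upwind-to-downwind mixing-ratio gradient."""
    n_air = pressure_pa / (R * temp_k)      # mol air per m3
    excess = delta_c_ppm * 1e-6 * n_air     # mol CO2 per m3 above background
    # Excess CO2 leaving through the downwind face (width x boundary-layer height):
    return excess * wind_ms * blh_m * width_m * M_CO2   # kg CO2 per s

# Illustrative values: 3 ppm enhancement, 5 m/s wind, 800 m boundary layer,
# 20 km wide downwind face.
rate = box_emission_kg_per_s(delta_c_ppm=3.0, wind_ms=5.0, blh_m=800.0, width_m=20_000.0)
print(f"~{rate * 3.15e7 / 1e9:.0f} Mt CO2 per year (order of magnitude only)")
```
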
    First, we calculated the total yearly emissions for the Netherlands per source sector using activity data (such as Gross Domestic Product), emission factors (the amount of CO2 emitted per amount of fuel consumed) and energy efficiency (the amount of fuel consumed per amount of activity). The total yearly emissions were then disaggregated to an hourly, 1×1 km² scale using proxies and hourly activity data. In this way we created a dynamic emission map based on a wide range of parameters that are specified per source sector. One major advantage is that we can estimate the (unknown) uncertainty in the high-resolution emissions from the (better-known) uncertainty in the model parameters. We find that we can estimate the yearly emissions for the Netherlands with a 15% uncertainty when using generalized proxies (i.e. based on general, large-scale activity data and emission factors). Using more specific knowledge about the region (e.g. about technological advancement) and local activity data reduces this uncertainty. We can also use the emission model to calculate emissions of co-emitted species by multiplying the CO2 emissions with the typical emission ratios for each source sector. These emission ratios are variable and uncertain, and the emissions of co-emitted species therefore have a larger uncertainty than the CO2 emissions. Finally, the model parameters have a physical meaning and can be linked to emission reduction policies, making the model a useful tool for policy-makers. With the dynamic emission model we identified the most important and uncertain parameters affecting the emissions (CO2 emission factors, emission ratios and time profiles). In Chapter 5 we tried to optimize these parameters using a newly developed inverse modelling framework. The inversion system uses the multi-model framework described in Chapter 3 to translate the emissions calculated by the dynamic emission model into mixing ratios of CO2, CO, NOx (nitrogen oxides) and SO2 (sulphur dioxide). We used the same modelling framework to create pseudo-observations, which are used to validate the inversion system; the only difference is the values assigned to the parameters in the emission model (generalized data for the prior, local data for the pseudo-observations). We performed an experiment to explore the difference between an urban and a rural observation network, which shows that the CO2 signals captured by the rural network are too small to contain relevant information. The urban network performs well and gives a good estimate of the total yearly emissions for the Rotterdam area (5% error). When we included observations of the co-emitted tracers, the emission estimate per source sector generally improved. Some sectors remain difficult to constrain, for example due to the lack of large enhancements or the lack of a clear emission ratio signature. The time profiles, at least their day-to-day variability, can also be constrained relatively well. However, for households the error in the time profile gets aliased into the emission factor, causing the emission factor to be less well constrained. When we introduced erroneous atmospheric transport, the results deteriorated drastically, especially for power plants and industry (i.e. point sources), which suffer most from the transport errors. We conclude that an inversion system with a dynamic emission model as a prior has great potential for monitoring urban emissions, but transport errors currently hamper its applicability to real observations.
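
    A hedged sketch of the bookkeeping behind such a dynamic emission model: yearly sector totals from activity data, fuel use per unit activity and emission factors, disaggregated with hourly time profiles, and co-emitted species derived via sector-specific emission ratios; the sectors, numbers and profiles are illustrative assumptions only.

```python
import numpy as np

sectors = {
    # activity (unit/yr), fuel per unit activity (GJ/unit), emission factor (kg CO2/GJ)
    "road_traffic": {"activity": 4.0e10, "fuel_per_activity": 0.0025, "ef_co2": 72.0},
    "households":   {"activity": 3.3e6,  "fuel_per_activity": 45.0,   "ef_co2": 56.0},
}
co_ratio = {"road_traffic": 0.005, "households": 0.003}   # kg CO per kg CO2 (assumed)

hours = np.arange(24)
profiles = {   # relative hourly activity profiles (assumed shapes)
    "road_traffic": 0.5 + np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
                        + np.exp(-0.5 * ((hours - 17) / 2.0) ** 2),
    "households":   0.6 + 0.8 * np.exp(-0.5 * ((hours - 19) / 3.0) ** 2),
}

for name, p in sectors.items():
    yearly_co2 = p["activity"] * p["fuel_per_activity"] * p["ef_co2"]   # kg CO2 per year
    w = profiles[name] / profiles[name].sum()                           # hourly weights
    hourly_co2 = yearly_co2 / 365.0 * w                                 # kg CO2/h, typical day
    hourly_co = hourly_co2 * co_ratio[name]                             # kg CO/h
    print(f"{name}: {yearly_co2 / 1e9:.1f} Mt CO2/yr, "
          f"peak {hourly_co2.max() / 1e3:.0f} t CO2/h, "
          f"peak {hourly_co.max():.0f} kg CO/h")
```
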
    This work contributed to a better understanding of the complexity of urban fossil fuel emissions and of what is needed to monitor them. Urban observations provide useful information and, depending on the size and shape of the monitoring network, can be used to constrain urban emissions in more or less detail. Observations of co-emitted species have the potential to attribute CO2 emissions to specific source sectors and are an important addition to our inversion framework. The dynamic fossil fuel emission model has several major advantages over a regular emission map, being flexible and physically meaningful. Although several challenges remain, the work described in this thesis is an important step in the development of urban monitoring capacities.

    Bicycle Sharing Systems: Fast and Slow Urban Mobility Dynamics

    In cities all around the world, new forms of urban micromobility have seen rapid and wide-scale adoption thanks to their benefits as shared modes that are environmentally friendly, convenient and accessible. Bicycle sharing systems are the most established among these modes, facilitating complete end-to-end journeys as well as offering a solution to the first/last mile issue that public transportation users face in getting to and from transit stations. They mark the beginning of a gradual transition towards a more sustainable transportation model that includes greater use of shared and active modes. As such, understanding the way in which these systems are used is essential in order to improve their management and efficiency. Given the lack of operator-published data, this thesis aims to explore the utility of open bicycle sharing system data standards, intended for real-time dissemination of bicycle locations, in uncovering novel insights into their activity dynamics over varying temporal and geographical scales. The thesis starts by exploring bicycle sharing systems at a global scale, uncovering their long-term growth and evolution through the development of data cleaning and metric creation heuristics that also form the foundations of the most comprehensive classification of systems. Having established the value of these metrics for conducting comparisons at scale, the thesis then analyses the medium-term impacts of mobility interventions in the context of the COVID-19 pandemic, employing spatio-temporal and network analysis methods that highlight their adaptability and resilience. Finally, the thesis closes with the analysis of granular spatial and temporal dynamics within a dockless system in London that enable the identification of variations in journey locations throughout different times of the day. In each of these cases, the research highlights the indispensable value of open data and the important role that bicycle sharing systems play in urban mobility.
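
    A small sketch of how station-level activity can be inferred from successive snapshots of bike availability, the kind of signal that open real-time feeds (e.g. GBFS-style station status) expose; the field names and toy records are assumptions, and rebalancing moves would need filtering in practice.

```python
import pandas as pd

# Toy snapshots of available bikes at two stations, ten minutes apart.
snapshots = pd.DataFrame({
    "station_id": ["A", "A", "A", "B", "B", "B"],
    "ts": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 08:10", "2023-05-01 08:20"] * 2),
    "bikes_available": [12, 9, 10, 3, 5, 4],
})

snapshots = snapshots.sort_values(["station_id", "ts"])
delta = snapshots.groupby("station_id")["bikes_available"].diff()

# Drops in availability are counted as departures, rises as arrivals.
activity = snapshots.assign(
    departures=(-delta).clip(lower=0),
    arrivals=delta.clip(lower=0),
)
print(activity.groupby("station_id")[["departures", "arrivals"]].sum())
```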

    Advancing knowledge on fugitive gas migration from integrity compromised energy wells

    Decommissioned oil and gas wells can suffer integrity failure and release fugitive gases into the environment. This typically occurs unnoticed, since post-abandonment monitoring is uncommon. To reach net zero, methane emissions from fugitive sources such as decommissioned wells must be mitigated, increasing the need for research on this emerging issue. This research aimed to advance knowledge on the topic through three main thrusts. First, by evaluating the integrity of decommissioned wells in the field, finding no signs of integrity failure and highlighting a need for standardised assessment methods. Next, by identifying the sedimentary rock properties controlling fugitive gas migration in the shallow subsurface of an area of extensive hydrocarbon development, finding that flow will occur through units with low total displacement pressure or through preferential pathways. Finally, by evaluating data from an airborne methane survey to better understand the incidence rate of well integrity failure and identify well attributes related to its occurrence, finding a 5% failure rate and that well operator, well type, abandonment years, completion type, surface casing vent flow and reported remedial treatments may be linked to integrity failure. Overall, this study will aid in developing effective fugitive gas monitoring and detection strategies, establishing emission targets and identifying parameters involved in the development of well integrity failure. James Watt Scholarship; Geoscience BC's grant (Project 2017-002
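
    A brief sketch of how an integrity-failure incidence rate might be tabulated from an airborne-survey well list and broken down by well attributes; the column names and toy records are assumptions for illustration, not the study's data.

```python
import pandas as pd

# Toy well list with a flag for methane detected over the well during the survey.
wells = pd.DataFrame({
    "well_id":          [1, 2, 3, 4, 5, 6, 7, 8],
    "well_type":        ["gas", "gas", "oil", "oil", "gas", "oil", "gas", "oil"],
    "abandonment_year": [1998, 2005, 2011, 1995, 2008, 2016, 2001, 2013],
    "methane_detected": [True, False, False, True, False, False, False, False],
})

overall_rate = wells["methane_detected"].mean()
by_type = wells.groupby("well_type")["methane_detected"].mean()
by_era = wells.groupby(wells["abandonment_year"] < 2005)["methane_detected"].mean()

print(f"overall incidence rate: {overall_rate:.0%}")
print(by_type.rename("rate_by_well_type"))
print(by_era.rename("rate_by_pre2005_abandonment"))
```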

    Actuators for Intelligent Electric Vehicles

    This book details the advanced actuators for IEVs and the corresponding control algorithm design. On the actuator side, the configuration of four-wheel independent drive/steering electric vehicles is reviewed. An in-wheel two-speed AMT with a selectable one-way clutch is designed for the IEV. Considering uncertainties, an optimization design for the planetary gear train of the IEV is conducted. An electric power steering system is designed for the IEV. In addition, advanced control algorithms are proposed to improve active safety. A supervision mechanism is applied to the segment drift control of autonomous driving. A double super-resolution network is used to design the intelligent driving algorithm. Torque distribution control technology and four-wheel steering technology are utilized for path tracking and adaptive cruise control. To improve control accuracy, advanced estimation algorithms are studied in this book. The tyre-road peak friction coefficient over the full slip-rate range is identified based on a normalized tyre model. The pressure of the electro-hydraulic brake system is estimated based on signal fusion. Besides, a multi-semantic driver behaviour recognition model for autonomous vehicles is designed using a confidence fusion mechanism. Moreover, a mono-vision-based lateral localization system for low-cost autonomous vehicles is proposed with deep learning curb detection. To sum up, the discussed advanced actuators and the control and estimation algorithms are beneficial to the active safety improvement of IEVs.
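
    As a hedged illustration of peak-friction identification over the full slip range, the sketch below fits a Burckhardt-type curve to synthetic slip/friction samples and reads off the peak; the book's normalized tyre model is not specified here, so the model form and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def burckhardt(slip, c1, c2, c3):
    """Burckhardt-type friction curve mu(slip); an assumed stand-in model."""
    return c1 * (1.0 - np.exp(-c2 * slip)) - c3 * slip

# Synthetic dry-asphalt measurements over the full slip range (0..1).
rng = np.random.default_rng(42)
slip = np.linspace(0.01, 1.0, 60)
mu_meas = burckhardt(slip, 1.28, 23.99, 0.52) + rng.normal(0, 0.02, slip.size)

# Identify the curve parameters from the samples, then locate the peak.
params, _ = curve_fit(burckhardt, slip, mu_meas, p0=[1.0, 20.0, 0.5])
grid = np.linspace(0.0, 1.0, 1001)
mu_fit = burckhardt(grid, *params)
peak_idx = int(np.argmax(mu_fit))
print(f"estimated peak friction ~{mu_fit[peak_idx]:.2f} at slip {grid[peak_idx]:.2f}")
```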