
    Modeling Semi-Arid River-Aquifer Systems With Bayesian Networks and Artificial Neural Networks

    In semiarid areas, precipitation usually arrives as brief, intense flood events, which recharge the aquifer through infiltration and cause groundwater temperature changes. These changes may affect the physical, chemical and biological processes of the aquifer, so modeling the groundwater temperature variations associated with stormy precipitation episodes is essential, especially since this kind of precipitation is becoming increasingly frequent in semiarid regions. In this paper, we compare the predictive performance of two popular tools in statistics and machine learning, namely Bayesian networks (BNs) and artificial neural networks (ANNs), in modeling groundwater temperature variation associated with precipitation events. Specifically, we trained a total of 2145 ANNs with different node configurations, from one to five layers, and three different BNs using different structure learning algorithms. We conclude that, while both tools are equivalent in terms of accuracy for predicting groundwater temperature drops, the computational cost of estimating the Bayesian networks is significantly lower, and the resulting BN models are more versatile and allow a more detailed analysis.
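    As a rough illustration of the kind of architecture sweep described above, the following scikit-learn sketch trains MLP regressors with one to five hidden layers over assumed per-layer widths on placeholder data; the feature set, widths, and sizes here are hypothetical, not the paper's setup.

```python
# Hypothetical sketch: sweep MLP architectures with one to five hidden layers.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# placeholder data: precipitation-event features -> groundwater temperature drop
X, y = np.random.rand(500, 6), np.random.rand(500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
widths_per_layer = [8, 16]            # assumed node options; the paper sweeps 2145 configs
for depth in range(1, 6):             # one to five hidden layers
    for widths in itertools.product(widths_per_layer, repeat=depth):
        model = MLPRegressor(hidden_layer_sizes=widths, max_iter=1000,
                             random_state=0).fit(X_tr, y_tr)
        results[widths] = mean_squared_error(y_te, model.predict(X_te))

best = min(results, key=results.get)
print("best architecture:", best, "test MSE:", results[best])
```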

    Spatiotemporal and temporal forecasting of ambient air pollution levels through data-intensive hybrid artificial neural network models

    Outdoor air pollution (AP) is a serious public health threat which has been linked to severe respiratory and cardiovascular illnesses and premature deaths, especially among those residing in highly urbanised cities. As such, there is a need to develop early-warning and risk management tools to alleviate its effects. The main objective of this research is to develop AP forecasting models based on Artificial Neural Networks (ANNs) according to a model-building protocol identified from existing related works. Plain, hybrid and ensemble ANN model architectures were developed to estimate the temporal and spatiotemporal variability of hourly NO2 levels in several locations in the Greater London area. Wavelet decomposition was integrated with Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) models to address the high variability of AP data and improve the estimation of peak AP levels. Block-splitting and cross-validation procedures were adapted to validate the models based on Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Willmott's index of agreement (IA). The proposed models perform better than the benchmark models. For instance, the proposed wavelet-based hybrid approach provided 39.15% and 28.58% reductions in the RMSE and MAE indices, respectively, relative to the benchmark MLP model for the temporal forecasting of NO2 levels. The same approach reduced the RMSE and MAE indices of the benchmark LSTM model by 12.45% and 20.08%, respectively, for the spatiotemporal estimation of NO2 levels at one site in Central London. The proposed hybrid deep learning approach offers great potential to be operational in providing air pollution forecasts in areas without a reliable database. The model-building protocol adapted in this thesis can also be applied to studies using measurements from other sites.
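    The wavelet-hybrid idea can be sketched as below: decompose the series, fit one MLP per wavelet component, and sum the component forecasts. This is a minimal in-sample illustration with PyWavelets and a synthetic series standing in for the NO2 data; the actual models, lags and validation procedure are the thesis's own.

```python
# Minimal sketch of a wavelet-MLP hybrid on a synthetic stand-in series.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

no2 = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.rand(2000)  # placeholder series

coeffs = pywt.wavedec(no2, 'db4', level=3)    # approximation + 3 detail levels

def subseries(k):
    # time-domain component carried by coefficient level k alone
    masked = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(masked, 'db4')[:len(no2)]

def lagged(series, n_lags=24):
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

pred = np.zeros(len(no2) - 24)
for k in range(len(coeffs)):
    X, y = lagged(subseries(k))
    mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X, y)
    pred += mlp.predict(X)                    # in-sample forecast of this component

print("hybrid in-sample RMSE:", np.sqrt(np.mean((pred - no2[24:]) ** 2)))
```

    Forecasting the smoother wavelet components separately is what lets the hybrid track peaks better than a single model fit to the raw, highly variable series.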

    The wisdom of the crowd: reliable deep reinforcement learning through ensembles of Q-functions

    Reinforcement learning agents learn by exploring the environment and then exploiting what they have learned. This frees the human trainers from having to know the preferred action or intrinsic value of each encountered state. The cost of this freedom is that reinforcement learning can feel too slow and unstable during learning, exhibiting performance like that of a randomly initialized Q-function just a few parameter updates after solving the task. We explore the possibility that ensemble methods can remedy these shortcomings, and do so by investigating a novel technique which harnesses the wisdom of the crowd by bagging Q-function approximator estimates. Our results show that the proposed approach improves performance on all tasks and reinforcement learning approaches attempted. We demonstrate that this is a direct result of the increased stability of the action portion of the state-action value function used by Q-learning to select actions and by policy gradient methods to train the policy. Recently developed methods attempt to solve these RL challenges at the cost of increasing the number of interactions with the environment by several orders of magnitude. In contrast, the proposed approach has little downside to inclusion: it addresses RL challenges while reducing the number of interactions with the environment.
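    A minimal sketch of the bagging idea, assuming tabular Q-functions as stand-ins for the network approximators: each ensemble member receives its own bootstrapped updates, and actions come from the averaged Q-values.

```python
# Bagged Q-function ensemble: average member estimates before the argmax.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_members = 10, 4, 5

# ensemble of tabular Q-functions standing in for network approximators
Q = rng.normal(size=(n_members, n_states, n_actions))

def select_action(state):
    # "wisdom of the crowd": average the members' estimates before the argmax
    return int(Q[:, state, :].mean(axis=0).argmax())

def q_update(member, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # a standard Q-learning step applied to a single ensemble member
    target = r + gamma * Q[member, s_next].max()
    Q[member, s, a] += alpha * (target - Q[member, s, a])

# each transition updates a random subset of members (online Poisson bootstrap)
for s, a, r, s_next in [(0, 1, 1.0, 2), (2, 3, 0.0, 5), (5, 0, 0.5, 0)]:
    for m in range(n_members):
        if rng.poisson(1.0) > 0:
            q_update(m, s, a, r, s_next)

print("greedy action in state 0:", select_action(0))
```

    Averaging before the argmax is what stabilizes action selection: individual members may momentarily rank actions wrongly after an update, but the crowd estimate changes slowly.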

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as the massive growth required in coverage, capacity and traffic, a better quality of service and experience for users, ultra-high data rates and reliability, and ultra-low latency. However, characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss pose challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next generation cellular network will be more complex, and with a more complex network come more complex problems. The plethora of possibilities makes planning and managing such a network more difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, efficient and effective handover for mobile users is important. The probability of triggering a handover will significantly increase in the next generation network due to dense small cell deployment, and since the resources at each base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel; however, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always feasible, so non-line-of-sight (NLOS) channels should be explored and used as backup links to serve users. With the problems becoming complex and nonlinear, and data traffic dramatically increasing, conventional methods are no longer effective or efficient, so new concepts and novel technologies need to be explored. One promising solution is the use of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, allowing the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, complexity and workload can be reduced, while the huge number of devices and volume of data can be efficiently managed. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management.

    First, a procedure is proposed to predict the angle of arrival (AOA) and angle of departure (AOD), both in azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network. Along with the AOA and AOD prediction, trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray tracing technology to generate data, on which two deep neural networks (DNNs) are trained to predict AOA/AOD in azimuth (AAOA/AAOD) and AOA/AOD in elevation (EAOA/EAOD). Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and input into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and we conclude that the DNN is a promising tool for predicting AOA and AOD in a NLOS scenario.

    Second, a novel handover scheme is designed to optimize the overall system throughput and total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates reinforcement learning (RL) and optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions, and an optimization problem is formulated in conjunction with MAPPO to select the target base station and determine beam selection, evaluating and optimizing the total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made.

    Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from these changes; based on that, beam management at each SCBS is optimized according to the reward or penalty received when serving different UEs. This approach improves the overall system throughput and delay performance compared with traditional beam management methods.

    The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network, and provide specific solutions for optimizing NLOS beam tracking, handover management and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2°. For the handover optimization part, numerical results show the system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, Deep Deterministic Policy Gradient (DDPG) and Deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence of MADDPG and its superiority in improving the system throughput compared with other typical RL algorithms and the traditional beam management method.
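    As a toy illustration of the angle-prediction step, the sketch below fits a multi-output MLP mapping assumed UE position/heading features to the four angles (AAOA, AAOD, EAOA, EAOD); the data, features and network size are placeholders, not the thesis's ray-traced setup.

```python
# Hypothetical multi-output DNN for NLOS angle prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# placeholder data: features = (x, y, z, heading), targets = 4 angles in degrees
X = np.random.rand(1000, 4)
Y = np.random.uniform(-180, 180, size=(1000, 4))   # AAOA, AAOD, EAOA, EAOD

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
dnn = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000,
                   random_state=0).fit(X_tr, Y_tr)

err = np.abs(dnn.predict(X_te) - Y_te)
print("mean absolute angle error per output:", err.mean(axis=0))
```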

    Bayesian Optimization Enhanced Deep Reinforcement Learning for Trajectory Planning and Network Formation in Multi-UAV Networks

    In this paper, we employ multiple UAVs coordinated by a base station (BS) to help ground users (GUs) offload their sensing data. Different UAVs can adapt their trajectories and network formation to expedite data transmissions via multi-hop relaying. The trajectory planning aims to collect all GUs' data, while the UAVs' network formation optimizes the multi-hop UAV network topology to minimize energy consumption and transmission delay. The joint network formation and trajectory optimization is solved by a two-step iterative approach. First, we devise an adaptive network formation scheme using a heuristic algorithm to balance the UAVs' energy consumption and data queue size. Then, with the network formation fixed, the UAVs' trajectories are further optimized by multi-agent deep reinforcement learning without knowing the GUs' traffic demands and spatial distribution. To improve learning efficiency, we further employ Bayesian optimization to estimate the UAVs' flying decisions based on historical trajectory points, which helps avoid inefficient action explorations and improves the convergence rate of model training. The simulation results reveal close spatial-temporal couplings between the UAVs' trajectory planning and network formation. Compared with several baselines, our solution better exploits the UAVs' cooperation in data offloading, thus improving energy efficiency and delay performance.
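    The Bayesian-optimization step might look roughly like the following: a Gaussian-process surrogate fitted to historical trajectory points scores candidate waypoints through an upper-confidence-bound acquisition, so the learner can skip unpromising explorations. The reward model, kernel and acquisition function here are assumptions, not the paper's exact design.

```python
# GP surrogate over visited trajectory points suggests the next waypoint.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

history_xy = np.random.rand(30, 2)            # visited trajectory points (placeholder)
history_reward = np.random.rand(30)           # observed returns at those points

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(
    history_xy, history_reward)

candidates = np.random.rand(100, 2)           # candidate next waypoints
mu, sigma = gp.predict(candidates, return_std=True)
ucb = mu + 1.0 * sigma                        # upper-confidence-bound acquisition
print("suggested next waypoint:", candidates[ucb.argmax()])
```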

    Leveraging Supervoxels for Medical Image Volume Segmentation With Limited Supervision

    The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large amounts of fully annotated images. Such datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that only require limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning-based organ segmentation. When working in the limited-supervision paradigm, exploiting the available information in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images. The work on unsupervised tumor segmentation explores the opportunity of performing clustering at a population level in order to provide the algorithm with as much information as possible. To facilitate this population-level, across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost. In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed. Results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision.
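    A minimal sketch of supervoxel pseudo-label generation for self-supervised training, assuming scikit-image's SLIC on a placeholder volume: one sampled supervoxel plays the role of the pseudo foreground class. The thesis's actual supervoxel algorithm and parameters may differ.

```python
# SLIC supervoxels over a 3D volume; one supervoxel becomes a pseudo-label.
import numpy as np
from skimage.segmentation import slic

volume = np.random.rand(64, 64, 64)           # placeholder MR/CT volume
supervoxels = slic(volume, n_segments=200, compactness=0.1,
                   channel_axis=None)         # channel_axis=None -> 3D grayscale

rng = np.random.default_rng(0)
chosen = rng.choice(np.unique(supervoxels))   # sample one supervoxel as foreground
pseudo_label = (supervoxels == chosen).astype(np.uint8)
print("pseudo-label voxels:", int(pseudo_label.sum()))
```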

    Deep MR to CT Synthesis for PET/MR Attenuation Correction

    Positron Emission Tomography - Magnetic Resonance (PET/MR) imaging combines the functional information from PET with the flexibility of MR imaging. It is essential, however, to correct for photon attenuation when reconstructing PET images, which is challenging for PET/MR as neither modality directly images tissue attenuation properties. Classical MR-based computed tomography (CT) synthesis methods, such as multi-atlas propagation, have been the method of choice for PET attenuation correction (AC); however, these methods are slow and handle anatomical abnormalities poorly. To overcome this limitation, this thesis explores the rising field of artificial intelligence in order to develop novel methods for PET/MR AC. Deep learning-based synthesis methods such as the standard U-Net architecture are not very stable, accurate, or robust to small variations in image appearance. Thus, the first proposed MR-to-CT synthesis method deploys a boosting strategy, where multiple weak predictors build a strong predictor, providing a significant improvement in CT and PET reconstruction accuracy. Standard deep learning-based methods, as well as more advanced methods like the first proposed one, show issues in the presence of very complex imaging environments and large images such as whole-body images. The second proposed method learns the image context between whole-body MRs and CTs through multiple resolutions while simultaneously modelling uncertainty. Lastly, as the purpose of synthesizing a CT is to better reconstruct PET data, the use of CT-based loss functions is questioned within this thesis. Such losses fail to recognize the main objective of MR-based AC, which is to generate a synthetic CT that, when used for PET AC, makes the reconstructed PET as close as possible to the gold-standard PET. The third proposed method therefore introduces a novel PET-based loss that minimizes CT residuals with respect to the PET reconstruction.
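    The PET-based loss can be caricatured as follows: rather than penalizing CT residuals directly, penalize the difference between PET values obtained with the synthetic and reference attenuation maps. The forward model below is a deliberately toy stand-in (elementwise attenuation), not a real PET reconstruction, and the HU-to-attenuation mapping is a rough assumption.

```python
# Toy PET-based loss: compare "reconstructions" under two attenuation maps.
import torch

def ct_to_mu(ct_hu):
    # crude HU -> linear attenuation coefficient mapping (assumption)
    return 0.0096 * (1.0 + ct_hu / 1000.0)

def toy_pet_recon(activity, mu):
    # placeholder "reconstruction": activity damped by attenuation
    return activity * torch.exp(-mu)

def pet_based_loss(ct_synth, ct_ref, activity):
    pet_synth = toy_pet_recon(activity, ct_to_mu(ct_synth))
    pet_ref = toy_pet_recon(activity, ct_to_mu(ct_ref))
    return torch.mean((pet_synth - pet_ref) ** 2)

ct_synth = torch.rand(1, 1, 32, 32) * 2000 - 1000   # synthetic CT in HU
ct_ref = torch.rand(1, 1, 32, 32) * 2000 - 1000     # gold-standard CT in HU
activity = torch.rand(1, 1, 32, 32)                  # tracer activity map
print("PET-based loss:", float(pet_based_loss(ct_synth, ct_ref, activity)))
```

    The point of such a loss is that CT errors in regions that barely affect the reconstructed PET are penalized less than errors that propagate into the PET image.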