
    Towards Vehicle-to-everything Autonomous Driving: A Survey on Collaborative Perception

    Vehicle-to-everything (V2X) autonomous driving opens up a promising direction for developing a new generation of intelligent transportation systems. Collaborative perception (CP), an essential component for achieving V2X, can overcome the inherent limitations of individual perception, including occlusion and limited long-range perception. In this survey, we provide a comprehensive review of CP methods for V2X scenarios, offering the community a deep and thorough understanding. Specifically, we first introduce the architecture and workflow of typical V2X systems, which affords a broader perspective on the entire V2X system and the role of CP within it. Then, we thoroughly summarize and analyze existing V2X perception datasets and CP methods. In particular, we introduce numerous CP methods from various crucial perspectives, including collaboration stages, roadside sensor placement, latency compensation, the performance-bandwidth trade-off, attack/defense, pose alignment, etc. Moreover, we conduct extensive experimental analyses to compare and examine current CP methods, revealing some essential and previously unexplored insights. Specifically, we analyze the performance changes of different methods under different bandwidths, providing deep insight into the performance-bandwidth trade-off. We also examine methods under different LiDAR ranges. To study model robustness, we further investigate the effects of various simulated real-world noises on the performance of different CP methods, covering communication latency, lossy communication, localization errors, and mixed noises. In addition, we look into the sim-to-real generalization ability of existing CP methods. Finally, we thoroughly discuss open issues and challenges, highlighting promising directions for future work. Our code for the experimental analysis will be made public at https://github.com/memberRE/Collaborative-Perception. Comment: 19 pages
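
    Among the collaboration stages such surveys categorize, late fusion has agents exchange final detections rather than raw data or features. As a rough, self-contained illustration of that idea (not code from any surveyed method; all names, pose conventions, and thresholds are illustrative assumptions), the sketch below merges detections shared by nearby agents into the ego frame and suppresses duplicates:

```python
# Minimal late-fusion sketch: detections are (x, y, score) rows.
import numpy as np

def to_ego_frame(dets, pose):
    """Transform a sender's detections into the ego frame given the
    sender's 2D pose (x, y, yaw) expressed relative to the ego vehicle."""
    tx, ty, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    out = dets.copy()
    out[:, 0] = c * dets[:, 0] - s * dets[:, 1] + tx
    out[:, 1] = s * dets[:, 0] + c * dets[:, 1] + ty
    return out

def fuse(ego_dets, shared, dist_thresh=2.0):
    """Pool ego and shared detections, then keep only the highest-scoring
    detection within each dist_thresh neighbourhood (greedy centre NMS)."""
    pool = np.vstack([ego_dets] + [to_ego_frame(d, p) for d, p in shared])
    pool = pool[np.argsort(-pool[:, 2])]   # sort by score, descending
    kept = []
    for det in pool:
        if all(np.hypot(det[0] - k[0], det[1] - k[1]) > dist_thresh for k in kept):
            kept.append(det)
    return np.array(kept)

# Usage: fuse(ego_dets, [(rsu_dets, rsu_pose), (cav_dets, cav_pose)])
```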

    To Compute or not to Compute? Adaptive Smart Sensing in Resource-Constrained Edge Computing

    We consider a network of smart sensors for an edge computing application that sample a signal of interest and send updates to a base station for remote global monitoring. Sensors are equipped with sensing and computing capabilities, and can either send raw data or process them on-board before transmission. Limited hardware resources at the edge generate a fundamental latency-accuracy trade-off: raw measurements are inaccurate but timely, whereas accurate processed updates are available only after a computational delay. Moreover, if on-board processing entails data compression, the latency caused by wireless communication might be higher for raw measurements. Hence, one needs to decide when sensors should transmit raw measurements and when they should rely on local processing to maximize overall network performance. To tackle this sensing design problem, we formulate an estimation-theoretic optimization framework that embeds computation and communication delays, and propose a Reinforcement Learning-based approach to dynamically allocate computational resources at each sensor. The effectiveness of our proposed approach is validated through numerical simulations with case studies motivated by the Internet of Drones and self-driving vehicles. Comment: 14 pages, 14 figures; submitted to IEEE TNSM; revised version
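
    The raw-versus-process decision can be framed as a small sequential decision problem. The toy Q-learning sketch below conveys the flavour of such an approach; the state (a discretised compute-load level), the reward shape, and the dynamics are invented stand-ins, not the paper's actual formulation:

```python
import random

ACTIONS = ("send_raw", "process_then_send")
N_LOAD = 5                               # discretised compute-load levels
Q = {(s, a): 0.0 for s in range(N_LOAD) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(action, load):
    """Toy latency-accuracy trade-off: raw data is timely but inaccurate;
    processing is accurate but pays a delay that grows with compute load."""
    if action == "send_raw":
        return 0.5                       # timely but noisy update
    return 1.5 - 0.3 * load              # accurate, penalised by queueing delay

def step(load):
    # Epsilon-greedy action selection over the two transmission modes.
    a = random.choice(ACTIONS) if random.random() < eps \
        else max(ACTIONS, key=lambda x: Q[(load, x)])
    r = reward(a, load)
    # Processing congests the on-board queue; sending raw drains it.
    nxt = min(N_LOAD - 1, load + 1) if a == "process_then_send" else max(0, load - 1)
    Q[(load, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS) - Q[(load, a)])
    return nxt

load = 0
for _ in range(20000):                   # learn when to compute vs. transmit raw
    load = step(load)
```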

    PastoralScape: an environment-driven model of vaccination decision making within pastoralist groups in East Africa

    Economic and cultural resilience among pastoralists in East Africa is threatened by the interconnected forces of climate change and contagious disease spread. A key factor in the resilience of livestock-dependent communities is human decision making regarding vaccination against preventable diseases such as Rift Valley fever and Contagious Bovine Pleuropneumonia. The relationship between healthy, productive livestock and the economic development of poor households and communities is mediated by human decision making. This paper describes a coupled human and natural systems agent-based model that focuses on One Health. Disease propagation and animal nutritional health are driven by historical GIS data that captures changes in foraging conditions. We present the results of a series of experiments that demonstrate the sensitivity of a transformed Random Field Ising Model of human decision making to changes in human memory and rationality parameters. The results show that the split of households between vaccinating and not vaccinating converges for combinations of memory and rationality, and we detail the interaction of these cognition parameters with public information and social networks of opinions. This version of the PastoralScape model is intended to form the basis upon which richer economic and human-factor models can be built. © 2021, University of Surrey. All rights reserved.
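
    For readers unfamiliar with Random Field Ising Model decision dynamics, the sketch below shows one plausible household update rule under Glauber dynamics, with rationality playing the role of an inverse temperature and memory exponentially smoothing the external field; the specific functional forms are assumptions for illustration, not the PastoralScape equations:

```python
import math, random

def update_opinion(neighbour_spins, field, beta, memory, smoothed_field):
    """One RFIM-style household update under Glauber dynamics.

    beta acts as rationality (inverse temperature): large beta means the
    household almost deterministically follows the local field; small beta
    means near-random choices. memory exponentially smooths the external
    field (e.g., public information, animal health) over time.
    """
    smoothed = memory * smoothed_field + (1 - memory) * field
    local = sum(neighbour_spins) + smoothed            # social + external pressure
    p_vaccinate = 1.0 / (1.0 + math.exp(-2.0 * beta * local))
    spin = 1 if random.random() < p_vaccinate else -1  # +1: vaccinate, -1: not
    return spin, smoothed
```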

    Bridging Machine Learning for Smart Grid Applications

    This dissertation develops, leverages, and applies machine learning algorithms to various smart grid applications, including state estimation, false data injection attack detection, and reliability evaluation. The dissertation is divided into four parts as follows. Part I: Power system state estimation (PSSE). The PSSE problem is commonly formulated as a weighted least-squares (WLS) problem and solved using iterative methods such as the Gauss-Newton method. However, iterative methods have become more sensitive to system operating conditions than ever before due to the deployment of intermittent renewable energy sources, zero-emission technologies (e.g., electric vehicles), and demand response programs. Efficient approaches to PSSE are required to avoid the pitfalls of WLS-based PSSE computations and accurately predict operating conditions. The first part of this dissertation develops a data-driven real-time PSSE using a deep ensemble learning algorithm. In the proposed approach, the ensemble is formulated with dense residual neural networks as base-learners and a multivariate linear regressor as a meta-learner. Historical measurements and states are utilized to train and test the model. The trained model can then be used in real time to estimate power system states (voltage magnitudes and phase angles) from real-time measurements. Most current data-driven PSSE methods assume the availability of a complete set of measurements, which may not be the case in real power system data acquisition. This work adopts multivariate linear regression to forecast system states for instants with missing measurements, assisting the proposed PSSE technique. Case studies are performed on various IEEE standard benchmark systems to validate the proposed approach. Part II: Cyber-attacks on Voltage Regulation. Several wired and wireless advanced communication technologies have been used for coordinated voltage regulation schemes in distribution systems, both to receive voltage measurements from field sensors and to transmit control settings to voltage regulating devices (VRDs). Communication networks for voltage regulation can be susceptible to data falsification attacks, which can lead to voltage instability: an attacker can alter multiple field measurements in a coordinated manner to disturb voltage control algorithms. The second part of this dissertation develops a machine learning-based two-stage approach to detect, locate, and distinguish coordinated data falsification attacks on the control systems of coordinated voltage regulation schemes in distribution systems with distributed generators. In the first stage (regression), historical voltage measurements along with current meteorological data (solar irradiance and ambient temperature) are provided to a random forest regressor that forecasts the voltage magnitudes of the current state. In the second stage, a logistic regression compares the forecasted voltages with the measured voltages (used to set VRDs) to detect, locate, and distinguish coordinated data falsification attacks in real time. The proposed approach is validated through several case studies on a 240-node real distribution system (based in the USA) and the standard IEEE 123-node benchmark distribution system. Part III: Cyber-attacks on Distributed Generators. Part III of the dissertation proposes a deep learning-based multi-label classification approach to detect coordinated and simultaneously launched data falsification attacks on a large number of distributed generators (DGs). The approach detects power output manipulation and falsification attacks on DGs, including additive attacks, deductive attacks, and combinations of the two (attackers combine additive and deductive attacks to camouflage their activity). The approach is demonstrated on several systems, including the 240-node and IEEE 123-node distribution test systems. Part IV: Composite System Reliability Evaluation. Traditional composite system reliability evaluation is computationally demanding and may become inapplicable to large integrated power grids due to the requirement of repeatedly solving optimal power flow (OPF) for a large number of system states. Machine learning-based approaches have been used to avoid solving OPF in composite system reliability evaluation except in the training stage. However, current approaches have been used only to classify system states into success and failure states (i.e., up or down). In other words, they can evaluate probability and frequency reliability indices, but not power and energy reliability indices, unless OPF is solved for each failure state to determine the minimum load curtailment. In the fourth part of this dissertation, a convolutional neural network (CNN)-based regression approach is proposed to determine the minimum load curtailment of sampled states without solving OPF. Unavoidable load curtailments due to failures are then used to evaluate power and energy indices (e.g., expected demand not supplied) as well as the probability and frequency indices. The proposed approach is applied to several systems, including the IEEE Reliability Test System and the Saskatchewan Power Corporation system in Canada.
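
    To make the Part I stacking idea concrete, the sketch below trains several neural base-learners and combines their predictions with a multivariate linear meta-learner on toy data; plain MLPs stand in for the dissertation's dense residual networks, and all shapes and hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                     # measurements (toy data)
y = X @ rng.normal(size=(20, 6)) + 0.01 * rng.normal(size=(1000, 6))  # states

# Base-learners: several neural regressors mapping measurements to states.
base = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=i)
        for i in range(3)]
for m in base:
    m.fit(X[:800], y[:800])

# Meta-learner: multivariate linear regression on stacked base predictions.
Z = np.hstack([m.predict(X[:800]) for m in base])
meta = LinearRegression().fit(Z, y[:800])

Z_test = np.hstack([m.predict(X[800:]) for m in base])
states = meta.predict(Z_test)    # estimated voltage magnitudes and angles
```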

    EFFICIENT PARAMETRIC AND NON-PARAMETRIC LOCALIZATION AND MAPPING IN ROBOTIC NETWORKS

    Since the eighties, localization and mapping problems have attracted the efforts of robotics researchers. In the last decade, however, thanks to the increasing capabilities of new electronic devices, many new related challenges have been posed, such as swarm robotics, aerial vehicles, autonomous cars, and robotic networks. Efficiency, robustness, and scalability play a key role in these scenarios. Efficiency is intended as the ability of an application to minimize resource usage, in particular CPU time and memory space. The aforementioned applications require an underlying communication network, so by robustness we mean asynchronous algorithms resilient to delays and packet losses. Finally, scalability is the ability of an application to continue functioning without dramatic performance degradation even as the number of devices involved keeps increasing. This thesis focuses on parametric and non-parametric estimation algorithms applied to localization and mapping in robotics. The main contributions can be summarized in the following four areas. (i) Consensus-based localization: We address the problem of optimally estimating the position of each agent in a network from noisy relative vectorial distances to its neighbors, by means of only local communication and bounded complexity, independent of network size and topology. In particular, we propose a consensus-based algorithm with local memory variables which allows asynchronous implementation, has guaranteed exponential convergence to the optimal solution under simple deterministic and randomized communication protocols, and requires minimal packet transmission. In the randomized scenario, we study the rate of convergence in expectation of the estimation error and argue that it can be used to obtain upper and lower bounds on the rate of convergence in mean square. In particular, we show that for regular graphs, such as Cayley, Ramanujan, and complete graphs, the convergence rate in expectation has the same asymptotic degradation as memoryless asynchronous consensus algorithms in terms of network size. In addition, we show that the asynchronous implementation is robust to delays and communication failures. We complement the analytical results with numerical simulations comparing the proposed strategy with other algorithms recently proposed in the literature. (ii) Distributed localization of aerial vehicles: We study the problem of distributed multi-agent localization in the presence of heterogeneous measurements and wireless communication. The proposed algorithm integrates low-precision global sensors, like GPS and compasses, with more precise relative-position (i.e., range plus bearing) sensors. Global sensors are used to reconstruct the absolute position and orientation, while relative sensors are used to retrieve the shape of the formation. A fast distributed and asynchronous linear least-squares algorithm is proposed to solve an approximated version of the non-linear Maximum Likelihood problem. The algorithm is provably robust to communication losses and random delays. The use of ACK-less broadcast-based communication protocols ensures an efficient and easy implementation in real-world scenarios. If the relative measurement errors are sufficiently small, we show that the algorithm attains a solution very close to the maximum likelihood solution. The theoretical findings and the algorithm's performance are extensively tested by means of Monte Carlo simulations. (iii) Estimation and coverage: We address the problem of optimal coverage of a region by multiple robots when the sensory field used to approximate the density of event appearance is not known in advance. We consider a client-server architecture in which the mobile robots communicate with a base station via a possibly unreliable wireless network subject to packet losses. Based on Gaussian regression, which allows the true sensory field to be estimated to arbitrary accuracy, we propose a randomised strategy in which the robots and the base station simultaneously estimate the true sensory distribution by collecting measurements and compute the corresponding optimal Voronoi partitions. This strategy is designed to promote exploration at the beginning and then smoothly transition to stationing the robots at the centroids of the estimated optimal Voronoi partitions. Under mild assumptions on the transmission failure probability, we prove that the proposed strategy guarantees convergence of the estimated sensory field to the true field, and that the corresponding Voronoi partitions asymptotically become arbitrarily close to an optimal Voronoi partition. Additionally, we provide numerically efficient approximations that trade off accuracy of the estimated map for reduced memory and CPU complexity. Finally, we provide a set of extensive simulations which confirm the effectiveness of the proposed approach. (iv) Non-parametric estimation of spatio-temporal fields: We address the problem of efficiently and optimally estimating an unknown time-varying function through the collection of noisy measurements. We cast the problem in the framework of non-parametric estimation and assume that the unknown function is generated by a Gaussian process with a known covariance. Under mild assumptions on the kernel function, we propose a solution which links standard Gaussian regression to Kalman filtering by exploiting a grid where measurement collection and estimation take place. This yields a time- and space-efficient method for estimating time-varying functions which combines the advantages of Gaussian regression (e.g., being model-free) with those of the Kalman filter (e.g., efficiency).
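
    The flavour of contribution (i) can be conveyed with a toy synchronous iteration: each agent repeatedly re-estimates its position by averaging what its neighbours' estimates and the noisy relative measurements imply, with one node anchored to fix the global frame. This is a plain Jacobi-style least-squares iteration under illustrative assumptions, not the thesis's asynchronous memory-based algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
true_pos = rng.uniform(0, 10, size=(n, 2))
# Noisy relative displacement measurements between every ordered pair.
meas = {(i, j): true_pos[j] - true_pos[i] + 0.05 * rng.normal(size=2)
        for i in range(n) for j in range(n) if i != j}

est = rng.uniform(0, 10, size=(n, 2))
est[0] = true_pos[0]                 # anchor one node to fix the global frame
for _ in range(200):                 # synchronous Jacobi-style iteration
    new = est.copy()
    for i in range(1, n):
        # Each neighbour j "votes" for where i should be: est[j] - meas(i->j).
        targets = [est[j] - meas[(i, j)] for j in range(n) if j != i]
        new[i] = np.mean(targets, axis=0)
    est = new
```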

    Data-Centric Epidemic Forecasting: A Survey

    The COVID-19 pandemic has brought forth the importance of epidemic forecasting for decision makers in multiple domains, ranging from public health to the economy as a whole. While forecasting epidemic progression is frequently conceptualized as analogous to weather forecasting, it has some key differences and remains a non-trivial task. The spread of diseases is subject to multiple confounding factors spanning human behavior, pathogen dynamics, and weather and environmental conditions. Research interest has been fueled by the increased availability of rich data sources capturing previously unobservable facets, and also by initiatives from government public health and funding agencies. This has resulted, in particular, in a spate of work on 'data-centered' solutions which have shown potential for enhancing our forecasting capabilities by leveraging non-traditional data sources as well as recent innovations in AI and machine learning. This survey delves into various data-driven methodological and practical advancements and introduces a conceptual framework to navigate them. First, we enumerate the large number of epidemiological datasets and novel data streams relevant to epidemic forecasting, capturing various factors such as symptomatic online surveys, retail and commerce, mobility, genomics data, and more. Next, we discuss methods and modeling paradigms, focusing on recent data-driven statistical and deep-learning-based methods as well as on the novel class of hybrid models that combine the domain knowledge of mechanistic models with the effectiveness and flexibility of statistical approaches. We also discuss the experiences and challenges that arise in real-world deployment of these forecasting systems, including decision-making informed by forecasts. Finally, we highlight some challenges and open problems found across the forecasting pipeline. Comment: 67 pages, 12 figures
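
    Hybrid models of the kind surveyed couple a mechanistic core with statistical components. As a minimal example of such a mechanistic core (parameter values are illustrative, not from the survey), a discrete-time SIR step looks like this:

```python
def sir_step(S, I, R, beta, gamma, N):
    """One discrete-time SIR update; beta is the transmission rate and
    gamma the recovery rate, both per time step."""
    new_inf = beta * S * I / N
    new_rec = gamma * I
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# Toy trajectory: 100 steps from 10 initial infections in a population of 10,000.
S, I, R, N = 9990.0, 10.0, 0.0, 10000.0
infected = []
for _ in range(100):
    S, I, R = sir_step(S, I, R, beta=0.3, gamma=0.1, N=N)
    infected.append(I)
```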

    Particle filtering in compartmental projection models

    Simulation models are important tools for real-time forecasting of pandemics. Models help health decision makers examine interventions and secure strong guidance when anticipating outbreak evolution. However, models usually diverge from real observations. Stochastic factors in pandemic systems, such as changes in human contact patterns, play a substantial role in disease transmission and are not usually captured in traditional dynamic models. In addition, models of emerging diseases face the challenge of limited epidemiological knowledge about the natural history of the disease. Even when information about the natural history is available -- for example, for endemic seasonal diseases -- transmission models are often simplified and subject to omissions. Available data streams can provide a view of the early days of a pandemic, but fail to predict how the pandemic will evolve. Recent developments in computational statistics algorithms, such as Sequential Monte Carlo and Markov Chain Monte Carlo, make it possible to create models based on historical data and to re-ground them as new observations arrive. The objective of this thesis is to combine particle filtering -- a Sequential Monte Carlo algorithm -- with system dynamics models of pandemics. We developed particle filtering models that can recurrently be re-grounded as new observations become available, and examined the effectiveness of this arrangement with respect to specifics of the configuration (e.g., the frequency of data sampling). While clinically diagnosed cases are a valuable incoming data stream during an outbreak, a new generation of geo-spatially specific data sources, such as search volumes, can complement clinical data. As a further contribution, we used particle filtering in a model that can be re-grounded with both clinical and search volume data. Our results indicate that particle filtering combined with compartmental models provides accurate projection systems for estimating both model states and model parameters, particularly compared to traditional calibration methodologies and in the context of emerging communicable diseases. The results also suggest that more frequent sampling of clinical data markedly improves predictive accuracy. Furthermore, the assumptions made regarding the parameters of the particle filter itself and of the changes in contact rate proved robust across both the amount of empirical data available since the beginning of the outbreak and the inter-observation interval. The results also support the use of data from the Google search API alongside clinical data.
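
    A minimal bootstrap particle filter over an SIR-style model captures the re-grounding idea: propagate particles through the dynamics, weight them by how well they explain the latest case count, and resample. The observation model, noise scales, and data below are toy assumptions, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
P, N = 500, 10000.0                       # particles, population size
# Each particle carries a state (S, I) and a parameter (beta).
parts = np.column_stack([np.full(P, 9990.0), np.full(P, 10.0),
                         rng.uniform(0.2, 0.5, P)])

def propagate(p):
    """Advance every particle one step through SIR dynamics; beta follows a
    random walk so the filter can track parameter drift over time."""
    S, I, beta = p[:, 0], p[:, 1], p[:, 2]
    new_inf = beta * S * I / N
    p[:, 0] = S - new_inf
    p[:, 1] = np.maximum(I + new_inf - 0.1 * I, 1e-6)   # recovery rate 0.1
    p[:, 2] = beta * np.exp(0.02 * rng.normal(size=len(p)))
    return p, new_inf

for obs in [12, 18, 25, 40, 55]:          # toy observed new cases per interval
    parts, incidence = propagate(parts)
    # Weight particles by agreement with the observation, then resample.
    w = np.exp(-0.5 * ((obs - incidence) / (0.2 * obs + 1.0)) ** 2)
    w /= w.sum()
    parts = parts[rng.choice(P, size=P, p=w)]
```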

    Multi-agent reinforcement learning for the coordination of residential energy flexibility

    This thesis investigates whether residential energy flexibility can be coordinated at scale, without sharing personal data, to achieve a positive impact on energy users and the grid. To tackle climate change, energy uses are being electrified at pace, just as electricity is increasingly provided by non-dispatchable renewable energy sources. These shifts increase the requirements for demand-side flexibility. Despite the potential of residential energy to provide such flexibility, it has remained largely untapped due to cost, social acceptance, and technical barriers. This thesis investigates the use of multi-agent reinforcement learning (MARL) to overcome these challenges. It presents a novel testing environment which models electric vehicles, space heating, and flexible household loads in a distribution network. Additionally, a generative adversarial network-based data generator is developed to obtain realistic training and testing data. Experiments conducted in this environment showed that standard independent learners fail to coordinate in the partially observable stochastic environment. To address this, additional coordination mechanisms are proposed for agents to practise coordination in a centralised simulated rehearsal ahead of fully decentralised implementation. Two such coordination mechanisms are proposed: optimisation-informed independent learning, and a centralised but factored critic network. In the former, agents learn from omniscient convex optimisation results ahead of fully decentralised coordination. This enables cooperation at scale where standard independent learners under partial observability could not be coordinated. In the latter, agents employ a deep neural factorisation network to learn to assess their impact on global rewards. This approach delivers comparable performance for four or more agents, with a 34-fold speed improvement for 30 agents and only first-order growth in computational time. Finally, the impacts of implementing implicit coordination with these multi-agent reinforcement learning methodologies are modelled. It is observed that, even without explicit grid constraint management, cooperating energy users reduce the likelihood of voltage deviations. The cooperative management of voltage constraints could be further promoted by the MARL policies, whereby the likelihood of deviations could be reduced by 43.08% relative to an uncoordinated baseline, albeit with trade-offs in other costs. However, while this thesis demonstrates the technical feasibility of MARL-based cooperation, further market mechanisms are required to reward all participants for their cooperation.
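
    The factored-critic idea can be sketched in its simplest (VDN-style) tabular form, where the global action value is the sum of per-agent utilities, so each agent can act greedily on its own term while training uses the joint reward; this is an illustrative simplification, not the thesis's deep factorisation network:

```python
import numpy as np

n_agents, n_states, n_actions = 3, 5, 2
q = np.zeros((n_agents, n_states, n_actions))   # one utility table per agent
alpha, gamma = 0.1, 0.95

def greedy_actions(states):
    """Decentralised execution: each agent maximises its own utility term."""
    return [int(q[i, states[i]].argmax()) for i in range(n_agents)]

def update(states, actions, joint_reward, next_states):
    """Centralised training: TD update on the factored Q_tot = sum_i q_i."""
    q_tot = sum(q[i, states[i], actions[i]] for i in range(n_agents))
    next_tot = sum(q[i, next_states[i]].max() for i in range(n_agents))
    td_error = joint_reward + gamma * next_tot - q_tot
    for i in range(n_agents):       # the gradient of the sum is 1 per term
        q[i, states[i], actions[i]] += alpha * td_error
```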

    Special Topics in Information Technology

    This open access book presents thirteen outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy. Information Technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems. The doctoral studies program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming increasingly important in recent technological advances, in collaborative projects, and in the education of young researchers. Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics starting from a broad background in various areas of Information Technology, especially Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. Each year, more than 50 PhDs graduate from the program. This book gathers the outcomes of the thirteen best theses defended in 2020-21 and selected for the IT PhD Award. Each of the authors provides a chapter summarizing his/her findings, including an introduction, a description of methods, main achievements, and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in Information Technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists.