Road Traffic Congestion Analysis Via Connected Vehicles
Road traffic congestion is a particular state of mobility where travel times increase and more and more time is spent in vehicles. Apart from being a quite stressful experience for drivers,
congestion also has a negative impact on the environment and the economy. In this context, there is pressure on the authorities to take decisive action to improve traffic flow on the network. By improving network flow, congestion is reduced and the total travel time of vehicles is decreased. Congestion can be classified as recurrent or non-recurrent (NRC): recurrent congestion happens on a regular basis, while non-recurrent congestion in an urban network is mainly caused by incidents, work zones, special events, and adverse weather. Infrastructure operators monitor traffic on the network while using as few resources as possible. As a result, the traffic state cannot be directly measured everywhere on the road network: the locations where traffic flow needs to be improved vary greatly, and deploying highly sophisticated equipment to ensure accurate estimation of traffic flows and timely detection of events everywhere on the network is not feasible. In addition, many studies have been devoted to highways rather than highly congested urban regions, which are intricate, complex networks and far more likely to be monitored by traffic authorities. Moreover, current traffic data collection systems cannot register detailed information on the disruptive events happening on the road, such as vehicle crashes or adverse weather; operators require external data sources to retrieve this information in real time. Current methods only detect congestion, but that is not enough: we should be able to better characterize the event causing it. Agencies need to understand what cause is affecting variability on their facilities, and to what degree, so that they can take the appropriate action to mitigate congestion.
Virtual Backbone for Service Discovery in Ad Hoc Networks
Basic definitions and concepts -- Elements of the problem statement -- Research objectives -- Service discovery in ad hoc networks -- Ad hoc networks -- Service discovery -- Routing protocols in ad hoc networks -- Service discovery strategies -- Service discovery protocols for MANET networks -- Virtual backbone for discovery -- Model formulation -- General architecture -- Phase I: virtual backbone formation -- Virtual backbone maintenance -- Service registration and discovery -- Implementation -- Implementation methodology -- Experimental plan -- Analysis of results -- Summary of the work -- Limitations of the proposed approach
Prediction of Traffic Flow via Connected Vehicles
We propose a Short-term Traffic flow Prediction (STP) framework so that
transportation authorities can take early action to control flow and prevent
congestion. We anticipate flow at future time frames on a target road segment
based on historical flow data and innovative features such as real-time feeds
and trajectory data provided by Connected Vehicles (CV) technology. To cope
with the fact that existing approaches do not adapt to variations in traffic,
we show how this novel approach allows advanced modelling by integrating into
the flow forecast the impact of the various events that CVs realistically
encounter on segments along their trajectories. We solve the STP problem with
a Deep Neural Network (DNN) in a multitask learning setting augmented by input
from CVs. Results show that our approach, namely MTL-CV, with an average
Root-Mean-Square Error (RMSE) of 0.052, outperforms state-of-the-art ARIMA time
series (RMSE of 0.255) and baseline classifiers (RMSE of 0.122). Single-task
learning with an Artificial Neural Network (ANN) also performed worse (RMSE of
0.113) than MTL-CV. MTL-CV learned historical similarities between segments,
in contrast to using direct historical trends in the measure, because trends
may not exist in the measure but do in the similarities.
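The multitask idea above — segments sharing a learned representation while each keeps its own prediction head — can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's MTL-CV implementation: the data is synthetic, and the network shape, learning rate, and variable names are all invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: lagged flow features X for 3 road segments (tasks)
# that share structure; Y holds the next-step flow per segment.
n, d, tasks, h = 200, 8, 3, 16
X = rng.normal(size=(n, d))
shared_basis = rng.normal(size=(d, h))
Y = np.tanh(X @ shared_basis) @ rng.normal(size=(h, tasks)) \
    + 0.01 * rng.normal(size=(n, tasks))

# Multitask net: one shared hidden layer (W1) feeding one linear
# output head per segment (columns of W2).
W1 = rng.normal(size=(d, h)) * 0.1
W2 = rng.normal(size=(h, tasks)) * 0.1
lr = 0.05

losses = []
for _ in range(500):
    H = np.tanh(X @ W1)                 # shared representation
    err = (H @ W2) - Y                  # per-task prediction error
    losses.append(float(np.sqrt((err ** 2).mean())))  # RMSE over all tasks
    gW2 = H.T @ err / n                 # gradient of the task heads
    gH = (err @ W2.T) * (1 - H ** 2)    # backprop through tanh
    gW1 = X.T @ gH / n                  # gradient of the shared layer
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"RMSE first step: {losses[0]:.3f}, last step: {losses[-1]:.3f}")
```

Training all heads jointly forces the shared layer to encode what the segments have in common, which is the mechanism the abstract credits for MTL-CV learning similarities between segments.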
Learning Cyber Defence Tactics from Scratch with Multi-Agent Reinforcement Learning
Recent advancements in deep learning techniques have opened new possibilities
for designing solutions for autonomous cyber defence. Teams of intelligent
agents in computer network defence roles may reveal promising avenues to
safeguard cyber and kinetic assets. In a simulated game environment, agents are
evaluated on their ability to jointly mitigate attacker activity in host-based
defence scenarios. Defender systems are evaluated against heuristic attackers
with the goals of compromising network confidentiality, integrity, and
availability. Value-based Independent Learning and Centralized Training
Decentralized Execution (CTDE) cooperative Multi-Agent Reinforcement Learning
(MARL) methods are compared, revealing that both approaches outperform a simple
multi-agent heuristic defender. This work demonstrates the ability of
cooperative MARL to learn effective cyber defence tactics against varied
threats.
Comment: Presented at 2nd International Workshop on Adaptive Cyber Defense, 2023 (arXiv:2308.09520)
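Value-based Independent Learning, the simpler of the two compared approaches, can be illustrated on a toy stateless game. This sketch is not the paper's environment: the two-host "defence" payoffs, epsilon-greedy policy, and hyperparameters are invented assumptions, chosen so that jointly acting is the dominant strategy each learner should discover on its own.

```python
import random

random.seed(0)

ACTIONS = [0, 1]  # 0 = idle, 1 = isolate the compromised host you defend

def reward(a0, a1):
    # Shared team reward: best when both defenders act, partial when one does.
    return {0: 0.0, 1: 0.3, 2: 1.0}[a0 + a1]

# Independent learners: each agent keeps its own Q-table over its own
# actions only; there is no joint-action table and no shared critic.
Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[agent][action], stateless game
alpha, eps = 0.1, 0.2

def act(agent):
    if random.random() < eps:          # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return 0 if Q[agent][0] >= Q[agent][1] else 1

for _ in range(5000):
    a0, a1 = act(0), act(1)
    r = reward(a0, a1)
    # Each agent updates its own value estimate from the shared reward.
    Q[0][a0] += alpha * (r - Q[0][a0])
    Q[1][a1] += alpha * (r - Q[1][a1])

greedy = (int(Q[0][1] > Q[0][0]), int(Q[1][1] > Q[1][0]))
print("greedy joint action:", greedy, "-> reward", reward(*greedy))
```

CTDE methods differ from this sketch in that a centralized critic sees both agents' actions during training, which helps in games where the best response depends on the teammate's choice; here the acting action dominates, so even fully independent learners converge.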
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses data privacy, security, access rights, and access to heterogeneous information by training a global model using distributed nodes. Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine these benefits. Model-poisoning attacks on FL target the availability of the model; the adversarial objective is to disrupt the training. We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker in small to medium-size federations. A fine-grained assessment of the worker's history permits the evaluation of its behavior over time and results in innovative detection strategies. We present three lines of defense that assess whether a worker is reliable by observing whether the node is truly training and advancing towards a goal. Our defense exposes an attacker's malicious behavior and removes unreliable nodes from the aggregation process so that the FL process converges faster. attestedFL increased the accuracy of the model in different FL settings, under different attack patterns and scenarios, e.g., attacks performed at different stages of convergence, colluding attackers, and continuous attacks.
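One way to read the "is the node truly training" check is as a trend test over each worker's reported loss history: a node that is actually training should show losses drifting downward, while a poisoner sending arbitrary updates will not. The sketch below is a hypothetical illustration of that idea only, not attestedFL's actual mechanism; the trend statistic, the threshold `tol`, and the simulated workers are all assumptions.

```python
import random
import statistics

random.seed(1)

def trend(history):
    # Slope of a least-squares line fitted to the loss history.
    n = len(history)
    mx = (n - 1) / 2
    my = statistics.fmean(history)
    num = sum((x - mx) * (y - my) for x, y in enumerate(history))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def is_suspect(history, tol=-0.01):
    # A reliable worker's loss should trend clearly downward;
    # a flat or rising trend suggests the node is not really training.
    return trend(history) >= tol

# Simulated honest worker: noisy but steadily decreasing loss.
honest = [1.0 / (t + 1) + random.gauss(0, 0.02) for t in range(20)]
# Simulated poisoner: reported losses wander with no progress.
attacker = [random.uniform(0.4, 0.6) for _ in range(20)]

print("honest flagged:  ", is_suspect(honest))
print("attacker flagged:", is_suspect(attacker))
```

A per-worker check like this is cheap for small to medium federations, since the aggregator only needs to persist a short window of state per node between rounds.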