42 research outputs found

    Production Scheduling

    Generally speaking, scheduling is the procedure of efficiently mapping a set of tasks or jobs (the studied objects) to a set of target resources. More specifically, as part of a larger planning and scheduling process, production scheduling is essential for the proper functioning of a manufacturing enterprise. This book presents ten chapters divided into five sections. Section 1 discusses rescheduling strategies, policies, and methods for production scheduling. Section 2 presents two chapters on flow shop scheduling. Section 3 describes heuristic and metaheuristic methods for treating the scheduling problem efficiently. In addition, two test cases are presented in Section 4: the first uses simulation, while the second shows a real implementation of a production scheduling system. Finally, Section 5 presents some modeling strategies for building production scheduling systems. This book will be of interest to those working in the decision-making branches of production, in various operational research areas, and in computational methods design. Readers from diverse backgrounds, ranging from academia and research to industry, can take advantage of this volume.
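
    As a minimal illustration of mapping tasks to resources with a simple heuristic, the sketch below applies a shortest-processing-time dispatching rule; the jobs, machines and rule choice are invented for the example and are not taken from the book.

```python
import heapq

def spt_schedule(jobs, machines):
    """Greedy dispatch: assign jobs in shortest-processing-time order to the
    machine that becomes free earliest (one common heuristic among many)."""
    # Priority queue of (time the machine becomes free, machine name)
    free_at = [(0.0, m) for m in machines]
    heapq.heapify(free_at)
    schedule = []
    for name, duration in sorted(jobs, key=lambda j: j[1]):  # SPT order
        start, machine = heapq.heappop(free_at)
        schedule.append((name, machine, start, start + duration))
        heapq.heappush(free_at, (start + duration, machine))
    return schedule

# Hypothetical jobs (name, processing time) and machines
jobs = [("J1", 4.0), ("J2", 2.0), ("J3", 6.0), ("J4", 1.0)]
for entry in spt_schedule(jobs, ["M1", "M2"]):
    print(entry)
```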

    The 9th Conference of PhD Students in Computer Science


    Evaluating network criticality of interdependent infrastructure systems: applications for electrical power distribution and rail transport

    Critical infrastructure provides essential services of economic and social value. However, the pressures of demand growth, congestion, capacity constraints and hazards such as extreme weather increase the need for infrastructure resilience. The increasingly interdependent nature of infrastructure also heightens the risk of cascading failure between connected systems. Infrastructure companies must meet the twin challenge of day-to-day operations and long-term planning with increasingly constrained budgets and resources. Given the need for an effective process of resource allocation, this thesis presents a network criticality assessment methodology for prioritising locations across interdependent infrastructure systems, using metrics of the expected consequence of an asset failure for operational service performance. Existing literature focuses mainly on simulating the vulnerability of national-scale infrastructure, with simplifying assumptions about both system dynamics and dependencies. This thesis takes a data-driven and evidence-based approach, using historical performance databases to inherently capture system behaviour, whilst network diagrams are used to directly identify asset dependencies. Network criticality assessments are produced for three applications of increasing complexity, from (i) electricity distribution, to (ii) railway transport, to (iii) electrified railway dependencies on external power supplies, using case studies of contrasting infrastructure management regions. This thesis demonstrates how network criticality assessments can add value to subjective tacit knowledge and high-level priorities both within and between infrastructure systems. The spatial distribution of criticality is highlighted, whilst the key contribution of the research is the identification of high-resolution single points of failure and their spatial correlation across systems, particularly within urban areas. Service-level metrics also have broad applicability for a range of functions, including incident response, maintenance and long-term investment. The role of network criticality within a holistic and systemic decision-making process is explored, for risk assessment and resilience interventions. The limitations of the research, regarding sample-size caveats and the definition of system boundaries within performance databases, lead to recommendations on cross-system fault reporting and the improvement of information systems.
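
    As a rough sketch of the data-driven criticality metric described above, the snippet below estimates an expected annual service consequence per asset from historical fault records (failure frequency multiplied by mean service impact); the field names and sample records are hypothetical.

```python
from collections import defaultdict

def criticality_scores(fault_records, years_observed):
    """Expected annual service consequence per asset:
    (failures per year) x (mean impact per failure, e.g. customer-minutes lost)."""
    counts = defaultdict(int)
    impact_sum = defaultdict(float)
    for rec in fault_records:
        counts[rec["asset"]] += 1
        impact_sum[rec["asset"]] += rec["impact"]
    scores = {}
    for asset, n in counts.items():
        frequency = n / years_observed
        mean_impact = impact_sum[asset] / n
        scores[asset] = frequency * mean_impact
    # Rank assets by expected consequence, highest first
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Hypothetical fault records over a 5-year observation window
records = [
    {"asset": "substation_A", "impact": 1200.0},
    {"asset": "substation_A", "impact": 800.0},
    {"asset": "feeder_17", "impact": 300.0},
]
print(criticality_scores(records, years_observed=5))
```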

    BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference


    Support for flexible and transparent distributed computing

    Modern distributed computing developed from the traditional supercomputing community, rooted firmly in the culture of batch management. Therefore, the field has been dominated by queuing-based resource managers and workflow-based job submission environments, where static resource demands needed to be determined and reserved prior to launching executions. This has made it difficult to support resource environments (e.g. Grid, Cloud) where both the available resources and the resource requirements of applications may be dynamic and unpredictable. This thesis introduces a flexible execution model where the compute capacity can be adapted to fit the needs of applications as they change during execution. Resource provision in this model is based on a fine-grained, self-service approach instead of the traditional one-time, system-level model. The thesis introduces a middleware-based Application Agent (AA) that provides a platform for applications to dynamically interact and negotiate resources with the underlying resource infrastructure. We also consider the issue of transparency, i.e., hiding the provision and management of the distributed environment, which is key to attracting the public to use the technology. The AA not only replaces the user-controlled process of preparing and executing an application with a transparent software-controlled process, it also hides the complexity of selecting the right resources to ensure execution QoS. This service is provided by an On-line Feedback-based Automatic Resource Configuration (OAC) mechanism cooperating with the flexible execution model. The AA constantly monitors utility-based feedback from the application during execution and is thus able to learn its behaviour and resource characteristics. This allows it to automatically compose the most efficient execution environment on the fly and satisfy any execution requirements defined by users. Two policies are introduced to supervise the information learning and resource tuning in the OAC. The Utility Classification policy classifies hosts according to their historical performance contributions to the application. According to this classification, the AA chooses high-utility hosts and withdraws low-utility hosts to configure an optimum environment. The Desired Processing Power Estimation (DPPE) policy dynamically configures the execution environment according to the estimated total processing power needed to satisfy users' execution requirements. Through the introduction of flexibility and transparency, a user is able to run a dynamic or conventional distributed application anywhere with optimised execution performance, without managing distributed resources. Based on this standalone model, the thesis further introduces a federated resource negotiation framework as a step towards an autonomous multi-user distributed computing world.
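
    As a rough illustration of the Utility Classification idea, the sketch below ranks hosts by their mean historical utility contribution and splits them into hosts to keep and hosts to withdraw; the utility measure, threshold and host names are assumptions for the example, not the AA's actual policy.

```python
def select_hosts(host_utility_history, keep_fraction=0.5):
    """Rank hosts by mean historical utility contribution and retain the
    top fraction; low-utility hosts become candidates for withdrawal."""
    mean_utility = {
        host: sum(samples) / len(samples)
        for host, samples in host_utility_history.items() if samples
    }
    ranked = sorted(mean_utility, key=mean_utility.get, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep], ranked[n_keep:]   # (keep, withdraw)

# Hypothetical per-host utility feedback collected during execution
history = {"host1": [0.9, 0.8], "host2": [0.2, 0.3], "host3": [0.6, 0.7]}
keep, withdraw = select_hosts(history)
print("keep:", keep, "withdraw:", withdraw)
```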

    Data-Driven Intelligent Scheduling For Long Running Workloads In Large-Scale Datacenters

    Cloud computing is becoming a fundamental facility of today's society. Large-scale public and private cloud datacenters, spanning millions of servers and operating as warehouse-scale computers, support most of the business of Fortune 500 companies and serve billions of users around the world. Unfortunately, the modern industry-wide average datacenter utilization is as low as 6% to 12%. Low utilization not only hurts the operational and capital components of cost efficiency, but also becomes a scaling bottleneck due to the limits on electricity delivered by nearby utilities. Improving multi-resource efficiency in global datacenters is both critical and challenging. Additionally, with the great commercial success of diverse big data analytics services, enterprise datacenters are evolving to host heterogeneous computation workloads, including online web services, batch processing, machine learning, streaming computing, interactive query and graph computation, on shared clusters. Most of these are long-running workloads that use long-lived containers to execute tasks. We surveyed datacenter resource scheduling work over the last 15 years. Most previous work is designed to maximize cluster efficiency for short-lived tasks in batch processing systems such as Hadoop, and is not suitable for modern long-running workloads in systems such as microservices, Spark, Flink, Pregel, Storm or TensorFlow. It is urgent to develop new, effective scheduling and resource allocation approaches to improve efficiency in large-scale enterprise datacenters. This dissertation is the first work to define and identify the problems, challenges and scenarios of scheduling and resource management for diverse long-running workloads in modern datacenters. Such workloads rely on predictive scheduling techniques to perform reservation, auto-scaling, migration or rescheduling, which pushes us to pursue more intelligent scheduling techniques built on adequate predictive knowledge. We specify what intelligent scheduling is, what abilities are necessary to achieve it, and how to leverage it to transform NP-hard online scheduling problems into tractable offline scheduling problems. We designed and implemented an intelligent cloud datacenter scheduler that automatically performs resource-to-performance modeling, predictive optimal reservation estimation, and QoS (interference)-aware predictive scheduling to maximize resource efficiency across multiple dimensions (CPU, memory, network, disk I/O), while strictly guaranteeing service level agreements (SLAs) for long-running workloads. Finally, we introduce large-scale co-location techniques for executing long-running and other workloads on the shared global datacenter infrastructure of Alibaba Group, which effectively improve cluster utilization from 10% to an average of 50%. This goes far beyond scheduling alone and involves technical evolution of the IDC, network, physical datacenter topology, storage, server hardware, operating systems and containerization. We demonstrate its effectiveness by analysing the newest Alibaba public cluster trace from 2017, and are the first to reveal a global view of the scenarios, challenges and status in Alibaba's large-scale global datacenters through data, including big promotion events such as Double 11.
Data-driven intelligent scheduling methodologies and effective infrastructure co-location techniques are critical and necessary for pursuing maximized multi-resource efficiency in modern large-scale datacenters, especially for long-running workloads.
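
    As one illustration of the predictive reservation and multi-dimensional placement ideas above, here is a minimal sketch, not the dissertation's actual scheduler: the reservation is taken as a high quantile of observed usage plus a safety margin, and a container is placed on the node with the largest remaining headroom across dimensions. All names and numbers are hypothetical.

```python
import math

def reservation_from_history(usage_samples, quantile=0.95, headroom=1.1):
    """Predictive reservation: a high quantile of observed usage plus a
    safety margin (one simple estimator; richer models are possible)."""
    ordered = sorted(usage_samples)
    idx = min(len(ordered) - 1, math.ceil(quantile * len(ordered)) - 1)
    return ordered[idx] * headroom

def place(container_demand, nodes):
    """Pick the node with the largest remaining capacity across all
    dimensions (CPU, memory, ...) that still fits the demand."""
    def slack(free):
        return min(free[d] - container_demand[d] for d in container_demand)
    feasible = [n for n, free in nodes.items() if slack(free) >= 0]
    return max(feasible, key=lambda n: slack(nodes[n])) if feasible else None

cpu_samples = [0.8, 1.1, 0.9, 1.4, 1.0]           # hypothetical cores used
demand = {"cpu": reservation_from_history(cpu_samples), "mem": 2.0}
nodes = {"node-a": {"cpu": 3.0, "mem": 8.0}, "node-b": {"cpu": 1.0, "mem": 16.0}}
print(demand, "->", place(demand, nodes))
```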

    Cost-based linear holding practice and collaborative air traffic flow management under trajectory based operations

    The current air transportation system is reaching its capacity limit in many countries and regions across the world. It tends to be less efficient, and sometimes even incapable of dealing with, the enormous air traffic demand that continues to grow year by year. This has been evidenced by the record-breaking flight delays reported in various places in recent years, which have resulted in notable economic losses. To mitigate this imbalance between demand and capacity, air traffic flow management (ATFM) is usually one of the most useful options. It regulates traffic flows according to air traffic control capacity while preserving the safety and efficiency of flights. ATFM initiatives can be considered well in advance of flight execution, more than one year earlier, based on air traffic forecasts and capacity plans, and remain in effect, with information updated, until the day of operation. This long effective period allows substantial collaboration among different stakeholders, including the ATFM authority, airspace users (AUs), air navigation service providers (ANSPs), airports, etc. Under the forthcoming paradigm of trajectory based operations (TBO), the flight 4-dimensional trajectory is anticipated to further enhance the connection between the flight planning and execution phases, thus fostering such collaboration in ATFM. Moreover, in today's operations, ground holding is a typical measure undertaken in many widely used ATFM programs. Even though holding on the ground, at the origin airport, has the advantage of fuel efficiency over airborne holding, its low flexibility can, in some circumstances, affect ATFM performance. Yet, with proper flight trajectory management, it is also possible to absorb delay airborne at no extra fuel cost compared with ground holding. This PhD thesis first focuses on this trajectory management, specifically on a cost-based linear holding practice. Linear holding is realized progressively along the planned trajectory through precise speed control, which can be enabled by aircraft trajectory optimization techniques. Typical short- and mid-haul flights are simulated to obtain the maximum airborne delay that can be yielded using the same fuel consumption as initially scheduled, and based on this its potential applicability is demonstrated. A network ATFM model is adapted from the well-studied Bertsimas Stock-Patterson (BSP) model, incorporating different types of delay (including linear holding) to flexibly handle the traffic flow with a set of given (yet changeable) capacities. For the benefits of the model to be fully realized, AUs are required to participate in the decision-making process, submitting, for instance, the maximum linear holding bound per flight along the planned trajectory. Next, increased AU participation is expected in a proposed Collaborative ATFM framework, which considers not only various delay initiatives but also alternative trajectories that allow flights to route out of identified hotspot areas. A centralized linear programming optimization model then computes the best trajectory selections and the optimal delay distributions across all concerned flights. Finally, ANSP involvement is additionally considered in the framework, through dynamic airspace reconfiguration, further enhancing the collaboration between ATFM stakeholders.
As such, traffic flow regulation and sector-opening scheduling are bound into an integrated optimization model and are thus conducted in a synchronized way. Results indicate that the performance of demand and capacity balancing can be further improved compared with the previous ATFM models presented in this PhD thesis.
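
A toy linear programme in the spirit of the delay-allocation decision described above may help illustrate the idea: each flight's required delay is split between ground holding and airborne linear holding, the latter capped by an AU-declared bound. The flights, costs and bounds below are invented for illustration, and the actual BSP-based network model additionally enforces sector and airport capacity constraints over time.

```python
from scipy.optimize import linprog

# Toy instance: each flight must absorb a required delay (minutes), split
# between ground holding (g) and airborne linear holding (a <= AU-declared bound).
flights = {
    "FL1": {"delay": 20.0, "max_linear": 8.0, "cost_g": 1.0, "cost_a": 0.8},
    "FL2": {"delay": 10.0, "max_linear": 12.0, "cost_g": 1.0, "cost_a": 0.9},
}

names = list(flights)
c, bounds, A_ub, b_ub = [], [], [], []
for i, f in enumerate(names):
    data = flights[f]
    c += [data["cost_g"], data["cost_a"]]            # objective: total delay cost
    bounds += [(0, None), (0, data["max_linear"])]   # g >= 0, 0 <= a <= bound
    row = [0.0] * (2 * len(names))
    row[2 * i], row[2 * i + 1] = -1.0, -1.0          # g + a >= delay  <=>  -g - a <= -delay
    A_ub.append(row)
    b_ub.append(-data["delay"])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for i, f in enumerate(names):
    print(f, "ground:", round(res.x[2 * i], 1), "linear:", round(res.x[2 * i + 1], 1))
```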

    Integrated modelling of electrical energy systems for the study of residential demand response strategies

    Building and urban energy simulation software aims to model the energy flows in buildings and the urban communities in which most of them are located, providing tools that assist in the decision-making process to improve their initial and ongoing energy performance. To maintain their utility, such tools must continually develop in tandem with emerging technologies in the energy field. Demand Response (DR) strategies represent one such family of technology that has been identified as a key and affordable solution in the global transition towards clean energy generation and use, in particular at the residential scale. This thesis contributes towards the development and application of a comprehensive building and urban energy simulation capability that parsimoniously represents occupants' energy-using behaviours and their responses to strategies designed to influence them. This platform intends to better unify the modelling of Demand Response strategies by integrating the modelling of different energy systems through Multi-Agent Simulation, considering the stochastic processes taking place in electricity demand and supply. This is addressed by: (a) improving the fidelity of predictions of household electricity demand, using stochastic models; (b) demonstrating the potential of Demand Response strategies using Multi-Agent Simulation and machine learning techniques; (c) integrating a suitable model of the low voltage network to study and incorporate effects on the grid; (d) identifying how this platform should be extended to better represent human-to-device interactions, so as to test strategies designed to influence the scope and timing of occupants' energy-using services. Stochastic demand models provide the means to realistically simulate power demands, which are subject to naturally random human behaviour. In this work, the power demand arising from small household appliances is identified as a stochastic variable, for which different candidate modelling methods are explored. Variants of two types of stochastic models have been tested, based on discrete-time and continuous-time stochastic processes. The alternative candidate models are compared and validated using Household Electricity Survey data, which is also used to test strategies, informed by advanced cluster analysis techniques, to simplify the form of these models. The recommended small appliance model is integrated with a Multi-Agent Simulation (MAS) platform, which is in turn extended and deployed to test DR strategies, such as load shifting and electric storage operation. In the search for optimal load-shifting strategies, machine learning algorithms, Q-learning in particular, are utilised. The application of this newly developed tool, No-MASS/DR, is demonstrated through the study of strategies to maximise the locally generated renewable energy of a single household and of a small community of buildings connected to a Low Voltage network. Finally, an explicit model of the Low Voltage (LV) network has been developed and coupled with the DR framework. The model solves the power-flow analysis of a general low-voltage distribution network, using an electrical circuit-based approach, implemented as a novel recursive algorithm that can efficiently calculate the voltages at the different nodes of a complex branched network. 
The work accomplished in this thesis contributes to the understanding of residential electricity management by developing a better unified modelling of Demand Response strategies that requires integrated modelling of energy systems, with a particular focus on maximising locally generated renewable energy.
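
    As a minimal sketch of the load-shifting idea addressed with Q-learning above, the snippet below learns, hour by hour, whether running a deferrable load is preferred when a hypothetical PV surplus exists; the reward shaping, profiles and parameters are illustrative assumptions and not the No-MASS/DR implementation.

```python
import random

# Tabular Q-learning toy: decide each hour whether to run a deferrable load
# now (action 1) or wait (action 0); the reward favours running when the
# hypothetical PV generation exceeds the base demand. All numbers are illustrative.
ALPHA, GAMMA, EPS, HOURS = 0.1, 0.95, 0.1, 24
pv = [max(0.0, 3.0 - abs(h - 12) * 0.5) for h in range(HOURS)]  # kW, peaks at noon
base = [0.5] * HOURS                                            # kW base demand
Q = {(h, a): 0.0 for h in range(HOURS) for a in (0, 1)}

def reward(hour, action):
    surplus = pv[hour] - base[hour]
    return surplus if action == 1 else 0.0       # reward self-consumed surplus

for episode in range(2000):
    for h in range(HOURS):
        # Epsilon-greedy action selection
        a = random.choice((0, 1)) if random.random() < EPS else max((0, 1), key=lambda x: Q[(h, x)])
        nxt = 0.0 if h == HOURS - 1 else max(Q[(h + 1, 0)], Q[(h + 1, 1)])
        Q[(h, a)] += ALPHA * (reward(h, a) + GAMMA * nxt - Q[(h, a)])

best_hours = [h for h in range(HOURS) if Q[(h, 1)] > Q[(h, 0)]]
print("hours where running the load is preferred:", best_hours)
```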


    Quality of Service in Vehicular Ad Hoc Networks: Methodical Evaluation and Enhancements for ITS-G5

    After many formative years, ad hoc wireless communication between vehicles became a vehicular technology available in mass-production cars in 2020. Vehicles form spontaneous Vehicular Ad Hoc Networks (VANETs), which enable communication whenever vehicles are nearby, without the need for supporting infrastructure. In Europe, this communication is comprehensively standardised as Intelligent Transport Systems in the 5.9 GHz band (ITS-G5). This thesis centres on Quality of Service (QoS) in these VANETs based on ITS-G5 technology. While only a few vehicles communicate, radio resources are plentiful and channel congestion is a minor issue. With progressing deployment, congestion control becomes crucial to preserve QoS by preventing high latencies or failed information dissemination. The developed VANET simulation model, featuring an elaborated ITS-G5 protocol stack, allows QoS to be investigated methodically. It also considers the characteristics of ITS-G5 radios, such as signal attenuation in vehicular environments and the capture effect at receivers. Backed by this simulation model, several enhancements for ITS-G5 are proposed to control congestion reliably and thus ensure QoS for its applications. Modifications to the GeoNetworking (GN) protocol prevent massive bursts of packets in a short time and hence congestion. Glow Forwarding is introduced as a GN extension to distribute delay-tolerant information. The revised Decentralized Congestion Control (DCC) cross-layer supports low-latency transmission of event-triggered, periodic and relayed packets. For this purpose, DCC triggers periodic services and manages a shared duty cycle budget dedicated to packet forwarding. Evaluation in large-scale networks reveals that this enhanced ITS-G5 system can reliably reduce the information age of periodically sent messages. The forwarding budget virtually eliminates the starvation of multi-hop packets and still avoids congestion caused by excessive forwarding. The presented enhancements thus pave the way to scale up VANETs for widespread deployment and future applications.
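
    As a simplified sketch of the shared duty-cycle budget idea, the snippet below tracks channel air time within a sliding window and reserves a slice of the budget for forwarded packets so that multi-hop traffic is not starved; the window, limit and split are assumed values for illustration, not the standardised DCC parameters.

```python
class DutyCycleBudget:
    """Illustrative duty-cycle limiter: channel occupancy (air time) within a
    sliding window must stay below a budget, with a slice reserved for
    forwarded packets. Parameters are examples, not the ITS-G5 DCC values."""
    def __init__(self, window_s=1.0, max_duty=0.03, forward_share=0.25):
        self.window_s = window_s
        self.own_budget = max_duty * (1.0 - forward_share) * window_s
        self.fwd_budget = max_duty * forward_share * window_s
        self.own_used, self.fwd_used = [], []   # (timestamp, air_time) records

    def _used(self, records, now):
        # Drop records that have left the sliding window, return air time used
        records[:] = [(t, d) for t, d in records if now - t < self.window_s]
        return sum(d for _, d in records)

    def try_send(self, now, air_time, forwarded=False):
        records = self.fwd_used if forwarded else self.own_used
        budget = self.fwd_budget if forwarded else self.own_budget
        if self._used(records, now) + air_time <= budget:
            records.append((now, air_time))
            return True
        return False

dcc = DutyCycleBudget()
print(dcc.try_send(0.00, 0.0005))                 # own periodic message
print(dcc.try_send(0.01, 0.0005, forwarded=True)) # relayed multi-hop packet
```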