
    The effects of tube deformities on the dynamic calibration of a tubing system

    Using the Bergh and Tijdeman method for tube calibration is powerful, as it allows tubes of various dimensions to be used in a dynamic pressure data-acquisition system, with post-processing used to correct for the tube's natural dynamic response. Determining the tube's response and using the inverse Fourier transform to calibrate the tubing system is accepted practice; how tube deformities influence this calibration, however, was not known. Small singular deformities caused by pinching, twisting and bending, corresponding to pinch and internal-area ratios below approximately 5 and 3.57 respectively, do not affect the response of a tubing system. Significant effects on the tube's response occur only at pinch and area ratios above these values. Furthermore, pinch ratios above 5 are extreme and represent a tube that is locally pinched almost to the point of blockage. This is testament to the tube's resilience to local changes in internal diameter. It can therefore be safely assumed that unwanted and unexpected damping of a tubing system could be due to a local tube deformity only in such extreme cases.
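The calibration step described above, dividing the measured spectrum by the tube's frequency response and applying the inverse Fourier transform, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the first-order low-pass transfer function and the 50 Hz cutoff are stand-in assumptions, where a real system would use the Bergh-Tijdeman model or a measured response for the actual tube geometry.

```python
import numpy as np

def correct_tube_response(measured, fs, f_cutoff=50.0):
    """Divide the measured spectrum by the tube transfer function H(f)
    and inverse-FFT back to the time domain (illustrative H only)."""
    n = len(measured)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Assumed first-order low-pass transfer function (stand-in, not Bergh-Tijdeman).
    H = 1.0 / (1.0 + 1j * freqs / f_cutoff)
    spectrum = np.fft.rfft(measured)
    return np.fft.irfft(spectrum / H, n=n)

# Usage: a 1 kHz-sampled 30 Hz tone, attenuated by the same assumed tube
# response, is recovered exactly by the correction.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
true_signal = np.sin(2 * np.pi * 30.0 * t)
H = 1.0 / (1.0 + 1j * np.fft.rfftfreq(len(t), 1.0 / fs) / 50.0)
measured = np.fft.irfft(np.fft.rfft(true_signal) * H, n=len(t))
recovered = correct_tube_response(measured, fs)
print(np.allclose(recovered, true_signal, atol=1e-8))  # → True
```

Because the simulated attenuation and the correction use the same transfer function, the inversion is exact up to floating-point error; with a real tube, accuracy depends on how well H(f) models the installed tubing.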

    Human-Data Interaction: The Human Face of the Data-Driven Society

    The increasing generation and collection of personal data has created a complex ecosystem, often collaborative but sometimes combative, around companies and individuals engaging in the use of these data. We propose that the interactions between these agents warrant a new topic of study: Human-Data Interaction (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer science, statistics, sociology, psychology and behavioural economics. We expose the challenges that HDI raises, organised into three core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interested parties in the personal and big data ecosystems.

    Traffic synchronization with controlled time of arrival for cost-efficient trajectories in high-density terminal airspace

    The growth in air traffic has led to continuously growing environmental sensitivity in aviation, encouraging research into methods for achieving greener air transportation. In this context, continuous descent operations (CDOs) allow aircraft to follow an optimum flight path that delivers major environmental and economic benefits, resulting in engine-idle descents from cruise altitude until just before landing that reduce fuel consumption, pollutant emissions and noise nuisance. However, this type of operation suffers from a well-known drawback: a loss of predictability, from the air traffic control (ATC) point of view, of the overfly times at the different waypoints of the route. As a consequence, ATC requires large separation buffers, reducing the capacity of the airport. Previous works investigating this issue showed that the ability to meet a controlled time of arrival (CTA) at a metering fix could enable CDOs while simultaneously maintaining airport throughput. In this context, more research is needed on how modern arrival managers (AMANs) and extended arrival managers (E-AMANs) could provide support in selecting the appropriate CTA. ATC would be in charge of providing the CTA to the pilot, who would then use four-dimensional (4D) flight management system (FMS) trajectory management capabilities to satisfy it. A key transformation towards more efficient aircraft scheduling is the use of new air traffic management (ATM) paradigms, such as the trajectory based operations (TBO) concept. This concept aims at completely removing open-loop vectoring and strategic constraints on trajectories by efficiently implementing a 4D trajectory negotiation process to synchronize airborne and ground equipment, with the aim of maximizing both flight efficiency and throughput.
The main objective of this PhD thesis is to develop methods to efficiently schedule arrival aircraft in terminal airspace, together with concepts of operations compliant with the TBO concept. The simulated arrival trajectories generated for all the experiments conducted in this thesis are, to the maximum possible extent, energy-neutral CDOs, seeking to reduce the overall environmental impact of aircraft operations in the ATM system. Ultimately, the objective of this PhD is to achieve more efficient arrival traffic management, in which higher levels of predictability and similar levels of capacity are achieved while the safety of operations is maintained. The designed experiments consider a TBO environment, involving tight synchronization between all the involved actors of the ATM system. Higher levels of automation and information sharing are expected, together with a modernization of both current ATC ground-support tools and aircraft FMSs to comply with the new TBO paradigm.
First, a trajectory optimization framework is defined and used to generate the simulated trajectories for all the experiments in this thesis. The benefits of flying energy-neutral CDOs are then assessed against real trajectories obtained from historical flight data; two data sources are compared, identifying the one better suited to efficiency studies in terminal airspace. Energy-neutral CDOs are the preferred type of trajectory from an environmental point of view but, depending on the amount of traffic, it may be impossible for ATC to assign a CTA that aircraft can meet while flying the published arrival route. Two strategies for meeting the assigned CTA are therefore compared: flying energy-neutral CDOs along longer or shorter routes, or flying powered descents along the published route. For both strategies, the sensitivity of fuel consumption to parameters such as the initial cruise altitude or the wind speed is analyzed. Finally, two strategies for efficiently managing arrival traffic in terminal airspace are analyzed. First, an interim strategy halfway between full 4D trajectory negotiation and open-loop vectoring is used: a methodology is proposed to effectively manage arrival traffic in which aircraft fly energy-neutral CDOs on an area navigation (RNAV) procedure known as the trombone. Then, a new methodology is proposed for generating dynamic arrival routes that adapt automatically to the current traffic demand, again applying energy-neutral CDOs to all arrival traffic. Several factors could limit the benefits of the proposed solutions. The amount and distribution of arrival traffic has a large effect on the results obtained, in some cases limiting the efficient management of arriving aircraft. In addition, some of the proposed solutions involve high computational loads that could limit their operational application, motivating further research to optimize the models and methodologies used. Finally, allowing some aircraft to fly powered descents could ease the management of arriving aircraft in the experiments focusing on the trombone procedure and on the generation of dynamic arrival routes.

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey reviews studies that focus on ML techniques for predicting video quality from QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting video quality in HAS applications, (4) predicting video quality in SDN applications, (5) predicting video quality in wireless settings, and (6) predicting video quality in WebRTC applications. Throughout the survey, research challenges and directions in this area are discussed, including (1) machine learning versus deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; and (4) self-healing networks and failure recovery.
The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has considerable potential because they are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
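As an illustration of the kind of traditional ML baseline the survey finds most widely adopted, the sketch below fits an ordinary-least-squares model mapping QoD metrics (throughput, RTT, packet loss) to a MOS-like quality score. The feature set, coefficients, and synthetic data are assumptions for illustration, not taken from any of the surveyed studies.

```python
import numpy as np

# Synthetic QoD measurements for 200 video flows (illustrative assumptions).
rng = np.random.default_rng(0)
n = 200
throughput = rng.uniform(1, 20, n)   # Mbit/s
rtt = rng.uniform(10, 200, n)        # ms
loss = rng.uniform(0, 5, n)          # percent packet loss

# Synthetic ground-truth quality on a MOS-like 1-5 scale.
quality = np.clip(1 + 0.2 * throughput - 0.005 * rtt - 0.3 * loss
                  + rng.normal(0, 0.1, n), 1, 5)

# Design matrix with an intercept column; fit with least squares.
X = np.column_stack([np.ones(n), throughput, rtt, loss])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)

# Predict quality for a new flow: 10 Mbit/s, 50 ms RTT, 0.5% loss.
x_new = np.array([1.0, 10.0, 50.0, 0.5])
print(round(float(x_new @ coef), 2))
```

A linear model like this is cheap to train and interpretable (each coefficient is a per-unit quality impact), which is exactly the trade-off against deep learning that the survey's conclusion highlights.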

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the service region of the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to a worldwide average of 49.7%, according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience a poor Quality of Service (QoS) provided at high costs. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of a methodology that could be adopted for similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructures to alleviate the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then unveils our timely and longitudinal inspection of African interdomain routing, using the enhanced RIPE Atlas measurement infrastructure to fill the gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques suitable for the treatment of any set of similar measurements to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, the currency area, or the geographic location of the country in which the ISP operates.
They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on the efforts of stakeholders towards traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used to collect Time-Sequence Latency Probes (TSLP) measurements for a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links experienced noticeable indications of congestion, thus promoting peering. The causes of these events were identified during IXP operator interviews, showing how essential collaboration with stakeholders is to understanding the causes of performance degradations. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed, deployed, and tested in AfriNIC. This open-source web platform, titled the “African” Route-collectors Data Analyzer (ARDA), provides metrics which picture in real time the status of interconnection at different levels, using public routing information available at local route-collectors with a peering viewpoint of the Internet. The results highlight that only a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) are peering in the region, a fraction that remained static from April to September 2017 despite the significant growth of IXPs in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region.
Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. Towards this end, a comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is, indeed, expanding; nevertheless, much of its web content is still served from the United States (US) and Europe, even though Google is the most popular content source in many African countries. Further, the same analysis is repeated across top global and regional websites, showing that even top African websites prefer to host their content abroad. Following that, the primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are explored to outline proposals for further ISP and CP deployments. Considering the above, an option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs), as well as RTTs to ASes hosting the top 10 global and top 10 regional Alexa websites.
We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier. Programa Oficial de Doctorado en Ingeniería Telemática. Thesis committee: David Fernández Cambronero (president), Alberto García Martínez (secretary), Cristel Pelsser (member).

    Recent Advances in Anomaly Detection Methods Applied to Aviation

    Anomaly detection is an active area of research with numerous methods and applications. This survey reviews the state of the art in data-driven anomaly detection techniques and their application to the aviation domain. After a brief introduction to the main traditional data-driven methods for anomaly detection, we review recent advances in neural networks, deep learning and temporal-logic based learning. In particular, we cover unsupervised techniques applicable to time series data because of their relevance to the aviation domain, where the lack of labeled data is the most usual case and the nature of flight trajectories and sensor data is sequential, or temporal. The advantages and disadvantages of each method are presented in terms of computational efficiency and detection efficacy. The second part of the survey explores the application of anomaly detection techniques to aviation and their contributions to improving the safety and performance of flight operations and aviation systems. As far as we know, some of the presented methods have not yet found an application in the aviation domain. We review applications ranging from the identification of significant operational events in air traffic operations to the prediction of potential aviation system failures for predictive maintenance.
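As a minimal illustration of the unsupervised, time-series setting the survey emphasizes, the sketch below flags points whose rolling z-score exceeds a threshold. This is a generic baseline rather than any of the reviewed methods, and the signal, window, and threshold are made-up assumptions.

```python
import numpy as np

def rolling_zscore_anomalies(x, window=20, threshold=4.0):
    """Return indices i where |x[i] - mean| of the preceding window
    exceeds threshold * std of that window (simple unsupervised detector)."""
    anomalies = []
    for i in range(window, len(x)):
        ref = x[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(x[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Usage: a random-walk "sensor" trace with one injected spike at index 300.
rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(0, 0.1, 500))
signal[300] += 8.0   # injected anomaly
print(rolling_zscore_anomalies(signal))
```

No labels are needed, which matches the survey's point about aviation data; the price is that the window and threshold must be tuned, and slow drifts (unlike point spikes) would evade this detector.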

    Investigating the business process implications of managing road works and street works

    Around 2.5 million utility works (street works) occurred in England in 2016, with a construction cost of approximately £2 billion. Comparative figures for highway works (road works) are not readily available but are expected to be similarly significant. Unsurprisingly, the volume of road works and street works (RWSW) activity in urban areas is considered to have a negative impact on the road network, causing disruption and premature deterioration, blighting the street scene, damaging local business trade, and significantly increasing social, economic and environmental costs. Indeed, the social costs of street works alone are estimated at around £5.1 billion annually. Despite the economic significance of highway infrastructure, the subject of road works and street works management is under-researched, with greater research emphasis on technology-based, as opposed to policy-based, management approaches. Consequently, the aim of this study was to investigate the efficiency and effectiveness of managing the business process of RWSW. Due to limited academic literature in the subject domain, earlier research focused on identifying the industry actors and their motivations, as well as drivers of and barriers to RWSW management. Semi-structured interviews with industry stakeholders highlighted the industry's complexity and revealed that several issues contributed to ineffective RWSW management. Principal problems included Street Authorities (SAs) failing to take enough ownership of the RWSW coordination process, highway legislation not encouraging joint working due to inherent challenges arising from reinstatement guarantees, and entrenched attitudes and adversarial practices in the construction industry encouraging silo working. The Derby Permit Scheme (a legislative tool) was intended to improve RWSW management by giving SAs greater control of highway works.
Accordingly, RWSW activity was tested through a statistical time series intervention analysis to separately examine the impacts on Highway Authority (HA) led works and utility industry led works over 6.5 years. The Permit Scheme was found to reduce utility works durations by around 5.4%, equivalent to 727 days, saving between £2.1 million and £7.4 million in construction and societal costs annually. Conversely, the Permit Scheme did not noticeably reduce the duration of HA-led works. Instead, the introduction of a works order management system (WOMS) to automate some of the back-office road works process was found to reduce works durations by 34%, equivalent to 6,519 days and saving between £8.3 million and £48.3 million per annum. This case study highlighted that more considered practices were required by the HA to reduce RWSW. Both the stakeholder study and the WOMS case study found that well-managed business processes tended to lead to better-executed highway works on site. Informed by these experiences, the sponsor was keen to re-engineer its internal business processes. Business process mapping was adopted to identify inefficient practices and opportunities for improved coordinated working across three key internal teams involved in the road works process. Findings revealed that silo working was inherent and that processes were built around fragmented and outdated Information Technology (IT) systems, creating inefficiencies. A subsequent validation exercise found that certain practices, such as restricted data access and hierarchical management styles, were culturally embedded and common across other local authorities. Peer-reviewed recommendations to improve working practices were made, such as adopting an integrated Highways Management IT system, vertical integration between the customer relationship management IT system and the Highways IT systems, and the provision of regulatory training.
In conclusion, based on the findings of this study, a generic logic map was created with the potential to transfer the learning to other local authorities for use when evaluating road works administrative processes.
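The interrupted time-series analysis described above can be sketched as an ordinary least-squares fit with a trend term and a step dummy at the intervention date. The monthly series, the intervention month, and the 0.5-day drop are synthetic assumptions for illustration, not the thesis's actual works-duration data.

```python
import numpy as np

rng = np.random.default_rng(2)
months = 78                      # ~6.5 years of monthly observations
intervention = 36                # policy (e.g. a permit scheme) starts here
t = np.arange(months)
step = (t >= intervention).astype(float)

# Synthetic mean works durations: 10-day baseline, mild trend,
# and a 0.5-day level drop after the intervention, plus noise.
durations = 10.0 + 0.01 * t - 0.5 * step + rng.normal(0, 0.2, months)

# OLS with intercept, linear trend, and step dummy; the step
# coefficient estimates the post-intervention level shift.
X = np.column_stack([np.ones(months), t, step])
coef, *_ = np.linalg.lstsq(X, durations, rcond=None)
print(f"estimated level shift: {coef[2]:.2f} days")
```

Including the trend term is what distinguishes intervention analysis from a naive before/after mean comparison: without it, a pre-existing drift in durations would be misattributed to the policy.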

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabit/s range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are more suited to high-bandwidth DDoS attacks than others. For example, using a local filtering appliance means that all the attack traffic still passes through the owner's network, which inherently limits the maximum capacity of such a device to the available bandwidth. BGP blackholing does not have such limitations but can, as a side effect, cause service disruptions to end-users. A different strategy, which has not attracted much attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate their service across different physical locations while keeping that service addressable with just a single IP address. It relies on BGP to load-balance users, and in practice it is combined with other mitigation strategies to allow those to scale up: operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two case studies. In the first, we focus on an anycast service during normal operations, namely Google Public DNS, and show that the routing within this service is far from optimal, for example in terms of distance between client and server.
In the second case study, we observe the root DNS while under attack and show that, even though in aggregate the bandwidth available to this service exceeded the attack we observed, clients still experienced service degradation. This degradation was caused by some sites of the anycast service receiving a much higher share of the traffic than others. For operators to improve their anycast networks and optimize their resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in terms of distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of client-to-Point-of-Presence (PoP) mapping, i.e. the anycast catchment. This method does not rely on external vantage points, is free of bias, and offers a much higher resolution than any previous method. We validated this methodology by deploying it on a locally developed testbed as well as on the B root DNS. We showed that the increased resolution of this methodology improved our ability to assess the impact of changes in network configuration, compared to previous methodologies. As final validation, we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 Points-of-Presence worldwide and an aggregate bandwidth of 30 Tbit/s. Through three real-world use cases, we demonstrate the benefits of our methodology. Firstly, we show that the changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements.
Secondly, we show that Verfploeter largely reinstates the ping to its former glory, showing how it can be used to troubleshoot network connectivity issues in an anycast context. Thirdly, we demonstrate how accurate anycast catchment maps offer operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make the datasets collected over the course of this research available as open access data; the two best (open) dataset awards these datasets received confirm that they are a valued contribution. In summary, we have investigated two large anycast services and shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and able to obtain highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering and mitigating DDoS attacks.
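The catchment mapping at the heart of this approach can be illustrated with a small aggregation step: each probe reply records which anycast site caught the traffic for a given client prefix, and the catchment is the per-site share of prefixes. The records below are made-up examples; the real methodology derives them from responses to probes sent with the anycast address as source.

```python
from collections import Counter

def catchment_shares(records):
    """records: iterable of (client_prefix, pop) pairs, where pop is the
    anycast site that received the reply for that prefix.
    Returns {pop: fraction of probed prefixes caught by that site}."""
    counts = Counter(pop for _, pop in records)
    total = sum(counts.values())
    return {pop: n / total for pop, n in counts.items()}

# Hypothetical observations: documentation prefixes mapped to two PoPs.
records = [
    ("192.0.2.0/24", "AMS"),
    ("198.51.100.0/24", "AMS"),
    ("203.0.113.0/24", "LHR"),
    ("233.252.0.0/24", "AMS"),
]
print(catchment_shares(records))  # → {'AMS': 0.75, 'LHR': 0.25}
```

Comparing such maps before and after a routing change (e.g. withdrawing an announcement at one PoP) is what lets an operator quantify how load shifts between sites.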

    Future Transportation

    Greenhouse gas (GHG) emissions associated with transportation activities account for approximately 20 percent of all carbon dioxide (CO2) emissions globally, making the transportation sector a major contributor to current global warming. This book focuses on the latest advances in technologies aimed at the sustainable future transportation of people and goods. A reduction in the burning of fossil fuels and technological transitions are the main approaches towards sustainable future transportation. Particular attention is given to automobile technological transitions, bike sharing systems, supply chain digitalization, and transport performance monitoring and optimization, among others.