
    Airspace Analysis for Greener Operations: Towards More Adoptability and Predictability of Continuous Descent Approach (CDA)

    Continuous Descent Approach (CDA), also known as Optimized Profile Descent (OPD), is an advanced flight technique in which commercial aircraft descend continuously from cruise altitude to the Final Approach Fix (FAF) or touchdown without level-offs and with an idle or near-idle thrust setting. Descending using CDA, an aircraft stays as high as possible for as long as possible, expanding the vertical distance between the aircraft's noise sources and the ground and thus significantly reducing noise levels over populated areas around airports. Descending with idle engines also reduces fuel burn, cutting both harmful emissions to the environment and fuel costs for air carriers. Due to safety considerations, CDA procedures may require more separation between aircraft, which can prevent full utilization of runway capacity; CDA has therefore been limited to low and moderate traffic levels at airports. Several studies in the literature have proposed solutions for increasing CDA implementation during periods of high traffic. However, insufficient attention has been given to defining thresholds that would help Air Traffic Control (ATC) manage and accommodate more CDA operations, both strategically and tactically. Bridging this gap is the main intent of this work. This research focuses on increasing CDA operations at airports during high traffic levels by considering the factors that impact CDA adoption as they relate to airports' demographics and the airspace around them, known as the terminal maneuvering area (TMA). To capture the effect of these factors on CDA Adoptability (CDA-A) in general, and on CDA Predictability (CDA-P) at the operational level, two approaches are introduced. The CDA-A model defines and captures the maximum traffic threshold for CDA adoption. It condenses the factors affecting CDA into a single measure, designated the Probability of Blocking: the fraction of time an aircraft's request to embark on CDA is denied. The denial may emanate from safety concerns as well as other operational conditions, such as congestion of the stacking space within the TMA. This metric should help ATC, at the strategic level, to increase CDA operations during higher traffic than is normally the case. The other approach is the CDA-P model, developed using a data-driven systems approach. It extracts traffic features such as aircraft type, speed, altitude, and rate of descent from actual flight data to aid further operational utilization of CDA in real time. By accurately predicting CDA instances during high traffic, the CDA-P model should assist ATC in adopting more CDA operations during periods of high demand. Within its framework, the CDA-P model uses Feature Engineering and Hierarchical Clustering Analysis to facilitate descent-profile visualization and labeling, and builds, trains, tests, and validates CDA predictive models using Decision Trees with AdaBoost and Support Vector Machines (SVM). The CDA-P model is validated using actual flight data from Nashville International Airport (BNA).
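    To make the two-stage idea concrete, here is a minimal sketch, assuming scikit-learn and synthetic data; feature names, cluster counts, and hyperparameters are illustrative stand-ins, not the thesis's actual configuration: hierarchical clustering labels descent profiles, which boosted decision trees and an SVM then learn to predict.

```python
# A minimal, hypothetical sketch of the CDA-P pipeline: cluster, then predict.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Hypothetical per-flight descent features: speed, altitude, rate of descent.
X = StandardScaler().fit_transform(rng.normal(size=(600, 3)))

# Stage 1: hierarchical clustering labels descent profiles (CDA-like or not).
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Stage 2: supervised models learn to predict the profile labels.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
for name, model in [
    ("AdaBoost+tree", AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=100)),
    ("SVM", SVC(kernel="rbf", gamma="scale")),
]:
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
```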

    Rusty Clusters? Dusting an IPv6 Research Foundation

    The long-running IPv6 Hitlist service is an important foundation for IPv6 measurement studies. It helps to overcome infeasible, complete address space scans by collecting valuable, unbiased IPv6 address candidates and regularly testing their responsiveness. However, the Internet itself is a quickly changing ecosystem that can affect long-running services, potentially inducing biases and obscurities into ongoing data collection. Frequent analyses and updates are necessary to keep the service valuable to the community. In this paper, we show that the existing hitlist is highly impacted by the Great Firewall of China, and we offer a cleaned view of the development of responsive addresses. While the accumulated input shows an increasing bias towards some networks, the cleaned set of responsive addresses is well distributed and shows a steady increase. Although it is a best practice to remove aliased prefixes from IPv6 hitlists, we show that this also removes major content delivery networks. More than 98% of all IPv6 addresses announced by Fastly were labeled as aliased, and Cloudflare prefixes hosting more than 10M domains were excluded. Depending on the hitlist usage, e.g., higher-layer protocol scans, inclusion of addresses from these providers can be valuable. Lastly, we evaluate different new address candidate sources, including target generation algorithms, to improve the coverage of the current IPv6 Hitlist. We show that a combination of different methodologies is able to identify 5.6M new, responsive addresses. This accounts for an increase of 174%, and combined with the current IPv6 Hitlist, we identify 8.8M responsive addresses.
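    As background on the aliasing issue, the sketch below shows the common aliased-prefix heuristic in toy form: probe a handful of pseudorandom addresses inside a prefix, and if essentially all of them answer, one machine is likely answering for the whole prefix. This is an assumption-laden illustration (it shells out to a Linux `ping` supporting `-6`), not the hitlist service's actual detection pipeline.

```python
import ipaddress
import random
import subprocess

def random_addr(prefix: ipaddress.IPv6Network) -> str:
    # Draw a pseudorandom address inside the prefix.
    offset = random.getrandbits(128 - prefix.prefixlen)
    return str(prefix.network_address + offset)

def is_responsive(addr: str, timeout_s: int = 1) -> bool:
    # One ICMPv6 echo; assumes a Linux ping that accepts -6 (an assumption).
    r = subprocess.run(["ping", "-6", "-c", "1", "-W", str(timeout_s), addr],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return r.returncode == 0

def looks_aliased(prefix_str: str, probes: int = 16) -> bool:
    # Heuristic: if (nearly) all random addresses inside the prefix answer,
    # the prefix is likely aliased, i.e. one host answers for everything.
    prefix = ipaddress.ip_network(prefix_str)
    hits = sum(is_responsive(random_addr(prefix)) for _ in range(probes))
    return hits >= probes - 1

print(looks_aliased("2001:db8::/64"))  # documentation prefix; substitute a real target
```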

    On Internet Traffic Classification: A Two-Phased Machine Learning Approach

    Traffic classification utilizing flow measurement enables operators to perform essential network management. Flow accounting methods such as NetFlow are, however, considered inadequate for classification on their own; approaches requiring additional packet-level information, host behaviour analysis, or specialized hardware see limited practical adoption. This paper aims to overcome these challenges by proposing a two-phased machine learning classification mechanism with NetFlow as input. The individual flow classes are derived per application through k-means and are further used to train a C5.0 decision tree classifier. As part of validation, the initial unsupervised phase used flow records of fifteen popular Internet applications, which were collected and independently subjected to k-means clustering to determine the unique flow classes generated per application. The derived flow classes were afterwards used to train and test a supervised C5.0-based decision tree. The resulting classifier reported an average accuracy of 92.37% on approximately 3.4 million test cases, increasing to 96.67% with adaptive boosting. The classifier specificity factor, which accounted for differentiating content-specific from supplementary flows, ranged between 98.37% and 99.57%. Furthermore, the computational performance and accuracy of the proposed methodology in comparison with similar machine learning techniques lead us to recommend its extension to other applications in achieving highly granular real-time traffic classification.
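    A minimal sketch of this two-phased mechanism, assuming scikit-learn and synthetic NetFlow-style features (C5.0 has no scikit-learn implementation, so a CART decision tree with AdaBoost stands in for it; the data and parameters are invented for illustration):

```python
# Phase 1: cluster each application's flow records into flow classes (k-means).
# Phase 2: train a boosted decision tree on the derived class labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical NetFlow features: duration, packets, bytes, mean packet size.
flows_by_app = {app: rng.normal(loc=i, size=(500, 4))
                for i, app in enumerate(["web", "video", "voip"])}

X_parts, y_parts = [], []
for app, X in flows_by_app.items():
    classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    X_parts.append(X)
    y_parts.extend(f"{app}-{c}" for c in classes)   # per-application flow classes
X, y = np.vstack(X_parts), np.array(y_parts)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50)
print("accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```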

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabits-per-second range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing, and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are more suited to high-bandwidth DDoS attacks than others. For example, using a local filtering appliance means that all the attack traffic still passes through the owner's network, which inherently limits the capacity of such a device to the bandwidth available. BGP blackholing does not have such limitations, but can, as a side effect, cause service disruptions to end-users. A different strategy, which has not attracted much attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate their service across different physical locations while keeping that service addressable with just a single IP address. It relies on BGP to load-balance users and, in practice, is combined with other mitigation strategies to allow them to scale up: operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two case studies. In the first, we focus on an anycast service during normal operations, namely Google Public DNS, and show that the routing within this service is far from optimal, for example in terms of distance between client and server. In the second case study, we observe the root DNS while it is under attack and show that, even though in aggregate the bandwidth available to this service exceeded the attack we observed, clients still experienced service degradation, caused by some sites of the anycast service receiving a much higher share of the traffic than others. For operators to improve their anycast networks and optimize their resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in their distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of client-to-Point-of-Presence (PoP) mapping, i.e., the anycast catchment. This method does not rely on external vantage points, is free of bias, and offers a much higher resolution than any previous method. We validated this methodology by deploying it on a locally developed testbed as well as on the B root DNS, and showed that its increased resolution improved our ability to assess the impact of changes in the network configuration compared to previous methodologies. As final validation, we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 Points-of-Presence and an aggregate bandwidth of 30 Tbit/s.
    Through three real-world use cases, we demonstrate the benefits of our methodology. Firstly, we show that the changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements. Secondly, we show that Verfploeter largely reinstates the ping to its former glory, showing how it can be used to troubleshoot network connectivity issues in an anycast context. Thirdly, we demonstrate how accurate anycast catchment maps offer operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make the datasets collected over the course of this research available as open access data; the two best (open) dataset awards they received confirm that they are a valued contribution. In summary, we have investigated two large anycast services and shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and obtains highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering, and mitigating DDoS attacks.
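    The core of Verfploeter's catchment mapping can be conveyed with a toy aggregation, sketched below under assumed inputs: the operator pings a hitlist from the anycast address, each PoP logs the source addresses of the echo replies it captures, and merging the per-PoP logs yields the catchment distribution. The file layout and log format here are hypothetical.

```python
from collections import Counter
from pathlib import Path

def load_replies(log_path: Path) -> set[str]:
    # Assumed format: one replying source IP per line.
    return {line.strip() for line in log_path.read_text().splitlines() if line.strip()}

def catchment_shares(logs: dict[str, set[str]]) -> dict[str, float]:
    # Map each replying client to the PoP that captured its reply, then normalize.
    owner = {ip: pop for pop, ips in logs.items() for ip in ips}
    counts = Counter(owner.values())
    total = sum(counts.values()) or 1
    return {pop: n / total for pop, n in counts.items()}

logs = {p.stem: load_replies(p) for p in Path("replies").glob("*.log")}  # e.g. ams.log, iad.log
for pop, share in sorted(catchment_shares(logs).items(), key=lambda kv: -kv[1]):
    print(f"{pop}: {share:.1%} of responsive clients")
```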

    Cloud Watching: Understanding Attacks Against Cloud-Hosted Services

    Cloud computing has dramatically changed service deployment patterns. In this work, we analyze how attackers identify and target cloud services, in contrast to traditional enterprise networks and network telescopes. Using a diverse set of cloud honeypots in 5 providers and 23 countries, as well as 2 educational networks and 1 network telescope, we analyze how IP address assignment, geography, network, and service-port selection influence what services are targeted in the cloud. We find that scanners that target cloud compute are selective: they avoid scanning networks without legitimate services and they discriminate between geographic regions. Further, attackers mine Internet-service search engines to find exploitable services and, in some cases, avoid targeting IANA-assigned protocols, causing researchers to misclassify at least 15% of traffic on select ports. Based on our results, we derive recommendations for researchers and operators. (Proceedings of the 2023 ACM Internet Measurement Conference (IMC '23), October 24-26, 2023, Montreal, QC, Canada.)

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the region served by the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to a worldwide average of 49.7%, according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience poor Quality of Service (QoS) provided at high cost. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem, and outlines the milestones of a methodology that could be adopted for similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructures to alleviate the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then unveils our timely and longitudinal inspection of African interdomain routing, using the enhanced RIPE Atlas measurement infrastructure to fill the gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques, suitable for the treatment of any set of similar measurements, to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, the currency area, or the geographic location of the country in which the ISP operates. They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on the efforts of stakeholders toward traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. To this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used to collect Time-Sequence Latency Probes (TSLP) measurements over a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links experienced noticeable indications of congestion, thus promoting peering. The causes of these events were identified through IXP operator interviews, showing how essential collaboration with stakeholders is to understanding the causes of performance degradation. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed, deployed, and tested in the AfriNIC region. This open-source web platform, titled the “African” Route-collectors Data Analyzer (ARDA), provides metrics that picture in real time the status of interconnection at different levels, using public routing information available at local route-collectors with a peering viewpoint of the Internet.
    The results highlight that only a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) are peering in the region, a fraction that remained static from April to September 2017 despite the significant growth of IXPs in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region. Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. A comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is indeed expanding; nevertheless, much of its web content is still served from the United States (US) and Europe, although Google is the most popular content source in many African countries. The same analysis, repeated across top global and regional websites, shows that even top African websites prefer to host their content abroad. The primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are then explored to outline proposals for further ISP and CP deployments. Considering the above, an option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs), as well as RTTs to the ASes hosting the top 10 global and top 10 regional Alexa websites. We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier.
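    To illustrate how TSLP-style measurements can hint at congestion (a simplified reconstruction with made-up thresholds and data, not the exact method used in the thesis): a link is suspect when the RTT to the far side of an interdomain link shows a recurring diurnal rise that the near side does not share.

```python
from statistics import median

def hourly_medians(samples):  # samples: iterable of (hour_of_day, rtt_ms)
    buckets = {}
    for hour, rtt in samples:
        buckets.setdefault(hour, []).append(rtt)
    return {h: median(v) for h, v in buckets.items()}

def looks_congested(near, far, rise_ms=5.0):
    near_m, far_m = hourly_medians(near), hourly_medians(far)
    # Diurnal elevation of the far side that the near side does not share.
    diff = {h: far_m[h] - near_m.get(h, far_m[h]) for h in far_m}
    return max(diff.values()) - min(diff.values()) > rise_ms

# Synthetic example: flat near side, far side elevated in the evening peak.
near = [(h, 10.0) for h in range(24) for _ in range(20)]
far = [(h, 12.0 + (8.0 if 18 <= h <= 22 else 0.0)) for h in range(24) for _ in range(20)]
print(looks_congested(near, far))  # True: evening rise on the far side only
```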

    Traffic synchronization with controlled time of arrival for cost-efficient trajectories in high-density terminal airspace

    The growth in air traffic has led to a continuously growing environmental sensitivity in aviation, encouraging research into methods for achieving greener air transportation. In this context, continuous descent operations (CDOs) allow aircraft to follow an optimum flight path that delivers major environmental and economic benefits: engine-idle descents from cruise altitude to just before landing that reduce fuel consumption, pollutant emissions, and noise nuisance. However, this type of operation suffers from a well-known drawback: the loss of predictability, from the air traffic control (ATC) point of view, of overfly times at the different waypoints of the route. In consequence, ATC requires large separation buffers, reducing the capacity of the airport. Previous works investigating this issue showed that the ability to meet a controlled time of arrival (CTA) at a metering fix could enable CDOs while simultaneously maintaining airport throughput. In this context, more research is needed on how modern arrival managers (AMANs) and extended arrival managers (E-AMANs) could provide support to select the appropriate CTA. ATC would be in charge of providing the CTA to the pilot, who would then use four-dimensional (4D) flight management system (FMS) trajectory management capabilities to satisfy it. A key transformation towards more efficient aircraft scheduling is the use of new air traffic management (ATM) paradigms, such as the trajectory-based operations (TBO) concept. This concept aims at completely removing open-loop vectoring and strategic constraints on trajectories by efficiently implementing a 4D trajectory negotiation process that synchronizes airborne and ground equipment, with the aim of maximizing both flight efficiency and throughput. The main objective of this PhD thesis is to develop methods to efficiently schedule arrival aircraft in terminal airspace, together with concepts of operations compliant with the TBO concept. The simulated arrival trajectories generated for all the experiments conducted in this thesis are, to the maximum possible extent, energy-neutral CDOs, seeking to reduce the overall environmental impact of aircraft operations in the ATM system. Ultimately, the objective is to achieve more efficient arrival management of traffic, with higher levels of predictability and similar levels of capacity, while maintaining the safety of operations. The designed experiments consider a TBO environment, involving tight synchronization between all the actors of the ATM system. Higher levels of automation and information sharing are expected, together with a modernization of both current ATC ground-support tools and aircraft FMSs, to comply with the new TBO paradigm.
    First of all, a trajectory optimization framework is defined and used to generate the simulated trajectories for the experiments in this thesis. Next, the benefits of flying energy-neutral CDOs are assessed by comparing them with real trajectories obtained from historical flight data; two data sources are compared, concluding which is the most suitable for efficiency studies in terminal airspace. Energy-neutral CDOs are the preferred type of trajectory from an environmental point of view but, depending on the amount of traffic, it may be impossible for ATC to assign a CTA that aircraft can meet while flying the published arrival route. This thesis therefore compares two strategies for meeting the assigned CTA: flying energy-neutral CDOs along longer or shorter routes, or flying powered descents along the published route. For both strategies, the sensitivity of fuel consumption to different parameters, such as the initial cruise altitude or the wind speed, is analyzed. Finally, two strategies for efficiently managing arrival traffic in terminal airspace are analyzed. First, an interim strategy halfway between full 4D trajectory negotiation and open-loop vectoring is used: a methodology is proposed to effectively manage arrival traffic in which aircraft fly energy-neutral CDOs on an area navigation (RNAV) procedure known as a trombone. Then, a new methodology is proposed for generating dynamic arrival routes that automatically adapt to the current traffic demand, again applying energy-neutral CDOs to all arrival traffic. Several factors could limit the benefits of the proposed solutions. The amount and distribution of arrival traffic has a large effect on the results, in some cases limiting efficient management of arriving aircraft. In addition, some of the proposed solutions entail high computational loads that could limit their operational application, motivating further research to optimize the models and methodologies used. Finally, allowing some aircraft to fly powered descents could ease the management of arriving aircraft in the experiments that focus on the trombone procedure and on the generation of dynamic arrival routes.
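    As a back-of-the-envelope illustration of why a CTA interacts with route geometry (numbers invented for the example, not taken from the thesis): over a fixed route, the reachable arrival-time window is bounded by the aircraft's speed envelope, and a CTA outside that window forces a route stretch or shortcut, which is exactly the first strategy compared above.

```python
# Toy CTA feasibility check (illustrative numbers only):
# with distance-to-go d and an operational speed envelope [v_min, v_max],
# the reachable time-of-arrival window over the published route is
# [d / v_max, d / v_min]; stretching the path shifts and widens it.
def cta_window(d_nm: float, v_min_kt: float, v_max_kt: float) -> tuple[float, float]:
    return 3600 * d_nm / v_max_kt, 3600 * d_nm / v_min_kt  # seconds

earliest, latest = cta_window(d_nm=120, v_min_kt=250, v_max_kt=310)
print(f"reachable CTA window: {earliest:.0f}-{latest:.0f} s from now")

# A CTA outside this window needs a route change: solve for the stretch that
# makes the target time reachable at a nominal speed.
target_s = 1800.0          # assigned CTA, later than the window above allows
v_nominal_kt = 280.0
extra_nm = target_s * v_nominal_kt / 3600 - 120
print(f"route stretch needed at {v_nominal_kt:.0f} kt: {extra_nm:.1f} nm")
```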

    Risk Assessment in Air Traffic Management

    One of the most complex challenges for the future of aviation is to ensure the safe integration of the expected air traffic demand. Air traffic is expected to almost double its current volume in 20 years, which cannot be managed without the development and implementation of a safe air traffic management (ATM) system. In ATM, risk assessment is a crucial cornerstone to validate the operation of air traffic flows, airport processes, and navigation accuracy. This book aims to be a focal point and to motivate further research, encompassing crosswise and widespread knowledge about this critical and exciting issue and bringing to light the different purposes and methods developed for risk assessment in ATM.

    Detecting Middlebox Interference on Applications

    Middleboxes are widely used in today's Internet, especially for security and performance. Middleboxes classify, filter, and shape traffic, thereby interfering with application behaviour and performing new network functions for end hosts. Recent studies have uncovered and studied middleboxes in different types of networks. To understand middlebox interference on traffic flows and explore the ASes involved, our methodology relies on a client-server architecture, so that both directions of the middlebox interaction can be observed. Meanwhile, probing with increasing TTL values gives us the opportunity to inspect the behaviour of middleboxes hop by hop. Implementing our methodologies, we exploit Luminati, a large-scale proxy infrastructure, to detect HTTP-interacting middleboxes across the Internet. We collect a large-scale dataset from vantage points distributed across nearly 10,000 ASes in 196 countries. Our results provide abundant evidence of middleboxes deployed across more than 1,000 ASes. We observe various kinds of middlebox interference in both directions of traffic flows and across a wide range of networks, including mobile operators and data center networks.
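    A rough sketch of the hop-by-hop probing idea, in the spirit of TTL-limited middlebox detection generally (a generic reconstruction using scapy, not the thesis's actual tool): send probes with increasing TTL and compare the copy of the probe quoted inside each ICMP time-exceeded reply against what was sent; a rewritten field betrays an on-path middlebox. Note that many routers quote only the first 28 bytes of the expired packet, in which case deeper TCP fields such as the window are simply not available for comparison.

```python
from scapy.all import ICMP, IP, TCP, TCPerror, sr1  # pip install scapy; needs root

def probe(dst: str, dport: int = 80, max_ttl: int = 20) -> None:
    for ttl in range(1, max_ttl + 1):
        probe_pkt = IP(dst=dst, ttl=ttl) / TCP(dport=dport, flags="S", window=8192)
        reply = sr1(probe_pkt, timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}: *")                                  # hop stayed silent
        elif reply.haslayer(ICMP) and reply[ICMP].type == 11:      # time exceeded
            quoted = reply[ICMP].payload                           # our probe as that hop saw it
            # Only fully-quoting hops include the TCP window; if it differs
            # from what we sent, something upstream rewrote the packet.
            if quoted.haslayer(TCPerror) and quoted[TCPerror].window != 8192:
                print(f"{ttl:2d}: {reply.src}  TCP window rewritten upstream")
            else:
                print(f"{ttl:2d}: {reply.src}")
        else:
            print(f"{ttl:2d}: reached {reply.src}")
            break

probe("192.0.2.1")  # TEST-NET address; substitute a host you are allowed to probe
```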

    Understanding Social Media through Large Volume Measurements

    The amount of user-generated web content has grown drastically in the past 15 years, and many social media services are exceedingly popular nowadays. In this thesis we study social media content creation and consumption through large-volume measurements of three prominent social media services: Twitter, YouTube, and Wikipedia. Common to these services is that they have millions of users, they are free to use, and their users can both create and consume content. The motivation behind this thesis is to examine how users create and consume social media content, investigate why social media services are as popular as they are and what drives people to contribute to them, and see whether it is possible to model the conduct of the users, for example to exploit such models in content distribution, replication, and storage. We study how various aspects of social media content, such as its creation, consumption, and popularity, can be measured, characterized, and linked to real-world occurrences. We have gathered more than 20 million tweets, metadata of more than 10 million YouTube videos, and a complete six-year page view history of 19 different Wikipedia language editions. We show, for example, daily and hourly patterns of content creation and consumption, content popularity distributions, characteristics of popular content, and user statistics. We also compare social media with traditional news services and show the interaction between social media, news, and stock prices. In addition, we combine natural language processing with social media analysis and discover interesting correlations between news and social media content. Finally, we discuss the importance of correct measurement methods and show the effects of different sampling methods, using YouTube measurements as an example.
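    The sampling point can be illustrated with a toy simulation (synthetic heavy-tailed data, not the thesis's measurements): crawling popularity-biased lists such as trending or related videos makes the sampled videos look far more popular than a uniform random sample of the same population.

```python
# Video popularity is heavy-tailed, so a popularity-proportional sample
# (a stand-in for crawling trending/related lists) inflates apparent views
# relative to uniform random sampling of video IDs.
import numpy as np

rng = np.random.default_rng(1)
views = rng.pareto(a=1.5, size=1_000_000) * 100       # heavy-tailed "view counts"

uniform = rng.choice(views, size=10_000, replace=False)
p = views / views.sum()                               # popularity-proportional pick
biased = rng.choice(views, size=10_000, replace=False, p=p)

print(f"population median: {np.median(views):8.0f}")
print(f"uniform sample:    {np.median(uniform):8.0f}")
print(f"popularity-biased: {np.median(biased):8.0f}   # looks far more popular")
```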