
    The White-Box Adversarial Data Stream Model

    We study streaming algorithms in the white-box adversarial model, where the stream is chosen adaptively by an adversary who observes the entire internal state of the algorithm at each time step. We show that nontrivial algorithms are still possible. We first give a randomized algorithm for the $L_1$-heavy hitters problem that outperforms the optimal deterministic Misra-Gries algorithm on long streams. If the white-box adversary is computationally bounded, we use cryptographic techniques to reduce the memory of our $L_1$-heavy hitters algorithm even further and to design a number of additional algorithms for graph, string, and linear algebra problems. The existence of such algorithms is surprising, as the streaming algorithm does not even have a secret key in this model, i.e., its state is entirely known to the adversary. One algorithm we design estimates the number of distinct elements in a stream with insertions and deletions, achieving a multiplicative approximation in sublinear space; such a guarantee is impossible for deterministic algorithms. We also give a general technique that translates any two-player deterministic communication lower bound into a lower bound for randomized algorithms robust to a white-box adversary. In particular, our results show that for all $p \ge 0$, there exists a constant $C_p > 1$ such that any $C_p$-approximation algorithm for $F_p$ moment estimation in insertion-only streams with a white-box adversary requires $\Omega(n)$ space for a universe of size $n$. Similarly, there is a constant $C > 1$ such that any $C$-approximation algorithm for matrix rank in an insertion-only stream requires $\Omega(n)$ space with a white-box adversary. Our algorithmic results based on cryptography thus show a separation between computationally bounded and unbounded adversaries. (Abstract shortened to meet arXiv limits.)
    Comment: PODS 202
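
    As context for the deterministic baseline the paper compares against, here is a minimal Python sketch of the Misra-Gries summary for $L_1$-heavy hitters; the stream interface and the choice of k are illustrative, not taken from the paper.

```python
def misra_gries(stream, k):
    """Deterministic L1-heavy-hitters summary using at most k-1 counters.

    Every element with frequency > len(stream)/k is guaranteed to survive
    in the returned summary; its count may be underestimated by at most
    len(stream)/k.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; drop any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

# Example: with k=4, every item occurring in more than 1/4 of the
# 10-element stream is retained.
print(misra_gries(list("aababcabad"), k=4))  # {'a': 4, 'b': 2}
```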

    Anomaly-Based Intrusion Detection by Modeling Probability Distributions of Flow Characteristics

    In recent years, with the increased use of network communication, the risk of information compromise has grown immensely. Intrusions have evolved and become more sophisticated, so classical detection systems perform poorly at detecting novel attacks. Although much research has been devoted to improving the performance of intrusion detection systems, few methods achieve consistently efficient results under the constant changes in network communications. This thesis proposes an intrusion detection system based on modeling distributions of network flow statistics in order to achieve a high detection rate for both known and stealthy attacks. The proposed model aggregates the traffic at the IP subnetwork level using a hierarchical heavy hitters algorithm. This aggregated traffic is used to build the distribution of network statistics for the most frequent IPv4 addresses encountered as destinations. The obtained probability density functions are learned by the Extreme Learning Machine method, a single-hidden-layer feedforward neural network. Different sequential and batch learning strategies are proposed in order to analyze the efficiency of the approach. The performance of the model is evaluated on the ISCX-IDS 2012 dataset, which consists of injection attacks, HTTP flooding, DDoS, and brute-force intrusions. The experimental results indicate that the presented method achieves an average detection rate of 91% with a low misclassification rate of 9%, on par with state-of-the-art approaches on this dataset. In addition, the proposed method can be utilized as a network behavior analysis tool, specifically for DDoS mitigation, since it can isolate aggregated IPv4 addresses from the rest of the network traffic and thus support filtering out DDoS attacks.
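
    As a minimal sketch of the Extreme Learning Machine training step the thesis builds on, assuming NumPy; the sigmoid activation, hidden-layer size, and ridge term are illustrative choices, and the feature/label shapes merely stand in for the flow-statistic distributions described above.

```python
import numpy as np

def train_elm(X, Y, n_hidden=100, reg=1e-3, seed=0):
    """Extreme Learning Machine: random hidden layer, closed-form output weights.

    X: (n_samples, n_features) inputs, e.g. per-destination flow statistics.
    Y: (n_samples, n_outputs) targets (one-hot labels for classification).
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random input weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations
    # Output weights via ridge-regularized least squares -- no backpropagation.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

    The closed-form solve is what makes ELMs attractive for the sequential and batch learning strategies compared in the thesis: retraining reduces to a single linear-algebra step.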

    Characterizing the IoT ecosystem at scale

    Internet of Things (IoT) devices are extremely popular with home, business, and industrial users. To provide their services, they typically rely on a backend server infrastructure on the Internet, which collectively forms the IoT ecosystem. This ecosystem is rapidly growing and offers users an increasing number of services. It has also been a source and target of significant security and privacy risks; one notable example is the recent large-scale coordinated global attacks, like Mirai, which disrupted large service providers. Characterizing this ecosystem therefore yields insights that help end-users, network operators, policymakers, and researchers better understand it, obtain a detailed view, and keep track of its evolution. In addition, they can use these insights to inform their decision-making for mitigating the ecosystem's security and privacy risks. In this dissertation, we characterize the IoT ecosystem at scale by (i) detecting IoT devices in the wild, (ii) conducting a case study on how deployed IoT devices can affect users' privacy, and (iii) detecting and measuring the IoT backend infrastructure. To conduct our studies, we collaborated with a large European Internet Service Provider (ISP) and a major European Internet eXchange Point (IXP). They routinely collect large volumes of passive, sampled data, e.g., NetFlow and IPFIX, for operational purposes. These data sources give providers insights about their networks, and we used them to characterize the IoT ecosystem at scale. We start with IoT devices and study how to track and trace their activity in the wild. We developed and evaluated a scalable methodology to accurately detect and monitor IoT devices from the limited, sparsely sampled data at the ISP and IXP. Next, we conducted a case study to measure how the myriad deployed devices can affect the privacy of ISP subscribers. Unfortunately, we found that the privacy of a substantial fraction of IPv6 end-users is at risk. We noticed that a single device at home that encodes its MAC address into its IPv6 address can be utilized as a tracking identifier for the entire end-user prefix, even if other devices use IPv6 privacy extensions. Our results showed that IoT devices contribute the most to this privacy leakage. Finally, we focus on the backend server infrastructure and propose a methodology to identify and locate IoT backend servers operated by cloud services and IoT vendors, analyzing their IoT traffic patterns as observed at the ISP. Our analysis sheds light on their diverse operational and deployment strategies. The need to issue a priori unknown, network-wide queries against the large volumes of network flow capture data used in our studies motivated us to develop Flowyager, a system built on top of existing traffic capture utilities that relies on flow summarization techniques to reduce (i) the storage and transfer cost of flow captures and (ii) query response time. We deployed a prototype of Flowyager at both the IXP and the ISP.
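
    The MAC-based tracking identifier described above comes from the standard EUI-64 construction used by SLAAC, which embeds the device's MAC address around the ff:fe marker bytes of the interface identifier. A minimal detection sketch in Python follows; the sample address is made up, and the dissertation's actual detection pipeline is not shown here.

```python
import ipaddress

def embedded_mac(addr: str):
    """Return the MAC address embedded in an EUI-64-derived IPv6 address,
    or None if the interface identifier does not follow the SLAAC pattern."""
    b = ipaddress.IPv6Address(addr).packed
    iid = b[8:]                      # low 64 bits = interface identifier
    if iid[3] != 0xFF or iid[4] != 0xFE:
        return None                  # no ff:fe marker -> likely a privacy address
    first = iid[0] ^ 0x02            # undo the flipped universal/local bit
    mac = bytes([first]) + iid[1:3] + iid[5:]
    return ":".join(f"{x:02x}" for x in mac)

# Hypothetical example: IID 0211:22ff:fe33:4455 encodes MAC 00:11:22:33:44:55,
# so this single device exposes a stable identifier for its whole prefix.
print(embedded_mac("2001:db8::211:22ff:fe33:4455"))  # -> 00:11:22:33:44:55
```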

    A Survey on Data Plane Programming with P4: Fundamentals, Advances, and Applied Research

    With traditional networking, users can configure control plane protocols to match the specific network configuration, but without the ability to fundamentally change the underlying algorithms. With SDN, users may provide their own control plane that controls network devices through their data plane APIs. Programmable data planes allow users to define their own data plane algorithms for network devices, including the appropriate data plane APIs, which may be leveraged by user-defined SDN control. Thus, programmable data planes and SDN offer great flexibility for network customization, be it for specialized commercial appliances, e.g., in 5G or data center networks, or for rapid prototyping in industrial and academic research. Programming Protocol-independent Packet Processors (P4) has emerged as the currently most widespread abstraction, programming language, and concept for data plane programming. It is developed and standardized by an open community, and it is supported by various software and hardware platforms. In this paper, we survey the literature from 2015 to 2020 on data plane programming with P4. Our survey covers 497 references, of which 367 are scientific publications. We organize our work into two parts. In the first part, we give an overview of data plane programming models, the programming language, architectures, compilers, targets, and data plane APIs. We also consider research efforts to advance P4 technology. In the second part, we analyze a large body of literature on P4-based applied research. We categorize 241 research papers into different application domains, summarize their contributions, and extract prototypes, target platforms, and source code availability.
    Comment: Submitted to IEEE Communications Surveys and Tutorials (COMS) on 2021-01-2

    Accurate and Resource-Efficient Monitoring for Future Networks

    Monitoring functionality is a key component of any network management system. It is essential for profiling network resource usage, detecting attacks, and capturing the performance of the multitude of services using the network. Traditional monitoring solutions operate on long timescales, producing periodic reports that are mostly used for manual and infrequent network management tasks. These practices, however, have recently been questioned by the advent of Software Defined Networking (SDN). By empowering management applications with the right tools to perform automatic, frequent, and fine-grained network reconfigurations, SDN has made these applications more dependent than before on the accuracy and timeliness of monitoring reports. As a result, monitoring systems are required to collect considerable amounts of heterogeneous measurement data, process them in real time, and expose the resulting knowledge on short timescales to network decision-making processes. Satisfying these requirements is extremely challenging given today's larger network scales, massive and dynamic traffic volumes, and the stringent constraints on time availability and hardware resources. This PhD thesis tackles this important challenge by investigating how an accurate and resource-efficient monitoring function can be realised in the context of future, software-defined networks. Novel monitoring methodologies, designs, and frameworks are provided, which scale with increasing network sizes and automatically adjust to changes in the operating conditions. These achieve the goals of efficient measurement collection and reporting, lightweight measurement-data processing, and timely delivery of monitoring knowledge.
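
    The abstract does not name a concrete data structure, but one standard illustration of the kind of lightweight, fixed-memory measurement-data processing it describes is a Count-Min sketch. The Python sketch below is a minimal version with assumed width/depth parameters and hash scheme, not an implementation from the thesis.

```python
import hashlib

class CountMinSketch:
    """Fixed-memory frequency summary: estimates counts with one-sided error."""

    def __init__(self, width=2048, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        # One independent hash per row, derived via a per-row salt.
        h = hashlib.blake2b(key.encode(), salt=row.to_bytes(2, "big")).digest()
        return int.from_bytes(h[:8], "big") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # Never underestimates; overestimation shrinks as width grows.
        return min(self.table[row][self._index(row, key)]
                   for row in range(self.depth))
```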

    History Matching and Optimization Using Stochastic Methods: Applications to Chemical Flooding

    A typical lifecycle of an oil and gas field is characterized by three stages: primary recovery by natural depletion, secondary recovery by fluid injection, and enhanced oil recovery (EOR). The primary goal of reservoir management is to increase hydrocarbon recovery while reducing capital and operational expenditures. Two key techniques for successful reservoir management are model calibration and production optimization. History matching is used to calibrate existing geological models against measured data and to predict the range of future recovery. Production optimization on calibrated reservoir models provides an economic assessment of different field development plans and suggests optimal strategies to maximize recovery and minimize cost. We first presented the workflow of history matching in chemical flooding. Evolutionary algorithms are the method of choice due to their capability of calibrating various parameter types and their global search nature. The chemical flooding simulator UTCHEM, developed by The University of Texas at Austin, is coupled into the history matching process to account for complex mechanisms such as phase behavior and chemical and physical transformations. Next, we implemented the proposed workflow to calibrate models in multiple stages, which can efficiently reduce the large number of uncertain parameters in alkaline-surfactant-polymer (ASP) flooding. Each stage of model calibration proceeds first at the field scale and then at the individual well scale, accounting for behaviors introduced by ASP flooding, such as surfactant/polymer adsorption. The proposed multi-stage history matching workflow delivers better history matching results and significantly reduces the uncertainty of the large number of parameters involved in chemical flooding. Lastly, we extended the evolutionary workflows to multi-objective optimization by introducing the concept of Pareto optimality. A Pareto-front method is proposed to handle conflicting objective functions, such as oil production and chemical efficiency, in place of the weighted-sum method for optimizing ASP flooding. The Non-dominated Sorting Genetic Algorithm (NSGA-II) is used to search for Pareto-optimal solutions. The robustness and practical feasibility of our approaches have been demonstrated through both synthetic and field examples.
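
    As a minimal sketch of the Pareto-dominance test at the core of NSGA-II's non-dominated sorting, in Python and assuming both objectives are minimized (e.g., chemical cost and negative oil recovery); the candidate values below are made-up placeholders, not results from the thesis.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Extract the first non-dominated front, as in NSGA-II's sorting step."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (chemical cost, -oil recovery) pairs for four ASP strategies.
candidates = [(3.0, -0.52), (2.0, -0.48), (4.0, -0.55), (2.5, -0.48)]
print(pareto_front(candidates))
# -> the trade-off set no other candidate dominates; (2.5, -0.48) is dropped
#    because (2.0, -0.48) is cheaper with equal recovery.
```

    Unlike a weighted sum, this returns the whole trade-off curve in one run, which is why the thesis prefers it for conflicting objectives.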

    Model Calibration, Drainage Volume Calculation and Optimization in Heterogeneous Fractured Reservoirs

    We propose a rigorous approach for well drainage volume calculations in gas reservoirs based on the flux field derived from dual-porosity finite-difference simulation and demonstrate its application to optimizing well placement. Our approach relies on a high-frequency asymptotic solution of the diffusivity equation and emulates the propagation of a 'pressure front' in the reservoir along gas streamlines. The proposed approach is a generalization of the radius-of-drainage concept in well test analysis (Lee 1982), which allows us not only to rigorously compute well drainage volumes as a function of time but also to examine the potential impact of infill wells on the drainage volumes of existing producers. Using these results, we present a systematic approach to optimizing well placement to maximize the Estimated Ultimate Recovery. A history matching algorithm is proposed that sequentially calibrates reservoir parameters from the global to the local scale, considering parameter uncertainty and the resolution of the data. Parameter updates are constrained to the prior geologic heterogeneity and performed parsimoniously, down to the smallest spatial scales at which they can be resolved by the available data. In the first step of the workflow, a Genetic Algorithm is used to assess the uncertainty in global parameters that influence field-scale flow behavior, specifically reservoir energy. To identify the reservoir volume over which each regional multiplier is applied, we have developed a novel approach to heterogeneity segmentation based on spectral clustering theory. The proposed clustering captures the main features of the prior model using the second eigenvector of the graph affinity matrix. In the second stage of the workflow, we parameterize the high-resolution heterogeneity in the spectral domain using the Grid Connectivity-based Transform (GCT) to sharply compress the dimension of the calibration parameter set. The GCT implicitly imposes geological continuity and promotes minimal changes to each prior model in the ensemble during the calibration process. The field-scale utility of the workflow is then demonstrated with the calibration of a model characterizing a structurally complex and highly fractured reservoir.
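
    As a minimal sketch of second-eigenvector segmentation, here is a spectral bisection in Python. The thesis uses the second eigenvector of the graph affinity matrix itself; this sketch uses the closely related Fiedler vector of the unnormalized graph Laplacian, and the small affinity matrix is made up rather than derived from a reservoir grid.

```python
import numpy as np

def spectral_bisect(A):
    """Split nodes into two clusters by the sign of the second (Fiedler)
    eigenvector of the unnormalized graph Laplacian L = D - A."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0                    # boolean cluster labels

# Toy affinity: two triangles joined by one weak edge; the bisection
# recovers the two groups (cluster signs may flip between runs).
A = np.array([[0, 1, 1, 0,   0, 0],
              [1, 0, 1, 0,   0, 0],
              [1, 1, 0, 0.1, 0, 0],
              [0, 0, 0.1, 0, 1, 1],
              [0, 0, 0,   1, 0, 1],
              [0, 0, 0,   1, 1, 0]], dtype=float)
print(spectral_bisect(A))
```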