
    Doctor of Philosophy

    The next generation mobile network (i.e., the 5G network) is expected to host emerging use cases with a wide range of requirements, from Internet of Things (IoT) devices that prefer a low-overhead, scalable network to remote machine operation or remote healthcare services that require reliable end-to-end communication. Improving scalability and reliability is among the most important challenges in designing the next generation mobile architecture. The current (4G) mobile core network relies heavily on hardware-based proprietary components. These core networks are expensive and are therefore available in only a limited number of locations in the country, which leads to high end-to-end latency (due to the long distance between base stations and the mobile core) and limits innovation and network evolvability. Moreover, at the protocol level, the current mobile network architecture was designed for a limited number of smartphones streaming large amounts of high-quality traffic, not for a massive number of low-capability devices sending small, sporadic traffic. The result is a high-overhead control and data plane in the mobile core that is unsuitable for a massive number of future IoT devices.

    In terms of reliability, network operators already deploy multiple monitoring systems to detect service disruptions and fix problems when they occur. However, detecting all service disruptions is challenging: first, there is a complex relationship between network status and user-perceived service experience; second, service disruptions can happen for reasons beyond the network itself. With advances in Software-Defined Networking (SDN) and Network Function Virtualization (NFV), the next generation mobile network is expected to be NFV-based and deployed on NFV platforms. However, in contrast to telecom-grade hardware with built-in redundancy, commodity off-the-shelf (COTS) hardware in NFV platforms often cannot offer comparable reliability. Telecom-grade mobile core hardware typically achieves 99.999% availability (i.e., "five-9s"), while most NFV platforms guarantee only "three-9s" availability - orders of magnitude more downtime. Therefore, an NFV-based mobile core network needs extra mechanisms to guarantee its availability.

    This Ph.D. dissertation focuses on using SDN/NFV, data analytics, and distributed systems techniques to enhance the scalability and reliability of the next generation mobile core network. The dissertation makes the following contributions. First, it presents SMORE, a practical offloading architecture that reduces end-to-end latency and enables new functionality in mobile networks. It then presents SIMECA, a lightweight and scalable mobile core network designed for a massive number of future IoT devices. Second, it presents ABSENCE, a passive service monitoring system that uses customer usage data and data analytics to detect silent failures in an operational mobile network. Lastly, it presents ECHO, a distributed mobile core network architecture that improves the availability of an NFV-based mobile core in public clouds.
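
    To make the gap between "five-9s" and "three-9s" concrete, a back-of-the-envelope Python calculation (an illustrative sketch, not taken from the dissertation) converts availability percentages into permitted downtime per year:

        # Convert an availability percentage into allowable downtime per year.
        MINUTES_PER_YEAR = 365 * 24 * 60

        def downtime_minutes_per_year(availability_pct):
            """Minutes of downtime per year permitted at a given availability."""
            return MINUTES_PER_YEAR * (1 - availability_pct / 100)

        for label, pct in [("five-9s", 99.999), ("three-9s", 99.9)]:
            print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):,.1f} min/year")

        # five-9s (99.999%): 5.3 min/year
        # three-9s (99.9%): 525.6 min/year, roughly 100x more downtime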

    Evaluating Scalable Distributed Erlang for Scalability and Reliability

    Large-scale servers with hundreds of hosts and tens of thousands of cores are becoming common. To exploit these platforms, software must be both scalable and reliable, and distributed actor languages like Erlang are a proven technology in this area. While distributed Erlang conceptually supports the engineering of large-scale reliable systems, in practice it has scalability limits that force developers to depart from the standard language mechanisms at scale. In earlier work we explored these scalability limitations and addressed them by providing a Scalable Distributed (SD) Erlang library that partitions the network of Erlang Virtual Machines (VMs) into scalable groups (s_groups). This paper presents the first systematic evaluation of SD Erlang s_groups and associated tools, and of how they can be used. We present a comprehensive evaluation of the scalability and reliability of SD Erlang using three typical benchmarks and a case study. We demonstrate that s_groups improve the scalability of reliable and unreliable Erlang applications on up to 256 hosts (6,144 cores). We show that SD Erlang preserves the class-leading distributed Erlang reliability model, but scales far better than the standard model. We present a novel, systematic, and tool-supported approach for refactoring distributed Erlang applications into SD Erlang. We outline the new and improved monitoring, debugging, and deployment tools for large-scale SD Erlang applications. We demonstrate the scaling characteristics of key tools on systems comprising up to 10,000 Erlang VMs.
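
    Much of the scalability pressure on distributed Erlang comes from its default fully connected mesh: every VM keeps a connection to every other VM, so the connection count grows quadratically. The Python sketch below (my illustration, with assumed group sizes; real s_groups may overlap and route between groups via gateway nodes) shows how partitioning the VMs into disjoint groups shrinks the total connection count:

        def mesh_connections(n):
            """Connections in distributed Erlang's default fully connected mesh."""
            return n * (n - 1) // 2

        def grouped_connections(n, group_size):
            """Connections when n VMs are split into disjoint groups of
            group_size, fully connected only inside each group."""
            return (n // group_size) * mesh_connections(group_size)

        n = 10_000  # the abstract's largest tool experiments reach ~10,000 VMs
        for size in (50, 100, 200):
            print(f"group size {size:>3}: {grouped_connections(n, size):>10,} "
                  f"connections vs {mesh_connections(n):,} in a full mesh")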

    SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine

    Client-side logic and storage are increasingly used in web and mobile applications to improve response time and availability. Current approaches tend to be ad hoc and poorly integrated with the server-side logic. We present a principled approach to integrating client- and server-side storage. We support mergeable and strongly consistent transactions that target either client or server replicas and provide efficient access to causally consistent snapshots. In the presence of infrastructure faults, a client-assisted failover solution allows client execution to resume immediately and seamlessly access consistent snapshots without waiting. We implement this approach in SwiftCloud, the first transactional system to bring geo-replication all the way to the client machine. Example applications show that our programming model is useful across a range of application areas. Our experimental evaluation shows that SwiftCloud provides better fault tolerance and at the same time can improve both latency and throughput by up to an order of magnitude compared to classical geo-replication techniques.
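
    One way to picture client-assisted failover is with causal metadata: the client tracks a vector clock summarizing the updates it has observed, and after a fault it can only resume against a replica whose state dominates that clock; otherwise a causal dependency would be missing. The minimal Python sketch below uses invented names and simplified metadata (SwiftCloud's actual protocol also lets the client replay its own buffered updates to help a lagging replica catch up):

        def dominates(server_vc, client_vc):
            """True if the server state already includes every update the
            client has observed, so no causal dependency would be missing."""
            return all(server_vc.get(r, 0) >= n for r, n in client_vc.items())

        def pick_failover_replica(client_vc, replicas):
            """Return a replica the client can safely resume against,
            or None if every reachable replica is causally behind."""
            for name, vc in replicas.items():
                if dominates(vc, client_vc):
                    return name
            return None

        client = {"dc1": 5, "dc2": 2}
        replicas = {"dc2": {"dc1": 4, "dc2": 7},   # missing a dc1 update: unsafe
                    "dc3": {"dc1": 6, "dc2": 3}}   # dominates: safe to resume
        print(pick_failover_replica(client, replicas))  # -> "dc3"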

    Network Factors Influencing Packet Loss in Online Games

    In real-time communications it is often vital that data arrive at its destination in a timely fashion. Whether for the user experience of online games or the reliability of tele-surgery, a reliable, consistent, and predictable communications channel between source and destination is important. However, the Internet as we know it was designed to ensure that data arrives at the desired destination, not to provide predictable, low-latency communication. Data traveling from point to point on the Internet is divided into smaller units known as packets. As these packets traverse the Internet, they encounter routers or similar devices that often queue packets before sending them toward their destination. Queuing introduces a delay that depends greatly on the router configuration and the number of other packets on the network. In times of high demand, packets may be discarded by the router or lost in transmission. Protocols exist that retransmit lost packets, but they introduce additional overhead and delays - costs that may be prohibitive in some applications. Being able to predict when packets may be delayed or lost could allow applications to compensate for unreliable data channels. In this thesis I investigate the effects of cross traffic and router configuration on a low-bandwidth traffic stream such as is common in games. The experiments examine the effects of cross-traffic packet size, bit rate, inter-packet timing, and protocol used, as well as router configurations including the queue management type and the number of queues. These experiments are compared to real-world data, and a mitigation strategy, where the n previous packets are bundled with each new packet, is applied to both the simulated data and the real-world captures. The experiments indicate that most of the parameters explored had an impact on packet loss. However, the real-world and simulated data differ, and additional work would be required to apply the lessons learned to real-world applications. The mitigation strategy appeared to work well, allowing 90% of all runs to complete without data loss. However, the mitigation strategy was evaluated analytically; its actual implementation and testing are left for future work.
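
    The bundling mitigation is easy to sketch: if every datagram also carries the n previous payloads, a payload is unrecoverable only when n+1 consecutive packets are all lost. Below is a small Monte Carlo illustration in Python assuming independent random loss (my simplification; the thesis studies queue-induced, correlated loss):

        import random

        def simulate_bundling(num_packets, loss_rate, n):
            """Count payloads unrecoverable when each packet carries the
            n previous payloads: payload i is lost only if packets
            i, i+1, ..., i+n are all dropped (slices clamp at the end)."""
            dropped = [random.random() < loss_rate for _ in range(num_packets)]
            return sum(all(dropped[i : i + n + 1]) for i in range(num_packets))

        random.seed(1)
        for n in (0, 1, 2, 3):
            lost = simulate_bundling(100_000, 0.05, n)
            print(f"bundle n={n}: {lost} of 100,000 payloads unrecoverable")

        # At a 5% loss rate, each extra bundled payload cuts the unrecoverable
        # count by roughly a factor of 20 (0.05^(n+1) per payload).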

    Quantitative Verification and Synthesis of Resilient Networks


    Distributed Inference and Learning with Byzantine Data

    We are living in an increasingly networked world, with sensing networks of varying shapes and sizes: a network often comprises many tiny devices (or nodes) communicating with each other via different topologies. To complicate matters further, the nodes in the network can be unreliable due to noise, faults, or attacks, and can thus provide corrupted data. Although statistical inference has long been an active area of research, distributed learning and inference in a networked setup with potentially unreliable components has gained attention only recently. The emergence of the big and dirty data era demands new distributed learning and inference solutions that tackle the problem of inference with corrupted data. Distributed inference networks (DINs) consist of a group of networked entities which acquire observations regarding a phenomenon of interest (POI) and collaborate with other entities in the network by sharing their inferences via different topologies to make a global inference. The central goal of this thesis is to analyze the effect of corrupted (or falsified) data on the inference performance of DINs and to design robust strategies that ensure reliable overall performance for several practical network architectures. Specifically, the inference (or learning) process can be detection, estimation, or classification, and the topology of the system can be parallel, hierarchical, or fully decentralized (peer to peer). Note that the corrupted-data model may seem similar to the scenario where local decisions are transmitted over a Binary Symmetric Channel (BSC) with a certain crossover probability; however, there are fundamental differences. Over the last three decades, the research community has extensively studied the impact of transmission channels or faults on distributed detection systems and related problems, owing to their importance in several applications. However, the corrupted (Byzantine) data models considered in this thesis are philosophically different from the BSC and faulty-sensor cases: Byzantines are intentional and intelligent, and can therefore optimize over the data corruption parameters. Thus, in contrast to channel-aware detection, both the fusion center (FC) and the Byzantines can optimize their utility by choosing their actions based on knowledge of their opponent's behavior. The study of these practically motivated scenarios in the presence of Byzantines is of utmost importance, and is missing from the channel-aware and fault-tolerant detection literature. This thesis advances the distributed inference literature by providing fundamental limits of distributed inference with Byzantine data and by providing optimal countermeasures (using the insights provided by these fundamental limits) from a network designer's perspective. Note that the analysis of problems involving strategic interaction between Byzantines and the network designer is very challenging (NP-hard in many cases). However, we show that efficient solutions can be obtained by exploiting the properties of the network architecture: several problems related to the design of optimal countermeasures in the inference context turn out to be special cases of these NP-hard problems that can be solved in polynomial time. First, we consider the problem of distributed Bayesian detection in the presence of data falsification (or Byzantine) attacks in the parallel topology.

    Byzantines considered in this thesis are nodes that have been compromised and reprogrammed by an adversary to transmit false information to the centralized FC in order to degrade detection performance. We show that above a certain fraction of Byzantine attackers in the network, the detection scheme becomes completely incapable of utilizing the sensor data for detection (i.e., it is blind). When the fraction of Byzantines is not sufficient to blind the FC, we provide closed-form expressions for the optimal attacking strategies that most degrade detection performance. Optimal attacking strategies in certain cases have the minimax property; knowledge of these strategies therefore has practical significance and can be used to implement a robust detector at the FC. In several practical situations the parallel topology cannot be implemented, due to limiting factors such as the FC being outside the communication range of the nodes or the limited energy budget of the nodes. In such scenarios a multi-hop network is employed, where nodes are organized hierarchically into multiple levels (tree networks). Next, we study the problem of distributed inference in tree topologies in the presence of Byzantines under several practical scenarios. We analytically characterize the effect of Byzantines on the inference performance of the system, and we examine possible countermeasures from the FC's perspective to protect the network from these Byzantines. These countermeasures are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Using learning-based techniques, we design Byzantine identification schemes that learn the identities of Byzantines in the network and use this information to improve system performance. For scenarios where this is not possible, we develop Byzantine-tolerant schemes, based on game theory and error-correcting codes, that tolerate the effect of Byzantines while maintaining reasonably good inference performance. Going a step further, we also consider scenarios where a centralized FC is not available. In such scenarios, one solution is to employ detection approaches based on fully distributed consensus algorithms, where all nodes exchange information only with their neighbors. For such networks, we analytically characterize the negative effect of Byzantines on the steady-state and transient detection performance of conventional consensus-based detection schemes. To avoid performance deterioration, we propose a distributed weighted-average consensus algorithm that is robust to Byzantine attacks. Next, we exploit the statistical distribution of the nodes' data to devise techniques for mitigating the influence of data-falsifying Byzantines on the distributed detection system. Since some parameters of this distribution might not be known a priori, we propose learning-based techniques that enable an adaptive design of the local fusion or update rules. The above considerations highlight the negative effect of corrupted data on inference performance; however, it is possible for a system designer to utilize corrupted data for the network's benefit. Finally, we consider the problem of detecting a high-dimensional signal from compressed measurements with secrecy guarantees, in a scenario where the network operates in the presence of an eavesdropper who wants to discover the state of nature being monitored by the system.

    To keep the data secret from the eavesdropper, we propose using cooperating trustworthy nodes that assist the FC by injecting corrupted data into the system to deceive the eavesdropper. We also design the system by determining the optimal parameter values that maximize detection performance at the FC while ensuring perfect secrecy against the eavesdropper.
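
    The "blinding" phenomenon has a simple intuition for binary detection with majority-vote fusion: as the fraction alpha of always-flipping Byzantine nodes grows, the fused decision carries less and less information, reaching coin-flip accuracy around alpha = 0.5 for this naive attack. The Python Monte Carlo sketch below (my illustration with assumed local detection probabilities, not the thesis's analytical model) shows the effect:

        import random

        def fc_accuracy(num_nodes=101, p_correct=0.7, alpha=0.0, trials=2000):
            """Accuracy of majority-vote fusion when a fraction alpha of
            nodes always flips its binary local decision."""
            byz = int(alpha * num_nodes)
            hits = 0
            for _ in range(trials):
                truth = random.randint(0, 1)
                votes = 0
                for i in range(num_nodes):
                    local = truth if random.random() < p_correct else 1 - truth
                    if i < byz:          # Byzantine node: report the opposite
                        local = 1 - local
                    votes += local
                guess = 1 if votes > num_nodes // 2 else 0
                hits += (guess == truth)
            return hits / trials

        random.seed(7)
        for alpha in (0.0, 0.2, 0.4, 0.5):
            print(f"alpha={alpha:.1f}: accuracy {fc_accuracy(alpha=alpha):.2f}")

        # Accuracy degrades toward 0.5 (a coin flip) as alpha approaches 0.5,
        # the blinding point for this always-flip attack.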

    Resource provision in object oriented distributed systems


    Critical Thinking Skills Profile of High School Students in Learning Science-Physics

    This study aims to describe the critical thinking skills of high school students in the city of Makassar. To achieve this goal, the researchers analyzed the test results of 200 students across six schools in the city of Makassar. A quantitative descriptive analysis of the data found that students' average scores on interpretation, analysis, and inference were 1.53, 1.15, and 1.52, respectively. These values are very low compared with the maximum possible score of 10.00, showing that the critical thinking skills of these high school students are still very low. One of the competency standards for science-physics subjects is demonstrating the ability to think logically, critically, and creatively with the guidance of teachers, and demonstrating the ability to solve simple problems in daily life. Indeed, Michael Scriven argues that a main task of education is to train students to think critically, because the demands of work in the global economy, the survival of democracy, and personal decisions in an increasingly complex society require people who can think well and make good judgments. Therefore, teachers need learning scenarios built around elements such as a driving question or problem and authentic investigation of science processes.

    Computer Science Principles with Java

    This textbook is intended to be used for a first course in computer science, such as the College Board's Advanced Placement course known as AP Computer Science Principles (CSP). This book includes all the topics on the CSP exam, plus some additional topics. It takes a breadth-first approach, with an emphasis on the principles that form the foundation for hardware and software. No prior experience with programming is required to use this book. This version of the book uses the Java programming language.