117 research outputs found

    Deployable filtering architectures against large denial-of-service attacks

    Denial-of-Service attacks continue to grow in size and frequency despite serious underreporting. While several research solutions have been proposed over the years, important deployment hurdles have prevented them from seeing any significant level of deployment on the Internet. Commercial solutions exist, but they are costly and generally not meant to scale to Internet-wide levels. In this thesis we present three filtering architectures against large Denial-of-Service attacks. Their emphasis is on providing an effective defence against such attacks while using simple mechanisms, in order to overcome the deployment hurdles faced by other solutions. While the architectures are well suited to implementation in fast routing hardware, this is unlikely to happen in the early stages of deployment. Because of this, we implemented them on low-cost off-the-shelf hardware and evaluated their performance on a network testbed. The results are very encouraging: this setup allows us to forward traffic on a single PC at rates of millions of packets per second, even for minimum-sized packets, while simultaneously processing as many as one million filters; this gives us confidence that the architecture as a whole could combat even the large botnets currently being reported. Better yet, we show that this single-PC performance scales well with the number of CPU cores and network interfaces, which is promising given the current trend in processor design. In addition to using simple mechanisms, we discuss how the architectures provide clear incentives for ISPs that adopt them early, both at the destination and at the sources of attacks. The hope is that these incentives will be sufficient to achieve some level of initial deployment. The larger goal is to have an architectural solution against large DoS attacks in place before even more harmful attacks take place; this thesis is hopefully a step in that direction.
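    The claim of forwarding millions of packets per second while holding a million filters hinges on the per-packet drop decision being independent of the number of installed filters. A minimal software sketch of that idea, using a hash set keyed on (source, destination) pairs; the names are illustrative and the thesis's actual filter structure may differ:

    ```python
    # Hypothetical sketch: constant-time per-packet filter lookup.
    # A hash set keyed on (source, destination) pairs makes the drop
    # decision independent of how many filters are installed.

    class FilterBank:
        def __init__(self):
            self._blocked = set()          # installed (src, dst) filters

        def install(self, src, dst):
            self._blocked.add((src, dst))  # one filter per attacking source

        def should_drop(self, src, dst):
            # O(1) on average, whether there is one filter or a million
            return (src, dst) in self._blocked

    bank = FilterBank()
    for i in range(1_000_000):             # the filter count evaluated in the thesis
        bank.install(f"src-{i}", "victim")
    ```

    Even with a million entries, `should_drop` performs a single hash lookup; a hardware or in-kernel implementation would apply the same principle with a fixed-size hash table.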

    FPGA-based Query Acceleration for Non-relational Databases

    Database management systems are an integral part of everyday life. Trends like smart applications, the Internet of Things, and business and social networks require applications to deal efficiently with data in various data models close to the underlying domain. Therefore, non-relational database systems provide a wide variety of data models, such as graphs and documents. However, current non-relational database systems face performance challenges due to the end of Dennard scaling and, with it, of CPU performance scaling. Meanwhile, FPGAs have gained traction as accelerators for data management. Our goal is to tackle the performance challenges of non-relational database systems with FPGA acceleration and, at the same time, to address the design challenges of FPGA acceleration itself. We therefore split this thesis into two main lines of work: graph processing and flexible data processing. Because of the lack of established benchmarking practices for graph processing accelerators, we propose GraphSim, which reproduces the runtimes of these accelerators based on a memory access model of each approach. Through this simulation environment, we extract three performance-critical accelerator properties: asynchronous graph processing, a compressed graph data structure, and multi-channel memory. Since these properties had not previously been combined in one system, we propose GraphScale, the first scalable, asynchronous graph processing accelerator working on a compressed graph, which outperforms all state-of-the-art graph processing accelerators. Focusing on accelerator flexibility, we propose PipeJSON as the first FPGA-based JSON parser for arbitrary JSON documents. PipeJSON achieves parsing at line speed, outperforming the fastest vectorized CPU parsers.
Lastly, we propose GraphMatch, a subgraph query processing accelerator which outperforms state-of-the-art CPU systems for subgraph query processing and can flexibly switch queries at runtime in a matter of clock cycles.
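The "compressed graph data structure" the abstract identifies as performance-critical is commonly a compressed sparse row (CSR) layout, which stores all adjacency lists contiguously so they can be streamed from memory. A toy software illustration of the layout and of iterating over it (the accelerators' actual on-chip layout and asynchronous processing model are more elaborate):

```python
# Illustrative CSR layout: an offsets array (length = vertices + 1) and a
# flat neighbors array; vertex v's edges are neighbors[offsets[v]:offsets[v+1]].
# Toy graph: 0->1, 0->2, 1->3, 2->3.
offsets   = [0, 2, 3, 4, 4]
neighbors = [1, 2, 3, 3]

def bfs_levels(src, num_vertices):
    """Level-synchronous BFS over the CSR graph (a software stand-in for
    the iterative processing such accelerators perform in hardware)."""
    level = [-1] * num_vertices
    level[src] = 0
    frontier = [src]
    while frontier:
        nxt = []
        for v in frontier:
            for u in neighbors[offsets[v]:offsets[v + 1]]:
                if level[u] == -1:
                    level[u] = level[v] + 1
                    nxt.append(u)
        frontier = nxt
    return level
```

The two flat arrays are what make the format hardware-friendly: every adjacency list is a contiguous, predictable memory range, which suits wide sequential reads over multi-channel memory.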

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Generating intelligent tutoring systems for teaching reading: combining phonological awareness and thematic approaches

    The objective of this thesis is to investigate the use of computers with artificial intelligence methods for the teaching of basic literacy skills, to be applied eventually to the teaching of illiterate adults in Brazil. In its development many issues related to adult education have been considered, and two very significant approaches to the teaching of reading were examined in detail: Phonological Awareness (PA) and Generative Themes. After being excluded from literacy curricula for a long time during the ascendancy of the "Whole Word" approaches, activities for the development of phonological awareness are currently accepted as fundamental for teaching reading and are being incorporated into most English literacy programmes. Generative Themes, in turn, were first introduced in Brazil in a massive programme for teaching reading to adults, and have since been used successfully in a number of developing countries for the same purpose. However, these two approaches are apparently conflicting in their principles and emphasis, for the first (PA) is generally centred on the technical aspects of phonology, based on well-controlled experiments and research, whereas the second is socially inspired and focused mainly on meaning and social relationships. The main question addressed in this research, consequently, is whether these two apparently conflicting approaches could be combined to create a method that would be technically PA-oriented but at the same time could concentrate on meaning by using thematic vocabularies as stimuli for teaching.
Would it be possible to find words to illustrate all the phonological features with which a PA method deals using a thematic vocabulary? To answer this question, diverse concepts, languages and tools have been developed as part of this research, in order to allow the selection of thematic vocabularies, the description of PA curricula, the distribution of thematic words across PA curricula, the description of teaching activities and the definition of the teaching strategy rules that orient the teaching sequence. The resultant vocabularies have been evaluated, and the outcomes of the research have been assessed by literacy experts. A prototype system for delivering experimental teaching activities through the Internet has also been developed and demonstrated.

    MOOClm: Learner Modelling for MOOCs

    Massive Open Online Course (MOOC) systems generate enormous quantities of learning data. Analysis of this data has considerable potential benefits for learners, educators, teaching administrators and educational researchers, but how to realise this potential is still an open question. This thesis explores the use of such data to create a rich Open Learner Model (OLM). The OLM is designed to take account of the restrictions and goals of lifelong learner model usage. To this end, we structure the learner model around a standard curriculum-based ontology. Since such a learner model may be very large, we integrate a visualisation based on a highly scalable circular treemap representation. The visualisation allows the student either to drill down into increasingly detailed views of the learner model or to filter the model down to a smaller, selected subset. We introduce the notion of a set of reference learner models, such as an ideal student, a typical student, or a selected set of learning objectives within the curriculum. These provide a foundation for learners to make a meaningful evaluation of their own model by comparing it against a reference model. To validate the work, we created MOOClm to implement this framework, then used it in the context of a Small Private Online Course (SPOC) run at the University of Sydney. We also report a qualitative usability study that gives insight into the ways a learner can make use of the OLM. Our contribution is the design and validation of MOOClm, a framework that harnesses MOOC data to create a learner model with an OLM interface for student and educator usage.

    Global connectivity architecture of mobile personal devices

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 193-207). The Internet's architecture, designed in the days of large, stationary computers tended by technically savvy and accountable administrators, fails to meet the demands of the emerging ubiquitous computing era. Nontechnical users now routinely own multiple personal devices, many of them mobile, and need to share information securely among them using interactive, delay-sensitive applications. Unmanaged Internet Architecture (UIA) is a novel, incrementally deployable network architecture for modern personal devices which reconsiders three architectural cornerstones: naming, routing, and transport. UIA augments the Internet's global name system with a personal name system, enabling users to build personal administrative groups easily and intuitively, to establish secure bindings between their own devices and with other users' devices, and to name their devices and their friends much as they would in a cell phone's address book. To connect personal devices reliably, even while mobile, behind NATs or firewalls, or connected via isolated ad hoc networks, UIA gives each device a persistent, location-independent identity, and builds an overlay routing service atop IP to resolve and route among these identities. Finally, to support today's interactive applications built using concurrent transactions and delay-sensitive media streams, UIA introduces a new structured stream transport abstraction, which solves the efficiency and responsiveness problems of TCP streams and the functionality limitations of UDP datagrams. Preliminary protocol designs and implementations demonstrate UIA's features and benefits.
A personal naming prototype supports easy and portable group management, allowing the use of personal names alongside global names in unmodified Internet applications. A prototype overlay router leverages the naming layer's social network to provide efficient ad hoc connectivity in restricted but important common-case scenarios. Simulations of more general routing protocols--one inspired by distributed hash tables, one based on recent compact routing theory--explore promising generalizations of UIA's overlay routing. A library-based prototype of UIA's structured stream transport enables incremental deployment in either OS infrastructure or applications, and demonstrates the responsiveness benefits of the new transport abstraction via dynamic prioritization of interactive web downloads. Finally, an exposition and experimental evaluation of NAT traversal techniques provides insight into routing optimizations useful in UIA and elsewhere. By Bryan Alexander Ford, Ph.D.
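The "persistent, location-independent identity" UIA gives each device is typically realized as a self-certifying identifier derived from the device's public key, so the identity survives address changes and can be verified against the key itself. A sketch under that assumption, with illustrative names:

```python
# Sketch of a self-certifying, location-independent device identity:
# hashing the device's public key yields an identifier that is stable
# across IP address changes and verifiable against the key.
import hashlib

def endpoint_id(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

# A personal name binds to the identity, not to a network address,
# so the binding stays valid as the device moves between networks.
address_book = {"alice-laptop": endpoint_id(b"alice-public-key")}
```

The overlay routing layer then resolves such identities to whatever addresses are currently reachable, rather than baking locations into names.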

    The User Attribution Problem and the Challenge of Persistent Surveillance of User Activity in Complex Networks

    In the context of telecommunication networks, the user attribution problem refers to the challenge of recognizing communication traffic as belonging to a given user when the information needed to identify the user is missing. This is analogous to trying to recognize a nameless face in a crowd. The problem worsens as users move across many mobile networks (complex networks) owned and operated by different providers. The traditional approach of using the source IP address, which indicates where a packet comes from, does not work when used to identify mobile users. Recent efforts to address this problem by relying exclusively on web browsing behavior were limited to small numbers of users (28 and 100 users), because those solutions could not link multiple user sessions together when relying solely on the web sites visited by the user. This study tackles the problem by using behavior-based identification while accounting for time and the sequential order of web visits by a user. Hierarchical Temporal Memories (HTMs) were used to classify historical navigational patterns for different users. Each layer of an HTM contains variable-order Markov chains of connected nodes which represent clusters of web sites visited in time order by the user (user sessions). HTM layers enable inference generalization by linking Markov chains within and across layers, and thus allow matching longer sequences of visited web sites (multiple user sessions). This approach links multiple user sessions together without the need for a tracking identifier such as the source IP address. The results are promising: HTMs provide high levels of accuracy on synthetic data, with 99% recall for up to 500 users, and good recall of 95% and 87% for 5 and 10 users respectively on cellular network data.
This research confirmed that the presence of long-tail web sites (rarely visited) among many repeated destinations can create unique differentiation. What was not anticipated prior to this research was the very high degree of repetitiveness of some web destinations found in real network data.
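The core idea, scoring a new session against per-user models of time-ordered web visits and attributing it to the best match, can be sketched with plain first-order Markov chains. The thesis uses HTMs with variable-order chains across layers; this illustration, including its user names and smoothing, is a deliberate simplification:

```python
# Illustrative sketch (not the HTM implementation): score a visit
# sequence against per-user first-order Markov chains built from
# time-ordered browsing histories, then attribute it to the best match.
from collections import defaultdict
import math

def train(history):
    """Count observed transitions between consecutively visited sites."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(history, history[1:]):
        counts[a][b] += 1
    return counts

def log_score(counts, session):
    """Log-likelihood of a session under a user's chain (add-one smoothed)."""
    score = 0.0
    for a, b in zip(session, session[1:]):
        total = sum(counts[a].values()) or 1
        score += math.log((counts[a][b] + 1) / (total + 1))
    return score

users = {
    "alice": train(["news", "mail", "news", "blog", "mail"]),
    "bob":   train(["game", "wiki", "game", "wiki"]),
}
session = ["news", "mail", "news"]
best = max(users, key=lambda u: log_score(users[u], session))
```

Rarely visited long-tail sites help exactly because they appear in one user's chain and almost no one else's, which is the differentiation effect the abstract describes.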

    Private and censorship-resistant communication over public networks

    Society’s increasing reliance on digital communication networks is creating unprecedented opportunities for wholesale surveillance and censorship. This thesis investigates the use of public networks such as the Internet to build robust, private communication systems that can resist monitoring and attacks by powerful adversaries such as national governments. We sketch the design of a censorship-resistant communication system based on peer-to-peer Internet overlays in which the participants only communicate directly with people they know and trust. This ‘friend-to-friend’ approach protects the participants’ privacy, but it also presents two significant challenges. The first is that, as with any peer-to-peer overlay, the users of the system must collectively provide the resources necessary for its operation; some users might prefer to use the system without contributing resources equal to those they consume, and if many users do so, the system may not be able to survive. To address this challenge we present a new game theoretic model of the problem of encouraging cooperation between selfish actors under conditions of scarcity, and develop a strategy for the game that provides rational incentives for cooperation under a wide range of conditions. The second challenge is that the structure of a friend-to-friend overlay may reveal the users’ social relationships to an adversary monitoring the underlying network. To conceal their sensitive relationships from the adversary, the users must be able to communicate indirectly across the overlay in a way that resists monitoring and attacks by other participants. We address this second challenge by developing two new routing protocols that robustly deliver messages across networks with unknown topologies, without revealing the identities of the communication endpoints to intermediate nodes or vice versa. 
The protocols make use of a novel unforgeable acknowledgement mechanism that proves that a message has been delivered without identifying the source or destination of the message or the path by which it was delivered. One of the routing protocols is shown to be robust to attacks by malicious participants, while the other provides rational incentives for selfish participants to cooperate in forwarding messages.
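As a rough illustration of the acknowledgement idea: if the source and destination share a key unknown to the relays, the destination can return a keyed MAC over the message that no intermediate node could have forged, so a verified acknowledgement proves delivery without naming either endpoint. This is a simplified stand-in for the thesis's actual mechanism, not its protocol:

```python
# Simplified sketch: a keyed MAC as an unforgeable delivery proof.
# Relays never hold the shared key, so only the true destination can
# produce an acknowledgement that verifies at the source.
import hashlib
import hmac

shared_key = b"source-destination-secret"   # established out of band

def acknowledge(key: bytes, message: bytes) -> bytes:
    # Computed by the destination on receipt of the message.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_ack(key: bytes, message: bytes, ack: bytes) -> bool:
    # Checked by the source; constant-time comparison resists timing probes.
    return hmac.compare_digest(ack, acknowledge(key, message))
```

Because the acknowledgement is a function of the key and the message alone, it reveals nothing about the overlay path the message took, which is what lets the routing protocols stay anonymous toward intermediate nodes.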