
    Flexible, wide-area storage for distributed systems using semantic cues

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 81-87).

    There is a growing set of Internet-based services that are too big, or too important, to run at a single site. Examples include Web services for e-mail, video and image hosting, and social networking. Splitting such services over multiple sites can increase capacity, improve fault tolerance, and reduce network delays to clients. These services often need storage infrastructure to share data among the sites. This dissertation explores the use of a new file system (WheelFS) specifically designed to be the storage infrastructure for wide-area distributed services. WheelFS allows applications to adjust the semantics of their data via semantic cues, which provide application control over consistency, failure handling, and file and replica placement. This dissertation describes a particular set of semantic cues that reflect the specific challenges of storing data over the wide-area network: high-latency and low-bandwidth links, coupled with increased node and link failures, compared to local-area networks. By augmenting a familiar POSIX interface with support for semantic cues, WheelFS provides a wide-area distributed storage system, in the form of a distributed file system, intended to help multi-site applications share data and gain fault tolerance. Its design allows applications to adjust the tradeoff between prompt visibility of updates from other sites and the ability of sites to operate independently despite failures and long delays. WheelFS is implemented as a user-level file system and is deployed on PlanetLab and Emulab. Six applications (an all-pairs-pings script, a distributed Web cache, an email service, large file distribution, distributed compilation, and protein sequence alignment software) demonstrate that WheelFS's file system interface simplifies construction of distributed applications by allowing reuse of existing software. These applications would perform poorly with the strict semantics implied by a traditional file system interface, but by providing cues to WheelFS they achieve good performance. Measurements show that applications built on WheelFS deliver performance comparable to that of services such as CoralCDN and BitTorrent that use specialized wide-area storage systems. By Jeremy Andrew Stribling, Ph.D.
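
    The abstract describes semantic cues exposed through an otherwise ordinary POSIX path interface. The minimal Python sketch below illustrates how an application such as a distributed Web cache might embed cues in pathnames; the specific cue names (.EventualConsistency, .RepLevel, .MaxTime) and the /wfs mount point follow the published WheelFS design but should be read as assumptions here, not details taken from this abstract.

        # Minimal sketch: WheelFS-style semantic cues embedded in pathnames.
        # Cue names and the /wfs mount point are illustrative assumptions.
        import os

        WFS_ROOT = "/wfs/webcache"

        def put_cached_page(url_hash: str, body: bytes) -> None:
            # Cached data can tolerate relaxed semantics: eventual consistency and a
            # single replica are acceptable because the cache can refetch from origin.
            path = os.path.join(WFS_ROOT, ".EventualConsistency", ".RepLevel=1", url_hash)
            with open(path, "wb") as f:
                f.write(body)

        def get_cached_page(url_hash: str) -> bytes | None:
            # Bound how long a read may block on slow or failed sites; a stale or
            # missing answer is preferable to a long wait for a cache lookup.
            path = os.path.join(WFS_ROOT, ".EventualConsistency", ".MaxTime=1000", url_hash)
            try:
                with open(path, "rb") as f:
                    return f.read()
            except OSError:
                return None  # treat any failure as a cache miss

    Because the cues travel inside ordinary pathnames, unmodified tools and libraries that already speak POSIX file I/O can be pointed at such a file system without code changes, which is the reuse of existing software the abstract highlights.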

    Analysis of the Impact of Data Normalization on Cyber Event Correlation Query Performance

    A critical capability required in the operation of cyberspace is the ability to maintain situational awareness of the status of the infrastructure elements that constitute cyberspace. Event logs from cyber devices can yield significant information, and when properly utilized they can provide timely situational awareness about the state of the cyber infrastructure. In addition, proper Information Assurance requires validation and verification of the integrity of results generated by a commercial log analysis tool. Event log analysis can be performed using relational databases. To enhance database query performance, prior literature recommends denormalizing the database. Yet database normalization can also increase query performance. In this work, normalization improved the majority of the queries performed against very large data sets of router events, and queries ran faster on normalized tables when all the necessary data were contained in those tables. Database normalization also improves table organization and maintains better data consistency than an unnormalized design. There are tradeoffs when normalizing a database, such as additional preprocessing time and extra storage requirements, but overall normalization improved query performance and should be considered when analyzing event logs using relational databases. Three primary research questions are addressed in this thesis: (1) What standards exist for the generation, transport, storage, and analysis of event log data for security analysis? (2) How does database normalization impact query performance when using very large data sets (over 30 million router events)? (3) What are the tradeoffs between a normalized and a non-normalized database in terms of preprocessing time, query performance, storage requirements, and database consistency?
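
    The normalization tradeoff the abstract describes can be made concrete with a small schema sketch. The sqlite3 example below is illustrative only; the table and column names are assumptions rather than the thesis schema, and the thesis data sets are far larger than anything shown here.

        # Illustrative contrast between a denormalized (flat) event table and a
        # normalized layout for router event logs. Schema names are assumptions.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # Denormalized: every event row repeats the device hostname and location.
        cur.execute("""CREATE TABLE event_flat (
            event_time TEXT, severity INTEGER, message TEXT,
            device_hostname TEXT, device_location TEXT)""")

        # Normalized: device attributes are stored once and referenced by key,
        # reducing redundancy and keeping device data consistent.
        cur.execute("""CREATE TABLE device (
            device_id INTEGER PRIMARY KEY, hostname TEXT, location TEXT)""")
        cur.execute("""CREATE TABLE event (
            event_time TEXT, severity INTEGER, message TEXT,
            device_id INTEGER REFERENCES device(device_id))""")

        # The same correlation query against each layout; on very large data sets the
        # join cost is traded against smaller rows and better data consistency.
        flat_q = """SELECT device_hostname, COUNT(*) FROM event_flat
                    WHERE severity <= 3 GROUP BY device_hostname"""
        norm_q = """SELECT d.hostname, COUNT(*) FROM event e
                    JOIN device d ON e.device_id = d.device_id
                    WHERE e.severity <= 3 GROUP BY d.hostname"""
        for q in (flat_q, norm_q):
            cur.execute(q)   # both run; relative speed depends on data volume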

    Identifying Data Exchange Congestion Through Real-Time Monitoring Of Beowulf Cluster Infiniband Networks

    New technologies have rapidly expanded the ability to gather data from many types of information sources. Storing and retrieving large quantities of data from these sources has created a need for computing platforms that can process the data into information. High Performance Computing Cluster (HPCC) systems were developed to provide fast processing of large amounts of data for many demanding computing applications. Beowulf Clusters use many separate compute nodes to create a tightly coupled parallel HPCC system. A Beowulf Cluster's ability to process data depends on its compute nodes being able to retrieve, share, and store data with as little delay as possible. When many compute nodes compete to exchange data over limited network connections, congestion can occur that slows computation. Given concerns about network performance optimization and uneven distribution of computational capacity, Beowulf HPCC system administrators need to evaluate real-time data transfer metrics for congestion within a particular HPCC system. In this thesis, heat maps are created to identify potential Infiniband network congestion caused by simultaneous data exchanges between compute nodes.
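
    The heat-map idea can be sketched briefly. The Python fragment below assumes that per-interval transfer rates between compute nodes have already been collected from the Infiniband fabric (the collection mechanism, such as polling port counters, is not specified by the abstract) and simply renders them so that congested exchanges stand out.

        # Sketch: render a node-to-node transfer-rate matrix as a heat map so that
        # unusually heavy exchanges (potential congestion) are easy to spot.
        import numpy as np
        import matplotlib.pyplot as plt

        def plot_congestion(rates_mb_s: np.ndarray, node_names: list[str]) -> None:
            # rates_mb_s[i, j] = MB/s observed from node i to node j in the last interval
            fig, ax = plt.subplots()
            im = ax.imshow(rates_mb_s, cmap="hot", interpolation="nearest")
            ax.set_xticks(range(len(node_names)), labels=node_names, rotation=90)
            ax.set_yticks(range(len(node_names)), labels=node_names)
            ax.set_xlabel("receiving node")
            ax.set_ylabel("sending node")
            fig.colorbar(im, ax=ax, label="MB/s")
            plt.show()

        # Example with fabricated numbers: one node pair exchanging far more data
        # than the rest, which would appear as a bright cell in the heat map.
        nodes = ["node01", "node02", "node03", "node04"]
        rates = np.random.uniform(0, 200, size=(4, 4))
        rates[1, 3] = 3200.0
        plot_congestion(rates, nodes)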

    System support for object replication in distributed systems

    Distributed systems are composed of a collection of cooperating but failure-prone components. The number of components in such systems is often large and, despite low probabilities of any particular component failing, the likelihood that there will be at least a small number of failures within the system at a given time is high. Therefore, distributed systems must be able to withstand partial failures. By being resilient to partial failures, a distributed system becomes more able to offer a dependable service and therefore more useful. Replication is a well-known technique used to mask partial failures and increase reliability in distributed computer systems. However, replication management requires sophisticated distributed control algorithms and is therefore a labour-intensive and error-prone task. Furthermore, replication is in most cases employed due to applications' non-functional requirements for reliability, as dependability is generally orthogonal to the application's problem domain. If system-level support for replication is provided, the application developer can devote more effort to application-specific issues. Distributed systems are inherently more complex than centralised systems. Encapsulation and abstraction of components and services can be of paramount importance in managing their complexity. The use of object-oriented techniques and languages, providing support for encapsulation and abstraction, has made development of distributed systems more manageable. In systems where applications are developed using object-oriented techniques, system support mechanisms must recognise this and provide support for the object-oriented approach. The architecture presented exploits object-oriented techniques to improve transparency and to reduce the application programmer involvement required to use the replication mechanisms. This dissertation describes an approach to implementing system support for object replication which is distinct from approaches such as replicated objects in that objects are not specially designed for replication. Additionally, object replication, in contrast to data replication, is a function-shipping approach and deals with the replication of both operations and data. Object replication is complicated by objects' encapsulation of local state and the arbitrary interaction patterns that may exist among objects. Although fully transparent object replication has not been achieved, my thesis is that partial system support for replication of program-level objects is practicable and assists the development of certain classes of reliable distributed applications. I demonstrate the usefulness of this approach by describing a prototype implementation and showing how it supports the development of an example toy application. To increase their flexibility, the system support mechanisms described are tailorable. The approach adopted in this work provides partial support for object replication, relying on assistance from the application developer to supply application-dependent functionality in the form of collators that process the results returned by object replicas. Care is taken to make the programming model as simple and concise as possible.
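
    A minimal sketch may help make the collator idea concrete. The Python below is not the dissertation's architecture; it only illustrates, with invented names, how an invocation can be shipped to every replica of an object and the per-replica results reduced by an application-supplied collator.

        # Sketch of function-shipping object replication with a pluggable collator.
        # All names are illustrative; real replicas would be remote, not in-process.
        from collections import Counter
        from typing import Any, Callable, Sequence

        def majority_collator(results: Sequence[Any]) -> Any:
            # Default policy: simple voting over the replicas' results.
            return Counter(results).most_common(1)[0][0]

        class ReplicatedProxy:
            def __init__(self, replicas: Sequence[Any],
                         collator: Callable[[Sequence[Any]], Any] = majority_collator):
                self._replicas = list(replicas)
                self._collator = collator

            def invoke(self, method: str, *args: Any, **kwargs: Any) -> Any:
                # Function shipping: the operation runs at every replica, so both
                # the object's state and its computation are replicated.
                results = [getattr(r, method)(*args, **kwargs) for r in self._replicas]
                return self._collator(results)

        class CounterObject:
            def __init__(self) -> None:
                self.value = 0
            def increment(self) -> int:
                self.value += 1
                return self.value

        group = ReplicatedProxy([CounterObject(), CounterObject(), CounterObject()])
        print(group.invoke("increment"))   # collated result from three replicas

    The collator is the application-supplied piece the abstract mentions: simple voting suits deterministic operations, while other classes of application would supply their own result-processing policy.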

    A bottom-up approach to real-time search in large networks and clouds


    Developing Cyberspace Data Understanding: Using CRISP-DM for Host-based IDS Feature Mining

    Current intrusion detection systems generate a large number of specific alerts but do not provide actionable information. These alerts must often be analyzed by a network defender, a time-consuming and tedious task that can occur hours or days after an attack. Improved understanding of the cyberspace domain can lead to significant advances in cyberspace situational awareness research and development. This thesis applies the Cross Industry Standard Process for Data Mining (CRISP-DM) to develop an understanding of a host system under attack. Data is generated by launching scans and exploits at a machine outfitted with a set of host-based data collectors. Through knowledge discovery, features are identified within the collected data that can be used to enhance host-based intrusion detection. By discovering relationships between the collected data and the attack events, human understanding of the activity is demonstrated. This method of searching for hidden relationships between sensors greatly enhances understanding of new attacks and vulnerabilities, bolstering our ability to defend the cyberspace domain.
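
    As one hedged illustration of the data-understanding and data-preparation phases described above, the sketch below labels host sensor records by whether they fall inside a known attack window and ranks candidate features by how differently they behave during attacks. The field names and the scoring rule are assumptions for illustration, not the thesis's actual sensors or features.

        # Sketch: label host sensor records against known attack windows and score
        # candidate features by how strongly they separate attack from baseline.
        from datetime import datetime

        attack_windows = [(datetime(2010, 3, 1, 14, 0), datetime(2010, 3, 1, 14, 5))]

        def is_attack(record: dict) -> bool:
            t = record["timestamp"]
            return any(start <= t <= end for start, end in attack_windows)

        def separation(records: list[dict], feature: str) -> float:
            # Crude score: difference of mean feature values between attack-labelled
            # and baseline records (a stand-in for real feature-mining methods).
            attack = [r[feature] for r in records if is_attack(r)]
            normal = [r[feature] for r in records if not is_attack(r)]
            if not attack or not normal:
                return 0.0
            return abs(sum(attack) / len(attack) - sum(normal) / len(normal))

        records = [
            {"timestamp": datetime(2010, 3, 1, 13, 58), "open_sockets": 12, "new_processes": 1},
            {"timestamp": datetime(2010, 3, 1, 14, 2), "open_sockets": 310, "new_processes": 9},
        ]
        for f in ("open_sockets", "new_processes"):
            print(f, separation(records, f))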

    Benchmarking Eventually Consistent Distributed Storage Systems

    Cloud storage services and NoSQL systems typically offer only "Eventual Consistency", a rather weak guarantee covering a broad range of potential data consistency behavior. The degree of actual (in-)consistency, however, is unknown. This work presents novel solutions for determining the degree of (in-)consistency via simulation and benchmarking, as well as the necessary means to resolve inconsistencies by leveraging this information.
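
    One common way to benchmark the inconsistency window is to write a fresh value through one replica and poll another until the value becomes visible; the elapsed time estimates staleness. The sketch below assumes a generic put/get storage client with an in-memory stand-in for testing; it is not tied to any particular system's API or to the methodology of this work.

        # Sketch: measure how long a newly written value takes to become visible
        # from another replica. The storage client is an abstract stand-in.
        import time
        import uuid

        def measure_staleness(writer, reader, key: str = "probe", timeout: float = 10.0):
            value = uuid.uuid4().hex                 # unique token for this round
            start = time.monotonic()
            writer.put(key, value)                   # write via one replica / region
            while time.monotonic() - start < timeout:
                if reader.get(key) == value:         # read via a different replica
                    return time.monotonic() - start
                time.sleep(0.01)
            return None                              # still stale after the timeout

        class FakeReplicatedStore:
            # In-memory stand-in: a write becomes visible only after a fixed delay.
            def __init__(self, delay: float = 0.3):
                self._delay, self._data = delay, {}
            def put(self, key, value):
                self._data[key] = (value, time.monotonic())
            def get(self, key):
                entry = self._data.get(key)
                if entry and time.monotonic() - entry[1] >= self._delay:
                    return entry[0]
                return None

        store = FakeReplicatedStore()
        print(measure_staleness(store, store))       # roughly the configured delay

    Repeating the measurement many times yields a distribution of staleness values rather than a single number, which is what a benchmark of eventual consistency would report.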