
    Techniques of data prefetching, replication, and consistency in the Internet

    The Internet has become a major infrastructure for information sharing in our daily life and is indispensable to critical, large-scale applications in industry, government, business, and education. Internet bandwidth (the speed at which the network transfers data) has increased dramatically; latency (the delay to physically access data), however, has improved at a much slower pace. This combination of rich bandwidth and lagging latency can be addressed in Internet systems by three data management techniques: caching, replication, and prefetching. The focus of this dissertation is to address the latency problem by exploiting the rich bandwidth and large storage capacity to prefetch data efficiently, significantly improving Web content caching performance, and by proposing and implementing scalable data consistency maintenance methods that handle Internet Web address caching in the distributed Domain Name System (DNS) and massive data replication in peer-to-peer systems. The DNS service is critical to the Internet, while peer-to-peer data sharing has become an important Internet activity.

    We have made three contributions in developing prefetching techniques. First, we have proposed an efficient data structure for maintaining Web access information, called popularity-based Prediction by Partial Matching (PB-PPM), in which data are placed and replaced according to the popularity of Web accesses, so that only important and useful information is stored. PB-PPM greatly reduces the required storage space and improves prediction accuracy. Second, a major weakness of existing Web servers is that prefetching activities are scheduled independently of dynamically changing server workloads; without proper control and coordination between the two kinds of activities, prefetching can negatively affect Web services and degrade Web access performance. To address this problem, we have developed a queuing model to characterize the interactions. Guided by the model, we have designed a coordination scheme that dynamically adjusts the prefetching aggressiveness in Web servers. This scheme not only prevents the Web servers from being overloaded, but also minimizes the average server response time. Finally, we have proposed a scheme that effectively coordinates the sharing of access information between proxy and Web servers; with its support, the accuracy of prefetching decisions is significantly improved.

    Regarding data consistency support for Internet caching and data replication, we have conducted three significant studies. First, we have developed a consistency support technique to maintain data consistency among replicas in structured P2P networks. We have implemented this scheme on top of Pastry, an existing and popular P2P system, and show that it can effectively maintain consistency while preventing hot-spot and node-failure problems. Second, we have designed and implemented a DNS cache update protocol, called DNScup, to provide strong consistency for domain/IP mappings. Finally, we have developed a dynamic lease scheme to update the replicas in the Internet in a timely manner.
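    To make the PB-PPM idea concrete, the sketch below shows a small popularity-based Prediction by Partial Matching predictor: a table of (context, next-URL) counters that is updated from the access stream, pruned by popularity when it exceeds a capacity cap, and queried with the longest matching context. The class name, capacity limit, and eviction heuristic are illustrative assumptions, not the dissertation's actual data structure.

```python
# A minimal sketch of a popularity-based PPM-style Web access predictor,
# in the spirit of PB-PPM described above. Capacity and eviction policy
# are illustrative assumptions.
from collections import defaultdict

class PopularityPPM:
    def __init__(self, max_order=2, capacity=10_000):
        self.max_order = max_order          # longest context length used for prediction
        self.capacity = capacity            # cap on stored (context, next) counters
        self.counts = defaultdict(int)      # (context tuple, next url) -> hit count
        self.history = []                   # recent accesses of the current session

    def record(self, url):
        """Update counters for every context length ending at the new access."""
        for order in range(1, self.max_order + 1):
            if len(self.history) >= order:
                ctx = tuple(self.history[-order:])
                self.counts[(ctx, url)] += 1
        self.history.append(url)
        if len(self.counts) > self.capacity:
            self._evict()

    def _evict(self):
        """Drop the least popular 10% of entries so only useful information is kept."""
        victims = sorted(self.counts, key=self.counts.get)[: self.capacity // 10]
        for key in victims:
            del self.counts[key]

    def predict(self):
        """Return the most popular next URL under the longest matching context."""
        for order in range(self.max_order, 0, -1):
            if len(self.history) < order:
                continue
            ctx = tuple(self.history[-order:])
            candidates = {nxt: c for (c_ctx, nxt), c in self.counts.items() if c_ctx == ctx}
            if candidates:
                return max(candidates, key=candidates.get)
        return None

# Usage: feed the access log, then ask what to prefetch next.
ppm = PopularityPPM()
for u in ["/index", "/news", "/sports", "/index", "/news"]:
    ppm.record(u)
print(ppm.predict())   # "/sports", given the "/index" -> "/news" context
```

    A prefetcher driven by such a predictor could then be throttled by the server's current load, in line with the coordination scheme the abstract describes.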

    A trusted execution platform for multiparty computation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 92-94). By Sameer Ajmani.

    Designing Digital Forensics Challenges for Multinational Cyber Defense Exercises

    This thesis seeks to design and evaluate a digital forensics challenge for inclusion in a multinational cyber defense exercise. The intent is to narrow down the key skills a state-based organization requires of its digital forensics experts, and to design and integrate technical tasks that adequately test these skills into a larger cyber defense exercise. It uses the NATO Locked Shields cyber defense exercise as a test case, for which the thesis author joined the digital forensics design team at the NATO Cyber Defense Centre of Excellence in designing and implementing a three-day digital forensics challenge. This thesis establishes a series of technical and procedural skills that state-based organizations require of their experts, determines ways to test these skills, and develops a scenario-based digital forensics challenge. Using firsthand observations, participant feedback, and challenge scores to evaluate the effectiveness of the challenge, it finds that the scenario adequately tested a majority of the skills at the appropriate difficulty level and needs improvement in timing and reporting standards. Finally, it explores ways to improve upon the selected methods and tasks for future exercises.

    An experimental study of web transport protocols in cellular networks

    HTTP and TCP have been the backbone of Web transport for decades, and both protocols have seen numerous enhancements and modifications. They were designed for the traditional packet networks of the 1990s. Today, wired network parameters such as bandwidth and delay have improved significantly all over the world, but cellular data networks (GPRS, HSPA) still suffer from bandwidth and delay limitations that affect the performance of these protocols. HTTP and TCP can be optimized for today's network conditions and end-user requirements, such as accelerated page loading, low latency, and better network utilization. In this work, we measure the improvements obtained by using the SPDY protocol in comparison to HTTP: the impact of header compression, the number of parallel TCP connections per domain, and the multiplexing of streams. From the TCP perspective, we analyze the impact of larger initial congestion windows and discuss findings from comparing various initial congestion window values. All experiments are conducted over live GPRS, HSPA, and LTE networks. We study the challenges of moving from HTTP to alternative protocols, and discuss ways to improve mobile Web browsing by introducing and refining existing schemes such as DNS prefetching, mitigation of radio transition delays, smart use of IP versions, reduction of TLS negotiation delays, and intelligent allocation of TCP connections in HTTP. Our studies reveal that low-bandwidth networks such as GPRS benefit from header compression, whereas HSPA and LTE networks benefit from multiplexing, as it saves the time needed to establish new TCP connections. The advantage of a larger TCP initial congestion window is seen only in networks with high bandwidth and high latency.
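    To illustrate the multiplexing comparison at the heart of these measurements, the sketch below fetches the same set of resources over HTTP/1.1 (a pool of parallel connections) and over HTTP/2, SPDY's standardized successor, where concurrent requests share one multiplexed connection. The URLs and the use of the httpx library (with its optional http2 extra) are assumptions for illustration; the thesis experiments used SPDY itself over live cellular networks.

```python
# A minimal sketch comparing HTTP/1.1 parallel connections with HTTP/2
# stream multiplexing. URLs are hypothetical; requires `pip install httpx[http2]`.
import asyncio
import time
import httpx

URLS = [f"https://example.com/asset{i}.js" for i in range(10)]  # hypothetical assets

async def timed_fetch(http2: bool) -> float:
    start = time.perf_counter()
    # With http2=True, concurrent requests are multiplexed as streams over a
    # single TCP connection; with HTTP/1.1 the client falls back to a pool of
    # parallel connections, each paying its own handshake and slow start.
    async with httpx.AsyncClient(http2=http2) as client:
        await asyncio.gather(*(client.get(u) for u in URLS))
    return time.perf_counter() - start

print(f"HTTP/1.1: {asyncio.run(timed_fetch(False)):.2f}s")
print(f"HTTP/2  : {asyncio.run(timed_fetch(True)):.2f}s")
```

    On high-latency cellular links the multiplexed variant avoids repeated connection setup, which is the effect the HSPA and LTE results above attribute the gains to.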

    Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

    Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an "implementation concern" requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to meet the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it can be imported and deployed into a live system without significant burdens. A dramatic reduction of this time-and-effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA, an Architecture for Real-Time Application of Knowledge Artifacts, as a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally introduce value into existing systems dynamically. This is made possible by service-izing curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form is left flexible, insofar as implementations comply with ARTAKA's Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards. Toward the goal of interoperable human processes, ARTAKA's treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offer immediate benefit, in some cases with remarkable parity. Analyses of experimentation are provided, with guidelines on how choice aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this culminating document have been further initiated with Standards Developing Organizations (SDOs) with the intent of ultimately producing normative standards, as have active relationships with other bodies. Doctoral Dissertation, Biomedical Informatics, 201
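    To make the notion of "service-izing" a knowledge artifact more tangible, the sketch below wraps a toy curated rule behind an HTTP endpoint that evaluates incoming context events. The rule, endpoint path, payload shape, and module name (artifact_service) are hypothetical illustrations and are not the ARTAKA CES or HSP Marketplace interfaces themselves.

```python
# A minimal sketch of wrapping a curated knowledge artifact as a service so it
# can be deployed behind a scalable backend. All names and fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# A toy curated artifact: a single threshold rule for an alert.
ARTIFACT = {"id": "a1c-check", "threshold": 9.0,
            "advice": "HbA1c above target; consider therapy intensification."}

class ContextEvent(BaseModel):
    patient_id: str
    hba1c: float          # latest lab value carried in the context event

@app.post("/artifacts/a1c-check/evaluate")
def evaluate(event: ContextEvent):
    """Evaluate the artifact against an incoming context event."""
    fired = event.hba1c > ARTIFACT["threshold"]
    return {"artifact": ARTIFACT["id"],
            "fired": fired,
            "advice": ARTIFACT["advice"] if fired else None}

# Run with: uvicorn artifact_service:app --reload
```

    Treating the artifact as software in this way is what lets ordinary SDLC practices (versioning, testing, packaging, deployment pipelines) apply to knowledge engineering, as the abstract argues.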

    An adaptive admission control and load balancing algorithm for a QoS-aware Web system

    The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing for Web traffic. To set the context of this work, several reviews are included to introduce the reader to the background concepts of Web load balancing, admission control, and the Internet traffic characteristics that may affect the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system provides lower performance than desired due to server congestion; this is achieved through forecasting calculations. The increased computational cost of the algorithm naturally introduces some overhead, which is why we design an adaptive time-slot scheduling that sets the execution times of the algorithm depending on the burstiness of the traffic arriving at the system. The proposed predictive scheduling algorithm therefore includes adaptive overhead control. Once the scheduling of the algorithm is defined, we design the admission control module based on throughput predictions: the results obtained by several throughput predictors are compared, and one of them is selected for inclusion in our algorithm. The utilization level that the Web servers will have in the near future is also forecast and reserved for each service depending on its Service Level Agreement (SLA). Our load balancing strategy is based on a classical policy; hence, a comparison of several classical load balancing policies is also included to determine which of them best fits our algorithm. A simulation model has been designed to obtain the results presented in this thesis.
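    The sketch below illustrates the overall shape of such a controller: a simple exponentially weighted moving average (EWMA) forecasts each service's near-future throughput, requests are admitted only while the forecast stays within the service's SLA reservation, and the scheduling interval shrinks when arrivals become burstier. The class name, the EWMA predictor, and the coefficient-of-variation burstiness heuristic are illustrative assumptions, not the thesis's specific predictor or slot policy.

```python
# A minimal sketch of predictive admission control with SLA reservations and an
# adaptive time slot. Parameters and heuristics are illustrative assumptions.
import statistics

class AdaptiveAdmissionController:
    def __init__(self, capacity_rps, sla_shares, alpha=0.3,
                 min_slot=0.5, max_slot=5.0):
        self.capacity_rps = capacity_rps      # total cluster capacity (req/s)
        self.sla_shares = sla_shares          # e.g. {"gold": 0.6, "silver": 0.4}
        self.alpha = alpha                    # EWMA smoothing factor
        self.predicted = {svc: 0.0 for svc in sla_shares}
        self.min_slot, self.max_slot = min_slot, max_slot

    def update(self, observed_rps, interarrivals):
        """Run once per time slot with the rates observed during the last slot."""
        for svc, rate in observed_rps.items():
            # EWMA forecast of the next slot's throughput for this service.
            self.predicted[svc] = (self.alpha * rate
                                   + (1 - self.alpha) * self.predicted[svc])
        # Burstier traffic (higher coefficient of variation of interarrival
        # times) shrinks the slot so the algorithm reacts faster.
        cv = (statistics.stdev(interarrivals) / statistics.mean(interarrivals)
              if len(interarrivals) > 1 else 0.0)
        slot = self.max_slot / (1.0 + cv)
        return max(self.min_slot, min(self.max_slot, slot))

    def admit(self, svc):
        """Admit a request only if its service stays within its SLA reservation."""
        reserved = self.sla_shares[svc] * self.capacity_rps
        return self.predicted[svc] < reserved

# Usage: one control-loop iteration.
ctl = AdaptiveAdmissionController(capacity_rps=1000,
                                  sla_shares={"gold": 0.6, "silver": 0.4})
next_slot = ctl.update({"gold": 450.0, "silver": 500.0},
                       interarrivals=[0.01, 0.002, 0.03, 0.001])
print(next_slot, ctl.admit("gold"), ctl.admit("silver"))
```

    Running the forecasting step only once per adaptive slot, rather than on every request, is what keeps the overhead of the prediction bounded, which is the trade-off the abstract highlights.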