241 research outputs found

    Cooperative Cashing? An Economic Analysis of Document Duplication in Cooperative Web Caching

    Get PDF
    Cooperative caching is a popular mechanism that allows an array of distributed caches to cooperate and serve each others' Web requests. Controlling the duplication of documents across cooperating caches is a challenging problem faced by cache managers. In this paper, we study the economics of document duplication in strategic and nonstrategic settings. We have three primary findings. First, the optimum level of duplication at a cache is nondecreasing in intercache latency, cache size, and the extent of request locality. Second, when cache peering spans organizations, the interaction between caches is a game of strategic substitutes: a cache devotes fewer resources to eliminating duplicate documents when the other caches devote more resources to eliminating duplicates at that cache. A significant challenge, therefore, is to simultaneously induce multiple caches to contribute more resources towards reducing duplicate documents in the system. Finally, centralized decision making, which as expected improves average latency over a decentralized setup, can entail highly asymmetric duplication levels at the caches. This can benefit one set of users at the expense of another and will thus be challenging to implement.
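    To make the first finding concrete, the toy model below sweeps the number of slots one cache devotes to documents its peer already holds and reports the latency-minimizing duplication level for two intercache latencies. It is an illustrative sketch, not the paper's formal model: the Zipf-like popularity, cache sizes, and latency values are assumptions. Duplicating a hot document turns a peer hit into a local hit at the cost of evicting a colder document that would otherwise be served locally, so the optimal duplication level rises as intercache latency rises.
    # Toy model: one cache chooses how many of its C slots duplicate the peer's
    # hottest documents versus holding documents the peer lacks.
    # All parameters below are assumptions for illustration.
    N, C = 10_000, 1_000                 # catalog size and slots per cache
    L_LOCAL, L_ORIGIN = 5.0, 200.0       # local-hit and origin-miss latency (ms)
    weights = [1.0 / r ** 0.8 for r in range(1, N + 1)]   # Zipf-like popularity
    total = sum(weights)
    pop = [w / total for w in weights]
    prefix = [0.0]                       # prefix[k] = request mass of the k hottest documents
    for p in pop:
        prefix.append(prefix[-1] + p)
    def expected_latency(dup, l_peer):
        """Mean latency when this cache duplicates the peer's `dup` hottest
        documents and fills its remaining slots with documents the peer lacks."""
        uniq = C - dup
        p_local = prefix[dup] + (prefix[C + uniq] - prefix[C])  # served from own cache
        p_peer = prefix[C] - prefix[dup]                        # fetched from the peer
        p_miss = 1.0 - p_local - p_peer                         # fetched from the origin
        return p_local * L_LOCAL + p_peer * l_peer + p_miss * L_ORIGIN
    for l_peer in (20.0, 120.0):
        best = min(range(0, C + 1, 10), key=lambda d: expected_latency(d, l_peer))
        print(f"intercache latency {l_peer:5.0f} ms -> optimal duplication {best / C:.2f}")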

    Adaptive and secured resource management in distributed and Internet systems

    Get PDF
    The effectiveness of computer system resource management has always been determined by two major factors: (1) workload demands and management objectives, and (2) advances in computer technology. Both factors change dynamically, and resource management systems must adapt to those changes in a timely manner. This dissertation addresses several important and related resource management issues.
    We first study memory system utilization in centralized servers by improving the memory performance of sorting algorithms, which provides a fundamental understanding of memory system organization and its performance optimization for data-intensive workloads. To reduce different types of cache misses, we restructure the mergesort and quicksort algorithms by integrating tiling, padding, and buffering techniques and by repartitioning the data set. Our study shows substantial performance improvements from the new methods.
    We then extend this work to improve load sharing for utilizing global memory resources in distributed systems. Aiming to reduce the memory resource contention caused by page faults and I/O activities, we develop and examine load sharing policies that consider effective usage of global memory in addition to CPU load balancing, in both homogeneous and heterogeneous clusters.
    Extending our research from clusters to Internet systems, we further investigate memory and storage utilization in Web caching systems. We propose several novel management schemes that restructure and decentralize the existing caching system by exploiting data locality at different levels of the global memory hierarchy and by effectively sharing data objects among clients and their proxy caches.
    Our decentralized Web caching design raises data integrity and communication anonymity issues, which are also security concerns for general peer-to-peer systems. We propose an integrity protocol to ensure data integrity, and several protocols to achieve mutual communication anonymity between an information requester and a provider.
    The potential impact and contributions of this dissertation are briefly stated as follows. (1) The two major research topics identified here are fundamentally important for the growth and development of information technology and will remain demanding topics for the long term. (2) Our cache-effective sorting methods bridge a serious gap between the analytical complexity of algorithms and their execution complexity in practice, a gap caused by the increasingly deep memory hierarchy in computer systems; the approach can also be used to improve memory performance at other levels of the hierarchy, such as I/O and file systems. (3) Our load sharing principle of giving high priority to requests for data accesses in memory and I/O adapts to technology changes and responds effectively to the increasing demands of data-intensive applications. (4) Our decentralized Web caching framework and its resource management schemes present a comprehensive case study of the peer-to-peer model; the results and experience can inform related and further studies in distributed computing. (5) The proposed data integrity and communication anonymity protocols address limits and weaknesses of existing ones and lay a solid foundation for continued work in this important area.
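    As a concrete illustration of the cache-effective sorting idea described above, the sketch below sorts cache-sized tiles independently and then merges the sorted runs in a single multiway pass. It is a minimal sketch of the general tiling technique, not the dissertation's restructured mergesort or quicksort; the tile size and the use of heapq.merge are assumptions.
    # Cache-conscious ("tiled") mergesort sketch: sort blocks that fit in cache,
    # then stream the sorted runs through one k-way merge.
    import heapq
    import random
    TILE = 4096  # elements per tile; a real implementation derives this from cache size
    def tiled_mergesort(data):
        # Phase 1: sort each tile independently (good temporal locality within the cache).
        runs = [sorted(data[i:i + TILE]) for i in range(0, len(data), TILE)]
        # Phase 2: a single k-way merge over the sorted runs (mostly sequential access).
        return list(heapq.merge(*runs))
    if __name__ == "__main__":
        xs = [random.random() for _ in range(100_000)]
        assert tiled_mergesort(xs) == sorted(xs)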

    Building high-performance web-caching servers

    Get PDF

    Database server workload characterization in an e-commerce environment

    Get PDF
    A typical E-commerce system deployed on the Internet has multiple layers that include Web users, Web servers, application servers, and a database server. As system use and user request frequency increase, the Web/application servers can be scaled up by replication, with a load-balancing proxy routing user requests to individual machines that perform the same functionality. To address the increasing workload while avoiding replication of the database server, various dynamic caching policies have been proposed to reduce the database workload in E-commerce systems. However, the nature of the changes seen by the database server as a result of dynamic caching remains unknown, and a good understanding of these changes is fundamental for tuning a database server for better performance. In this study, the TPC-W (a transactional Web E-commerce benchmark) workloads on a database server are characterized under two different dynamic caching mechanisms, generalized and implemented as a query-result cache and a table cache. The characterization focuses on response time, CPU computation, buffer pool references, disk I/O references, and workload classification. The thesis combines a variety of analysis techniques: simulation, real-time measurement, and data mining. The experimental results reveal some interesting effects that dynamic caching has on the database server workload characteristics. The main observations are: (a) dynamic caching can considerably reduce the CPU usage of the database server and the number of database page references when it is heavily loaded; (b) dynamic caching also reduces database reference locality, but to a smaller degree than that reported for file servers. The classification results show that with dynamic caching, the database server sees TPC-W profiles that look more like online transaction processing workloads.
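    The sketch below shows the kind of query-result cache generalized above: results are memoized by query text with LRU eviction and invalidated whenever a table they read from is written. The class name, capacity, and table-level invalidation granularity are assumptions for illustration, not the mechanism implemented in the thesis.
    from collections import OrderedDict
    class QueryResultCache:
        """Memoizes query results; a write to a table drops every result that read it."""
        def __init__(self, capacity=1024):
            self.capacity = capacity
            self.entries = OrderedDict()   # query text -> (result, referenced tables)
            self.by_table = {}             # table name -> set of query texts
        def get(self, query):
            if query in self.entries:
                self.entries.move_to_end(query)        # LRU touch
                return self.entries[query][0]
            return None                                # miss: caller runs the query
        def put(self, query, result, tables):
            self.entries[query] = (result, set(tables))
            self.entries.move_to_end(query)
            for t in tables:
                self.by_table.setdefault(t, set()).add(query)
            while len(self.entries) > self.capacity:   # evict least recently used
                old, (_, old_tables) = self.entries.popitem(last=False)
                for t in old_tables:
                    self.by_table.get(t, set()).discard(old)
        def invalidate_table(self, table):
            for q in self.by_table.pop(table, set()):  # called on INSERT/UPDATE/DELETE
                self.entries.pop(q, None)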

    Optimization in Web Caching: Cache Management, Capacity Planning, and Content Naming

    Full text link
    Caching is fundamental to performance in distributed information retrieval systems such as the World Wide Web. This thesis introduces novel techniques for optimizing performance and cost-effectiveness in Web cache hierarchies. When requests are served by nearby caches rather than distant servers, server loads and network traffic decrease and transactions are faster. Cache system design and management, however, face extraordinary challenges in loosely-organized environments like the Web, where the many components involved in content creation, transport, and consumption are owned and administered by different entities. Such environments call for decentralized algorithms in which stakeholders act on local information and private preferences. In this thesis I consider problems of optimally designing new Web cache hierarchies and optimizing existing ones. The methods I introduce span the Web from point of content creation to point of consumption: I quantify the impact of content-naming practices on cache performance; present techniques for variable-quality-of-service cache management; describe how a decentralized algorithm can compute economically-optimal cache sizes in a branching two-level cache hierarchy; and introduce a new protocol extension that eliminates redundant data transfers and allows “dynamic” content to be cached consistently. To evaluate several of my new methods, I conducted trace-driven simulations on an unprecedented scale. This in turn required novel workload measurement methods and efficient new characterization and simulation techniques. The performance benefits of my proposed protocol extension are evaluated using two extraordinarily large and detailed workload traces collected in a traditional corporate network environment and an unconventional thin-client system. My empirical research follows a simple but powerful paradigm: measure on a large scale an important production environment’s exogenous workload; identify performance bounds inherent in the workload, independent of the system currently serving it; identify gaps between actual and potential performance in the environment under study; and finally devise ways to close these gaps through component modifications or through improved inter-component integration. This approach may be applicable to a wide range of Web services as they mature.
    Ph.D., Computer Science and Engineering, University of Michigan
    http://deepblue.lib.umich.edu/bitstream/2027.42/90029/1/kelly-optimization_web_caching.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/90029/2/kelly-optimization_web_caching.ps.bz
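    The protocol extension summarized above suppresses redundant transfers by naming payloads with a strong digest, so byte-identical bodies are never shipped twice even when they appear under different, ostensibly dynamic URLs. The sketch below illustrates that general idea only; the Origin/DigestClient interfaces, the digest-first metadata exchange, and the use of SHA-256 are assumptions, not the thesis's actual extension.
    import hashlib
    class Origin:
        """Stand-in for a server that can report a payload digest cheaply."""
        def __init__(self, pages):               # url -> payload bytes
            self.pages = pages
        def head_digest(self, url):
            return hashlib.sha256(self.pages[url]).hexdigest()
        def get_body(self, url):
            return self.pages[url]
    class DigestClient:
        """Requester that skips transfers of payloads it has already seen."""
        def __init__(self):
            self.by_digest = {}                   # sha256 hex -> payload bytes
        def fetch(self, url, origin):
            digest = origin.head_digest(url)      # cheap metadata exchange first
            if digest in self.by_digest:          # identical payload already held,
                return self.by_digest[digest]     # even under a different URL
            body = origin.get_body(url)           # full transfer only when needed
            self.by_digest[digest] = body
            return body
    if __name__ == "__main__":
        origin = Origin({"/page?id=1": b"<html>same payload</html>",
                         "/page?id=2": b"<html>same payload</html>"})
        client = DigestClient()
        client.fetch("/page?id=1", origin)
        assert client.fetch("/page?id=2", origin) == b"<html>same payload</html>"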

    Understanding emerging client-Side web vulnerabilities using dynamic program analysis

    Get PDF
    Today's Web relies heavily on JavaScript, the main driving force behind the plethora of Web applications that we enjoy daily. The complexity and amount of this client-side code have been steadily increasing over the years. At the same time, new vulnerabilities keep being uncovered, mostly through manual analysis by security experts. Unfortunately, such manual efforts do not scale to the problem space at hand. In this thesis, we therefore present techniques capable of finding, automatically and at scale, vulnerabilities that originate from malicious inputs to postMessage handlers, polluted prototypes, and client-side storage mechanisms. Our results highlight that the investigated vulnerabilities are prevalent even among the most popular sites, showing the need for automated systems that help developers uncover them in a timely manner. Using the insights gained during our empirical studies, we provide recommendations for developers and browser vendors to tackle the underlying problems in the future. Furthermore, we show that security mechanisms designed to mitigate such and similar issues cannot currently be deployed by first-party applications because of their reliance on third-party functionality. This leaves developers in a no-win situation in which either functionality can be preserved or security enforced.

    Scripts in a Frame: A Framework for Archiving Deferred Representations

    Get PDF
    Web archives provide a view of the Web as seen by Web crawlers. Because of rapid advancements and adoption of client-side technologies like JavaScript and Ajax, coupled with the inability of crawlers to execute these technologies effectively, Web resources become harder to archive as they become more interactive. At Web scale, we cannot capture client-side representations using the current state-of-the-art toolsets because of the migration from Web pages to Web applications. Web applications increasingly rely on JavaScript and other client-side programming languages to load embedded resources and change client-side state. We demonstrate that Web crawlers and other automatic archival tools are unable to archive the resulting JavaScript-dependent representations (what we term deferred representations), resulting in missing or incorrect content in the archives and a general inability to replay the archived resource as it existed at the time of capture. Building on prior studies of Web archiving, client-side monitoring of events and embedded resources, and studies of the Web, we establish an understanding of the trends contributing to the increasing unarchivability of deferred representations. We show that JavaScript leads to lower-quality mementos (archived Web resources) due to the archival difficulties it introduces. We measure the historical impact of JavaScript on mementos, demonstrating that the increased adoption of JavaScript and Ajax correlates with the increase in missing embedded resources. To measure memento and archive quality, we propose and evaluate a metric that assesses memento quality closer to Web users’ perception. We propose a two-tiered crawling approach that enables crawlers to capture embedded resources dependent upon JavaScript. Measuring the performance difference between crawl approaches, we propose a classification method that mitigates the performance impact of the two-tiered crawling approach, and we measure the frontier size improvements observed with the two-tiered approach. Using the two-tiered crawling approach, we measure the number of client-side states associated with each URI-R and propose a mechanism for storing the mementos of deferred representations. In short, this dissertation explores the following: why JavaScript and deferred representations are difficult to archive (establishing the term deferred representation to describe JavaScript-dependent representations); the extent to which JavaScript impacts archivability, along with its impact on current archival tools; a metric for measuring the quality of mementos, which we use to describe the impact of JavaScript on archival quality; the performance trade-offs between traditional archival tools and technologies that better archive JavaScript; and a two-tiered crawling approach for discovering and archiving currently unarchivable descendants (representations generated by client-side user events) of deferred representations, to mitigate the impact of JavaScript on our archives. In summary, what we archive is increasingly different from what we as interactive users experience. Using the approaches detailed in this dissertation, archives can create mementos closer to what users experience rather than archiving the crawlers’ experiences on the Web.
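    As a rough illustration of the two-tiered idea, the sketch below applies a cheap heuristic to decide whether a captured page is likely a deferred representation that needs a JavaScript-executing browser tier, or can be handled by a conventional crawler tier. The heuristic, its threshold, and the hint strings are stand-in assumptions, not the classification method proposed in the dissertation.
    AJAX_HINTS = ("xmlhttprequest", "fetch(", "window.onload", "onclick")
    def needs_tier_two(html, script_threshold=3):
        """Heuristic: many scripts or Ajax hints suggest client-side loading."""
        lowered = html.lower()
        return lowered.count("<script") > script_threshold or any(h in lowered for h in AJAX_HINTS)
    def dispatch(uri, html):
        if needs_tier_two(html):
            return f"{uri} -> browser tier (execute JavaScript, capture embedded resources)"
        return f"{uri} -> conventional crawler tier"
    if __name__ == "__main__":
        static_page = "<html><body><p>plain article</p></body></html>"
        ajax_page = "<html><script>fetch('/api/items').then(r => render(r))</script></html>"
        print(dispatch("http://example.com/static", static_page))
        print(dispatch("http://example.com/app", ajax_page))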

    Plugging in trust and privacy : three systems to improve widely used ecosystems

    Get PDF
    The era of touch-enabled mobile devices has fundamentally changed our communication habits. Their high usability and unlimited data plans provide the means to communicate any place, any time, and lead people to publish more and more (sensitive) information. Moreover, the success of mobile devices has also led to the introduction of new functionality that crucially relies on sensitive data (e.g., location-based services). With today's mobile devices, the Internet has become the prime source of information (e.g., news), and people need to rely on the correctness of information provided on the Internet. However, most of the involved systems are neither prepared to provide robust privacy guarantees for users, nor do they provide users with the means to verify and trust delivered content. This dissertation introduces three novel trust and privacy mechanisms that improve on this situation in widely used ecosystems. With WebTrust we introduce a robust authenticity and integrity framework that provides users with the means to verify both the correctness and the authorship of data transmitted via HTTP. X-pire! and X-pire 2.0 offer a digital expiration date for images in social networks to enforce post-publication privacy. AppGuard enables the enforcement of fine-grained privacy policies on third-party applications in Android to protect the users' privacy.
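    The core check WebTrust enables, verifying both the integrity and the authorship of an HTTP payload, can be pictured as a detached signature over the response body, as in the minimal sketch below. The choice of Ed25519, the third-party cryptography package, and the assumption that the author's public key is obtained out of band are illustrative; the actual WebTrust protocol and key distribution are not shown here.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519
    # Author side: sign the payload that will be served over HTTP.
    author_key = ed25519.Ed25519PrivateKey.generate()
    body = b"<html><body>breaking news ...</body></html>"
    signature = author_key.sign(body)
    # Client side: verify authorship and integrity with the author's public key.
    public_key = author_key.public_key()
    try:
        public_key.verify(signature, body)
        print("payload accepted: authentic and unmodified")
    except InvalidSignature:
        print("payload rejected: failed verification")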
