311 research outputs found

    Reordering Webpage Objects for Optimizing Quality-of-Experience

    The quality of experience (QoE) perceived by users is a critical performance measure for Web browsing. “Above-The-Fold” (ATF) time has recently been recognized and widely used as a direct measure of user-end QoE in a number of studies. To reduce the ATF time, existing works mainly focus on reducing network delay. However, we observe that webpage structure and content order can also significantly affect Web QoE. In this paper, we propose a novel optimization framework that reorders webpage objects to minimize the user-end ATF time. Our core idea is to first identify the webpage objects that consume ATF time but have no impact on the page experience, and then change the positions of these objects to achieve the minimum ATF time. We implement this framework and evaluate its performance on popular websites. The results show that the ATF time is greatly reduced compared with existing works, especially for complex webpages.
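The paper's exact algorithm is not given in the abstract; a minimal illustrative sketch of the core idea, assuming each object has a hypothetical load cost and a flag saying whether it affects above-the-fold rendering (all names and numbers below are made up):

```python
# Illustrative sketch (not the paper's algorithm): reorder page objects so
# that objects which do not affect above-the-fold rendering load last.
# Object names, costs, and the 'atf' flag are hypothetical.

def reorder_for_atf(objects):
    """Stable-partition objects: ATF-critical first, deferrable after."""
    atf_critical = [o for o in objects if o["atf"]]
    deferrable = [o for o in objects if not o["atf"]]
    return atf_critical + deferrable

def atf_time(objects):
    """ATF time = total load cost until the last ATF-critical object."""
    last = max((i for i, o in enumerate(objects) if o["atf"]), default=-1)
    return sum(o["cost"] for o in objects[: last + 1])

page = [
    {"name": "hero.css", "cost": 30, "atf": True},
    {"name": "analytics.js", "cost": 120, "atf": False},
    {"name": "header.png", "cost": 50, "atf": True},
]
print(atf_time(page))                   # 200: analytics.js blocks ATF
print(atf_time(reorder_for_atf(page)))  # 80: deferrable object moved last
```

The sketch shows why ordering alone can cut ATF time: moving the non-ATF object after the last ATF-critical one removes its cost from the ATF path without changing what renders above the fold.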

    Clustering and recommendation techniques for access control policy management

    Managing access control policies can be a daunting process, given the frequent policy decisions that need to be made and the potentially large number of policy rules involved. Policy management includes, but is not limited to, policy optimization, configuration, and analysis. Such tasks require a deep understanding of the policy and its building components, especially in scenarios where it frequently changes and needs to adapt to different environments. Assisting both administrators and users in performing these tasks is important in avoiding policy misconfigurations and ill-informed policy decisions. We investigate a number of clustering and recommendation techniques, and implement a set of tools that assist administrators and users in managing their policies. First, we propose and implement an optimization technique, based on policy clustering and adaptable rule ranking, to achieve optimal request-evaluation performance. Second, we implement a policy analysis framework that simplifies and visualizes analysis results, based on a hierarchical clustering algorithm. The framework utilizes a similarity-based model that provides a basis for risk analysis of newly introduced policy rules. In addition to administrators, we focus on regular individuals who nowadays manage their own access control policies on a regular basis. Users are making frequent policy decisions, especially with the increasing popularity of social network sites such as Facebook and Twitter. For example, users are required to allow/deny access to their private data on social sites each time they install a 3rd-party application. To make matters worse, 3rd-party access requests are mostly uncustomizable by the user. We propose a framework that allows users to customize their policy decisions on social sites, and provides a set of recommendations that assist users in making well-informed decisions.
    Finally, as the browser has become the main medium for users' online presence, we investigate access control models for 3rd-party browser extensions. Although extensions enrich the browsing experience, they can also represent a threat to user privacy. We propose and implement a framework that (1) monitors 3rd-party extension accesses, (2) provides fine-grained permission controls, and (3) provides detailed permission information to users in an effort to increase their privacy awareness. To evaluate the framework we conducted a within-subjects user study and found the framework to effectively increase user awareness of requested permissions.
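A hedged sketch of the kind of similarity-based rule grouping a clustering framework like the one described builds on (this is not the thesis's actual model; the rules, attribute sets, and the 0.5 threshold are illustrative assumptions):

```python
# Hypothetical example: cluster access-control rules by Jaccard similarity
# of their attribute sets. Rule contents and threshold are made up.

def jaccard(a, b):
    """Jaccard similarity between two attribute sets."""
    return len(a & b) / len(a | b)

def cluster(rules, threshold=0.5):
    """Greedy single-pass clustering: join a rule to the first cluster
    whose representative is similar enough, else start a new cluster."""
    clusters = []
    for name, attrs in rules.items():
        for c in clusters:
            if jaccard(attrs, c["rep"]) >= threshold:
                c["members"].append(name)
                break
        else:
            clusters.append({"rep": attrs, "members": [name]})
    return [c["members"] for c in clusters]

rules = {
    "r1": {"read", "photos", "friends"},
    "r2": {"read", "photos", "public"},
    "r3": {"write", "wall"},
}
print(cluster(rules))  # [['r1', 'r2'], ['r3']]
```

Grouping similar rules this way is what lets an analysis tool summarize a large policy for an administrator, or flag a newly introduced rule that lands far from every existing cluster as a risk candidate.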

    Patia: Adaptive distributed webserver (A position paper)

    This paper introduces the Patia Adaptive Webserver architecture, which is distributed and consists of semi-autonomous agents called FLYs. The FLY carries with it the set of rules and adaptivity policies required to deliver the data to the requesting client. Where a change in the FLY’s external environment could affect performance, it is the FLY’s responsibility to change the method of delivery (or the actual object being delivered). It is our conjecture that the success of today’s multimedia websites in terms of performance lies in the architecture of the underlying servers and their ability to adapt to changes in demand and resource availability, as well as their ability to scale. We believe that the distributed and autonomous nature of this system is a key factor in achieving this.

    Metrics for Broadband Networks in the Context of the Digital Economies

    In the transition to automated digital management of broadband networks, communication service providers must look for new metrics to monitor these networks. Complete metrics frameworks are already emerging, while the majority of new metrics are being proposed in technical papers. Considering common metrics for broadband networks and related technologies, this chapter offers insights into what metrics are available and also suggests active areas of research. Broadband networks, a key component of digital ecosystems, are also an enabler of many other digital technologies and services. After first reviewing metrics for computing systems, websites, and digital platforms, the chapter's focus shifts to the most important technical and business metrics used for broadband networks. The demand-side and supply-side metrics, including the key metrics of broadband speed and broadband availability, are touched on. After outlining the broadband metrics that have been standardized and the metrics for measuring Internet traffic, the most commonly used metrics for broadband networks are surveyed in five categories: energy and power metrics, quality of service, quality of experience, security metrics, and robustness and resilience metrics. The chapter concludes with a discussion of machine learning, big data, and the associated metrics.

    Experiences with formal engineering: model-based specification, implementation and testing of a software bus at Neopost

    We report on the actual industrial use of formal methods during the development of a software bus. During a 14-week internship at Neopost Inc., we developed the server component of a software bus, called the XBus, using formal methods during the design, validation and testing phases: we modeled our design of the XBus in the process algebra mCRL2, validated the design using the mCRL2 simulator, and fully automatically tested our implementation with the model-based test tool JTorX. This resulted in a well-tested software bus with a maintainable architecture. Writing the model (mdev), simulating it, and testing the implementation with JTorX took only 17% of the total development time. Moreover, the errors found with model-based testing would have been hard to find with conventional test methods. Thus, we show that formal engineering can be feasible, beneficial and cost-effective. The findings above, reported earlier by us in (Sijtema et al., 2011) [1], were well received, also in industrially oriented conferences (Ferreira and Romanenko, 2010) [2] and [3]. In this paper, we look back on the case study and carefully analyze its merits and shortcomings. We reflect on (1) the added benefits of model checking, (2) model completeness, and (3) the quality and performance of the test process. Thus, in a second phase, after the internship, we model checked the XBus protocol; this was not done in [1] since the Neopost business process required a working implementation after 14 weeks. We used the CADP tool evaluator4 to check the behavioral requirements obtained during the development. Model checking did not uncover errors in model mdev, but revealed that model mdev was neither complete nor optimized: in particular, requirements on the so-called bad-weather behavior (exceptions, unexpected inputs, etc.) were missing.
    Therefore, we created several improved models, checked that we could validate them, and used them to analyze the quality and performance of the test process. Model checking was expensive: it took us approximately 4 weeks in total, compared to 3 weeks for the entire model-based testing approach during the internship. In the second phase, we analyzed the quality and performance of the test process, looking at both code and model coverage. We found that high code coverage (almost 100%) is in most cases obtained within 1000 test steps and 2 minutes, which matches the fact that the faults in the XBus were discovered within a few minutes. Summarizing, we firmly believe that the formal engineering approach is cost-effective and produces high-quality software products. Model checking does yield significantly better models, but is also costly. Thus, system developers should trade off higher model quality against higher costs.
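mCRL2 and JTorX are real tools with their own languages and interfaces; the sketch below is only a generic illustration of the model-based-testing idea they embody, not their APIs. The model here is a tiny labeled transition system, and the states, actions, and step count are invented for the example:

```python
import random

# Generic illustration of model-based testing (not the mCRL2/JTorX APIs):
# a model as a labeled transition system, and a random walk that checks
# that an implementation-under-test conforms to the model at every step.
MODEL = {  # state -> {action: next_state}; states and actions are made up
    "idle":      {"connect": "connected"},
    "connected": {"send": "connected", "disconnect": "idle"},
}

def impl_step(state, action):
    """Toy 'implementation under test': here it follows the model."""
    return MODEL[state][action]

def run_test(steps=1000, seed=42):
    """Drive the implementation with model-chosen actions; fail on the
    first transition that disagrees with the model."""
    random.seed(seed)
    state = "idle"
    for _ in range(steps):
        action = random.choice(list(MODEL[state]))
        nxt = impl_step(state, action)
        assert nxt == MODEL[state][action], f"fault after {action}"
        state = nxt
    return steps

print(run_test())  # 1000 steps without a conformance violation
```

The abstract's coverage observation maps directly onto this loop: each iteration is one test step, and a buggy `impl_step` would typically be caught within the first few hundred iterations, i.e. within minutes.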

    TD-SCDMA Relay Networks

    PhD thesis. When this research was started, TD-SCDMA (Time Division Synchronous Code Division Multiple Access) was still in the research/development phase, but now, at the time of writing this thesis, it is in commercial use in 10 large cities in China, including Beijing and Shanghai. In all of these cities HSDPA is enabled. The roll-out of the commercial deployment is progressing fast, with installations in another 28 cities now underway. However, during the pre-commercial TD-SCDMA trial in China, which started in 2006, some interference problems were noticed, especially in the network planning and initialization phases. Interference is always an issue in any network, and the goal of the work reported in this thesis is to improve network coverage and capacity in the presence of interference. Based on an analysis of TD-SCDMA issues and how network interference arises, this thesis proposes two enhancements to the network in addition to the standard N-frequency technique. These are (i) the introduction of the concentric-circle cell concept and (ii) the addition of a relay network that makes use of other users at the cell boundary. This overall approach not only optimizes the resilience to interference but also increases the network coverage without adding more Node Bs. Based on the cell planning parameters from the research, TD-SCDMA HSDPA services in a dense urban area and non-HSDPA services in rural areas were simulated to investigate the network performance impact of introducing the relay network into a TD-SCDMA network. The results for HSDPA applications show significant improvement in the TD-SCDMA relay network in both network capacity and network interference compared to standard TD-SCDMA networks. The results for non-HSDPA service show that although the network capacity does not change after adding the relay network (due to the code limitation in TD-SCDMA), the TD-SCDMA relay network has better interference performance and greater coverage.

    Wireless Bandwidth Aggregation for Internet Traffic

    This MQP proposes a new method for bandwidth aggregation, usable by the typical home network owner. The methods explained herein aggregate a network of coordinating routers within local WiFi communication range to achieve increased bandwidth at the application layer, over the HTTP protocol. Our protocol guarantees content delivery and reliability, as well as non-repudiation measures that hold each participant, rather than the group of routers, accountable for the content they download.
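A hedged sketch of the aggregation idea at the HTTP layer (this is not the MQP's protocol; the router count, content length, and helper names are hypothetical): split a download into byte ranges, hand one range to each cooperating router as an HTTP `Range` request, then reassemble in order.

```python
# Hypothetical example: application-layer bandwidth aggregation by
# splitting one HTTP download into per-router byte ranges.

def split_ranges(content_length, n_routers):
    """Divide [0, content_length) into contiguous (start, end) Range
    requests, one per cooperating router; the last router takes the
    remainder."""
    chunk = content_length // n_routers
    ranges = []
    for i in range(n_routers):
        start = i * chunk
        end = content_length - 1 if i == n_routers - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

def reassemble(parts):
    """Order fetched (start_offset, data) parts and concatenate them."""
    return b"".join(data for _, data in sorted(parts))

print(split_ranges(1000, 3))  # [(0, 332), (333, 665), (666, 999)]
```

Each `(start, end)` pair maps onto a standard `Range: bytes=start-end` request header, which is why the scheme needs no server-side cooperation beyond ordinary range support.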

    Visualization and user interactions in RDF data representation

    The spread of linked data in digital technologies creates the need to develop new approaches to handling this kind of data. Modern trends in information technology encourage the use of human-friendly interfaces and graphical tools, which help users understand the system and speed up work processes. In this study my goal is to develop a set of best practices for solving the problem of visualizing and interacting with linked data, and to create a working prototype based on these practices. My work is part of a project developed by Fail-Safe IT Solutions Oy. During the research process I study various existing products that try to solve the problem of human-friendly interaction with linked data, compare them, and, based on the comparison, develop my own approach for solving the problem in the given environment, one that satisfies the provided specifications. The key findings of the research can be grouped into two categories. The first category of findings is based on the examination of existing solutions and relates to the features I find beneficial to the project. The second category is based on the experience acquired during project development and includes environment-specific and project-related findings.
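The prototype itself is not described in the abstract; as a minimal sketch of the underlying task, the snippet below converts RDF-style subject-predicate-object triples into the node/edge lists a graph-visualization front end typically consumes. The triples are made-up examples, not data from the thesis.

```python
# Illustrative sketch: flatten RDF-style triples into a node/edge structure
# suitable for a graph visualization. The example triples are hypothetical.

triples = [
    ("ex:Alice", "foaf:knows", "ex:Bob"),
    ("ex:Alice", "rdf:type", "foaf:Person"),
    ("ex:Bob", "rdf:type", "foaf:Person"),
]

def to_graph(triples):
    """Subjects and objects become nodes; each triple becomes a labeled
    edge from subject to object."""
    nodes = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
    edges = [{"from": s, "to": o, "label": p} for s, p, o in triples]
    return {"nodes": nodes, "edges": edges}

g = to_graph(triples)
print(len(g["nodes"]), len(g["edges"]))  # 3 3
```

The point of the transformation is that the predicate moves onto the edge label, so shared resources (here `foaf:Person`) collapse into a single node the user can see connections converge on.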

    Performance and Power Characterization of Cellular Networks and Mobile Application Optimizations.

    Smartphones with cellular data access have become increasingly popular with the wide variety of mobile applications. However, the performance and power footprint of these mobile applications are not well understood, and, due to unawareness of cellular-specific characteristics, many of these applications cause inefficient radio resource and device energy usage. In this dissertation, we aim to provide a suite of systematic methodologies and tools to better understand the performance and power characteristics of cellular networks (3G and the new LTE 4G networks) and the mobile applications relying on them, and to optimize mobile application design based on this understanding. We have built the MobiPerf tool to understand the characteristics of cellular networks. With this knowledge, we perform detailed analyses of smartphone application performance via controlled experiments and via a large-scale data set from one major U.S. cellular carrier. To understand the power footprint of mobile applications, we have derived comprehensive power models for different network types and characterized the radio energy usage of various smartphone applications via both controlled experiments and 7-month-long traces collected from 20 real users. Specifically, we characterize the radio and energy impact of the network traffic generated when the phone screen is off and propose screen-aware traffic optimization. In addition to shedding light on mobile application design throughout our characterization analysis, we further design and implement a real optimization system, RadioProphet, which uses historical traffic features to make predictions and intelligently deallocate radio resources for improved radio and energy efficiency.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/99905/1/hjx_1.pd
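A hedged sketch in the spirit of the power models described (not the dissertation's actual model): a simplified 3G-style radio state machine where a packet promotes the radio to the high-power state, which then decays through a tail state before returning to idle. The power levels and tail timers below are illustrative assumptions, not measured values.

```python
# Hypothetical radio energy model: DCH (high power) for a fixed tail after
# each packet, then FACH (medium), then IDLE. All numbers are made up.
POWER = {"IDLE": 10, "FACH": 400, "DCH": 800}  # hypothetical mW
TAIL = {"DCH": 5, "FACH": 12}                  # hypothetical seconds

def energy(events, horizon):
    """events: packet timestamps (seconds); returns energy in mW*s over
    [0, horizon), with 1-second resolution."""
    total = 0.0
    for t in range(horizon):
        in_dch = any(t0 <= t < t0 + TAIL["DCH"] for t0 in events)
        in_fach = any(t0 + TAIL["DCH"] <= t < t0 + TAIL["DCH"] + TAIL["FACH"]
                      for t0 in events)
        state = "DCH" if in_dch else ("FACH" if in_fach else "IDLE")
        total += POWER[state]
    return total

# One packet at t=0 over 20s: 5s DCH + 12s FACH + 3s IDLE.
print(energy([0], 20))  # 8830.0
```

Even this toy model exposes the effect the dissertation measures: most of the energy is spent in the tail after the packet, not in the transfer itself, which is why batching or deferring screen-off traffic (as in the screen-aware optimization) saves energy.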