
    Optimal web-scale tiering as a flow problem

    We present a fast online solver for large-scale parametric max-flow problems as they occur in portfolio optimization, inventory management, computer vision, and logistics. Our algorithm solves an integer linear program in an online fashion. It exploits the total unimodularity of the constraint matrix and a Lagrangian relaxation to solve the problem as a convex online game. The algorithm generates approximate solutions of max-flow problems by performing stochastic gradient descent on a set of flows. We apply the algorithm to optimize the tier arrangement of over 84 million web pages on a layered set of caches so as to serve an incoming query stream optimally.
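
    The combination of Lagrangian relaxation and stochastic gradient steps described above can be illustrated with a toy dual-ascent sketch (a minimal illustration, not the paper's actual online solver; all sizes, costs, and names here are hypothetical): once the tier-capacity constraints are priced into the objective via multipliers, each page independently picks its cheapest tier, and the multipliers are updated by subgradient steps on the observed capacity violations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the web-scale setting: pages, cache tiers, costs.
n_pages, n_tiers = 1000, 3
cost = np.array([1.0, 4.0, 16.0])          # serving cost per hit, by tier
capacity = np.array([100, 300, n_pages])   # tier capacities (in pages)
popularity = rng.zipf(1.5, n_pages).astype(float)  # query frequencies

lam = np.zeros(n_tiers)  # Lagrange multipliers pricing each tier's capacity
eta = 0.01               # dual step size

for _ in range(2000):
    # With capacities relaxed, each page independently chooses the tier
    # minimizing popularity-weighted serving cost plus the congestion price.
    priced_cost = popularity[:, None] * cost[None, :] + lam[None, :]
    assign = priced_cost.argmin(axis=1)

    # Projected subgradient ascent on the dual: raise the price of
    # overloaded tiers, lower (toward zero) the price of underused ones.
    load = np.bincount(assign, minlength=n_tiers)
    lam = np.maximum(0.0, lam + eta * (load - capacity))

print("tier loads:", np.bincount(assign, minlength=n_tiers))
```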

    Stochastic Query Covering for Fast Approximate Document Retrieval

    We design algorithms that, given a collection of documents and a distribution over user queries, return a small subset of the document collection such that we can efficiently provide high-quality answers to user queries using only the selected subset. This approach has applications when space is a constraint or when query-processing time increases significantly with the size of the collection. We study our algorithms through the lens of stochastic analysis and prove that, even though they use only a small fraction of the entire collection, they can provide answers to most user queries, achieving performance close to optimal. To complement our theoretical findings, we experimentally show the versatility of our approach by considering two important cases in the context of Web search. In the first case, we favor the retrieval of documents that are relevant to the query, whereas in the second case we aim for document diversification. Both the theoretical and the experimental analyses provide strong evidence of the potential value of query covering in diverse application scenarios.
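
    One way to make the covering idea concrete is a greedy maximum-coverage heuristic run on a sample drawn from the query distribution (a sketch under assumed names; the paper's actual algorithms and their stochastic analysis are not shown here):

```python
from collections import defaultdict

def greedy_query_cover(docs, sampled_queries, budget, answers):
    """Select up to `budget` documents so that as many sampled queries
    as possible have at least one acceptable answer in the subset.
    `answers(q)` returns the set of doc ids acceptable for query q."""
    # Invert the relation: which sampled queries does each doc answer?
    doc_to_queries = defaultdict(set)
    for i, q in enumerate(sampled_queries):
        for d in answers(q):
            doc_to_queries[d].add(i)

    covered, chosen = set(), []
    for _ in range(budget):
        # Pick the document covering the most still-uncovered queries.
        best = max(docs, key=lambda d: len(doc_to_queries[d] - covered))
        gain = doc_to_queries[best] - covered
        if not gain:
            break
        chosen.append(best)
        covered |= gain
    return chosen
```

    Because the coverage gain is submodular, this greedy rule attains the classic (1 - 1/e) approximation to the best subset on the sampled queries.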

    Downstream Bandwidth Management for Emerging DOCSIS-based Networks

    In this dissertation, we consider downstream bandwidth management in the context of emerging DOCSIS-based cable networks. The latest DOCSIS 3.1 standard for cable access networks represents a significant change to cable networks. For the downstream, the current 6 MHz channel size is replaced by a much larger 192 MHz channel, which can potentially provide data rates up to 10 Gbps. Further, the current standard requires equipment to support a relatively new form of active queue management (AQM) referred to as delay-based AQM. Given that more than 50 million households (and climbing) use cable for Internet access, a clear understanding of the impacts of bandwidth management strategies used in these emerging networks is crucial. Further, given the scope of the change brought by emerging cable systems, now is the time to develop and introduce innovative new methods for managing bandwidth. With this motivation, we address research questions pertaining to the next generation of cable access networks. The cable industry has had to deal with the problem of a small number of subscribers who utilize the majority of network resources. This problem will grow as access rates increase to gigabits per second. Fundamentally, this is a problem of how to manage data flows fairly and provide protection. A well-known performance issue in the Internet, referred to as bufferbloat, has received significant attention recently. High-throughput network flows need sufficiently large buffers to keep the pipe full and absorb occasional burstiness. Standard practice, however, has led to equipment offering very large unmanaged buffers that can result in sustained queue levels that increase packet latency. One reason these problems continue to plague cable access networks is the desire for low-complexity bandwidth management that is easily explainable (to access network subscribers and to the Federal Communications Commission). This research begins by evaluating modern delay-based AQM algorithms in downstream DOCSIS 3.0 environments, with a focus on the fairness and application performance capabilities of single-queue AQMs. We are especially interested in delay-based AQM schemes that have been proposed to combat the bufferbloat problem. Our evaluation involves a variety of scenarios that include tiered services and application workloads. Based on our results, we show that in scenarios involving realistic workloads, modern delay-based AQMs can effectively mitigate bufferbloat; however, they do not address the related problem of managing fairness. To address the combined problem of fairness and bufferbloat, we propose a novel approach to bandwidth management that provides a compromise among the conflicting requirements. We introduce a flow quantization method referred to as adaptive bandwidth binning, where flows that are observed to consume similar levels of bandwidth are grouped together, with the system managed through a hierarchical scheduler designed to approximate weighted fairness while addressing bufferbloat. Based on a simulation study that considers many experimental parameters, including workloads and network configurations, we provide evidence of the efficacy of the idea. Our results suggest that the scheme is able to provide long-term fairness and low delay, with performance close to that of a reference approach based on fair queueing. A further contribution is our idea of replacing 'tiered' levels of service based on service rates with tiering based on weights. The application of our bandwidth binning scheme offers a timely and innovative alternative for broadband service that leverages the potential offered by emerging DOCSIS-based cable systems.
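
    A rough sketch of the adaptive bandwidth binning idea (the bin edges, weights, and function names below are hypothetical placeholders, not the dissertation's parameters): flows are periodically re-grouped by their observed bandwidth consumption, and a weighted scheduler splits service across the bins so that light flows are protected from heavy ones.

```python
from collections import defaultdict

# Hypothetical bin edges (bits/s) and per-bin weights: lighter bins get
# proportionally more service per flow than heavier ones.
BIN_EDGES = [100_000, 1_000_000, 10_000_000]  # light/medium/heavy/very heavy
BIN_WEIGHTS = [8, 4, 2, 1]

def bin_of(rate_bps):
    """Map an observed flow rate to its consumption bin."""
    for i, edge in enumerate(BIN_EDGES):
        if rate_bps < edge:
            return i
    return len(BIN_EDGES)

def rebin(flow_rates):
    """Periodic adaptation step: group flows by measured bandwidth."""
    bins = defaultdict(list)
    for flow, rate in flow_rates.items():
        bins[bin_of(rate)].append(flow)
    return bins

def split_quantum(bins, total_quantum):
    """Divide a scheduling quantum across non-empty bins by weight,
    approximating weighted fairness between consumption classes."""
    active = [b for b, flows in bins.items() if flows]
    wsum = sum(BIN_WEIGHTS[b] for b in active)
    return {b: total_quantum * BIN_WEIGHTS[b] / wsum for b in active}
```

    Within a bin, flows could then share the bin's quantum via ordinary fair queueing, with a delay-based AQM on each queue keeping standing queues, and hence latency, low.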

    How the High Performance Analytics Work with SAP HANA

    Informed decision-making, better communication, and faster response to business situations are the key differences between leaders and followers in this competitive global marketplace. A data-driven organization can analyze patterns and anomalies to make sense of the current situation and be ready for future opportunities. Organizations no longer have a problem of “lack of data”, but rather of getting “actionable data” at the right time to act on, direct, and influence their business decisions. The data exists in different transactional systems and/or data warehouse systems, so retrieving and processing relevant information takes significant time and negatively impacts the window of time in which to out-maneuver the competition. To solve the problem of “actionable data”, enterprises can take advantage of the SAP HANA [1] in-memory platform, which enables rapid processing and analysis of huge volumes of data in real time. This paper discusses how SAP HANA virtual data models can be used for on-the-fly analysis of live transactional data to derive insight, perform what-if analysis, and execute business transactions in real time without using persisted aggregates.
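
    As a hedged illustration of the on-the-fly analysis described above (the connection details and the calculation view "sales/SALES_CV" are invented placeholders; only the hdbcli driver and the _SYS_BIC schema are standard HANA conventions), a client can aggregate live transactional data directly through a virtual data model instead of reading persisted aggregates:

```python
from hdbcli import dbapi  # SAP's Python driver for HANA

# Placeholder connection parameters.
conn = dbapi.connect(address="hana-host", port=30015,
                     user="ANALYST", password="secret")
cur = conn.cursor()

# Aggregate live transactional data through a calculation view at query
# time; no persisted aggregates are read. The view name is hypothetical.
cur.execute("""
    SELECT "REGION", SUM("REVENUE") AS "TOTAL_REVENUE"
    FROM "_SYS_BIC"."sales/SALES_CV"
    GROUP BY "REGION"
""")
for region, total in cur.fetchall():
    print(region, total)

cur.close()
conn.close()
```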

    The Economics of Net Neutrality: Implications of Priority Pricing in Access Networks

    This work systematically analyzes Net Neutrality from an economic point of view. To this end, a framework is developed that helps structure the Net Neutrality debate. Furthermore, the introduction of prioritization is studied by analyzing the potential effects of Quality of Service (QoS) on Content and Service Providers (CSPs) and Internet Users (IUs).

    Network Neutrality, Consumers, and Innovation

    In this Article, Professor Christopher Yoo directly engages claims that mandating network neutrality is essential to protect consumers and to promote innovation on the Internet. The Article begins by analyzing the forces that are placing pressure on the basic network architecture to evolve, such as the emergence of Internet video and peer-to-peer architectures and the increasing heterogeneity in business relationships and transmission technologies. It then draws on the insights of demand-side price discrimination (such as Ramsey pricing) and two-sided markets, as well as the economics of product differentiation and congestion, to show how deviating from network neutrality can benefit consumers, a conclusion bolstered by the empirical literature showing that vertical restraints tend to increase rather than reduce consumer welfare. In fact, limiting network providers’ ability to vary the prices charged to content and applications providers may actually force consumers to bear a greater proportion of the costs to upgrade the network. Restricting network providers’ ability to experiment with different protocols may also reduce innovation by foreclosing applications and content that depend on a different network architecture and by dampening the price signals needed to stimulate investment in new applications and content. In the process, Professor Yoo draws on the distinction between generalizing and exemplifying theory to address some of the arguments advanced by his critics. While the exemplifying theories on which these critics rely are useful for rebutting calls for broad, categorical, ex ante rules, their restrictive nature leaves them ill-suited to serve as the foundation for broad, categorical ex ante mandates pointing in the other direction. Thus, in the absence of some empirical showing that the factual preconditions of any particular exemplifying theory have been satisfied, the existence of exemplifying theories pointing in both directions actually supports an ex post, case-by-case approach that allows network providers to experiment with different pricing regimes unless and until a concrete harm to competition can be shown.
