
    Measuring time to interactivity for modern Web pages

    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 53-56). By Vikram Nathan.
    Web pages continually strive for faster loading times to improve user experience. However, a good metric for "page load time" is elusive. In particular, we contend that modern web pages should be evaluated with respect to interactivity: a page should be considered loaded when the user can fully interact with all visible content. However, existing metrics fail to accurately measure interactivity. On one hand, "page load time", the most widely used metric, overestimates the time to full interactivity by requiring that all content on a page has been both fetched and evaluated, including content below-the-fold that is not immediately visible to the user. Newer metrics like Above-the-Fold Time and Speed Index solve this problem by focusing primarily on above-the-fold content; however, these metrics only evaluate the time at which a page is fully visible to the user, disregarding page functionality, and thus interactivity. In this thesis, we define a new metric called Ready Index, which explicitly captures interactivity. Defining the metric is straightforward, but measuring it is not, since web developers do not explicitly annotate the parts of a page that support user interaction. To solve this problem, we introduce Vesper, a tool which rewrites a page's source code to automatically discover the page's interactive state. Armed with Vesper, we compare Ready Index to prior load time metrics like Speed Index. We find that, across a variety of network conditions, prior metrics underestimate or overestimate the true load time for a page by between 24% and 64%. Additionally, we introduce a tool that optimizes a page for Ready Index and is able to decrease the median time to page interactivity by between 29% and 32%.
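
    The Ready Index described above follows the Speed Index pattern: integrate, over time, the fraction of above-the-fold content that is not yet usable. A minimal sketch of that style of computation, assuming per-element ready times and visibility weights have already been extracted (by a tool like Vesper); the element list, weights, and times here are hypothetical:

        def ready_index(elements):
            """Speed-Index-style integral: accumulate, between ready
            events, the fraction of weighted content not yet ready.

            elements: (weight, ready_time_ms) pairs, where weight
            reflects an element's above-the-fold visibility."""
            total = sum(w for w, _ in elements)
            ri, done, prev_t = 0.0, 0.0, 0.0
            for w, t in sorted(elements, key=lambda e: e[1]):
                ri += (1 - done / total) * (t - prev_t)
                done, prev_t = done + w, t
            return ri  # lower is better, like Speed Index

        # Hypothetical page: three visible elements become interactive
        # at 400 ms, 900 ms, and 1500 ms.
        print(ready_index([(3, 400), (1, 900), (2, 1500)]))  # 850.0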

    Tsunami: A Learned Multi-dimensional Index for Correlated Data and Skewed Workloads

    Filtering data based on predicates is one of the most fundamental operations for any modern data warehouse. Techniques to accelerate the execution of filter expressions include clustered indexes, specialized sort orders (e.g., Z-order), multi-dimensional indexes, and, for high selectivity queries, secondary indexes. However, these schemes are hard to tune and their performance is inconsistent. Recent work on learned multi-dimensional indexes has introduced the idea of automatically optimizing an index for a particular dataset and workload. However, the performance of that work suffers in the presence of correlated data and skewed query workloads, both of which are common in real applications. In this paper, we introduce Tsunami, which addresses these limitations to achieve up to 6X faster query performance and up to 8X smaller index size than existing learned multi-dimensional indexes, in addition to up to 11X faster query performance and 170X smaller index size than optimally-tuned traditional indexes.
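
    The Z-order mentioned above as a baseline linearizes multi-dimensional keys by interleaving the bits of each coordinate, so points that are close in space tend to stay close in the sort order. A minimal sketch of that baseline technique (not Tsunami's learned approach); the coordinate width is illustrative:

        def z_order(coords, bits=16):
            """Interleave the bits of each coordinate into one Morton code.

            coords: tuple of non-negative ints, each < 2**bits.
            Bit i of dimension d lands at position i * len(coords) + d."""
            code = 0
            for i in range(bits):
                for d, c in enumerate(coords):
                    code |= ((c >> i) & 1) << (i * len(coords) + d)
            return code

        # Sorting rows by their Morton code yields the Z-order clustering.
        points = [(3, 7), (100, 2), (5, 5)]
        points.sort(key=z_order)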

    Learning Multi-dimensional Indexes

    Scanning and filtering over multi-dimensional tables are key operations in modern analytical database engines. To optimize the performance of these operations, databases often create clustered indexes over a single dimension or multi-dimensional indexes such as R-trees, or use complex sort orders (e.g., Z-ordering). However, these schemes are often hard to tune and their performance is inconsistent across different datasets and queries. In this paper, we introduce Flood, a multi-dimensional in-memory index that automatically adapts itself to a particular dataset and workload by jointly optimizing the index structure and data storage. Flood achieves up to three orders of magnitude faster performance for range scans with predicates than state-of-the-art multi-dimensional indexes or sort orders on real-world datasets and workloads. Our work serves as a building block towards an end-to-end learned database system.
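
    The general idea behind a grid-style multi-dimensional index that adapts to the data can be sketched as follows, assuming a simple equi-depth partitioning per dimension (Flood's actual optimization of the layout against a workload is more involved; the class and parameters below are illustrative):

        import bisect
        from itertools import product

        class GridIndex:
            """Toy multi-dimensional grid index: each dimension is cut at
            empirical quantiles, so cell boundaries adapt to the data."""

            def __init__(self, points, cols_per_dim=4):
                self.k = cols_per_dim
                n, dims = len(points), len(points[0])
                # Equi-depth split boundaries per dimension (a learned index
                # would instead place these to minimize expected scan cost).
                self.splits = []
                for d in range(dims):
                    vals = sorted(p[d] for p in points)
                    self.splits.append([vals[n * i // self.k]
                                        for i in range(1, self.k)])
                self.cells = {}
                for p in points:
                    self.cells.setdefault(self._cell(p), []).append(p)

            def _cell(self, p):
                return tuple(bisect.bisect_right(s, v)
                             for s, v in zip(self.splits, p))

            def range_query(self, lo, hi):
                """Visit only grid cells overlapping the box [lo, hi]."""
                lo_c, hi_c = self._cell(lo), self._cell(hi)
                hits = []
                for cell in product(*[range(a, b + 1)
                                      for a, b in zip(lo_c, hi_c)]):
                    hits += [p for p in self.cells.get(cell, ())
                             if all(l <= v <= h
                                    for l, v, h in zip(lo, p, hi))]
                return hits

        pts = [(x, (7 * x) % 10) for x in range(100)]
        idx = GridIndex(pts)
        print(idx.range_query((10, 0), (20, 5)))

    The point of jointly choosing the number of columns per dimension and the split placement is that a range query only pays for the cells its box touches, so the layout can be tuned to the query distribution.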

    X-ray Emission from SN 2012ca: A Type Ia-CSM Supernova Explosion in a Dense Surrounding Medium

    X-ray emission is one of the signposts of circumstellar interaction in supernovae (SNe), but until now, it has been observed only in core-collapse SNe. The level of thermal X-ray emission is a direct measure of the density of the circumstellar medium (CSM), and the absence of X-ray emission from Type Ia SNe has been interpreted as a sign of a very low density CSM. In this paper, we report late-time (500-800 days after discovery) X-ray detections of SN 2012ca in Chandra data. The presence of hydrogen in the initial spectrum led to a classification of Type Ia-CSM, ostensibly making it the first SN Ia detected with X-rays. Our analysis of the X-ray data favors an asymmetric medium, with a high-density component which supplies the X-ray emission. The data suggest a number density > 10^8 cm^-3 in the higher-density medium, which is consistent with the large observed Balmer decrement if it arises from collisional excitation. This is high compared to most core-collapse SNe, but it may be consistent with densities suggested for some Type IIn or superluminous SNe. If SN 2012ca is a thermonuclear SN, the large CSM density could imply clumps in the wind, or a dense torus or disk, consistent with the single-degenerate channel. A remote possibility for a core-degenerate channel involves a white dwarf merging with the degenerate core of an asymptotic giant branch star shortly before the explosion, leading to a common envelope around the SN.
    Comment: 11 pages, 4 figures. Accepted to MNRAS.
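
    The statement that thermal X-ray emission directly measures CSM density follows from the standard free-free (bremsstrahlung) scaling; a short worked relation (textbook physics, not the paper's detailed model):

        L_X \;\propto\; \Lambda(T)\,\mathrm{EM},
        \qquad
        \mathrm{EM} \;=\; \int n_e\, n_H\, \mathrm{d}V \;\approx\; n^2 V

    At fixed temperature and emitting volume, L_X scales as n^2, so a late-time X-ray detection translates directly into a density estimate for the shocked CSM.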

    Hardware-Software Co-Design for Network Performance Measurement

    Diagnosing performance problems in networks is important, for example to determine where packets experience high latency or loss. However, existing performance diagnoses are constrained by limited switch mechanisms for measurement. Alternatively, operators use endpoint information indirectly to infer root causes for problematic latency or drops. Instead of designing piecemeal solutions to work around such switch restrictions, we believe that the right approach is to co-design language abstractions and switch hardware primitives for network performance measurement. This approach provides confidence that the switch primitives are sufficiently general, i.e., they can support a variety of existing and unanticipated use cases. We present a declarative query language that allows operators to ask a diverse set of network performance questions. We show that these queries can be implemented efficiently in switch hardware using a novel programmable key-value store primitive. Our preliminary evaluations show that our hardware design is feasible at modest chip area overhead relative to existing switching chips.
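
    The key-value store primitive can be pictured as a per-flow read-modify-write executed for every packet, with the aggregation supplied by the operator's query (the paper restricts such updates so they fit in switch hardware; the class, field names, and EWMA query below are illustrative, not the paper's API):

        class SwitchKV:
            """Toy model of a programmable switch key-value store:
            every packet triggers a read-modify-write on its flow's
            state, with the update function supplied by the query."""

            def __init__(self, update_fn, init_state):
                self.update_fn, self.init_state = update_fn, init_state
                self.table = {}

            def on_packet(self, key, pkt):
                state = self.table.get(key, self.init_state)
                self.table[key] = self.update_fn(state, pkt)

        ALPHA = 0.1  # EWMA weight, illustrative

        def ewma_latency(state, pkt):
            # A linear-in-state update: new = (1 - a)*old + a*sample.
            return (1 - ALPHA) * state + ALPHA * pkt["queue_latency_us"]

        kv = SwitchKV(ewma_latency, init_state=0.0)
        flow = ("10.0.0.1", "10.0.0.2")
        kv.on_packet(flow, {"queue_latency_us": 50})
        kv.on_packet(flow, {"queue_latency_us": 250})
        print(kv.table[flow])  # smoothed per-flow queueing latency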