
    Why everyone should have to learn computer programming

    First paragraph: News that numerous cathedrals are offering short courses in Latin is a reminder of the long decline of the language. It was a core subject in the British education system until fairly recently – not because anyone planned to speak it, of course, but because it was believed to offer valuable training in intellectual composition, as well as skills and thinking that were transferable to other fields. Access this article on The Conversation website: https://theconversation.com/why-everyone-should-have-to-learn-computer-programming-6232

    TCP Goes to Hollywood

    Real-time multimedia applications use either TCP or UDP at the transport layer, yet neither protocol offers all of the features required. Deploying a new protocol that does offer these features is made difficult by ossification: firewalls and other middleboxes in the network expect TCP or UDP, and block other types of traffic. We present TCP Hollywood, a protocol that is wire-compatible with TCP while offering an unordered, partially reliable, message-oriented transport service that is well suited to multimedia applications. Analytical results show that TCP Hollywood extends the feasibility of using TCP for real-time multimedia applications by reducing latency and increasing utility. Preliminary evaluations also show that TCP Hollywood is deployable on the public Internet, with safe failure modes. Measurements across all major UK fixed-line and cellular networks validate the possibility of deployment.
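A message-oriented service like the one described above must still travel inside TCP's byte stream. A minimal length-prefix framing sketch (illustrative only; the paper defines TCP Hollywood's actual wire format, which this is not) shows how message boundaries can be recovered from a raw byte stream:

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix a message with a 2-byte big-endian length."""
    return struct.pack("!H", len(msg)) + msg

def deframe(buf: bytes):
    """Extract complete messages from a byte stream; return them
    together with any trailing partial message still awaiting bytes."""
    msgs, off = [], 0
    while off + 2 <= len(buf):
        (n,) = struct.unpack_from("!H", buf, off)
        if off + 2 + n > len(buf):
            break  # partial message: wait for more bytes
        msgs.append(buf[off + 2 : off + 2 + n])
        off += 2 + n
    return msgs, buf[off:]
```

Because each message is self-delimiting, a receiver could hand completed messages to the application without waiting for retransmission of earlier losses, which is the kind of partially reliable delivery the abstract describes.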

    Implementing Real-Time Transport Services over an Ossified Network

    Real-time applications require a set of transport services not currently provided by widely deployed transport protocols. Ossification prevents the deployment of novel protocols, restricting solutions to protocols using either TCP or UDP as a substrate. We describe the transport services required by real-time applications. We show that, in the short term (i.e., while UDP is blocked at current levels), TCP offers a feasible substrate for providing these services. Over the longer term, protocols using UDP may reduce the number of networks blocking UDP, enabling a shift towards its use as a demultiplexing layer for novel transport protocols.

    An Analysis of Planarity in Face-Routing

    In this report we investigate the limits of routing according to the left- or right-hand rule (LHR). Using LHR, a node, upon receipt of a message, forwards it to the neighbour that sits next in counter-clockwise order in the network graph. When used to recover from greedy routing failures, LHR guarantees success if implemented over planar graphs; this is often referred to as face or geographic routing. In the current body of knowledge it is known that if planarity is violated then LHR is guaranteed only to eventually return to the point of origin. Our work seeks to understand why a non-planar environment stops LHR from making delivery guarantees. Our investigation begins with an analysis that enumerates all node configurations that cause intersections. A trace over each configuration reveals that LHR is able to recover from all but a single case: the `umbrella' configuration, so named for its appearance. We use this information to propose the Prohibitive Link Detection Protocol (PDLP), which can guarantee delivery over non-planar graphs using standard face-routing techniques. As the name implies, the protocol detects and circumvents the `bad' links that hamper LHR. The goal of this work is to maintain routing guarantees while disturbing the network graph as little as possible. In doing so, a new starting point emerges from which to build rich distributed protocols in the spirit of CLDP and GDSTR.
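The counter-clockwise selection at the heart of LHR can be sketched in a few lines. This is an illustrative next-hop function under assumed 2-D node coordinates; the function name and tie-break are ours, not the report's:

```python
import math

def lhr_next_hop(node, prev, neighbours):
    """Left-hand-rule sketch: forward to the neighbour that sits next
    in counter-clockwise order after the edge the message arrived on.
    `node`, `prev`, and each neighbour are (x, y) coordinates."""
    # Angle of the incoming edge, used as the reference direction.
    base = math.atan2(prev[1] - node[1], prev[0] - node[0])

    def ccw_offset(n):
        ang = math.atan2(n[1] - node[1], n[0] - node[0])
        # Angle swept counter-clockwise from the incoming edge;
        # an offset of 0 (i.e. bouncing straight back to `prev`)
        # is treated as a full turn, making it the last resort.
        return (ang - base) % (2 * math.pi) or 2 * math.pi

    return min(neighbours, key=ccw_offset)
```

For a message arriving from the east at a node with neighbours to the north and south, the rule selects the northern neighbour, the first one swept counter-clockwise from the incoming edge.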

    Prohibitive-link Detection and Routing Protocol

    Abstract: In this paper we investigate the limits of routing according to the left- or right-hand rule (LHR). Using LHR, a node, upon receipt of a message, forwards it to the neighbour that sits next in counter-clockwise order in the network graph. When used to recover from greedy routing failures, LHR guarantees success if implemented over planar graphs. We note, however, that if planarity is violated then LHR is only guaranteed to eventually return to the point of origin. Our work seeks to understand why. An enumeration and analysis of possible intersections leads us to propose the Prohibitive-link Detection and Routing Protocol (PDRP), which can guarantee delivery over non-planar graphs. As the name implies, the protocol detects and circumvents the 'bad' links that hamper LHR. Our implementation of PDRP in TinyOS delivers the same level of service as face-routing protocols despite preserving most intersecting links in the network.

    Evaluating practical QUIC website fingerprinting defenses for the masses

    Abstract: Website fingerprinting (WF) is a well-known threat to users' web privacy. New Internet standards, such as QUIC, include padding to support defenses against WF. Previous work on QUIC WF only analyzes the effectiveness of defenses when users are behind a VPN. Yet, this is not how most users browse the Internet. In this paper, we provide a comprehensive evaluation of QUIC-padding-based defenses against WF when users directly browse the web, i.e., without VPNs, HTTPS proxies, or other tunneling protocols. We confirm previous claims that network-layer padding cannot provide effective protection against powerful adversaries capable of observing all traffic traces. We show that the claims hold even against adversaries with constraints on traffic visibility and processing power. We then show that the current approach to web development, in which the use of third-party resources is the norm, impedes the effective use of padding-based defenses, as it requires first and third parties to coordinate in order to thwart traffic analysis. We show that even when coordination is possible, in most cases, protection comes at a high cost.

    Bitrate adaptation-aware cache partitioning for video streaming over Information-centric Networks

    Recent studies suggest that performance gains for content delivery over Information-centric Networks (ICNs) may be negated by Dynamic Adaptive Streaming (DAS), the de facto method for retrieval of multimedia content. The bitrate adaptation mechanism that drives video streaming appears to clash with generic ICN caching techniques in ways that affect users' Quality of Experience (QoE). Cache performance diminishes as video consumers dynamically select content encoded at different bitrates. Motivated by preliminary evidence suggesting the merits of bitrate-based cache partitioning, we introduce a scheme to partition the cache capacity of routers along a forwarding path according to dedicated bitrates. To guide this partitioning, we propose the RippleCache principle, which stabilizes bandwidth fluctuation while achieving high cache utilization by safeguarding high-bitrate content on the edge and pushing low-bitrate content into the network core. We further propose a cache placement scheme, RippleFinder, to realize the RippleCache principle. The performance gains are reinforced by evaluations in NS-3. Measurements show that RippleFinder can significantly reduce bitrate oscillation while ensuring high video quality, indicating overall improvement to QoE.
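The edge-to-core layering the RippleCache principle describes can be illustrated with a toy assignment of bitrate tiers to hops along a forwarding path. This is a sketch of the idea only, under assumed inputs, and not the paper's RippleFinder algorithm:

```python
def ripple_partition(path_len, bitrates):
    """Illustrative RippleCache-style layering: dedicate routers near
    the consumer edge (hop 0) to the highest bitrates and push lower
    bitrates toward the network core (higher hop counts)."""
    tiers = sorted(bitrates, reverse=True)  # highest bitrate first
    return {
        hop: tiers[min(hop * len(tiers) // path_len, len(tiers) - 1)]
        for hop in range(path_len)
    }
```

For a four-hop path and three encodings, the two edge routers would be dedicated to the highest bitrate, with lower bitrates assigned progressively toward the core.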

    A cache-level quality of experience metric to characterize ICNs for adaptive streaming

    Adaptive streaming has motivated information-centric network (ICN) designs to improve end-user quality of experience (QoE). However, their management and evaluation rely either on conventional cache-level metrics that are poor representations of QoE, or on consumer-side indicators that are opaque to network services. This letter proposes a measure to bridge the gap between cache performance and consumer QoE. We introduce the maximal sustainable bitrate (MSB), defined as the highest bitrate deliverable in time to meet a given request without buffering. Based on our observations, we posit that QoE is maximal when requested bitrates match a cache's MSB for that content. We design a cache-level reward function as a benchmark metric that measures the difference between requested bitrates and MSB. We hypothesize that aggregated rewards are an indicator of overall system performance. Performance evaluations show high correlation between the sum of cache rewards and consumer QoE.
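A reward of this shape is maximal when the requested bitrate equals the cache's MSB and decreases as the two diverge. The following is a minimal sketch of that idea; the letter's exact reward function may differ:

```python
def cache_reward(requested_kbps, msb_kbps):
    """Hypothetical cache-level reward: zero (its maximum) when the
    requested bitrate matches the cache's maximal sustainable bitrate
    (MSB), and increasingly negative as they diverge."""
    return -abs(requested_kbps - msb_kbps)

def system_score(requests):
    """Aggregate reward over (requested, msb) pairs, used here as a
    stand-in for the system-level indicator the letter hypothesizes."""
    return sum(cache_reward(r, m) for r, m in requests)
```

A perfectly matched workload scores zero, and every mismatched request pulls the aggregate down in proportion to how far it misses the cache's MSB.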

    Oblivious DNS over HTTPS (ODoH): a practical privacy enhancement to DNS

    Abstract: The Internet's Domain Name System (DNS) responds to client hostname queries with corresponding IP addresses and records. Traditional DNS is unencrypted and leaks user information to onlookers. Recent efforts to secure DNS using DNS over TLS (DoT) and DNS over HTTPS (DoH) have been gaining traction, ostensibly protecting DNS messages from third parties. However, the small number of available public large-scale DoT and DoH resolvers has reinforced DNS privacy concerns, specifically that DNS operators could use query contents and client IP addresses to link activities with identities. Oblivious DNS over HTTPS (ODoH) safeguards against these problems. In this paper we implement and deploy interoperable instantiations of the protocol, construct a corresponding formal model and analysis, and evaluate the protocol's performance with wide-scale measurements. Results suggest that ODoH is a practical privacy-enhancing replacement for DNS.
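ODoH's core idea is a split of knowledge: the proxy sees the client's address but only an encrypted query, while the target sees the query but only the proxy's address. The toy sketch below shows that split; real ODoH (RFC 9230) uses HPKE public-key encryption, so the XOR "cipher" and single shared key here are stand-ins purely for illustration:

```python
TARGET_KEY = 42  # stand-in for the target resolver's key material

def seal(data, key=TARGET_KEY):   # stand-in for HPKE encryption
    return bytes(b ^ key for b in data)

def open_(blob, key=TARGET_KEY):  # stand-in for HPKE decryption
    return bytes(b ^ key for b in blob)

def proxy_forward(blob, target):
    # The proxy learns the client's address and the target's identity,
    # but sees only an opaque encrypted blob in between.
    return target(blob)

def target_resolve(blob):
    # The target sees the plaintext query, but only the proxy's address.
    query = open_(blob)
    answer = b"192.0.2.1" if query == b"example.com" else b"NXDOMAIN"
    return seal(answer)

# Client: encrypt the query, send it via the proxy, decrypt the answer.
reply = proxy_forward(seal(b"example.com"), target_resolve)
assert open_(reply) == b"192.0.2.1"
```

Because neither party holds both the client's identity and the query contents, no single operator can link activities with identities, which is the property the abstract describes.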