
    TCP Goes to Hollywood

    Real-time multimedia applications use either TCP or UDP at the transport layer, yet neither of these protocols offers all of the features required. Deploying a new protocol that does offer these features is made difficult by ossification: firewalls and other middleboxes in the network expect TCP or UDP, and block other types of traffic. We present TCP Hollywood, a protocol that is wire-compatible with TCP while offering an unordered, partially reliable, message-oriented transport service that is well suited to multimedia applications. Analytical results show that TCP Hollywood extends the feasibility of using TCP for real-time multimedia applications, by reducing latency and increasing utility. Preliminary evaluations also show that TCP Hollywood is deployable on the public Internet, with safe failure modes. Measurements across all major UK fixed-line and cellular networks validate the possibility of deployment.

    Extending the Functionality of the Realm Gateway

    The promise of 5G and the Internet of Things (IoT) means that the coming years are expected to see substantial growth in the number of connected devices. This increase in the number of connected devices further aggravates the IPv4 address exhaustion problem. Network Address Translation (NAT) is a widely used solution to IPv4 address depletion, but it introduces a reachability problem. Since the Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS) application-layer protocols play a vital role in the communication of mobile and IoT devices, the NAT reachability problem needs to be addressed particularly for these protocols. Realm Gateway (RGW) is a solution proposed to overcome the NAT traversal issue. It acts as a Destination NAT (DNAT) for inbound connections initiated towards the private hosts, while acting as a Source NAT (SNAT) for connections in the outbound direction. The DNAT functionality of RGW is based on a circular pool algorithm that relies on the Domain Name System (DNS) queries sent by the client to maintain the correct connection state. However, an additional reverse proxy is needed with RGW to handle HTTP and HTTPS connections. In this thesis, a custom Application Layer Gateway (ALG) is designed to enable end-to-end communication between public clients and private web servers over HTTP and HTTPS. The ALG replaces the reverse proxy used in the original RGW software. Our solution uses a custom parser-lexer for hostname detection and for routing traffic to the correct back-end web server. Furthermore, we integrated RGW with a policy management system called Security Policy Management (SPM) for storing and retrieving the policies of RGW. We analyzed the impact of the new extensions on the performance of RGW in terms of scalability and computational overhead. Our analysis shows that the ALG's performance is directly dependent on the hardware specification of the system. The ALG has an advantage over the reverse proxy in that it does not require the private keys of the back-end servers to forward encrypted HTTPS traffic. Therefore, using a system with powerful processing capabilities improves the performance of RGW, as the ALG outperforms the NGINX reverse proxy used in the original RGW solution.
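    The abstract notes that the ALG can forward encrypted HTTPS traffic without holding the back-end servers' private keys. This is feasible because the TLS Server Name Indication (SNI) in the ClientHello is sent in the clear, so a gateway can read the hostname and pick a back-end before any decryption takes place; for plain HTTP the same decision can be made from the Host header. The sketch below illustrates that idea. It is a minimal example of my own, not the thesis code; the BACKENDS mapping and function names are hypothetical, and bounds checking is kept to a minimum.

```python
import struct
from typing import Optional

# Hypothetical hostname-to-back-end mapping; not part of the RGW/ALG code.
BACKENDS = {
    "www.example.com": ("192.0.2.10", 443),
    "api.example.com": ("192.0.2.11", 443),
}

def extract_sni(record: bytes) -> Optional[str]:
    """Return the SNI hostname from a raw TLS ClientHello record, or None.

    The ClientHello (including SNI) is sent unencrypted, so a gateway can
    route on the hostname without holding any server private keys.
    """
    # TLS record header: type (1) + version (2) + length (2); 0x16 = handshake.
    if len(record) < 5 or record[0] != 0x16:
        return None
    hs = record[5:5 + struct.unpack("!H", record[3:5])[0]]
    # Handshake header: type (1) + length (3); 0x01 = ClientHello.
    if len(hs) < 38 or hs[0] != 0x01:
        return None
    pos = 4 + 2 + 32                       # handshake header, client version, random
    pos += 1 + hs[pos]                     # session ID
    pos += 2 + struct.unpack("!H", hs[pos:pos + 2])[0]   # cipher suites
    pos += 1 + hs[pos]                     # compression methods
    if pos + 2 > len(hs):
        return None                        # no extensions present
    ext_end = pos + 2 + struct.unpack("!H", hs[pos:pos + 2])[0]
    pos += 2
    while pos + 4 <= ext_end:
        ext_type, ext_len = struct.unpack("!HH", hs[pos:pos + 4])
        pos += 4
        if ext_type == 0x0000:             # server_name extension
            # list length (2) + name type (1) + name length (2) + host name
            name_len = struct.unpack("!H", hs[pos + 3:pos + 5])[0]
            return hs[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
        pos += ext_len
    return None

def choose_backend(client_hello: bytes):
    """Pick the back-end web server for an incoming HTTPS connection, if any."""
    return BACKENDS.get(extract_sni(client_hello))
```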

    Best effort measurement based congestion control


    Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests

    Despite the pervasiveness of Internet censorship, we have scant data on its extent, mechanisms, and evolution. Measuring censorship is challenging: it requires continual measurement of reachability to many target sites from diverse vantage points. Amassing suitable vantage points for longitudinal measurement is difficult; existing systems have achieved only small, short-lived deployments. We observe, however, that most Internet users access content via Web browsers, and the very nature of Web site design allows browsers to make requests to domains with different origins than the main Web page. We present Encore, a system that harnesses cross-origin requests to measure Web filtering from a diverse set of vantage points without requiring users to install custom software, enabling longitudinal measurements from many vantage points. We explain how Encore induces Web clients to perform cross-origin requests that measure Web filtering, design a distributed platform for scheduling and collecting these measurements, show the feasibility of a global-scale deployment with a pilot study and an analysis of potentially censored Web content, identify several cases of filtering in six months of measurements, and discuss ethical concerns that would arise with widespread deployment.
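    Encore's measurement primitive is an ordinary cross-origin fetch: a participating page asks the browser to load a resource from a potentially filtered domain, and whether that load succeeds or fails is reported back. The sketch below, a small Python helper that renders such a snippet, is purely illustrative; the probed resource, the reporting mechanism, and the collector endpoint are my assumptions rather than Encore's actual implementation, and a failed load by itself does not distinguish censorship from ordinary outages.

```python
# Illustrative only: renders the kind of snippet a participating page could embed.
MEASUREMENT_SNIPPET = """
<script>
  function report(target, outcome) {{
    // Send the observation back to a (hypothetical) collector endpoint.
    new Image().src = "https://{collector}/submit?t=" + encodeURIComponent(target) +
                      "&o=" + outcome;
  }}
</script>
<img style="display:none" src="https://{target}/favicon.ico"
     onload="report('{target}', 'loaded')"
     onerror="report('{target}', 'failed')">
"""

def render_task(target: str, collector: str = "collector.invalid") -> str:
    """Render a cross-origin measurement task for one target domain."""
    return MEASUREMENT_SNIPPET.format(target=target, collector=collector)

if __name__ == "__main__":
    print(render_task("blocked-site.example"))
```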

    Fast-Flux Botnet Detection Based on Traffic Response and Search Engines Credit Worthiness

    Botnets are considered among the primary threats on the Internet, and there have been many research efforts to detect and mitigate them. Today, botnets use a DNS technique called fast-flux to hide malware sites behind a constantly changing network of compromised hosts. This technique is similar to the legitimate Round-Robin DNS technique and to Content Delivery Networks (CDNs). Different techniques have been developed, with varying degrees of success, to distinguish normal network traffic from botnet traffic. The aim of this paper is to improve botnet detection using an Intrusion Detection System (IDS) or router. A novel classification method for online botnet detection is presented, based on DNS traffic features that distinguish botnet traffic from CDN-based traffic. Botnet features are classified according to the possibility of their usage and implementation in an embedded system. Traffic response is analysed as a strong candidate for online detection; its disadvantage lies in specific cases where a CDN behaves like a botnet. A new feature based on search engine hits is proposed to reduce false positive detections. The experimental evaluations show that the proposed classification could significantly improve botnet detection. A procedure is suggested to implement such a system as part of an IDS.
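    To make the feature-based classification concrete, the sketch below scores a domain from repeated DNS lookups together with a search-engine hit count. The specific features, weights, and thresholds are my own illustrative choices and are not the feature set or cut-offs used in the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DnsObservation:
    """One A-record answer observed for the domain: IP address and TTL (seconds)."""
    ip: str
    ttl: int

def fast_flux_score(observations: List[DnsObservation], search_hits: int) -> float:
    """Heuristic fast-flux score in [0, 1]; higher means more botnet-like.

    Features and thresholds are illustrative, not the paper's values: many
    distinct IPs, IPs scattered over unrelated prefixes (CDNs tend to cluster),
    very low TTLs, and few search engine hits for the domain.
    """
    ips = {o.ip for o in observations}
    prefixes = {".".join(o.ip.split(".")[:2]) for o in observations}   # /16 groups
    mean_ttl = sum(o.ttl for o in observations) / max(len(observations), 1)

    score = 0.0
    score += 0.3 if len(ips) > 10 else 0.0        # large, changing IP pool
    score += 0.3 if len(prefixes) > 5 else 0.0    # spread across unrelated networks
    score += 0.2 if mean_ttl < 300 else 0.0       # aggressive TTLs force re-resolution
    score += 0.2 if search_hits < 10 else 0.0     # barely known to search engines
    return round(score, 2)

# Example: many unrelated /16s, tiny TTLs, no search engine presence.
obs = [DnsObservation(f"198.{i}.0.{i}", ttl=60) for i in range(1, 15)]
print(fast_flux_score(obs, search_hits=0))        # -> 1.0, all four indicators fire
```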

    Deployable transport services for low-latency multimedia applications

    Low-latency multimedia applications generate a significant and growing proportion of all Internet traffic. These applications are characterised by tight bounds on end-to-end latency that typically range from tens to a few hundred milliseconds. Operating within these bounds is challenging, with the best-effort delivery service of the Internet giving rise to unreliable delivery with unpredictable latency. The way in which the upper layers of the protocol stack manage this unreliability and unpredictability can greatly impact the quality-of-experience that applications can provide. In this thesis, I focus on the services and abstractions that the transport layer provides to applications. The delivery model provided by the transport layer can have a significant impact on the quality-of-experience that can be provided by the application. Reliability and order, for example, introduce delay while packet loss is detected and the lost data retransmitted. This enforces a particular trade-off between latency, loss, and application quality-of-experience, with reliability taking priority. This trade-off is not suitable for low-latency multimedia applications, which prefer predictable and bounded latency to strict reliability and order. No widely-deployed transport protocol provides a delivery model that fully supports low-latency applications: UDP provides no reliability guarantees, while TCP enforces reliability. Implementing a protocol that does support these applications is difficult: ossification restricts protocols to appearing as UDP or TCP on the wire. To meet both challenges -- of better supporting low-latency multimedia applications, and of deploying a new protocol within an ossified transport layer -- I propose TCP Hollywood, a protocol that maintains wire compatibility with TCP, while exposing the trade-off between reliability and delay such that applications can improve their quality-of-experience. I show that TCP Hollywood is deployable on the public Internet, and that it achieves its goal of improving support for low-latency multimedia applications. I conclude by evaluating the API changes that are required to support TCP Hollywood, distilling the protocol into the set of transport services that it provides.
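    The trade-off described here is exposed to applications by making the transport message-oriented and partially reliable: each message can carry a lifetime, and data whose playout deadline has passed is skipped rather than retransmitted. The sketch below shows what such a sender-side abstraction might look like; the class and method names are hypothetical and do not reflect the actual TCP Hollywood API.

```python
import time
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    payload: bytes
    deadline: float          # absolute time after which the data is no longer useful

class DeadlineAwareSender:
    """Illustrative message queue: expired messages are skipped, not retransmitted.

    This mirrors the trade-off described above (bounded latency preferred over
    strict reliability); it is a sketch, not the TCP Hollywood implementation.
    """
    def __init__(self) -> None:
        self.queue = deque()

    def send_message(self, payload: bytes, lifetime: float) -> None:
        """Queue a message that is only worth delivering for `lifetime` seconds."""
        self.queue.append(Message(payload, time.monotonic() + lifetime))

    def next_to_transmit(self) -> Optional[bytes]:
        """Return the next still-useful message, dropping any that have expired."""
        now = time.monotonic()
        while self.queue:
            msg = self.queue.popleft()
            if msg.deadline > now:
                return msg.payload        # still within its playout deadline
            # Expired: a fully reliable byte stream (TCP) would still deliver this,
            # delaying newer data behind it; a partially reliable service drops it.
        return None

# Example: a video frame that must be delivered within 150 ms to be played out.
sender = DeadlineAwareSender()
sender.send_message(b"\x00" * 1200, lifetime=0.150)
print(sender.next_to_transmit() is not None)      # True if taken before the deadline
```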