656 research outputs found

    On The Impact of Internet Naming Evolution: Deployment, Performance, and Security Implications

    As one of the most critical components of the Internet, the Domain Name System (DNS) provides naming services for Internet users, who rely on DNS to translate domain names into network entities before establishing an Internet connection. In this dissertation, we present our studies on different aspects of the naming infrastructure in today’s Internet, including DNS itself and the network services built on top of it, such as Content Delivery Networks (CDNs).
    We first characterize the evolution and features of DNS resolution in web services under the emergence of third-party hosting services and cloud platforms. At the bottom level of the DNS hierarchy, the authoritative DNS servers (ADNSes) maintain the actual mapping records and answer DNS queries. The increasing use of upstream ADNS services (i.e., third-party ADNS-hosting services) and Infrastructure-as-a-Service (IaaS) clouds facilitates the deployment of web services and has been fostering the evolution of ADNS server deployment. To shed light on this trend, we conduct a large-scale measurement study to investigate the ADNS deployment patterns of modern web services and examine the characteristics of different deployment styles, such as performance, server life-cycle, and availability. Furthermore, we specifically focus on the DNS deployment for subdomains hosted in IaaS clouds.
    Then, we examine a pervasive misuse of DNS names and explore a straightforward solution to mitigate its performance penalty on the DNS cache. The DNS cache plays a critical role in domain name resolution, providing (1) high scalability at Root and Top-level-domain nameservers with reduced workloads and (2) low response latency to clients when the resource records of the queried domains are cached. However, pervasive misuses of domain names, e.g., domain names following a “one-time-use” pattern, have a negative impact on the effectiveness of DNS caching, as the cache becomes filled with entries that are highly unlikely to be retrieved again. By leveraging features that are explicitly available from a domain name itself, we propose simple policies for improving DNS cache performance and validate their efficacy using real traces.
    Finally, we investigate the security implications of a fundamental vulnerability in DNS-based CDNs. The success of CDNs relies on the mapping system that leverages dynamically generated DNS records to direct a client’s request to a proximal server for optimal content delivery. However, the mapping system is vulnerable to malicious hijacks, as it is very difficult to provide pre-computed DNSSEC signatures for dynamically generated records in CDNs. We illustrate that an adversary can deliberately tamper with resolvers to hijack a CDN’s redirection by injecting crafted but legitimate mappings between end-users and edge servers, while remaining undetectable by existing security practices. This can cause serious threats that nullify the benefits offered by CDNs, such as proximal access, load balancing, and DoS protection. We further demonstrate that DNSSEC is ineffective at addressing this problem, even with the newly adopted ECDSA, which is capable of live signing for dynamically generated DNS records. We then discuss countermeasures against this redirection hijacking.
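    As a rough illustration of the name-based caching policies described above, the sketch below implements a cache admission filter that declines to store records for names that look machine-generated and unlikely to be queried again. The feature set and thresholds are illustrative assumptions, not the dissertation’s actual policy.

```python
# Illustrative sketch only: a name-based cache admission filter in the spirit of
# the policies described above. The specific features and thresholds are
# assumptions for illustration, not the dissertation's actual policy.

def looks_one_time_use(qname: str) -> bool:
    """Heuristically flag names that are unlikely to be queried again."""
    labels = qname.rstrip(".").split(".")
    sub = labels[0] if len(labels) > 2 else ""
    digit_ratio = sum(c.isdigit() for c in sub) / len(sub) if sub else 0.0
    return (
        len(labels) > 5          # unusually deep subdomain hierarchy
        or len(sub) > 30         # very long leftmost label
        or digit_ratio > 0.5     # label dominated by digits
    )

class AdmissionFilteredCache:
    """A resolver cache that skips records for likely one-time-use names."""
    def __init__(self):
        self._store = {}

    def insert(self, qname: str, record, ttl: int):
        if looks_one_time_use(qname):
            return  # do not fill the cache with an entry unlikely to be reused
        self._store[qname] = (record, ttl)

    def lookup(self, qname: str):
        return self._store.get(qname)

# Example: a disposable-looking name is not admitted, a popular name is.
cache = AdmissionFilteredCache()
cache.insert("a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6.metric.example.net.", "198.51.100.7", 60)
cache.insert("www.example.com.", "93.184.216.34", 300)
print(cache.lookup("www.example.com."))
```

    Admission filtering of this kind trades an occasional extra upstream lookup for keeping cache capacity available to names that are actually reused.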

    Parallelization and Visual Analysis of Multidimensional Fields: Application to Ozone Production, Destruction, and Transport in Three Dimensions

    This final report has four sections. We first describe the scientific results attained by our research team, followed by a description of the high-performance computing research that enhanced those results and was prompted by the scientific tasks being undertaken. Next, we describe our research in data and program visualization, motivated by the scientific research and also enabling it. Last, we comment on the indirect effects this research effort has had on our work, in terms of follow-up or additional funding, student training, etc.

    Fast-Flux Botnet Detection Based on Traffic Response and Search Engines Credit Worthiness

    Botnets are considered among the primary threats on the Internet, and there have been many research efforts to detect and mitigate them. Today, botnets use a DNS technique called fast-flux to hide malware sites behind a constantly changing network of compromised hosts. This technique is similar to the legitimate Round Robin DNS technique and to Content Delivery Networks (CDNs). Various techniques have been developed, with more or less success, to distinguish normal network traffic from botnet traffic. The aim of this paper is to improve botnet detection using an Intrusion Detection System (IDS) or router. A novel classification method for online botnet detection is presented, based on DNS traffic features that distinguish botnet traffic from CDN-based traffic. Botnet features are classified according to the possibility of their usage and implementation in an embedded system. Traffic response is analysed as a strong candidate for online detection; its disadvantage lies in specific cases where a CDN behaves like a botnet. A new feature based on search engine hits is proposed to reduce false positive detections. The experimental evaluations show that the proposed classification could significantly improve botnet detection. A procedure is suggested to implement such a system as part of an IDS.
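    The sketch below gives a minimal, rule-based illustration of the approach described above: flux-like DNS responses are flagged, and a search-engine-hit count is used to separate CDNs from fast-flux domains. The thresholds and the helper search_engine_hits() are assumptions for illustration; the paper’s actual feature set and classifier may differ.

```python
# Minimal sketch of the kind of rule-based check described above. The feature
# thresholds and the helper search_engine_hits() are illustrative assumptions;
# the paper's actual feature set and classifier may differ.

from dataclasses import dataclass

@dataclass
class DnsObservation:
    domain: str
    a_records: list[str]      # IPs returned for the domain
    ttl: int                  # TTL of the A records
    asns: set[int]            # ASNs that the returned IPs map to

def search_engine_hits(domain: str) -> int:
    """Hypothetical helper: number of search-engine results for the domain."""
    raise NotImplementedError("query a search engine API of your choice")

def classify(obs: DnsObservation, hits: int) -> str:
    fluxy = (
        len(obs.a_records) >= 5     # many IPs per response
        and obs.ttl <= 300          # very short TTL, rapid re-mapping
        and len(obs.asns) >= 3      # IPs scattered across unrelated networks
    )
    if fluxy and hits < 10:
        return "fast-flux botnet"   # low search-engine presence: likely not a CDN
    if fluxy:
        return "CDN-like"           # flux-like behaviour but well-indexed domain
    return "benign"

obs = DnsObservation(
    domain="example-malware.biz",
    a_records=["203.0.113.5", "198.51.100.9", "192.0.2.77", "203.0.113.88", "198.51.100.210"],
    ttl=120,
    asns={64500, 64501, 64502},
)
print(classify(obs, hits=3))  # "fast-flux botnet" under these example thresholds
```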

    Data Movement Challenges and Solutions with Software Defined Networking

    With the recent rise in cloud computing, applications routinely access and interact with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users often report poor experiences with their devices and applications, which can frequently be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address these issues through software defined networking (SDN). SDN is lauded as the paradigm of choice for next generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. There is a significant range of complex and application-specific network services that can potentially benefit from SDN, but the introduction and adoption of such solutions remains slow in production networks. One impeding factor is the lack of a simple yet sufficiently expressive framework applicable to all SDN services across production network domains. Without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation make use of a common agent-based approach. The architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network. There are three key components modern and future networks require to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high throughput data transfer, and (3) efficient and scalable content distribution. Meeting these key components will not only ensure the network can provide robust and reliable end-to-end connectivity, but also that network resources will be used efficiently. First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users frequently access such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is oftentimes visible to the user through diminished quality of service (e.g., rebuffering in video streaming applications). Many solutions exist for intra-domain handovers in WiFi and cellular networks, such as Mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices can select an appropriate network using an on-board virtual SDN implementation that manages available network interfaces.
An SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another. The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed; however, the framework can be deployed on any OpenFlow-enabled network. Evaluation revealed that users can maintain existing application connections without breaking the sockets and requiring the application to recover. Second, high throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, often combined with their locations being far from the applications, the well-known impact of lower Transmission Control Protocol (TCP) throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. This results in high throughput data transfer that is available to only a select subset of network users who have access to such specialized software. An SDN-based solution called Steroid OpenFlow Service (SOS) has been proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the data transfer. The SOS architecture supports seamless high performance data transfer at scale for multiple users and for high bandwidth connections. Emphasis is placed on the use of SOS as part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions. Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is proven to be flexibly deployable on a variety of network architectures, from cloud-based, to production networks, to scaled-up, high performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS “agents” to the SOS deployment, providing data transfer performance improvement to multiple users simultaneously. An individual data transfer enhanced by SOS was shown to achieve nearly forty times the throughput of the same data transfer without SOS assistance. Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state-of-the-art solutions consist of vast content distribution networks (CDNs) where content is oftentimes hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on-demand production and distribution of live, streaming content. IP multicast is a popular solution to scalable video content distribution; however, it is seldom used due to deployment and operational complexity.
Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to allow for the distribution of live video content at scale. GC allows for the efficient management and distribution of live video content at scale without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast. GC has been deployed as an experimental, nation-wide live video distribution service using the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams with improved switching latency over cable, satellite, and other live video providers. The real-world deployments and evaluation of the proposed solutions show how SDN can be used as a novel way to solve current data transfer problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies.
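    A back-of-the-envelope sketch of the delay-bandwidth-product limitation that motivates SOS-style parallel transfers is shown below; the link speed, RTT, and per-connection window are assumed example values, not figures from the dissertation.

```python
# Back-of-the-envelope sketch of the delay-bandwidth-product problem that
# motivates SOS-style parallelism. The link speed, RTT, and window size below
# are assumed example values, not measurements from the dissertation.

link_gbps = 10            # path capacity
rtt_ms = 100              # long-haul round-trip time
window_bytes = 64 * 1024  # a modest per-connection TCP window

bdp_bytes = (link_gbps * 1e9 / 8) * (rtt_ms / 1e3)   # bytes in flight needed to fill the pipe
single_conn_bps = window_bytes * 8 / (rtt_ms / 1e3)  # throughput ceiling: window / RTT

print(f"BDP: {bdp_bytes / 1e6:.0f} MB must be in flight to fill the path")
print(f"Single connection ceiling: {single_conn_bps / 1e6:.1f} Mbit/s")
print(f"Parallel connections needed to fill the path: {bdp_bytes / window_bytes:.0f}")
```

    With these example numbers, a single 64 KB window caps out around 5 Mbit/s on a 10 Gbit/s, 100 ms path, which is why splitting the transfer across many agent-managed connections can raise throughput dramatically.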

    Internet censorship in the European Union

    This is a thesis on Internet censorship in the European Union (EU), specifically regarding the technical implementation of blocking methodologies and filtering infrastructure in various EU countries. The analysis examines the use of this infrastructure for information controls and the blocking of access to websites and other network services available on the Internet. The thesis follows a three-part structure. Firstly, it examines cases of Internet censorship in various EU countries, specifically Greece, Cyprus, and Spain. Subsequently, it presents a new testing methodology for determining censorship of mobile store applications. Additionally, it analyzes all 27 EU countries using historical network measurements collected by Open Observatory of Network Interference (OONI) volunteers from around the world, publicly available blocklists used by EU member states, and reports issued by network regulators in each country.
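    A minimal sketch of a DNS-consistency check, one common building block of such censorship measurements, is shown below (OONI's actual tests are considerably more elaborate). It assumes the dnspython package, and the resolver addresses and domain are illustrative placeholders.

```python
# Minimal sketch of a DNS-tampering check of the kind used in such measurements
# (OONI's actual tests are far more elaborate). Assumes the dnspython package;
# the tested domain and the resolver addresses below are illustrative choices.

import dns.resolver

def resolve_with(nameserver: str, domain: str) -> set[str]:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    try:
        return {rr.to_text() for rr in r.resolve(domain, "A")}
    except Exception:
        return set()  # NXDOMAIN, timeout, or refusal: treated here as "no answer"

def dns_consistency(domain: str, local_resolver: str, control_resolver: str = "8.8.8.8") -> str:
    local = resolve_with(local_resolver, domain)
    control = resolve_with(control_resolver, domain)
    if not local and control:
        return "possible blocking (no answer from local resolver)"
    if local and control and local.isdisjoint(control):
        return "possible DNS manipulation (answers disagree)"
    return "consistent"

print(dns_consistency("example.org", local_resolver="192.0.2.53"))
```

    Answers from a local and a control resolver can legitimately differ for CDN-hosted domains, so real methodologies combine this signal with follow-up tests before concluding that a domain is blocked.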

    The Future of Networking is the Future of Big Data

    Scientific domains such as Climate Science, High Energy Particle Physics (HEP), Genomics, Biology, and many others are increasingly moving towards data-oriented workflows, where each of these communities generates, stores, and uses massive datasets that reach into terabytes and petabytes, and are projected soon to reach exabytes. These communities are also increasingly moving towards a global collaborative model where scientists routinely exchange a significant amount of data. The sheer volume of data and the complexities associated with maintaining, transferring, and using it continue to push the limits of current technologies in multiple dimensions: storage, analysis, networking, and security. This thesis tackles the networking aspect of big-data science. Networking is the glue that binds all the components of modern scientific workflows, and these communities are becoming increasingly dependent on high-speed, highly reliable networks. The network, as the common layer across big-science communities, provides an ideal place for implementing common services. Big-science applications also need to work closely with the network to ensure optimal usage of resources, intelligent routing of requests, and data. Finally, as more communities move towards data-intensive, connected workflows, adopting a service model where the network provides some of the common services reduces not only application complexity but also the necessity of duplicate implementations. Named Data Networking (NDN) is a new network architecture whose service model aligns better with the needs of these data-oriented applications. NDN's name-based paradigm makes it easier to provide intelligent features at the network layer rather than at the application layer. This thesis shows that NDN can push several standard features to the network. This work is the first attempt to apply NDN in the context of large scientific data; in the process, this thesis touches upon scientific data naming, name discovery, real-world deployment of NDN for scientific data, feasibility studies, and the design of in-network protocols for big-data science.
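    The sketch below illustrates hierarchical, NDN-style naming for scientific data, one of the design questions the thesis touches on. The component order is a hypothetical convention for climate model output, not the naming scheme actually proposed in the thesis.

```python
# Illustrative sketch of hierarchical, NDN-style naming for scientific data.
# The component order below is a hypothetical convention for climate model
# output, not the naming scheme actually designed in the thesis.

from dataclasses import dataclass

@dataclass
class ClimateDataName:
    activity: str
    model: str
    experiment: str
    variable: str
    time_range: str

    def to_ndn_name(self) -> str:
        return "/" + "/".join(
            [self.activity, self.model, self.experiment, self.variable, self.time_range]
        )

    @classmethod
    def from_ndn_name(cls, name: str) -> "ClimateDataName":
        parts = name.strip("/").split("/")
        return cls(*parts[:5])

# A consumer expresses an Interest for exactly the data it needs by name, and
# any cache or repository holding content under that name can satisfy it.
n = ClimateDataName("cmip5", "CESM1", "historical", "tas", "1950-2000")
print(n.to_ndn_name())                                          # /cmip5/CESM1/historical/tas/1950-2000
print(ClimateDataName.from_ndn_name("/cmip5/CESM1/historical/tas/1950-2000"))
```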

    Evaluating and Improving Internet Load Balancing with Large-Scale Latency Measurements

    Load balancing is used in the Internet to distribute load across resources at different levels, from global load balancing that distributes client requests across servers at the Internet level, to path-level load balancing that balances traffic across load-balanced paths. These load balancing algorithms generally work under certain assumptions of performance similarity. Specifically, global load balancing divides the Internet address space into client aggregations and assumes that clients in the same aggregation have similar performance to the same server; load-balanced paths are generally selected for load balancing as if they have similar performance. However, because performance similarity is typically inferred from similarity in path properties, e.g., topology and hop count, which do not necessarily lead to similar performance, the performance between clients in the same aggregation and between load-balanced paths can differ significantly. This dissertation evaluates and improves global and path-level load balancing in terms of performance similarity. We achieve this with large-scale latency measurements, which not only allow us to systematically identify and evaluate the performance issues of Internet load balancing at scale, but also enable us to develop data-driven approaches to improve the performance. Specifically, this dissertation consists of three parts. First, we study the issues of existing client aggregations for global load balancing and then design AP-atoms, a data-driven client aggregation learned from passive large-scale latency measurements. Second, we show that the latency imbalance between load-balanced paths, previously deemed insignificant, is now both significant and prevalent. We present Flipr, a network prober that actively collects large-scale latency measurements to characterize the latency imbalance issue. Lastly, we design another network prober, Congi, that can detect congestion at scale, and use Congi to study the congestion imbalance problem at scale. We demonstrate that both latency and congestion imbalance can greatly affect the performance of various applications.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168012/1/yibo_1.pd
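    The sketch below shows one simple way to quantify the latency imbalance described above from (path, RTT) probe samples; the data layout and the 5 ms significance threshold are assumed example values, not the dissertation's methodology.

```python
# Minimal sketch of quantifying latency imbalance across load-balanced paths
# from (path_id, rtt_ms) probe samples, in the spirit of the measurements
# described above. The 5 ms significance threshold is an assumed example value.

from collections import defaultdict
from statistics import median

def latency_imbalance(samples: list[tuple[str, float]], threshold_ms: float = 5.0):
    """Return (imbalance_ms, per-path medians, significant?) for one destination."""
    by_path = defaultdict(list)
    for path_id, rtt in samples:
        by_path[path_id].append(rtt)
    medians = {p: median(rtts) for p, rtts in by_path.items()}
    imbalance = max(medians.values()) - min(medians.values())
    return imbalance, medians, imbalance >= threshold_ms

samples = [("path-a", 31.2), ("path-a", 30.8), ("path-b", 44.5), ("path-b", 43.9)]
imb, medians, significant = latency_imbalance(samples)
print(f"imbalance = {imb:.1f} ms, significant = {significant}")
```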