
    Experience a Faster and More Private Internet in Library and Information Centres with 1.1.1.1 DNS Resolver

    In India, a densely populated country with a large internet user base, library and information centres (LICs) frequently suffer from low internet speeds. Indian LICs often employ open DNS resolvers in an attempt to achieve higher internet speeds. In the new normal, cyberattacks on Indian educational institutions have skyrocketed, creating the need for a DNS resolver that is both more secure and faster. To meet this demand, the paper assesses Cloudflare's 1.1.1.1 DNS resolver and demonstrates its suitability for Indian LICs. To examine how well Cloudflare's 1.1.1.1 DNS resolver provides a secure and fast private network, the researchers used six Windows PCs connected to ten different networks served by the same ISP and measured internet speed with Measurement Lab and the nPerf online free internet speed checker in the Chrome browser (v. 96.0). The internet speed was collected manually before and after the configuration and then analysed to evaluate the percentage increase or decrease. The study findings reveal that after 1.1.1.1 was configured, the internet speed of the Chrome browser on the targeted PCs increased by 36.33% for download and 73.88% for upload. From these results, we can conclude that Indian LICs can rely on the 1.1.1.1 DNS resolver to experience fast and secure private network connections. Several studies compare two or more DNS resolvers in terms of speed and security, but none of them specifically addresses their usability and benefits in LICs. Although the use of DNS resolvers at the institutional level is not a new concept, there is still a lack of understanding among library professionals in India about the deployment criteria and benefits. As a result of this research, library professionals may be inspired to use DNS resolvers comprehensively to provide lightning-fast services to their users.
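
    The study's core comparison is a before/after measurement of the same connections with and without 1.1.1.1 configured, reduced to a percentage change. As an illustration only (the paper used Measurement Lab and nPerf in Chrome, not a script, and DNS lookup latency is only one component of the browser speed the study measured), the sketch below times DNS lookups through the system resolver and through 1.1.1.1 with the dnspython library and reports the percentage change; the domain list and trial count are hypothetical.

```python
# Hedged sketch: compare median DNS lookup latency via the system resolver
# and via Cloudflare's 1.1.1.1, then report the percentage change.
# Requires dnspython (pip install dnspython); domains and trials are illustrative.
import statistics
import time

import dns.resolver

DOMAINS = ["example.com", "wikipedia.org", "archive.org"]  # hypothetical sample

def median_lookup_ms(nameserver=None, trials=5):
    resolver = dns.resolver.Resolver()       # uses the system configuration by default
    if nameserver:
        resolver.nameservers = [nameserver]  # override with the resolver under test
    samples = []
    for _ in range(trials):
        for domain in DOMAINS:
            start = time.perf_counter()
            resolver.resolve(domain, "A")
            samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

before = median_lookup_ms()                  # system default resolver
after = median_lookup_ms("1.1.1.1")          # Cloudflare resolver
change = (before - after) / before * 100.0
print(f"default: {before:.1f} ms, 1.1.1.1: {after:.1f} ms, change: {change:+.1f}%")
```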

    A Macroscopic Study of Network Security Threats at the Organizational Level.

    Defenders of today's networks are confronted with a large number of malicious activities such as spam, malware, and denial-of-service attacks. Although many studies have been performed on how to mitigate security threats, the interaction between attackers and defenders is like a game of Whac-a-Mole, in which the security community is chasing after attackers rather than helping defenders to build systematic defensive solutions. As a complement to these studies that focus on attackers or end hosts, this thesis studies security threats from the perspective of the organization, the central authority that manages and defends a group of end hosts. This perspective provides a balanced position to understand security problems and to deploy and evaluate defensive solutions. This thesis explores how a macroscopic view of network security from an organization's perspective can be formed to help measure, understand, and mitigate security threats. To realize this goal, we bring together a broad collection of reputation blacklists. We first measure the properties of the malicious sources identified by these blacklists and their impact on an organization. We then aggregate the malicious sources to Internet organizations and characterize the maliciousness of organizations and their evolution over a period of two and a half years. Next, we aim to understand the cause of different maliciousness levels in different organizations. By examining the relationship between eight security mismanagement symptoms and the maliciousness of organizations, we find a strong positive correlation between mismanagement and maliciousness. Lastly, motivated by the observation that some organizations have a significant fraction of their IP addresses involved in malicious activities, we evaluate the tradeoff of one type of mitigation solution at the organization level: network takedowns.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116714/1/jingzj_1.pd
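
    The mismanagement analysis described above reduces to scoring each organization on two axes, how many of the eight mismanagement symptoms it exhibits and what fraction of its address space appears on blacklists, and testing for correlation between the two. A minimal sketch of that kind of computation follows; the record layout, values, and use of a Spearman rank correlation are assumptions for illustration, not the thesis's exact method.

```python
# Hedged sketch: correlate per-organization mismanagement symptom counts
# with the fraction of the organization's IPs found on reputation blacklists.
# Input records are invented placeholders; the thesis's actual data differs.
from scipy.stats import spearmanr

orgs = [
    # (organization, symptom_count out of 8, blacklisted_ips, total_ips)
    ("org-a", 1, 12, 65536),
    ("org-b", 5, 900, 32768),
    ("org-c", 3, 150, 16384),
    ("org-d", 7, 4000, 65536),
]

symptoms = [s for _, s, _, _ in orgs]
maliciousness = [bad / total for _, _, bad, total in orgs]

rho, p_value = spearmanr(symptoms, maliciousness)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```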

    Optimizing the delivery of multimedia over mobile networks

    International Mention in the doctoral degree. The consumption of multimedia content is moving from a residential environment to mobile phones. Mobile data traffic, driven mostly by video demand, is increasing rapidly, and wireless spectrum is becoming a more and more scarce resource. This makes it highly important to operate mobile networks efficiently. To tackle this, recent developments in anticipatory networking schemes make it possible to predict the future capacity of mobile devices and optimize the allocation of the limited wireless resources. Further, optimizing Quality of Experience (smooth, quick, and high-quality playback) is more difficult in the mobile setting, due to the highly dynamic nature of wireless links. A key requirement for both anticipatory networking schemes and QoE optimization is estimating the available bandwidth of mobile devices. Ideally, this should be done quickly and with low overhead. In summary, we propose a series of improvements to the delivery of multimedia over mobile networks. We do so by identifying inefficiencies in the interconnection of mobile operators with the servers hosting content, proposing an algorithm that opportunistically creates frequent capacity estimations suitable for use in resource optimization solutions, and finally proposing another algorithm able to estimate the bandwidth class of a device from minimal traffic, in order to identify the ideal streaming quality its connection may support before commencing playback. The main body of this thesis proposes two lightweight algorithms designed to provide bandwidth estimations under the tight constraints of the mobile environment, most notably the usually very limited traffic quota. To do so, we begin by providing a thorough overview of the communication path between a content server and a mobile device. We continue by analysing how accurate smartphone measurements can be, and identify in depth the various artifacts that add noise to the fidelity of on-device measurements. We then propose a novel lightweight measurement technique that can be used as a basis for advanced resource optimization algorithms running on mobile phones. Our main idea leverages an original packet-dispersion-based technique to estimate per-user capacity. This allows passive measurements by simply sampling the existing mobile traffic. Our technique is able to efficiently filter outliers introduced by mobile network schedulers and phone hardware. In order to assess and verify our measurement technique, we apply it to a diverse dataset generated by both extensive simulations and a week-long measurement campaign spanning two cities in two countries, different radio technologies, and all times of the day. The results demonstrate that our technique is effective even if it is provided with only a small fraction of the exchanged packets of a flow. The only requirement for the input data is that it should consist of a few consecutive packets gathered periodically. This makes the measurement algorithm a good candidate for inclusion in OS libraries to allow for advanced resource optimization and application-level traffic scheduling, based on current and predicted future user capacity. We proceed with another algorithm that takes advantage of the traffic generated by short-lived TCP connections, which form the majority of mobile connections, to passively estimate the currently available bandwidth class.
Our algorithm is able to extract useful information even if the TCP connection never exits the slow-start phase. To the best of our knowledge, no other solution can operate with such constrained input. Our estimation method achieves good precision despite artifacts introduced by the slow-start behavior of TCP, the mobile scheduler, and phone hardware. We evaluate our solution against traces collected in four European countries. Furthermore, the small footprint of our algorithm allows its deployment on resource-limited devices. Finally, in an attempt to face the rapid traffic increase, mobile application developers outsource their cloud infrastructure deployment and content delivery to cloud computing services and content delivery networks. Studying how these services, which we collectively denote Cloud Service Providers (CSPs), perform over Mobile Network Operators (MNOs) is crucial to understanding some of the performance limitations of today's mobile apps. To that end, we perform the first empirical study of the complex dynamics between applications, MNOs and CSPs. First, we use real mobile app traffic traces that we gathered through a global crowdsourcing campaign to identify the most prevalent CSPs supporting today's mobile Internet. Then, we investigate how well these services interconnect with major European MNOs at a topological level, and measure their performance over European MNO networks through a month-long measurement campaign on the MONROE mobile broadband testbed. We discover that the top 6 most prevalent CSPs are used by 85% of apps, and observe significant differences in their performance across different MNOs due to the nature of their services, peering relationships with MNOs, and deployment strategies. We also find that CSP performance in MNOs is affected by inflated path length, roaming, and the presence of middleboxes, but is not influenced by the choice of DNS resolver. We also observe that the choice of the operator's Point of Presence (PoP) may inflate the delay towards popular websites by at least 20%.
This work has been supported by IMDEA Networks Institute. Programa Oficial de Doctorado en Ingeniería Telemática. Committee: Chair: Ahmed Elmokashfi; Secretary: Rubén Cuevas Rumín; Member: Paolo Din
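
    The capacity-estimation idea described above rests on packet dispersion: when several packets of a flow arrive back to back, the bytes they carry divided by the span of their arrival times approximates the bottleneck rate, and a robust statistic over many such bursts damps scheduler and hardware artifacts. The sketch below illustrates only that general principle with hypothetical packet samples and a median filter; it is not the thesis's actual algorithm.

```python
# Hedged sketch of packet-dispersion capacity estimation: for each burst of
# consecutive packets, rate = bytes of the packets after the first divided by
# the arrival-time span; a median across bursts rejects outlier bursts.
# Packet samples (arrival time in seconds, size in bytes) are hypothetical.
import statistics

def burst_rate_mbps(burst):
    """Estimate throughput of one burst of (timestamp_s, size_bytes) samples."""
    if len(burst) < 2:
        return None
    span = burst[-1][0] - burst[0][0]
    if span <= 0:
        return None
    total_bytes = sum(size for _, size in burst[1:])  # dispersion excludes the first packet
    return total_bytes * 8 / span / 1e6

def estimate_capacity_mbps(bursts):
    rates = []
    for burst in bursts:
        rate = burst_rate_mbps(burst)
        if rate is not None:
            rates.append(rate)
    return statistics.median(rates) if rates else None

# Example: three sampled bursts from existing traffic (values are illustrative).
bursts = [
    [(0.000, 1500), (0.0004, 1500), (0.0008, 1500), (0.0012, 1500)],
    [(1.200, 1500), (1.2005, 1500), (1.2010, 1500)],
    [(2.500, 1500), (2.5003, 1500), (2.5006, 1500)],
]
print(f"estimated per-user capacity: {estimate_capacity_mbps(bursts):.1f} Mbit/s")
```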

    Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent

    International Mention in the doctoral degree. While connecting end-users worldwide, the Internet increasingly promotes local development by making challenges much simpler to overcome, regardless of the field in which it is used: governance, economy, education, health, etc. However, the service region of the African Network Information Centre (AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet penetration: 28.6% as of March 2017, compared to a worldwide average of 49.7%, according to International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience a poor Quality of Service (QoS) provided at high costs. It is thus of interest to enlarge the Internet footprint in such under-connected regions and determine where the situation can be improved. Along these lines, this doctoral thesis thoroughly inspects, using both active and passive data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of a methodology that could be adopted for achieving similar purposes in other developing regions. The thesis first presents our efforts to help build measurement infrastructures for alleviating the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve what we cannot measure. It then unveils our timely and longitudinal inspection of African interdomain routing using the enhanced RIPE Atlas measurement infrastructure, filling the gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service Providers (ISPs). It notably proposes reproducible data analysis techniques, suitable for the treatment of any set of similar measurements, to infer the behavior of ISPs in the region. The results show a large variety of transit habits, which depend on socio-economic factors such as the language, the currency area, or the geographic location of the country in which the ISP operates. They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental paths, but also shed light on the efforts of stakeholders towards traffic localization. Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate, as the prevalence of this endemic phenomenon in local Internet markets may hinder their growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet eXchange Points (IXPs) and used to collect Time-Sequence Latency Probes (TSLP) measurements during a whole year. The analysis of these datasets reveals no evidence of widespread congestion: only 2.2% of the monitored links showed noticeable indications of congestion, a finding that supports peering. The causes of these events were identified through IXP operator interviews, showing how essential collaboration with stakeholders is to understanding the causes of performance degradations. As part of the Internet Society (ISOC) strategy to allow the Internet community to profile the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was then developed and afterward deployed and tested in the AfriNIC region. This open-source web platform, titled the “African” Route-collectors Data Analyzer (ARDA), provides metrics that picture in real time the status of interconnection at different levels, using public routing information available at local route collectors with a peering viewpoint of the Internet.
The results highlight that only a small proportion of the Autonomous System Numbers (ASNs) assigned by AfriNIC (17%) are peering in the region, a fraction that remained static from April to September 2017 despite the significant growth of IXPs in some countries. They show how ARDA can help detect the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection opportunities in Africa, the targeted region. Since broadening the underlying network is not useful without appropriately provisioned services to exploit it, the thesis then delves into the availability and utilization of the web infrastructure serving the continent. Towards this end, a comprehensive measurement methodology is applied to collect data from various sources. A focus on Google reveals that its content infrastructure in Africa is indeed expanding; nevertheless, although Google is the most popular content source in many African countries, much of its web content is still served from the United States (US) and Europe. Further, the same analysis is repeated across top global and regional websites, showing that even top African websites prefer to host their content abroad. Following that, the primary bottlenecks faced by Content Providers (CPs) in the region, such as the lack of peering between the networks hosting our probes and poorly configured DNS resolvers, are explored to outline proposals for further ISP and CP deployments. Considering the above, one option to enrich connectivity and incentivize CPs to establish a presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection scheme, which parameterizes socio-economic, geographical, and political factors using public datasets. It demonstrates that this constrained solution doubles the percentage of continental intra-African paths, reduces their length, and drastically decreases the median of their Round Trip Times (RTTs), as well as RTTs to ASes hosting the top 10 global and top 10 regional Alexa websites. We hope that quantitatively demonstrating the benefits of this framework will incentivize ISPs to intensify peering and CPs to increase their presence, enabling fast, affordable, and available access at the Internet frontier.
Programa Oficial de Doctorado en Ingeniería Telemática. Committee: Chair: David Fernández Cambronero; Secretary: Alberto García Martínez; Member: Cristel Pelsse
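
    One of ARDA's headline metrics, the share of AfriNIC-assigned ASNs that actually peer at local route collectors (17% in the study), comes down to a set computation once the delegation data and the collectors' peering ASNs are parsed. The sketch below shows only that final step with made-up ASN sets; the real platform derives its inputs from AfriNIC delegation files and route-collector dumps, which is not reproduced here.

```python
# Hedged sketch: fraction of RIR-assigned ASNs observed peering at local
# route collectors. The ASN sets are invented placeholders for illustration.
assigned_asns = {37100, 37282, 36914, 37020, 37455, 328000, 328111, 328222}
peering_asns = {37100, 37282, 328111}          # seen in route-collector peering tables
foreign_peers = {15169, 16509}                 # peers not assigned by this RIR (ignored)

local_peering = assigned_asns & peering_asns
fraction = len(local_peering) / len(assigned_asns)
print(f"{len(local_peering)} of {len(assigned_asns)} assigned ASNs peer locally "
      f"({fraction:.0%})")
```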

    Web page performance analysis

    Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish within a certain amount of time, using a certain amount of resources, characterises the systems' performance, which is a major concern when the systems are planned, designed, implemented, and deployed, and as they evolve. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis that deals with its speed, capacity, resource utilisation, and availability. Performance analyses of the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time. Response time is studied as an attribute of Web pages, instead of being considered purely a result of network and server conditions. A framework consisting of measurement, modelling, and monitoring (the 3Ms) of Web pages, revolving around response time, is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and is used to support the modelling module, which in turn provides references for the monitoring module. The monitoring module estimates response time. The three modules are used in the software development lifecycle to ensure that developed Web pages deliver at worst a satisfactory response time (within a maximum acceptable time), or preferably a much better one, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it relates to specific characteristics of Web pages and explains how the response time of individual Web pages can be examined and improved.
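
    The monitoring idea described above amounts to checking that a page's response time stays within a maximum acceptable threshold. A minimal sketch of such a check follows, timing base-document fetches with the requests library; a full browser render also involves sub-resources, which this sketch does not capture, and the URL, threshold, and trial count are illustrative, not taken from the thesis.

```python
# Hedged sketch: measure a Web page's response time over a few fetches and
# flag it if the worst case exceeds a maximum acceptable threshold.
# URL, threshold, and trial count are illustrative assumptions.
import time

import requests

URL = "https://example.com/"       # hypothetical page under test
MAX_ACCEPTABLE_S = 2.0             # hypothetical acceptability threshold
TRIALS = 5

def fetch_time_s(url):
    start = time.perf_counter()
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    _ = response.content           # ensure the body is fully downloaded
    return time.perf_counter() - start

samples = [fetch_time_s(URL) for _ in range(TRIALS)]
worst = max(samples)
status = "OK" if worst <= MAX_ACCEPTABLE_S else "exceeds acceptable response time"
print(f"worst of {TRIALS} fetches: {worst:.2f} s -> {status}")
```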

    Using Context to Improve Network-based Exploit Kit Detection

    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam: the attacker lures an unsuspecting client into a trap to steal private information or resources, generating tens of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without a user's knowledge. Sadly, it takes only a single successful infection to ruin a user's financial life or lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis or honeyclient analysis of these exploits plays a key role in identifying new attacks for signature generation, but provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late or not at all, resulting in high false positive and false negative (error) rates. In order to deal with these drawbacks, three new detection approaches are presented. To deal with the issue of a high number of errors, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers. This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. As a result, a new framework that applies dynamic honeyclient analysis directly to network traffic at scale is proposed. The framework captures and stores a configurable window of reassembled HTTP objects network-wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, and then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolve over a one-year period. The empirical evaluation suggests that the approach offers significant operational value, and that a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected.
The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches utilize the structure of a domain name and require hundreds of DNS messages to detect such malware. As a result, these detection mechanisms cannot detect malware in a timely manner and are susceptible to high error rates. The final technique, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as few as four DNS messages, and with an orders-of-magnitude reduction in error rates. The results of this work can make a significant operational impact on network security analysis, and open several exciting future directions for network security research.
Doctor of Philosophy
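
    Sequential hypothesis testing, the basis of the last technique, accumulates a log-likelihood ratio over successive observations and stops as soon as it crosses an acceptance or rejection threshold, which is why a decision can come after only a handful of DNS messages. The sketch below is a generic Wald SPRT over a binary per-message feature; the feature, probabilities, and error targets are assumptions for illustration, not the dissertation's model.

```python
# Hedged sketch of Wald's sequential probability ratio test (SPRT) over a
# stream of binary DNS-message features (1 = "suspicious-looking message").
# Probabilities and error targets are illustrative assumptions.
import math

P_SUSPICIOUS_IF_MALWARE = 0.8    # assumed P(feature=1 | infected host)
P_SUSPICIOUS_IF_BENIGN = 0.1     # assumed P(feature=1 | benign host)
ALPHA, BETA = 0.01, 0.01         # target false positive / false negative rates

UPPER = math.log((1 - BETA) / ALPHA)      # declare "infected" above this
LOWER = math.log(BETA / (1 - ALPHA))      # declare "benign" below this

def sprt(observations):
    """Return ('infected'|'benign'|'undecided', messages consumed)."""
    llr = 0.0
    for i, suspicious in enumerate(observations, start=1):
        if suspicious:
            llr += math.log(P_SUSPICIOUS_IF_MALWARE / P_SUSPICIOUS_IF_BENIGN)
        else:
            llr += math.log((1 - P_SUSPICIOUS_IF_MALWARE) / (1 - P_SUSPICIOUS_IF_BENIGN))
        if llr >= UPPER:
            return "infected", i
        if llr <= LOWER:
            return "benign", i
    return "undecided", len(observations)

print(sprt([1, 1, 1]))        # ('infected', 3) with these parameters
print(sprt([0, 0, 0, 0]))     # ('benign', 4)
```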

    How's My Network - Incentives and Impediments of Home Network Measurements

    Gathering meaningful information from Home Networking (HN) environments has presented researchers with measurement strategy challenges. A measurement platform is typically designed around the process of gathering data from a range of devices, or usage statistics, in a network that sits behind the HN firewall. HN studies require a fine balance between incentives and impediments to promote usage and minimize the effort of user participation, with the focus on gathering robust datasets and results. In this dissertation we explore how to gather data from the HN Ecosystem (e.g. devices, apps, permissions, configurations) and feedback from HN users across a multitude of HN infrastructures, leveraging low-impediment and low/high-incentive methods to entice user participation. We look to understand the trade-offs of using a variety of approach types (e.g. Java applet, mobile app, survey) for data collection, user preferences, and how HN users react and make changes to the HN environment when presented with privacy/security concerns, norms of comparison (e.g. comparisons to the local environment and to other HNs) and other HN results. We view the HN Ecosystem as more than just “the network”, as it also includes the devices and apps within the HN. We have broken this dissertation down into the following three pillars of work to understand the incentives and impediments of user participation and data collection. These pillars include: 1) preliminary work, as part of the How's My Network (HMN) measurement platform, on a deployed signed Java applet that provided a user-centered network measurement platform to minimize user impediments for data collection; 2) an HN user survey on preference, comfort, and usability of HNs to understand incentives; and 3) the creation and deployment of a multi-faceted How's My Network mobile app tool to gather and compare attributes and feedback with high incentives for user participation; as part of this flow we also include related approaches and background work. The HMN Java applet work demonstrated the viability of using a Web browser to obtain network performance data from HNs via a user-centric network measurement platform that minimizes impediments to user participation. The HMN survey work found that users prefer to use a mobile app for HN data collection and can be incentivized to participate in an HN study by being shown attributes and characteristics of the HN Ecosystem. The HMN mobile app was found to provide high incentives, with minimal impediments, for participation, with a focus on user privacy and security concerns. The HMN mobile app work found that 84% of users reported a change in their perception of privacy and security, 32% of users uninstalled apps, and 24% revoked permissions in their HN. As a by-product of this work we found it was possible to gather sensitive information such as previously attached networks, installed apps, and devices on the network. The exposure of this information to any installed app with minimal or no granted permissions is a potential privacy concern.

    DDoS Capability and Readiness - Evidence from Australian Organisations

    A common perception of cyber defence is that it should protect systems and data from malicious attacks, ideally keeping attackers outside of secure perimeters and preventing entry. Much of the effort in traditional cyber security defence is focused on removing gaps in security design and preventing those with legitimate permissions from becoming a gateway or resource for those seeking illegitimate access. By contrast, Distributed Denial of Service (DDoS) attacks do not use application backdoors or software vulnerabilities to create their impact. They instead utilise legitimate entry points and knowledge of system processes for illegitimate purposes. DDoS seeks to overwhelm system and infrastructure resources so that legitimate requests are prevented from reaching their intended destination. For this thesis, a literature review was performed using sources from two perspectives: reviews of both industry and academic literature were combined to build a balanced view of knowledge in this area. Industry and academic literature revealed that DDoS is outpacing internet growth, with vandalism, criminal and ideological motivations rising to prominence. From a defence perspective, the human factor remains a weak link in cyber security due to proneness to mistakes and oversights, and the variance in approach and methods expressed by differing cultures. How cyber security is perceived, approached, and applied can have a critical effect on the overall outcome achieved, even when similar technologies are implemented. In addition, variance in the technical capabilities of those responsible for implementation may create further gaps and vulnerabilities. While discussing technical challenges and theoretical concepts, existing literature failed to cover the experiences of the victim organisations, or the thoughts and feelings of their personnel. This thesis addresses these identified gaps through exploratory research, which used a mix of descriptive and qualitative analysis to develop results and conclusions. The websites of 60 Australian organisations were analysed to uncover the level and quality of cyber security information they were willing to share and the methods and processes they used to engage with their audience. In addition, semi-structured interviews were conducted with 30 employees from around half of the organisations whose websites were analysed. These were analysed using NVivo12 qualitative analysis software. The difficulty experienced in attracting willing participants reflected the level of comfort that organisations showed with sharing cyber security information and experiences. However, themes found within the results show that, while DDoS is considered a valid threat, without encouragement to collaborate and standardise minimum security levels, firms may be missing out on valuable strategies to improve their cyber security postures. Further, this reluctance to share leads organisations to rely on their own internal skill and expertise, thus failing to realise the benefits of established frameworks and increased diversity in the workforce. Along with the size of the participant pool, other limitations included the diversity of participants and the impact of COVID-19, which may have influenced participants' thoughts and reflections. These limitations, however, present opportunities for future studies using greater participant numbers or a narrower target focus.
Either option would strengthen the recommendations of this study, which were made from practical, social, theoretical and policy perspectives. On a practical and social level, organisational capabilities suffer due to the lack of information sharing, and this extends to the community when similar restrictions prevent collaboration. Sharing knowledge and experiences while protecting sensitive information is a worthy goal, and something that can lead to improved defence. However, while improved understanding is one way to reduce the impact of cyber-attacks, the introduction of minimum cyber security standards for products could reduce the ease with which devices can be used to facilitate attacks, but only if policy and effective governance ensure product compliance with legislation. One positive side to COVID-19's push to remote working was an increase in digital literacy. As more roles were temporarily removed from their traditional physical workplace, many employees needed to rapidly accelerate their digital competency to continue their employment. To assist this transition, organisations implemented technology solutions that made it easier for these roles to be undertaken remotely and, as a consequence, opened these roles up to a greater pool of available candidates. Many of these roles are no longer limited to the geographical location of potential employees or traditional hours of availability; they can be performed from almost anywhere, at any time, which has had a positive effect on organisational capability and digital sustainability.

    Statistical learning in network architecture

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]).
    The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsic dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods as well as develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams. Our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential for using learning methods to address several distinct problems on the Internet and second to illuminate design principles in building such intelligent systems in network architecture.
    by Robert Edward Beverly, IV. Ph.D.
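
    One concrete technique the abstract names is combining ensembles of weak classifiers inside the network core to flag likely forged (spoofed) source addresses. The sketch below shows only the generic ensemble idea, several weak per-packet heuristics combined by weighted vote; the features, weights, and threshold are hypothetical and not the thesis's actual classifiers.

```python
# Hedged sketch: weighted vote over weak per-packet heuristics to flag a
# packet whose source address may be forged. Features, weights, and the
# decision threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    ttl: int
    announced_prefix: bool   # is the source covered by a routable, announced prefix?
    seen_before: bool        # has this source exchanged bidirectional traffic recently?

def weak_classifiers(pkt):
    """Each weak rule votes 1 (suspicious) or 0 (not suspicious) with a weight."""
    return [
        ("bogon_or_unannounced", 0 if pkt.announced_prefix else 1, 2.0),
        ("unexpected_ttl", 1 if pkt.ttl < 5 else 0, 1.0),
        ("no_prior_conversation", 0 if pkt.seen_before else 1, 0.5),
    ]

def likely_spoofed(pkt, threshold=2.0):
    score = sum(vote * weight for _, vote, weight in weak_classifiers(pkt))
    return score >= threshold

pkt = Packet(src_ip="203.0.113.7", ttl=3, announced_prefix=False, seen_before=False)
print(likely_spoofed(pkt))   # True with these illustrative weights
```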