
    Inferring Network Usage from Passive Measurements in ISP Networks: Bringing Visibility of the Network to Internet Operators

    The Internet evolves with us over time; nowadays people depend on it for many of the simple activities of their lives. It is not uncommon to use the Internet for voice and video communications, social networking, banking, and shopping. Current trends in Internet applications, such as Web 2.0, cloud computing, and the Internet of Things, are bound to bring higher traffic volumes and more heterogeneous traffic. In addition, privacy concerns and network security threats have widely promoted the use of encryption in network communications. All these factors make network management an evolving environment that grows more difficult every day. This thesis focuses on keeping track of some of these changes, observing the Internet from an ISP viewpoint and exploring several aspects of network visibility, giving insight into which contents or services customers retrieve and how these contents are delivered to them. Generally, this information is inferred by characterizing and analyzing data collected with passive traffic monitoring tools on operational networks. Analysis and characterization of passively collected traffic is challenging: Internet end-users are not controlled in the network traffic they generate, and their traffic may be encrypted or encoded in ways that are infeasible to decode, creating the need for reverse engineering to provide a clear picture to the Internet operator. Despite these challenges, the thesis presents a characterization of P2P-TV usage based on a commercial, proprietary, closed application that encrypts or encodes its traffic, making it quite difficult to discern what is going on by simply observing the data carried by the protocol. It then presents DN-Hunter, an application that makes a large part of the network traffic visible even when encryption or encoding is in use.
Finally, the thesis presents a case study in which DN-Hunter is used to understand Amazon Web Services, the most prominent cloud provider offering computing, storage, and content delivery platforms. The study unveils the infrastructure, the pervasiveness of content, and AWS's traffic allocation policies. The findings reveal that most of the content residing on cloud computing and Internet storage infrastructures is served by a single Amazon datacenter located in Virginia, even though it appears to be the worst performing one for Italian users. This causes traffic to take long and expensive paths through the network. Since AWS offers no automatic migration or load-balancing policies among different locations, content is exposed to outages, as observed in the presented datasets.
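The core idea behind DN-Hunter, as summarized above, is to correlate DNS responses observed on the wire with the flows that follow, so that even encrypted flows can be labeled by server name. A minimal sketch of that correlation step is shown below; the class name, fields, and addresses are illustrative assumptions, not the tool's real API.

```python
# Illustrative sketch of DNS-to-flow correlation: remember which server
# names a client resolved, then label later flows to those addresses.
# All names here are hypothetical, not DN-Hunter's actual interface.

class DnsFlowLabeler:
    def __init__(self):
        # (client_ip, server_ip) -> most recently resolved hostname
        self.resolutions = {}

    def observe_dns_response(self, client_ip, hostname, resolved_ips):
        """Record a DNS answer seen by the passive monitor."""
        for server_ip in resolved_ips:
            self.resolutions[(client_ip, server_ip)] = hostname

    def label_flow(self, client_ip, server_ip):
        """Name a (possibly encrypted) flow using the preceding lookup."""
        return self.resolutions.get((client_ip, server_ip), "unknown")

labeler = DnsFlowLabeler()
labeler.observe_dns_response("10.0.0.5", "cdn.example.com",
                             ["192.0.2.10", "192.0.2.11"])
print(labeler.label_flow("10.0.0.5", "192.0.2.10"))  # cdn.example.com
```

The key property is that the DNS transaction happens in cleartext before the flow starts, so the mapping survives even when the flow itself is encrypted.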

    AUTOMATED NETWORK SECURITY WITH EXCEPTIONS USING SDN

    Campus networks have recently experienced a proliferation of devices ranging from personal devices (e.g., smartphones, laptops, tablets) to special-purpose network equipment (e.g., firewalls, network address translation boxes, network caches, load balancers, virtual private network servers, and authentication servers) and special-purpose systems (badge readers, IP phones, cameras, location trackers, etc.). To establish directives and regulations for the ways in which these heterogeneous systems are allowed to interact with each other and with the network infrastructure, organizations typically appoint policy writing committees (PWCs) to create acceptable use policy (AUP) documents describing the rules and behavioral guidelines that all campus network interactions must abide by. While users are the audience for the AUP documents produced by an organization's PWC, network administrators are the party responsible for enforcing those policies, using low-level CLI instructions and configuration files that are typically difficult to understand and make it almost impossible to show that they do, in fact, enforce the AUPs. In other words, mapping the contents of imprecise, unstructured sentences into technical configurations is a challenging task that relies on the interpretation and expertise of the network operator carrying out the policy enforcement. Moreover, there are multiple places where policy enforcement can take place. For example, policies governing servers (e.g., web, mail, and file servers) are often encoded into the server's configuration files. However, from a security perspective, conflating policy enforcement with server configuration is a dangerous practice, because minor server misconfigurations can open up avenues for security exploits.
On the other hand, policies enforced in the network tend to change rarely over time and are often one-size-fits-all policies that can severely limit the fast-paced dynamics of emerging research workflows found in campus networks. This dissertation addresses the above problems by leveraging recent advances in Software-Defined Networking (SDN) to support systems that enable novel in-network approaches to enforcing an organization's network security policies. Namely, we introduce PoLanCO, a human-readable yet technically precise policy language that serves as a middle ground between the imprecise statements found in AUPs and the technical low-level mechanisms used to implement them. Real-world examples show that PoLanCO is capable of expressing a wide range of policies found in campus networks. In addition, we present the concept of Network Security Caps, an enforcement layer that separates server/device functionality from policy enforcement. A Network Security Cap intercepts packets coming from, and going to, servers and ensures policy compliance before allowing network devices to process packets using traditional forwarding mechanisms. Lastly, we propose the on-demand security exceptions model to cope with the dynamics of emerging research workflows that are not suited to a one-size-fits-all security approach. In the proposed model, network users and providers establish trust relationships that can be used to temporarily bypass the policy compliance checks applied to general-purpose traffic -- typically performed by network appliances that carry out Deep Packet Inspection, thereby creating network bottlenecks. We describe the components of a prototype exception system as well as experiments showing that, through short-lived exceptions, researchers can realize significant improvements for their special-purpose traffic.
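The abstract describes a policy language sitting between AUP prose and low-level enforcement mechanisms. PoLanCO's actual grammar is not given here, so the following is only an illustrative sketch of the general idea: compiling a constrained, human-readable statement into an OpenFlow-style match/action rule. The statement shape and field names are assumptions.

```python
# Hypothetical sketch (not PoLanCO's real syntax): compile a constrained
# human-readable policy statement into a match/action rule dictionary.

def compile_policy(statement):
    """Compile 'block|allow <proto> from <src> to <dst>' into a rule."""
    tokens = statement.lower().split()
    # expected shape: [action, proto, 'from', src, 'to', dst]
    action, proto, _, src, _, dst = tokens
    return {
        "match": {"proto": proto, "src": src, "dst": dst},
        "action": "drop" if action == "block" else "forward",
    }

rule = compile_policy("block telnet from dorm-network to datacenter")
print(rule["action"])  # drop
```

A real middle-ground language would of course need named host groups, exceptions, and validation, but the compile-to-rules step illustrates how imprecise AUP sentences get replaced by technically precise, machine-enforceable statements.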

    Characterising attacks targeting low-cost routers: a MikroTik case study (Extended)

    Attacks targeting network infrastructure devices pose a threat to the security of the Internet: an attack on such devices can affect an entire autonomous system. In recent years, malware such as VPNFilter, Navidade, and SonarDNS has been used to compromise low-cost routers and commit all sorts of cybercrimes, from DDoS attacks to ransomware deployments. Routers of the type concerned are used both to provide last-mile access for home users and to manage interdomain routing (BGP). MikroTik is a particular brand of low-cost router. In our previous research, we found more than 4 million MikroTik routers available on the Internet, and we showed that these devices are also popular in Internet Exchange infrastructures. Despite their popularity, these devices are known to have numerous vulnerabilities. In this paper, we extend our previous analysis by presenting a long-term investigation of MikroTik-targeted attacks. Using a highly interactive honeypot that we developed, we collected more than 44 million packets over 120 days from sensors deployed in Australia, Brazil, China, India, the Netherlands, and the United States. The incoming traffic was classified on the basis of Common Vulnerabilities and Exposures (CVE) identifiers to detect attacks targeting MikroTik devices. This enabled us to identify a wide range of activities on the system, such as cryptocurrency mining, DNS server redirection, and more than 3,000 successfully established tunnels used for eavesdropping. Although this research focuses on MikroTik devices, both the methodology and the publicly available scripts can easily be applied to any other type of network device.
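The classification step described above, mapping incoming honeypot traffic to CVE identifiers, can be sketched as a simple signature match. The signature table below is an illustrative placeholder (the port/marker pairs are assumptions, not the paper's actual ruleset), though CVE-2018-14847 is indeed a well-known MikroTik WinBox vulnerability.

```python
# Hypothetical sketch of CVE-based traffic classification: match each
# incoming packet against per-CVE signatures (destination port plus a
# payload marker). Signature contents are illustrative assumptions.

SIGNATURES = {
    "CVE-2018-14847": {"port": 8291, "marker": b"winbox"},   # WinBox service
    "CVE-2019-3943":  {"port": 80,   "marker": b"/jsproxy"},
}

def classify(dst_port, payload):
    """Return the CVE identifiers whose signatures match this packet."""
    hits = []
    for cve, sig in SIGNATURES.items():
        if dst_port == sig["port"] and sig["marker"] in payload:
            hits.append(cve)
    return hits

print(classify(8291, b"winbox session init"))  # ['CVE-2018-14847']
```

In the paper's setting, the same kind of lookup would run over the 44 million captured packets to attribute each probe or exploit attempt to a known vulnerability.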

    Detecting spam relays by SMTP traffic characteristics using an autonomous detection system

    Spam emails are flooding the Internet, and research into spam prevention is an ongoing concern. In this thesis, SMTP traffic was collected from different sources in real networks and analyzed to determine how the SMTP traffic characteristics of legitimate email clients, legitimate email servers, and spam relays differ. The analysis shows that SMTP traffic from legitimate and non-legitimate sites differs and can be distinguished. Based on these traffic characteristics, the thesis proposes several methods for identifying spam relays in the network and develops an autonomous combination system that employs machine learning to do so. The system identifies spam relays in real time, before spam emails reach an end user, using SMTP traffic characteristics alone and never examining the actual email content. A series of tests was conducted to evaluate the performance of the system; the results show that it can identify spam relays with a high detection rate and an acceptable false positive rate.
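The approach above, classifying hosts from SMTP traffic characteristics rather than message content, can be sketched as a feature-extraction step plus a decision rule. The feature names and thresholds below are illustrative assumptions; the thesis itself uses trained machine-learning models rather than fixed thresholds.

```python
# Illustrative sketch: derive per-host features from SMTP connection
# records and flag hosts whose pattern resembles a spam relay.
# Feature names and thresholds are assumptions, not the thesis's model.

def extract_features(connections):
    """connections: list of dicts with 'dst' (server) and 'accepted' (bool)."""
    total = len(connections)
    distinct_dsts = len({c["dst"] for c in connections})
    rejected = sum(1 for c in connections if not c["accepted"])
    return {
        "dst_fanout": distinct_dsts / total,   # fraction of unique servers
        "reject_ratio": rejected / total,      # fraction of refused sessions
    }

def looks_like_spam_relay(features):
    # Relays typically fan out to many mail servers and see many rejections;
    # a trained classifier would replace these placeholder thresholds.
    return features["dst_fanout"] > 0.8 and features["reject_ratio"] > 0.3

relay_log = [{"dst": f"mx{i}.example", "accepted": False} for i in range(10)]
print(looks_like_spam_relay(extract_features(relay_log)))  # True
```

Because all features come from connection behavior, the check can run in real time at the network edge without ever inspecting message bodies, matching the privacy property the thesis emphasizes.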

    Design and implementation of a multi-agent opportunistic grid computing platform

    Opportunistic Grid Computing involves joining the idle computing resources in an enterprise into a converged, high-performance commodity infrastructure. The research described in this dissertation investigates the viability of public resource computing in offering a plethora of possibilities through seamless access to shared compute and storage resources. It proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution as an Information and Communication Technologies for Development (ICT4D) initiative to address some limitations prevalent in traditional distributed system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for provisioning Opportunistic Grid Computing platforms. Exploration of agent technologies within the research context identified two key components that improve access to extended computing capabilities. The first is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles; it integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule-based reasoning concepts. The second is a MAS-based storage component realized through disk mirroring and Google File System-style chunking with atomic-append storage techniques. This research provides a candidate Opportunistic Grid Computing platform design and implementation based on MAS. Experiments validated the design and implementation of the compute and storage services: support for processing user applications, resource identification, resource allocation, and rule-based reasoning validated the MA compute component, and a MAS-based file system implementing chunking optimizations was evaluated as optimal.
The findings from the experiments also validated the functional adequacy of the implementation and show the suitability of MAS for provisioning robust, autonomous, and intelligent platforms. The ICT4D context of this research provides a solution for optimizing and increasing the utilization of computing resources that are usually idle in such settings.
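The Contract Net Protocol mentioned above is a standard agent-coordination pattern: a manager announces a task, candidate agents submit bids, and the task is awarded to the best bidder. A minimal round of that negotiation might look like the following sketch (agent names and the "free cycles" bid metric are illustrative assumptions):

```python
# Minimal Contract Net Protocol round: a manager announces a task, idle
# agents bid with their available capacity, and the best bidder wins.
# Names and the bid metric are illustrative, not the dissertation's code.

class Agent:
    def __init__(self, name, free_cycles):
        self.name = name
        self.free_cycles = free_cycles

    def bid(self, task):
        # An agent only bids if it can actually host the task.
        return self.free_cycles if self.free_cycles >= task["cycles"] else None

def award(task, agents):
    """Manager side: collect bids and award the task to the highest one."""
    bids = [(a.bid(task), a) for a in agents]
    bids = [(b, a) for b, a in bids if b is not None]
    if not bids:
        return None  # no agent can take the task right now
    return max(bids, key=lambda pair: pair[0])[1]

agents = [Agent("lab-pc-1", 10), Agent("lab-pc-2", 40), Agent("lab-pc-3", 25)]
winner = award({"cycles": 20}, agents)
print(winner.name)  # lab-pc-2
```

In the MAOG design, this announce/bid/award loop is what lets the platform dynamically discover and allocate whichever desktop machines happen to be idle.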

    Block the Root Takeover: Validating Devices Using Blockchain Protocol

    This study addresses a vulnerability in the trust-based STP protocol that allows malicious users to target an Ethernet LAN with an STP root-takeover attack. The subject is relevant because an STP root-takeover attack is a gateway to unauthorized control over the entire network stack of a personal or enterprise network. The study addresses this problem with a potentially trustless research solution called the STP DApp: the combination of a kernel network-stack (/net) modification called stpverify and a Hyperledger Fabric blockchain framework running in a NodeJS environment in userland. The STP DApp works as an intrusion prevention system (IPS) by intercepting Ethernet traffic and blocking forged Ethernet frames sent by STP root-takeover attackers. The research methodology is a quantitative pre-experimental design that provides conclusive results through empirical data and analysis using experimental control groups. Data collection was based on active RAM utilization and CPU usage during a performance evaluation of the STP DApp as it blocked an STP root-takeover attack launched by the Yersinia attack tool, installed on a virtual machine running the Kali operating system. The research solution uses a test blockchain framework built on Hyperledger Fabric, consisting of an experimental test network of nodes on a host virtual machine, which validates Ethernet frames extracted by stpverify.
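The detection problem underlying the attack described above is simple to state: STP elects as root the bridge with the lowest bridge ID (priority, then MAC address), so a root takeover is a BPDU claiming an ID better than the legitimate root's. A hedged sketch of that check follows; the field names and the hardcoded legitimate root are assumptions, not stpverify's actual implementation.

```python
# Illustrative sketch of root-takeover detection: flag BPDUs advertising
# a bridge ID numerically lower (i.e. "better" in STP terms) than the
# known legitimate root. Field names and values are assumptions.

LEGITIMATE_ROOT = {"priority": 4096, "mac": "00:11:22:33:44:55"}

def bridge_id(priority, mac):
    # STP compares priority first, then MAC address; lower wins.
    return (priority, int(mac.replace(":", ""), 16))

def is_takeover_attempt(bpdu):
    """bpdu: dict with 'priority' and 'mac' of the claimed root bridge."""
    claimed = bridge_id(bpdu["priority"], bpdu["mac"])
    legit = bridge_id(LEGITIMATE_ROOT["priority"], LEGITIMATE_ROOT["mac"])
    return claimed < legit

# Yersinia-style attack frames typically claim a very low priority.
print(is_takeover_attempt({"priority": 0, "mac": "de:ad:be:ef:00:01"}))  # True
```

In the STP DApp, frames flagged this way would additionally be checked against the Hyperledger Fabric ledger of known-good devices before being dropped.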

    Establishing security and privacy policies for an on-line auction

    The current Enterprise Resource Planning (ERP) project proposes to use business-to-business electronic commerce to develop markets for end-of-life products and their components. The objective is to develop a science and technology base for a scalable and secure hub for reverse-logistics e-commerce in which users can buy and sell used or surplus products, components, and materials, as well as dispose of them responsibly. A critical part of the project is the design of the security architecture, together with security and privacy policies for the project's online electronic marketplace. Security for the auction website should focus on three concerns: prevention, detection, and response. Prevention rests on four basic characteristics of computer security: authentication, confidentiality, integrity, and availability. We also analyze some of the vulnerabilities and common attacks against websites, and ways to defend against them. Detection involves several approaches to monitoring traffic on the internal network and logging the activities of users, which is important for providing forensic evidence when a site is compromised. Detection, however, is useless without some type of response, whether by patching newly found security holes, contacting vendors to report security weaknesses and new viruses, or contacting local and federal agencies to help close those holes or bring violators to justice. We examine these issues, as well as trust in auctions -- allowing buyers and sellers to determine whether a user is trustworthy -- and automatic schemes for preventing a fraudulent user from exploiting that trust.

    Opnet, Arne, and the Classroom

    This paper examines OPNET Technology, Inc.'s management programs and Regis University's Academic Research Network (ARNe) needs to determine which OPNET programs can meet the needs of ARNe. The method was to examine ARNe's needs; research Microsoft's SMF/MOF management framework; research OPNET's program and module offerings; research OPNET's University Program; and research how OPNET's programs are used at other universities. This research was used to map Microsoft's Service Management Functions to OPNET's programs and modules, and to create a list of textbooks, labs, and lab manuals that would work with OPNET's IT Guru and Modeler in a classroom to help teach networking theory. Combining the examination with the research produced an evaluation criteria matrix from which project recommendations could be drawn. The conclusion was that the following OPNET programs and modules could benefit Regis University's ARNe: ACE, the Automation module, Commander, the DAC module, the Flow Analysis module, IT Sentinel, IT Guru, NetDoctor, Report Server, and VNE Server.