
    The strategic implications of the current Internet design for cyber security

    Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 87-89).
    In the last two decades, the Internet has evolved from a collection of a few networks to a worldwide interconnection of millions of networks and users who connect to transact virtually all kinds of business. The evolved network system is also known as Cyberspace. The use of Cyberspace has expanded to all fields of human endeavor, far exceeding the original design projections. And even though the Internet architecture and design have been robust enough to accommodate the extended domains of use and application, the Internet has also become a medium for launching all sorts of Cyber attacks that result in undesirable consequences for users. This thesis analyzes the current Internet system architecture and design and how their flaws are exploited to launch Cyber attacks; evaluates reports from Internet traffic monitoring activities and research reports from several organizations; maps Cyber attacks to their origins in Internet architecture and design flaws; conducts an Internet system stakeholder analysis; derives strategic implications of the impact of Internet system weaknesses on Cyber security; and makes recommendations on the broader issues of developing effective strategies to implement Cyber security in enterprise systems that have become increasingly complex. From a global architectural design perspective, the study demonstrates that although the Internet is a robust design, the lack of any means of authentication in the system is primarily responsible for the host of Cyber security issues and has thus become the bane of the system. Following the analysis, and by extrapolation and inference, we conclude that the myriad Cyber security problems will remain and continue on their current exponential growth path until the Internet, and in particular the TCP/IP stack, is given the ability to authenticate, and that only through a collaborative effort by all stakeholders of the Internet system can the other major Cyber security issues be resolved, especially as they relate to envisioning and fashioning new Cyber-security-centric technologies.
    by Charles M. Iheagwara. S.M. in Engineering and Management.
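
    The abstract's central claim is that the TCP/IP stack offers no means of authenticating a sender. As a minimal sketch of why (not from the thesis), the Python below builds a syntactically valid IPv4 header around an arbitrary, attacker-chosen source address; the header's only integrity field is a checksum, which detects corruption but binds the packet to no origin. The addresses are illustrative documentation-range values.

    ```python
    import struct

    def ipv4_checksum(header: bytes) -> int:
        """RFC 791 ones'-complement checksum; detects corruption, not forgery."""
        if len(header) % 2:
            header += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
        total = (total & 0xFFFF) + (total >> 16)
        total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
        """Build a minimal IPv4 header. 'src' is whatever the sender claims:
        no field in the header authenticates it."""
        version_ihl = (4 << 4) | 5           # IPv4, 20-byte header
        header = struct.pack(
            "!BBHHHBBH4s4s",
            version_ihl, 0, 20 + payload_len,
            0x1234, 0,                        # identification, flags/fragment
            64, 6, 0,                         # TTL, protocol=TCP, checksum placeholder
            bytes(map(int, src.split("."))),
            bytes(map(int, dst.split("."))),
        )
        checksum = ipv4_checksum(header)
        return header[:10] + struct.pack("!H", checksum) + header[12:]

    # A forged source address yields a perfectly valid header:
    print(build_ipv4_header("203.0.113.7", "198.51.100.9", 0).hex())
    ```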

    Intrusion Detection and Security Assessment in a University Network

    This thesis first explores how intrusion detection (ID) techniques can be used to provide an extra security layer for today's typically security-unaware Internet user. A review of the ever-growing network security threat is presented, along with an analysis of the suitability of existing ID systems (IDS) for protecting users of varying security expertise. In light of the impracticality of many IDS for today's users, a web-enabled, agent-based, hybrid IDS is proposed. The motivations for the system are presented along with details of its design and implementation. As a test case, the system is deployed on the DCU network and the results analysed. One of the aims of an IDS is to uncover security-related issues in its host network. The issues revealed by our IDS demonstrate that a full DCU network security assessment is warranted. This thesis describes how such an assessment should be carried out and presents the corresponding results. A set of security-enhancing recommendations for the DCU network is presented.
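
    The abstract does not detail the agents' internals. As a hedged illustration of the hybrid approach such systems commonly combine (signature matching plus anomaly detection), the sketch below pairs a few regex signatures with a per-source rate check; all patterns, thresholds, and names are hypothetical, not the thesis's design.

    ```python
    import re
    from collections import defaultdict, deque
    from time import time

    # Hypothetical signatures; a real IDS would load a maintained rule set.
    SIGNATURES = [re.compile(p) for p in (r"/etc/passwd", r"(?i)union\s+select", r"\.\./\.\./")]

    class HybridAgent:
        """Toy hybrid detector: signature matching plus a per-source rate anomaly check."""
        def __init__(self, rate_limit=100, window=10.0):
            self.rate_limit = rate_limit       # max events per window (assumed threshold)
            self.window = window               # seconds
            self.history = defaultdict(deque)  # source -> recent event timestamps

        def inspect(self, source: str, payload: str) -> list[str]:
            alerts = [f"signature:{sig.pattern}" for sig in SIGNATURES if sig.search(payload)]
            now = time()
            events = self.history[source]
            events.append(now)
            while events and now - events[0] > self.window:
                events.popleft()
            if len(events) > self.rate_limit:
                alerts.append(f"anomaly:rate>{self.rate_limit}/{self.window}s")
            return alerts

    agent = HybridAgent()
    print(agent.inspect("10.0.0.5", "GET /../../etc/passwd HTTP/1.1"))
    ```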

    Topology-Aware Vulnerability Mitigation Worms

    In very dynamic Information and Communication Technology (ICT) infrastructures, with rapidly growing applications, malicious intrusions have become very sophisticated, effective, and fast. Industries have suffered losses of billions of US dollars from malicious worm outbreaks alone. Several calls have been issued by governments and industries to the research community to propose innovative solutions that would help prevent malicious breaches, especially as enterprise networks become more complex, large, and volatile. In this thesis we approach self-replicating, self-propagating, and self-contained network programs (i.e. worms) as vulnerability mitigation mechanisms to eliminate threats to networks. These programs provide distinctive features, including short-distance communication with network nodes, intermittent network node vulnerability probing, and network topology discovery. Such features become necessary, especially for networks with frequent node association and disassociation, dynamically connected links, and hosts that concurrently run multiple operating systems. We propose -- to the best of our knowledge -- the first computer worm that utilizes the second layer of the OSI model (the Data Link Layer) as its main propagation medium. We name our defensive worm Seawave, a controlled, interactive, self-replicating, self-propagating, and self-contained vulnerability mitigation mechanism. We develop, experiment with, and evaluate Seawave under different simulation environments that mimic enterprise networks to a large extent. We also propose a threat analysis model to help identify weaknesses, strengths, and threats within and towards our vulnerability mitigation mechanism, followed by a mathematical propagation model to observe Seawave's performance under large-scale enterprise networks. We also propose, in preliminary form, another vulnerability mitigation worm that utilizes the Link Layer Discovery Protocol (LLDP) for its propagation, along with an evaluation of its performance. In addition, we describe a preliminary taxonomy that rediscovers the relationship between different types of self-replicating programs (i.e. viruses, worms, and botnets) and redefines these programs based on their properties. The taxonomy provides a classification that can be easily applied within industry and the research community and paves the way for a promising research direction that would consider the defensive side of self-replicating programs.
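
    The abstract cites a mathematical propagation model for observing Seawave at scale but does not state its form. A common starting point for worm propagation analysis is the susceptible-infected (SI) epidemic model, dI/dt = βI(N − I)/N, sketched numerically below; the population size and contact rate are assumed values, not Seawave's measured parameters.

    ```python
    # Classic SI worm-propagation model: dI/dt = beta * I * (N - I) / N.
    # Parameters below are illustrative, not taken from the Seawave evaluation.
    N = 10_000       # hosts in the enterprise network (assumed)
    beta = 0.8       # effective contacts per infected host per time unit (assumed)
    dt = 0.1         # Euler integration step

    infected = 1.0
    for step in range(1, 201):
        infected += beta * infected * (N - infected) / N * dt
        if step % 50 == 0:
            print(f"t={step*dt:5.1f}  infected={infected:8.1f}  ({100*infected/N:.1f}%)")
    ```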

    INTEGRATION OF INTELLIGENCE TECHNIQUES ON THE EXECUTION OF PENETRATION TESTS (iPENTEST)

    Penetration Tests (Pentests) identify potential vulnerabilities in the security of computer systems via security assessment. The process should also benefit from widely recognized methodologies and recommendations within this field, such as the Penetration Testing Execution Standard (PTES). The objective of this research is to explore PTES, particularly the three initial phases: 1. Pre-Engagement Interactions; 2. Intelligence Gathering; 3. Threat Modeling; and ultimately to apply Intelligence techniques to the Threat Modeling phase. To achieve this, we use open-source and/or commercial tools to structure a process that clarifies how the results were reached, following an inductive research methodology. The following steps were implemented: i) critical review of the “Penetration Testing Execution Standard (PTES)”; ii) critical review of the Intelligence Production Process; iii) specification and classification of contexts in which Intelligence could be applied; iv) definition of a methodology to apply Intelligence techniques to the specified contexts; v) application and evaluation of the proposed methodology on a real case study as proof of concept. This research aims to develop a model grounded in Intelligence techniques to be applied in the PTES Threat Modeling phase.
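
    As a hedged illustration of the Intelligence Gathering phase discussed above, the sketch below performs basic passive reconnaissance using only Python's standard library (forward and reverse DNS). The target domain is a placeholder, and any real use would stay within the scope fixed during Pre-Engagement Interactions.

    ```python
    import socket

    def gather_dns_intel(domain: str) -> dict:
        """Minimal passive recon: resolve a domain and attempt reverse lookups.
        Illustrative only; real PTES intelligence gathering draws on many more
        sources (WHOIS, certificate transparency, search engines, etc.)."""
        intel = {"domain": domain, "addresses": [], "reverse_names": []}
        try:
            # getaddrinfo returns all A/AAAA records known to the resolver.
            for family, _, _, _, sockaddr in socket.getaddrinfo(domain, None):
                addr = sockaddr[0]
                if addr not in intel["addresses"]:
                    intel["addresses"].append(addr)
        except socket.gaierror:
            return intel
        for addr in intel["addresses"]:
            try:
                intel["reverse_names"].append(socket.gethostbyaddr(addr)[0])
            except (socket.herror, socket.gaierror):
                pass
        return intel

    # "example.com" is a placeholder target, not from the thesis.
    print(gather_dns_intel("example.com"))
    ```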

    Overcoming Data Breaches and Human Factors in Minimizing Threats to Cyber-Security Ecosystems

    This mixed-methods study focused on the internal human factors responsible for data breaches that could cause adverse impacts on organizations. Based on the Swiss cheese theory, the study was designed to examine preventative measures that managers could implement to minimize potential data breaches resulting from internal employees' behaviors. The purpose of this study was to provide insight to managers about developing strategies that could prevent data breaches from cyber-threats by focusing on the specific internal human factors responsible for data breaches, their root causes, and the preventive measures that could minimize threats from internal employees. Data were collected from 10 managers and 12 employees from the business sector, and 5 government managers in Ivory Coast, Africa. The mixed methodology focused on the why and the who using a phenomenological approach, consisting of a survey, face-to-face interviews with open-ended questions, and a questionnaire to extract the participants' experiences and perceptions about preventing the adverse consequences of cyber-threats. The results indicated the importance of top managers being committed to a coordinated, continuous effort throughout the organization to ensure cyber security awareness, training, and compliance with security policies and procedures, as well as implementing and upgrading software designed to detect and prevent data breaches both internally and externally. The findings of this study could contribute to social change by educating managers about preventing data breaches; those managers in turn may implement information accessibility without retribution. Protecting confidential data is a major concern because one data breach could impact many people and jeopardize the viability of the entire organization.

    Leveraging Conventional Internet Routing Protocol Behavior to Defeat DDoS and Adverse Networking Conditions

    The Internet is a cornerstone of modern society. Yet increasingly devastating attacks against the Internet threaten to undermine the Internet's success at connecting the unconnected. Of all the adversarial campaigns waged against the Internet and the organizations that rely on it, distributed denial of service, or DDoS, tops the list of the most volatile attacks. In recent years, DDoS attacks have been responsible for large swaths of the Internet blacking out, while other attacks have completely overwhelmed key Internet services and websites. Core to the Internet's functionality is the way in which traffic gets from its source to its destination. The set of rules, or protocol, that defines the way traffic travels the Internet is known as the Border Gateway Protocol, or BGP, the de facto routing protocol on the Internet. Advanced adversaries often target the most used portions of the Internet by flooding the routes benign traffic takes with malicious traffic designed to cause widespread traffic loss to targeted end users and regions. This dissertation examines the following thesis statement: rather than seek to redefine the way the Internet works to combat advanced DDoS attacks, we can leverage conventional Internet routing behavior to mitigate modern distributed denial of service attacks. The research in this work breaks down into a single arc with three independent but connected thrusts, which demonstrate that the aforementioned thesis is possible, practical, and useful. The first thrust demonstrates that the thesis is possible by building and evaluating Nyx, a system that can protect Internet networks from DDoS using BGP, without an Internet redesign and without cooperation from other networks. This work reveals that Nyx is effective in simulation for protecting Internet networks and end users from the impact of devastating DDoS. The second thrust examines the real-world practicality of Nyx, as well as of other systems that rely on real-world BGP behavior. Through a comprehensive set of real-world Internet routing experiments, this second thrust confirms that Nyx works effectively in practice beyond simulation and reveals novel insights about the effectiveness of other Internet security defensive and offensive systems. We then follow these experiments by re-evaluating Nyx under the real-world routing constraints we discovered. The third thrust explores the usefulness of Nyx for mitigating DDoS against a crucial industry sector, power generation, by exposing the latent vulnerability of the U.S. power grid to DDoS and showing how a system such as Nyx can protect electric power utilities. This final thrust finds that the currently exposed U.S. power facilities are widely vulnerable to DDoS attacks that could induce blackouts, and that Nyx can be leveraged to reduce the impact of these targeted DDoS attacks.
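
    The abstract does not spell out Nyx's mechanism, but the conventional BGP behavior such systems leverage can be sketched. The Python below simulates the shortest-AS-path step of BGP best-path selection and shows how AS-path prepending, a standard traffic-engineering technique, steers inbound traffic off a chosen route; all AS numbers and paths are invented for illustration.

    ```python
    # Toy BGP best-path selection: among routes for the same prefix, prefer the
    # shortest AS path (the tiebreak most traffic-engineering tricks manipulate).
    # All AS numbers and paths are invented for illustration.

    def best_path(routes: list[list[int]]) -> list[int]:
        """Pick the route with the shortest AS path; ties broken by lowest next-hop ASN."""
        return min(routes, key=lambda path: (len(path), path[0]))

    prefix_routes = [
        [64500, 64510, 64520],   # path via provider A
        [64501, 64520],          # shorter path via provider B -> wins
    ]
    print("chosen:", best_path(prefix_routes))

    # AS-path prepending: the origin repeats its own ASN to make a path look
    # longer, steering inbound traffic away from it (e.g. away from a congested
    # or attacked link) -- now the provider A path wins instead.
    prepended = [64501, 64520, 64520, 64520]
    print("after prepending:", best_path([prefix_routes[0], prepended]))
    ```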

    Statistical learning in network architecture

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]).
    The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsic dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods and develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams. Our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential of learning methods to address several distinct problems on the Internet and second to illuminate design principles for building such intelligent systems in network architecture.
    by Robert Edward Beverly, IV. Ph.D.
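
    The thesis applies ensembles of weak classifiers in the network core against IP source-address forgery; the specific learners and features are not given in this abstract. As a hedged illustration of the general ensemble idea, the sketch below majority-votes three decision stumps over invented packet features (TTL deviation, hop-count drift, source novelty); the features and thresholds are placeholders, not the thesis's actual inputs.

    ```python
    # Majority vote over weak classifiers (decision stumps): the general
    # ensemble idea, applied here to a toy spoofed-source decision.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        ttl: int              # observed TTL on arrival
        expected_ttl: int     # TTL expected for this source given known path length
        hop_count_delta: int  # |inferred hops - historical hops| for the source
        new_source: bool      # source never seen on this ingress before

    def stump_ttl(p: Packet) -> bool:
        return abs(p.ttl - p.expected_ttl) > 5   # assumed threshold

    def stump_hops(p: Packet) -> bool:
        return p.hop_count_delta > 3             # assumed threshold

    def stump_novelty(p: Packet) -> bool:
        return p.new_source

    def ensemble_says_spoofed(p: Packet) -> bool:
        votes = sum(stump(p) for stump in (stump_ttl, stump_hops, stump_novelty))
        return votes >= 2    # simple majority of weak learners

    suspect = Packet(ttl=52, expected_ttl=118, hop_count_delta=7, new_source=True)
    print(ensemble_says_spoofed(suspect))   # True: two or more weak learners agree
    ```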