545 research outputs found

    Accountable infrastructure and its impact on internet security and privacy

    The Internet infrastructure relies on the correct functioning of its basic underlying protocols, which were designed for functionality. Security and privacy have been added post hoc, mostly by applying cryptographic means to different layers of communication. Lacking accountability as a fundamental property, the Internet infrastructure has no built-in ability to associate an action with the responsible entity, nor to detect or prevent misbehavior. In this thesis, we study accountability from several perspectives. First, we study the need for accountability in anonymous communication networks as a mechanism that provides repudiation for the proxy nodes by tracing back selected outbound traffic in a provable manner. Second, we design a framework that provides a foundation to support the enforcement of the right-to-be-forgotten law in a scalable and automated manner. The framework provides a technical means for users to prove their eligibility for content removal from search results. Third, we analyze the Internet infrastructure to determine potential security risks and threats imposed by dependencies among the entities on the Internet. Finally, we evaluate the feasibility of using hop count filtering as a mechanism for mitigating Distributed Reflective Denial-of-Service (DRDoS) attacks, and conceptually show that it cannot prevent these attacks.
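    The abstract mentions hop count filtering only at a high level. As a minimal, hedged sketch of the generic technique (not the thesis's analysis), the snippet below infers a packet's hop count from its received TTL and compares it against a previously learned value for that source; the expected_hops table and all values are invented for illustration.

```python
# Minimal sketch of the generic hop-count filtering idea: infer how many hops
# a packet travelled from its received TTL and compare it with the hop count
# previously observed for that source. The expected_hops table and all values
# below are illustrative assumptions, not data from the thesis.

COMMON_INITIAL_TTLS = (32, 64, 128, 255)  # typical OS defaults

def infer_hop_count(received_ttl: int) -> int:
    """Guess the initial TTL as the smallest common default >= received TTL."""
    initial = next(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

def looks_spoofed(src_ip: str, received_ttl: int,
                  expected_hops: dict, tolerance: int = 2) -> bool:
    """Flag a packet whose inferred hop count deviates from the recorded one."""
    hops = infer_hop_count(received_ttl)
    expected = expected_hops.get(src_ip)
    if expected is None:
        return False          # no history for this source: cannot judge
    return abs(hops - expected) > tolerance

# Example: a packet claiming to come from 192.0.2.7 with TTL 57 (7 inferred hops)
print(looks_spoofed("192.0.2.7", 57, {"192.0.2.7": 12}))   # True
```

    One intuition consistent with the abstract's conclusion is that in a DRDoS attack the reflected responses come from legitimate reflectors, so their TTLs and hop counts are genuine, leaving this kind of filter at the victim with nothing anomalous to detect.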

    Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets

    A critical system must fulfil its mission despite the presence of security problems. Such systems are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information or other kinds of attacks. Systems generally have to be redesigned after a security incident occurs, which can lead to serious consequences, such as the enormous cost of re-implementing or re-programming the whole system, as well as possible economic losses. Security must therefore be conceived as an integral part of system development and as a distinct requirement on what the system must do (that is, a non-functional requirement of the system). Thus, when designing critical systems it is essential to study the attacks that may occur and to plan how to react to them, in order to keep meeting the functional and non-functional requirements of the system. Even when security issues are considered, it is also necessary to take into account the costs incurred to guarantee a given level of security in critical systems. Indeed, security costs can be a very relevant factor, since they may span several dimensions, such as budget, performance and reliability. Many of these critical systems that incorporate fault-tolerance techniques (FT systems) to deal with security issues are complex systems that use resources that may be compromised (i.e. may fail) through the activation of faults and/or errors caused by possible attacks. These systems can be modelled as discrete event systems in which resources are shared, also called resource allocation systems. This thesis focuses on FT systems with shared resources modelled by Petri nets (PNs). Such systems are generally so large that the exact computation of their performance becomes a very complex computational task, due to the state-space explosion problem. As a result, any task that requires an exhaustive exploration of the state space is not computable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, we provide different models, using the Unified Modelling Language (UML) and Petri nets, that help to bring security and fault-tolerance issues to the foreground during the system design phase, thereby enabling, for example, the analysis of the trade-off between security and performance. Second, we provide several algorithms to compute performance (also under failure conditions) by computing upper performance bounds, thus avoiding the state-space explosion problem. Finally, we provide algorithms to compute how to compensate for the performance degradation that arises when an unexpected situation occurs in a fault-tolerant system.
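    To make the modelling substrate concrete, the snippet below is a small illustrative sketch (not one of the thesis's models) of a Petri net encoded by its Pre/Post matrices, together with the enabling and firing rules; the thesis works with far larger nets and computes performance bounds rather than exploring the state space.

```python
import numpy as np

# Illustrative only: a tiny Petri net with one shared resource, encoded by its
# Pre/Post matrices (places x transitions). The net is a made-up example.

Pre  = np.array([[1, 0],    # p0 consumed by t0 (job waiting)
                 [0, 1],    # p1 consumed by t1 (job in service)
                 [1, 0]])   # p2 consumed by t0 (shared resource acquired)
Post = np.array([[0, 1],    # p0 produced by t1 (job returns)
                 [1, 0],    # p1 produced by t0
                 [0, 1]])   # p2 produced by t1 (resource released)
M0 = np.array([1, 0, 1])    # initial marking: one job, one free resource

def enabled(marking, t):
    return np.all(marking >= Pre[:, t])

def fire(marking, t):
    assert enabled(marking, t), "transition not enabled"
    return marking - Pre[:, t] + Post[:, t]

m = M0
for t in (0, 1):            # fire t0 then t1
    m = fire(m, t)
print(m)                    # back to the initial marking: [1 0 1]
```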

    A composable approach to design of newer techniques for large-scale denial-of-service attack attribution

    Since its early days, the Internet has witnessed not only phenomenal growth, but also a large number of security attacks, and in recent years, denial-of-service (DoS) attacks have emerged as one of the top threats. The stateless and destination-oriented Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focussed variously on prevention, mitigation, or traceback of DDoS attacks, the lack of a comprehensive approach satisfying the different design criteria for successful attack attribution is indeed disturbing. Our first contribution has been the design of a composable data model that has helped us represent the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed and overhead, as orthogonal and mutually independent design considerations. We have then designed custom optimizations along each of these dimensions, and have further integrated them into a single composite model, to provide strong performance guarantees. Thus, the proposed model has given us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques, but also provide a more comprehensive counter-measure against DDoS attacks. Our second contribution has been a concrete implementation based on the proposed composable data model, adopting a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet. The proposed approach has been analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and finally the total overhead associated with its deployment. We have thereby shown that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution. Our third contribution has further advanced the state of the art by directly identifying individual path fragments in the Internet graph, adopting a distributed divide-and-conquer approach that employs simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies, with respect to network storage and traffic overhead, has provided a more realistic characterization. Thus, not only does the proposed approach lend itself well to simplified operations at scale, but it can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution. Our final contribution has introduced the notion of anonymity into the overall attack attribution process to significantly broaden its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today: the ability to stay anonymous and operate freely without retribution. In this regard, we have successfully reconciled these mutually divergent requirements to make attack attribution not only economically feasible and politically viable but also socially acceptable. This work opens up several directions for future research: analysis of existing attack attribution techniques to identify further scope for improvements, incorporation of newer attributes into the design framework of the composable data model abstraction, and finally the design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation and traceback techniques in an efficient manner.
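    As a hedged illustration of the stitching step described above, the sketch below chains edge fragments, each tagged with a distance from the victim, into a single path, in the spirit of classic edge-sampling traceback; it is not the thesis's algorithm, and the router names and samples are invented.

```python
# Illustrative sketch of stitching edge fragments into a path. Each sample is
# (router_toward_attacker, router_toward_victim, distance_from_victim).
# This is NOT the thesis's algorithm; the data is invented for the example.

from collections import defaultdict

samples = [
    ("R3", "R4", 0),   # edge closest to the victim
    ("R2", "R3", 1),
    ("R1", "R2", 2),
    ("R1", "R2", 2),   # duplicate samples are harmless
]

def stitch(samples):
    """Order edge fragments by distance and chain them into a path."""
    by_distance = defaultdict(set)
    for upstream, downstream, dist in samples:
        by_distance[dist].add((upstream, downstream))
    path = []
    for dist in sorted(by_distance):
        for upstream, downstream in by_distance[dist]:
            if not path:
                path = [downstream, upstream]
            elif path[-1] == downstream:
                path.append(upstream)
    return path  # victim-side router first, attacker-side router last

print(stitch(samples))   # ['R4', 'R3', 'R2', 'R1']
```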

    Extending the Functionality of the Realm Gateway

    The promise of 5G and the Internet of Things (IoT) means the coming years are expected to see substantial growth in the number of connected devices. This increase further aggravates the IPv4 address exhaustion problem. Network Address Translation (NAT) is a widely known solution to IPv4 address depletion, but it impairs the reachability of hosts behind the NAT. Since the Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS) application layer protocols play a vital role in the communication of mobile and IoT devices, the NAT reachability problem needs to be addressed particularly for these protocols. Realm Gateway (RGW) is a solution proposed to overcome the NAT traversal issue. It acts as a Destination NAT (DNAT) for inbound connections initiated towards the private hosts, while acting as a Source NAT (SNAT) for connections in the outbound direction. The DNAT functionality of RGW is based on a circular pool algorithm that relies on the Domain Name System (DNS) queries sent by the client to maintain the correct connection state. However, an additional reverse proxy is needed with RGW to handle HTTP and HTTPS connections. In this thesis, a custom Application Layer Gateway (ALG) is designed to enable end-to-end communication between public clients and private web servers over HTTP and HTTPS. The ALG replaces the reverse proxy used in the original RGW software. Our solution uses a custom parser-lexer for hostname detection and routing of the traffic to the correct back-end web server. Furthermore, we integrated the RGW with a policy management system called Security Policy Management (SPM) for storing and retrieving the policies of RGW. We analyzed the impact of the new extensions on the performance of RGW in terms of scalability and computational overhead. Our analysis shows that the ALG's performance depends directly on the hardware specification of the system: on a system with powerful processing capabilities the ALG outperforms the NGINX reverse proxy used in the original RGW solution, improving the overall performance of RGW. The ALG also has an advantage over the reverse proxy in that it does not require the private keys of the back-end servers to forward encrypted HTTPS traffic.
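    As a hedged illustration of hostname-based routing in an application layer gateway, the sketch below peeks at a plain-HTTP request, extracts the Host header and selects a back-end server; the back-end map is hypothetical, and the thesis's ALG uses its own parser-lexer and, for HTTPS, presumably relies on information such as the SNI field of the TLS ClientHello, since it forwards encrypted traffic without the back-end private keys.

```python
# Minimal sketch of hostname-based routing as an application layer gateway
# might do it for plain HTTP: peek at the request, extract the Host header,
# and pick a back-end. The back-end map is a hypothetical example, not the
# thesis's configuration.

BACKENDS = {                      # hypothetical private web servers
    "alpha.example.com": ("192.168.1.10", 80),
    "beta.example.com":  ("192.168.1.11", 80),
}

def extract_host(request_bytes: bytes):
    """Return the Host header of an HTTP request, if present."""
    for line in request_bytes.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip().decode()
    return None

def choose_backend(request_bytes: bytes):
    host = extract_host(request_bytes)
    return BACKENDS.get(host)     # None -> reject or apply a default policy

req = b"GET / HTTP/1.1\r\nHost: alpha.example.com\r\n\r\n"
print(choose_backend(req))        # ('192.168.1.10', 80)
```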

    Network Traffic Analysis Framework For Cyber Threat Detection

    The growing sophistication of attacks and newly emerging cyber threats requires advanced cyber threat detection systems. Although several cyber threat detection tools are in use, cyber threats and data breaches continue to rise. This research is intended to improve the cyber threat detection approach by developing a cyber threat detection framework that uses two complementary technologies, a search engine and machine learning, combining artificial intelligence and classical technologies. In this design science research, several artifacts, such as a custom search engine library, a machine learning-based engine and different algorithms, have been developed to build a new cyber threat detection framework based on self-learning search and machine learning engines. The Apache Lucene.Net search engine library was customized to function as a cyber threat detector, and Microsoft ML.NET was used to work with and train the customized search engine. This research demonstrates that a custom search engine can function as a cyber threat detection system. Using both search and machine learning engines in the newly developed framework provides improved cyber threat detection capabilities such as self-learning and predicting attack details. When the two engines run together, the search engine is continuously trained by the machine learning engine and grows smarter, predicting as yet unknown threats with greater accuracy. While customizing the search engine to function as a cyber threat detector, this research also identified and validated the best algorithms for the search-engine-based cyber threat detection model. For example, the best scoring algorithm was found to be the Manhattan distance. The validation case study also shows that not every network traffic feature makes an equal contribution to determining the status of the traffic, and thus the variable-dimension Vector Space Model (VSM) achieves better detection accuracy than the n-dimensional VSM. Although the use of different technologies and approaches improved detection results, this research is primarily focused on developing techniques rather than building a complete threat detection system. Additional components, such as ones that track and investigate the impact of network traffic on the destination devices, would make the newly developed framework robust enough to serve as a comprehensive cyber threat detection appliance.
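    As a hedged illustration of the scoring idea reported above (Manhattan distance over a variable-dimension vector space model), the sketch below scores a traffic-feature vector against labelled reference vectors using only the features the two vectors share, which is one possible reading of "variable-dimension"; the feature names, values and labels are invented and this is not the thesis's engine.

```python
# Illustrative sketch: scoring a traffic-feature vector against labelled
# reference vectors with the Manhattan (L1) distance, using only features
# present in both vectors. Feature names and values are invented examples.

def manhattan(a: dict, b: dict) -> float:
    shared = a.keys() & b.keys()          # variable dimension: shared features only
    return sum(abs(a[f] - b[f]) for f in shared)

references = {
    "benign":    {"pkt_rate": 20, "syn_ratio": 0.1, "avg_len": 800},
    "port_scan": {"pkt_rate": 90, "syn_ratio": 0.9, "avg_len": 60},
}

def classify(sample: dict) -> str:
    return min(references, key=lambda label: manhattan(sample, references[label]))

sample = {"pkt_rate": 85, "syn_ratio": 0.8}   # avg_len not observed
print(classify(sample))                        # 'port_scan'
```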

    Standards and practices necessary to implement a successful security review program for intrusion management systems

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2002. Includes bibliographical references (leaves: 84-85). Text in English; Abstract: Turkish and English. viii, 91 leaves. Intrusion Management Systems are used to protect information systems from successful intrusions and their consequences. They also have detection features: they try to detect intrusions that have passed the implemented preventive measures. Intrusion Management Systems are likewise responsible for recovering the system after a successful intrusion and for investigating the intrusion. These functions can be realised in an intrusion management system model with a four-layer architecture, the layers being avoidance, assurance, detection and recovery. At the avoidance layer, the necessary policies, standards and practices are implemented to protect the information system from successful intrusions. At the assurance layer, the effectiveness of the implemented measures is measured by tests and reviews. At the detection layer, an intrusion or intrusion attempt is identified in real time. The recovery layer is responsible for restoring the information system after a successful intrusion; it also provides functions to investigate the intrusion. Intrusion Management Systems are used to protect information and computer assets from intrusions, and an organization aiming to protect its assets must use such a system. After the implementation of the system, continuous reviews must be conducted to ensure the effectiveness of the measures taken. Such a review can achieve its goal by using principles and standards. In this thesis, the principles necessary to implement a successful review program for Intrusion Management Systems have been developed under the guidance of the Generally Accepted System Security Principles (GASSP). These example principles are developed for the tools of each Intrusion Management System layer: firewalls for the avoidance layer, vulnerability scanners for the assurance layer, intrusion detection systems for the detection layer and integrity checkers for the recovery layer.
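    As a hedged illustration of the recovery-layer tooling named above, the sketch below is a minimal file integrity checker: record a baseline of file hashes, then report files whose contents have changed; the paths and usage are hypothetical and not tied to any tool evaluated in the thesis.

```python
# Minimal illustrative file integrity checker in the spirit of the recovery-
# layer tools mentioned in the abstract: record a baseline of file hashes,
# then report files whose contents changed. Paths are hypothetical examples.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_baseline(root: Path) -> dict:
    return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

def changed_files(root: Path, baseline: dict) -> list:
    current = take_baseline(root)
    return [p for p, digest in current.items() if baseline.get(p) != digest]

# Usage sketch:
# baseline = take_baseline(Path("/etc"))      # store this somewhere tamper-proof
# ...later, after a suspected intrusion...
# print(changed_files(Path("/etc"), baseline))
```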

    Statistical learning in network architecture

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 167-[177]). The Internet has become a ubiquitous substrate for communication in all parts of society. However, many original assumptions underlying its design are changing. Amid problems of scale, complexity, trust and security, the modern Internet accommodates increasingly critical services. Operators face a security arms race while balancing policy constraints, network demands and commercial relationships. This thesis espouses learning to embrace the Internet's inherent complexity, address diverse problems and provide a component of the network's continued evolution. Malicious nodes, cooperative competition and lack of instrumentation on the Internet imply an environment with partial information. Learning is thus an attractive and principled means to ensure generality and reconcile noisy, missing or conflicting data. We use learning to capitalize on under-utilized information and infer behavior more reliably, and on faster time-scales, than humans with only local perspective. Yet the intrinsic dynamic and distributed nature of networks presents interesting challenges to learning. In pursuit of viable solutions to several real-world Internet performance and security problems, we apply statistical learning methods as well as develop new, network-specific algorithms as a step toward overcoming these challenges. Throughout, we reconcile including intelligence at different points in the network with the end-to-end arguments. We first consider learning as an end-node optimization for efficient peer-to-peer overlay neighbor selection and agent-centric latency prediction. We then turn to security and use learning to exploit fundamental weaknesses in malicious traffic streams. Our method is both adaptable and not easily subvertible. Next, we show that certain security and optimization problems require collaboration, global scope and broad views. We employ ensembles of weak classifiers within the network core to mitigate IP source address forgery attacks, thereby removing incentive and coordination issues surrounding existing practice. Finally, we argue for learning within the routing plane as a means to directly optimize and balance provider and user objectives. This thesis thus serves first to validate the potential for using learning methods to address several distinct problems on the Internet and second to illuminate design principles in building such intelligent systems in network architecture. by Robert Edward Beverly, IV. Ph.D.
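    As a hedged illustration of the ensemble idea mentioned above, the sketch below combines a few weak, rule-based classifiers by majority vote to flag packets with possibly forged source addresses; the rules, packet fields and thresholds are invented for the example and are not the thesis's classifiers.

```python
# Generic illustration of an ensemble of weak classifiers combined by majority
# vote, in the spirit of flagging packets with forged source addresses. The
# individual rules and packet fields are invented, not taken from the thesis.

def ttl_rule(pkt) -> bool:
    # Suspicious if the received TTL implies an implausibly small hop count.
    return pkt["ttl"] > 250

def bogon_rule(pkt) -> bool:
    # Suspicious if the source lies in an unallocated ("bogon") prefix.
    return pkt["src"].startswith("0.") or pkt["src"].startswith("240.")

def asymmetry_rule(pkt) -> bool:
    # Suspicious if we never see return traffic for this source.
    return not pkt["seen_return_traffic"]

WEAK_CLASSIFIERS = [ttl_rule, bogon_rule, asymmetry_rule]

def looks_forged(pkt) -> bool:
    votes = sum(rule(pkt) for rule in WEAK_CLASSIFIERS)
    return votes >= 2          # simple majority vote

pkt = {"src": "240.1.2.3", "ttl": 255, "seen_return_traffic": False}
print(looks_forged(pkt))       # True
```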

    A reputation framework for behavioural history: developing and sharing reputations from behavioural history of network clients

    The open architecture of the Internet has enabled its massive growth and success by facilitating easy connectivity between hosts. At the same time, the Internet has also opened itself up to abuse, e.g. arising out of unsolicited communication, both intentional and unintentional. It remains an open question how servers should best protect themselves from malicious clients whilst offering good service to innocent clients. There has been research on behavioural profiling and reputation of clients, mostly at the network level and also for email as an application, to detect malicious clients. However, this area continues to pose open research challenges. This thesis is motivated by the need for a generalised framework capable of aiding efficient detection of malicious clients while being able to reward clients whose behaviour profiles conform to the acceptable use and other relevant policies. The main contribution of this thesis is a novel, generalised, context-aware, policy-independent, privacy-preserving framework for developing and sharing client reputation based on behavioural history. The framework augments existing protocols and allows policies to be plugged in at various stages, keeping it open and flexible to implement. Locally recorded behavioural histories of clients with known identities are translated into client reputations, which are then shared globally. The reputations preserve client privacy by not exposing the details of their behaviour during interactions with servers. The local and globally shared reputations help servers select service levels, including restricting access for malicious clients. We present results and analyses of simulations of client-server interactions and of attacks on our model, using synthetic data and some proposed example policies. Suggestions for possible future extensions are drawn from our experience with the simulations.
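    As a hedged illustration of turning behavioural history into a reputation and a service level, the sketch below scores a client's recorded events with exponential decay and maps the score to a service level; the scoring formula, decay factor, thresholds and level names are assumptions made for illustration, not the thesis's framework.

```python
# Illustrative sketch only: derive a reputation score from a client's
# behavioural history (+1 conforming / -1 violating events), giving recent
# events more weight, and map the score to a service level. The decay factor,
# thresholds and service levels are assumptions made for this example.

def reputation(events: list, decay: float = 0.9) -> float:
    """events: oldest first, each +1 (conforming) or -1 (policy violation)."""
    score = 0.0
    for outcome in events:            # later events dominate the score
        score = decay * score + outcome
    return score

def service_level(score: float) -> str:
    if score >= 2.0:
        return "full"
    if score >= 0.0:
        return "throttled"
    return "denied"

history = [+1, +1, +1, -1, -1]        # client recently started misbehaving
r = reputation(history)
print(round(r, 2), service_level(r))  # 0.3 throttled
```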

    An approach to preventing spam using Access Codes with a combination of anti-spam mechanisms

    Spam is becoming a more and more severe problem for individuals, networks, organisations and businesses, causing losses of billions of dollars every year. Research shows that spam accounts for more than 80% of e-mail, with its growth rate increasing every year. Spam is not limited to email; it has started affecting other technologies such as VoIP, cellular and traditional telephony, and instant messaging services. None of the existing approaches (legislative, collaborative, social awareness and technological), separately or in combination, prevents enough spam to be deemed a solution to the spam problem. The severity of the spam problem and the limitations of the state-of-the-art solutions create a strong need for an efficient anti-spam mechanism that can prevent significant volumes of spam without producing false positives. The proposed anti-spam mechanism, known as "Spam Prevention using Access Codes" (SPAC), aims to meet this need. SPAC targets spam from two angles: preventing/blocking spam and discouraging spammers by making the infrastructure environment very unpleasant for them. In addition to the idea of Access Codes, SPAC combines the ideas behind some of the key current technological anti-spam measures to increase effectiveness. The difference in this work is that SPAC combines those ideas in a unique way, which enables it to acquire the good features of a number of technological anti-spam approaches without their drawbacks. Sybil attacks, dictionary attacks and address spoofing have no impact on the performance of SPAC; it handles these sorts of attacks in the same way as messages from unknown senders. An application known as the "SPAC application" has been developed to test the performance of the SPAC mechanism. The results obtained from various tests on the SPAC application show that SPAC has a clear edge over the existing technological anti-spam approaches.
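    The abstract does not describe how access codes are generated or validated, so the following is a purely hypothetical sketch of one way a recipient could issue per-sender access codes (an HMAC over the sender's address) and check them on incoming mail; the construction, key handling and names are all assumptions made for illustration.

```python
# Purely hypothetical sketch of a per-sender access code scheme, to make the
# general idea concrete. The abstract does not specify how SPAC generates or
# validates access codes; the HMAC construction, key handling and message
# format below are assumptions made for this illustration only.

import hmac, hashlib

RECIPIENT_SECRET = b"recipient-private-key"      # kept by the recipient's server

def issue_access_code(sender_address: str) -> str:
    """Code the recipient hands out to a sender it is willing to hear from."""
    mac = hmac.new(RECIPIENT_SECRET, sender_address.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

def accept_message(sender_address: str, presented_code: str) -> bool:
    """Accept mail only if the presented code matches the sender's code."""
    expected = issue_access_code(sender_address)
    return hmac.compare_digest(expected, presented_code)

code = issue_access_code("alice@example.org")
print(accept_message("alice@example.org", code))       # True
print(accept_message("spammer@example.net", code))     # False: code not valid
```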