
    Systemization of Pluggable Transports for Censorship Resistance

    An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. In particular, the link between the censored client and the entry point to the uncensored network is a frequent target of censorship, due to the ease with which a nation-state censor can control it. A number of censorship resistance systems have been developed to help circumvent blocking on this link; we refer to these as link circumvention systems (LCs). The variety and profusion of attack vectors available to a censor has led to an arms race and a rapid pace of evolution in LCs. Despite their inherent complexity and the breadth of work in this area, there is no systematic way to evaluate link circumvention systems and compare them against each other. In this paper, we (i) sketch an attack model to comprehensively explore a censor's capabilities, (ii) present an abstract model of an LC, a system that helps a censored client communicate with a server over the Internet while resisting censorship, (iii) describe an evaluation stack that underscores a layered approach to evaluating LCs, and (iv) systemize and evaluate existing censorship resistance systems that provide link circumvention. We highlight open challenges in the evaluation and development of LCs and discuss possible mitigations. Comment: Content from this paper was published in Proceedings on Privacy Enhancing Technologies (PoPETS), Volume 2016, Issue 4 (July 2016) as "SoK: Making Sense of Censorship Resistance Systems" by Sheharbano Khattak, Tariq Elahi, Laurent Simon, Colleen M. Swanson, Steven J. Murdoch and Ian Goldberg (DOI 10.1515/popets-2016-0028).

    A hybrid and cross-protocol architecture with semantics and syntax awareness to improve intrusion detection efficiency in Voice over IP environments

    Includes abstract. Includes bibliographical references (leaves 134-140). Voice and data have traditionally been carried on different types of networks based on different technologies: circuit switching and packet switching, respectively. Convergence in networks enables carrying voice, video, and other data on the same packet-switched infrastructure, and provides various services related to these kinds of data in a unified way. Voice over Internet Protocol (VoIP) stands out as the standard that benefits from convergence by carrying voice calls over the packet-switched infrastructure of the Internet. Although sharing the same physical infrastructure with data networks makes convergence attractive in terms of cost and management, it also makes VoIP environments inherit all the security weaknesses of the Internet Protocol (IP). In addition, VoIP networks come with their own set of security concerns. Voice traffic on converged networks is packet-switched and vulnerable to interception with the same techniques used to sniff other traffic on a Local Area Network (LAN) or Wide Area Network (WAN). Denial of Service (DoS) attacks are among the most critical threats to VoIP due to the disruption of service and loss of revenue they cause. VoIP systems are expected to provide the same level of security provided by traditional Public Switched Telephone Networks (PSTNs), even though more functionality and intelligence are distributed to the endpoints and more protocols are involved to provide better service. A new design that takes all of the above factors into consideration, together with better intrusion detection techniques, is therefore needed. This thesis describes the design and implementation of a host-based Intrusion Detection System (IDS) that targets VoIP environments. Our intrusion detection system combines two types of modules for better detection capabilities: a specification-based and a signature-based module.
Our specification-based module takes the specifications of VoIP applications and protocols as the detection baseline. Any deviation from a protocol's proper behavior, as described by its specifications, is considered an anomaly. The Communicating Extended Finite State Machines (CEFSM) model is used to trace the behavior of the protocols involved in VoIP, and to help exchange detection results among protocols in a stateful and cross-protocol manner. The signature-based module is built in part upon State Transition Analysis Techniques, which are used to model and detect computer penetrations. Both detection modules allow for protocol-syntax and protocol-semantics awareness. Our intrusion detection system uses the aforementioned techniques to cover threats propagated via low-level protocols such as IP, ICMP, UDP, and TCP.
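The specification-based detection idea can be sketched in a few lines: a protocol's allowed state transitions act as the baseline, and any event not permitted in the current state is reported as an anomaly. The transition table below is a deliberately simplified, hypothetical SIP-like call flow, not the thesis's actual CEFSM model:

```python
# Minimal specification-based detector: the protocol's allowed state
# transitions serve as the detection baseline; any event outside the
# table is flagged as an anomaly. The states and events below are a
# simplified, hypothetical SIP-like call flow, invented for illustration.

# state -> {event: next_state}
SPEC = {
    "idle":        {"INVITE": "inviting"},
    "inviting":    {"180_RINGING": "ringing", "486_BUSY": "idle"},
    "ringing":     {"200_OK": "established"},
    "established": {"BYE": "idle"},
}

def monitor(events):
    """Replay events against the specification; return the deviations."""
    state = "idle"
    anomalies = []
    for ev in events:
        allowed = SPEC.get(state, {})
        if ev in allowed:
            state = allowed[ev]          # specified behavior: follow it
        else:
            anomalies.append((state, ev))  # deviation from the spec
    return anomalies
```

A full CEFSM adds variables, predicates on transitions, and message exchange between communicating machines, but the detection principle of flagging any deviation from specified behavior is the same.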

    Intrusion Detection from Heterogenous Sensors

    Nowadays, protecting computer systems and networks against various distributed, multi-step attacks is a vital challenge for their owners. One of the essential threats to the security of such computer infrastructures is attacks by malicious individuals, from inside or outside the system environment, who seek to abuse available services or reveal confidential information. Consequently, managing and supervising computer systems is a considerable challenge, as new threats and attacks are discovered on a daily basis. Intrusion Detection Systems (IDSs) play a key role in the surveillance and monitoring of computer network infrastructures. These systems inspect events occurring in computer systems and networks and, in case of malicious behavior, generate alerts describing the attacks' details. However, a number of shortcomings need to be addressed to make IDSs reliable enough for real-world situations. One of the fundamental challenges in real-world IDS deployment is the large number of redundant, non-relevant, and false positive alerts generated, making it a difficult task for security administrators to determine and identify the alerts that really matter.
Part of the problem is that most IDSs do not take into account contextual information (type of systems, applications, users, networks, etc.); as a result, a large portion of the alerts are non-relevant in the sense that, even though an intrusion is correctly recognized, it fails to reach its objectives. Additionally, relying on only one type of detection sensor is not adequate for detecting newer, more complicated attacks, and many current IDSs are consequently unable to detect them. This is especially important with respect to targeted attacks that try to avoid detection by conventional IDSs and other security products. While many system administrators successfully incorporate context information and many different types of sensors and logs into their analysis, an important problem with this approach is the lack of automation in both storage and analysis. In order to address these problems in IDS applicability, various IDS types have been proposed in recent years, and commercial off-the-shelf (COTS) IDS products have found their way into the Security Operations Centers (SOCs) of many large organizations. From a general perspective, these works can be categorized into: machine learning based approaches (including Bayesian networks, data mining methods, decision trees, neural networks, etc.), alert correlation and alert fusion based approaches, context-aware intrusion detection systems, distributed intrusion detection systems, and ontology based intrusion detection systems. To the best of our knowledge, since these works each focus on only one or a few of the IDS challenges, the problem as a whole has not been resolved. Hence, there is no comprehensive work addressing all the mentioned challenges of modern intrusion detection systems.
For example, works that utilize machine learning approaches only classify events based on features of the behavior observed for one type of event, and they do not take into account contextual information and event interrelationships. Most of the proposed alert correlation techniques consider correlation only across multiple sensors of the same type that share a common event and alert semantics (homogeneous correlation), leaving it to security administrators to perform correlation across heterogeneous types of sensors. Context-aware approaches employ only limited aspects of the underlying context. The lack of accurate evaluation based on data sets that encompass modern, complex attack scenarios is another major shortcoming of most of the proposed approaches. The goal of this thesis is to design an event correlation system that can correlate across several heterogeneous types of sensors and logs (e.g. IDS/IPS, firewall, database, operating system, anti-virus, web proxy, routers, etc.) in the hope of detecting complex attacks that leave traces in various systems, and to incorporate context information into the analysis in order to reduce false positives. To this end, our contributions can be split into four main parts: 1) We propose Pasargadae, a comprehensive context-aware and ontology-based event correlation framework that automatically performs event correlation by reasoning on information collected from various resources. Pasargadae uses ontologies to represent and store information on events, context, vulnerabilities, and attack scenarios, and uses simple ontology logic rules written in the Semantic Query-Enhanced Web Rule Language (SQWRL) to correlate this information and filter out non-relevant, duplicate, and false positive alerts.
2) We propose a meta-event based, topological-sort based, and semantic-based event correlation approach that employs Pasargadae to perform event correlation across events collected from several sensors distributed in a computer network. 3) We propose a semantic-based, context-aware alert fusion approach that relies on some of the subcomponents of Pasargadae to fuse alerts collected from heterogeneous IDSs. 4) In order to show the level of flexibility of Pasargadae, we use it to implement some other proposed alert and event correlation approaches. The sum of these contributions represents a significant improvement in the applicability and reliability of IDSs in real-world situations. In order to test the performance and flexibility of the proposed event correlation approach, we need to address the lack of experimental infrastructure suitable for network security research. A study of the literature shows that current experimental approaches are not appropriate for generating high-fidelity network data. Consequently, in order to accomplish a comprehensive evaluation, we first conduct our experiments on two separate case study scenarios, inspired by the DARPA 2000 and UNB ISCX IDS evaluation data sets. Next, as a complete field study, we deploy Pasargadae in a real computer network for a two-week period to inspect its detection capabilities on ground-truth network traffic. The results obtained show that, compared to other existing IDS improvements, the proposed contributions significantly improve IDS performance (detection rate) while reducing false positive, non-relevant, and duplicate alerts.
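As an illustration of the kind of context-aware filtering such ontology rules perform, here is a Python analogue of the idea (not actual SQWRL; the hosts, services, and vulnerability identifiers are invented):

```python
# Context-aware alert filtering in the spirit of Pasargadae's ontology
# rules: an alert is kept only if the targeted host actually runs a
# service vulnerable to the reported attack. The context store and the
# CVE-to-service map below are hypothetical examples, not real data.

CONTEXT = {
    "10.0.0.5": {"os": "linux",   "services": {"apache"}},
    "10.0.0.9": {"os": "windows", "services": {"iis"}},
}
VULNERABLE = {"CVE-APACHE-X": "apache", "CVE-IIS-Y": "iis"}  # invented IDs

def correlate(alerts):
    """Drop alerts whose target is not vulnerable to the reported CVE."""
    relevant = []
    for a in alerts:
        host = CONTEXT.get(a["target"], {})
        if VULNERABLE.get(a["cve"]) in host.get("services", set()):
            relevant.append(a)  # attack can reach its objective: keep
    return relevant             # everything else is non-relevant noise
```

An ontology-based engine expresses the same join between alert, asset, and vulnerability knowledge declaratively, so new filtering rules can be added without code changes.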

    From Understanding Telephone Scams to Implementing Authenticated Caller ID Transmission

    The telephone network is used by almost every person in the modern world. With the rise of Internet access to the PSTN, the telephone network today is rife with telephone spam and scams. Unlike email spam, spam calls demand immediate attention; they are not only a significant annoyance but also cause significant financial losses in the economy. According to complaint data from the FTC, complaints about illegal calls have reached record numbers in recent years. Americans lose billions to fraud through malicious telephone communication, despite various efforts to subdue telephone spam, scams, and robocalls. This dissertation presents a study of what causes users to fall victim to telephone scams, and demonstrates that impersonation is at the heart of the problem. Most solutions today rely primarily on gathering offending caller IDs; however, they do not work effectively when the caller ID has been spoofed. Due to the lack of authentication in the PSTN caller ID transmission scheme, fraudsters can manipulate the caller ID to impersonate a trusted entity and further a variety of scams. To address this fundamental problem, a novel architecture and method to authenticate the transmission of the caller ID is proposed. The solution enables a security indicator that can provide an early warning to help users stay vigilant against telephone impersonation scams, as well as a foundation for existing and future defenses that stop unwanted telephone communication based on caller ID information.
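The core idea of authenticating the caller ID in transit can be sketched as follows. This is a minimal illustration assuming a shared key between carriers and a MAC over the caller ID plus a timestamp; the dissertation's actual architecture may differ (for instance, it could rely on public-key signatures rather than a shared key):

```python
import hmac
import hashlib
import time

# Sketch of authenticated caller-ID transmission: the originating side
# attaches a MAC over the caller ID and a timestamp; the terminating
# side verifies the MAC and freshness before trusting the number.
# The shared key and message format are illustrative assumptions.

KEY = b"carrier-shared-secret"  # placeholder key for the sketch

def sign_caller_id(caller_id, ts):
    """Produce an authentication tag binding the caller ID to a time."""
    msg = f"{caller_id}|{ts}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify_caller_id(caller_id, ts, tag, max_age=30, now=None):
    """Accept the caller ID only if the tag is valid and fresh."""
    now = time.time() if now is None else now
    if now - ts > max_age:                # reject stale/replayed assertions
        return False
    expected = sign_caller_id(caller_id, ts)
    return hmac.compare_digest(expected, tag)
```

A spoofed caller ID fails verification because the fraudster cannot produce a valid tag, which is exactly the signal a security indicator on the callee's phone would surface.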

    Secure covert communications over streaming media using dynamic steganography

    Streaming technologies such as VoIP are widely embedded in commercial and industrial applications, so it is imperative to address data security issues before the problems become serious. This thesis describes a theoretical and experimental investigation of secure covert communications over streaming media using dynamic steganography. A covert VoIP communications system was developed in C++ to enable the implementation of the work carried out. A new information-theoretical model of secure covert communications over streaming media was constructed to depict the security scenarios in streaming media-based steganographic systems under passive attacks. The model involves a stochastic process that models an information source for covert VoIP communications, and the theory of hypothesis testing to analyse the adversary's detection performance. The potential of hardware-based true random key generation and chaotic interval selection for innovative applications in covert VoIP communications was explored. The CPU's read time-stamp counter was used as an entropy source to generate true random numbers serving as secret keys for streaming media steganography. A novel interval selection algorithm was devised to randomly choose data-embedding locations in VoIP streams, using random sequences generated from a chaotic process. A steganographic algorithm based on dynamic key updating and transmission, which integrates a one-way cryptographic accumulator into dynamic key exchange, was devised to provide secure key exchange for covert communications over streaming media. Analysis based on the discrete logarithm problem and steganalysis using the t-test showed that the algorithm provides a solid method of key distribution over a public channel.
The effectiveness of the new steganographic algorithm for covert communications over streaming media was examined by means of security analysis, steganalysis using non-parametric Mann-Whitney-Wilcoxon statistical testing, and performance and robustness measurements. The algorithm achieved an average data-embedding rate of 800 bps, comparable to other related algorithms. The results indicated that the algorithm has little or no impact on real-time VoIP communications in terms of speech quality (< 5% change in PESQ with hidden data), signal distortion (6% change in SNR after steganography) and imperceptibility, and that it is more secure and effective in addressing these security problems than other related algorithms.
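The chaotic interval selection idea can be sketched with a logistic map: both endpoints derive the same seed from the shared secret key and therefore agree on the embedding positions, while an observer without the key cannot predict them. The frame length and map parameter below are illustrative, not values taken from the thesis:

```python
# Sketch of chaotic interval selection: iterates of a logistic map,
# seeded from the shared secret key, pick the positions within a VoIP
# frame where covert bits are embedded. The parameters (frame length,
# map constant r) are illustrative assumptions.

def chaotic_positions(seed, frame_len, count, r=3.99):
    """Pick `count` distinct embedding positions in a frame of
    `frame_len` samples using logistic-map iterates (0 < seed < 1)."""
    x = seed
    chosen = []
    while len(chosen) < count:
        x = r * x * (1.0 - x)        # logistic map iteration, stays in (0, 1)
        pos = int(x * frame_len)     # map the iterate to a sample index
        if pos not in chosen:
            chosen.append(pos)
    return chosen
```

Because the map is deterministic, sender and receiver reproduce the identical position sequence from the same seed, yet the sequence looks random to anyone without the key.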

    Network communication privacy: traffic masking against traffic analysis

    An increasing number of recent experimental works have demonstrated that supposedly secure channels on the Internet are prone to privacy breaches in many respects, due to traffic features leaking information about user activity and traffic content. For example, traffic flow classification at the application level, web page identification, and language/phrase detection in VoIP communications have all been successfully demonstrated against encrypted channels. In this thesis I aim at understanding if, and how difficult, it is to obfuscate the information leaked by traffic features, namely packet lengths, directions, and timing. I define a security model that points out what the ideal target of masking is, and then define optimized and practically implementable masking algorithms, yielding a trade-off between privacy and the overhead/complexity of the masking algorithm. Numerical results are based on measured Internet traffic traces. The major findings are that: i) optimized full masking achieves similar overhead values with padding only and when fragmentation is allowed; ii) if practical realizability is accounted for, optimized statistical masking algorithms attain only moderately better overhead than simple fixed-pattern masking algorithms, while still leaking correlation information that can be exploited by the adversary.
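A minimal version of the fixed-pattern masking baseline mentioned above pads (and, when necessary, fragments) every packet to a constant length, so packet sizes carry no information. The 512-byte target and the overhead metric below are illustrative; the thesis's optimized algorithms are more elaborate:

```python
# Minimal fixed-pattern length masking: every packet is fragmented
# and/or padded to a constant length so that observed packet sizes leak
# nothing. The 512-byte target is an illustrative choice.

TARGET = 512  # fixed masked packet length, in bytes

def mask_lengths(lengths):
    """Return the masked packet lengths and the padding overhead ratio."""
    masked = []
    for n in lengths:
        while n > TARGET:          # fragment oversized packets
            masked.append(TARGET)
            n -= TARGET
        masked.append(TARGET)      # pad the (remaining) packet up to TARGET
    overhead = sum(masked) / sum(lengths) - 1.0
    return masked, overhead
```

This captures the privacy/overhead trade-off at its crudest: sizes are perfectly hidden, and the `overhead` value is the price paid in extra bytes.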

    Multimedia

    The ubiquitous and effortless digital data capture and processing capabilities offered by the majority of today's devices lead to an unprecedented penetration of multimedia content into our everyday life. To make the most of this phenomenon, the rapidly increasing volume and usage of digitised content require constant re-evaluation and adaptation of multimedia methodologies, in order to meet the relentlessly changing requirements from both the user and system perspectives. Advances in Multimedia provides readers with an overview of the ever-growing field of multimedia by bringing together various research studies and surveys from different subfields that point out these important aspects. Some of the main topics that this book deals with include: multimedia management in peer-to-peer structures and wireless networks, security characteristics in multimedia, bridging the semantic gap for multimedia content, and novel multimedia applications.

    Anomaly Detection in Network Streams Through a Distributional Lens

    Anomaly detection in computer networks yields valuable information on events relating to the components of a network, their states, the users in a network, and their activities. This thesis provides a unified distribution-based methodology for online detection of anomalies in network traffic streams. The methodology is distribution-based in that it regards the traffic stream as a time series of distributions (histograms), and monitors metrics of the distributions in the time series. The effectiveness of the methodology is demonstrated in three application scenarios. First, in 802.11 wireless traffic, we show the ability to detect certain classes of attacks. Second, in information network update streams (specifically, Wikipedia), we show the ability to detect the activity of bots, flash events, and outages as they occur. Third, in Voice over IP traffic streams, we show the ability to detect covert channels that exfiltrate confidential information out of the network. Our experiments show the high detection rate of the methodology compared to other existing methods, while maintaining a low rate of false positives. Furthermore, we provide algorithmic results that enable efficient and scalable implementation of the methodology, to accommodate the massive data rates observed in modern information streams on the Internet. Through these applications, we present an extensive study of several aspects of the methodology. We analyze the behavior of the metrics we consider, justifying our choice of metrics and showing how they can be used to diagnose anomalies. We also provide insight into the choice of parameters, such as window length and threshold, used in anomaly detection.
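A minimal sketch of the distributional approach: summarize each time window as a normalized histogram and flag a window whose L1 distance from a reference distribution exceeds a threshold. The bin layout, the choice of L1 distance, and the threshold below are invented for illustration; the thesis studies several metrics and parameter choices:

```python
# Distribution-based anomaly detection sketch: each window of traffic
# is reduced to a normalized histogram, and windows whose distribution
# drifts too far (in L1 distance) from a reference are flagged.
# Bin count, value range, and threshold are illustrative assumptions.

def histogram(values, bins, lo, hi):
    """Normalized histogram of `values` over [lo, hi)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp into last bin
        counts[i] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def l1_distance(p, q):
    """L1 (total variation style) distance between two distributions."""
    return sum(abs(a - b) for a, b in zip(p, q))

def is_anomalous(window, reference, bins=8, lo=0, hi=1600, threshold=0.5):
    """Compare a window of packet sizes against a reference window."""
    p = histogram(window, bins, lo, hi)
    q = histogram(reference, bins, lo, hi)
    return l1_distance(p, q) > threshold
```

The same skeleton works for any of the three application scenarios: only the feature being histogrammed (packet size, edit rate, talk-spurt length) and the tuned threshold change.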

    A Comprehensive Review on Adaptability of Network Forensics Frameworks for Mobile Cloud Computing

    Network forensics enables the investigation and identification of network attacks through retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud have made Mobile Cloud Computing (MCC) a prime target for network attacks. However, carrying out forensics in MCC is constrained by the autonomous cloud hosting companies and their policies restricting access to the digital content in back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.