10 research outputs found

    Quantification and Modeling of Broken Links Prevalence in Hyper Traffic Websites Homepages

    Full text link
    Broken links in websites' external resources pose a serious threat to cybersecurity and to the credibility of websites. They can be hijacked to eavesdrop on user traffic or to inject malicious software. In this paper, we present the first results of ongoing research. We focus on the prevalence of broken links in external resources on the home pages of the most visited websites in the world. The analysis was conducted on the top 88,000 homepages extracted from the Majestic Million rankings; 35.2% of them have at least one broken link. We also identify the common causes of these broken links and highlight the improper implementation of testing phases to prevent such errors. We provide a formal model for the distribution of external links. As the next research step, we are exploring the potential privacy impact of broken links by analyzing the inherited traffic of purchasable expired domains. (Comment: 4 pages, 3 tables, 1 figure)
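
    A minimal sketch of the kind of measurement described above (not the authors' tooling): probe a homepage's external resource URLs with HEAD requests and count failures. The candidate URLs, timeout, and user agent are illustrative assumptions.

        # Minimal sketch: flag broken external links extracted from a homepage.
        import urllib.request
        import urllib.error

        def is_broken(url, timeout=10):
            """Return True if fetching the URL fails or yields an HTTP error status."""
            req = urllib.request.Request(url, method="HEAD",
                                         headers={"User-Agent": "link-checker/0.1"})
            try:
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.status >= 400
            except (urllib.error.HTTPError, urllib.error.URLError, TimeoutError):
                return True

        if __name__ == "__main__":
            # Hypothetical external resources found on a homepage.
            candidates = ["https://example.com/script.js", "https://example.org/style.css"]
            broken = [u for u in candidates if is_broken(u)]
            print(f"{len(broken)}/{len(candidates)} external links appear broken")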

    Improving Desktop System Security Using Compartmentalization

    Get PDF
    Compartmentalizing access to content, be it websites accessed in a browser or documents and applications accessed outside the browser, is an established method for protecting information integrity [12, 19, 21, 60]. Compartmentalization solutions change the user experience, introduce performance overhead, and provide varying degrees of security. Striking a balance between usability and security is not an easy task. If the usability aspects are neglected or sacrificed in favor of more security, the resulting solution would have a hard time being adopted by end-users. The usability is affected by factors including (1) the generality of the solution in supporting various applications, (2) the type of changes required, (3) the performance overhead introduced by the solution, and (4) how much the user experience is preserved. The security is affected by factors including (1) the attack surface of the compartmentalization mechanism, and (2) the security decisions offloaded to the user. This dissertation evaluates existing solutions based on the above factors and presents two novel compartmentalization solutions that are arguably more practical than their existing counterparts. The first solution, called FlexICon, is an attractive alternative in the design space of compartmentalization solutions on the desktop. FlexICon allows for the creation of a large number of containers with a small memory footprint and low disk overhead. This is achieved by using lightweight virtualization based on Linux namespaces. FlexICon uses two mechanisms to reduce user mistakes: (1) a trusted file dialog for selecting files and launching them in the appropriate containers, and (2) a secure URL redirection mechanism that detects the user’s intent and opens the URL in the proper container. FlexICon also provides a language to specify the access constraints that should be enforced by the various containers. The second solution, called Auto-FBI, deals with web-based attacks by creating multiple instances of the browser and providing mechanisms for switching between the browser instances. The prototype implementation for Firefox and Chrome uses system call interposition to control the browser’s network access. Auto-FBI can be ported to other platforms easily due to its simple design and the ubiquity of system call interposition methods on all major desktop platforms.
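
    As a rough illustration of the namespace-based mechanism this kind of container builds on (a sketch, not FlexICon itself), the following launches a command inside fresh Linux namespaces via the util-linux unshare tool; the particular namespace selection and the demo command are assumptions.

        # Minimal sketch: run a command in its own user, mount, PID, and network namespaces.
        import subprocess

        def run_in_container(cmd):
            """Run `cmd` inside freshly created Linux namespaces."""
            wrapper = [
                "unshare",
                "--user", "--map-root-user",   # unprivileged user namespace, root inside
                "--mount", "--pid", "--fork",  # private mounts and PID space
                "--net",                       # no network unless explicitly granted
            ]
            return subprocess.run(wrapper + cmd, check=False)

        if __name__ == "__main__":
            # Inside the new network namespace, `ip link` shows only a loopback interface.
            run_in_container(["ip", "link"])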

    Flexible Information-Flow Control

    Get PDF
    As more and more sensitive data is handled by software, its trustworthiness becomes an increasingly important concern. This thesis presents work on ensuring that information processed by computing systems is not disclosed to third parties without the user's permission; i.e. to prevent unwanted flows of information. While this problem is widely studied, proposed rigorous information-flow control approaches that enforce strong security properties like noninterference have yet to see widespread practical use. Conversely, lightweight techniques such as taint tracking are more prevalent in practice, but lack formal underpinnings, making it unclear what guarantees they provide. This thesis aims to shrink the gap between heavyweight information-flow control approaches that have been proven sound and lightweight practical techniques without formal guarantees such as taint tracking. This thesis attempts to reconcile these areas by (a) providing formal foundations to taint tracking approaches, (b) extending information-flow control techniques to more realistic languages and settings, and (c) exploring security policies and mechanisms that fall in between information-flow control and taint tracking and investigating what trade-offs they incur.
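
    To make the contrast concrete, here is a minimal sketch of explicit-flow taint tracking of the kind such work gives formal foundations to; it is illustrative only, tracks only explicit flows, and all names (Tainted, sink) are assumptions.

        # Minimal sketch: a value carries a taint bit that propagates through
        # operations, and security-sensitive sinks reject tainted input.
        class Tainted:
            def __init__(self, value, tainted=True):
                self.value = value
                self.tainted = tainted

            def __add__(self, other):
                # The result is tainted if either operand is.
                other_val = other.value if isinstance(other, Tainted) else other
                other_taint = other.tainted if isinstance(other, Tainted) else False
                return Tainted(self.value + other_val, self.tainted or other_taint)

        def sink(data):
            """A security-sensitive sink that refuses tainted data."""
            if isinstance(data, Tainted) and data.tainted:
                raise ValueError("explicit flow of tainted data into a sink")
            return data.value if isinstance(data, Tainted) else data

        user_input = Tainted("'; DROP TABLE users;--")            # from an untrusted source
        query = Tainted("SELECT * FROM t WHERE n = ", False) + user_input
        # sink(query)  # would raise: the taint propagated through concatenation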

    Evolution of security engineering artifacts: a state of the art survey

    Get PDF
    Security is an important quality aspect of modern open software systems. However, it is challenging to keep such systems secure because of evolution. Security evolution can only be managed adequately if it is considered for all artifacts throughout the software development lifecycle. This article provides a state-of-the-art survey on the evolution of security engineering artifacts. It covers the evolution of security requirements, security architectures, secure code, security tests, security models, and security risks, as well as security monitoring. For each of these artifacts, the authors give an overview of its evolution and security aspects and discuss the state of the art on its security evolution in detail. Based on this comprehensive survey, they summarize key issues and discuss directions for future research.

    Code-Injection Vulnerabilities in Web Applications: The Example of Cross-Site Scripting

    Get PDF
    The majority of all security problems in today's Web applications is caused by string-based code injection, with Cross-site Scripting (XSS) being the dominant representative of this vulnerability class. This thesis discusses XSS and suggests defense mechanisms. We do so in three stages: First, we conduct a thorough analysis of JavaScript's capabilities and explain how these capabilities are utilized in XSS attacks. We subsequently design a systematic, hierarchical classification of XSS payloads. In addition, we present a comprehensive survey of publicly documented XSS payloads which is structured according to our proposed classification scheme. Secondly, we explore defensive mechanisms which dynamically prevent the execution of some payload types without eliminating the actual vulnerability. More specifically, we discuss the design and implementation of countermeasures against the XSS payloads "Session Hijacking", "Cross-site Request Forgery", and attacks that target intranet resources. We build upon this and introduce a general methodology for developing such countermeasures: We determine a necessary set of basic capabilities an adversary needs for successfully executing an attack through an analysis of the targeted payload type. The resulting countermeasure relies on revoking one of these capabilities, which in turn renders the payload infeasible. Finally, we present two language-based approaches that prevent XSS and related vulnerabilities: We identify the implicit mixing of data and code during string-based syntax assembly as the root cause of string-based code injection attacks. Consequently, we explore data/code separation in web applications. For this purpose, we propose a novel methodology for token-level data/code partitioning of a computer language's syntactical elements. This forms the basis for our two distinct techniques: For one, we present an approach to detect data/code confusion at run time and demonstrate how this can be used for attack prevention. Furthermore, we show how vulnerabilities can be avoided by altering the underlying programming language. We introduce a dedicated datatype for syntax assembly instead of using string datatypes themselves for this purpose. We develop a formal, type-theoretical model of the proposed datatype and prove that it provides reliable separation between data and code, hence preventing code injection vulnerabilities. We verify our approach's applicability using a practical implementation for the J2EE application server.

    Cross-site Scripting (XSS) is one of the most common vulnerability types in web applications. This dissertation addresses the XSS problem holistically: Based on a systematic analysis of the causes and potential consequences of XSS, together with a comprehensive classification of documented attack types, a methodology is first presented that enables the design of dynamic countermeasures for attack containment. Using this methodology, the design and evaluation of three countermeasures for the attack subclasses "Session Hijacking", "Cross-site Request Forgery", and "attacks on the intranet" are presented. Furthermore, to address the underlying problem fundamentally, a type-based approach to the secure programming of web applications is described that guarantees reliable protection against XSS vulnerabilities.
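
    A minimal sketch of the data/code separation idea (illustrative; not the dissertation's J2EE datatype): markup is assembled from a dedicated type, and plain strings can only enter as data, where they are escaped at the boundary.

        # Minimal sketch: a dedicated markup type keeps code and data apart.
        import html

        class Markup:
            def __init__(self, code=""):
                self._code = code            # trusted syntactic content only

            def append_code(self, fragment):
                # Code may only come from another Markup value, never a raw string.
                assert isinstance(fragment, Markup)
                return Markup(self._code + fragment._code)

            def append_data(self, text):
                # Untrusted strings enter only as data and are escaped.
                return Markup(self._code + html.escape(str(text)))

            def __str__(self):
                return self._code

        user_input = "<script>alert(1)</script>"
        page = Markup("<p>").append_data(user_input).append_code(Markup("</p>"))
        print(page)   # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>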

    Evaluation with uncertainty

    Get PDF
    Experimental uncertainty arises as a consequence of (1) bias (systematic error) and (2) variance in measurements. Popular evaluation techniques only account for the variance due to sampling of experimental units, and assume the other sources of uncertainty can be ignored. For example, only the uncertainty due to sampling of topics (queries) and sampling of training/test datasets is considered in standard information retrieval (IR) and classifier system evaluation, respectively. However, incomplete relevance judgements, assessor disagreement, non-deterministic systems, and measurement bias can also cause uncertainty in these experiments. In this thesis, the impact of other sources of uncertainty on evaluating IR and classification experiments is investigated. The uncertainty due to (1) incomplete relevance judgements in IR test collections, (2) non-determinism in IR systems and classifiers, and (3) high variance of classifiers is analysed using case studies from distributed information retrieval and information security. The thesis illustrates the importance of reducing and accurately accounting for uncertainty when evaluating complex IR and classifier systems. Novel techniques are introduced to (1) reduce uncertainty due to test collection bias in IR evaluation and high classifier variance (overfitting) in detecting drive-by download attacks, (2) account for multidimensional variance due to sampling of IR system instances from non-deterministic IR systems in addition to sampling of topics, and (3) account for repeated measurements due to non-deterministic classification algorithms.
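
    A minimal sketch of accounting for repeated measurements from a non-deterministic system (not the thesis's statistical machinery): bootstrap a confidence interval for the mean score across per-run measurements; the scores below are made up for illustration.

        # Minimal sketch: percentile bootstrap over repeated runs of one system.
        import random
        import statistics

        def bootstrap_ci(scores, resamples=10_000, alpha=0.05):
            """Percentile bootstrap confidence interval for the mean of `scores`."""
            means = []
            for _ in range(resamples):
                sample = [random.choice(scores) for _ in scores]
                means.append(statistics.mean(sample))
            means.sort()
            lo = means[int((alpha / 2) * resamples)]
            hi = means[int((1 - alpha / 2) * resamples) - 1]
            return statistics.mean(scores), (lo, hi)

        # Hypothetical effectiveness scores from repeated runs of a non-deterministic system.
        runs = [0.41, 0.44, 0.39, 0.43, 0.42, 0.40, 0.45, 0.38]
        mean, (low, high) = bootstrap_ci(runs)
        print(f"mean={mean:.3f}, 95% CI=({low:.3f}, {high:.3f})")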

    A behavioural study in runtime analysis environments and drive-by download attacks

    Get PDF
    In the information age, the growth in availability of both technology and exploit kits has continuously contributed to a large volume of websites being compromised or set up with malicious intent. Drive-by-download attacks constitute a high percentage (77%) of the known attacks against client systems. These attacks originate from malicious or compromised web servers and attack client systems by pushing malware upon interaction. Within the detection and intelligence-gathering area of research, high-interaction honeypot approaches have been a longstanding and well-established technology. These are, however, not without challenges: analysing the entirety of the world wide web using these approaches is unviable due to time and resource intensiveness. Furthermore, the volume of data that is generated as a result of a run-time analysis of the interaction between a website and an analysis environment is huge, varied, and not well understood. The volume of malicious servers and the large datasets created as a result of run-time analysis are contributing factors in the difficulty of analysing and verifying actual malicious behaviour. The work in this thesis attempts to overcome the difficulties in the log file analysis process to optimise the detection of malicious and anomalous behaviour. The main contribution of this work is focused on reducing the volume of data generated from run-time analysis in order to reduce the impact of noise within behavioural log file datasets. This thesis proposes an alternative, expert-led approach to filtering benign behaviour from potentially malicious and unknown behaviour. Expert-led filtering is designed in a risk-averse manner that takes into account known benign and expected behaviours before filtering the log file. Moreover, the approach relies upon behavioural investigation as well as the potential for system compromise before filtering out behaviour within dynamic analysis log files. Consequently, this results in a significantly lower volume of data that can be analysed in greater detail. The proposed filtering approach has been implemented and tested in a real-world context using a prudent experimental framework; an average reduction of 96.96% in log file size has been achieved, which is transferable to behaviour analysis environments. The other contributions of this work include the understanding of observable operating system interactions. Within the study of behaviour analysis environments, it was concluded that run-time analysis environments are sensitive to application and operating system versions. Understanding key changes in operating system behaviour within Windows is an unexplored area of research, yet Windows is currently one of the most popular client operating systems. As part of understanding system behaviours for the creation of behavioural filters, this study undertakes a number of experiments to identify the key behaviour differences between operating systems. The results show that there are significant changes in core processes and interactions which can be taken into account in the development of filters for updated systems. Finally, from the analysis of 110,000 potentially malicious websites, typical attacks are explored. These attacks actively exploited the honeypot and offer knowledge of a section of the active web-based attacks faced on the world wide web. Trends and attack vectors are identified and evaluated.
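
    A minimal sketch of expert-led benign filtering (illustrative; not the thesis's actual filter set): drop dynamic-analysis log events that match an allowlist of known-benign operating system behaviour so that only unexplained events remain for deeper analysis. The event format and allowlist entries are assumptions.

        # Minimal sketch: filter behavioural log events against a known-benign allowlist.
        KNOWN_BENIGN = {
            ("svchost.exe", "registry_read"),
            ("explorer.exe", "file_read"),
        }

        def filter_events(events):
            """Keep only events not explained by the known-benign allowlist."""
            return [e for e in events if (e["process"], e["action"]) not in KNOWN_BENIGN]

        events = [
            {"process": "svchost.exe", "action": "registry_read", "target": "HKLM\\Software\\Example"},
            {"process": "iexplore.exe", "action": "process_create", "target": "cmd.exe"},
        ]
        suspicious = filter_events(events)
        print(f"kept {len(suspicious)}/{len(events)} events for deeper analysis")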

    On JavaScript Malware and related threats

    No full text