
    Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses

    As the convergence between our physical and digital worlds continues at a rapid pace, securing our digital information is vital to our prosperity. Most computer systems today are unwittingly helpful to attackers through their predictable responses. Deception plays a prominent role in everyday security, and digital security is no different: the use of deception has been a cornerstone technique in many successful computer breaches, with phishing, social engineering, and drive-by downloads among the prime examples. The work in this dissertation is structured to enhance the security of computer systems by means of deception and deceit.
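
    The dissertation's own taxonomy and techniques are not reproduced here, but a minimal sketch can illustrate the "predictable responses" problem it names. The Python below (the banner strings and class name are invented for illustration) answers HTTP probes with a rotating decoy Server banner, so a fingerprinting tool receives deceptive rather than predictable information.

```python
# Minimal sketch of deception-by-unpredictable-response; decoy banners are invented.
import http.server
import random

DECOY_BANNERS = ["Apache/2.2.3 (CentOS)", "Microsoft-IIS/6.0", "nginx/0.7.67"]

class DeceptiveHandler(http.server.BaseHTTPRequestHandler):
    def version_string(self):
        # Used by send_response() for the "Server:" header; a different decoy
        # per response denies scanners the predictable banner they expect.
        return random.choice(DECOY_BANNERS)

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK\n")

if __name__ == "__main__":
    http.server.HTTPServer(("127.0.0.1", 8080), DeceptiveHandler).serve_forever()
```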

    D2WFP: A Novel Protocol for Forensically Identifying, Extracting, and Analysing Deep and Dark Web Browsing Activities

    The use of the un-indexed web, commonly known as the deep web and dark web, to commit or facilitate criminal activity has drastically increased over the past decade. The dark web is an infamously dangerous place where all kinds of criminal activities take place [1-2]. Despite advances in web forensics techniques, tools, and methodologies, few studies have formally tackled deep and dark web forensics or the technical differences they entail for investigative techniques and for artefact identification and extraction. This research proposes a novel and comprehensive protocol, D2WFP, to guide and assist digital forensics professionals in investigating crimes committed on or via the deep and dark web. D2WFP establishes a new sequential approach to investigative activities that observes the order of volatility and applies a systematic approach covering all browsing-related hives and artefacts, which ultimately improves accuracy and effectiveness. Rigorous quantitative and qualitative research was conducted by assessing D2WFP, following a scientifically sound and comprehensive process, across different scenarios; the results show a clear increase in the number of artefacts recovered when adopting D2WFP, outperforming current industry and open-source browsing forensics tools. The second contribution of D2WFP is its robust formulation of artefact correlation and cross-validation, which enables digital forensics professionals to better document and structure their analysis of host-based deep and dark web browsing artefacts.
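
    The concrete steps of D2WFP are defined in the paper itself; the sketch below only illustrates the general idea of a sequential acquisition plan ordered by volatility, most volatile first, spanning browsing-related sources. The source names and collection actions are assumptions for illustration, not the protocol's actual step list.

```python
# Illustrative only: sequencing forensic acquisition by order of volatility.
# The sources and actions listed are assumed examples, not D2WFP's real steps.
from collections import namedtuple

Step = namedtuple("Step", ["volatility", "source", "action"])

PLAN = sorted([
    Step(1, "RAM capture",       "image physical memory (browser/Tor processes)"),
    Step(2, "pagefile / swap",   "copy pagefile.sys and swap space"),
    Step(3, "registry hives",    "export NTUSER.DAT and browsing-related hives"),
    Step(4, "browser artefacts", "collect profiles, SQLite databases, caches"),
    Step(5, "disk image",        "acquire full non-volatile disk image"),
], key=lambda s: s.volatility)  # enforce most-volatile-first ordering

def run_protocol(plan):
    """Walk the acquisition steps strictly in volatility order, logging each."""
    for step in plan:
        print(f"[{step.volatility}] {step.source}: {step.action}")

run_protocol(PLAN)
```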

    Addressing the new generation of spam (Spam 2.0) through Web usage models

    New Internet collaborative media introduce new ways of communicating that are not immune to abuse. A fake eye-catching profile on a social networking website, a promotional review, a response to a forum thread containing unsolicited content, or a manipulated Wiki page are examples of the new generation of spam on the web, referred to as Web 2.0 Spam or Spam 2.0. Spam 2.0 is defined as the propagation of unsolicited, anonymous, mass content to infiltrate legitimate Web 2.0 applications. The current literature does not address Spam 2.0 in depth, and the outcome of efforts to date is inadequate. The aim of this research is to formalise a definition of Spam 2.0 and to provide Spam 2.0 filtering solutions. Early detection, extendibility, robustness, and adaptability are key factors in the design of the proposed method. This dissertation provides a comprehensive survey of state-of-the-art web spam and Spam 2.0 filtering methods, highlighting unresolved issues and open problems while effectively capturing the knowledge in the domain of spam filtering. It then proposes three solutions in the area of Spam 2.0 filtering: (1) characterising and profiling Spam 2.0, (2) an Early-Detection-based Spam 2.0 Filtering (EDSF) approach, and (3) an On-the-Fly Spam 2.0 Filtering (OFSF) approach. All the proposed solutions are tested against real-world datasets and their performance is compared with that of existing Spam 2.0 filtering methods. This work has coined the term ‘Spam 2.0’, provided insight into the nature of Spam 2.0, and proposed filtering mechanisms to address this new and rapidly evolving problem.
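
    A minimal sketch of the usage-model idea follows; it is not the dissertation's EDSF or OFSF method. It merely illustrates filtering Spam 2.0 from how a client navigates (usage features) rather than from what it posts (content). The feature names and threshold rule are invented for the example; the actual approaches are built from real web usage data.

```python
# Toy illustration of usage-based Spam 2.0 filtering; features and thresholds
# are invented, not taken from the EDSF/OFSF methods described in the abstract.
from dataclasses import dataclass

@dataclass
class Session:
    requests_per_minute: float   # navigation speed
    form_fill_seconds: float     # time spent before submitting a form
    pages_before_post: int       # pages browsed before first content submission

def looks_like_spam_bot(s: Session) -> bool:
    # Automated spam tools typically navigate far faster than humans,
    # submit forms almost instantly, and post without browsing first.
    return (s.requests_per_minute > 60
            or s.form_fill_seconds < 2.0
            or s.pages_before_post == 0)

print(looks_like_spam_bot(Session(120.0, 0.4, 0)))  # True: bot-like usage
print(looks_like_spam_bot(Session(6.0, 45.0, 3)))   # False: human-like usage
```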

    Analysing web-based malware behaviour through client honeypots

    With the increase in the use of the internet, there has been a rise in the number of attacks on servers. These attacks can be successfully defended against using security technologies such as firewalls, IDS, and anti-virus software, so attackers have developed new methods to spread their malicious code through web pages, which can affect many more victims than the traditional approach. Attackers now use these websites to target users without the user’s knowledge or permission. The defence against such websites is less effective than traditional security products, giving attackers the advantage of being able to reach a greater number of users. Malicious web pages attack users through their web browsers, and the attack can occur even if the user merely visits the web page; this type of attack is called a drive-by download attack. This dissertation explores how web-based attacks work and how users can be protected from them based on the behaviour of the remote web server. We propose a system based on client honeypot technology. The client honeypot can scan malicious web pages based on their behaviour and can therefore work as an anomaly detection system. The proposed system has three main models: a state machine model, a clustering model, and a prediction model. These three models work together to protect users from both known and unknown web-based attacks. This research demonstrates the challenges faced by end users and how easily an attacker can target systems using drive-by download attacks. We discuss how the proposed system works and the research challenges we are trying to solve, such as how to group web-based attacks into behaviour groups, how to defeat the obfuscation attempts used by attackers, and how to predict, in real time, the future malicious behaviour of a given web-based attack from its observed behaviour. Finally, we demonstrate how the proposed system works by implementing a prototype application and conducting a number of experiments showing how we were able to model, cluster, and predict web-based attacks based on their behaviour. The experiment data was collected randomly from online blacklist websites.
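
    As a rough illustration of the state machine model only (the dissertation's clustering and prediction models are omitted, and its actual state machine is more elaborate), the sketch below replays a visit's observed events against a small benign-transition model and flags anything outside it, in the spirit of anomaly detection. The event names and transitions are assumptions for illustration.

```python
# Toy behaviour state machine: event names and the benign-transition set are
# invented for illustration; a benign page visit should never move from
# rendering a page to spawning a process or writing an executable.
BENIGN_TRANSITIONS = {
    ("start", "http_request"),
    ("http_request", "render_page"),
    ("render_page", "http_request"),   # following links is expected behaviour
    ("render_page", "end"),
}

def classify_visit(events):
    """Replay a visit's event sequence; any transition outside the benign
    model is treated as anomalous, drive-by-download-like behaviour."""
    state = "start"
    for event in events:
        if (state, event) not in BENIGN_TRANSITIONS:
            return f"anomalous transition: {state} -> {event}"
        state = event
    return "benign"

print(classify_visit(["http_request", "render_page", "end"]))
print(classify_visit(["http_request", "render_page", "spawn_process"]))
```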

    The Proceedings of 15th Australian Information Security Management Conference, 5-6 December, 2017, Edith Cowan University, Perth, Australia

    Conference Foreword: The annual Security Congress, run by the Security Research Institute at Edith Cowan University, includes the Australian Information Security and Management Conference. Now in its fifteenth year, the conference remains popular for its diverse content and its mixture of technical research and discussion papers. The area of information security and management continues to be varied, as reflected by the wide variety of subject matter covered by this year's papers, ranging from vulnerabilities in “Internet of Things” protocols to improvements in biometric identification algorithms and surveillance camera weaknesses. The conference has drawn interest and papers from within Australia and internationally. All submitted papers were subject to a double-blind peer review process. Twenty-two papers were submitted from Australia and overseas, of which eighteen were accepted for final presentation and publication. We wish to thank the reviewers for kindly volunteering their time and expertise in support of this event. We would also like to thank the conference committee, who have organised yet another successful congress. Events such as this are impossible without the tireless efforts of such people in reviewing and editing the conference papers and in assisting with the planning, organisation, and execution of the conference. To our sponsors, a vote of thanks for both the financial and moral support provided to the conference. Finally, thank you to the administrative and technical staff, and the students of the ECU Security Research Institute, for their contributions to the running of the conference.