1,224 research outputs found

    DeltaPhish: Detecting Phishing Webpages in Compromised Websites

    The large-scale deployment of modern phishing attacks relies on the automatic exploitation of vulnerable websites in the wild, to maximize profit while hindering attack traceability, detection, and blacklisting. To the best of our knowledge, this is the first work that specifically leverages this adversarial behavior for detection purposes. We show that phishing webpages can be accurately detected by highlighting HTML code and visual differences with respect to other (legitimate) pages hosted within a compromised website. Our system, named DeltaPhish, can be installed as part of a web application firewall to detect the presence of anomalous content on a website after compromise, and eventually prevent access to it. DeltaPhish is also robust against adversarial attempts in which the HTML code of the phishing page is carefully manipulated to evade detection. We empirically evaluate it on more than 5,500 webpages collected in the wild from compromised websites, showing that it detects more than 99% of phishing webpages while misclassifying less than 1% of legitimate pages. We further show that the detection rate remains higher than 70% even under very sophisticated attacks carefully designed to evade our system.

    Comment: Preprint version of the work accepted at ESORICS 201
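The core idea of comparing a suspect page against legitimate pages on the same host can be illustrated with a toy structural feature. The sketch below is not DeltaPhish's actual feature set (the paper combines HTML and visual features with a trained classifier); it only shows one plausible HTML-difference signal, a cosine similarity between tag histograms, on hypothetical page snippets:

```python
import re
from collections import Counter

def tag_histogram(html: str) -> Counter:
    """Count opening HTML tag names as a crude structural signature of a page."""
    return Counter(re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9]*)", html))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity of two tag histograms: 1.0 = identical structure."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical pages: a legitimate page on the host vs. an injected login form.
home = "<html><body><h1>Shop</h1><p>Welcome</p><p>News</p></body></html>"
candidate = "<html><body><form><input><input><button>Login</button></form></body></html>"

# Low similarity to the site's own pages is a phishing indicator.
print(cosine_similarity(tag_histogram(home), tag_histogram(candidate)))
```

In a real system such structural distances would be one feature among many fed to a classifier, alongside visual (screenshot-based) differences.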

    Artificial intelligence in the cyber domain: Offense and defense

    Artificial intelligence techniques have grown rapidly in recent years, and their applications in practice can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools, but they can also help adversaries improve their methods of attack. Malicious actors are aware of these new prospects and will probably attempt to use them for nefarious purposes. This survey paper provides an overview of how artificial intelligence can be used in the context of cybersecurity in both offense and defense.

    A Survey on Phishing Website Detection Using Hadoop

    Phishing is an activity carried out by phishers with the aim of stealing internet users' personal data, such as user IDs, passwords, and banking accounts; that data is then used for the phishers' own interests. The average internet user is easily trapped by phishers because of the similarity between the websites they visit and the original websites. Because there are several attributes that must be considered, most internet users find it difficult to distinguish an authentic website from a fake one. There are many ways to detect a phishing website, but existing phishing-detection systems are too time-consuming and depend heavily on their databases. In this research, Hadoop MapReduce is used to quickly retrieve the attributes of a website that play an important role in identifying a phishing website, and then to inform users whether the website is a phishing website or not.
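The abstract does not list the attributes it extracts, so the sketch below uses hypothetical URL-based attributes that commonly appear in phishing-detection literature (IP address as hostname, "@" in the URL, unusual length, missing HTTPS), arranged in a map/reduce shape in plain Python rather than on an actual Hadoop cluster:

```python
def extract_attributes(url: str) -> dict:
    """Hypothetical URL-based phishing indicators (not the paper's feature set)."""
    host = url.split("//")[-1].split("/")[0]
    return {
        "has_at_symbol": "@" in url,
        "has_ip_host": host.replace(".", "").isdigit(),  # e.g. 192.168.0.1
        "is_long": len(url) > 75,
        "no_https": not url.startswith("https://"),
    }

def map_phase(urls):
    """Map: emit (url, suspicious-attribute count) pairs; each URL is independent,
    so this step parallelizes naturally across MapReduce workers."""
    for url in urls:
        yield url, sum(extract_attributes(url).values())

def reduce_phase(pairs, threshold=2):
    """Reduce: flag URLs whose suspicious-attribute count reaches the threshold."""
    return {url: ("phishing?" if score >= threshold else "likely benign")
            for url, score in pairs}

urls = ["http://192.168.0.1/login", "https://example.com/"]
print(reduce_phase(map_phase(urls)))
```

On real Hadoop, `map_phase` and `reduce_phase` would be the mapper and reducer tasks, with the framework handling distribution and shuffling between them.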

    Detecting Abnormal Behavior in Web Applications

    The rapid advance of web technologies has made the Web an essential part of our daily lives. However, network attacks have exploited vulnerabilities of web applications and caused substantial damage to Internet users. Detecting network attacks is the first and most important step in network security, and a major branch of this area is anomaly detection. This dissertation concentrates on detecting abnormal behaviors in web applications by employing the following methodology. For a web application, we conduct a set of measurements to reveal the existence of abnormal behaviors in it. We observe the differences between normal and abnormal behaviors. By applying a variety of methods in information extraction, such as heuristic algorithms, machine learning, and information theory, we extract features useful for building a classification system to detect abnormal behaviors.

    In particular, we have studied four detection problems in web security. The first is detecting unauthorized hotlinking behavior that plagues hosting servers on the Internet. We analyze a group of common hotlinking attacks and the web resources targeted by them, then present an anti-hotlinking framework for protecting materials on hosting servers. The second problem is detecting aggressive automation on Twitter. Our work determines whether a Twitter user is a human, bot, or cyborg based on the degree of automation. We observe the differences among the three categories in terms of tweeting behavior, tweet content, and account properties, and propose a classification system that uses a combination of features extracted from an unknown user to determine the likelihood of its being a human, bot, or cyborg. Furthermore, we shift the detection perspective from automation to spam and introduce the third problem, detecting social spam campaigns on Twitter. Evolved from individual spammers, spam campaigns manipulate and coordinate multiple accounts to spread spam on Twitter and display some collective characteristics. We design an automatic classification system based on machine learning and apply multiple features to classifying spam campaigns. Complementary to conventional spam detection methods, our work brings efficiency and robustness. Finally, we extend our detection research into the blogosphere to capture blog bots. In this problem, detecting the human presence is an effective defense against the automatic posting ability of blog bots. We introduce behavioral biometrics, mainly mouse and keyboard dynamics, to distinguish between human and bot. By passively monitoring user browsing activities, this detection method does not require any direct user participation and improves the user experience.
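One intuition behind the human/bot/cyborg distinction is that scheduled automation posts at suspiciously regular intervals. The dissertation's actual classifier uses many features; the sketch below illustrates just one hypothetical timing feature, the standard deviation of inter-post intervals, on made-up timestamp data:

```python
import statistics

def interval_regularity(timestamps):
    """Std. dev. of inter-post intervals (seconds); a value near zero
    suggests scheduled automation, while human posting is bursty."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(intervals)

bot_like = [0, 3600, 7200, 10800, 14400]    # posts exactly every hour
human_like = [0, 120, 4300, 4400, 50000]    # irregular, bursty posting

print(interval_regularity(bot_like))    # 0.0 — perfectly periodic
print(interval_regularity(human_like))  # large — high variability
```

In a full system this value would be one entry in a feature vector, alongside content and account-property features, fed to a trained classifier.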

    Fake-Website Detection Tools: Identifying Elements that Promote Individuals’ Use and Enhance Their Performance

    By successfully exploiting human vulnerabilities, fake websites have emerged as a major source of online fraud. Fake websites continue to inflict exorbitant monetary losses and also have significant ramifications for online security. We explore the process by which salient performance-related elements could increase reliance on protective tools and, thus, reduce the success rate of fake websites. We develop the theory of detection tool impact (DTI) for this investigation by borrowing and contextualizing protection motivation theory. Based on the DTI theory, we conceptualize a model to investigate how salient performance- and cost-related elements of detection tools could influence users' perceptions of the tools and threats, efficacy in dealing with threats, and reliance on such tools. The research method was a controlled lab experiment with a novel and extensive experimental design and protocol in two distinct domains: online pharmacies and banks. We found that detector accuracy and speed, reflected in users' perceived response efficacy, form the pivotal coping mechanism in dealing with security threats and are major conduits for transforming salient performance-related elements into increased reliance on the detector. Furthermore, reported reliance on the detector showed a significant impact on users' performance in terms of self-protection. Therefore, users' perceived response efficacy should be used as a critical metric to evaluate the design, assess the performance, and promote the use of fake-website detectors. We also found that the cost of detector error had profound impacts on threat perceptions. We discuss the significant theoretical and empirical implications of the findings.

    Encountering social engineering activities with a novel honeypot mechanism

    Communication and business are now largely conducted through information and communication technology (ICT). As computer network security challenges have become increasingly significant, the world faces a new era of crimes that can be committed easily, quickly, and, above all, anonymously. Because system penetration depends primarily on human psychology and awareness, 80% of network cyberattacks use some form of social engineering tactics to deceive the target, putting systems at risk regardless of the security system's robustness. This study highlights the significance of technological solutions in making users safer and more secure. This paper proposes a novel approach to detecting and preventing social engineering attacks that combines multiple security systems and utilizes the concept of honeypots to provide an automated prevention mechanism employing artificial intelligence (AI). The study aims to merge AI and honeypots with an intrusion prevention system (IPS) to detect social engineering attacks, deter the attacker, and restrict the attacker's session to keep users away from these manipulation tactics.
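At its simplest, a honeypot is a decoy service that accepts connections no legitimate user should make and records the peers for downstream action (e.g. handing addresses to an IPS blocklist). The sketch below is not the paper's AI-driven system, only a minimal illustrative decoy listener with a fake SSH banner; the port and banner string are arbitrary choices:

```python
import socket

def run_honeypot(host="127.0.0.1", port=2222, max_conns=1):
    """Minimal decoy service: accept connections, record each peer address
    (e.g. for an IPS blocklist), and present a fake SSH banner."""
    seen = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                seen.append(addr)                         # log the attacker's endpoint
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # decoy banner
    return seen

# run_honeypot() blocks until max_conns clients have connected, so it is
# typically started in a background thread alongside the real services.
```

A production honeypot would additionally capture and analyze the attacker's input, which is where the paper's AI component for recognizing social engineering tactics would sit.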