232 research outputs found

    Emerging Phishing Trends and Effectiveness of the Anti-Phishing Landing Page

    Each month, more attacks are launched with the aim of making web users believe that they are communicating with a trusted entity, compelling them to share their personal and financial information. Phishing costs Internet users billions of dollars every year. Researchers at Carnegie Mellon University (CMU) created an anti-phishing landing page, supported by the Anti-Phishing Working Group (APWG), with the aim of training users to protect themselves from phishing attacks. It is used by financial institutions, phishing-site take-down vendors, government organizations, and online merchants. When a potential victim clicks on a phishing link that has been taken down, he or she is redirected to the landing page. In this paper, we present a comparative analysis of two datasets obtained from the APWG landing page's log files: one from September 7, 2008 to November 11, 2009, and the other from January 1, 2014 to April 30, 2014. We found that the landing page has been successful in training users against phishing. Forty-six percent of users clicked fewer phishing URLs between January 2014 and April 2014, which indicates that training from the landing page helped users avoid falling for phishing attacks. Our analysis shows that phishers have started to modify their techniques by creating more legitimate-looking URLs and buying large numbers of domains to increase their activity. We observed that phishers are exploiting ICANN-accredited registrars to launch their attacks even under strict surveillance, and that they are trying to exploit free subdomain registration services to carry out attacks. In this paper, we also compare the phishing e-mails used to lure victims in 2008 and 2014. We found that phishing e-mails have changed considerably over time: phishers have adopted new techniques such as sending promotional e-mails and emotionally manipulating users into clicking phishing URLs.

    Categorization of Phishing Detection Features And Using the Feature Vectors to Classify Phishing Websites

    Phishing is a form of online fraud in which a spoofed website tries to gain access to a user's sensitive information by tricking the user into believing that it is a benign website. There are several approaches to detecting phishing attacks, such as educating users, using blacklists, or extracting characteristics known to exist in phishing attacks. In this thesis, we analyze approaches that extract features from phishing websites and train classification models on the extracted feature set to classify phishing websites. We create an exhaustive list of all features used in these approaches and categorize them into 6 broader categories and 33 finer categories. We extract 59 features from the URL, URL redirects, the hosting domain (WHOIS and DNS records), and the popularity of the website, and analyze their robustness in classifying a phishing website. Our emphasis is on determining the predictive performance of robust features. We evaluate classification accuracy when using the entire feature set and when URL features or site-popularity features are excluded, and show how our approach can be used to effectively predict specific types of phishing attacks such as shortened URLs and randomized URLs. Using both decision table classifiers and neural network classifiers, our results indicate that robust features have enough predictive power to be used in practice.
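
    The pipeline this thesis describes, extracting features from a URL and training a classifier on them, can be illustrated with a minimal sketch. The lexical features and toy URLs below are illustrative assumptions rather than the thesis's 59-feature set, and scikit-learn's MLPClassifier stands in for the neural network classifier.

```python
# Minimal sketch of a URL feature-extraction and classification pipeline.
# The lexical features and toy URLs below are illustrative only, not the
# thesis's 59-feature set; MLPClassifier stands in for the neural network.
from urllib.parse import urlparse

from sklearn.neural_network import MLPClassifier


def url_features(url: str) -> list[float]:
    """Extract a few simple lexical features from a URL (hypothetical subset)."""
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                               # overall URL length
        len(host),                              # hostname length
        host.count("."),                        # number of dot-separated labels
        url.count("-"),                         # hyphens are common in spoofed hosts
        float(any(c.isdigit() for c in host)),  # digits in the hostname
        float(parsed.scheme == "https"),        # whether HTTPS is used
    ]


# Tiny fabricated training set: 1 = phishing, 0 = legitimate.
samples = [
    ("http://paypa1-login.example-secure.com/verify", 1),
    ("http://account-update.192-168-check.net/bank", 1),
    ("http://free-gift.win-now-2014.info/claim", 1),
    ("https://www.wikipedia.org/", 0),
    ("https://github.com/openssl/openssl", 0),
    ("https://www.python.org/downloads/", 0),
]
X = [url_features(u) for u, _ in samples]
y = [label for _, label in samples]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([url_features("http://secure-paypal.account-verify.biz/login")]))
```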

    Pythia: a Framework for the Automated Analysis of Web Hosting Environments

    A common approach when setting up a website is to utilize third-party Web hosting and content delivery networks. Without taking this trend into account, any measurement study inspecting the deployment and operation of websites can be heavily skewed. Unfortunately, the research community lacks generalizable tools for identifying how and where a given website is hosted. Instead, a number of ad hoc techniques have emerged, e.g., using Autonomous System databases or domain prefixes in CNAME records. In this work we propose Pythia, a novel lightweight approach for identifying Web content hosted on third-party infrastructures, including both traditional Web hosts and content delivery networks. Our framework identifies the organization to which a given Web page belongs, and detects which Web servers are self-hosted and which leverage third-party services to provide content. To test our framework we ran it on 40,000 URLs and evaluated its accuracy, both by comparing the results with similar services and against a manually validated ground truth. Our tool achieves an accuracy of 90% and finds that under 11% of popular domains are self-hosted. We publicly release our tool to allow other researchers to reproduce our findings and to apply it to their own studies.
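
    One of the ad hoc techniques the abstract mentions, matching CNAME-record targets against known provider suffixes, can be sketched as follows. This is a hedged illustration assuming the third-party dnspython package; the suffix list is a small hypothetical sample, not Pythia's actual rule set.

```python
# Illustrative CNAME-based heuristic for spotting third-party hosting/CDNs.
# Requires the third-party dnspython package (pip install dnspython).
# The suffix list is a small, hypothetical sample, not Pythia's rule set.
import dns.resolver

CDN_CNAME_SUFFIXES = {
    "cloudfront.net.": "Amazon CloudFront",
    "akamaiedge.net.": "Akamai",
    "fastly.net.": "Fastly",
    "github.io.": "GitHub Pages",
}


def guess_hosting(hostname: str) -> str:
    """Return a best-effort hosting label based on the CNAME target."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return "no CNAME record (possibly self-hosted or an apex A record)"
    for rdata in answers:
        target = rdata.target.to_text().lower()
        for suffix, provider in CDN_CNAME_SUFFIXES.items():
            if target.endswith(suffix):
                return f"third-party: {provider} ({target})"
    return f"unrecognised CNAME target: {answers[0].target.to_text()}"


if __name__ == "__main__":
    print(guess_hosting("www.example.com"))
```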

    Unbiased phishing detection using domain name based features

    Internet users are coming under a barrage of phishing attacks of increasing frequency and sophistication. While these attacks have been remarkably resilient against the vast range of defenses proposed by academia, industry, and research organizations, machine learning approaches appear to be promising for distinguishing between phishing and legitimate websites. There are three main concerns with existing machine learning approaches to phishing detection. The first is that there is neither a framework, preferably open-source, for extracting features and keeping the dataset updated, nor an up-to-date dataset of phishing and legitimate websites. The second is the large number of features used and the lack of validating arguments for the choice of features selected to train the machine learning classifier. The last concern relates to the type of datasets used in the literature, which appear to be inadvertently biased with respect to features based on the URL or content. In this thesis, we describe the implementation of our open-source and extensible framework, named Fresh-Phish, for extracting features and creating an up-to-date phishing dataset. Using this framework, we implemented 29 different features to detect whether a given website is legitimate or phishing: 26 features reported in related work plus 3 new ones. We created a dataset of 6,000 websites with these features, of which 3,000 were malicious and 3,000 were genuine, and tested our approach. Using 6 different classifiers, we achieved an accuracy of 93%, which is reasonably high for this field. To address the second and third concerns, we put forward the intuition that the domain name of a phishing website is the tell-tale sign of phishing and holds the key to successful phishing detection. We focus on this aspect of phishing websites and design features that explore the relationship of the domain name to the key elements of the website. Our work differs from the existing state of the art in that our feature set ensures there is minimal or no bias with respect to a dataset. Our learning model trains with only seven features and achieves a true positive rate of 98% and a classification accuracy of 97% on a sample dataset. Compared to the state-of-the-art work, our per-instance processing and classification is 4 times faster for legitimate websites and 10 times faster for phishing websites. Importantly, we demonstrate the shortcomings of using features based on URLs, as they are likely to be biased by dataset collection and usage. We show the robustness of our learning algorithm by testing our classifiers on unknown live phishing URLs, achieving a detection accuracy of 99.7% compared to the earlier best known result of a 95% detection rate.
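
    As a rough illustration of domain-name-centric features, the sketch below relates a URL's registered domain to elements of the page (title text, anchor targets). The three features shown are plausible examples of that intuition, not the thesis's actual seven-feature set, and the crude domain parsing is an assumption for brevity.

```python
# Hedged sketch of domain-name-centric phishing features.
# The three features shown are illustrative guesses at how a registered
# domain can be related to page elements; they are not the thesis's exact set.
import re
from urllib.parse import urlparse


def registered_domain(url: str) -> str:
    """Crude registered-domain extraction (a library such as tldextract is more robust)."""
    host = urlparse(url).netloc.split(":")[0]
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host


def domain_features(url: str, html: str) -> dict[str, float]:
    domain = registered_domain(url)
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    anchors = re.findall(r'href=["\'](.*?)["\']', html, re.I)
    external = [a for a in anchors if a.startswith("http") and domain not in a]
    return {
        # Does the second-level domain appear in the page title?
        "domain_in_title": float(domain.split(".")[0] in (title.group(1).lower() if title else "")),
        # Share of anchors pointing away from the registered domain.
        "external_anchor_ratio": len(external) / len(anchors) if anchors else 0.0,
        "domain_length": float(len(domain)),
    }


html = '<html><title>Example Bank Login</title><a href="http://evil.example.net/x">continue</a></html>'
print(domain_features("http://examplebank.com/login", html))
```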

    Detection of suspicious URLs in online social networks using supervised machine learning algorithms

    This thesis proposes several supervised machine learning classification models built to detect the distribution of malicious content in OSNs. The main focus was on ensemble learning algorithms such as Random Forest, gradient boosting trees, extra trees, and XGBoost. Features used to identify social network posts containing malicious URLs were derived from several sources, such as domain WHOIS records, web page content, URL lexical and redirection data, and Twitter metadata. The thesis describes a systematic analysis of the hyper-parameters of tree-based models: the impact on classification performance of key parameters, such as the number of trees, the depth of trees, and the minimum size of leaf nodes, was assessed. The results show that controlling the complexity of Random Forest classifiers applied to social media spam is essential to avoid overfitting and optimise performance. Model complexity could be reduced by removing uninformative features, as the complexity they add to the model outweighs the benefit they give the model in making decisions. Moreover, model-combining methods, namely voting and stacking, were tested. Both show advantages and disadvantages; however, in general they appear to provide a statistically significant improvement over the best single model. The critical benefit of using the stacking method to automate model selection is that it effectively gives more weight to top-performing models and is less affected by weak ones. Finally, 'SuspectRate', an online malicious URL detection system, was built to offer a service that gives a suspiciousness probability for tweets with attached URLs. A key feature of this system is that it can dynamically retrain and expand its current models.
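
    The hyper-parameter analysis and model-combining methods described above can be sketched with scikit-learn on synthetic data. The parameter grid, base models, and data below are illustrative assumptions, not the thesis's actual settings.

```python
# Sketch of tree-ensemble hyper-parameter search plus stacking, on synthetic data.
# Grids, models, and data are illustrative; the thesis's own settings differ.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Control model complexity: number of trees, depth, and minimum leaf size.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={
        "n_estimators": [50, 200],
        "max_depth": [5, None],
        "min_samples_leaf": [1, 10],
    },
    cv=3,
)
grid.fit(X_tr, y_tr)
print("best RF params:", grid.best_params_)

# Stacking: a meta-learner weighs the base models' out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("rf", grid.best_estimator_),
                ("gbt", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```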

    Feature-Rich Models and Feature Reduction for Malicious URLs Classification and Prediction

    Malicious websites are a foundation of criminal activity on the Internet. Their links enable attackers to gain partial or full control of a machine, leaving victim systems easily infected and allowing attackers to use them for a range of cyber-crimes such as credential theft, spamming, phishing, denial-of-service, and many other attacks. Therefore, the methods and techniques for detecting such crimes should be fast and precise, with the additional capability of detecting new malicious websites or content. This paper introduces an automatic tool to extract 110 significant features from a URL. Additionally, this paper proposes various aspects of the URL (Uniform Resource Locator) classification process, which recognizes whether a target website is malicious or benign. Standard datasets from diverse sources are used for training. The rising problems of spamming, phishing, and malware have created a need for a solid framework that can analyze the extracted features, classify them, and further recognize malicious URLs.
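
    Since the paper's title also mentions feature reduction, a minimal sketch of that step is shown below, assuming a simulated 110-feature matrix and scikit-learn's mutual-information selector; the paper's own extraction tool and reduction method may differ.

```python
# Sketch of feature reduction for a malicious-URL classifier on synthetic data.
# The 110-feature matrix is simulated; only the reduction step is illustrated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Simulate a dataset with 110 extracted URL features, most of them uninformative.
X, y = make_classification(n_samples=1000, n_features=110, n_informative=15,
                           random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_reduced = selector.fit_transform(X, y)

kept = np.flatnonzero(selector.get_support())
print(f"kept {X_reduced.shape[1]} of {X.shape[1]} features; first indices: {kept[:10]}")
```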

    FNDaaS: Content-agnostic Detection of Fake News sites

    Automatic fake news detection is a challenging problem in the study of misinformation spreading, with tremendous real-world political and social impact. Past studies have proposed machine learning-based methods for detecting fake news, focusing on different properties of the published news articles, such as linguistic characteristics of the actual content, which however have limitations due to language barriers. Departing from such efforts, we propose FNDaaS, the first automatic, content-agnostic fake news detection method, which considers new and previously unstudied features such as network and structural characteristics per news website. The method can be deployed as a service, either at the ISP side for easier scalability and maintenance, or at the user side for better end-user privacy. We demonstrate its efficacy using data crawled from existing lists of 637 fake and 1183 real news websites, and by building and testing a proof-of-concept system that materializes our proposal. Our analysis of the data collected from these websites shows that the vast majority of fake news domains are very young and tend to have their IP addresses associated with their domains for shorter periods than real news sites. Through experiments with various machine learning classifiers, we show that FNDaaS can achieve an AUC score of up to 0.967 on sites flagged in the past, and 77-92% accuracy on newly flagged ones.
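
    One of the content-agnostic signals described above, domain age, can be estimated from WHOIS records. The sketch below assumes the third-party python-whois package and best-effort handling of registrar-dependent field formats; it is not FNDaaS's actual implementation.

```python
# Hedged sketch: estimating domain age from WHOIS data, one kind of
# content-agnostic signal the abstract describes. Requires the third-party
# python-whois package (pip install python-whois); WHOIS field formats vary
# by registrar, so the handling below is a best-effort assumption.
from datetime import datetime, timezone

import whois  # provided by the python-whois package


def domain_age_days(domain: str):
    """Return the approximate age of a domain in days, or None if unknown."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).total_seconds() / 86400


if __name__ == "__main__":
    print("example.com age (days):", domain_age_days("example.com"))
```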