PhishDef: URL Names Say It All
Phishing is an increasingly sophisticated method to steal personal user
information using sites that pretend to be legitimate. In this paper, we take
the following steps to identify phishing URLs. First, we carefully select
lexical features of the URLs that are resistant to obfuscation techniques used
by attackers. Second, we evaluate the classification accuracy when using only
lexical features, both automatically and hand-selected, vs. when using
additional features. We show that lexical features are sufficient for all
practical purposes. Third, we thoroughly compare several classification
algorithms, and we propose to use an online method (AROW) that is able to
overcome noisy training data. Based on the insights gained from our analysis,
we propose PhishDef, a phishing detection system that uses only URL names and
combines the above three elements. PhishDef is a highly accurate method (when
compared to state-of-the-art approaches over real datasets), lightweight (thus
appropriate for online and client-side deployment), proactive (based on online
classification rather than blacklists), and resilient to training data
inaccuracies (thus enabling the use of large noisy training data).
Comment: 9 pages, submitted to IEEE INFOCOM 201
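The abstract's core idea, classifying URLs from lexical features alone with an online learner, can be sketched as follows. The features below (URL length, dot and hyphen counts, IP-address hosts, etc.) are illustrative examples of obfuscation-resistant lexical features, not the paper's exact feature set, and the simple perceptron update stands in for AROW, which additionally maintains per-weight confidence to tolerate noisy labels:

```python
import re
from urllib.parse import urlparse

def lexical_features(url):
    """Extract illustrative lexical features from a URL string.

    These are example features only; PhishDef's actual feature set
    is described in the paper."""
    parsed = urlparse(url if "//" in url else "//" + url)
    host = parsed.netloc or ""
    path = parsed.path or ""
    return {
        "url_len": len(url),                 # phishing URLs tend to be long
        "host_dots": host.count("."),        # many subdomains is suspicious
        "hyphens": url.count("-"),
        "digits": sum(c.isdigit() for c in url),
        "has_at": 1 if "@" in url else 0,    # "@" hides the real host
        "has_ip_host": 1 if re.fullmatch(
            r"(\d{1,3}\.){3}\d{1,3}(:\d+)?", host) else 0,
        "path_depth": path.count("/"),
    }

class OnlinePerceptron:
    """Minimal online linear classifier (perceptron update rule).

    A stand-in for AROW: AROW also keeps a confidence (covariance)
    estimate per weight, which makes it resilient to label noise."""
    def __init__(self):
        self.w = {}

    def score(self, feats):
        return sum(self.w.get(k, 0.0) * v for k, v in feats.items())

    def update(self, feats, label):
        """Online update; label is +1 (phishing) or -1 (benign)."""
        if label * self.score(feats) <= 0:   # misclassified: adjust weights
            for k, v in feats.items():
                self.w[k] = self.w.get(k, 0.0) + label * v
```

A stream of labeled URLs would be fed one at a time to `update`, and `score` used to classify new URLs, which is what makes the approach lightweight enough for client-side deployment.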
Tool to Detect Spam Websites
Traveling should be about relaxing, not worrying about being scammed and losing money. The Internet is a vast place that changes every day, and having a guide to navigating unknown corners of it goes a long way toward staying safe. Travel-website fraud is a severe and widespread problem; studies of travel scams have mainly focused on identifying the different ways attackers target innocent travelers, and as technology advances, ever newer techniques for scamming people emerge. This paper aims to illustrate techniques for detecting such frauds and preventing people from being scammed. I examine the various problems and preventive measures that need to be considered while browsing the Internet, describe the tools I used for analyzing websites, and provide an analysis of the effectiveness of several identifiers in determining whether a website is fake or genuine. The results and conclusions of this analysis can then be used to design a safety tool that keeps Internet users safer and wiser; the resulting safety rules can be used to develop browser add-ons, computer applications, etc.
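The identifier-based approach described above can be sketched as a simple rule aggregator: each identifier is a cheap check on the URL, and a site is flagged when enough of them fire. The specific checks, the TLD list, and the threshold below are illustrative assumptions, not the identifiers actually evaluated in the paper:

```python
from urllib.parse import urlparse

# Example low-reputation TLDs; an assumption for illustration only.
CHEAP_TLDS = ("top", "xyz", "click")

def check_identifiers(url):
    """Return the list of illustrative identifiers that flag this URL."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    flags = []
    if parsed.scheme != "https":
        flags.append("no_https")          # no TLS on a payment/booking site
    if "@" in url:
        flags.append("at_symbol")         # "@" can disguise the real host
    if host.count("-") >= 2:
        flags.append("many_hyphens")      # e.g. cheap-travel-deals-now.example
    if host.count(".") >= 3:
        flags.append("deep_subdomains")   # brand name buried in subdomains
    if any(host.endswith("." + tld) for tld in CHEAP_TLDS):
        flags.append("cheap_tld")
    return flags

def is_likely_fake(url, threshold=2):
    """Flag a site when two or more identifiers fire.

    The threshold of 2 is an arbitrary illustrative choice; a real
    tool would weight identifiers by their measured effectiveness."""
    return len(check_identifiers(url)) >= threshold
```

This is the shape a browser add-on built from such rules might take: the per-identifier analysis in the paper would determine which checks are worth including and how heavily to weight each one.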
Fake-Website Detection Tools: Identifying Elements that Promote Individuals’ Use and Enhance Their Performance
By successfully exploiting human vulnerabilities, fake websites have emerged as a major source of online fraud. Fake websites continue to inflict exorbitant monetary losses and also have significant ramifications for online security. We explore the process by which salient performance-related elements could increase reliance on protective tools and, thus, reduce the success rate of fake websites. We develop the theory of detection tool impact (DTI) for this investigation by borrowing from and contextualizing protection motivation theory. Based on the DTI theory, we conceptualize a model to investigate how salient performance- and cost-related elements of detection tools could influence users' perceptions of the tools and threats, their efficacy in dealing with threats, and their reliance on such tools. The research method was a controlled lab experiment with a novel and extensive experimental design and protocol in two distinct domains: online pharmacies and banks. We found that detector accuracy and speed, reflected in users' perceived response efficacy, form the pivotal coping mechanism in dealing with security threats and are major conduits for transforming salient performance-related elements into increased reliance on the detector. Furthermore, reported reliance on the detector showed a significant impact on users' performance in terms of self-protection. Therefore, users' perceived response efficacy should be used as a critical metric to evaluate the design, assess the performance, and promote the use of fake-website detectors. We also found that the cost of detector error had profound impacts on threat perceptions. We discuss the significant theoretical and empirical implications of these findings.