
    Web Tracking: Mechanisms, Implications, and Defenses

    This article surveys the existing literature on the methods currently used by web services to track users online, as well as their purposes, implications, and possible user defenses. A significant majority of the reviewed articles and web resources are from the years 2012-2014. Privacy seems to be the Achilles' heel of today's web. Web services make continuous efforts to obtain as much information as they can about the things we search for, the sites we visit, the people we contact, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, or yet other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly varied and creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users (price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft). For each of the tracking methods, we present possible defenses. Apart from describing the methods and tools used to keep personal data from being tracked, we also present several tools that were used for research purposes; their main goal is to discover how and by which entities users are tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way. Finally, we present currently proposed future approaches to user tracking and show that they can potentially pose significant threats to users' privacy. Comment: 29 pages, 212 references
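Browser fingerprinting, one of the method groups mentioned above, derives a stable identifier from attributes the browser exposes, so a user can be recognized without any client-side storage. A minimal sketch, assuming an illustrative attribute set (real trackers combine dozens of signals such as canvas rendering, installed fonts, and the audio stack):

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Derive a stable identifier by hashing browser-exposed attributes.

    The attribute names here are illustrative, not a specific
    tracker's actual feature set.
    """
    # Sort keys so the same attribute set always hashes to the same value.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0",
    "screen": "1920x1080x24",
    "timezone": "Europe/Warsaw",
    "language": "pl-PL",
}
fp = fingerprint(browser)
# The same attributes always yield the same identifier, so the user is
# re-identifiable across visits even with cookies cleared.
```

Note that changing any single attribute (for example the timezone) changes the identifier, which is why defenses often focus on reducing or randomizing the exposed attribute surface.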

    Tapjacking Threats and Mitigation Techniques for Android Applications

    With the increased dependency on web applications accessed through mobile devices, malicious attack techniques have shifted from traditional web applications running on desktops or laptops (allowing mouse-click-based interactions) to mobile applications running on mobile devices (allowing touch-based interactions). Clickjacking is a type of malicious attack originating in web applications, where victims are lured into clicking on seemingly benign objects in web pages. When clicked, however, unintended actions are performed without the user's knowledge. In particular, users are lured into touching an object of an application, triggering actions they never intended. This form of clickjacking on mobile devices is called tapjacking. Little research has thoroughly investigated tapjacking attacks and their mitigation on mobile devices. In this thesis, we identify coding practices that can help software practitioners avoid such attacks and define a detection technique to protect end users from their consequences. We first establish where the tapjacking attack type falls within the broader literature on malware, in particular Android malware, and propose a classification of Android malware. We then propose a novel technique based on Kullback-Leibler Divergence (KLD) to identify possible tapjacking behavior in applications, and validate the approach with a set of benign and malicious Android applications. We also implemented a prototype tool for detecting tapjacking attack symptoms using the KLD-based measurement. The evaluation results show that tapjacking can be detected effectively with KLD.
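The KLD-based detection idea can be sketched as follows: treat an app's observed behavior (for example, frequencies of UI event categories) as a probability distribution and measure its divergence from a benign baseline; a large divergence flags possible tapjacking. The event categories and threshold below are illustrative assumptions, not the thesis's actual measurement:

```python
import math

def kld(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as equal-length lists of probabilities.
    eps guards against zero probabilities in q."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

# Illustrative event-frequency profiles over three event categories:
# [ordinary touches, overlay-window events, background input reads]
benign_baseline = normalize([95, 4, 1])
suspect_app     = normalize([40, 35, 25])

THRESHOLD = 0.5  # illustrative; a real tool would calibrate on labeled apps
score = kld(suspect_app, benign_baseline)
flagged = score > THRESHOLD
```

Since KLD is zero only when the two distributions coincide, a benign app tracking the baseline scores near zero, while an app with an unusual share of overlay events diverges sharply.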

    Using Hover to Compromise the Confidentiality of User Input on Android

    We show that the new hover (floating touch) technology, available in a number of today's smartphone models, can be abused by any Android application running with the common SYSTEM_ALERT_WINDOW permission to record all touchscreen input into other applications. Leveraging this attack, a malicious application running on the system can profile the user's behavior, capture sensitive input such as passwords and PINs, and record all of the user's social interactions. To evaluate our attack we implemented Hoover, a proof-of-concept malicious application that runs in the background and records all input to foreground applications. We evaluated Hoover with 40 users, across two different Android devices and two input methods, stylus and finger. In the case of touchscreen input by finger, Hoover estimated the positions of users' clicks within an error of 100 pixels and keyboard input with an accuracy of 79%. Hoover captured users' input by stylus even more accurately, estimating users' clicks within 2 pixels and keyboard input with an accuracy of 98%. We discuss ways of mitigating this attack and show that this cannot be done simply by restricting access to permissions or imposing additional cognitive load on users, since this would significantly constrain the intended use of the hover technology. Comment: 11 pages
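The core inference can be sketched as follows: hover events reveal the finger's position just before and after a tap, so the tap location can be estimated from the hover sample nearest in time to the touch. The event format and nearest-sample heuristic here are assumptions for illustration, not the paper's actual estimator:

```python
def estimate_click(hover_events, touch_time):
    """Estimate an obscured tap position from hover samples.

    hover_events: list of (timestamp_ms, x, y) tuples observed by the
    overlay; touch_time: timestamp of the tap, whose coordinates the
    overlay cannot observe directly.
    """
    # Pick the hover sample closest in time to the tap: the finger
    # hovers over the target immediately before touching it.
    nearest = min(hover_events, key=lambda e: abs(e[0] - touch_time))
    return nearest[1], nearest[2]

# Illustrative trace: the finger approaches a key near (210, 540),
# taps at t=150, then moves away.
trace = [(100, 180, 500), (120, 200, 530), (140, 209, 539), (200, 260, 600)]
x, y = estimate_click(trace, touch_time=150)
```

The accuracy gap the paper reports between stylus and finger fits this picture: a stylus produces a tighter, more precise hover trace, so the nearest sample lies closer to the true tap position.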

    Real time detection of malicious webpages using machine learning techniques

    In today's Internet, online content and especially webpages have increased exponentially, and the number of users has also grown considerably over the past two decades. Responsible institutions such as banks and governments follow specific rules and regulations regarding conduct and security, but most websites are designed and developed with few restrictions in these areas. That is why it is important to protect users from harmful webpages. Previous research has looked at detecting harmful webpages by running machine learning models on a remote website. The problem with this approach is that detection is slow because of the need to handle a large number of webpages. There is a gap in knowledge regarding which machine learning algorithms are capable of detecting harmful web applications in real time on a local machine. The conventional method of detecting malicious webpages is to go through a blacklist and check whether a webpage is listed. A blacklist is a list of webpages classified as malicious from a user's point of view. These blacklists are created by trusted organisations and volunteers, and are then used by modern web browsers such as Chrome, Firefox, and Internet Explorer. However, blacklists are ineffective because of the frequently changing nature of webpages, the growing number of webpages, which poses scalability issues, and crawlers' inability to visit intranet webpages that require operators to log in as authenticated users. The thesis proposes to use various machine learning algorithms, both supervised and unsupervised, to categorise webpages based on parsing features such as content (which played the most important role in this thesis), URL information, URL links, and screenshots of webpages.
The features were then converted to a format understandable by machine learning algorithms, which analysed them to make one important decision: whether a given webpage is malicious or not, using commonly available software and hardware. Prototype tools were developed to compare and analyse the efficiency of these machine learning techniques. The supervised algorithms include Support Vector Machine, Naïve Bayes, Random Forest, Linear Discriminant Analysis, Quadratic Discriminant Analysis, and Decision Tree; the unsupervised techniques are Self-Organising Map, Affinity Propagation, and K-Means. Self-Organising Map was used instead of Neural Networks, and the research suggests that a newer form of neural network, i.e. deep learning, would be a promising direction for this work. The supervised algorithms performed better than the unsupervised ones, and the best of all these techniques is SVM, which achieves 98% accuracy. The result was validated by a Chrome extension that used the classifier in real time. The unsupervised algorithms came close to the supervised ones, which is surprising given that they do not have access to class information beforehand.
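The pipeline described above (parse features from the page, convert them to a numeric format, classify) can be sketched as follows. The feature choices and the toy linear scorer are illustrative stand-ins; the thesis's actual models (SVM and others) would be trained on labeled data rather than hand-weighted:

```python
import re

def extract_features(url: str, content: str) -> dict:
    """Turn a webpage into numeric features a classifier can consume.
    The feature set here is illustrative, not the thesis's exact set."""
    return {
        "url_length": len(url),
        "num_dots": url.count("."),
        "has_ip_host": 1 if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url) else 0,
        "num_scripts": content.lower().count("<script"),
        "has_password_form": 1 if 'type="password"' in content.lower() else 0,
    }

def score(features: dict, weights: dict) -> float:
    """Toy linear model standing in for a trained classifier."""
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

# Hand-set weights for illustration; a real model learns these.
weights = {"url_length": 0.01, "num_dots": 0.1, "has_ip_host": 2.0,
           "num_scripts": 0.2, "has_password_form": 1.0}

feats = extract_features(
    "http://192.168.12.7/login.update.account.example",
    '<script>...</script><form><input type="password"></form>',
)
malicious = score(feats, weights) > 2.5  # illustrative decision threshold
```

Because everything runs locally on cheap features, a check like this can execute per page load, which is the real-time, local-machine setting the thesis targets.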

    Using Context and Interactions to Verify User-Intended Network Requests

    Client-side malware can attack users by tampering with applications or user interfaces to generate requests that users did not intend. We propose Verified Intention (VInt), which ensures that a network request, as received by a service, is user-intended. VInt is based on "seeing what the user sees" (context): it screenshots the user interface as the user interacts with a security-sensitive form. There are two main components. First, VInt ensures output integrity and authenticity by validating the context, ensuring the user sees correctly rendered information. Second, VInt extracts user-intended inputs from the on-screen user-provided inputs, under the assumption that a human user checks what they entered. Using the user-intended inputs, VInt deems a request user-intended if it is generated properly from those inputs while the user is shown the correct information. VInt is implemented using image analysis and Optical Character Recognition (OCR). Our evaluation shows that VInt is accurate and efficient.
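The second component, checking that the outgoing request matches what the user actually saw on screen, can be sketched as follows. The `ocr_fields` stub stands in for real OCR over the screenshot (the paper uses image analysis and OCR); the field names and values are hypothetical:

```python
def ocr_fields(screenshot) -> dict:
    """Stub standing in for OCR over a screenshot of the form.
    A real implementation would run OCR on cropped input-field
    regions; here we return canned text for illustration."""
    return {"recipient": "alice", "amount": "100"}

def request_is_user_intended(screenshot, request_params: dict) -> bool:
    """Deem a request user-intended only if every security-sensitive
    parameter equals the value the user saw on screen."""
    seen = ocr_fields(screenshot)
    return all(request_params.get(k) == v for k, v in seen.items())

# Legitimate request: matches the on-screen values.
ok = request_is_user_intended(None, {"recipient": "alice", "amount": "100"})
# Tampered request: malware changed the amount after the user typed it.
tampered = request_is_user_intended(None, {"recipient": "alice", "amount": "9000"})
```

The design choice worth noting is that verification happens on what was *rendered*, not on what the application claims: malware that rewrites the request after the screenshot is taken produces a mismatch and is rejected.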

    A survey on web tracking: mechanisms, implications, and defenses

    Privacy seems to be the Achilles' heel of today's web. Most web services make continuous efforts to track their users and to obtain as much personal information as they can from the things they search for, the sites they visit, the people they contact, and the products they buy. This information is mostly used for commercial purposes, which go far beyond targeted advertising. Although many users are already aware of the privacy risks involved in the use of internet services, the particular methods and technologies used for tracking them are much less known. In this survey, we review the existing literature on the methods used by web services to track users online, as well as their purposes, implications, and possible user defenses. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, and other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly varied and creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users. For each of the tracking methods, we present possible defenses. Some are specific to a particular tracking approach, while others are more universal, blocking more than one threat. Finally, we present the future trends in user tracking and show that they can potentially pose significant threats to users' privacy. Peer reviewed. Postprint (author's final draft)
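One of the client-cache mechanisms this class of survey covers can be sketched server-side: cache-validation metadata such as the HTTP ETag header is echoed back by the browser on revisits (via If-None-Match), so a server can plant a unique ETag and recognize a returning user without cookies. A minimal sketch, with an in-memory store standing in for server state:

```python
import uuid

# Server-side store mapping planted ETags to tracking IDs (illustrative).
etag_to_user = {}

def handle_request(if_none_match):
    """Simulate a server abusing the ETag cache-validation header
    as a cookie-less tracking identifier.
    Returns (tracking_id, etag_sent_to_browser)."""
    if if_none_match in etag_to_user:
        # The browser sent back the ETag we planted: user recognized.
        return etag_to_user[if_none_match], if_none_match
    # First visit: mint a unique ETag and remember who it belongs to.
    etag = uuid.uuid4().hex
    etag_to_user[etag] = f"user-{len(etag_to_user) + 1}"
    return etag_to_user[etag], etag

# First visit: the browser has nothing cached.
user_a, planted = handle_request(None)
# Revisit: the browser sends If-None-Match with the cached ETag.
user_b, _ = handle_request(planted)
```

This also illustrates why the corresponding defense (clearing or partitioning the cache) is broader than cookie deletion: the identifier lives in caching state the user rarely thinks to clear.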

    Cyber Security Concerns in Social Networking Service

    Today's world is unimaginable without online social networks. Millions of people connect with their friends and families by sharing personal information through different forms of social media. Individuals sometimes face problems in managing multimedia content such as audio, video, and photos, because it is difficult to maintain the security and privacy of content uploaded on a daily basis. In fact, personal or sensitive information can go viral if it leaks out, even unintentionally. Any leaked content can be shared and become a topic of worldwide discussion within seconds through social networking sites. In the setting of the Internet of Things (IoT), which connects millions of devices, such content could be shared from anywhere at any time. Considering such a setting, in this work we investigate the key security and privacy concerns faced by individuals who use different social networking sites for different reasons. We also discuss the current state-of-the-art defense mechanisms that can provide longer-term solutions to these threats.

    From Chatbots to PhishBots? -- Preventing Phishing scams created using ChatGPT, Google Bard and Claude

    The advanced capabilities of Large Language Models (LLMs) have made them invaluable across various applications, from conversational agents and content creation to data analysis, research, and innovation. However, their effectiveness and accessibility also render them susceptible to abuse for generating malicious content, including phishing attacks. This study explores the potential of using four popular commercially available LLMs - ChatGPT (GPT-3.5 Turbo), GPT-4, Claude, and Bard - to generate functional phishing attacks using a series of malicious prompts. We discover that these LLMs can generate both phishing emails and websites that convincingly imitate well-known brands, and can also deploy a range of evasive tactics for the latter to elude detection by anti-phishing systems. Notably, these attacks can be generated using unmodified, or "vanilla," versions of these LLMs, without requiring any prior adversarial exploits such as jailbreaking. As a countermeasure, we build a BERT-based automated detection tool for the early detection of malicious prompts, preventing LLMs from generating phishing content; it attains an accuracy of 97% for phishing website prompts and 94% for phishing email prompts.
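The detection step, classifying an incoming prompt as phishing-related before the LLM responds, can be sketched with a simple token scorer. The paper's detector is BERT-based; the hard-coded token list and threshold below are an illustrative stand-in, since a learned model would infer such signals from labeled prompts:

```python
# Tokens indicative of phishing-generation prompts (illustrative list;
# a real detector learns these signals rather than hard-coding them).
SUSPICIOUS = {"phishing", "credential", "login page", "impersonate",
              "clone", "bypass", "spoof", "harvest"}

def prompt_risk(prompt: str) -> float:
    """Score a prompt by the fraction of suspicious tokens it contains."""
    text = prompt.lower()
    hits = sum(1 for tok in SUSPICIOUS if tok in text)
    return hits / len(SUSPICIOUS)

def is_malicious_prompt(prompt: str, threshold: float = 0.25) -> bool:
    """Flag prompts whose risk score crosses the threshold, so the
    request can be blocked before any content is generated."""
    return prompt_risk(prompt) >= threshold

benign = is_malicious_prompt("Write a product description for running shoes")
flagged = is_malicious_prompt(
    "Clone the login page of a bank and harvest credentials, then spoof the sender")
```

The point of intervening at the prompt stage, as the paper's tool does, is that the phishing email or site is never produced at all, rather than being filtered after generation.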

    Shielding against Web Application Attacks - Detection Techniques and Classification

    The field of IoT web applications faces a range of security risks and system attacks due to the increasing complexity and size of home automation datasets. One of the primary concerns is the identification of Distributed Denial of Service (DDoS) attacks in home automation systems. Attackers can easily access various IoT web application assets by entering a home automation dataset or clicking a link, making these assets vulnerable to different types of web attacks. To address these challenges, the cloud has introduced the Edge of Things paradigm, which uses multiple concurrent deep models to enhance system stability and enable easy data revelation updates. Identifying malicious attacks is therefore crucial for improving the reliability and security of IoT web applications. This paper uses a machine learning algorithm that can accurately identify web attacks using unique keywords. Smart home devices are classified into four classes based on their traffic predictability levels, and a neural recognition model is proposed to classify these attacks with a high degree of accuracy, outperforming other classification models. The application of deep learning to identifying and classifying attacks has significant theoretical and scientific value for web security investigations. It also provides innovative ideas for intelligent security detection by classifying web visitors, making it possible to identify and prevent potential security threats.
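Keyword-based identification of web attacks, as described above, can be sketched with pattern matching over request payloads. The signatures below are illustrative of two common attack classes (SQL injection and cross-site scripting), not the paper's trained model:

```python
import re

# Illustrative signatures for two common web attack classes.
ATTACK_PATTERNS = {
    "sqli": re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+|union\s+select", re.I),
    "xss":  re.compile(r"<\s*script|on\w+\s*=\s*['\"]", re.I),
}

def classify_request(payload: str):
    """Return the names of the attack classes whose signatures match
    the request payload; an empty list means no signature fired."""
    return [name for name, pat in ATTACK_PATTERNS.items() if pat.search(payload)]

benign = classify_request("user=alice&page=2")
sqli   = classify_request("id=1' OR 1=1 --")
xss    = classify_request("comment=<script>document.location='http://evil'</script>")
```

Signature matching like this is fast enough to run at the edge, but brittle against obfuscation, which is the motivation for layering a learned classifier on top as the paper proposes.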

    A semi-automated security advisory system to resist cyber-attack in social networks

    Social networking sites often witness various types of social engineering (SE) attacks, yet limited research has addressed the most severe types of social engineering in social networks (SNs). The present study investigates the extent to which people respond differently to different types of attack in a social network context, and how users can be segmented based on their vulnerability; this in turn opens the prospect of a personalised security advisory system. In total, 316 participants completed an online questionnaire that included a scenario-based experiment. The results reveal that people respond to cyber-attacks differently depending on their demographics. Furthermore, people's competence, social network experience, and limited connections with strangers in social networks can decrease their likelihood of falling victim to some types of attacks more than others.