88 research outputs found

    Detecting Abnormal Behavior in Web Applications

    Get PDF
    The rapid advance of web technologies has made the Web an essential part of our daily lives. However, network attacks have exploited vulnerabilities in web applications and caused substantial damage to Internet users. Detecting network attacks is an essential first step in network security, and a major branch of this area is anomaly detection. This dissertation concentrates on detecting abnormal behaviors in web applications by employing the following methodology. For a given web application, we conduct a set of measurements to reveal the existence of abnormal behaviors. We observe the differences between normal and abnormal behaviors, and by applying a variety of information-extraction methods, such as heuristic algorithms, machine learning, and information theory, we extract features useful for building a classification system to detect abnormal behaviors. In particular, we have studied four detection problems in web security. The first is detecting unauthorized hotlinking behavior that plagues hosting servers on the Internet. We analyze a group of common hotlinking attacks and the web resources targeted by them, and then present an anti-hotlinking framework for protecting materials on hosting servers. The second problem is detecting aggressive automation on Twitter. Our work determines whether a Twitter user is a human, bot, or cyborg based on its degree of automation. We observe the differences among the three categories in terms of tweeting behavior, tweet content, and account properties, and propose a classification system that combines features extracted from an unknown user to determine the likelihood that it is a human, bot, or cyborg. Furthermore, we shift the detection perspective from automation to spam and introduce the third problem, detecting social spam campaigns on Twitter. Evolved from individual spammers, spam campaigns manipulate and coordinate multiple accounts to spread spam on Twitter and display collective characteristics. We design an automatic classification system based on machine learning and apply multiple features to classify spam campaigns. Complementary to conventional spam detection methods, our work adds efficiency and robustness. Finally, we extend our detection research into the blogosphere to capture blog bots. In this problem, detecting human presence is an effective defense against the automatic posting ability of blog bots. We introduce behavioral biometrics, mainly mouse and keyboard dynamics, to distinguish between humans and bots. Because it passively monitors user browsing activities, this detection method does not require any direct user participation and improves the user experience.
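
    The human/bot/cyborg detection described above combines behavioral, content, and account features in a classifier. The sketch below is a minimal illustration of that idea using a generic random-forest model on synthetic data; the feature names and labels are assumptions for illustration, not the dissertation's actual feature set or model.

```python
# Minimal sketch of a feature-based human/bot/cyborg classifier.
# Feature names, labels, and data are illustrative, not the dissertation's actual design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-account features: tweeting-interval entropy, URL ratio,
# fraction of tweets posted via API clients, and account age (normalized).
X = rng.random((300, 4))
y = rng.integers(0, 3, size=300)          # synthetic labels: 0 = human, 1 = bot, 2 = cyborg

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

unknown_user = np.array([[0.42, 0.91, 0.88, 0.05]])
# predict_proba gives the likelihood of each class for the unknown account
print(dict(zip(["human", "bot", "cyborg"], clf.predict_proba(unknown_user)[0])))
```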

    Clustering Web Users By Mouse Movement to Detect Bots and Botnet Attacks

    Get PDF
    Efficiently and accurately detecting the presence of web bots has proven to be a challenging problem for website administrators. As the sophistication of modern web bots increases, specifically their ability to more closely mimic the behavior of humans, web bot detection schemes quickly become obsolete as they fail to maintain their effectiveness. Although machine learning-based detection schemes have been successful in recent implementations, web bots can apply similar machine learning tactics to mimic human users and thus bypass such schemes. This work addresses the issue of machine learning-based bots bypassing machine learning-based detection schemes by introducing a novel unsupervised learning approach that clusters users based on behavioral biometrics. The idea is that, by differentiating users based on their behavior, for example how they use the mouse or type on the keyboard, website administrators can be given the information needed to make more informed decisions about whether a user is a human or a bot. This is analogous to how modern websites require users to log in before browsing, which likewise gives administrators a basis for such decisions. An added benefit of this approach is that it is a human observational proof (HOP), meaning it does not inconvenience the user (user friction) with human interactive proofs (HIPs) such as CAPTCHAs or with a login requirement.
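
    A minimal sketch of the clustering idea follows: simple hypothetical features are extracted from mouse trajectories and grouped with an off-the-shelf clustering algorithm. The features, synthetic data, and choice of k-means are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of clustering users by mouse-movement features (illustrative only;
# the paper's actual features and clustering algorithm may differ).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def trajectory_features(points):
    """points: (n, 3) array of (x, y, timestamp) mouse samples for one session."""
    d = np.diff(points[:, :2], axis=0)
    dt = np.diff(points[:, 2]) + 1e-9
    speed = np.linalg.norm(d, axis=1) / dt
    # mean speed, speed variability, and fraction of near-stationary samples
    return np.array([speed.mean(), speed.std(), (speed < 1.0).mean()])

rng = np.random.default_rng(1)
sessions = [np.column_stack([rng.random(50) * 800, rng.random(50) * 600,
                             np.cumsum(rng.random(50) * 0.05)]) for _ in range(40)]
X = StandardScaler().fit_transform([trajectory_features(s) for s in sessions])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster ids; an administrator would inspect clusters for bot-like behavior
```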

    Detecting mobility context over smartphones using typing and smartphone engagement patterns

    Get PDF
    Most of the latest context-based applications capture a user's mobility using Inertial Measurement Unit (IMU) sensors such as the accelerometer and gyroscope, which do not require explicit user permission for application access. Although these sensors provide highly accurate mobility context information, existing studies have shown that they can lead to undesirable leakage of location information. To evade this breach of location privacy, many state-of-the-art studies suggest imposing stringent restrictions on the use of IMU sensors. However, in this paper, we show that typing and smartphone engagement patterns can act as an alternative modality for sniffing the mobility context of a user, even if the IMU sensors are not sampled at all. We develop an adversarial framework, named ConType, which exploits the signatures exposed by typing and smartphone engagement patterns to track a user's mobility. Rigorous experiments with an in-the-wild dataset show that ConType can track mobility contexts with an average micro-F1 of 0.87 (±0.09) without using IMU data. Through additional experiments, we also show that ConType can track mobility stealthily, with a very low power and resource footprint, thus further aggravating the risk.
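
    The sketch below illustrates the general shape of such an inference pipeline: a classifier trained on hypothetical typing and engagement features, scored with micro-F1 as in the abstract. The feature names, labels, and model are assumptions, not ConType's actual design.

```python
# Illustrative sketch: inferring mobility context from typing/engagement features,
# evaluated with micro-F1. Features, labels, and classifier are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(2)
# Hypothetical per-session features: inter-key delay mean/std, backspace rate,
# session length, and screen-unlock frequency.
X = rng.random((500, 5))
y = rng.integers(0, 3, size=500)  # synthetic contexts: 0 = stationary, 1 = walking, 2 = vehicle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("micro-F1:", f1_score(y_te, clf.predict(X_te), average="micro"))
```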

    User-Behavior Based Detection of Infection Onset

    Get PDF
    A major vector of computer infection is the exploitation of software or design flaws in networked applications such as the browser. Malicious code can be fetched and executed on a victim's machine without the user's permission, as in drive-by download (DBD) attacks. In this paper, we describe a new tool called DeWare for detecting the onset of infection delivered through vulnerable applications. DeWare explores and enforces causal relationships between computer-related human behaviors and system properties, such as file-system access and process execution. Our tool can be used to provide real-time protection of a personal computer, as well as to diagnose and evaluate untrusted websites for forensic purposes. Besides the concrete DBD detection solution, we also formally define causal relationships between user actions and system events on a host. Identifying and enforcing correct causal relationships has important applications in realizing advanced and secure operating systems. We perform an extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), and 84 malicious websites in the wild. Our results show that DeWare is able to correctly distinguish legitimate download events from unauthorized system events with a low false positive rate (< 1%).
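
    The causality check described above can be illustrated with a small sketch: a file-creation event in a monitored directory is flagged unless it was preceded by a user action within a short window. The event format, window size, and monitored paths are illustrative assumptions, not DeWare's actual implementation.

```python
# Minimal sketch of a user-action / system-event causality check
# (timestamps, window size, and event format are illustrative assumptions).
from dataclasses import dataclass

WINDOW_SECONDS = 2.0               # a download is expected shortly after a user action
MONITORED_DIRS = ("/home/user/Downloads",)

@dataclass
class Event:
    time: float
    kind: str                      # "user_action" or "file_create"
    detail: str

def flag_unauthorized(events):
    """Return file-create events in monitored dirs with no preceding user action in the window."""
    alerts, last_action = [], float("-inf")
    for ev in sorted(events, key=lambda e: e.time):
        if ev.kind == "user_action":
            last_action = ev.time
        elif ev.kind == "file_create" and ev.detail.startswith(MONITORED_DIRS):
            if ev.time - last_action > WINDOW_SECONDS:
                alerts.append(ev)  # no recent click/keypress: possible drive-by download
    return alerts

events = [Event(0.0, "user_action", "click save-as"),
          Event(0.5, "file_create", "/home/user/Downloads/report.pdf"),
          Event(9.0, "file_create", "/home/user/Downloads/payload.exe")]
print([e.detail for e in flag_unauthorized(events)])   # -> ['/home/user/Downloads/payload.exe']
```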

    Detecting Peripheral-based Attacks on the Host Memory

    Full text link

    Detecting and Modelling Stress Levels in E-Learning Environment Users

    Get PDF
    A modern Intelligent Tutoring System (ITS) should be aware of a learner's cognitive and affective states, as a learner's performance can be affected by motivational and emotional factors. It is important to design a method that supports low-cost, task-independent, and unobtrusive sensing of a learner's cognitive and affective states, both to improve the learner's experience in e-learning and to enable personalized learning. Although a great deal of related affective computing research has been done in this area, there is a lack of empirical research that can automatically measure a learner's stress using objective methods. This research examines how an objective stress measurement model can be developed to compute a learner's cognitive and emotional stress automatically using mouse and keystroke dynamics. To ensure the measurement is not affected even if the user switches between tasks, three preliminary experiments were carried out based on three common e-learning tasks: search, assessment, and typing. A stress measurement model was then built using the datasets collected from the experiments. Three stress classifiers were tested, namely certainty factors, a feedforward back-propagation neural network, and an adaptive neuro-fuzzy inference system. The best classifier was then integrated into the ITS stress inference engine, which is designed to decide on necessary adaptations and to provide analytical information about learners' performance, including stress levels and learners' behaviours when answering questions.
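
    As a rough illustration of one of the tested classifier types, the sketch below trains a small feedforward neural network on hypothetical mouse and keystroke features with synthetic stress labels; it is not the thesis's actual model, features, or data.

```python
# Illustrative sketch of a feedforward neural-network stress classifier on
# hypothetical mouse/keystroke features; not the thesis's actual model or data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Hypothetical features: mean key-hold time, typing speed, backspace rate,
# mean mouse speed, and click latency.
X = rng.random((400, 5))
y = rng.integers(0, 3, size=400)        # synthetic stress levels: 0 = low, 1 = medium, 2 = high

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
model.fit(X, y)
print(model.predict(X[:5]))             # predicted stress levels for the first five samples
```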

    Analyzing Variable Human Actions for Robotic Process Automation

    Get PDF
    Robotic Process Automation (RPA) provides a means to automate mundane and repetitive human tasks. Task Mining approaches can be used to discover the actions that humans take to carry out a particular task. A weakness of such approaches, however, is that they cannot deal well with humans who carry out the same task differently for different cases according to some hidden rule. The logs used for Task Mining generally do not contain sufficient data to distinguish the exact drivers behind this variability. In this paper, we propose a new Task Mining framework designed to support engineers who wish to apply RPA to a task that is subject to variable human actions. This framework extracts features from User Interface (UI) logs that are extended with a new source of data, namely screen captures. The framework invokes supervised machine learning algorithms to generate decision models, which characterize the decisions behind variable human actions in a machine-and-human-readable form. We evaluated the proposed Task Mining framework with a set of synthetic UI logs. Despite the use of only relatively small logs, our results demonstrate that high accuracy is generally achieved. Funding: Ministerio de Ciencia, Innovación y Universidades PID2019-105455GB-C31 (NICO); Centro para el Desarrollo Tecnológico Industrial (CDTI) EXP 00130458/IDI-20210319-P018-20/E09 (CODICE).
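
    The decision-model step can be illustrated with a small sketch that learns a decision tree from hypothetical UI-log features and prints it in human-readable form. The column names, values, and choice of a decision tree are assumptions for illustration, not the framework's actual schema or algorithm.

```python
# Minimal sketch of learning a human-readable decision model from UI-log features
# (column names and values are illustrative assumptions, not the framework's schema).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

log = pd.DataFrame({
    "invoice_amount": [120, 950, 80, 2000, 60, 1500],
    "customer_type":  [0, 1, 0, 1, 0, 1],              # 0 = retail, 1 = corporate (encoded)
    "action_taken":   ["approve", "escalate", "approve", "escalate", "approve", "escalate"],
})

X, y = log[["invoice_amount", "customer_type"]], log["action_taken"]
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The exported rules serve as a machine-and-human-readable decision model.
print(export_text(tree, feature_names=list(X.columns)))
```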

    Malware detection and analysis via layered annotative execution

    Get PDF
    Malicious software (i.e., malware) has been a severe threat to interconnected computer systems for decades and causes billions of dollars in damages each year. A large volume of new malware samples is discovered daily. Even worse, malware is rapidly evolving to become more sophisticated and evasive, striking against current malware analysis and defense systems. This dissertation takes a root-cause-oriented approach to the problem of automatic malware detection and analysis. In this approach, we aim to capture the intrinsic nature of malicious behaviors, rather than the external symptoms of existing attacks. We propose a new architecture for binary code analysis, called whole-system out-of-the-box fine-grained dynamic binary analysis, to address the common challenges in malware detection and analysis. To realize this architecture, we build a unified and extensible analysis platform, codenamed TEMU. We propose a core technique for fine-grained dynamic binary analysis, called layered annotative execution, and implement it in TEMU. On the basis of TEMU, we then propose and build a series of novel techniques for automatic malware detection and analysis. For postmortem malware analysis, we have developed Renovo, Panorama, HookFinder, and MineSweeper for detecting and analyzing various aspects of malware. For proactive malware detection, we have built HookScout as a proactive hook detection system. These techniques capture intrinsic characteristics of malware and are thus well suited for dealing with new malware samples and attack mechanisms.
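
    Layered annotative execution attaches annotations (for example, taint tags) to data and propagates them alongside the computation. The toy sketch below models only that propagation idea in a few lines; TEMU performs it at the whole-system binary level, so this is not its implementation.

```python
# Toy model of annotative execution: annotations attached to data locations are
# propagated through operations via shadow state. Purely illustrative of the idea.
shadow = {}                                    # location -> set of annotations

def load(loc):
    return shadow.get(loc, set())

def store(loc, tags):
    shadow[loc] = set(tags)

def binop(dst, src1, src2):
    # The destination inherits the union of its operands' annotations.
    store(dst, load(src1) | load(src2))

store("input_buf", {"network_input"})          # data arriving from the network is annotated
binop("eax", "input_buf", "const_4")           # eax = input_buf + 4
binop("mem_0x1000", "eax", "eax")              # write a derived value to memory
print(load("mem_0x1000"))                      # -> {'network_input'}: the tag propagated
```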