
    Identification of potential malicious web pages

    Malicious web pages are an emerging security concern on the Internet due to their popularity and their potentially serious impact. Detecting and analysing them is very costly because of their quantity and complexity. In this paper, we present a lightweight scoring mechanism that uses static features to identify potentially malicious pages. This mechanism is intended as a filter that reduces the number of suspicious web pages requiring more expensive analysis by mechanisms that must load and interpret a page to determine whether it is malicious or benign. Given its role as a filter, our main aim is to reduce false positives while minimising false negatives. The scoring mechanism was developed by identifying candidate static features of malicious web pages, which are evaluated using a feature selection algorithm. This identifies the most appropriate set of features for efficiently distinguishing between benign and malicious web pages. These features are used to construct a scoring algorithm that calculates a score for a web page's potential maliciousness. The main advantage of this scoring mechanism over a binary classifier is the ability to trade off accuracy against performance, which allows us to adjust the number of web pages passed to the more expensive analysis mechanism in order to tune overall performance.
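    The filtering idea described in this abstract can be sketched roughly as follows; the feature names, weights, and threshold below are invented for illustration and are not the features the paper actually selected.

```python
# Hypothetical sketch of a static-feature scoring filter. Feature names
# and weights are illustrative assumptions, not the paper's selection.

def score_page(features, weights):
    """Weighted sum of static features of a page."""
    return sum(weights[name] * value for name, value in features.items())

WEIGHTS = {                   # assumed weights, for illustration only
    "num_iframes": 2.0,
    "num_script_tags": 0.5,
    "has_obfuscated_js": 5.0,
    "url_length": 0.01,
}

page = {"num_iframes": 3, "num_script_tags": 10,
        "has_obfuscated_js": 1, "url_length": 120}
s = score_page(page, WEIGHTS)

# A tunable threshold trades accuracy against how many pages are
# forwarded to the expensive dynamic-analysis stage.
THRESHOLD = 8.0
needs_deep_analysis = s >= THRESHOLD
```

Raising the threshold passes fewer pages to the expensive analyser (better performance, more false negatives); lowering it does the opposite, which is the trade-off the abstract describes.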

    Practical Dynamic Symbolic Execution for JavaScript


    Ontology for Blind SQL Injection

    A prevalent problem in cyberspace is the exploitation of web application databases through SQL injection attacks. This kind of attack becomes more difficult when it targets blind SQL vulnerabilities. In this paper, we first exploit this vulnerability and subsequently build an ontology (OBSQL) to address the detection of blind SQL weaknesses. To achieve the exploitation, we reproduce the attacks against a website running in production mode: we first detect the presence of the vulnerability, then use our tools to abuse it. Last but not least, we demonstrate the value of applying ontologies to cybersecurity in this context. The mitigation techniques in our ontology will be addressed in future work.
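    The boolean-based blind technique this abstract refers to can be illustrated with a minimal, self-contained sketch. The `fetch` function below simulates a vulnerable endpoint so the example runs without a network; it is not the authors' OBSQL tooling.

```python
# Minimal illustration of boolean-based blind SQL injection detection.
# `fetch` is a stand-in for an HTTP request to a target page; here it
# simulates a vulnerable endpoint so the sketch is self-contained.

def fetch(param):
    # Simulated vulnerable endpoint: the page content differs depending
    # on whether the injected boolean condition evaluates to true.
    if "1=1" in param:
        return "<html>Welcome back, user!</html>"
    return "<html>No results found.</html>"

def looks_blind_injectable(base="id=1"):
    """Compare responses for an always-true vs an always-false condition."""
    true_page = fetch(base + " AND 1=1")
    false_page = fetch(base + " AND 1=2")
    # A consistent difference between the two responses suggests the
    # parameter is evaluated inside a SQL query.
    return true_page != false_page

print(looks_blind_injectable())
```

In a blind scenario no query output is reflected in the page, so the attacker infers one bit per request from such true/false response differences.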

    09141 Abstracts Collection -- Web Application Security

    From 29th March to 3rd April 2009, the Dagstuhl Seminar 09141 "Web Application Security" was held at Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar are collected in this paper. Links to full papers (if available) are provided in the corresponding seminar summary document.

    Securing the Next Generation Web

    With the ever-increasing digitalization of society, the need for secure systems is growing. While some security features, like HTTPS, are popular, securing web applications and the clients we use to interact with them remains difficult. To secure web applications, we focus on both the client side and the server side. For the client side, mainly web browsers, we analyze how new security features might solve one problem but introduce others. We show this by performing a systematic analysis of the new Content Security Policy (CSP) directive navigate-to. In our research, we find that it does introduce new vulnerabilities, for which we recommend countermeasures. We also create AutoNav, a tool capable of automatically suggesting navigation policies for this directive. Finding server-side vulnerabilities in a black-box setting, where there is no access to the source code, is challenging. To improve this, we develop novel black-box methods for automatically finding vulnerabilities. We accomplish this by identifying key challenges in web scanning and combining the best of previous methods. Additionally, we leverage SMT solvers to further improve the coverage and vulnerability detection rate of scanners. In addition to browsers, browser extensions also play an important role in the web ecosystem. These small programs, e.g. ad blockers and password managers, have powerful APIs and access to sensitive user data like browsing history. By systematically analyzing the extension ecosystem, we find new static and dynamic methods for detecting both malicious and vulnerable extensions. In addition, we develop a method for detecting malicious extensions based solely on the metadata of downloads over time. We also analyze new attack vectors introduced by Google's new vehicle OS, Android Automotive, which is based on Android with the addition of vehicle APIs. Our analysis results in new attacks pertaining to safety, privacy, and availability. Furthermore, we create AutoTame, which is designed to analyze third-party vehicle apps for the vulnerabilities we found.
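    For context, navigate-to was a proposed CSP Level 3 directive (it has since been removed from the specification drafts) restricting where a document may navigate. A policy might have looked like the following, with illustrative hosts:

```http
Content-Security-Policy: navigate-to 'self' https://payments.example.com
```

Under such a policy, navigations to any other origin would be blocked; policies of this shape are what the AutoNav tool described above aims to suggest automatically.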

    Real time detection of malicious webpages using machine learning techniques

    In today's Internet, online content, and especially webpages, has increased exponentially. Alongside this huge rise, the number of users has also grown considerably over the past two decades. Most responsible institutions, such as banks and governments, follow specific rules and regulations regarding conduct and security, but most websites are designed and developed with few restrictions on these issues. That is why it is important to protect users from harmful webpages. Previous research has looked at detecting harmful webpages by running machine learning models on a remote website. The problem with this approach is that detection is slow, because of the need to handle a large number of webpages. There is a gap in knowledge about which machine learning algorithms are capable of detecting harmful web applications in real time on a local machine. The conventional method of detecting malicious webpages is to go through a blacklist and check whether a webpage is listed. A blacklist is a list of webpages classified as malicious from a user's point of view. These blacklists are created by trusted organisations and volunteers, and are then used by modern web browsers such as Chrome, Firefox and Internet Explorer. However, blacklists are ineffective because of the frequently changing nature of webpages, the growing number of webpages, which poses scalability issues, and crawlers' inability to visit intranet webpages that require operators to log in as authenticated users. The thesis proposes to use various machine learning algorithms, both supervised and unsupervised, to categorise webpages based on parsing features such as content (which played the most important role in this thesis), URL information, URL links and screenshots of webpages.
    The features were then converted into a format understandable by machine learning algorithms, which analysed them to make one important decision: whether a given webpage is malicious or not, using commonly available software and hardware. Prototype tools were developed to compare and analyse the efficiency of these machine learning techniques. The supervised algorithms include Support Vector Machine, Naïve Bayes, Random Forest, Linear Discriminant Analysis, Quadratic Discriminant Analysis and Decision Tree. The unsupervised techniques are Self-Organising Map, Affinity Propagation and K-Means. A Self-Organising Map was used instead of Neural Networks, and the research suggests that the newer incarnation of Neural Networks, Deep Learning, would suit this research well. The supervised algorithms performed better than the unsupervised ones, and the best of all these techniques is SVM, which achieves 98% accuracy. The result was validated by a Chrome extension that used the classifier in real time. Unsupervised algorithms came close to the supervised ones, which is surprising given that they do not have access to class information beforehand.
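    The supervised pipeline described above can be sketched with one of the listed techniques, a from-scratch Bernoulli Naïve Bayes (the thesis's best performer was actually an SVM). The binary features and tiny training set below are made up purely for illustration.

```python
import math

# Sketch of a Bernoulli Naive Bayes classifier over binary page features.
# Features and training data are invented; real features in the thesis
# are parsed from page content, URLs, links, and screenshots.

def train_nb(X, y):
    """Fit per-class priors and Laplace-smoothed feature probabilities."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        # P(feature_j = 1 | class c), with add-one (Laplace) smoothing
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def predict_nb(model, x):
    """Return the class with the highest log-likelihood for x."""
    def loglik(c):
        prior, probs = model[c]
        ll = math.log(prior)
        for xi, p in zip(x, probs):
            ll += math.log(p if xi else 1 - p)
        return ll
    return max(model, key=loglik)

# Features: [has_iframe, many_scripts, long_url, uses_eval]
X = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0],    # benign (label 0)
     [1, 1, 1, 1], [1, 1, 0, 1], [1, 0, 1, 1]]    # malicious (label 1)
y = [0, 0, 0, 1, 1, 1]

model = train_nb(X, y)
```

A real-time local classifier of this shape is cheap enough to run per page load, which is the deployment setting the thesis targets with its Chrome extension.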

    Cyber Security

    This open access book constitutes the refereed proceedings of the 18th China Annual Conference on Cyber Security, CNCERT 2022, held in Beijing, China, in August 2022. The 17 papers presented were carefully reviewed and selected from 64 submissions. The papers are organized according to the following topical sections: data security; anomaly detection; cryptocurrency; information security; vulnerabilities; mobile internet; threat intelligence; text recognition.

    An Implementation to Detect Fraud App Using Fuzzy Logic

    Fraudulent behaviour is common in app stores such as the Google Play store and Apple's App Store. The popularity information in app stores, such as chart rankings, user ratings and user reviews, provides an extraordinary opportunity to understand user experiences with mobile apps. Many fraud-app detection tools are available that extract evidence from reviews and ratings to detect fake apps using different approaches, but most existing tools work with only two groups, i.e. good and bad. We therefore propose a system that works with more than two groups, namely very bad, bad, neutral, good and very good. Each group is assigned a score, which improves the differentiation of reviews and ratings. For this, the proposed system uses a fuzzy logic algorithm. We performed experiments on 80 app IDs taken from the App-Review-Dataset; the results show that the proposed method is efficient in terms of accuracy and retrieval time.
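    A five-group fuzzy scoring step like the one described can be sketched as follows; the triangular membership functions, group centres and group scores are assumptions for illustration, not the paper's actual parameters.

```python
# Hypothetical sketch of five-group fuzzy scoring of a 1-5 star rating.
# Group centres and scores are assumptions, not the paper's parameters.

GROUPS = [            # (name, centre on the 1-5 scale, assigned score)
    ("very bad",  1.0, -2),
    ("bad",       2.0, -1),
    ("neutral",   3.0,  0),
    ("good",      4.0,  1),
    ("very good", 5.0,  2),
]

def triangular(x, center, width=1.0):
    """Triangular membership function centred at `center`."""
    return max(0.0, 1.0 - abs(x - center) / width)

def classify(rating):
    """Name of the group in which the rating has highest membership."""
    return max(GROUPS, key=lambda g: triangular(rating, g[1]))[0]

def fuzzy_score(rating):
    """Defuzzified score: membership-weighted average of group scores."""
    memberships = [(score, triangular(rating, center))
                   for _, center, score in GROUPS]
    total = sum(m for _, m in memberships)
    return sum(s * m for s, m in memberships) / total
```

Compared with a hard good/bad split, a rating of 3.5 here belongs half to "neutral" and half to "good", yielding an intermediate score rather than forcing a binary decision.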
