
    Denial of Service in Web-Domains: Building Defenses Against Next-Generation Attack Behavior

    The existing state of the art in application-layer Distributed Denial of Service (DDoS) protection is generally designed, and thus effective, only for static web domains. To the best of our knowledge, our work is the first to study the problem of application-layer DDoS defense in web domains of dynamic content and organization, and for next-generation bot behaviour. In the first part of this thesis, we focus on the following research tasks: 1) we identify the main weaknesses of existing application-layer anti-DDoS solutions proposed in the research literature and in industry; 2) we obtain a comprehensive picture of current-day as well as next-generation application-layer attack behaviour; and 3) we propose novel techniques, based on a multidisciplinary approach combining offline machine learning algorithms and statistical analysis, for the detection of suspicious web visitors in static web domains. In the second part of the thesis, we propose and evaluate a novel anti-DDoS system that detects a broad range of application-layer DDoS attacks, in both static and dynamic web domains, through the use of advanced data mining techniques. The key advantage of our system over other systems that resort to challenge-response tests (such as CAPTCHAs) in combating malicious bots is that it minimizes the number of such tests presented to valid human visitors while preventing most malicious attackers from accessing the web site. The results of the experimental evaluation demonstrate effective detection of current and future variants of application-layer DDoS attacks.
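As a hedged illustration of the kind of offline statistical screening of web visitors described above (not the thesis's actual detector), the sketch below z-scores each session's request rate against the population baseline; the session representation, features and scoring rule are all invented for the example.

```python
# Toy statistical screening of web sessions for suspicious (bot-like)
# visitors. Feature choices and scoring are illustrative assumptions,
# not the method proposed in the thesis.

from statistics import mean, stdev

def session_features(requests):
    """Summarise a session given as a list of (timestamp, url) tuples."""
    times = [t for t, _ in requests]
    span = (max(times) - min(times)) or 1.0
    rate = len(requests) / span                       # requests per second
    unique = len({u for _, u in requests}) / len(requests)  # URL diversity
    return rate, unique

def suspicion_scores(sessions):
    """Z-score each session's request rate against the population baseline;
    unusually fast sessions stand out with large positive scores."""
    rates = [session_features(s)[0] for s in sessions]
    mu, sigma = mean(rates), stdev(rates) or 1.0
    return [(r - mu) / sigma for r in rates]
```

A session whose request rate sits far above the baseline (e.g. a flooding bot) receives the largest score and can be singled out for a challenge-response test.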

    Analysis and Design Security Primitives Based on Chaotic Systems for eCommerce

    Security is considered the most important requirement for the success of electronic commerce, which is built on the security of hash functions, encryption algorithms and pseudorandom number generators. Chaotic systems and security algorithms share similar properties, including sensitivity to any change in the initial parameters, unpredictability, deterministic nature and random-like behaviour. Several security algorithms based on chaotic systems have been proposed; unfortunately, some of them were found to be insecure and/or slow. In view of this, designing new secure and fast security algorithms based on chaotic systems, which guarantee integrity, authentication and confidentiality, is essential for the development of electronic commerce. In this thesis, we comprehensively explore the analysis and design of security primitives based on chaotic systems for electronic commerce: hash functions, encryption algorithms and pseudorandom number generators. Novel hash functions, encryption algorithms and pseudorandom number generators based on chaotic systems are proposed. The security of the proposed algorithms is analyzed using well-known statistical tests in this field. In addition, a new one-dimensional triangle-chaotic map (TCM) with perfect chaotic behaviour is presented. We have compared the proposed chaos-based hash functions, block cipher and pseudorandom number generator with well-known algorithms; the comparison results show that the proposed algorithms outperform several existing ones. Several analyses and computer simulations are performed on the proposed algorithms to verify their characteristics, confirming that they satisfy the characteristics and conditions of security algorithms. The algorithms proposed in this thesis have high potential for adoption in e-commerce applications and protocols.
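The TCM itself is not reproduced here; purely to illustrate how a chaos-based pseudorandom bit generator of this general kind is constructed, the sketch below substitutes the well-known logistic map x → r·x·(1−x) with r = 4 for the thesis's map.

```python
# Sketch of a chaotic pseudorandom bit generator. The thesis's
# triangle-chaotic map (TCM) is not reproduced here; the classic logistic
# map with r = 4 stands in to illustrate the construction.

def logistic_prng_bits(seed, n_bits, r=4.0, burn_in=100):
    """Iterate the chaotic map from `seed` (in (0, 1)), discard an initial
    transient, then emit one bit per iteration via a threshold quantiser."""
    x = seed
    for _ in range(burn_in):          # let the orbit settle onto the attractor
        x = r * x * (1.0 - x)
    bits = []
    for _ in range(n_bits):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

The sensitivity to initial parameters mentioned in the abstract shows up directly: two seeds differing in the fifth decimal place yield completely different bit streams after the burn-in.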

    Advanced security aspects on Industrial Control Network.

    Security threats are one of the main problems of this computer-based era. All systems making use of information and communication technologies (ICT) are prone to failures and vulnerabilities that can be exploited by malicious software and agents. In recent years, industrial critical installations have also started to make massive use of network interconnections and, worse still, have come into contact with the public network, i.e. the Internet. Industrial networks are responsible for process and manufacturing operations of almost every scale, and as a result the successful penetration of a control system network can be used to directly impact those processes. Consequences could potentially range from relatively benign disruptions, such as taking a facility offline or altering an operational process (changing the formula of a chemical process), all the way to deliberate acts of sabotage intended to cause harm. The interconnectivity of Industrial Control Systems with corporate networks and the Internet has significantly increased the threats to critical infrastructure assets. Meanwhile, traditional IT security solutions such as firewalls, intrusion detection systems and antivirus software are relatively ineffective against attacks that specifically target vulnerabilities in SCADA protocols. This thesis presents an innovative approach to intrusion detection in SCADA systems based on the concepts of Critical State Analysis and State Proximity. The theoretical framework is supported by tests conducted with an Intrusion Detection System prototype implementing the proposed detection approach.
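A toy sketch of the critical-state / state-proximity idea follows; the state vector, the set of critical states and the threshold are invented for the example, not taken from the cited prototype. The principle is to measure how many process variables separate the current plant state from a known critical state and alert when that distance becomes small.

```python
# Toy critical-state / state-proximity check for a SCADA-like system.
# The state encoding and thresholds below are illustrative assumptions.

def proximity(state, critical):
    """Distance = number of field-device variables that differ from a
    critical state (a simple Hamming-style metric)."""
    return sum(1 for a, b in zip(state, critical) if a != b)

def check_state(state, critical_states, threshold=1):
    """Alert when the plant is in, or within `threshold` steps of,
    any known critical state."""
    d = min(proximity(state, c) for c in critical_states)
    if d == 0:
        return "CRITICAL"
    if d <= threshold:
        return "WARNING"        # dangerously close to a critical state
    return "OK"
```

Unlike a signature-based IDS, this flags command sequences that *drive the process toward* a dangerous state, even if each individual packet is protocol-legal.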

    ADDRESSING GEOGRAPHICAL CHALLENGES IN THE BIG DATA ERA UTILIZING CLOUD COMPUTING

    Processing, mining and analyzing big data adds significant value towards solving previously unexplored research questions and improving our ability to understand problems in the geographical sciences. This dissertation contributes a solution that supports researchers who may not otherwise have access to traditional high-performance computing resources, so that they can benefit from the "big data" era and carry out big geographical research in ways that were not previously possible. Using approaches from the fields of geographic information science, remote sensing and computer science, this dissertation addresses three major challenges in big geographical research: 1) how to exploit cloud computing to implement a universal, scalable solution for classifying multi-sourced remotely sensed imagery datasets with high efficiency; 2) how to overcome the missing-data issue in land use/land cover studies with a high-performance framework on the cloud, through the use of available auxiliary datasets; and 3) the design considerations underlying a universal, massive-scale voxel geographical simulation model for simulating complex geographical systems from a three-dimensional spatial perspective. This dissertation implements an in-memory distributed remotely sensed imagery classification framework on the cloud using both unsupervised and supervised classifiers, and classifies remotely sensed imagery datasets of the Suez Canal area, Egypt, and Inner Mongolia, China, under different cloud environments. It also implements and tests a cloud-based gap-filling model with eleven auxiliary biophysical and socio-economic datasets in Inner Mongolia, China. This research further extends a voxel-based Cellular Automata model using graph theory and develops it into a massive-scale voxel geographical simulation framework to simulate dynamic processes, such as the dispersal of air pollution particles, on the cloud.
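As a minimal, single-machine illustration of the unsupervised classification step (a plain k-means pass; the dissertation's in-memory distributed version would partition the pixel list across cloud workers), each pixel is a tuple of band values and each class is represented by a centroid:

```python
# Minimal k-means sketch for unsupervised classification of remotely
# sensed pixels. Single-machine stand-in for the distributed framework.

def kmeans(pixels, centroids, iters=10):
    """pixels: list of band-value tuples; centroids: initial class centres."""
    for _ in range(iters):
        # assignment step: each pixel joins its nearest centroid
        clusters = [[] for _ in centroids]
        for p in pixels:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # update step: recompute each class centre (keep empty classes fixed)
        centroids = [
            tuple(sum(band) / len(cl) for band in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids
```

Scaling this to imagery of the Suez Canal or Inner Mongolia is then mainly a data-distribution problem: the assignment step is embarrassingly parallel over pixels, and only the per-cluster sums need to be exchanged between workers.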

    Designing Robust Collaborative Services in Distributed Wireless Networks

    Wireless Sensor Networks (WSNs) are a popular class of distributed collaborative networks finding suitability from medical to military applications. However, their vulnerability to capture, their "open" wireless interfaces and their limited battery life all result in potential vulnerabilities. WSN-based services inherit these vulnerabilities. We focus on tactical environments where sensor nodes play complex roles in data sensing, aggregation and decision making. Services in such environments demand a high level of reliability and robustness. The first problem we studied is robust target localization. Location information is important for surveillance, monitoring, secure routing, intrusion detection, on-demand services, etc. Target localization means tracing the path of moving entities through some known surveillance area. In a tactical environment, an adversary can often capture nodes and supply incorrect surveillance data to the system. In this thesis we create a target localization protocol that is robust against large amounts of such falsified data. Location estimates are generated by a Bayesian maximum-likelihood estimator. In order to achieve improved results with respect to fraudulent data attacks, we introduce various protection mechanisms. Further, our novel approach of employing watchdog nodes improves our ability to detect anomalies, reducing the impact of an adversarial attack and limiting the amount of falsified data that gets accepted into the system. By concealing and altering the location where data is aggregated, we restrict the adversary to making probabilistic "guess" attacks at best, and increase robustness further. By formulating the problem of robust node localization under adversarial settings and casting it as a multivariate optimization problem, we solve for the system design parameters that correspond to the optimal solution. Together this results in a highly robust protocol design.
In order for any collaboration to succeed, collaborating entities must have the same relative sense of time. This ensures that any measurements, surveillance data, mission commands, etc., will be processed in the same epoch they are intended to serve. In most cases, data disseminated in a WSN is transient in nature and applies for a short period of time. New data routinely replaces old data. It is imperative that data be placed in its correct time context; therefore..
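A toy sketch of grid-based maximum-likelihood target localization under falsified reports follows, with a crude report-trimming step standing in for the watchdog mechanism described above; the sensor layout, Gaussian noise model and trimming rule are all assumptions made for the example.

```python
# Toy grid-search maximum-likelihood localization from sensor distance
# reports. Dropping the worst-fitting reports is a crude stand-in for the
# thesis's watchdog-based rejection of falsified data.

import math

def ml_locate(sensors, reports, grid, sigma=1.0, trim=1):
    """sensors: [(x, y)] positions; reports: measured distances to the
    target; grid: candidate target points. For each candidate, score a
    trimmed Gaussian log-likelihood and keep the best candidate."""
    best, best_score = None, -math.inf
    for g in grid:
        residuals = sorted(
            (math.dist(g, s) - r) ** 2 for s, r in zip(sensors, reports)
        )
        kept = residuals[: len(residuals) - trim]   # drop worst `trim` reports
        score = -sum(kept) / (2 * sigma ** 2)
        if score > best_score:
            best, best_score = g, score
    return best
```

With four sensors and one falsified distance, the trimmed likelihood still peaks at the true target position, because the three honest reports agree on a single grid point while the falsified one is simply discarded.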

    Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles

    The damaging effects of cyberattacks on an industry like Cooperative Connected and Automated Mobility (CCAM) can be tremendous. Ranging from the least to the most severe, examples include damage to the reputation of vehicle manufacturers, increased reluctance of customers to adopt CCAM, loss of working hours (with a direct impact on European GDP), material damage, increased environmental pollution due, e.g., to traffic jams or malicious modifications of sensor firmware, and ultimately grave danger to human lives, whether drivers, passengers or pedestrians. Connected vehicles will soon become a reality on our roads, bringing along new services and capabilities, but also technical challenges and security threats. To overcome these risks, the CARAMEL project has developed several anti-hacking solutions for the new generation of vehicles. CARAMEL (Artificial Intelligence-based Cybersecurity for Connected and Automated Vehicles), a research project co-funded by the European Union under the Horizon 2020 framework programme, is a consortium of 15 organizations from 8 European countries together with 3 Korean partners. The project applies a proactive approach based on Artificial Intelligence and Machine Learning techniques to detect and prevent potential cybersecurity threats to autonomous and connected vehicles. This approach is structured around four fundamental pillars, namely Autonomous Mobility, Connected Mobility, Electromobility and Remote Control Vehicle. This book presents theory and results from each of these technical directions.

    Enhancing numerical modelling efficiency for electromagnetic simulation of physical layer components.

    The purpose of this thesis is to present solutions to several key difficulties that limit the application of numerical modelling in communication cable design and analysis. In particular, simulations are time-consuming, and the process of comparison requires skill and is poorly defined and understood. When much of the design process consists of optimising performance within a well-defined domain, artificial intelligence techniques may reduce or remove the need for human interaction in the design process. The automation of human processes allows round-the-clock operation at a faster throughput. Achieving a speedup would permit greater exploration of the possible designs, improving understanding of the domain. This thesis presents work relating to three facets of the efficiency of numerical modelling: minimizing simulation execution time, controlling optimisation processes and quantifying comparisons of results. These topics are of interest because simulation times for most problems of interest run into tens of hours. The design process for most systems being modelled may be considered an optimisation process, in so far as the design is improved based upon a comparison of test results with a specification. Software that automates this process permits improvements to continue outside working hours, and produces decisions unaffected by the psychological state of a human operator. Improved performance of simulation tools would facilitate exploration of more variations on a design, which would improve understanding of the problem domain, promoting a virtuous circle of design. The minimization of execution time was achieved through the development of a Parallel TLM Solver which did not use specialized hardware or a dedicated network.
Its design was novel because it was intended to operate on a network of heterogeneous machines in a manner which was fault tolerant, and included a means to reduce the vulnerability of simulated data without encryption. Optimisation processes were controlled by genetic algorithms and particle swarm optimisation, which were novel applications in communication cable design. The work extended the range of cable parameters, reducing conductor diameters for twisted pair cables and reducing the optical coverage of screens for a given shielding effectiveness. Work on the comparison of results introduced "colour maps" as a way of displaying three scalar variables over a two-dimensional surface, and comparisons were quantified by extending 1D Feature Selective Validation (FSV) to two dimensions, using an ellipse-shaped filter, in such a way that it could be extended to higher dimensions. In so doing, some problems with FSV were detected, and suggestions for overcoming them are presented, such as the special case of zero-valued DC signals. A re-description of Feature Selective Validation using Jacobians and tensors is proposed, in order to facilitate its implementation in higher-dimensional spaces.
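The "colour map" idea can be sketched as follows: encode three co-located scalar fields over a 2-D surface as the R, G and B channels of a single image. The min-max normalisation used here is an assumption for the example, not necessarily the scheme used in the thesis.

```python
# Sketch of a "colour map": three scalar fields over the same 2-D grid
# become the R, G and B channels of one image. Normalisation per channel
# is an illustrative assumption.

def colour_map(field_r, field_g, field_b):
    """Each field is a 2-D list of floats of identical shape; returns a
    2-D list of (r, g, b) triples with each channel scaled to 0..255."""
    def normalise(field):
        flat = [v for row in field for v in row]
        lo, hi = min(flat), max(flat)
        span = (hi - lo) or 1.0                      # avoid divide-by-zero
        return [[round(255 * (v - lo) / span) for v in row] for row in field]
    r, g, b = normalise(field_r), normalise(field_g), normalise(field_b)
    return [
        [(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
        for i in range(len(r))
    ]
```

Regions where all three quantities peak simultaneously render as white, and where all are low as black, so correlations between the three variables become visible at a glance.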

    IoT Applications Computing

    The evolution of emerging and innovative technologies based on Industry 4.0 concepts is transforming society and industry into a fully digitized and networked globe. Sensing, communications and computing embedded with ambient intelligence are at the heart of the Internet of Things (IoT), the Industrial Internet of Things (IIoT) and Industry 4.0 technologies, with expanding applications in manufacturing, transportation, health, building automation, agriculture and the environment. It is expected that the emerging technology clusters of ambient intelligence computing will not only transform modern industry but also advance societal health and wellness, as well as make the environment more sustainable. This book uses an interdisciplinary approach to explain the complex issue of scientific and technological innovations largely based on intelligent computing.

    Third Conference on Artificial Intelligence for Space Applications, part 1

    The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics discussed.