
    An intrusion detection system on network security for web application

    For the last 15 years, significant amounts of resources have been invested to enhance security at the system and network level, through firewalls, IDS, anti-virus software, etc., and IT infrastructure has become more secure than ever before. As an ever-increasing number of businesses move to take advantage of the Internet, web applications are becoming more prevalent and increasingly sophisticated, and as such they are critical to almost all major online businesses. The very nature of web applications, their ability to collect, process and disseminate information over the Internet, exposes them to malicious hackers. However, traditional security solutions such as firewalls and network and host IDS do not provide comprehensive protection against the attacks common to web applications. This thesis concentrates on the research of an advanced intrusion detection framework. An intrusion detection framework was designed that works alongside any custom web application to collect and analyze HTTP traffic with various advanced algorithms. Two intrusion detection algorithms are tested and adopted in the framework. Pattern Matching is the most popular intrusion detection technology, adopted by most commercial intrusion detection systems. Behavior Modeling is a newer technology that can dynamically adapt the detection algorithms in accordance with the application's behavior. The combination of the two technologies dramatically reduces false positive and false negative alarms. Moreover, a Servlet filter-based Web Agent is used to capture HTTP requests. An isolated Response Module is developed to execute pre-defined actions according to the analysis results. A database provides persistence support for the framework. Also, several simulation experiments are developed to evaluate the efficiency of the detection capability.
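
    A minimal sketch, in Python, of how the two detection approaches could be combined: a signature list for pattern matching alongside a simple per-parameter length profile standing in for behavior modeling. The signature list, the feature choice, and the thresholds are illustrative assumptions, not the thesis's actual algorithms.

        # Sketch: signature matching plus a simple behavioral length profile.
        import re
        import statistics

        SIGNATURES = [re.compile(p, re.IGNORECASE) for p in (
            r"union\s+select",    # SQL injection fragment
            r"<script\b",         # reflected XSS attempt
            r"\.\./\.\./",        # path traversal
        )]

        class BehaviorModel:
            """Learns a per-parameter value-length profile from known-good traffic."""
            def __init__(self):
                self.lengths = {}  # parameter name -> list of observed lengths

            def train(self, param, value):
                self.lengths.setdefault(param, []).append(len(value))

            def is_anomalous(self, param, value, k=3.0):
                seen = self.lengths.get(param)
                if not seen or len(seen) < 2:
                    return False   # not enough data to judge
                mu = statistics.mean(seen)
                sigma = statistics.stdev(seen) or 1.0
                return abs(len(value) - mu) > k * sigma

        def inspect(query_params, model):
            """Return per-parameter alerts with the evidence that triggered them."""
            alerts = []
            for param, value in query_params.items():
                matched = any(sig.search(value) for sig in SIGNATURES)
                anomalous = model.is_anomalous(param, value)
                if matched or anomalous:
                    alerts.append((param, matched, anomalous))
            return alerts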

    Resilient and Trustworthy Dynamic Data-driven Application Systems (DDDAS) Services for Crisis Management Environments

    Future crisis management systems need resilient and trustworthy infrastructures to quickly develop reliable applications and processes, and to ensure end-to-end security, trust, and privacy. Due to the multiplicity and diversity of involved actors, the volumes of data, and the heterogeneity of shared information, crisis management systems tend to be highly vulnerable and subject to unforeseen incidents. As a result, the dependability of crisis management systems can be at risk. This paper presents a cloud-based resilient and trustworthy infrastructure (known as rDaaS) to quickly develop secure crisis management systems. The rDaaS integrates the Dynamic Data-Driven Application Systems (DDDAS) paradigm into a service-oriented architecture over cloud technology and provides a set of resilient DDDAS-As-A-Service (rDaaS) components to build secure and trusted adaptable crisis processes. The rDaaS also ensures resilience and security by obfuscating the execution environment and applying Behavior Software Encryption and Moving Target Defense. A simulation environment for a nuclear plant crisis management case study illustrates how to build resilient and trusted crisis response processes.

    Improving intrusion detection systems using data mining techniques

    Recent surveys and studies have shown that cyber-attacks have caused a lot of damage to organisations, governments, and individuals around the world. Although developments are constantly occurring in the computer security field, cyber-attacks still cause damage as they are developed and evolved by hackers. This research looked at some industrial challenges in the intrusion detection area. It identified two main challenges: the first is that signature-based intrusion detection systems such as SNORT lack the capability to detect attacks with new signatures without human intervention; the other relates to multi-stage attack detection, where signature-based systems have been found to be inefficient. The novelty of this research lies in developing methodologies that tackle these challenges. The first challenge was handled by developing a multi-layer classification methodology. The first layer is based on a decision tree, while the second layer is a hybrid module that uses two data mining techniques: neural networks and fuzzy logic. The second layer tries to detect new attacks when the first fails to detect them. This system detects attacks with new signatures and then updates the SNORT signature holder automatically, without any human intervention. The obtained results show a high detection rate for attacks with new signatures; however, the false positive rate needs to be lowered. The second challenge was approached by evaluating IP information using fuzzy logic. This approach looks at the identity of the participants in the traffic, rather than the sequence and contents of the traffic. The results show that this approach can help predict attacks at very early stages in some scenarios; however, combining it with an approach that looks at the sequence and contents of the traffic, such as event correlation, achieves better performance than either approach individually.
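
    A rough sketch of the layered idea, assuming scikit-learn is available: a decision tree screens traffic first, and only low-confidence cases fall through to a second model. The MLP standing in for the neural-network/fuzzy hybrid and the 0.9 confidence cutoff are assumptions for illustration, not the paper's configuration.

        # Sketch: two-layer classification cascade (tree first, hybrid second).
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neural_network import MLPClassifier

        def train_layers(X, y):
            layer1 = DecisionTreeClassifier(max_depth=8).fit(X, y)
            # MLP stands in here for the neural-network/fuzzy hybrid module
            layer2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
            return layer1, layer2

        def classify(x, layer1, layer2, threshold=0.9):
            x = np.asarray(x).reshape(1, -1)
            confidence = layer1.predict_proba(x).max()
            if confidence >= threshold:      # layer 1 is confident enough
                return layer1.predict(x)[0]
            return layer2.predict(x)[0]      # defer to the second layer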

    Anomaly Based Intrusion Detection and Artificial Intelligence


    Applying Machine Learning to Cyber Security

    Intrusion Detection Systems (IDS) are nowadays a very important part of a system. In recent years many methods have been proposed to implement this kind of security measure against cyber attacks, including ones based on Machine Learning and Data Mining. In this work we discuss in detail the family of anomaly-based IDSs, which are able to detect never-before-seen attacks, paying particular attention to adherence to the FAIR principles. These principles include the Accessibility and Reusability of software. Moreover, as the purpose of this work is to assess the state of the art, we have selected three approaches according to their reproducibility and compared their performance in a common experimental setting. Lastly, a real-world use case has been analyzed, resulting in the proposal of an unsupervised ML model for pre-processing and analyzing web server logs. The proposed solution uses clustering and outlier detection techniques to detect attacks in an unsupervised way.
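
    One way such an unsupervised pipeline could look, as a sketch: simple numeric features per log line, DBSCAN clustering, and noise points treated as candidate attacks. The feature set and the DBSCAN parameters are illustrative assumptions rather than the paper's actual design.

        # Sketch: unsupervised web-log anomaly detection via clustering.
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import DBSCAN

        def featurize(line):
            # crude per-request features: length, special-char count, digit ratio
            specials = sum(line.count(c) for c in "<>'\";%")
            digits = sum(ch.isdigit() for ch in line)
            return [len(line), specials, digits / max(len(line), 1)]

        def detect(log_lines, eps=0.8, min_samples=5):
            X = np.array([featurize(l) for l in log_lines])
            X = StandardScaler().fit_transform(X)
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
            # DBSCAN labels noise points -1: these are the outlier candidates
            return [line for line, lab in zip(log_lines, labels) if lab == -1]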

    Multi-Source Data Fusion for Cyberattack Detection in Power Systems

    Cyberattacks can cause a severe impact on power systems unless detected early. However, accurate and timely detection in critical infrastructure systems presents challenges, e.g., due to zero-day vulnerability exploitations and the cyber-physical nature of the system, coupled with the need for high reliability and resilience of the physical system. Conventional rule-based and anomaly-based intrusion detection system (IDS) tools are insufficient for detecting zero-day cyber intrusions in industrial control system (ICS) networks. Hence, in this work, we show that fusing information from multiple data sources can help identify cyber-induced incidents and reduce false positives. Specifically, we present how to recognize and address the barriers that can prevent the accurate use of multiple data sources for fusion-based detection. We perform multi-source data fusion for training an IDS in a cyber-physical power system testbed, where we collect cyber- and physical-side data from multiple sensors emulating the real-world data sources that would be found in a utility, and synthesize these into features for algorithms to detect intrusions. Results are presented using the proposed data fusion application to infer False Data Injection and Command Injection-based Man-in-the-Middle (MiTM) attacks. Post collection, the data fusion application performs a time-synchronized merge and extracts features, followed by pre-processing such as imputation and encoding, before training supervised, semi-supervised, and unsupervised learning models to evaluate the performance of the IDS. A major finding is the improvement of detection accuracy through the fusion of features from the cyber, security, and physical domains. Additionally, we observed that the co-training technique performs on par with supervised learning methods when fed with our features.
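
    A sketch of the fusion steps named above (time-synchronized merge, imputation, encoding), assuming pandas DataFrames with a datetime "ts" column. The column names and the 100 ms merge tolerance are illustrative assumptions, not the paper's actual schema.

        # Sketch: fuse cyber-side and physical-side sensor frames on time.
        import pandas as pd

        def fuse(cyber_df, phys_df):
            # an as-of merge requires both frames sorted by timestamp
            cyber_df = cyber_df.sort_values("ts")
            phys_df = phys_df.sort_values("ts")
            merged = pd.merge_asof(cyber_df, phys_df, on="ts",
                                   tolerance=pd.Timedelta("100ms"),
                                   direction="nearest")
            # simple imputation: forward-fill gaps left by unmatched rows
            merged = merged.ffill()
            # encode a categorical protocol column, if present, for the learners
            if "protocol" in merged:
                merged = pd.get_dummies(merged, columns=["protocol"])
            return merged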

    Geochemical exploration and environmental impact assessment in the Portuguese sector of the Iberian Pyrite Belt

    ABSTRACT: This work intends to briefly report the history and application of geochemical exploration techniques in the Iberian Pyrite Belt (IPB). The use of geochemistry in the IPB for exploration purposes started in the 1950s. Together with geophysics, the soil geochemical exploration surveys performed over several decades were responsible for important discoveries such as the Carrasco and Feitais ore-bodies. However, the continuous development of analytical methods and the progress in data processing/modelling led to significant changes in the planning of sampling surveys and their specific objectives, as well as in the accuracy of geochemical anomaly definition and interpretation. As a consequence, the number of samples involved in each survey was significantly reduced, but the range of chemical elements analysed, with improved detection limits, was considerably extended; additionally, geochemical anomalies were better resolved. Notwithstanding this evolution, data obtained in early soil geochemical surveys (notably by the Serviço de Fomento Mineiro) are still useful in the development of preliminary approaches at a regional scale. Over the years, many studies were made for exploration and environmental assessments, the most relevant of which are reported in this chapter. Natural distributions of chemical elements were also identified in these studies as background (if pristine conditions are present) or baseline (depending on how disturbed the area covered by the sampling survey is) values. A large part of the IPB was, and still is, subject to poly-metallic mineral exploration or mining, and it is also the focus of environmental evaluation and/or remediation projects in particular areas that, having been the target of long-lasting human intervention, represent paradigmatic case-study examples. The exploration and exploitation works carried out by national and foreign private companies were, and still are, very important for innovative achievements in the IPB, along with copious contributions from the Portuguese R&D public institutions. Presently, LNEG possesses a vast quantity of geochemical data that can be provided to companies wishing to start their activity in the IPB; some of these datasets are compiled into a unique integrative map, also presented in this work. Stream-sediment geochemistry, hydrogeochemistry and lithogeochemistry (of outcropping rock and drill-core samples) also represent important sources of geochemical data in regional or detailed studies over specific target areas in the IPB. However, these techniques are beyond the scope of the present paper, which aims to provide a general overview of the importance of soil geochemistry studies in the current knowledge of the IPB.
    RESUMO: Geochemistry is addressed here through the history of its use in the Iberian Pyrite Belt (IPB) and its influence on the evolution of knowledge about the mineral resources, the natural distribution of chemical elements, and the environmental assessment of this important metallogenetic province. Over time, increasingly precise analytical techniques were used in the IPB to determine the concentration of a growing number of chemical elements. Geochemical exploration campaigns thus became progressively more expensive, leading to a reduction in the number of samples collected and analysed in each survey.
    Consequently, at the beginning of the 1990s, the use of dense soil-sampling grids, often laid out in a rectangular pattern, was abandoned as routine practice in strategic and tactical exploration campaigns carried out by the State services dedicated to this type of study, namely the Serviço de Fomento Mineiro (SFM). It is recognized, however, that even with a reduced number of chemical elements and low analytical resolution, the high sampling density combined with various geophysical techniques played a crucial role in the discovery of massive sulphide deposits in the IPB, such as the Carrasco and Feitais ore-bodies at Aljustrel. At present, exploration activities are mostly carried out by mining companies under prospecting contracts granted by the Portuguese State. The Laboratório Nacional de Energia e Geologia (LNEG) is the current custodian of a vast body of geochemical information, notably the large volume of data produced by the SFM and by companies. LNEG's databases are frequently requested for the re-evaluation of strategic sectors of the province and, in some cases, for data reprocessing and the re-analysis of physical samples held in the archive. Geochemical studies carried out in specific areas of the IPB are summarized in this article, addressing the importance of multivariate and multifractal geostatistical mapping of geochemical data, in addition to contributing to the definition of the natural background concentrations of the metallic elements of economic interest; that is, seeking objective criteria useful for separating geochemical background, baseline level, and anomaly. All these studies reveal that the formations of the Volcano-Sedimentary Complex (CVS - Upper Devonian to Lower Carboniferous) are sources of metals such as Cu, Zn and Pb, with a possible additional contribution from the metasediment sequences belonging to the Phyllite-Quartzite Group (Middle to Upper Devonian) and the Chança Group (Upper Devonian). After a period of intense mineral exploration and prospecting lasting until the end of the 1990s, about a decade and a half of slowdown of this activity followed in Europe; in the IPB it was gradually replaced by environmental diagnostic studies responding to new social and political concerns. Some of these studies are also summarized in this chapter, highlighting those that contributed to the identification and characterization of the main mining centres of the IPB, which generated large volumes of mining waste and significant acid drainage. Also noteworthy are the sites of the province where mining activity went on for long periods (e.g. S. Domingos, Aljustrel, Lousal and Caveira), at times when environmental impact was not among the social, political and economic concerns of mining companies and regulators. At these same sites, very recently, in response to the joint need to treat and valorize waste and to safeguard the security of supply of mineral raw materials in Europe (reducing its external dependence and fostering its economic growth), further geochemical studies have been carried out.
    These aim to identify new opportunities and markets for historical mining waste, considering it a secondary source of raw materials that sometimes contains accessory quantities of scarce and valuable metals, some of which are especially important in the manufacture of "high-technology" components. Geochemical exploration is not restricted to soil geochemistry, although the present article is entirely dedicated to it, as it represents a general overview of the work carried out in the IPB over more than half a century.

    Detection of attack-targeted scans from the Apache HTTP Server access logs

    A web application can be visited for different purposes. A web site may be visited by a regular user as a normal (natural) visit, viewed by crawlers, bots, spiders, etc. for indexing purposes, or exploratorily scanned by malicious users prior to an attack. An attack-targeted web scan can be viewed as a phase of a potential attack, and detecting it can reveal more attacks than traditional detection methods. In this work, we propose a method to detect attack-oriented scans and to distinguish them from other types of visits. In this context, we use the access log files of Apache (or IIS) web servers and try to determine attack situations through examination of past data. In addition to web scan detection, we include a rule set to detect SQL Injection and XSS attacks. Our approach has been applied to sample data sets, and the results have been analyzed in terms of performance measures to compare our method against other commonly used detection techniques. Furthermore, various tests have been run on log samples from real systems. Lastly, several suggestions for further development are also discussed.
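
    A small sketch of the log-based detection idea: parse Apache combined-format access-log lines with a regular expression and apply a rule set for SQL Injection and XSS indicators. The rules shown are illustrative examples, not the paper's actual rule set.

        # Sketch: rule-based scan of Apache access-log lines for SQLi/XSS.
        import re

        LOG_RE = re.compile(
            r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
            r'"(?P<method>\S+) (?P<url>\S+)[^"]*" (?P<status>\d{3}) \S+'
        )
        RULES = {
            "sqli": re.compile(r"(union\s+select|or\s+1=1|sleep\()", re.IGNORECASE),
            "xss":  re.compile(r"(<script|onerror\s*=|javascript:)", re.IGNORECASE),
        }

        def scan_log(lines):
            hits = []
            for line in lines:
                m = LOG_RE.match(line)
                if not m:
                    continue                      # skip unparseable lines
                for name, rule in RULES.items():
                    if rule.search(m.group("url")):
                        hits.append((m.group("ip"), name, m.group("url")))
            return hits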

    Anomaly Detection in IoT: Recent Advances, AI and ML Perspectives and Applications

    IoT comprises sensors and other small devices interconnected locally and via the Internet. Typical IoT devices collect data from the environment through sensors, analyze it, and act back on the physical world through actuators. We can find them integrated into home appliances, healthcare, control systems, and wearables. This chapter presents a variety of applications where IoT devices are used for anomaly detection and correction. We review recent advancements in machine/deep learning models and techniques for anomaly detection in IoT networks. We describe significant in-depth applications in various domains: anomaly detection for IoT time-series data, cybersecurity, healthcare, smart cities, and more. The number of connected devices is increasing daily; by 2025, there will be approximately 85 billion IoT devices, spread across manufacturing (40%), medical (30%), and retail and security (20%). This significant shift toward the Internet of Things (IoT) has created opportunities for future IoT applications. The chapter examines the security issues of IoT standards, protocols, and practical operations and identifies the hazards associated with the existing IoT model. It analyzes new security protocols and solutions to mitigate these challenges. This chapter's outcome can benefit the research community by encapsulating the information related to IoT and proposing innovative solutions.
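
    As a concrete baseline for the kind of IoT time-series anomaly detection surveyed here, a sketch that flags sensor readings whose rolling z-score exceeds a threshold; the window size and threshold are illustrative assumptions, not values from the chapter.

        # Sketch: rolling z-score anomaly flags for an IoT sensor stream.
        import numpy as np

        def rolling_zscore_anomalies(readings, window=30, threshold=3.0):
            readings = np.asarray(readings, dtype=float)
            flags = np.zeros(len(readings), dtype=bool)
            for i in range(window, len(readings)):
                past = readings[i - window:i]      # trailing context window
                mu, sigma = past.mean(), past.std() or 1.0
                flags[i] = abs(readings[i] - mu) > threshold * sigma
            return flags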

    Discovering New Vulnerabilities in Computer Systems

    Vulnerability research plays a key role in preventing and defending against malicious computer system exploitations. Driven by a multi-billion dollar underground economy, cyber criminals today tirelessly launch malicious exploitations, threatening every aspect of daily computing. To effectively protect computer systems from devastation, it is imperative to discover and mitigate vulnerabilities before they fall into the offensive parties' hands. This dissertation is dedicated to the research and discovery of new design and deployment vulnerabilities in three very different types of computer systems. The first vulnerability is found in automatic malicious binary (malware) detection systems. Binary analysis, a central piece of technology for malware detection, is divided into two classes: static analysis and dynamic analysis. State-of-the-art detection systems employ both classes of analyses to complement each other's strengths and weaknesses for improved detection results. However, we found that the commonly seen design patterns may suffer from evasion attacks. We demonstrate attacks on the vulnerabilities by designing and implementing a novel binary obfuscation technique. The second vulnerability is located in the design of server system power management. Technological advancements have improved server system power efficiency and facilitated energy-proportional computing. However, the change of power profile makes power consumption subject to the unaudited influence of remote parties, leaving server systems vulnerable to energy-targeted malicious exploits. We demonstrate an energy-abusing attack on a standalone open Web server, measure the extent of the damage, and present a preliminary defense strategy. The third vulnerability is discovered in the application of server virtualization technologies. Server virtualization greatly benefits today's data centers and brings pervasive cloud computing a step closer to the general public. However, the practice of physically co-hosting virtual machines with different security privileges risks introducing covert channels that seriously threaten information security in the cloud. We study the construction of high-bandwidth covert channels via the memory sub-system, and show a practical exploit of cross-virtual-machine covert channels on virtualized x86 platforms.