35 research outputs found

    A Survey on Intrusion Detection in Wireless Sensor Networks

    ABSTRACT In recent years, applications based on Wireless Sensor Networks (WSNs) have grown very fast, in areas including agriculture, healthcare, the military, hospitality management, mobile systems, and many others. These networks are therefore very important, and securing them against various attacks is now a major issue in WSN applications. To stop these attacks, or to enhance the security of WSN systems, various intrusion detection policies have been developed to date to detect nodes that are not working normally. Of the various detection techniques, three major categories are explored in this paper: anomaly detection, misuse detection, and specification-based detection. This review discusses various attacks on Wireless Sensor Networks and existing intrusion detection techniques for detecting compromised nodes. The paper also provides a brief discussion of the characteristics of Wireless Sensor Networks and the classification of attacks.

    Real-Time Machine Learning for Quickest Detection

    Safety-critical Cyber-Physical Systems (CPS) require real-time machine learning for control and decision making. One promising solution is to use deep learning to discover useful patterns for event detection from heterogeneous data. However, deep learning algorithms encounter challenges in CPS with assurability requirements: 1) decision explainability, 2) real-time and quickest event detection, and 3) time-efficient incremental learning. To address these obstacles, I developed a real-time Machine Learning Framework for Quickest Detection (MLQD). Specifically, I first propose the zero-bias neural network, which removes decision bias and preferences from regular neural networks and provides an interpretable decision process. Second, I discover the latent-space characteristics of the zero-bias neural network and a method to mathematically convert a Deep Neural Network (DNN) classifier into a performance-assured binary abnormality detector. In this way, I can seamlessly integrate the deep neural networks' data-processing capability with Quickest Detection (QD) and provide a real-time sequential event detection paradigm. Thirdly, I discover that a critical factor impeding the incremental learning of neural networks is concept interference (confusion) in latent space, prove that minimizing interference requires the concept representation vectors (class fingerprints) within the latent space to be organized orthogonally, and invent a new incremental learning strategy based on these findings, enabling deep neural networks in CPS to evolve efficiently without retraining. All my algorithms are evaluated on real-world applications: ADS-B (Automatic Dependent Surveillance-Broadcast) signal identification and spoofing detection in the aviation communication system. Finally, I discuss the current trends in MLQD and conclude this dissertation by presenting future research directions and applications.
In summary, the innovations of this dissertation are as follows: i) I propose the zero-bias neural network, which provides transparent latent-space characteristics, and apply it to solve the wireless device identification problem. ii) I discover and prove the orthogonal memory organization mechanism in artificial neural networks and apply this mechanism to time-efficient incremental learning. iii) I discover and mathematically prove the converging point theorem, with which we can predict the latent-space topological characteristics and estimate the topological maturity of neural networks. iv) I bridge the gap between machine learning and quickest detection with assurable performance.
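The orthogonal class-fingerprint mechanism in (ii) can be illustrated with a minimal sketch (not the dissertation's implementation): hypothetical unit fingerprints for three devices, cosine-similarity matching with no bias term, and a similarity threshold below which an input is flagged abnormal.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical orthogonal class fingerprints in a 3-D latent space.
fingerprints = {
    "device_A": [1.0, 0.0, 0.0],
    "device_B": [0.0, 1.0, 0.0],
    "device_C": [0.0, 0.0, 1.0],
}

def classify(embedding, threshold=0.9):
    """Match an embedding against the fingerprints by cosine similarity
    alone (no bias term); when every match is weak, flag it abnormal."""
    best_cls, best_sim = max(
        ((c, cosine(embedding, f)) for c, f in fingerprints.items()),
        key=lambda p: p[1],
    )
    return best_cls if best_sim >= threshold else "abnormal"

print(classify([0.9, 0.1, 0.0]))  # aligns with device_A's fingerprint
print(classify([0.5, 0.5, 0.5]))  # equally far from all -> abnormal
```

Because orthogonal fingerprints have zero mutual similarity, a strong match to one class cannot simultaneously be a strong match to another, which is the interference-minimizing property the dissertation proves.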

    Literature review on the smart city resources analysis with big data methodologies

    This article provides a systematic literature review on applying different algorithms to municipal data processing, aiming to understand how the data were collected, stored, pre-processed, and analyzed, to compare various methods, and to select feasible solutions for further research. Several algorithms and data types are considered; clustering, classification, correlation, anomaly detection, and prediction algorithms are the most frequently used. As expected, the data are of several types, ranging from sensor data to images. Processing them is a considerable challenge, although several algorithms work very well, such as Long Short-Term Memory (LSTM) for time-series prediction and classification. Open access funding provided by FCT|FCCN (b-on).

    On-line anomaly detection and resilience in classifier ensembles

    Detection of anomalies is a broad field of study, applied in areas such as data monitoring, navigation, and pattern recognition. In this paper we propose two measures to detect anomalous behaviors in an ensemble of classifiers by monitoring their decisions: one based on Mahalanobis distance and another based on information theory. These approaches are useful when an ensemble of classifiers is used and a decision is made by ordinary classifier-fusion methods, while each classifier is devoted to monitoring part of the environment. Upon detection of anomalous classifiers, we propose a strategy that attempts to minimize the adverse effects of faulty classifiers by excluding them from the ensemble. We applied this method to an artificial dataset and to sensor-based human activity datasets, with different sensor configurations and two types of noise (additive and rotational on inertial sensors). We compared our method with two other well-known approaches, the generalized likelihood ratio (GLR) and the One-Class Support Vector Machine (OCSVM), which detect anomalies at the data/feature level, and found that our method is comparable with both. Its advantage over them is that it avoids monitoring raw data or features and only takes into account the decisions made by the classifiers; it is therefore independent of sensor modality and of the nature of the anomaly. On the other hand, we found that OCSVM is very sensitive to the chosen parameters and, furthermore, may react differently to different types of anomalies. In this paper we discuss the application domains that benefit from our method.
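The Mahalanobis-based measure can be sketched as follows. This is an illustrative simplification, not the paper's full decision-monitoring pipeline: it assumes two invented summary statistics per classifier decision (e.g. confidence and agreement with the ensemble), fits a mean and covariance on a window of normal decisions, and flags points far from that distribution.

```python
import math

def fit(points):
    """Sample mean and covariance of a list of 2-D points."""
    n = len(points)
    mu = [sum(p[i] for p in points) / n for i in range(2)]
    cov = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in points) / (n - 1)
            for j in range(2)] for i in range(2)]
    return mu, cov

def mahalanobis_2d(x, mu, cov):
    """Mahalanobis distance of a 2-D point; the 2x2 covariance
    is inverted in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    m = sum(dx[i] * inv[i][j] * dx[j] for i in range(2) for j in range(2))
    return math.sqrt(m)

# Invented window of "normal" decision statistics for one classifier:
# (confidence, agreement-with-ensemble) pairs.
normal = [(0.9, 0.95), (0.85, 0.9), (0.92, 0.97), (0.88, 0.93), (0.91, 0.9)]
mu, cov = fit(normal)

print(mahalanobis_2d((0.9, 0.93), mu, cov))  # near the normal cloud: small
print(mahalanobis_2d((0.2, 0.1), mu, cov))   # far from it: large -> anomalous
```

A classifier whose recent decisions repeatedly exceed a distance threshold would then be excluded from the fusion step, as the exclusion strategy in the abstract describes.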

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains such as national security, private data storage, social welfare, and economic issues. Anomaly detection is therefore a broad research area, and many different techniques and approaches for this purpose have emerged over the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile called the Digital Signature of Network Segment using Flow Analysis (DSNSF), which denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, identifying disparities from the normal traffic trend. The proposed system uses seven traffic-flow attributes: bits, packets, and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information necessary to solve them. Via evaluation techniques, the addition of a different anomaly detection approach, and comparisons with other methods performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false-alarm rates and detection accuracy for the detection schema.
The observed results seek to contribute to the advance of the state of the art in methods and strategies for anomaly detection, aiming to overcome some of the challenges that emerge from the constant growth in complexity, speed, and size of today's large-scale networks, while also providing high-value results for better detection in real time. In addition, the low complexity and agility of the proposed system allow it to be applied to real-time detection.
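A much-simplified sketch of the signature-based volume detection: here a plain per-interval mean/standard-deviation threshold stands in for the PCA-built DSNSF, and the historical bit counts are invented for illustration.

```python
import statistics

# Invented historical bit counts for the same 5-minute interval across
# several previous days: the "digital signature" baseline for that interval.
history = [1200, 1150, 1300, 1250, 1180, 1220, 1270]

def is_volume_anomaly(observed, history, k=3.0):
    """Flag the interval when the observed volume deviates from the
    historical signature by more than k standard deviations."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(observed - mu) > k * sigma

print(is_volume_anomaly(1240, history))  # within the normal trend -> False
print(is_volume_anomaly(5000, history))  # volume spike -> True
```

The actual DSNSF predicts the expected value per interval from a PCA model of historical flows rather than a raw mean, but the thresholding step it feeds is of this form.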

    Collaborative Intrusion Detection in Federated Cloud Environments using Dempster-Shafer Theory of Evidence

    Moving services to the Cloud environment is a trend that has been increasing in recent years, with a constant increase in the sophistication and complexity of such services. Today, even critical infrastructure operators are considering moving their services and data to the Cloud. As Cloud computing grows in popularity, new models are deployed to further the associated benefits. Federated Clouds are one such concept, offering an alternative for companies reluctant to move their data out of house to a Cloud Service Provider (CSP) due to security and confidentiality concerns. The lack of collaboration among different components within a Cloud federation, or among CSPs, for the detection or prevention of attacks is an issue. Because Cloud environments and Cloud federations are large scale, it is essential that any potential protection for these services and data scale alongside the environment and adapt to the underlying infrastructure without performance implications. This thesis presents a novel architecture for collaborative intrusion detection specifically for CSPs within a Cloud federation. Our approach offers a proactive model for Cloud intrusion detection based on the distribution of responsibilities, whereby the responsibility for managing the elements of the Cloud is distributed among several monitoring and brokering nodes, utilising our service-based collaborative intrusion detection ("Security as a Service") methodology. For collaborative intrusion detection, the Dempster-Shafer (D-S) theory of evidence is applied, executing as a fusion node that collects and fuses the information provided by the monitoring entities and takes the final decision regarding a possible attack. This type of detection and prevention helps increase resilience to attacks in the Cloud.
The main novel contribution of this project is that it provides the means by which DDoS attacks are detected within a Cloud federation, enabling an early propagated response to block the attack. This inter-domain cooperation offers holistic security and adds to defence in depth. However, while the utilisation of D-S seems promising, there is an issue with conflicting evidence, which is addressed with an extended two-stage D-S fusion process. The evidence from the research strongly suggests that fusion algorithms can play a key role in autonomous decision-making schemes; however, our experimentation highlights areas in which improvements are needed before they can be fully applied to federated environments.
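Dempster's rule of combination, which the fusion node applies, can be sketched over a two-hypothesis frame {attack, normal}; the mass assignments below are invented for illustration, with "theta" denoting the whole frame (residual uncertainty).

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination over the frame {"attack", "normal"};
    "theta" is the full frame (uncertainty). Products whose hypotheses have
    an empty intersection contribute to conflict, which normalizes the rest."""
    hyps = ["attack", "normal", "theta"]
    combined = {h: 0.0 for h in hyps}
    conflict = 0.0
    for a in hyps:
        for b in hyps:
            mass = m1[a] * m2[b]
            if a == b:
                inter = a
            elif "theta" in (a, b):          # theta intersected with X is X
                inter = b if a == "theta" else a
            else:                            # attack ∩ normal = ∅
                inter = None
            if inter is None:
                conflict += mass
            else:
                combined[inter] += mass
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two hypothetical monitoring nodes, each leaving some mass on theta.
m1 = {"attack": 0.6, "normal": 0.1, "theta": 0.3}
m2 = {"attack": 0.7, "normal": 0.1, "theta": 0.2}
fused = ds_combine(m1, m2)
print(fused)  # fused belief in "attack" exceeds either source alone
```

The conflicting-evidence problem the abstract mentions arises when the `conflict` term grows large, which is why the thesis extends this plain rule into a two-stage fusion process.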

    Machine learning based anomaly detection for industry 4.0 systems.

    This thesis studies anomaly detection in industrial systems using technologies from the Fourth Industrial Revolution (4IR), such as the Internet of Things, Artificial Intelligence, 3D Printing, and Augmented Reality. The goal is to provide tools that can be used in real-world scenarios to detect system anomalies, with the intention of improving production and maintenance processes. The thesis investigates the applicability and implementation of 4IR technology architectures, AI-driven machine learning systems, and advanced visualization tools to support decision-making based on the detection of anomalies. The work covers a range of topics, including the conception of a 4IR system based on a generic architecture, the design of a data acquisition system for analysis and modelling, the creation of ensemble supervised and semi-supervised models for anomaly detection, the detection of anomalies through frequency analysis, and the visualization of associated data using Visual Analytics. The results show that the proposed methodology for integrating anomaly detection systems in new or existing industries is valid, and that combining 4IR architectures, ensemble machine learning models, and Visual Analytics tools significantly enhances the anomaly detection processes for industrial systems. Furthermore, the thesis presents a guiding framework for data engineers and end-users.
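The frequency-analysis detection mentioned above can be sketched as a DFT energy check: flag a signal window whose energy outside the machine's expected dominant frequency bin grows too large. The signals, the expected bin, and the 0.5 energy ratio are assumptions for illustration, not the thesis's actual models.

```python
import cmath
import math

def dft_mag(signal):
    """Naive DFT magnitudes (O(n^2), fine for a short window)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n)]

def spectral_anomaly(signal, expected_bin, ratio=0.5):
    """Flag a window whose energy outside the expected dominant
    frequency bin exceeds `ratio` of the total (DC excluded)."""
    mag = dft_mag(signal)[1:len(signal) // 2 + 1]  # positive freqs, no DC
    total = sum(m * m for m in mag)
    inside = mag[expected_bin - 1] ** 2
    return (total - inside) / total > ratio

n = 64
# A healthy machine vibrating at one known frequency bin...
healthy = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
# ...and a faulty one with a strong extra component at another bin.
faulty = [healthy[t] + 1.5 * math.sin(2 * math.pi * 13 * t / n)
          for t in range(n)]

print(spectral_anomaly(healthy, expected_bin=4))  # False
print(spectral_anomaly(faulty, expected_bin=4))   # True
```

In practice one would use an FFT over sliding windows and a baseline spectrum learned from normal operation, but the decision step reduces to an energy comparison of this kind.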

    Detecting Specific Types of DDoS Attacks in Cloud Environment by Using Anomaly Detection

    ABSTRACT One of the most important benefits of using cloud computing is having on-demand services; accordingly, the method of payment in the cloud environment is pay-per-use. This feature gives rise to a new kind of DDoS attack called Economic Denial of Sustainability (EDoS), in which the customer pays extra to the cloud provider because of the attack. DDoS attacks, and their new EDoS variant, are divided into three categories: 1) bandwidth-consuming attacks, 2) attacks that target specific applications, and 3) connection-layer exhaustion attacks. In this work we propose a novel and inclusive model to precisely detect the different types of DDoS and EDoS attacks by comparing traffic and resource usage in normal and attack situations. Features related to traffic and resource usage under each attack were collected as the metrics of our detection model. In designing the model, we used the metrics related to all three types of attacks, since the features of one kind of attack play an important role in detecting another type. Moreover, to find a change point in resource usage and traffic behavior, we used the CUSUM algorithm. The accuracy of our algorithm was then investigated by comparing its performance with one of the popular previous works. Achieving a higher rate of correct detection proved the high accuracy of the designed algorithm.
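The CUSUM change-point test used in the model can be sketched in a few lines. The one-sided form below accumulates deviations above the normal mean; the traffic samples and the tuning constants (`mu0`, `drift`, `threshold`) are invented for illustration, not the paper's calibrated values.

```python
def cusum(samples, mu0, drift, threshold):
    """One-sided CUSUM: accumulate deviations above the normal mean mu0
    (minus a drift allowance that absorbs ordinary fluctuation) and
    return the first index where the cumulative sum crosses the
    threshold, or None if no change point is found."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mu0 - drift))
        if s > threshold:
            return i
    return None

# Hypothetical requests-per-second samples: normal around 10, then a flood.
traffic = [10, 11, 9, 10, 12, 10, 40, 45, 50, 48]
print(cusum(traffic, mu0=10.0, drift=1.0, threshold=25.0))  # -> 6
```

The drift term keeps small oscillations around `mu0` from accumulating, so the statistic only grows under a sustained shift such as the onset of a bandwidth-consuming attack.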

    Extracting temporal patterns from smart city data

    Double-degree Master's dissertation with DULAY UNIVERSITY. In the modern world, data and information have become powerful instruments of management, business, safety, medicine, and other fields. The most sought-after sciences today are those that allow us to extract valuable knowledge from large volumes of information, and novel data-processing techniques have remained a trend over the last five years, continuing to provide interesting results. This work investigates algorithms and approaches for processing smart-city data, in particular water-consumption data for the city of Bragança, Portugal. Data from the last seven years were processed according to a rigorous methodology comprising five stages: cleaning, preparation, exploratory analysis, identification of patterns, and critical interpretation of the results. The goal was to identify consumption behaviour patterns associated with particular events and to build a forecasting model from the discovered regularities; an exhaustive analysis using multiple methods found no systematic dependencies in this type of data. Finally, a web-based data-visualization tool was developed, providing dashboards for geospatial representation of consumption analytics that are useful in municipal decision-making.