216 research outputs found

    Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models

    Full text link
    Deep neural networks (DNNs) have achieved tremendous success in many remote sensing (RS) applications, yet they are vulnerable to adversarial perturbations. Unfortunately, current adversarial defense approaches in RS studies usually suffer from performance fluctuation and unnecessary re-training costs because they require prior knowledge of the adversarial perturbations in RS data. To circumvent these challenges, we propose a universal adversarial defense approach for RS imagery (UAD-RS) that uses pre-trained diffusion models to defend common DNNs against multiple unknown adversarial attacks. Specifically, generative diffusion models are first pre-trained on different RS datasets to learn generalized representations in various data domains. After that, a universal adversarial purification framework is developed that uses the forward and reverse processes of the pre-trained diffusion models to purify the perturbations in adversarial samples. Furthermore, an adaptive noise level selection (ANLS) mechanism is built to capture the optimal noise level of the diffusion model, i.e., the one that yields purification results closest to the clean samples according to their Fréchet Inception Distance (FID) in deep feature space. As a result, only a single pre-trained diffusion model is needed for the universal purification of adversarial samples on each dataset, which significantly alleviates re-training efforts and maintains high performance without prior knowledge of the adversarial perturbations. Experiments on four heterogeneous RS datasets for scene classification and semantic segmentation verify that UAD-RS outperforms state-of-the-art adversarial purification approaches, providing a universal defense against seven common adversarial perturbations. Code and the pre-trained models are available online (https://github.com/EricYu97/UAD-RS).
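
    The adaptive noise level selection step can be summarized as a small search loop: purify the adversarial batch at several candidate noise levels and keep the level whose output is closest, in FID, to clean reference features. Below is a minimal, hypothetical sketch of that loop; the denoiser, the feature extractor and all names are placeholders and not the UAD-RS implementation.

        # Hypothetical sketch of diffusion-based purification with adaptive noise
        # level selection (ANLS); the "denoiser" stands in for a pre-trained
        # diffusion model's reverse process and is NOT the UAD-RS code.
        import numpy as np
        from scipy import linalg

        rng = np.random.default_rng(0)

        def frechet_distance(feat_a, feat_b):
            # FID-style distance between two sets of deep features (rows = samples).
            mu_a, mu_b = feat_a.mean(0), feat_b.mean(0)
            cov_a = np.cov(feat_a, rowvar=False)
            cov_b = np.cov(feat_b, rowvar=False)
            covmean = linalg.sqrtm(cov_a @ cov_b)
            if np.iscomplexobj(covmean):
                covmean = covmean.real
            diff = mu_a - mu_b
            return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

        def purify(x, noise_level, denoiser):
            # Forward process: add Gaussian noise; reverse process: apply the denoiser.
            return denoiser(x + noise_level * rng.normal(size=x.shape))

        def extract_features(x):
            # Placeholder for a deep feature extractor (e.g., Inception features).
            return x.reshape(len(x), -1)[:, :64]

        def adaptive_noise_level_selection(x_adv, x_clean_ref, denoiser, levels):
            # Keep the noise level whose purified output is closest (in FID) to clean data.
            ref_feats = extract_features(x_clean_ref)
            best = min(
                ((t, purify(x_adv, t, denoiser)) for t in levels),
                key=lambda pair: frechet_distance(extract_features(pair[1]), ref_feats),
            )
            return best  # (selected_noise_level, purified_batch)

        if __name__ == "__main__":
            x_adv = rng.normal(size=(32, 3, 8, 8))      # stand-in adversarial batch
            x_ref = rng.normal(size=(32, 3, 8, 8))      # stand-in clean reference batch
            denoiser = lambda z: 0.9 * z                # crude placeholder denoiser
            level, _ = adaptive_noise_level_selection(x_adv, x_ref, denoiser, [0.05, 0.1, 0.2, 0.4])
            print("selected noise level:", level)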

    Toward robust deep neural networks

    Get PDF
    In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of anomalous inputs such as adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty by encouraging diversity in an ensemble of CNNs. To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism that predicts adversarial examples with low confidence while keeping the predictive confidence of clean samples high. In the presence of disagreement in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to a globally fixed detection threshold (τ = 0.5) for identifying adversaries. We analytically justify the role of diversity in our ensemble in mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble to black-box and white-box attacks on several benchmark datasets.
    The second contribution addresses the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how can the many available OOD sets be differentiated with respect to a given in-distribution task so as to select the most appropriate one, which in turn induces a model with a high detection rate on unseen OOD sets? To answer this question, we hypothesize that the level of "protection" of the in-distribution sub-manifolds by each OOD set is a suitable property for differentiating OOD sets. To measure the protection level, we design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN. In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented CNN (A-CNN) and an explicitly calibrated CNN to detect a significantly larger portion of unseen OOD samples when they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set can detect black-box Fast Gradient Sign (FGS) adversarial examples.
    As the third contribution, we investigate more closely the capacity of the A-CNN to detect a wider range of black-box adversaries (not only FGS ones). To increase this capability, we augment its OOD training set with inter-class interpolated samples. We then demonstrate that the A-CNN trained on the most protective OOD set together with the interpolated samples has a consistent detection rate across all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate across all adversary types, particularly unseen ones. We also visually assess the feature space and the input-space decision boundaries of a vanilla CNN and of its augmented counterpart in the presence of adversarial and clean samples. With a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with low risk rates on both clean samples and unusual ones, e.g., adversarial and OOD samples.
    The last contribution presents a use case of the A-CNN for training a robust object detector on a partially labeled dataset, in particular a merged dataset. Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to build a large-scale dataset that covers a wider spectrum of OoIs. Moreover, merging datasets yields a single unified object detector instead of several separate ones, reducing computational and time costs. However, merging datasets, especially from similar contexts, results in many missing-label instances. With the goal of training an integrated robust object detector on a partially labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels for improving the performance of YOLO, a state-of-the-art object detector.
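
    As a rough illustration of the first contribution's detection rule (not the thesis code), the sketch below averages the softmax outputs of a few specialist members and rejects any input whose ensemble confidence falls below the fixed threshold of 0.5; disagreement among diverse members is what pushes adversarial inputs under that threshold.

        # Minimal sketch (not the thesis code) of the diverse-specialists voting idea:
        # average member softmax outputs and reject inputs whose ensemble confidence
        # falls below the fixed global threshold (tau = 0.5).
        import numpy as np

        def ensemble_predict(member_probs, tau=0.5):
            # member_probs: (n_members, n_samples, n_classes) softmax outputs.
            avg = member_probs.mean(axis=0)          # simple, computation-cheap voting
            pred = avg.argmax(axis=1)
            conf = avg.max(axis=1)                   # disagreement lowers this value
            rejected = conf < tau                    # flagged as likely adversarial
            return pred, conf, rejected

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Three hypothetical specialists, five samples, four classes.
            probs = rng.dirichlet(alpha=np.ones(4), size=(3, 5))
            pred, conf, rejected = ensemble_predict(probs)
            print(pred, np.round(conf, 2), rejected)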

    Performance Evaluation of Network Anomaly Detection Systems

    Get PDF
    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) in the scientific community, because any attack or anomaly in the network can greatly affect many domains, such as national security, private data storage, social welfare, and economic issues. Anomaly detection is therefore a broad research area, and many different techniques and approaches for this purpose have emerged over the years. Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. This thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). The approach creates a network profile, called the Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, identifying disparities from the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets, and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information needed to solve them. Through evaluation techniques, the addition of a complementary anomaly detection approach, and comparisons with other methods performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false-alarm generation and detection accuracy for the detection scheme. The observed results seek to contribute to the advance of the state of the art in anomaly detection methods and strategies, aiming to overcome some of the challenges that emerge from the constant growth in complexity, speed, and size of today's large-scale networks. Moreover, the low complexity and agility of the proposed system allow it to be applied to detection in real time.
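
    The profile-based idea can be illustrated with a few lines of linear algebra: learn a low-rank subspace from historical per-interval traffic counts (a DSNSF-like normal profile) and flag intervals whose reconstruction residual exceeds a threshold. The sketch below is illustrative only, with made-up data and thresholds, and is not the PCADS-AD implementation.

        # Illustrative PCA-based volume anomaly detection: a low-rank "normal"
        # profile is learned from historical traffic and intervals with a large
        # reconstruction residual are flagged (made-up data, not PCADS-AD).
        import numpy as np

        def fit_profile(history, n_components=2):
            # history: (n_intervals, n_features), e.g., bits, packets, flows per interval.
            mean = history.mean(axis=0)
            _, _, vt = np.linalg.svd(history - mean, full_matrices=False)
            return mean, vt[:n_components]           # mean + top principal directions

        def residuals(traffic, mean, components):
            # Distance of each observation from the learned normal-behaviour subspace.
            centered = traffic - mean
            reconstruction = centered @ components.T @ components
            return np.linalg.norm(centered - reconstruction, axis=1)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            history = rng.normal(loc=[5e6, 4e3, 300], scale=[5e5, 4e2, 30], size=(288, 3))
            mean, comps = fit_profile(history)
            baseline = residuals(history, mean, comps)
            threshold = baseline.mean() + 3.0 * baseline.std()   # simple volume-anomaly threshold
            today = history[:6].copy()
            today[3] *= 4.0                           # inject a traffic spike into interval 3
            flagged = np.where(residuals(today, mean, comps) > threshold)[0]
            print("anomalous intervals:", flagged)    # expected to include interval 3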

    Cyber Security

    Get PDF
    This open access book constitutes the refereed proceedings of the 18th China Annual Conference on Cyber Security, CNCERT 2022, held in Beijing, China, in August 2022. The 17 papers presented were carefully reviewed and selected from 64 submissions. The papers are organized according to the following topical sections: data security; anomaly detection; cryptocurrency; information security; vulnerabilities; mobile internet; threat intelligence; text recognition.

    Application of ArtiïŹcial Intelligence Approaches in the Flood Management Process for Assessing Blockage at Cross-Drainage Hydraulic Structures

    Get PDF
    Floods are the most recurrent, widespread and damaging natural disasters, and they are expected to become even more devastating because of global warming. Blockage of cross-drainage hydraulic structures (e.g., culverts, bridges) by flood-borne debris is an influential factor that usually results in reduced hydraulic capacity, diverted flows, damaged structures and downstream scouring. Australia is among the countries adversely impacted by blockage issues (e.g., the 1998 floods in Wollongong, the 2007 floods in Newcastle). In this context, Wollongong City Council (WCC), under the Australian Rainfall and Runoff (ARR), investigated the impact of blockage on floods and proposed guidelines to consider blockage in the design process for the first time. However, the existing WCC guidelines are based on various assumptions (i.e., visual inspections as representative of hydraulic behaviour, post-flood blockage as representative of peak floods, blockage remaining constant during the whole flooding event) that are not supported by scientific research and have been criticised by hydraulic design engineers. This suggests the need to perform detailed investigations of blockage from both visual and hydraulic perspectives, in order to develop quantifiable relationships and incorporate blockage into the design guidelines of hydraulic structures. However, because of the complex nature of blockage as a process and the lack of blockage-related data from actual floods, conventional numerical-modelling-based approaches have not achieved much success.
    The research in this thesis applies artificial intelligence (AI) approaches to assess blockage at cross-drainage hydraulic structures, motivated by the recent success of AI in addressing complex real-world problems (e.g., scour depth estimation and flood inundation monitoring). The research has been carried out in three phases: (a) literature review, (b) hydraulic blockage assessment, and (c) visual blockage assessment. The first phase investigates the use of computer vision in the flood management domain and provides context for blockage. The second phase investigates hydraulic blockage using lab-scale experiments and the implementation of multiple machine learning approaches on datasets collected from lab experiments (i.e., the Hydraulics-Lab Dataset (HD) and the Visual Hydraulics-Lab Dataset (VHD)). The artificial neural network (ANN) and end-to-end deep learning approaches were the top performers among the implemented approaches and demonstrated the potential of learning-based approaches in addressing blockage issues. The third phase assesses visual blockage at culverts using deep learning classification, detection and segmentation approaches for two types of visual assessments (i.e., blockage status classification and percentage visual blockage estimation). Firstly, a range of existing convolutional neural network (CNN) image classification models are implemented and compared using visual datasets (i.e., Images of Culvert Openings and Blockage (ICOB), VHD, and Synthetic Images of Culverts (SIC)), with the aim of automating the manual visual blockage classification of culverts. The Neural Architecture Search Network (NASNet) model achieved the best classification results among those implemented. Furthermore, the study identified background noise and simplified labelling criteria as two factors contributing to the degraded performance of existing CNN models for blockage classification.
    To address the background clutter issue, a detection-classification pipeline is proposed, which achieved improved visual blockage classification performance. The proposed pipeline has been deployed using edge computing hardware for blockage monitoring of actual culverts. The role of synthetic data (i.e., SIC) in the performance of culvert opening detection is also investigated. Secondly, an automated segmentation-classification deep learning pipeline is proposed to estimate the percentage of visual blockage at circular culverts in order to better prioritise culvert maintenance. The AI solutions proposed in this thesis are integrated into a blockage assessment framework, designed to be deployed through edge computing to monitor, record and assess blockage at cross-drainage hydraulic structures.
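
    The percentage visual blockage estimation in the second pipeline can be reduced to a simple mask ratio once a segmentation step has produced an opening mask and a debris mask. The snippet below is a hypothetical illustration of that final step only; the masks, threshold and status labels are assumptions, not the thesis pipeline.

        # Hypothetical illustration of the final percentage-blockage step: given
        # boolean masks for the culvert opening and for debris (assumed to come
        # from a segmentation model), compute the share of the opening covered.
        import numpy as np

        def percentage_visual_blockage(opening_mask, debris_mask):
            opening_area = opening_mask.sum()
            if opening_area == 0:
                return 0.0
            blocked = np.logical_and(opening_mask, debris_mask).sum()
            return 100.0 * blocked / opening_area

        def blockage_status(percentage, threshold=33.0):
            # Simplified two-class labelling criterion (an assumption, not the thesis rule).
            return "blocked" if percentage >= threshold else "clear"

        if __name__ == "__main__":
            opening = np.zeros((64, 64), dtype=bool)
            opening[16:48, 16:48] = True              # synthetic culvert opening
            debris = np.zeros_like(opening)
            debris[16:32, 16:48] = True               # debris covering the upper half
            pct = percentage_visual_blockage(opening, debris)
            print(f"{pct:.0f}% visual blockage -> {blockage_status(pct)}")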

    Design of a Controlled Language for Critical Infrastructures Protection

    Get PDF
    We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). The project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications, and plain e-mail. We explore the application of traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an analogous effort carried out during the sixties in the field of nuclear science, known as the Euratom Thesaurus.

    Enhancing Computer Network Security through Improved Outlier Detection for Data Streams

    Get PDF
    Over the past couple of years, machine learning methods - especially Outlier Detection (OD) ones - have become anchored in the cyber security field to detect network-based anomalies rooted in novel attack patterns. Due to the steady increase of high-volume, high-speed, and high-dimensional Streaming Data (SD), for which ground-truth information is not available, detecting anomalies in real-world computer networks has become a more and more challenging task. Efficient detection schemes for networked, embedded devices need to be fast and memory-constrained, and must be capable of dealing with concept drift when it occurs. The aim of this thesis is to enhance computer network security through improved OD for data streams, in particular SD, in order to achieve cyber resilience, which ranges from the detection and analysis of security-relevant incidents, e.g., novel malicious activity, to the reaction to them. To this end, four major contributions are proposed, which have been published or submitted as journal articles.
    First, a research gap in unsupervised Feature Selection (FS) for improving off-the-shelf OD methods in data streams is filled by proposing Unsupervised Feature Selection for Streaming Outlier Detection, denoted as UFSSOD. A generic concept is derived that shows two application scenarios of UFSSOD in conjunction with online OD algorithms. Extensive experiments have shown that UFSSOD, as an online-capable algorithm, achieves results comparable to a competitor trimmed for OD. Second, a novel unsupervised online OD framework called Performance Counter-Based iForest (PCB-iForest) is introduced, which is general enough to incorporate any ensemble-based online OD method to function on SD. Two variants based on the classic iForest are integrated. Extensive experiments on 23 multi-disciplinary, security-related real-world datasets revealed that PCB-iForest clearly outperformed state-of-the-art competitors in 61% of cases and achieved even more promising results in terms of the trade-off between classification performance and computational cost. Third, a framework called Streaming Outlier Analysis and Attack Pattern Recognition, denoted as SOAAPR, is introduced that, in contrast to the state of the art, is able to process the output of various online unsupervised OD methods in a streaming fashion to extract information about novel attack patterns. Three different privacy-preserving, fingerprint-like signatures are computed from the clustered set of correlated alerts by SOAAPR, which characterize and represent the potential attack scenarios with respect to their communication relations, their manifestation in the data's features, and their temporal behavior. The evaluation on two popular datasets shows that SOAAPR can compete with an offline competitor in terms of alert correlation and outperforms it significantly in terms of processing time. Moreover, in most cases all three types of signatures seem to reliably characterize attack scenarios, in the sense that similar ones are grouped together.
    Fourth, an Uncoupled Message Authentication Code algorithm - Uncoupled MAC - is presented, which builds a bridge between cryptographic protection and Intrusion Detection Systems (IDSs) for network security. It secures network communication (authenticity and integrity) through a cryptographic scheme with layer-2 support via uncoupled message authentication codes and, as a side effect, also provides IDS functionality, raising alarms based on violations of Uncoupled MAC values. Through a novel self-regulation extension, the algorithm adapts its sampling parameters based on the detection of malicious activity. The evaluation in a virtualized environment clearly shows that the detection rate increases over runtime for different attack scenarios, even covering scenarios in which intelligent attackers try to exploit the weaknesses of sampling.
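
    As a rough illustration of ensemble-based OD on streaming data (not PCB-iForest itself, which additionally scores and swaps ensemble members via performance counters), the sketch below refits an isolation forest on a sliding window so that the detector can follow concept drift.

        # Simplified sliding-window sketch of ensemble-based streaming outlier
        # detection (illustrative only; not the PCB-iForest algorithm).
        import numpy as np
        from sklearn.ensemble import IsolationForest

        def stream_outlier_scores(stream, window=200, retrain_every=100):
            # Yields (index, score); lower scores are more anomalous. The model is
            # periodically refit on the latest window to follow concept drift.
            model, buffer = None, []
            for i, x in enumerate(stream):
                buffer.append(x)
                if len(buffer) >= window and (model is None or i % retrain_every == 0):
                    model = IsolationForest(n_estimators=50, random_state=0).fit(buffer[-window:])
                if model is not None:
                    yield i, float(model.decision_function([x])[0])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            normal = rng.normal(size=(500, 4))
            attack = rng.normal(loc=6.0, size=(5, 4))     # injected outliers at the end
            scores = dict(stream_outlier_scores(np.vstack([normal, attack])))
            print(sorted(scores, key=scores.get)[:5])     # indices of the most anomalous points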

    Assessing the Role of Critical Value Factors (CVFs) on Users’ Resistance of Urban Search and Rescue Robotics

    Get PDF
    Natural and man-made disasters have brought urban search and rescue (USAR) robots to the technology forefront as a means of providing additional support for search and rescue workers. The loss of life among victims and rescue workers necessitates a wider acceptance of this assistive technology. Disasters such as Hurricane Harvey in 2017, Hurricane Sandy in 2012, the 2012 United States tornadoes that devastated 17 states, the 2011 Australian floods, the 2011 Japan and 2010 Haiti earthquakes, the 2010 West Virginia coal mine explosions, the 2009 typhoon-caused mudslides in Taiwan, the 2005 Hurricane Katrina, the 2001 collapse of the World Trade Center, the 1995 Oklahoma City bombing, and the 1995 Kobe, Japan earthquake all benefited from the use of USAR. While there has been a push for the use of USAR robots in disasters, user resistance to such technology is still significantly understudied. This study applied a mixed quantitative and qualitative approach to identify important system characteristics and critical value factors (CVFs) that contribute to team members' resistance to using such technology. The population for this study comprised 2,500 USAR team members from the Houston Professional Fire Fighters Association (HPFFA), with an expected sample size of approximately 250 respondents. The main goal of the quantitative portion was to examine the system characteristics and CVFs associated with USAR that contribute to team members' resistance to using such technology. Furthermore, the study utilized multivariate linear regression (MLR) and analysis of covariance (ANCOVA) to determine if, and to what extent, CVFs and computer self-efficacy (CSE) interact to influence USAR team members' resistance to using such technology. The study also tested for significant differences in CVFs, CSE, and resistance to use based on age, gender, prior experience with USAR events, years of USAR experience, and organizational role. The contribution of this study is to reduce USAR team members' resistance to using such technology, in an effort to minimize risk to USAR team members while maintaining their lifesaving capability.
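
    For the interaction analysis, a regression with a CVF x CSE product term is one standard way to test whether the two factors jointly influence resistance. The snippet below is a hypothetical illustration on simulated data; the variable names, effect sizes and model form are assumptions, not the study's instrument or results.

        # Hypothetical illustration (simulated data, not the study's analysis) of
        # testing a CVF x CSE interaction effect on resistance with OLS regression.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 250                                           # roughly the expected sample size
        df = pd.DataFrame({"cvf": rng.normal(size=n), "cse": rng.normal(size=n)})
        # Simulated outcome with a small interaction effect, for illustration only.
        df["resistance"] = (
            2.0 - 0.5 * df.cvf - 0.3 * df.cse + 0.2 * df.cvf * df.cse
            + rng.normal(scale=0.5, size=n)
        )

        model = smf.ols("resistance ~ cvf * cse", data=df).fit()   # main effects + cvf:cse
        print(model.summary().tables[1])                  # coefficient table incl. interaction term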