Universal Adversarial Defense in Remote Sensing Based on Pre-trained Denoising Diffusion Models
Deep neural networks (DNNs) have achieved tremendous success in many remote sensing (RS) applications, yet they remain vulnerable to adversarial
perturbations. Unfortunately, current adversarial defense approaches in RS
studies usually suffer from performance fluctuation and unnecessary re-training
costs due to the need for prior knowledge of the adversarial perturbations
among RS data. To circumvent these challenges, we propose a universal
adversarial defense approach in RS imagery (UAD-RS) using pre-trained diffusion
models to defend the common DNNs against multiple unknown adversarial attacks.
Specifically, the generative diffusion models are first pre-trained on
different RS datasets to learn generalized representations in various data
domains. After that, a universal adversarial purification framework is
developed using the forward and reverse process of the pre-trained diffusion
models to purify the perturbations from adversarial samples. Furthermore, an
adaptive noise level selection (ANLS) mechanism is built to capture the optimal
noise level of the diffusion model that can achieve the best purification
results closest to the clean samples according to their Fréchet Inception
Distance (FID) in deep feature space. As a result, only a single pre-trained
diffusion model is needed for the universal purification of adversarial samples
on each dataset, which significantly alleviates the re-training efforts and
maintains high performance without prior knowledge of the adversarial
perturbations. Experiments on four heterogeneous RS datasets regarding scene
classification and semantic segmentation verify that UAD-RS outperforms
state-of-the-art adversarial purification approaches with a universal defense
against seven commonly existing adversarial perturbations. Codes and the
pre-trained models are available online (https://github.com/EricYu97/UAD-RS).
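The adaptive noise level selection step can be sketched as follows: for each candidate noise level, purify the adversarial samples and keep the level whose purified deep-feature statistics are closest, in Fréchet distance, to the clean-data statistics. This is a minimal illustration under stated assumptions, not the UAD-RS implementation; `purify`, `select_noise_level`, and the Gaussian feature statistics are hypothetical stand-ins.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians fitted to deep features (the FID formula)."""
    diff = mu1 - mu2
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):      # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

def select_noise_level(adv_features, clean_mu, clean_cov, purify, levels):
    """Pick the diffusion noise level whose purified features are closest (in FID)
    to the clean-data statistics. `purify` is a hypothetical callable mapping
    (features, level) -> purified features."""
    best_level, best_fid = None, np.inf
    for t in levels:
        f = purify(adv_features, t)
        mu, cov = f.mean(axis=0), np.cov(f, rowvar=False)
        fid = frechet_distance(mu, cov, clean_mu, clean_cov)
        if fid < best_fid:
            best_level, best_fid = t, fid
    return best_level, best_fid
```

In the paper the features would come from an Inception-style network and `purify` from the forward/reverse diffusion process; here both are abstracted away to show only the selection logic.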
Toward robust deep neural networks
In this thesis, our goal is to develop robust and reliable yet accurate learning models, particularly Convolutional Neural Networks (CNNs), in the presence of adversarial examples and Out-of-Distribution (OOD) samples. As the first contribution, we propose to predict adversarial instances with high uncertainty through encouraging diversity in an ensemble of CNNs.
To this end, we devise an ensemble of diverse specialists along with a simple and computationally efficient voting mechanism to predict the adversarial examples with low confidence while keeping the predictive confidence of the clean samples high. In the presence of high entropy in our ensemble, we prove that the predictive confidence can be upper-bounded, leading to a globally fixed threshold over the predictive confidence for identifying adversaries. We analytically justify the role of diversity in our ensemble in mitigating the risk of both black-box and white-box adversarial examples. Finally, we empirically assess the robustness of our ensemble to black-box and white-box attacks on several benchmark datasets. The second contribution aims to address the detection of OOD samples through an end-to-end model trained on an appropriate OOD set. To this end, we address the following central question: how can the many available OOD sets be differentiated with respect to a given in-distribution task so as to select the most appropriate one, which in turn induces a model with a high detection rate on unseen OOD sets? To answer this question, we hypothesize that the "protection" level of in-distribution sub-manifolds by each OOD set is a suitable property for differentiating OOD sets. To measure the protection level, we then design three novel, simple, and cost-effective metrics using a pre-trained vanilla CNN. In an extensive series of experiments on image and audio classification tasks, we empirically demonstrate the ability of an Augmented-CNN (A-CNN) and an explicitly-calibrated CNN to detect a significantly larger portion of unseen OOD samples if they are trained on the most protective OOD set. Interestingly, we also observe that the A-CNN trained on the most protective OOD set (called A-CNN) can also detect black-box Fast Gradient Sign (FGS) adversarial examples.
As the third contribution, we investigate more closely the capacity of the A-CNN for detecting wider types of black-box adversaries. To increase the capability of the A-CNN to detect a larger number of adversaries, we augment its OOD training set with some inter-class interpolated samples. Then, we demonstrate that the A-CNN trained on the most protective OOD set along with the interpolated samples has a consistent detection rate on all types of unseen adversarial examples, whereas training an A-CNN on Projected Gradient Descent (PGD) adversaries does not lead to a stable detection rate on all types of adversaries, particularly the unseen types. We also visually assess the feature space and the decision boundaries in the input space of a vanilla CNN and its augmented counterpart in the presence of adversarial and clean samples. With a properly trained A-CNN, we aim to take a step toward a unified and reliable end-to-end learning model with small risk rates on both clean samples and unusual ones, e.g., adversarial and OOD samples. The last contribution is to show a use-case of the A-CNN for training a robust object detector on a partially-labeled dataset, particularly a merged dataset. Merging various datasets from similar contexts but with different sets of Objects of Interest (OoI) is an inexpensive way to craft a large-scale dataset which covers a larger spectrum of OoIs. Moreover, merging datasets allows achieving a unified object detector, instead of having several separate ones, resulting in the reduction of computational and time costs. However, merging datasets, especially from a similar context, causes many missing-label instances. With the goal of training an integrated robust object detector on a partially-labeled but large-scale dataset, we propose a self-supervised training framework to overcome the issue of missing-label instances in the merged datasets. Our framework is evaluated on a merged dataset with a high missing-label rate. The empirical results confirm the viability of our generated pseudo-labels for enhancing the performance of YOLO, a state-of-the-art object detector.
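The voting idea in the first contribution can be sketched as follows: averaging the predictive distributions of diverse ensemble members drives the confidence on disputed inputs (where members disagree, as on adversarial examples) below a fixed threshold, while agreement on clean samples keeps confidence high. This is a minimal illustration of the mechanism only, not the thesis's specialist ensemble; `ensemble_predict` and its signature are assumptions.

```python
import numpy as np

def ensemble_predict(prob_list, threshold=0.5):
    """Confidence-thresholded voting over an ensemble of classifiers.

    prob_list: list of (n_samples, n_classes) softmax outputs, one per member.
    Averages the predictive distributions; inputs whose averaged confidence
    does not exceed `threshold` are flagged as suspicious (e.g. adversarial).
    """
    avg = np.mean(np.stack(prob_list), axis=0)   # (n_samples, n_classes)
    labels = avg.argmax(axis=1)                  # majority-style prediction
    confidence = avg.max(axis=1)                 # averaged top-class probability
    rejected = confidence <= threshold           # disagreement -> low confidence
    return labels, confidence, rejected
```

When two members each assign 0.9 to different classes, the averaged top probability collapses to 0.5, so the input is rejected under the fixed threshold; unanimous members keep confidence near their individual outputs.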
Performance Evaluation of Network Anomaly Detection Systems
Nowadays, there is a huge and growing concern about security in information and communication
technology (ICT) among the scientific community because any attack or anomaly in
the network can greatly affect many domains such as national security, private data storage,
social welfare, economic issues, and so on. Therefore, the anomaly detection domain is a broad
research area, and many different techniques and approaches for this purpose have emerged
through the years.
Attacks, problems, and internal failures, when not detected early, may severely harm an entire network system. Thus, this thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile, called the Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, identifying disparities from the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets, and number of flows to detect problems, plus source and destination IP addresses and ports to provide the network administrator with the information needed to solve them.
Through evaluation metrics, the addition of a second, distinct anomaly detection approach, and comparisons with other methods using real network traffic data, the results show accurate traffic prediction by the DSNSF, along with encouraging false-alarm and detection-accuracy rates for the detection schema.
The observed results seek to contribute to the advance of the state of the art in methods and strategies for anomaly detection, aiming to overcome challenges that emerge from the constant growth in complexity, speed, and size of today's large-scale networks. Moreover, the low complexity and agility of the proposed system enable its application to detection in real time.
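The PCA-based profile idea can be sketched as follows: historical per-interval traffic vectors (e.g., bits, packets, flows) define a low-dimensional "normal" subspace, and intervals whose reconstruction residual from that subspace is large are flagged as volume anomalies. This is a minimal sketch of the general technique, not the PCADS-AD implementation; the function names and threshold are assumptions.

```python
import numpy as np

def fit_profile(history, n_components=2):
    """Fit a PCA subspace to historical traffic vectors (intervals x features).
    Returns the mean vector and the principal axes of normal behaviour."""
    mean = history.mean(axis=0)
    _, _, vt = np.linalg.svd(history - mean, full_matrices=False)
    return mean, vt[:n_components]

def detect_anomalies(traffic, mean, axes, threshold):
    """Project traffic onto the learned subspace; a large reconstruction
    residual marks a deviation from the predicted normal trend."""
    centered = traffic - mean
    recon = centered @ axes.T @ axes          # reconstruction from the subspace
    residual = np.linalg.norm(centered - recon, axis=1)
    return residual > threshold, residual
```

In practice the threshold would be calibrated on historical residuals (e.g., a high percentile) rather than fixed by hand.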
Cyber Security
This open access book constitutes the refereed proceedings of the 18th China Annual Conference on Cyber Security, CNCERT 2022, held in Beijing, China, in August 2022. The 17 papers presented were carefully reviewed and selected from 64 submissions. The papers are organized according to the following topical sections: data security; anomaly detection; cryptocurrency; information security; vulnerabilities; mobile internet; threat intelligence; text recognition.
Application of Artificial Intelligence Approaches in the Flood Management Process for Assessing Blockage at Cross-Drainage Hydraulic Structures
Floods are the most recurrent, widespread and damaging natural disasters, and are expected to become even more devastating because of global warming. Blockage of cross-drainage hydraulic structures (e.g., culverts, bridges) by flood-borne debris is an influential factor which usually results in reduced hydraulic capacity, diverted flows, structural damage and downstream scouring. Australia is among the countries adversely impacted by blockage issues (e.g., the 1998 floods in Wollongong, the 2007 floods in Newcastle). In this context, Wollongong City Council (WCC), under the Australian Rainfall and Runoff (ARR), investigated the impact of blockage on floods and proposed guidelines to consider blockage in the design process for the first time. However, the existing WCC guidelines are based on various assumptions (i.e., visual inspections as representative of hydraulic behaviour, post-flood blockage as representative of peak floods, blockage remaining constant during the whole flooding event) that are not supported by scientific research and have been criticised by hydraulic design engineers. This suggests the need to perform detailed investigations of blockage from both visual and hydraulic perspectives, in order to develop quantifiable relationships and incorporate blockage into the design guidelines of hydraulic structures. However, because of the complex nature of blockage as a process and the lack of blockage-related data from actual floods, conventional numerical modelling-based approaches have not achieved much success.
The research in this thesis applies artificial intelligence (AI) approaches to assess the blockage at cross-drainage hydraulic structures, motivated by the recent success of AI in addressing complex real-world problems (e.g., scour depth estimation and flood inundation monitoring). The research has been carried out in three phases: (a) literature review, (b) hydraulic blockage assessment, and (c) visual blockage assessment. The first phase investigates the use of computer vision in the flood management domain and provides context for blockage. The second phase investigates hydraulic blockage using lab-scale experiments and the implementation of multiple machine learning approaches on datasets collected from the lab experiments (i.e., the Hydraulics-Lab Dataset (HD) and the Visual Hydraulics-Lab Dataset (VHD)). The artificial neural network (ANN) and end-to-end deep learning approaches were the top performers among those implemented, demonstrating the potential of learning-based approaches in addressing blockage issues. The third phase assesses visual blockage at culverts using deep learning classification, detection and segmentation approaches for two types of visual assessment (i.e., blockage status classification and percentage visual blockage estimation). Firstly, a range of existing convolutional neural network (CNN) image classification models are implemented and compared using visual datasets (i.e., Images of Culvert Openings and Blockage (ICOB), VHD, and Synthetic Images of Culverts (SIC)), with the aim of automating the process of manual visual blockage classification of culverts. The Neural Architecture Search Network (NASNet) model achieved the best classification results among those implemented. Furthermore, the study identified background noise and simplified labelling criteria as two factors contributing to the degraded performance of existing CNN models for blockage classification.
To address the background clutter issue, a detection-classification pipeline is proposed, achieving improved visual blockage classification performance. The proposed pipeline has been deployed using edge computing hardware for blockage monitoring of actual culverts. The role of synthetic data (i.e., SIC) in the performance of culvert opening detection is also investigated. Secondly, an automated segmentation-classification deep learning pipeline is proposed to estimate the percentage of visual blockage at circular culverts to better prioritise culvert maintenance. The AI solutions proposed in this thesis are integrated into a blockage assessment framework, designed to be deployed through edge computing to monitor, record and assess blockage at cross-drainage hydraulic structures.
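The detection-classification idea (detect culvert openings first, then classify only the cropped regions, so background clutter outside the opening is discarded) can be sketched as follows. This is a minimal illustration under stated assumptions: `detector` and `classifier` are hypothetical stand-ins for trained models, not the thesis's networks.

```python
import numpy as np

def detect_then_classify(image, detector, classifier, min_score=0.5):
    """Sketch of a detection-classification pipeline for visual blockage
    assessment. `detector` returns (x0, y0, x1, y1, score) boxes over the
    image; `classifier` maps a cropped region to a blockage label. Only
    confident detections are cropped and classified, discarding background."""
    results = []
    for x0, y0, x1, y1, score in detector(image):
        if score < min_score:
            continue                      # drop low-confidence detections
        crop = image[int(y0):int(y1), int(x0):int(x1)]
        results.append(((x0, y0, x1, y1), classifier(crop)))
    return results
```

In the deployed system the detector and classifier would be CNNs running on edge hardware; the pipeline structure itself is what limits the classifier's input to the opening region.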
Design of a Controlled Language for Critical Infrastructures Protection
We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). This project originates
from the need to coordinate and categorize the communications on CIP at the European level. These communications can be physically
represented by official documents, reports on incidents, informal communications and plain e-mail. We explore the application of
traditional library science tools for the construction of controlled languages in order to achieve our goal. Our starting point is an
analogous work done during the sixties in the field of nuclear science, known as the Euratom Thesaurus.
Enhancing Computer Network Security through Improved Outlier Detection for Data Streams
Over the past couple of years, machine learning methods - especially Outlier Detection (OD) ones - have become anchored to the cyber security field to detect network-based anomalies rooted in novel attack patterns. Due to the steady increase of high-volume, high-speed and high-dimensional Streaming Data (SD), for which ground-truth information is not available, detecting anomalies in real-world computer networks has become a more and more challenging task. Efficient detection schemes applied to networked, embedded devices need to be fast and memory-constrained, and must be capable of dealing with concept drifts when they occur. The aim of this thesis is to enhance computer network security through improved OD for data streams, in particular SD, to achieve cyber resilience, which ranges from detection, through the analysis of security-relevant incidents (e.g., novel malicious activity), to the reaction to them. Therefore, four major contributions are proposed, which have been published or submitted as journal articles. First, a research gap in unsupervised Feature Selection (FS) for the improvement of off-the-shelf OD methods in data streams is filled by proposing Unsupervised Feature Selection for Streaming Outlier Detection, denoted as UFSSOD. A generic concept is then derived that shows two application scenarios of UFSSOD in conjunction with online OD algorithms. Extensive experiments have shown that UFSSOD, as an online-capable algorithm, achieves results comparable to a competitor trimmed for OD. Second, a novel unsupervised online OD framework called Performance Counter-Based iForest (PCB-iForest) is introduced, which, in its generalized form, is able to incorporate any ensemble-based online OD method to function on SD.
Two variants based on classic iForest are integrated. Extensive experiments, performed on 23 different multi-disciplinary and security-related real-world data sets, revealed that PCB-iForest clearly outperformed state-of-the-art competitors in 61% of cases and achieved even more promising results in terms of the trade-off between classification performance and computational cost. Third, a framework called Streaming Outlier Analysis and Attack Pattern Recognition, denoted as SOAAPR, is introduced that, in contrast to the state of the art, is able to process the output of various online unsupervised OD methods in a streaming fashion to extract information about novel attack patterns. Three different privacy-preserving, fingerprint-like signatures are computed by SOAAPR from the clustered set of correlated alerts; they characterize and represent potential attack scenarios with respect to their communication relations, their manifestation in the data's features and their temporal behavior. The evaluation on two popular data sets shows that SOAAPR can compete with an offline competitor in terms of alert correlation and outperforms it significantly in terms of processing time. Moreover, in most cases all three types of signatures seem to reliably characterize attack scenarios, to the effect that similar ones are grouped together. Fourth, an Uncoupled Message Authentication Code algorithm - Uncoupled MAC - is presented, which builds a bridge between cryptographic protection and Intrusion Detection Systems (IDSs) for network security. It secures network communication (authenticity and integrity) through a cryptographic scheme with layer-2 support via uncoupled message authentication codes but, as a side effect, also provides IDS functionality, producing alarms based on the violation of Uncoupled MAC values. Through a novel self-regulation extension, the algorithm adapts its sampling parameters based on the detection of malicious actions on SD.
The evaluation in a virtualized environment clearly shows that the detection rate increases over runtime for different attack scenarios, even covering those in which intelligent attackers try to exploit the downsides of sampling.
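The uncoupled-MAC idea (authentication tags carried by only a sampled subset of frames, with verification failures doubling as intrusion alarms) can be illustrated with standard HMAC primitives. This is a minimal sketch of the concept, not the published self-regulating algorithm; the function names and the tagging convention are assumptions.

```python
import hmac
import hashlib

def mac_tag(key, payload):
    """HMAC-SHA256 tag over a frame payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_sampled(key, frames, tags):
    """Verify uncoupled MAC tags: only sampled frames carry a tag (tags[i] is
    None for unsampled frames). A mismatch on any tagged frame is reported as
    an alarm, giving IDS-like behaviour as a side effect of the crypto check."""
    alarms = []
    for i, (frame, tag) in enumerate(zip(frames, tags)):
        if tag is not None and not hmac.compare_digest(tag, mac_tag(key, frame)):
            alarms.append(i)              # integrity violation -> alarm
    return alarms
```

The self-regulation described in the abstract would then adjust how densely frames are sampled for tagging whenever alarms accumulate; that feedback loop is omitted here.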
Assessing the Role of Critical Value Factors (CVFs) on Users' Resistance of Urban Search and Rescue Robotics
Natural and manmade disasters have brought urban search and rescue (USAR) robots to the technology forefront as a means of providing additional support for search and rescue workers. The loss of life among victims and rescue workers necessitates a wider acceptance of this assistive technology. Disasters such as Hurricane Harvey in 2017, Hurricane Sandy in 2012, the 2012 United States tornadoes that devastated 17 states, the 2011 Australian floods, the 2011 Japan and 2010 Haiti earthquakes, the 2010 West Virginia coal mine explosions, the 2009 typhoon-caused mudslides in Taiwan, the 2001 collapse of the World Trade Center, the 2005 Hurricane Katrina, the 1995 Oklahoma City bombing, and the 1995 Kobe, Japan earthquake all benefited from the use of USAR robots. While there has been a push for the use of USAR robots in disasters, user resistance to such technology remains significantly understudied.
This study applied a mixed quantitative and qualitative approach to identify important system characteristics and critical value factors (CVFs) that contribute to team members' resistance to using such technology. The population for this study comprised 2,500 USAR team members from the Houston Professional Fire Fighters Association (HPFFA), with an expected sample size of approximately 250 respondents.
The main goal of this quantitative study was to examine the system characteristics and CVFs that contribute to USAR team members' resistance to using such technology. These system characteristics and CVFs are associated with USAR. Furthermore, the study utilized multivariate linear regression (MLR) and analysis of covariance (ANCOVA) to determine if, and to what extent, CVFs and computer self-efficacy (CSE) interact to influence USAR team members' resistance to using such technology.
This quantitative study also tested for significant differences in CVFs, CSE, and resistance to using such technology based on age, gender, prior experience with USAR events, years of USAR experience, and organizational role. The contribution of this study is to reduce USAR team members' resistance to using such technology, in an effort to minimize risk to USAR team members while maintaining their lifesaving capability.