APT-MMF: An advanced persistent threat actor attribution method based on multimodal and multilevel feature fusion
Threat actor attribution is a crucial defense strategy for combating advanced
persistent threats (APTs). Cyber threat intelligence (CTI), which involves
analyzing multisource heterogeneous data from APTs, plays an important role in
APT actor attribution. The current attribution methods extract features from
different CTI perspectives and employ machine learning models to classify CTI
reports according to their threat actors. However, these methods usually
extract only one kind of feature and ignore heterogeneous information,
especially the attributes and relations of indicators of compromise (IOCs),
which form the core of CTI. To address these problems, we propose an APT actor
attribution method based on multimodal and multilevel feature fusion (APT-MMF).
First, we leverage a heterogeneous attributed graph to characterize APT reports
and their IOC information. Then, we extract and fuse multimodal features,
including attribute type features, natural language text features and
topological relationship features, to construct comprehensive node
representations. Furthermore, we design multilevel heterogeneous graph
attention networks to learn the deep hidden features of APT report nodes; these
networks integrate IOC type-level, metapath-based neighbor node-level, and
metapath semantic-level attention. Utilizing multisource threat intelligence,
we construct a heterogeneous attributed graph dataset for verification
purposes. The experimental results show that our method not only outperforms
the existing methods but also offers good interpretability for attribution
analysis tasks.
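The multimodal fusion step described above — combining attribute-type, text, and topological features into one node representation with learned attention — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the modality names, scores, and 4-dimensional toy embeddings are all hypothetical, and the unnormalised attention scores stand in for weights a trained network would produce.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(features, scores):
    """Attention-weighted fusion of per-modality feature vectors.

    features: dict of modality name -> 1-D feature vector (equal length)
    scores:   dict of modality name -> unnormalised attention score
    """
    names = sorted(features)
    attn = softmax(np.array([scores[n] for n in names]))
    fused = sum(a * features[n] for a, n in zip(attn, names))
    return fused, dict(zip(names, attn))

# Toy representation of one APT-report node: three modalities, each already
# embedded into the same 4-dimensional space (hypothetical values).
feats = {
    "attribute_type": np.array([1.0, 0.0, 0.0, 0.0]),
    "text":           np.array([0.0, 1.0, 0.0, 0.0]),
    "topology":       np.array([0.0, 0.0, 1.0, 0.0]),
}
raw_scores = {"attribute_type": 0.2, "text": 1.0, "topology": 0.5}
fused, attn = fuse_modalities(feats, raw_scores)
```

The same weighted-sum pattern repeats at each of the paper's attention levels (IOC type, metapath-based neighbor, metapath semantics), only the objects being scored change.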
Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples
Machine Learning has been a big success story during the AI resurgence. One
particular stand out success relates to learning from a massive amount of data.
In spite of early assertions of the unreasonable effectiveness of data, there
is increasing recognition for utilizing knowledge whenever it is available or
can be created purposefully. In this paper, we discuss the indispensable role
of knowledge for deeper understanding of content where (i) large amounts of
training data are unavailable, (ii) the objects to be recognized are complex,
(e.g., implicit entities and highly subjective content), and (iii) applications
need to use complementary or related data in multiple modalities/media. What
brings us to the cusp of rapid progress is our ability to (a) create relevant
and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP
techniques. Using diverse examples, we seek to foretell unprecedented progress
in our ability for deeper understanding and exploitation of multimodal data and
continued incorporation of knowledge in learning techniques.
Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International
Conference on Web Intelligence (WI). arXiv admin note: substantial text
overlap with arXiv:1610.0770
Exploratory study to explore the role of ICT in the process of knowledge management in an Indian business environment
In the 21st century, with the emergence of the digital economy, knowledge and the knowledge-based economy are growing rapidly. The ability to understand the processes involved in creating, managing and sharing knowledge in the business environment is critical to the success of an organization. This study builds on the authors' previous research on the enablers of knowledge management by identifying the relationship between those enablers and the role played by information and communication technologies (ICT) and ICT infrastructure in a business setting. This paper presents the findings of a survey collected from four major Indian cities (Chennai, Coimbatore, Madurai and Villupuram) regarding views and opinions about the enablers of knowledge management in a business setting. A total of 80 organizations participated in the study, with 100 participants in each city. The results show that ICT and ICT infrastructure can play a critical role in creating, managing and sharing knowledge in an Indian business environment.
Trusted Artificial Intelligence in Manufacturing
The successful deployment of AI solutions in manufacturing environments hinges on their security, safety and reliability which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems in the shopfloor, there is a need to address many challenges in the scope of complex, heterogeneous, dynamic and unpredictable environments. Specifically, data reliability, human machine interaction, security, transparency and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural networks security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above listed challenges, fifteen European Organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission in the scope of its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques and systems for trusted, ethical, and secure AI in manufacturing. 
The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open-access publication, making it broadly and freely available to the AI and smart manufacturing communities.
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Multimodal contrastive learning aims to train a general-purpose feature
extractor, such as CLIP, on vast amounts of raw, unlabeled paired image-text
data. This can greatly benefit various complex downstream tasks, including
cross-modal image-text retrieval and image classification. Despite this
promising prospect, the security issues of cross-modal pre-trained encoders
have not been fully explored yet, especially when the pre-trained encoder is
publicly available for commercial use.
In this work, we propose AdvCLIP, the first attack framework for generating
downstream-agnostic adversarial examples based on cross-modal pre-trained
encoders. AdvCLIP aims to construct a universal adversarial patch for a set of
natural images that can fool all the downstream tasks inheriting the victim
cross-modal pre-trained encoder. To address the challenges of heterogeneity
between different modalities and unknown downstream tasks, we first build a
topological graph structure to capture the relevant positions between target
samples and their neighbors. Then, we design a topology-deviation based
generative adversarial network to generate a universal adversarial patch. By
adding the patch to images, we minimize the similarity between their embeddings
and those of the other modality, perturbing the sample distribution in the
feature space and achieving universal non-targeted attacks. Our results
demonstrate the excellent
attack performance of AdvCLIP on two types of downstream tasks across eight
datasets. We also tailor three popular defenses to mitigate AdvCLIP,
highlighting the need for new defense mechanisms to defend cross-modal
pre-trained encoders.
Comment: This paper has been accepted by the ACM International Conference on
Multimedia (ACM MM '23, October 29-November 3, 2023, Ottawa, ON, Canada).
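The core idea of a universal, non-targeted patch attack — one fixed patch that, pasted onto any image, pushes its embedding away from the clean embedding — can be sketched with a toy encoder. This is only an illustration of the objective, not AdvCLIP's actual method: the random-projection "encoder", image sizes, and the crude random search (instead of the paper's topology-deviation GAN) are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder: fixed random projection plus L2 normalisation.
# (In the paper this would be a pre-trained cross-modal image encoder.)
W = rng.normal(size=(64, 8 * 8))

def embed(img):
    v = W @ img.ravel()
    return v / np.linalg.norm(v)

def apply_patch(img, patch, top=0, left=0):
    # Overwrite a fixed region of the image with the adversarial patch.
    out = img.copy()
    h, w = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

def attack_loss(patch, images):
    # Non-targeted objective: minimise the mean cosine similarity between
    # patched and clean embeddings, pushing samples off their original spot.
    return np.mean([embed(apply_patch(im, patch)) @ embed(im) for im in images])

images = [rng.normal(size=(8, 8)) for _ in range(4)]
patch = rng.normal(size=(3, 3))

# Crude random search: keep patch perturbations that lower the loss.
init = best = attack_loss(patch, images)
for _ in range(200):
    cand = patch + 0.1 * rng.normal(size=patch.shape)
    loss = attack_loss(cand, images)
    if loss < best:
        patch, best = cand, loss
```

Because the loss is averaged over a set of images, the optimised patch is "universal" in the sense that a single perturbation degrades every image's embedding at once.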
Identifying the attack sources of botnets for a renewable energy management system by using a revised locust swarm optimisation scheme
Distributed denial-of-service (DDoS) attacks often use botnets to generate a high volume of packets, adopting controlled zombies to flood a victim's network over the Internet. Analysing the multiple sources of DDoS attacks typically involves reconstructing attack paths between the victim and the attackers by using Internet protocol traceback (IPTBK) schemes. In general, traditional route-searching algorithms, such as particle swarm optimisation (PSO), converge quickly for IPTBK but easily fall into local optima. This paper proposes an IPTBK analysis scheme for multimodal optimisation problems, applying a revised locust swarm optimisation (LSO) algorithm to the reconstructed attack paths in order to identify the most probable ones. To evaluate the effectiveness of the DDoS control centres, networks with topology sizes of 32 and 64 nodes were simulated using the ns-3 tool. The average accuracy of the LS-PSO algorithm reached 97.06% under dynamic traffic in the two experimental networks (32 and 64 nodes). Compared with traditional PSO algorithms, the revised LSO algorithm exhibited superior search performance in multimodal optimisation problems and increased the accuracy of traceability analysis for IPTBK problems.
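To make concrete why plain PSO "easily falls into local optima" on multimodal problems, here is a minimal global-best PSO loop on the Rastrigin benchmark, a standard multimodal test function with many local minima. This is a generic PSO sketch, not the paper's revised LSO algorithm or its ns-3 traceback setup; the hyperparameters (inertia w, cognitive/social weights c1, c2) are conventional textbook values.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    # Classic multimodal benchmark: many local minima, global minimum 0 at x = 0.
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-5.12, 5.12, (n, dim))   # particle positions
    vel = np.zeros((n, dim))                   # particle velocities
    pbest = pos.copy()                         # each particle's best position
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best_x, best_val = pso(rastrigin)
```

Once the swarm collapses onto one basin of attraction, the `(g - pos)` term keeps pulling every particle toward the same (possibly local) optimum — the weakness that swarm variants such as the revised LSO aim to mitigate.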
The enablers and implementation model for mobile KMS in Australian healthcare
In this research project, the enablers of implementing mobile KMS in Australian regional healthcare will be investigated, and a validated framework and guidelines to assist healthcare organizations in implementing mobile KMS will be proposed, using both qualitative and quantitative approaches. The outcomes of this study are expected to improve the understanding of the enabling factors in implementing mobile KMS in Australian healthcare, as well as to provide better guidelines for this process.
A Survey on ChatGPT: AI-Generated Contents, Challenges, and Solutions
With the widespread use of large artificial intelligence (AI) models such as
ChatGPT, AI-generated content (AIGC) has garnered increasing attention and is
leading a paradigm shift in content creation and knowledge representation. AIGC
uses generative large AI algorithms to assist or replace humans in creating
massive, high-quality, and human-like content at a faster pace and lower cost,
based on user-provided prompts. Despite the recent significant progress in
AIGC, security, privacy, ethical, and legal challenges still need to be
addressed. This paper presents an in-depth survey of working principles,
security and privacy threats, state-of-the-art solutions, and future challenges
of the AIGC paradigm. Specifically, we first explore the enabling technologies
and general architecture of AIGC and discuss its working modes and key
characteristics. Then, we investigate the taxonomy of security and privacy
threats to AIGC and highlight the ethical and societal implications of GPT and
AIGC technologies. Furthermore, we review the state-of-the-art AIGC
watermarking approaches for regulatable AIGC paradigms regarding the AIGC model
and its produced content. Finally, we identify future challenges and open
research directions related to AIGC.
Comment: 20 pages, 6 figures, 4 tables.
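One family of AIGC watermarking approaches the survey covers biases generation toward a pseudo-random "green list" of tokens, so a detector can later score how often each token falls in the green list seeded by its predecessor. The sketch below illustrates that detection statistic only; it is a toy, not any specific surveyed scheme — the vocabulary, the toy in-list generation rule, and the 50% green fraction are all hypothetical choices.

```python
import hashlib
import random

def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically partition the vocabulary, seeded by the previous
    token; the 'green' half is favoured during watermarked generation."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = sorted(vocab)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens, vocab):
    """Fraction of tokens drawn from their green list; watermarked text
    scores well above the ~0.5 expected for unwatermarked text."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)

vocab = [f"w{i}" for i in range(50)]

def generate_watermarked(n, start="w0"):
    # Toy "generator" that always emits a token from the green list.
    toks = [start]
    for _ in range(n - 1):
        g = sorted(green_list(toks[-1], vocab))
        toks.append(g[len(toks) % len(g)])
    return toks

wm = generate_watermarked(40)
```

Because the partition is recomputed from the previous token alone, the detector needs no access to the generating model — only the hashing scheme — which is what makes such watermarks attractive for regulatable AIGC.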
Multimodal Approach for Malware Detection
Although malware detection is a very active area of research, few works have focused on using physical properties (e.g., power consumption) and multimodal features for malware detection. We designed an experimental testbed that allowed us to run samples of malware and non-malicious software applications; to collect power consumption, network traffic, and system log data; and subsequently to extract dynamic behavioral features. We also extracted code-based static features of both malware and non-malicious software applications. These features were used for malware detection based on: feature-level fusion using power consumption and network traffic data, feature-level fusion using network traffic data and system logs, and multimodal feature-level and decision-level fusion.
The contributions when using feature-level fusion of power consumption and network traffic data are: (1) We focused on detecting real malware using the extracted dynamic behavioral features (both power-based and network traffic-based) and supervised machine learning algorithms, which had not been done in any prior work. (2) We ran a large number of machine learning experiments, which allowed us to identify the best-performing learner, the DC voltage rails that led to the best malware detection performance, and the subset of features that are the best predictors for malware detection. (3) The comparison of malware detection performance was done using a comprehensive set of metrics that reflect different aspects of the quality of malware detection.
In the case of feature-level fusion using network traffic data and system logs, the contributions are: (1) Most previous works that used network-flow-based features classified the network traffic itself, whereas our focus was on classifying the software running on a machine as malware or non-malicious software using the extracted dynamic behavioral features. (2) We experimented with different sizes of the training set (i.e., 90%, 75%, 50%, and 25% of the data) and found that smaller training sets produced very good classification results. This aspect of our work has practical value because manually labeling the training set is a tedious and time-consuming process.
In this dissertation we present a multimodal deep learning neural network that integrates different modalities (i.e., power consumption, system logs, network traffic, and code-based static data) using decision-level fusion. We evaluated the performance of each modality individually, when using feature-level fusion, and when using decision-level fusion. The contributions of our multimodal approach are as follows: (1) Collecting data from different modalities allowed us to develop a multimodal approach to malware detection, which has not been widely explored in prior works. Moreover, none of the previous works compared the performance of feature-level fusion with decision-level fusion, which is explored in this dissertation. (2) We proposed a multimodal decision-level fusion malware detection approach using a deep neural network and compared its performance with that of feature-level fusion approaches based on a deep neural network and standard supervised machine learning algorithms (i.e., Random Forest, J48, JRip, PART, Naive Bayes, and SMO).
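The decision-level fusion idea described above — each modality's model emits its own malware score, and a final decision combines them — can be sketched in a few lines. This is a minimal weighted-average illustration, not the dissertation's deep-network fusion; the modality names, probabilities, and the 0.5 threshold are hypothetical.

```python
def decision_level_fusion(probabilities, weights=None):
    """Weighted average of per-modality malware probabilities.

    probabilities: dict of modality -> P(malware) from that modality's model
    weights:       optional dict of modality -> weight (defaults to uniform)
    Returns the fused score and the binary decision at a 0.5 threshold.
    """
    if weights is None:
        weights = {m: 1.0 for m in probabilities}
    total = sum(weights[m] for m in probabilities)
    score = sum(weights[m] * p for m, p in probabilities.items()) / total
    return score, score >= 0.5

# One sample scored by four per-modality classifiers (hypothetical outputs).
per_modality = {
    "power":   0.81,
    "network": 0.64,
    "syslog":  0.42,
    "static":  0.73,
}
score, is_malware = decision_level_fusion(per_modality)
```

Contrast this with feature-level fusion, where the raw feature vectors are concatenated before a single classifier is trained; decision-level fusion instead lets each modality's model specialise and only merges their outputs.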