HOFA: Twitter Bot Detection with Homophily-Oriented Augmentation and Frequency Adaptive Attention
Twitter bot detection has become an increasingly important and challenging
task to combat online misinformation, facilitate social content moderation, and
safeguard the integrity of social platforms. Although existing graph-based
Twitter bot detection methods have achieved state-of-the-art performance, they
all rest on the homophily assumption, i.e., that users with the same label
are more likely to be connected, making it easy for Twitter bots to disguise
themselves by following a large number of genuine users. To address this issue,
we propose HOFA, a novel graph-based Twitter bot detection framework that
combats the heterophilous disguise challenge with a homophily-oriented graph
augmentation module (Homo-Aug) and a frequency adaptive attention module
(FaAt). Specifically, the Homo-Aug extracts user representations and computes a
k-NN graph using an MLP and improves Twitter's homophily by injecting the k-NN
graph. For the FaAt, we propose an attention mechanism that adaptively serves
as a low-pass filter along a homophilic edge and a high-pass filter along a
heterophilic edge, preventing user features from being over-smoothed by their
neighborhood. We also introduce a weight guidance loss to guide the frequency
adaptive attention module. Our experiments demonstrate that HOFA achieves
state-of-the-art performance on three widely-acknowledged Twitter bot detection
benchmarks, significantly outperforming vanilla graph-based bot detection
techniques and strong heterophilic baselines. Furthermore, extensive studies
confirm the effectiveness of our Homo-Aug and FaAt modules, and HOFA's ability
to demystify the heterophilous disguise challenge.
Comment: 11 pages, 7 figures
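The homophily-oriented augmentation step described in the abstract can be sketched as follows: build a k-NN graph over user representations and inject its edges into the original follow graph. The representations here are precomputed toy vectors standing in for the paper's MLP outputs, and cosine similarity is an assumed metric, so this is an illustrative sketch rather than the paper's implementation.

```python
import numpy as np

def knn_graph(features, k=2):
    """Symmetric k-NN adjacency matrix from user feature vectors,
    using cosine similarity (an assumed choice; HOFA's exact metric
    may differ)."""
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)        # exclude self-loops
    adj = np.zeros_like(sim)
    for i in range(len(sim)):
        nn = np.argsort(sim[i])[-k:]      # k most similar users
        adj[i, nn] = 1.0
    return np.maximum(adj, adj.T)         # symmetrize

def inject_knn_edges(orig_adj, features, k=2):
    """Union of the original follow graph and the k-NN graph:
    the homophily-improving edge injection, sketched."""
    return np.maximum(orig_adj, knn_graph(features, k))

# toy example: 4 users with 3-dim representations; users 0/1 and 2/3
# are feature-similar, while the only follow edge (0-2) is heterophilic
feats = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.9, 0.2]])
orig = np.zeros((4, 4))
orig[0, 2] = orig[2, 0] = 1.0
aug = inject_knn_edges(orig, feats, k=1)  # adds homophilic edges 0-1 and 2-3
```

After injection, each user keeps its original neighbors but gains feature-similar ones, raising the graph's homophily before message passing.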
Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification
The rapid emergence of advanced software systems, low-cost hardware, and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring, and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, confines the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task. Consequently, it results in lower accuracy and hence low reliability, which limits existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, a novel audio-visual multi-modality-driven hybrid feature learning model is developed in this research for crowd analysis and classification. In this work, a hybrid feature extraction model was applied to extract deep spatio-temporal features using the Gray-Level Co-occurrence Matrix (GLCM) and an AlexNet transfer learning model. After extracting the GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the different feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing, and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, Spectral Entropy, Spectral Flux, Spectral Slope, and Harmonics-to-Noise Ratio (HNR).
Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was processed for classification using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and an F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
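The GLCM texture descriptor and the horizontal concatenation step can be sketched as below. The GLCM is computed for a single pixel offset on a toy quantized frame, and the "deep features" vector is a hypothetical stand-in for AlexNet output; the paper's pipeline uses multiple offsets, derived GLCM statistics, and real network activations.

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one offset (dx, dy): counts how
    often gray level i is followed by gray level j, then normalizes to a
    probability matrix. A minimal sketch of the texture descriptor."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

# toy 4-level "frame" (a real frame would be quantized to `levels` bins)
frame = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
texture = glcm(frame).ravel()            # 16 GLCM features, flattened

# hypothetical stand-in for AlexNet deep features (illustrative values)
deep = np.array([0.3, 0.7, 0.1])

# horizontal concatenation fuses the two feature sets into one vector,
# which would then feed the random forest classifier
fused = np.concatenate([texture, deep])
```

The fused vector simply stacks both modalities side by side, so the downstream classifier sees texture and deep features as one input.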
An advanced deep learning models-based plant disease detection: A review of recent research
Plants play a crucial role in supplying food globally. Various environmental factors lead to plant diseases, which result in significant production losses. However, manual detection of plant diseases is a time-consuming and error-prone process, and it can be an unreliable method of identifying and preventing the spread of plant diseases. Adopting advanced technologies such as Machine Learning (ML) and Deep Learning (DL) can help to overcome these challenges by enabling early identification of plant diseases. In this paper, the recent advancements in the use of ML and DL techniques for the identification of plant diseases are explored. The research focuses on publications between 2015 and 2022, and the experiments discussed in this study demonstrate the effectiveness of these techniques in improving the accuracy and efficiency of plant disease detection. This study also addresses the challenges and limitations associated with using ML and DL for plant disease identification, such as issues with data availability, imaging quality, and the differentiation between healthy and diseased plants. The research provides valuable insights for plant disease detection researchers, practitioners, and industry professionals by offering solutions to these challenges and limitations, providing a comprehensive understanding of the current state of research in this field, highlighting the benefits and limitations of these methods, and proposing potential solutions to overcome the challenges of their implementation.
The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management
Floods are a devastating natural calamity that may seriously harm both infrastructure and people. Accurate flood forecasting and control are essential to lessen these effects and safeguard populations. By utilizing its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. The current state of deep learning applications in flood forecasting and management is thoroughly reviewed in this work. The review discusses a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It assesses current approaches critically and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. The report also describes potential directions for deep learning research to enhance flood prediction and control. A few of these are incorporating uncertainty estimates into forecasts, integrating many data sources, developing hybrid models that mix deep learning with other methodologies, and enhancing the interpretability of deep learning models. These research goals can help deep learning models become more precise and effective, which will result in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on the topic of flood forecasting and management. By reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid basis. Communities may better prepare for and lessen the destructive effects of floods by implementing cutting-edge deep learning algorithms, thereby protecting people and infrastructure.
Automatic Caption Generation for Aerial Images: A Survey
Aerial images have attracted attention from the research community for a long time. Generating a caption for an aerial image that describes its content in a comprehensive way is a less studied but important task, as it has applications in agriculture, defence, disaster management, and many other areas. Though different approaches have been followed for natural image caption generation, generating a caption for an aerial image remains a challenging task due to its special nature. The use of emerging techniques from the Artificial Intelligence (AI) and Natural Language Processing (NLP) domains has resulted in the generation of captions of accepted quality for aerial images. However, much remains to be done to fully utilize the potential of the aerial image caption generation task. This paper presents a detailed survey of the various approaches followed by researchers for the aerial image caption generation task. The datasets available for experimentation, the criteria used for performance evaluation, and future directions are also discussed.
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium.
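The fixed-frequency scanning principle can be illustrated with the textbook leaky-wave relation sin(theta) = beta/k0, where beta follows the TE10 waveguide dispersion and the LC bias tunes the effective permittivity. The frequency, waveguide width, and permittivity values below are illustrative assumptions, not the paper's modified-SIW design or the commercial LC's datasheet values.

```python
import math

C = 3e8  # speed of light in vacuum, m/s

def scan_angle_deg(freq_hz, eps_r, a_m):
    """Approximate main-beam angle from broadside for a leaky-wave antenna
    fed by a dielectric-filled rectangular waveguide:
      beta = sqrt(k0^2 * eps_r - (pi/a)^2)   (TE10 dispersion)
      sin(theta) = beta / k0
    An idealized model: the paper's Groove-Gap/SIW structure has its own
    dispersion, so numbers here only show the trend."""
    k0 = 2 * math.pi * freq_hz / C
    beta_sq = k0**2 * eps_r - (math.pi / a_m)**2
    if beta_sq < 0:
        raise ValueError("below cutoff: no propagating mode")
    return math.degrees(math.asin(math.sqrt(beta_sq) / k0))

# same frequency, two hypothetical LC bias states (permittivity values
# chosen for illustration): raising eps_r steers the beam away from broadside
theta_low = scan_angle_deg(28e9, 2.5, 4e-3)
theta_high = scan_angle_deg(28e9, 2.7, 4e-3)
```

Because beta grows with eps_r while k0 is fixed, tuning the LC permittivity alone moves the beam, which is exactly what enables scanning at a single frequency.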
An empirical investigation of the relationship between integration, dynamic capabilities and performance in supply chains
This research aimed to develop an empirical understanding of the relationships between integration,
dynamic capabilities and performance in the supply chain domain, based on which, two conceptual
frameworks were constructed to advance the field. The core motivation for the research was that, at
the stage of writing the thesis, the combined relationship between the three concepts had not yet
been examined, although their interrelationships have been studied individually.
To achieve this aim, deductive and inductive reasoning logics were utilised to guide the qualitative
study, which was undertaken via multiple case studies to investigate lines of enquiry that would
address the research questions formulated. This is consistent with the author’s philosophical
adoption of the ontology of relativism and the epistemology of constructionism, which was considered
appropriate to address the research questions. Empirical data and evidence were collected, and
various triangulation techniques were employed to ensure their credibility. Some key features of
grounded theory coding techniques were drawn upon for data coding and analysis, generating two
levels of findings. These revealed that whilst integration and dynamic capabilities were crucial in
improving performance, the performance also informed the former. This reflects a cyclical and
iterative approach rather than one purely based on linearity. Adopting a holistic approach towards
the relationship was key in producing complementary strategies that can deliver sustainable supply
chain performance.
The research makes theoretical, methodological and practical contributions to the field of supply
chain management. The theoretical contribution includes the development of two emerging
conceptual frameworks at the micro and macro levels. The former provides greater specificity, as it
allows meta-analytic evaluation of the three concepts and their dimensions, providing a detailed
insight into their correlations. The latter gives a holistic view of their relationships and how they are
connected, reflecting a middle-range theory that bridges theory and practice. The methodological
contribution lies in presenting models that address gaps associated with the inconsistent use of
terminologies in philosophical assumptions, and lack of rigor in deploying case study research
methods. In terms of its practical contribution, this research offers insights that practitioners could
adopt to enhance their performance. They can do so, without necessarily having to forgo certain
desired outcomes, by using targeted integrative strategies and drawing on their dynamic capabilities.
Machine learning and mixed reality for smart aviation: applications and challenges
The aviation industry is a dynamic and ever-evolving sector. As technology advances and becomes more sophisticated, the aviation industry must keep up with the changing trends. While some airlines have made investments in machine learning and mixed reality technologies, the vast majority of regional airlines continue to rely on inefficient strategies and lack digital applications. This paper investigates the state-of-the-art applications that integrate machine learning and mixed reality into the aviation industry. Smart aerospace engineering design, manufacturing, testing, and services are being explored to increase operator productivity. Autonomous systems, self-service systems, and data visualization systems are being researched to enhance passenger experience. This paper also investigates the safety, environmental, technological, cost, security, capacity, and regulatory challenges of smart aviation, as well as potential solutions to ensure future quality, reliability, and efficiency.
A Machine Learning-based Empirical Evaluation of Cyber Threat Actors' High-Level Attack Patterns over Low-Level Attack Patterns in Attributing Attacks
Cyber threat attribution is the process of identifying the actor of an attack
incident in cyberspace. An accurate and timely threat attribution plays an
important role in deterring future attacks by applying appropriate and timely
defense mechanisms. Manual analysis of attack patterns gathered by honeypot
deployments, intrusion detection systems, firewalls, and via trace-back
procedures is still the preferred method of security analysts for cyber threat
attribution. Such attack patterns are low-level Indicators of Compromise (IOC).
They represent the Tactics, Techniques, and Procedures (TTP) and software tools
used by adversaries in their campaigns. The adversaries rarely re-use them. They
can also be manipulated, resulting in false and unfair attribution. To
empirically evaluate and compare the effectiveness of both kinds of IOC, there
are two problems that need to be addressed. The first problem is that in recent
research works, the ineffectiveness of low-level IOC for cyber threat
attribution has been discussed intuitively. An empirical evaluation for the
measure of the effectiveness of low-level IOC based on a real-world dataset is
missing. The second problem is that the available dataset for high-level IOC
has a single instance for each predictive class label that cannot be used
directly for training machine learning models. To address these problems in
this research work, we empirically evaluate the effectiveness of low-level IOC
based on a real-world dataset that is specifically built for comparative
analysis with high-level IOC. The experimental results show that the high-level
IOC-trained models effectively attribute cyberattacks with an accuracy of 95%,
compared to 40% for the low-level IOC-trained models.
Comment: 20 pages
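The idea of attributing attacks from high-level IOC can be sketched with a toy similarity-based classifier: each actor is profiled by the set of TTPs observed in its campaigns, and a new incident is attributed to the actor with the most similar profile. The actor names, technique IDs (MITRE ATT&CK-style), and profiles below are hypothetical, and Jaccard similarity is a simple stand-in for the paper's trained ML models.

```python
# Hypothetical actor profiles as sets of ATT&CK-style technique IDs
# (illustrative only; not the paper's dataset)
profiles = {
    "APT-A": {"T1059", "T1071", "T1566"},
    "APT-B": {"T1021", "T1047", "T1490"},
}

def jaccard(a, b):
    """Set-overlap similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def attribute(observed_ttps):
    """Attribute an incident to the actor whose TTP profile overlaps most
    with the observed techniques -- a simple stand-in for the paper's
    machine learning classifiers."""
    return max(profiles, key=lambda actor: jaccard(observed_ttps, profiles[actor]))

# an incident sharing two techniques with APT-A and one with APT-B
incident = {"T1059", "T1566", "T1490"}
actor = attribute(incident)  # → "APT-A"
```

Because TTPs change slowly across campaigns, profile overlap remains informative even when low-level artifacts (hashes, IPs) are rotated or spoofed, which is the intuition behind preferring high-level IOC.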
Distributed Sensing, Computing, Communication, and Control Fabric: A Unified Service-Level Architecture for 6G
With the advent of the multimodal immersive communication system, people can
interact with each other using multiple devices for sensing, communication
and/or control either onsite or remotely. As a breakthrough concept, a
distributed sensing, computing, communications, and control (DS3C) fabric is
introduced in this paper for provisioning 6G services in multi-tenant
environments in a unified manner. The DS3C fabric can be further enhanced by
natively incorporating intelligent algorithms for network automation and
managing networking, computing, and sensing resources efficiently to serve
vertical use cases with extreme and/or conflicting requirements. As such, the
paper proposes a novel end-to-end 6G system architecture with enhanced
intelligence spanning across different network, computing, and business
domains, identifies vertical use cases and presents an overview of the relevant
standardization and pre-standardization landscape.