GCNIDS: Graph Convolutional Network-Based Intrusion Detection System for CAN Bus
The Controller Area Network (CAN) bus serves as a standard protocol for
facilitating communication among various electronic control units (ECUs) within
contemporary vehicles. However, it has been demonstrated that the CAN bus is
susceptible to remote attacks, which pose risks to the vehicle's safety and
functionality. To tackle this concern, researchers have introduced intrusion
detection systems (IDSs) to identify and thwart such attacks. In this paper, we
present a novel approach to intrusion detection on the CAN bus,
leveraging Graph Convolutional Network (GCN) techniques as introduced by Zhang,
Tong, Xu, and Maciejewski in 2019. By harnessing the capabilities of deep
learning, we aim to enhance attack detection accuracy while minimizing the
requirement for manual feature engineering. Our experimental findings
substantiate that the proposed GCN-based method surpasses existing IDSs in
terms of accuracy, precision, and recall. Additionally, our approach
demonstrates efficacy in detecting mixed attacks, which are more challenging to
identify than single attacks. Furthermore, it reduces the necessity for
extensive feature engineering and is particularly well-suited for real-time
detection systems. To the best of our knowledge, this represents the pioneering
application of GCN to CAN data for intrusion detection. Our proposed approach
holds significant potential for fortifying the security and safety of modern
vehicles, safeguarding against attacks and preventing them from undermining
vehicle functionality.
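As a minimal sketch of how such a GCN-based classifier might look (the paper does not publish this code; the graph construction, feature set, and layer widths below are illustrative assumptions, built on PyTorch and PyTorch Geometric), CAN IDs can be treated as graph nodes, consecutive-message transitions as edges, and a two-layer GCN can label each node as benign or attack traffic:

```python
# Minimal sketch, not the paper's released implementation: CAN IDs as nodes,
# observed message transitions as edges, two GCN layers per-node classification.
# Requires torch and torch_geometric; feature set and widths are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class CANGCN(torch.nn.Module):
    def __init__(self, num_features: int, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(num_features, 64)  # per-ID features -> hidden
        self.conv2 = GCNConv(64, num_classes)   # hidden -> class logits

    def forward(self, x, edge_index):
        # x: [num_can_ids, num_features], e.g. message frequency and payload
        # statistics per CAN ID; edge_index: [2, num_edges] listing observed
        # ID-to-ID message transitions on the bus.
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)  # per-node (per-CAN-ID) logits
```

Because the graph structure encodes message-ordering patterns directly, such a model needs little hand-crafted feature engineering, which is the property the abstract highlights.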
HeteroEdge: Addressing Asymmetry in Heterogeneous Collaborative Autonomous Systems
Gathering knowledge about surroundings and generating situational awareness
for IoT devices is of utmost importance for systems developed for smart urban
and uncontested environments. For example, a large-area surveillance system is
typically equipped with multi-modal sensors such as cameras and LIDARs and is
required to execute deep learning algorithms for action, face, behavior, and
object recognition. However, these systems face power and memory constraints
due to their ubiquitous nature, making it crucial to optimize data processing,
the inputs fed to deep learning algorithms, and the communication of model
inference results. In this
paper, we propose a self-adaptive optimization framework for a testbed
comprising two Unmanned Ground Vehicles (UGVs) and two NVIDIA Jetson devices.
This framework concurrently manages multiple tasks (storage, processing,
computation, transmission, inference) across heterogeneous nodes. It
involves compressing and masking input image frames, identifying similar
frames, and profiling devices to obtain boundary conditions for the optimization.
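The paper's abstract does not detail the frame-similarity step; as one plausible, purely hypothetical reading, a near-duplicate filter over consecutive frames could look like the following (the threshold value and mean-absolute-difference metric are illustrative assumptions):

```python
# Hypothetical sketch of an "identify similar frames" step: consecutive frames
# whose mean absolute pixel difference falls below a threshold are treated as
# near-duplicates and dropped, so they are never compressed or transmitted.
import numpy as np

def filter_similar_frames(frames, threshold: float = 4.0):
    """Keep a frame only if it differs enough from the last kept frame.

    frames: iterable of HxWxC uint8 arrays; threshold is in 8-bit intensity
    units (an assumed value, tuned per deployment).
    """
    kept, last = [], None
    for frame in frames:
        if last is None or np.abs(frame.astype(np.int16) -
                                  last.astype(np.int16)).mean() > threshold:
            kept.append(frame)
            last = frame
    return kept
```
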
Finally, we propose and optimize a novel parameter, the split-ratio, which
indicates the proportion of the data to be offloaded to another device while
considering the networking bandwidth, busy factor, memory (CPU, GPU, RAM), and
power constraints of the devices in the testbed. Our evaluations, captured
while executing multiple tasks (e.g., PoseNet, SegNet, ImageNet, DetectNet,
DepthNet) simultaneously, reveal that executing 70% of the data on the
auxiliary node (split-ratio = 70%) reduces the offloading latency by approx.
33% (18.7 ms/image to 12.5 ms/image) and the total operation time by approx.
47% (69.32 s to 36.43 s) compared to the baseline configuration (executing
everything on the primary node).
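To make the split-ratio concrete, here is a minimal, hypothetical sketch rather than the paper's actual optimizer: given profiled per-frame processing times, busy factors, and a per-frame network transfer cost, it estimates batch latency for each candidate split and returns the best one. All names and numbers are illustrative assumptions.

```python
# Hypothetical split-ratio sweep; not the paper's optimization framework.
# Both nodes work in parallel, so batch latency is the slower side; frames
# sent to the auxiliary node also pay a per-frame network transfer cost.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    ms_per_image: float   # profiled inference time per frame (ms)
    busy_factor: float    # 0.0 = idle; assumed strictly below 1.0

def batch_latency_ms(split_ratio, n_frames, primary, auxiliary, transfer_ms):
    """Estimated batch latency when `split_ratio` of frames are offloaded."""
    offloaded = split_ratio * n_frames
    local = n_frames - offloaded
    # A busier device processes more slowly; scale profiled time accordingly.
    local_ms = local * primary.ms_per_image / (1.0 - primary.busy_factor)
    remote_ms = offloaded * (transfer_ms +
                             auxiliary.ms_per_image / (1.0 - auxiliary.busy_factor))
    return max(local_ms, remote_ms)

def best_split(primary, auxiliary, transfer_ms, n_frames=100):
    """Sweep candidate split-ratios (0%, 10%, ..., 100%) and keep the best."""
    candidates = [r / 10.0 for r in range(11)]
    return min(candidates, key=lambda r: batch_latency_ms(
        r, n_frames, primary, auxiliary, transfer_ms))

# Illustrative numbers only (not the paper's measurements):
primary = DeviceProfile(ms_per_image=18.7, busy_factor=0.5)
auxiliary = DeviceProfile(ms_per_image=10.0, busy_factor=0.1)
print(best_split(primary, auxiliary, transfer_ms=1.5))
```

The design point this illustrates is the one the abstract reports: when the auxiliary node is faster or less busy, a high split-ratio (here 70%) balances the two sides and lowers both per-image latency and total operation time.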