DroneSig: Lightweight Digital Signature Protocol for Micro Aerial Vehicles
Micro aerial vehicles, a.k.a. drones, have become an integral part of a variety of civilian and military application domains, including but not limited to aerial surveying and mapping, aerial surveillance and security, aerial inspection of infrastructure, and aerial delivery. Meanwhile, the cybersecurity of drones is gaining significant attention due to the financial and strategic value of the information involved in aerial applications. Because the communication protocol lacks security features, an adversary can easily interfere with ongoing communications or even seize control of the drone. In this thesis, we propose a lightweight digital signature protocol, referred to as DroneSig, to protect drones from a man-in-the-middle attack, in which an adversary eavesdrops on the communication between the Ground Control Station (GCS) and the drone, impersonates the GCS, and sends fake commands to terminate the ongoing mission or even take control of the drone. The basic idea of DroneSig is that the drone executes a new command only after validating the digital signature received from the GCS, proving that the command message comes from the authenticated GCS. If validation of the digital signature fails, the new command is rejected immediately, and the Return-to-Launch (RTL) mode is initiated, forcing the drone to return to its take-off position. We conduct extensive simulation experiments for performance evaluation and comparison using OMNeT++, and the results show that the proposed lightweight digital signature protocol achieves better performance in terms of energy consumption and computation time than the standard Advanced Encryption Standard (AES) cryptographic technique.
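The validate-then-execute flow described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's protocol: it uses an HMAC over a pre-shared key in place of DroneSig's actual lightweight signature primitive, and the command strings and key are invented for the example.

```python
import hashlib
import hmac

# Hypothetical pre-shared key between GCS and drone; DroneSig's real
# signature scheme is defined in the thesis, an HMAC is a stand-in here.
KEY = b"demo-pre-shared-key"

def sign_command(command: bytes, key: bytes = KEY) -> bytes:
    """GCS side: attach an authentication tag to the outgoing command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def handle_command(command: bytes, tag: bytes, key: bytes = KEY) -> str:
    """Drone side: execute only if the tag validates, otherwise go RTL."""
    if hmac.compare_digest(sign_command(command, key), tag):
        return f"EXECUTE: {command.decode()}"
    # Validation failed: reject and initiate Return-to-Launch mode.
    return "REJECT: Return-to-Launch (RTL) mode initiated"

cmd = b"WAYPOINT 47.6062 -122.3321"
tag = sign_command(cmd)
print(handle_command(cmd, tag))          # genuine command is executed
print(handle_command(b"LAND NOW", tag))  # forged command triggers RTL
```

The key point mirrored from the abstract is the fail-closed behaviour: any command whose tag does not verify is rejected immediately and the drone falls back to RTL.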
Skeleton based action recognition using translation-scale invariant image mapping and multi-scale deep cnn
This paper presents an image classification based approach to the
skeleton-based video action recognition problem. First, a dataset-independent
translation-scale invariant image mapping method is proposed, which transforms
the skeleton videos into colour images, named skeleton images. Second, a
multi-scale deep convolutional neural network (CNN) architecture is proposed
that can be built and fine-tuned on powerful pre-trained CNNs, e.g.,
AlexNet, VGGNet, and ResNet. Even though the skeleton images are very
different from natural images, the fine-tuning strategy still works well.
Finally, we show that our method also works well on 2D skeleton video data.
We achieve state-of-the-art results on the popular benchmark datasets, e.g.,
NTU RGB+D, UTD-MHAD, MSRC-12, and G3D. In particular, on the largest and most
challenging NTU RGB+D, UTD-MHAD, and MSRC-12 datasets, our method outperforms
other methods by a large margin, which demonstrates the efficacy of the
proposed method.
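A translation-scale invariant mapping of the kind described can be sketched with NumPy: subtract the per-video coordinate minimum (translation invariance), divide by the per-video range (scale invariance), and quantise the (x, y, z) channels into an RGB image whose rows are frames and columns are joints. The exact normalisation and channel layout in the paper may differ; this is a minimal sketch.

```python
import numpy as np

def skeleton_to_image(video: np.ndarray) -> np.ndarray:
    """Map a skeleton video of shape (frames, joints, 3) to a colour image.

    Translation invariance: subtract the per-video minimum coordinate.
    Scale invariance: divide by the per-video coordinate range.
    Rows index frames, columns index joints, and the (x, y, z)
    coordinates become the (R, G, B) channels.
    """
    lo = video.min()
    span = video.max() - lo
    norm = (video - lo) / (span if span > 0 else 1.0)  # values in [0, 1]
    return np.round(255 * norm).astype(np.uint8)

# Toy input: 8 frames, 25 joints (the NTU RGB+D skeleton has 25 joints),
# random 3D coordinates with an arbitrary global translation applied.
rng = np.random.default_rng(0)
video = rng.normal(size=(8, 25, 3)) + 100.0
img = skeleton_to_image(video)
print(img.shape, img.dtype)
```

Because the minimum and range are computed per video, shifting or uniformly rescaling all joints leaves the resulting skeleton image (essentially) unchanged, which is what makes the mapping dataset-independent.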
Disentangled and Robust Representation Learning for Bragging Classification in Social Media
Research on bragging behavior in social media has attracted the interest of
computational (socio)linguists. However, existing bragging classification
datasets suffer from a serious data imbalance issue. Because labeling a
balanced dataset is expensive, most methods introduce external knowledge to
improve model learning. Nevertheless, such methods inevitably introduce noise
and irrelevant information from that external knowledge. To overcome this
drawback, we propose a novel bragging classification method with
disentangle-based representation augmentation and a domain-aware adversarial
strategy. Specifically, the model learns to disentangle and reconstruct
representations and to generate augmented features via disentangle-based
representation augmentation. Moreover, the domain-aware adversarial strategy
constrains the domain of the augmented features to improve their robustness.
Experimental results demonstrate that our method achieves state-of-the-art
performance compared to other methods.
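The disentangle-and-recombine idea behind the augmentation can be illustrated with a toy sketch. In the paper the content and domain parts come from learned encoders; here, purely for illustration, a representation vector is split in half and augmented features are built by pairing each sample's content half with the domain half of another sample.

```python
import numpy as np

def disentangle(rep: np.ndarray):
    """Split a representation into (content, domain) halves.

    Sketch only: the actual method learns this factorisation with
    encoders, rather than using a fixed positional split.
    """
    half = rep.shape[-1] // 2
    return rep[..., :half], rep[..., half:]

def augment(batch: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Recombine each sample's content with another sample's domain part,
    producing augmented features with the same shape as the input."""
    content, domain = disentangle(batch)
    perm = rng.permutation(len(batch))  # shuffle domain parts across samples
    return np.concatenate([content, domain[perm]], axis=-1)

rng = np.random.default_rng(1)
batch = rng.normal(size=(4, 8))  # 4 samples, 8-dimensional representations
aug = augment(batch, rng)
print(aug.shape)
```

A domain-aware adversarial objective would then push the augmented features toward the target domain; that training loop is omitted here.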
Scattering compensation through Fourier-domain open-channel coupling in two-photon microscopy
Light penetration depth in biological tissue is limited by tissue scattering.
There is an urgent need for scattering compensation in in vivo focusing and
imaging, which is particularly challenging in photon-starved scenarios
without access to the transmission side of the scattering tissue. Here, we
introduce a
two-photon microscopy system with Fourier-domain open-channel coupling for
scattering correction (2P-FOCUS). 2P-FOCUS corrects scattering by utilizing the
non-linearity of multiple-beam interference and two-photon excitation,
eliminating the need for a guide star, iterative optimization, or measuring
transmission or reflection matrices. We demonstrate that 2P-FOCUS significantly
enhances two-photon fluorescence signals several tens-fold when focusing
through a bone sample, compared to cases without scattering compensation at
equivalent excitation power. We also show that 2P-FOCUS can correct scattering
over large volumes by imaging neurons and cerebral blood vessels within a
230×230×500 µm³ volume in the mouse brain in vitro. 2P-FOCUS
could serve as a powerful tool for deep tissue imaging in bulky organisms or
live animals.
An Open Source Data Contamination Report for Large Language Models
Data contamination in model evaluation has become increasingly prevalent with
the growing popularity of large language models. It allows models to "cheat"
via memorisation instead of displaying true capabilities. Therefore,
contamination analysis has become a crucial part of reliable model evaluation
to validate results. However, existing contamination analysis is usually
conducted internally by large language model developers and often lacks
transparency and completeness. This paper presents an extensive data
contamination report for over 15 popular large language models across six
popular multiple-choice QA benchmarks. We also introduce an open-source
pipeline that enables the community to perform contamination analysis on
customised data and models. Our experiments reveal varying contamination levels
ranging from 1% to 45% across benchmarks, with the contamination degree
increasing rapidly over time. Performance analysis of large language models
indicates that data contamination does not necessarily lead to increased model
metrics: while significant accuracy boosts of up to 14% and 7% are observed
on contaminated C-Eval and Hellaswag benchmarks, only a minimal increase is
noted on contaminated MMLU. We also find that larger models tend to gain
greater advantages than smaller models on contaminated test sets.
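A common way to measure contamination of this kind is an n-gram overlap check between benchmark items and training text. The sketch below illustrates that heuristic; the report's actual open-source pipeline may differ in tokenisation, n-gram size, and matching details, and the toy corpus and benchmark items are invented.

```python
def ngrams(text: str, n: int) -> set:
    """Return the set of word-level n-grams of a text (lowercased)."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contamination_rate(benchmark: list, training_corpus: str, n: int = 8) -> float:
    """Fraction of benchmark items sharing at least one n-gram
    with the training corpus."""
    corpus_grams = ngrams(training_corpus, n)
    hits = sum(1 for item in benchmark if ngrams(item, n) & corpus_grams)
    return hits / len(benchmark)

# Toy example with a small n so the overlap is easy to see.
corpus = "the quick brown fox jumps over the lazy dog near the river bank"
bench = [
    "the quick brown fox jumps over the fence",     # shares a 4-gram
    "completely unrelated question about physics",  # no overlap
]
print(contamination_rate(bench, corpus, n=4))  # → 0.5
```

Running the same check against successive benchmark releases is what reveals the trend the abstract reports: contamination levels increasing over time.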
LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction
Data contamination in evaluation is becoming increasingly prevalent with the
emergence of language models pre-trained on very large, automatically crawled
corpora. This problem leads to significant challenges in the accurate
assessment of model capabilities and generalisations. In this paper, we propose
LatestEval, an automatic method that leverages the most recent texts to create
uncontaminated reading comprehension evaluations. LatestEval avoids data
contamination by only using texts published within a recent time window,
ensuring no overlap with the training corpora of pre-trained language models.
We develop the LatestEval automated pipeline to 1) gather the latest texts,
2) identify key information, and 3) construct questions targeting that
information while removing the existing answers from the context. This
encourages models to infer the answers themselves from the remaining context,
rather than simply copy-pasting. Our experiments demonstrate that language
models exhibit negligible
memorisation behaviours on LatestEval as opposed to previous benchmarks,
suggesting a significantly reduced risk of data contamination and leading to a
more robust evaluation. Data and code are publicly available at:
https://github.com/liyucheng09/LatestEval
Comment: AAAI 202
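Step 3 of the pipeline, removing the answer from the context so a model must infer it rather than copy-paste, can be sketched as follows. This is a crude illustration only: the real system identifies key information automatically, whereas here the answer span, passage, and question wording are supplied by hand and invented for the example.

```python
def make_item(text: str, answer: str) -> dict:
    """Build a fill-in style evaluation item by blanking the answer span
    out of the context, so the answer cannot be copy-pasted from it."""
    assert answer in text, "answer span must occur in the passage"
    context = text.replace(answer, "____")
    return {
        "context": context,
        "question": "What belongs in the blank?",
        "answer": answer,
    }

passage = "The telescope was launched in December 2021 from Kourou."
item = make_item(passage, "December 2021")
print(item["context"])  # → The telescope was launched in ____ from Kourou.
```

Because the passage is drawn from texts published after the model's training cutoff, neither the context nor the blanked answer should overlap the pre-training corpus, which is what keeps the evaluation uncontaminated.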