Authentication enhancement in command and control networks: (a study in Vehicular Ad-Hoc Networks)
Intelligent transportation systems contribute to improved traffic safety by facilitating real-time communication between vehicles. Because they communicate over wireless channels, vehicular networks are susceptible to a wide range of attacks, such as impersonation, modification, and replay. In this context, securing data exchange between intercommunicating terminals, e.g., vehicle-to-everything (V2X) communication, constitutes a technological challenge that needs to be addressed. Hence, message authentication is crucial to safeguard vehicular ad-hoc networks (VANETs) from malicious attacks. The current state of the art for authentication in VANETs relies on conventional cryptographic primitives, introducing significant computation and communication overheads. In this challenging scenario, physical (PHY)-layer authentication has gained popularity; it leverages the inherent characteristics of wireless channels and hardware imperfections to discriminate between wireless devices. However, PHY-layer-based authentication cannot replace crypto-based methods, as the initial legitimacy detection must be conducted with cryptographic methods to extract the communicating terminal's secret features. Nevertheless, it is a promising complementary solution to the re-authentication problem in VANETs, introducing what is known as “cross-layer authentication.” This thesis focuses on designing efficient cross-layer authentication schemes for VANETs that reduce the communication and computation overheads associated with transmitting and verifying a crypto-based signature for each transmission. The following provides an overview of the methodologies employed in the contributions presented in this thesis.
1. The first cross-layer authentication scheme: This approach consists of four steps: initial crypto-based authentication, shared-key extraction, re-authentication via a PHY challenge-response algorithm, and adaptive adjustments based on channel conditions. Simulation results validate its efficacy, especially in low signal-to-noise ratio (SNR) scenarios, and demonstrate its resilience against active and passive attacks.
2. The second cross-layer authentication scheme: Leveraging the spatially and temporally correlated wireless channel features, this scheme extracts high-entropy shared keys that can be used to create dynamic PHY-layer signatures for authentication. A 3-dimensional (3D) scattering Doppler emulator is designed to investigate the scheme’s performance at different SNRs and speeds of a moving vehicle. Theoretical and hardware implementation analyses prove the scheme’s capability to support a high detection probability for an acceptable false-alarm probability ≤ 0.1 at SNR ≥ 0 dB and speed ≤ 45 m/s.
3. The third proposal: Reconfigurable intelligent surface (RIS) integration for improved authentication: Focusing on enhancing PHY-layer re-authentication, this proposal explores integrating RIS technology to improve the SNR directed at designated vehicles. Theoretical analysis and practical implementation of the proposed scheme are conducted using a 1-bit RIS consisting of 64 × 64 reflective units. Experimental results show a significant improvement in the detection probability, which increases from 0.82 to 0.96 at SNR = -6 dB for multicarrier communications.
4. The fourth proposal: RIS-enhanced vehicular communication security: Tailored to challenging SNR conditions in non-line-of-sight (NLoS) scenarios, this proposal optimises key extraction and defends against denial-of-service (DoS) attacks through selective signal strengthening. Hardware implementation studies prove its effectiveness, showcasing improved key extraction performance and resilience against potential threats.
5. The fifth cross-layer authentication scheme: Integrating PKI-based initial legitimacy detection and blockchain-based reconciliation techniques, this scheme ensures secure data exchange. Rigorous security analyses and performance evaluations using network simulators and computation metrics showcase its effectiveness, ensuring its resistance against common attacks and time efficiency in message verification.
6. The final proposal: Group key distribution: Employing smart contract-based blockchain technology alongside PKI-based authentication, this proposal distributes group session keys securely. Its lightweight symmetric key cryptography-based method maintains privacy in VANETs, validated via Ethereum’s main network (MainNet) and comprehensive computation and communication evaluations.
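The shared-key extraction underlying schemes 1 and 2 is not detailed above; a minimal sketch, assuming both terminals quantise reciprocal channel-gain measurements against a median threshold with a guard band (the function name, guard-band rule, and index-intersection reconciliation are illustrative assumptions, not the thesis's actual algorithm):

```python
def quantize_channel(gains, guard=0.1):
    """Turn channel-gain samples into key bits: 1 above the (upper) median,
    0 below; samples inside the guard band around the median are dropped,
    and the kept indices are returned for reconciliation."""
    med = sorted(gains)[len(gains) // 2]
    bits, kept = [], []
    for i, g in enumerate(gains):
        if abs(g - med) > guard:
            bits.append(1 if g > med else 0)
            kept.append(i)
    return bits, kept
```

Two terminals measuring a reciprocal channel observe highly correlated gains, so after intersecting their kept index lists they agree on most key bits; the guard band discards samples too close to the threshold, where noise would flip bits.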
The analysis shows that the proposed methods yield a noteworthy reduction of approximately 70% to 99% in both computation and communication overheads compared to conventional approaches. This reduction pertains to the verification and transmission of 1000 messages in total.
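Proposal 6's group session-key distribution could look roughly like the following sketch, which wraps one 32-byte group key per member with a pad derived from that member's long-term symmetric key and a fresh nonce. The XOR-pad construction and all names are illustrative assumptions, not the scheme's actual cryptography; a real design would use an authenticated cipher:

```python
import hashlib
import os

def wrap_key(member_key: bytes, nonce: bytes, group_key: bytes) -> bytes:
    """XOR the 32-byte group session key with a pad derived from the member's
    long-term key and a fresh nonce; applying it again unwraps the key."""
    pad = hashlib.sha256(member_key + nonce).digest()
    return bytes(a ^ b for a, b in zip(group_key, pad))

def distribute(member_keys, group_key):
    """One (nonce, wrapped_key) record per member, e.g. for publication via a
    smart contract; each member unwraps with its own long-term key."""
    records = {}
    for name, mk in member_keys.items():
        nonce = os.urandom(16)
        records[name] = (nonce, wrap_key(mk, nonce, group_key))
    return records
```

Because the wrap is an XOR with a deterministic pad, unwrapping is the same operation with the same key and nonce.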
Unsupervised Hashing via Similarity Distribution Calibration
Existing unsupervised hashing methods typically adopt a feature similarity
preservation paradigm. As a result, they overlook the intrinsic similarity
capacity discrepancy between the continuous feature and discrete hash code
spaces. Specifically, since the feature similarity distribution is
intrinsically biased (e.g., moderately positive similarity scores on negative
pairs), the hash code similarities of positive and negative pairs often become
inseparable (i.e., the similarity collapse problem). To solve this problem,
this paper introduces a novel Similarity Distribution Calibration (SDC)
method. Instead of matching individual pairwise similarity scores, SDC
aligns the hash code similarity distribution towards a calibration distribution
(e.g., beta distribution) with sufficient spread across the entire similarity
capacity/range, to alleviate the similarity collapse problem. Extensive
experiments show that our SDC outperforms the state-of-the-art alternatives on
both coarse category-level and instance-level image retrieval tasks, often by a
large margin. Code is available at https://github.com/kamwoh/sdc
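The distribution-alignment idea above can be illustrated with a toy quantile-matching loss: sorted hash-code similarities are compared against sorted draws from a Beta calibration distribution rescaled to the full similarity range [-1, 1]. This is only a sketch of the calibration concept, not the paper's actual objective (see the linked code for that):

```python
import random

def sdc_loss(code_sims, a=2.0, b=2.0, seed=0):
    """Quantile-matching proxy for similarity distribution calibration:
    compare sorted hash-code similarities against sorted draws from a
    Beta(a, b) calibration distribution rescaled to [-1, 1]."""
    rng = random.Random(seed)
    target = sorted(rng.betavariate(a, b) * 2.0 - 1.0 for _ in code_sims)
    sims = sorted(code_sims)
    return sum((s - t) ** 2 for s, t in zip(sims, target)) / len(sims)
```

Collapsed similarities (all clustered at one value) incur a large loss, while similarities spread like the calibration distribution incur a near-zero loss, which is the separation effect the paper targets.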
Neural approaches to spoken content embedding
Comparing spoken segments is a central operation in speech processing.
Traditional approaches in this area have favored frame-level dynamic
programming algorithms, such as dynamic time warping, because they require no
supervision, but they are limited in performance and efficiency. As an
alternative, acoustic word embeddings -- fixed-dimensional vector
representations of variable-length spoken word segments -- have begun to be
considered for such tasks as well. However, the current space of such
discriminative embedding models, training approaches, and their application to
real-world downstream tasks is limited. We start by considering ``single-view"
training losses where the goal is to learn an acoustic word embedding model
that separates same-word and different-word spoken segment pairs. Then, we
consider ``multi-view" contrastive losses. In this setting, acoustic word
embeddings are learned jointly with embeddings of character sequences to
generate acoustically grounded embeddings of written words, or acoustically
grounded word embeddings.
In this thesis, we contribute new discriminative acoustic word embedding
(AWE) and acoustically grounded word embedding (AGWE) approaches based on
recurrent neural networks (RNNs). We improve model training in terms of both
efficiency and performance. We take these developments beyond English to
several low-resource languages and show that multilingual training improves
performance when labeled data is limited. We apply our embedding models, both
monolingual and multilingual, to the downstream tasks of query-by-example
speech search and automatic speech recognition. Finally, we show how our
embedding approaches compare with and complement more recent self-supervised
speech models.
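The multi-view contrastive setting described above can be illustrated with a toy triplet objective over plain vectors; real models embed variable-length audio with RNNs, so the cosine-based margin loss below is an illustrative simplification, not the thesis's exact training objective:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def multiview_triplet(acoustic, written_pos, written_neg, margin=0.4):
    """Pull the acoustic word embedding towards the embedding of its own
    written word and push it away from another word's embedding."""
    return max(0.0, margin + cosine(acoustic, written_neg)
               - cosine(acoustic, written_pos))
```

When the acoustic embedding already aligns with its own word's character-sequence embedding by more than the margin, the loss is zero; mismatched pairs are penalised.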
Semantic Communications for Wireless Sensing: RIS-aided Encoding and Self-supervised Decoding
Semantic communications can reduce the resource consumption by transmitting
task-related semantic information extracted from source messages. However, when
the source messages are utilized for various tasks, e.g., wireless sensing data
for localization and activity detection, semantic communication techniques are
difficult to implement because of the increased processing complexity. In
this paper, we propose the inverse semantic communications as a new paradigm.
Instead of extracting semantic information from messages, we aim to encode the
task-related source messages into a hyper-source message for data transmission
or storage. Following this paradigm, we design an inverse semantic-aware
wireless sensing framework with three algorithms for data sampling,
reconfigurable intelligent surface (RIS)-aided encoding, and self-supervised
decoding, respectively. Specifically, on the one hand, we propose a novel RIS
hardware design for encoding several signal spectrums into one MetaSpectrum. To
select the task-related signal spectrums for achieving efficient encoding, a
semantic hash sampling method is introduced. On the other hand, we propose a
self-supervised learning method for decoding the MetaSpectrums to obtain the
original signal spectrums. Using sensing data collected from the real world, we
show that our framework can reduce the data volume by 95% compared to that
before encoding, without affecting the accomplishment of sensing tasks.
Moreover, compared with the typically used uniform sampling scheme, the
proposed semantic hash sampling scheme can achieve 67% lower mean squared error
in recovering the sensing parameters. In addition, experimental results
demonstrate that the amplitude response matrix of the RIS enables the
encryption of the sensing data.
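The encode/decode idea can be illustrated with a toy linear-mixing stand-in: each "MetaSpectrum" measurement is a known weighted superposition of the selected spectrums, and, unlike the paper's self-supervised decoder, a known 2×2 mixing matrix is inverted per frequency bin. This is an illustrative assumption, not the actual RIS hardware design:

```python
def encode(spectra, W):
    """Each MetaSpectrum measurement m is a known weighted superposition of
    the selected task-related spectrums: meta[m][i] = sum_k W[m][k]*spectra[k][i]."""
    n = len(spectra[0])
    return [[sum(W[m][k] * spectra[k][i] for k in range(len(spectra)))
             for i in range(n)] for m in range(len(W))]

def decode2(meta, W):
    """Recover two spectrums from two measurements by inverting the 2x2
    mixing matrix per frequency bin (the paper instead learns the decoder
    self-supervised)."""
    (a, b), (c, d) = W
    det = a * d - b * c
    s1 = [(d * m1 - b * m2) / det for m1, m2 in zip(meta[0], meta[1])]
    s2 = [(a * m2 - c * m1) / det for m1, m2 in zip(meta[0], meta[1])]
    return s1, s2
```

With an invertible mixing matrix, the original spectrums are recovered exactly; the paper's self-supervised decoder replaces this known-matrix inversion with a learned reconstruction.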
Privacy-Preserving Biometric Authentication
Biometric-based authentication provides a highly accurate means of authentication without requiring the user to memorize or possess anything. However, there are three disadvantages to the use of biometrics in authentication: any compromise is permanent, as it is impossible to revoke biometrics; there are significant privacy concerns associated with the loss of biometric data; and humans possess only a limited number of biometrics, which limits how many services can use or reuse the same form of authentication.
As such, enhancing biometric template security is of significant research interest. One such methodology is the cancellable biometric template, which applies an irreversible transformation to the features of the biometric sample and performs the matching in the transformed domain. Yet this approach is itself susceptible to specific classes of attacks, including hill-climbing, pre-image, and attacks via record multiplicity.
This work has several outcomes and contributions to the knowledge of privacy-preserving biometric authentication. The first of these is a taxonomy structuring the current state of the art and provisions for future research. The next is a multi-filter framework for developing a robust and secure cancellable biometric template, designed specifically for fingerprint biometrics. This framework comprises two modules, each of which is a separate cancellable fingerprint template with its own matching and measures; the matching is based on multiple thresholds. Importantly, these methods show strong resistance to the above-mentioned attacks. Another outcome is a method that achieves stable performance and can be embedded into a zero-knowledge-proof protocol. This novel method proposes a new strategy to improve recognition error rates while preserving privacy in untrusted environments. The results show promising performance when evaluated on current datasets.
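A cancellable template of the kind described above can be sketched as a seeded random projection followed by sign binarisation, with matching by Hamming distance in the transformed domain. This is a generic illustration of the cancellable-template idea, not the thesis's multi-filter framework:

```python
import random

def cancellable_template(features, seed, bits=64):
    """Seeded random projections followed by sign binarisation; reissuing a
    new seed revokes the old template, and matching happens in the
    transformed binary domain rather than on raw biometric features."""
    rng = random.Random(seed)
    template = []
    for _ in range(bits):
        proj = [rng.gauss(0.0, 1.0) for _ in features]
        template.append(1 if sum(p * f for p, f in zip(proj, features)) >= 0 else 0)
    return template

def hamming(t1, t2):
    """Match score in the transformed domain: number of differing bits."""
    return sum(a != b for a, b in zip(t1, t2))
```

The same biometric features under the same seed reproduce the template exactly, while issuing a new seed yields a fresh, unlinkable template, which is what makes the scheme revocable.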
Instance-weighted Central Similarity for Multi-label Image Retrieval
Deep hashing has been widely applied to large-scale image retrieval by
encoding high-dimensional data points into binary codes for efficient
retrieval. Compared with pairwise/triplet similarity based hash learning,
central similarity based hashing can more efficiently capture the global data
distribution. For multi-label image retrieval, however, previous methods only
use multiple hash centers with equal weights to generate one centroid as the
learning target, which ignores the relationship between the weights of hash
centers and the proportion of instance regions in the image. To address the
above issue, we propose a two-step alternative optimization approach,
Instance-weighted Central Similarity (ICS), to automatically learn the center
weight corresponding to a hash code. Firstly, we apply the maximum entropy
regularizer to prevent one hash center from dominating the loss function, and
compute the center weights via projection gradient descent. Secondly, we update
neural network parameters by standard back-propagation with fixed center
weights. More importantly, the learned center weights can well reflect the
proportion of foreground instances in the image. Our method achieves the
state-of-the-art performance on the image retrieval benchmarks, and especially
improves the mAP by 1.6%-6.4% on the MS COCO dataset.
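The projected-gradient step for the center weights can be illustrated with the standard sort-based Euclidean projection onto the probability simplex, together with an instance-weighted combination of hash centers. This is a generic sketch of the two ingredients the abstract names, not necessarily ICS's exact update:

```python
def project_simplex(v):
    """Sort-based Euclidean projection onto the probability simplex, the kind
    of step used when center weights are updated by projected gradient
    descent under nonnegativity and sum-to-one constraints."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        css += ui
        t = (css - 1.0) / (i + 1)
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def weighted_centroid(centers, weights):
    """Instance-weighted combination of hash centers, re-binarised by sign to
    give a single centroid as the learning target."""
    d = len(centers[0])
    return [1 if sum(w * c[i] for w, c in zip(weights, centers)) >= 0 else -1
            for i in range(d)]
```

After each gradient step on the weights, projecting back onto the simplex keeps them a valid distribution over the image's hash centers.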
Fast embedding for image classification & retrieval and its application to the hostel industry
This thesis was submitted for the award of Doctor of Philosophy and was
awarded by Brunel University London.
Content-based image classification and retrieval are the automatic processes of
taking an unseen input image and extracting the features that represent it.
Then, for the classification task, this mathematically measured input is
categorized according to established criteria on the server, and the output is
returned as a result. For the retrieval task, on the other hand, the extracted
features of an unseen query image are sent to the server to search for the
images most visually similar to it, and these images are retrieved as a result.
Although image features can be represented by classical descriptors,
artificial intelligence-based features, Convolutional Neural
Networks (CNNs) to be precise, have become powerful tools in the field. Nonetheless,
high-dimensional CNN features pose a challenge, particularly for applications
on mobile or Internet of Things devices. Therefore, in this thesis, several
fast embeddings are explored and proposed to overcome the constraints of low
memory, bandwidth, and power. Furthermore, the first hostel image database is
created, comprising three datasets: a hostel image dataset containing 13,908
interior and exterior images of hostels across the world, and the Hostels-900
and Hostels-2K datasets containing 972 and 2,380 images, respectively, of 20
London hostel buildings. The results demonstrate that the proposed fast
embeddings, such as the application of the GHM-Rand operator, the GHM-Fix
operator, and binary feature vectors, are able to outperform or give
competitive results to state-of-the-art methods with far less computational
resource. Additionally, the findings from a ten-year literature review of CBIR
studies in the tourism industry depict the relevant research activities of the
past decade, which are beneficial not only to the hostel industry and tourism
sector but also to the computer science and engineering research communities
for potential real-life applications of existing and developing technologies
in the field.
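The binary-feature idea can be sketched as sign binarisation packed into an integer code, with retrieval by Hamming distance via XOR and popcount. This is a generic illustration of binary fast embeddings, not the GHM operators themselves:

```python
def binarize(features):
    """Pack the sign bits of a real-valued feature vector into one integer,
    a simple stand-in for a binary fast embedding."""
    code = 0
    for f in features:
        code = (code << 1) | (1 if f >= 0 else 0)
    return code

def hamming(a, b):
    """Hamming distance between two integer codes via XOR and popcount."""
    return bin(a ^ b).count("1")

def retrieve(query_code, db_codes):
    """Rank database images by Hamming distance to the query code."""
    return sorted(range(len(db_codes)), key=lambda i: hamming(query_code, db_codes[i]))
```

An integer code per image replaces a high-dimensional float vector, which is exactly the memory and bandwidth saving that matters on mobile and IoT devices.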
Scalable Life-long Visual Place Recognition
Visual place recognition (VPR) is the task of using visual inputs to determine whether a mobile robot is visiting a previously observed place or exploring new regions. To perform convincingly, a practical VPR algorithm must be robust against appearance changes due not only to short-term (e.g., weather, lighting) and long-term (e.g., seasons, vegetation growth) environmental variations, but also to "less cyclical" changes (construction and roadworks, updating of signage, facades, and billboards, etc.). Such appearance changes invariably occur in real life, which motivates our thesis to fill this research gap. To this end, we first investigate probabilistic frameworks to effectively exploit the temporal information in visual data that comes in the form of videos. Inspired by the Bayes filter, we propose two VPR methods that perform filtering on discrete and continuous domains, respectively, where the temporal information is efficiently used to improve VPR accuracy under appearance changes. Given that the appearance of operational environments changes uninterruptedly and indefinitely, a promising way for VPR to deal with appearance changes is to continuously accumulate images so that new changes are incorporated into the internal environmental representation. This demands a VPR technique that is scalable to an ever-growing dataset. To this end, inspired by Hidden Markov Models (HMMs), we develop novel VPR techniques that can be efficiently updated and compressed, such that the recognition of new queries can exploit all available data (including recent changes) without suffering from linear growth in time and space complexity. Another approach to the scalability issue in VPR is map summarization, which keeps only informative 3D points in a topometric map according to predefined constraints. In this thesis, we define the timestamp as another constraint.
Accordingly, we formulate a repeatability predictor (RP) as a regressor that predicts the repeatability of an interest point as a function of time. We show that the RP can be used to significantly alleviate the degradation of VPR accuracy from map summarization. The contributions of this thesis not only fill the gap within the current state of VPR research but, more importantly, also enable a wide range of applications, such as self-driving cars, autonomous robots, augmented reality, and so on.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
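The discrete-domain Bayes filtering described above can be sketched as one predict/update cycle over candidate places, assuming a simple stay-or-advance motion model along a route (an illustrative simplification of the thesis's methods, with all parameter values chosen for the example):

```python
def bayes_filter_step(belief, likelihood, p_stay=0.5):
    """One predict/update cycle of a discrete Bayes filter over candidate
    places: the robot either stays at its place or advances to the next
    (cyclically), then the belief is reweighted by the appearance-matching
    likelihood and renormalised."""
    n = len(belief)
    predicted = [p_stay * belief[j] + (1.0 - p_stay) * belief[(j - 1) % n]
                 for j in range(n)]
    posterior = [p * l for p, l in zip(predicted, likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Temporal information enters through the motion model: even noisy per-frame appearance likelihoods accumulate over successive frames into a confident place estimate.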
Recent Advances in Signal Processing
Signal processing is a critical task in the majority of new technological inventions and challenges, spanning a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.