168 research outputs found
On the Security of a Novel Probabilistic Signature Based on Bilinear Square Diffie-Hellman Problem and Its Extension
Probabilistic signature schemes have been widely used in modern electronic commerce since they provide integrity, authenticity, and non-repudiation. Recently, Wu and Lin proposed a novel probabilistic signature (PS) scheme based on the bilinear square Diffie-Hellman (BSDH) problem and extended it to a universal designated verifier signature (UDVS) scheme. In this paper, we analyze the security of Wu and Lin's PS and UDVS schemes. Through concrete attacks, we demonstrate that neither scheme is unforgeable. The security analysis shows that their schemes are not suitable for practical applications.
Device-Free WiFi Sensing for Human Activity Recognition
University of Technology Sydney, Faculty of Engineering and Information Technology. Human activity recognition (HAR) using WiFi signals (WiFi-based HAR) has drawn considerable interest from the research community. In contrast to traditional device-based sensing techniques, WiFi-based HAR possesses several advantages, including convenience, wide availability, and privacy protection, making it an attractive sensing solution for a wide range of applications in smart homes, health care, and intelligent monitoring.
Recently, applying deep learning (DL) to WiFi-based HAR has received strong research interest. Assisted by signal processing techniques, DL-based HAR methods are able to automatically extract deep features from input signals, enabling accurate recognition. Despite their effectiveness in improving recognition performance, DL-based HAR methods suffer from several inherent drawbacks. First, feature extraction is a challenging task that often bottlenecks recognition performance. Second, DL-based HAR requires a large number of training examples from the testing/targeted environment and/or previously seen environments (PSEs) to train the corresponding DL architectures. When the number of available samples is insufficient, sensing performance drops dramatically. Third, a model trained in one environment cannot be directly applied to another environment without additional effort.
My PhD thesis aims to provide novel solutions to the above WiFi-based HAR issues. Specifically, to extract effective features, we propose two advanced methods that leverage the properties of DL architectures to enhance the quality of the input signals to DL networks and the extracted representative features. For reliable recognition with limited training samples, we propose a novel HAR scheme that develops innovative signal processing methods and explores the characteristics of one-shot learning to reduce the number of required training samples. The proposed HAR scheme accomplishes successful recognition even when both the number of PSEs and the number of samples from the testing environment are quite limited (e.g., one PSE and as little as one sample for each activity from the testing environment). To achieve environmental robustness, we propose two novel signal processing algorithms and leverage the features of the matching network. The proposed models are trained once and can be directly applied to various new/testing environments for reliable recognition without requiring an additional retraining process.
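The one-shot idea above can be sketched with a minimal nearest-neighbor matching classifier: one labeled support example per activity, and a query classified by feature similarity. This is a generic matching-network-style illustration, not the thesis's actual method; the feature vectors here are hypothetical stand-ins for features extracted from WiFi signals.

```python
import numpy as np

def one_shot_classify(support, query):
    """Classify a query feature vector against one labeled example per class
    using cosine similarity (matching-network-style one-shot learning).

    support: dict mapping activity label -> one feature vector (one shot each)
    query:   feature vector extracted from the test signal
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Pick the label whose single support example is most similar to the query.
    return max(support, key=lambda label: cos(support[label], query))

# Toy features, standing in for processed WiFi channel measurements.
support = {"walk": np.array([1.0, 0.1, 0.0]),
           "sit":  np.array([0.0, 1.0, 0.2])}
print(one_shot_classify(support, np.array([0.9, 0.2, 0.0])))  # -> walk
```

A real system would replace the cosine similarity with a learned embedding, which is what makes matching networks robust across environments.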
simplexreg: an R package for regression analysis of proportional data using the simplex distribution
Outcomes of continuous proportions arise in many applied areas. Such data are typically measured as percentages, rates, or proportions confined to the unit interval. In this paper, the R package simplexreg, which provides dispersion-model fitting of the simplex distribution, is introduced to model such proportional outcomes. The maximum likelihood method and generalized estimating equations techniques are available for parameter estimation in cross-sectional and longitudinal studies, respectively. This paper presents the methods and algorithms implemented in the package, including parameter estimation, model checking, as well as density, cumulative distribution, quantile, and random number generating functions of the simplex distribution. The package is applied to real data sets for illustration.
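The density function mentioned above is easy to state. A minimal Python sketch, assuming the standard dispersion-model parameterization of the simplex distribution (mean mu in (0,1) and dispersion sigma^2 > 0), looks like this; simplexreg itself is an R package, so this is only an illustrative re-implementation of the density:

```python
import math

def dsimplex(y, mu, sigma2):
    """Density of the simplex distribution on (0, 1) with mean mu and
    dispersion sigma2, in the standard dispersion-model form."""
    if not (0 < y < 1 and 0 < mu < 1 and sigma2 > 0):
        raise ValueError("require y, mu in (0, 1) and sigma2 > 0")
    # Unit deviance d(y; mu) of the simplex dispersion model.
    d = (y - mu) ** 2 / (y * (1 - y) * mu ** 2 * (1 - mu) ** 2)
    norm = math.sqrt(2 * math.pi * sigma2 * (y * (1 - y)) ** 3)
    return math.exp(-d / (2 * sigma2)) / norm
```

At y = mu the deviance vanishes, so the density reduces to the normalizing constant alone, which gives a quick sanity check.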
UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation
Jointly processing information from multiple sensors is crucial to achieving
accurate and robust perception for reliable autonomous driving systems.
However, current 3D perception research follows a modality-specific paradigm,
leading to additional computation overheads and inefficient collaboration
between different sensor data. In this paper, we present an efficient
multi-modal backbone for outdoor 3D perception named UniTR, which processes a
variety of modalities with unified modeling and shared parameters. Unlike
previous works, UniTR introduces a modality-agnostic transformer encoder to
handle these view-discrepant sensor data for parallel modal-wise representation
learning and automatic cross-modal interaction without additional fusion steps.
More importantly, to make full use of these complementary sensor types, we
present a novel multi-modal integration strategy by both considering
semantic-abundant 2D perspective and geometry-aware 3D sparse neighborhood
relations. UniTR is also a fundamentally task-agnostic backbone that naturally
supports different 3D perception tasks. It sets a new state-of-the-art
performance on the nuScenes benchmark, achieving +1.1 NDS higher for 3D object
detection and +12.0 higher mIoU for BEV map segmentation with lower inference
latency. Code will be available at https://github.com/Haiyang-W/UniTR. Comment: Accepted by ICCV202
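The core idea of a modality-agnostic encoder with shared parameters can be illustrated in a few lines: tokens from different sensors are concatenated and passed through one attention layer with a single weight set, so cross-modal interaction happens inside the attention itself rather than in a separate fusion step. This is a toy single-head sketch, not UniTR's actual architecture.

```python
import numpy as np

def shared_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention; one shared weight set for all modalities."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Numerically stable softmax over each row.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

camera = rng.standard_normal((4, d))  # 4 image-view tokens (toy)
lidar = rng.standard_normal((6, d))   # 6 point-cloud tokens (toy)

# One forward pass fuses both modalities: every lidar token can attend
# to every camera token and vice versa, with no extra fusion module.
fused = shared_attention(np.concatenate([camera, lidar]), Wq, Wk, Wv)
print(fused.shape)  # (10, 8)
```

The shared weights are what make the backbone "modality-agnostic": adding a new sensor only adds tokens, not parameters.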
Backward Reasoning in Large Language Models for Verification
Chain-of-Thought (CoT) prompting has shown promising performance in various reasoning tasks. Recently, Self-Consistency (Wang et al., 2023) proposed sampling a diverse set of reasoning chains that may lead to different answers, with the answer that receives the most votes being selected. In this paper, we propose a novel method that uses backward reasoning to verify candidate answers. We mask a token in the question and ask the LLM to predict the masked token when a candidate answer is provided by a simple template, i.e., "If we know the answer of the above question is {a candidate answer}, what is the value of the unknown variable?" Intuitively, the LLM is expected to predict the masked token successfully if the provided candidate answer is correct. We further propose FOBAR to combine forward and backward reasoning for estimating the probability of candidate answers. We conduct extensive experiments on six datasets and three LLMs. Experimental results demonstrate that FOBAR achieves state-of-the-art performance on various reasoning benchmarks. Comment: Preprint
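One simple way to combine the two directions is to weight each candidate's forward vote share by its backward verification rate. This sketch illustrates the idea only; FOBAR's exact estimator may combine the two probabilities differently.

```python
from collections import Counter

def combine_forward_backward(forward_answers, backward_rate):
    """Score candidate answers by forward voting weighted by backward checks.

    forward_answers: answers sampled from forward reasoning chains
                     (Self-Consistency style voting).
    backward_rate:   dict answer -> fraction of backward checks in which the
                     LLM recovered the masked token given that answer.
    Returns the best answer and the score of every candidate.
    """
    votes = Counter(forward_answers)
    total = sum(votes.values())
    scores = {a: (votes[a] / total) * backward_rate.get(a, 0.0)
              for a in votes}
    return max(scores, key=scores.get), scores

# "42" gets 2/3 of the forward votes and passes 90% of backward checks;
# "41" gets 1/3 of the votes and passes only half of its checks.
best, scores = combine_forward_backward(["42", "42", "41"],
                                        {"42": 0.9, "41": 0.5})
print(best)  # -> 42
```

The backward factor can flip a forward-vote winner: a popular but wrong answer that fails the masked-token check gets multiplied toward zero.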
CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D. Our proposed method first generates high-quality 3D proposals by leveraging a class-aware local grouping strategy on object surface voxels with the same semantic predictions, which accounts for the semantic consistency and diverse locality abandoned in previous bottom-up approaches. Then, to recover the features of voxels missed due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module to directly aggregate fine-grained spatial information from the backbone for further proposal refinement. It is memory- and computation-efficient and can better encode the geometry-specific features of each 3D proposal. Our model achieves state-of-the-art 3D detection performance with remarkable gains of +3.6% on ScanNet V2 and +2.6% on SUN RGB-D in terms of [email protected]. Code will be available at https://github.com/Haiyang-W/CAGroup3D. Comment: Accepted by NeurIPS202
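The class-aware grouping step can be pictured as a flood-fill over voxel centers that only merges voxels sharing the same semantic prediction. This toy sketch is an assumption-laden illustration of the grouping idea, not CAGroup3D's actual implementation (which operates on sparse voxel features with learned offsets).

```python
from collections import defaultdict, deque

def class_aware_groups(voxels, labels, radius=1.5):
    """Group voxel centers that share a semantic class and lie within
    `radius` of some other member of the group (BFS flood-fill).

    voxels: list of (x, y, z) centers; labels: predicted class per voxel.
    Returns a list of index groups, one per proposal seed.
    """
    by_class = defaultdict(list)
    for i, c in enumerate(labels):
        by_class[c].append(i)

    def near(i, j):
        return sum((a - b) ** 2 for a, b in zip(voxels[i], voxels[j])) <= radius ** 2

    groups, seen = [], set()
    for idxs in by_class.values():        # never merge across classes
        for seed in idxs:
            if seed in seen:
                continue
            queue, group = deque([seed]), []
            seen.add(seed)
            while queue:
                i = queue.popleft()
                group.append(i)
                for j in idxs:
                    if j not in seen and near(i, j):
                        seen.add(j)
                        queue.append(j)
            groups.append(sorted(group))
    return groups

voxels = [(0, 0, 0), (1, 0, 0), (5, 0, 0), (0.5, 0.5, 0)]
labels = ["chair", "chair", "chair", "table"]
print(class_aware_groups(voxels, labels))  # -> [[0, 1], [2], [3]]
```

The distant "chair" voxel at x=5 forms its own group, and the "table" voxel is never merged with nearby chair voxels: that is the semantic-consistency constraint the abstract refers to.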
DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning
Diffusion models have proven to be highly effective in generating high-quality images. However, adapting large pre-trained diffusion models to new domains remains an open challenge, which is critical for real-world applications. This paper proposes DiffFit, a parameter-efficient strategy for fine-tuning large pre-trained diffusion models that enables fast adaptation to new domains. DiffFit is embarrassingly simple: it only fine-tunes the bias terms and newly added scaling factors in specific layers, yet it yields significant training speed-up and reduced model storage costs. Compared with full fine-tuning, DiffFit achieves a 2x training speed-up and only needs to store approximately 0.12% of the total model parameters. An intuitive theoretical analysis is provided to justify the efficacy of the scaling factors for fast adaptation. On 8 downstream datasets, DiffFit achieves superior or competitive performance compared to full fine-tuning while being more efficient. Remarkably, we show that DiffFit can adapt a pre-trained low-resolution generative model to a high-resolution one at minimal cost. Among diffusion-based methods, DiffFit sets a new state-of-the-art FID of 3.02 on the ImageNet 512x512 benchmark by fine-tuning for only 25 epochs from a public pre-trained ImageNet 256x256 checkpoint, while being 30x more training-efficient than the closest competitor. Comment: Tech Report
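The parameter-selection idea is simple enough to sketch without a deep-learning framework: mark only bias terms and newly added scale factors as trainable and measure what fraction of the model that is. The parameter names and sizes below are hypothetical, and the `.bias`/`.scale` naming convention is an assumption for illustration.

```python
def difffit_trainable(param_sizes):
    """Select a DiffFit-style trainable subset: bias terms plus newly
    added per-layer scaling factors.

    param_sizes: dict of parameter name -> number of elements.
    Returns (trainable_names, trainable_fraction_of_all_parameters).
    """
    trainable = [n for n in param_sizes
                 if n.endswith(".bias") or ".scale" in n]
    total = sum(param_sizes.values())
    frac = sum(param_sizes[n] for n in trainable) / total
    return trainable, frac

# Hypothetical tiny transformer block, for illustration only.
params = {
    "attn.qkv.weight": 3 * 512 * 512,
    "attn.qkv.bias": 3 * 512,
    "mlp.fc1.weight": 512 * 2048,
    "mlp.fc1.bias": 2048,
    "mlp.scale": 512,   # newly added DiffFit-style scaling factor
}
names, frac = difffit_trainable(params)
print(sorted(names), round(frac, 4))
```

Even in this toy block the trainable subset is a fraction of a percent of all parameters, which is why only that subset needs to be stored per adapted domain.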