An Efficient Certificate-Based Designated Verifier Signature Scheme
Certificate-based public key cryptography not only solves the certificate revocation problem in traditional PKI but also overcomes the key escrow problem inherent in identity-based cryptosystems. This new primitive has become an attractive cryptographic paradigm. In this paper, we propose the notion and the security model of certificate-based designated verifier signatures (CBDVS). We provide the first construction of CBDVS and prove that our scheme is existentially unforgeable against adaptive chosen message attacks in the random oracle model. Our scheme needs only two pairing operations, and the signature is a single element of the bilinear group G1. To the best of our knowledge, our scheme enjoys the shortest signature length with the lowest computation cost.
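To make the designated-verifier idea concrete, the display below sketches one classical pairing-based DVS pattern; it illustrates the primitive in general, not the certificate-based construction proposed in the paper. Let e: G1 × G1 → GT be a bilinear map with generator g, H a hash onto G1, (x_S, pk_S = g^{x_S}) the signer's key pair, and (x_V, pk_V = g^{x_V}) the designated verifier's key pair.

```latex
% Illustrative pairing-based DVS pattern (not the paper's CBDVS scheme).
\sigma \;=\; e\big(H(m),\,\mathrm{pk}_V\big)^{x_S} \;=\; e\big(H(m),\,g\big)^{x_S x_V},
\qquad
\text{Verify: } \sigma \;\stackrel{?}{=}\; e\big(H(m),\,\mathrm{pk}_S\big)^{x_V}.
```

Because the designated verifier can compute the same value using x_V alone, a valid signature convinces only that verifier and cannot be transferred to third parties, which is exactly the designated-verifier property the paper formalizes for the certificate-based setting.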
Learning to Purify Noisy Labels via Meta Soft Label Corrector
Recent deep neural networks (DNNs) can easily overfit to biased training data with noisy labels. Label correction strategies are commonly used to alleviate this issue by identifying suspected noisy labels and then correcting them. Current approaches to correcting corrupted labels usually require pre-defined correction rules or manually preset hyper-parameters. Such fixed settings are hard to apply in practice, since accurate label correction usually depends on the concrete problem, the training data, and the temporal information hidden in the dynamic iterations of the training process. To address this issue, we propose a meta-learning model that estimates soft labels through a meta-gradient descent step under the guidance of noise-free meta-data. By viewing the label correction procedure as a meta-process and using a meta-learner to automatically correct labels, we can adaptively obtain rectified soft labels iteratively, according to the current training state, without manually preset hyper-parameters. Moreover, our method is model-agnostic and can easily be combined with any existing model. Comprehensive experiments substantiate the superiority of our method over current state-of-the-art label correction strategies on both synthetic and real-world problems with noisy labels.
Comment: 12 pages, 6 figures
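As a rough illustration of the bi-level gradient flow the abstract describes, the sketch below treats the soft labels of a noisy batch as learnable logits, takes a virtual SGD step on the model, and backpropagates the loss of the virtually-updated model on clean meta-data to the labels. The function names, learning rates, and the use of raw label logits in place of the paper's meta-learner network are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of one meta-gradient update of soft labels (assumptions
# throughout; the paper uses a learned meta soft label corrector instead
# of raw label logits).
import torch
import torch.nn.functional as F
from torch.func import functional_call

def meta_update_soft_labels(model, x_noisy, label_logits, x_meta, y_meta,
                            inner_lr=0.1, label_lr=1.0):
    """Return refined soft-label logits for the noisy batch (shape [B, C])."""
    label_logits = label_logits.detach().requires_grad_(True)

    # Inner step: virtual SGD update of the model under the current soft labels.
    logits = model(x_noisy)
    soft = F.softmax(label_logits, dim=-1)
    inner_loss = -(soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    fast = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Meta step: evaluate the virtually-updated model on clean meta-data.
    meta_loss = F.cross_entropy(functional_call(model, fast, (x_meta,)), y_meta)

    # Meta-gradient descent on the labels themselves.
    (label_grad,) = torch.autograd.grad(meta_loss, label_logits)
    return (label_logits - label_lr * label_grad).detach()
```

In the full method, a meta soft label corrector network would output the corrected labels; the sketch only exposes the meta-gradient mechanics such a corrector is trained with.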
Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation
Large language models (LLMs) have emerged as a new paradigm for the Text-to-SQL task. However, the absence of a systematic benchmark inhibits the development of effective, efficient, and economical LLM-based Text-to-SQL solutions. To address this challenge, in this paper we first conduct a systematic and extensive comparison of existing prompt engineering methods, covering question representation, example selection, and example organization, and with these experimental results we elaborate on their pros and cons. Based on these findings, we propose a new integrated solution, named DAIL-SQL, which refreshes the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To explore the potential of open-source LLMs, we investigate them in various scenarios and further enhance their performance with supervised fine-tuning. Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of supervised fine-tuning. Additionally, towards an efficient and economical LLM-based Text-to-SQL solution, we emphasize token efficiency in prompt engineering and compare the prior studies under this metric. We hope that our work provides a deeper understanding of Text-to-SQL with LLMs and inspires further investigations and broad applications.
Comment: We have released code at https://github.com/BeachWang/DAIL-SQ
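For a concrete picture of the prompt-engineering axes being benchmarked, here is a minimal sketch of a Text-to-SQL prompt builder covering question representation, example selection, and example organization. The code-style representation and the string-similarity selector are simplifying assumptions, not necessarily DAIL-SQL's exact components.

```python
# Sketch of a Text-to-SQL prompt pipeline: representation + selection +
# organization. String similarity stands in for a learned similarity metric.
from difflib import SequenceMatcher

def represent(schema: str, question: str) -> str:
    """Code-style question representation: schema DDL plus the question."""
    return f"{schema}\n-- Question: {question}\n-- SQL:"

def select_examples(question: str, pool: list, k: int = 3) -> list:
    """Pick the k pool questions most similar to the target question."""
    return sorted(pool, key=lambda ex: SequenceMatcher(
        None, question, ex["question"]).ratio(), reverse=True)[:k]

def build_prompt(schema: str, question: str, pool: list, k: int = 3) -> str:
    """Organize the selected examples as few-shot demonstrations."""
    shots = select_examples(question, pool, k)
    demo = "\n\n".join(f"-- Question: {ex['question']}\n{ex['sql']}"
                       for ex in shots)
    return f"{demo}\n\n{represent(schema, question)}"
```

With a pool of (question, SQL) pairs, `build_prompt(schema_ddl, question, pool)` yields a few-shot prompt ready for an LLM call; the token cost of the resulting string is exactly the efficiency metric the paper compares methods under.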
Learning Accurate Entropy Model with Global Reference for Image Compression
In recent deep image compression neural networks, the entropy model plays a critical role in estimating the prior distribution of deep image encodings. Existing methods combine a hyperprior with local context in the entropy estimation function, which greatly limits their performance due to the absence of a global vision. In this work, we propose a novel Global Reference Model for image compression that effectively leverages both local and global context information, leading to an enhanced compression rate. The proposed method scans the decoded latents and finds the most relevant latent to assist the distribution estimation of the current latent. A by-product of this work is a mean-shifting GDN module that further improves performance. Experimental results demonstrate that the proposed model outperforms most state-of-the-art methods in rate-distortion performance.
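A minimal sketch of how such a global reference could work is given below, under the assumptions that previously decoded latents are flattened into a causal memory and that the single most similar latent is fused with the local context to predict Gaussian entropy parameters. The module and tensor layout are illustrative, not the authors' implementation.

```python
# Sketch: pick the most similar previously-decoded latent (global reference)
# and fuse it with local context to predict (mean, scale) of the entropy model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalReference(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.params = nn.Conv2d(2 * channels, 2 * channels, 1)  # -> mean, scale

    def forward(self, local_ctx, decoded):
        """local_ctx: [B,C,H,W] local context; decoded: [B,N,C] flattened
        previously decoded latents (a causal set)."""
        B, C, H, W = local_ctx.shape
        q = local_ctx.flatten(2).transpose(1, 2)                 # [B, HW, C]
        sim = F.normalize(q, dim=-1) @ F.normalize(decoded, dim=-1).transpose(1, 2)
        idx = sim.argmax(-1, keepdim=True).expand(-1, -1, C)     # top-1 reference
        ref = decoded.gather(1, idx).transpose(1, 2).reshape(B, C, H, W)
        mean, scale = self.params(torch.cat([local_ctx, ref], 1)).chunk(2, 1)
        return mean, F.softplus(scale)
```

The key design point the abstract argues for is visible here: the similarity search ranges over all decoded latents, not just a local causal window.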
Vizard: A Metadata-hiding Data Analytic System with End-to-End Policy Controls
Owner-centric control is a widely adopted method for easing owners' concerns over data abuse and motivating them to share their data to gain collective knowledge. However, while many control enforcement techniques have been proposed, the privacy threats due to metadata leakage are largely neglected in existing works. Unfortunately, a sophisticated attacker can infer very sensitive information from either owners' data control policies or their analytic task participation histories (e.g., participating in a mental illness or cancer study can reveal their health conditions). To address this problem, we introduce Vizard, a metadata-hiding analytic system that enables privacy-hardened and enforceable control for owners. Vizard is built with a tailored suite of lightweight cryptographic tools and designs that help us efficiently handle analytic queries over encrypted data streams coming in real time (like heart rates). We propose extension designs to further enable advanced owner-centric controls (with AND, OR, and NOT operators) and to provide owners with release control that additionally regulates how the result should be protected before delivery. We develop a prototype of Vizard that is interfaced with Apache Kafka, and the evaluation results demonstrate the practicality of Vizard for large-scale and metadata-hiding analytics over data streams.
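The AND/OR/NOT owner controls can be pictured with the following plaintext stand-in, which evaluates a policy tree against a query's attributes. In Vizard itself this enforcement happens cryptographically over encrypted data, so the sketch captures only the policy semantics, and all names are illustrative.

```python
# Plaintext stand-in for owner policy trees with AND / OR / NOT operators
# (Vizard enforces the same semantics cryptographically).
from dataclasses import dataclass
from typing import Union

@dataclass
class Attr:          # leaf: "query attribute == value"
    key: str
    value: str

@dataclass
class And:
    left: "Policy"
    right: "Policy"

@dataclass
class Or:
    left: "Policy"
    right: "Policy"

@dataclass
class Not:
    inner: "Policy"

Policy = Union[Attr, And, Or, Not]

def allows(p: Policy, query: dict) -> bool:
    """Decide whether the owner's policy admits this analytic query."""
    if isinstance(p, Attr):
        return query.get(p.key) == p.value
    if isinstance(p, And):
        return allows(p.left, query) and allows(p.right, query)
    if isinstance(p, Or):
        return allows(p.left, query) or allows(p.right, query)
    return not allows(p.inner, query)

# e.g. share with cardiology studies, but never mental-illness studies:
policy = And(Attr("domain", "cardiology"), Not(Attr("topic", "mental-illness")))
```

The metadata-hiding challenge the paper tackles is precisely that such a policy, if evaluated in the clear, would itself leak sensitive information about the owner.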
Revisiting Event-based Video Frame Interpolation
Dynamic vision sensors, or event cameras, provide rich complementary information for video frame interpolation. Existing state-of-the-art methods follow the paradigm of combining synthesis-based and warping networks. However, few of those methods fully respect the intrinsic characteristics of event streams. Given that event cameras only encode intensity changes and polarity rather than color intensities, estimating optical flow from events is arguably more difficult than from RGB information. We therefore propose to incorporate RGB information in an event-guided optical flow refinement strategy. Moreover, in light of the quasi-continuous nature of the time signals provided by event cameras, we propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single, long stage. Extensive experiments on both synthetic and real-world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods. Our findings underline that careful consideration of event characteristics such as high temporal density and elevated noise benefits interpolation accuracy.
Comment: Accepted by IROS 2023. Project site: https://jiabenchen.github.io/revisit_even
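A structural sketch of the divide-and-conquer strategy is given below: the frame at each interval midpoint is synthesized first and then serves as an anchor for the two shorter sub-intervals. The `events(t0, t1)` accessor and the `synth_net` interface are hypothetical placeholders, not the paper's actual modules.

```python
# Sketch of midpoint-first, divide-and-conquer frame synthesis.
# events(t0, t1) is assumed to return an event voxel grid for [t0, t1);
# synth_net(f0, f1, voxels) is assumed to synthesize the midpoint frame.
def interpolate(frame_a, frame_b, events, t_a, t_b, synth_net, depth=2):
    """Recursively synthesize the frames inside [t_a, t_b], midpoint first.
    Returns the synthesized frames in temporal order."""
    t_mid = 0.5 * (t_a + t_b)
    mid = synth_net(frame_a, frame_b, events(t_a, t_b))
    if depth == 1:
        return [mid]
    # Each sub-interval is shorter and simpler: reuse mid as a new anchor.
    left = interpolate(frame_a, mid, events, t_a, t_mid, synth_net, depth - 1)
    right = interpolate(mid, frame_b, events, t_mid, t_b, synth_net, depth - 1)
    return left + [mid] + right
```

Each recursion level bridges half the time span of its parent, which is the sense in which synthesis proceeds "incrementally in multiple simplified stages rather than in a single, long stage."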