Identifying RFID Tags in Collisions
How to obtain information from massive numbers of tags is a key focus of RFID applications. Collisions lead to problems such as reduced identification efficiency in RFID networks. To tackle such challenges, most tag collision arbitration protocols focus on scheduling tag identification with collision avoidance. However, how to effectively identify tags in collisions to improve identification efficiency has not been well explored. In this paper, we propose a group query allocation method to divide the string space into mutually disjoint subsets, each of which contains several strings. Each string can be viewed as a full or partial ID of a tag. When multiple strings from a subset are sent simultaneously, the reader can identify all of them in a single time slot. Based on the group query allocation method, a segment detection based characteristic group query tree (SD-CGQT) protocol is presented for fast tag identification, significantly reducing collision slots and transmitted bits. Numerous experimental results verify the superiority of the proposed SD-CGQT over prior art in system efficiency, total identification time, communication complexity, and energy consumption.
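The baseline that SD-CGQT improves on can be made concrete with a minimal simulation of the classic binary query tree. The sketch below is not the SD-CGQT protocol (its group query allocation is not reproduced here); tag IDs and bit lengths are illustrative.

```python
def query_tree_identify(tag_ids, id_bits):
    """Classic binary query-tree anti-collision: the reader broadcasts a
    prefix; tags whose ID starts with it respond; a collided prefix is
    split by appending '0' and '1'.  Returns the identified IDs and the
    number of slots (queries) used."""
    identified, slots = [], 0
    stack = [""]                       # pending query prefixes
    while stack:
        prefix = stack.pop()
        slots += 1
        responders = [t for t in tag_ids if t.startswith(prefix)]
        if len(responders) == 1:       # readable slot: exactly one reply
            identified.append(responders[0])
        elif len(responders) > 1 and len(prefix) < id_bits:
            stack.extend([prefix + "0", prefix + "1"])  # collision: split
        # zero responders: idle slot, nothing to do
    return identified, slots

tags = ["0011", "0101", "1100", "1110"]
ids, slots = query_tree_identify(tags, id_bits=4)   # 4 tags in 9 slots
```

The slot count is what collision-recovery protocols like SD-CGQT attack: here every collision costs an extra query, whereas identifying colliding strings directly would collapse whole subtrees into single slots.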
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of -stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
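A minimal instance of the forward-backward proximal splitting scheme mentioned above is ISTA for the sparsity (l1) prior; the sketch below applies it to a toy compressed-sensing problem. The step size, regularization weight, and problem sizes are illustrative choices, not taken from the chapter.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=5000):
    # Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    # an explicit gradient step on the smooth data-fit term, then the
    # proximal step of the l1 regularizer, i.e. soft-thresholding.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))     # 40 measurements of a 100-dim signal
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0] # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.1)            # largest entries land on the support
```

The support of `x_hat` is the "identified model" in the chapter's terminology: the iterates land on the low-dimensional linear space of vectors sharing that sparsity pattern.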
Report on "Geometry and representation theory of tensors for computer science, statistics and other areas."
This is a technical report on the proceedings of the workshop held July 21 to
July 25, 2008 at the American Institute of Mathematics, Palo Alto, California,
organized by Joseph Landsberg, Lek-Heng Lim, Jason Morton, and Jerzy Weyman. We
include a list of open problems coming from applications in 4 different areas:
signal processing, the Mulmuley-Sohoni approach to P vs. NP, matchgates and
holographic algorithms, and entanglement and quantum information theory. We
emphasize the interactions between geometry and representation theory and these
applied areas.
A new security architecture for SIP based P2P computer networks
Many applications, such as VoIP (Voice over IP), are migrating from the C/S (Client/Server) model to the P2P (Peer-to-Peer) model. This paper presents a new security architecture, i.e. a trustworthy authentication algorithm for peers, for Session Initiation Protocol (SIP) based P2P computer networks. A mechanism for node authentication using a cryptographic primitive called a one-way accumulator is proposed to secure P2P SIP computer networks. It leverages the distributed nature of P2P to allow for distributed resource discovery and rendezvous in a SIP network, thus eliminating (or at least reducing) the need for centralized servers. The distributed node authentication algorithm is established for P2P SIP computer networks, and the corresponding protocol has been implemented successfully on our P2P SIP experimental platform. The performance study verifies the proposed distributed node authentication algorithm for SIP based P2P computer networks.
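The one-way accumulator primitive can be illustrated with the textbook RSA-style construction. The sketch below uses toy parameters (a tiny modulus and hypothetical prime representatives for node IDs) purely to show the accumulate/witness/verify pattern; it is not the paper's protocol and is not secure as written.

```python
# Textbook RSA-style one-way accumulator (toy parameters, NOT secure):
# members are accumulated as exponents, z = g^(x1*x2*...*xk) mod n, and a
# member's witness is the accumulation of all the OTHER members, so that
# verification checks w^x == z (mod n).  In a P2P SIP overlay, node IDs
# would be mapped to prime exponents; the mapping and sizes here are
# hypothetical stand-ins.
n = 3233                 # toy RSA modulus (61 * 53); real use needs ~2048 bits
g = 5                    # public base
members = [3, 7, 11]     # prime representatives of three node IDs

def accumulate(base, xs, mod):
    z = base
    for x in xs:
        z = pow(z, x, mod)   # order-independent, since (g^a)^b = g^(a*b)
    return z

z = accumulate(g, members, n)      # published accumulator value
w = accumulate(g, [3, 11], n)      # witness for member 7: everyone else
assert pow(w, 7, n) == z           # member 7 proves membership
```

Because verification needs only `w`, `z`, and the member's own value, no centralized server has to vouch for each peer, which matches the decentralization goal stated in the abstract.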
On Bayesian Oracle Properties
When model uncertainty is handled by Bayesian model averaging (BMA) or
Bayesian model selection (BMS), the posterior distribution possesses a
desirable "oracle property" for parametric inference, if for large enough data
it is nearly as good as the oracle posterior, obtained by assuming
unrealistically that the true model is known and only the true model is used.
We study the oracle properties in a very general context of quasi-posterior,
which can accommodate non-regular models with cubic root asymptotics and
partial identification. Our approach for proving the oracle properties is based
on a unified treatment that bounds the posterior probability of model
mis-selection. This theoretical framework can be of interest to Bayesian
statisticians who would like to theoretically justify their new model selection
or model averaging methods in addition to empirical results. Furthermore, for
non-regular models, we obtain nontrivial conclusions on the choice of prior
penalty on model complexity, the temperature parameter of the quasi-posterior,
and the advantage of BMA over BMS.
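The BMA-versus-BMS distinction can be illustrated with the standard BIC approximation to posterior model probabilities (a common textbook device, not the quasi-posterior framework of the paper). The two linear models, sample size, and noise level below are illustrative.

```python
import numpy as np

# Approximate posterior model probabilities via exp(-BIC/2), then compare
# BMA (average predictions over models) with BMS (commit to the best model).
# Data are simulated from model 2 (intercept + slope).
rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + 0.8 * rng.standard_normal(n)

def bic_and_fit(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    return len(y) * np.log(sigma2) + X.shape[1] * np.log(len(y)), X @ beta

X1 = np.ones((n, 1))                      # model 1: intercept only
X2 = np.column_stack([np.ones(n), x])     # model 2: intercept + slope (true)
b1, f1 = bic_and_fit(y, X1)
b2, f2 = bic_and_fit(y, X2)

bics = np.array([b1, b2])
w = np.exp(-0.5 * (bics - bics.min()))    # exp(-BIC/2) ~ marginal likelihood
w /= w.sum()                              # approximate P(model | data)
bma_fit = w[0] * f1 + w[1] * f2           # BMA: weighted average of models
bms_fit = f2 if b2 < b1 else f1           # BMS: single best model
```

An oracle property in this setting says that, with enough data, the weight on the true model tends to one, so both `bma_fit` and `bms_fit` behave like the oracle fit that knew model 2 in advance.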
Multivariate Granger Causality and Generalized Variance
Granger causality analysis is a popular method for inference on directed
interactions in complex systems of many variables. A shortcoming of the
standard framework for Granger causality is that it only allows for examination
of interactions between single (univariate) variables within a system, perhaps
conditioned on other variables. However, interactions do not necessarily take
place between single variables, but may occur among groups, or "ensembles", of
variables. In this study we establish a principled framework for Granger
causality in the context of causal interactions among two or more multivariate
sets of variables. Building on Geweke's seminal 1982 work, we offer new
justifications for one particular form of multivariate Granger causality based
on the generalized variances of residual errors. Taken together, our results
support a comprehensive and theoretically consistent extension of Granger
causality to the multivariate case. Treated individually, they highlight
several specific advantages of the generalized variance measure, which we
illustrate using applications in neuroscience as an example. We further show
how the measure can be used to define "partial" Granger causality in the
multivariate context and we also motivate reformulations of "causal density"
and "Granger autonomy". Our results are directly applicable to experimental
data and promise to reveal new types of functional relations in complex
systems, neural and otherwise.
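The generalized-variance measure discussed above can be sketched directly: fit a restricted VAR using only the target ensemble's past, a full VAR that adds the source ensemble's past, and compare log-determinants of the residual covariances. The lag order, VAR coefficients, and simulated data below are illustrative, following Geweke's construction in spirit only.

```python
import numpy as np

def mv_granger(X, Y, p=1):
    """Geweke-style multivariate Granger causality X -> Y via the
    generalized variance (determinant of the residual covariance):
    F = ln det(Sigma_restricted) - ln det(Sigma_full) >= 0."""
    T = Y.shape[0]
    yt = Y[p:]                                   # targets to predict
    past = lambda Z: np.hstack([Z[p - k - 1:T - k - 1] for k in range(p)])
    def resid_cov(design):
        design = np.column_stack([np.ones(len(design)), design])
        B, *_ = np.linalg.lstsq(design, yt, rcond=None)
        E = yt - design @ B
        return E.T @ E / len(E)
    S_r = resid_cov(past(Y))                        # past of Y only
    S_f = resid_cov(np.hstack([past(Y), past(X)]))  # past of Y and X
    return float(np.log(np.linalg.det(S_r) / np.linalg.det(S_f)))

# Two-variable ensembles: X drives Y with a one-step lag, not vice versa.
rng = np.random.default_rng(2)
T = 2000
X = rng.standard_normal((T, 2))
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = 0.4 * Y[t - 1] + 0.5 * X[t - 1] + 0.3 * rng.standard_normal(2)

f_xy = mv_granger(X, Y)   # large: X's past improves prediction of Y
f_yx = mv_granger(Y, X)   # near zero: no causality in reverse
```

Using the determinant rather than, say, the trace of the residual covariance is precisely the "generalized variance" choice the paper justifies for ensemble-level causality.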
Security and Efficiency Analysis of the Hamming Distance Computation Protocol Based on Oblivious Transfer
Bringer et al. proposed two cryptographic protocols for the computation of Hamming distance. Their first scheme uses Oblivious Transfer and provides security in the semi-honest model. The other scheme uses Committed Oblivious Transfer and is claimed to provide full security in the malicious case. The proposed protocols have direct implications for biometric authentication schemes between a prover and a verifier where the verifier holds the users' biometric data in plain form.
In this paper, we show that their protocol is not in fact fully secure against malicious adversaries. More precisely, our attack breaks the soundness property of their protocol: a malicious user can compute a Hamming distance that differs from the actual value. For biometric authentication systems, this attack allows a malicious adversary to pass authentication without knowledge of the honest user's input with at most complexity instead of , where is the input length. We propose an enhanced version of their protocol in which this attack is eliminated. The security of the modified protocol is proven using the simulation-based paradigm. Furthermore, regarding efficiency, the modified protocol utilizes Verifiable Oblivious Transfer, which does not require commitments to outputs and thus improves efficiency significantly.
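For reference, the plaintext functionality that both protocols compute obliviously is just the Hamming distance; a one-line version (bit strings packed into integers, XOR plus popcount) is sketched below. The cryptographic machinery (oblivious transfer, commitments) is deliberately omitted.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length bit strings packed into
    ints: XOR marks the differing positions, popcount counts them."""
    return bin(a ^ b).count("1")

x = 0b1011001   # e.g. a stored biometric template
y = 0b1001101   # e.g. a fresh reading from the prover
d = hamming(x, y)   # the bits differ at two positions, so d == 2
```

Soundness, the property the attack breaks, means the verifier is guaranteed that the value produced by the protocol really equals this `d` for the parties' actual inputs.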
Sensitivity Analysis for Mirror-Stratifiable Convex Functions
This paper provides a set of sensitivity analysis and activity identification
results for a class of convex functions with a strong geometric structure, that
we coined "mirror-stratifiable". These functions are such that there is a
bijection between a primal and a dual stratification of the space into
partitioning sets, called strata. This pairing is crucial to track the strata
that are identifiable by solutions of parametrized optimization problems or by
iterates of optimization algorithms. This class of functions encompasses all
regularizers routinely used in signal and image processing, machine learning,
and statistics. We show that this "mirror-stratifiable" structure enjoys a nice
sensitivity theory, allowing us to study stability of solutions of optimization
problems to small perturbations, as well as activity identification of
first-order proximal splitting-type algorithms. Existing results in the
literature typically assume that, under a non-degeneracy condition, the active
set associated to a minimizer is stable to small perturbations and is
identified in finite time by optimization schemes. In contrast, our results do
not require any non-degeneracy assumption: in consequence, the optimal active
set is not necessarily stable anymore, but we are able to track precisely the
set of identifiable strata. We show that these results have crucial implications
when solving challenging ill-posed inverse problems via regularization, a
typical scenario where the non-degeneracy condition is not fulfilled. Our
theoretical results, illustrated by numerical simulations, allow us to
characterize the instability behaviour of the regularized solutions, by
locating the set of all low-dimensional strata that can be potentially
identified by these solutions
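The instability of the active set at degenerate points can already be seen with the l1 norm, which is mirror-stratifiable: the support of the soft-thresholding operator (the prox of the l1 norm) jumps under an arbitrarily small perturbation when an entry sits exactly at the threshold. The numbers below are a toy illustration, not the paper's experiments.

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t*||.||_1; the support of the output is the "active stratum"
    # in the stratification induced by the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

t = 1.0
v = np.array([2.0, 1.0, 0.2])    # middle entry sits exactly at the threshold
s0 = soft_threshold(v, t)                               # support {0}
s1 = soft_threshold(v + np.array([0.0, 1e-6, 0.0]), t)  # support {0, 1}
# A perturbation of size 1e-6 changes the active set: at this degenerate
# point the support is unstable, which is exactly the regime where the
# paper tracks the enclosing set of identifiable strata instead.
```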