When Does Relay Transmission Give a More Secure Connection in Wireless Ad Hoc Networks?
Relay transmission can enhance coverage and throughput, but it can also be
vulnerable to eavesdropping attacks because the relay retransmits the source
message. Whether or not relay transmission should be used for secure
communication is therefore an interesting and important problem.
In this paper, we consider the transmission of a confidential message from a
source to a destination in a decentralized wireless network in the presence of
randomly distributed eavesdroppers. The source-destination pair can be
potentially assisted by randomly distributed relays. For an arbitrary relay,
we derive exact expressions for the secure connection probability with both
colluding and non-colluding eavesdroppers. We further obtain lower-bound
expressions for the secure connection probability that are accurate when the
eavesdropper density is small. Utilizing these lower bounds, we propose a
relay selection strategy to improve the secure connection probability. By
analytically comparing the secure connection probability for direct
transmission and relay transmission, we address the important problem of
whether or not to relay and discuss the condition for relay transmission in
terms of the relay density and source-destination distance. These analytical
results are accurate in the small eavesdropper density regime. Comment:
Accepted for publication in IEEE Transactions on Information Forensics and
Security
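As a toy illustration of the quantity being analyzed, under a simple path-loss-only model (an assumption here, not the paper's exact system model) a direct link is secure when no eavesdropper lies closer to the source than the destination; for non-colluding eavesdroppers forming a Poisson point process of density lambda, this probability is exp(-lambda * pi * d^2). A minimal Monte Carlo sketch:

```python
import math
import numpy as np

def secure_connection_prob(d_sd, lam, trials=20000, window=20.0, seed=1):
    """Monte Carlo estimate of the secure connection probability of a
    direct link under a path-loss-only model: the link is secure iff no
    eavesdropper lies closer to the source than the destination.
    Eavesdroppers form a Poisson point process of density `lam` on a
    square window centered at the source; the window must cover the
    disk of radius `d_sd` around the source."""
    rng = np.random.default_rng(seed)
    area = window * window
    secure = 0
    for _ in range(trials):
        n = rng.poisson(lam * area)
        if n == 0:
            secure += 1
            continue
        # eavesdropper positions, with the source at the origin
        xy = rng.uniform(-window / 2, window / 2, size=(n, 2))
        if np.hypot(xy[:, 0], xy[:, 1]).min() > d_sd:
            secure += 1
    return secure / trials
```

For small densities the estimate tracks the closed form exp(-lambda * pi * d^2) closely, which is the regime where the paper's lower-bound expressions are stated to be accurate.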
Tight Upper Bounds for Streett and Parity Complementation
Complementation of finite automata on infinite words is not only a
fundamental problem in automata theory, but also serves as a cornerstone for
solving numerous decision problems in mathematical logic, model-checking,
program analysis and verification. For Streett complementation, a significant
gap exists between the current lower bound $2^{\Omega(n \lg nk)}$ and upper
bound $2^{O(nk \lg nk)}$, where $n$ is the state size, $k$ is the number of
Streett pairs, and $k$ can be as large as $2^n$. Determining the complexity
of Streett complementation has been an open question since the late '80s. In
this paper we show a complementation construction with upper bound
$2^{O(n \lg n + nk \lg k)}$ for $k = O(n)$ and $2^{O(n^2 \lg n)}$ for
$k = \omega(n)$, which matches well the lower bound obtained in \cite{CZ11a}.
We also obtain a tight upper bound $2^{O(n \lg n)}$ for parity
complementation. Comment: Corrected typos. 23 pages, 3 figures. To appear in
the 20th Conference on Computer Science Logic (CSL 2011)
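Taking the upper bounds at face value, the previous bound 2^{O(nk lg nk)} versus the improved 2^{O(n lg n + nk lg k)} for k = O(n), and dropping the constants hidden inside the O(.) (a simplification), the size of the gap can be illustrated numerically:

```python
import math

def old_exponent(n, k):
    """Exponent of the previous upper bound 2^{O(nk lg nk)}, constants dropped."""
    return n * k * math.log2(n * k)

def new_exponent(n, k):
    """Exponent of the improved bound 2^{O(n lg n + nk lg k)} for k = O(n),
    constants dropped."""
    return n * math.log2(n) + n * k * math.log2(k)

# e.g. old_exponent(64, 8) == 4608.0 while new_exponent(64, 8) == 1920.0
```

Even at these modest sizes the exponent shrinks by more than half, and the separation widens as n and k grow.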
Visualization Techniques for Tongue Analysis in Traditional Chinese Medicine
Visual inspection of the tongue has long been an important diagnostic method in Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various visceral cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but current practice in TCM is mainly experience-based, and the quality of the inspection varies between individuals. Computerized inspection methods provide quantitative models to evaluate color, texture and surface features of the tongue. In this paper, we investigate visualization techniques and processes that allow interactive data analysis, with the aim of merging computerized measurements with human experts' diagnostic variables based on a five-class diagnostic scale: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P) and Colon Cancer (C).
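As a sketch of the kind of quantitative color measurement such a computerized inspection might compute, the following extracts per-channel HSV statistics over an image region; the array format and the omission of tongue segmentation are simplifying assumptions, not the paper's actual pipeline:

```python
import colorsys
import numpy as np

def color_features(rgb_image):
    """Simple quantitative color statistics (mean and standard deviation
    per HSV channel) over a tongue image region.  `rgb_image` is an
    (H, W, 3) float array with values in [0, 1]; in practice a
    segmentation mask would restrict the statistics to the tongue body,
    which is omitted here for brevity."""
    flat = rgb_image.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    return hsv.mean(axis=0), hsv.std(axis=0)
```

Feature vectors like these could then be compared across the diagnostic classes, which is the kind of measurement-to-diagnosis merging the visualization techniques are meant to support.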
Learning Multi-item Auctions with (or without) Samples
We provide algorithms that learn simple auctions whose revenue is
approximately optimal in multi-item multi-bidder settings, for a wide range of
valuations including unit-demand, additive, constrained additive, XOS, and
subadditive. We obtain our learning results in two settings. The first is the
commonly studied setting where sample access to the bidders' distributions over
valuations is given, for both regular distributions and arbitrary distributions
with bounded support. Our algorithms require polynomially many samples in the
number of items and bidders. The second is a more general max-min learning
setting that we introduce, where we are given "approximate distributions," and
we seek to compute an auction whose revenue is approximately optimal
simultaneously for all "true distributions" that are close to the given ones.
These results are more general in that they imply the sample-based results, and
are also applicable in settings where we have no sample access to the
underlying distributions but have estimated them indirectly via market research
or by observation of previously run, potentially non-truthful auctions.
Our results hold for valuation distributions satisfying the standard (and
necessary) independence-across-items property. They also generalize and improve
upon recent works, which have provided algorithms that learn approximately
optimal auctions in more restricted settings with additive, subadditive and
unit-demand valuations using sample access to distributions. We generalize
these results to the complete unit-demand, additive, and XOS setting, to i.i.d.
subadditive bidders, and to the max-min setting.
Our results are enabled by new uniform convergence bounds for hypotheses
classes under product measures. Our bounds result in exponential savings in
sample complexity compared to bounds derived by bounding the VC dimension, and
are of independent interest. Comment: Appears in FOCS 201
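For intuition, the simplest instance of sample-based auction learning is choosing a posted price for one item and one bidder from value samples; the sketch below maximizes empirical revenue over the sampled values (a toy reduction; the paper's multi-item multi-bidder setting is far more general):

```python
import numpy as np

def empirical_reserve(samples):
    """Return the posted price maximizing empirical revenue on the value
    samples: price p earns p whenever the sampled value is at least p.
    A toy single-bidder, single-item instance of learning an auction
    from samples."""
    vals = np.sort(np.asarray(samples, dtype=float))
    n = len(vals)
    # revenue of posting price vals[i] is vals[i] * (#samples >= vals[i]) / n
    revenues = vals * (n - np.arange(n)) / n
    return vals[int(np.argmax(revenues))]
```

For values drawn uniformly from [0, 1] the expected revenue p(1 - p) of price p peaks at p = 1/2, and with enough samples the empirical maximizer lands near that optimum, which is the flavor of guarantee the sample-complexity bounds formalize.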
KBGAN: Adversarial Learning for Knowledge Graph Embeddings
We introduce KBGAN, an adversarial learning framework to improve the
performance of a wide range of existing knowledge graph embedding models.
Because knowledge graphs typically only contain positive facts, sampling useful
negative training examples is a non-trivial task. Replacing the head or tail
entity of a fact with a uniformly randomly selected entity is a conventional
method for generating negative facts, but the majority of the generated
negative facts can be easily discriminated from positive facts, and will
contribute little towards the training. Inspired by generative adversarial
networks (GANs), we use one knowledge graph embedding model as a negative
sample generator to assist the training of our desired model, which acts as the
discriminator in GANs. This framework is independent of the concrete form of
generator and discriminator, and therefore can utilize a wide variety of
knowledge graph embedding models as its building blocks. In experiments, we
adversarially train two translation-based models, TransE and TransD, each with
assistance from one of the two probability-based models, DistMult and ComplEx.
We evaluate KBGAN on the link prediction task using three knowledge base
completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show
that adversarial training substantially improves the performance of target
embedding models under various settings. Comment: To appear at NAACL HLT 201
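A minimal sketch of the generator's negative-sampling step, assuming softmax sampling over toy candidate scores (in KBGAN proper the generator scores candidate corruptions with a full embedding model such as DistMult or ComplEx):

```python
import numpy as np

def sample_negatives(scores, rng, size=1):
    """Generator step of a KBGAN-style scheme: instead of replacing the
    head or tail entity uniformly at random, sample candidate negative
    entities with probability proportional to the softmax of the
    generator's scores, so hard-to-discriminate negatives are proposed
    more often.  `scores` is a 1-D array of generator scores, one per
    candidate corruption (toy values in the test below)."""
    z = np.exp(scores - scores.max())  # numerically stable softmax
    probs = z / z.sum()
    idx = rng.choice(len(scores), size=size, p=probs)
    return idx, probs
```

The sampled indices would then be fed as negative facts to the discriminator (e.g. TransE or TransD), whose loss gradient trains it while a policy-gradient-style reward trains the generator.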
Standard sirens and dark sector with Gaussian process
Gravitational waves from compact binary systems can serve as standard
sirens to probe the evolution of the universe. This paper summarizes the
potential of gravitational waves to constrain the cosmological parameters
and the dark sector interaction within the Gaussian process methodology.
After briefly introducing the method of reconstructing the dark sector
interaction with a Gaussian process, we outline the concept of standard
sirens and the analysis of reconstructing the dark sector interaction with
LISA. Furthermore, we estimate how well gravitational waves observed by ET
can constrain the cosmological parameters. The numerical methods we use are
the Gaussian process and Markov chain Monte Carlo. Finally, we forecast the
improvement in the ability to constrain the cosmological parameters with ET
and LISA combined with \textit{Planck}. Comment: 10 pages, 8 figures,
prepared for the proceedings of the
International Conference on Gravitation: Joint Conference of ICGAC-XIII and
IK1
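A minimal Gaussian-process regression of the kind used for such reconstructions fits in a few lines; the squared-exponential kernel, its hyperparameters, and the mock flat-LCDM H(z) data below are illustrative assumptions, not the proceedings' actual setup:

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=0.5, amp=100.0, noise=5.0):
    """Posterior mean of a zero-mean Gaussian process with a
    squared-exponential (RBF) kernel, given noisy training data."""
    def k(a, b):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# Mock expansion-history data H(z) from a fiducial flat LCDM model (assumption):
# H(z) = H0 * sqrt(Omega_m (1+z)^3 + Omega_Lambda) with H0 = 70, Omega_m = 0.3.
z = np.linspace(0.1, 1.5, 15)
H = 70.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)
H_rec = gp_predict(z, H, np.array([0.5, 1.0]))  # reconstructed H at z = 0.5, 1.0
```

In a siren analysis the training data would instead be H(z) or distance estimates inferred from gravitational-wave events, and the reconstructed curve would feed the dark-sector-interaction reconstruction.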
A systematic study of magnetic field in Relativistic Heavy-ion Collisions in the RHIC and LHC energy regions
The features of the magnetic field in relativistic heavy-ion collisions are
systematically studied in this paper using a modified magnetic-field model.
The field distributions at the central point are studied in the RHIC and LHC
energy regions. We also predict the features of the magnetic field at LHC
energies $\sqrt{s_{NN}} = 900$, 2760 and 7000 GeV based on a detailed study
at RHIC energies $\sqrt{s_{NN}} = 62.4$, 130 and 200 GeV. The dependences of
the field features on collision energy, centrality and collision time are
systematically investigated. Comment: 8 pages, 7 figures
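For scale, a back-of-envelope estimate of the peak field (a crude point-charge picture, not the modified field model used in the paper) treats the two nuclei as charges Ze passing at impact parameter b with Lorentz factor gamma = sqrt(s_NN) / (2 m_N):

```python
HBARC = 0.1973      # hbar*c in GeV*fm
ALPHA = 1 / 137.036 # fine-structure constant
M_NUCLEON = 0.938   # GeV
M_PION = 0.1396     # GeV

def peak_eB(sqrt_snn, Z=79, b=7.0):
    """Back-of-envelope peak magnetic field eB (in units of m_pi^2) at the
    collision center: two point charges Ze passing at impact parameter b
    (in fm) with gamma = sqrt(s_NN) / (2 m_N).  Z = 79 and b = 7 fm are
    illustrative Au+Au values, not the paper's parameters."""
    gamma = sqrt_snn / (2 * M_NUCLEON)
    eB = 2 * Z * ALPHA * gamma * (HBARC / b) ** 2  # GeV^2
    return eB / M_PION ** 2

# peak_eB(200.0) is about 5 m_pi^2, consistent with common RHIC estimates;
# the estimate scales linearly with sqrt(s_NN), hence the interest in LHC energies.
```

The full model in the paper adds the time, centrality and energy dependence that this static snapshot cannot capture.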