Approximate Analytical Solutions to the Initial Data Problem of Black Hole Binary Systems
We present approximate analytical solutions to the Hamiltonian and momentum
constraint equations, corresponding to systems composed of two black holes with
arbitrary linear and angular momentum. The analytical nature of these initial
data solutions makes them easier to implement in numerical evolutions than the
traditional numerical approach of solving the elliptic equations derived from
the Einstein constraints. Although in general the problem of setting up initial
conditions for black hole binary simulations is complicated by the presence of
singularities, we show that the methods presented in this work provide initial
data with norms of constraint-equation violation
falling below those of the truncation error (residual error due to
discretization) present in finite difference codes for the range of grid
resolutions currently used. Thus, these data sets are suitable for use in
evolution codes. Detailed results are presented for the case of a head-on
collision of two equal-mass black holes, each of mass M and specific angular momentum 0.5M,
at an initial separation of 10M. A straightforward superposition method yields
data adequate at coarser grid resolutions, and an "attenuated" superposition
yields data usable at still finer resolutions. In addition, the
attenuated approximate data may be more tractable in a full (computational)
exact solution to the initial value problem.
Comment: 6 pages, 5 postscript figures. Minor changes and some points
clarified. Accepted for publication in Phys. Rev.
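The analytic-superposition idea can be illustrated with the classic Brill-Lindquist construction for momentum-free punctures, where the superposed conformal factor solves the Hamiltonian constraint exactly. This is a minimal sketch in that simpler zero-momentum setting (function names are mine), not the attenuated-superposition data of the paper, which handles arbitrary momenta:

```python
import numpy as np

def brill_lindquist_psi(point, punctures):
    """Brill-Lindquist conformal factor psi = 1 + sum_i m_i / (2 r_i).

    For time-symmetric (zero-momentum) punctures this superposition solves
    the Hamiltonian constraint exactly; with nonzero momenta, as in the
    paper, the analogous superposition is only approximate and must be
    attenuated near the punctures.
    """
    x, y, z = point
    psi = 1.0
    for m, (cx, cy, cz) in punctures:
        r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
        psi += m / (2.0 * r)
    return psi

# Two unit-mass punctures separated by 10 M along the z-axis, mirroring
# the head-on configuration described in the abstract.
punctures = [(1.0, (0.0, 0.0, -5.0)), (1.0, (0.0, 0.0, 5.0))]
psi_mid = brill_lindquist_psi((0.0, 0.0, 0.0), punctures)  # 1 + 0.1 + 0.1
```

Far from both punctures the factor approaches 1 + M_ADM/(2r), so the total ADM mass of the data is just the sum of the puncture masses.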
On the design of an ECOC-compliant genetic algorithm
Genetic Algorithms (GA) have previously been applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes into account the properties of the ECOC matrix, and as a result the considered search space is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This strategy redefines the usual crossover and mutation operators to take into account the theoretical properties of the ECOC framework, reducing the search space and allowing the algorithm to converge faster. In addition, a novel operator that is able to enlarge the code in a smart way is introduced. The methodology is tested on several UCI datasets and four challenging computer vision problems. An analysis of the results in terms of performance, code length, and number of Support Vectors shows that the optimization process is able to find very efficient codes in terms of the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification results show that the novel proposal obtains similar or even better results while defining a more compact set of dichotomies and Support Vectors than state-of-the-art approaches.
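The matrix properties that such genetic operators are designed to preserve can be made concrete. The sketch below checks a ternary ECOC coding matrix against standard structural constraints (the function name and exact rule set are my own reading of the usual ECOC requirements, not the paper's operators): every dichotomizer must split the classes into non-empty positive and negative groups, no two columns may be identical or complementary, and all class codewords must be distinct.

```python
import numpy as np

def is_valid_ecoc(M):
    """Structural checks for a ternary ECOC coding matrix M with entries
    in {-1, 0, +1}: rows are class codewords, columns are dichotomizers."""
    M = np.asarray(M)
    # Each dichotomizer needs at least one positive and one negative class.
    for col in M.T:
        if not ((col == 1).any() and (col == -1).any()):
            return False
    # Identical or complementary columns train the same binary problem.
    cols = [tuple(int(v) for v in c) for c in M.T]
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if cols[i] == cols[j] or cols[i] == tuple(-v for v in cols[j]):
                return False
    # Duplicate rows would make two classes indistinguishable.
    return len({tuple(int(v) for v in r) for r in M}) == len(M)

one_vs_all = np.where(np.eye(4, dtype=int) == 1, 1, -1)  # a valid design
bad = [[1, 1], [1, 1], [-1, -1]]                         # duplicate rows
```

Crossover and mutation operators that only ever produce matrices passing these checks never waste fitness evaluations on degenerate codes, which is one way the search space shrinks.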
Comparison of code rate and transmit diversity in MIMO systems
A thesis submitted in fulfilment of the requirements
for the degree of Master of Science in the Centre of Excellence in Telecommunications and Software, School of Electrical and Information Engineering, March 2016.
In order to compare low-rate error-correcting codes to MIMO schemes with transmit
diversity, two systems with the same throughput are compared. A VBLAST MIMO
system with (15, 5) Reed-Solomon coding is compared to an Alamouti MIMO system
with (15, 10) Reed-Solomon coding. The latter is found to perform significantly better,
indicating that transmit diversity is a more effective technique for minimising errors than
reducing the code rate. The Guruswami-Sudan/Koetter-Vardy soft-decision decoding
algorithm was implemented to allow decoding beyond the conventional error-correcting
bound of RS codes, and VBLAST was adapted to provide reliability information.
Analysis is also performed to find the optimal code rate when using various MIMO
systems.
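The throughput matching behind this comparison is simple arithmetic. Assuming a two-transmit-antenna setup (my assumption for the stream counts), VBLAST multiplexes two independent streams while Alamouti's rate-1 space-time block code sends one symbol per channel use, so the two Reed-Solomon code rates compensate exactly:

```python
from fractions import Fraction

def throughput(spatial_streams, n, k):
    """Information symbols per channel use: the spatial multiplexing
    gain times the (n, k) Reed-Solomon code rate k/n."""
    return spatial_streams * Fraction(k, n)

# VBLAST multiplexes 2 streams; Alamouti is a rate-1 STBC (1 symbol/use).
vblast   = throughput(2, 15, 5)    # RS(15, 5),  rate 1/3
alamouti = throughput(1, 15, 10)   # RS(15, 10), rate 2/3
```

Both systems therefore carry 2/3 of an information symbol per channel use, so any performance gap is attributable to how the redundancy is spent (transmit diversity versus a lower code rate) rather than to unequal throughput.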
A Hypercontractive Inequality for Matrix-Valued Functions with Applications to Quantum Computing and LDCs
The Bonami-Beckner hypercontractive inequality is a powerful tool in Fourier
analysis of real-valued functions on the Boolean cube. In this paper we present
a version of this inequality for matrix-valued functions on the Boolean cube.
Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also
present a number of applications. First, we analyze maps that encode
n classical bits into m qubits, in such a way that each set of k bits can be
recovered with some probability by an appropriate measurement on the quantum
encoding; we show that if m is sufficiently small relative to n, then the
success probability is exponentially small in k. This result may be viewed as
a direct product
version of Nayak's quantum random access code bound. It in turn implies strong
direct product theorems for the one-way quantum communication complexity of
Disjointness and other problems. Second, we prove that error-correcting codes
that are locally decodable with 2 queries require length exponential in the
length of the encoded string. This gives what is arguably the first
"non-quantum" proof of a result originally derived by Kerenidis and de Wolf
using quantum information theory, and answers a question by Trevisan.
Comment: This is the full version of a paper that will appear in the
proceedings of the IEEE FOCS 08 conference.
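The exponential length that the lower bound establishes is matched by the Hadamard code, the standard 2-query locally decodable code: an n-bit string is encoded into all 2^n parities, and any single bit can be recovered from just two codeword positions. A minimal sketch (helper names are mine):

```python
import random

def parity(x, a):
    # Inner product <x, a> over GF(2): parity of the bits shared by x and a.
    return bin(x & a).count("1") % 2

def hadamard_encode(x, n):
    # The codeword lists <x, a> for every a in {0,1}^n: length 2^n, i.e.
    # exponential in n, which the lower bound shows is unavoidable.
    return [parity(x, a) for a in range(2 ** n)]

def decode_bit(codeword, n, i, rng):
    """Recover x_i from two queries: <x, a> XOR <x, a ^ e_i> = <x, e_i> = x_i.
    If a delta-fraction of the codeword were corrupted, each of the two
    (individually uniform) queries hits a bad position with probability at
    most delta, so decoding still succeeds with probability >= 1 - 2*delta."""
    a = rng.randrange(2 ** n)
    return codeword[a] ^ codeword[a ^ (1 << i)]

rng = random.Random(0)
x, n = 0b10110, 5
cw = hadamard_encode(x, n)
recovered = [decode_bit(cw, n, i, rng) for i in range(n)]
```

Here linearity over GF(2) is what makes two queries suffice; the lower bound says no 2-query scheme can beat this exponential length by more than constant factors in the exponent.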
Quantum cryptography: key distribution and beyond
Uniquely among the sciences, quantum cryptography has driven both
foundational research and practical real-life applications. We review
the progress of quantum cryptography in the last decade, covering quantum key
distribution and other applications.
Comment: It's a review of quantum cryptography and is not restricted to QKD.
Experimental Quantum Fingerprinting
Quantum communication holds the promise of creating disruptive technologies
that will play an essential role in future communication networks. For example,
the study of quantum communication complexity has shown that quantum
communication allows exponential reductions in the information that must be
transmitted to solve distributed computational tasks. Recently, protocols that
realize this advantage using optical implementations have been proposed. Here
we report a proof-of-concept experimental demonstration of a quantum
fingerprinting system that is capable of transmitting less information than the
best known classical protocol. Our implementation is based on a modified
version of a commercial quantum key distribution system using off-the-shelf
optical components over telecom wavelengths, and is practical for messages as
large as 100 Mbits, even in the presence of experimental imperfections. Our
results provide a first step in the development of experimental quantum
communication complexity.
Comment: 11 pages, 6 figures.
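For contrast, the classical baseline that quantum fingerprinting improves on can be sketched as Rabin-style fingerprinting: each party transmits only its message reduced modulo a prime, and unequal messages collide only if the prime divides their difference. This is an illustrative classical sketch (the fixed prime and names are mine), not the paper's quantum protocol:

```python
def fingerprint(message: bytes, prime: int) -> int:
    """Rabin-style fingerprint: the message, read as a big integer, reduced
    mod a prime. Equal messages always agree; distinct messages disagree
    unless the prime divides their difference, which is unlikely when the
    prime is drawn at random from a sufficiently large set."""
    return int.from_bytes(message, "big") % prime

# A real protocol draws the prime at random; fixed here for brevity.
P = 1_000_003
fp_a = fingerprint(b"hello world", P)
fp_b = fingerprint(b"hello world", P)
fp_c = fingerprint(b"hello_world", P)
```

The fingerprints are only about as long as the prime, far shorter than the messages; the quantum protocol in the paper pushes the transmitted information below even the best classical schemes of this kind.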