Cohomology in Grothendieck Topologies and Lower Bounds in Boolean Complexity
This paper is motivated by questions such as P vs. NP and other questions in
Boolean complexity theory. We describe an approach to attacking such questions
with cohomology, and we show that using Grothendieck topologies and other ideas
from the Grothendieck school gives new hope for such an attack.
We focus on circuit depth complexity, and consider only finite topological
spaces or Grothendieck topologies based on finite categories; as such, we do
not use algebraic geometry or manifolds.
Given two sheaves on a Grothendieck topology, their "cohomological
complexity" is the sum of the dimensions of their Ext groups. We seek to model
the depth complexity of Boolean functions by the cohomological complexity of
sheaves on a Grothendieck topology. We propose that the logical AND of two
Boolean functions will have its corresponding cohomological complexity bounded
in terms of those of the two functions using "virtual zero extensions." We
propose that the logical negation of a function will have its corresponding
cohomological complexity equal to that of the original function using duality
theory. We explain these approaches and show that they are stable under
pullbacks and base change. It is the subject of ongoing work to achieve AND and
negation bounds simultaneously in a way that yields an interesting depth lower
bound.
Comment: 70 pages, abstract corrected and modified
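The "cohomological complexity" defined above can be written out explicitly; a sketch in LaTeX, assuming the Ext groups are taken over a fixed coefficient field k so that the dimensions in the sum are well defined:

```latex
% cohomological complexity of a pair of sheaves F, G on a Grothendieck topology:
% the sum of the dimensions of their Ext groups
c(\mathcal{F}, \mathcal{G}) \;=\; \sum_{i \ge 0} \dim_k \operatorname{Ext}^i(\mathcal{F}, \mathcal{G})
```

Since the paper works over finite categories, only finitely many Ext groups are nonzero and the sum is finite.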
Computing with Noise - Phase Transitions in Boolean Formulas
Computing circuits composed of noisy logical gates and their ability to
represent arbitrary Boolean functions with a given level of error are
investigated within a statistical mechanics setting. Bounds on their
performance, derived in the information theory literature for specific gates,
are straightforwardly retrieved, generalized and identified as the
corresponding typical-case phase transitions. This framework paves the way for
obtaining new results on error-rates, function-depth and sensitivity, and their
dependence on the gate-type and noise model used.
Comment: 10 pages, 2 figures
Wavelet based stereo images reconstruction using depth images
It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that gives the distance from the camera to a point on the object as a function of the image coordinates. Using this depth information and the original image, it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their positions in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than the two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission; this makes it possible to reuse existing transmission channels for 3D TV. The technique can also be applied to other 3D technologies, such as multimedia systems.
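The reprojection step described above can be illustrated with a minimal sketch, hypothetical and not taken from the paper, assuming rectified parallel cameras so that reprojection reduces to a horizontal disparity shift proportional to focal length times baseline over depth:

```python
import numpy as np

def render_virtual_view(image, depth, baseline, focal):
    """Warp a monoscopic image to a nearby virtual viewpoint (DIBR sketch).

    Assumes rectified parallel cameras: each pixel moves horizontally by a
    disparity of focal * baseline / depth, i.e. nearer points shift more.
    """
    h, w = depth.shape
    virtual = np.zeros_like(image)
    disparity = np.round(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]  # position in the virtual view plane
            if 0 <= xv < w:
                virtual[y, xv] = image[y, x]
    return virtual
```

Disoccluded pixels remain as holes (zeros here); filling them plausibly is exactly the reconstruction problem the paper's wavelet-domain scheme addresses.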
In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. The motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field.
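A minimal sketch of the kind of decomposition applied to the luminance and depth images, using a single-level 2-D Haar transform chosen here purely for illustration (the paper does not specify this particular wavelet):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform.

    Returns the approximation band LL and the detail bands (LH, HL, HH);
    the detail bands carry the geometric features (edges) that guide the
    reconstruction of the virtual view.
    """
    a = img.astype(float)
    # transform rows: averages and differences of adjacent pixel pairs
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # transform columns of each half-band
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)
```

On a smooth region the detail bands vanish, so large detail coefficients localize exactly the depth discontinuities where virtual-view reconstruction is hardest.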
The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate advantages of the proposed approach with respect to the state-of-the-art methods, in terms of both objective and subjective performance measures
On the robustness of random Boolean formulae
Random Boolean formulae, generated by a growth process of noisy logical gates, are analyzed using the generating functional methodology of statistical physics. We study the type of functions generated for different input distributions, their robustness for a given level of gate error, and the dependence of this robustness on the formula depth and complexity and on the gates used. Bounds on their performance, derived in the information theory literature for specific gates, are straightforwardly retrieved, generalized and identified as the corresponding typical-case phase transitions. Results for error rates, function depth and sensitivity of the generated functions are obtained for various gate-type and noise models.
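The growth process can be illustrated with a toy recursion, hypothetical and far simpler than the paper's generating-functional analysis: for balanced formulae of eps-noisy NAND gates with i.i.d. inputs, tracking only the probability p that a layer outputs 1 gives a one-dimensional map whose fixed points hint at the typical-case phase transitions.

```python
def noisy_nand_level(p, eps):
    """One layer of eps-noisy NAND gates on i.i.d. inputs with P(output=1)=p.

    A noiseless NAND outputs 1 unless both inputs are 1 (probability 1 - p**2);
    the gate then flips its output independently with probability eps.
    """
    q = 1.0 - p * p  # noiseless NAND output distribution
    return (1.0 - eps) * q + eps * (1.0 - q)

def iterate(p0, eps, depth):
    """Output distribution of a depth-`depth` balanced noisy-NAND formula."""
    p = p0
    for _ in range(depth):
        p = noisy_nand_level(p, eps)
    return p
```

At eps = 1/2 the output is uniformly random after a single layer regardless of the input distribution, the fully noise-dominated phase; at eps = 0 the iterates from p = 0 alternate between 0 and 1, so depth parity matters. The interesting transitions sit between these extremes.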
Theories for TC0 and Other Small Complexity Classes
We present a general method for introducing finitely axiomatizable "minimal"
two-sorted theories for various subclasses of P (problems solvable in
polynomial time). The two sorts are natural numbers and finite sets of natural
numbers. The latter are essentially the finite binary strings, which provide a
natural domain for defining the functions and sets in small complexity classes.
We concentrate on the complexity class TC^0, whose problems are defined by
uniform polynomial-size families of bounded-depth Boolean circuits with
majority gates. We present an elegant theory VTC^0 in which the provably-total
functions are those associated with TC^0, and then prove that VTC^0 is
"isomorphic" to a different-looking single-sorted theory introduced by
Johannsen and Pollet. The most technical part of the isomorphism proof is
defining binary number multiplication in terms of a bit-counting function, and
showing how to formalize the proofs of its algebraic properties.
Comment: 40 pages, Logical Methods in Computer Science
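The bit-counting function underlying the isomorphism proof is simple to state; a sketch, with the names `numones` and `at_least_k_ones` chosen here for illustration rather than taken from the paper. The TC^0 point is that each threshold comparison is exactly one majority-style gate, so the whole counting function sits inside the class:

```python
def numones(i, x_bits):
    """Number of 1-bits among the first i bits of the string X.

    Definable in a theory like VTC^0 because each question
    "are there at least k ones among the first i bits?" is a single
    threshold (majority-type) gate, and polynomially many of them
    pin down the count within bounded depth.
    """
    return sum(x_bits[:i])

def at_least_k_ones(k, i, x_bits):
    """The threshold predicate realized by one majority-type gate."""
    return numones(i, x_bits) >= k
```

Binary multiplication then reduces to summing many shifted copies of a string, and counting carries is again a bit-counting problem, which is why this single function carries the technical weight of the proof.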
On Forging SPHINCS-Haraka Signatures on a Fault-Tolerant Quantum Computer
SPHINCS is a state-of-the-art hash-based signature scheme whose security is based on either SHA-256, SHAKE-256, or the Haraka hash function. In this work, we perform an in-depth analysis of how the hash functions are embedded into SPHINCS and how quantum pre-image resistance impacts the security of the signature scheme. Subsequently, we evaluate the cost of implementing Grover's quantum search algorithm to find a pre-image that admits a universal forgery.
In particular, we provide quantum implementations of the Haraka and SHAKE-256 hash functions in Q# and consider the efficiency of attacks in the context of fault-tolerant quantum computers. We restrict our findings to SPHINCS-128 due to the limited security margin of Haraka. Nevertheless, we present an attack that performs better, to the best of our knowledge, than previously published attacks.
We can forge a SPHINCS-128-Haraka signature in about surface code cycles and physical qubits, translating to about logical-qubit-cycles. For SHAKE-256, the same attack requires qubits and cycles, resulting in about logical-qubit-cycles.
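The scale of such an attack follows from Grover's quadratic speed-up; a back-of-the-envelope sketch, assuming the standard estimate of roughly (pi/4) * 2**(n/2) oracle queries for an n-bit pre-image search (this is the generic formula, not a figure from the paper):

```python
import math

def grover_queries(n_bits):
    """Approximate number of Grover iterations to find a pre-image
    in an unstructured space of 2**n_bits candidates."""
    return (math.pi / 4.0) * math.sqrt(2.0 ** n_bits)
```

For a 128-bit security target this is on the order of 2**64 queries, each a full reversible evaluation of the hash circuit, which is why the cost is naturally expressed in surface code cycles and logical-qubit-cycles on a fault-tolerant machine.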