Circuit Complexity of Visual Search
We study the computational hardness of feature and conjunction search through the
lens of circuit complexity. Let x_1, ..., x_n (resp., y_1, ..., y_n) be Boolean variables, each of which takes the value one if and only if a
neuron at place i detects a feature (resp., another feature). We then formulate
feature and conjunction search as the Boolean functions FEA_n(x) = x_1 ∨ ... ∨ x_n and CONJ_n(x, y) = (x_1 ∧ y_1) ∨ ... ∨ (x_n ∧ y_n), respectively. We employ a threshold circuit or a discretized
circuit (such as a sigmoid circuit or a ReLU circuit with discretization) as
our models of neural networks, and consider the following four computational
resources: [i] the number of neurons (size), [ii] the number of levels (depth),
[iii] the number of active neurons outputting non-zero values (energy), and
[iv] synaptic weight resolution (weight).
We first prove that any threshold circuit C of size s, depth d, energy e
and weight w satisfies log rk(M_C) ≤ e d (log s + log w + log n),
where rk(M_C) is the rank of the communication matrix M_C of the
2n-variable Boolean function that C computes. Since CONJ_n has rank
2^n − 1, we have n − 1 ≤ e d (log s + log w + log n). Thus, an exponential
lower bound on the size of even sublinear-depth threshold circuits exists if
the energy and weight are sufficiently small. Since FEA_n is computable
with size, depth, energy and weight all independent of n, our result suggests that the computational capacities required for
feature and conjunction search differ. We also show that the inequality
is tight up to a constant factor in a suitable range of the parameters. We next show that a
similar inequality holds for any discretized circuit. Thus, if we regard the
number of gates outputting non-zero values as a measure of sparse activity,
our results suggest that greater depth helps neural networks to acquire sparse
activity.
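The rank of the conjunction-search matrix can be checked directly for small n (a standalone sketch; the function names are mine): the matrix M with M[x][y] = 1 iff some place i has x_i = y_i = 1 is the complement of the full-rank disjointness matrix and has an all-zero row at x = 0, so its rank over the rationals is 2^n − 1.

```python
from fractions import Fraction
from itertools import product

def conj_matrix(n):
    """Communication matrix of conjunction search: rows indexed by x,
    columns by y, entry 1 iff some place i has both features."""
    pts = list(product([0, 1], repeat=n))
    return [[int(any(a and b for a, b in zip(x, y))) for y in pts] for x in pts]

def matrix_rank(rows):
    """Exact rank over the rationals via Gauss-Jordan elimination."""
    M = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

for n in range(1, 5):
    print(n, matrix_rank(conj_matrix(n)))   # rank is 2**n - 1
```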
Lower Bounds for DeMorgan Circuits of Bounded Negation Width
We consider Boolean circuits over {or, and, neg} with negations applied only to input variables. To measure the "amount of negation" in such circuits, we introduce the concept of their "negation width". In particular, a circuit computing a monotone Boolean function f(x_1,...,x_n) has negation width w if no nonzero term produced (purely syntactically) by the circuit contains more than w distinct negated variables. Circuits of negation width w=0 are equivalent to monotone Boolean circuits, while those of negation width w=n have no restrictions. Our motivation is that already circuits of moderate negation width w=n^{epsilon} for an arbitrarily small constant epsilon>0 can be even exponentially stronger than monotone circuits.
We show that the size of any circuit of negation width w computing f is roughly at least the minimum size of a monotone circuit computing f divided by K=min{w^m,m^w}, where m is the maximum length of a prime implicant of f. We also show that the depth of any circuit of negation width w computing f is roughly at least the minimum depth of a monotone circuit computing f minus log K. Finally, we show that formulas of bounded negation width can be balanced to achieve a logarithmic (in their size) depth without increasing their negation width.
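The syntactic definition of negation width can be made concrete with a tiny interpreter (a minimal sketch; the circuit encoding and names are my own): expand the circuit into the terms it produces purely syntactically, discard zero terms (those containing a variable both negated and unnegated), and take the maximum number of distinct negated variables in any surviving term.

```python
def terms(gate):
    """All terms the circuit produces purely syntactically, as sets of literals."""
    op = gate[0]
    if op == 'lit':                               # ('lit', var, negated?)
        return {frozenset([(gate[1], gate[2])])}
    left, right = terms(gate[1]), terms(gate[2])
    if op == 'or':
        return left | right
    return {a | b for a in left for b in right}   # 'and'

def negation_width(gate):
    """Max number of distinct negated variables in any nonzero produced term."""
    width = 0
    for t in terms(gate):
        pos = {v for v, isneg in t if not isneg}
        negs = {v for v, isneg in t if isneg}
        if pos & negs:
            continue          # contains both x and not-x: a zero term
        width = max(width, len(negs))
    return width

# x OR (not-x AND y) computes the monotone function x OR y with width 1
f = ('or', ('lit', 'x', False), ('and', ('lit', 'x', True), ('lit', 'y', False)))
print(negation_width(f))   # 1
```

A monotone circuit such as ('and', ('lit','x',False), ('lit','y',False)) has width 0, matching the w=0 case in the abstract.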
The power of negations in cryptography
The study of monotonicity and negation complexity for Boolean functions has been prevalent in complexity theory as well as in computational learning theory, but little attention has been given to it in the cryptographic context. Recently, Goldreich and Izsak (2012) have initiated a study of whether cryptographic primitives can be monotone, and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot.
In this paper, we start by filling in the picture and proving that many other basic cryptographic primitives cannot be monotone. We then initiate a quantitative study of the power of negations, asking how many negations are required. We provide several lower bounds, some of them tight, for various cryptographic primitives and building blocks including one-way permutations, pseudorandom functions, small-bias generators, hard-core predicates, error-correcting codes, and randomness extractors. Among our results, we highlight the following.
Unlike one-way functions, one-way permutations cannot be monotone.
We prove that pseudorandom functions require log n − O(1) negations (which is optimal up to the additive term).
We prove that error-correcting codes with optimal distance parameters require log n − O(1) negations (again, optimal up to the additive term).
We prove a general result for monotone functions, showing a lower bound on the depth of any circuit with t negations on the bottom that computes a monotone function f in terms of the monotone circuit depth of f. This result addresses a question posed by Koroth and Sarma (2014) in the context of the circuit complexity of the Clique problem.
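For context on such negation counts, Markov's classical theorem states that the minimum number of negations needed to compute f is ⌈log2(d(f)+1)⌉, where d(f) is the maximum number of 1→0 drops of f along an increasing chain in {0,1}^n. A brute-force check for small n (a standalone sketch; names are mine):

```python
from math import ceil, log2

def max_decrease(f, n):
    """d(f): max number of 1->0 drops of f along an increasing chain in {0,1}^n,
    computed by dynamic programming over the hypercube in popcount order."""
    points = sorted(range(1 << n), key=lambda x: bin(x).count('1'))
    dec = {}
    for x in points:
        best = 0
        for i in range(n):
            if x >> i & 1:
                y = x ^ (1 << i)        # immediate predecessor (one bit removed)
                step = 1 if f(y) == 1 and f(x) == 0 else 0
                best = max(best, dec[y] + step)
        dec[x] = best
    return max(dec.values())

parity = lambda x: bin(x).count('1') % 2
d = max_decrease(parity, 4)
print(d, ceil(log2(d + 1)))   # parity on 4 bits: d = 2, so 2 negations suffice
```

A monotone function has d(f) = 0 and hence needs no negations, consistent with the w = 0 baseline.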
Design of multimedia processor based on metric computation
Media-processing applications, such as signal processing, 2D and 3D graphics
rendering, and image compression, are the dominant workloads in many embedded
systems today. The real-time constraints of these media applications place
taxing demands on today's processors, which must deliver performance at low
cost, with low power consumption and reduced design time. To meet these
challenges, a fast and efficient strategy consists of upgrading a low-cost
general-purpose processor core. This approach is based on the customization of
a general RISC processor core according to the requirements of the target
multimedia application. Thus, if the extra cost is justified, the
general-purpose processor (GPP) core can be reinforced with instruction-level
coprocessors, coarse-grain dedicated hardware, ad hoc memories or new GPP
cores. In this way the final design solution is tailored to the application
requirements. The proposed approach is based on three main steps: the first is
the analysis of the targeted application using efficient metrics; the second is
the selection of the appropriate architecture template according to the results
and recommendations of the first step; the third is the architecture
generation. This approach has been tested on various image and video
algorithms, demonstrating its feasibility.
Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity
In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become functionally specific to some degree only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation-selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable under unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific connectivity and a surplus of bidirectional connections emerge.
Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.
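The core mechanism can be caricatured in a rate model (a toy sketch, not the paper's spiking network; all parameters are illustrative): orientation-tuned responses driven by random stimuli, combined with a normalized Hebbian rule, make weights between similarly tuned neurons grow, turning unspecific initial connectivity into feature-specific connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40                                           # toy excitatory population
pref = np.linspace(0, np.pi, N, endpoint=False)  # preferred orientations

W = rng.uniform(0, 0.1, (N, N))                  # unspecific initial coupling
np.fill_diagonal(W, 0)

eta, n_stimuli = 0.01, 500
for _ in range(n_stimuli):
    theta = rng.uniform(0, np.pi)                # oriented stimulus
    r = np.exp(np.cos(2 * (pref - theta)))       # tuned rates (von Mises-like)
    W += eta * np.outer(r, r)                    # Hebb: fire together, wire together
    W *= 1.0 / W.sum(axis=1, keepdims=True)      # normalization keeps weights bounded

# similarly tuned pairs should end up more strongly connected
diff = np.cos(2 * (pref[:, None] - pref[None, :]))
mask = ~np.eye(N, dtype=bool)
corr = np.corrcoef(W[mask], diff[mask])[0, 1]
print(round(corr, 2))   # positive: connectivity became feature specific
```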
High Speed Event Camera Tracking
Event cameras are bioinspired sensors with reaction times in the order of
microseconds. This property makes them appealing for use in highly-dynamic
computer vision applications. In this work, we explore the limits of this
sensing technology and present an ultra-fast tracking algorithm able to
estimate six-degree-of-freedom motion with dynamics over 25.8 g, at a
throughput of 10 kHz, processing over a million events per second. Our method is
capable of tracking either camera motion or the motion of an object in front of
it, using an error-state Kalman filter formulated in a Lie-theoretic sense. The
method includes a robust mechanism for the matching of events with projected
line segments with very fast outlier rejection. Meticulous treatment of sparse
matrices is applied to achieve real-time performance. Different motion models
of varying complexity are considered for the sake of comparison and performance
analysis.
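The event-to-segment association step can be illustrated with plain geometry (a minimal sketch, not the paper's implementation; the gating threshold and all names are assumptions): each event is matched to its nearest projected line segment, and events beyond a distance gate are rejected as outliers.

```python
import math

def point_segment_distance(p, a, b):
    """Perpendicular distance from event location p to segment a-b (pixels)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                    # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def match_events(events, segments, gate=2.0):
    """Associate each event with its nearest projected segment;
    reject as an outlier anything farther than the gating threshold."""
    matches, outliers = [], []
    for p in events:
        dists = [point_segment_distance(p, a, b) for a, b in segments]
        j = min(range(len(dists)), key=dists.__getitem__)
        (matches if dists[j] <= gate else outliers).append((p, j))
    return matches, outliers

segs = [((0, 0), (10, 0)), ((0, 0), (0, 10))]
events = [(5, 0.5), (0.3, 7), (8, 8)]            # the last one is noise
m, o = match_events(events, segs)
print(len(m), len(o))   # 2 1
```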
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
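The de-facto standard formulation referenced above is maximum a posteriori estimation over a factor graph, typically solved as nonlinear least squares. A toy 1D pose-graph sketch (all measurements and names are illustrative): noisy odometry accumulates drift, and a loop-closure constraint pulls the estimate back.

```python
def solve(A, b):
    """Gauss-Jordan elimination for the small normal-equation system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# 1D pose graph: x0 anchored at 0; each edge (i, j, z) has residual x_j - x_i - z
edges = [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1),   # noisy odometry (true step: 1.0)
         (0, 3, 3.0)]                              # loop closure
n = 3                                              # unknowns x1, x2, x3
A = [[0.0] * n for _ in range(n)]
b = [0.0] * n
for i, j, z in edges:
    # assemble J^T J and J^T z; Jacobian row: +1 on x_j, -1 on x_i (x0 fixed)
    for idx, s in ((j - 1, 1.0), (i - 1, -1.0)):
        if idx < 0:
            continue
        for idx2, s2 in ((j - 1, 1.0), (i - 1, -1.0)):
            if idx2 >= 0:
                A[idx][idx2] += s * s2
        b[idx] += s * z
x = solve(A, b)
print([round(v, 3) for v in x])  # loop closure pulls odometry drift toward 3.0
```

Real SLAM backends solve the same normal equations at scale, exploiting the sparsity of the factor graph and iterating to handle nonlinear (e.g., rotational) state.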