Near-optimal small-depth lower bounds for small distance connectivity
We show that any depth-$d$ circuit for determining whether an $n$-node graph
has an $s$-to-$t$ path of length at most $k$ must have size
$n^{\Omega(k^{1/d}/d)}$. The previous best circuit size lower bounds for this
problem were $n^{k^{\exp(-O(d))}}$ (due to Beame, Impagliazzo, and Pitassi
[BIP98]) and $n^{\Omega((\log k)/d)}$ (following from a recent formula size
lower bound of Rossman [Ros14]). Our lower bound is quite close to optimal,
since a simple construction gives depth-$d$ circuits of size $n^{O(k^{2/d})}$
for this problem (and strengthening our bound even to $n^{\Omega(k^{1/d})}$
would require proving that undirected connectivity is not in $\mathsf{NC}^1$).
Our proof is by reduction to a new lower bound on the size of small-depth
circuits computing a skewed variant of the "Sipser functions" that have played
an important role in classical circuit lower bounds [Sip83, Yao85, H{\aa}s86].
A key ingredient in our proof of the required lower bound for these Sipser-like
functions is the use of \emph{random projections}, an extension of random
restrictions which were recently employed in [RST15]. Random projections allow
us to obtain sharper quantitative bounds while employing simpler arguments,
both conceptually and technically, than in the previous works [Ajt89, BPU92,
BIP98, Ros14].
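The near-matching upper bound cited in this abstract comes from a standard divide-and-conquer construction; the sketch below is our reconstruction of where the $n^{O(k^{2/d})}$ size comes from, not necessarily the authors' exact argument. Writing $\mathrm{PATH}_{\le \ell}(u,v)$ for "there is a $u$-to-$v$ path of length at most $\ell$" and guessing $m-1$ intermediate vertices:

```latex
\mathrm{PATH}_{\le \ell}(u,v) \;=\;
  \bigvee_{w_1,\ldots,w_{m-1}\in V}\;
  \bigwedge_{i=1}^{m} \mathrm{PATH}_{\le \ell/m}(w_{i-1}, w_i),
\qquad w_0 = u,\; w_m = v.
```

Each application contributes two circuit layers (an OR of fan-in $n^{m-1}$ over ANDs of fan-in $m$) and divides the remaining path length by $m$, so $d/2$ rounds suffice once $m^{d/2} \ge k$. Taking $m = k^{2/d}$ gives depth $d$, and since there are at most $n^2$ distinct subproblems per level, each contributing at most $n^{m-1}$ gates, the total size is $n^{O(k^{2/d})}$.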
Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures
There exists a widely recognized need to better understand
and manage complex “systems of systems,” ranging from
biology, ecology, and medicine to network-centric technologies.
This is motivating the search for universal laws of highly evolved
systems and driving demand for new mathematics and methods
that are consistent, integrative, and predictive. However, the theoretical
frameworks available today are not merely fragmented
but sometimes contradictory and incompatible. We argue that
complexity arises in highly evolved biological and technological
systems primarily to provide mechanisms to create robustness.
However, this complexity itself can be a source of new fragility,
leading to “robust yet fragile” tradeoffs in system design. We
focus on the role of robustness and architecture in networked
infrastructures, and we highlight recent advances in the theory
of distributed control driven by network technologies. This view
of complexity in highly organized technological and biological systems
is fundamentally different from the dominant perspective in
the mainstream sciences, which downplays function, constraints,
and tradeoffs, and tends to minimize the role of organization and
design.
Digital implementation of the cellular sensor-computers
Two different kinds of cellular sensor-processor architectures are used nowadays in various
applications. The first is the traditional sensor-processor architecture, where the sensor and the
processor arrays are mapped one-to-one onto each other. The second is the foveal architecture, in which
a small active fovea navigates within a large sensor array. This second architecture is introduced
and compared here. Both of these architectures can be implemented with analog and digital
processor arrays. The efficiency of the different implementation types is analyzed as a
function of the CMOS technology used. It turns out that the finer the technology, the more
advantageous a digital implementation becomes compared to an analog one.
Quantum Transpiler Optimization: On the Development, Implementation, and Use of a Quantum Research Testbed
Quantum computing research is at the cusp of a paradigm shift. As the complexity of quantum systems increases, so does the complexity of research procedures for creating and testing layers of the quantum software stack. However, the tools used to perform these tasks have not experienced the increase in capability required to effectively handle the development burdens involved. This case is made particularly clear in the context of IBM QX Transpiler optimization algorithms and functions. IBM QX systems use the Qiskit library to create, transform, and execute quantum circuits. As coherence times and hardware qubit counts increase and qubit topologies become more complex, so does the orchestration of qubit mapping and qubit state movement across these topologies. The transpiler framework used to create and test improved algorithms has not kept pace. A testbed is proposed to provide abstractions to create and test transpiler routines. The development process is analyzed and implemented, from design principles through requirements analysis and verification testing. Additionally, limitations of existing transpiler algorithms are identified, and initial results are provided that suggest more effective algorithms for qubit mapping and state movement.
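To make the qubit-mapping and state-movement problem in this abstract concrete, here is a deliberately naive SWAP-insertion routine in plain Python. It is a toy sketch of the routing task transpilers solve, not one of Qiskit's actual passes (such as SABRE); all function and variable names are invented for illustration.

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """BFS shortest path between two physical qubits on the coupling graph."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in coupling[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path = [dst]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def route(gates, coupling, layout):
    """Insert SWAPs so every two-qubit gate acts on adjacent physical qubits.

    gates:    list of logical CNOTs as (control, target) pairs
    coupling: physical adjacency, e.g. {0: [1], 1: [0, 2], ...}
    layout:   initial logical -> physical mapping (total over all qubits)
    Returns the physical-level gate list of ("swap", p, q) / ("cx", p, q).
    """
    phys = dict(layout)                       # logical -> physical
    inv = {p: l for l, p in phys.items()}     # physical -> logical
    out = []
    for a, b in gates:
        path = shortest_path(coupling, phys[a], phys[b])
        # Walk qubit a toward b, swapping one coupling edge at a time.
        for i in range(len(path) - 2):
            u, v = path[i], path[i + 1]
            out.append(("swap", u, v))
            lu, lv = inv[u], inv[v]
            inv[u], inv[v] = lv, lu
            phys[lu], phys[lv] = v, u
        out.append(("cx", phys[a], phys[b]))
    return out
```

On a 4-qubit line 0-1-2-3 with the identity layout, a CNOT between logical qubits 0 and 3 routes to two SWAPs followed by a nearest-neighbor CNOT. A real transpiler pass would additionally minimize SWAP count globally and track the evolving layout across many gates, which is exactly the optimization space the proposed testbed targets.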
Silicon compilation
Silicon compilation is a term used for many different purposes. In this paper we define silicon compilation as a mapping from some higher-level description into layout. We identify the basic issues in structural and behavioral silicon compilation and some possible solutions to those issues. Finally, we introduce the concept of an intelligent silicon compiler, in which the compiler evaluates the quality of the generated design and attempts to improve it if it is not satisfactory.
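As a concrete, deliberately minimal illustration of the paper's definition, mapping a higher-level description into layout, the sketch below takes a structural gate-level netlist and assigns each cell grid coordinates. This is an invented toy, not the paper's system; the netlist format and function name are assumptions for illustration.

```python
def place(netlist, row_width):
    """Toy structural 'silicon compiler': map a gate-level netlist onto a
    fixed-width grid, placing each cell only after the cells driving it.

    netlist: gate name -> list of input signal names (primary inputs are
             names that do not appear as netlist keys)
    Returns gate name -> (row, column).
    """
    placed, order = set(), []

    def visit(gate):
        if gate in placed:
            return
        placed.add(gate)
        for src in netlist.get(gate, []):
            if src in netlist:           # recurse only on driving gates
                visit(src)
        order.append(gate)               # gate lands after its drivers

    for gate in netlist:
        visit(gate)
    # Fill the grid row by row in dependency order.
    return {g: (i // row_width, i % row_width) for i, g in enumerate(order)}
```

An intelligent silicon compiler in the paper's sense would then evaluate this layout (area, wire length) and iterate on it; here that evaluation step is omitted.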