
    Near-optimal small-depth lower bounds for small distance connectivity

    We show that any depth-$d$ circuit for determining whether an $n$-node graph has an $s$-to-$t$ path of length at most $k$ must have size $n^{\Omega(k^{1/d}/d)}$. The previous best circuit size lower bounds for this problem were $n^{k^{\exp(-O(d))}}$ (due to Beame, Impagliazzo, and Pitassi [BIP98]) and $n^{\Omega((\log k)/d)}$ (following from a recent formula size lower bound of Rossman [Ros14]). Our lower bound is quite close to optimal, since a simple construction gives depth-$d$ circuits of size $n^{O(k^{2/d})}$ for this problem (and strengthening our bound even to $n^{k^{\Omega(1/d)}}$ would require proving that undirected connectivity is not in $\mathsf{NC}^1$). Our proof is by reduction to a new lower bound on the size of small-depth circuits computing a skewed variant of the "Sipser functions" that have played an important role in classical circuit lower bounds [Sip83, Yao85, Hås86]. A key ingredient in our proof of the required lower bound for these Sipser-like functions is the use of \emph{random projections}, an extension of random restrictions that was recently employed in [RST15]. Random projections allow us to obtain sharper quantitative bounds while employing simpler arguments, both conceptually and technically, than in the previous works [Ajt89, BPU92, BIP98, Ros14].
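    For quick reference, the bounds stated in this abstract line up as follows (a restatement in the abstract's own notation, not a new result):

        \begin{align*}
        \text{this work (lower bound):} &\quad n^{\Omega(k^{1/d}/d)} \\
        \text{[BIP98] (lower bound):} &\quad n^{k^{\exp(-O(d))}} \\
        \text{[Ros14] (lower bound):} &\quad n^{\Omega((\log k)/d)} \\
        \text{simple construction (upper bound):} &\quad n^{O(k^{2/d})}
        \end{align*}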

    Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures

    There exists a widely recognized need to better understand and manage complex “systems of systems,” ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms for creating robustness. Yet this complexity can itself be a source of new fragility, leading to “robust yet fragile” tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.

    Digital implementation of the cellular sensor-computers

    Two different kinds of cellular sensor-processor architectures are used nowadays in various applications. The first is the traditional sensor-processor architecture, where the sensor and processor arrays are mapped onto each other. The second is the foveal architecture, in which a small active fovea navigates within a large sensor array. This second architecture is introduced and compared here. Both of these architectures can be implemented with analog or digital processor arrays. The efficiency of the different implementation types, depending on the CMOS technology used, is analyzed. It turns out that the finer the technology, the stronger the case for a digital implementation rather than an analog one.
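    The difference between the two architectures can be made concrete with a short sketch. The array sizes, the 4-neighbour averaging step, and all names below are illustrative assumptions, not details from the paper:

        import numpy as np

        FOVEA = 8  # side length of the small active window (assumption)

        def full_array_step(sensor):
            # Traditional architecture: the processor array is mapped
            # one-to-one onto the sensor array, so a single step
            # updates every pixel (here: a 4-neighbour average).
            return (np.roll(sensor, 1, 0) + np.roll(sensor, -1, 0) +
                    np.roll(sensor, 1, 1) + np.roll(sensor, -1, 1)) / 4.0

        def foveal_step(sensor, top, left):
            # Foveal architecture: only a small FOVEA x FOVEA window,
            # navigated to position (top, left) inside the large
            # sensor array, is processed in each step.
            window = sensor[top:top + FOVEA, left:left + FOVEA]
            return (np.roll(window, 1, 0) + np.roll(window, -1, 0) +
                    np.roll(window, 1, 1) + np.roll(window, -1, 1)) / 4.0

    The tradeoff the abstract analyzes is which of these mappings a given CMOS technology favors when the processor array is realized in analog versus digital form.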

    Quantum Transpiler Optimization: On the Development, Implementation, and Use of a Quantum Research Testbed

    Quantum computing research is at the cusp of a paradigm shift. As the complexity of quantum systems increases, so does the complexity of the research procedures for creating and testing layers of the quantum software stack. However, the tools used to perform these tasks have not seen the corresponding increase in capability required to handle the resulting development burden. This is particularly clear in the context of IBM QX Transpiler optimization algorithms and functions. IBM QX systems use the Qiskit library to create, transform, and execute quantum circuits. As coherence times and hardware qubit counts increase and qubit topologies become more complex, so does the orchestration of qubit mapping and qubit state movement across these topologies. The transpiler framework used to create and test improved algorithms has not kept pace. A testbed is proposed that provides abstractions for creating and testing transpiler routines. The development process is analyzed and implemented, from design principles through requirements analysis and verification testing. Additionally, limitations of existing transpiler algorithms are identified, and initial results are provided that suggest more effective algorithms for qubit mapping and state movement.
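    For context, the qubit-mapping problem this abstract refers to is the one handled by Qiskit's transpile entry point. The following is a minimal sketch; the 3-qubit circuit, the linear coupling map, and the chosen optimization level are illustrative assumptions, not taken from the paper:

        # Minimal sketch of the Qiskit transpile workflow discussed above.
        # The circuit, topology, and optimization level are assumptions.
        from qiskit import QuantumCircuit, transpile
        from qiskit.transpiler import CouplingMap

        # A small GHZ-style circuit whose two-qubit gates must be
        # mapped onto the hardware topology.
        circuit = QuantumCircuit(3)
        circuit.h(0)
        circuit.cx(0, 1)
        circuit.cx(0, 2)

        # A linear topology 0 - 1 - 2: cx(0, 2) is not directly
        # executable, so the transpiler must insert SWAPs or remap
        # qubits (the "qubit mapping and state movement" problem).
        coupling = CouplingMap([(0, 1), (1, 2)])

        # Higher optimization levels apply more aggressive layout and
        # routing passes; comparing levels against a fixed topology is
        # one way a testbed can benchmark transpiler routines.
        mapped = transpile(circuit, coupling_map=coupling, optimization_level=2)
        print(mapped.draw())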