343 research outputs found

    Quantum Experiments and Graphs: Multiparty States as coherent superpositions of Perfect Matchings

    We show a surprising link between experimental setups for realizing high-dimensional multipartite quantum states and Graph Theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is #P-complete, and thus cannot be done efficiently. To strengthen the link further, theorems from Graph Theory -- such as Hall's marriage theorem -- are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph-theoretical methods, and potentially to simulate properties of Graphs and Networks with quantum experiments (such as critical exponents and phase transitions). Comment: 6+5 pages, 4+7 figures.
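    As an illustration of the correspondence described above, the following is a minimal sketch (not code from the paper) that enumerates the perfect matchings of a small, hypothetical undirected graph by brute force; each matching would correspond to one term of the post-selected superposition.

```python
def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings of an undirected graph (brute force).

    vertices: list of vertex labels (an even number of them is assumed).
    edges: set of frozensets {u, v} giving the undirected edges.
    Yields each matching as a tuple of edges covering every vertex exactly once.
    """
    if not vertices:
        yield ()
        return
    v, rest = vertices[0], vertices[1:]
    for u in rest:
        e = frozenset((v, u))
        if e in edges:
            remaining = [w for w in rest if w != u]
            for m in perfect_matchings(remaining, edges):
                yield (e,) + m

# Hypothetical 4-vertex cycle a-b-c-d-a: it has exactly two perfect matchings,
# so the corresponding post-selected state would be a superposition of two terms.
verts = ["a", "b", "c", "d"]
edgs = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]}
for matching in perfect_matchings(verts, edgs):
    print([tuple(sorted(e)) for e in matching])
```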

    Combinatorics

    [No abstract available]

    Quantum networking with optimised parametric down-conversion sources

    Quantum information processing exploits superposition and entanglement to enable tasks in computation, communication and sensing that are classically inconceivable. Photonics is a leading platform for quantum information processing owing to the relative ease with which quantum information can be encoded and manipulated, but photons themselves must exhibit a set of characteristics in order to be useful. The ideal photon source for building up multi-qubit states needs to produce indistinguishable photons with high efficiency. Indistinguishability is crucial for minimising errors in two-photon interference, central to building larger states, while high heralding rates will be needed to overcome unfavourable loss scaling. Domain engineering in parametric down-conversion sources negates the need for lossy spectral filtering, allowing these conditions to be satisfied inherently within the source design. This Thesis contains two experimental investigations. In the first, we present a telecom-wavelength parametric down-conversion photon source that operates at the achievable limit of domain engineering. The source is capable of generating photons from independent sources that achieve two-photon interference visibilities of up to 98.6 ± 1.1% without narrow-band filtering. As a consequence, we can reach net heralding efficiencies of 67.5%, corresponding to collection efficiencies exceeding 90%. These sources enable us to efficiently generate multi-photon graph states, which constitutes the second experimental investigation. Graph states, and their underlying formalism, have been shown to be a valuable resource in quantum information processing. The generation and distribution of a 6-photon graph state, which defines the topology of a quantum network, allows us to explore prospective issues with networks that invoke protocols beyond end-to-end primitives, where users only require local operations and projective measurements. In the case where multiple users wish to establish a common key for conference communication, our proof-of-principle experiment concludes that employing N-user key distribution methods over 2-user methods results in a key rate advantage of 2.13 ± 0.06.
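    As a back-of-envelope illustration of the heralding figures quoted above, the snippet below (not from the thesis) decomposes a net heralding efficiency into a collection efficiency times a hypothetical detector efficiency; the detector value is an assumption, not a number from the source.

```python
# Illustrative decomposition of heralding efficiency (numbers partly hypothetical).
# Rough assumption: eta_heralding ~ eta_collection * eta_detector, ignoring other losses.
eta_heralding = 0.675   # net heralding efficiency quoted in the abstract
eta_detector = 0.75     # hypothetical detector efficiency, not a value from the source
eta_collection = eta_heralding / eta_detector
print(f"implied collection efficiency ~ {eta_collection:.1%}")  # ~ 90.0%
```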

    Model-based Curvilinear Network Extraction and Tracking toward Quantitative Analysis of Biopolymer Networks

    Curvilinear biopolymer networks pervade living systems. They are routinely imaged by fluorescence microscopy to gain insight into their structural, mechanical, and dynamic properties. Image analysis can facilitate a quantitative understanding of the mechanisms of their formation and their biological functions. Due to the variability in network geometry, topology and dynamics, as well as the often low resolution and low signal-to-noise ratio of the images, segmenting and tracking networks in these images is challenging. In this dissertation, we propose a complete framework for extracting the geometry and topology of curvilinear biopolymer networks and tracking their dynamics from multi-dimensional images. The proposed multiple Stretching Open Active Contours (SOACs) can identify network centerlines and junctions and infer plausible network topology. Combined with a k-partite matching algorithm, temporal correspondences among all detected filaments can be established. This work enables statistical analysis of the structural parameters of biopolymer networks as well as their dynamics. Quantitative evaluation using simulated and experimental images demonstrates its effectiveness and efficiency. Moreover, a principled method of optimizing key parameters without ground truth is proposed for attaining the best extraction result for any type of image. The proposed methods are implemented in the usable open-source software SOAX. Besides network extraction and tracking, SOAX provides a user-friendly cross-platform GUI for interactive visualization, manual editing and quantitative analysis. Using SOAX to analyze several types of biopolymer networks demonstrates the potential of the proposed methods to help answer key questions in cell biology and biophysics from a quantitative viewpoint.
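    The temporal-correspondence step above can be pictured with a much simpler two-frame sketch (not the dissertation's k-partite algorithm): match filament centroids between consecutive frames with the Hungarian method and reject matches whose cost is too high. The coordinates and the max_cost threshold are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_filaments(prev, curr, max_cost=20.0):
    """Two-frame correspondence between filament centroids via the Hungarian method.

    prev: (n, 2) array of centroid coordinates in the previous frame.
    curr: (m, 2) array of centroid coordinates in the current frame.
    Returns (prev_index, curr_index) pairs whose matching cost is acceptable.
    """
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]

# Hypothetical centroids: two filaments persist, one new filament appears.
prev_frame = np.array([[10.0, 12.0], [40.0, 41.0]])
curr_frame = np.array([[41.0, 43.0], [11.0, 13.0], [90.0, 90.0]])
print(match_filaments(prev_frame, curr_frame))  # [(0, 1), (1, 0)]
```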

    New Algorithms for Maximum Disjoint Paths Based on Tree-Likeness

    We study the classical NP-hard problems of finding maximum-size subsets from given sets of k terminal pairs that can be routed via edge-disjoint paths (MaxEDP) or node-disjoint paths (MaxNDP) in a given graph. The approximability of MaxEDP/NDP is currently not well understood; the best known lower bound is Omega(log^{1/2 - epsilon} n), assuming NP is not contained in ZPTIME(n^{poly log n}). This constitutes a significant gap to the best known approximation upper bound of O(n^{1/2}) due to Chekuri et al. (2006), and closing this gap is currently one of the big open problems in approximation algorithms. In their seminal paper, Raghavan and Thompson (Combinatorica, 1987) introduce the technique of randomized rounding for LPs; their technique gives an O(1)-approximation when edges (or nodes) may be used by O(log n / log log n) paths. In this paper, we strengthen the above fundamental results. We provide new bounds formulated in terms of the feedback vertex set number r of a graph, which measures its vertex-deletion distance to a forest. In particular, we obtain the following.
    - For MaxEDP, we give an O(r^{1/2} log^{1.5} kr)-approximation algorithm. As r <= n, up to logarithmic factors, our result strengthens the best known ratio O(n^{1/2}) due to Chekuri et al.
    - Further, we show how to route Omega(OPT) pairs with congestion O(log(kr) / log log(kr)), strengthening the bound obtained by the classic approach of Raghavan and Thompson.
    - For MaxNDP, we give an algorithm that computes the optimal answer in time (k+r)^{O(r)} n. This is a substantial improvement on the run time of 2^k r^{O(r)} n, which can be obtained via an algorithm by Scheffler.
    We complement these positive results by proving that MaxEDP is NP-hard even for r = 1, and that MaxNDP is W[1]-hard for parameter r. This shows that neither problem is fixed-parameter tractable in r unless FPT = W[1], and that our approximability results are relevant even for very small constant values of r.
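    For intuition about what MaxEDP asks, here is a tiny exhaustive baseline (nothing like the approximation algorithm above, and exponential-time): enumerate subsets of terminal pairs and check whether each subset can be routed along pairwise edge-disjoint paths. The graph and terminal pairs are hypothetical.

```python
from itertools import combinations

def simple_paths(graph, s, t, banned):
    """Yield simple s-t paths as frozensets of edges, avoiding edges in `banned` (DFS)."""
    def dfs(u, visited, used):
        if u == t:
            yield frozenset(used)
            return
        for v in graph[u]:
            e = frozenset((u, v))
            if v not in visited and e not in banned:
                yield from dfs(v, visited | {v}, used + [e])
    yield from dfs(s, {s}, [])

def routable(graph, pairs):
    """True if all (s, t) pairs can be routed along pairwise edge-disjoint paths."""
    def solve(i, banned):
        if i == len(pairs):
            return True
        s, t = pairs[i]
        return any(solve(i + 1, banned | p) for p in simple_paths(graph, s, t, banned))
    return solve(0, frozenset())

def max_edp(graph, pairs):
    """Largest routable subset of terminal pairs (exhaustive search, small graphs only)."""
    for size in range(len(pairs), 0, -1):
        for subset in combinations(pairs, size):
            if routable(graph, list(subset)):
                return list(subset)
    return []

# Hypothetical instance: on the path a-b-c, the pairs (a, c) and (a, b) both need
# edge {a, b}, so only one of them can be routed.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(max_edp(g, [("a", "c"), ("a", "b")]))  # [('a', 'c')]
```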

    Object Association Across Multiple Moving Cameras In Planar Scenes

    In this dissertation, we address the problem of object detection and object association across multiple cameras over large areas that are well modeled by planes. We present a unifying probabilistic framework that captures the underlying geometry of planar scenes, and present algorithms to estimate geometric relationships between different cameras, which are subsequently used for co-operative association of objects. We first present a local object detection scheme that has three fundamental innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged, and it is asserted that useful correlation exists in the intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic scene behavior, nominal misalignments and motion due to parallax. By using a non-parametric density estimation method over a joint domain-range representation of image pixels, complex dependencies between the domain (location) and range (color) are directly modeled, and the background is represented as a single probability density. Second, temporal persistence is introduced as a detection criterion. Unlike previous approaches to object detection that detect objects by building adaptive models of the background, the foreground is modeled to augment the detection of objects (without explicit tracking), since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework, stressing spatial context as a condition for detecting interesting objects, and the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the method is performed and presented on a diverse set of data. We then address the problem of associating objects across multiple cameras in planar scenes. Since the cameras may be moving, there is a possibility of both spatial and temporal non-overlap in their fields of view. We first address the case where spatial and temporal overlap can be assumed. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras to test multiple correspondence hypotheses, without assuming any prior calibration information. Here, there are three contributions. First, we present a statistically and geometrically meaningful means of evaluating a hypothesized correspondence between multiple objects in multiple cameras. Second, since multiple cameras exist, ensuring coherency in association, i.e. that transitive closure is maintained between more than two cameras, is an essential requirement. To ensure such coherency, we pose the problem of associating objects across cameras as a k-dimensional matching and use an approximation to find the association. We show that, under appropriate conditions, re-entering objects can also be re-associated to their original labels. Third, we show that, as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models.
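    A rough, hypothetical sketch of the joint domain-range idea (not the dissertation's implementation): each pixel is a 5-D sample (x, y, r, g, b), the background is a kernel density over recent samples, and a pixel with low background likelihood is flagged as foreground. Bandwidths, sample values and the threshold below are illustrative assumptions.

```python
import numpy as np

def background_likelihood(sample, bg_samples, bandwidth):
    """Gaussian kernel density estimate of one pixel under the background model.

    sample: 1-D array (x, y, r, g, b), the joint domain (location) + range (color).
    bg_samples: (n, 5) array of background observations from earlier frames.
    bandwidth: per-dimension kernel widths, shape (5,).
    """
    diff = (bg_samples - sample) / bandwidth
    kernels = np.exp(-0.5 * np.sum(diff ** 2, axis=1))
    norm = np.prod(bandwidth) * (2.0 * np.pi) ** (len(bandwidth) / 2.0)
    return kernels.mean() / norm

# Hypothetical data: two background samples of a grey pixel near (10, 10); the test
# pixel has the same location but a much darker colour, so it looks like foreground.
bg = np.array([[10, 10, 200, 200, 200], [11, 10, 198, 201, 199]], dtype=float)
h = np.array([2.0, 2.0, 10.0, 10.0, 10.0])
pixel = np.array([10, 10, 60, 60, 60], dtype=float)
print(background_likelihood(pixel, bg, h) < 1e-9)  # True -> flag as foreground (threshold illustrative)
```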
    Finally, we present a unifying framework for object association across multiple cameras and for estimating inter-camera homographies between (spatially and temporally) overlapping and non-overlapping cameras, whether they are moving or stationary. By making use of explicit polynomial models for the kinematics of objects, we present algorithms to estimate inter-frame homographies. Under an appropriate measurement-noise model, an EM algorithm is applied for maximum-likelihood estimation of the inter-camera homographies and kinematic parameters. Rather than fitting curves locally (in each camera) and matching them across views, we present an approach that simultaneously refines the estimates of the inter-camera homographies and curve coefficients globally. We demonstrate the efficacy of the approach on a number of real sequences taken from aerial cameras, and report quantitative performance during simulations.
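    The geometric test behind such an association can be sketched far more simply than the EM formulation above: map one object's trajectory through a candidate inter-camera homography and score the hypothesis by the mean reprojection error against the other camera's track. The homography and tracks below are made-up values, not data from the dissertation.

```python
import numpy as np

def transfer(points, H):
    """Map 2-D points through a 3x3 homography using homogeneous coordinates."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def association_score(track_a, track_b, H):
    """Mean reprojection error of track A mapped into camera B's image plane.

    A small score supports the hypothesis that the two tracks are the same object.
    """
    return float(np.mean(np.linalg.norm(transfer(track_a, H) - track_b, axis=1)))

# Hypothetical example: the two cameras are related by a pure pixel translation (5, -3).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
track_a = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
track_b = track_a + np.array([5.0, -3.0])
print(association_score(track_a, track_b, H))  # ~0.0 -> consistent correspondence hypothesis
```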

    Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography

    The growth of data-driven technologies, 5G, and the Internet places enormous pressure on the underlying information infrastructure. There exist numerous proposals on how to deal with the possible capacity crunch. However, the security of both optical and wireless networks lags behind their reliable and spectrally efficient transmission. Significant achievements have been made recently in the quantum computing arena. Because most conventional cryptography systems rely on computational security, which guarantees security against an efficient eavesdropper only for a limited time, this security can be compromised as quantum computing advances. To solve these problems, various schemes providing perfect/unconditional security have been proposed, including physical-layer security (PLS), quantum key distribution (QKD), and post-quantum cryptography. Unfortunately, it is still not clear how to integrate these different proposals with higher-level cryptography schemes. The purpose of the Special Issue entitled “Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography” was therefore to integrate these various approaches and enable the next generation of cryptography systems whose security cannot be broken by quantum computers. This book is a reprint of the papers accepted for publication in the Special Issue.