
    Finding Optimal Flows Efficiently

    Among the models of quantum computation, the One-way Quantum Computer is one of the most promising proposals for physical realization, and it opens new perspectives for parallelization by taking advantage of quantum entanglement. Since a one-way quantum computation is based on quantum measurement, which is a fundamentally nondeterministic evolution, a sufficient condition for global determinism has been introduced: the existence of a causal flow in the graph that underlies the computation. An O(n^3) algorithm was known for finding such a causal flow when the numbers of output and input vertices in the graph are equal; otherwise, no polynomial-time algorithm was known for deciding whether a graph has a causal flow. Our main contribution is an O(n^2) algorithm for finding a causal flow, if one exists, whatever the numbers of input and output vertices. This answers the open question stated by Danos and Kashefi and by de Beaudrap. Moreover, we prove that our algorithm produces an optimal flow (a flow of minimal depth). Whereas the existence of a causal flow is a sufficient condition for determinism, it is not a necessary one. A weaker version of the causal flow, called gflow (generalized flow), has been introduced and proved to be a necessary and sufficient condition for a family of deterministic computations. Moreover, the depth of the quantum computation is upper bounded by the depth of the gflow. However, the existence of a polynomial-time algorithm that finds a gflow had been stated as an open question. In this paper we answer it positively, with a polynomial-time algorithm that outputs an optimal gflow of a given graph and thus finds an optimal correction strategy for the nondeterministic evolution due to measurements.

    Comment: 10 pages, 3 figures
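    To make the backward, layer-by-layer search concrete, here is a minimal Python sketch of a causal-flow finder in the spirit of the abstract. The adjacency-dict representation and all names are ours, the depth recorded here counts layers from the outputs, and the sketch omits the paper's O(n^2) bookkeeping; it only illustrates the core rule that a still-available corrector with exactly one uncorrected neighbour determines one more value of f.

        def find_causal_flow(adj, inputs, outputs):
            """Try to build a causal flow (f, depth) for the open graph
            (adj, inputs, outputs); return None if the search gets stuck.
            adj maps each vertex to the set of its neighbours."""
            vertices = set(adj)
            corrected = set(outputs)            # vertices already handled
            cand = set(outputs) - set(inputs)   # correctors still available
            f, depth, k = {}, {}, 1
            while corrected != vertices:
                newly, used = set(), set()
                for v in cand:
                    pending = adj[v] - corrected
                    if len(pending) == 1:
                        (u,) = pending
                        if u in newly:          # already claimed this round
                            continue
                        f[u], depth[u] = v, k   # v corrects u
                        newly.add(u)
                        used.add(v)
                if not newly:
                    return None                 # no causal flow exists
                corrected |= newly
                cand = (cand - used) | (newly - set(inputs))
                k += 1
            return f, depth

        # A path graph i - a - o with input {i} and output {o}:
        adj = {"i": {"a"}, "a": {"i", "o"}, "o": {"a"}}
        print(find_causal_flow(adj, {"i"}, {"o"}))
        # ({'a': 'o', 'i': 'a'}, {'a': 1, 'i': 2})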

    Streamlines for Motion Planning in Underwater Currents

    Motion planning for underwater vehicles must consider the effect of ocean currents. We present an efficient method to compute reachability and cost between sample points in sampling-based motion planning that supports long-range planning, over hundreds of kilometres, in complicated flows. The idea is to search a reduced space of control inputs consisting of stream functions whose level sets, or streamlines, optimally connect two given points. Such stream functions are generated by superimposing a control input onto the underlying current flow. A streamline represents the path that a vehicle would follow as it is carried along by the current under that control input. We provide rigorous analysis showing how our method avoids exhaustive search of the control space, and we demonstrate simulated examples in complicated flows, including a traversal between Sydney and Brisbane along the east coast of Australia using actual current predictions.

    Comment: 7 pages, 4 figures, accepted to IEEE ICRA 2019. Copyright 2019 IEEE.
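    A toy sketch of the superposition step may help. For a 2D incompressible flow with stream function psi (so u = dpsi/dy, v = -dpsi/dx), a uniform control velocity (u, v) contributes psi_c = u*y - v*x, and the vehicle's resulting path follows a level set of the combined stream function. The code below is our own simplification, not the paper's method: it brute-force scans headings at a fixed speed for the one that puts start and goal on the same streamline, whereas the paper searches an optimal family of such connecting streamlines.

        import numpy as np

        def control_psi(x, y, speed, theta):
            # Stream function of a uniform control velocity
            # u = speed*cos(theta), v = speed*sin(theta): psi = u*y - v*x.
            return speed * (np.cos(theta) * y - np.sin(theta) * x)

        def connecting_heading(current_psi, start, goal, speed, n=3600):
            """Scan headings and return the one for which start and goal
            lie (most nearly) on the same level set of the superimposed
            stream function, i.e. on one streamline."""
            thetas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
            def total(p, th):
                return current_psi(*p) + control_psi(p[0], p[1], speed, th)
            gaps = [abs(total(start, th) - total(goal, th)) for th in thetas]
            return thetas[int(np.argmin(gaps))]

        # Toy current: eastward shear u(y) = y, with stream function y^2 / 2.
        current_psi = lambda x, y: 0.5 * y ** 2
        theta = connecting_heading(current_psi, start=(0.0, 0.0),
                                   goal=(10.0, 5.0), speed=2.0)
        print(np.degrees(theta))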

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and of the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that are still to be addressed and showing how current solutions can be applied to address them.

    Peer reviewed. Postprint (author's final draft).
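    The contrast between the two flow styles is easy to state in code. The following toy Python sketch uses names of our own invention (a real deployment would sit behind an ETL engine and a stream processor); it places a batched ETL load into a warehouse next to an operational query that integrates live source data at runtime.

        from datetime import date

        def batch_etl(source_rows, warehouse):
            """Traditional batched flow: extract, transform into an
            analysis-ready record, load into the warehouse."""
            for row in source_rows:                           # extract
                record = {
                    "customer": row["name"].strip().title(),  # transform
                    "revenue": round(float(row["amount"]), 2),
                    "loaded_on": date.today().isoformat(),
                }
                warehouse.setdefault(record["customer"], []).append(record)  # load

        def operational_query(customer, warehouse, live_source):
            """Operational flow: merge warehoused history with data
            fetched from the source at query time."""
            fresh = [r for r in live_source()
                     if r["name"].strip().title() == customer]
            return {"history": warehouse.get(customer, []), "live": fresh}

        warehouse = {}
        batch_etl([{"name": " acme ", "amount": "1200.5"}], warehouse)
        print(operational_query("Acme", warehouse,
                                lambda: [{"name": "ACME", "amount": "80"}]))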

    Physical portrayal of computational complexity

    Computational complexity is examined using the principle of increasing entropy. Considering computation as a physical process, from an initial instance to the final acceptance, is motivated by the observation that many natural processes have been recognized to complete in non-polynomial time (NP). The irreversible process with three or more degrees of freedom is found intractable because, in terms of physics, flows of energy are inseparable from their driving forces. In computational terms, when solving problems in the class NP, decisions affect the sets of decisions subsequently available. The state space of a non-deterministic finite automaton evolves due to the computation itself; hence it cannot be efficiently contracted using a deterministic finite automaton, which will arrive at a solution only in super-polynomial time. The solution of the NP problem itself is verifiable in polynomial time (P) because the corresponding state is stationary. Likewise, the class P set of states does not depend on computational history; hence it can be efficiently contracted to the accepting state by a deterministic sequence of dissipative transformations. Thus it is concluded that the class P set of states is inherently smaller than the class NP set. Since the computational time to contract a given set is proportional to dissipation, the computational complexity class P is a proper subset of NP.

    Comment: 16 pages, 7 figures
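    The automaton claim can be made concrete with the standard subset construction (the example below is our illustration, not the paper's argument). Determinization replaces the NFA's evolving set of simultaneously occupied states with one DFA state per reachable subset; for the language "the k-th symbol from the end is 1" the NFA needs k+1 states but the equivalent DFA needs 2^k, which is the kind of contraction cost the abstract points to.

        def determinize(nfa, start, alphabet):
            """Standard subset construction: each DFA state is the frozenset
            of NFA states occupiable after the input read so far."""
            dfa, frontier = {}, [frozenset([start])]
            while frontier:
                S = frontier.pop()
                if S in dfa:
                    continue
                dfa[S] = {}
                for a in alphabet:
                    T = frozenset(t for s in S for t in nfa.get((s, a), ()))
                    dfa[S][a] = T
                    frontier.append(T)
            return dfa

        # NFA for "the 3rd symbol from the end is 1": state 0 loops on any
        # symbol and nondeterministically guesses the position on a '1'.
        k = 3
        nfa = {(0, "0"): {0}, (0, "1"): {0, 1}}
        for i in range(1, k):
            nfa[(i, "0")] = {i + 1}
            nfa[(i, "1")] = {i + 1}
        print(len(determinize(nfa, 0, "01")))   # 2**k = 8 reachable DFA states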