
    Algorithms for the Problems of Length-Constrained Heaviest Segments

    We present algorithms for the length-constrained maximum sum segment and maximum density segment problems in particular, and for the problem of finding length-constrained heaviest segments in general, over a sequence of real numbers. Given a sequence of n real numbers and two real parameters L and U (L <= U), the maximum sum segment problem is to find a consecutive subsequence, called a segment, of length at least L and at most U such that the sum of the numbers in the segment is maximum. The maximum density segment problem is to find a segment of length at least L and at most U such that the density of the segment (the sum of its numbers divided by its length) is maximum. For the first problem with non-uniform width there is a known algorithm with time and space complexities in O(n); we present an algorithm with time complexity in O(n) and space complexity in O(U). For the second problem with non-uniform width there is a combinatorial solution with time complexity in O(n) and space complexity in O(U); we present a simple geometric algorithm with the same time and space complexities. We extend our algorithms to solve the length-constrained k maximum sum segments problem in O(n+k) time and O(max{U, k}) space, and the length-constrained k maximum density segments problem in O(n min{k, U-L}) time and O(U+k) space. We further extend our algorithms to find all the length-constrained segments having a user-specified sum or density in O(n+m) and O(n log(U-L)+m) time, respectively, where m is the size of the output. Previously, no algorithm with a non-trivial result was known for these problems. We indicate extensions of our algorithms to higher dimensions. All the algorithms extend in a straightforward way to the problems with non-uniform width and non-uniform weight.
    Comment: 21 pages, 12 figures
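
    The maximum sum segment problem yields to a classic prefix-sum plus sliding-window-minimum technique. The sketch below is not the paper's exact algorithm (function and variable names are our own); it finds the best segment with length in [L, U] in O(n) time, storing the full prefix array for clarity where the paper's algorithm keeps only an O(U) window of it.

```python
from collections import deque

def max_sum_segment(a, L, U):
    """Best-sum segment of a with length in [L, U]; O(n) time.

    Sketch only: stores the whole prefix array, whereas the paper's
    algorithm streams it to reach O(U) space.
    """
    n = len(a)
    if not (0 < L <= U) or n < L:
        return None
    # prefix[i] = a[0] + ... + a[i-1]
    prefix = [0] * (n + 1)
    for i, x in enumerate(a):
        prefix[i + 1] = prefix[i] + x

    best = None
    window = deque()              # candidate start indices, kept so that
                                  # their prefix values are strictly increasing
    for j in range(L, n + 1):     # segment is a[i:j] with j - U <= i <= j - L
        i_new = j - L             # newest admissible start index
        while window and prefix[window[-1]] >= prefix[i_new]:
            window.pop()          # dominated starts can never win again
        window.append(i_new)
        if window[0] < j - U:     # oldest start violates the length bound
            window.popleft()
        s = prefix[j] - prefix[window[0]]
        if best is None or s > best[0]:
            best = (s, (window[0], j))
    return best
```

    For example, max_sum_segment([2, -1, 3, -4, 5], 2, 3) returns (4, (0, 3)): the segment a[0:3] = [2, -1, 3] attains the maximum sum 4 among segments of length 2 or 3.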

    Fast and adaptive fractal tree-based path planning for programmable bevel tip steerable needles

    Steerable needles are a promising technology for minimally invasive surgery, as they can provide access to difficult-to-reach locations while avoiding delicate anatomical regions. However, due to the unpredictable tissue deformation associated with needle insertion and the complexity of many surgical scenarios, a real-time path planning algorithm with a high update frequency would be advantageous. Real-time path planning for nonholonomic systems is required in a broad variety of fields, ranging from aerospace to submarine navigation. In this letter, we propose to take advantage of the architecture of graphics processing units (GPUs) to apply fractal theory and thus parallelize real-time path planning computation. This novel approach, termed adaptive fractal trees (AFT), allows for the creation of a database of paths covering the entire domain, which are dense, invariant, procedurally produced, adaptable in size, and recursive in structure. The generated cache of paths can in turn be analyzed in parallel to determine the most suitable path in a fraction of a second. The ability to cope with nonholonomic constraints, as well as with state-space constraints of any complexity or number, is intrinsic to the AFT approach, rendering it highly versatile. Three-dimensional (3-D) simulations applied to needle steering in neurosurgery show that our approach can successfully compute paths in real time, enabling complex brain navigation.
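
    The AFT method itself is GPU-parallel and is not reproduced in the abstract. The toy sketch below (all names, parameters, and the circular obstacle model are our own assumptions, and it runs serially in 2-D) only illustrates the underlying idea: recursively grow a self-similar tree of bounded-curvature paths, prune colliding branches, and select the best cached leaf.

```python
import math

# Hypothetical obstacle model: circular no-go regions (cx, cy, radius).
OBSTACLES = [(0.5, 0.2, 0.1)]

def blocked(x, y):
    return any((x - cx) ** 2 + (y - cy) ** 2 < r * r for cx, cy, r in OBSTACLES)

def grow_tree(x, y, heading, depth, step=0.1, turn=math.radians(15), path=()):
    """Recursively grow a self-similar tree of bounded-curvature paths.

    Each node branches into (left, straight, right) with a bounded
    heading change -- a stand-in for the nonholonomic constraint -- and
    branches entering an obstacle are pruned. Returns the collision-free
    leaf paths as tuples of (x, y) waypoints.
    """
    path = path + ((x, y),)
    if depth == 0:
        return [path]
    leaves = []
    for dh in (-turn, 0.0, turn):
        h = heading + dh
        nx, ny = x + step * math.cos(h), y + step * math.sin(h)
        if not blocked(nx, ny):
            leaves.extend(grow_tree(nx, ny, h, depth - 1, step, turn, path))
    return leaves

def closest_leaf(leaves, goal):
    """Pick the cached path whose endpoint lands nearest the goal."""
    gx, gy = goal
    return min(leaves, key=lambda p: (p[-1][0] - gx) ** 2 + (p[-1][1] - gy) ** 2)

paths = grow_tree(0.0, 0.0, heading=0.0, depth=6)   # at most 3^6 candidate paths
print(closest_leaf(paths, goal=(0.6, 0.3))[-1])     # endpoint of the chosen path
```

    Because the tree is procedurally produced from a fixed branching rule, every subtree has the same shape as the whole; this is the recursive, precomputable structure that makes the real approach amenable to massive GPU parallelism.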

    Parallel and Distributed Performance of a Depth Estimation Algorithm

    Expansion of dataset sizes and the increasing complexity of processing algorithms have led to the consideration of parallel and distributed implementations. The rationale for distributing the computational load may be to thin-provision computational resources, to accelerate the data processing rate, or to efficiently reuse already available but otherwise idle computational resources. Whatever the rationale, an efficient solution of this type brings with it questions of data distribution, job partitioning, reliability, and robustness. This paper addresses the first two of these questions in the context of a local cluster-computing environment. Using the CHRT depth estimator, it considers active and passive data distribution and their effect on data throughput, focusing mainly on the compromises required to keep inter-node communication requirements minimal. The metric used is the overall computation time for a given dataset (i.e., the time lag that a user would experience). The paper shows that although significant speedups can be had through relatively simple modifications to the algorithm, there are limits to the parallelism that can be achieved efficiently, and a balance must be struck between inter-node parallelism (multiple nodes running in parallel) and intra-node parallelism (multiple threads within one node) for the most efficient use of available resources.
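
    To make the data-distribution and job-partitioning questions concrete, a minimal single-node sketch is shown below. CHRT itself is not shown; the stand-in estimator, tile size, and worker count are our own assumptions. The survey is split into self-contained tiles that are fanned out to a pool of worker processes.

```python
from concurrent.futures import ProcessPoolExecutor

def estimate_tile(tile):
    """Stand-in per-tile estimator (the real CHRT estimator is not shown):
    here we simply average the soundings that fall in the tile."""
    tile_id, soundings = tile
    depth = sum(soundings) / len(soundings) if soundings else float("nan")
    return tile_id, depth

def partition(soundings, n_tiles):
    """Static data distribution: split the survey into contiguous,
    self-contained tiles so each job needs no data from its neighbours."""
    size = max(1, -(-len(soundings) // n_tiles))  # ceiling division
    return [(i, soundings[i * size:(i + 1) * size]) for i in range(n_tiles)]

if __name__ == "__main__":
    soundings = [10.0 + 0.01 * (i % 500) for i in range(10_000)]  # fake depths
    tiles = partition(soundings, n_tiles=8)
    # Intra-node parallelism: a pool of worker processes on a single node.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for tile_id, depth in pool.map(estimate_tile, tiles):
            print(f"tile {tile_id}: mean depth {depth:.2f} m")
```

    Keeping each tile self-contained is what holds inter-node communication to a minimum in the distributed case; the trade-off is that coarser tiles reduce scheduling and transfer overhead but leave less room for load balancing across nodes and threads.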