
    Parallel Algorithms for Constructing Convex Hulls.

    For a given set S of planar points, the convex hull of S, CH(S), is defined as an ordered list of points representing the smallest convex polygon that contains all of the points. The convex hull problem, one of the most important problems in computational geometry, has many applications in areas such as computer graphics, simulation, and pattern recognition. There are two strategies used in designing parallel convex hull algorithms. One strategy is the divide-and-conquer paradigm; its disadvantage is that the recursive merge step is complicated and difficult to implement on current parallel machines. The second strategy is to parallelize sequential convex hull algorithms. Algorithms designed with the second strategy are often iterative and can be implemented more easily on current parallel machines. This research focuses on designing parallel convex hull algorithms using the second strategy because we intend to facilitate the implementation of the newly designed algorithms on massively parallel machines. We first design a sequential algorithm for constructing the convex hull of a simple polygon, which is a special case of a set of planar points. This optimal algorithm is extended to handle a set of planar points without increasing the time complexity. Next, the sequential algorithm is adapted to linear-array and two- or higher-dimensional mesh-array architectures. The case where the number of points is greater than the number of processors is also addressed. Each of the algorithms developed is optimal. To analyze the performance of the algorithms compared to previous algorithms, a system called the Parallel Convex Hull Simulation System was developed. The results of the analysis indicate that the new algorithms exhibit better performance than previous algorithms.
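    The abstract does not reproduce the algorithms themselves. Purely as a point of reference for the kind of sequential baseline such parallelizations build on, below is a minimal sketch of a standard planar convex hull routine (Andrew's monotone chain); it is illustrative only and is not the dissertation's optimal simple-polygon algorithm.

```python
# Illustrative sequential baseline only -- not the dissertation's algorithm.
# Andrew's monotone chain: O(n log n) convex hull of a planar point set.

def cross(o, a, b):
    """Z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
```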

    Exploiting parallelism in n-D convex hull algorithms

    PhD Thesis. The convex hull is a problem of primary importance because of its applications in computational geometry. A number of sequential and parallel algorithms for computing the convex hull of a finite set of points in the lower dimensions are known. In comparison, the general n-D problem is not as well understood, and parallel algorithms are not so prevalent because the 2-D and 3-D methods are not easily extended to the general case. This thesis presents parallel algorithms for evaluating the general n-D convex hull problem (of which 2-D and 3-D are special cases) using Swart's sequential algorithm. One of our methods combines a gift-wrapping technique with partitioning and merge algorithms, where the original list is split into p > 1 partitions followed by the computation of the subhulls using the sequential n-D gift-wrapping method. The partial hulls are then combined using a fanin tree. The second method computes the convex hull in parallel by wrapping around the edges until a complete facial lattice structure of the polytope is generated. Several parameterised versions of the proposed algorithms have been implemented on shared-memory and message-passing architectures. For the former, performance on an Encore Multimax using Encore Parallel Threads and the more lightweight Microthread programming utilities is examined. For the latter, performance on a transputer-based machine using CS-Tools is discussed. We have shown that our techniques will be useful in the construction of faster algorithms which employ the n-D convex hull algorithms as a sub-algorithm. Commonwealth Scholarship Commission in the United Kingdom.
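    As a rough illustration of the partition/subhull/merge idea in the 2-D special case only, the sketch below wraps each partition with a plain 2-D gift-wrapping (Jarvis march) step and merges the subhulls by wrapping their union; Swart's n-D algorithm, the facial lattice structure, and the fanin tree are not reproduced, and all names are illustrative.

```python
# 2-D stand-in for the partition / subhull / merge scheme described above.
# Each partition is wrapped with 2-D gift wrapping (Jarvis march), then the
# subhulls are merged by wrapping their union -- a flat version of the fanin
# tree; degenerate (collinear) inputs are not handled in this sketch.

def gift_wrap(points):
    """Jarvis march: hull vertices of a 2-D point set in CCW order."""
    pts = list(set(points))
    if len(pts) <= 2:
        return pts
    hull = [min(pts)]                            # leftmost point is on the hull
    while True:
        p = hull[-1]
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:                            # keep the most clockwise candidate
            turn = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            if r != p and turn < 0:
                q = r
        if q == hull[0]:
            return hull
        hull.append(q)

def partitioned_hull(points, p=4):
    chunks = [points[i::p] for i in range(p)]            # split into p partitions
    subhulls = [gift_wrap(c) for c in chunks if c]       # wrap each partition
    return gift_wrap([v for h in subhulls for v in h])   # merge the subhulls
```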

    Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images

    Segmentation of the heart structures helps compute the cardiac contractile function quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, representing a reliable diagnostic value. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good cardiac diagnostic indicators. Furthermore, high-quality anatomical models of the heart can be used in planning and guidance of minimally invasive interventions under image guidance. The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort, and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging. In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. Firstly, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Secondly, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine Magnetic Resonance Imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework with iterative segmentation refinement. The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation from a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that segmentation-based computation of the myocardial area is significantly better than the area regressed directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV, and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets. We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as with other competing segmentation methods.
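    A minimal sketch of the kind of multi-task objective described above (joint myocardium segmentation and area regression), assuming PyTorch; the toy encoder, heads, and loss weighting are illustrative placeholders, not the networks or hyperparameters used in this work.

```python
# Minimal multi-task sketch (illustrative, not the exact architecture used):
# a shared encoder with a per-pixel segmentation head and a scalar
# area-regression head, trained with a weighted sum of the two losses.
import torch
import torch.nn as nn

class MultiTaskSeg(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, n_classes, 1)   # per-pixel class logits
        self.area_head = nn.Sequential(               # scalar area estimate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.area_head(f)

model = MultiTaskSeg()
x = torch.randn(4, 1, 128, 128)                  # dummy cine-MRI slices
y_seg = torch.randint(0, 2, (4, 128, 128))       # dummy myocardium masks
y_area = y_seg.float().sum(dim=(1, 2)).unsqueeze(1)   # area as pixel count

seg_logits, area_pred = model(x)
loss = nn.CrossEntropyLoss()(seg_logits, y_seg) \
     + 0.01 * nn.MSELoss()(area_pred, y_area)    # weighting is illustrative
loss.backward()
```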

    Visibility-Related Problems on Parallel Computational Models

    Visibility-related problems find applications in seemingly unrelated and diverse fields such as computer graphics, scene analysis, robotics, and VLSI design. While there are common threads running through these problems, most existing solutions do not exploit these commonalities. With this in mind, this thesis identifies these common threads, provides a unified approach to solving these problems, and develops solutions that can be viewed as template algorithms for an abstract computational model. A template algorithm provides an architecture-independent solution for a problem, from which solutions can be generated for diverse computational models. In particular, the template algorithms presented in this work lead to optimal solutions to various visibility-related problems on fine-grained mesh-connected computers, such as meshes with multiple broadcasting and reconfigurable meshes, and also on coarse-grained multicomputers. The visibility-related problems studied in this thesis can be broadly classified into Object Visibility and Triangulation problems. To demonstrate the practical relevance of these algorithms, two of the fundamental template algorithms identified as powerful tools in almost every algorithm designed in this work were implemented on an IBM-SP2. The code was developed in the C language, using MPI, and can easily be ported to many commercially available parallel computers.

    Efficient Analysis in Multimedia Databases

    The rapid progress of digital technology has led to a situation where computers have become ubiquitous tools. We can now find them in almost every environment, be it industrial or even private. With ever-increasing performance, computers have assumed more and more vital tasks in engineering, climate and environmental research, medicine, and the content industry. Previously, these tasks could only be accomplished by spending enormous amounts of time and money. With digital sensor devices such as earth observation satellites, genome sequencers, and video cameras, the amount and complexity of data with a spatial or temporal relation has grown enormously. This has led to new challenges for data analysis and requires the use of modern multimedia databases. This thesis aims at developing efficient techniques for the analysis of complex multimedia objects such as CAD data, time series, and videos. It is assumed that the data is modeled by commonly used representations. For example, CAD data is represented as a set of voxels, and audio and video data are represented as multi-represented, multi-dimensional time series. The main part of this thesis focuses on finding efficient methods for collision queries of complex spatial objects. One way to speed up such queries is to employ a cost-based decompositioning, which uses interval groups to approximate a spatial object. For example, this technique can be used for the Digital Mock-Up (DMU) process, which helps engineers ensure short product cycles. This thesis defines and discusses a new similarity measure for time series called threshold-similarity: two time series are considered similar if they exhibit similar behavior regarding the transgression of a given threshold value. Another part of the thesis is concerned with the efficient calculation of reverse k-nearest neighbor (RkNN) queries in general metric spaces using conservative and progressive approximations. The aim of such RkNN queries is to determine the impact of single objects on the whole database. Finally, the thesis deals with video retrieval and hierarchical genre classification of music using multiple representations. The practical relevance of the discussed genre classification approach is highlighted with a prototype tool that helps the user organize large music collections. Both the efficiency and the effectiveness of the presented techniques are thoroughly analyzed, and the benefits over traditional approaches are shown by evaluating the new methods on real-world test datasets.
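    The threshold-similarity measure is defined formally in the thesis; the sketch below is only a loose illustration of the idea, scoring two series by the overlap (Jaccard index) of the index sets on which they exceed a threshold tau. The function name and scoring rule are assumptions for illustration, not the thesis's definition.

```python
# Illustrative only: scores how similarly two series behave with respect to a
# threshold tau, using the Jaccard overlap of their above-threshold index sets.
def threshold_similarity(a, b, tau):
    above_a = {i for i, v in enumerate(a) if v > tau}
    above_b = {i for i, v in enumerate(b) if v > tau}
    union = above_a | above_b
    return 1.0 if not union else len(above_a & above_b) / len(union)

s1 = [0.1, 0.4, 0.9, 1.2, 0.8, 0.3]
s2 = [0.2, 0.5, 1.1, 1.0, 0.7, 0.2]
print(threshold_similarity(s1, s2, tau=0.6))   # -> 1.0 (same transgression pattern)
```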

    Parallel Geometric Algorithms.

    Geometric algorithms have many important applications in science and technology. Some geometric problems require fast response times that cannot be achieved by traditional sequential algorithms. However, the speed, power, and versatility of parallel computers can be exploited to develop efficient geometric algorithms, as shown in this dissertation. Our study focuses on designing efficient parallel geometric algorithms and analyzing their computational complexities. In this research, we first developed a parallel algorithm to find the maxima of a set of N points in d-dimensional space, d > 3, on a hypercube SIMD machine. Our algorithm is a parallel implementation of the sequential algorithm given by Kung, Luccio, and Preparata (KLP75). Although the time complexity of our algorithm, O(N^0.77 log^(d-1) N), is not optimal, it is the first sublinear-time algorithm for solving the high-dimensional maxima problem. Next, we developed a parallel algorithm to construct the Voronoi diagram of a point set in the plane. This algorithm is based on the sequential algorithm given by Brown (B79). We use an N x N mesh-of-trees (MOT) SIMD computer and achieve the optimal time complexity O(log^2 N). Finally, we developed another MOT algorithm to solve the congruent pattern problem: given a simple polygon P with k edges and a planar graph G with N edges, N > k, find all the patterns (cycles) in G which are congruent to P. Our algorithm is based on the CREW PRAM algorithm given by Jeong, Kim, and Baek (JKB92). We again use an N x N MOT and achieve the optimal time complexity O(k log N).
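    For orientation, the sketch below solves the planar (2-D) maxima, or dominance, problem sequentially; the dissertation's contribution is the parallel version for d > 3 on hypercube and mesh-of-trees machines, which is not reproduced here.

```python
# Sequential 2-D reference for the maxima (dominance) problem: a point is
# maximal if no other point exceeds it in both coordinates. Scanning by
# decreasing x, a point is maximal iff its y exceeds every y seen so far.
def planar_maxima(points):
    best_y = float("-inf")
    maxima = []
    for x, y in sorted(points, reverse=True):   # decreasing x
        if y > best_y:                          # nothing to the right is higher
            maxima.append((x, y))
            best_y = y
    return maxima

print(planar_maxima([(1, 5), (2, 3), (4, 4), (3, 1), (5, 2)]))
# -> [(5, 2), (4, 4), (1, 5)]
```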

    Sensor system and related models to determine irregular shaped 3-D objects

    This work comprises several parts, the initial part of which is a review of the techniques in use at present for measuring shape and characterising products. The major work details a ring sensor system, which consists of a large number of transmitters and receivers alternately arranged on the circumference of a metal annulus. Using a modified polar co-ordinate system and trigonometric functions, two enveloping spirals of an object can be determined. One or both spirals can then be used for further data analysis. Each spiral consists of intersections between enveloping chords and parts of the chords. The area surrounding the object is segmented, and properties such as volume and axis measurements can be determined. A model was developed to simulate artificial objects of various shapes. Simulation tests were carried out to determine the limits of the system concerning the position within the ring, the shape and speed of the object, and the resolution of the ring. A ring was manufactured for actual tests, which were carried out mainly on potatoes to confirm the possible use in practice and to show the relative merits compared with existing systems. Interesting side issues are introduced, such as the low number of primary data, possibilities of further reduction using differential coding, and the time consumption of the algorithms. Finally, a model for the simulation of more than one object in the ring at the same time is introduced and a possible way of separation is investigated.
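    As a loose illustration of the final measurement step only, the sketch below assumes that one enveloping spiral has already been reduced to an ordered list of boundary points and computes the cross-sectional area with the shoelace formula; the input format and coordinates are invented, not taken from the thesis.

```python
# Illustration of the last step only: once one enveloping spiral has been
# reduced to an ordered polygon of boundary points (assumed input format),
# its cross-sectional area follows from the shoelace formula.
def shoelace_area(boundary):
    """Polygon area from vertices given in traversal order."""
    n = len(boundary)
    s = 0.0
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Rough cross-section of a potato-like outline (coordinates in mm, invented).
outline = [(0, 0), (40, -5), (70, 10), (75, 40), (50, 60), (15, 55), (-5, 30)]
print(shoelace_area(outline))
```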

    Enhancing cardiac image segmentation through persistent homology regularization

    Final Degree Project in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2022, Advisors: Sergio Escalera Guerrero, Carles Casacuberta, and Rubén Ballester Bautista. Cardiovascular diseases are a major cause of death and disability. Deep learning-based segmentation methods could help reduce their severity by aiding early diagnosis, but high levels of accuracy are necessary. The vast majority of methods focus on correcting local errors and miss the global picture. To address this issue, researchers have developed techniques that incorporate global context and consider the relationships between pixels. Here, we apply persistent homology, a branch of topology that studies the topological structure of shapes, along with deep learning methods to improve heart segmentation. We use multidimensional topological losses to avoid spurious components and holes and increase the total accuracy. We evaluate the performance of three different approaches: using the Dice and pixel-wise losses with the sum of persistences of label diagrams as a regularizer, using the Dice and pixel-wise losses with the bottleneck distance as a regularizer, and using both losses without any regularization. We find that, while more computationally demanding, the methods using topological regularizers outperform the unregularized method in terms of accuracy.
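    The topological losses in the work are computed with differentiable persistence machinery inside training; as a non-differentiable illustration only, the snippet below uses the GUDHI library (assumed available) to compute the total persistence of a predicted probability map through a cubical sublevel-set filtration, i.e. the kind of quantity a "sum of persistences" regularizer penalizes. It is not the thesis's training code.

```python
# Illustrative, non-differentiable sketch of the regularized quantity: total
# persistence of a predicted probability map, computed with GUDHI on a cubical
# (sublevel-set) filtration of 1 - p, so prominent foreground components and
# holes show up as long-lived pairs.
import numpy as np
import gudhi

def total_persistence(prob_map):
    cc = gudhi.CubicalComplex(top_dimensional_cells=1.0 - prob_map)
    pairs = cc.persistence()                     # list of (dim, (birth, death))
    return sum(d - b for _, (b, d) in pairs if d != float("inf"))

pred = np.zeros((64, 64))
pred[20:40, 20:40] = 0.9                         # one clean component
pred[5:8, 50:53] = 0.8                           # spurious blob adds persistence
print(total_persistence(pred))
```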