
    A Computational Paradigm on Network-Based Models of Computation

    The maturation of computer science has strengthened the need to consolidate isolated algorithms and techniques into general computational paradigms. The main goal of this dissertation is to provide a unifying framework that captures the essence of a number of problems in seemingly unrelated contexts in database design, pattern recognition, image processing, VLSI design, computer vision, and robot navigation. The main contribution of this work is a computational paradigm comprising this unifying framework, referred to as the Multiple Query problem, along with a generic solution to it. To demonstrate the applicability of the paradigm, a number of problems from different areas of computer science are solved by formulating them in this framework. To show practical relevance, solutions to two fundamental problems were also implemented in the C language using MPI. The code can be ported to many commercially available parallel computers; in particular, it was tested on an IBM-SP2 and on a network of workstations.
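    The abstract's MPI implementation is not reproduced here, but the generic pattern such a solution follows can be sketched. The snippet below is a minimal illustration under stated assumptions: the data set is partitioned across processes, each query is broadcast, every process answers it against its local partition, and partial answers are combined with a reduction. The solve_local() kernel and the toy data are hypothetical placeholders, not the dissertation's code.

```c
#include <mpi.h>
#include <stdio.h>

/* Hypothetical per-process kernel: answer one query against the local
 * slice of the data. Real Multiple Query solutions are problem-specific;
 * this stands in for illustration only. */
static double solve_local(double query, const double *pts, int n) {
    double best = 0.0;
    for (int i = 0; i < n; i++)
        if (pts[i] * query > best)
            best = pts[i] * query;
    return best;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* each process holds one partition of the data (toy values here) */
    double local_pts[4] = { rank + 0.5, rank + 1.0, rank + 1.5, rank + 2.0 };
    double queries[3]   = { 1.0, 2.0, 3.0 };   /* known to rank 0 */

    /* every query is broadcast; partial answers are combined by reduction */
    MPI_Bcast(queries, 3, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    for (int q = 0; q < 3; q++) {
        double part = solve_local(queries[q], local_pts, 4);
        double answer;
        MPI_Reduce(&part, &answer, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("query %d -> %f\n", q, answer);
    }
    MPI_Finalize();
    return 0;
}
```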

    Algorithmic Motion Planning and Related Geometric Problems on Parallel Machines (Dissertation Proposal)

    The problem of algorithmic motion planning has received considerable attention in recent years. The automatic planning of motion for a mobile object moving amongst obstacles is a fundamentally important problem with numerous applications in computer graphics and robotics. Numerous approximate techniques for motion planning (for example, AI-based, heuristics-based, and potential field methods) have long been in existence and have resulted in the design of experimental systems that work reasonably well under various special conditions [7, 29, 30]. Our interest in this problem, however, is in the use of algorithmic techniques for motion planning, with provable worst-case performance guarantees. The study of algorithmic motion planning has been spurred by recent research that has established the mathematical depth of motion planning. Classical geometry, algebra, algebraic geometry, and combinatorics are some of the fields of mathematics that have been used to prove results providing better insight into the issues involved in motion planning [49]. In particular, the design and analysis of geometric algorithms has proved very useful for numerous important special cases. In the remainder of this proposal we substitute the shorter term motion planning for the more precise algorithmic motion planning.

    Visibility-Related Problems on Parallel Computational Models

    Visibility-related problems find applications in seemingly unrelated and diverse fields such as computer graphics, scene analysis, robotics, and VLSI design. While there are common threads running through these problems, most existing solutions do not exploit these commonalities. With this in mind, this thesis identifies these common threads, provides a unified approach to solving these problems, and develops solutions that can be viewed as template algorithms for an abstract computational model. A template algorithm provides an architecture-independent solution for a problem, from which solutions can be generated for diverse computational models. In particular, the template algorithms presented in this work lead to optimal solutions to various visibility-related problems on fine-grained mesh-connected computers, such as meshes with multiple broadcasting and reconfigurable meshes, and also on coarse-grained multicomputers. The visibility-related problems studied in this thesis can be broadly classified into Object Visibility and Triangulation problems. To demonstrate the practical relevance of these algorithms, two of the fundamental template algorithms, identified as powerful tools in almost every algorithm designed in this work, were implemented on an IBM-SP2. The code was developed in the C language, using MPI, and can easily be ported to many commercially available parallel computers.
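    As one concrete example of the kind of primitive such template algorithms build on, the sketch below (an assumed illustration, not the thesis's actual template) solves a classic visibility question with a prefix-maxima scan: for vertical segments standing on the x-axis and viewed from the left at infinity, a segment is visible exactly when its height exceeds the running maximum of all heights before it. Prefix computations of this kind are standard building blocks on mesh-connected machines.

```c
#include <stdio.h>

int main(void) {
    /* top heights of vertical segments, listed left to right */
    double h[] = { 2.0, 1.5, 3.0, 2.5, 4.0 };
    int n = sizeof h / sizeof h[0];

    /* a segment is visible from the left iff it exceeds the prefix max */
    double prefix_max = 0.0;
    for (int i = 0; i < n; i++) {
        if (h[i] > prefix_max) {
            printf("segment %d (height %.1f) is visible\n", i, h[i]);
            prefix_max = h[i];
        }
    }
    return 0;
}
```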

    [Activity of Institute for Computer Applications in Science and Engineering]

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science.

    Parallel Patch-Based Volumetric Reconstruction from Images

    M.Sc.Eng. University of KwaZulu-Natal, Durban, 2014. Three-Dimensional (3D) reconstruction is the creation of 3D computer models from sets of Two-Dimensional (2D) images. 3D reconstruction algorithms tend to have long execution times, meaning they are ill-suited to real-time 3D reconstruction tasks. This is a significant limitation which this dissertation attempts to address. Modern Graphics Processing Units (GPUs) have become fully programmable and have spawned the field known as General Purpose GPU (GPGPU) processing. Using this technology it is possible to offload certain types of tasks from the Central Processing Unit (CPU) to the GPU. GPGPU processing is designed for problems that have data parallelism: a task can be split into many smaller tasks that run in parallel, the results of which do not depend upon the order in which the tasks are completed. Therefore, to make proper use of both CPU parallelism and GPGPU processing, a 3D reconstruction algorithm with data parallelism was required. The selected algorithm was the Patch-Based Multi-View Stereopsis (PMVS) method, proposed and implemented by Yasutaka Furukawa and Jean Ponce. This algorithm uses small oriented rectangular patches to model a surface and is broken into four major steps: feature detection, feature matching, expansion, and filtering. The reconstructed patches are independent, and as such the algorithm is data parallel. Some segments of the PMVS algorithm were programmed for GPGPU and others for CPU parallelism. Results show that the feature detection stage runs 10 times faster on the GPU than the equivalent CPU implementation. The patch creation and expansion stages also benefited from GPU implementation, which brought a two-fold improvement in execution time for large images and equivalent execution times for small images, compared to the CPU implementation. These results show that the use of GPGPU and CPU parallelism can indeed improve the performance of this 3D reconstruction algorithm.
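    The data parallelism described above can be made concrete with a short sketch. The OpenMP loop below stands in for both the GPGPU and CPU-thread implementations; the Patch structure and the refine_patch() kernel are hypothetical placeholders, not the dissertation's code. The key property is that each iteration touches only its own patch, so the results are independent of completion order.

```c
#include <omp.h>
#include <stdio.h>

/* hypothetical patch: center, normal, and a fitness score */
typedef struct { double cx, cy, cz; double nx, ny, nz; double score; } Patch;

/* hypothetical per-patch work, standing in for photometric refinement */
static void refine_patch(Patch *p) {
    p->score = p->cx * p->nx + p->cy * p->ny + p->cz * p->nz;
}

int main(void) {
    enum { N = 100000 };
    static Patch patches[N];
    for (int i = 0; i < N; i++)
        patches[i] = (Patch){ i * 0.001, 0, 0, 1, 0, 0, 0 };

    /* patches are independent, so the loop parallelizes directly */
    #pragma omp parallel for schedule(dynamic, 256)
    for (int i = 0; i < N; i++)
        refine_patch(&patches[i]);

    printf("patch 0 score: %f\n", patches[0].score);
    return 0;
}
```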

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
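    For concreteness, the formulation the survey presents as the de-facto standard is maximum a posteriori (MAP) estimation over a factor graph; under a Gaussian measurement-noise assumption it reduces to the nonlinear least-squares problem sketched below, where X collects the robot and map variables, z_k is the k-th measurement with model h_k acting on the subset X_k, and Omega_k is its information matrix.

```latex
% MAP estimate of the variables X given measurements Z = {z_k}:
% for Gaussian noise, maximizing the posterior is equivalent to
% minimizing a sum of Mahalanobis-weighted residuals.
\[
X^{\star} = \arg\max_{X}\, p(X \mid Z)
          = \arg\min_{X} \sum_{k}
            \bigl\lVert h_k(X_k) - z_k \bigr\rVert^{2}_{\Omega_k}
\]
```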

    Virtual Reality Simulation of Glenoid Reaming Procedure

    Glenoid reaming is a bone machining operation in Total Shoulder Arthroplasty (TSA) in which the glenoid bone is resurfaced to make intimate contact with the implant undersurface. While this step is crucial for the longevity of TSA, many surgeons find it technically challenging. With recent advances in Virtual Reality (VR) simulation, it has become possible to realistically replicate complicated operations without any need for patients or cadavers and, at the same time, to provide quantitative feedback to improve surgeons' psycho-motor skills. In light of these advantages, the current thesis develops the tools and methods required for the construction of a VR simulator for glenoid reaming, in an attempt to build a reliable tool for preoperative training and planning for surgeons involved with TSA. Towards this end, the thesis presents computational algorithms to appropriately represent the surgical tool and the bone in the VR environment, determine their intersections, and compute realistic haptic feedback based on those intersections. The core of the computations is constituted by sampled geometrical representations of both objects: a point-cloud model of the tool and a voxelized model of the bone, derived from Computed Tomography (CT) images. The thesis shows how to efficiently construct these models and represent them compactly in memory. It also elucidates how to use these models to rapidly determine tool-bone collisions and account for bone removal instantaneously. Furthermore, the thesis applies cadaveric experimental data to study the mechanics of glenoid reaming and proposes a realistic model for haptic computations. The proposed model integrates well with the developed computational tools, enabling real-time haptic and graphic simulation of glenoid reaming. Throughout the thesis, particular emphasis is placed upon computational efficiency, especially the use of parallel computing on Graphics Processing Units (GPUs). Extensive implementation results are presented to verify the effectiveness of the developments. The results of this thesis not only advance knowledge in the simulation of glenoid reaming, but also contribute rigorously to the broader area of surgery simulation and serve as a step towards wider adoption of VR technology in surgeon training programs.
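    A minimal sketch of the tool-bone intersection step described above follows, with the caveat that it is an assumed simplification rather than the thesis's implementation: each sample point of the reamer's point cloud is mapped into the bone's voxel grid; an occupied voxel counts as a contact, is cleared to model material removal, and contributes a simple penalty force opposing the feed direction. The stiffness constant k and all names are illustrative assumptions.

```c
#include <stdio.h>
#include <string.h>

#define NX 64
#define NY 64
#define NZ 64
static unsigned char bone[NX][NY][NZ];      /* 1 = bone voxel present */

typedef struct { float x, y, z; } Vec3;

/* map each tool sample point to a voxel; occupied voxels are removed
 * and contribute a penalty force opposing the feed direction */
static Vec3 ream_step(const Vec3 *tool_pts, int n, Vec3 feed, float k) {
    Vec3 force = { 0, 0, 0 };
    for (int i = 0; i < n; i++) {
        int ix = (int)tool_pts[i].x, iy = (int)tool_pts[i].y,
            iz = (int)tool_pts[i].z;
        if (ix < 0 || ix >= NX || iy < 0 || iy >= NY || iz < 0 || iz >= NZ)
            continue;
        if (bone[ix][iy][iz]) {
            bone[ix][iy][iz] = 0;           /* model material removal */
            force.x -= k * feed.x;
            force.y -= k * feed.y;
            force.z -= k * feed.z;
        }
    }
    return force;
}

int main(void) {
    memset(bone, 1, sizeof bone);           /* solid toy bone block */
    Vec3 pts[2] = { { 32.2f, 32.7f, 10.1f }, { 33.4f, 32.1f, 10.3f } };
    Vec3 feed = { 0, 0, 1 };
    Vec3 f = ream_step(pts, 2, feed, 0.5f);
    printf("haptic force: (%.2f, %.2f, %.2f)\n", f.x, f.y, f.z);
    return 0;
}
```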

    Computer-Aided Geometry Modeling

    Techniques in computer-aided geometry modeling and their applications are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

    Parallel Algorithms for Constructing Convex Hulls

    For a given set of planar points S, the convex hull of S, CH(S), is defined as the ordered list of points representing the smallest convex polygon that contains all of the points. The convex hull problem, one of the most important problems in computational geometry, has many applications in areas such as computer graphics, simulation, and pattern recognition. There are two strategies used in designing parallel convex hull algorithms. The first is the divide-and-conquer paradigm; its disadvantage is that the recursive merge step is complicated and difficult to implement on current parallel machines. The second strategy is to parallelize sequential convex hull algorithms. Algorithms designed with the second strategy are often iterative and can be implemented more easily on current parallel machines. This research focuses on designing parallel convex hull algorithms using the second strategy, because we intend to facilitate the implementation of the newly designed algorithms on massively parallel machines. We first design a sequential algorithm for constructing the convex hull of a simple polygon, which is a special case of a set of planar points. This optimal algorithm is extended to handle a set of planar points without increasing the time complexity. Next, the sequential algorithm is adapted to linear-array and two- or higher-dimensional mesh-array architectures. Algorithms for the case where the number of points is greater than the number of processors are also addressed. Each of the algorithms developed is optimal. To compare the performance of the algorithms with previous algorithms, a system called the Parallel Convex Hull Simulation System was developed. The results of the analysis indicate that the new algorithms exhibit better performance than previous algorithms.
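    To make the problem statement concrete, the sketch below gives a standard sequential baseline, Andrew's monotone chain, which builds the lower and upper hulls in one pass each over the x-sorted points. It is a textbook algorithm shown for illustration only, not one of the dissertation's parallel algorithms; it runs in O(n log n) time, dominated by the sort.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y; } Pt;

static int cmp(const void *a, const void *b) {
    const Pt *p = a, *q = b;
    if (p->x != q->x) return p->x < q->x ? -1 : 1;
    return (p->y > q->y) - (p->y < q->y);
}

/* twice the signed area of triangle (o, a, b); > 0 means a left turn */
static double cross(Pt o, Pt a, Pt b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

/* writes the hull to h (capacity >= 2n); returns the vertex count */
static int convex_hull(Pt *pts, int n, Pt *h) {
    qsort(pts, n, sizeof(Pt), cmp);
    int k = 0;
    for (int i = 0; i < n; i++) {                  /* lower hull */
        while (k >= 2 && cross(h[k-2], h[k-1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    for (int i = n - 2, t = k + 1; i >= 0; i--) {  /* upper hull */
        while (k >= t && cross(h[k-2], h[k-1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    return k - 1;                                  /* last point = first */
}

int main(void) {
    Pt pts[] = { {0,0}, {1,1}, {2,2}, {2,0}, {0,2}, {1,0.5} };
    Pt hull[12];
    int m = convex_hull(pts, 6, hull);
    for (int i = 0; i < m; i++)
        printf("(%g, %g)\n", hull[i].x, hull[i].y);
    return 0;
}
```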

    JERS-1 SAR and LANDSAT-5 TM image data fusion: An application approach for lithological mapping

    Satellite image data fusion is a set of image processing procedures utilised either to optimise images for visual photointerpretation or for automated thematic classification with a low error rate and high accuracy. Lithological mapping using remote sensing image data relies on the spectral and textural information of the rock units of the area to be mapped. These pieces of information can be derived from Landsat optical TM and JERS-1 SAR images, respectively. Prior to extracting such information (spectral and textural) and fusing it, geometric image co-registration between the TM and the SAR, atmospheric correction of the TM, and SAR despeckling are required. In this thesis, an appropriate atmospheric model is developed and implemented, utilising the dark pixel subtraction method for atmospheric correction. For SAR despeckling, an efficient new method is also developed to test whether the SAR filter used removes the textural information. For image optimisation for visual photointerpretation, a new method of spectral coding of the six bands of the optical TM data is developed. The new spectral coding method is used to produce an efficient colour composite with high separability between the spectral classes, similar to that obtained when all six optical TM bands are used together. This spectrally coded colour composite is used as the spectral component, which is then fused with the textural component, represented by the despeckled JERS-1 SAR, using fusion tools including the colour transform and the PCT. The Grey Level Co-occurrence Matrix (GLCM) technique is used to build the textural data set from the speckle-filtered JERS-1 SAR data, yielding seven textural GLCM measures. For automated thematic mapping, using both the six TM spectral bands and the seven textural GLCM measures, a new classification method has been developed based on the Maximum Likelihood Classifier (MLC). The method, named sequential maximum likelihood classification, works efficiently by comparing the classified textural pixels, the classified spectral pixels, and the classified textural-spectral pixels, and provides a means of utilising both textural and spectral information for automated lithological mapping.
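    The GLCM step lends itself to a short sketch. The code below (an illustrative assumption, not the thesis's implementation) builds the co-occurrence matrix of a toy image for a single pixel offset, normalizes it, and derives one of the usual texture measures, contrast; the thesis computes seven such GLCM measures from the despeckled SAR data. Grey levels are assumed already quantized to NG bins.

```c
#include <stdio.h>

#define W  4
#define H  4
#define NG 4                                   /* quantized grey levels */

int main(void) {
    unsigned char img[H][W] = {                /* toy quantized image */
        {0,0,1,1}, {0,0,1,1}, {0,2,2,2}, {2,2,3,3}
    };
    int dx = 1, dy = 0;                        /* horizontal neighbour */
    double glcm[NG][NG] = {{0}};
    int pairs = 0;

    /* count co-occurrences of grey-level pairs at offset (dx, dy) */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < W && ny >= 0 && ny < H) {
                glcm[img[y][x]][img[ny][nx]] += 1.0;
                pairs++;
            }
        }

    /* contrast = sum over (i,j) of p(i,j) * (i - j)^2 */
    double contrast = 0.0;
    for (int i = 0; i < NG; i++)
        for (int j = 0; j < NG; j++)
            contrast += (glcm[i][j] / pairs) * (i - j) * (i - j);

    printf("GLCM contrast for offset (%d,%d): %f\n", dx, dy, contrast);
    return 0;
}
```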