
    Data distributed, parallel algorithm for ray-traced volume rendering

    Journal article. This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5 and networked workstations. The algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolumes concurrently; no communication between processing units is needed during this local ray-tracing phase. Each processing unit generates a subimage, and the final image is obtained by compositing the subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of the rendering algorithm and compositing method.
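
    A minimal sketch of the compositing step described above, assuming premultiplied-alpha RGBA subimages and a hypothetical depth_order listing the processing nodes from nearest to farthest along the view direction (the order the paper notes can be determined a priori from the data decomposition):

    import numpy as np

    def composite_over(front, back):
        # "Over" operator for premultiplied-alpha RGBA images:
        # the front image is blended over the back image.
        a = front[..., 3:4]
        return front + (1.0 - a) * back

    def final_image(subimages, depth_order):
        # Composite the per-node subimages front to back. The visibility
        # order is known a priori, so no sorting is needed at this stage.
        image = np.zeros_like(subimages[depth_order[0]])
        for node in depth_order:
            image = composite_over(image, subimages[node])
        return image

    The names composite_over and final_image are illustrative assumptions; the paper's CM-5 implementation performs this compositing in parallel rather than in a single sequential loop.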

    Divide and Conquer G-Buffer Ray Tracing

    A Randomized Incremental Algorithm for the Hausdorff Voronoi Diagram of Non-crossing Clusters

    In the Hausdorff Voronoi diagram of a family of clusters of points in the plane, the distance between a point t and a cluster P is measured as the maximum distance between t and any point in P, and the diagram is defined in a nearest-neighbor sense for the input clusters. In this paper we consider non-crossing clusters in the plane, for which the combinatorial complexity of the Hausdorff Voronoi diagram is linear in the total number of points, n, on the convex hulls of all clusters. We present a randomized incremental construction, based on point location, that computes this diagram in expected O(n log^2 n) time and expected O(n) space. Our techniques efficiently handle non-standard characteristics of generalized Voronoi diagrams, such as sites of non-constant complexity, sites that are not enclosed in their Voronoi regions, and empty Voronoi regions. The diagram finds direct applications in VLSI computer-aided design. Comment: arXiv admin note: substantial text overlap with arXiv:1306.583
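
    For intuition, a brute-force sketch of the distance underlying this diagram: the distance from a point t to a cluster P is the maximum over the points of P, and each location belongs to the region of the cluster minimizing that maximum. The function names are hypothetical, and the paper's randomized incremental construction exists to avoid this linear scan over clusters:

    import math

    def hausdorff_dist(t, cluster):
        # Distance from point t to cluster P: the MAXIMUM Euclidean
        # distance from t to any point of P (farthest-point distance).
        return max(math.dist(t, p) for p in cluster)

    def nearest_cluster(t, clusters):
        # The Hausdorff Voronoi region owning t belongs to the cluster
        # that minimizes this maximum distance (nearest-neighbor sense).
        return min(range(len(clusters)),
                   key=lambda i: hausdorff_dist(t, clusters[i]))

    # Example: t lies in the region of the nearby, tight cluster.
    clusters = [[(0, 0), (1, 0)], [(5, 5), (9, 9)]]
    print(nearest_cluster((0.5, 0.2), clusters))  # -> 0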

    EasyFJP: Providing Hybrid Parallelism as a Concern for Divide and Conquer Java Applications

    Because of the increasing availability of multi-core machines, clusters, Grids, and combinations of these, there is now plenty of computational power, but today's programmers are not fully prepared to exploit parallelism. In particular, Java has helped in handling the heterogeneity of such environments. However, there is a lot of ground to cover regarding facilities to easily and elegantly parallelize applications. One path to this end seems to be the synthesis of semi-automatic parallelism and Parallelism as a Concern (PaaC). The former allows users to be mostly unaware of parallel exploitation problems and at the same time to manually optimize parallelized applications whenever necessary, while the latter allows applications to be separated from parallel-related code. In this paper, we present EasyFJP, an approach that implicitly exploits parallelism in Java applications based on the fork-join synchronization pattern, a simple but effective abstraction for creating and coordinating parallel tasks. In addition, EasyFJP lets users explicitly optimize applications through policies, or user-provided rules to dynamically regulate task granularity. Finally, EasyFJP relies on PaaC by means of source code generation techniques to wire applications and parallel-specific code together. Experiments with real-world applications on an emulated Grid and a cluster show that EasyFJP delivers competitive performance compared with state-of-the-art Java parallel programming tools.
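
    A minimal Python sketch of the fork-join pattern with a granularity policy, in the spirit of the user-provided rules described above. EasyFJP itself targets Java and generates this wiring from source code; depth_policy and fj_sum are illustrative assumptions, and Python's GIL limits the real speedup of a thread-based example:

    from concurrent.futures import ThreadPoolExecutor

    def depth_policy(depth, size, max_depth=3, min_size=10_000):
        # User-provided policy: fork only while the recursion is shallow
        # and the subproblem is big enough to amortize task overhead.
        return depth < max_depth and size > min_size

    def fj_sum(data, pool, depth=0):
        # Divide-and-conquer sum following the fork-join pattern.
        if not depth_policy(depth, len(data)):
            return sum(data)                                 # solve sequentially
        mid = len(data) // 2
        left = pool.submit(fj_sum, data[:mid], pool, depth + 1)  # fork
        right = fj_sum(data[mid:], pool, depth + 1)              # recurse inline
        return left.result() + right                             # join

    with ThreadPoolExecutor() as pool:
        print(fj_sum(list(range(100_000)), pool))  # 4999950000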

    Searching edges in the overlap of two plane graphs

    Consider a pair of plane straight-line graphs whose edges are colored red and blue, respectively, and let n be the total complexity of both graphs. We present an O(n log n)-time, O(n)-space technique to preprocess such a pair of graphs that enables efficient searches among the red-blue intersections along edges of one of the graphs. Our technique has a number of applications to geometric problems, including: (1) a solution to the batched red-blue search problem [Dehne et al. 2006] in O(n log n) queries to the oracle; (2) an algorithm to compute the maximum vertical distance between a pair of 3D polyhedral terrains, one of which is convex, in O(n log n) time, where n is the total complexity of both terrains; (3) an algorithm to construct the Hausdorff Voronoi diagram of a family of point clusters in the plane in O((n+m) log^3 n) time and O(n+m) space, where n is the total number of points in all clusters and m is the number of crossings between all clusters; (4) an algorithm to construct the farthest-color Voronoi diagram of the corners of n axis-aligned rectangles in O(n log^2 n) time; (5) an algorithm to solve the stabbing circle problem for n parallel line segments in the plane in optimal O(n log n) time. All these results are new or improve on the best known algorithms. Comment: 22 pages, 6 figures
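
    To make the queries concrete, a quadratic brute-force baseline for the red-blue intersections, assuming proper crossings of straight-line segments; the paper's O(n log n) preprocessing exists precisely to avoid this kind of all-pairs scan:

    def ccw(a, b, c):
        # Sign of the cross product: orientation of the triple (a, b, c).
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def segments_cross(p, q):
        # True if segments p and q properly intersect (touching endpoints
        # and collinear overlaps are deliberately ignored in this sketch).
        d1, d2 = ccw(*p, q[0]), ccw(*p, q[1])
        d3, d4 = ccw(*q, p[0]), ccw(*q, p[1])
        return d1*d2 < 0 and d3*d4 < 0

    def red_blue_intersections(red, blue):
        # Quadratic baseline: report every red edge crossing a blue edge.
        return [(i, j) for i, r in enumerate(red)
                       for j, b in enumerate(blue) if segments_cross(r, b)]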

    Volume visualization of time-varying data using parallel, multiresolution and adaptive-resolution techniques

    This paper presents a parallel rendering approach that allows high-quality visualization of large time-varying volume datasets. Multiresolution and adaptive-resolution techniques are also incorporated to improve the efficiency of the rendering. Three basic steps are needed to implement this kind of application. First, we divide the task through decomposition of the data; this decomposition can be temporal, spatial, or a mix of both. After the data has been divided, each portion is rendered by a separate processor to create sub-images or frames. Finally, these sub-images or frames are assembled into a final image or animation. After developing this application, several experiments were performed to show that this approach indeed saves time when a reasonable number of processors is used. We also conclude that the optimal number of processors depends on the size of the dataset used.
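
    A minimal sketch of the temporal-decomposition path described above, with a hypothetical render_frame standing in for the actual renderer: each process renders a disjoint set of time steps, and the frames are assembled in time order. Spatial decomposition would instead split the volume and composite subimages, as in the CM-5 entry above:

    from multiprocessing import Pool

    def render_frame(t):
        # Placeholder for rendering one time step of the volume
        # (hypothetical; stands in for the actual ray caster).
        return f"frame-{t:04d}"

    def render_animation(num_steps, workers=4):
        # Temporal decomposition: distribute time steps across processes;
        # each renders its frames independently, and pool.map returns the
        # results in time order, ready to assemble into the animation.
        with Pool(workers) as pool:
            return pool.map(render_frame, range(num_steps))

    if __name__ == "__main__":
        print(render_animation(8)[:3])  # ['frame-0000', 'frame-0001', 'frame-0002']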

    Scalable 3D Surface Reconstruction by Local Stochastic Fusion of Disparity Maps

    Digital three-dimensional (3D) models are of significant interest to many application fields, such as medicine, engineering, simulation, and entertainment. Manual creation of 3D models is extremely time-consuming, and data acquisition, e.g., through laser sensors, is expensive. In contrast, images captured by cameras are cheap to acquire and highly available. Significant progress in the field of computer vision already allows for automatic 3D reconstruction from images. Nevertheless, many problems still exist, particularly for big sets of large images. In addition to the complex formulation necessary to solve an ill-posed problem, one has to manage extremely large amounts of data. This thesis targets 3D surface reconstruction from image sets, especially for large-scale but also for high-accuracy applications. To this end, a processing chain for dense, scalable 3D surface reconstruction from large image sets is defined, consisting of image registration, disparity estimation, disparity map fusion, and triangulation of point clouds. The main focus of this thesis lies on the fusion and filtering of disparity maps, obtained by Semi-Global Matching, to create accurate 3D point clouds. For unlimited scalability, a Divide and Conquer method is presented that allows for parallel processing of subspaces of the 3D reconstruction space. The method for fusing disparity maps employs local optimization of spatial data and thereby avoids complex fusion strategies when merging subspaces. Although the focus is on scalable reconstruction, a high surface quality is obtained by several extensions to state-of-the-art local optimization methods. To this end, the seminal local volumetric optimization method by Curless and Levoy (1996) is interpreted from a probabilistic perspective and extended through Bayesian fusion of spatial measurements with Gaussian uncertainty. In addition to producing an optimal surface, this probabilistic perspective allows for the estimation of surface probabilities, which are used for filtering outliers in 3D space by means of geometric consistency checks. A further improvement of the quality is obtained from the analysis of the disparity uncertainty. To this end, Total Variation (TV)-based feature classes are defined that are highly correlated with the disparity uncertainty; the correlation function is learned from ground-truth data by means of an Expectation Maximization (EM) approach. Because a statistically estimated disparity error is considered within a probabilistic framework for the fusion of spatial data, this can be regarded as a stochastic fusion of disparity maps. In addition, the influence of image registration and polygonization on volumetric fusion is analyzed and used to extend the method. Finally, a multi-resolution strategy is presented that allows for the generation of surfaces from spatial data of largely varying quality; it extends state-of-the-art methods by considering the spatial uncertainty of 3D points from stereo data. The evaluation of several well-known and novel datasets demonstrates the potential of the scalable stochastic fusion method. The strengths and weaknesses of the method are discussed, and directions for future research are given.
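
    A minimal sketch of the probabilistic reading described above, under the assumption of Gaussian measurement noise: the cumulative weighted-average update of Curless and Levoy becomes a Bayesian, inverse-variance-weighted fusion of signed-distance measurements per voxel. The name fuse_measurement and the symbols D, W, d, sigma are illustrative, not the thesis's notation:

    def fuse_measurement(D, W, d, sigma):
        # One Bayesian fusion step per voxel: the running signed distance D
        # with accumulated weight W is combined with a new measurement d
        # whose Gaussian std. dev. is sigma, using inverse-variance weights.
        w = 1.0 / sigma**2            # precision of the new measurement
        D_new = (W * D + w * d) / (W + w)
        return D_new, W + w

    # Two noisy signed-distance measurements of the same voxel:
    D, W = 0.0, 0.0
    D, W = fuse_measurement(D, W, d=0.10, sigma=0.02)
    D, W = fuse_measurement(D, W, d=0.06, sigma=0.04)
    print(D)  # 0.092: the more certain measurement dominates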

    Analysing the Performance of Divide-and-Conquer Algorithms on Multicore Processors

    Multicore systems are widely gaining popularity because of their significant availability and performance increase over single-core systems. Multicore systems consume less power and generate less heat than multiple single-core systems, and the compiler support provided by different vendors makes multicore programming one of the main areas of research. Multicore programming utilises the power of multiple cores to parallelise a task. Divide-and-conquer algorithms are natural candidates for multicore programming because they divide a problem into sub-problems that can be distributed among the different cores and solved in parallel. A wide range of divide-and-conquer algorithms has been parallelised. In this paper, we take two widely used divide-and-conquer algorithms, quicksort and convex hull, and implement them in parallel to analyse their performance gain compared with the sequential versions. The parallel implementations distribute the load onto multiple cores, work on the loads in parallel, and finally merge the individual results of each core. We also propose a scheme for efficient merging of the parallel-sorted sub-arrays in quicksort, based on the mean and standard deviation of the data. The OpenMP programming model is used for the implementation, and the processor architecture used for analysing the behaviour of the algorithms is a shared-memory processor.
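
    One plausible reading of the mean-and-standard-deviation merging scheme, sketched in Python rather than the paper's OpenMP/C setting: cut the value range at mean ± standard deviation so that independently sorted buckets concatenate into a sorted whole, with no pairwise merge (range_partition and parallel_sort are illustrative assumptions, not the paper's code):

    from multiprocessing import Pool
    from statistics import mean, stdev

    def range_partition(data):
        # Split the data into value ranges cut at mean - std, mean, and
        # mean + std, so independently sorted buckets concatenate into a
        # sorted whole and no explicit merge step is needed.
        m, s = mean(data), stdev(data)
        cuts = [m - s, m, m + s]
        buckets = [[] for _ in range(len(cuts) + 1)]
        for x in data:
            i = sum(x > c for c in cuts)      # index of x's value range
            buckets[i].append(x)
        return buckets

    def parallel_sort(data, workers=4):
        # Sort each bucket on its own core; concatenation replaces an
        # explicit merge because the buckets cover disjoint value ranges.
        with Pool(workers) as pool:
            return [x for bucket in pool.map(sorted, range_partition(data))
                      for x in bucket]

    if __name__ == "__main__":
        import random
        data = [random.random() for _ in range(100_000)]
        assert parallel_sort(data) == sorted(data)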