
    EasyFJP: Providing Hybrid Parallelism as a Concern for Divide and Conquer Java Applications

    Get PDF
    Because of the increasing availability of multi-core machines, clusters, Grids, and combinations of these, there is now plenty of computational power, but today's programmers are not fully prepared to exploit parallelism. In particular, Java has helped in handling the heterogeneity of such environments. However, there is a lot of ground to cover regarding facilities for easily and elegantly parallelizing applications. One path to this end seems to be the synthesis of semi-automatic parallelism and Parallelism as a Concern (PaaC). The former allows users to be mostly unaware of parallel exploitation problems and at the same time to manually optimize parallelized applications whenever necessary, while the latter allows applications to be separated from parallelism-related code. In this paper, we present EasyFJP, an approach that implicitly exploits parallelism in Java applications based on the fork-join synchronization pattern, a simple but effective abstraction for creating and coordinating parallel tasks. In addition, EasyFJP lets users explicitly optimize applications through policies, or user-provided rules to dynamically regulate task granularity. Finally, EasyFJP relies on PaaC by means of source code generation techniques to wire applications and parallel-specific code together. Experiments with real-world applications on an emulated Grid and a cluster evidence that EasyFJP delivers competitive performance compared to state-of-the-art Java parallel programming tools.
    Fil: Mateos Diaz, Cristian Maximiliano. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico - CONICET - Tandil. Instituto Superior de Ingenieria del Software; Argentina.
    Fil: Zunino Suarez, Alejandro Octavio. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico - CONICET - Tandil. Instituto Superior de Ingenieria del Software; Argentina.
    Fil: Hirsch Jofré, Matías Eberardo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico - CONICET - Tandil. Instituto Superior de Ingenieria del Software; Argentina.
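
    The abstract above does not include code, but a minimal sketch may help make the fork-join pattern and the notion of a granularity policy concrete. The sketch below uses the standard java.util.concurrent fork/join framework; the GranularityPolicy interface and SumTask are illustrative names and are not EasyFJP's actual API.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Minimal fork-join divide-and-conquer sketch with a granularity "policy":
// below the threshold the task is solved sequentially, above it the work is split.
// The policy interface and SumTask are illustrative, not part of EasyFJP.
public class ForkJoinSketch {
    interface GranularityPolicy { boolean runSequentially(int size); }

    static class SumTask extends RecursiveTask<Long> {
        private final long[] data;
        private final int lo, hi;
        private final GranularityPolicy policy;

        SumTask(long[] data, int lo, int hi, GranularityPolicy policy) {
            this.data = data; this.lo = lo; this.hi = hi; this.policy = policy;
        }

        @Override
        protected Long compute() {
            if (policy.runSequentially(hi - lo)) {           // policy decides task granularity
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i];
                return sum;
            }
            int mid = (lo + hi) >>> 1;
            SumTask left = new SumTask(data, lo, mid, policy);
            SumTask right = new SumTask(data, mid, hi, policy);
            left.fork();                                     // run left half asynchronously
            long rightResult = right.compute();              // compute right half in this thread
            return left.join() + rightResult;                // join results (fork-join synchronization)
        }
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        GranularityPolicy policy = size -> size < 10_000;    // example threshold policy
        long total = ForkJoinPool.commonPool().invoke(
                new SumTask(data, 0, data.length, policy));
        System.out.println("sum = " + total);
    }
}
```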

    Occlusion Modeling for Coherent Echo Data Simulation: A Comparison Between Ray-Tracing and Convex-Hull Methods

    Get PDF
    The ability to simulate realistic coherent datasets for synthetic aperture imaging systems is crucial for the design, development and evaluation of the sensors and their signal processing pipelines, machine learning algorithms and autonomy systems. In the case of synthetic aperture sonar (SAS), collecting experimental data is expensive and it is rarely possible to obtain ground truth of the sensor's path, the speed of sound in the medium, and the geometry of the imaged scene. Simulating sonar echo data allows signal processing algorithms to be tested with known ground truth, enabling rapid and inexpensive development and evaluation of signal processing algorithms. The de facto standard for simulating conventional high-frequency (i.e., > 100 kHz) SAS echo data from an arbitrary sensor, path and scene is to use a point-based or facet-based diffraction model. A crucial part of this process is acoustic occlusion modeling. This article describes a SAS simulation pipeline and compares implementations of two occlusion methods: ray tracing, and a newer approximate method based on finding the convex hull of a transformed point cloud. The full capability of the simulation pipeline is demonstrated using an example scene based on a high-resolution 3D model of the SS Thistlegorm shipwreck, which was obtained using photogrammetry. The 3D model spans a volume of 220 × 130 × 25 m and comprises over 30 million facets that are decomposed into a cloud of almost 1 billion points. The convex-hull occlusion model was found to produce simulated SAS imagery that is qualitatively indistinguishable from the ray-tracing approach and quantitatively very similar, demonstrating that this alternative method has the potential to improve speed while retaining high simulation fidelity. The convex-hull approach was found to be up to 4 times faster in a fair speed comparison with serial and parallel CPU implementations of both methods, with the largest performance increase for wide-beam systems. The fastest occlusion modeling algorithm was GPU-accelerated ray tracing over the majority of scene scales tested, which was up to 2 times faster than the parallel CPU convex-hull implementation. Although GPU implementations of convex hull algorithms are not currently readily available, future development of GPU-accelerated convex-hull finding could make the new approach much more viable. In the meantime, however, ray tracing is still preferable, since it has higher accuracy and can leverage existing implementations for high performance computing architectures for better performance.
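
    The abstract only states that the approximate occlusion method finds the convex hull of a transformed point cloud; it does not give the transform. The 2D sketch below assumes a hidden-point-removal style spherical flip, which is one published way of realising this idea, and is offered purely as an illustration of how hull membership can stand in for visibility.

```java
import java.util.*;

// 2D sketch of convex-hull-based occlusion ("hidden point removal" style):
// points are spherically flipped about the sensor position, and points that land
// on the convex hull of the flipped set (plus the sensor) are treated as visible.
// The specific transform is an assumption; the article's abstract only states that
// the convex hull of a transformed point cloud is used.
public class HullOcclusionSketch {

    static double cross(double[] o, double[] a, double[] b) {
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0]);
    }

    // Andrew's monotone chain convex hull; returns indices into pts.
    static List<Integer> convexHull(double[][] pts) {
        Integer[] idx = new Integer[pts.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, (i, j) -> pts[i][0] != pts[j][0]
                ? Double.compare(pts[i][0], pts[j][0])
                : Double.compare(pts[i][1], pts[j][1]));
        int n = pts.length, k = 0;
        int[] hull = new int[2 * n];
        for (int ii = 0; ii < n; ii++) {                       // lower hull
            int i = idx[ii];
            while (k >= 2 && cross(pts[hull[k-2]], pts[hull[k-1]], pts[i]) <= 0) k--;
            hull[k++] = i;
        }
        for (int ii = n - 2, t = k + 1; ii >= 0; ii--) {       // upper hull
            int i = idx[ii];
            while (k >= t && cross(pts[hull[k-2]], pts[hull[k-1]], pts[i]) <= 0) k--;
            hull[k++] = i;
        }
        List<Integer> out = new ArrayList<>();
        for (int j = 0; j < k - 1; j++) out.add(hull[j]);
        return out;
    }

    // Mark points visible from a sensor at the origin using spherical flipping + convex hull.
    static boolean[] visibleFromOrigin(double[][] cloud, double radius) {
        double[][] flipped = new double[cloud.length + 1][2];
        for (int i = 0; i < cloud.length; i++) {
            double norm = Math.hypot(cloud[i][0], cloud[i][1]);     // assumes no point at the origin
            double scale = 1.0 + 2.0 * (radius - norm) / norm;      // p' = p + 2(R - |p|) p/|p|
            flipped[i][0] = cloud[i][0] * scale;
            flipped[i][1] = cloud[i][1] * scale;
        }
        flipped[cloud.length] = new double[]{0.0, 0.0};             // include the sensor itself
        boolean[] visible = new boolean[cloud.length];
        for (int i : convexHull(flipped)) {
            if (i < cloud.length) visible[i] = true;
        }
        return visible;
    }

    public static void main(String[] args) {
        // A near point directly in front of a far point along the same bearing:
        double[][] cloud = { {1.0, 0.0}, {2.0, 0.0}, {0.5, 1.0} };
        boolean[] vis = visibleFromOrigin(cloud, 10.0);
        System.out.println(Arrays.toString(vis));   // [true, false, true]: the far point is occluded
    }
}
```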

    Analysing the Performance of Divide-and-Conquer Algorithms on Multicore Processors

    Get PDF
    Multicore systems are widely gaining popularity because of the significant availability and performance increase over single core systems. Multicore systems have lower power consumption and heat generation than multiple single core systems. The compiler support provided by different vendors also makes multicore programming one of the main areas of research. Multicore programming utilises the power of multiple cores to parallelise a task. Divide and conquer algorithms are a widely used algorithmic paradigm for multicore programming: a divide and conquer algorithm divides a problem into sub-problems, which can be solved by distributing them among the different cores and solving them in parallel. A wide range of divide and conquer algorithms has been parallelised. In this paper, we have taken two widely used divide and conquer algorithms, quick sort and convex hull, and implemented them in parallel to analyse their performance gain compared to the sequential versions of the algorithms. The parallel implementations distribute the load onto multiple cores, work on the loads in parallel, and finally merge the individual results of each core. We have also proposed a scheme for efficient merging of the parallel sorted sub-arrays in quick sort, using the mean and standard deviation of the data to merge the sorted sub-arrays efficiently. The OpenMP programming model has been used for the implementation of the programs. The processor architecture used for analysing the behaviour of the algorithms is a shared memory based processor.
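
    A rough sketch of the general pattern described above (split the input into per-core chunks, sort the chunks in parallel, merge the sorted sub-arrays) is given below in Java for illustration; the paper itself uses OpenMP, and its mean and standard deviation based merging scheme is not reproduced here, a plain pairwise merge is used instead.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the "sort chunks in parallel, then merge" pattern described above.
// The paper uses OpenMP and a mean/standard-deviation based merging scheme;
// this Java version only shows the generic divide, parallel-sort, merge structure.
public class ParallelChunkSort {

    static int[] merge(int[] a, int[] b) {                     // standard two-way merge
        int[] out = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length)
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        while (i < a.length) out[k++] = a[i++];
        while (j < b.length) out[k++] = b[j++];
        return out;
    }

    static int[] parallelSort(int[] data, int cores) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = (data.length + cores - 1) / cores;
        List<Future<int[]>> futures = new ArrayList<>();
        for (int c = 0; c < cores; c++) {
            int lo = c * chunk, hi = Math.min(data.length, lo + chunk);
            if (lo >= hi) break;
            int[] part = Arrays.copyOfRange(data, lo, hi);
            futures.add(pool.submit(() -> { Arrays.sort(part); return part; })); // sort chunk on one core
        }
        int[] result = new int[0];
        for (Future<int[]> f : futures) result = merge(result, f.get());          // merge sorted sub-arrays
        pool.shutdown();
        return result;
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
        return true;
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[100_000];
        Random rnd = new Random(42);
        for (int i = 0; i < data.length; i++) data[i] = rnd.nextInt();
        int[] sorted = parallelSort(data, Runtime.getRuntime().availableProcessors());
        System.out.println("sorted: " + isSorted(sorted));
    }
}
```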

    Scalable 3D Surface Reconstruction by Local Stochastic Fusion of Disparity Maps

    Get PDF
    Digital three-dimensional (3D) models are of significant interest to many application fields, such as medicine, engineering, simulation, and entertainment. Manual creation of 3D models is extremely time-consuming and data acquisition, e.g., through laser sensors, is expensive. In contrast, images captured by cameras mean cheap acquisition and high availability. Significant progress in the field of computer vision already allows for automatic 3D reconstruction using images. Nevertheless, many problems still exist, particularly for big sets of large images. In addition to the complex formulation necessary to solve an ill-posed problem, one has to manage extremely large amounts of data. This thesis targets 3D surface reconstruction using image sets, especially for large-scale, but also for high-accuracy applications. To this end, a processing chain for dense scalable 3D surface reconstruction using large image sets is defined, consisting of image registration, disparity estimation, disparity map fusion, and triangulation of point clouds. The main focus of this thesis lies on the fusion and filtering of disparity maps, obtained by Semi-Global Matching, to create accurate 3D point clouds. For unlimited scalability, a Divide and Conquer method is presented that allows for parallel processing of subspaces of the 3D reconstruction space. The method for fusing disparity maps employs local optimization of spatial data. By this means, it avoids complex fusion strategies when merging subspaces. Although the focus is on scalable reconstruction, a high surface quality is obtained by several extensions to state-of-the-art local optimization methods. To this end, the seminal local volumetric optimization method by Curless and Levoy (1996) is interpreted from a probabilistic perspective. From this perspective, the method is extended through Bayesian fusion of spatial measurements with Gaussian uncertainty. In addition to the generation of an optimal surface, this probabilistic perspective allows for the estimation of surface probabilities. They are used for filtering outliers in 3D space by means of geometric consistency checks. A further improvement of the quality is obtained based on the analysis of the disparity uncertainty. To this end, Total Variation (TV)-based feature classes are defined that are highly correlated with the disparity uncertainty. The correlation function is learned from ground-truth data by means of an Expectation Maximization (EM) approach. Because a statistically estimated disparity error is considered within a probabilistic framework for the fusion of spatial data, this can be regarded as a stochastic fusion of disparity maps. In addition, the influence of image registration and polygonization on volumetric fusion is analyzed and used to extend the method. Finally, a multi-resolution strategy is presented that allows for the generation of surfaces from spatial data of largely varying quality. This method extends state-of-the-art methods by considering the spatial uncertainty of 3D points from stereo data. The evaluation of several well-known and novel datasets demonstrates the potential of the scalable stochastic fusion method. The strengths and weaknesses of the method are discussed and directions for future research are given.
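
    The probabilistic reading of Curless and Levoy's volumetric fusion mentioned above amounts to inverse-variance weighting of measurements with Gaussian uncertainty: fusing means m1, m2 with variances s1^2, s2^2 gives the mean (m1/s1^2 + m2/s2^2) / (1/s1^2 + 1/s2^2) and the variance 1 / (1/s1^2 + 1/s2^2). A minimal per-voxel sketch of this running fusion is shown below; the class and field names are made up for illustration.

```java
// Running inverse-variance (Gaussian/Bayesian) fusion of signed-distance samples per voxel,
// the probabilistic view of Curless-Levoy style volumetric fusion sketched in the abstract.
// Field and method names are illustrative only.
public class GaussianVoxelFusion {

    static final class Voxel {
        double mean = 0.0;        // current fused signed distance
        double weight = 0.0;      // accumulated inverse variance (1 / sigma^2)

        // Fuse one new measurement with standard deviation sigma.
        void fuse(double measurement, double sigma) {
            double w = 1.0 / (sigma * sigma);              // inverse variance of the new sample
            mean = (weight * mean + w * measurement) / (weight + w);
            weight += w;                                   // fused variance is 1 / weight
        }

        double variance() { return weight > 0 ? 1.0 / weight : Double.POSITIVE_INFINITY; }
    }

    public static void main(String[] args) {
        Voxel v = new Voxel();
        v.fuse(0.10, 0.05);   // accurate sample near the surface
        v.fuse(0.30, 0.20);   // noisier sample contributes less
        System.out.printf("fused mean = %.4f, fused sigma = %.4f%n",
                v.mean, Math.sqrt(v.variance()));
    }
}
```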

    æœšă‚’ç”šă„ăŸæ§‹é€ ćŒ–äžŠćˆ—ăƒ—ăƒ­ă‚°ăƒ©ăƒŸăƒłă‚°

    Get PDF
    High-level abstractions for parallel programming are still immature. Computations on complicated data structures such as pointer structures are considered irregular algorithms. General graph structures, which irregular algorithms typically deal with, are difficult to divide and conquer. Because the divide-and-conquer paradigm is essential for load balancing in parallel algorithms and a key to parallel programming, general graphs are correspondingly difficult to handle. Trees, however, lead to divide-and-conquer computations by definition and are sufficiently general and powerful as a programming tool. We therefore deal with abstractions of tree-based computations. Our study started from Matsuzaki's work on tree skeletons. We have improved the usability of tree skeletons by enriching their implementation aspect. Specifically, we have dealt with two issues. First, we implemented a loose coupling between skeletons and data structures and developed a flexible tree skeleton library. Second, we implemented a parallelizer that transforms sequential recursive functions in C into parallel programs that use tree skeletons implicitly. This parallelizer hides the complicated API of tree skeletons and enables programmers to use tree skeletons with no burden. Unfortunately, however, the practicality of tree skeletons has not improved. On the basis of observations from the practice of tree skeletons, we deal with two application domains: program analysis and neighborhood computation. In the domain of program analysis, compilers treat input programs as control-flow graphs (CFGs) and perform analysis on CFGs. Program analysis is therefore difficult to divide and conquer. To resolve this problem, we have developed divide-and-conquer methods for program analysis in a syntax-directed manner on the basis of Rosen's high-level approach. Specifically, we have dealt with data-flow analysis based on Tarjan's formalization and value-graph construction based on a functional formalization. In the domain of neighborhood computations, a primary issue is locality. A naive parallel neighborhood computation without locality enhancement causes many cache misses. The divide-and-conquer paradigm is known to be useful for locality enhancement as well. We therefore have applied algebraic formalizations and a tree-segmenting technique derived from tree skeletons to the locality enhancement of neighborhood computations.
    é›»æ°—é€šäżĄć€§ć­Š 201
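
    To make the tree-skeleton idea concrete, here is a minimal generic sketch of a tree reduce skeleton: leaves are mapped and internal nodes combine their own value with the reduced results of the subtrees, which is exactly the divide-and-conquer structure trees provide by definition. This is not Matsuzaki's API or the library described above; the two independent recursive calls are where fork/join parallelism would be applied.

```java
import java.util.function.Function;

// A minimal "tree reduce" skeleton: leaves are mapped with `leaf`, internal nodes
// combine their own value with the reduced results of the two subtrees.
// This is only an illustration of the skeleton idea, not the library's actual interface.
public class TreeSkeletonSketch {

    static final class Node<A> {
        final A value;
        final Node<A> left, right;          // both null for a leaf
        Node(A value, Node<A> left, Node<A> right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    interface TriFunction<A, B, C, R> { R apply(A a, B b, C c); }

    // Reduce skeleton: divide into subtrees, conquer recursively, combine the results.
    static <A, B> B reduce(Node<A> t, Function<A, B> leaf, TriFunction<A, B, B, B> node) {
        if (t.left == null && t.right == null) return leaf.apply(t.value);
        B l = reduce(t.left, leaf, node);   // these two calls are independent and
        B r = reduce(t.right, leaf, node);  // could be forked to different cores
        return node.apply(t.value, l, r);
    }

    public static void main(String[] args) {
        Node<Integer> tree = new Node<>(1,
                new Node<>(2, null, null),
                new Node<>(3,
                        new Node<>(4, null, null),
                        new Node<>(5, null, null)));
        int sum = reduce(tree, x -> x, (v, l, r) -> v + l + r);   // sums all node values
        System.out.println("sum = " + sum);                       // prints 15
    }
}
```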

    Evaluating the Performance of Vulkan GLSL Compute Shaders in Real-Time Ray-Traced Audio Propagation Through 3D Virtual Environments

    Get PDF
    Real time ray tracing is a growing area of interest with applications in audio processing. However, real time audio processing comes with strict performance requirements, which parallel computing is often used to meet. As graphics processing units (GPUs) have become more powerful and programmable, general-purpose computing on graphics processing units (GPGPU) has allowed GPUs to become extremely powerful parallel processors, leading them to become more prevalent in the domain of audio processing through platforms such as CUDA. The aim of this research was to investigate the potential of GLSL compute shaders in the domain of real time audio processing, specifically regarding real time ray tracing tasks. To do this, a number of GLSL compute shaders were created, along with a C++ Vulkan application with which to execute them. These shaders facilitate the propagation of audio, using ray tracing, through a virtual environment, and implement 3D space partitioning and ray intersection prediction in order to gauge the effectiveness of these optimisations for this task. Statistically significant results show that the GLSL compute shaders successfully propagated audio through a virtual environment, returning results to the host system in real time, within 30 milliseconds. However, while this capability was shown, highly detailed virtual environments prevented results from being returned in real time, indicating potential for future research and optimisation.
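
    The shaders themselves are not reproduced in the abstract, but the kind of test that 3D space partitioning and ray intersection culling rely on is a cheap ray versus axis-aligned box check (the slab test), sketched below in Java rather than GLSL for brevity. The method name and example values are illustrative, not taken from the study.

```java
// Ray vs axis-aligned bounding box intersection ("slab" test), the kind of cheap
// test used when traversing a 3D space partition before doing exact geometry hits.
// Written in Java for brevity; the shaders described above are written in GLSL.
public class RaySlabTest {

    // Returns true if the ray origin + t*dir (t >= 0) intersects the box [min, max].
    static boolean intersects(double[] origin, double[] dir, double[] min, double[] max) {
        double tNear = 0.0, tFar = Double.POSITIVE_INFINITY;
        for (int axis = 0; axis < 3; axis++) {
            double inv = 1.0 / dir[axis];                 // IEEE infinities handle dir[axis] == 0
            double t0 = (min[axis] - origin[axis]) * inv;
            double t1 = (max[axis] - origin[axis]) * inv;
            if (t0 > t1) { double tmp = t0; t0 = t1; t1 = tmp; }
            tNear = Math.max(tNear, t0);                  // latest entry across the three slabs
            tFar = Math.min(tFar, t1);                    // earliest exit
            if (tNear > tFar) return false;               // the ray misses the box
        }
        return true;
    }

    public static void main(String[] args) {
        double[] box0 = {-1, -1, -1}, box1 = {1, 1, 1};
        System.out.println(intersects(new double[]{0, 0, -5}, new double[]{0, 0, 1}, box0, box1)); // true
        System.out.println(intersects(new double[]{0, 0, -5}, new double[]{0, 1, 0}, box0, box1)); // false
    }
}
```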

    The projector algorithm: a simple parallel algorithm for computing Voronoi diagrams and Delaunay graphs

    Full text link
    The Voronoi diagram is a geometric data structure with numerous applications in various scientific and technological fields. The theory of algorithms for computing 2D Euclidean Voronoi diagrams of point sites is rich and useful, with several different and important algorithms. However, this theory has been quite steady during the last few decades, in the sense that no essentially new algorithms have entered the game. In addition, most of the known algorithms are serial in nature and hence pose inherent difficulties for computing the diagram in parallel. In this paper we present the projector algorithm: a new and simple algorithm which enables the (combinatorial) computation of 2D Voronoi diagrams. The algorithm is significantly different from previous ones, and some of the concepts involved in it are in the spirit of linear programming and optics. Parallel implementation is naturally supported, since each Voronoi cell can be computed independently of the other cells. A new combinatorial structure for representing the cells (and any convex polytope) is described along the way, and the computation of the induced Delaunay graph is obtained almost automatically.
    Comment: This is a major revision; re-organization and better presentation of some parts; correction of several inaccuracies; improvement of some proofs and figures; added references; modification of the title; the paper is long, but more than half of it is composed of proofs and references: it is sufficient to look at pages 5, 7--11 in order to understand the algorithm.
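
    To illustrate the per-cell independence the abstract emphasises, the sketch below assigns each query point to its nearest site with an embarrassingly parallel brute-force loop. This is not the projector algorithm itself, only a demonstration of the kind of parallel structure it enables; the site and query coordinates are made up.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

// Brute-force "discrete Voronoi" labelling: every query point is assigned to its
// nearest site independently, so the loop parallelises trivially.
// This only illustrates the per-cell independence highlighted in the abstract;
// it is not the projector algorithm described in the paper.
public class ParallelVoronoiLabels {

    static int nearestSite(double[] q, double[][] sites) {
        int best = 0;
        double bestD = Double.POSITIVE_INFINITY;
        for (int s = 0; s < sites.length; s++) {
            double dx = q[0] - sites[s][0], dy = q[1] - sites[s][1];
            double d = dx * dx + dy * dy;                  // squared Euclidean distance
            if (d < bestD) { bestD = d; best = s; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] sites = { {0, 0}, {5, 0}, {0, 5} };
        double[][] queries = { {1, 1}, {4, 1}, {1, 4}, {3, 3} };
        int[] labels = IntStream.range(0, queries.length)
                .parallel()                                // each query handled independently
                .map(i -> nearestSite(queries[i], sites))
                .toArray();
        System.out.println(Arrays.toString(labels));       // [0, 1, 2, 1] (the last query ties; the first wins)
    }
}
```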

    LightSpeed: Light and Fast Neural Light Fields on Mobile Devices

    Full text link
    Real-time novel-view image synthesis on mobile devices is prohibitive due to the limited computational power and storage. Volumetric rendering methods, such as NeRF and its derivatives, are not suitable for mobile devices due to the high computational cost of volumetric rendering. On the other hand, recent advances in neural light field representations have shown promising real-time view synthesis results on mobile devices. Neural light field methods learn a direct mapping from a ray representation to the pixel color. The current choice of ray representation is either stratified ray sampling or Plücker coordinates, overlooking the classic light slab (two-plane) representation, the preferred representation for interpolating between light field views. In this work, we find that the light slab is an efficient representation for learning a neural light field. More importantly, it is a lower-dimensional ray representation, enabling us to learn the 4D ray space using feature grids, which are significantly faster to train and render. Although mostly designed for frontal views, we show that the light-slab representation can be further extended to non-frontal scenes using a divide-and-conquer strategy. Our method offers superior rendering quality compared to previous light field methods and achieves a significantly improved trade-off between rendering quality and speed.
    Comment: Project Page: http://lightspeed-r2l.github.io/ . Added camera-ready version.
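
    The light slab (two-plane) parameterisation revisited by the paper encodes a ray by its intersections (u, v) and (s, t) with two parallel planes; a minimal sketch of that mapping is shown below, with arbitrary plane positions that are not the paper's configuration.

```java
// Light slab (two-plane) ray parameterisation: a ray is represented by its
// intersection points (u, v) with the plane z = z0 and (s, t) with the plane z = z1,
// giving the 4D coordinate (u, v, s, t) used to index the light field.
// Plane positions below are arbitrary; they are not the paper's configuration.
public class LightSlabSketch {

    // Returns {u, v, s, t}; assumes the ray is not parallel to the two planes.
    static double[] toLightSlab(double[] origin, double[] dir, double z0, double z1) {
        double t0 = (z0 - origin[2]) / dir[2];            // ray parameter where it hits z = z0
        double t1 = (z1 - origin[2]) / dir[2];            // ray parameter where it hits z = z1
        return new double[] {
            origin[0] + t0 * dir[0], origin[1] + t0 * dir[1],   // (u, v)
            origin[0] + t1 * dir[0], origin[1] + t1 * dir[1]    // (s, t)
        };
    }

    public static void main(String[] args) {
        double[] origin = {0.0, 0.0, -1.0};
        double[] dir = {0.2, 0.1, 1.0};                   // pointing roughly along +z
        double[] uvst = toLightSlab(origin, dir, 0.0, 1.0);
        System.out.printf("u=%.2f v=%.2f s=%.2f t=%.2f%n", uvst[0], uvst[1], uvst[2], uvst[3]);
    }
}
```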

    Open-ended evolution to discover analogue circuits for beyond conventional applications

    Get PDF
    This is the author's accepted manuscript. The final publication is available at Springer via http://dx.doi.org/10.1007/s10710-012-9163-8. Copyright @ Springer 2012.
    Analogue circuits synthesised by means of open-ended evolutionary algorithms often have unconventional designs. However, these circuits are typically highly compact, and the general nature of the evolutionary search methodology allows such designs to be used in many applications. Previous work on the evolutionary design of analogue circuits has focused on circuits that lie well within the analogue application domain. In contrast, our paper considers the evolution of analogue circuits that are usually synthesised in digital logic. We have developed four computational circuits, two voltage distributor circuits and a time interval meter circuit. The approach, despite its simplicity, succeeds at the design tasks owing to the use of substructure reuse and incremental evolution. Our findings expand the range of applications that are considered suitable for evolutionary electronics.