
    Run-time support for parallel object-oriented computing: the NIP lazy task creation technique and the NIP object-based software distributed shared memory

    Advances in hardware technologies combined with decreased costs have started a trend towards massively parallel architectures that utilise commodity components. It is thought unreasonable to expect software developers to manage the high degree of parallelism that is made available by these architectures. This thesis argues that a new programming model is essential for the development of parallel applications and presents a model which embraces the notions of object-orientation and implicit identification of parallelism. The new model allows software engineers to concentrate on development issues, using the object-oriented paradigm, whilst being freed from the burden of explicitly managing parallel activity. To support the programming model, the semantics of an execution model are defined and implemented as part of a run-time support system for object-oriented parallel applications. Details of the novel techniques from the run-time system, in the areas of lazy task creation and object-based distributed shared memory, are presented. The tasklet construct for representing potentially parallel computation is introduced and further developed by this thesis. Three caching techniques that take advantage of memory access patterns exhibited in object-oriented applications are explored. Finally, the performance characteristics of the introduced run-time techniques are analysed through a number of benchmark applications.
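
    To make the lazy-task-creation idea concrete, the following is a minimal sketch in standard C++ only; the NIP runtime, its tasklet API and its actual heuristics are not reproduced here. A potentially parallel computation is wrapped in a tasklet that becomes a real asynchronous task only while spare hardware concurrency appears to be available, and is otherwise evaluated lazily on the thread that demands its result.

```cpp
#include <atomic>
#include <functional>
#include <future>
#include <iostream>
#include <thread>

class Tasklet {
public:
    explicit Tasklet(std::function<long()> work) {
        // Spawn a real task only while hardware threads still look free; otherwise
        // fall back to deferred execution on the thread that asks for the result.
        if (live_.fetch_add(1, std::memory_order_relaxed) < std::thread::hardware_concurrency()) {
            future_ = std::async(std::launch::async, [w = std::move(work)] {
                long r = w();
                live_.fetch_sub(1, std::memory_order_relaxed);
                return r;
            });
        } else {
            live_.fetch_sub(1, std::memory_order_relaxed);
            future_ = std::async(std::launch::deferred, std::move(work));
        }
    }
    long get() { return future_.get(); }    // demand the tasklet's result
private:
    static std::atomic<unsigned> live_;     // asynchronous tasklets currently running
    std::future<long> future_;
};
std::atomic<unsigned> Tasklet::live_{0};

long fib(long n) {                              // toy divide-and-conquer workload
    if (n < 2) return n;
    Tasklet left([n] { return fib(n - 1); });   // potentially parallel branch
    long right = fib(n - 2);                    // evaluated by the current thread
    return left.get() + right;
}

int main() { std::cout << fib(30) << "\n"; }
```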

    Design and integrity of deterministic system architectures.

    Architectures, represented by system construction 'building block' components and their interrelationships, provide the structural form. This thesis addresses the processes, procedures and methods that support system design synthesis and, specifically, the determination of the integrity of candidate architectural structures. Particular emphasis is given to the structural representation of system architectures, their consistency and functional quantification. It is a design imperative that a hierarchically decomposed structure maintains compatibility and consistency between the functional and realisation solutions. Complex systems are normally simplified by the use of hierarchical decomposition so that lower-level components are precisely defined and simpler than higher-level components. To enable such systems to be reconstructed from their components, the hierarchical construction must provide vertical intra-relationship consistency, horizontal interrelationship consistency, and inter-component functional consistency. Firstly, a modified process design model is proposed that incorporates the generic structural representation of system architectures. Secondly, a system architecture design knowledge domain is proposed that enables viewpoint evaluations to be aggregated into a coherent set of domains that are both necessary and sufficient to determine the integrity of system architectures. Thirdly, four methods of structural analysis are proposed to assure the integrity of the architecture. The first determines the structural compatibility between the 'building blocks' that provide the emergent functional properties and those that provide the implementation solution properties. The second determines the compatibility of the functional causality structure with the implementation causality structure. The third method provides a graphical representation of architectural structures. The fourth method uses this graphical form of structural representation to provide a technique for quantitative estimation of the performance of emergent properties for large-scale or complex architectural structures. These methods have been combined into a procedure of formal design: a design process that, if rigorously executed, meets the requirements for reconstructability.
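
    As a rough illustration of one of the consistency notions mentioned above, the sketch below models an architecture as a hierarchy of components and checks a simple form of vertical consistency: every function a composite component claims to provide must be provided by at least one of its children. The data structures and the coverage rule are assumptions made for illustration, not the thesis's formal methods.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Component {
    std::string name;
    std::set<std::string> provides;     // functions allocated to this component
    std::vector<Component> children;    // hierarchical decomposition
};

bool verticallyConsistent(const Component& c, std::vector<std::string>& violations) {
    if (c.children.empty()) return true;              // leaf components are taken as given
    std::set<std::string> childFunctions;
    bool ok = true;
    for (const auto& child : c.children) {
        childFunctions.insert(child.provides.begin(), child.provides.end());
        ok = verticallyConsistent(child, violations) && ok;   // recurse down the hierarchy
    }
    for (const auto& f : c.provides)
        if (!childFunctions.count(f)) {
            violations.push_back(c.name + " claims '" + f + "' but no child provides it");
            ok = false;
        }
    return ok;
}

int main() {
    Component system{"FlightControl", {"attitude-hold", "autoland"},
        {{"Sensors", {"attitude-measurement"}, {}},
         {"Controller", {"attitude-hold"}, {}}}};
    std::vector<std::string> violations;
    verticallyConsistent(system, violations);
    for (const auto& v : violations) std::cout << v << "\n";   // reports the uncovered 'autoland'
}
```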

    Automated parallel application creation and execution tool for clusters

    This research investigated an automated approach to rewriting traditional sequential computer programs into parallel programs for networked computers. A tool was designed and developed for generating parallel programs automatically and for executing these parallel programs on a network of computers. Performance is maximised by utilising all idle resources.
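
    The abstract does not show the tool or its output, but the kind of transformation it automates can be illustrated with a small, assumed example: a sequential accumulation loop and a mechanically parallelised version that partitions the index range across worker threads (threads on one machine stand in here for processes spread over networked computers).

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Original sequential form.
double sumSequential(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0);
}

// Mechanically parallelised form: one contiguous chunk of the index range per worker.
double sumParallel(const std::vector<double>& v, unsigned workers) {
    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;
    const std::size_t chunk = (v.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            std::size_t begin = w * chunk, end = std::min(v.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i) partial[w] += v[i];
        });
    for (auto& t : pool) t.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    std::vector<double> data(1'000'000, 1.0);
    std::cout << sumSequential(data) << " " << sumParallel(data, 4) << "\n";
}
```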

    An object-oriented model for adaptive high-performance computing on the computational GRID

    The dissertation presents a new parallel programming paradigm for developing high-performance computing (HPC) applications on the Grid. We address the question "How to tailor HPC applications to the Grid?", where the heterogeneity and the large scale of resources are the two main issues. We respond to the question at two different levels: the programming tool level and the parallelization concept level. At the programming tool level, the adaptation of applications to the Grid environment takes two forms: either the application components decompose dynamically based on the available resources, or the components ask the infrastructure to select suitable resources automatically by providing descriptive information about their resource requirements. These two forms of adaptation lead to the parallel object model, in which resource requirements are integrated into shareable distributed objects in the form of object descriptions. We develop a tool called ParoC++ that implements the parallel object model. ParoC++ provides a comprehensive object-oriented infrastructure for developing and integrating HPC applications, for managing the Grid environment and for executing applications on the Grid. At the parallelization concept level, we investigate a parallelization scheme which provides the user with a method to express parallelism that satisfies user-specified time constraints for a class of problems with known (or well-estimated) complexities on the Grid. The parallelization scheme is constructed on two principal elements: the decomposition tree, which represents the multi-level decomposition, and the decomposition dependency graph, which defines the partial order of execution within each decomposition. Through the scheme, the parallelism grain is chosen automatically based on the resources available at run-time. The parallelization scheme framework has been implemented using ParoC++. This framework provides a high-level abstraction which hides the complexities of the Grid environment so that users can focus on the "logic" of their problems. The dissertation is accompanied by a series of benchmarks and two real-life applications, one from image analysis for real-time textile manufacturing and one from snow simulation and avalanche warning. The results show the effectiveness of ParoC++ in developing high-performance computing applications and, in particular, in solving time-constrained problems on the Grid.
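
    The following is a hypothetical sketch of the parallel object idea only, not the actual ParoC++ API: an object description states a parallel object's resource requirements, and the runtime side selects a Grid node that satisfies them before the object is allocated there. The type names and the first-fit selection rule are assumptions made for illustration.

```cpp
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct ObjectDescription {              // requirements attached to a parallel object
    double min_power;                   // e.g. MFLOPS needed
    double min_memory_mb;
};

struct Resource {                       // a node advertised by the Grid infrastructure
    std::string host;
    double power;
    double memory_mb;
};

// Runtime side: choose a node that satisfies the description (first fit here).
std::optional<Resource> selectResource(const ObjectDescription& od,
                                       const std::vector<Resource>& grid) {
    for (const auto& r : grid)
        if (r.power >= od.min_power && r.memory_mb >= od.min_memory_mb) return r;
    return std::nullopt;
}

int main() {
    std::vector<Resource> grid = {{"node-a", 500, 1024}, {"node-b", 2000, 8192}};
    ObjectDescription needs{1500, 4096};               // what the parallel object asks for
    if (auto r = selectResource(needs, grid))
        std::cout << "allocate parallel object on " << r->host << "\n";
    else
        std::cout << "no suitable resource\n";
}
```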

    Extraction of buildings from high-resolution satellite data and airborne LIDAR

    Automatic building extraction is a difficult object recognition problem due to the high complexity of the scene content and of the object representation. There is a dilemma in selecting appropriate building models to be reconstructed; the models have to be generic in order to represent a variety of building shapes, whereas they also have to be specific enough to differentiate buildings from other objects in the scene. Therefore, a scientific challenge of building extraction lies in constructing a framework for modelling building objects with an appropriate balance between generic and specific models. This thesis investigates a synergy of IKONOS satellite imagery and airborne LIDAR data, which have recently emerged as powerful remote sensing tools, and aims to develop an automatic system which delineates building outlines of more complex shape, but with less use of geometric constraints. The method described in this thesis is a two-step procedure: building detection and building description. A method of automatic building detection that can separate individual buildings from surrounding features is presented. The process is realized in a hierarchical strategy, where terrain, trees, and building objects are sequentially detected. Major research efforts are made on the development of a LIDAR filtering technique, which automatically detects terrain surfaces from a cloud of 3D laser points. The thesis also proposes a method of building description to automatically reconstruct building boundaries. A building object is generally represented as a mosaic of convex polygons. The first stage is to generate polygonal cues by a recursive intersection of both data-driven and model-driven linear features extracted from the IKONOS imagery and LIDAR data. The second stage is to collect the relevant polygons comprising the building object and to merge them for reconstructing the building outlines. The developed LIDAR filter was tested on a range of different landforms and showed good results, meeting most of the requirements of DTM generation and building detection. Also, the implemented building extraction system was able to successfully reconstruct the building outlines, and the accuracy of the building extraction is good enough for mapping purposes.
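
    As a much simplified illustration of the terrain-filtering idea (the thesis's actual LIDAR filter is considerably more sophisticated), the sketch below rasterises the point cloud into grid cells, takes the lowest return per cell as a crude local terrain estimate, and labels points close to that estimate as ground and the rest as off-terrain objects such as trees and buildings.

```cpp
#include <cmath>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

struct LidarPoint { double x, y, z; bool ground = false; };

void classifyGround(std::vector<LidarPoint>& cloud, double cellSize, double heightTol) {
    std::map<std::pair<long, long>, double> cellMin;       // lowest z seen per grid cell
    auto cellOf = [&](const LidarPoint& p) {
        return std::make_pair(static_cast<long>(std::floor(p.x / cellSize)),
                              static_cast<long>(std::floor(p.y / cellSize)));
    };
    for (const auto& p : cloud) {                           // first pass: local minimum elevation
        auto [it, inserted] = cellMin.emplace(cellOf(p), p.z);
        if (!inserted && p.z < it->second) it->second = p.z;
    }
    for (auto& p : cloud)                                   // second pass: near the minimum => terrain
        p.ground = (p.z - cellMin[cellOf(p)]) <= heightTol;
}

int main() {
    std::vector<LidarPoint> cloud = {{1.0, 1.0, 100.2}, {1.5, 1.2, 100.4},
                                     {1.3, 1.4, 108.9},    // e.g. a roof return
                                     {6.0, 6.0, 101.0}, {6.2, 6.3, 101.1}};
    classifyGround(cloud, 5.0, 0.5);
    for (const auto& p : cloud)
        std::cout << p.x << "," << p.y << "," << p.z << (p.ground ? " ground" : " object") << "\n";
}
```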

    Image compression techniques using vector quantization


    Runtime support for load balancing of parallel adaptive and irregular applications

    Applications critical to today's engineering research often must make use of the increased memory and processing power of a parallel machine. While advances in architecture design are leading to more and more powerful parallel systems, the software tools needed to realize their full potential are in a much less advanced state. In particular, efficient, robust, and high-performance runtime support software is critical in the area of dynamic load balancing. While the load balancing of loosely synchronous codes, such as field solvers, has been studied extensively for the past 15 years, there exists a class of problems, known as asynchronous and highly adaptive, for which the dynamic load balancing problem remains open. As we discuss, characteristics of this class of problems render compile-time or static analysis of little benefit, and complicate the dynamic load balancing task immensely. We make two contributions to this area of research. The first is the design and development of a runtime software toolkit, known as the Parallel Runtime Environment for Multi-computer Applications, or PREMA, which provides interprocessor communication, a global namespace, a framework for the implementation of customized scheduling policies, and several such policies which are prevalent in the load balancing literature. The PREMA system is designed to support coarse-grained domain decompositions with the goals of portability, flexibility, and maintainability in mind, so that developers will quickly feel comfortable incorporating it into existing codes and developing new codes which make use of its functionality. We demonstrate that the programming model and implementation are efficient and lead to the development of robust and high-performance applications. Our second contribution is in the area of performance modeling. In order to make the most effective use of the PREMA runtime software, certain parameters governing its execution must be set off-line. Optimal values for these parameters may be determined through repeated executions of the target application; however, this is not always possible, particularly in large-scale environments and long-running applications. We present an analytic model that allows the user to quickly and inexpensively predict application performance and fine-tune applications built on the PREMA platform.
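
    The shape of a pluggable scheduling policy of the kind described above can be sketched as follows. This is an assumed illustration of a simple "send new work to the least-loaded processor" policy over coarse-grained, migratable work units, common in the dynamic load-balancing literature; it is not PREMA's actual interface.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

struct WorkUnit { int id; double cost; };          // a coarse-grained, migratable piece of work

struct Processor { int rank; double load = 0.0; std::vector<WorkUnit> queue; };

// Policy interface: different balancing strategies can be swapped in behind it.
struct SchedulingPolicy {
    virtual Processor& choose(std::vector<Processor>& procs, const WorkUnit& w) = 0;
    virtual ~SchedulingPolicy() = default;
};

struct LeastLoaded : SchedulingPolicy {
    Processor& choose(std::vector<Processor>& procs, const WorkUnit&) override {
        return *std::min_element(procs.begin(), procs.end(),
                                 [](const Processor& a, const Processor& b) { return a.load < b.load; });
    }
};

void submit(std::vector<Processor>& procs, SchedulingPolicy& policy, WorkUnit w) {
    Processor& target = policy.choose(procs, w);    // the policy decides the destination
    target.load += w.cost;
    target.queue.push_back(std::move(w));
}

int main() {
    std::vector<Processor> procs = {{0}, {1}, {2}};
    LeastLoaded policy;
    for (int i = 0; i < 10; ++i) submit(procs, policy, {i, 1.0 + (i % 3)});
    for (const auto& p : procs)
        std::cout << "rank " << p.rank << ": load " << p.load << ", " << p.queue.size() << " units\n";
}
```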

    PiCo: A Domain-Specific Language for Data Analytics Pipelines

    In the world of Big Data analytics, there is a series of tools aiming at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models (for which only informal, and often confusing, semantics is generally provided), all share a common underlying model, namely the Dataflow model. Using this model as a starting point, it is possible to categorize and analyze almost all aspects of Big Data analytics tools from a high-level perspective. This analysis can be considered a first step toward a formal model to be exploited in the design of a (new) framework for Big Data analytics. By putting clear separations between all levels of abstraction (i.e., from the runtime to the user API), it is easier for a programmer or software designer to avoid mixing low-level with high-level aspects, as often happens in state-of-the-art Big Data analytics frameworks. From the user-level perspective, we think that a clearer and simpler semantics is preferable, together with a strong separation of concerns. For this reason, we use the Dataflow model as a starting point to build a programming environment with a simplified programming model implemented as a Domain-Specific Language, on top of a stack of layers that builds a prototypical framework for Big Data analytics. The contribution of this thesis is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm, Google Dataflow), thus making it easier to understand high-level data-processing applications written in such frameworks. As a result of this analysis, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit into each level. Second, we propose a programming environment based on such a layered model in the form of a Domain-Specific Language (DSL) for processing data collections, called PiCo (Pipeline Composition). The main entity of this programming model is the Pipeline, basically a DAG-composition of processing elements. This model is intended to give the user a unique interface for both stream and batch processing, hiding data management completely and focusing only on operations, which are represented by Pipeline stages. Our DSL will be built on top of the FastFlow library, exploiting both shared and distributed parallelism, and implemented in C++11/14 with the aim of porting C++ into the Big Data world.
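
    A sketch of the pipeline-composition idea follows; PiCo's real DSL and its FastFlow back-end are not reproduced here, and the Pipeline, map and reduce names below are assumptions made for illustration. The point is that a Pipeline is built by composing processing stages and is only applied to a concrete collection when it is run, keeping data management out of the user code.

```cpp
#include <functional>
#include <iostream>
#include <numeric>
#include <utility>
#include <vector>

template <typename In, typename Out>
class Pipeline {
public:
    explicit Pipeline(std::function<Out(const In&)> stages) : stages_(std::move(stages)) {}

    // Append a map stage: apply f element-wise to whatever the pipeline produces so far.
    template <typename F>
    auto map(F f) const {
        using Elem = typename Out::value_type;
        using NewElem = decltype(f(std::declval<Elem>()));
        auto prev = stages_;
        return Pipeline<In, std::vector<NewElem>>([prev, f](const In& in) {
            std::vector<NewElem> out;
            for (const auto& x : prev(in)) out.push_back(f(x));
            return out;
        });
    }

    // Append a reduce stage collapsing the collection to a single value.
    template <typename T, typename F>
    auto reduce(T init, F f) const {
        auto prev = stages_;
        return Pipeline<In, T>([prev, init, f](const In& in) {
            auto v = prev(in);
            return std::accumulate(v.begin(), v.end(), init, f);
        });
    }

    Out run(const In& input) const { return stages_(input); }  // execute on a concrete collection

private:
    std::function<Out(const In&)> stages_;
};

int main() {
    using Col = std::vector<int>;
    Pipeline<Col, Col> source([](const Col& c) { return c; });          // identity source stage
    auto sumOfSquares = source.map([](int x) { return x * x; })
                              .reduce(0, [](int a, int b) { return a + b; });
    std::cout << sumOfSquares.run({1, 2, 3, 4}) << "\n";                // prints 30
}
```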