411 research outputs found

    Probabilistic and parallel algorithms for centroidal Voronoi tessellations with application to meshless computing and numerical analysis on surfaces

    Centroidal Voronoi tessellations (CVT) are Voronoi tessellations of a region such that the generating points of the tessellations are also the centroids of the corresponding Voronoi regions. Such tessellations are useful in very diverse applications, including data compression, clustering analysis, cell biology, territorial behavior of animals, optimal allocation of resources, and grid generation. A detailed review is given in chapter 1. In chapter 2, some probabilistic methods for determining centroidal Voronoi tessellations and their parallel implementation on distributed memory systems are presented. The results of computational experiments performed on a CRAY T3E-600 system are given for each algorithm; these demonstrate the superior sequential and parallel performance of a new algorithm we introduce. New algorithms are then presented in chapter 3 for the determination of point sets and associated support regions that can be used in meshless computing methods. The algorithms are probabilistic in nature so that they are totally meshfree, i.e., they do not require, at any stage, the use of any coarse or fine boundary-conforming or superimposed meshes. Computational examples are provided that show, for both uniform and non-uniform point distributions, that the algorithms result in high-quality point sets and high-quality support regions. Centroidal Voronoi tessellations also extend to more general spaces and sets; for example, tessellations of surfaces in a Euclidean space may be considered. In chapter 4, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of a kind of energy. Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Some computational examples are provided which serve to illustrate the high quality of CCVT point sets. CCVT point sets are also applied to polynomial interpolation and numerical integration on the sphere. Finally, some conclusions are given in chapter 5.
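    To make the probabilistic flavor of such methods concrete, the sketch below implements a MacQueen-style random-sampling iteration for a CVT on the unit square. It is an illustrative toy under assumed parameters (domain, sample count, number of generators), not the dissertation's algorithms or their parallel implementation.

    import numpy as np

    def macqueen_cvt(n_generators=16, n_samples=200_000, seed=0):
        """MacQueen-style probabilistic CVT iteration on the unit square.

        Each random sample is assigned to its nearest generator, which is
        then nudged toward the running mean of all samples it has absorbed.
        """
        rng = np.random.default_rng(seed)
        z = rng.random((n_generators, 2))        # initial generators
        counts = np.ones(n_generators)           # samples absorbed so far

        for _ in range(n_samples):
            y = rng.random(2)                    # uniform sample in the region
            i = np.argmin(np.sum((z - y) ** 2, axis=1))       # nearest generator
            z[i] = (counts[i] * z[i] + y) / (counts[i] + 1)   # running average
            counts[i] += 1
        return z

    if __name__ == "__main__":
        print(macqueen_cvt())

    Each sample moves only its nearest generator, so the update needs no mesh and no explicit Voronoi construction, which is what makes this family of methods attractive for meshless computing.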

    Energy challenges for ICT

    The energy consumption from the expanding use of information and communications technology (ICT) is unsustainable with present drivers, and it will weigh heavily on future climate change. However, ICT devices have the potential to contribute significantly to the reduction of CO2 emissions and to enhance resource efficiency in other sectors, e.g., transportation (through intelligent transportation systems, advanced driver assistance systems, and self-driving vehicles), heating (through smart building control), and manufacturing (through digital automation based on smart autonomous sensors). To address the energy sustainability of ICT and capture the full potential of ICT in resource efficiency, a multidisciplinary ICT-energy community needs to be brought together, covering devices, microarchitectures, ultra-large-scale integration (ULSI), high-performance computing (HPC), energy harvesting, energy storage, system design, embedded systems, efficient electronics, static analysis, and computation. In this chapter, we introduce challenges and opportunities in this emerging field and a common framework to strive towards energy-sustainable ICT.

    Framework for simulation of fault tolerant heterogeneous multiprocessor system-on-chip

    Due to ever-growing requirements for high-performance data computation, current uniprocessor systems fall short of meeting critical real-time performance demands in (i) high throughput, (ii) faster processing time, (iii) low power consumption, (iv) design cost and time-to-market factors, and, more importantly, (v) fault-tolerant processing. Shifting the design trend to MPSoCs is a work-around to meet these challenges. However, developing efficient fault-tolerant task scheduling and mapping techniques requires optimized algorithms that consider the various scenarios in multiprocessor environments. Several works in the past few years have proposed simulation-based frameworks for scheduling and mapping strategies that considered homogeneous systems and error-avoidance techniques. However, most of these works inadequately describe today's MPSoC trend because they focused on the network domain and did not consider heterogeneous systems with fault-tolerant capabilities. To address these issues, this work proposes (i) a performance-driven scheduling algorithm (PD SA) based on the simulated annealing technique, (ii) an optimized Homogenous-Workload-Distribution (HWD) multiprocessor task mapping algorithm which considers the dynamic workload on processors, and (iii) a dynamic fault-tolerant (FT) scheduling/mapping algorithm to provide a robust application processing system. The implementation is accompanied by a heterogeneous multiprocessor system simulation framework developed in SystemC/C++. The proposed framework reads user data, sets up the architecture, executes the input task graph, and finally generates performance variables. This framework alleviates the issues of previous work with respect to (i) architectural flexibility in number of processors, processor types, and topology, (ii) optimized scheduling and mapping strategies, and (iii) fault-tolerant processing capability, focusing more on the computational domain. A set of random as well as application-specific STG benchmark suites were run on the simulator to evaluate and verify the performance of the proposed algorithms. The simulations covered (i) scheduling policy evaluation, (ii) fault tolerance evaluation, (iii) topology evaluation, (iv) number-of-processors evaluation, (v) mapping policy evaluation, and (vi) processor type evaluation. The results showed that the PD scheduling algorithm performed marginally better than EDF with respect to utilization, execution time, and power. The dynamic fault-tolerant implementation proved to be a viable and efficient strategy for meeting real-time constraints without significant system performance degradation. The torus topology gave better performance than the tile topology with respect to task completion time and power. Executing highly heterogeneous tasks resulted in higher power consumption and execution time. Finally, increasing the number of processors decreased average utilization but improved task completion time and power consumption. Based on the simulation results, a system designer can compare trade-offs between various design choices with respect to the performance requirement specifications. In general, designing an optimized multiprocessor scheduling and mapping strategy with added fault-tolerant capability will enable the development of efficient multiprocessor systems that meet future performance goals. This is the substance of this work.
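    As a rough illustration of how a simulated-annealing scheduler of this kind operates, the sketch below maps a synthetic task set onto heterogeneous processors by minimizing makespan. The cost model, workloads, and cooling schedule are invented for illustration; this is not the PD SA, HWD, or FT algorithm proposed in the work.

    import math
    import random

    def anneal_mapping(workloads, speeds, iters=20_000, t0=10.0, alpha=0.9995, seed=1):
        """Simulated-annealing sketch: map tasks onto heterogeneous processors.

        Cost is the makespan (largest per-processor finish time); a neighbor
        is produced by moving one randomly chosen task to another processor.
        """
        rng = random.Random(seed)
        n_procs = len(speeds)
        mapping = [rng.randrange(n_procs) for _ in workloads]

        def makespan(m):
            load = [0.0] * n_procs
            for task, proc in enumerate(m):
                load[proc] += workloads[task] / speeds[proc]
            return max(load)

        best = cur = makespan(mapping)
        best_map = list(mapping)
        temp = t0
        for _ in range(iters):
            task = rng.randrange(len(workloads))
            old_proc = mapping[task]
            mapping[task] = rng.randrange(n_procs)
            new = makespan(mapping)
            # Metropolis rule: always accept improvements, sometimes accept worse moves.
            if new <= cur or rng.random() < math.exp((cur - new) / temp):
                cur = new
                if new < best:
                    best, best_map = new, list(mapping)
            else:
                mapping[task] = old_proc          # undo the rejected move
            temp *= alpha                         # geometric cooling
        return best_map, best

    if __name__ == "__main__":
        tasks = [4, 7, 3, 9, 2, 5, 8, 6]          # synthetic task workloads
        procs = [1.0, 1.5, 2.0]                   # relative processor speeds
        print(anneal_mapping(tasks, procs))

    A real MPSoC scheduler would replace the makespan-only cost with a weighted mix of utilization, execution time, and power, which is the kind of multi-factor objective the abstract describes.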

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas, and concepts from a part of the huge EA field.

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Developing service supply chains by using agent based simulation

    This Master's thesis presents a novel approach to modelling a service supply chain with agent-based simulation. The case study of the thesis concerns healthcare services, and the research problem involves the facility location of healthcare centers in the Vaasa region, considering demand, resource units, and service quality. A geographical information system is used to locate the population, agent-based simulation to model patients and the probability of their illness status, and discrete-event simulation to model the healthcare services. Health centers are located on predefined sites based on managers' preferences; each patient then moves to the nearest health center, based on distance, to receive healthcare services. For evaluating cost and service conditions, various key performance indicators are defined in the model, such as the number of patients in queue, patient waiting time, resource utilization, and the patient ratio obtained from the difference between inflow and outflow. Healthcare managers can experiment with different scenarios by changing the number of resource units or the location of healthcare centers, and subsequently evaluate the results without the necessity of implementation in real life.
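    A minimal sketch of the nearest-center assignment rule described above follows; the patient coordinates, center sites, and service rate are synthetic assumptions for illustration and do not reproduce the thesis's GIS data, agent-based patient model, or discrete-event service model.

    import random

    def assign_and_measure(patients, centers, service_rate=4.0):
        """Toy sketch: route each patient to the nearest health center and
        derive a crude waiting-time KPI from the resulting queue lengths.

        patients      -- list of (x, y) locations
        centers       -- list of (name, (x, y)) predefined sites
        service_rate  -- assumed patients served per hour at each center
        """
        queues = {name: 0 for name, _ in centers}
        for px, py in patients:
            # Nearest-center rule: squared Euclidean distance to each site.
            name, _ = min(centers,
                          key=lambda c: (c[1][0] - px) ** 2 + (c[1][1] - py) ** 2)
            queues[name] += 1
        # Expected time to clear each queue at the assumed service rate.
        waits = {name: n / service_rate for name, n in queues.items()}
        return queues, waits

    if __name__ == "__main__":
        rng = random.Random(0)
        demand = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
        sites = [("Center A", (2.0, 3.0)), ("Center B", (7.5, 8.0))]
        print(assign_and_measure(demand, sites))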

    The 2nd Conference of PhD Students in Computer Science


    AMoEBA: the adaptive modeling by evolving blocks algorithm

    This dissertation presents AMoEBA, the Adaptive Modeling by Evolving Blocks Algorithm. AMoEBA is an evolutionary technique for automatic decomposition of data fields and solver/descriptor placement. By automatically decomposing a numerical data set, the algorithm is able to solve a variety of problems that are difficult to solve with other techniques. Two key features of the algorithm are its ability to work with discrete data types and its unique geometric representation of the domain. AMoEBA uses parse trees generated by genetic programming to define data segregation schemes. These trees also place solvers/descriptors in the decomposed regions. Since the segregation trees define the boundaries between the regions, discrete representations of the data set are possible. AMoEBA is versatile and can be applied to many different types of geometries as well as different types of problems. In this thesis, three problems are used to demonstrate the capabilities of the algorithm. For the first problem, AMoEBA used approximated algebraic expressions to match known profiles representing a steady-state conduction heat transfer problem and the fully developed laminar flow through a pipe. To further illustrate the versatility of the algorithm, an inverse engineering problem was also solved. For this problem, AMoEBA placed different materials in the segregated regions defined by the trees and compared the result to known temperature profiles. The final demonstration illustrates the application of AMoEBA to computational fluid dynamics. In this implementation, AMoEBA segregated an elbow section of pipe and placed numerical solvers in the regions. The resulting solver networks were solved and compared to a known solution. Both the time and accuracy of the networks were compared to determine whether a faster solution method could be found while maintaining reasonable accuracy. Although AMoEBA is adapted for each application, the core algorithm is unaltered in each case. This illustrates the flexibility of the algorithm.
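    To illustrate the idea of a segregation tree that both partitions a domain and places a solver/descriptor in each region, the sketch below hand-builds a small decision tree over points of the unit square. In AMoEBA such trees are evolved by genetic programming rather than written by hand, and the predicates and solver labels here are purely hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Union

    @dataclass
    class Leaf:
        solver: str                               # descriptor/solver placed in this region

    @dataclass
    class Split:
        predicate: Callable[[float, float], bool] # geometric test on a point (x, y)
        if_true: "Node"
        if_false: "Node"

    Node = Union[Leaf, Split]

    def place(tree: Node, x: float, y: float) -> str:
        """Walk a segregation tree and return the solver label for point (x, y)."""
        while isinstance(tree, Split):
            tree = tree.if_true if tree.predicate(x, y) else tree.if_false
        return tree.solver

    if __name__ == "__main__":
        # Hypothetical tree: split the unit square by a line, then by a small circle.
        tree = Split(lambda x, y: x + y < 1.0,
                     Leaf("coarse solver"),
                     Split(lambda x, y: (x - 0.7) ** 2 + (y - 0.7) ** 2 < 0.04,
                           Leaf("fine solver"),
                           Leaf("medium solver")))
        for px, py in [(0.2, 0.3), (0.7, 0.72), (0.9, 0.5)]:
            print((px, py), "->", place(tree, px, py))

    Because the tree's internal nodes define region boundaries and its leaves carry the placed solvers, an evolutionary search over such trees can jointly optimize the decomposition and the solver assignment, which is the mechanism the abstract describes.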