    A formalism for describing and simulating systems with interacting components.

    This thesis addresses the problem of descriptive complexity presented by systems involving a high number of interacting components. It investigates the evaluation measure of performability and its application to such systems. A new description and simulation language, ICE (Interacting ComponEnts), is presented, together with its application to performability modelling. ICE is based upon an earlier description language first proposed for defining reliability problems. It is declarative in style and has a limited number of keywords; the ethos in the development of the language has been to provide an intuitive formalism with a powerful descriptive space. The full syntax of the language is presented, with discussion of its philosophy. The implementation of a discrete event simulator with an ICE interface is described, with examples used to illustrate the functionality of the code and the semantics of the language. Random numbers provide the required stochastic behaviour within the simulator. The behaviour of an industry-standard generator within the simulator, and different methods of number allocation, are shown. A new generator is proposed, developed from a fast hardware shift register generator, and is demonstrated to possess good statistical properties and operational speed. To provide a rigorous description of the language and clarify its semantics, a computational model is developed using the formalism of extended coloured Petri nets. This model also gives an indication of the language's descriptive power relative to that of a recognised and well-developed technique. Some recognised temporal and structural problems of system event modelling are identified, and ICE solutions are given. The growing research area of ATM communication networks is introduced and a sophisticated top-down model of an ATM switch presented. This model is simulated and interesting results are given. A generic ICE framework for performability modelling is developed and demonstrated, constituting a positive contribution to the general field of performability research.
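
    The abstract does not specify the construction of the proposed generator, but as a rough illustration of the family it develops from, a Fibonacci linear feedback shift register (LFSR) generator might be sketched in Python as follows; the 32-bit width and tap positions here are assumptions for illustration, not the thesis's design:

```python
# Minimal sketch of a Fibonacci LFSR, the family of fast hardware
# shift register generators the thesis builds on. The width and tap
# positions are illustrative assumptions, not the proposed generator.

class LFSR:
    def __init__(self, seed: int, width: int = 32, taps=(31, 21, 1, 0)):
        assert seed != 0, "an all-zero state never leaves zero"
        self.width = width
        self.taps = taps
        self.state = seed & ((1 << width) - 1)

    def next_bit(self) -> int:
        # XOR the tapped bits to form the feedback bit.
        fb = 0
        for t in self.taps:
            fb ^= (self.state >> t) & 1
        # Shift left and insert the feedback bit.
        self.state = ((self.state << 1) | fb) & ((1 << self.width) - 1)
        return fb

    def next_uint(self, bits: int = 16) -> int:
        # Assemble a random integer from successive output bits.
        return sum(self.next_bit() << i for i in range(bits))

rng = LFSR(seed=0xACE1)
print([rng.next_uint() for _ in range(4)])
```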

    The hardware implementation of an artificial neural network using stochastic pulse rate encoding principles

    In this thesis the development of a hardware artificial neuron device and artificial neural network using stochastic pulse rate encoding principles is considered. After a review of neural network architectures and algorithmic approaches suitable for hardware implementation, a critical review of hardware techniques which have been considered in analogue and digital systems is presented. New results are presented demonstrating the potential of two learning schemes which adapt by the use of a single reinforcement signal. The techniques for computation using stochastic pulse rate encoding are presented and extended with novel circuits relevant to the hardware implementation of an artificial neural network. The generation of random numbers is the key to the encoding of data into the stochastic pulse rate domain. The formation of random numbers and multiple random bit sequences from a single PRBS generator has been investigated. Two techniques, Simulated Annealing and Genetic Algorithms, have been applied successfully to the problem of optimising the configuration of a PRBS random number generator for the formation of multiple random bit sequences, and hence random numbers. A complete hardware design for an artificial neuron using stochastic pulse rate encoded signals has been described, designed, simulated, fabricated and tested before configuration of the device into a network to perform simple test problems. The implementation has shown that the processing elements of the artificial neuron are small and simple, but that there can be a significant overhead for the encoding of information into the stochastic pulse rate domain. The stochastic artificial neuron has the capability of on-line weight adaptation. The implementation of reinforcement schemes using the stochastic neuron as a basic element is discussed.
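
    As an illustration of the encoding principle (not of the thesis's circuits), a value in [0, 1] can be represented as the probability that any given bit in a pulse stream is 1; multiplication of two values then reduces to a bitwise AND of two independent streams. A minimal Python sketch:

```python
import random

def encode(value: float, n_bits: int, rng: random.Random) -> list:
    # Unipolar stochastic encoding: each pulse is 1 with probability
    # equal to the value being represented (assumed to lie in [0, 1]).
    return [1 if rng.random() < value else 0 for _ in range(n_bits)]

def decode(stream: list) -> float:
    # The value is recovered as the pulse rate, i.e. the stream mean.
    return sum(stream) / len(stream)

rng = random.Random(42)
a = encode(0.6, 10_000, rng)
b = encode(0.5, 10_000, rng)

# For independent unipolar streams an AND gate multiplies:
# P(a_i AND b_i = 1) = P(a_i = 1) * P(b_i = 1).
product = [x & y for x, y in zip(a, b)]
print(decode(product))  # ~0.30 (= 0.6 * 0.5), up to sampling noise
```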

    Evaluation of alternative discrete-event simulation experimental methods

    The aim of the research was to assist non-experts to produce meaningful, non-terminating discrete event simulation studies. The exemplar used was manufacturing applications, in particular sequential production lines. The thesis addressed the selection of methods for introducing randomness, setting the length of individual simulation runs, and determining the conditions for starting measurements. "Received wisdom" in these aspects of simulation experimentation was not accepted. The research made use of a Markov chain queuing model and statistical analysis of exhaustive computer-based experimentation using test models. A specific production-line model drawn from the motor industry was used as a point of reference. A distinctive, quality-control-like process for facilitating the controlled introduction of "representative randomness" from a pseudo-random number generator was developed, rather than relying on a generator's a priori performance in standard statistical tests of randomness. This approach proved to be effective and practical. Other results included: the distortion in measurements due to the initial conditions of a simulation run of a queue was only corrected by a lengthy run, not by discarding early results; simulation experiments on the same queue demonstrated that a single long run gave greater accuracy than multiple runs; and the choice of random number generator is less important than the choice of seed. Notably, RANDU (a "discredited" MLCG) with careful seed selection was able to outperform in tests, 99.8% of the time, both real random numbers and other MLCGs whose seeds were chosen randomly. Similar results were obtained for the Mersenne Twister and Descriptive Sampling. Descriptive Sampling was found to provide the best samples and was less susceptible to errors in the forecast of the required sample size. A method of determining the run length of the simulation that would ensure the run was representative of the true conditions was proposed. An interactive computer program was created to assist in the calculation of the run length of a simulation and to determine seeds so as to obtain "highly representative" samples, demonstrating the facility required in simulation software to support the selected methods.
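
    The thesis's test models and software are not reproduced here, but the initial-condition bias and run-length effects it studies can be seen in a minimal M/M/1 queue simulation using Lindley's recurrence and Python's standard generator; the parameters below are arbitrary:

```python
import random

def mm1_mean_wait(lam: float, mu: float, n_customers: int,
                  seed: int, discard: int = 0) -> float:
    # Mean waiting time in an M/M/1 queue started empty, via Lindley's
    # recurrence W_{n+1} = max(0, W_n + S_n - A_{n+1}); the first
    # `discard` observations may be dropped as a warm-up period.
    rng = random.Random(seed)
    wait, total, counted = 0.0, 0.0, 0
    for i in range(n_customers):
        interarrival = rng.expovariate(lam)
        service = rng.expovariate(mu)
        wait = max(0.0, wait + service - interarrival)
        if i >= discard:
            total += wait
            counted += 1
    return total / counted

# Theoretical mean wait for rho = 0.8 is rho / (mu - lam) = 4.0.
print(mm1_mean_wait(0.8, 1.0, 1_000, seed=1))      # short run: biased by empty start
print(mm1_mean_wait(0.8, 1.0, 1_000_000, seed=1))  # single long run: close to 4.0
print(mm1_mean_wait(0.8, 1.0, 1_000_000, seed=1, discard=10_000))  # warm-up deletion
```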

    Hybrid algorithms for efficient Cholesky decomposition and matrix inverse using multicore CPUs with GPU accelerators

    The use of linear algebra routines is fundamental to many areas of computational science, yet their implementation in software still forms the main computational bottleneck in many widely used algorithms. In machine learning and computational statistics, for example, the use of Gaussian distributions is ubiquitous, and routines for calculating the Cholesky decomposition, matrix inverse and matrix determinant must often be called many thousands of times for common algorithms, such as Markov chain Monte Carlo. These linear algebra routines consume most of the total computational time of a wide range of statistical methods, and any improvements in this area will therefore greatly increase the overall efficiency of algorithms used in many scientific application areas. The importance of linear algebra algorithms is clear from the substantial effort that has been invested over the last 25 years in producing low-level software libraries such as LAPACK, which generally optimise these linear algebra routines by breaking up a large problem into smaller problems that may be computed independently. The performance of such libraries is, however, strongly dependent on the specific hardware available. LAPACK was originally developed for single-core processors with a memory hierarchy, whereas modern computers often consist of mixed architectures, with large numbers of parallel cores and graphics processing units (GPUs) being used alongside traditional CPUs. The challenge lies in making optimal use of these different types of computing units, which generally have very different processor speeds and types of memory. In this thesis we develop novel low-level algorithms that may be generally employed in blocked linear algebra routines, and which automatically optimise themselves to take full advantage of the variety of heterogeneous architectures that may be available. We present a comparison of our methods with MAGMA, the state-of-the-art open source implementation of LAPACK designed specifically for hybrid architectures, and demonstrate an increase in speed of up to 400% using our novel algorithms, specifically when running commonly used Cholesky matrix decomposition, matrix inverse and matrix determinant routines.
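
    The thesis's hybrid CPU/GPU scheduling is not reproduced here, but the blocked (right-looking) Cholesky decomposition that such libraries tile can be sketched in Python with NumPy and SciPy; in a hybrid code, the trailing-matrix update is the natural candidate for GPU offload. The block size and test matrix below are arbitrary:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def blocked_cholesky(A: np.ndarray, block: int = 64) -> np.ndarray:
    # Right-looking blocked Cholesky returning the lower factor L.
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, block):
        end = min(k + block, n)
        # Factor the diagonal block (a small, CPU-friendly task).
        A[k:end, k:end] = cholesky(A[k:end, k:end], lower=True)
        if end < n:
            # Panel: L21 = A21 * L11^{-T}, via a triangular solve.
            A[end:, k:end] = solve_triangular(
                A[k:end, k:end], A[end:, k:end].T, lower=True).T
            # Trailing update: A22 -= L21 * L21^T. This large matrix
            # multiply dominates and suits a GPU in hybrid schemes.
            A[end:, end:] -= A[end:, k:end] @ A[end:, k:end].T
    return np.tril(A)

rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))
A = M @ M.T + 300 * np.eye(300)   # symmetric positive definite test matrix
L = blocked_cholesky(A)
print(np.allclose(L @ L.T, A))    # True
```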