
    CABE : a cloud-based acoustic beamforming emulator for FPGA-based sound source localization

    Get PDF
    Microphone arrays are gaining in popularity thanks to the availability of low-cost microphones. Applications including sonar, binaural hearing aid devices, acoustic indoor localization techniques and speech recognition have been proposed by several research groups and companies. In most of the available implementations, the microphones used are assumed to offer an ideal response in a given frequency domain. Several toolboxes and software packages can be used to obtain the theoretical response of a microphone array with a given beamforming algorithm. However, no tool could be found that facilitates the design of a microphone array while taking its non-ideal characteristics into account. Moreover, generating packages that facilitate implementation on Field Programmable Gate Arrays has, to our knowledge, not been carried out yet. Visualizing the responses in 2D and 3D also poses an engineering challenge. To alleviate these shortcomings, a scalable Cloud-based Acoustic Beamforming Emulator (CABE) is proposed. The non-ideal characteristics of the microphones are considered during the computations, and the results are validated with acoustic data captured from microphones. It is also possible to generate hardware description language packages containing delay tables that facilitate the implementation of Delay-and-Sum beamformers in embedded hardware. Truncation error analysis can also be carried out for fixed-point signal processing, and the effect of disabling a given group of microphones within the array can be calculated. Results and packages can be visualized with a dedicated client application. Users can create and configure several parameters of an emulation, including sound source placement, the shape of the microphone array and the required signal processing flow. Depending on the user configuration, the client application can generate 2D and 3D graphs of the beamforming results, waterfall diagrams and performance metrics. The emulations are also validated with data captured from existing microphone arrays.
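
    As a concrete illustration of the delay tables such an emulator exports, the following minimal delay-and-sum sketch computes per-microphone sample delays for a steered far-field source and sums the aligned signals. The array geometry, sampling rate and function names are illustrative assumptions, not CABE's actual package format; note that rounding delays to integer samples is exactly the kind of truncation step whose error the emulator analyzes.

```python
# Minimal delay-and-sum beamformer sketch (not CABE itself): for a given
# steering direction, each microphone's signal is delayed by the acoustic
# travel-time difference and the delayed signals are summed. The integer
# delays below are the kind of values exported as HDL delay tables.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C


def steering_delays(mic_positions, direction, fs):
    """Per-microphone delays (in samples) for a far-field plane wave.

    mic_positions: (M, 3) array of microphone coordinates in metres.
    direction:     unit vector pointing towards the sound source.
    fs:            sampling rate in Hz.
    """
    delays_s = mic_positions @ direction / SPEED_OF_SOUND
    delays_s -= delays_s.min()                  # make all delays non-negative
    return np.round(delays_s * fs).astype(int)  # truncation to integer samples


def delay_and_sum(signals, delays):
    """Align and average M signals of length N given integer sample delays."""
    M, N = signals.shape
    out = np.zeros(N)
    for m in range(M):
        out[delays[m]:] += signals[m, :N - delays[m]]
    return out / M


# Example: 4-microphone linear array, 5 cm spacing, source at 45 degrees.
fs = 16_000
mics = np.array([[0.05 * m, 0.0, 0.0] for m in range(4)])
direction = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])
delays = steering_delays(mics, direction, fs)
signals = np.random.randn(4, 1024)  # stand-in for captured microphone data
beamformed = delay_and_sum(signals, delays)
```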

    Sliced rotated sphere packing designs

    Full text link
    Space-filling designs are popular choices for computer experiments. A sliced design is a design that can be partitioned into several subdesigns. We propose a new type of sliced space-filling design, the sliced rotated sphere packing design, whose full design and subdesigns are all rotated sphere packing designs. They are constructed by rescaling, rotating, translating and extracting the points of a sliced lattice. We provide two fast algorithms to generate such designs. Furthermore, we propose a strategy for using sliced rotated sphere packing designs adaptively: initial runs are uniformly distributed in the design space, follow-up runs are added by incorporating information gained from the initial runs, and the combined design is space-filling for any local region. Examples are given to illustrate its potential applications.
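
    A minimal sketch of the rescale-rotate-translate-extract construction, assuming the hexagonal lattice as the base packing in two dimensions; the scaling factor and rotation scheme here are simplified stand-ins for the paper's algorithms, and slicing is not shown.

```python
# Illustrative rotated-lattice construction: points of a base lattice are
# rescaled, rotated and translated, and those landing in [0, 1]^2 are
# extracted as the design. Real rotated sphere packing designs use the
# densest packing lattice in each dimension (the hexagonal lattice A2 for
# p = 2, as used here); the numeric choices below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hexagonal (A2) lattice generator matrix: densest sphere packing in 2-D.
G = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])

# Enumerate lattice points z @ G for integer vectors z in a generous range.
k = np.arange(-20, 21)
Z = np.array([[i, j] for i in k for j in k], dtype=float)
points = Z @ G

# Rescale to a target density, rotate by a random angle, then translate.
theta = rng.uniform(0, 2 * np.pi)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
points = 0.08 * points @ R.T + rng.uniform(0, 1, size=2)

# Extract the points that land in the unit square: this is the design.
design = points[np.all((points >= 0) & (points <= 1), axis=1)]
```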

    Multi-Fidelity Gaussian Process Emulation And Its Application In The Study Of Tsunami Risk Modelling

    Get PDF
    Investigating uncertainties in computer simulations can be prohibitive in terms of computational cost, since the simulator needs to be run over a large number of input values. Building a statistical surrogate model of the simulator, using a small design of experiments, greatly alleviates the computational burden of such investigations. Nevertheless, this can still exceed the computational budget of many studies. We present a novel method that combines both approaches: multilevel adaptive sequential design of computer experiments (MLASCE) in the framework of Gaussian process (GP) emulators. MLASCE is based on two major approaches: efficient designs of experiments, such as sequential designs, and the combination of training data of different degrees of sophistication in a so-called multi-fidelity method, or multilevel method when the fidelities are ordered, typically by increasing resolution. This dual strategy allows us to allocate limited computational resources efficiently across simulations of different levels of fidelity when building the GP emulator. In a special case where we theoretically prove the validity of our approach, the allocation of computational resources is shown to be the solution of a simple optimization problem. MLASCE is compared with other existing models of multi-fidelity Gaussian process emulation, and gains of orders of magnitude in accuracy for medium-size computing budgets are demonstrated in numerical examples. MLASCE should be useful in computer experiments for natural disaster risk, and more than a mere tool for calculating the scale of natural disasters. To show that MLASCE meets this expectation, we propose the first end-to-end example of a risk model for household asset loss due to a possible future tsunami. In this framework, MLASCE provides a reliable statistical surrogate for realistic tsunami risk assessment under a restricted computational budget, delivering accurate and near-instant predictions of future tsunami risk.
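
    A minimal two-level sketch in the Kennedy-O'Hagan autoregressive style on which multi-fidelity GP emulation is commonly built; MLASCE's sequential design and resource-allocation steps are not shown, and the toy simulators below are assumptions, not the tsunami model.

```python
# Two-level multi-fidelity sketch: high fidelity is modelled as
# rho * low_fidelity + discrepancy, so many cheap runs plus a few
# expensive runs yield an accurate emulator of the expensive code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_lo(x):  # cheap, biased simulator (illustrative toy function)
    return np.sin(8 * x) + 0.3 * x

def f_hi(x):  # expensive, accurate simulator (illustrative toy function)
    return np.sin(8 * x) + 0.1 * x ** 2

x_lo = np.linspace(0, 1, 25)[:, None]   # many cheap runs
x_hi = np.linspace(0, 1, 6)[:, None]    # few expensive runs

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1)).fit(x_lo, f_lo(x_lo.ravel()))

# Regress high-fidelity outputs on the low-fidelity predictions to get rho,
# then emulate the residual (discrepancy) with a second GP.
mu_lo_at_hi = gp_lo.predict(x_hi)
y_hi = f_hi(x_hi.ravel())
rho = np.sum(mu_lo_at_hi * y_hi) / np.sum(mu_lo_at_hi ** 2)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.1)).fit(
    x_hi, y_hi - rho * mu_lo_at_hi)

x_new = np.linspace(0, 1, 200)[:, None]
prediction = rho * gp_lo.predict(x_new) + gp_delta.predict(x_new)
```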

    Simulation-based Inference for Model Parameterization on Analog Neuromorphic Hardware

    Full text link
    The BrainScaleS-2 (BSS-2) system implements physical models of neurons and synapses, aiming for energy-efficient and fast emulation of biological neurons. When replicating neuroscientific experimental results, a major challenge is finding suitable model parameters. This study investigates the suitability of the sequential neural posterior estimation (SNPE) algorithm for parameterizing a multi-compartmental neuron model emulated on the BSS-2 analog neuromorphic hardware system. In contrast to other optimization methods such as genetic algorithms or stochastic searches, the SNPE algorithm belongs to the class of approximate Bayesian computation (ABC) methods and estimates the posterior distribution of the model parameters; access to the posterior allows quantifying the confidence in parameter estimates and unveiling correlations between model parameters. In previous applications, the SNPE algorithm showed higher computational efficiency than traditional ABC methods. For our multi-compartmental model, we show that the approximated posterior is in agreement with experimental observations and that the identified correlations between parameters match theoretical expectations. Furthermore, we show that the algorithm can deal with high-dimensional observations and parameter spaces. These results suggest that the SNPE algorithm is a promising approach for automating the parameterization of complex models, especially when dealing with characteristic properties of analog neuromorphic substrates, such as trial-to-trial variations or limited parameter ranges.
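
    For contrast with SNPE, here is a minimal rejection-ABC baseline of the traditional kind the abstract mentions; the toy simulator, prior and tolerance are illustrative assumptions, not the BSS-2 neuron model.

```python
# Rejection ABC baseline: draw parameters from the prior, simulate, and
# keep draws whose simulated output lands within a tolerance of the
# observation. SNPE reaches a comparable posterior with far fewer
# simulator calls by training a neural density estimator sequentially.
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta):
    """Toy stand-in: noisy observation whose mean depends on theta."""
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

x_observed = 1.2
n_draws, tolerance = 100_000, 0.05

theta = rng.uniform(-3, 3, size=n_draws)   # prior draws
x_sim = simulator(theta)                   # one simulation per draw
accepted = theta[np.abs(x_sim - x_observed) < tolerance]

# `accepted` approximates the posterior p(theta | x_observed); its spread
# quantifies the confidence in the parameter estimate.
posterior_mean = accepted.mean()
```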

    Neural networks learn to detect and emulate sorting algorithms from images of their execution traces

    Get PDF
    Context: recent advancements in the applicability of neural networks across a variety of fields, such as computer vision and natural language processing, have re-sparked interest in program induction methods. Problem: when performing a program induction task, it is not feasible to search across all possible programs that map an input to an output, because the number of possible sequences of instructions grows at least exponentially with the generated program length. Moreover, no general framework exists for formulating such program induction tasks, and current computational limitations restrict the range of machine learning applications in the field of program generation. Objective: in this study, we analyze the effectiveness of execution traces as learning representations for neural network models in a program induction set-up. Our goal is to generate visualizations of program execution dynamics, specifically of sorting algorithms, and to apply machine learning techniques to them to capture their semantics and emulate their behavior using neural networks. Method: we begin by classifying images of execution traces for algorithms working on a finite array of numbers, such as various sorting and data structure algorithms. Next, we experiment with detecting sub-program patterns inside the trace sequence of a larger program. The last step is to predict future steps in the execution of various sorting algorithms; more specifically, we try to emulate their behavior by observing their execution traces. We also discuss generalizations to other classes of programs, such as 1-D cellular automata. Results: our experiments show that neural networks are capable of modeling the mechanisms underlying simple algorithms if enough execution traces are provided as data. We compare the performance of our program induction model with similar experimental results from Graves et al. [4] and Vinyals et al. [5]. We also demonstrate that sorting algorithms can be treated both as images displaying spatial patterns and as sequential instructions in a domain-specific language, such as swapping two elements. We tested our approach on three types of increasingly harder tasks: detection, recognition and emulation. Conclusions: we demonstrate that simple algorithms can be modeled using neural networks and provide a method for representing specific classes of programs as either images or sequences of instructions in a domain-specific language, such that a neural network can learn their behavior. We consider the complexity of various set-ups and arrive at improvements based on the data representation type. The insights from our experiments can be applied to designing better models for program induction.
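
    A minimal sketch of the execution-trace representation described above, assuming each row of the image records the array state after one bubble-sort step; the exact trace format used in the paper may differ.

```python
# Execution trace as an image: each row is the array state after one
# comparison/swap step of bubble sort, so the sorting dynamics become a
# 2-D spatial pattern a network can classify or extrapolate.
import numpy as np

def bubble_sort_trace(values):
    """Return a (steps, n) matrix of array snapshots during bubble sort."""
    a = list(values)
    snapshots = [list(a)]
    for i in range(len(a)):
        for j in range(len(a) - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
            snapshots.append(list(a))
    return np.array(snapshots)

rng = np.random.default_rng(2)
trace = bubble_sort_trace(rng.permutation(16))
# `trace` can be rendered as a grayscale image (time on one axis, array
# index on the other) and fed to a CNN for detection, recognition, or
# next-step (emulation) tasks.
```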

    Parallel algorithms and architectures for VLSI pattern generation

    Get PDF

    Speeding up neighborhood search in local Gaussian process prediction

    Full text link
    Recent implementations of local approximate Gaussian process models have pushed computational boundaries for non-linear, non-parametric prediction problems, particularly when deployed as emulators for computer experiments. Their flavor of spatially independent computation accommodates massive parallelization, meaning that they can handle designs two or more orders of magnitude larger than previously possible. However, accomplishing that feat can still require massive supercomputing resources. Here we aim to ease that burden. We study how predictive variance is reduced as local designs are built up for prediction, and observe that the exhaustive and discrete nature of an important search subroutine involved in building such local designs may be overly conservative. Instead, we suggest that searching the space radially, i.e., continuously along rays emanating from the predictive location of interest, is a far thriftier alternative. Our empirical work demonstrates that ray-based search yields predictors with accuracy comparable to exhaustive search, but in a fraction of the time, bringing a supercomputer implementation back onto the desktop.
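
    A rough sketch of the ray idea, using a simple distance-based stand-in for the actual variance-reduction criterion: instead of scoring every candidate in a large discrete set, a 1-D search runs along a few rays from the predictive location and the winner is snapped to the nearest design point. All numeric choices below are illustrative assumptions.

```python
# Greedy local-design extension via ray search rather than exhaustive
# candidate scoring. `criterion` is a crude proxy for variance reduction:
# prefer points near x_star but away from the current local design.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(50_000, 2))   # full design (stand-in data)
tree = cKDTree(X)
x_star = np.array([0.5, 0.5])             # predictive location of interest

def criterion(x, local):
    d_star = np.linalg.norm(x - x_star)
    d_local = np.linalg.norm(local - x, axis=1).min()
    return d_star - d_local               # lower is better

# Seed the local design with the nearest neighbours of x_star.
local = X[tree.query(x_star, k=10)[1]]

# One greedy step: line-search along a few random rays from x_star, then
# snap the best continuous point to its nearest design point.
best = None
for _ in range(4):
    u = rng.normal(size=2)
    u /= np.linalg.norm(u)
    res = minimize_scalar(lambda t: criterion(x_star + t * u, local),
                          bounds=(1e-3, 0.5), method="bounded")
    cand = X[tree.query(x_star + res.x * u, k=1)[1]]
    if best is None or criterion(cand, local) < criterion(best, local):
        best = cand
local = np.vstack([local, best])
```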

    Policy making using computer simulators for complex physical systems; Bayesian decision support for the development of adaptive strategies

    Get PDF
    Policy makers increasingly rely on computer models to aid policy judgements about complex systems. The climate system, for example, is extremely complicated, and its reaction to changes in radiative forcing through CO2 emissions can only be explored using models. Bayesian methods for making inferences about physical systems that combine information from computer simulators and system observations have become increasingly well studied. We apply some of these methods to the policy problem in which the decisions to be made are inputs to the computer model. Particular features of our methodology include: the provision of Bayesian decision support for the policy problem when it is known that policy may be adapted in reaction to future observations of the complex system; and careful integration of the knowledge that our computer simulators will evolve and improve over time, which may affect downstream strategies and, hence, current policy. Our methods also allow research investment questions to be explored in the context of the wider policy problem. For example, the question of whether an improved version of a computer simulator should be built, and how much it should be run, can be addressed as part of the policy problem.
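
    A minimal sketch of emulator-based decision support under this framing, with the policy choice as a simulator input and expected loss integrated over emulator uncertainty; the loss function and toy simulator are assumptions, not the paper's climate setup or its adaptive-strategy machinery.

```python
# Pick the policy input minimising expected loss under the emulator's
# predictive uncertainty, instead of relying on raw simulator runs alone.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(policy):  # toy stand-in for an expensive system model
    return (policy - 0.6) ** 2 + 0.1 * np.sin(20 * policy)

train_p = np.linspace(0, 1, 12)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(0.2)).fit(
    train_p, simulator(train_p.ravel()))

candidates = np.linspace(0, 1, 500)[:, None]
mu, sd = gp.predict(candidates, return_std=True)

# Quadratic loss in the system outcome, integrated over emulator
# uncertainty: E[outcome^2] = mu^2 + sd^2, so uncertain regions of policy
# space are penalised rather than ignored.
expected_loss = mu ** 2 + sd ** 2
best_policy = candidates[np.argmin(expected_loss)][0]
```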

    Emulating dynamic non-linear simulators using Gaussian processes

    Get PDF
    The dynamic emulation of non-linear deterministic computer codes whose output is a time series, possibly multivariate, is examined. Such computer models simulate the evolution of some real-world phenomenon over time, for example the climate or the functioning of the human brain. The models we are interested in are highly non-linear and exhibit tipping points, bifurcations and chaotic behaviour, yet each simulation run may be too time-consuming to perform analyses that require many runs, such as quantifying the variation in model output with respect to changes in the inputs. Therefore, Gaussian process emulators are used to approximate the output of the code. To do this, the flow map of the system under study is emulated over a short time period and then applied iteratively to predict the whole time series. A number of ways are proposed to take into account the uncertainty of the inputs to the emulators, once the initial conditions are fixed, and the correlation between them through the time series. The methodology is illustrated with two examples: the highly non-linear dynamical systems described by the Lorenz and Van der Pol equations. In both cases the predictive performance is relatively high, and the measure of uncertainty provided by the method reflects the extent of predictability in each system.
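
    A minimal sketch of the flow-map approach, assuming a 1-D logistic map as a stand-in for the Lorenz or Van der Pol systems: a GP learns the one-step map and is then iterated to predict the series. The paper's treatment of input uncertainty and its propagation through the iteration is omitted here.

```python
# Emulate the flow map x_t -> x_{t+1} over one short step with a GP,
# then iterate the emulator to roll out a full trajectory.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def flow_map(x):  # one step of a toy non-linear (chaotic) system
    return 3.7 * x * (1 - x)

x_train = np.linspace(0.01, 0.99, 30)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(0.1)).fit(
    x_train, flow_map(x_train.ravel()))

# Iterate the one-step emulator to predict the whole time series.
x = 0.2
trajectory = [x]
for _ in range(50):
    x = float(gp.predict(np.array([[x]]))[0])
    trajectory.append(x)
```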

    A Survey of Brain Inspired Technologies for Engineering

    Full text link
    Cognitive engineering is a multi-disciplinary field, and it is hence difficult to find a review article consolidating the leading developments in the field. The incredible pace at which technology is advancing pushes the boundaries of what is achievable in cognitive engineering. There are also differing approaches to cognitive engineering, brought about by the multi-disciplinary nature of the field and the vastness of possible applications, so research communities require frequent reviews to keep up to date with the latest trends. In this paper we discuss some of the approaches to cognitive engineering holistically, to clarify the reasoning behind the different approaches and to highlight their strengths and weaknesses. We then show how developments from seemingly disjointed views could be integrated to achieve the same goal of creating cognitive machines. By reviewing the major contributions in the different fields and showing the potential for a combined approach, this work intends to assist the research community in devising more unified methods and techniques for developing cognitive machines.