
    The Optimality of the US and Euro Area Taylor Rule

    The purpose of this paper is to examine the optimality of the monetary authorities' reaction function in the two-area, medium-size model MARCOS (US and euro areas). The parameters and the horizons of the output gap and inflation expectations in the Taylor rule are computed so as to minimise a loss function of the monetary authorities. However, investigating the optimality of the Taylor rule in the context of a large-scale macroeconomic model raises several difficulties: the model is non-linear and all the state variables potentially enter the optimal monetary policy rule. Furthermore, the optimality of the Taylor rule is assessed by minimising the loss function under the constraint of a large forward-looking model. To overcome these problems, Black, Macklem and Rose [1998] propose a stochastic-simulation-based method which has been applied to single-country macroeconomic models. To study the optimality of the Taylor rule in the case of a two-area model, we suppose that the economy is stochastically hit by numerous shocks (supply, demand, monetary, exchange rate and world demand) in each area and simulate MARCOS stochastically.
    Keywords: Monetary Policy, Computational Techniques, International Policy Transmission
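    The procedure can be illustrated in miniature: draw sequences of shocks, simulate the economy under a candidate rule, evaluate the authorities' loss, and search over the rule coefficients. The sketch below does this for a hypothetical two-equation backward-looking economy with illustrative coefficients, not MARCOS itself; the function and variable names are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def simulated_loss(params, T=200, n_draws=20, lam=0.5):
    """Average loss Var(pi) + lam * Var(y) under the rule i_t = a_pi*pi_t + a_y*y_t,
    estimated by stochastic simulation of a toy backward-looking economy
    (a stand-in for a large model such as MARCOS)."""
    a_pi, a_y = params
    rng = np.random.default_rng(0)   # common random numbers across candidate rules
    losses = []
    for _ in range(n_draws):
        pi = np.zeros(T)
        y = np.zeros(T)
        eps_supply, eps_demand = rng.normal(0.0, 0.5, size=(2, T))
        for t in range(1, T):
            i = a_pi * pi[t - 1] + a_y * y[t - 1]                          # candidate Taylor rule
            y[t] = 0.8 * y[t - 1] - 0.3 * (i - pi[t - 1]) + eps_demand[t]  # IS curve
            pi[t] = 0.9 * pi[t - 1] + 0.2 * y[t] + eps_supply[t]           # Phillips curve
        losses.append(np.var(pi) + lam * np.var(y))
    return float(np.mean(losses))

# Search for the rule coefficients that minimise the authorities' loss function.
result = minimize(simulated_loss, x0=[1.5, 0.5], method="Nelder-Mead")
print("loss-minimising (a_pi, a_y):", result.x)
```

    Reusing the same set of shock draws for every candidate rule keeps the simulated loss deterministic, which is what makes a standard derivative-free optimiser usable here.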

    Large-Scale simulations of plastic neural networks on neuromorphic hardware

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
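    The core of an event-based plasticity implementation is that synaptic traces are not touched on every time-step: each trace stores its value and the time of its last update, and is advanced analytically only when a spike event arrives. A minimal sketch of this idea for a single exponentially decaying trace follows; it is a generic illustration with made-up time constants, not the BCPNN or SpiNNaker code itself.

```python
import math
from dataclasses import dataclass

@dataclass
class EventDrivenTrace:
    """Exponential trace updated lazily, using the closed-form solution of
    dz/dt = -z / tau, instead of being stepped every simulation time-step."""
    tau: float           # trace time constant (ms)
    value: float = 0.0   # trace value at time `last_t`
    last_t: float = 0.0  # time of the last update (ms)

    def advance(self, t: float) -> float:
        """Analytically decay the trace from last_t to t."""
        self.value *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        return self.value

    def add_spike(self, t: float, increment: float = 1.0) -> None:
        """Decay to the spike time, then add the spike's contribution."""
        self.advance(t)
        self.value += increment

# Example: a presynaptic trace that is only touched at spike events.
z_pre = EventDrivenTrace(tau=10.0)
for spike_time in (5.0, 12.0, 40.0):
    z_pre.add_spike(spike_time)
print("trace value at t = 50 ms:", z_pre.advance(50.0))
```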

    Rule-based modeling of biochemical systems with BioNetGen

    Rule-based modeling involves the representation of molecules as structured objects and molecular interactions as rules for transforming the attributes of these objects. The approach is notable in that it allows one to systematically incorporate site-specific details about protein-protein interactions into a model for the dynamics of a signal-transduction system, but the method has other applications as well, such as following the fates of individual carbon atoms in metabolic reactions. The consequences of protein-protein interactions are difficult to specify and track with a conventional modeling approach because of the large number of protein phosphoforms and protein complexes that these interactions can potentially generate. Here, we focus on how a rule-based model is specified in the BioNetGen language (BNGL) and how a model specification is analyzed using the BioNetGen software tool. We also discuss new developments in rule-based modeling that should enable the construction and analysis of comprehensive models for signal-transduction pathways and similarly large-scale models for other biochemical systems.
    Key Words: Computational systems biology; mathematical modeling; combinatorial complexity; software; formal languages; stochastic simulation; ordinary differential equations; protein-protein interactions; signal transduction; metabolic networks.
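    The combinatorial problem the abstract refers to is easy to see: a protein with n independent phosphorylation sites already has 2^n phosphoforms, so the full reaction network cannot realistically be written out by hand. The sketch below illustrates, in plain Python rather than BNGL, how a single rule over structured objects ("phosphorylate any unmodified site") generates the full set of species automatically; it is an illustration of the idea, not of the BioNetGen tool itself.

```python
# A protein with n independent phosphorylation sites has 2**n phosphoforms,
# which is why enumerating species and reactions by hand quickly fails.
n_sites = 10
print("phosphoforms for", n_sites, "sites:", 2 ** n_sites)  # 1024

def phosphorylation_rule(species):
    """Toy rule: yield every product obtainable by phosphorylating one 'u' site.
    A species is represented as a tuple of site states ('u' or 'p')."""
    for i, state in enumerate(species):
        if state == "u":
            yield species[:i] + ("p",) + species[i + 1:]

# Apply the rule repeatedly to generate all reachable species from one seed.
start = ("u",) * 3
reachable, frontier = {start}, [start]
while frontier:
    species = frontier.pop()
    for product in phosphorylation_rule(species):
        if product not in reachable:
            reachable.add(product)
            frontier.append(product)
print("species generated by one rule:", len(reachable))  # 2**3 = 8
```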

    Design of Wind Turbine Tower Height and Blade Length: an Optimization Approach

    The wind industry is a fast-growing market and is quickly becoming competitive with traditional non-renewable energy resources. As with any developing industry, research must continually be redefined as more complex understandings of design variables are learned. Optimization studies are a common way to quickly refine design variable selections. Historical wind turbine data show that the tower hub height to rotor diameter ratio scales almost linearly. However, there is no specific rule that dictates the optimum hub height for a given diameter. This study addresses this question by using an Excel-based optimization program to determine the height-to-diameter ratio of a simulated turbine with the lowest cost of energy. Using a wind turbine power curve database and previous scaling relationships and cost models, the optimum hub height to rotor diameter ratio is predicted. The results of this simulation show that current cost and scaling models do not reflect an accurate optimum height-to-diameter ratio. However, these cost and scaling models can be modified to provide more accurate predictions of the optimum hub height for a given rotor diameter. This simulation predicts that future large-scale wind turbines will have aspect ratios closer to 0.5.
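    The optimisation itself can be sketched as a one-dimensional search over the hub-height-to-rotor-diameter ratio that minimises a cost-of-energy function. The example below uses deliberately simple placeholder relationships (a wind-shear power law for energy and a power-law tower cost), not the paper's Excel-based cost and scaling models, but it shows the shape of the calculation.

```python
from scipy.optimize import minimize_scalar

rotor_diameter = 100.0   # m, held fixed for this illustration

def cost_of_energy(aspect_ratio):
    """Toy cost-of-energy model: taller towers reach stronger winds
    (wind-shear power law) but cost more, so an interior optimum exists.
    All coefficients are illustrative placeholders, not the paper's models."""
    hub_height = aspect_ratio * rotor_diameter
    # Annual energy ~ wind speed cubed, with a power-law shear profile.
    wind_speed = 8.0 * (hub_height / 80.0) ** 0.14
    annual_energy = 0.5 * wind_speed ** 3 * rotor_diameter ** 2 * 1e-3
    # Tower cost grows faster than linearly with height; other costs are lumped.
    capital_cost = 1.5 * hub_height ** 1.6 + 2.0e3
    return capital_cost / annual_energy

result = minimize_scalar(cost_of_energy, bounds=(0.3, 2.0), method="bounded")
print("optimum hub height / rotor diameter ratio:", round(result.x, 2))
```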

    FiCoS: A fine-grained and coarse-grained GPU-powered deterministic simulator for biochemical networks.

    Mathematical models of biochemical networks can greatly facilitate the comprehension of the mechanisms at the basis of cellular processes, as well as the formulation of hypotheses that can be tested by means of targeted laboratory experiments. However, two issues might hamper the achievement of fruitful outcomes. On the one hand, detailed mechanistic models can involve hundreds or thousands of molecular species and their intermediate complexes, as well as hundreds or thousands of chemical reactions, a situation that commonly arises in rule-based modeling. On the other hand, the computational analysis of a model typically requires the execution of a large number of simulations for its calibration, or to test the effect of perturbations. As a consequence, the computational capabilities of modern Central Processing Units can easily be exceeded, possibly making the modeling of biochemical networks a worthless or ineffective effort. With the aim of overcoming the limitations of current state-of-the-art simulation approaches, we present in this paper FiCoS, a novel "black-box" deterministic simulator that effectively realizes both fine-grained and coarse-grained parallelization on Graphics Processing Units. In particular, FiCoS exploits two different integration methods, namely Dormand-Prince and Radau IIA, to efficiently solve both non-stiff and stiff systems of coupled Ordinary Differential Equations. We tested the performance of FiCoS against different deterministic simulators, considering models of increasing size and running analyses with increasing computational demands. FiCoS was able to dramatically speed up the computations by up to 855×, showing it to be a promising solution for the simulation and analysis of large-scale models of complex biological processes.
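    The stiff/non-stiff split that FiCoS handles on the GPU can be illustrated on a CPU with SciPy, whose solve_ivp offers an explicit Dormand-Prince pair ("RK45") and an implicit Radau method ("Radau"). The sketch below applies both to a toy mass-action enzyme model; the reaction system and rate constants are placeholders, and this is only an analogue of the solver choice, not FiCoS itself.

```python
from scipy.integrate import solve_ivp

def michaelis_menten(t, y, k1=1.0, km1=0.1, k2=0.05):
    """Toy enzyme kinetics E + S <-> ES -> E + P written as mass-action ODEs."""
    e, s, es, p = y
    v_bind = k1 * e * s - km1 * es
    v_cat = k2 * es
    return [-v_bind + v_cat, -v_bind, v_bind - v_cat, v_cat]

y0 = [1.0, 10.0, 0.0, 0.0]
t_span = (0.0, 200.0)

# Non-stiff regime: an explicit Dormand-Prince pair is cheap and accurate.
sol_nonstiff = solve_ivp(michaelis_menten, t_span, y0, method="RK45")

# Stiff regime (widely separated rate constants): implicit Radau stays stable
# where an explicit method would need an enormous number of tiny steps.
sol_stiff = solve_ivp(michaelis_menten, t_span, y0, method="Radau",
                      args=(1e4, 1e-2, 1e-4))

print("product formed:", sol_nonstiff.y[-1, -1], sol_stiff.y[-1, -1])
```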

    Towards a goal-oriented agent-based simulation framework for high-performance computing

    Currently, agent-based simulation frameworks force the user to choose between simulations involving a large number of agents (at the expense of limited agent reasoning capability) or simulations including agents with increased reasoning capabilities (at the expense of a limited number of agents per simulation). This paper describes a first attempt at putting goal-oriented agents into large agent-based (micro-)simulations. We discuss a model for goal-oriented agents in High-Performance Computing (HPC) and then briefly discuss its implementation in PyCOMPSs (a library that eases the parallelisation of tasks) to build a platform that benefits from a large number of agents with the capacity for complex cognitive reasoning.
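    The underlying scheme is that each agent's deliberation step (sense, select a goal, act) is a self-contained function that can be dispatched as an independent task and gathered once per simulation tick. The sketch below uses the standard-library ProcessPoolExecutor as a stand-in for the PyCOMPSs task runtime described in the paper, and a deliberately trivial agent model; all names are illustrative.

```python
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Placeholder goal-oriented agent: a position and an ordered list of goals."""
    ident: int
    position: float = 0.0
    goals: list = field(default_factory=lambda: ["explore", "return"])

def deliberate(agent: Agent, observation: float) -> Agent:
    """Sense -> select the active goal -> act; returns the updated agent."""
    goal = agent.goals[0] if observation < 10.0 else agent.goals[1]
    agent.position += 1.0 if goal == "explore" else -1.0
    return agent

def tick(agents, observation):
    """One simulation step: run every agent's deliberation as a parallel task.
    In the paper's setting these would be PyCOMPSs tasks on an HPC cluster."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(deliberate, agents, [observation] * len(agents)))

if __name__ == "__main__":
    population = [Agent(i) for i in range(1000)]
    for step in range(5):
        population = tick(population, observation=float(step))
    print("mean position:", sum(a.position for a in population) / len(population))
```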