
    Interval simulation: raising the level of abstraction in architectural simulation

    Detailed architectural simulators suffer from a long development cycle and extremely long evaluation times. This longstanding problem is further exacerbated in the multi-core processor era. Existing solutions address the simulation problem by either sampling the simulated instruction stream or by mapping the simulation models on FPGAs; these approaches achieve substantial simulation speedups while simulating performance in a cycle-accurate manner. This paper proposes interval simulation, which takes a completely different approach: interval simulation raises the level of abstraction and replaces the core-level cycle-accurate simulation model with a mechanistic analytical model. The analytical model estimates core-level performance by analyzing intervals, or the timing between two miss events (branch mispredictions and TLB/cache misses); the miss events are determined through simulation of the memory hierarchy, cache coherence protocol, interconnection network and branch predictor. By raising the level of abstraction, interval simulation reduces both development time and evaluation time. Our experimental results using the SPEC CPU2000 and PARSEC benchmark suites and the M5 multi-core simulator show good accuracy up to eight cores (average error of 4.6% and max error of 11% for the multi-threaded full-system workloads), while achieving a one-order-of-magnitude simulation speedup compared to cycle-accurate simulation. Moreover, interval simulation is easy to implement: our implementation of the mechanistic analytical model comprises only one thousand lines of code. Its high accuracy, fast simulation speed and ease of use make interval simulation a useful complement to the architect's toolbox for exploring system-level and high-level micro-architecture trade-offs.
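    The interval idea — steady-state dispatch punctuated by penalties at miss events — can be illustrated with a toy estimator (hypothetical function and parameters, a sketch of the general mechanism rather than the paper's actual model):

```python
# Toy sketch of interval-based performance estimation: total cycles are
# modeled as ideal balanced dispatch plus a penalty for each interval-ending
# miss event. All names and numbers here are illustrative assumptions.

def interval_cycles(instructions, dispatch_width, miss_events):
    """Estimate core cycles from intervals between miss events.

    miss_events: list of (position, penalty) pairs marking where an
    interval ends (branch misprediction or TLB/cache miss) and the
    cycles lost to that event.
    """
    base = instructions / dispatch_width         # ideal steady-state dispatch
    penalty = sum(p for _, p in miss_events)     # serialization at each miss
    return base + penalty

# Example: 1M instructions on a 4-wide core with two long-latency misses.
est = interval_cycles(1_000_000, 4, [(200_000, 150), (700_000, 150)])
# 250_000 ideal cycles + 300 penalty cycles = 250_300
```

    The miss-event positions and penalties would come from simulating the memory hierarchy and branch predictor, which is what keeps the core model itself so small.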

    Big Data in Critical Infrastructures Security Monitoring: Challenges and Opportunities

    Critical Infrastructures (CIs), such as smart power grids, transport systems, and financial infrastructures, are increasingly vulnerable to cyber threats due to the adoption of commodity computing facilities. Despite the use of several monitoring tools, recent attacks have proven that current defensive mechanisms for CIs are not effective enough against most advanced threats. In this paper we explore the idea of a framework leveraging multiple data sources to improve the protection capabilities of CIs. Challenges and opportunities are discussed along three main research directions: i) use of distinct and heterogeneous data sources, ii) monitoring with adaptive granularity, and iii) attack modeling and runtime combination of multiple data analysis techniques. Comment: EDCC-2014, BIG4CIP-201

    Evidence accumulation in a Laplace domain decision space

    Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than noisy exemplars of a single mean value, this approach makes the novel prediction that firing rates grow to the bound exponentially, and that across neurons there should be a distribution of different growth rates. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis." This finding is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models for memory. Comment: Revised for CB
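    The core construction can be sketched numerically: a standard drift-diffusion accumulator, plus a population in which each unit holds one Laplace-domain coefficient of the remaining distance to the bound. All parameters and the choice of rate constants below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Classic scalar drift-diffusion accumulator (Euler-Maruyama steps).
drift, sigma, dt, bound = 0.6, 1.0, 0.01, 2.0
x, xs = 0.0, []
while x < bound and len(xs) < 10_000:
    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    xs.append(x)

# Laplace-domain population: neuron k holds exp(-s_k * (bound - x)),
# a transform coefficient of the remaining distance to the bound, so
# its rate grows exponentially as evidence approaches the bound, with
# a different growth rate s_k for each neuron.
s = np.linspace(0.5, 4.0, 8)              # assumed spread of rate constants
rates = np.exp(-np.outer(s, bound - np.array(xs)))
```

    Inverting this transform (e.g. with a Post-style approximation) would yield the second population, with receptive fields tiling the "decision axis" of net accumulated evidence.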

    Synaptic state matching: a dynamical architecture for predictive internal representation and feature perception

    Here we consider the possibility that a fundamental function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single parsimonious computational framework. Beyond its utility as a potential model of cortical computation, artificial networks based on this principle have a remarkable capacity for internalizing dynamical systems, making them useful in a variety of application domains including time-series prediction and machine intelligence.
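    The two-state idea can be caricatured in a few lines: a rate network alternates between an externally driven state and a purely recurrent state, and each weight is nudged locally to shrink the mismatch between them. This is a hypothetical stand-in update rule for illustration only, not the paper's actual SSM dynamics (which match spike statistics, not instantaneous rates):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20
W = rng.normal(0, 0.1, (n, n))           # recurrent weights
inputs = rng.normal(0, 1.0, (500, n))    # stream of sensory patterns
eta = 0.05                               # assumed learning rate

for u in inputs:
    driven = np.tanh(u)                  # state under external drive
    internal = np.tanh(W @ driven)       # state under recurrent drive alone
    # Local update: each synapse sees only its presynaptic activity and
    # the postsynaptic mismatch between the two network states.
    W += eta * np.outer(driven - internal, driven)

# After training, the recurrent state should approximately reproduce
# the driven state for patterns like those seen during learning.
mismatch = float(np.mean(
    (np.tanh(W @ np.tanh(inputs[-1])) - np.tanh(inputs[-1])) ** 2))
```

    Even this crude rate-based version shows the flavor of the mechanism: a purely local rule pulls the internally generated state toward the statistics of the externally driven one.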

    Hamiltonian dynamics and geometry of phase transitions in classical XY models

    The Hamiltonian dynamics associated with classical, planar Heisenberg XY models is investigated for two- and three-dimensional lattices. Besides the conventional signatures of phase transitions, here obtained through time averages of thermodynamical observables in place of ensemble averages, qualitatively new information is derived from the temperature dependence of Lyapunov exponents. A Riemannian geometrization of Newtonian dynamics suggests considering other observables of geometric meaning tightly related to the largest Lyapunov exponent. The numerical computation of these observables - unusual in the study of phase transitions - sheds new light on the microscopic dynamical counterpart of thermodynamics, also pointing to the existence of some major change in the geometry of the mechanical manifolds at the thermodynamical transition. Through the microcanonical definition of the entropy, a relationship between thermodynamics and the extrinsic geometry of the constant-energy surfaces Σ_E of phase space can be naturally established. In this framework, an approximate formula is worked out, determining a highly non-trivial relationship between temperature and topology of the Σ_E. From this it can be understood that the appearance of a phase transition must be tightly related to a suitable major topology change of the Σ_E. This contributes to the understanding of the origin of phase transitions in the microcanonical ensemble. Comment: in press on Physical Review E, 43 pages, LaTeX (uses revtex), 22 PostScript figures
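    The setting — microcanonical time averages from Hamiltonian dynamics of a lattice XY model — can be sketched as a symplectic integration of H = Σ p²/2 + J Σ_⟨ij⟩ [1 − cos(θ_i − θ_j)]. Lattice size, coupling, and step count below are illustrative choices, not the paper's simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
L, J, dt, steps = 8, 1.0, 0.02, 2000
theta = rng.uniform(-np.pi, np.pi, (L, L))   # rotor angles on a 2D lattice
p = rng.normal(0, 0.5, (L, L))               # conjugate momenta

def forces(theta):
    # Torque on each rotor from its four nearest neighbours (periodic BCs).
    f = np.zeros_like(theta)
    for axis in (0, 1):
        for shift in (1, -1):
            f -= J * np.sin(theta - np.roll(theta, shift, axis=axis))
    return f

def energy(theta, p):
    e = 0.5 * np.sum(p ** 2)
    for axis in (0, 1):                      # count each bond once
        e += J * np.sum(1 - np.cos(theta - np.roll(theta, 1, axis=axis)))
    return e

e0 = energy(theta, p)
for _ in range(steps):                       # leapfrog: kick-drift-kick
    p += 0.5 * dt * forces(theta)
    theta += dt * p
    p += 0.5 * dt * forces(theta)
e1 = energy(theta, p)                        # conserved up to O(dt**2) drift
```

    Time averages of observables (and, with a tangent-space integration added, the largest Lyapunov exponent) are then accumulated along such constant-energy trajectories.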

    Dynamics and statistics of simple models with infinite-range attractive interaction

    In this paper we review a series of results obtained for 1D and 2D simple N-body dynamical models with infinite-range attractive interactions and without short-distance singularities. The free energy of both models can be exactly obtained in the canonical ensemble, while microcanonical results can be derived from numerical simulations. Both models show a phase transition from a low-energy clustered phase to a high-energy gaseous state, in analogy with the models introduced in the early '70s by Thirring and Hertel. The phase transition is second order for the 1D model, first order for the 2D model. Negative specific heat appears in both models near the phase transition point. For both models, in the presence of a negative specific heat, a cluster of collapsed particles coexists with a halo of higher-energy particles which perform long correlated flights, which lead to anomalous diffusion. The dynamical origin of the "superdiffusion" is different in the two models, being related to particle trapping and untrapping in the cluster in 1D, while in 2D the channelling of particles in an egg-crate effective potential is responsible for the effect. Both models are Lyapunov unstable and the maximal Lyapunov exponent λ has a peak just in the region preceding the phase transition. Moreover, in the low-energy limit λ increases proportionally to the square root of the internal energy, while in the high-energy region it vanishes as N^{-1/3}. Comment: 33 pages, Latex2 - 12 Figs - Proceedings of the Conference "The Chaotic Universe" held in Rome-Pescara in Feb. 199
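    A useful property of such infinite-range models is that the all-to-all force collapses onto the mean magnetization, so each time step costs O(N) rather than O(N²). A minimal sketch for a 1D mean-field rotor model with H = Σ p²/2 + (1/2N) Σ_ij [1 − cos(θ_i − θ_j)] (illustrative parameters, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 1000, 0.05, 500
theta = rng.uniform(-np.pi, np.pi, N)    # particle angles
p = rng.normal(0, 0.3, N)                # conjugate momenta

def force(theta):
    # The sum over all pairs reduces to the magnetization (mx, my),
    # giving the mean-field force in O(N) per step.
    mx, my = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    return my * np.cos(theta) - mx * np.sin(theta)

def energy(theta, p):
    mx, my = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    # Potential (1/2N) * sum_ij [1 - cos(θi - θj)] = (N/2)(1 - mx² - my²).
    return 0.5 * np.sum(p ** 2) + 0.5 * N * (1 - mx ** 2 - my ** 2)

e0 = energy(theta, p)
for _ in range(steps):                   # leapfrog integration
    p += 0.5 * dt * force(theta)
    theta += dt * p
    p += 0.5 * dt * force(theta)
e1 = energy(theta, p)                    # microcanonical: energy conserved
```

    Scanning the conserved energy per particle and measuring time-averaged kinetic temperature along such trajectories is how microcanonical caloric curves (and the negative-specific-heat region) are mapped out numerically.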