
    Generic Self-Adaptation to Reduce Design Effort for System-on-Chip


    TANGO: Transparent heterogeneous hardware Architecture deployment for eNergy Gain in Operation

    The paper examines how software systems actually use Heterogeneous Parallel Architectures (HPAs), with the goal of optimising power consumption on these resources. It argues for novel methods and tools that support software developers in optimising the power consumed when designing, developing, deploying and running software on HPAs, while keeping other quality aspects of the software at adequate, agreed levels. To this end, the paper discusses a reference architecture that supports energy efficiency at application construction, deployment, and operation, together with plans for its implementation and evaluation. Comment: Part of the Program Transformation for Programmability in Heterogeneous Architectures (PROHA) workshop, Barcelona, Spain, 12 March 2016; 7 pages, LaTeX, 3 PNG figures.

    Hierarchical Agent-based Adaptation for Self-Aware Embedded Computing Systems

    Transferred from Doria.

    NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    © 2016 Cheung, Schultz and Luk. NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high-performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation and deliver optimized performance, for example by tuning the degree of parallelism employed. The compilation process supports the use of PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A six-FPGA system can simulate a network of up to ~600,000 neurons and achieves real-time performance for up to 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
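    As an illustration only, the sketch below shows the kind of PyNN script the abstract refers to. It uses the standard pyNN.nest software backend, since the abstract does not name NeuroFlow's own backend module; the population sizes, connection probability, and STDP parameters are arbitrary choices for the example.

        import pyNN.nest as sim  # software backend; an FPGA backend would be swapped in here

        sim.setup(timestep=1.0)  # simulation timestep in ms

        # Poisson background input driving a population of integrate-and-fire
        # neurons, one of the neuron models the abstract lists.
        noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
        neurons = sim.Population(100, sim.IF_curr_exp())

        # Plastic connections trained with the spike-timing-dependent plasticity rule.
        stdp = sim.STDPMechanism(
            timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                                A_plus=0.01, A_minus=0.012),
            weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.1))
        sim.Projection(noise, neurons, sim.FixedProbabilityConnector(0.1),
                       synapse_type=stdp, receptor_type="excitatory")

        neurons.record("spikes")
        sim.run(1000.0)  # one second of biological time
        spikes = neurons.get_data("spikes")
        sim.end()

    Because PyNN is simulator-independent, retargeting such a script is essentially a change of the backend import, which is what makes this description style attractive for hardware platforms of the kind NeuroFlow describes.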

    Garnet: a middleware architecture for distributing data streams originating in wireless sensor networks

    We present an architectural framework, Garnet, which provides a data-stream-centric abstraction to encourage the manipulation and exploitation of data generated in sensor networks. By providing middleware services that allow mutually-unaware applications to manipulate sensor behaviour, it offers a scalable, extensible platform. We focus on sensor networks with both transmit and receive capabilities, as this combination poses greater challenges for managing and distributing sensed data. Our approach allows simple and sophisticated sensors to coexist, and allows data consumers to remain mutually unaware of each other. This also promotes the use of middleware services to mediate among consumers with potentially conflicting demands for shared data. Garnet has been implemented in Java; we report on our progress to date and outline some likely scenarios in which the use of our distributed architecture and accompanying middleware support enhances the task of sharing data in sensor network environments.
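    Garnet itself is implemented in Java and its API is not given in the abstract; the short Python sketch below, with all class and method names hypothetical, only illustrates the mediation idea described above: mutually-unaware consumers register conflicting demands on a shared data stream, and the middleware reconciles them before configuring the sensor and fanning readings out.

        from dataclasses import dataclass, field
        from typing import Callable, List


        @dataclass
        class StreamBroker:
            """Mediates between data consumers and a shared sensor data stream."""
            rate_requests_hz: List[float] = field(default_factory=list)
            subscribers: List[Callable[[dict], None]] = field(default_factory=list)

            def subscribe(self, callback: Callable[[dict], None], rate_hz: float) -> None:
                # Consumers register independently and never see one another.
                self.subscribers.append(callback)
                self.rate_requests_hz.append(rate_hz)

            def negotiated_rate_hz(self) -> float:
                # One possible mediation policy: sample fast enough for the most
                # demanding consumer; slower consumers ignore surplus readings.
                return max(self.rate_requests_hz, default=1.0)

            def publish(self, reading: dict) -> None:
                # Fan a sensed reading out to every registered consumer.
                for callback in self.subscribers:
                    callback(reading)


        broker = StreamBroker()
        broker.subscribe(lambda r: print("logger:", r), rate_hz=1.0)
        broker.subscribe(lambda r: print("alarm:", r), rate_hz=10.0)
        print("configure sensor at", broker.negotiated_rate_hz(), "Hz")
        broker.publish({"sensor": "temp-3", "value": 21.5})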