Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power by increasing the number
of cores within a digital processor, neuromorphic engineers and scientists can
complement this approach by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems which implement
more biologically realistic models of neurons and synapses together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; review of recently proposed neuromorphic computing platforms and systems.
Towards a Wearer-Centred Framework for Animal Biotelemetry
The emerging discipline of Animal-Computer Interaction (ACI) aims to understand the relation between animals and technology in naturalistic settings, to design technology that can support animals in different contexts and to develop user-centred research methods and frameworks that enable animals to take part in the design process as legitimate contributors [11]. Given existing interspecies differences and communication barriers, measuring the behaviour of animals involved in ACI research can be instrumental to achieving any or all of these aims, as a way of gauging the animals' patterns, needs and preferences. Indeed, measuring behaviour is a common practice among ACI researchers, who take various approaches to this task [5,15,17,24]. In this respect, the use of biotelemetry devices such as VHF tags and GPS trackers, or bio-logging and environmental sensors, has significant potential [22].
At the same time, biotelemetry has been used for many years in many areas of biological research. Biotelemetry is used to improve the quality of physiological and behavioural data collected from animals and in an attempt to reduce researchers' intrusion in the animals' habitat [2]. However, there is evidence that carrying biotelemetry tags may influence the bearer's physiology and behaviour [20]. Such impacts interfere with the validity of recorded data [14] and the welfare of individual animal wearers [1,3,13]. Neither of these effects is compatible with the animal-centred perspective advocated by ACI, on both scientific and ethical grounds. Our analysis of current body-attached device design and biotelemetry-enabled studies points to a general lack of wearer-centred perspective. To address these issues, we have developed a framework to inform the design of wearer-centred biotelemetry interventions, in order to support the implementation of animal-centred research methodologies and design solutions in ACI and other disciplines.
FAST: FAST Analysis of Sequences Toolbox.
FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought.
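The abstract likens FAST's tools to GNU Textutils applied to sequence records rather than lines. As a rough illustration of that idea, here is a minimal plain-Python sketch of what a fasgrep-style header filter does to Multi-FastA input. This is not FAST's actual implementation; the `parse_fasta` and `fasgrep_like` helpers are invented for illustration only.

```python
import re

def parse_fasta(text):
    """Yield (header, sequence) records from a Multi-FastA string."""
    header, seq = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:].strip(), []
        else:
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

def fasgrep_like(pattern, text):
    """Keep whole records whose header matches the regex (the grep analogy:
    the unit of selection is a record, not a line)."""
    regex = re.compile(pattern)
    return [(h, s) for h, s in parse_fasta(text) if regex.search(h)]

records = """>seq1 Homo sapiens COX1
ATGTTC
>seq2 Mus musculus COX1
ATGTTT
>seq3 Homo sapiens CYTB
ATGACC
"""

hits = fasgrep_like(r"Homo sapiens", records)
print([h for h, _ in hits])  # headers of the two matching records
```

In FAST itself the same selection would be a single shell command composable with other tools via pipes, which is what makes the compact workflow encoding described above possible.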
Dynamic Power Management for Neuromorphic Many-Core Systems
This work presents a dynamic power management architecture for neuromorphic
many-core systems such as SpiNNaker. A fast dynamic voltage and frequency
scaling (DVFS) technique is presented which allows the processing elements (PE)
to change their supply voltage and clock frequency individually and
autonomously within less than 100 ns. This is employed by the neuromorphic
simulation software flow, which defines the performance level (PL) of the PE
based on the actual workload within each simulation cycle. A test chip in 28 nm
SLP CMOS technology has been implemented. It includes 4 PEs which can be scaled
from 0.7 V to 1.0 V with frequencies from 125 MHz to 500 MHz at three distinct
PLs. By measurement of three neuromorphic benchmarks it is shown that the total
PE power consumption can be reduced by 75%, with 80% baseline power reduction
and a 50% reduction of energy per neuron and synapse computation, all while
maintaining temporary peak system performance to achieve biological real-time
operation of the system. A numerical model of this power management scheme is
derived which allows DVFS architecture exploration for neuromorphics. The
proposed technique is to be used for the second-generation SpiNNaker
neuromorphic many-core system.
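The workload-driven selection of a performance level per simulation cycle can be sketched numerically. The following is an illustrative model, not the paper's derived one: the endpoint voltage/frequency pairs (0.7 V / 125 MHz and 1.0 V / 500 MHz) come from the abstract, while the intermediate PL and the effective-capacitance constant are assumed, and power uses the standard CMOS dynamic-power estimate P = a * C * V^2 * f.

```python
# Illustrative DVFS model (assumptions noted inline; not the paper's model).
# Endpoint (V, f) pairs are from the abstract; the middle PL is assumed.
PERFORMANCE_LEVELS = [
    (0.70, 125e6),  # PL0: lowest voltage/frequency
    (0.85, 250e6),  # PL1: assumed intermediate point
    (1.00, 500e6),  # PL2: peak, for biological real-time bursts
]

def select_pl(cycles_needed, cycle_time_s):
    """Pick the lowest PL whose clock can finish the workload within
    one simulation cycle; saturate at the peak PL otherwise."""
    for pl, (_, f) in enumerate(PERFORMANCE_LEVELS):
        if cycles_needed <= f * cycle_time_s:
            return pl
    return len(PERFORMANCE_LEVELS) - 1

def dynamic_power(pl, c_eff=1e-9, activity=1.0):
    """Classic CMOS dynamic power estimate: P = a * C * V^2 * f.
    c_eff and activity are placeholder values, not measured ones."""
    v, f = PERFORMANCE_LEVELS[pl]
    return activity * c_eff * v * v * f

# A light simulation cycle fits at PL0; a heavy one needs the peak PL.
light = select_pl(cycles_needed=1e5, cycle_time_s=1e-3)
heavy = select_pl(cycles_needed=4e5, cycle_time_s=1e-3)
print(light, heavy)  # prints: 0 2
```

Because power scales with V^2 * f, dropping from PL2 to PL0 in this toy model cuts dynamic power by roughly a factor of eight, which is the intuition behind the measured 75% reduction in total PE power when peak performance is only needed transiently.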
Chiminey: Reliable Computing and Data Management Platform in the Cloud
Moving scientific experiments that are embarrassingly parallel, long-running
and data-intensive into a cloud-based execution environment is a desirable,
though complex, undertaking for many researchers. The management of
such virtual environments is cumbersome and not necessarily within the core
skill set for scientists and engineers. We present here Chiminey, a software
platform that enables researchers to (i) run applications on both traditional
high-performance computing and cloud-based computing infrastructures, (ii)
handle failure during execution, (iii) curate and visualise execution outputs,
(iv) share such data with collaborators or the public, and (v) search for
publicly available data.
Comment: Preprint, ICSE 201
Synthetic biology: advancing biological frontiers by building synthetic systems
Advances in synthetic biology are contributing
to diverse research areas, from basic biology to
biomanufacturing and disease therapy. We discuss the
theoretical foundation, applications, and potential of
this emerging field.
Skills and Knowledge for Data-Intensive Environmental Research.
The scale and magnitude of complex and pressing environmental issues lend urgency to the need for integrative and reproducible analysis and synthesis, facilitated by data-intensive research approaches. However, the recent pace of technological change has been such that appropriate skills to accomplish data-intensive research are lacking among environmental scientists, who more than ever need greater access to training and mentorship in computational skills. Here, we provide a roadmap for raising data competencies of current and next-generation environmental researchers by describing the concepts and skills needed for effectively engaging with the heterogeneous, distributed, and rapidly growing volumes of available data. We articulate five key skills: (1) data management and processing, (2) analysis, (3) software skills for science, (4) visualization, and (5) communication methods for collaboration and dissemination. We provide an overview of the current suite of training initiatives available to environmental scientists and models for closing the skill-transfer gap.