Efficient ConvNets for Analog Arrays
Analog arrays are a promising upcoming hardware technology with the potential
to drastically speed up deep learning. Their main advantage is that they
compute matrix-vector products in constant time, irrespective of the size of
the matrix. However, early convolution layers in ConvNets map very unfavorably onto analog arrays, because their kernel matrices are typically small and the constant-time operation must be iterated sequentially a large number of times, reducing the speed-up advantage for ConvNets. Here, we propose to
replicate the kernel matrix of a convolution layer on distinct analog arrays,
and randomly divide parts of the compute among them, so that multiple kernel
matrices are trained in parallel. With this modification, analog arrays execute ConvNets with an acceleration factor proportional to the number of kernel matrices used per layer (16-128 in our tests). Despite having more free
parameters, we show analytically and in numerical experiments that this
convolution architecture is self-regularizing and implicitly learns similar
filters across arrays. We also report superior performance on a number of datasets and increased robustness to adversarial attacks. Our investigation suggests revising the notion that mixed analog-digital hardware is unsuitable for ConvNets.
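To make the replication scheme concrete, here is a minimal sketch in plain C++. It lowers a convolution to per-patch matrix-vector products (the im2col view) and routes each patch to one of several replicated kernel matrices at random; all sizes and names (KERNEL_SIZE, NUM_REPLICAS, and so on) are illustrative assumptions rather than values from the paper, and on the proposed hardware each matvec call would be a single constant-time analog operation.

    // Sketch: forward pass of a convolution lowered to per-patch
    // matrix-vector products, with the kernel matrix replicated across
    // NUM_REPLICAS "analog arrays" and each patch assigned to a replica
    // at random, so patches of one image are processed in parallel.
    #include <array>
    #include <cstddef>
    #include <random>
    #include <vector>

    constexpr std::size_t KERNEL_SIZE  = 9 * 3;  // e.g. 3x3 kernel, 3 input channels
    constexpr std::size_t OUT_CHANNELS = 16;
    constexpr std::size_t NUM_REPLICAS = 4;      // analog arrays per layer (assumed)

    using Patch        = std::array<float, KERNEL_SIZE>;  // one im2col column
    using KernelMatrix = std::array<std::array<float, KERNEL_SIZE>, OUT_CHANNELS>;

    // On the analog hardware this whole loop nest would be one
    // constant-time operation; here it is an ordinary matvec.
    std::array<float, OUT_CHANNELS> matvec(const KernelMatrix& w, const Patch& x) {
        std::array<float, OUT_CHANNELS> y{};
        for (std::size_t o = 0; o < OUT_CHANNELS; ++o)
            for (std::size_t i = 0; i < KERNEL_SIZE; ++i)
                y[o] += w[o][i] * x[i];
        return y;
    }

    // Forward pass: each patch is routed to one randomly chosen replica.
    // During training, each replica receives gradients only from its own
    // patches, which is what drives the implicit filter agreement.
    std::vector<std::array<float, OUT_CHANNELS>> forward(
            const std::array<KernelMatrix, NUM_REPLICAS>& replicas,
            const std::vector<Patch>& patches,
            std::mt19937& rng) {
        std::uniform_int_distribution<std::size_t> pick(0, NUM_REPLICAS - 1);
        std::vector<std::array<float, OUT_CHANNELS>> out;
        out.reserve(patches.size());
        for (const Patch& p : patches)
            out.push_back(matvec(replicas[pick(rng)], p));  // random replica per patch
        return out;
    }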
Doubly stochastic continuous-time hidden Markov approach for analyzing genome tiling arrays
Microarrays have been developed that tile the entire nonrepetitive genomes of
many different organisms, allowing for the unbiased mapping of active
transcription regions or protein binding sites across the entire genome. These
tiling array experiments produce massive, correlated data sets with many experimental artifacts, presenting challenges that require innovative analysis methods and efficient computational algorithms. This paper
presents a doubly stochastic latent variable analysis method for transcript
discovery and protein binding region localization using tiling array data. This
model is unique in that it considers actual genomic distance between probes.
Additionally, the model is designed to be robust to cross-hybridized and
nonresponsive probes, which can often lead to false-positive results in
microarray experiments. We apply our model to a transcript finding data set to
illustrate the consistency of our method. Additionally, we apply our method to
a spike-in experiment that can be used as a benchmark data set for researchers
interested in developing and comparing future tiling array methods. The results indicate that our method is powerful and accurate, and can be used on a single sample without control experiments, thus defraying some of the overhead cost of conducting experiments on tiling arrays.

Comment: Published at http://dx.doi.org/10.1214/09-AOAS248 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
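A hedged sketch of the core mechanic, a hidden Markov chain whose transition probabilities depend on the genomic distance between adjacent probes, is given below in C++. It uses the closed-form transition matrix of a two-state continuous-time Markov chain and a two-component emission mixture whose broad outlier component absorbs cross-hybridized or nonresponsive probes; all rates, means, and weights are illustrative placeholders, not the paper's model or estimates.

    // Sketch: scaled forward algorithm for a two-state continuous-time HMM
    // (state 0 = background, state 1 = transcribed/bound) over tiling-array
    // probes ordered by genomic position.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Probe { double position; double intensity; };

    const double PI = 3.141592653589793;

    // Two-state CTMC with generator Q = [[-a, a], [b, -b]]: closed-form
    // transition matrix over a genomic distance d (in bases).
    void transition(double a, double b, double d, double P[2][2]) {
        double s = a + b, e = std::exp(-s * d);
        P[0][0] = (b + a * e) / s;  P[0][1] = (a - a * e) / s;
        P[1][0] = (b - b * e) / s;  P[1][1] = (a + b * e) / s;
    }

    double gauss(double x, double mu, double sd) {
        double z = (x - mu) / sd;
        return std::exp(-0.5 * z * z) / (sd * std::sqrt(2.0 * PI));
    }

    // Emission: the state's own Gaussian mixed with a broad outlier
    // component, so a single aberrant probe cannot dominate the likelihood.
    double emit(double x, double mu, double sd) {
        const double outlier_weight = 0.05;        // assumed, not estimated
        return (1.0 - outlier_weight) * gauss(x, mu, sd)
             + outlier_weight * gauss(x, 0.0, 10.0);
    }

    // Returns the log-likelihood of the intensity sequence.
    double forward_loglik(const std::vector<Probe>& probes) {
        const double a = 0.01, b = 0.05;           // switching rates per base (assumed)
        const double mu[2] = {0.0, 2.0}, sd[2] = {1.0, 1.0};  // background, signal
        double alpha[2] = {0.5 * emit(probes[0].intensity, mu[0], sd[0]),
                           0.5 * emit(probes[0].intensity, mu[1], sd[1])};
        double scale = alpha[0] + alpha[1];
        alpha[0] /= scale; alpha[1] /= scale;
        double loglik = std::log(scale);
        for (std::size_t t = 1; t < probes.size(); ++t) {
            double P[2][2];
            transition(a, b, probes[t].position - probes[t - 1].position, P);
            double next[2];
            for (int j = 0; j < 2; ++j)
                next[j] = (alpha[0] * P[0][j] + alpha[1] * P[1][j])
                        * emit(probes[t].intensity, mu[j], sd[j]);
            scale = next[0] + next[1];             // rescale to avoid underflow
            alpha[0] = next[0] / scale; alpha[1] = next[1] / scale;
            loglik += std::log(scale);
        }
        return loglik;
    }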
Detector and Telescope Development for ProtoEXIST and Fine Beam Measurements of Spectral Response of CZT Detectors
We outline our plan to develop ProtoEXIST, a balloon-borne prototype
experiment for the Energetic X-ray Imaging Survey Telescope (EXIST) for the
Black Hole Finder Probe. EXIST will consist of multiple wide-field hard X-ray
coded-aperture telescopes. The current design of the EXIST mission employs two
types of telescope systems: high energy telescopes (HETs) using CZT detectors,
and low energy telescopes (LETs) using Si detectors. With ProtoEXIST, we will
develop and demonstrate the technologies required for the EXIST HETs. As part
of our development efforts, we also present recent laboratory measurements of
the spectral response and efficiency variation of imaging CZT detectors on a
fine scale (~0.5 mm). The preliminary results confirm the need for multi-pixel
readouts and small inter-pixel gaps to achieve uniform spectral response and
high detection efficiency across detectors.

Comment: 9 pages, 12 figures, 1 table; appears in SPIE 2005 proceedings (5898: UV, X-ray, and Gamma-ray Space Instrumentation for Astronomy XIV).
Recording advances for neural prosthetics
An important challenge for neural prosthetics research is to record from populations of neurons over long periods of time, ideally for the lifetime of the patient. Two new advances toward this goal are described: the use of local field potentials (LFPs) and autonomously positioned recording electrodes. LFPs are the composite extracellular potential field from several hundred neurons around the electrode tip, and LFP recordings can be maintained for longer periods of time than single-cell recordings. We find that similar information can be decoded from LFP and spike recordings, with better performance for state decoding from LFPs and, depending on the area, equivalent or slightly worse performance for decoding the direction of planned movements. Movable electrodes in microdrives can be adjusted in the tissue to optimize recordings, but their movements must be automated to be of practical benefit to patients. We have developed automation algorithms and a meso-scale autonomous electrode testbed, and demonstrated that this system can autonomously isolate and maintain the recorded signal quality of single cells in the cortex of awake, behaving monkeys. These two advances show promise for very long-term recording in neural prosthetic applications.
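As a generic stand-in for the kind of LFP state decode described above (not the authors' algorithm), the C++ sketch below computes band power per electrode with a naive DFT and feeds it to a linear classifier; the band edges, sampling rate, and weights are illustrative assumptions.

    // Sketch: decoding a binary behavioral state from LFP band power.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    const double PI = 3.141592653589793;

    // Power of one LFP window within [f_lo, f_hi] Hz via a naive DFT.
    double band_power(const std::vector<double>& window, double fs,
                      double f_lo, double f_hi) {
        std::size_t n = window.size();
        double power = 0.0;
        for (std::size_t k = 1; k < n / 2; ++k) {
            double f = k * fs / n;
            if (f < f_lo || f > f_hi) continue;    // keep only in-band bins
            double re = 0.0, im = 0.0;
            for (std::size_t t = 0; t < n; ++t) {
                re += window[t] * std::cos(2.0 * PI * k * t / n);
                im -= window[t] * std::sin(2.0 * PI * k * t / n);
            }
            power += (re * re + im * im) / (double)(n * n);
        }
        return power;
    }

    // Linear state decode: one log band-power feature per electrode.
    bool decode_plan_state(const std::vector<std::vector<double>>& electrode_windows,
                           const std::vector<double>& weights, double bias,
                           double fs) {
        double score = bias;
        for (std::size_t e = 0; e < electrode_windows.size(); ++e)
            score += weights[e] * std::log(band_power(electrode_windows[e], fs,
                                                      10.0, 30.0));  // band assumed
        return score > 0.0;  // e.g. "planning" vs "baseline"
    }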
Transformations of High-Level Synthesis Codes for High-Performance Computing
Specialized hardware architectures promise a major step in performance and
energy efficiency over the traditional load/store devices currently employed in
large-scale computing systems. The adoption of high-level synthesis (HLS) from
languages such as C/C++ and OpenCL has greatly increased programmer
productivity when designing for such platforms. While this has enabled a wider
audience to target specialized hardware, the optimization principles known from
traditional software design are no longer sufficient to implement
high-performance codes. Fast and efficient codes for reconfigurable platforms
are thus still challenging to design. To alleviate this, we present a set of
optimizing transformations for HLS, targeting scalable and efficient
architectures for high-performance computing (HPC) applications. Our work
provides a toolbox for developers, where we systematically identify classes of
transformations, the characteristics of their effect on the HLS code and the
resulting hardware (e.g., increasing data reuse or resource consumption), and
the objectives that each transformation can target (e.g., resolve interface
contention, or increase parallelism). We show how these can be used to
efficiently exploit pipelining, on-chip distributed fast memory, and on-chip
streaming dataflow, allowing for massively parallel architectures. To quantify
the effect of our transformations, we use them to optimize a set of
throughput-oriented FPGA kernels, demonstrating that our enhancements are
sufficient to scale up parallelism within the hardware constraints. With the
transformations covered, we hope to establish a common framework for
performance engineers, compiler developers, and hardware developers, to tap
into the performance potential offered by specialized hardware architectures
using HLS.
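As a concrete instance of one transformation class from this toolbox, the sketch below shows a 1D 3-tap stencil rewritten for data reuse and pipelining in Vitis-HLS-style C++: buffering inputs in a fully partitioned shift window means each element is read from memory exactly once, and the loop can pipeline at one element per cycle. The pragma spellings follow Xilinx Vitis HLS; the kernel itself is an illustrative example, not one of the paper's benchmarks.

    // Sketch: data reuse + pipelining for a 3-tap moving-average stencil.
    // A naive version reads each input three times from off-chip memory;
    // the shift window turns it into a single streaming pass.
    #include <cstddef>

    constexpr std::size_t N = 1024;

    void stencil3(const float in[N], float out[N]) {
        float window[3] = {0.0f, 0.0f, 0.0f};      // on-chip shift register
    #pragma HLS ARRAY_PARTITION variable=window complete
        for (std::size_t i = 0; i < N; ++i) {
    #pragma HLS PIPELINE II=1
            // Shift in the next element; each input is read exactly once.
            window[0] = window[1];
            window[1] = window[2];
            window[2] = in[i];
            // Emit once the window is full; boundary elements out[0] and
            // out[N-1] are left untouched in this sketch.
            if (i >= 2)
                out[i - 1] = (window[0] + window[1] + window[2]) / 3.0f;
        }
    }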