A field guide for Agency staff operating the SIMRAD EY500 portable scientific echosounder. 2nd draft 3rd August 1999
This manual has been produced by members of the National Acoustics Group (NAG) and represents the first in a series of outputs designed to promote co-ordination and consistency
in Agency hydroacoustic surveys. It is designed as a field guide for Agency staff operating the SIMRAD EY500 portable scientific echosounder. It should be simple enough for a newcomer to the EY500 to set up and run a mobile hydroacoustic survey with only some knowledge of the supporting theory. It should act as guidance for the standardisation of survey procedures, providing a concise list of settings and recommendations that can be used as a quick reference in the field. This manual condenses five years of practical experience of surveying fish populations using SIMRAD hardware and software on large rivers and still waters throughout England and Wales. This document should be used as a companion to the manufacturer's instruction manual, not as a substitute for it.
A review of advances in pixel detectors for experiments with high rate and radiation
The Large Hadron Collider (LHC) experiments ATLAS and CMS have established
hybrid pixel detectors as the instrument of choice for particle tracking and
vertexing in high rate and radiation environments, as they operate close to the
LHC interaction points. With the High Luminosity-LHC upgrade now in sight, for
which the tracking detectors will be completely replaced, new generations of
pixel detectors are being devised. They have to address enormous challenges in
terms of data throughput and radiation levels, ionizing and non-ionizing, that
harm the sensing and readout parts of pixel detectors alike. Advances in
microelectronics and microprocessing technologies now enable large scale
detector designs with unprecedented performance in measurement precision (space
and time), radiation hard sensors and readout chips, hybridization techniques,
lightweight supports, and fully monolithic approaches to meet these challenges.
This paper reviews the world-wide effort on these developments.
Comment: 84 pages with 46 figures. Review article. For submission to Rep. Prog. Phys.
The IceCube Neutrino Observatory: Instrumentation and Online Systems
The IceCube Neutrino Observatory is a cubic-kilometer-scale high-energy
neutrino detector built into the ice at the South Pole. Construction of
IceCube, the largest neutrino detector built to date, was completed in 2011 and
enabled the discovery of high-energy astrophysical neutrinos. We describe here
the design, production, and calibration of the IceCube digital optical module
(DOM), the cable systems, computing hardware, and our methodology for drilling
and deployment. We also describe the online triggering and data filtering
systems that select candidate neutrino and cosmic ray events for analysis. Due
to a rigorous pre-deployment protocol, 98.4% of the DOMs in the deep ice are
operating and collecting data. IceCube routinely achieves a detector uptime of
99% by emphasizing software stability and monitoring. Detector operations have
been stable since construction was completed, and the detector is expected to
operate at least until the end of the next decade.
Comment: 83 pages, 50 figures; updated with minor changes from journal review and proofing.
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced by the
ever-increasing network of sensors connected to the Internet pose challenges for
the power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why neuromorphic architectures are useful for edge
devices and present the advantages, drawbacks, and open problems in the field of
neuro-memristive circuits for edge computing.
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Research has shown that convolutional neural networks contain significant
redundancy, and high classification accuracy can be obtained even when weights
and activations are reduced from floating point to binary values. In this
paper, we present FINN, a framework for building fast and flexible FPGA
accelerators using a flexible heterogeneous streaming architecture. By
utilizing a novel set of optimizations that enable efficient mapping of
binarized neural networks to hardware, we implement fully connected,
convolutional and pooling layers, with per-layer compute resources being
tailored to user-provided throughput requirements. On a ZC706 embedded FPGA
platform drawing less than 25 W total system power, we demonstrate up to 12.3
million image classifications per second with 0.31 μs latency on the MNIST
dataset with 95.8% accuracy, and 21,906 image classifications per second with
283 μs latency on the CIFAR-10 and SVHN datasets with respectively 80.1%
and 94.9% accuracy. To the best of our knowledge, ours are the fastest
classification rates reported to date on these benchmarks.
Comment: To appear in the 25th International Symposium on Field-Programmable Gate Arrays, February 2017.
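The core trick behind binarized inference as described above is that once weights and activations are constrained to {-1, +1}, a dot product reduces to an XNOR followed by a population count, which maps efficiently onto FPGA logic. The following sketch illustrates that arithmetic in plain Python; the function and variable names are illustrative assumptions, not FINN's actual API.

```python
# Illustrative sketch of binarized dot-product arithmetic: +/-1 vectors are
# encoded as bitmasks, and the dot product becomes XNOR + popcount.
# All names here are hypothetical, chosen for the example.

def pack_bits(values):
    """Encode a vector of +/-1 values as an integer bitmask (+1 -> bit set)."""
    word = 0
    for i, v in enumerate(values):
        if v == +1:
            word |= 1 << i
    return word

def binarized_dot(w_bits, a_bits, n):
    """Dot product of two +/-1 vectors of length n from their bit encodings.

    XNOR marks positions where the vectors agree; each agreement contributes
    +1 and each disagreement -1, so the result is 2 * matches - n.
    """
    matches = bin(~(w_bits ^ a_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

w = [+1, -1, +1, +1]
a = [+1, +1, +1, -1]
# Elementwise products: +1 - 1 + 1 - 1 = 0
assert binarized_dot(pack_bits(w), pack_bits(a), 4) == 0
```

On hardware, the XNOR and popcount replace wide multipliers entirely, which is what makes the per-layer compute tailoring in a streaming architecture so cheap.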
NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps
Convolutional neural networks (CNNs) have become the dominant neural network
architecture for solving many state-of-the-art (SOA) visual processing tasks.
Even though Graphical Processing Units (GPUs) are most often used in training
and deploying CNNs, their power efficiency is less than 10 GOp/s/W for
single-frame runtime inference. We propose a flexible and efficient CNN
accelerator architecture called NullHop that implements SOA CNNs useful for
low-power and low-latency application scenarios. NullHop exploits the sparsity
of neuron activations in CNNs to accelerate the computation and reduce memory
requirements. The flexible architecture allows high utilization of available
computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can
process up to 128 input and 128 output feature maps per layer in a single pass.
We implemented the proposed architecture on a Xilinx Zynq FPGA platform and
present results showing how our implementation reduces external memory
transfers and compute time in five different CNNs ranging from small ones up to
the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using
Mentor Modelsim in a 28 nm process with a clock frequency of 500 MHz show that
the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop
achieves an efficiency of 368%, maintains over 98% utilization of the MAC
units, and achieves a power efficiency of over 3 TOp/s/W in a core area of
6.3 mm². As further proof of NullHop's usability, we interfaced its FPGA
implementation with a neuromorphic event camera for real-time interactive
demonstrations.