A Hardware Efficient Random Number Generator for Nonuniform Distributions with Arbitrary Precision
Nonuniform random numbers are key to many technical applications, and designing efficient hardware implementations of nonuniform random number generators is a very active research field. However, most state-of-the-art architectures are either tailored to specific distributions or consume a large amount of hardware resources. At ReConFig 2010, we presented a new design, usable for arbitrary distributions and precisions, that saves up to 48% of area compared to state-of-the-art inversion-based implementations. In this paper, we introduce a more flexible version together with a refined segmentation scheme that allows the approximation error to be reduced significantly further. We provide a free software tool that lets users implement their own distributions easily, and we have tested our random number generator thoroughly by statistical analysis and two application tests.
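The inversion method underlying such generators can be sketched as follows. This is an illustrative Python sketch, not the paper's hardware design; the exponential distribution is used here only because its inverse CDF has a closed form, whereas the hardware design approximates arbitrary inverse CDFs with segmented table lookups:

```python
import math
import random

def exponential_by_inversion(lam, rng=random.random):
    """Inversion method: feed a uniform sample through the target
    distribution's inverse CDF. For Exp(lam), F^-1(u) = -ln(1 - u) / lam."""
    u = rng()
    return -math.log(1.0 - u) / lam

# Hardware inversion-based generators replace the exact inverse CDF with a
# segmented piecewise-polynomial approximation stored in lookup tables, which
# is what the refined segmentation scheme in the abstract improves.
random.seed(0)
samples = [exponential_by_inversion(2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # should be close to 1/lam = 0.5
```

Because inversion only consumes one uniform input per output sample, swapping the lookup tables changes the target distribution without touching the uniform source.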
Customisable arithmetic hardware designs
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed-up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added
Efficient Programmable Random Variate Generation Accelerator from Sensor Noise
We introduce a method for non-uniform random number generation based on
sampling a physical process in a controlled environment. We demonstrate one
proof-of-concept implementation of the method that reduces the error of Monte
Carlo integration of a univariate Gaussian by 1068 times while doubling the
speed of the Monte Carlo simulation. We show that the supply voltage and
temperature of the physical process must be controlled to prevent the mean and
standard deviation of the random number generator from drifting.Alan Turing Institute award: TU/B/000096
EPSRC grants: EP/N510129/1, EP/R022534/1, EP/V004654/1 and EP/L015889/
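The Monte Carlo integration task used as a benchmark above can be sketched in plain Python. This is a generic sketch, not the paper's method: the paper replaces the pseudorandom uniform source below with samples drawn from conditioned physical sensor noise, which is what drives down the integration error:

```python
import math
import random

def mc_gaussian_integral(n, rng=random.random, a=-3.0, b=3.0):
    """Monte Carlo estimate of the integral of the standard normal pdf
    over [a, b], using uniform draws; the quality of `rng` determines
    the estimator's error."""
    width = b - a
    total = 0.0
    for _ in range(n):
        x = a + width * rng()  # uniform point in [a, b]
        total += math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return width * total / n

random.seed(1)
est = mc_gaussian_integral(200_000)
# True value over [-3, 3] is erf(3 / sqrt(2)) ~ 0.9973
```

A random variate generator that emits Gaussian-distributed points directly (rather than uniform points weighted by the pdf) turns this into importance sampling and further reduces variance, which is the role a nonuniform generator plays in such a simulation.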
B²N²: Resource-efficient Bayesian neural network accelerator using Bernoulli sampler on FPGA
A resource-efficient hardware accelerator for Bayesian neural networks (BNNs), named B²N² (Bernoulli-random-number-based Bayesian neural network accelerator), is proposed. As neural networks expand into risk-sensitive domains where mispredictions may cause serious social and economic losses, evaluating an NN's confidence in its predictions has emerged as a critical concern. Among many uncertainty-evaluation methods, BNNs provide a theoretically grounded way to evaluate the uncertainty of an NN's output by treating network parameters as random variables. Exploiting the central limit theorem, we propose to replace costly Gaussian random number generators (RNGs) with Bernoulli RNGs, which can be implemented efficiently in hardware since the outcome of a Bernoulli distribution is binary. We demonstrate that B²N², implemented on a Xilinx ZCU104 FPGA board, consumes only 465 DSPs and 81,661 LUTs, corresponding to 50.9% and 14.3% reductions compared to Gaussian-BNN (Hirayama et al., 2020) implemented on the same FPGA board for a fair comparison. We further compare B²N² with VIBNN (Cai et al., 2018), showing that B²N² reduces DSP and LUT usage by 50.9% and 57.9%, respectively. Owing to the reduced hardware resources, B²N² improves energy efficiency by 7.50% and 57.5% compared to Gaussian-BNN (Hirayama et al., 2020) and VIBNN (Cai et al., 2018), respectively.
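The central-limit-theorem trick the abstract describes can be sketched in a few lines. This is an illustrative software model under assumed parameters (128 Bernoulli bits, p = 0.5), not the paper's FPGA circuit: each Bernoulli bit costs only a comparator against a uniform source, so summing bits replaces a full Gaussian RNG:

```python
import random

def clt_gaussian(n_bits=128, p=0.5, rng=random.random):
    """Approximate one standard normal sample by summing n_bits
    Bernoulli(p) outcomes and standardising (central limit theorem).
    In hardware, each bit is a single comparison, so no Gaussian RNG
    (and no DSP-heavy transform) is needed."""
    s = sum(1 for _ in range(n_bits) if rng() < p)
    mean = n_bits * p
    std = (n_bits * p * (1.0 - p)) ** 0.5
    return (s - mean) / std

random.seed(42)
samples = [clt_gaussian() for _ in range(50_000)]
m = sum(samples) / len(samples)                         # ~ 0
v = sum(x * x for x in samples) / len(samples) - m * m  # ~ 1
```

The output is discrete (129 possible levels for 128 bits), so the number of bits trades hardware cost against how finely the Gaussian tail is resolved.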