BSL: An R Package for Efficient Parameter Estimation for Simulation-Based Models via Bayesian Synthetic Likelihood
Bayesian synthetic likelihood (BSL; Price, Drovandi, Lee, and Nott 2018) is a popular method for estimating the parameter posterior distribution of complex statistical models and stochastic processes whose likelihood function is computationally intractable. Instead of evaluating the likelihood directly, BSL approximates the likelihood of a judiciously chosen summary statistic of the data via model simulation and density estimation. Compared to alternative methods such as approximate Bayesian computation (ABC), BSL requires little tuning and fewer model simulations when the chosen summary statistic is high-dimensional. The original synthetic likelihood relies on a multivariate normal approximation of the intractable likelihood, where the mean and covariance are estimated by simulation. One extension of BSL replaces the sample covariance with a penalized covariance estimator to reduce the number of required model simulations. Further, a semi-parametric approach has been developed to relax the normality assumption. Finally, another extension aims at a more robust synthetic likelihood estimator that acknowledges possible model misspecification. In this paper, we present the R package BSL, which amalgamates the aforementioned methods and more into a single, easy-to-use and coherent piece of software. The package also includes several examples that illustrate its use and the utility of the methods.
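The core Gaussian synthetic-likelihood computation is compact enough to sketch in a few lines. The code below is an illustrative stand-in in Python, not the BSL package's API (which is R); the model, summary statistic, and function names are all choices made for this sketch:

```python
import numpy as np

def gaussian_synthetic_loglik(theta, observed_summary, simulate, n_sims=200, rng=None):
    """Gaussian synthetic log-likelihood of a summary statistic.

    `simulate(theta, rng)` is a placeholder for the user's intractable
    model: it must return one simulated summary-statistic vector.
    """
    rng = np.random.default_rng(rng)
    sims = np.array([simulate(theta, rng) for _ in range(n_sims)])
    # mean and covariance of the summary, estimated by simulation
    mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
    diff = observed_summary - mu
    _, logdet = np.linalg.slogdet(cov)
    d = len(diff)
    # multivariate normal log-density of the observed summary
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ np.linalg.solve(cov, diff))

# Toy model: data are N(theta, 1); the summary is (sample mean, sample std).
def simulate(theta, rng):
    x = rng.normal(theta, 1.0, size=50)
    return np.array([x.mean(), x.std()])

obs = np.array([0.1, 1.0])
print(gaussian_synthetic_loglik(0.0, obs, simulate, rng=1))
```

In a full BSL run this estimator would sit inside an MCMC loop over `theta`; here a parameter close to the truth receives a much higher synthetic log-likelihood than a distant one.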
The Critical Radius in Sampling-based Motion Planning
We develop a new analysis of sampling-based motion planning in Euclidean
space with uniform random sampling, which significantly improves upon the
celebrated result of Karaman and Frazzoli (2011) and subsequent work.
Particularly, we prove the existence of a critical connection radius
proportional to $n^{-1/d}$ for $n$ samples and $d$ dimensions: below this
value the planner is guaranteed to fail (as was similarly shown by the
aforementioned work). More importantly, for larger radius values the
planner is asymptotically (near-)optimal. Furthermore, our analysis yields an
explicit lower bound of $1 - O(n^{-1})$ on the probability of success. A
practical implication of our work is that asymptotic (near-)optimality is
achieved when each sample is connected to only $O(1)$ neighbors. This is
in stark contrast to previous work, which requires $O(\log n)$
connections, induced by a radius of order $(\log n / n)^{1/d}$. Our analysis is not restricted to PRM and applies to a
variety of PRM-based planners, including RRG, FMT* and BTT. Continuum
percolation plays an important role in our proofs. Lastly, we develop similar
theory for all the aforementioned planners when constructed with deterministic
samples, which are then sparsified in a randomized fashion. We believe that
this new model and its analysis are interesting in their own right.
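The constant-degree behavior at the critical radius scaling can be illustrated numerically: with a connection radius proportional to $n^{-1/d}$ (the standard critical scaling in this setting), the average number of neighbors in a random geometric graph on uniform samples stays roughly constant as $n$ grows. The sketch below is a toy illustration of that scaling, not a reproduction of the paper's analysis; the constant `gamma` is an arbitrary choice:

```python
import numpy as np

def avg_degree(n, d, gamma, rng=None):
    """Average degree of a random geometric graph on n uniform samples in
    [0,1]^d, connected with radius gamma * n**(-1/d). Under this critical
    scaling the expected number of neighbors per sample is constant in n
    (up to boundary effects). gamma is arbitrary for this illustration.
    """
    rng = np.random.default_rng(rng)
    pts = rng.random((n, d))
    r = gamma * n ** (-1.0 / d)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return ((dists < r).sum(axis=1) - 1).mean()  # subtract the self-match

# The average degree barely changes as n quadruples:
for n in (500, 1000, 2000):
    print(n, avg_degree(n, d=2, gamma=2.0, rng=0))
```

By contrast, the earlier $(\log n / n)^{1/d}$ radius would make the average degree grow logarithmically with $n$.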
An Interpretable Probabilistic Autoregressive Neural Network Model for Time Series Forecasting
Forecasting time series data is an emerging field of data science, with
applications ranging from stock price and exchange rate prediction to
the early prediction of epidemics. Numerous statistical and machine learning
methods have been proposed over the last five decades to meet the demand for
high-quality and reliable forecasts. However, in real-life
prediction problems, situations exist in which a model based on one of the
above paradigms is preferable, and therefore, hybrid solutions are needed to
bridge the gap between classical forecasting methods and scalable neural
network models. We introduce an interpretable probabilistic autoregressive
neural network (PARNN) model: an explainable, scalable, and "white box-like"
framework that can handle a wide variety of irregular time series data (e.g.,
nonlinearity and nonstationarity). Sufficient conditions for asymptotic
stationarity and geometric ergodicity are obtained by considering the
asymptotic behavior of the associated Markov chain. During computational
experiments, PARNN outperforms standard statistical, machine learning, and deep
learning models on a diverse collection of real-world datasets coming from
economics, finance, and epidemiology, to name a few. Furthermore, the
proposed PARNN model significantly improves forecast accuracy for 10 out of 12
datasets compared to state-of-the-art models, for short- to long-term forecasts.
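The hybrid idea described above (a classical autoregression plus a learned nonlinear correction fit to its residuals) can be sketched compactly. This is a loose illustration of hybrid forecasting, not the PARNN architecture itself; the random-feature correction and all names are choices made for this sketch:

```python
import numpy as np

def hybrid_ar_forecast(series, p=3, hidden=16, ridge=1e-2, rng=None):
    """One-step forecast from a hybrid model: a linear AR(p) part plus a
    random-feature (tanh) nonlinear correction fit to the AR residuals.
    Illustrative only; not the PARNN architecture from the abstract.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(series, float)
    # lag matrix: row t holds (y[t-1], ..., y[t-p]) for target y[t]
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    target = y[p:]
    # linear AR(p) part, fit by least squares with an intercept
    design = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ coef
    # nonlinear correction: fixed random tanh features + ridge regression
    W = rng.normal(size=(p, hidden))
    H = np.tanh(X @ W)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(hidden), H.T @ resid)
    last = y[-1:-p - 1:-1]  # most recent p values, newest first
    return coef[0] + last @ coef[1:] + np.tanh(last @ W) @ beta
```

On a deterministic series such as a sinusoid, the linear part already captures the dynamics and the correction term stays near zero; on irregular series the residual model carries the nonlinear structure.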
FFT-Based Deep Learning Deployment in Embedded Systems
Deep learning has demonstrated its power in many application domains,
especially in image and speech recognition. As the backbone of deep learning,
deep neural networks (DNNs) consist of multiple layers of various types with
hundreds to thousands of neurons. Embedded platforms are now becoming essential
for deep learning deployment due to their portability, versatility, and energy
efficiency. The large model size of DNNs, while providing excellent accuracy,
also burdens the embedded platforms with intensive computation and storage.
Researchers have investigated reducing DNN model size with negligible
accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN
training and inference model suitable for embedded platforms with reduced
asymptotic complexity of both computation and storage, making our approach
distinguished from existing approaches. We develop the training and inference
algorithms based on FFT as the computing kernel and deploy the FFT-based
inference model on embedded platforms, achieving extraordinary processing speed.

Comment: Design, Automation, and Test in Europe (DATE). For source code, please
contact Mahdi Nazemi at [email protected].
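A standard way the FFT reduces both computation and storage in a DNN layer is to constrain weight blocks to be circulant: a dense matrix-vector product then becomes element-wise multiplication in the frequency domain, dropping the cost from O(n^2) to O(n log n) and the storage from n^2 to n values. The sketch below shows that kernel; the paper's exact layer structure may differ:

```python
import numpy as np

def circulant_matvec_fft(w, x):
    """Multiply a circulant matrix (defined by its first column w) with a
    vector x in O(n log n) using the FFT, instead of the O(n^2) dense
    product. C @ x equals the circular convolution of w and x, which the
    FFT diagonalizes.
    """
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

# Sanity check against the explicit dense circulant product.
n = 8
rng = np.random.default_rng(0)
w, x = rng.random(n), rng.random(n)
C = np.array([np.roll(w, j) for j in range(n)]).T  # column j is w shifted by j
assert np.allclose(C @ x, circulant_matvec_fft(w, x))
```

Only the n-vector `w` needs to be stored per block, which is where the storage saving for embedded platforms comes from.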
A Language and Hardware Independent Approach to Quantum-Classical Computing
Heterogeneous high-performance computing (HPC) systems offer novel
architectures which accelerate specific workloads through judicious use of
specialized coprocessors. A promising architectural approach for future
scientific computations is provided by heterogeneous HPC systems integrating
quantum processing units (QPUs). To this end, we present XACC (eXtreme-scale
ACCelerator), a programming model and software framework that enables
quantum acceleration within standard or HPC software workflows. XACC follows a
coprocessor machine model that is independent of the underlying quantum
computing hardware, thereby enabling quantum programs to be defined and
executed on a variety of QPU types through a unified application programming
interface. Moreover, XACC defines a polymorphic low-level intermediate
representation and an extensible compiler frontend that enables
language-independent quantum programming, thus promoting integration and
interoperability across the quantum programming landscape. In this work we
define the software architecture enabling our hardware- and language-independent
approach, and demonstrate its usefulness across a range of quantum computing
models through illustrative examples involving the compilation and execution of
gate- and annealing-based quantum programs.
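To make the idea of a backend-neutral intermediate representation concrete, here is a purely hypothetical sketch of a minimal gate-level IR with a toy lowering pass. None of the class or function names below come from XACC; they only illustrate the pattern of programming against an IR and lowering it per target:

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    name: str           # e.g. "H", "CNOT", "RZ"
    qubits: tuple       # target qubit indices
    params: tuple = ()  # rotation angles, if any

@dataclass
class Circuit:
    gates: list = field(default_factory=list)

    def add(self, name, *qubits, params=()):
        self.gates.append(Gate(name, tuple(qubits), tuple(params)))
        return self

def lower(circuit, backend):
    """Lower the backend-neutral IR to a backend-specific text format.
    The single 'gate-asm' backend here is a toy stand-in for a real QPU
    target; a real framework would dispatch to many such emitters."""
    if backend == "gate-asm":
        return "\n".join(f"{g.name} {' '.join(map(str, g.qubits))}"
                         for g in circuit.gates)
    raise ValueError(f"unknown backend: {backend}")

bell = Circuit().add("H", 0).add("CNOT", 0, 1)
print(lower(bell, "gate-asm"))
```

The point of the pattern is that the `Circuit` built by any frontend language can be handed to any registered backend, which is the kind of interoperability the abstract describes.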
Automatic Variational Inference in Stan
Variational inference is a scalable technique for approximate Bayesian
inference. Deriving variational inference algorithms requires tedious
model-specific calculations; this makes it difficult to automate. We propose an
automatic variational inference algorithm, automatic differentiation
variational inference (ADVI). The user only provides a Bayesian model and a
dataset; nothing else. We make no conjugacy assumptions and support a broad
class of models. The algorithm automatically determines an appropriate
variational family and optimizes the variational objective. We implement ADVI
in Stan (code available now), a probabilistic programming framework. We compare
ADVI to MCMC sampling across hierarchical generalized linear models,
nonconjugate matrix factorization, and a mixture model. We train the mixture
model on a quarter million images. With ADVI we can use variational inference
on any model we write in Stan.
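A minimal ADVI-style sketch for a toy model (the posterior of a normal mean under a flat prior) shows the two ingredients the abstract relies on: a Gaussian variational family and reparameterized Monte Carlo gradients of the variational objective. Everything below is a simplified stand-in; Stan's implementation handles arbitrary differentiable models via automatic differentiation:

```python
import numpy as np

def advi_normal_mean(data, steps=2000, lr=0.05, mc=8, rng=None):
    """Fit q(mu) = N(m, exp(s)^2) to the posterior of the mean of
    N(mu, 1) data with a flat prior, by stochastic gradient ascent on
    the ELBO using the reparameterization trick. Toy model only."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, float)
    n = len(data)
    m, s = 0.0, 0.0  # variational mean and log-std
    for _ in range(steps):
        eps = rng.normal(size=mc)
        mu = m + np.exp(s) * eps        # reparameterized draws from q
        # d/dmu of the log-likelihood sum_i log N(x_i | mu, 1)
        dlogp = data.sum() - n * mu
        grad_m = dlogp.mean()
        grad_s = (dlogp * eps * np.exp(s)).mean() + 1.0  # +1 from entropy
        m += lr * grad_m / n            # 1/n step scaling for stability
        s += lr * grad_s / n
    return m, np.exp(s)
```

For this conjugate toy problem the exact posterior is N(sample mean, 1/n), so the fitted `m` should land near the sample mean and the fitted standard deviation near 1/sqrt(n), which makes the sketch easy to check.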