Accurate and complexity-effective spatial pattern prediction
Recent research suggests that there are large variations in a cache's spatial usage, both within and across programs. Unfortunately, conventional caches typically employ fixed cache line sizes to balance the exploitation of spatial and temporal locality, and to avoid prohibitive cache fill bandwidth demands. The resulting inability of conventional caches to exploit spatial variations leads to sub-optimal performance and unnecessary cache power dissipation. This paper describes the Spatial Pattern Predictor (SPP), a cost-effective hardware mechanism that accurately predicts reference patterns within a spatial group (i.e., a contiguous region of data in memory) at runtime. The key observation enabling an accurate, yet low-cost, SPP design is that spatial patterns correlate well with instruction addresses and data reference offsets within a cache line. We require only a small amount of predictor memory to store the predicted patterns. Simulation results for a 64-Kbyte 2-way set-associative L1 data cache with 64-byte lines show that: (1) a 256-entry tag-less direct-mapped SPP can achieve, on average, a prediction coverage of 95%, over-predicting the patterns by only 8%, (2) assuming a 70nm process technology, the SPP helps reduce leakage energy in the base cache by 41% on average, incurring less than 1% performance degradation, and (3) prefetching spatial groups of up to 512 bytes using SPP improves execution time by 33% on average and up to a factor of two.
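The key idea, a small direct-mapped table indexed by instruction address and within-line reference offset that stores per-group reference bit patterns, can be sketched in a few lines. This is a minimal illustrative model, not the paper's exact hardware design; the hash function and constant names here are assumptions.

```python
# Minimal sketch of a Spatial Pattern Predictor (SPP) lookup table.
# TABLE_ENTRIES and LINE_SIZE follow the abstract's evaluation (256-entry
# tag-less direct-mapped table, 64-byte lines); the index hash is a guess.

TABLE_ENTRIES = 256   # direct-mapped, tag-less
LINE_SIZE = 64        # bytes per cache line
GROUP_LINES = 8       # a 512-byte spatial group spans 8 lines

table = [0] * TABLE_ENTRIES  # each entry: bit pattern over the group's lines

def index(pc, addr):
    """Index by instruction address combined with the data reference
    offset within a cache line (the correlation the paper observes)."""
    offset = addr % LINE_SIZE
    return (pc ^ offset) % TABLE_ENTRIES

def record(pc, addr, observed_pattern):
    """Once a spatial group's generation ends, store which of its lines
    were actually referenced, keyed by the triggering access."""
    table[index(pc, addr)] = observed_pattern

def predict(pc, addr):
    """On a miss, predict the group's reference pattern so that only the
    predicted lines need to be fetched (or kept powered)."""
    return table[index(pc, addr)]
```

A pattern recorded for one (instruction, offset) pair is returned on the next miss that hashes to the same entry; because the table is tag-less, unrelated accesses that alias to the same entry simply reuse whatever pattern is stored there, which is the source of the small over-prediction rate.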
Scalable Bayesian model averaging through local information propagation
We show that a probabilistic version of the classical forward-stepwise variable inclusion procedure can serve as a general data-augmentation scheme for model space distributions in (generalized) linear models. This latent variable representation takes the form of a Markov process, thereby allowing information propagation algorithms to be applied for sampling from model space posteriors. In particular, we propose a sequential Monte Carlo method for achieving effective unbiased Bayesian model averaging in high-dimensional problems, utilizing proposal distributions constructed using local information propagation. We illustrate our method (called LIPS, for local information propagation based sampling) through real and simulated examples with dimensionality ranging from 15 to 1,000, and compare its performance in estimating posterior inclusion probabilities and in out-of-sample prediction to those of several other methods, namely MCMC, BAS, iBMA, and LASSO. In addition, we show that the latent variable representation can also serve as a modeling tool for specifying model space priors that account for knowledge regarding model complexity and conditional inclusion relationships.