An Efficient MCMC Approach to Energy Function Optimization in Protein Structure Prediction
Protein structure prediction is a critical problem linked to drug design,
mutation detection, and protein synthesis, among other applications. To this
end, evolutionary data has been used to build contact maps which are
traditionally minimized as energy functions via gradient-descent-based schemes
like the L-BFGS algorithm. In this paper we present what we call the
Alternating Metropolis-Hastings (AMH) algorithm, which (a) significantly
improves the performance of traditional MCMC methods, (b) is inherently
parallelizable, allowing significant hardware acceleration on GPUs, and (c)
can be integrated with the L-BFGS algorithm to improve its performance. The
algorithm improves the energy of the structures it finds by 8.17% to 61.04%
(average 38.9%) over traditional MH and by 0.53% to 17.75% (average 8.9%) over
traditional MH with intermittent noisy restarts, tested across 9 proteins from
recent CASP competitions. We go on to map the Alternating MH algorithm to a
GPGPU, which improves the sampling rate by 277x and reduces the simulation time
to a low-energy protein prediction by 7.5x to 26.5x over a CPU. We show that our
approach can be incorporated into state-of-the-art protein prediction pipelines
by applying it to both trRosetta2's energy function and the distogram component
of Alphafold1's energy function. Finally, we note that specially designed
probabilistic computers (or p-computers) can provide even better performance
than GPUs for MCMC algorithms like the one discussed here.
Comment: 10 pages, 4 figures
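The abstract does not spell out the Alternating MH algorithm itself, but the plain Metropolis-Hastings baseline it is compared against can be sketched. The `energy` and `propose` callables below are hypothetical stand-ins for a protein energy function and a conformational move, and the toy double-well example is purely illustrative:

```python
import math
import random

def metropolis_hastings(energy, x0, propose, steps=10000, temperature=1.0, seed=0):
    """Minimal MH sketch for energy minimization: propose a move, accept
    downhill moves always and uphill moves with Boltzmann probability,
    and track the lowest-energy state seen along the chain."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(steps):
        x_new = propose(x, rng)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy 1-D stand-in for an energy landscape: a double well with minima at x = ±1.
energy = lambda x: (x * x - 1.0) ** 2
propose = lambda x, rng: x + rng.gauss(0.0, 0.3)
x_best, e_best = metropolis_hastings(energy, x0=3.0, propose=propose)
```

Because each chain only needs local energy evaluations, many such chains can run independently, which is the property the paper exploits for GPU parallelization.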
Autonomous Probabilistic Coprocessing with Petaflips per Second
In this paper we present a concrete design for a probabilistic (p-) computer
based on a network of p-bits, robust classical entities fluctuating between -1
and +1, with probabilities that are controlled through an input constructed
from the outputs of other p-bits. The architecture of this probabilistic
computer is similar to a stochastic neural network with the p-bit playing the
role of a binary stochastic neuron, but with one key difference: there is no
sequencer used to enforce an ordering of p-bit updates, as is typically
required. Instead, we explore sequencerless designs where all p-bits
are allowed to flip autonomously and demonstrate that such designs can allow
ultrafast operation unconstrained by available clock speeds without
compromising the solution's fidelity. Based on experimental results from a
hardware benchmark of the autonomous design and benchmarked device models, we
project that a nanomagnetic implementation can scale to achieve petaflips per
second with millions of neurons. A key contribution of this paper is the focus
on a hardware metric, flips per second, as a problem- and
substrate-independent figure-of-merit for an emerging class of hardware
annealers known as Ising Machines. Much like the shrinking feature sizes of
transistors that have continually driven Moore's Law, we believe that flips per
second can be improved over successive technology generations of a wide
class of probabilistic, domain-specific hardware.
Comment: 13 pages, 8 figures, 1 table
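The p-bit behavior the abstract describes (a ±1 state whose flip probability is set by a tanh of inputs built from other p-bits' outputs, updated with no sequencer enforcing an order) can be sketched in software. The names `J`, `h`, and `beta`, and the use of uniformly random update order to emulate autonomous flipping, are illustrative assumptions, not the paper's hardware design:

```python
import math
import random

def pbit_update(m, J, h, beta, i, rng):
    """One p-bit update: m[i] becomes +1 with probability
    (1 + tanh(beta * I_i)) / 2, where I_i is the weighted input
    constructed from the outputs of the other p-bits."""
    I = sum(J[i][j] * m[j] for j in range(len(m))) + h[i]
    return 1 if rng.uniform(-1.0, 1.0) < math.tanh(beta * I) else -1

def autonomous_run(J, h, beta=1.0, flips=20000, seed=0):
    """Sequencerless sketch: p-bits are updated in random order rather
    than a fixed sweep, mimicking autonomous hardware in which every
    p-bit is free to flip without a sequencer."""
    rng = random.Random(seed)
    n = len(h)
    m = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(flips):
        i = rng.randrange(n)  # no enforced ordering of p-bit updates
        m[i] = pbit_update(m, J, h, beta, i, rng)
    return m

# Toy example: two ferromagnetically coupled p-bits tend to align.
J = [[0.0, 1.0], [1.0, 0.0]]
h = [0.0, 0.0]
state = autonomous_run(J, h, beta=2.0)
```

In this framing, every call to `pbit_update` is one "flip", so the flips-per-second figure-of-merit the paper proposes is simply the rate at which a given substrate can execute this inner loop.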