The Multi-Prize Lottery Ticket Hypothesis posits that randomly initialized
neural networks contain several subnetworks that achieve comparable accuracy to
fully trained models of the same architecture. However, current methods require
that the network be sufficiently overparameterized. In this work, we propose a
modification to two state-of-the-art algorithms (Edge-Popup and Biprop) that
finds high-accuracy subnetworks with no additional storage cost or scaling. The
algorithm, Iterative Weight Recycling, identifies subsets of important weights
within a randomly initialized network for intra-layer reuse. Empirically, we
show improvements on smaller network architectures and at higher prune rates,
finding that model sparsity can be increased through the "recycling" of
existing weights. In addition to Iterative Weight Recycling, we complement the
Multi-Prize Lottery Ticket Hypothesis with a reciprocal finding: high-accuracy,
randomly initialized subnetworks produce diverse masks, despite being
generated with the same hyperparameters and pruning strategy. We explore the
landscapes of these masks, which show high variability.
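
To make the recycling step concrete, the snippet below gives a minimal sketch of intra-layer weight reuse; it is an illustration under assumptions, not the authors' implementation. It assumes each layer carries a per-weight importance score (such as the popup scores learned by Edge-Popup or Biprop) and overwrites the lowest-scoring weights with copies of the highest-scoring ones in the same layer, so no additional storage is needed. The function name recycle_weights and the recycle_frac parameter are hypothetical.

```python
import torch

def recycle_weights(weight: torch.Tensor, scores: torch.Tensor,
                    recycle_frac: float = 0.1) -> torch.Tensor:
    """Replace the lowest-scoring weights in a layer with copies of the
    highest-scoring weights from the same layer (intra-layer reuse).

    `scores` holds one importance score per weight; `recycle_frac` is the
    fraction of weights to overwrite (both are illustrative assumptions).
    """
    flat_w = weight.detach().clone().flatten()
    flat_s = scores.detach().flatten()
    k = int(recycle_frac * flat_w.numel())
    if k == 0:
        return weight.detach().clone()

    # Indices of the k least-important and k most-important weights.
    low_idx = torch.topk(flat_s, k, largest=False).indices
    high_idx = torch.topk(flat_s, k, largest=True).indices

    # "Recycle": overwrite unimportant weights with copies of important ones,
    # keeping the layer's shape and storage footprint unchanged.
    flat_w[low_idx] = flat_w[high_idx]
    return flat_w.view_as(weight)
```

In an iterative scheme of the kind the name suggests, a step like this would presumably be interleaved with further score training and mask search, rather than applied once.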