We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017)]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take an astronomically long time.

We thank Justin Reyes, Oskar Pfeffer, and Lei Zhang for many useful discussions. The computations were carried out at Boston University's Shared Computing Cluster. We acknowledge the Condensed Matter Theory Visitors Program at Boston University for support. Z.-C. Y. and C. C. are supported by DOE Grant No. DE-FG02-06ER46316. E. R. M. is supported by NSF Grant No. CCF-1525943.
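To make the compression step concrete, the sketch below illustrates one contraction-decomposition move on a single lattice bond, in the spirit of the ICD sweep described above: two neighboring tensors are contracted over their shared bond and re-split with an SVD that keeps only the nonzero singular values, so the bond dimension can only shrink and the contraction value is preserved exactly. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function name compress_bond, the grouping of each tensor's remaining legs into a single matrix index, and the tolerance parameter are choices made for the example.

    import numpy as np

    def compress_bond(A, B, tol=1e-12):
        """One contraction-decomposition move on a single lattice bond.

        A has shape (a, k), where 'a' groups A's remaining legs and 'k' is
        the shared bond; B has shape (k, b) analogously.  The pair is
        contracted and re-split by SVD, keeping only the nonzero singular
        values, so the bond dimension can only shrink (lossless).
        """
        theta = A @ B                                  # contract the shared bond
        U, s, Vh = np.linalg.svd(theta, full_matrices=False)
        keep = s > tol * max(s[0], 1.0)                # drop numerically zero singular values
        A_new = U[:, keep] * s[keep]                   # absorb singular values into the left tensor
        B_new = Vh[keep, :]
        return A_new, B_new

    # Toy usage: a bond of dimension 4 that actually carries only rank-2 information.
    rng = np.random.default_rng(0)
    A = (rng.integers(0, 2, size=(8, 2)) @ rng.integers(0, 2, size=(2, 4))).astype(float)
    B = (rng.integers(0, 2, size=(4, 2)) @ rng.integers(0, 2, size=(2, 8))).astype(float)
    A_c, B_c = compress_bond(A, B)
    assert np.allclose(A_c @ B_c, A @ B)               # the bond contraction is unchanged
    print(A_c.shape[1])                                # compressed bond dimension (<= 2)

Because only zero singular values are discarded, repeating such moves over all bonds propagates the local constraints without approximation, which is what allows the subsequent coarse-graining contractions to remain exact.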