Speculative parallelization (SP) enables a processor to extract multiple threads from a single sequential thread and execute them in parallel. For speculative parallelization to achieve high performance on integer programs, loads must speculate on the data dependences among threads. Techniques for speculating on inter-thread data dependences have a first-order impact on the performance, power, and complexity of SP architectures. Synchronizing predicted inter-thread dependences enables aggressive load speculation while minimizing the risk of misspeculation. In this paper, we present store set synchronization, a complexity-effective technique for speculating on inter-thread data dependences. The store set synchronizer (SSS) predicts store-load dependences using store sets and enforces those predicted dependences using recently proposed techniques for dynamic register synchronization. The key insight behind store set synchronization is that predicted dependences carried through store sets can be treated exactly like the dependences carried through architectural registers. By balancing the benefits and risks of load speculation, the SSS increases performance, conserves power, and reduces complexity. On integer benchmarks, the SSS increases performance by as much as 56% and by 20% on average. The SSS also reduces the average rate of dependence violations by 80%, which conserves power and dramatically decreases the number of threads squashed due to dependence violations. Furthermore, the low rate of dependence violations mitigates the need for costly disambiguation hardware such as per-thread load queues. We show that replacing the associative load queues with filtered load re-execution in an SSS-equipped system decreases performance by just 3%.
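To make the store-set mechanism concrete, the following is a minimal behavioral sketch of store-set dependence prediction, not the paper's hardware design: a load and store that have violated in the past are merged into a shared store set, and a later instance of the load is predicted to depend on the most recently fetched store from its set. The table names (`ssit`, `lfst`), the table organization, and the set-allocation policy are illustrative assumptions, not the SSS implementation.

```python
class StoreSetPredictor:
    """Behavioral sketch of store-set dependence prediction (assumed structure)."""

    def __init__(self, n_sets=64):
        self.ssit = {}        # PC -> store set ID (Store Set ID Table)
        self.lfst = {}        # SSID -> tag of last fetched store (Last Fetched Store Table)
        self.next_ssid = 0    # naive round-robin set allocation (an assumption)
        self.n_sets = n_sets

    def train(self, store_pc, load_pc):
        """Called on a memory-order violation: merge the pair into one store set."""
        ssid = self.ssit.get(store_pc, self.ssit.get(load_pc))
        if ssid is None:
            ssid = self.next_ssid % self.n_sets
            self.next_ssid += 1
        self.ssit[store_pc] = ssid
        self.ssit[load_pc] = ssid

    def fetch_store(self, store_pc, store_tag):
        """Record this store as the newest member of its set, if it belongs to one."""
        ssid = self.ssit.get(store_pc)
        if ssid is not None:
            self.lfst[ssid] = store_tag

    def predict_load(self, load_pc):
        """Return the tag of the store this load should synchronize with, or None."""
        ssid = self.ssit.get(load_pc)
        return self.lfst.get(ssid) if ssid is not None else None
```

For example, before any violation a load at PC 0x40 is predicted independent; after one violation against a store at PC 0x10, the predictor pairs them, and the next instance of the load waits for the latest fetched instance of that store, much as a consumer waits for the register write of its producer.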