Provisions in many data protection laws require a legal basis, or at the very least safeguards, for significant, solely automated decisions; Article 22 of the GDPR is the most notable. Little attention has been paid to Article 22 in the context of decision-making processes with multiple stages, potentially both manual and automated, which together may affect decision subjects in different ways. Using stylised examples grounded in real-world systems, we raise five distinct complications in interpreting Article 22 for such multi-stage profiling systems. These are: the potential for selective automation on subsets of data subjects despite generally adequate human input; the ambiguity around where to locate the decision itself; whether 'significance' should be interpreted in terms of any potential effects or only selectively in terms of realised effects; the potential for upstream automation to foreclose downstream outcomes despite human input; and the risk that a focus on the final step distracts from the status and importance of upstream processes. We argue that the nature of these challenges will make it difficult for courts or regulators to distil a set of clear, fair and consistent interpretations for many realistic contexts.