Item Response Theory is becoming an increasingly important tool for analyzing
``Big Data'' gathered from online educational venues. However, the methodology
was originally developed for traditional exam settings, and several of its
assumptions are violated when it is deployed online. For a large-enrollment
physics course for scientists and engineers, this study compares the outcomes
of IRT analyses of exam and homework data, and then investigates the effect of
each confounding factor introduced in the online setting. It is found that IRT
yields the correct trends for learner ability and meaningful item parameters,
yet overall agreement with exam data is only moderate.
It is also found that learner ability and item discrimination are robust over
wide ranges with respect to model assumptions and introduced noise, while item
difficulty is less so.
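
For reference, the three quantities discussed above (learner ability, item
difficulty, and item discrimination) all appear as parameters of the standard
two-parameter logistic (2PL) IRT model; the particular IRT variant used in the
study is not specified here, so the 2PL form is given only as a minimal sketch
under that assumption. With $\theta_j$ the ability of learner $j$, and $b_i$
and $a_i$ the difficulty and discrimination of item $i$, the probability of a
correct response $x_{ij}=1$ is
\begin{equation}
P(x_{ij}=1\mid\theta_j)=\frac{1}{1+e^{-a_i(\theta_j-b_i)}},
\end{equation}
so that $b_i$ sets the ability at which the success probability reaches $1/2$,
and $a_i$ sets how sharply that probability rises with ability.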