Big data’s rise has amplified the role of information systems in process management. Process mining, a branch of data science, provides analytical tools and methods that can distil insights about process behaviour from large volumes of process-related data. Yet challenges remain, including dealing with the quality of big data and the impact of poor-quality data on event logs, the input to process mining analyses. We show, through an analysis of 152 case studies, that despite researchers raising concerns about event log data quality, the event log preparation (data pre-processing) phase of process mining case studies is generally handled in a naive rather than an informed manner, focusing on fixing symptoms rather than uncovering the root causes of event log data quality issues. This paper considers event log data quality problems from a new angle. We introduce the Odigos (Greek for ‘guide’) framework, adapted from Mingers and Willcocks (2014) and based on semiotics and Peircean abductive reasoning, which explains the notion of process mining context at a conceptual level. From a practical perspective, the Odigos framework facilitates an informed way of dealing with data quality issues in event logs by supporting both prognostic approaches (foreshadowing potential quality issues) and diagnostic approaches (identifying root causes of discovered quality issues). From a theoretical perspective, the work provides a foundation for the development of a process mining methodology for data pre-processing and for further IS theory development in the area of data analytics.