Mining Sequences of Developer Interactions in Visual Studio for Usage Smells
In this paper, we present a semi-automatic approach for mining a large-scale dataset of IDE interactions to extract usage smells, i.e., inefficient IDE usage patterns exhibited by developers in the field. The approach first mines frequent IDE usage patterns, filtered via a set of thresholds and manually by the authors, which are subsequently supported (or disputed) by a developer survey in order to form usage smells. In contrast with conventional mining of IDE usage data, our approach identifies time-ordered sequences of developer actions that are exhibited by many developers in the field. This pattern-mining workflow is resilient to the ample noise present in IDE datasets due to the mix of actions and events that these datasets typically contain. We identify usage patterns and smells that contribute to the understanding of the usability of Visual Studio for debugging, code search, and active file navigation, and, more broadly, to the understanding of developer behavior during these software development activities. Among our findings is the discovery that developers are reluctant to use conditional breakpoints when debugging, due to perceived IDE performance problems as well as the lack of error checking in specifying the conditional.
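The core mining step described above — finding time-ordered action sequences shared by many developers, subject to a support threshold — can be sketched as a simple contiguous-subsequence counter. The session data, action names, and thresholds below are illustrative assumptions, not the paper's actual dataset or algorithm:

```python
from collections import Counter

def frequent_subsequences(sessions, length=3, min_support=2):
    """Count contiguous action subsequences of a fixed length across
    developer sessions and keep those occurring in at least
    `min_support` sessions (a stand-in for the paper's thresholds)."""
    support = Counter()
    for actions in sessions:
        # Count each distinct subsequence once per session.
        seen = {tuple(actions[i:i + length])
                for i in range(len(actions) - length + 1)}
        for seq in seen:
            support[seq] += 1
    return {seq: n for seq, n in support.items() if n >= min_support}

# Hypothetical interaction logs: ordered IDE actions, one list per session.
sessions = [
    ["SetBreakpoint", "StartDebug", "StepOver", "StepOver", "StopDebug"],
    ["OpenFile", "SetBreakpoint", "StartDebug", "StepOver", "StopDebug"],
    ["SetBreakpoint", "StartDebug", "StepOver", "Inspect", "StopDebug"],
]
patterns = frequent_subsequences(sessions, length=3, min_support=3)
# The pattern (SetBreakpoint, StartDebug, StepOver) is shared by all
# three sessions and survives the support filter.
```

In the paper's workflow, the surviving patterns would then be reviewed by the authors and validated against a developer survey before being labeled usage smells.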
Log-based Evaluation of Label Splits for Process Models
Process mining techniques aim to extract insights into processes from event
logs. One of the challenges in process mining is identifying interesting and
meaningful event labels that contribute to a better understanding of the
process. Our application area is mining data from smart homes for the elderly,
where the ultimate goal is to signal deviations from usual behavior and provide
timely recommendations in order to extend the period of independent living.
Extracting individual process models showing user behavior is an important
instrument in achieving this goal. However, the interpretation of sensor data
at an appropriate abstraction level is not straightforward. For example, a
motion sensor in a bedroom can be triggered by tossing and turning in bed or by
getting up. We try to derive the actual activity depending on the context
(time, previous events, etc.). In this paper we introduce the notion of label
refinements, which link more abstract event descriptions with their more
refined counterparts. We present a statistical evaluation method to determine
the usefulness of a label refinement for a given event log from a process
perspective. Based on data from smart homes, we show how our statistical
evaluation method for label refinements can be used in practice. Our method was
able to select two label refinements out of a set of candidate label
refinements that both had a positive effect on model precision.

Comment: Paper accepted at the 20th International Conference on
Knowledge-Based and Intelligent Information & Engineering Systems, to appear
in Procedia Computer Science
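The bedroom-motion example above — splitting one abstract sensor label into refined activity labels based on context such as time of day and the preceding event — can be sketched as a small rule. The refinement rule, labels, and timestamps here are illustrative assumptions, not the paper's actual candidate refinements:

```python
from datetime import datetime

def refine_bedroom_motion(event_time, previous_label):
    """Toy label refinement: split the abstract label 'bedroom_motion'
    into 'getting_up' vs. 'tossing_in_bed' using context (time of day
    and the preceding event). The rule mirrors the refinement idea
    only; the thresholds and labels are hypothetical."""
    if 5 <= event_time.hour <= 10 and previous_label == "sleeping":
        return "getting_up"
    return "tossing_in_bed"

# Morning motion right after sleeping is refined to 'getting_up';
# the same sensor firing at 2 a.m. is refined to 'tossing_in_bed'.
morning = refine_bedroom_motion(datetime(2016, 3, 1, 7, 30), "sleeping")
night = refine_bedroom_motion(datetime(2016, 3, 1, 2, 0), "sleeping")
```

The paper's statistical evaluation would then decide, per candidate refinement, whether applying such a split to the event log improves the precision of the discovered process model.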