At Microsoft we use a number of static analysis tools to ensure the quality of the code we produce. Over several years, we have solved problems associated with deploying these tools in a large development environment, including problems of performance, policies for tool usage, and methods for encouraging adoption. One remaining challenge is getting appropriate feedback from users about the effectiveness of these methods. In particular, we receive no feedback about errors and warnings that are found and resolved on the desktop and therefore never reach the code repository. To address this problem, we have developed an instrumentation framework called ATMetrics, which allows us to collect usage metrics that we can use to analyze how static analysis tools are used in the field. In this paper, we discuss our experiences putting together this metrics system in a complex industrial setting and shed light on how it can help guide key business decisions around the deployment of static analysis tools.
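ATMetrics itself is Microsoft-internal and its record format is not published, so the following is only a minimal sketch of the kind of desktop-side instrumentation the abstract describes: wrapping a static analysis run, counting the warnings produced before check-in, and emitting a metrics record for later analysis. The tool name, warning-code pattern, and record fields are illustrative assumptions, not the real framework.

```python
import json
import re
from datetime import datetime, timezone

# Assumption: warnings appear in tool output as "warning C<nnnn>",
# as in Microsoft compiler/analyzer diagnostics. Real tools vary.
WARNING_RE = re.compile(r"\bwarning\s+(C\d+)", re.IGNORECASE)

def parse_warnings(tool_output: str) -> list:
    """Extract warning codes from a tool's textual output."""
    return WARNING_RE.findall(tool_output)

def build_metrics_record(tool_name: str, tool_output: str) -> str:
    """Build a JSON usage record suitable for upload to a metrics store.

    Capturing this on the developer's desktop is the key point: these
    warnings are often fixed before check-in and would otherwise be
    invisible to repository-based measurement.
    """
    warnings = parse_warnings(tool_output)
    record = {
        "tool": tool_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "warning_count": len(warnings),
        "warning_codes": sorted(set(warnings)),
    }
    return json.dumps(record)

# Example: two warnings found during a desktop build, before check-in.
output = (
    "foo.c(10): warning C6011: dereferencing NULL pointer\n"
    "bar.c(42): warning C6386: buffer overrun\n"
)
print(build_metrics_record("analysis-tool", output))
```

Aggregating such records across many developers is what lets deployment decisions (which checks to enable by default, which to tune) be based on warnings actually encountered in the field rather than only on what survives into the repository.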