Combining Spreadsheet Smells for Improved Fault Prediction
Spreadsheets are commonly used in organizations as a programming tool for
business-related calculations and decision making. Since faults in spreadsheets
can have severe business impacts, a number of approaches from general software
engineering have been applied to spreadsheets in recent years, among them the
concept of code smells. Smells can in particular be used for the task of fault
prediction. An analysis of existing spreadsheet smells, however, revealed that
the predictive power of individual smells can be limited. In this work we
therefore propose a machine learning based approach which combines the
predictions of individual smells by using an AdaBoost ensemble classifier.
Experiments on two public datasets containing real-world spreadsheet faults
show significant improvements in terms of fault prediction accuracy.
Comment: 4 pages, 1 figure, to be published in 40th International Conference
on Software Engineering: New Ideas and Emerging Results Track
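The combination idea described above can be sketched with scikit-learn's AdaBoost ensemble. This is a minimal illustration only: the binary smell indicators, their weights, and the fault labels below are synthetic stand-ins, not the paper's spreadsheet datasets.

```python
# Sketch: combine individual (weak) smell indicators into one fault
# predictor using an AdaBoost ensemble. All data here is synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is a spreadsheet formula; each column is a binary smell flag
# (hypothetical smells, e.g. "long calculation chain").
n_samples, n_smells = 500, 6
X = rng.integers(0, 2, size=(n_samples, n_smells))

# Synthetic ground truth: faults correlate with a weighted mix of smells,
# so no single smell is a reliable predictor on its own.
weights = np.array([0.9, 0.1, 0.6, 0.2, 0.8, 0.3])
y = ((X @ weights + rng.normal(0, 0.5, n_samples)) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(f"ensemble accuracy: {clf.score(X_test, y_test):.2f}")
```

The ensemble's advantage comes from reweighting examples that individual smell indicators misclassify, which is the mechanism the abstract relies on when individual smells have limited predictive power.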
Software Development Analytics in Practice: A Systematic Literature Review
Context: Software Development Analytics is a research area concerned with
providing insights to improve product deliveries and processes. Many types of
studies, data sources and mining methods have been used for that purpose.
Objective: This systematic literature review aims at providing an aggregate view
of the relevant studies on Software Development Analytics in the past decade
(2010-2019), with an emphasis on its application in practical settings.
Method: Definition and execution of a search string upon several digital
libraries, followed by quality assessment criteria to identify the most
relevant papers. On those, we extracted a set of characteristics (study type,
data source, study perspective, development life-cycle activities covered,
stakeholders, mining methods, and analytics scope) and classified their impact
against a taxonomy. Results: Source code repositories, experimental case
studies, and developers are the most common data sources, study types, and
stakeholders, respectively. Product and project managers are also often
present, but less than expected. Mining methods are evolving rapidly and that
is reflected in the long list identified. Descriptive statistics are the most
usual method, followed by correlation analysis. Since software development is an
important process in every organization, it was unexpected to find that process
mining was present in only one study. Most contributions to the software
development life cycle were given in the quality dimension. Time management and
costs control were lightly debated. The analysis of security aspects suggests
it is an increasing topic of concern for practitioners. Risk management
contributions are scarce. Conclusions: There is a wide improvement margin for
software development analytics in practice. For instance, mining and analyzing
the activities performed by software developers in their actual workbench, the
IDE.
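The two mining methods the review reports as most common, descriptive statistics followed by correlation analysis, can be illustrated in a few lines. The per-module numbers below are invented for the example, not data from any reviewed study.

```python
# Illustration of the review's two most common mining methods:
# descriptive statistics and (rank) correlation analysis, applied to a
# made-up per-module dataset mined from a repository.
from statistics import mean, stdev
from scipy.stats import spearmanr

# Hypothetical measurements per module.
commits = [12, 45, 3, 60, 22, 8, 31, 54]   # changes per module
defects = [1, 5, 0, 7, 2, 1, 3, 6]         # post-release defects

print(f"mean commits: {mean(commits):.1f}, sd: {stdev(commits):.1f}")
rho, p = spearmanr(commits, defects)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Spearman's rank correlation is the usual choice here because repository metrics are rarely normally distributed, so a monotone (rank-based) association is a safer claim than a linear one.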
An Automatically Created Novel Bug Dataset and its Validation in Bug Prediction
Bugs are inescapable during software development due to frequent code
changes, tight deadlines, etc.; therefore, it is important to have tools to
find these errors. One way of performing bug identification is to analyze the
characteristics of buggy source code elements from the past and predict the
present ones based on the same characteristics, using e.g. machine learning
models. To support model building tasks, code elements and their
characteristics are collected in so-called bug datasets which serve as the
input for learning.
We present the \emph{BugHunter Dataset}: a novel kind of automatically
constructed and freely available bug dataset containing code elements (files,
classes, methods) with a wide set of code metrics and bug information. Other
available bug datasets follow the traditional approach of gathering the
characteristics of all source code elements (buggy and non-buggy) at only one
or more pre-selected release versions of the code. Our approach, on the other
hand, captures the buggy and the fixed states of the same source code elements
from the narrowest timeframe we can identify for a bug's presence, regardless
of release versions. To show the usefulness of the new dataset, we built and
evaluated bug prediction models and achieved F-measure values over 0.74.
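The dataset's intended use, training a classifier on code metrics and scoring it with the F-measure, can be sketched as follows. The metrics and bug labels are synthetic placeholders; the real BugHunter data holds file, class, and method rows with their actual metric suites.

```python
# Sketch: bug prediction from code metrics, evaluated with F-measure.
# Features and labels are synthetic, not BugHunter data.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Toy "code metrics" (e.g. size, complexity) for 400 code elements.
X = rng.normal(size=(400, 5))
# Synthetic rule: elements with high values of two metrics tend to be buggy.
y = (X[:, 0] + 0.5 * X[:, 2] > 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=42)
model = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_tr, y_tr)
f1 = f1_score(y_te, model.predict(X_te))
print(f"F-measure: {f1:.2f}")
```

The F-measure (harmonic mean of precision and recall) is the natural metric here because buggy elements are usually the minority class, where plain accuracy is misleading.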
A systematic literature review on the code smells datasets and validation mechanisms
The accuracy reported for code smell-detecting tools varies depending on the
dataset used to evaluate the tools. Our survey of 45 existing datasets reveals
that the adequacy of a dataset for detecting smells highly depends on relevant
properties such as the size, severity level, project types, number of each type
of smell, number of smells, and the ratio of smelly to non-smelly samples in
the dataset. Most existing datasets support God Class, Long Method, and Feature
Envy while six smells in Fowler and Beck's catalog are not supported by any
datasets. We conclude that existing datasets suffer from imbalanced samples,
lack of severity-level support, and restriction to the Java language.
Comment: 34 pages, 10 figures, 12 tables, Accepted
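The imbalance the review flags is easy to quantify on any labeled smell dataset: count instances per smell type and compute the smelly-to-non-smelly ratio. The labels below are invented for illustration.

```python
# Quick dataset-adequacy check the review implies: per-smell counts and
# the smelly/non-smelly ratio. Labels here are invented examples.
from collections import Counter

# Hypothetical per-element smell labels; None means "no smell".
labels = ["God Class", None, None, "Long Method", None, None,
          "God Class", None, "Feature Envy", None, None, None]

counts = Counter(l for l in labels if l is not None)
smelly = sum(counts.values())
ratio = smelly / (len(labels) - smelly)
print(f"smell counts: {dict(counts)}")
print(f"smelly/non-smelly ratio: {ratio:.2f}")
```

A ratio far below 1 signals the imbalanced-samples problem the review reports, which in turn inflates accuracy figures for smell detectors evaluated on such data.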
Use and misuse of the term "Experiment" in mining software repositories research
The significant momentum and importance of Mining Software Repositories (MSR) in Software Engineering (SE) has fostered new opportunities and challenges for extensive empirical research. However, MSR researchers seem to struggle to characterize the empirical methods they use within the existing empirical SE body of knowledge. This is especially the case for MSR experiments. To provide evidence on the special characteristics of MSR experiments and their differences from experiments traditionally acknowledged in SE so far, we elicited the hallmarks that differentiate an experiment from other types of empirical studies and characterized the hallmarks and types of experiments in MSR. We analyzed MSR literature obtained from a small-scale systematic mapping study to assess the use of the term experiment in MSR. We found that 19% of the papers claiming to be an experiment are in fact not experiments at all but observational studies, so they use the term in a misleading way. Of the remaining 81% of the papers, only one refers to a genuine controlled experiment while the others are experiments with limited control. MSR researchers tend to overlook such limitations, compromising the interpretation of the results of their studies. We provide recommendations and insights to support the improvement of MSR experiments. This work has been partially supported by the Spanish project: MCI PID2020-117191RB-I00.
Software development process mining: discovery, conformance checking and enhancement
Context. Modern software projects require the proper allocation of human, technical and
financial resources. Very often, project managers make decisions supported only by their personal
experience, intuition or simply by mirroring activities performed by others in similar
contexts. Most attempts to avoid such practices use models based on lines of code,
cyclomatic complexity or effort estimators, typically fed from software repositories,
which are known to contain several flaws.
Objective. Demonstrate the usefulness of process data and mining methods to enhance
software development practices by assessing efficiency and unveiling unknown process insights,
thus contributing to the creation of novel models within the software development analytics
realm.
Method. We mined the development process fragments of multiple developers in three
different scenarios by collecting Integrated Development Environment (IDE) events during their
development sessions. Furthermore, we used process and text mining to discover developers’
workflows and their fingerprints, respectively.
Results. Based on events extracted from their IDEs, we discovered and modeled, with good
quality, developers’ processes during programming sessions. We unveiled insights from
coding practices in distinct refactoring tasks, built accurate software complexity forecast models
based only on process metrics, and set up a method for coherently characterizing developers’
behaviors. The latter may ultimately lead to the creation of a catalog of software development
process smells.
Conclusions. Our approach is agnostic to programming languages, geographic location or
development practices, making it suitable for challenging contexts such as in modern global
software development projects using either traditional IDEs or sophisticated low/no code platforms.
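The discovery step described in the Method and Results above can be sketched with the directly-follows relation, the basic building block of many process-discovery algorithms. The IDE event names are hypothetical; a real session log would come from instrumented IDE plugins.

```python
# Sketch of process discovery from IDE events: count how often one
# activity is directly followed by another. Event names are hypothetical.
from collections import Counter

# One developer session as an ordered list of IDE event types.
session = ["open_file", "edit", "edit", "run_tests", "edit",
           "run_tests", "commit"]

# Directly-follows counts: (a, b) -> how often a is immediately before b.
dfg = Counter(zip(session, session[1:]))
for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
```

Aggregating these counts over many sessions yields a directly-follows graph per developer, which is one simple way to obtain the workflow models and behavioral fingerprints the thesis describes.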