38,614 research outputs found
Towards a self-evolving software defect detection process
Software defect detection research typically focuses on individual inspection and testing techniques. However, to apply defect detection techniques effectively, it is important to recognize when to use inspection techniques and when to use testing techniques. In addition, it is important to know when to deliver a product and use maintenance activities, such as troubleshooting and bug fixing, to address the remaining defects in the software. To detect software defects more effectively, not only should defect detection techniques be studied and compared, but the entire software defect detection process should be studied to give us a better idea of how it can be conducted, controlled, evaluated and improved. This thesis presents a self-evolving software defect detection process (SEDD) that provides a systematic approach to software defect detection and guides us as to when inspection, testing or maintenance activities are best performed. The approach is self-evolving in that it is continuously improved by assessing the outcome of the defect detection techniques in comparison with historical data. A software architecture and prototype implementation of the approach are also presented, along with a case study that was conducted to validate the approach. Initial results of using the self-evolving defect detection approach are promising.
Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data
Recently, researchers have started using cognitive load in various settings, e.g., educational psychology, cognitive load theory, or human-computer interaction. Cognitive load characterizes a task's demand on the limited information processing capacity of the brain. The widespread adoption of eye-tracking devices has led to increased attention to objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to reliably measure cognitive load. This technical report presents CEP-Web, an open source platform providing state-of-the-art data processing routines for cleaning pupillary data, combined with a graphical user interface enabling the management of studies and subjects. Future developments will include support for analyzing the cleaned data as well as support for Task-Evoked Pupillary Response (TEPR) studies.
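A core step in pupillary cleaning routines like those the abstract describes is bridging blink artifacts by interpolation. The sketch below is a hypothetical illustration, not CEP-Web's actual implementation; it assumes the eye tracker records blinks and dropouts as zero-diameter samples:

```python
# Minimal sketch of blink-artifact removal for a pupil-diameter trace.
# Assumption (not from CEP-Web): invalid samples are recorded as 0.0.

def clean_pupil_trace(samples, invalid=0.0):
    """Replace invalid (blink/dropout) samples by linear interpolation.

    Interior gaps are bridged between the nearest valid neighbours;
    leading/trailing gaps are filled with the nearest valid value.
    """
    valid = [i for i, s in enumerate(samples) if s != invalid]
    if not valid:
        raise ValueError("no valid samples to interpolate from")
    cleaned = list(samples)
    for i in range(len(samples)):
        if samples[i] != invalid:
            continue
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is None:          # leading gap: extend first valid value
            cleaned[i] = samples[right]
        elif right is None:       # trailing gap: extend last valid value
            cleaned[i] = samples[left]
        else:                     # interior gap: linear interpolation
            frac = (i - left) / (right - left)
            cleaned[i] = samples[left] + frac * (samples[right] - samples[left])
    return cleaned
```

Real pipelines typically add a dilation-speed filter and smoothing after this step; the interpolation above only shows the basic idea.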
Are the perspectives really different? Further experimentation on scenario-based reading of requirements
Perspective-Based Reading (PBR) is a scenario-based inspection technique where several reviewers read a document from different perspectives (e.g. user, designer, tester). The reading is made according to a special scenario, specific for each perspective. The basic assumption behind PBR is that the perspectives find different defects and that a combination of several perspectives detects more defects compared to the same amount of reading with a single perspective. The paper presents a study which analyses the differences in perspectives. The study is a partial replication of previous studies. It is conducted in an academic environment using graduate students as subjects. Each perspective applies a specific modelling technique: use case modelling for the user perspective, equivalence partitioning for the tester perspective and structured analysis for the design perspective. A total of 30 subjects were divided into 3 groups, giving 10 subjects per perspective. The analysis results show that: (1) there is no significant difference among the three perspectives in terms of defect detection rate and number of defects found per hour, (2) there is no significant difference in the defect coverage of the three perspectives, and (3) a simulation study shows that 30 subjects is enough to detect relatively small perspective differences with the chosen statistical test. The results suggest that a combination of multiple perspectives may not give higher coverage of the defects compared to single-perspective reading, but further studies are needed to increase the understanding of perspective differences.
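The kind of simulation study mentioned in point (3) — checking whether a given sample size can detect a given group difference — can be sketched as a Monte Carlo power estimate. The group sizes, effect size, and permutation test below are illustrative assumptions, not the study's actual simulation design:

```python
# Hedged sketch of a Monte Carlo power analysis: repeatedly simulate an
# experiment with a known group difference and count how often the test
# rejects the null hypothesis.
import random

def permutation_p(a, b, n_perm=200, rng=None):
    """Two-sided permutation-test p-value for a difference in group means."""
    rng = rng or random
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

def estimate_power(n_per_group, effect, n_sim=200, alpha=0.05, seed=1):
    """Fraction of simulated experiments in which the test rejects H0."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        if permutation_p(a, b, rng=rng) < alpha:
            rejections += 1
    return rejections / n_sim
```

Running `estimate_power` for a range of group sizes shows the smallest n at which the estimated power reaches the desired level, which is the logic behind the "30 subjects is enough" claim.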
Investigation of individual factors impacting the effectiveness of requirements inspections: a replicated experiment
This paper presents a replication of an empirical study regarding the impact of individual factors on the effectiveness of requirements inspections. Experimental replications are important for verifying results and investigating the generality of empirical studies. We utilized the lab package and procedures from the original study, with some changes and additions, to conduct the replication with 69 professional developers in three different companies in Turkey. In general, the results of the replication were consistent with those of the original study. The main result from the original study, which is supported in the replication, was that inspectors whose degree is in a field related to software engineering are less effective during a requirements inspection than inspectors whose degrees are in other fields. In addition, we found that Company, Experience, and English Proficiency impacted inspection effectiveness.
Experimental Evaluation of a Checklist-Based Inspection Technique to Verify the Compliance of Software Systems with the Brazilian General Data Protection Law
Recent laws to ensure the security and protection of personal data establish
new software requirements. Consequently, new technologies are needed to
guarantee software quality under the perception of privacy and protection of
personal data. Therefore, we created a checklist-based inspection technique
(LGPDCheck) to support the identification of defects in software artifacts
based on the principles established by the Brazilian General Data Protection
Law (LGPD). Objective/Aim: To evaluate the effectiveness and efficiency of
LGPDCheck for verifying privacy and data protection (PDP) in software artifacts
compared to ad-hoc techniques. Method: We will assess LGPDCheck and ad-hoc techniques experimentally through a quasi-experiment (two factors, five treatments). The data will be collected from IoT-based health software systems built by software engineering students from the Federal University of Rio de Janeiro. The data analyses will compare results from ad-hoc and LGPDCheck inspections, the participants' effectiveness and efficiency in each trial, the variance and standard deviation of defects found, and the time spent on the reviews. The data will be screened for outliers, and normality and homoscedasticity will be verified using the Shapiro-Wilk and Levene tests. Nonparametric or parametric tests, such as the Wilcoxon or Student's t-tests, will be applied as appropriate. Comment: Registered Report accepted for presentation at the 17th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, New Orleans, Louisiana, United States.
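The Wilcoxon test the authors plan to fall back on when normality is rejected reduces to ranking the paired differences. The sketch below is illustrative, not the registered study's analysis code; in practice one would use `scipy.stats.wilcoxon`:

```python
# Illustrative computation of the Wilcoxon signed-rank statistic W for
# paired scores (e.g. ad-hoc vs. LGPDCheck effectiveness per participant).
# Ties in |difference| get average ranks; zero differences are dropped.

def wilcoxon_statistic(before, after):
    """Return (W, n) where W = min(W+, W-) over signed ranks of differences."""
    diffs = [b - a for a, b in zip(before, after) if b - a != 0]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        # find the run of tied |differences| starting at position i
        j = i
        while j < len(ranked) and abs(diffs[ranked[j]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[ranked[k]] = avg
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus), len(diffs)
```

A small W relative to its null distribution for n non-zero pairs indicates a systematic difference between the two inspection techniques.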
Deep Learning in the Automotive Industry: Applications and Tools
Deep Learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has been proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state of the art in libraries, tools and infrastructures (e.g. GPUs and clouds) for implementing, training and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential, but both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process. Comment: 10 pages.
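The core operation of the convolutional networks the abstract focuses on is sliding a small learned kernel over the image. The pure-Python sketch below shows that operation for a single channel (valid padding, stride 1); real pipelines would use a framework such as TensorFlow or PyTorch instead:

```python
# Minimal single-channel 2D "convolution" as used in CNN layers.
# Note: like most deep learning frameworks, this computes cross-correlation
# (the kernel is not flipped).

def conv2d(image, kernel):
    """Slide `kernel` over `image` (lists of lists), 'valid' mode, stride 1."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out
```

A visual-inspection classifier stacks many such layers (with learned kernels, nonlinearities, and pooling) before a final classification head.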
Variation Factors in the Design and Analysis of Replicated Controlled Experiments - Three (Dis)similar Studies on Inspections versus Unit Testing
Background. In formal experiments on software engineering, the number of factors that may impact an outcome is very high. Some factors are controlled and changed by design, while others are either unforeseen or due to chance. Aims. This paper aims to explore how context factors change in a series of formal experiments and to identify implications for experimentation and replication practices, to enable learning from experimentation. Method. We analyze three experiments on code inspections and structural unit testing. The first two experiments use the same experimental design and instrumentation (replication), while the third, conducted by different researchers, replaces the programs and adapts the defect detection methods accordingly (reproduction). Experimental procedures and location also differ between the experiments. Results. Contrary to expectations, there are significant differences between the original experiment and the replication, as well as compared to the reproduction. Some of the differences are due to factors other than the ones designed to vary between experiments, indicating the sensitivity to context factors in software engineering experimentation. Conclusions. In aggregate, the analysis indicates that researchers who want to obtain reliable and repeatable empirical measures should consider reducing the complexity of software engineering experiments.