6,348 research outputs found
Using the ISO/IEC 9126 product quality model to classify defects : a Controlled Experiment
Background: Existing software defect classification schemes support multiple tasks, such as root cause analysis and process improvement guidance. However, existing schemes do not assist in assigning defects to a broad range of high-level software goals, such as software quality characteristics like functionality, maintainability, and usability. Aim: We investigate whether a classification based on the ISO/IEC 9126 software product quality model is reliable and useful for linking defects to the quality aspects they impact. Method: Six subjects, divided into two groups with respect to their expertise, classified 78 defects from an industrial web application using the ISO/IEC 9126 main quality characteristics and sub-characteristics, together with a set of proposed extended guidelines. Results: The ISO/IEC 9126 model is reasonably reliable when used to classify defects, even with incomplete defect reports. Reliability and variability are better for the six high-level main characteristics of the model than for the 22 sub-characteristics. Conclusions: The ISO/IEC 9126 software quality model provides a solid foundation for defect classification. Based on the follow-up qualitative analysis performed, we also recommend using more complete defect reports and tailoring the quality model to the context of use.
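As a minimal sketch of how such a classification might be recorded and aggregated (the `ISO_9126` mapping below lists the six main characteristics with only a partial, illustrative subset of the 22 sub-characteristics, and `tally` is a hypothetical helper, not the study's actual instrument):

```python
from collections import Counter

# Six main quality characteristics of ISO/IEC 9126; the sub-characteristic
# lists are a partial, illustrative subset of the standard's 22.
ISO_9126 = {
    "functionality": ["suitability", "accuracy", "security"],
    "reliability": ["maturity", "fault tolerance", "recoverability"],
    "usability": ["understandability", "learnability", "operability"],
    "efficiency": ["time behaviour", "resource utilisation"],
    "maintainability": ["analysability", "changeability", "testability"],
    "portability": ["adaptability", "installability", "replaceability"],
}

def tally(classifications):
    """Aggregate per-defect labels into counts per main characteristic."""
    unknown = [c for c in classifications if c not in ISO_9126]
    assert not unknown, f"labels outside the model: {unknown}"
    return Counter(classifications)

labels = ["usability", "functionality", "usability", "maintainability"]
print(tally(labels).most_common(1))  # -> [('usability', 2)]
```

Classifying at the main-characteristic level, as here, matches the study's finding that agreement is higher for the six characteristics than for the sub-characteristics.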
Measuring Software Process: A Systematic Mapping Study
Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their process's performance, which places them in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process: to identify studies, create a classification scheme based on the identified studies, and then map those studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus and into three groups based on publishing date. Five abstractions and 64 attributes were identified, and 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-size enterprise were the most frequently identified research contexts. Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R; Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R; Ministerio de Economía y Competitividad TIN2015-71938-RED
V-GQM: a feed-back approach to validation of a GQM study
The Goal/Question/Metric paradigm is used for empirical studies on software projects. Support is given on how to define and execute a study; however, support for running several subsequent studies is poor. V-GQM introduces a life-cycle perspective, creating a process that spans several GQM studies. After a GQM study has been completed, an analysis step of the plan is initiated. The metrics are analysed to investigate whether they comply with the plan or have extended it, and also to investigate whether the metrics collected answer more questions than were posed in the plan. The questions derived from the metrics are then used to form the goal for the next GQM study, effectively introducing a feed-back loop. This bottom-up approach makes a structured analysis of the GQM study possible when constructing several consecutive GQM studies. A case study using V-GQM is performed in an industrial setting.
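The bottom-up analysis step can be illustrated with a small sketch; the `GQMPlan` structure and `unplanned_metrics` helper below are hypothetical illustrations of the idea, not V-GQM's actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class GQMPlan:
    goal: str
    questions: dict = field(default_factory=dict)  # question -> planned metric names

def unplanned_metrics(plan: GQMPlan, collected: set) -> set:
    """Bottom-up step: metrics actually collected but never planned; in a
    V-GQM-style process these would seed the next study's questions and goal."""
    planned = {m for metrics in plan.questions.values() for m in metrics}
    return collected - planned

plan = GQMPlan("Reduce defect inflow",
               {"Q1: How many defects per release?": ["defect count"]})
print(unplanned_metrics(plan, {"defect count", "review effort"}))  # -> {'review effort'}
```

The returned "extra" metrics are exactly the material the abstract describes: data that answers more questions than the plan posed, closing the feed-back loop into the next study.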
The characteristics of the Computer Supported Collaborative Learning (CSCL) through Moodle: a view on students' knowledge construction process
Computer Supported Collaborative Learning (CSCL) is based on the pedagogical process of observation, where students learn progressively through active group interaction. CSCL is an emerging branch of the learning sciences concerned with studying how people can learn together with the help of computers. This research was conducted to measure the characteristics of a CSCL learning environment, delivered through Moodle, that assists students' knowledge construction during the teaching and learning process. The CSCL learning environment is an educational learning system developed to help teachers and students manage School Based Assessment (SBA) in selected secondary schools in Malaysia. The sample involved two groups of students and two Technical and Vocational Education and Training (TVET) teachers from two different schools. A total of 61 students, who were taught using the CSCL approach through Moodle, underwent the teaching and learning process in their school computer laboratory. The findings show that the characteristics of the CSCL learning approach used in this environment are at a high level for the first group, with an overall mean of 4.17, and at a moderate level for the second group, with an overall mean of 3.62. The results show that the characteristics of the CSCL learning environment help students build their knowledge during the teaching and learning process at a high level, with an overall mean score of 3.87. The means of the two groups may vary according to students' backgrounds as well as learning environment facilities. Although CSCL leads to students' self-development, improves learning quality, supports knowledge sharing, and assists students in building their knowledge, implementing CSCL must first consider the relevant technology facilities, especially computer laboratories and internet accessibility in schools.
The implication is that designing a good CSCL environment must also take into account the targeted users' cultural background and socioeconomic factors.
Technical Debt Prioritization: State of the Art. A Systematic Literature Review
Background. Software companies need to manage and refactor Technical Debt
issues. Therefore, it is necessary to understand if and when refactoring
Technical Debt should be prioritized with respect to developing features or
fixing bugs. Objective. The goal of this study is to investigate the existing
body of knowledge in software engineering to understand what Technical Debt
prioritization approaches have been proposed in research and industry. Method.
We conducted a Systematic Literature Review among 384 unique papers published
until 2018, following a consolidated methodology applied in Software
Engineering. We included 38 primary studies. Results. Different approaches have
been proposed for Technical Debt prioritization, all having different goals and
optimizing on different criteria. The proposed measures capture only a small
part of the plethora of factors used to prioritize Technical Debt qualitatively
in practice. We report an impact map of such factors. However, there is a lack
of empirically validated tools. Conclusion. We observed that Technical
Debt prioritization research is preliminary and that there is no consensus on
which factors are important or how to measure them. Consequently, we cannot
consider current research conclusive, and in this paper we outline different
directions for necessary future investigations.
Benchmarking best manufacturing practices: a study into four sectors of Turkish industry
Reports on a benchmarking study conducted to quantify how well companies operating in various sectors of Turkish industry match up to best practice, both in the practices they adopt and in the operational outcomes that result, and to test the hypothesis that the closer a company is to best practice, the more likely it is to achieve higher business performance. The survey, conducted in 1997 and 1998, included 82 companies from the Turkish electronics, cement, and automotive sectors and from part and component suppliers to the appliance industry. For data gathering, employs the Competitive Strategies and Best Practices Benchmarking Questionnaire, supported by some follow-up interviews and one-day site visits. Classifies two small groups of companies as leaders and laggers, depending on how close they were to best practice. Shows that the leaders have performed better than the laggers in adopting best manufacturing practices and in the achievement of high performance levels. The leaders have also achieved substantially higher business performance than the laggers. Furthermore, observes that large-sized companies outperform the rest, both in their success in implementing best manufacturing practices and in achieving high operational outcomes, and that there is no appreciable difference between industrial sectors in implementing best manufacturing practices and in achieving high operational outcomes.
MultiDimEr: a multi-dimensional bug analyzEr
Background: Bugs and bug management consume a significant amount of time and
effort from software development organizations. A reduction in bugs can
significantly improve the capacity for new feature development. Aims: We
categorize and visualize dimensions of bug reports to identify accruing
technical debt. This evidence can serve practitioners and decision makers not
only as an argumentative basis for steering improvement efforts, but also as a
starting point for root cause analysis, reducing overall bug inflow. Method: We
implemented a tool, MultiDimEr, that analyzes and visualizes bug reports. The
tool was implemented and evaluated at Ericsson. Results: We present our
preliminary findings using the MultiDimEr for bug analysis, where we
successfully identified components generating most of the bugs and bug trends
within certain components. Conclusions: By analyzing the dimensions provided by
MultiDimEr, we show that classifying and visualizing bug reports in different
dimensions can stimulate discussions around bug hot spots as well as help
validate the accuracy of manually entered bug report attributes used in
technical debt measurements such as fault slip-through. Comment: TechDebt@ICSE 2022: 66-7
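The core idea of slicing bug reports along dimensions can be sketched as follows; the record fields and `hot_spots` helper are hypothetical stand-ins, since MultiDimEr itself and its Ericsson-internal data schema are not public:

```python
from collections import Counter

# Hypothetical bug-report records; "component" and "phase" stand in for
# the dimensions a tool like MultiDimEr could slice and visualize.
bugs = [
    {"component": "parser", "phase": "system test"},
    {"component": "parser", "phase": "customer"},
    {"component": "ui", "phase": "system test"},
]

def hot_spots(reports, dimension):
    """Count bug reports along one dimension to reveal accumulation points."""
    return Counter(r[dimension] for r in reports)

print(hot_spots(bugs, "component").most_common())  # -> [('parser', 2), ('ui', 1)]
```

Counting along the "phase" dimension instead would approximate a fault slip-through view: bugs found by customers slipped past system test.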
Agile Literature Review
Background: Over the last 20 years the software development community has increasingly adopted agile techniques over the traditional approach to software development. Agile methods require lower upfront costs and increase project flexibility; however, agile methodology is not infallible. Objective: This research seeks to validate the assumption that there is a lack of robust research regarding agile project management and its use in the software development industry. This extensive review of existing literature on the topic will serve as a basis for new research on areas with existing ambiguity. Method: The search engines used to identify relevant literature from 1987 to 2021 were Business Source Premier and Google Scholar. The search queries were narrowed using deliberate keywords and phrases such as "agile software development" and "cost of requirement errors". All results were cross-referenced on both search engines to validate the accuracy of each source. Results: 76 papers containing relevant information on agile project management within the software community were identified: 55 academic journals, 1 book, 1 conference paper, 1 magazine article, 7 periodicals, 10 professional journals, and 1 textbook. 35 papers are critical of Agile methodology, 16 focus mostly on its strengths, 12 focus mainly on its weaknesses, and 13 contain relevant information regarding the cost of requirement errors.
A framework for flexible and reconfigurable vision inspection systems
Reconfiguration activities remain a significant challenge for automated Vision Inspection Systems (VIS), which are characterized by hardware rigidity and time-consuming software programming tasks. This work contributes to overcoming the current gap in VIS reconfigurability by proposing a novel framework based on the design of Flexible Vision Inspection Systems (FVIS), enabling a Reconfiguration Support System (RSS). FVIS is achieved using reprogrammable hardware components that allow for easy setup based on software commands. The RSS facilitates offline software programming by extracting parameters from real images, Computer-Aided Design (CAD) data, and rendered images using Automatic Feature Recognition (AFR). The RSS offers a user-friendly interface that guides non-expert users through the reconfiguration process for new part types, eliminating the need for low-level coding. The proposed framework has been practically validated during a 4-year collaboration with a leading global automotive half-shaft manufacturer. A fully automated FVIS and the related RSS have been designed following the proposed framework and are currently implemented in 7 plants of the global automotive supplier GKN, checking 60 defect types on thousands of parts per day and covering more than 200 individual part types and 12 part families.
Models, Techniques, and Metrics for Managing Risk in Software Engineering
The field of Software Engineering (SE) is the study of systematic and quantifiable approaches to software development, operation, and maintenance. This thesis presents a set of scalable and easily implemented techniques for quantifying and mitigating risks associated with the SE process. The thesis comprises six papers corresponding to SE knowledge areas such as software requirements, testing, and management. The techniques for risk management are drawn from stochastic modeling and operational research.
The first two papers relate to software testing and maintenance. The first paper describes and validates a novel iterative-unfolding technique for filtering a set of execution traces relevant to a specific task. The second paper analyzes and validates the applicability of some entropy measures to the trace classification described in the previous paper. The techniques in these two papers can speed up problem determination of defects encountered by customers, leading to improved organizational response, increased customer satisfaction, and eased resource constraints.
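As an illustration of the kind of entropy measure such trace classification might use (this is plain Shannon entropy over event frequencies, a common baseline; the thesis's exact measures may differ):

```python
import math
from collections import Counter

def shannon_entropy(trace):
    """Shannon entropy (bits) of the event distribution in one execution
    trace; low entropy means a few events dominate the trace."""
    counts = Counter(trace)
    n = len(trace)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(["open", "read", "read", "read"]))   # ~0.811 bits
print(shannon_entropy(["open", "read", "seek", "close"]))  # 2.0 bits (uniform)
```

A classifier could then use the entropy value as a feature: repetitive, low-entropy traces and diverse, high-entropy traces often correspond to different kinds of execution behaviour.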
The third and fourth papers are applicable to maintenance, overall software quality and SE management. The third paper uses Extreme Value Theory and Queuing Theory tools to derive and validate metrics based on defect rediscovery data. The metrics can aid the allocation of resources to service and maintenance teams, highlight gaps in quality assurance processes, and help assess the risk of using a given software product. The fourth paper characterizes and validates a technique for automatic selection and prioritization of a minimal set of customers for profiling. The minimal set is obtained using Binary Integer Programming and prioritized using a greedy heuristic. Profiling the resulting customer set leads to enhanced comprehension of user behaviour, leading to improved test specifications and clearer quality assurance policies, hence reducing risks associated with unsatisfactory product quality.
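The greedy prioritization step of the fourth paper can be sketched as a classic set-cover heuristic; the `coverage` input and function below are illustrative assumptions, not the paper's actual implementation:

```python
def greedy_prioritize(coverage):
    """Order customers so each pick covers the most still-uncovered usage
    scenarios (the classic greedy set-cover heuristic)."""
    uncovered = set().union(*coverage.values())
    remaining = dict(coverage)
    order = []
    while uncovered and remaining:
        best = max(remaining, key=lambda c: len(remaining[c] & uncovered))
        if not remaining[best] & uncovered:
            break  # no remaining customer adds new coverage
        order.append(best)
        uncovered -= remaining.pop(best)
    return order

cov = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
print(greedy_prioritize(cov))  # -> ['C', 'A']  (two customers cover all 7 scenarios)
```

Profiling only the customers at the front of this ordering yields most of the behavioural coverage at a fraction of the cost, which is the intuition behind minimizing the profiled set.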
The fifth and sixth papers pertain to software requirements. The fifth paper both models the relation between requirements and their underlying assumptions and measures the risk associated with failure of those assumptions, using Boolean networks and stochastic modeling. The sixth paper models the risk associated with the injection of requirements late in the development cycle with the help of stochastic processes.
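Under the simplifying assumption that a requirement fails whenever any of its underlying assumptions fails, and that assumption failures are independent, the risk reduces to a one-line formula (a far cruder model than the paper's Boolean networks, shown only to convey the idea):

```python
from math import prod

def requirement_risk(assumption_failure_probs):
    """P(requirement fails), assuming it fails whenever any underlying
    assumption fails and assumption failures are independent."""
    return 1.0 - prod(1.0 - p for p in assumption_failure_probs)

print(requirement_risk([0.1, 0.05]))  # ~0.145, i.e. 1 - 0.9 * 0.95
```

A Boolean-network model generalizes this by letting each requirement depend on an arbitrary Boolean function of its assumptions rather than a simple OR.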
- …