Telecommunications Network Planning and Maintenance
Telecommunications network operators face a constant challenge in providing new services that require ubiquitous broadband access. In doing so, they confront many problems, such as ensuring network coverage and providing the guaranteed Quality of Service (QoS). Network planning is a multi-objective optimization problem that involves clustering the area of interest by minimizing a cost function over relevant parameters such as installation cost, distance between user and base station, supported traffic, and quality of the received signal. Service assurance, on the other hand, deals with the disorders that occur in the hardware or software of the managed network. This paper surveys a large number of multicriteria techniques that have been developed to address different kinds of network planning and service assurance problems. The state of the art presented will help the reader develop a broader understanding of the problems in this domain.
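The clustering formulation described above can be sketched in a few lines: each user is assigned to the candidate base station minimizing a weighted cost. This is a toy illustration, not the paper's actual cost model; all coordinates, installation costs, and weights below are assumptions.

```python
# Toy sketch of cost-based clustering for network planning. The weights,
# coordinates, and installation costs are fabricated for illustration.
import math

users = [(1.0, 2.0), (4.0, 0.5), (3.0, 3.0)]          # user coordinates
stations = {"BS-A": ((0.0, 0.0), 5.0),                 # (location, install cost)
            "BS-B": ((4.0, 4.0), 3.0)}

W_DIST, W_INSTALL = 1.0, 0.2                           # assumed weights

def cost(user, site, install):
    """Weighted sum of user-to-station distance and site installation cost."""
    return W_DIST * math.dist(user, site) + W_INSTALL * install

def assign(users, stations):
    """Greedily cluster each user to the cheapest station."""
    return {u: min(stations, key=lambda s: cost(u, *stations[s])) for u in users}

print(assign(users, stations))
```

A realistic planner would add the other objectives the abstract lists (supported traffic, signal quality) as further weighted terms and optimize them jointly rather than greedily.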
Semi-supervised and Active Learning Models for Software Fault Prediction
As software continues to insinuate itself into nearly every aspect of our lives, software quality has become an extremely important issue. Software Quality Assurance (SQA) is a process that ensures the development of high-quality software; it concerns the important problems of maintaining, monitoring, and developing quality software. Accurate detection of fault-prone components in software projects is one of the most commonly practiced techniques that offers a path to high-quality products without excessive assurance expenditures. This type of quality modeling requires the availability of software modules with known fault content developed in a similar environment. However, collecting fault data at the module level, particularly in new projects, is expensive and time-consuming. Semi-supervised learning and active learning offer solutions to this problem by learning from limited labeled data while utilizing inexpensive unlabeled data. In this dissertation, we investigate semi-supervised learning and active learning approaches to the software fault prediction problem. The role of the base learner in semi-supervised learning is discussed using several state-of-the-art supervised learners. Our results showed that semi-supervised learning with an appropriate base learner leads to better fault-proneness prediction performance than supervised learning. In addition, incorporating a pre-processing technique prior to semi-supervised learning is a promising direction for further improving prediction performance. Active learning, which shares with semi-supervised learning the idea of utilizing unlabeled data, requires human effort for labeling fault proneness during its learning process. Empirical results showed that active learning supplemented by a dimensionality reduction technique performs better than supervised learning on release-based data sets.
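The core semi-supervised idea — a base learner pseudo-labels confident unlabeled modules and retrains on them — can be sketched with a minimal self-training loop. This is an illustrative stand-in, not the dissertation's models: the nearest-centroid base learner, the confidence measure, and all metric values below are assumptions.

```python
# Minimal self-training sketch: a nearest-centroid base learner pseudo-labels
# high-confidence unlabeled modules, then retrains on the enlarged set.
import math

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def fit(labeled):
    """Compute one centroid per class from (features, label) pairs."""
    by_class = {}
    for x, y in labeled:
        by_class.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_class.items()}

def predict(model, x):
    """Return (label, confidence); confidence = gap between the two
    nearest centroid distances."""
    dists = sorted((math.dist(x, c), y) for y, c in model.items())
    conf = dists[1][0] - dists[0][0] if len(dists) > 1 else float("inf")
    return dists[0][1], conf

def self_train(labeled, unlabeled, threshold=1.0, rounds=3):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit(labeled)
        scored = [(x, *predict(model, x)) for x in unlabeled]
        newly = [(x, y) for x, y, c in scored if c >= threshold]
        if not newly:
            break
        labeled += newly
        unlabeled = [x for x, y, c in scored if c < threshold]
    return fit(labeled)

# Toy software-metric vectors (e.g., LOC, complexity), fabricated for the demo.
labeled = [((1.0, 1.0), "clean"), ((8.0, 9.0), "faulty")]
unlabeled = [(1.5, 1.2), (7.5, 8.5), (5.0, 5.0)]
model = self_train(labeled, unlabeled)
print(predict(model, (2.0, 2.0))[0])
```

Ambiguous points (such as the (5.0, 5.0) module here) stay unlabeled, which is exactly the behavior that keeps self-training from amplifying its own mistakes.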
Variance of ML-based software fault predictors: are we really improving fault prediction?
Software quality assurance activities become increasingly difficult as
software systems become more and more complex and continuously grow in size.
Moreover, testing becomes even more expensive when dealing with large-scale
systems. Thus, to effectively allocate quality assurance resources, researchers
have proposed fault prediction (FP) which utilizes machine learning (ML) to
predict fault-prone code areas. However, ML algorithms typically make use of
stochastic elements to increase the prediction models' generalizability and
efficiency of the training process. These stochastic elements, also known as
nondeterminism-introducing (NI) factors, lead to variance in the training
process and as a result, lead to variance in prediction accuracy and training
time. This variance poses a challenge for reproducibility in research. More
importantly, while fault prediction models may have shown good performance in
the lab (e.g., oftentimes involving multiple runs and averaging outcomes),
high variance of results can pose the risk that these models show low
performance when applied in practice. In this work, we experimentally analyze
the variance of a state-of-the-art fault prediction approach. Our experimental
results indicate that NI factors can indeed cause considerable variance in the
fault prediction models' accuracy. We observed a maximum variance of 10.10% in
terms of the per-class accuracy metric. We thus also discuss how to deal with
such variance.
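The measurement the abstract describes — training the same model under different seeds and reporting the spread in accuracy — can be sketched as follows. This is an assumed setup, not the paper's experiment: the threshold learner, the random-search training (standing in for an NI factor), and the data are all fabricated.

```python
# Sketch of measuring seed-induced variance: train the same model with many
# seeds and report the spread (max - min) of its test accuracy.
import random

def train(data, seed, steps=200):
    """Learn a 1-D decision threshold by random search (a stand-in NI factor)."""
    rng = random.Random(seed)
    best_t, best_acc = 0.0, 0.0
    for _ in range(steps):
        t = rng.uniform(0, 10)
        acc = sum((x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x > t) == y for x, y in data) / len(data)

# Fabricated "metric value -> fault-prone?" data; the test-set boundary is
# deliberately shifted so different learned thresholds score differently.
data = [(x / 10, x / 10 > 4.2) for x in range(100)]
test = [(x / 10 + 0.05, x / 10 > 4.0) for x in range(100)]

accs = [accuracy(train(data, seed), test) for seed in range(20)]
spread = max(accs) - min(accs)
print(f"accuracy spread across 20 seeds: {spread:.2%}")
```

The same measurement applied to a real fault predictor would vary every NI factor (initialization, data shuffling, dropout) rather than only the search seed.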
Checks and cheques : implementing a population health and recall system to improve coverage of patients with diabetes in a rural general practice
Identification of all diabetic patients in the population is essential if diabetic care is to be effective in achieving the targets of the St Vincent Declaration.1 The challenge, therefore, is to establish population-based monitoring and control systems by means of state-of-the-art technology in order to achieve quality assurance in the provision of care for patients with diabetes.2,3 Disease management receives extensive international support as the most appropriate approach to organising and delivering healthcare for chronic conditions like diabetes.4 This approach is achieved through a combination of practice guidelines, patient education, consultations, and follow-up using a planned team approach and a strong focus on continuous quality improvement using information technology.5,6 The current software (Medical Director) could not easily meet these requirements, which led us to adopt a trial of Ferret. In designing this project we used change management7 and the plan, do, study, act cycle8 illustrated in Diagram 1.
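The core of such a population register and recall system is a query for patients overdue for review. The sketch below is purely illustrative (the practice used Ferret, not this code); the record fields, names, and six-monthly recall interval are assumptions.

```python
# Illustrative recall-list query: flag patients with diabetes whose last
# review is older than the recall interval. All records are invented.
from datetime import date, timedelta

RECALL_INTERVAL = timedelta(days=180)   # assumed six-monthly review cycle

patients = [
    {"name": "A", "diabetes": True,  "last_review": date(2024, 1, 10)},
    {"name": "B", "diabetes": True,  "last_review": date(2024, 11, 2)},
    {"name": "C", "diabetes": False, "last_review": date(2023, 5, 1)},
]

def recall_list(patients, today):
    """Patients with diabetes who are overdue for review as of `today`."""
    return [p["name"] for p in patients
            if p["diabetes"] and today - p["last_review"] > RECALL_INTERVAL]

print(recall_list(patients, date(2024, 12, 1)))   # patient A is overdue
```

Running this query on each plan-do-study-act cycle gives the coverage figures against which the improvement is audited.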
Predicting Line-Level Defects by Capturing Code Contexts with Hierarchical Transformers
Software defects consume 40% of the total budget in software development and
cost the global economy billions of dollars every year. Unfortunately, despite
the use of many software quality assurance (SQA) practices in software
development (e.g., code review, continuous integration), defects may still
exist in the official release of a software product. Therefore, prioritizing
SQA efforts for the vulnerable areas of the codebase is essential to ensure the
high quality of a software release. Predicting software defects at the line
level could help prioritize the SQA effort but is a highly challenging task
given that only ~3% of lines of a codebase could be defective. Existing works
on line-level defect prediction often fall short and cannot fully leverage the
line-level defect information. In this paper, we propose Bugsplorer, a novel
deep-learning technique for line-level defect prediction. It leverages a
hierarchical structure of transformer models to represent two types of code
elements: code tokens and code lines. Unlike the existing techniques that are
optimized for file-level defect prediction, Bugsplorer is optimized for a
line-level defect prediction objective. Our evaluation with five performance
metrics shows that Bugsplorer has a promising capability of predicting
defective lines with 26-72% better accuracy than that of the state-of-the-art
technique. It can rank the first 20% of defective lines within the top 1-3% of
suspicious lines. Thus, Bugsplorer has the potential to significantly reduce
SQA costs by ranking defective lines higher.
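The two-level structure the abstract describes — encode tokens, pool them into line representations, then score each line — can be sketched without the actual transformer models. Everything below is a toy stand-in, not Bugsplorer: the hash-based embeddings, the hand-picked risky-token bonus, and the example lines are assumptions that only mirror the token-to-line hierarchy.

```python
# Structural sketch of two-level (token -> line) encoding and line scoring.
# The embeddings and scorer are toy stand-ins, not the paper's transformers.
import re

def token_vec(tok, dim=4):
    """Toy deterministic token embedding (stand-in for the token encoder)."""
    seed = sum(ord(c) for c in tok)
    return [((seed * (i + 3)) % 97) / 97 for i in range(dim)]

def line_vec(tokens):
    """Line embedding = mean of token embeddings (stand-in for line encoder)."""
    vecs = [token_vec(t) for t in tokens]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def defect_score(line):
    """Toy line scorer: a hand-picked bonus for risky tokens (illustrative)."""
    risky = {"strcpy", "gets", "unchecked"}
    tokens = re.findall(r"\w+", line)
    base = sum(line_vec(tokens)) / 4              # in [0, 1)
    return base + 2.0 * sum(t in risky for t in tokens)

lines = ["int n = len(buf);", "strcpy(dst, src);", "return n;"]
ranked = sorted(lines, key=defect_score, reverse=True)
print(ranked[0])                                  # the strcpy line ranks first
```

In the real system both levels are learned transformers trained against a line-level objective; the point of the sketch is only the hierarchy: token vectors feed line vectors, and ranking happens per line rather than per file.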
Identifying Common Patterns and Unusual Dependencies in Faults, Failures and Fixes for Large-scale Safety-critical Software
As software evolves, becoming a more integral part of complex systems, modern society becomes more reliant on the proper functioning of such systems. However, the field of software quality assurance lacks detailed empirical studies from which best practices can be determined. The fundamental factors that contribute to software quality are faults, failures, and fixes, and although some studies have considered specific aspects of each, comprehensive studies have been quite rare. Thus, our establishment of the cause-effect relationship between the fault(s) that caused individual failures, together with the link to the fixes made to prevent those failures from (re)occurring, appears to be a unique characteristic of our work. In particular, we analyze fault types, verification activities, severity levels, investigation effort, artifacts fixed, components fixed, and the effort required to implement fixes for a large industrial case study. The analysis includes descriptive statistics, statistical inference through formal hypothesis testing, and data mining. Some of the most interesting empirical results include: (1) Contrary to popular belief, later life-cycle faults dominate as causes of failures. Furthermore, over 50% of high-priority failures (e.g., post-release failures and safety-critical failures) were caused by coding faults. (2) 15% of failures led to fixes spread across multiple components, and the spread was largely affected by the software architecture. (3) The amount of effort spent fixing faults associated with each failure was not uniformly distributed across failures; fixes with a greater spread across components and artifacts required more effort. Overall, the work indicates that fault prevention and elimination efforts focused on later life-cycle faults are essential, as coding faults were the dominating cause of safety-critical failures and post-release failures.
Further, statistical correlation and/or traditional data mining techniques show potential for assessment and prediction of the locations of fixes and the associated effort. By providing quantitative results and including statistical hypothesis testing, which is not yet a standard practice in software engineering, our work enriches the empirical knowledge needed to improve the state of the art and practice in software quality assurance.
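The kind of formal hypothesis test such a study applies — is fault type associated with failure priority? — can be sketched with a chi-square test of independence on a 2x2 contingency table. The counts below are fabricated for illustration; only the test procedure itself is standard.

```python
# Chi-square test of independence between fault type and failure priority
# on a 2x2 table. The counts are invented, not the study's data.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the observed table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

#                high-priority  low-priority   (fabricated counts)
coding_faults = (60,            40)
other_faults  = (30,            70)

stat = chi_square_2x2(*coding_faults, *other_faults)
CRITICAL_05_DF1 = 3.841   # chi-square critical value for df=1, alpha=0.05
print(f"chi2={stat:.2f}, association significant: {stat > CRITICAL_05_DF1}")
```

Rejecting independence here would support a claim like result (1) above: coding faults are over-represented among high-priority failures.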
Extending the L* Process Mining Model with Quality Management and Business Improvement Tools and Techniques
The purpose of this thesis is to determine whether it is possible to expand the L* life-cycle model with Six Sigma's DMAIC model, the ISO 9001:2008 Quality Management System, and business improvement frameworks such as the Baldrige Criteria for Performance Excellence for Business and Nonprofit™ and the European Foundation for Quality Management Excellence Model™. The work on the process mining project in which the L* life-cycle model was expanded with Six Sigma's DMAIC model was conducted in an Italian IT company, using data from the company's Help Desk and Software Quality Assurance operations. The work conducted in the company shows that the DMAIC cycle can provide an expanded framework for the L* life-cycle model in all of its stages while employing state-of-the-art process mining techniques and software.
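A basic process-mining step such a project builds on is deriving the directly-follows relation from an event log. The sketch below shows the generic technique only, not the thesis's toolchain; the help-desk traces are invented.

```python
# Deriving the directly-follows relation from an event log, the starting
# point of many process-discovery algorithms. Traces are fabricated.
from collections import Counter

event_log = [
    ["open", "triage", "fix", "verify", "close"],
    ["open", "triage", "fix", "close"],
    ["open", "fix", "verify", "close"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    return Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))

df = directly_follows(event_log)
print(df[("open", "triage")])   # 2
```

In a DMAIC-extended L* project, counts like these feed the Measure and Analyze phases, where deviations from the intended help-desk process become visible.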