Empirical Assessment of the Impact of Using Automatic Static Analysis on Code Quality
Automatic static analysis (ASA) tools analyze source or compiled code looking for violations of recommended programming practices (called issues) that might cause faults or might degrade some dimension of software quality. Antonio Vetro' focused his PhD on studying how applying ASA affects software quality, taking as reference point the quality dimensions specified by the ISO/IEC 25010 standard. His epistemological approach is that of empirical software engineering. During his three-year PhD he conducted experiments and case studies in three main areas: Functionality/Reliability, Performance and Maintainability. He empirically showed that specific ASA issues had an impact on these quality characteristics in the contexts under study: removing them from the code therefore resulted in a quality improvement. Vetro' has also investigated and proposed new research directions for this field: using ASA to improve software energy efficiency and to detect problems deriving from the interaction of multiple languages. The contribution closes with the recommendation of a generalized process for researchers and practitioners with a twofold goal: to improve software quality through ASA and to create a body of knowledge on the impact of using ASA on specific software quality dimensions, based on empirical evidence. This thesis represents a first step towards this goal.
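To make the measurement side of such studies concrete, here is a minimal sketch (not from the thesis) of counting ASA issues with one common tool, pylint; the tool choice and the example file name are assumptions for illustration.

```python
import json
import subprocess
from collections import Counter

def count_asa_issues(path: str) -> Counter:
    """Run pylint on `path` and tally the reported issues by message id."""
    # pylint exits non-zero when it finds issues, so the return code is not checked.
    result = subprocess.run(
        ["pylint", "--output-format=json", path],
        capture_output=True, text=True,
    )
    issues = json.loads(result.stdout or "[]")
    return Counter(issue["message-id"] for issue in issues)

# Hypothetical usage: tally the ten most frequent issue types in one file.
for message_id, n in count_asa_issues("example.py").most_common(10):
    print(f"{message_id}: {n}")
```

Tracking such per-issue counts before and after issue removal is the kind of measurement a study like this relates to quality characteristics.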
Mining software repositories to determine the impact of team factors on the structural attributes of software
This thesis was submitted for the award of PhD and was awarded by Brunel University London. Software development is intrinsically a human activity, and the role of the development team has been established as among the most decisive of all project success factors. Prior research has established empirically that team size and stability are linked to stakeholder satisfaction, team productivity and fault-proneness. Team size is usually considered a measure of the number of developers that modify the source code of a project, while team stability is typically a function of the cumulative time that each team member has worked with their fellow team members. There is, however, limited research investigating the impact of these factors on software maintainability - a crucial aspect given that up to 80% of development budgets are consumed in the maintenance phase of the lifecycle. This research sheds light on how these aspects of team composition influence the structural attributes of the developed software, which in turn drive the maintenance costs of software. This thesis asserts that new and broader insights can be gained by measuring these internal attributes of the software rather than the more traditional approach of measuring its external attributes. This can also enable practitioners to measure and monitor key indicators throughout the development lifecycle, taking remedial action where appropriate. Within this research the GoogleCode open-source forge is mined and a sample of 1,480 Java projects is selected for further study. Using the Chidamber and Kemerer design metrics suite, the impact of development team size and stability on the internal structural attributes of software is isolated and quantified. Drawing on prior research correlating these internal attributes with external attributes, the impact on maintainability is deduced. This research finds that those structural attributes that have been established to correlate with fault-proneness - coupling, cohesion and modularity - degrade as team sizes increase or team stability decreases. That degradation in the internal attributes of the software is associated with a deterioration in the sub-attributes of maintainability: changeability, understandability, testability and stability.
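As an illustration of the kind of analysis this thesis describes (a sketch under assumed data, not the thesis's actual pipeline), one can test whether a Chidamber and Kemerer coupling metric degrades as team size grows; the file name and column names below are hypothetical.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per mined project, with its development team
# size and a Chidamber-Kemerer metric (here, mean coupling between objects).
df = pd.read_csv("projects.csv")  # assumed columns: team_size, mean_cbo

# Spearman rank correlation: does coupling rise as teams grow?
rho, p_value = spearmanr(df["team_size"], df["mean_cbo"])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4g})")
```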
Decoding algorithms for surface codes
Quantum technologies have the potential to solve computationally hard problems that are intractable via classical means. Unfortunately, the unstable nature of quantum information makes it prone to errors. For this reason, quantum error correction is an invaluable tool for making quantum information reliable and enabling the ultimate goal of fault-tolerant quantum computing. Surface codes currently stand as the most promising candidates for building error-corrected qubits, given their two-dimensional architecture, their requirement of only local operations, and their high tolerance to quantum noise. Decoding algorithms are an integral component of any error correction scheme, as they are tasked with producing accurate estimates of the errors that affect quantum information so that it can subsequently be corrected. A critical aspect of decoding algorithms is their speed, since the quantum state suffers additional errors with the passage of time. This poses a conundrum-like trade-off, where decoding performance is improved at the expense of complexity and vice versa. In this review, a thorough discussion of state-of-the-art surface code decoding algorithms is provided. The core operation of these methods is described along with existing variants that show promise for improved results. In addition, both the decoding performance, in terms of error correction capability, and the decoding complexity are compared. A review of the existing software tools for surface code decoding is also provided.
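To give a concrete feel for what a decoder does, here is a minimal example of one widely used decoder family, minimum-weight perfect matching (MWPM), run through the PyMatching library on a toy distance-3 repetition code; the example is illustrative and is not taken from the review.

```python
import numpy as np
import pymatching

# Parity-check matrix of a distance-3 repetition code: each row is a
# stabilizer comparing two neighbouring bits.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

matching = pymatching.Matching(H)   # build the matching graph from H

error = np.array([0, 1, 0])         # a single bit-flip on the middle qubit
syndrome = (H @ error) % 2          # the stabilizer outcomes it triggers

correction = matching.decode(syndrome)  # MWPM estimate of the error
residual = (error + correction) % 2     # all zeros => decoding succeeded
print("syndrome:", syndrome, "correction:", correction, "residual:", residual)
```

The speed/accuracy trade-off discussed above appears here as the choice of decoder: matching is fast, while more elaborate decoders buy error correction capability at higher complexity.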
Proceedings of the Seventh Annual Software Engineering Workshop
The Software Engineering Laboratory, software tools, software errors, and cost estimation are addressed.
NASA Aircraft Controls Research, 1983
The workshop consisted of 24 technical presentations on various aspects of aircraft controls, ranging from the theoretical development of control laws to the evaluation of new controls technology in flight test vehicles. A special report on the status of foreign aircraft technology and a panel session with seven representatives from organizations that use aircraft controls technology were also included. The controls research needs and opportunities for the future, as well as the role envisioned for NASA in that research, were addressed. Input from the panel and responses to the workshop presentations will be used by NASA in developing future programs.
A Systematic Literature Review and Meta-analysis on Cross Project Defect Prediction
Background: Cross-project defect prediction (CPDP) has recently gained considerable attention, yet there are no systematic efforts to analyse the existing empirical evidence. Objective: To synthesise the literature to understand the state of the art in CPDP with respect to metrics, models, data approaches, datasets and associated performance. Further, we aim to assess the performance of CPDP vs. within-project defect prediction (WPDP) models. Method: We conducted a systematic literature review. Results from primary studies are synthesised (thematic analysis, meta-analysis) to answer the research questions. Results: We identified 30 primary studies passing quality assessment. Performance measures, except precision, vary with the choice of metrics. Recall, precision, f-measure, and AUC are the most common measures. Models based on Nearest-Neighbour and Decision Tree tend to perform well in CPDP, whereas the popular naïve Bayes yields average performance. Performance of ensembles varies greatly across f-measure and AUC. Data approaches address CPDP challenges using row/column processing, which improves CPDP in terms of recall at the cost of precision. This is observed on multiple occasions, including in the meta-analysis of CPDP vs. WPDP. The NASA and Jureczko datasets seem to favour CPDP over WPDP more frequently. Conclusion: CPDP is still a challenge and requires more research before trustworthy applications can take place. We provide guidelines for further research.
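As a concrete picture of the CPDP setting (a sketch under assumed data, not code from the review): train a classifier the review found to perform well, a decision tree, on one project's defect data and evaluate it on another, reporting the measures the review identified as most common. The file names and the 'defective' label column are assumptions.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# Hypothetical defect datasets: one row per module, static code metrics as
# features plus a binary 'defective' label.
source = pd.read_csv("project_a.csv")  # training (source) project
target = pd.read_csv("project_b.csv")  # evaluation (target) project

features = [c for c in source.columns if c != "defective"]
model = DecisionTreeClassifier(random_state=0)
model.fit(source[features], source["defective"])

pred = model.predict(target[features])
prob = model.predict_proba(target[features])[:, 1]

print("precision:", precision_score(target["defective"], pred))
print("recall:   ", recall_score(target["defective"], pred))
print("f-measure:", f1_score(target["defective"], pred))
print("AUC:      ", roc_auc_score(target["defective"], prob))
```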
Usability issues and design principles for visual programming languages
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Despite two decades of empirical studies focusing on programmers and the problems with programming, usability of textual programming languages is still hard to achieve. Their younger relation, visual programming languages (VPLs), shares the same problem of poor usability. This research explores and investigates the usability issues relating to VPLs in order to suggest a set of design principles that emphasise usability. The approach adopted focuses on issues arising from the interaction and communication between the human (programmers), the computer (user interface), and the program. Being exploratory in nature, this PhD reviews the literature as a starting point for stimulating and developing research questions and hypotheses, which were then investigated through experimental studies. The literature alone, however, cannot provide a fully comprehensive list of possible usability problems in VPLs from which design principles can be confidently recommended. A commercial VPL was therefore holistically evaluated, and a comprehensive list of usability problems was obtained from the research. Six empirical studies employing both quantitative and qualitative methodology were undertaken, as dictated by the nature of the research. Five of these were controlled experiments and one was qualitative-naturalistic. The experiments studied the effect of programming paradigm and of representation of program flow on novices' performance. The results indicated the superiority of control-flow programs over data-flow programs; a control-flow preference among novices; and that directional representation does not affect performance while traversal direction does, owing to the cognitive demands imposed upon programmers. Results of the qualitative study included a list of 145 usability problems, which were further categorised into ten problem areas. These findings were integrated with other analytical work based upon the review of the literature in a structured fashion to form a checklist and a set of design principles for VPLs that are empirically grounded and evaluated against existing research in the literature. Furthermore, an extended framework for Cognitive Dimensions of Notations is discussed and proposed as an evaluation method for diagrammatic VPLs on the basis of the qualitative study. These constitute the major findings and deliverables of this research. Several other findings, identified on the basis of the substantial amount of data obtained in the series of experiments, have also made a novel contribution to knowledge in the fields of Human-Computer Interaction, Psychology of Programming, and Visual Programming Languages.
Energy-Efficient and Semi-automated Truck Platooning
This open access book presents research and evaluation results of the Austrian flagship project “Connecting Austria,” illustrating the wide range of research needs and questions that arise when semi-automated truck platooning is deployed in Austria. The work presented is introduced in the context of work in similar research areas around the world. This interdisciplinary research effort considers aspects of engineering, road-vehicle and infrastructure technologies, traffic management and optimization, traffic safety, and psychology, as well as potential economic effects. The book’s broad perspective means that readers interested in current and state-of-the-art methods and techniques for the realization of semi-automated driving, whether with an engineering background or a less technical one, gain a comprehensive picture of this important subject. The contributors address many questions, such as: Which maneuvers does a platoon typically have to carry out, and how? How can platoons be integrated seamlessly into the traffic flow without becoming an obstacle to individual road users? What trade-offs between system information (sensors, communication effort, etc.) and efficiency are realistic? How can intersections be passed by a platoon in an intelligent fashion? Its consideration of diverse disciplines and their relevance to semi-automated truck platooning, together with its attention to the research and evaluation patterns needed to address such a broad task scientifically, makes Energy-Efficient and Semi-automated Truck Platooning a unique contribution, with methods that can be extended and adapted beyond the geographical area of the research reported.