
    Using the ISO/IEC 9126 product quality model to classify defects: a Controlled Experiment

    Background: Existing software defect classification schemes support multiple tasks, such as root cause analysis and process improvement guidance. However, existing schemes do not assist in assigning defects to a broad range of high-level software goals, such as software quality characteristics like functionality, maintainability, and usability. Aim: We investigate whether a classification based on the ISO/IEC 9126 software product quality model is reliable and useful to link defects to the quality aspects they impact. Method: Six different subjects, divided into two groups with respect to their expertise, classified 78 defects from an industrial web application using the ISO/IEC 9126 main quality characteristics and sub-characteristics, and a set of proposed extended guidelines. Results: The ISO/IEC 9126 model is reasonably reliable when used to classify defects, even using incomplete defect reports. Reliability and variability are better for the six high-level main characteristics of the model than for the 22 sub-characteristics. Conclusions: The ISO/IEC 9126 software quality model provides a solid foundation for defect classification. Based on the follow-up qualitative analysis performed, we also recommend using more complete defect reports and tailoring the quality model to the context of use.
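
    As a rough illustration of how such a reliability analysis might be mechanized, the sketch below computes chance-corrected agreement (Cohen's kappa) between two hypothetical raters over the six ISO/IEC 9126 main characteristics. The defect labels and the choice of kappa as the agreement statistic are assumptions for illustration, not details taken from the study.

```python
# Illustrative sketch (not from the paper): inter-rater agreement when two
# raters assign defects to the six ISO/IEC 9126 main characteristics.
# The defect labels and the use of Cohen's kappa are assumptions.
from collections import Counter

CHARACTERISTICS = ["functionality", "reliability", "usability",
                   "efficiency", "maintainability", "portability"]

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in CHARACTERISTICS) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of five defect reports by two subjects.
rater_1 = ["usability", "functionality", "maintainability", "usability", "efficiency"]
rater_2 = ["usability", "functionality", "usability", "usability", "efficiency"]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")
```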

    Fault Localization Models in Debugging

    Debugging is a rigorous but important part of the software engineering process. For more than a decade, the software engineering research community has been exploring different techniques for removing faults from programs, but it is quite difficult to eliminate all the faults in a program, so debugging still remains a real challenge for the software debugging and maintenance community. In this paper, we briefly introduce software anomalies and fault classification and then explain different fault localization models using the theory of diagnosis. Furthermore, we compare and contrast value-based and dependency-based models against different real misbehaviours and present some insights for the debugging process. Moreover, we discuss the results of both models and highlight the shortcomings as well as the advantages of these models in terms of debugging and maintenance.
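
    To make the contrast concrete, the following sketch illustrates one dependency-based flavour of fault localization: given which statements each statement depends on and an output observed to be wrong, a backward traversal over the dependency graph yields the set of suspect statements. The graph, statement names, and traversal are illustrative assumptions, not the specific models evaluated in the paper.

```python
# Illustrative sketch (assumed example, not the paper's model): dependency-based
# fault localization. Starting from a statement whose output is observed to be
# wrong, walk the static dependency graph backwards; every statement reached is
# a suspect, because the fault could have propagated along a dependency edge.
def suspects(dependencies, faulty_output_stmt):
    """dependencies: dict mapping a statement id to the ids it depends on."""
    suspect_set, worklist = set(), [faulty_output_stmt]
    while worklist:
        stmt = worklist.pop()
        if stmt in suspect_set:
            continue
        suspect_set.add(stmt)
        worklist.extend(dependencies.get(stmt, []))
    return suspect_set

# Hypothetical program: s4 prints a wrong value, s4 uses s2 and s3, s2 uses s1.
deps = {"s4": ["s2", "s3"], "s2": ["s1"], "s3": [], "s1": []}
print(sorted(suspects(deps, "s4")))   # -> ['s1', 's2', 's3', 's4']
```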

    Novel Validation Techniques for Autonomous Vehicles

    The automotive industry is facing challenges in producing electrical, connected, and autonomous vehicles. Even if these challenges are, from a technical point of view, independent from each other, the market and regulatory bodies require them to be developed and integrated simultaneously. The development of autonomous vehicles implies the development of highly dependable systems. This is a multidisciplinary activity involving knowledge from robotics, computer science, electrical and mechanical engineering, psychology, social studies, and ethics. Nowadays, many Advanced Driver Assistance Systems (ADAS), like the Emergency Braking System, Lane Keep Assistant, and Park Assist, are available. Newer luxury cars can drive by themselves on highways or park automatically, but the end goal is to develop completely autonomous vehicles, able to drive by themselves without needing human intervention in any situation. The more vehicles become autonomous, the greater the difficulty in keeping them reliable. This increases the challenges in terms of development processes, since their misbehaviors can lead to catastrophic consequences and, differently from the past, there is no longer a human driver to mitigate the effects of erroneous behaviors.

    Primary threats to dependability come from three sources: misuse by the drivers, systematic design errors, and random hardware failures. These safety threats are addressed under various aspects, considering the particular type of item to be designed. In particular, for the sake of this work, we analyze those related to Functional Safety (FuSa), viewed as the ability of a system to react on time and in the proper way to the external environment. From the technological point of view, these behaviors are implemented by electrical and electronic items. Various standards to achieve FuSa have been released over the years. The first, released in 1998, was IEC 61508; its latest version was released in 2010. This standard mainly defines:
    • a Functional Safety Management System (FSMS);
    • methods to determine a Safety Integrity Level (SIL);
    • methods to determine the probability of failures.

    To adapt IEC 61508 to the peculiarities of the automotive industry, a newer standard, ISO 26262, was released in 2011 and then updated in 2018. This standard provides guidelines about the FSMS, called in this case the Safety Lifecycle, describing how to develop software and hardware components suitable for functional safety. It also provides a different way to compute the SIL, called in this case the Automotive SIL (ASIL), which allows us to consider the average driver's ability to control the vehicle in case of failures. Moreover, it describes a way to determine the probability of random hardware failures through Failure Mode, Effects, and Diagnostic Analysis (FMEDA).

    This dissertation contains contributions to three topics:
    • mitigation of random hardware failures;
    • improvement of the ISO 26262 Hazard Analysis and Risk Assessment (HARA);
    • real-time verification of the embedded software.

    As the main contribution of this dissertation, I address the safety threats due to random hardware failures (RHFs). For this purpose, I propose a novel simulation-based approach to aid the Failure Mode, Effects, and Diagnostic Analysis (FMEDA) required by the ISO 26262 standard. Thanks to a SPICE-level model of the item and the adoption of fault injection techniques, it is possible to simulate its behaviors, obtaining useful information to classify the various failure modes. The proposed approach evolved from a mere simulation of the item, which allows only an item-level failure mode classification, up to a vehicle-level analysis. Propagating the effects of the failure modes to the whole vehicle enables us to assess their impact on the vehicle's drivability, improving the quality of the classification. It is particularly advantageous where it is difficult to predict how item-level misbehaviors propagate to the vehicle level, as in the case of a virtual differential gear or the mobility system of a robot. The latter has been chosen since it can be considered similar to the novel light vehicles, such as electric scooters, that are becoming more and more popular; moreover, my research group has complete access to its design, since it is realized by our university's DIANA students' team. When a SPICE-level simulation takes too long to perform, or a complete model of the item cannot be developed due to intellectual property protection rules, it is possible to aid this process through behavioral models of the item. A simulation of this kind has been performed on a mobile robotic system: behavioral models of the electronic components were used, alongside mechanical simulations, to assess the software's failure mitigation capabilities.

    Another contribution has been obtained by modifying the main one. The idea was to make it possible to also aid the Hazard Analysis and Risk Assessment (HARA). This assessment is performed during the concept phase, before starting to design the item implementation. Its goal is to determine the hazards involved in the item's functionality and their associated levels of risk; the end result of this phase is a list of safety goals, and for each of these safety goals an ASIL has to be determined. Since the HARA relies only on the designers' expertise and knowledge, it lacks objectivity and repeatability. Thanks to the simulation results, it is possible to predict the effects of the failures on the vehicle's drivability, allowing us to improve the severity and controllability assessments and thus the objectivity. Moreover, since the simulation conditions can be stored, it is possible at any time to recheck the results and to add new scenarios, improving the repeatability.

    The third group of contributions is about the real-time verification of embedded software. Through Hardware-In-the-Loop (HIL), software integration verification has been performed to test a fundamental automotive component, mixed-criticality applications, and multi-agent robots. The first of these contributions concerns real-time tests on Body Control Modules (BCM). These modules manage various electronic accessories in the vehicle's body, like power windows and mirrors, air conditioning, the immobilizer, and central locking. The main characteristics of BCMs are the communication with other embedded computers via the car's vehicle bus (Controller Area Network) and a high number (hundreds) of low-speed I/Os. As the second contribution, I propose a methodology to assess the effects of the error recovery system on mixed-criticality applications in terms of deadline misses. The system runs two tasks: a critical airplane longitudinal control and a non-critical image compression algorithm. I start by presenting the approach on a benchmark application containing an instrumented bug in the lower-criticality task; then we improve it by injecting random errors inside the lower-criticality task's memory space through a debugger. In the latter case, thanks to the HIL, it is possible to pause the time-domain simulation while the debugger operates and resume it once the injection is complete. In this way, it is possible to interact with the target without interfering with the simulation results, combining full control of the target with an accurate time-domain assessment. The last contribution of this third group is a methodology to verify, on multi-agent robots, the synchronization between two agents in charge of moving the end effector of a delta robot: the correct position and speed of the end effector at any time are strongly affected by a loss of synchronization. The last two contributions may seem unrelated to the automotive industry, but interest in these applications is growing: mixed-criticality systems allow reducing the number of ECUs inside cars (for cost reduction), while the multi-agent approach helps to improve the cooperation of connected cars with other vehicles and with the infrastructure.

    The fourth contribution, contained in the appendix, is about a machine learning application to improve the social acceptance of autonomous vehicles. The idea is to improve the comfort of the passengers by recognizing their emotions. I started with the idea of modifying the vehicle's driving style based on a real-time emotion recognition system but, due to the difficulty of performing such operations in an experimental setup, I moved to analyzing the emotions offline. The emotions are determined from volunteers' facial expressions recorded while viewing 3D representations showing different calibrations. Thanks to the passengers' emotional responses, it is possible to choose the best calibration from the comfort point of view.
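
    Since the HARA contribution hinges on assigning an ASIL to each safety goal from severity, exposure, and controllability ratings, a small sketch of that mapping may help. It reproduces the ISO 26262 risk-graph table (S1-S3, E1-E4, C1-C3) via the commonly cited sum-of-class-indices shortcut; the helper name and the example ratings are illustrative assumptions, not part of the dissertation.

```python
# Sketch (assumed helper, not from the dissertation): ASIL determination for a
# hazardous event from its Severity (S1-S3), Exposure (E1-E4) and
# Controllability (C1-C3) ratings. The mapping reproduces the ISO 26262
# risk-graph table via the sum-of-class-indices shortcut:
# S + E + C <= 6 -> QM, 7 -> ASIL A, 8 -> B, 9 -> C, 10 -> D.
def asil(severity: int, exposure: int, controllability: int) -> str:
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("S in 1..3, E in 1..4, C in 1..3")
    total = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# Hypothetical hazardous event: severe harm (S3), high exposure (E4),
# difficult to control (C3) -> the highest integrity level.
print(asil(3, 4, 3))   # ASIL D
print(asil(2, 3, 2))   # ASIL A
```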

    Design and test of a pump failure anticipator

    Tests were conducted on two different types of pumps, a positive displacement internal gear pump and a shroudless centrifugal pump, in order to refine the concept and to finalize design details. A concept and a system are developed that can be used with pumps to allow a rapid judgement of a pump's suitability for further service. Test results and detailed data analysis are included.

    Design for quality manufacturability analysis for common assembly process

    The globalization of the market economy has precipitated a dramatic increase in competition, necessitating higher-quality products at lower cost in shorter time periods. Shorter life cycles and the proliferation of products have made companies integrate all the phases of manufacturing to bring about a superior design. Design for Quality Manufacturability (DFQM) provides a technique to invoke manufacturing and assembly considerations while designing a product. The DFQM architecture identifies factors, consisting of several variables, that are influenced by certain error catalysts to cause one or more specific defects. A methodology is suggested to identify and quantify these error catalysts in order to estimate the quality of the design. Some of the assembly processes that are widely used are insertion, riveting, welding, fastening, press-fit, and snap-fit. A detailed study of each of these processes is done to analyze their techniques, capabilities, and limitations. Using the DFQM architecture, defect classes and specific defects are identified and analyzed. A correlation matrix is formed to identify the processes that are associated with each specific defect. Cause-effect analysis using Ishikawa diagrams provides a means of analyzing the characteristics of the relevant processes contributing to each specific defect. These characteristics are grouped to identify the error catalysts that influence the occurrence of the specific defect.
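
    A rough sketch of the correlation-matrix step described above: assembly processes are mapped to the specific defects they can produce, so that a design using a given set of processes can be screened for the defects it is exposed to. The process names, defect names, and the set-based matrix representation are illustrative assumptions, not data from the thesis.

```python
# Illustrative sketch (assumed data): a correlation matrix relating common
# assembly processes to the specific defects they may introduce. A design that
# uses a subset of processes can then be screened for its exposure to defects.
PROCESSES = ["insertion", "riveting", "welding", "fastening", "press-fit", "snap-fit"]

# correlation[defect] = set of processes in which the defect can occur (assumed).
correlation = {
    "misalignment": {"insertion", "press-fit", "snap-fit"},
    "loose joint":  {"riveting", "fastening", "snap-fit"},
    "cracked part": {"press-fit", "riveting"},
    "porosity":     {"welding"},
}

def defect_exposure(processes_used):
    """Return the specific defects a design is exposed to, given its processes."""
    used = set(processes_used)
    return sorted(d for d, procs in correlation.items() if procs & used)

print(defect_exposure(["welding", "snap-fit"]))
# -> ['loose joint', 'misalignment', 'porosity']
```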

    MiSFIT: Mining Software Fault Information and Types

    As software becomes more important to society, the number, age, and complexity of systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these software systems. Software organizations can utilize data from manual fault classification to meet their process improvement needs, but many organizations lack the expertise or resources to implement such classification correctly. This dissertation addresses the need for the automation of software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of similar nature. The resulting classifications show good agreement for common software faults with no manual effort. To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults from an open source project. MiSFIT clusters the faults based on the changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
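
    As a toy illustration of clustering faults by the syntax of their repairs, the sketch below represents each fault-fixing change as counts of syntactic change types and groups faults whose repairs are dominated by the same type. The change types, fault identifiers, and grouping rule are assumptions; they are not the MiSFIT taxonomy or algorithm.

```python
# Toy sketch (assumed taxonomy and grouping rule, not the MiSFIT implementation):
# represent each fault-fixing change as counts of syntactic change types, then
# group faults whose repairs are dominated by the same change type.
from collections import Counter, defaultdict

# Hypothetical fault-fixing changes, each a list of syntactic change types.
repairs = {
    "fault-101": ["if-condition-changed", "if-condition-changed", "call-added"],
    "fault-102": ["assignment-changed"],
    "fault-103": ["if-condition-changed"],
    "fault-104": ["call-added", "call-added", "assignment-changed"],
}

def dominant_change(change_types):
    """The most frequent change type in a repair (ties broken alphabetically)."""
    counts = Counter(change_types)
    return min(counts, key=lambda t: (-counts[t], t))

clusters = defaultdict(list)
for fault_id, change_types in repairs.items():
    clusters[dominant_change(change_types)].append(fault_id)

for label, members in sorted(clusters.items()):
    print(label, members)
# assignment-changed ['fault-102']
# call-added ['fault-104']
# if-condition-changed ['fault-101', 'fault-103']
```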

    Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach

    Historically, software production methods and tools have had a single goal: to produce high quality software. Since the goal of Model-Driven Development (MDD) methods is no different, MDD methods have emerged to take advantage of the benefits of using conceptual models to produce high quality software. In such MDD contexts, conceptual models are used as input to automatically generate final applications. Thus, we advocate that there is a relation between the quality of the final software product and the quality of the models used to generate it. The quality of conceptual models can be influenced by many factors. In this thesis, we focus on the accuracy of the techniques used to predict the characteristics of the development process and the generated products. In terms of prediction techniques for software development processes, it is widely accepted that knowing the functional size of applications is essential in order to successfully apply effort and budget models. In order to evaluate the quality of generated applications, defect detection is considered to be the most suitable technique. The research goal of this thesis is to provide an accurate measurement procedure based on COSMIC for the automatic sizing of object-oriented OO-Method MDD applications. To achieve this research goal, it is necessary to accurately measure the conceptual models used in the generation of object-oriented applications. It is also very important for these models not to have defects, so that the applications to be measured are correctly represented. In this thesis, we present the OOmCFP (OO-Method COSMIC Function Points) measurement procedure. This procedure makes a twofold contribution: the accurate measurement of object-oriented applications generated in MDD environments from the conceptual models involved, and the verification of conceptual models to allow the complete generation of correct final applications from the conceptual models involved. The OOmCFP procedure has been systematically designed, applied, and automated. The measurement procedure has been validated for conformance to the ISO 14143 standard and to the metrology concepts defined in the ISO VIM, and the accuracy of the measurements obtained has been evaluated according to ISO 5725. The procedure has also been validated by performing empirical studies. The results of the empirical studies demonstrate that OOmCFP can obtain accurate measures of the functional size of applications generated in MDD environments from the corresponding conceptual models.
    Marín Campusano, BM. (2011). Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11237
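
    Since the procedure is COSMIC-based, a minimal sketch of the COSMIC measurement principle may help: the functional size of an application is the count of its data movements (Entry, eXit, Read, Write) over all functional processes, one CFP each. The functional processes and movements listed below are hypothetical, and the sketch does not reproduce the OOmCFP mapping rules.

```python
# Minimal sketch of the COSMIC measurement principle (not the OOmCFP mapping
# rules): each data movement (Entry, eXit, Read, Write) of a functional process
# contributes 1 CFP, and the application size is the sum over all processes.
MOVEMENT_TYPES = {"E", "X", "R", "W"}   # Entry, eXit, Read, Write

# Hypothetical functional processes derived from a conceptual model.
functional_processes = {
    "create_customer": ["E", "W", "X"],        # receive data, store it, confirm
    "list_orders":     ["E", "R", "R", "X"],   # request, read two entities, reply
}

def cosmic_size(processes):
    """Functional size in COSMIC Function Points (1 CFP per data movement)."""
    assert all(m in MOVEMENT_TYPES for moves in processes.values() for m in moves)
    return sum(len(moves) for moves in processes.values())

print(f"{cosmic_size(functional_processes)} CFP")   # 7 CFP
```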