
    Mitigating the effect of coincidental correctness in spectrum based fault localization

    Coincidentally correct test cases are those that execute faulty program statements but do not result in failures. The presence of such test cases in a test suite reduces the effectiveness of spectrum-based fault localization approaches, such as Ochiai and Tarantula, which localize faulty statements by calculating a suspiciousness score for every program statement from test coverage information. The goal of this dissertation is to improve the understanding of how the presence of coincidentally correct test cases impacts the effectiveness of spectrum-based fault localization approaches and to develop a family of approaches that improve fault localization effectiveness by mitigating the effect of coincidentally correct test cases. Each approach (1) classifies coincidentally correct test cases using test coverage information, and (2) recalculates a suspiciousness score for every program statement using the classification information. We developed classification approaches using test coverage metrics at different levels of granularity, such as statement, branch, and function. We developed a new approach for ranking program statements using suspiciousness scores calculated based on the heuristic that statements covered by more failing and coincidentally correct test cases are more suspicious. We extended the family of fault localization approaches to support multiple faults. We developed an approach that incorporates tester feedback to mitigate the effect of coincidental correctness. The approach analyzes tester feedback to determine a lower bound on the number of coincidentally correct test cases present in a test suite. The lower bound is also used to determine when classification of coincidentally correct test cases can improve fault localization effectiveness. We evaluated the fault localization effectiveness of our approaches and studied how the effectiveness changes for varying characteristics of test suites, such as size, test suite type (e.g., random, coverage adequate), and the percentage of passing test cases that are coincidentally correct. Our key findings are summarized as follows. Mitigating the effect of coincidentally correct test cases improved fault localization effectiveness. The extent of the improvement increased with an increase in the percentage of passing test cases that were coincidentally correct, although no improvement was observed when most passing test cases in a test suite were coincidentally correct. When random test suites were used to localize faults, a coarse-grained coverage spectrum, such as function coverage, resulted in better classification than fine-grained coverage spectra, such as statement and branch coverage. Utilizing tester feedback improved the precision of classification. Mitigating the effect of coincidental correctness in the presence of two faults improved the effectiveness for both faults simultaneously for most faulty programs. Faulty statements that were harder to reach and that affected fewer program statements resulted in fewer coincidentally correct test cases and were more effectively localized.
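    As background for the dilution effect described above, the Ochiai and Tarantula formulas themselves are standard. A minimal sketch (the toy statement and test names are illustrative, not taken from the dissertation) shows how a single coincidentally correct test masks the faulty statement:

    ```python
    import math

    def suspiciousness(coverage, results):
        """Compute Ochiai and Tarantula suspiciousness scores per statement.

        coverage: statement id -> set of ids of tests that execute it
        results:  test id -> True if the test passed, False if it failed
        """
        total_pass = sum(1 for passed in results.values() if passed)
        total_fail = len(results) - total_pass
        scores = {}
        for stmt, tests in coverage.items():
            cov_fail = sum(1 for t in tests if not results[t])
            cov_pass = len(tests) - cov_fail
            denom = math.sqrt(total_fail * (cov_fail + cov_pass))
            ochiai = cov_fail / denom if denom else 0.0
            fail_ratio = cov_fail / total_fail if total_fail else 0.0
            pass_ratio = cov_pass / total_pass if total_pass else 0.0
            ratio_sum = fail_ratio + pass_ratio
            tarantula = fail_ratio / ratio_sum if ratio_sum else 0.0
            scores[stmt] = {"ochiai": ochiai, "tarantula": tarantula}
        return scores

    # t3 is coincidentally correct: it executes the faulty s2 yet passes, so s2
    # merely ties with the correct s1 under both metrics; reclassifying t3 as
    # failing would rank s2 strictly first.
    cov = {"s1": {"t1", "t2"}, "s2": {"t1", "t3"}}
    res = {"t1": False, "t2": True, "t3": True}
    print(suspiciousness(cov, res))
    ```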

    Augmenting bug localization with part-of-speech and invocation

    Bug localization represents one of the most expensive, as well as time-consuming, activities during software maintenance and evolution. To alleviate the workload of developers, numerous methods have been proposed to automate this process and narrow down the scope of reviewing buggy files. In this paper, we present a novel buggy source-file localization approach, using information from both the bug reports and the source files. We leverage the part-of-speech features of bug reports and the invocation relationships among source files. We also integrate an adaptive technique to further optimize the performance of the approach. The adaptive technique distinguishes between Top 1 and Top N recommendations for a given bug report and consists of two modules. One module maximizes the accuracy of the first recommended file, and the other aims at improving the accuracy of the fixed-defect file list. We evaluate our approach on six large-scale open source projects, i.e., AspectJ, Eclipse, SWT, ZXing, Birt, and Tomcat. Compared to previous work, empirical results show that our approach can improve the overall prediction performance in all of these cases. In particular, in terms of Top 1 recommendation accuracy, our approach achieves an enhancement from 22.73% to 39.86% for AspectJ, from 24.36% to 30.76% for Eclipse, from 31.63% to 46.94% for SWT, from 40% to 55% for ZXing, from 7.97% to 21.99% for Birt, and from 33.37% to 38.90% for Tomcat.
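    The abstract does not give the scoring model itself; the sketch below is a hypothetical illustration of the general idea of combining report-to-file textual similarity with an invocation-relationship boost. The cosine weighting, the alpha parameter, and the data layout are all assumptions for the example, not the paper's actual method:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two sparse term-weight dicts."""
        dot = sum(w * v.get(term, 0.0) for term, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def rank_files(report_vec, files, alpha=0.8):
        """Hypothetical ranking: textual similarity plus an invocation boost.

        files: file name -> (term-weight dict, set of file names it invokes).
        (1 - alpha) controls how much of a caller's base score leaks to callees.
        """
        base = {name: cosine(report_vec, vec) for name, (vec, _) in files.items()}
        final = dict(base)
        for name, (_, callees) in files.items():
            for callee in callees:
                if callee in final:
                    final[callee] += (1 - alpha) * base[name]
        return sorted(final.items(), key=lambda kv: kv[1], reverse=True)

    # Term weights for a report; part-of-speech weighting would bias these
    # toward nouns before the vectors are built.
    report = {"null": 2.0, "pointer": 1.5, "render": 1.0}
    files = {
        "Renderer.java": ({"render": 3.0, "paint": 2.0}, {"Buffer.java"}),
        "Buffer.java": ({"alloc": 2.0, "null": 0.5}, set()),
    }
    print(rank_files(report, files))
    ```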

    Radio Frequency Interference Impact Assessment on Global Navigation Satellite Systems

    The Institute for the Protection and Security of the Citizen of the EC Joint Research Centre (IPSC-JRC) has been mandated to perform a study on the Radio Frequency (RF) threat against telecommunications and ICT control systems. This study is divided into two parts. The first part concerns the assessment of high energy radio frequency (HERF) threats, where the focus is on the generation of electromagnetic pulses (EMP), the development of corresponding devices, and the possible impact on ICT and power distribution systems. The second part of the study concerns radio frequency interference (RFI) with regard to global navigation satellite systems (GNSS). This document contributes to the second part and contains a detailed literature study disclosing the weaknesses of GNSS. Whereas the HERF analysis only concerns intentional interference issues, this study on GNSS also takes into account unintentional interference, enlarging the spectrum of plausible interference scenarios.

    Robust estimation of excitations in mechanical systems using M-estimators – Experimental applications

    This second part of the study presents some experimental applications to mechanical systems in which the results of excitation estimation, obtained using traditional least squares and M-estimates, are compared. The first case presented is a single-input, multiple-output system: a simple test rig for the study of the vibrations of a two-degrees-of-freedom system is employed to identify the constraint displacement that causes the measured mass vibrations in the presence of heavy noise. The second case is a multiple-input, multiple-output system: a rotor test rig is used to identify the positions, amplitudes, and phases of two unbalances using the vibrations measured in the bearings. In this case, an additional theoretical part is also introduced covering the basics of model-based identification in the frequency domain applied to rotor dynamics. The last case is again a single-input, multiple-output system, but in an industrial application: experimental vibrations of a 320 MW steam turbo-generator are used to identify the position and amount of a known balancing mass in an on-field real case. Moreover, whereas the numerical examples presented in the first part assumed perfect knowledge of the system, in these cases some uncertainties are also present in the system model. Finally, the paper introduces the use of the M-estimate technique to evaluate the adequacy of the model of the system, by means of the analysis of the weights attributed to the measurements as a function of the frequency of the excitation.
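    For readers unfamiliar with M-estimation: the comparison above contrasts ordinary least squares with robust estimates typically obtained by iteratively reweighted least squares (IRLS). A minimal real-valued sketch with Huber weights follows; the tuning constant and the MAD scale estimate are standard textbook choices, not necessarily those of the paper. The returned per-measurement weights are the quantity the paper analyzes as a function of excitation frequency:

    ```python
    import numpy as np

    def huber_irls(A, b, k=1.345, iters=50, tol=1e-8):
        """Estimate excitations x in b ~ A @ x with a Huber M-estimator,
        solved by iteratively reweighted least squares (IRLS)."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]    # ordinary LS starting point
        w = np.ones(len(b))
        for _ in range(iters):
            r = b - A @ x
            scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
            if scale == 0:
                scale = 1.0
            u = np.abs(r) / scale
            w = np.where(u <= k, 1.0, k / np.maximum(u, 1e-12))   # Huber weights
            sw = np.sqrt(w)
            x_new = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
            if np.linalg.norm(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        return x, w

    # Demo: one excitation, ten sensors, one grossly corrupted measurement.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(10, 1))
    b = A @ np.array([2.0]) + 0.01 * rng.normal(size=10)
    b[3] += 5.0                                     # gross outlier
    x_hat, w = huber_irls(A, b)
    print(x_hat, w[3])   # estimate near 2.0; outlier weight near zero
    ```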

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy-efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated for by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of the data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detection algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. For data cell COD, on the other hand, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UEs in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power (RSRP) statistics between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for the detected outage in a reliable manner.
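    The abstract names the local-outlier-factor detector but not its features or thresholds. The following minimal illustration runs LOF (via scikit-learn's LocalOutlierFactor) on synthetic RSRP-style report vectors; the feature layout, neighbour count, and dBm values are invented for the example:

    ```python
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(0)
    # Hypothetical minimization-of-drive-test style reports: each row is a UE
    # measurement vector (serving-cell RSRP and two neighbour RSRPs, in dBm).
    normal = rng.normal(loc=[-80, -95, -100], scale=3.0, size=(500, 3))
    # During an outage the serving-cell power collapses while neighbours persist.
    outage = rng.normal(loc=[-120, -95, -100], scale=3.0, size=(20, 3))
    reports = np.vstack([normal, outage])

    lof = LocalOutlierFactor(n_neighbors=20)   # fit_predict marks outliers as -1
    labels = lof.fit_predict(reports)
    print(f"flagged {np.sum(labels == -1)} of {len(reports)} reports as anomalous")
    ```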

    Improved Fault Classification and Localization in Power Transmission Networks Using VAE-Generated Synthetic Data and Machine Learning Algorithms

    The reliable operation of power transmission networks depends on the timely detection and localization of faults. Fault classification and localization in electricity transmission networks can be challenging because of the complicated and dynamic nature of the system. In recent years, a variety of machine learning (ML) and deep learning (DL) algorithms have found applications in the enhancement of fault identification and classification within power transmission networks. Yet the efficacy of these ML architectures depends profoundly on the abundance and quality of the training data. This paper introduces a strategy for classifying and pinpointing faults within power transmission networks through the use of variational autoencoders (VAEs) to generate synthetic data, which in turn is harnessed in conjunction with ML algorithms. The approach augments the available dataset with synthetically generated instances, contributing to a more robust and proficient fault recognition and categorization system. Specifically, we train the VAE on a set of real-world power transmission data and generate synthetic fault data that capture the statistical properties of the real-world data. To overcome the difficulty of fault diagnosis in three-phase high-voltage transmission networks, a categorical boosting (CatBoost) algorithm is proposed in this work. The other standard machine learning algorithms considered in this study, including Support Vector Machine (SVM), Decision Trees (DT), Random Forest (RF), and K-Nearest Neighbors (KNN), utilizing a customized version of forward feature selection (FFS), were trained using synthetic data generated by the VAE. The results indicate strong performance, surpassing current state-of-the-art techniques, in the tasks of fault classification and localization. Notably, our approach achieves 99% accuracy in fault classification and a low mean absolute error (MAE) of 0.2 in fault localization. These outcomes represent a notable advancement over the most effective existing baseline methods.
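    The abstract describes the pipeline but not its implementation. Below is a minimal, hypothetical sketch of the idea: train a compact VAE per fault class on (here randomly generated placeholder) measurements, sample synthetic records from the latent prior, and fit a CatBoost classifier on the augmented set. The network sizes, KL weight, per-class conditioning, and all data are assumptions for illustration:

    ```python
    import numpy as np
    import torch
    import torch.nn as nn
    from catboost import CatBoostClassifier

    class VAE(nn.Module):
        """Compact variational autoencoder for tabular fault records."""
        def __init__(self, n_features, latent=4, hidden=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, latent)
            self.logvar = nn.Linear(hidden, latent)
            self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
            return self.dec(z), mu, logvar

    def train_vae(model, x, epochs=200, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            recon, mu, logvar = model(x)
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            loss = nn.functional.mse_loss(recon, x) + 1e-3 * kl
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Placeholder "real" data; in practice each row would hold measured
    # quantities such as three-phase voltages and currents.
    real_X = np.random.randn(300, 6).astype("float32")
    real_y = np.random.randint(0, 4, 300)          # 4 hypothetical fault classes

    # One VAE per fault class keeps the synthetic samples class-conditional.
    aug_X, aug_y = [real_X], [real_y]
    for c in np.unique(real_y):
        xc = torch.tensor(real_X[real_y == c])
        vae = VAE(n_features=real_X.shape[1])
        train_vae(vae, xc)
        with torch.no_grad():
            z = torch.randn(100, 4)                # sample the latent prior
            aug_X.append(vae.dec(z).numpy())
            aug_y.append(np.full(100, c))

    clf = CatBoostClassifier(iterations=200, verbose=False)
    clf.fit(np.vstack(aug_X), np.concatenate(aug_y))
    ```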

    Unmanned Aerial Systems for Wildland and Forest Fires

    Wildfires represent an important natural risk, causing economic losses, human deaths, and significant environmental damage. In recent years, we have witnessed an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) were proposed. UAS have proven to be useful due to their maneuverability, allowing for the implementation of remote sensing, allocation strategies, and task planning. They can provide a low-cost alternative for the prevention, detection, and real-time support of firefighting. In this paper we review previous work related to the use of UAS in wildfires. Onboard sensor instruments, fire perception algorithms, and coordination strategies are considered. In addition, we present some of the recent frameworks proposing the use of both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more efficient wildland firefighting strategy at a larger scale.

    Comment: a published version of this paper is available at: https://doi.org/10.3390/drones501001