    Multiversion software reliability through fault-avoidance and fault-tolerance

    In this project we have proposed to investigate a number of experimental and theoretical issues associated with the practical use of multi-version software in providing dependable software through fault-avoidance and fault-elimination, as well as run-time tolerance of software faults. In the period reported here we have been working on the following. We have continued collecting data on the relationships between software faults and reliability, and on the coverage provided by the testing process as measured by different metrics (including data flow metrics). We continued work on software reliability estimation methods based on non-random sampling, and on the relationship between software reliability and the code coverage provided through testing. We have continued studying back-to-back testing as an efficient mechanism for the removal of uncorrelated faults and of common-cause faults of variable span. We have also been studying back-to-back testing as a tool for improving the software change process, including regression testing. We continued investigating existing fault-tolerance models and worked on formulating new ones. In particular, we have partly finished the evaluation of Consensus Voting in the presence of correlated failures, and are in the process of finishing the evaluation of the Consensus Recovery Block (CRB) under failure correlation. We find both approaches far superior to the commonly employed fixed-agreement-number voting (usually majority voting). We have also finished a cost analysis of the CRB approach.
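
    The decision-rule difference between Consensus Voting and fixed-agreement-number (majority) voting can be illustrated with a minimal sketch; the functions below are illustrative only and are not the project's reliability models, which also account for failure correlation among versions.

```python
from collections import Counter

def majority_vote(outputs):
    """Fixed agreement number: accept a result only if more than half of the
    versions agree; otherwise signal that no decision could be reached."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

def consensus_vote(outputs):
    """Consensus voting: accept the most frequent result even without an
    absolute majority (ties are broken arbitrarily in this sketch)."""
    value, _ = Counter(outputs).most_common(1)[0]
    return value

# Five versions whose correlated failures split the erroneous outputs:
outputs = ["42", "41", "42", "40", "39"]
print(majority_vote(outputs))   # None -> majority voting cannot decide
print(consensus_vote(outputs))  # '42' -> plurality result is accepted
```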

    Static analysis for facilitating secure and reliable software

    Software security and reliability are aspects of major concern for software development enterprises that wish to deliver dependable software to their customers. Several static analysis-based approaches for facilitating the development of secure and reliable software have been proposed over the years. The purpose of the present thesis is to investigate these approaches and to extend their state of the art by addressing open issues that have not yet been sufficiently covered. To this end, an empirical study was initially conducted to investigate the ability of software metrics (e.g., complexity metrics) to discriminate between different types of vulnerabilities, and to examine whether interdependencies exist between different vulnerability types. The results of the analysis revealed that software metrics can be used only as weak indicators of specific security issues, while important interdependencies may exist between different types of vulnerabilities. The study also verified the capacity of software metrics (including previously uninvestigated metrics) to indicate the existence of vulnerabilities in general. Subsequently, a hierarchical security assessment model able to quantify the internal security level of software products based on static analysis alerts and software metrics is proposed. The model is practical, since it is fully automated and operationalized in the form of individual tools, and it is also sufficiently reliable, since it was built on empirical data and well-accepted sources of information. An extensive evaluation of the model on a large volume of empirical data revealed that it is able to reliably assess software security both at product- and at class-level granularity, with sufficient discrimination power, and that it may also be used for vulnerability prediction. The experimental results also provide further support for the ability of static analysis alerts and software metrics to indicate the existence of software vulnerabilities. Finally, a mathematical model is proposed for calculating the optimum checkpoint interval, i.e., the checkpoint interval that minimizes the execution time of software programs that adopt the application-level checkpoint and restart (ALCR) mechanism. The optimum checkpoint interval was found to depend on the failure rate of the application, the execution cost for establishing a checkpoint, and the execution cost for restarting the program after a failure. Emphasis was placed on programs with loops, and the results are illustrated through several numerical examples.
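
    The thesis's exact checkpoint-interval model is not reproduced in this abstract; as a rough illustration of how such an optimum is computed from a failure rate and a checkpointing cost, the sketch below uses Young's classical first-order approximation, which is a well-known baseline rather than the model proposed here.

```python
import math

def young_optimal_interval(checkpoint_cost, failure_rate):
    """Young's classical first-order approximation of the optimum checkpoint
    interval, T_opt ~= sqrt(2 * C / lambda), where C is the cost of writing one
    checkpoint and lambda is the application's failure rate. This is a standard
    baseline, not the model derived in the thesis."""
    return math.sqrt(2.0 * checkpoint_cost / failure_rate)

# Example: a checkpoint costs 30 s and one failure is expected every 10 hours.
print(young_optimal_interval(30.0, 1.0 / 36000.0))  # roughly 1470 s between checkpoints
```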

    EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification

    Eye movement recordings and their analysis constitute an effective way to examine visual perception. There is a particular need for dedicated software to perform the analysis of the recorded data. The present study describes the development of a new toolbox, called EyeMMV (Eye Movements Metrics & Visualizations), for post-experimental eye movement analysis. Fixation events are detected with a newly introduced algorithm based on a two-step spatial dispersion threshold. Furthermore, EyeMMV is designed to support all well-known eye tracking metrics and visualization techniques. The results of the fixation identification algorithm are compared with those of a dispersion-type algorithm with a moving window, as implemented in another open-source analysis tool; the comparison produces outputs that are strongly correlated. The EyeMMV software is developed using the scripting language of MATLAB and the source code is distributed through GitHub under the third version of the GNU General Public License (https://github.com/krasvas/EyeMMV).
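
    For readers unfamiliar with dispersion-threshold fixation detection, the sketch below shows a generic single-threshold (I-DT style) detector in Python; it is not EyeMMV's two-step algorithm, which is implemented in MATLAB and available in the linked repository, and the threshold and duration parameters are illustrative.

```python
def idt_fixations(samples, dispersion_threshold, min_duration):
    """Generic dispersion-threshold (I-DT style) fixation detector.
    samples: list of (t, x, y) gaze records ordered by time.
    Returns (t_start, t_end, centroid_x, centroid_y) tuples."""
    fixations, i = [], 0
    while i < len(samples):
        j = i
        # Grow the window while its spatial dispersion stays under the threshold.
        while j + 1 < len(samples):
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_threshold:
                break
            j += 1
        if j > i and samples[j][0] - samples[i][0] >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```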

    Android Malware Family Classification Based on Resource Consumption over Time

    The vast majority of today's mobile malware targets Android devices. This has driven the research effort in Android malware analysis in recent years. An important task of malware analysis is the classification of malware samples into known families. Static malware analysis is known to fall short against techniques that change the static characteristics of the malware (e.g., code obfuscation), while dynamic analysis has proven effective against such techniques. To the best of our knowledge, the most notable work on Android malware family classification based purely on dynamic analysis is DroidScribe. With respect to DroidScribe, our approach is easier to reproduce: our methodology employs only publicly available tools, does not require any modification to the emulated environment or the Android OS, and can collect data from physical devices. The latter is a key factor, since modern mobile malware can detect the emulated environment and hide its malicious behavior. Our approach relies on resource consumption metrics available from the proc file system. Features are extracted through detrended fluctuation analysis and correlation, and an SVM is then employed to classify malware into families. We provide an experimental evaluation on malware samples from the Drebin dataset, where we obtain a classification accuracy of 82%, demonstrating that our methodology achieves an accuracy comparable to that of DroidScribe. Furthermore, we make the software we developed publicly available, to ease the reproducibility of our results.
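
    A minimal sketch of the feature-extraction and classification idea (detrended fluctuation analysis on a resource-consumption time series, then an SVM over the resulting features) is given below; it assumes NumPy and scikit-learn, uses placeholder data, and is not the authors' released implementation.

```python
import numpy as np
from sklearn.svm import SVC

def dfa_exponent(series, window_sizes=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: the scaling exponent alpha is the slope
    of log F(n) versus log n, usable as one feature per monitored metric."""
    y = np.cumsum(series - np.mean(series))               # integrated profile
    fluctuations = []
    for n in window_sizes:
        rms = []
        for k in range(len(y) // n):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

alpha = dfa_exponent(np.random.rand(512))                 # one metric's exponent

# Placeholder feature matrix (e.g., one DFA exponent per /proc metric plus
# pairwise correlations) and placeholder family labels, for illustration only.
X = np.random.rand(100, 12)
labels = np.random.choice(["famA", "famB", "famC"], size=100)
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```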

    Software quality attribute measurement and analysis based on class diagram metrics

    Software quality measurement lies at the heart of the quality engineering process. Quality measurement for object-oriented artifacts has become key to ensuring high-quality software. Both researchers and practitioners are interested in measuring software product quality for improvement. It has recently become more important to consider product quality in the early phases, especially at the design level, so that coding and testing can be conducted more quickly and accurately. Research on measuring quality at the design level has progressed in a number of steps. The first step was to discover the right set of metrics for measuring design elements at the design level. Chidamber and Kemerer (C&K) formulated the first suite of OO metrics, and other researchers extended this suite with additional metrics. The next step was to collect these metrics using software tools. A number of tools were developed to measure the different suites of metrics; some represent their measurements as ordinary numbers, others in 3D visual form. In recent years, researchers developed software quality models that go further by computing quality attributes from the collected design metrics. In this research we extend the software quality modelers’ work by adding a quality attribute prioritization scheme and a design metric analysis layer. Our work focuses on the class diagram, the most fundamental constituent of any object-oriented design. Using earlier researchers’ work, we extract a class diagram’s metrics and compute its quality attributes. We then analyze the results and inform the user, presenting our figures and observations in the form of an analysis report. Our target user could be a project manager, a software quality engineer, or a developer who needs to improve the class diagram’s quality. We closely examine the design metrics that affect quality attributes, pinpoint the weaknesses in the class diagram based on these metrics, inform the user about the problems that emerge from these classes, and advise them on how to improve the overall design quality. We consider six basic quality attributes of the whole class diagram: “Reusability”, “Functionality”, “Understandability”, “Flexibility”, “Extendibility”, and “Effectiveness”. We allow the user to prioritize these quality attributes sequentially according to their requirements. Using a geometric series, we calculate a weighted average value for the ordered list of quality attributes; this weighted average indicates the overall quality of the product, i.e., the class diagram. Our experimental work gave us much insight into the meanings of, and dependencies between, design metrics and quality attributes, which helped us refine our analysis technique and give more concrete observations to the user.
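
    A minimal sketch of the geometric-series weighting described above is shown below; the decay ratio and the attribute scores are illustrative assumptions, not the values used in the thesis.

```python
def weighted_quality(ordered_scores, ratio=0.5):
    """Weighted average of quality-attribute scores listed from highest to
    lowest priority. Weights follow a normalized geometric series
    (1, r, r^2, ...), so higher-priority attributes dominate the result.
    The ratio 0.5 is an illustrative choice."""
    weights = [ratio ** i for i in range(len(ordered_scores))]
    return sum(w * s for w, s in zip(weights, ordered_scores)) / sum(weights)

# Scores ordered by the user's priority, e.g. Reusability first (illustrative values):
scores = [0.8, 0.7, 0.9, 0.6, 0.5, 0.4]
print(round(weighted_quality(scores), 3))
```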

    Technical Report: Anomaly Detection for a Critical Industrial System using Context, Logs and Metrics

    Recent advances in contextual anomaly detection attempt to combine resource metrics and event logs to uncover unexpected system behaviors and malfunctions at run-time. These techniques are highly relevant for critical software systems, where monitoring is often mandated by international standards and guidelines. In this technical report, we analyze the effectiveness of a metrics-logs contextual anomaly detection technique in a middleware for Air Traffic Control systems. Our study addresses the challenges of applying such techniques to a new case study with a dense volume of logs and a finer monitoring sampling rate. We propose an automated abstraction approach to infer system activities from dense logs and use regression analysis to build the anomaly detector. We observed that the detection accuracy is impacted by abrupt changes in resource metrics or when anomalies are asymptomatic in both resource metrics and event logs. Guided by our experimental results, we propose and evaluate several actionable improvements, which include a change detection algorithm and the use of time windows in contextual anomaly detection. This technical report accompanies the paper “Contextual Anomaly Detection for a Critical Industrial System based on Logs and Metrics” [1] and provides further details on the analysis method, case study, and experimental results.
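
    The sketch below illustrates the general shape of such a contextual detector: a regression model predicts a resource metric from log-derived activity counts, and intervals whose windowed residuals exceed a threshold are flagged. It assumes NumPy and scikit-learn, uses synthetic data, and is not the report's actual detector; the window size and threshold factor are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_context_model(activity_counts, metric):
    """Learn the expected resource metric (e.g., CPU usage) as a function of
    log-derived activity counts observed in the same monitoring interval."""
    return LinearRegression().fit(activity_counts, metric)

def detect_anomalies(model, activity_counts, metric, window=5, k=3.0):
    """Flag intervals whose windowed mean absolute residual exceeds k times
    the overall residual standard deviation (illustrative thresholding)."""
    residuals = np.abs(metric - model.predict(activity_counts))
    threshold = k * np.std(residuals)
    smoothed = np.convolve(residuals, np.ones(window) / window, mode="same")
    return np.where(smoothed > threshold)[0]

# Synthetic example: 200 intervals, 3 activity counts, one injected anomaly burst.
rng = np.random.default_rng(0)
X = rng.poisson(5, size=(200, 3)).astype(float)
y = X @ np.array([0.5, 1.0, 0.2]) + rng.normal(0, 0.1, 200)
y[120:125] += 10.0
model = fit_context_model(X[:100], y[:100])
print(detect_anomalies(model, X, y))
```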

    Tracking materials science data lineage to manage millions of materials experiments and analyses

    In an era of rapid advancement of algorithms that extract knowledge from data, data and metadata management are increasingly critical to research success. In materials science, there are few examples of experimental databases that contain many different types of information, and compared with other disciplines, the database sizes are relatively small. Underlying these issues are the challenges in managing and linking data across disparate synthesis and characterization experiments, which we address with the development of a lightweight data management framework that is generally applicable to experimental science and beyond. Five years of managing experiments with this system has yielded the Materials Experiment and Analysis Database (MEAD), which contains raw data and metadata from millions of materials synthesis and characterization experiments, as well as the analysis and distillation of that data into property and performance metrics via software in an accompanying open source repository. The unprecedented quantity and diversity of experimental data are searchable by experiment and analysis attributes generated by both researchers and data processing software. The search web interface allows users to visualize their search results and download zipped packages of data with full annotations of their lineage. The enormity of the data provides substantial challenges and opportunities for incorporating data science in the physical sciences, and MEAD’s data and algorithm management framework will foster increased incorporation of automation and autonomous discovery in materials and chemistry research.
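
    As a purely hypothetical illustration of lineage tracking (not MEAD's actual schema or API), a lineage record might link each analysis output to its upstream experiments, its searchable attributes, and the software version that produced it:

```python
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    """Hypothetical lineage entry; field names are illustrative only."""
    record_id: str
    parent_ids: list = field(default_factory=list)   # upstream experiments/analyses
    attributes: dict = field(default_factory=dict)   # searchable metadata
    code_version: str = ""                           # analysis software revision

raw = LineageRecord("exp-001", attributes={"sample": "oxide-A", "step": "synthesis"})
fom = LineageRecord("ana-042", parent_ids=[raw.record_id],
                    attributes={"metric": "figure-of-merit"}, code_version="v1.3")
print(fom.parent_ids)  # ['exp-001'] -> lineage can be walked back via parent_ids
```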