    Learning Tractable Probabilistic Models for Fault Localization

    In recent years, several probabilistic techniques have been applied to various debugging problems. However, most existing probabilistic debugging systems use relatively simple statistical models, and fail to generalize across multiple programs. In this work, we propose Tractable Fault Localization Models (TFLMs) that can be learned from data, and probabilistically infer the location of the bug. While most previous statistical debugging methods generalize over many executions of a single program, TFLMs are trained on a corpus of previously seen buggy programs, and learn to identify recurring patterns of bugs. Widely-used fault localization techniques such as TARANTULA evaluate the suspiciousness of each line in isolation; in contrast, a TFLM defines a joint probability distribution over buggy indicator variables for each line. Joint distributions with rich dependency structure are often computationally intractable; TFLMs avoid this by exploiting recent developments in tractable probabilistic models (specifically, Relational SPNs). Further, TFLMs can incorporate additional sources of information, including coverage-based features such as TARANTULA. We evaluate the fault localization performance of TFLMs that include TARANTULA scores as features in the probabilistic model. Our study shows that the learned TFLMs isolate bugs more effectively than previous statistical methods or than TARANTULA used directly. Comment: Fifth International Workshop on Statistical Relational AI (StaR-AI 2015)
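    For context, the TARANTULA score that a TFLM can consume as a feature is the standard per-line suspiciousness from the fault-localization literature: the fraction of failing tests that cover a line, normalized by the combined fractions of failing and passing tests that cover it. Below is a minimal sketch of that baseline scoring only (not of the TFLM itself, whose Relational SPN structure does not fit in a few lines); the data layout is assumed for illustration.

```python
from typing import Dict, List, Set

def tarantula_scores(coverage: List[Set[int]], failed: List[bool]) -> Dict[int, float]:
    """Per-line TARANTULA suspiciousness from per-test coverage and outcomes.

    coverage[i] is the set of line numbers executed by test i;
    failed[i] is True if test i failed.
    """
    total_failed = sum(failed) or 1                    # avoid division by zero
    total_passed = (len(failed) - sum(failed)) or 1
    lines = set().union(*coverage) if coverage else set()
    scores: Dict[int, float] = {}
    for line in lines:
        fail_cov = sum(1 for cov, f in zip(coverage, failed) if f and line in cov)
        pass_cov = sum(1 for cov, f in zip(coverage, failed) if not f and line in cov)
        fail_ratio = fail_cov / total_failed
        pass_ratio = pass_cov / total_passed
        denom = fail_ratio + pass_ratio
        scores[line] = fail_ratio / denom if denom > 0 else 0.0
    return scores
```

    A TFLM, as described above, would treat such scores as evidence in a joint distribution over per-line bug indicators rather than ranking lines by this value in isolation.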

    Time-Space Efficient Regression Testing for Configurable Systems

    Configurable systems are those that can be adapted from a set of options. They are prevalent, and testing them is important and challenging. Existing approaches for testing configurable systems are either unsound (i.e., they can miss fault-revealing configurations) or do not scale. This paper proposes EvoSPLat, a regression testing technique for configurable systems. EvoSPLat builds on our previously developed technique, SPLat, which explores all dynamically reachable configurations from a test. EvoSPLat is tuned for two scenarios of use in regression testing: Regression Configuration Selection (RCS) and Regression Test Selection (RTS). EvoSPLat for RCS prunes configurations (not tests) that are not impacted by changes, whereas EvoSPLat for RTS prunes tests (not configurations) that are not impacted by changes. Handling both scenarios in the context of evolution is important. Experimental results show that EvoSPLat is promising. We observed a substantial reduction in time (22%) and in the number of configurations (45%) for configurable Java programs. In a case study on a large real-world configurable system (GCC), EvoSPLat reduced running time by 35%. Comparing EvoSPLat with sampling techniques, 2-wise sampling was the most efficient technique, but it missed two bugs, whereas EvoSPLat detected all bugs and was four times faster than 6-wise sampling, on average. Comment: 14 pages
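    The pruning idea shared by both scenarios can be pictured with a small sketch: keep only the items whose previously observed footprint overlaps the changed code, where the items are configurations for RCS and tests for RTS. This is an illustrative simplification with hypothetical names, not EvoSPLat's actual implementation.

```python
from typing import Dict, Set

def select_impacted(footprints: Dict[str, Set[str]], changed: Set[str]) -> Set[str]:
    """Keep the items (configurations for RCS, tests for RTS) whose covered
    code elements intersect the change set; everything else is pruned."""
    return {item for item, covered in footprints.items() if covered & changed}

# Hypothetical usage: footprints map an item to the methods it reached in the
# previous run; 'changed' holds the methods touched by the new revision.
kept = select_impacted(
    {"cfg_A": {"Parser.parse", "Lexer.next"}, "cfg_B": {"Cache.get"}},
    changed={"Lexer.next"},
)
assert kept == {"cfg_A"}
```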

    Automatic allocation of safety requirements to components of a software product line

    Safety-critical systems developed as part of a product line must still comply with safety standards. Standards use the concept of Safety Integrity Levels (SILs) to drive the assignment of system safety requirements to components of a system under design. However, for a Software Product Line (SPL), the safety requirements that need to be allocated to a component may vary across products. Variation in design can change the possible hazards incurred in each product and their causes, and can alter the safety requirements placed on individual components in different SPL products. Establishing common SILs for components of a large-scale SPL by considering all possible usage scenarios is desirable for economies of scale, but it also poses challenges to the safety engineering process. In this paper, we propose a method for automatic allocation of SILs to components of a product line. The approach is applied to a Hybrid Braking System SPL design.
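    As a deliberately simplified illustration of the allocation problem (not the paper's actual analysis, which starts from per-product hazard and failure information), one way to obtain a line-wide allocation is to take, for each component, the most stringent SIL demanded by any product that uses it. Component names and SIL values below are hypothetical.

```python
from typing import Dict, List

def common_sils(per_product: List[Dict[str, int]]) -> Dict[str, int]:
    """Derive one SIL per component that is safe across every product:
    the highest (most stringent) SIL required by any product using it."""
    allocation: Dict[str, int] = {}
    for product in per_product:
        for component, sil in product.items():
            allocation[component] = max(allocation.get(component, 0), sil)
    return allocation

# Two hypothetical products of a braking-system SPL with differing SIL demands.
print(common_sils([{"wheel_node": 3, "comms_bus": 2},
                   {"wheel_node": 2, "comms_bus": 4}]))
# {'wheel_node': 3, 'comms_bus': 4}
```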

    IntRepair: Informed Repairing of Integer Overflows

    Integer overflows have threatened software applications for decades. Thus, in this paper, we propose a novel technique to provide automatic repairs of integer overflows in C source code. Our technique, based on static symbolic execution, fuses detection, repair generation, and validation. This technique is implemented in a prototype named IntRepair. We applied IntRepair to 2,052 C programs (approx. 1 million lines of code) contained in SAMATE's Juliet test suite and to 50 synthesized programs that range up to 20 KLOC. Our experimental results show that IntRepair is able to effectively detect integer overflows and successfully repair them, while increasing the source code (LOC) and binary (KB) size by only around 1% each. Further, we present the results of a user study with 30 participants which shows that IntRepair repairs are more than 10x more efficient than manually generated code repairs. Comment: Accepted for publication at the IEEE TSE journal. arXiv admin note: text overlap with arXiv:1710.0372
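    The shape of such a repair is a guard inserted before the vulnerable arithmetic operation. The sketch below shows the classic pre-condition check for 32-bit signed addition, written in Python for brevity even though IntRepair itself rewrites C source; it illustrates the pattern rather than reproducing IntRepair's generated code.

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

def checked_add_i32(a: int, b: int) -> int:
    """Pre-condition check of the kind an overflow repair could insert
    before a 32-bit signed addition (illustrative, not IntRepair output)."""
    if (b > 0 and a > INT32_MAX - b) or (b < 0 and a < INT32_MIN - b):
        raise OverflowError("32-bit signed addition would overflow")
    return a + b

assert checked_add_i32(INT32_MAX - 1, 1) == INT32_MAX
# checked_add_i32(INT32_MAX, 1) would raise OverflowError instead of wrapping.
```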

    Automated metamorphic testing on the analyses of feature models

    Copyright © 2010 Elsevier B.V. All rights reserved. Context: A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem. Objective: In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools, overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach. Method: We present a set of relations (so-called metamorphic relations) between input FMs and the sets of products they represent. Based on these relations, and given an FM and its known set of products, a set of neighbouring FMs together with their corresponding sets of products are automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively. Results: Our evaluation results using mutation testing and real faults reveal that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. Also, we show how our generator outperforms a related manual suite for the automated analysis of feature models and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency. Conclusion: Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective in detecting most faults in a few seconds without the need for a human oracle. This work has been partially supported by the European Commission (FEDER) and the Spanish Government under CICYT project SETI (TIN2009-07366) and the Andalusian Government project ISABEL (TIC-2533).
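    One concrete example of such a metamorphic relation (an illustrative one, not necessarily among the paper's exact set, and assuming the added feature is not involved in cross-tree constraints): attaching an optional child feature to a parent keeps every existing product valid and adds, for each product containing the parent, a variant that also includes the child. The mutated FM and its expected product set then form a test case for an analysis tool without needing a human oracle.

```python
from typing import FrozenSet, Set

Products = Set[FrozenSet[str]]

def add_optional_child(products: Products, parent: str, child: str) -> Products:
    """Expected product set after attaching an optional child feature to
    'parent' in the feature model (assuming no cross-tree constraints on it):
    all original products plus, for each product containing 'parent', a copy
    that also includes 'child'."""
    extended = {p | {child} for p in products if parent in p}
    return products | extended

base = {frozenset({"root"}), frozenset({"root", "gui"})}
expected = add_optional_child(base, parent="gui", child="dark_mode")
# expected == base plus {'root', 'gui', 'dark_mode'}
```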

    A systematic review of quality attributes and measures for software product lines

    It is widely accepted that software measures provide an appropriate mechanism for understanding, monitoring, controlling, and predicting the quality of software development projects. In software product lines (SPL), quality is even more important than in a single software product since, owing to systematic reuse, a fault or an inadequate design decision could be propagated to several products in the family. Over the last few years, a great number of quality attributes and measures for assessing the quality of SPL have been reported in the literature. However, no studies summarizing the current knowledge about them exist. This paper presents a systematic literature review with the objective of identifying and interpreting all the available studies from 1996 to 2010 that present quality attributes and/or measures for SPL. These attributes and measures have been classified using a set of criteria that includes the life cycle phase in which the measures are applied; the corresponding quality characteristics; their support for specific SPL characteristics (e.g., variability, compositionality); the procedure used to validate the measures, etc. We found 165 measures related to 97 different quality attributes. The results of the review indicated that 92% of the measures evaluate attributes that are related to maintainability. In addition, 67% of the measures are used during the design phase of Domain Engineering, and 56% are applied to evaluate the product line architecture. However, only 25% of them have been empirically validated. In conclusion, the results provide a global vision of the state of the research within this area in order to help researchers in detecting weaknesses, directing research efforts, and identifying new research lines. In particular, there is a need for new measures with which to evaluate both the quality of the artifacts produced during the entire SPL life cycle and other quality characteristics. There is also a need for more validation (both theoretical and empirical) of existing measures. In addition, our results may be useful as a reference guide for practitioners to assist them in the selection or the adaptation of existing measures for evaluating their software product lines. © 2011 Springer Science+Business Media, LLC. This research has been funded by the Spanish Ministry of Science and Innovation under the MULTIPLE (Multimodeling Approach For Quality-Aware Software Product Lines) project with ref. TIN2009-13838. Montagud Gregori, S.; Abrahao Gonzales, S. M.; Insfrán Pelozo, C. E. (2012). A systematic review of quality attributes and measures for software product lines. Software Quality Journal, 20(3-4), 425-486. https://doi.org/10.1007/s11219-011-9146-7

    Formal Analysis of CRT-RSA Vigilant's Countermeasure Against the BellCoRe Attack: A Pledge for Formal Methods in the Field of Implementation Security

    In our paper at PROOFS 2013, we formally studied a few known countermeasures to protect CRT-RSA against the BellCoRe fault injection attack. However, we left Vigilant's countermeasure and its alleged repaired version by Coron et al. as future work, because the arithmetical framework of our tool was not sufficiently powerful. In this paper we bridge this gap and then use the same methodology to formally study both versions of the countermeasure. We obtain surprising results, which we believe demonstrate the importance of formal analysis in the field of implementation security. Indeed, the original version of Vigilant's countermeasure is actually broken, but not as much as Coron et al. thought it was. As a consequence, the repaired version they proposed can be simplified. It can actually be simplified even further, as two of the nine modular verifications happen to be unnecessary. Fortunately, we could formally prove the simplified repaired version to be resistant to the BellCoRe attack, which was considered a "challenging issue" by the authors of the countermeasure themselves. Comment: arXiv admin note: substantial text overlap with arXiv:1401.817
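    For readers unfamiliar with the attack being defended against: the BellCoRe observation is that a fault injected into one CRT half-exponentiation yields a signature that is still correct modulo one prime factor but wrong modulo the other, so a single gcd with the modulus leaks a factor. A toy, self-contained sketch of that principle (tiny parameters, no padding, and no countermeasure; not Vigilant's scheme itself) follows.

```python
from math import gcd

# Toy CRT-RSA parameters; real keys are, of course, far larger.
p, q, e = 1009, 1013, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456

def crt_sign(msg: int, fault_in_p: bool = False) -> int:
    """Sign via CRT; optionally flip one bit in the mod-p half-exponentiation."""
    s_p = pow(msg, d % (p - 1), p)
    s_q = pow(msg, d % (q - 1), q)
    if fault_in_p:
        s_p ^= 1  # injected fault
    h = ((s_p - s_q) * pow(q, -1, p)) % p  # Garner recombination
    return s_q + q * h

s_ok, s_bad = crt_sign(m), crt_sign(m, fault_in_p=True)
# Correct and faulty signatures agree mod q but differ mod p, so:
assert gcd(abs(s_ok - s_bad), N) == q  # one faulty signature factors N
```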