
    Validating a model-driven software architecture evaluation and improvement method: A family of experiments

    Context: Software architectures should be evaluated during the early stages of software development in order to verify whether the non-functional requirements (NFRs) of the product can be fulfilled. This activity is even more crucial in software product line (SPL) development, since it is also necessary to identify whether the NFRs of a particular product can be achieved by exercising the variation mechanisms provided by the product line architecture or whether additional transformations are required. These issues have motivated us to propose QuaDAI, a method for the derivation, evaluation and improvement of software architectures in model-driven SPL development. Objective: We present in this paper the results of a family of four experiments carried out to empirically validate the evaluation and improvement strategy of QuaDAI. Method: The family of experiments was carried out by 92 participants: Computer Science Master's and undergraduate students from Spain and Italy. The goal was to compare the effectiveness, efficiency, perceived ease of use, perceived usefulness and intention to use of participants applying the evaluation and improvement strategy of QuaDAI as opposed to the Architecture Tradeoff Analysis Method (ATAM). Results: The main result was that the participants produced their best results when applying QuaDAI: they obtained architectures with better values for the NFRs faster, and they found the method easier to use, more useful and more likely to be used. The meta-analysis carried out to aggregate the results of the individual experiments also confirmed these findings.
Conclusions: The results support the hypothesis that QuaDAI would achieve better results than ATAM in the experiments, and that QuaDAI can be considered a promising approach with which to perform architectural evaluations that occur after product architecture derivation in model-driven SPL development processes when carried out by novice software evaluators.
The authors would like to thank all the participants in the experiments for their selfless involvement in this research. This research is supported by the MULTIPLE Project (MICINN TIN2009-13838) and the ValI+D Program (ACIF/2011/235).
González Huerta, J.; Insfrán Pelozo, CE.; Abrahao Gonzales, SM.; Scanniello, G. (2015). Validating a model-driven software architecture evaluation and improvement method: A family of experiments. Information and Software Technology, 57:405-429. https://doi.org/10.1016/j.infsof.2014.05.018

    "Influence Sketching": Finding Influential Samples In Large-Scale Regressions

    There is an especially strong need in modern large-scale data analysis to prioritize samples for manual inspection. For example, the inspection could target important mislabeled samples or key vulnerabilities exploitable by an adversarial attack. In order to solve the "needle in the haystack" problem of which samples to inspect, we develop a new scalable version of Cook's distance, a classical statistical technique for identifying samples which unusually strongly impact the fit of a regression model (and its downstream predictions). In order to scale this technique up to very large and high-dimensional datasets, we introduce a new algorithm which we call "influence sketching." Influence sketching embeds random projections within the influence computation; in particular, the influence score is calculated using the randomly projected pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We validate that influence sketching can reliably and successfully discover influential samples by applying the technique to a malware detection dataset of over 2 million executable files, each represented with almost 100,000 features. For example, we find that randomly deleting approximately 10% of training samples reduces predictive accuracy only slightly from 99.47% to 99.45%, whereas deleting the same number of samples with high influence sketch scores reduces predictive accuracy all the way down to 90.24%. Moreover, we find that influential samples are especially likely to be mislabeled. In the case study, we manually inspect the most influential samples, and find that influence sketching pointed us to new, previously unidentified pieces of malware.
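    The core idea above can be sketched in a few lines of NumPy. The following is a minimal, hypothetical re-implementation (not the authors' code): it assumes a logistic-link GLM, forms the weighted design matrix at the converged fit, randomly projects it down to a k-dimensional pseudo-dataset, and combines sketched leverages with Pearson residuals into a Cook's-distance-style score. The function name, sketch dimension, and regularization constants are illustrative choices.

    ```python
    import numpy as np

    def influence_sketch_scores(X, y, beta, k=256, seed=0):
        """Approximate per-sample influence scores for a fitted logistic GLM
        via random projection of the weighted design matrix (a sketch of the
        'influence sketching' idea, not the paper's implementation)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape

        # GLM ingredients at the converged fit (logistic link assumed).
        mu = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        w = mu * (1.0 - mu)                         # IRLS working weights
        resid = (y - mu) / np.sqrt(w + 1e-12)       # Pearson residuals

        # Random projection: sketch the weighted design matrix to k columns,
        # producing the randomly projected pseudo-dataset.
        omega = rng.standard_normal((d, k)) / np.sqrt(k)
        Xs = (np.sqrt(w)[:, None] * X) @ omega      # shape (n, k)

        # Approximate leverages from the sketch: h_i = x_i (Xs'Xs)^-1 x_i'.
        G = Xs.T @ Xs + 1e-8 * np.eye(k)            # small ridge for stability
        h = np.einsum("ij,jl,il->i", Xs, np.linalg.inv(G), Xs)
        h = np.clip(h, 0.0, 1.0 - 1e-6)

        # Cook's-distance-style combination of residual and leverage,
        # normalized by the sketch dimension k in place of the parameter count.
        return resid**2 * h / ((1.0 - h) ** 2) / k
    ```

    Samples with the largest scores are the candidates for manual inspection; because the leverage computation runs on the n-by-k sketch rather than the n-by-d original, the cost no longer scales with the full feature dimension.
    
    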

    Modeling and Simulating Causal Dependencies on Process-aware Information Systems from a Cost Perspective

    Providing effective IT support for business processes has become crucial for enterprises to stay competitive in their market. Business processes must be defined, implemented, enacted, monitored, and continuously adapted to changing situations. Process life cycle support and continuous process improvement become critical success factors in contemporary and future enterprise computing. In this context, process-aware information systems (PAISs) adopt a key role. Here, organization-specific and generic process support systems can be distinguished. In the former case, the PAIS is built "from scratch" and incorporates organization-specific information about the structure and processes to be supported. In the latter case, the PAIS does not contain any information about the structure and processes of a particular organization. Instead, an organization needs to configure the PAIS by specifying processes, organizational entities, and business objects. To enable the realization of PAISs, numerous process support paradigms, process modeling standards, and business process management tools have been introduced. The application of these approaches in PAIS engineering projects is influenced not only by technological, but also by organizational and project-specific factors. Between these factors there exist numerous causal dependencies, which, in turn, often lead to complex and unexpected effects in PAIS engineering projects. In particular, the costs of PAIS engineering projects are significantly influenced by these causal dependencies. What is therefore needed is a comprehensive approach enabling PAIS engineers to systematically investigate these causal dependencies as well as their impact on the costs of PAIS engineering projects. Existing economic-driven IT evaluation and software cost estimation approaches, however, are unable to take into account causal dependencies and resulting effects. In response, this thesis introduces the EcoPOST framework.
This framework utilizes evaluation models to describe the interplay of technological, organizational, and project-specific evaluation factors, and simulation concepts to unfold the dynamic behavior of PAIS engineering projects. In this context, the EcoPOST framework also supports the reuse of evaluation models based on a library of generic, predefined evaluation patterns, and provides governing guidelines (e.g., model design guidelines) that ease the transfer of the EcoPOST framework into practice. Tool support is available as well. Finally, we present the results of two online surveys, three case studies, and one controlled software experiment. Based on these empirical and experimental research activities, we are able to validate the evaluation concepts underlying the EcoPOST framework and additionally demonstrate its practical applicability.