
    A process for managing interaction between experimenters to get useful similar replications

    Context: A replication is the repetition of an experiment. Several efforts have been made to adopt replication as a common practice in software engineering. There are different types of replications, depending on their purpose. Similar replications keep the experimental conditions as alike as possible to the original ones. External similar replications, where the replicating experimenters are not the same people as the original experimenters, have been a stumbling block: several attempts at combining the results of replications have resulted in failure. Software engineering does not appear to be well suited to such replications, because it works with complex, experimentally immature contexts. Software engineering settings have a large number of variables, and the role that many of them play is unknown. A successful (or useful) similar replication helps to better understand the phenomenon under study by combining experimental results to verify results and/or to identify contextual variables that could influence (or not) the results. Objective: To get successful similar replications, there needs to be interaction between the original and replicating experimenters. In this paper, we propose an interaction process for achieving successful similar replications. Method: This process consists of: an adaptation meeting, where experimenters tailor the experiment to the new setting; querying, to settle occasional inquiries while the experiment is being run; and a combination meeting, where experimenters meet to discuss the combination of the replication outcomes with previous results. To check its effectiveness, the process has been tested on three different replications of the same experiment. Results: The proposed interaction process has helped to identify new contextual variables that could potentially influence (or not) the experimental results in the three replications run. Additionally, the interaction process has helped to uncover certain problems and deviations that occurred during some of the replications that we would not otherwise have been aware of. Conclusions: There are signs that suggest that it is possible to get successful similar replications in software engineering experimentation when there is appropriate interaction among experimenters.

    This work has been performed under research Grant TIN2011-23216 of the Spanish Ministry of Science and Innovation.

    Juristo, N.; Vegas, S.; Solari, M.; Abrahao Gonzales, SM.; Ramos, I. (2013). A process for managing interaction between experimenters to get useful similar replications. Information and Software Technology. 55(2):215-225. https://doi.org/10.1016/j.infsof.2012.07.016

    Understanding replication of experiments in software engineering: a classification

    Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective: To improve our understanding of SE experiment replication, in this work we propose a classification that is intended to provide experimenters with guidance about what types of replication they can perform. Method: The research approach followed is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of typical elements that compose an experimental configuration, (3) identification of different replication purposes and (4) development of a classification of experiment replications for SE. Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it is capable of accounting for replications ranging from less to more similar to the baseline experiment. Conclusion: The aim of replication is to verify results, but different types of replication serve special verification purposes and afford different degrees of change. Each replication type helps to discover particular experimental conditions that might influence the results. The proposed classification can be used to identify the changes made in a replication and, based on these, to understand the level of verification it provides.

    Assessing the Effectiveness of Sequence Diagrams in the Comprehension of Functional Requirements: Results from a Family of Five Experiments

    Modeling is a fundamental activity within the requirements engineering process and concerns the construction of abstract descriptions of requirements that are amenable to interpretation and validation. The choice of a modeling technique is critical whenever it is necessary to discuss the interpretation and validation of requirements. This is particularly true in the case of functional requirements and stakeholders with divergent goals and different backgrounds and experience. This paper presents the results of a family of experiments conducted with students and professionals to investigate whether the comprehension of functional requirements is influenced by the use of dynamic models represented by means of UML sequence diagrams. The family contains five experiments performed in different locations and with 112 participants of different abilities and levels of experience with the UML. The results show that sequence diagrams improve the comprehension of the modeled functional requirements in the case of high-ability and more experienced participants.

    An external replication on the effects of test-driven development using a multi-site blind analysis approach

    Context: Test-driven development (TDD) is an agile practice claimed to improve the quality of a software product, as well as the productivity of its developers. A previous study (i.e., the baseline experiment) at the University of Oulu (Finland) compared TDD to a test-last development (TLD) approach through a randomized controlled trial. The results failed to support the claims. Goal: We want to validate the original study's results by replicating it at the University of Basilicata (Italy), using a different design. Method: We replicated the baseline experiment, using a crossover design, with 21 graduate students. We kept the settings and context as close as possible to the baseline experiment. In order to limit researchers' bias, we involved two other sites (UPM, Spain, and Brunel, UK) to conduct a blind analysis of the data. Results: The Kruskal-Wallis tests did not show any significant difference between TDD and TLD in terms of testing effort (p-value = .27), external code quality (p-value = .82), or developers' productivity (p-value = .83). Nevertheless, our data revealed a difference based on the order in which TDD and TLD were applied, though no carryover effect. Conclusions: We verify the baseline study's results, yet our results raise concerns regarding the selection of experimental objects, particularly with respect to their interaction with the order in which the treatments are applied. We recommend that future studies survey the tasks used in experiments evaluating TDD. Finally, to lower the cost of replication studies and reduce researchers' bias, we encourage other research groups to adopt the multi-site blind analysis approach described in this paper.

    This research is supported in part by the Academy of Finland Project 278354.
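    The abstract names Kruskal-Wallis tests as the analysis method. As a minimal illustrative sketch (not the study's actual data or analysis script), the following Python snippet shows how such a comparison is typically run with scipy; the score values and variable names are invented placeholders.

        # Sketch of a Kruskal-Wallis comparison between two treatment groups,
        # as reported in the abstract above. The scores are hypothetical.
        from scipy.stats import kruskal

        tdd_scores = [62, 71, 55, 68, 74, 59, 66]  # e.g., external code quality, TDD group
        tld_scores = [64, 69, 58, 70, 61, 67, 72]  # e.g., external code quality, TLD group

        statistic, p_value = kruskal(tdd_scores, tld_scores)
        print(f"H = {statistic:.2f}, p = {p_value:.3f}")

        # A p-value above .05, as in the study, gives no grounds to reject the
        # null hypothesis that both approaches yield similar score distributions.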

    The tip of the iceberg: placebo, experimenter expectation and interference phenomena in subconscious information flow

    A multi-disciplinary dialogue on the experimental evidence for nonlocal bio-communication, its emergent characteristics, impact on mainstream sciences and future research directions.

    On the Effectiveness of Tools to Support Infrastructure as Code: Model-Driven Versus Code-Centric

    Infrastructure as Code (IaC) is an approach for infrastructure automation that is based on software development practices. The IaC approach supports code-centric tools that use scripts to specify the creation, updating and execution of cloud infrastructure resources. Since each cloud provider offers a different type of infrastructure, the definition of an infrastructure resource (e.g., a virtual machine) implies writing several lines of code that greatly depend on the target cloud provider. Model-driven tools, meanwhile, abstract the complexity of using IaC scripts through the high-level modeling of the cloud infrastructure. In a previous work, we presented an infrastructure modeling approach and tool (Argon) for cloud provisioning that leverages model-driven engineering and supports the IaC approach. The objective of the present work is to compare a model-driven tool (Argon) with a well-known code-centric tool (Ansible) in order to provide empirical evidence of their effectiveness when defining the cloud infrastructure, and of the participants' perceptions when using these tools. We therefore conducted a family of three experiments involving 67 Computer Science students in order to compare Argon with Ansible as regards their effectiveness, efficiency, perceived ease of use, perceived usefulness, and intention to use. We used the AB/BA crossover design to configure the individual experiments and the linear mixed model to statistically analyze the data collected and subsequently obtain empirical findings. The results of the individual experiments and the meta-analysis indicate that Argon is more effective as regards supporting the IaC approach in terms of defining the cloud infrastructure. The participants also perceived that Argon is easier to use and more useful for specifying the infrastructure resources. Our findings suggest that Argon accelerates the provisioning process by modeling the cloud infrastructure and automating the generation of scripts for different DevOps tools, when compared to Ansible, a code-centric tool that is greatly used in practice.

    This work was supported by the Ministry of Science, Innovation, and Universities (Adapt@Cloud project), Spain, under Grant TIN2017-84550-R. The work of Julio Sandobalin was supported by the Escuela Politecnica Nacional, Ecuador.

    Sandobalín, J.; Insfran, E.; Abrahao Gonzales, SM. (2020). On the Effectiveness of Tools to Support Infrastructure as Code: Model-Driven Versus Code-Centric. IEEE Access. 8:17734-17761. https://doi.org/10.1109/ACCESS.2020.2966597
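    The abstract reports an AB/BA crossover design analyzed with a linear mixed model. As a hedged sketch of that kind of analysis (the column names, data values, and model formula are assumptions for illustration, not the study's materials), a pandas/statsmodels version might look like this:

        # Sketch: linear mixed model for an AB/BA crossover design.
        # Fixed effects for tool and period; a random intercept per subject
        # accounts for each participant being measured under both tools.
        import pandas as pd
        import statsmodels.formula.api as smf

        data = pd.DataFrame({
            "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
            "period":  [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2],
            "tool":    ["Argon", "Ansible", "Ansible", "Argon",
                        "Argon", "Ansible", "Ansible", "Argon",
                        "Argon", "Ansible", "Ansible", "Argon"],
            "effectiveness": [0.85, 0.70, 0.65, 0.90, 0.80, 0.68,
                              0.72, 0.88, 0.78, 0.66, 0.69, 0.83],
        })

        model = smf.mixedlm("effectiveness ~ tool + period", data,
                            groups=data["subject"])
        result = model.fit()
        print(result.summary())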

    Improvement of usability in user interfaces for massive data analysis: an empirical study

    Big Data challenges the conventional way of analyzing massive data and creates the need to improve the usability of existing user interfaces (UIs) in order to deal with massive amounts of data. How well a UI facilitates the search for information and supports the end user's decision-making depends on developers and designers, who have no guides for producing usable UIs. We have proposed a set of interaction patterns for designing massive data analysis UIs by studying 27 real case studies of massive data analysis. We evaluate whether the proposed patterns improve the usability of massive data analysis UIs in the context of literature search. We conducted two replications of the same controlled experiment, one with 24 undergraduate students experienced in scientific literature search and the other with eight researchers experienced in biomedical literature search. The experiment, which was planned as a repeated measures design, compares UIs that have been enhanced with the proposed patterns against the original UIs in terms of three response variables: effectiveness, efficiency, and satisfaction. The outcomes show that the use of interaction patterns in UIs for massive data analysis yields better and more significant effects for the three response variables, enhancing the discovery and visualization of the data. The use of the proposed interaction design patterns improves the usability of UIs that deal with massive data. The patterns can be considered guides for helping designers and developers to design usable UIs for massive data analysis web applications.

    The authors thank the members of the PROS Center Genome group for productive discussions. This work has been supported by the Secretaría Nacional de Educación, Ciencia y Tecnología (SENESCYT) and the Escuela Politécnica Nacional of Ecuador, and developed with the financial support of the Spanish State Research Agency and the Generalitat Valenciana, under the projects TIN2016-80811-P and PROMETEO/2018/176, co-financed with ERDF.

    Iñiguez-Jarrín, C.; Panach, JI.; Pastor López, O. (2020). Improvement of usability in user interfaces for massive data analysis: an empirical study. Multimedia Tools and Applications. 79(17-18):12257-12288. https://doi.org/10.1007/s11042-019-08456-6
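    The experiment above uses a repeated measures design, so each participant is measured on both the original and the pattern-enhanced UI. The abstract does not name the statistical test applied, so the sketch below uses a Wilcoxon signed-rank test, a common choice for paired samples; all values and names are invented for illustration.

        # Sketch: paired (within-subject) comparison of effectiveness scores
        # on the original UI versus the pattern-enhanced UI. Hypothetical data.
        from scipy.stats import wilcoxon

        original_ui = [0.55, 0.60, 0.48, 0.62, 0.58, 0.50, 0.65, 0.57]
        enhanced_ui = [0.70, 0.68, 0.63, 0.75, 0.66, 0.61, 0.78, 0.64]

        statistic, p_value = wilcoxon(original_ui, enhanced_ui)
        print(f"W = {statistic:.1f}, p = {p_value:.3f}")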