128 research outputs found

    Using contextual knowledge in interactive fault localization

    Tool support for automated fault localization in program debugging is limited because state-of-the-art algorithms often fail to provide efficient help to the user. They usually offer a ranked list of suspicious code elements, but the fault is not guaranteed to be found among the highest ranks. In Spectrum-Based Fault Localization (SBFL), which uses the code coverage of test cases and their execution outcomes to calculate the ranks, the developer has to investigate several locations before finding the faulty code element. Yet the knowledge the developer has a priori or acquires during this process is not reused by the SBFL tool. There are existing approaches in which the developer interacts with the SBFL algorithm by giving feedback on the elements of the prioritized list. We propose a new approach, called iFL, which extends interactive approaches by exploiting the user's contextual knowledge about the next item in the ranked list (e.g., a statement), with which larger code entities (e.g., a whole function) can be repositioned in the suspiciousness ranking. We also implemented a closely related algorithm proposed by Gong et al., called Talk. First, we evaluated iFL using simulated users and compared the results to SBFL and Talk. Next, we introduced two types of imperfection into the simulation: the user's knowledge level and confidence level. On SIR and Defects4J, the results showed notable improvements in fault localization efficiency, even with strong user imperfections. We then empirically evaluated the effectiveness of the approach with real users in two sets of experiments: a quantitative evaluation of how successfully iFL is used, and a qualitative evaluation of practical uses of the approach with experienced developers in think-aloud sessions.
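
    The SBFL mechanism the abstract refers to can be illustrated with a small sketch. The following Python snippet is a hypothetical example rather than the paper's implementation: it ranks code elements with the well-known Tarantula formula, one of many SBFL suspiciousness measures, and all element names and coverage counts are made up for illustration.

    # Minimal SBFL sketch using the Tarantula formula (illustrative only).
    def tarantula(cov_failed, cov_passed, total_failed, total_passed):
        """Suspiciousness of one code element from its coverage spectrum."""
        if cov_failed == 0:
            return 0.0
        fail_ratio = cov_failed / total_failed
        pass_ratio = cov_passed / total_passed if total_passed else 0.0
        return fail_ratio / (fail_ratio + pass_ratio)

    # coverage[element] = (covered by N failing tests, covered by M passing tests)
    coverage = {"f:line10": (3, 1), "f:line11": (1, 4), "g:line20": (3, 3)}
    total_failed, total_passed = 3, 5

    ranking = sorted(coverage,
                     key=lambda e: tarantula(*coverage[e], total_failed, total_passed),
                     reverse=True)
    print(ranking)  # most suspicious first; the developer inspects in this order

    An interactive approach such as the one the abstract describes would then take the user's judgement about the top-ranked item and reorder the remaining list, for example by demoting the whole enclosing function, instead of discarding that knowledge.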

    Usability Tool Support for Model-Based Web Development

    When web engineering methods are used to develop web applications, models that describe the website are created during the development process. Using the information present in these models, it is possible to create usability tool support that is more advanced than current approaches, which do not rely on the presence of models. This dissertation presents ideas for tool support during different phases of development, such as the implementation and testing phases. For example, if a tool knows from a model that the audience of a website is teenagers, it can examine whether the words and sentences used on the website are likely to be understood by teenagers. An approach is presented for augmenting existing web engineering models with this additional information ("age" in this case) and making it available to tools, e.g. by embedding it in the HTML code. Two prototypes demonstrate the concepts for integrating usability tool support into web engineering.
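
    The "age" example can be made concrete with a short, hypothetical sketch: a checking tool reads audience metadata embedded in the generated HTML and applies a toy readability heuristic. The data-audience-age attribute below is an assumed encoding, not necessarily the one used in the dissertation.

    import re

    # Page generated from a web engineering model; the audience annotation is
    # embedded in the HTML so that downstream tools can read it.
    html = '''<body data-audience-age="13-19">
    <p>This page explains how to borrow an e-book. Tap the cover, then tap Borrow.</p>
    </body>'''

    min_age, max_age = map(int, re.search(r'data-audience-age="(\d+)-(\d+)"', html).groups())
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping, fine for a sketch

    for sentence in (s.strip() for s in re.split(r"[.!?]", text)):
        if not sentence:
            continue
        words = sentence.split()
        avg_len = sum(len(w) for w in words) / len(words)
        # toy heuristic: a teenage audience suggests preferring shorter words
        if max_age <= 19 and avg_len > 6:
            print("May be hard to understand for teenagers:", sentence)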

    Making local knowledge matter: Exploring the appropriateness of pictorial decision trees as interaction style for non-literate communities to capture their traditional ecological knowledge

    Sustainable natural resource management is one of the fundamental development challenges humanity faces today. The scale of the issues involved and the inadequacy of existing paradigms mean that there is an urgent need for innovative and appropriate solutions to enable scientifically informed, sustainable management of key environments. Local and indigenous communities often possess unique Traditional Ecological Knowledge (TEK) about their natural resources which, despite being increasingly recognised as critical for sustaining and protecting the environment, is difficult to capture in a digital format, particularly given the environments in which many communities live and their lack of technical knowledge. Yet their knowledge is required in digital form to reach a wide audience, particularly those stakeholders who need to base their decisions on the knowledge provided. This thesis draws on Human-Computer Interaction (HCI), HCI for Development (HCI4D), Software Engineering, Information and Communications Technologies for Development (ICT4D), Participatory Geographic Information Systems (PGIS) and Citizen Science to develop and evaluate methods and Information and Communications Technology (ICT) tools that enable communities to capture and share their local environmental conditions, information that can in turn lead to improvements in environmental governance and socio-environmental justice. One core challenge in this endeavour is to enable lay users, especially those with limited technical skills, no prior exposure to technology, basic or no literacy, or no formal education, to use smartphones to capture their TEK and share the data with relevant stakeholders. To achieve this, the thesis explores whether pictorial decision trees are an appropriate interaction mode for non-literate participants capturing geographical data. In the context of three case studies, conducted in the Republic of the Congo and focused on enabling local communities to participate in socio-environmental monitoring schemes regarding their forest, the thesis examines the opportunities and challenges of collaboratively developing software to realise this vision. The research findings and the methodological framework provide an approach and guidelines for developing and evaluating ICT solutions in similarly challenging environments. The most significant finding is that while pictographs are easily understood by participants, they proved challenging when employed in pictorial decision trees, owing to the categorisation and hierarchical structure of the trees. Interaction modes that employ audio or physical interfaces can alleviate these issues and assist participants in collecting geographical data. The thesis also demonstrates how a participatory and iterative design approach led to the conception and evaluation of interaction modes that increase participants' accuracy from 75% towards 95% and improve their satisfaction, which could in turn increase the sustainability of the project. Finally, a number of methodological approaches were evaluated and amended in order to design and evaluate ICT solutions with non-literate forest communities.
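
    The interaction the thesis evaluates can be sketched as a simple data structure: each node of the decision tree shows a pictograph, the participant taps one of its children, and the leaf reached is the recorded observation. Everything below (node names, pictograph files) is hypothetical and only illustrates the hierarchical categorisation that participants found challenging; it is not the thesis's actual software.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        pictograph: str                 # image shown to the non-literate user
        label: str                      # category recorded if this node is chosen
        children: list = field(default_factory=list)

    # Tiny illustrative tree: observation -> animal/tree -> species
    tree = Node("forest.png", "observation", [
        Node("animal.png", "animal", [Node("elephant.png", "elephant"),
                                      Node("gorilla.png", "gorilla")]),
        Node("tree.png", "tree", [Node("sapele.png", "sapele")]),
    ])

    def collect(node, choose):
        """Descend the tree by repeatedly asking the user to tap a pictograph."""
        while node.children:
            node = choose(node.children)    # in the real UI: tap an image
        return node.label                   # the leaf is the observation recorded

    # Simulated participant who always taps the first pictograph:
    print(collect(tree, lambda options: options[0]))  # -> "elephant"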

    Validation of Score Meaning for the Next Generation of Assessments

    Despite developments in research and practice on using examinee response process data in assessment design, the use of such data in test validation is rare. Validation of Score Meaning in the Next Generation of Assessments Using Response Processes highlights the importance of validity evidence based on response processes and provides guidance to measurement researchers and practitioners in creating and using such evidence as a regular part of the assessment validation process. Response processes refer to the approaches and behaviors of examinees as they interpret assessment situations and formulate and generate solutions, as revealed through verbalizations, eye movements, response times, or computer clicks. Such response process data can provide information about the extent to which items and tasks engage examinees in the intended ways. With contributions from top researchers in the field of assessment, this volume includes chapters that focus on methodological issues and on applications across multiple contexts of assessment interpretation and use. In Part I of this book, contributors discuss the framing of validity as an evidence-based argument for the interpretation of the meaning of test scores, the specifics of different methods of response process data collection and analysis, and the use of response process data relative to issues of validation as highlighted in the joint standards on testing. In Part II, chapter authors offer examples that illustrate the use of response process data in assessment validation. These cases specifically address issues related to the analysis and interpretation of performance on assessments of complex cognition, assessments designed to inform classroom learning and instruction, and assessments intended for students with varying cultural and linguistic backgrounds.

    Ethics in the mining of software repositories

    Research in Mining Software Repositories (MSR) is research involving human subjects, as the repositories usually contain data about developers' and users' interactions with the repositories and with each other. The ethics issues raised by such research therefore need to be considered before it begins. This paper presents a discussion of the ethics issues that can arise in MSR research, using the mining challenges from 2006 to 2021 as a case study to identify the kinds of data used. On the basis of contemporary research ethics frameworks, we discuss the ethics challenges that may be encountered in creating and using repositories and associated datasets. We also report results from a small community survey of approaches to ethics in MSR research. In addition, we present four case studies illustrating typical ethics issues encountered in projects and how ethics considerations can shape projects before they commence. Based on our experience, we present guidelines and practices that can help in considering potential ethics issues and reducing risks.

    Re-Crafting Games: The inner life of Minecraft modding.

    Prior scholarship on game modding has tended to focus on the relationship between commercial developers and modders, while most existing work on the open-world sandbox game Minecraft has focused on children's play or the program's utility as an educational platform. Based on participant observation, interviews with modders, discourse analysis, and the techniques of software studies, this research uncovers the inner life of Minecraft modding practices and how they have become central to the way the game is articulated as a cultural artifact. While the creative activities of audiences have previously been described in terms of de Certeau's concept of "tactics," this paper argues that modders are also engaged in the development of new strategies. Modders thus become "settlers," forging a new identity for the game property as they expand the possibilities for play. Emerging modder strategies link to the ways the underlying game software structures computation and are closely tied to notions of modularity, interoperability, and programming "best practices." Modders also mobilize tactics and strategies in the discursive contestation and co-regulation of gameplay meanings and programming practices, which become more central to an understanding of game modding than the developer-modder relationship. This discourse, which structures the circulation of gaming capital within the community, embodies both monologic and dialogic modes, with websites, forum posts, chatroom conversations, and even software artifacts themselves taking on persuasive inflections.

    Don’t forget to save! User experience principles for video game narrative authoring tools.

    Interactive Digital Narratives (IDNs) are a natural evolution of traditional storytelling, melded with technological improvements brought about by the rapidly advancing digital revolution. This has enhanced, and continues to enhance, the complexity and functionality of the stories we can tell. Video game narratives, both old and new, are considered close relatives of IDN and, due to their enhanced interactivity and presentational methods, further complicate the creation process. Authoring tool software aims to alleviate these complexities by abstracting underlying data models into accessible user interfaces that creatives, even those with limited technical experience, can use to author their stories. Unfortunately, despite the vast array of authoring tools in this space, user experience is often overlooked, even though it is arguably one of the most vital components. This has resulted in a focus on the audience within IDN research rather than the authors, and consequently our knowledge and understanding of the impacts of user experience design decisions in authoring tools are limited. This thesis tackles the modeling of complex video game narrative structures and investigates how user experience design decisions within IDN authoring tools may impact the authoring process. I first introduce my concept of Discoverable Narrative, which establishes a vocabulary for the analysis, categorization, and comparison of aspects of video game narrative that are discovered, observed, or experienced by players, something that existing models struggle to detail. I also develop and present my Novella Narrative Model, which provides support for video game narrative elements and makes several novel innovations that set it apart from existing narrative models. This thesis then builds upon these models by presenting two bespoke user studies that examine the user experience of the state of the art in IDN authoring tool design, together building a listing of seven general Themes and five principles (Metaphor Testing, Fast Track Testing, Structure, Experimentation, Branching) that highlight evidenced behavioral trends of authors under different user experience design factors within IDN authoring tools. This represents some of the first work in this space to investigate the relationships between the user experience design of IDN authoring tools and the impacts they can have on authors. Additionally, a generalized multi-stage pipeline for the design and development of IDN authoring tools is introduced, informed by professional industry-standard design techniques, in an effort both to ensure quality user experience within my own work and to raise awareness of the importance of following proper design processes when creating authoring tools, also serving as a template for doing so.