812 research outputs found

    An extensive analysis of efficient bug prediction configurations

    Background: Bug prediction helps developers steer maintenance activities towards the buggy parts of a software system. There are many design aspects to a bug predictor, each of which has several options, i.e., software metrics, machine learning model, and response variable. Aims: These design decisions should be made judiciously, because an improper choice in any of them might lead to wrong, misleading, or even useless results. We argue that bug prediction configurations are intertwined and thus need to be evaluated in their entirety, in contrast to the common practice in the field where each aspect is investigated in isolation. Method: We use a cost-aware evaluation scheme to evaluate 60 different bug prediction configuration combinations on five open source Java projects. Results: We find that the best choices for building a cost-effective bug predictor are change metrics mixed with source code metrics as independent variables, Random Forest as the machine learning model, and the number of bugs as the response variable. Combining these configuration options results in the most efficient bug predictor across all subject systems. Conclusions: We provide strong evidence for the interplay among bug prediction configurations, along with concrete guidelines for researchers and practitioners on how to build and evaluate efficient bug predictors.
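The winning configuration reported above (change metrics combined with source code metrics as features, Random Forest as the model, bug count as the response) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the feature matrix and bug counts are synthetic, and the metric groupings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Illustrative feature matrix for 200 files: change metrics (e.g. number
# of revisions, churn) concatenated with source code metrics (e.g. LOC,
# complexity). Values are random stand-ins for real measurements.
X = rng.random((200, 4))

# Response variable: the number of bugs per file (a count, not a 0/1
# label), matching the abstract's recommended response variable.
y = rng.poisson(lam=2.0, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank files by predicted bug count so maintenance effort can be
# steered towards the likeliest-buggy parts first (cost-aware usage).
predicted = model.predict(X)
ranking = np.argsort(predicted)[::-1]
```

In a real study, `X` and `y` would come from a version-control and issue-tracker mining step, and evaluation would use a held-out set under a cost-aware scheme rather than the training data.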

    Integrating Across Sustainability, Political, and Administrative Spheres: A Longitudinal Study of Actors’ Engagement in Open Data Ecosystems in Three Canadian Cities

    Over the last decade, cities around the world have embraced the open data movement by launching open data portals. To successfully derive benefits from these initiatives, various individual and organizational actors need to engage with them. These actors undertake activities supporting data publication and dissemination in open data ecosystems. In this paper, we focus on enhancing the IS community’s contribution to the open data movement by conducting a longitudinal, qualitative archival analysis of open data initiatives in three Canadian cities: Edmonton, Toronto, and Montreal. Combining two complementary models of open data and information ecosystems, we explore how actors engage in and across the sustainability, political, and administrative spheres to influence open data initiatives. Our findings suggest that most actors operate in a single sphere but that some can operate across two or all three spheres to become ecosystem anchors. Through these sphere-spanning efforts, ecosystem anchors help to shape the way in which open data initiatives evolve. We provide a theoretically grounded explanation of processes in successful open data initiatives and suggest new directions for practice.

    Present and future resilience research driven by science and technology

    Community resilience against major disasters is a multidisciplinary research field that garners ever-increasing interest worldwide. This paper provides summaries of the discussions held on the subject matter and the research outcomes presented during the Second Resilience Workshop in Nanjing and Shanghai. It thus offers a community view of present work and future research directions identified by the workshop participants, who hail from Asia (including China, Japan, and Korea), Europe, and the Americas.

    Opinion Mining for Software Development: A Systematic Literature Review

    Opinion mining, sometimes referred to as sentiment analysis, has gained increasing attention in software engineering (SE) studies. SE researchers have applied opinion mining techniques in various contexts, such as identifying developers’ emotions expressed in code comments and extracting users’ criticisms of mobile apps. Given the large number of relevant studies available, it can take considerable time for researchers and developers to figure out which approaches they can adopt in their own studies and what perils these approaches entail. We conducted a systematic literature review involving 185 papers. More specifically, we present 1) well-defined categories of opinion mining-related software development activities, 2) available opinion mining approaches, whether they are evaluated when adopted in other studies, and how their performance is compared, 3) available datasets for performance evaluation and tool customization, and 4) concerns or limitations SE researchers might need to take into account when applying or customizing these opinion mining techniques. The results of our study serve as references for choosing suitable opinion mining tools for software development activities, and provide critical insights for the further development of opinion mining techniques in the SE domain.
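As a minimal illustration of the kind of technique surveyed (not any specific approach from the review), a lexicon-based polarity scorer over developer text might look like the sketch below; the word lists are invented for the example.

```python
# Tiny lexicon-based sentiment scoring over code comments or reviews.
# The POSITIVE/NEGATIVE word lists are illustrative assumptions; real
# opinion mining tools use much larger, SE-specific lexicons or
# learned models.
POSITIVE = {"great", "clean", "fixed", "works", "elegant"}
NEGATIVE = {"hack", "broken", "ugly", "fails", "workaround"}

def sentiment(comment: str) -> int:
    """Return a crude polarity score: positive hits minus negative hits."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("this hack is ugly but works"))  # -1 (two negative, one positive)
```

A general-purpose lexicon like this is exactly where the review's "perils" warning applies: words such as "hack" or "fails" carry different polarity in SE text than in everyday language, which is why SE-tuned tools and datasets matter.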

    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning for the period from 2016 to 2021. One of the aims of the review is to identify the video characteristics that have been explored by previous work. Based on our analysis, we suggest a taxonomy which organizes the video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learner activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. Also, we identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Text of transcripts, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activity is heavily explored through log files in data analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings that investigate the impact of video characteristics on learning effectiveness, report on tasks and technologies used to develop tools that support learning, and summarize trends in design guidelines for producing learning videos.

    Annual Report 2016-2017

    The College of Computing and Digital Media has always prided itself on curriculum, creative work, and research that stays current with changes in our various fields of instruction. As we looked back on our 2016-17 academic year, the need to chronicle the breadth and excellence of this work became clear. We are pleased to share with you this annual report, our first, highlighting our accomplishments. Last year, we began offering three new graduate programs and two new certificate programs. We also planned six degree programs and three new certificate programs for implementation in the current academic year. CDM faculty were published more than 100 times, had their films screened more than 200 times, and participated in over two dozen exhibitions. Our students were recognized for their scholarly and creative work, and our alumni accomplished amazing things, from winning a Student Academy Award to receiving a Pulitzer. We are proud of all the work we have done together. One notable priority for us in 2016-17 was creating and strengthening relationships with industry—including expanding our footprint at Cinespace and developing the iD Lab—as well as with the community, through partnerships with the Chicago Housing Authority, Wabash Lights, and other nonprofit organizations. We look forward to continuing to provide innovative programs and spaces this academic year. Two areas in particular we’ve been watching closely are makerspaces and the “internet of things.” We’ve already made significant commitments to these areas through the creation of our 4,500-square-foot makerspace, the Idea Realization Lab, and our new cyber-physical systems bachelor’s program and lab. We are excited to continue providing the opportunities, curriculum, and facilities to support our remarkable students. David Miller, Dean, College of Computing and Digital Media

    Usability analysis of contending electronic health record systems

    In this paper, we report the measured usability of two leading EHR systems during procurement. A total of 18 users participated in paired-usability testing of three scenarios: ordering and managing medications by an outpatient physician, medicine administration by an inpatient nurse, and scheduling of appointments by nursing staff. Data on audio, screen capture, satisfaction ratings, task success, and errors made was collected during testing. We found a clear difference between the systems in the percentage of successfully completed tasks, two different satisfaction measures, and perceived learnability when looking at the results over all scenarios. We conclude that usability should be evaluated during procurement, and that the difference in usability between systems could be revealed even with fewer measures than were used in our study. © 2019 American Psychological Association Inc. All rights reserved. Peer reviewed.

    Owl Eyes: Spotting UI Display Issues via Visual Understanding

    The Graphical User Interface (GUI) provides a visual bridge between a software application and its end users, through which they can interact with each other. With advances in technology and aesthetics, the visual effects of GUIs have become increasingly appealing. However, such GUI complexity poses a great challenge to GUI implementation. According to our pilot study of crowdtesting bug reports, display issues such as text overlap, blurred screens, and missing images often occur during GUI rendering on different devices due to software or hardware incompatibility. They negatively influence app usability, resulting in poor user experience. To detect these issues, we propose a novel approach, OwlEye, based on deep learning for modelling the visual information of GUI screenshots. OwlEye can thus detect GUIs with display issues and also locate the detailed region of the issue in a given GUI, guiding developers to fix the bug. We manually construct a large-scale labelled dataset of 4,470 GUI screenshots with UI display issues and develop a heuristics-based data augmentation method to boost the performance of OwlEye. The evaluation demonstrates that OwlEye can achieve 85% precision and 84% recall in detecting UI display issues, and 90% accuracy in localizing these issues. We also evaluate OwlEye with popular Android apps on Google Play and F-droid, and successfully uncover 57 previously undetected UI display issues, with 26 of them confirmed or fixed so far. Comment: Accepted to the 35th IEEE/ACM International Conference on Automated Software Engineering (ASE 2020).

    Classification of Explainable Artificial Intelligence Methods through Their Output Formats

    Machine and deep learning have proven their utility for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the dimension of the explanation format makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems, and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the formats of explanations and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields and by new regulations.
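The "output format" dimension discussed above can be made concrete with a toy example: the same explanation of a linear model rendered in two formats, numeric attributions and a textual rule-like sentence. The model, feature names, and weights are all illustrative assumptions.

```python
# Toy linear model; in the review's terms, the explanations below differ
# only in output format, not in the underlying explanation content.
weights = {"age": 0.7, "income": 0.2, "tenure": 0.1}

def predict(x: dict) -> float:
    """Weighted sum of features (a trivially interpretable model)."""
    return sum(weights[k] * x[k] for k in weights)

x = {"age": 1.0, "income": 0.5, "tenure": 0.0}

# Numeric format: per-feature attributions for one prediction.
attributions = {k: weights[k] * x[k] for k in weights}

# Textual format: the same explanation rendered as a sentence.
top = max(attributions, key=attributions.get)
text = f"The prediction is driven mainly by '{top}' (contribution {attributions[top]:.2f})."
```

Which format is "most suitable" depends on the audience, which is exactly the selection problem the proposed classification system targets: a data scientist may want the numeric attributions, while a domain expert may prefer the sentence.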

    Exploring Digital Government transformation in the EU

    This report presents the findings of the analysis of the state of the art conducted as part of the JRC research on “Exploring Digital Government Transformation in the EU: understanding public sector innovation in a data-driven society” (DIGIGOV), within the framework of the “European Location Interoperability Solutions for eGovernment (ELISE)” Action of the ISA2 Programme on interoperability solutions for public administrations, businesses and citizens, coordinated by DIGIT. The results of the review of literature, based on almost 500 academic and grey literature sources, as well as the analysis of digital government policies in the EU Member States, provide a synthetic overview of the main themes and topics of the digital government discourse. The report depicts the variety of existing conceptualisations and definitions of the digital government phenomenon, the measured and expected effects of the application of more disruptive innovations and emerging technologies in government, and key drivers and barriers for transforming the public sector. Overall, the literature review shows that many sources appear overly optimistic with regard to the impact of digital government transformation, although the majority of them are based on normative views or expectations rather than empirically tested insights. The authors therefore caution that digital government transformation should be researched empirically and with a due differentiation between evidence and hope. In this respect, the report paves the way for in-depth analysis of the effects that can be generated by digital innovation in public sector organisations. A digital transformation that implies the redesign of the tools and methods used in the machinery of government will in fact require a significant change in the institutional frameworks that regulate and help coordinate the governance systems in which such changing processes are implemented. JRC.B.6-Digital Economy