
    Synchronous development in open-source projects: A higher-level perspective

    Get PDF
    Mailing lists are a major communication channel for supporting developer coordination in open-source software projects. In a recent study, researchers explored temporal relationships (e.g., synchronization) between developer activities on source code and on the mailing list, relying on simple heuristics of developer collaboration (e.g., co-editing files) and developer communication (e.g., sending e-mails to the mailing list). We propose two methods for studying synchronization between collaboration and communication activities from a higher-level perspective, which captures the complex activities and views of developers more precisely than the rather technical perspective of previous work. On the one hand, we explore developer collaboration at the level of features (not files), which are higher-level concepts of the domain and not mere technical artifacts. On the other hand, we lift the view of developer communication from a message-based model, which treats each e-mail individually, to a conversation-based model, which is semantically richer due to grouping e-mails that represent conceptually related discussions. By means of an empirical study, we investigate whether the different abstraction levels affect the observed relationship between commit activity and e-mail communication using state-of-the-art time series analysis. For this purpose, we analyze a combined history of 40 years of data for three highly active and widely deployed open-source projects: QEMU, BusyBox, and OpenSSL. Overall, we found evidence that a higher-level view on the coordination of developers leads to identifying a stronger statistical dependence between the technical activities of developers than a less abstract and rather technical view.
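
    The core quantitative step described here can be illustrated with a lagged cross-correlation between two activity time series, e.g., weekly commit counts and weekly e-mail counts. This is a minimal sketch of the general idea, not the study's actual analysis pipeline; the data and function names are invented.

```python
# Sketch: testing for a temporal relationship between two developer activity
# series (weekly commits vs. weekly e-mails) via lagged cross-correlation.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cross_correlation(x, y, max_lag):
    """Correlate x[t] with y[t + lag] for each lag in [-max_lag, max_lag]."""
    result = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            xs, ys = x[-lag:], y[:lag]
        elif lag > 0:
            xs, ys = x[:-lag], y[lag:]
        else:
            xs, ys = x, y
        result[lag] = pearson(xs, ys)
    return result

# Toy data: e-mail activity trails commit activity by exactly one week.
commits = [5, 9, 4, 11, 6, 10, 3, 8]
emails  = [2, 5, 9, 4, 11, 6, 10, 3]
cc = cross_correlation(commits, emails, max_lag=2)
best_lag = max(cc, key=lambda k: cc[k])  # lag with the strongest correlation
```

    On the toy data the strongest correlation appears at lag 1, i.e., e-mails one week after commits, which is the kind of synchronization signal such an analysis would surface.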

    Mining domain-specific edit operations from model repositories with applications to semantic lifting of model differences and change profiling

    Get PDF
    Model transformations are central to model-driven software development. Applications of model transformations include creating models, handling model co-evolution, model merging, and understanding model evolution. In the past, various (semi-)automatic approaches to derive model transformations from meta-models or from examples have been proposed. These approaches require time-consuming handcrafting or the recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful domain-specific edit operations are the ones that compress the model differences. It employs frequent subgraph mining to discover frequent structures in model difference graphs. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We found that our approach is able to discover frequent edit operations that have actually been applied before. Furthermore, Ockham is able to extract edit operations that are meaningful—in the sense of explaining model differences through the edit operations they comprise—to practitioners in an industrial setting. We also discuss use cases (i.e., semantic lifting of model differences and change profiles) for the discovered edit operations in this industrial setting. We find that the edit operations discovered by Ockham can be used to better understand and simulate the evolution of models.
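
    The underlying intuition, that structures recurring across many difference graphs hint at recurring edit operations, can be sketched with a drastically simplified stand-in for frequent subgraph mining: counting labeled edges across difference graphs. Real mining (e.g., gSpan-style algorithms, as used in work like this) handles arbitrary subgraphs; all names below are hypothetical.

```python
# Toy stand-in for the mining step: edge patterns that occur in many model
# difference graphs are candidates for domain-specific edit operations.
from collections import Counter

def frequent_edges(diff_graphs, min_support):
    """Return edge patterns occurring in at least min_support difference graphs."""
    support = Counter()
    for graph in diff_graphs:
        for edge in set(graph):  # count each pattern at most once per graph
            support[edge] += 1
    return {e for e, s in support.items() if s >= min_support}

# Each difference graph: a set of (source_type, change_kind, target_type) edges.
diffs = [
    {("Signal", "add", "Route"), ("Track", "delete", "Switch")},
    {("Signal", "add", "Route"), ("Route", "modify", "Track")},
    {("Signal", "add", "Route")},
]
patterns = frequent_edges(diffs, min_support=2)
```

    Here only the repeatedly occurring "add a Route to a Signal" pattern survives the support threshold, mirroring how frequent structures compress many concrete differences into one candidate edit operation.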

    Reasoning on Knowledge Graphs with Debate Dynamics

    Full text link
    We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics. The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments -- paths in the knowledge graph -- with the goal to promote the fact being true (thesis) or the fact being false (antithesis), respectively. Based on these arguments, a binary classifier, called the judge, decides whether the fact is true or false. The two agents can be considered as sparse, adversarial feature generators that present interpretable evidence for either the thesis or the antithesis. In contrast to other black-box methods, the arguments allow users to get an understanding of the decision of the judge. Since the focus of this work is to create an explainable method that maintains a competitive predictive accuracy, we benchmark our method on the triple classification and link prediction task. Thereby, we find that our method outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also conduct a survey and find that the extracted arguments are informative for users. Comment: AAAI-202
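
    The debate framing can be caricatured in a few lines: one side collects path arguments for the thesis, the other for the antithesis, and a judge aggregates them into a verdict. In the actual method both agents are trained with reinforcement learning and the judge is a learned neural classifier; the scores, paths, and triple below are purely illustrative.

```python
# Illustrative judge: a triple is classified true iff the weighted pro
# arguments outweigh the con arguments. Each argument is a (path, score) pair.

def judge(pro_arguments, con_arguments):
    """Binary verdict from two opposing sets of scored path arguments."""
    pro = sum(score for _, score in pro_arguments)
    con = sum(score for _, score in con_arguments)
    return pro > con

# Made-up arguments about the triple (aspirin, treats, headache).
pro = [(["aspirin", "inhibits", "COX", "mediates", "pain"], 0.8),
       (["aspirin", "same_class_as", "ibuprofen", "treats", "headache"], 0.6)]
con = [(["aspirin", "causes", "stomach_irritation"], 0.3)]
verdict = judge(pro, con)
```

    The point of the design is visible even in this toy: the paths themselves are human-readable evidence, so a user can inspect why the judge ruled the way it did.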

    Smart Fridge [RFID-enabled smart refrigerator]

    No full text
    Cyber-Flux Innovations presents the Smart Fridge, a fridge capable of tracking its contents through readily available and user-registered RFID tags. This information will then be stored and processed to provide beneficial applications for the user. The Smart Fridge will allow users to take control of their diet by presenting nutritional information in a user-friendly manner, warn users of products with upcoming expiration dates to avoid wasting food and even help users create their next grocery shopping list based on what they usually buy. All of these features will provide users with a convenient way of planning and managing their groceries and eating habits.

    On Calibration of Graph Neural Networks for Node Classification

    Full text link
    Graphs can model real-world, complex systems by representing entities and their interactions in terms of nodes and edges. To better exploit the graph structure, graph neural networks have been developed, which learn entity and edge embeddings for tasks such as node classification and link prediction. These models achieve good performance with respect to accuracy, but the confidence scores associated with the predictions might not be calibrated. That means that the scores might not reflect the ground-truth probabilities of the predicted events, which would be especially important for safety-critical applications. Even though graph neural networks are used for a wide range of tasks, the calibration thereof has not been sufficiently explored yet. We investigate the calibration of graph neural networks for node classification, study the effect of existing post-processing calibration methods, and analyze the influence of model capacity, graph density, and a new loss function on calibration. Further, we propose a topology-aware calibration method that takes the neighboring nodes into account and yields improved calibration compared to baseline methods. Comment: Accepted by IJCNN 2022 (IEEE WCCI 2022)
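
    The mismatch between confidence and accuracy that such calibration studies quantify is standardly measured with expected calibration error (ECE): predictions are binned by confidence, and the gap between each bin's accuracy and mean confidence is averaged. The implementation below is a minimal sketch with invented toy predictions, not the paper's evaluation code.

```python
# Expected calibration error (ECE): weighted average over confidence bins of
# |bin accuracy - bin mean confidence|. Zero means perfectly calibrated.

def expected_calibration_error(confidences, correct, n_bins=5):
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue  # empty bin contributes nothing
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(acc - conf)
    return ece

# Perfectly calibrated toy model: 80% confident and right 4 times out of 5.
confs = [0.8, 0.8, 0.8, 0.8, 0.8]
hits  = [1, 1, 1, 1, 0]
ece = expected_calibration_error(confs, hits)
```

    An overconfident model (say, 90% confidence but only 20% accuracy) would instead yield a large ECE, which is exactly the failure mode post-processing methods such as temperature scaling aim to correct.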

    Data-Powered Positive Deviance during the SARS-CoV-2 Pandemic—An Ecological Pilot Study of German Districts

    No full text
    We introduced the mixed-methods Data-Powered Positive Deviance (DPPD) framework as a potential addition to the set of tools used to search for effective response strategies against the SARS-CoV-2 pandemic. For this purpose, we conducted a DPPD study in the context of the early stages of the German SARS-CoV-2 pandemic. We used a framework of scalable quantitative methods to identify positively deviant German districts that is novel in the scientific literature on DPPD, and subsequently employed qualitative methods to identify factors that might have contributed to their comparatively successful reduction of the forward transmission rate. Our qualitative analysis suggests that quick, proactive, decisive, and flexible/pragmatic actions, the willingness to take risks and deviate from standard procedures, good information flows both in terms of data collection and public communication, alongside the utilization of social network effects were deemed highly important by the interviewed districts. Our study design with its small qualitative sample constitutes an exploratory and illustrative effort and hence does not allow for a clear causal link to be established. Thus, the results cannot necessarily be extrapolated to other districts as is. However, the findings indicate areas for further research to assess these strategies’ effectiveness in a broader study setting. We conclude by stressing DPPD’s strengths regarding replicability, scalability, adaptability, as well as its focus on local solutions, which make it a promising framework to be applied in various contexts, e.g., in the context of the Global South.
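
    The quantitative step of DPPD, flagging units that perform markedly better than a baseline predicts, can be sketched as a simple comparison of observed against expected outcomes. The district names, rates, and threshold below are invented; the study's actual models for expected transmission were considerably richer.

```python
# Illustrative positive-deviance screen: a district is a candidate deviant if
# its observed transmission rate is well below what a baseline model expects.

def positive_deviants(observed, expected, threshold=0.8):
    """Districts whose observed rate is below threshold * expected rate."""
    return sorted(d for d in observed
                  if observed[d] < threshold * expected[d])

observed = {"district_a": 0.6, "district_b": 1.1, "district_c": 0.9}
expected = {"district_a": 1.0, "district_b": 1.2, "district_c": 1.0}
deviants = positive_deviants(observed, expected)
```

    The flagged districts are then the subjects of the qualitative follow-up described above, which asks what those districts did differently.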