
    A PRISMA-driven systematic mapping study on system assurance weakeners

    Context: An assurance case is a structured hierarchy of claims aimed at demonstrating that a given mission-critical system supports specific requirements (e.g., safety, security, privacy). The presence of assurance weakeners (i.e., assurance deficits, logical fallacies) in assurance cases reflects insufficient evidence, knowledge, or gaps in reasoning. These weakeners can undermine confidence in assurance arguments, potentially hindering the verification of mission-critical system capabilities. Objectives: As a stepping stone for future research on assurance weakeners, we aim to initiate the first comprehensive systematic mapping study on this subject. Methods: We followed the well-established PRISMA 2020 and SEGRESS guidelines to conduct our systematic mapping study. We searched for primary studies in five digital libraries, focusing on the 2012-2023 publication year range. Our selection criteria targeted studies addressing assurance weakeners at the modeling level, resulting in the inclusion of 39 primary studies in our systematic review. Results: Our systematic mapping study reports a taxonomy (map) that provides a uniform categorization of assurance weakeners and of the approaches proposed to manage them at the modeling level. Conclusion: Our findings suggest that the SACM (Structured Assurance Case Metamodel), a standard specified by the OMG (Object Management Group), may be the best specification for capturing structured arguments and reasoning about their potential assurance weakeners.
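
    To make the core notion concrete, here is a minimal, hypothetical sketch of an assurance case as a claim hierarchy annotated with weakeners. The class and field names are illustrative only; they do not reproduce the actual SACM metamodel.

    # Minimal sketch of an assurance case as a claim hierarchy with weakeners.
    # All names are hypothetical illustrations, not the SACM metamodel itself.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Claim:
        statement: str
        evidence: List[str] = field(default_factory=list)     # supporting artifacts
        subclaims: List["Claim"] = field(default_factory=list)
        weakener: Optional[str] = None    # e.g., an assurance deficit or fallacy

        def undermined(self) -> bool:
            """A claim is undermined if it, or any subclaim, carries a weakener."""
            return self.weakener is not None or any(c.undermined() for c in self.subclaims)

    top = Claim(
        "The braking subsystem is acceptably safe",
        subclaims=[
            Claim("Hazard H1 is mitigated", evidence=["test report TR-17"]),
            Claim("Hazard H2 is mitigated", weakener="no field data yet"),
        ],
    )
    print(top.undermined())  # True: confidence in the top-level claim is weakened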

    Bots influence opinion dynamics without direct human-bot interaction: The mediating role of recommender systems

    Bots' ability to influence public discourse is difficult to estimate. Recent studies found that hyperpartisan bots are unlikely to influence public opinion because bots often interact with already highly polarized users. However, previous studies focused on direct human-bot interactions (e.g., retweets, at-mentions, and likes). The present study suggests that political bots, zealots, and trolls may indirectly affect people's views via the mediating role of a platform's content recommendation system, thus influencing opinions without direct human-bot interaction. Using an agent-based opinion dynamics simulation, we isolated the effect of a single bot (representing 1% of nodes in a network) on the opinions of rational Bayesian agents when a simple recommendation system mediates the agents' content consumption. We compare this experimental condition with an identical baseline condition in which the bot is absent. Across conditions, we use the same random seed and a psychologically realistic Bayesian opinion update rule so that the conditions remain identical except for the bot's presence. Results show that, even with limited direct interactions, the mere presence of the bot is sufficient to shift the average population opinion. Virtually all nodes (not only those directly interacting with the bot) shifted towards more extreme opinions. Furthermore, the bot's mere presence significantly affected the internal representation of the recommender system. Overall, these findings offer a proof of concept that bots and hyperpartisan accounts can influence population opinions not only by directly interacting with humans but also through secondary effects, such as shifting the internal representations of platforms' recommendation engines. The mediating role of recommender systems creates indirect causal pathways for algorithmic opinion manipulation. The study was funded by the Max Planck Institute for Human Development. D.B. was partly funded by a research grant from the Institute of Psychology at the Chinese Academy of Sciences.
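
    A toy reproduction of the design can illustrate the mechanism. The sketch below is not the authors' code; the recommender, update rule, and all parameters (N, STEPS, LR, T) are simplified stand-ins for the paper's Bayesian agents and recommendation system.

    # One bot (1% of nodes) with a fixed extreme opinion, visible to agents
    # only through a similarity-weighted recommender; no direct ties.
    import numpy as np

    N, STEPS, LR, T = 100, 1000, 0.1, 1.0   # agents, rounds, update rate, softness

    def run(with_bot: bool) -> float:
        rng = np.random.default_rng(7)       # identical seed in both conditions
        opinions = rng.uniform(-0.5, 0.5, N)
        for _ in range(STEPS):
            pool = np.append(opinions, 1.0) if with_bot else opinions.copy()
            for i in range(N):
                # Recommender: sample one item, weighted by opinion similarity.
                w = np.exp(-np.abs(pool - opinions[i]) / T)
                w[i] = 0.0                   # never recommend an agent's own post
                j = rng.choice(len(pool), p=w / w.sum())
                # Crude stand-in for the paper's Bayesian opinion update rule.
                opinions[i] += LR * (pool[j] - opinions[i])
        return float(opinions.mean())

    print("baseline mean opinion:", run(False))
    print("with-bot mean opinion:", run(True))   # tends to drift toward the bot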

    Enabling the Development and Implementation of Digital Twins: Proceedings of the 20th International Conference on Construction Applications of Virtual Reality

    Welcome to the 20th International Conference on Construction Applications of Virtual Reality (CONVR 2020). This year we are meeting online due to the current Coronavirus pandemic. The overarching theme for CONVR 2020 is "Enabling the development and implementation of Digital Twins". CONVR is one of the world-leading conferences in the areas of virtual reality, augmented reality, and building information modelling. Each year, more than 100 participants from all around the globe meet to discuss and exchange the latest developments and applications of virtual technologies in the architectural, engineering, construction and operation industry (AECO). The conference is also known for having a unique blend of participants from both academia and industry. This year, despite all the difficulties of replicating a real face-to-face meeting, we are carefully planning the conference to ensure that all participants have a perfect experience. We have a group of leading keynote speakers from industry and academia who are covering up-to-date hot topics and are enthusiastic and keen to share their knowledge with you. CONVR participants are very loyal to the conference and have attended most of the last eighteen editions. This year we are welcoming numerous first-timers, and we aim to help them make the most of the conference by introducing them to other participants.

    AI Management Beyond Myth and Hype: A Systematic Review and Synthesis of the Literature

    Background: AI management has attracted increasing interest from researchers rooted in many disciplines, including information systems, strategy, and economics. In recent years, scholars with interests in these diverse fields have formulated similar research questions, investigated similar research contexts, and often even adopted similar methodologies when studying AI. Despite these commonalities, the AI management literature has largely evolved in an isolated fashion within specific fields, thereby impeding the development of cumulative knowledge. Moreover, views of AI's anticipated trajectory have often oscillated between unjustifiably optimistic assessments of its benefits and extremely pessimistic appraisals of the risks it poses for organizations and society. Method: To move beyond the polarized discussion, this work offers a systematic review of the vast, interdisciplinary AI management literature, based on an analysis of a large sample of articles published between 2010 and 2022. Results: We identify four main research streams in the AI management literature and the associated, conflicting discussions concerning four dimensions (data, labor, critical, and value). Conclusion: The review contributes conceptually and practically to the IS field by documenting the literature's evolution and highlighting avenues for future research. We believe that, by outlining four key themes and visualizing them in an organized framework, the study promotes a holistic and broader understanding of AI management research as a cross-disciplinary effort, for both researchers and practitioners, and provides suggestions that extend the framing of AI beyond myth and hype.

    Explainable Artificial Intelligence in Data Science: From Foundational Issues Towards Socio-technical Considerations

    A widespread need to explain the behavior and outcomes of AI-based systems has emerged due to their ubiquitous presence, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing transfer of control to this kind of system for decision making (or, at least, its use for assisting executive stakeholders) already affects many sensitive realms (as in Politics, Social Sciences, or Law). The handover of decision-making power to opaque AI systems makes explaining them mandatory, primarily in application scenarios where the stakeholders are unaware of both the high technology applied and the basic principles governing the technological solutions. The issue should not be reduced to a merely technical problem; the explainer would be compelled to transmit richer knowledge about the system (including its role within the informational ecosystem where he/she works). To achieve such an aim, the explainer could exploit, if necessary, practices from other scientific and humanistic areas. The first aim of the paper is to emphasize and justify the need for a multidisciplinary approach that benefits from part of the scientific and philosophical corpus on Explaining, underscoring the particular nuances of the issue within the field of Data Science. The second objective is to develop some arguments justifying the authors' bet on a more relevant role for ideas inspired by, on the one hand, formal techniques from Knowledge Representation and Reasoning and, on the other hand, the modeling of human reasoning when facing the explanation. This way, explanation modeling practices would seek a sound balance between pure technical justification and explainer-explainee agreement. Agencia Estatal de Investigación PID2019-109152GB-I00/AEI/10.13039/50110001103
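
    As one concrete instance of the technical side of XAI discussed here, a common device is to approximate an opaque model with an interpretable surrogate. The sketch below uses scikit-learn purely for illustration; it is not the paper's method, and the paper's point is precisely that explanation requires more than such technical devices.

    # Global surrogate: fit an interpretable tree to mimic a black-box model.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    black_box = RandomForestClassifier(random_state=0).fit(X, y)   # opaque model
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))    # mimic the black box's outputs

    print(export_text(surrogate))             # human-readable decision rules
    print("fidelity:", surrogate.score(X, black_box.predict(X)))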

    Multilevel Research in Information Systems: Concepts, Strategies, Problems, and Pitfalls

    Information systems (IS) researchers often explore complex phenomena that result from the interplay between technologies and human actors; as such, IS research frequently involves constructs found at multiple levels of analysis, although they are rarely recognized as such. In fact, our targeted review of the IS literature found minimal explicit consideration of the issues posed by multilevel research, although a number of studies implicitly conducted research at multiple levels. In this paper, we discuss the issues that result from not explicitly recognizing the multilevel nature of one's work and offer guidance on how to identify and explicitly conduct multilevel IS research. Recognizing the relevance of multilevel research for the IS domain, we discuss a systematic approach to conducting quantitative multilevel IS research that is grounded in an overarching framework focusing equally on testing variables and entities. We also highlight the unique role of IS in developing multilevel opportunities for researchers. Finally, we identify a number of gaps within the IS literature in which specific multilevel research questions may be articulated. Such explicit consideration of multilevel issues in future IS research will not only improve IS research but also contribute to the larger discourse on multilevel research.
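
    In practice, quantitative multilevel research of the kind discussed here is often operationalized with mixed-effects models, where individuals (level 1) are nested within organizational units (level 2). The sketch below is a minimal, hypothetical statsmodels example with invented variable names (use, x, team); it is not drawn from the paper.

    # Random-intercept model: level-1 observations nested in level-2 teams.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_teams, per_team = 20, 15
    team = np.repeat(np.arange(n_teams), per_team)
    team_effect = rng.normal(0, 1, n_teams)[team]      # level-2 variation
    x = rng.normal(size=n_teams * per_team)            # level-1 predictor
    use = 2.0 + 0.5 * x + team_effect + rng.normal(size=x.size)

    df = pd.DataFrame({"use": use, "x": x, "team": team})
    model = smf.mixedlm("use ~ x", df, groups=df["team"])  # random intercept per team
    print(model.fit().summary())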

    Why and How to Extract Conditional Statements From Natural Language Requirements

    Functional requirements often describe system behavior by relating events to each other, e.g., "If the system detects an error (e_1), an error message shall be shown (e_2)". Such conditionals consist of two parts: the antecedent (see e_1) and the consequent (e_2), which convey strong semantic information about the intended behavior of a system. Automatically extracting conditionals from text enables several analytical disciplines and is already used for information retrieval and question answering. We found that automated conditional extraction can also provide added value to Requirements Engineering (RE) by facilitating the automatic derivation of acceptance tests from requirements. However, the potential of extracting conditionals has not yet been leveraged for RE. We are convinced that this has two principal reasons: 1) The extent, form, and complexity of conditional statements in RE artifacts are not well understood. We do not know how conditionals are formulated and logically interpreted by RE practitioners. This hinders the development of suitable approaches for extracting conditionals from RE artifacts. 2) Existing methods fail to extract conditionals from unrestricted Natural Language (NL) in fine-grained form. That is, they do not consider the combinatorics between antecedents and consequents, nor do they allow splitting them into finer-grained text fragments (e.g., variable and condition), rendering the extracted conditionals unsuitable for RE downstream tasks such as test case derivation. This thesis contributes to both areas. In Part I, we present empirical results on the prevalence and logical interpretation of conditionals in RE artifacts. Our case study corroborates that conditionals are widely used in both traditional and agile requirements such as acceptance criteria. We found that conditionals in requirements mainly occur in explicit, marked form and may include up to three antecedents and two consequents. Hence, an extraction approach needs to understand conjunctions, disjunctions, and negations to fully capture the relation between antecedents and consequents. We also found that conditionals are a source of ambiguity and that there is not just one way to interpret them formally. This affects any automated analysis that builds upon formalized requirements (e.g., inconsistency checking) and may also influence guidelines for writing requirements. Part II presents our tool-supported approach CiRA, which is capable of detecting conditionals in NL requirements and extracting them in fine-grained form. For the detection, CiRA uses syntactically enriched BERT embeddings combined with a softmax classifier and outperforms existing methods (macro-F_1: 82%). Our experiments show that a sigmoid classifier built on RoBERTa embeddings is best suited to extract conditionals in fine-grained form (macro-F_1: 86%). We disclose our code, data sets, and trained models to facilitate replication. CiRA is available at http://www.cira.bth.se/demo/. In Part III, we highlight how the extraction of conditionals from requirements can help to create acceptance tests automatically. First, we motivate this use case in an empirical study and demonstrate that the lack of adequate acceptance tests is one of the major problems in agile testing. Second, we show how extracted conditionals can be mapped to a Cause-Effect-Graph from which test cases can be derived automatically. We demonstrate the feasibility of our approach in a case study with three industry partners. In our study, 71.8% of 578 manually created test cases could be generated automatically. Furthermore, our approach discovered 80 relevant test cases that were missed in manual test case design. By the end of this thesis, the reader will have an understanding of (1) the notion of conditionals in RE artifacts, (2) how to extract them in fine-grained form, and (3) the added value that the extraction of conditionals can provide to RE.
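
    The step from an extracted conditional to test cases can be sketched in a few lines. The example below is not CiRA's implementation: the extraction output is hand-written, and it assumes a purely conjunctive reading of the antecedents, which, as the thesis notes, is only one of several possible interpretations.

    # Enumerate Cause-Effect-Graph style test cases from a fine-grained
    # conditional with two antecedents and one consequent.
    from itertools import product

    # Hypothetical extraction output for: "If the system detects an error
    # and logging is enabled, an error message shall be shown."
    antecedents = ["system detects an error", "logging is enabled"]
    consequent = "an error message is shown"

    for values in product([True, False], repeat=len(antecedents)):
        causes = ", ".join(a if v else f"NOT({a})" for a, v in zip(antecedents, values))
        # Under a conjunctive reading, the effect fires only if all causes hold.
        effect = consequent if all(values) else f"NOT({consequent})"
        print(f"GIVEN {causes} THEN {effect}")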

    Computational Methods for Medical and Cyber Security

    Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have grown exponentially, driving the development of solutions in various domains, especially medicine, cybersecurity, finance, and education. While these applications of machine learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architectures, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continuous learning, and generative adversarial networks, have successfully solved various tasks in these fields. Therefore, applying these new methods to life-critical missions is crucial, as is measuring the success of these less-traditional algorithms when used in these fields.
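
    To ground one of the named shortcomings, class imbalance is commonly mitigated by reweighting the loss per class. The sketch below is a minimal scikit-learn illustration, not taken from the book.

    # Rare positive class (5%): class_weight="balanced" upweights its errors.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))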