
    Secure Cloud-Edge Deployments, with Trust

    Assessing the security level of IoT applications to be deployed to heterogeneous Cloud-Edge infrastructures operated by different providers is a non-trivial task. In this article, we present a methodology for expressing the security requirements of IoT applications, as well as infrastructure security capabilities, in a simple and declarative manner, and for automatically obtaining an explainable assessment of the security level of the possible application deployments. The methodology also considers the impact of trust relations among the different stakeholders using or managing Cloud-Edge infrastructures. A lifelike example is used to showcase the prototype implementation of the methodology.
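    As a minimal sketch of this style of assessment (not the article's actual formalism; every node, capability, and component name below is an illustrative assumption), requirements and capabilities can be declared as plain sets and matched pairwise, with each verdict carrying its own explanation:

```python
# Minimal sketch, assuming a set-based model: declarative security
# requirements are matched against per-node infrastructure capabilities,
# and every verdict explains which capabilities are missing or satisfied.
# All names below are illustrative, not taken from the article.

# Security capabilities offered by each (hypothetical) Cloud-Edge node.
infrastructure = {
    "cloud1": {"encrypted_storage", "access_logs", "backup", "firewall"},
    "edge1": {"firewall", "anti_tampering"},
}

# Declarative security requirements of each (hypothetical) app component.
app_requirements = {
    "database": {"encrypted_storage", "backup"},
    "gateway": {"firewall"},
}

def assess(component, node):
    """Return (ok, explanation) for deploying `component` on `node`."""
    required = app_requirements[component]
    offered = infrastructure[node]
    missing = required - offered
    if missing:
        return False, f"{node} lacks {sorted(missing)} required by {component}"
    return True, f"{node} offers {sorted(required)} as required by {component}"

for component in app_requirements:
    for node in infrastructure:
        ok, why = assess(component, node)
        print(f"{component} -> {node}: {'OK' if ok else 'KO'} ({why})")
```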

    Intrusion Prevention in Healthcare: An Explainable AI Approach

    Intrusion prevention is a critical aspect of maintaining the security of healthcare systems, especially in the context of sensitive patient data. Explainable AI offers a way to improve the effectiveness of intrusion prevention by using machine learning algorithms to detect and prevent security breaches in healthcare systems. This approach not only helps ensure the confidentiality, integrity, and availability of patient data but also supports regulatory compliance. By providing clear and interpretable explanations for its decisions, explainable AI can enable healthcare professionals to understand the reasoning behind the intrusion detection system's alerts and take appropriate action. This paper explores the application of explainable AI for intrusion prevention in healthcare and its potential benefits for maintaining the security of healthcare systems.
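    One simple way to obtain interpretable alerts of this kind (purely an illustration; the paper does not prescribe this model, and the feature names are assumptions) is a shallow decision tree whose learned rules can be printed verbatim as the explanation shown to staff:

```python
# Illustrative sketch: a shallow decision tree over hypothetical
# audit-log features, whose learned rules are printable as
# human-readable explanations for each alert.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["failed_logins", "records_accessed", "off_hours"]
# Tiny synthetic dataset standing in for real healthcare audit logs.
X = np.array([
    [0, 10, 0], [1, 15, 0], [0, 8, 1], [2, 12, 0],       # benign sessions
    [9, 500, 1], [7, 300, 1], [8, 450, 0], [10, 600, 1],  # breaches
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 1 = intrusion

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules double as the explanation behind each alert.
print(export_text(clf, feature_names=feature_names))
print("alert:", bool(clf.predict([[9, 520, 1]])[0]))
```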

    Explainable Software Bot Contributions: Case Study of Automated Bug Fixes

    In a software project, especially in open source, a contribution is a valuable piece of work made to the project: writing code, reporting bugs, translating, improving documentation, creating graphics, etc. We are now at the beginning of an exciting era where software bots will make contributions of a similar nature to those made by humans. Dry contributions, with no explanation, are often ignored or rejected: they are not understandable per se, they are not put into a larger context, and they are not grounded in idioms shared by the core community of developers. We have been operating a program repair bot called Repairnator for two years and noticed the problem of "dry patches": a patch that does not say which bug it fixes, or that does not explain the effects of the patch on the system. We envision program repair systems that produce an "explainable bug fix": an integrated package of at least 1) a patch, 2) its explanation in natural or controlled language, and 3) a highlight of the behavioral difference with examples. In this paper, we generalize and suggest that software bot contributions must be explainable, and that they must be put into the context of the global software development conversation.
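    As a hedged sketch of the envisioned package (our illustration only, not Repairnator's actual data model; all class and field names are assumptions), the three ingredients map naturally onto a small data structure:

```python
# Sketch of an "explainable bug fix" package: a patch, its explanation,
# and the behavioral difference shown through examples. Names are
# illustrative, not taken from Repairnator.
from dataclasses import dataclass, field

@dataclass
class BehavioralExample:
    input: str    # e.g. a failing test or reproducing input
    before: str   # observed behavior without the patch
    after: str    # observed behavior with the patch

@dataclass
class ExplainableBugFix:
    patch: str         # 1) the diff itself
    explanation: str   # 2) natural or controlled language
    examples: list[BehavioralExample] = field(default_factory=list)  # 3)

fix = ExplainableBugFix(
    patch="--- a/util.py\n+++ b/util.py\n@@ -1 +1 @@\n"
          "-return x / y\n+return x / y if y else 0",
    explanation="Guards against the division by zero seen in the crash log.",
    examples=[BehavioralExample("divide(1, 0)", "ZeroDivisionError", "returns 0")],
)
print(fix.explanation)
```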