    A survey of recommender systems for energy efficiency in buildings: Principles, challenges and prospects

    Recommender systems have developed significantly in recent years, in parallel with advancements in both internet of things (IoT) and artificial intelligence (AI) technologies. As a consequence of IoT and AI, these systems now incorporate multiple forms of data, e.g. social, implicit, local and personal information, which can improve recommender systems' performance and widen their applicability across different disciplines. On the other hand, energy efficiency in the building sector is becoming a hot research topic, in which recommender systems play a major role by promoting energy-saving behavior and reducing carbon emissions. However, the deployment of recommendation frameworks in buildings still needs further investigation to identify the current challenges and issues, whose solutions are the key to making research findings pervasive and thereby ensuring large-scale adoption of this technology. Accordingly, this paper presents, to the best of the authors' knowledge, the first timely and comprehensive reference for energy-efficiency recommender systems by (i) surveying existing recommender systems for energy saving in buildings; (ii) discussing their evolution; (iii) providing an original taxonomy of these systems based on specified criteria, including the nature of the recommender engine, its objective, computing platforms, evaluation metrics and incentive measures; and (iv) conducting an in-depth, critical analysis to identify their limitations and unsolved issues. The derived challenges and directions for future work could effectively guide the energy research community to improve energy efficiency in buildings and reduce the cost of recommender-system-based solutions.
    Comment: 35 pages, 11 figures, 1 table
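    The taxonomy criteria in item (iii) lend themselves to a simple structured representation. The following minimal Python sketch is our own illustration, not code from the paper; the class name, field names, and example values are all hypothetical:

        from dataclasses import dataclass

        # Hypothetical record mirroring the survey's taxonomy axes:
        # recommender engine, objective, computing platform,
        # evaluation metrics, and incentive measures.
        @dataclass
        class EnergyRecommenderEntry:
            engine: str            # e.g. "collaborative filtering"
            objective: str         # e.g. "reduce HVAC consumption"
            platform: str          # e.g. "edge device" or "cloud"
            metrics: list[str]     # e.g. ["precision@k", "kWh saved"]
            incentives: list[str]  # e.g. ["cost feedback", "gamification"]

        # Illustrative catalogue entry (values invented for the example):
        entry = EnergyRecommenderEntry(
            engine="collaborative filtering",
            objective="promote energy-saving behaviour",
            platform="cloud",
            metrics=["precision@k"],
            incentives=["personalised tips"],
        )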

    Three Levels of AI Transparency

    Transparency is generally cited as a key consideration in building Trustworthy AI. However, the concept of transparency is fragmented in AI research, often limited to transparency of the algorithm alone. While considerable attempts have been made to expand its scope beyond the algorithm, there has yet to be a holistic approach that includes not only the AI system but also the user and society at large. We propose that AI transparency operates on three levels: (1) Algorithmic Transparency, (2) Interaction Transparency, and (3) Social Transparency, all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and identify research gaps that result from the conceptual fragmentation of AI transparency, highlighted within the context of the three levels.

    Responsibility and AI: Council of Europe Study DGI(2019)05


    The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions

    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when some pieces of evidence contradict each other or are paradoxical. The challenge then becomes how to determine which information is useful and which should be eliminated. This process is known as meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to this issue by introducing a novel taxonomy, or framework, of TAI that encompasses three crucial domains: articulate, authentic, and basic, corresponding to different levels of trust. To underpin these domains, we define ten dimensions along which to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and explore different TAI approaches from a strategic decision-making perspective.
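    Since the ten dimensions form a checklist rather than an algorithm, a natural machine-readable form is a scoring profile. The Python sketch below is our own hypothetical illustration; the paper prescribes no scoring scheme, so the [0, 1] scores and the plain average are assumptions:

        # The ten trust dimensions named in the abstract.
        TAI_DIMENSIONS = [
            "explainability/transparency", "fairness/diversity",
            "generalizability", "privacy", "data governance",
            "safety/robustness", "accountability", "reproducibility",
            "reliability", "sustainability",
        ]

        def trust_score(scores: dict[str, float]) -> float:
            """Aggregate per-dimension scores (assumed in [0, 1]) by a plain mean."""
            missing = set(TAI_DIMENSIONS) - scores.keys()
            if missing:
                raise ValueError(f"unscored dimensions: {sorted(missing)}")
            return sum(scores[d] for d in TAI_DIMENSIONS) / len(TAI_DIMENSIONS)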

    Towards Modelling and Verification of Social Explainable AI

    Social Explainable AI (SAI) is a new direction in artificial intelligence that emphasises decentralisation, transparency, social context, and a focus on human users. SAI research is still at an early stage; consequently, it concentrates on delivering the intended functionalities but largely ignores the possibility of unwelcome behaviours due to malicious or erroneous activity. We propose that, in order to capture the breadth of relevant aspects, one can use the models and logics of strategic ability that have been developed for multi-agent systems. Using the STV model checker, we take the first step towards the formal modelling and verification of SAI environments, in particular of their resistance to various types of attacks by compromised AI modules.
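    Logics of strategic ability of the kind referenced here are typically variants of alternating-time temporal logic (ATL), which tools such as STV can model-check. As a hedged illustration only (the coalition and proposition names are ours, not taken from the paper), a resistance property could be written in ATL as:

        % Illustrative ATL-style property; all names are hypothetical.
        % Coalition A of uncompromised agents has a joint strategy ensuring
        % that a bad state "leak" is never reached, no matter how the
        % compromised modules behave.
        \langle\!\langle A \rangle\!\rangle\, \mathbf{G}\, \lnot \mathit{leak}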