278 research outputs found

    The user perspective in professional information search

    Get PDF
    Computer Systems, Imagery and Media

    A survey of recommender systems for energy efficiency in buildings: Principles, challenges and prospects

    Full text link
    Recommender systems have developed significantly in recent years, in parallel with advances in both Internet of Things (IoT) and artificial intelligence (AI) technologies. As a consequence of IoT and AI, multiple forms of data are incorporated in these systems, e.g. social, implicit, local and personal information, which can improve recommender systems' performance and widen their applicability across different disciplines. On the other hand, energy efficiency in the building sector is becoming a hot research topic, in which recommender systems play a major role by promoting energy-saving behavior and reducing carbon emissions. However, the deployment of recommendation frameworks in buildings still needs further investigation to identify the current challenges and issues, whose solutions are key to making research findings pervasive and therefore ensuring large-scale adoption of this technology. Accordingly, this paper presents, to the best of the authors' knowledge, the first timely and comprehensive reference for energy-efficiency recommender systems by (i) surveying existing recommender systems for energy saving in buildings; (ii) discussing their evolution; (iii) providing an original taxonomy of these systems based on specified criteria, including the nature of the recommender engine, its objective, computing platforms, evaluation metrics and incentive measures; and (iv) conducting an in-depth, critical analysis to identify their limitations and unsolved issues. The derived challenges and areas of future work could effectively guide the energy research community to improve energy efficiency in buildings and reduce the cost of recommender-system-based solutions.
    Comment: 35 pages, 11 figures, 1 table

    Human-Understandable Explanations of Neural Networks

    Get PDF
    The 21st century is characterized by data streams of enormous scale. This has drastically increased the popularity of highly data-intensive computational models such as neural networks. Owing to their great success in pattern recognition, they have become a powerful tool for prediction, classification, and recommendation in computer science, statistics, economics, and many other disciplines. Despite this widespread use, neural networks are black-box models, i.e. they offer no readily interpretable insight into the structure of the approximated function or into how an input is transformed into the corresponding output. Recent research attempts to open these black boxes and reveal their inner workings. So far, most work has focused on explaining a neural network's decisions at a very technical level and for an audience of computer science experts. As neural networks are used ever more widely, including by people without deeper computer science knowledge, it is crucial to develop approaches that make neural networks explainable to non-experts as well. The goal is for people to understand why the neural network made certain decisions and to be able to interpret the model's output throughout. This thesis describes a framework for providing human-understandable explanations of neural networks. We characterize human-understandable explanations by seven properties: transparency, verifiability, trust, effectiveness, persuasiveness, efficiency, and satisfaction. In this thesis we present explanation approaches that fulfill these properties. First, we present TransPer, an explanation framework for neural networks, in particular for those used in product recommender systems. We define explanation measures based on input relevance in order to analyze the prediction quality of a neural network and to help AI practitioners improve their neural networks, thereby creating transparency and trust. In a recommender-system use case, persuasiveness, which prompts the user to buy a product, and satisfaction, which makes the user experience more pleasant, are also taken into account. Second, to open the black box of the neural network, we define ObAlEx, a new explanation-quality metric for image classification. Using object-detection approaches, explanation approaches, and ObAlEx, we quantify how strongly convolutional neural networks focus on the actual evidence. This gives users an effective explanation and confidence that the model really based its classification decision on the correct part of the input image. It also enables verifiability, i.e. the ability of the user to tell the explanation system that the model focused on the wrong parts of the input image. Third, we propose FilTag, an approach that explains convolutional neural networks by tagging their filters with keywords that identify image classes. Taken together, these tags explain a filter's purpose. Individual image classifications can then be explained intuitively via the tags of the filters activated by the input image. These explanations increase verifiability and trust. Finally, we present FAIRnets, which aims to provide metadata about neural networks, such as architecture information and intended use. Explaining how a neural network is built makes neural networks more transparent, and enabling a user to quickly decide whether a neural network is relevant to the intended use case makes them more efficient. All four approaches address the question of how explanations of neural networks can be provided to non-experts. Together, they represent an important step towards human-understandable AI.
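    A minimal, hedged sketch of the idea behind an explanation-object alignment score such as ObAlEx: given a relevance map produced by any explanation method and a binary object mask produced by an object detector, measure what fraction of the total relevance falls inside the object region. The function name explanation_object_alignment, the use of a binary mask, and the normalization are illustrative assumptions, not the published definition of ObAlEx.

    import numpy as np

    def explanation_object_alignment(relevance_map: np.ndarray,
                                     object_mask: np.ndarray) -> float:
        # relevance_map: per-pixel relevance from an explanation method, shape (H, W)
        # object_mask:   binary mask of the detected object, shape (H, W)
        # Returns the fraction of total non-negative relevance that falls inside
        # the object, in [0, 1]; higher means the evidence lies on the object itself.
        relevance = np.clip(relevance_map, 0.0, None)  # ignore negative relevance
        total = relevance.sum()
        if total == 0.0:
            return 0.0  # no positive evidence anywhere
        return float((relevance * object_mask).sum() / total)

    # Hypothetical usage: a 4x4 heatmap concentrated on a 2x2 object region.
    heatmap = np.array([[0.0, 0.1, 0.0, 0.0],
                        [0.1, 0.6, 0.5, 0.0],
                        [0.0, 0.4, 0.7, 0.1],
                        [0.0, 0.0, 0.1, 0.0]])
    mask = np.zeros((4, 4))
    mask[1:3, 1:3] = 1.0  # object occupies the four center pixels
    print(explanation_object_alignment(heatmap, mask))  # about 0.85

    In this toy example roughly 85% of the relevance lies on the object, which, under the assumptions above, would indicate that the classifier's explanation is focused on the actual evidence rather than on background pixels.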

    xxAI - Beyond Explainable AI

    Get PDF
    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have developed better predictivity, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.

    Folk Theories, Recommender Systems, and Human-Centered Explainable Artificial Intelligence (HCXAI)

    Get PDF
    This study uses folk theories to enhance human-centered “explainable AI” (HCXAI). The complexity and opacity of machine learning has compelled the need for explainability. Consumer services like Amazon, Facebook, TikTok, and Spotify have resulted in machine learning becoming ubiquitous in the everyday lives of the non-expert, lay public. The following research questions inform this study: What are the folk theories of users that explain how a recommender system works? Is there a relationship between the folk theories of users and the principles of HCXAI that would facilitate the development of more transparent and explainable recommender systems? Using the Spotify music recommendation system as an example, 19 Spotify users were surveyed and interviewed to elicit their folk theories of how personalized recommendations work in a machine learning system. Seven folk theories emerged: complies, dialogues, decides, surveils, withholds and conceals, empathizes, and exploits. These folk theories support, challenge, and augment the principles of HCXAI. Taken collectively, the folk theories encourage HCXAI to take a broader view of XAI. The objective of HCXAI is to move towards a more user-centered, less technically focused XAI. The elicited folk theories indicate that this will require adopting principles that include policy implications, consumer protection issues, and concerns about intention and the possibility of manipulation. As a window into the complex user beliefs that inform their interactions with Spotify, the folk theories offer insights into how HCXAI systems can more effectively provide machine learning explainability to the non-expert, lay public.

    Art and the science of generative AI: A deeper dive

    Full text link
    A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.
    Comment: This white paper is an expanded version of Epstein et al 2023, published in Science Perspectives on July 16, 2023, which you can find at the following DOI: 10.1126/science.adh445
