BieColl - Bielefeld eCollections
Beyond Trial and Error in Reinforcement Learning
In this work, we address the trial-and-error nature of modern reinforcement learning (RL) methods by investigating approaches inspired by human cognition. By enhancing state representations and advancing causal reasoning and planning, we aim to improve RL performance, robustness, and explainability. Through diverse examples, we showcase the potential of these approaches to improve RL agents.
Distributive Justice of Resource Allocation Through Artificial Intelligence
Artificial intelligence will take over leadership functions such as rewarding employee performance. It will therefore make decisions about employee outcomes and most likely allocate different resources to employees. The Resource Theory of Social Exchange distinguishes six resource classes. The theory postulates that the value of some resources depends on the identity of the provider of the resource and on the relationship with the provider. This raises the question of whether certain resources, such as affiliation, have value when they are allocated by artificial intelligence. This contribution calls for studies that investigate the value of different resources allocated by artificial intelligence in leadership functions.
Dueling Bandits with Delayed Feedback
Dueling Bandits is a well-studied extension of the Multi-Armed Bandits problem, in which the learner must select two arms in each time step and receives binary feedback as the outcome of the chosen duel. However, all existing best-arm identification algorithms for the Dueling Bandits setting assume that the feedback can be observed immediately after selecting the two arms. If this is not the case, the algorithms simply do nothing and wait until the feedback of the recent duel can be observed, which wastes runtime. We propose an algorithm that can start a new duel even while the previous one is still unfinished and is thus much more time efficient. Our arm selection strategy balances the expected information gain of the chosen duel against the expected delay until we observe the feedback. Using theoretically grounded confidence bounds, we can ensure that, with high probability, the arms we discard are not the best arms.
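A minimal Python sketch of the kind of selection rule described above (an illustration under assumed quantities, not the authors' algorithm): candidate duels are scored by a confidence-interval-width proxy for information gain minus a penalty on a caller-supplied delay estimate, and arms are discarded via Hoeffding-style confidence bounds.

```python
import math

class DelayedDuelingBandits:
    """Illustrative sketch: pairwise win counts, delay-aware duel selection,
    and confidence-bound elimination of dominated arms."""

    def __init__(self, n_arms, delta=0.05, delay_weight=0.1):
        self.delta = delta
        self.delay_weight = delay_weight
        self.wins = [[0] * n_arms for _ in range(n_arms)]  # wins[i][j]: duels where i beat j
        self.active = set(range(n_arms))

    def _estimate(self, i, j):
        t = self.wins[i][j] + self.wins[j][i]
        p = self.wins[i][j] / t if t else 0.5                     # empirical P(i beats j)
        rad = math.sqrt(math.log(2.0 / self.delta) / (2 * t)) if t else 1.0
        return p, rad

    def select_duel(self, expected_delay):
        """expected_delay(i, j): caller-supplied estimate of feedback latency."""
        best_pair, best_score = None, -math.inf
        arms = sorted(self.active)
        for a, i in enumerate(arms):
            for j in arms[a + 1:]:
                _, rad = self._estimate(i, j)                     # wide interval = much to learn
                score = rad - self.delay_weight * expected_delay(i, j)
                if score > best_score:
                    best_pair, best_score = (i, j), score
        return best_pair

    def observe(self, i, j, i_won):
        """Record a (possibly late-arriving) duel outcome and prune dominated arms."""
        if i_won:
            self.wins[i][j] += 1
        else:
            self.wins[j][i] += 1
        for k in list(self.active):
            for m in self.active:
                p, rad = self._estimate(m, k)
                if p - rad > 0.5:          # m beats k with high confidence, so discard k
                    self.active.discard(k)
                    break
```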
Long-Term Availability with Rosetta: A Progress Report
Since the beginning of 2022, the Kulturgesetzbuch (KulturGB NRW) has required academic libraries in North Rhine-Westphalia to establish procedures for the long-term availability (LZV) of their digital holdings. To this end, the state initiative Langzeitverfügbarkeit (LZV.NRW) is building a cooperative infrastructure aimed at all NRW universities and their libraries. The UB Bielefeld uses the initiative's offering, which comprises a license for the archiving software "Rosetta" as well as service and support provided by the hbz.
Within the project "Langzeitverfügbarkeit mit Rosetta", the library and the Kompetenzzentrum Forschungsdaten (Competence Centre for Research Data) are evaluating the archiving software, a commercial product by Ex Libris. The tests assess whether the software is suitable for making selected library holdings available in the long term. In a second step, a long-term availability service for research data aimed at researchers in Bielefeld is to be developed.
Keynote Talks
Tuesday, 25.06.2024
Prof. Dr. Anand Subramoney
Scalable Architectures for Neuromorphic Machine Learning
I will discuss how to design architectures for neuromorphic machine learning from first principles. These architectures take inspiration from biology without being constrained by biological details. Two major themes will be sparsity and asynchrony, and their significant role in scalable neuromorphic systems. I will present recent work from my group on using various forms of sparsity and distributed learning to improve the scalability and efficiency of neuromorphic deep learning models.
Prof. Dr. Kerstin Bunte
Scientific Machine Learning for Partially Observed Dynamical Systems
Nowadays, most successful machine learning (ML) techniques for the analysis of complex interdisciplinary data use large amounts of measurements as input to a statistical system. Domain expert knowledge is often used only in data preprocessing. The subsequently trained technique appears as a "black box", which is difficult to interpret and rarely allows insight into the underlying natural process. Especially in critical domains such as medicine and engineering, the analysis of dynamic data in the form of sequences and time series is often difficult. Due to natural or cost limitations and ethical considerations, data is often irregularly and sparsely sampled, and the underlying dynamic system is complex. Therefore, domain experts currently enter a time-consuming and laborious cycle of mechanistic model construction and simulation, often without direct use of the experimental data or the task at hand. We now combine the predictive power of ML and the explanatory power of mechanistic models. To this end, we perform learning in the space of dynamic models that represent the complex underlying natural processes, with potentially very few and limited measurements. We use principles of dimensionality reduction, such as subspace learning, to determine relevant areas in the parameter space of the underlying model as a first step towards task-driven model reduction. We furthermore incorporate identifiability analysis for informed posterior construction to improve learning with ill-posed systems caused by data limitations. Findings indicate the possibility of an alternative handling of epistemic uncertainties for scientific machine learning techniques, applicable to all linear mechanistic models and to classes of non-linear mechanistic models based on Lie symmetries.
Joint work of: Bunte, Kerstin; Tino, Peter; Oostwal, Elisa; Norden, Janis; Chappell, Michael; Smith, Dave
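As an illustration of learning in the space of mechanistic models from sparse, irregular measurements, the following sketch calibrates an assumed logistic-growth ODE with SciPy; the model, data, and parameter names are placeholders, and the subspace-learning and identifiability steps of the actual work are not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def logistic(t, y, r, K):
    # simple mechanistic growth model standing in for a domain expert's ODE
    return r * y * (1.0 - y / K)

t_obs = np.array([0.3, 1.1, 2.7, 5.0, 9.5])       # sparse, irregular sampling times
y_obs = np.array([0.12, 0.25, 0.58, 0.91, 0.99])  # synthetic noisy observations

def loss(theta):
    r, K = theta
    if r <= 0 or K <= 0:                           # keep the optimizer in a valid region
        return 1e6
    sol = solve_ivp(logistic, (0.0, t_obs[-1]), [0.1], args=(r, K), t_eval=t_obs)
    return float(np.sum((sol.y[0] - y_obs) ** 2))  # least-squares fit in model space

fit = minimize(loss, x0=[0.5, 1.0], method="Nelder-Mead")
print("estimated (r, K):", fit.x)
```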
Prof. Dr. Holger Hoos
How and Why AI Will Shape the Future of Science and Engineering
Recent progress in artificial intelligence has elevated what used to be a highly specialised research area to a topic of public discourse and debate. In this presentation, I will discuss why, beyond the hype, there are good reasons to be excited about AI, but also to be concerned. Specifically, I will explain how and why AI will have a transformative impact on all sciences and engineering disciplines. Based on my own research on the robustness of neural networks, I will discuss some of the fundamental strengths, weaknesses and limitations of current AI systems. Finally, I will share some thoughts on the most serious risks of deploying these systems quickly and broadly, as well as on what needs to be done in order to manage these risks and to realise the benefits AI can bring.
Prof. Dr. Christian Igel
Deep Learning for Large-Scale Tree Carbon Stock Estimation From Satellite Imagery
Trees play an important role in carbon sequestration and biodiversity, as well as in timber and food production. We need a better characterization of woody resources at global scale to understand how they are affected by climate change and human management. Recent advances in satellite remote sensing and machine learning (ML) based computer vision make this possible. This talk discusses large-scale mapping of individual trees using deep learning applied to high-resolution satellite imagery. The biomass of each tree, and thereby its carbon content, is estimated from the crown size using allometric equations. The parameters of these equations are learned from data. The functional relation is assumed to be non-decreasing. Such monotonicity constraints are powerful regularizers in ML in general. They can support fairness in computer-aided decision making and increase plausibility in data-driven scientific models. This talk introduces a conceptually simple and efficient neural network architecture for monotonic modelling that compares favorably to state-of-the-art alternatives. After this technical excursion, we present an application of our tree monitoring in Rwanda, where it helps quantify the progress of restoration projects and develop a pathway to reach the country's goal of net zero emissions by 2050.
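One common way to obtain such a non-decreasing functional relation is to constrain all network weights to be non-negative; the sketch below illustrates this idea on synthetic crown-area/biomass data and is an assumption for illustration, not the architecture presented in the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMLP(nn.Module):
    """One hidden layer; softplus keeps the effective weights non-negative and tanh
    is non-decreasing, so the output is guaranteed non-decreasing in the input."""
    def __init__(self, hidden=16):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        h = torch.tanh(x @ F.softplus(self.w1).T + self.b1)
        return h @ F.softplus(self.w2).T + self.b2

# synthetic illustration: crown area (m^2) -> above-ground biomass (kg)
crown = torch.rand(256, 1) * 50.0
biomass = 2.5 * crown ** 1.2 + torch.randn(256, 1)   # allometric-style toy target

model = MonotonicMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = F.mse_loss(model(crown), biomass)
    loss.backward()
    opt.step()
```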
Wednesday, 26.06.2024
Prof. Dr. Lucie Flek
Perspective Taking in Large Language Models
Perspective-taking, the process of conceptualizing the point of view of another person, remains a challenge for LLMs. Understanding the mental state of others – emotions, beliefs, intentions – is central to the ability to empathize in social interactions. It is also key to choosing the best action to take next. Enhancing the perspective-taking capabilities of LLMs can unlock their potential to react better and more safely to hints of distress, to engage in more receptive argumentation, and to target an explanation to an audience. In this talk, I will present our recent perspective-taking experiments and discuss further opportunities for bringing the human-centered perspectivist paradigm into LLMs.
Prof. Dr. Henning Wachsmuth
LLM-based Argument Quality Improvement
Natural language processing (NLP) has recently seen a revolutionary breakthrough, due to the impressive capabilities of large language models (LLMs). This also affects NLP research on computational argumentation: the computational analysis and synthesis of natural language arguments. While one of the core tasks studied in computational argumentation is the assessment of an argument's quality, in this talk I look one step beyond, namely at how to improve argument quality. Starting from the basics of argumentation, I present insights from selected research of my group involving LLMs for improving argument quality. As part of this, I also look at the recent breakthroughs of LLMs and the paradigm shift that comes with them for computational argumentation in particular and for NLP in general.
Thursday, 27.06.2024
Prof. Dr. Sebastian Trimpe
Trustworthy AI for Physical Machines: Integrating Machine Learning and Control
AI promises significant advancements in engineering, enhancing both design and operation processes. Given that engineering focuses on physical machines like vehicles or robots, ensuring trustworthy solutions is crucial. This talk will explore how combining classical control methods with modern machine learning can create reliable algorithms for real-world applications. Specifically, we will discuss some of our recent research on (i) Bayesian optimization for controller learning, (ii) deep reinforcement learning, and (iii) approximate model-predictive control via imitation learning. The effectiveness of the developed algorithms will be demonstrated through experimental results on robotic hardware.
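To illustrate item (i), the sketch below tunes PD controller gains on a simulated double integrator with scikit-optimize's gp_minimize; the plant, cost function, and choice of library are assumptions for the example, not the speaker's setup.

```python
import numpy as np
from skopt import gp_minimize

def rollout_cost(gains, dt=0.02, steps=500):
    """Simulate a double integrator under PD control and return a rollout cost."""
    kp, kd = gains
    pos, vel, cost = 1.0, 0.0, 0.0                 # start 1 m away from the setpoint
    for _ in range(steps):
        u = -kp * pos - kd * vel                   # PD control toward the origin
        vel += u * dt
        pos += vel * dt
        cost += pos ** 2 * dt + 1e-3 * u ** 2 * dt # tracking error plus control effort
    return cost

# Bayesian optimization over the gain ranges (illustrative bounds)
result = gp_minimize(rollout_cost, dimensions=[(0.1, 50.0), (0.1, 20.0)],
                     n_calls=30, random_state=0)
print("best gains (kp, kd):", result.x, "cost:", result.fun)
```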
Prof. Dr. Malte Schilling
Biological Biases for Learning Robust Robot Behavior: Does Deep Reinforcement Learning Run into the Alignment Problem?
OpenAlex in the Kompetenznetzwerk Bibliometrie
After decades in which the proprietary databases Web of Science and Scopus were the ones consulted for bibliometric analyses, OpenAlex has emerged as an interesting alternative that is freely accessible and usable by anyone. Like numerous other actors worldwide, the BMBF-funded Kompetenzzentrum Bibliometrie (Competence Centre for Bibliometrics) has recognized the potential of the database and is now funding a project to curate data produced with the participation of German institutions. The talk introduces OpenAlex and gives an overview of the current state of the project.
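For orientation, a minimal query against the public OpenAlex REST API for works with German institutional affiliations might look as follows; the endpoint and filter names follow the public documentation at the time of writing and should be verified before use.

```python
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": "institutions.country_code:DE",  # works with a German affiliation
        "per-page": 5,
    },
    timeout=30,
)
resp.raise_for_status()
for work in resp.json()["results"]:
    print(work["publication_year"], work["display_name"])
```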
Does the Transformation Devour Its Children? Cost Implications of Transformative Agreements for Open Access Publishers
With the rapid spread of the internet, the scientific community hoped that new knowledge could be made freely available and disseminated quickly. Nevertheless, most researchers continue to publish in established journals rather than switching entirely to open-access alternatives. As a result, the market power of the large commercial publishers, which keep thousands of journals behind paywalls, remains intact.
To convert these portfolios to open access, research institutions worldwide negotiate "transformative agreements", such as the "DEAL" agreements in Germany: articles are published fully open access, and universities pay only for their publication, but no longer for subscriptions or reading access, which are likewise covered by the publication fees.
I show theoretically that publishers with a large portfolio of established subscription-based journals can use it as leverage to secure high revenues despite declining publication numbers. This could harm competitors that publish exclusively open access and impede competition. In this way, the dominant position of the large publishers could be further entrenched.
Find the Publisher for Your Open-Access Book: Report on Extending the "oa.finder" Search Platform to Include a Search for Academic Book Publishers
Within the joint project open-access.network, a platform for searching academic publishers is being developed. It extends the oa.finder, which currently contains publication-relevant information on around 57,000 journals. For the planned search for open-access book publishing options, the expertise of academic publishers is being collected and made searchable. The tool is being developed and implemented at Bielefeld University Library. Project duration: 01.01.2023 to 31.12.202
Leveraging Desirable and Undesirable Event Logs in Process Mining Tasks
Traditional process mining techniques utilize one event log as input to offer organizational insights. In many applications, information regarding undesirable process aspects may exist. However, the literature lacks a comprehensive overview of their integration into process mining tasks. In our paper, we explore leveraging data from both desirable and undesirable event logs to augment existing process mining tasks and develop innovative applications. Our aim is to systematically outline the potential for enhancements in this realm.
COMETH - An Active Learning Approach Enhanced with Large Language Models
We present a system for the supervision of technical processes, called COMETH, which involves an active learning approach. The system is able to identify anomalies with very little training data through an efficient feedback process. COMETH has been successfully applied in the context of heating, ventilation, and air conditioning (HVAC) systems and in industrial machinery. Here, we describe the idea of combining COMETH's time series analysis with large language models to integrate further context information and thus provide the user with specific recommendations.
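A minimal sketch of such an active-learning loop with uncertainty-based querying (hypothetical interfaces and synthetic data, not COMETH's implementation): a model is trained on the few labels available, the most uncertain window is shown to the expert, and the answer is added to the training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
windows = rng.normal(size=(500, 20))               # feature vectors of time-series windows
truth = (windows.mean(axis=1) > 0.3).astype(int)   # synthetic "anomaly" ground truth

def expert_label(idx):
    # placeholder for the real feedback channel (operator, ticket system, ...)
    return int(truth[idx])

labels = np.full(500, -1)                          # -1 = not yet labelled by the expert
pos, neg = int(np.argmax(truth)), int(np.argmin(truth))
labels[pos], labels[neg] = 1, 0                    # tiny seed with one example per class

clf = RandomForestClassifier(n_estimators=50, random_state=0)
for _ in range(15):                                # active-learning rounds
    seen = labels >= 0
    clf.fit(windows[seen], labels[seen])
    proba = clf.predict_proba(windows)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[seen] = np.inf                     # never re-query labelled windows
    query = int(np.argmin(uncertainty))            # most uncertain window
    labels[query] = expert_label(query)            # ask the expert only about that one
```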