Trends and challenges in funding and governance of universities in Europe
Higher education institutions in the 21st century face grand global challenges, while many stakeholders hold ever greater expectations of universities. In Europe, universities have equally been facing a complex financial situation in which conventional funding models have been transformed and continue to evolve. This paper addresses some of the trends observed in the last decade in university funding and governance in relation to these global challenges, and what kinds of reforms are needed at the system and institutional level to address them.
T-Rep: Representation Learning for Time Series using Time-Embeddings
Multivariate time series present challenges to standard machine learning techniques, as they are often unlabeled, high-dimensional, noisy, and contain missing data. To address this, we propose T-Rep, a self-supervised method to learn time series representations at a timestep granularity. T-Rep learns vector embeddings of time alongside its feature extractor to extract temporal features such as trend, periodicity, or distribution shifts from the signal. These time-embeddings are leveraged in pretext tasks to incorporate smooth, fine-grained temporal dependencies into the representations, as well as to reinforce robustness to missing data. We evaluate T-Rep on downstream classification, forecasting, and anomaly detection tasks, where it outperforms existing self-supervised algorithms for time series in all three settings. We also test T-Rep in missing-data regimes, where it proves more resilient than its counterparts. Finally, we provide latent-space visualisation experiments highlighting the interpretability of the learned representations.
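Although the paper's implementation is not reproduced here, the core idea of learning a vector embedding of time jointly with a feature extractor, and using it in a time-conditioned pretext task, can be sketched in a few lines. The module sizes, the convolutional encoder, and the specific pretext loss below are assumptions for illustration, not T-Rep's actual architecture or tasks.

```python
# Illustrative sketch only: learn a time-embedding alongside a per-timestep
# feature extractor, in the spirit of the approach described above.
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """Maps a scalar timestep t to a learned vector embedding."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.GELU(), nn.Linear(32, dim))

    def forward(self, t):  # t: (batch, length, 1)
        return self.net(t)

class Encoder(nn.Module):
    """Feature extractor over (signal features + time-embedding) inputs."""
    def __init__(self, in_dim: int, time_dim: int = 16, repr_dim: int = 64):
        super().__init__()
        self.time_emb = TimeEmbedding(time_dim)
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim + time_dim, 64, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(64, repr_dim, kernel_size=3, padding=1),
        )

    def forward(self, x, t):
        # x: (batch, length, in_dim), t: (batch, length, 1)
        z = torch.cat([x, self.time_emb(t)], dim=-1).transpose(1, 2)
        return self.conv(z).transpose(1, 2)  # (batch, length, repr_dim)

# One possible time-conditioned pretext task (an assumption, for illustration):
# predict the time-embedding of a future step from the current representation,
# encouraging representations to carry fine-grained temporal structure.
def pretext_loss(encoder, head, x, t, horizon=5):
    reps = encoder(x, t)                                # (B, L, D)
    target = encoder.time_emb(t + horizon).detach()     # embedding of shifted time
    pred = head(reps)                                   # (B, L, time_dim)
    return nn.functional.mse_loss(pred[:, :-horizon], target[:, :-horizon])

if __name__ == "__main__":
    B, L, F = 8, 128, 6
    x = torch.randn(B, L, F)
    t = torch.arange(L, dtype=torch.float32).view(1, L, 1).repeat(B, 1, 1)
    enc, head = Encoder(in_dim=F), nn.Linear(64, 16)
    loss = pretext_loss(enc, head, x, t)
    loss.backward()
    print(float(loss))
```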
Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly used to make important predictions in critical environments, and the danger is that they make and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI) methods that separate explanations from machine learning models have emerged, but they have shortcomings in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement on the importance of endowing deep learning models with explanatory capabilities, so that they can themselves provide an answer to why a particular prediction was made. First, we address the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present Greybox XAI, a framework that composes a DNN and a transparent model through a symbolic Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output similar to the KB used by the transparent model. Once the two models are trained independently, they are used compositionally to form an explainable predictive model. We show that this new architecture is accurate and explainable on several datasets.
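As a rough illustration of the compositional greybox idea (an opaque attribute predictor feeding a transparent classifier), the following sketch pairs a small convolutional network with a scikit-learn logistic regression. The attribute count, network shape, and stand-in KB data are assumptions; this is not the paper's pipeline.

```python
# Minimal sketch: image -> predicted symbolic attributes -> transparent model.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

N_ATTRIBUTES = 8   # size of the symbolic description (assumed)
N_CLASSES = 3

class AttributePredictor(nn.Module):
    """Opaque part: maps an RGB image to a vector of attribute scores."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, N_ATTRIBUTES),
        )

    def forward(self, img):
        return torch.sigmoid(self.backbone(img))  # attribute scores in [0, 1]

attr_model = AttributePredictor()

# Transparent part: logistic regression trained on knowledge-base attributes
# (random stand-in data here), so class decisions can be read off the weights.
rng = np.random.default_rng(0)
kb_attributes = rng.integers(0, 2, size=(200, N_ATTRIBUTES))
kb_labels = rng.integers(0, N_CLASSES, size=200)
transparent = LogisticRegression(max_iter=1000).fit(kb_attributes, kb_labels)

def explainable_predict(img: torch.Tensor):
    """Compose the two models: image -> attributes -> interpretable decision."""
    attrs = attr_model(img).detach().numpy()
    pred = transparent.predict(attrs)
    # The explanation is the attribute vector plus the linear weights behind
    # the decision, both of which a user can inspect directly.
    return pred, attrs, transparent.coef_

if __name__ == "__main__":
    label, attrs, weights = explainable_predict(torch.randn(1, 3, 64, 64))
    print(label, attrs.round(2))
```

The design point this sketch tries to convey is that the opaque and transparent parts are trained independently and only composed at prediction time, so the final decision remains traceable to human-readable attributes.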
Should artificial agents ask for help in human-robot collaborative problem-solving?
Transferring the functioning of our brain to artificial intelligence as fast as possible is an ambitious goal that would help advance the state of the art in AI and robotics. With this in mind, we start from hypotheses derived from an empirical study of human-robot interaction and verify whether they hold in the same way for children as for a basic reinforcement learning algorithm. Specifically, we check whether receiving help from an expert when solving a simple closed-ended task (the Towers of Hanoi) accelerates learning of that task, depending on whether the intervention is canonical or requested by the player. Our experiments allow us to conclude that, whether requested or not, a Q-learning algorithm benefits from expert help in the same way children do.
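The following is a minimal sketch of the kind of setup described: tabular Q-learning on a 3-disk Towers of Hanoi, with an expert that can supply the next move on a shortest path to the goal. The "requested" help heuristic (asking when the Q-values are uninformative) and the reward shaping are assumptions for illustration, not the study's protocol.

```python
# Illustrative sketch, not the study's code.
import random
from collections import defaultdict, deque

N_DISKS = 3
START = (tuple(range(N_DISKS, 0, -1)), (), ())   # all disks on peg 0
GOAL  = ((), (), tuple(range(N_DISKS, 0, -1)))

def legal_moves(state):
    moves = []
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst != src and (not state[dst] or state[dst][-1] > disk):
                moves.append((src, dst))
    return moves

def apply_move(state, move):
    src, dst = move
    pegs = [list(p) for p in state]
    pegs[dst].append(pegs[src].pop())
    return tuple(tuple(p) for p in pegs)

def expert_move(state):
    """Breadth-first search for the first move on a shortest path to GOAL."""
    queue, seen = deque([(state, None)]), {state}
    while queue:
        s, first = queue.popleft()
        if s == GOAL:
            return first
        for m in legal_moves(s):
            nxt = apply_move(s, m)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, first if first is not None else m))
    return None

Q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.95, 0.1

def q_learning(episodes=500, help_mode="requested"):
    for _ in range(episodes):
        state, steps = START, 0
        while state != GOAL and steps < 200:
            moves = legal_moves(state)
            values = [Q[(state, m)] for m in moves]
            unsure = max(values) - min(values) < 1e-6
            if help_mode == "requested" and unsure:
                action = expert_move(state)        # the agent asks for help
            elif random.random() < epsilon:
                action = random.choice(moves)      # exploration
            else:
                action = moves[values.index(max(values))]
            nxt = apply_move(state, action)
            reward = 1.0 if nxt == GOAL else -0.01
            best_next = max([Q[(nxt, m)] for m in legal_moves(nxt)], default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state, steps = nxt, steps + 1

q_learning()
print("learned Q entries:", len(Q))
```

A "canonical" intervention would instead call expert_move at fixed intervals regardless of the agent's uncertainty; comparing learning curves under the two policies mirrors the comparison the abstract describes.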
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
In the last few years, Artificial Intelligence (AI) has achieved notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to happen in the short term in Machine Learning, the entire community faces the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g., ensembles or Deep Neural Networks) that was not present in the previous wave of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors without any prior bias for its lack of interpretability.
Characterizing Patient-Reported Fatigue Using Electronic Diaries in Neurodegenerative and Immune-Mediated Inflammatory Diseases: Observational Study
© Adrien Bennetot, Rana Zia Ur Rehman, Robbin Romijnders, Zhi Li, Victoria Macrae, Kristen Davies, Wan-Fai Ng, Walter Maetzler, Jennifer Kudelka, Hanna Hildesheim, Kirsten Emmert, Emma Paulides, C Janneke van der Woude, Ralf Reilmann, Svenja Aufenberg, Meenakshi Chatterjee, Nikolay V Manyakov, Clémence Pinaud, Stefan Avey. Background: Fatigue is a prevalent and debilitating symptom in many chronic conditions, including immune-mediated inflammatory diseases (IMIDs) and neurodegenerative diseases (NDDs). Fatigue often fluctuates significantly within and between days, yet traditional patient-reported outcomes (PROs) typically rely on recall periods of a week or more, potentially missing these short-term variations. The development of digital tools, such as electronic diaries (eDiaries), offers a unique opportunity to collect granular, real-time data. However, the feasibility, adherence, and comparability of eDiary-based assessments to established PROs require further investigation. Objective: This study aimed to evaluate the feasibility and acceptability of using a high-frequency eDiary to capture intraday variability in fatigue and to compare eDiary data with scores obtained from the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F), a validated weekly-recall PRO. Methods: Data were collected from 159 participants enrolled in the IDEA-FAST (Identifying Digital Endpoints to Assess Fatigue, Sleep and Activities in Daily Living in Neurodegenerative Disorders and Immune-Mediated Inflammatory Diseases) feasibility study, a 4-week prospective observational study conducted at 4 European centers. Participants included individuals with NDDs (n=39), IMIDs (n=78), and healthy volunteers (n=42). Participants used an eDiary to report their physical and mental fatigue levels up to 4 times daily on a 7-point Likert scale (0=low and 6=high). Adherence was calculated as the proportion of completed eDiary entries relative to the total expected entries. Correlations between averaged eDiary scores and weekly FACIT-F scores were analyzed. Results: Adherence to the eDiary protocol was 5505/8880 (61.99%) overall, varying by cohort, with the highest adherence (1117/1200, 93.07%) observed in the primary Sjögren syndrome cohort and the lowest adherence in the Parkinson disease (410/960, 42.7%) and Huntington disease (320/720, 44.4%) cohorts. The average adherence was 430/1680 (43.45%) in the NDD cohorts and 3367/4560 (73.84%) in the IMID cohorts. Fatigue levels showed clear diurnal variation, with significantly higher fatigue reported in the evening compared to the morning (P<.001). A moderate correlation (Spearman ρ=0.46, P<.001) was observed between eDiary fatigue scores and FACIT-F scores, with stronger cohort-specific associations for certain FACIT-F items. These results indicate that eDiaries provide complementary insights to weekly PROs by capturing intraday fluctuations in fatigue. Conclusions: This study demonstrates the feasibility, acceptability, and validity of using high-frequency eDiaries to assess fatigue in chronic conditions. By effectively detecting intra- and interday fatigue variations, eDiaries complement traditional PROs such as FACIT-F, offering a more nuanced understanding of fatigue patterns. Future research should explore optimized eDiary protocols to balance participant burden with data granularity.
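To make the two headline analyses concrete, the sketch below computes adherence as completed over expected eDiary entries and a Spearman correlation between participant-averaged eDiary fatigue and FACIT-F scores. Column names and values are hypothetical; this is not the study's analysis code.

```python
# Illustrative sketch with made-up data, under assumed column names.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical eDiary export: one row per expected entry; 'fatigue' is NaN
# when the participant skipped that entry.
ediary = pd.DataFrame({
    "participant": ["p1"] * 4 + ["p2"] * 4 + ["p3"] * 4 + ["p4"] * 4,
    "fatigue": [3, 4, None, 5,  1, 1, 2, None,  4, 5, 5, 6,  0, 1, None, 1],
})

# Adherence: completed entries / expected entries, overall and per participant.
overall_adherence = ediary["fatigue"].notna().mean()
per_participant = ediary.groupby("participant")["fatigue"].apply(lambda s: s.notna().mean())

# Correlation with the weekly PRO: average each participant's eDiary fatigue
# and compare with their FACIT-F score (hypothetical values; the sign depends
# on the FACIT-F scoring direction).
facit_f = pd.Series({"p1": 30.0, "p2": 44.0, "p3": 22.0, "p4": 50.0})
mean_fatigue = ediary.groupby("participant")["fatigue"].mean()
rho, p_value = spearmanr(mean_fatigue, facit_f.loc[mean_fatigue.index])
print(overall_adherence, dict(per_participant), rho)
```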
A Practical Tutorial on Explainable AI Techniques
The past years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behavior. As opaque Machine Learning models are increasingly employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. Therefore, there is general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background aiming to obtain intuitive insight into Machine Learning models, accompanied by out-of-the-box explanations. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to, in particular, day-to-day models, datasets, and use cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of use with Python notebooks. These can be easily modified to be applied to specific applications. We also explain the prerequisites for using each technique, what the user will learn about it, and which tasks it is aimed at.
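In the spirit of the runnable notebook examples the tutorial describes (though not taken from the article itself), the snippet below applies one widely used post-hoc, model-agnostic XAI technique, permutation importance from scikit-learn, to a black-box classifier on an everyday tabular dataset.

```python
# Explain a black-box classifier by measuring how much performance drops
# when each feature is shuffled (permutation importance).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and record the drop in test accuracy; large
# drops indicate features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda kv: -kv[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```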
