9 research outputs found
Cost-efficient vaccination protocols for network epidemiology
We investigate methods to vaccinate contact networks -- i.e. removing nodes
in such a way that disease spreading is hindered as much as possible -- with
respect to their cost-efficiency. Any real implementation of such protocols
would come with costs related both to the vaccination itself, and gathering of
information about the network. Disregarding this, we argue, would lead to
erroneous evaluation of vaccination protocols. We use the
susceptible-infected-recovered model -- the generic model for diseases making
patients immune upon recovery -- as our disease-spreading scenario, and analyze
outbreaks on both empirical and model networks. For different relative costs,
different protocols dominate. For high vaccination costs and low costs of
gathering information, the so-called acquaintance vaccination is the most
cost-efficient. For other parameter values, protocols designed for
query-efficient identification of the network's largest degrees are the most
efficient.
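The acquaintance-vaccination protocol mentioned above can be sketched in a few lines: sample random nodes and vaccinate a random neighbour of each, since a random neighbour is reached with probability proportional to its degree. This is an illustrative sketch assuming an adjacency-dict contact network; the function name and toy graph are hypothetical, not taken from the paper.

```python
import random

def acquaintance_vaccination(adj, n_vaccinate, seed=0):
    """Acquaintance vaccination: pick a random node, then remove a random
    neighbour of it. Neighbours are reached with probability proportional
    to their degree, so hubs are vaccinated using only local queries --
    no global knowledge of the network is needed."""
    rng = random.Random(seed)
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    nodes = list(adj)
    vaccinated = set()
    while len(vaccinated) < n_vaccinate:
        u = rng.choice(nodes)
        if not adj.get(u):          # already removed or isolated
            continue
        v = rng.choice(sorted(adj[u]))
        for w in adj.pop(v):        # remove v and all its contacts
            adj[w].discard(v)
        vaccinated.add(v)
    return adj, vaccinated

# Toy contact network: a ring of 20 people plus one hub linked to all of them
adj = {i: {(i - 1) % 20, (i + 1) % 20, 20} for i in range(20)}
adj[20] = set(range(20))
remaining, vaccinated = acquaintance_vaccination(adj, 3)
```

Because the hub has by far the highest degree, it is a likely early target of this protocol, which is what makes the strategy effective on heterogeneous contact networks.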
Enhancing explainability and scrutability of recommender systems
Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm's behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Moreover, in the event of receiving undesirable content, explanations can contain valuable information as to how the system's behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
• We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users' profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
• We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible to users because they present subsets of the user's prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.
• We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and subsequently the recommendation models by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
We evaluate all proposed models and methods with real user studies and demonstrate their benefits at achieving explainability and scrutability in recommender systems.
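The recommenders that PRINCE targets score items with personalized PageRank: a random walker on the user-item interaction graph that teleports back to the target user, so items reachable through the user's neighbourhood accumulate score. A minimal power-iteration sketch, with an illustrative bipartite toy graph (node names and parameters are hypothetical, not from the thesis):

```python
def personalized_pagerank(adj, source, alpha=0.15, n_iter=100):
    """Power iteration for personalized PageRank: with probability alpha
    the walker teleports back to `source`, otherwise it follows a random
    outgoing edge. High-scoring items the user has not yet interacted
    with become recommendation candidates."""
    nodes = list(adj)
    score = {u: 0.0 for u in nodes}
    score[source] = 1.0
    for _ in range(n_iter):
        nxt = {u: 0.0 for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                      # dangling node: teleport all mass
                nxt[source] += score[u]
                continue
            share = (1 - alpha) * score[u] / len(out)
            for v in out:
                nxt[v] += share
            nxt[source] += alpha * score[u]  # teleportation mass
        score = nxt
    return score

# Bipartite toy graph: users point to items they interacted with,
# items point back to the users who interacted with them.
graph = {
    "u1": ["i1", "i2"], "u2": ["i2", "i3"],
    "i1": ["u1"], "i2": ["u1", "u2"], "i3": ["u2"],
}
ppr = personalized_pagerank(graph, "u1")
```

Here `i3`, which `u1` never interacted with, still receives score via the path through `i2` and `u2`; a counterfactual explanation in the spirit of PRINCE would then ask which of `u1`'s prior actions that score depends on.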
LIPIcs, Volume 244, ESA 2022, Complete Volume
Graphs behind data: A network-based approach to model different scenarios
Nowadays, the amount and variety of scenarios that can benefit from techniques for extracting and managing knowledge from raw data have dramatically increased. As a result, the search for models capable of ensuring the representation and management of highly heterogeneous data is a hot topic in the data science literature. In this thesis, we aim to propose a solution to address this issue. In particular, we believe that graphs, and more specifically complex networks, as well as the concepts and approaches associated with them, can represent a solution to the problem mentioned above. In fact, we believe that they can be a unique and unifying model to uniformly represent and handle extremely heterogeneous data. Based on this premise, we show how the same concepts and approaches have the potential to address different open issues in different contexts.
Connectomics of extrasynaptic signalling: applications to the nervous system of Caenorhabditis elegans
Connectomics, the study of neural connectivity, is primarily concerned with the mapping and characterisation of wired synaptic links; however, it is well established that long-distance chemical signalling via extrasynaptic volume transmission is also critical to brain function. As these interactions are not visible in the physical structure of the nervous system, current approaches to connectomics are unable to capture them.
This work addresses the problem of missing extrasynaptic interactions by demonstrating for the first time that whole-animal volume transmission networks can be mapped from gene expression and ligand-receptor interaction data, and analysed as part of the connectome. Complete networks are presented for the monoamine systems of Caenorhabditis elegans, along with a representative sample of selected neuropeptide systems.
A network analysis of the synaptic (wired) and extrasynaptic (wireless) connectomes is presented which reveals complex topological properties, including extrasynaptic rich-club organisation with interconnected hubs distinct from those in the synaptic and gap junction networks, and highly significant multilink motifs pinpointing locations in the network where aminergic and neuropeptide signalling is likely to modulate synaptic activity. Thus, the neuronal connectome can be modelled as a multiplex network with synaptic, gap junction, and neuromodulatory layers representing inter-neuronal interactions with different dynamics and polarity. This represents a prototype for understanding how extrasynaptic signalling can be integrated into connectomics research, and provides a novel dataset for the development of multilayer network algorithms. This work was supported by the Medical Research Council (MRC).
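The multiplex representation described above can be illustrated with one edge list per layer and a degree-based rich-club coefficient, i.e. the density of links among nodes whose degree exceeds a threshold. This is a toy sketch with hypothetical neuron and layer names; real connectome analyses use dedicated network libraries and null-model normalisation.

```python
from collections import Counter

def rich_club_coefficient(edges, k):
    """Rich-club coefficient phi(k): the density of links among nodes
    of degree > k. Values near 1 mean the high-degree nodes form a
    tightly interconnected 'club'."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rich = {u for u, d in deg.items() if d > k}
    if len(rich) < 2:
        return 0.0
    e_rich = sum(1 for u, v in edges if u in rich and v in rich)
    return 2 * e_rich / (len(rich) * (len(rich) - 1))

# A multiplex connectome as one undirected edge list per layer:
# wired (synaptic, gap junction) plus a wireless monoamine layer.
layers = {
    "synaptic":     [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")],
    "gap_junction": [("A", "D"), ("B", "D")],
    "monoamine":    [("A", "B"), ("A", "C"), ("A", "D")],
}
phi = rich_club_coefficient(layers["synaptic"], 1)
```

Computing phi(k) per layer, as here for the synaptic layer, is what allows hubs of the wireless layers to be compared against those of the wired ones.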
Proceedings of the International Congress on Interdisciplinarity in Social and Human Sciences
Interdisciplinarity is the main topic and the main goal of this conference.
Since the sixteenth century, with the creation of the first Academy of Sciences in Naples, Italy (1568), and before
that with the creation of the Fine Arts Academies, the worlds of science and art began to work independently, unlike
the Academy of Plato in Classical Antiquity, where science, art, and sport were interconnected. Over time, specific
sciences became independent, and this growing specificity caused increasing difficulty in mutual understanding.
The same trend has affected the Human and Social Sciences. Each of the specific sciences gave rise to a wide
range of particular fields. This has the advantage of allowing the deepening of specialised knowledge, but it means
that there is often only a piecemeal approach to the research object, one that does not take into account its overall complexity.
It is therefore important to work towards a better understanding of scientific phenomena through the complementarity of the different sciences, in an interdisciplinary perspective.
With this growing specialisation of sciences, interdisciplinarity acquired more relevance for scientists seeking more encompassing and useful answers to their research questions.