Subjectivity in inductive inference
This paper examines circumstances under which subjectivity enhances the effectiveness of inductive reasoning. We consider agents facing a data-generating process who are characterized by inference rules that may be purely objective (or data-based) or may incorporate subjective considerations. The basic intuition is that agents who invoke no subjective considerations are doomed to "overfit" the data and therefore engage in ineffective learning. The analysis places no computational or memory limitations on the agents; the role for subjectivity emerges in the presence of unlimited reasoning powers.
Keywords: inductive inference, simplicity, prediction, learning
A Commentary on the Unsupervised Learning of Disentangled Representations
The goal of the unsupervised learning of disentangled representations is to
separate the independent explanatory factors of variation in the data without
access to supervision. In this paper, we summarize the results of Locatello et
al., 2019, and focus on their implications for practitioners. We discuss the
theoretical result showing that the unsupervised learning of disentangled
representations is fundamentally impossible without inductive biases and the
practical challenges it entails. Finally, we comment on our experimental
findings, highlighting the limitations of state-of-the-art approaches and
directions for future research.
Learning Description Logic Ontologies: Five Approaches. Where Do They Stand?
Abstract
The quest for acquiring a formal representation of the knowledge of a domain of interest has attracted researchers with various backgrounds into a diverse field called ontology learning. We highlight classical machine learning and data mining approaches that have been proposed for (semi-)automating the creation of description logic (DL) ontologies. These are based on association rule mining, formal concept analysis, inductive logic programming, computational learning theory, and neural networks. We provide an overview of each approach and how it has been adapted for dealing with DL ontologies. Finally, we discuss the benefits and limitations of each of them for learning DL ontologies.
Inductive logic programming at 30: a new introduction
Inductive logic programming (ILP) is a form of machine learning. The goal of
ILP is to induce a hypothesis (a set of logical rules) that generalises
training examples. As ILP turns 30, we provide a new introduction to the field.
We introduce the necessary logical notation and the main learning settings;
describe the building blocks of an ILP system; compare several systems on
several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol);
highlight key application areas; and, finally, summarise current limitations
and directions for future research.
Comment: Paper under review
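The ILP setting the abstract describes — inducing a set of logical rules that covers all positive training examples and none of the negatives, given background knowledge — can be sketched in miniature. This is a hypothetical toy for illustration only, not the Aleph, TILDE, ASPAL, or Metagol systems; all facts, names, and the tiny hand-enumerated hypothesis space are invented:

```python
# Background knowledge: parent/2 facts (invented for illustration).
parent = {("alice", "bob"), ("bob", "carol"), ("dana", "erin"), ("erin", "finn")}

# Training examples for the target predicate grandparent/2.
positives = {("alice", "carol"), ("dana", "finn")}
negatives = {("alice", "bob"), ("bob", "alice")}

def covers(rule, example):
    """A candidate rule is a function from (X, Z) to bool, built from
    the background facts; it covers an example if it derives it."""
    return rule(*example)

# Candidate hypotheses: real ILP systems search a large rule space,
# here we hand-enumerate two candidates.
candidates = {
    "grandparent(X,Z) :- parent(X,Z)":
        lambda x, z: (x, z) in parent,
    "grandparent(X,Z) :- parent(X,Y), parent(Y,Z)":
        lambda x, z: any((x, y) in parent and (y, z) in parent
                         for y in {p for pair in parent for p in pair}),
}

def induce():
    # Return the first hypothesis consistent with the examples:
    # it must cover every positive and no negative.
    for name, rule in candidates.items():
        if all(covers(rule, e) for e in positives) and \
           not any(covers(rule, e) for e in negatives):
            return name
    return None

print(induce())  # the chain rule is the only consistent hypothesis here
```

The first candidate fails on the positives (no direct parent link between grandparent and grandchild), so the search settles on the two-literal chain rule — the "generalises training examples" step in its simplest possible form.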
GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric Polynomials
Graph Neural Networks (GNNs) are a promising deep learning approach for
circumventing many real-world problems on graph-structured data. However, these
models usually have at least one of four fundamental limitations:
over-smoothing, over-fitting, difficult to train, and strong homophily
assumption. For example, Simple Graph Convolution (SGC) is known to suffer from
the first and fourth limitations. To tackle these limitations, we identify a
set of key designs including (D1) dilated convolution, (D2) multi-channel
learning, (D3) self-attention score, and (D4) sign factor to boost learning
from different types (i.e. homophily and heterophily) and scales (i.e. small,
medium, and large) of networks, and combine them into a graph neural network,
GPNet, a simple and efficient one-layer model. We theoretically analyze the
model and show that it can approximate various graph filters by adjusting the
self-attention score and sign factor. Experiments show that GPNet consistently
outperforms baselines in terms of average rank, average accuracy, complexity,
and parameters on semi-supervised and fully supervised tasks, and achieves
competitive performance compared with state-of-the-art models on the inductive
learning task.
Comment: 15 pages, 15 figures
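For context on the SGC baseline the abstract contrasts against, here is a minimal sketch of its feature propagation, X' = S^k X with the symmetrically normalized self-looped adjacency S. The toy graph, function names, and parameters are our own assumptions, not taken from the paper:

```python
import numpy as np

def sgc_features(A, X, k=2):
    """Propagate node features k hops with
    S = D^{-1/2} (A + I) D^{-1/2}, then (in full SGC) fit a plain
    linear/logistic model on the result. Repeated multiplication by S
    smooths features, which is where over-smoothing comes from."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    out = X
    for _ in range(k):
        out = S @ out  # one smoothing hop per iteration
    return out

# Toy 4-node path graph with scalar node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0], [1.0]])
print(sgc_features(A, X, k=8).ravel())  # features grow increasingly similar
```

With large k the propagated features approach a fixed profile determined by node degrees alone, losing the input signal — the over-smoothing limitation the abstract attributes to SGC.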
Assessing the probability of patients reoffending after discharge from low to medium secure forensic mental health services: An inductive prevention paradox
Citizens of developed societies are troubled by those who commit 'irrational' crimes against the person. Reoffending by ex-patients following their release from secure mental health services triggers particularly intense angst when amplified by media and political scrutiny. Forensic mental health service providers are expected to minimise the occurrence of such transgressions by releasing only those patients who are judged acceptably unlikely to reoffend. However, reoffending probabilities can only be estimated by observing behaviour in secure institutional settings designed specifically to prevent patients from transgressing. The article explores this 'inductive prevention paradox', which arises when the implementation of measures designed to avoid an adverse event obscures direct observation of what might have happened if prophylaxis had not been attempted. The analysis presented draws on data obtained in 1999–2003 from two qualitative studies in medium to low secure UK institutions, one providing forensic mental health services and the other forensic learning disability services. We explored the views of 56 staff members and 21 patients about risk management in forensic services and undertook an additional 25 staff interviews for case studies of the 21 patients. The wider applicability of the inductive prevention paradox is considered in the Discussion. We argue that the prognostic limitations arising from prevention have been underestimated by policy makers and in official inquiries, and that the prevailing personal risk assessment framework needs to be complemented by greater attention to the environments into which patients will be discharged.
Adversarial Attack and Defense on Graph Data: A Survey
Deep neural networks (DNNs) have been widely applied to various applications
including image classification, text generation, audio recognition, and graph
data analysis. However, recent studies have shown that DNNs are vulnerable to
adversarial attacks. Though there are several works studying adversarial attack
and defense strategies on domains such as images and natural language
processing, it is still difficult to directly transfer the learned knowledge to
graph structure data due to its representation challenges. Given the importance
of graph analysis, an increasing number of works start to analyze the
robustness of machine learning models on graph data. Nevertheless, current
studies considering adversarial behaviors on graph data usually focus on
specific types of attacks with certain assumptions. In addition, each work
proposes its own mathematical formulation which makes the comparison among
different methods difficult. Therefore, in this paper, we aim to survey
existing adversarial learning strategies on graph data and first provide a
unified formulation for adversarial learning on graph data which covers most
adversarial learning studies on graph. Moreover, we also compare different
attacks and defenses on graph data and discuss their corresponding
contributions and limitations. In this work, we systemically organize the
considered works based on the features of each topic. This survey not only
serves as a reference for the research community but also gives researchers
outside this domain a clear picture of the field. In addition, we have created
an online resource and have kept it updated with relevant papers over the last
two years. More details of the comparisons of the various studies covered by
this survey are open-sourced at
https://github.com/YingtongDou/graph-adversarial-learning-literature
Comment: In submission to Journal. For more open-source and up-to-date
information, please check our Github repository:
https://github.com/YingtongDou/graph-adversarial-learning-literature
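The structure-perturbation attacks this survey unifies can be illustrated with a deliberately simplified sketch: greedily flip the single edge that most changes a surrogate model's score for a target node. This is a hypothetical toy (the surrogate, graph, and weights are our own assumptions), not any specific attack from the surveyed literature:

```python
import numpy as np

def normalize(A):
    """Symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def score(A, X, w, target):
    # Surrogate model: one propagation step plus a linear read-out.
    return float((normalize(A) @ X @ w)[target])

def best_edge_flip(A, X, w, target):
    """Greedy white-box structure attack: try every single edge
    addition/removal and return the flip that most perturbs the
    target node's surrogate score."""
    base = score(A, X, w, target)
    best, best_delta = None, 0.0
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            B = A.copy()
            B[i, j] = B[j, i] = 1.0 - B[i, j]  # add or remove edge (i, j)
            delta = abs(score(B, X, w, target) - base)
            if delta > best_delta:
                best, best_delta = (i, j), delta
    return best, best_delta
```

Even this brute-force version shows why a unified formulation helps: attacks differ mainly in the perturbation budget, the surrogate, and the objective being maximized, all of which are explicit knobs here.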