Structured Learning via Logistic Regression
A successful approach to structured learning is to write the learning
objective as a joint function of linear parameters and inference messages, and
iterate between updates to each. This paper observes that if the inference
problem is "smoothed" through the addition of entropy terms, for fixed
messages, the learning objective reduces to a traditional (non-structured)
logistic regression problem with respect to parameters. In these logistic
regression problems, each training example has a bias term determined by the
current set of messages. Based on this insight, the structured energy function
can be extended from linear factors to any function class where an "oracle"
exists to minimize a logistic loss.
Comment: Advances in Neural Information Processing Systems 201
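The abstract's central observation — that with messages held fixed, the smoothed structured objective is an ordinary logistic regression in which each example carries a message-determined offset — can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the function name `fit_logistic_with_bias` and the plain gradient-descent loop are assumptions introduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_with_bias(X, y, bias, lr=0.1, steps=500):
    """Minimize the logistic loss of sigmoid(X @ w + bias) against labels
    y in {0, 1}. `bias` is a fixed per-example offset, standing in for the
    contribution of the current set of inference messages."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w + bias)          # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of mean logistic loss
        w -= lr * grad
    return w
```

In the full scheme one would alternate: update messages with the learned parameters, recompute the per-example biases, and refit — any learner with an "oracle" for this biased logistic loss could replace the gradient loop.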
Temporal Bayesian classifiers for modelling muscular dystrophy expression data
The analysis of microarray data from time-series experiments requires specialised algorithms that take the temporal ordering of the data into account. In this paper we explore a new Bayesian classifier architecture that can be used to understand how biological mechanisms differ with respect to time. We show that this classifier improves the classification of microarray data and, by incorporating time transparently, ensures that the models can easily be analysed by biologists. In this paper we focus on data that has been generated to explore different types of muscular dystrophy.
Integrated Inference and Learning of Neural Factors in Structural Support Vector Machines
Tackling pattern recognition problems in areas such as computer vision,
bioinformatics, speech or text recognition is often done best by taking into
account task-specific statistical relations between output variables. In
structured prediction, this internal structure is used to predict multiple
outputs simultaneously, leading to more accurate and coherent predictions.
Structural support vector machines (SSVMs) are nonprobabilistic models that
optimize a joint input-output function through margin-based learning. Because
SSVMs generally disregard the interplay between unary and interaction factors
during the training phase, final parameters are suboptimal. Moreover, their
factors are often restricted to linear combinations of input features, limiting
their generalization power. To improve prediction accuracy, this paper proposes:
(i) Joint inference and learning by integration of back-propagation and
loss-augmented inference in SSVM subgradient descent; (ii) Extending SSVM
factors to neural networks that form highly nonlinear functions of input
features. Image segmentation benchmark results demonstrate improvements over
conventional SSVM training methods in terms of accuracy, highlighting the
feasibility of end-to-end SSVM training with neural factors.
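The two proposals above — loss-augmented inference inside subgradient descent, with gradients back-propagated through a neural factor — can be sketched in a minimal form. This is an illustrative sketch under simplifying assumptions, not the paper's formulation: multiclass classification stands in for the structured output space, the "factor" is a one-hidden-layer network, and all names and shapes are invented here.

```python
import numpy as np

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)   # hidden layer of the neural (unary) factor
    return W2 @ h, h      # per-class scores, hidden activations

def ssvm_subgradient_step(x, y, W1, W2, lr=0.05):
    """One structured-hinge subgradient step with loss-augmented inference,
    back-propagated through the neural factor."""
    scores, h = forward(x, W1, W2)
    # Loss-augmented inference: argmax_y' score(y') + Delta(y, y'),
    # with Delta the 0/1 task loss.
    delta = np.ones_like(scores)
    delta[y] = 0.0
    y_hat = int(np.argmax(scores + delta))
    if y_hat == y:
        return W1, W2     # margin satisfied: zero subgradient
    # Subgradient pushes the score of y up and of y_hat down,
    # propagated through the tanh hidden layer.
    g_scores = np.zeros_like(scores)
    g_scores[y_hat], g_scores[y] = 1.0, -1.0
    gW2 = np.outer(g_scores, h)
    g_h = W2.T @ g_scores
    gW1 = np.outer(g_h * (1 - h ** 2), x)
    return W1 - lr * gW1, W2 - lr * gW2
```

A real structured model would replace the argmax over classes with loss-augmented inference over joint outputs (e.g. a segmentation), but the update has the same shape: the difference between the loss-augmented prediction and the ground truth is back-propagated into the factor's parameters.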