A Survey on Metric Learning for Feature Vectors and Structured Data
The need for appropriate ways to measure the distance or similarity between
data is ubiquitous in machine learning, pattern recognition and data mining,
but handcrafting such good metrics for specific problems is generally
difficult. This has led to the emergence of metric learning, which aims at
automatically learning a metric from data and has attracted a lot of interest
in machine learning and related fields for the past ten years. This survey
paper proposes a systematic review of the metric learning literature,
highlighting the pros and cons of each approach. We pay particular attention to
Mahalanobis distance metric learning, a well-studied and successful framework,
but additionally present a wide range of methods that have recently emerged as
powerful alternatives, including nonlinear metric learning, similarity learning
and local metric learning. Recent trends and extensions, such as
semi-supervised metric learning, metric learning for histogram data and the
derivation of generalization guarantees, are also covered. Finally, this survey
addresses metric learning for structured data, in particular edit distance
learning, and attempts to give an overview of the remaining challenges in
metric learning for the years to come.
Comment: Technical report, 59 pages. Changes in v2: fixed typos and improved presentation. Changes in v3: fixed typos. Changes in v4: fixed typos and new method.
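The Mahalanobis framework the survey highlights can be sketched in a few lines: the learned metric is a positive semidefinite matrix M, conveniently parameterized as M = L'L so that the learned distance is an ordinary Euclidean distance after the linear map L. A minimal illustration (not any specific algorithm from the survey):

```python
import numpy as np

def mahalanobis(x, y, L):
    """Distance under a learned metric M = L.T @ L (PSD by construction)."""
    d = L @ (x - y)
    return float(np.sqrt(d @ d))

# With L = I, the metric reduces to the plain Euclidean distance.
x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
assert np.isclose(mahalanobis(x, y, np.eye(2)), 5.0)

# A non-identity L reweights directions, e.g. ignoring dimension 0 entirely.
L = np.diag([0.0, 1.0])
assert np.isclose(mahalanobis(x, y, L), 4.0)
```

Metric learning algorithms differ mainly in how they fit L (or M) from side information such as must-link/cannot-link pairs.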
Encounter complexes and dimensionality reduction in protein-protein association
An outstanding challenge has been to understand the mechanism whereby proteins associate. We report here the results of exhaustively sampling the conformational space in protein–protein association using a physics-based energy function. The agreement between experimental intermolecular paramagnetic relaxation enhancement (PRE) data and the PRE profiles calculated from the docked structures shows that the method captures both specific and non-specific encounter complexes. To explore the energy landscape in the vicinity of the native structure, the nonlinear manifold describing the relative orientation of two solid bodies is projected onto a Euclidean space in which the shape of low energy regions is studied by principal component analysis. Results show that the energy surface is canyon-like, with a smooth funnel within a two-dimensional subspace capturing over 75% of the total motion. Thus, proteins tend to associate along preferred pathways, similar to the sliding of a protein along DNA in the process of protein-DNA recognition.
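The dimensionality-reduction step can be illustrated with a generic PCA computation: on a synthetic point cloud whose variance is concentrated in two directions (illustrative data only, not the paper's docking results), the top two principal components capture most of the motion:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 6-D point cloud with most variance along two axes, mimicking
# a canyon-like landscape where a 2-D subspace dominates the motion.
X = rng.normal(size=(500, 6)) * np.array([10.0, 8.0, 1.0, 1.0, 1.0, 1.0])
X -= X.mean(axis=0)  # PCA requires centered data

# PCA via SVD; explained-variance ratios come from squared singular values.
_, s, _ = np.linalg.svd(X, full_matrices=False)
ratio = s**2 / np.sum(s**2)
top2 = float(ratio[0] + ratio[1])
assert top2 > 0.75  # two components dominate, by construction here
```

The same computation on docked-structure coordinates would quantify how much of the encounter-complex motion lies in a low-dimensional funnel.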
DeepSig: Deep learning improves signal peptide detection in proteins
Motivation:
The identification of signal peptides in protein sequences is an important step toward protein localization and function characterization.
Results:
Here, we present DeepSig, an improved approach for signal peptide detection and cleavage-site prediction based on deep learning methods. Comparative benchmarks performed on an updated independent dataset of proteins show that DeepSig is the current best performing method, scoring better than other available state-of-the-art approaches on both signal peptide detection and precise cleavage-site identification.
Availability and implementation:
DeepSig is available as both a standalone program and a web server at https://deepsig.biocomp.unibo.it. All datasets used in this study can be obtained from the same website.
Statistical Physics and Representations in Real and Artificial Neural Networks
This document presents the material of two lectures on statistical physics
and neural representations, delivered by one of us (R.M.) at the Fundamental
Problems in Statistical Physics XIV summer school in July 2017. In a first
part, we consider the neural representations of space (maps) in the
hippocampus. We introduce an extension of the Hopfield model, able to store
multiple spatial maps as continuous, finite-dimensional attractors. The phase
diagram and dynamical properties of the model are analyzed. We then show how
spatial representations can be dynamically decoded using an effective Ising
model capturing the correlation structure in the neural data, and compare
applications to data obtained from hippocampal multi-electrode recordings and
by (sub)sampling our attractor model. In a second part, we focus on the problem
of learning data representations in machine learning, in particular with
artificial neural networks. We start by introducing data representations
through some illustrations. We then analyze two important algorithms, Principal
Component Analysis and Restricted Boltzmann Machines, with tools from
statistical physics.
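The second of the two algorithms mentioned, the Restricted Boltzmann Machine, can be sketched with one step of contrastive divergence (CD-1), the standard gradient estimator for its training; this is a generic binary-RBM sketch for orientation, not the lectures' exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v, W, a, b, rng):
    """One CD-1 step for a binary RBM with energy
    E(v, h) = -v @ W @ h - a @ v - b @ h."""
    ph = sigmoid(v @ W + b)                      # p(h=1 | v), positive phase
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + a)                    # p(v=1 | h)
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v_neg @ W + b)              # negative phase
    # Gradient estimate: data statistics minus model (reconstruction) statistics.
    dW = np.outer(v, ph) - np.outer(v_neg, ph_neg)
    return dW, v_neg

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(6, 3))
a, b = np.zeros(6), np.zeros(3)
v = (rng.random(6) < 0.5).astype(float)
dW, v_neg = cd1_step(v, W, a, b, rng)
assert dW.shape == (6, 3)
```

The statistical-physics analysis in the lectures studies what representations the hidden units of such a machine extract from data.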
Taming Wild High Dimensional Text Data with a Fuzzy Lash
The bag-of-words (BOW) model represents a corpus as a matrix whose elements are word frequencies. However, each row in the matrix is a very high-dimensional
sparse vector. Dimension reduction (DR) is a popular method to address sparsity
and high-dimensionality issues. Among the different strategies for developing DR methods, Unsupervised Feature Transformation (UFT) is a popular one, mapping all words onto a new basis to represent the BOW. The recent growth of text data and its challenges imply that the DR area still needs new perspectives. Although a wide range of methods based on the UFT strategy has been developed, the fuzzy approach has not been considered for DR based on this strategy. This research investigates the application of fuzzy clustering as a DR method based on the UFT strategy, collapsing the BOW matrix to provide a lower-dimensional representation of the documents rather than of the words in a corpus. The quantitative evaluation shows that fuzzy clustering produces superior performance and features compared to Principal Component Analysis (PCA) and Singular Value Decomposition (SVD), two popular DR methods based on the UFT strategy.
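One way to realize this UFT-style reduction is to fuzzy-cluster the word profiles and use the membership matrix as the new basis onto which the BOW is collapsed. The sketch below uses a minimal fuzzy c-means on a toy corpus (an illustration of the strategy, not the paper's exact pipeline):

```python
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: returns the membership matrix U (n x k),
    whose rows sum to 1 (soft cluster assignments, fuzzifier m)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]  # init centroids at data points
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-9
        # Standard update: u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U

# Toy BOW: rows = documents, columns = words.
bow = np.array([[3, 1, 0, 0],
                [2, 2, 0, 1],
                [0, 0, 4, 2],
                [0, 1, 3, 3]], dtype=float)
# Cluster the word profiles (columns of the BOW) into k fuzzy groups, then
# collapse the BOW onto the k membership directions: documents x k.
U = fuzzy_cmeans(bow.T, k=2)   # words x clusters
reduced = bow @ U              # documents x clusters
assert reduced.shape == (4, 2)
```

Each document is thus described by k fuzzy word-cluster weights instead of a sparse vocabulary-sized vector.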
Sparse Bilinear Logistic Regression
In this paper, we introduce the concept of sparse bilinear logistic
regression for decision problems involving explanatory variables that are
two-dimensional matrices. Such problems are common in computer vision,
brain-computer interfaces, style/content factorization, and parallel factor
analysis. The underlying optimization problem is bi-convex; we study its
solution and develop an efficient algorithm based on block coordinate descent.
We provide a theoretical guarantee for global convergence and estimate the
asymptotical convergence rate using the Kurdyka-{\L}ojasiewicz inequality. A
range of experiments with simulated and real data demonstrate that sparse
bilinear logistic regression outperforms current techniques in several
important applications.
Comment: 27 pages, 5 figures.
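The bi-convex structure can be sketched directly: fixing v reduces the model p(y=1|X) = sigma(u'Xv + c) to an ordinary logistic regression in u (with features Xv), and vice versa, which is what block coordinate descent exploits. Below is a simplified subgradient sketch on synthetic data; the paper's actual algorithm and regularization handling differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, u, v, c):
    """Bilinear logistic model for matrix-valued inputs X."""
    return sigmoid(u @ X @ v + c)

def alternate_step(Xs, ys, u, v, c, lr=0.1, lam=0.01):
    """One block-coordinate pass over u then v; the l1 sparsity penalty
    lam is handled with a crude subgradient for brevity."""
    for fix_v in (True, False):
        feats = np.stack([X @ v if fix_v else X.T @ u for X in Xs])
        w = u if fix_v else v
        p = sigmoid(feats @ w + c)
        g = feats.T @ (p - ys) / len(ys) + lam * np.sign(w)
        if fix_v:
            u = u - lr * g
        else:
            v = v - lr * g
        c = c - lr * float(np.mean(p - ys))
    return u, v, c

# Synthetic matrix-variate classification problem (illustrative only).
rng = np.random.default_rng(2)
u_true, v_true = np.array([1.0, -1.0, 0.5]), np.array([0.5, 0.0, -1.0, 1.0])
Xs = [rng.normal(size=(3, 4)) for _ in range(200)]
ys = np.array([float(u_true @ X @ v_true > 0) for X in Xs])

def nll(u, v, c):
    p = np.clip(np.array([predict(X, u, v, c) for X in Xs]), 1e-9, 1 - 1e-9)
    return float(-np.mean(ys * np.log(p) + (1 - ys) * np.log(1 - p)))

u, v, c = 0.1 * np.ones(3), 0.1 * np.ones(4), 0.0
before = nll(u, v, c)
for _ in range(30):
    u, v, c = alternate_step(Xs, ys, u, v, c)
after = nll(u, v, c)
assert after < before  # alternating convex updates reduce the loss
```

Each subproblem is convex, which is why convergence guarantees (via the Kurdyka-Lojasiewicz inequality in the paper) can be established despite the overall non-convexity.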
Digging into acceptor splice site prediction: an iterative feature selection approach
Feature selection techniques are often used to reduce data dimensionality, increase classification performance, and gain insight into the processes that generated the data. In this paper, we describe an iterative procedure of feature selection and feature construction steps, improving the classification of acceptor splice sites, an important subtask of gene prediction.
We show that acceptor prediction can benefit from feature selection, and describe how feature selection techniques can be used to gain new insights into the classification of acceptor sites. This is illustrated by the identification of a new, biologically motivated feature: the AG-scanning feature.
The results described in this paper contribute both to the domain of gene prediction and to research in feature selection techniques, describing a new wrapper-based feature weighting method that aids in knowledge discovery when dealing with complex datasets.
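A wrapper approach of the kind described evaluates candidate feature subsets by the performance of the classifier they induce. A minimal greedy forward-selection sketch, using a nearest-centroid stand-in classifier and synthetic data (not the paper's splice-site features or classifier):

```python
import numpy as np

def accuracy(X, y, feats):
    """Nearest-centroid accuracy using only the selected feature columns."""
    Xf = X[:, feats]
    c0, c1 = Xf[y == 0].mean(axis=0), Xf[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xf - c1, axis=1) < np.linalg.norm(Xf - c0, axis=1)
    return float(np.mean(pred == y))

def wrapper_forward_select(X, y, k):
    """Greedy wrapper: repeatedly add the feature whose inclusion most
    improves the classifier's accuracy."""
    selected = []
    for _ in range(k):
        scores = [(accuracy(X, y, selected + [j]), j)
                  for j in range(X.shape[1]) if j not in selected]
        selected.append(max(scores)[1])
    return selected

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 5))
X[:, 2] += 3.0 * y  # feature 2 is the only informative one, by construction
assert wrapper_forward_select(X, y, 1) == [2]
```

In practice the evaluation inside the wrapper should use cross-validation rather than training accuracy, at a correspondingly higher computational cost.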
Kinetics of protein-DNA interaction: facilitated target location in sequence-dependent potential
Recognition and binding of specific sites on DNA by proteins is central for
many cellular functions such as transcription, replication, and recombination.
In the process of recognition, a protein rapidly searches for its specific site
on a long DNA molecule and then strongly binds this site. Here we aim to find a
mechanism that can provide both a fast search (1-10 sec) and high stability of
the specific protein-DNA complex ( M).
Earlier studies have suggested that rapid search involves the sliding of a
protein along the DNA. Here we consider sliding as a one-dimensional (1D)
diffusion in a sequence-dependent rough energy landscape. We demonstrate that,
in spite of the landscape's roughness, rapid search can be achieved if 1D
sliding is accompanied by 3D diffusion. We estimate the range of the specific
and non-specific DNA-binding energy required for rapid search and suggest
experiments that can test our mechanism. We show that optimal search requires a
protein to spend half of its time sliding along the DNA and half diffusing in 3D.
We also establish that, paradoxically, realistic energy functions cannot
provide both rapid search and strong binding of a rigid protein. To reconcile
these two fundamental requirements we propose a search-and-fold mechanism that
involves the coupling of protein binding and partial protein folding.
The proposed mechanism has several important biological implications for search in the presence of other proteins and nucleosomes, simultaneous search by several proteins, etc. It also provides a new framework for the interpretation of experimental and structural data on protein-DNA interactions.
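The half-sliding, half-diffusing optimum can be checked numerically: if each search round slides for time tau1, scanning on the order of sqrt(D1*tau1) base pairs, and then makes a 3D excursion of duration tau3, the total search time behaves as (L / sqrt(D1*tau1)) * (tau1 + tau3), which is minimized exactly at tau1 = tau3. The constants below are illustrative, not values from the paper:

```python
import numpy as np

# L: DNA length to scan (bp); D1: 1D sliding diffusion constant (bp^2/s);
# tau3: duration of one 3D excursion (s). Illustrative values only.
L, D1, tau3 = 1e6, 1.0, 1e-3

def t_total(tau1):
    n_rounds = L / np.sqrt(D1 * tau1)  # rounds needed to cover L base pairs
    return n_rounds * (tau1 + tau3)

taus = np.logspace(-5, -1, 2001)
best = taus[np.argmin(t_total(taus))]
# The minimum of (tau1 + tau3)/sqrt(tau1) sits at tau1 = tau3:
# equal time spent sliding and diffusing in 3D.
assert np.isclose(best, tau3, rtol=0.05)
```

Differentiating (tau1 + tau3)/sqrt(tau1) and setting the derivative to zero gives tau1 = tau3 analytically, in agreement with the grid search.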
Operators for transforming kernels into quasi-local kernels that improve SVM accuracy
Motivated by the crucial role that locality plays in various learning approaches, we present, in the framework of kernel machines for classification, a novel family of operators on kernels able to integrate local information into any kernel, obtaining quasi-local kernels. The quasi-local kernels maintain the possibly global properties of the input kernel and increase the kernel value as the points get closer in the feature space of the input kernel, mixing the effect of the input kernel with a kernel that is local in the feature space of the input one. If applied to a local kernel, the operators introduce an additional level of locality, equivalent to using a local kernel with a non-stationary kernel width. The operators accept two parameters that regulate the width of the exponential influence of points in the locality-dependent component and the balance between the feature-space local component and the input kernel. We address the choice of these parameters with a data-dependent strategy. Experiments carried out with SVMs, applying the operators to traditional kernel functions on a total of 43 datasets with different characteristics and application domains, achieve very good results supported by statistical significance.
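One plausible instantiation of such an operator (a hedged sketch, not the paper's exact definition) adds an exponential local term computed in the feature space of the input kernel, using the kernel-induced distance ||phi(x) - phi(y)||^2 = K(x,x) + K(y,y) - 2K(x,y):

```python
import numpy as np

def quasi_local(K, sigma=1.0, gamma=1.0):
    """Turn a kernel matrix K into a quasi-local kernel by adding an RBF
    term in the feature space of K. sigma controls the width of the local
    influence; gamma balances the local component against the input kernel.
    The result is a sum of two kernels, hence still a valid kernel."""
    diag = np.diag(K)
    # Squared feature-space distance: K(x,x) + K(y,y) - 2K(x,y)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    return K + gamma * np.exp(-d2 / (2.0 * sigma**2))

# Linear kernel on toy points: the operator boosts entries for pairs that
# are close in feature space, leaving distant pairs almost unchanged.
X = np.array([[0.0], [0.1], [3.0]])
K = X @ X.T
Kq = quasi_local(K, sigma=1.0, gamma=1.0)
assert np.allclose(Kq, Kq.T)
assert (Kq[0, 1] - K[0, 1]) > (Kq[0, 2] - K[0, 2])
```

The two parameters sigma and gamma play the roles described in the abstract: locality width and balance between the local and input components.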