Systematic Definition of Protein Constituents along the Major Polarization Axis Reveals an Adaptive Reuse of the Polarization Machinery in Pheromone-Treated Budding Yeast
Polarizing cells extensively restructure cellular components in a spatially and temporally coupled manner along the major axis of cellular extension. Budding yeast are a useful model of polarized growth, helping to define many molecular components of this conserved process. Besides budding, yeast cells also differentiate upon treatment with pheromone from the opposite mating type, forming a mating projection (the ‘shmoo’) through directional restructuring of the cytoskeleton, localized vesicular transport, and overall reorganization of the cytosol. To characterize the proteomic localization changes accompanying polarized growth, we developed and implemented a novel cell microarray-based imaging assay for measuring the spatial redistribution of a large fraction of the yeast proteome, and applied this assay to identify proteins localized along the mating projection following pheromone treatment. We further trained a machine learning algorithm to refine the cell imaging screen, identifying additional shmoo-localized proteins. In all, we identified 74 proteins that specifically localize to the mating projection, including previously uncharacterized proteins (Ycr043c, Ydr348c, Yer071c, Ymr295c, and Yor304c-a) and known polarization complexes such as the exocyst. Functional analysis of these proteins, coupled with quantitative analysis of individual organelle movements during shmoo formation, suggests a model in which the basic machinery for cell polarization is generally conserved between processes.
Random Feature Maps via a Layered Random Projection (LaRP) Framework for Object Classification
The approximation of nonlinear kernels via linear feature maps has recently
gained interest due to their applications in reducing the training and testing
time of kernel-based learning algorithms. Current random projection methods
avoid the curse of dimensionality by embedding the nonlinear feature space into
a low dimensional Euclidean space to create nonlinear kernels. We introduce a
Layered Random Projection (LaRP) framework, where we model the linear kernels
and nonlinearity separately for increased training efficiency. The proposed
LaRP framework was assessed using the MNIST hand-written digits database and
the COIL-100 object database, and showed notable improvement in object
classification performance relative to other state-of-the-art random projection
methods.
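The abstract does not spell out LaRP's construction, so as a point of reference here is a minimal sketch of the kind of linear feature map such methods build on: random Fourier features approximating an RBF kernel. The function name, parameters, and data are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features, gamma=1.0):
    """Linear feature map z(x) whose inner products approximate the
    RBF kernel: z(x).z(y) ~ exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density, plus random phases.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# A linear model trained on Z then behaves like a kernel method on X,
# which is the training/testing speed-up the abstract refers to.
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=20000)
approx = Z @ Z.T                                            # approximate kernel matrix
exact = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # exact RBF kernel
```

The approximation error shrinks as the number of random features grows, which is the trade-off a layered scheme like LaRP tries to improve on.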
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a
variety of models including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing
dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that the above variety of approaches can all be
unified into a single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
are strongly constrained by the input statistics and preprocessing, but
exhibit only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities such as auditory models
or V2 development leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
ELM regime classification by conformal prediction on an information manifold
Characterization and control of plasma instabilities known as edge-localized modes (ELMs) is crucial for the operation of fusion reactors. Recently, machine learning methods have demonstrated good potential in making useful inferences from stochastic fusion data sets. However, traditional classification methods do not offer an inherent estimate of the goodness of their prediction. In this paper, a distance-based conformal predictor classifier integrated with a geometric-probabilistic framework is presented. The first benefit of the approach lies in its comprehensive treatment of highly stochastic fusion data sets, by modeling the measurements with probability distributions in a metric space. This enables calculation of a natural distance measure between probability distributions: the Rao geodesic distance. Second, the predictions are accompanied by estimates of their accuracy and reliability. The method is applied to the classification of regimes characterized by different types of ELMs based on the measurements of global parameters and their error bars. This yields promising success rates and outperforms state-of-the-art automatic techniques for recognizing ELM signatures. The estimates of goodness of the predictions increase the confidence of classification by ELM experts, while allowing more reliable decisions regarding plasma control and at the same time increasing the robustness of the control system
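The paper's classifier works with the Rao geodesic distance between probability distributions; as an illustrative stand-in, the sketch below implements a generic distance-based conformal classifier with Euclidean distance. The nearest-neighbour nonconformity score, function names, and data are assumptions, not the paper's exact design. Each candidate label receives a p-value, which is the kind of built-in goodness-of-prediction estimate that traditional classifiers lack.

```python
import numpy as np

rng = np.random.default_rng(2)

def nonconformity(x, X, y, label):
    """Distance to the nearest same-label point divided by the distance to
    the nearest other-label point; small means x is typical for `label`."""
    same = np.linalg.norm(X[y == label] - x, axis=1).min()
    other = np.linalg.norm(X[y != label] - x, axis=1).min()
    return same / other

def conformal_pvalues(x_new, X_cal, y_cal, labels):
    """One p-value per candidate label, from leave-one-out calibration ranks."""
    a_cal = np.array([
        nonconformity(X_cal[i], np.delete(X_cal, i, 0),
                      np.delete(y_cal, i), y_cal[i])
        for i in range(len(X_cal))
    ])
    pvals = {}
    for lab in labels:
        a_new = nonconformity(x_new, X_cal, y_cal, lab)
        # Fraction of calibration points at least as "strange" as x_new.
        pvals[lab] = (np.sum(a_cal >= a_new) + 1) / (len(a_cal) + 1)
    return pvals

# Two well-separated classes; a point near class 0 should get a high
# p-value for label 0 and a very low one for label 1.
X_cal = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                   rng.normal(4.0, 0.5, size=(50, 2))])
y_cal = np.array([0] * 50 + [1] * 50)
p = conformal_pvalues(np.array([0.1, -0.2]), X_cal, y_cal, labels=[0, 1])
```

A low p-value for every label flags the prediction as unreliable, which is how such estimates can feed into a more robust control decision.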
Representing complex data using localized principal components with application to astronomical data
Often the relation between the variables constituting a multivariate data
space might be characterized by one or more of the terms: ``nonlinear'',
``branched'', ``disconnected'', ``bended'', ``curved'', ``heterogeneous'', or,
more general, ``complex''. In these cases, simple principal component analysis
(PCA) as a tool for dimension reduction can fail badly. Of the many alternative
approaches proposed so far, local approximations of PCA are among the most
promising. This paper will give a short review of localized versions of PCA,
focusing on local principal curves and local partitioning algorithms.
Furthermore we discuss projections other than the local principal components.
When performing local dimension reduction for regression or classification
problems it is important to focus not only on the manifold structure of the
covariates, but also on the response variable(s). Local principal components
only achieve the former, whereas localized regression approaches concentrate on
the latter. Local projection directions derived from the partial least squares
(PLS) algorithm offer an interesting trade-off between these two objectives. We
apply these methods to several real data sets. In particular, we consider
simulated astrophysical data from the future Galactic survey mission Gaia. In
"Principal Manifolds for Data Visualization and Dimension Reduction",
A. Gorban, B. Kegl, D. Wunsch, and A. Zinovyev (eds), Lecture Notes in
Computational Science and Engineering, Springer, 2007, pp. 180-204.
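As a minimal illustration of the partitioning flavour of localized PCA discussed here, one can split the data k-means-style and fit one principal direction per partition; the clustering scheme, function names, and toy data below are simplified assumptions, not the chapter's algorithms. On branched or bended data this beats a single global PCA line.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_pca(X, n_clusters=2, n_iter=25):
    """Partition with Lloyd-style k-means, then fit one leading
    principal direction per partition (a crude localized PCA)."""
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(1)
        centers = np.array([X[labels == k].mean(0) for k in range(n_clusters)])
    directions = []
    for k in range(n_clusters):
        Xk = X[labels == k] - centers[k]
        _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
        directions.append(Vt[0])  # leading local principal component
    return labels, centers, np.array(directions)

def recon_error(X, labels, centers, directions):
    """Mean squared residual after projecting each point onto its
    cluster's local one-dimensional principal line."""
    err = 0.0
    for k, (c, v) in enumerate(zip(centers, directions)):
        Xk = X[labels == k] - c
        err += np.sum((Xk - np.outer(Xk @ v, v)) ** 2)
    return err / len(X)

# V-shaped data: two line segments that one global PCA line cannot
# follow, but two local principal components can.
t = np.linspace(0.0, 1.0, 200)
X = np.vstack([np.column_stack([t, t]), np.column_stack([t, -t])])
labels, centers, directions = local_pca(X, n_clusters=2)
```

Local principal curves and the PLS-based projections the paper reviews refine this idea; the sketch only captures the partitioning step.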
Fast Selection of Spectral Variables with B-Spline Compression
The large number of spectral variables in most data sets encountered in
spectral chemometrics often makes predicting a dependent variable difficult.
The number of variables can often be reduced, using either
projection techniques or selection methods; the latter allow for the
interpretation of the selected variables. Since the optimal approach of testing
all possible subsets of variables with the prediction model is intractable, an
incremental selection approach using a nonparametric statistic is a good
option, as it avoids the computationally intensive use of the model itself. It
has two drawbacks, however: the number of groups of variables to test is still
huge, and collinearities can make the results unstable. To overcome these
limitations, this paper presents a method to select groups of spectral
variables. It consists of a forward-backward procedure applied to the
coefficients of a B-spline representation of the spectra. The criterion used in
the forward-backward procedure is the mutual information, which can capture
nonlinear dependencies between variables, unlike the commonly used
correlation. The spline representation makes the results interpretable,
as groups of consecutive spectral variables are selected. The
experiments conducted on NIR spectra from fescue grass and diesel fuels show
that the method provides clearly identified groups of selected variables,
making interpretation easy, while keeping a low computational load. The
prediction performances obtained using the selected coefficients are higher
than those obtained by the same method applied directly to the original
variables, and similar to those obtained using traditional models, while
using significantly fewer spectral variables.
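The forward step of such a procedure can be sketched as follows. For brevity the B-spline compression is replaced by block-averaging of adjacent channels (effectively a degree-0 spline), and the histogram MI estimator, variable names, and synthetic data are illustrative assumptions rather than the paper's exact method; the shared idea is scoring groups of consecutive channels by mutual information with the target.

```python
import numpy as np

rng = np.random.default_rng(4)

def hist_mi(x, y, bins=8):
    """Plug-in mutual information estimate from a 2-D histogram (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def forward_select(C, y, n_keep=2):
    """Greedy forward selection of compressed coefficients by MI with y."""
    chosen = []
    for _ in range(n_keep):
        remaining = [j for j in range(C.shape[1]) if j not in chosen]
        scores = [hist_mi(C[:, j], y) for j in remaining]
        chosen.append(remaining[int(np.argmax(scores))])
    return chosen

# Synthetic "spectra": 200 samples x 100 channels; the target depends on
# the average of channels 40-49, i.e. on a single compressed coefficient.
X = rng.normal(size=(200, 100))
y = X[:, 40:50].mean(1) + 0.1 * rng.normal(size=200)
C = X.reshape(200, 10, 10).mean(axis=2)  # block-average compression: 10 coefficients
chosen = forward_select(C, y, n_keep=2)  # should pick coefficient 4 first
```

Because each coefficient stands for a run of consecutive channels, a selected coefficient points directly at an interpretable spectral region, which is the property the B-spline representation is chosen for.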