Analytic Kramer kernels, Lagrange-type interpolation series and de Branges spaces
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. In particular, when the involved kernel is analytic in the sampling parameter, the theorem can be stated in an abstract setting of reproducing kernel Hilbert spaces of entire functions which includes the classical Shannon sampling theory as a particular case. This abstract setting allows us to obtain a sort of converse result and to characterize when the sampling formula associated with an analytic Kramer kernel can be expressed as a Lagrange-type interpolation series. On the other hand, the de Branges spaces of entire functions satisfy orthogonal sampling formulas which can be written as Lagrange-type interpolation series. In this work some links between all these ideas are established.
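As a concrete instance of the abstract setting mentioned above, the classical Shannon case already exhibits the Lagrange-type form of the sampling series; the following is a standard formulation in our own notation (not symbols taken from the paper):

```latex
% Shannon sampling series rewritten as a Lagrange-type interpolation series.
% G is the canonical entire function vanishing at the sample points n \in \mathbb{Z}.
\[
  f(t) \;=\; \sum_{n \in \mathbb{Z}} f(n)\,
      \frac{\sin \pi (t-n)}{\pi (t-n)}
  \;=\; \sum_{n \in \mathbb{Z}} f(n)\,
      \frac{G(t)}{G'(n)\,(t-n)},
  \qquad G(t) := \frac{\sin \pi t}{\pi},
\]
```

valid for every f in the Paley--Wiener space of entire functions of exponential type at most pi that are square integrable on the real line; the second expression is exactly a Lagrange-type interpolation series built from the canonical function G, which is the pattern the abstract characterization generalizes.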
A Survey on the Krein-von Neumann Extension, the corresponding Abstract Buckling Problem, and Weyl-Type Spectral Asymptotics for Perturbed Krein Laplacians in Nonsmooth Domains
In the first (and abstract) part of this survey we prove the unitary
equivalence of the inverse of the Krein--von Neumann extension (on the
orthogonal complement of its kernel) of a densely defined, closed, strictly
positive operator S ≥ εI_H for some ε > 0 in a Hilbert space H to an abstract buckling problem operator.
This establishes the Krein extension as a natural object in elasticity theory
(in analogy to the Friedrichs extension, which found natural applications in
quantum mechanics, elasticity, etc.).
In the second, and principal, part of this survey we study spectral
properties of H_{K,Ω}, the Krein--von Neumann extension of the
perturbed Laplacian −Δ + V (in short, the perturbed Krein Laplacian)
defined on C_0^∞(Ω), where V is measurable, bounded and
nonnegative, in a bounded open set Ω ⊂ ℝ^n belonging to a
class of nonsmooth domains which contains all convex domains, along with all
domains of class C^{1,r}, r > 1/2.
Comment: 68 pages. arXiv admin note: extreme text overlap with arXiv:0907.144
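The abstract buckling problem referred to in the first part can be stated schematically as follows; the notation here is our own, chosen to match the conventions common in this literature rather than copied from the text:

```latex
% Abstract buckling problem associated with a densely defined, closed,
% strictly positive operator S in a Hilbert space (illustrative notation):
\[
  S^{*}S\,u \;=\; \lambda\, S\,u,
  \qquad \lambda > 0, \quad 0 \neq u \in \operatorname{dom}(S^{*}S).
\]
% Model case: taking S = -\Delta on a domain recovers the classical
% buckling problem of a clamped plate, \Delta^{2} u = \lambda\,(-\Delta)\,u.
```

The unitary equivalence asserted in the survey identifies the (reduced) inverse of the Krein--von Neumann extension with the operator governing this eigenvalue problem, which is what makes the Krein extension a natural object in elasticity theory.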
SlimPLS: A Method for Feature Selection in Gene Expression-Based Disease Classification
A major challenge in biomedical studies in recent years has been the classification of gene expression profiles into categories, such as cases and controls. This is done by first training a classifier on a training set containing labeled samples from the two populations, and then using that classifier to predict the labels of new samples. Such predictions have recently been shown to improve the diagnosis and treatment selection practices for several diseases. This procedure is complicated, however, by the high dimensionality of the data. While microarrays can measure the levels of thousands of genes per sample, case-control microarray studies usually involve no more than several dozen samples. Standard classifiers do not work well in these situations, where the number of features (gene expression levels measured in these microarrays) far exceeds the number of samples. Selecting only the features that are most relevant for discriminating between the two categories can help construct better classifiers, in terms of both accuracy and efficiency. In this work we developed a novel method for multivariate feature selection based on the Partial Least Squares algorithm. We compared the method's variants with common feature selection techniques across a large number of real case-control datasets, using several classifiers. We demonstrate the advantages of the method and identify preferable combinations of classifier and feature selection technique.