Notes on Regularized Least Squares
This is a collection of information about regularized least squares (RLS). The facts here are not new results, but we have not seen them usefully collected together before. A key goal of this work is to demonstrate that with RLS, we get certain things for free: if we can solve a single supervised RLS problem, we can search for a good regularization parameter lambda at essentially no additional cost.

The discussion in this paper applies to dense regularized least squares, where we work with matrix factorizations of the data or kernel matrix. It is also possible to work with iterative methods such as conjugate gradient; this is frequently the method of choice for large data sets in high dimensions with very few nonzero dimensions per point, such as text classification tasks. The results discussed here do not apply to iterative methods, which have different design tradeoffs.

We present the results in greater detail than strictly necessary, erring on the side of showing our work. We hope that this will be useful to people trying to learn more about linear algebra manipulations in the machine learning context.
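The "free" lambda search described in this abstract can be sketched as follows: after a single eigendecomposition of the kernel matrix, the RLS coefficients for any regularization parameter cost only a matrix-vector product each. This is a minimal illustration under assumed data (random X and y, linear kernel), not the paper's own code.

```python
import numpy as np

# Minimal sketch of cheap lambda search in dense RLS: factor the kernel
# matrix once, then reuse the factorization for every lambda.
rng = np.random.default_rng(0)
n = 50
X = rng.standard_normal((n, 3))   # illustrative data
y = rng.standard_normal(n)

K = X @ X.T                       # linear kernel matrix (n x n)
eigvals, Q = np.linalg.eigh(K)    # one O(n^3) factorization, done once

def rls_coeffs(lam):
    # c = (K + lam*I)^{-1} y, via the cached eigendecomposition:
    # only O(n^2) work per lambda.
    return Q @ ((Q.T @ y) / (eigvals + lam))

for lam in [1e-2, 1e-1, 1e1]:
    c = rls_coeffs(lam)
    # sanity check against a direct solve for this lambda
    direct = np.linalg.solve(K + lam * np.eye(n), y)
    assert np.allclose(c, direct)
```

The same cached factorization also yields leave-one-out errors cheaply, which is what makes sweeping over many lambda values essentially free compared to a single solve.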
ACA Implementation Monitoring and Tracking: New York Site Visit Report
Examines New York's progress in implementing the 2010 federal healthcare reform, including an executive order to establish a health insurance exchange, legislation to enact insurance reforms, and the debate over implementing the Basic Health Program.
Puromycin Sensitivity of Ribosomal Label after Incorporation of 14C-Labelled Amino Acids into Isolated Mitochondria from Neurospora crassa
Radioactive amino acids were incorporated into isolated mitochondria from Neurospora crassa. The mitochondrial ribosomes were then isolated and subjected to density gradient centrifugation. A preferential labelling of polysomes was observed. However, when the mitochondrial suspension was treated with puromycin after amino acid incorporation, no radioactivity could be detected in either the monosomes or the polysomes. The conclusion is drawn that isolated mitochondria under these conditions do not incorporate significant amounts of amino acids into proteins of their ribosomes.
A Unifying View of Multiple Kernel Learning
Recent research on multiple kernel learning has led to a number of
approaches for combining kernels in regularized risk minimization. The proposed
approaches include different formulations of objectives and varying
regularization strategies. In this paper we present a unifying general
optimization criterion for multiple kernel learning and show how existing
formulations are subsumed as special cases. We also derive the criterion's dual
representation, which is suitable for general smooth optimization algorithms.
Finally, we evaluate multiple kernel learning in this framework analytically
using a Rademacher complexity bound on the generalization error and empirically
in a set of experiments.
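The basic object this abstract unifies is a weighted combination of base kernels inside a regularized risk objective. The sketch below only illustrates that combination with hand-fixed weights (the paper optimizes them); the data and kernel choices are illustrative assumptions.

```python
import numpy as np

# A convex combination K = sum_m beta_m * K_m of base kernels, the
# shared ingredient of the multiple kernel learning formulations.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 5))  # illustrative data

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

base = [linear_kernel(X, X), rbf_kernel(X, X)]
beta = np.array([0.3, 0.7])       # nonnegative weights summing to 1
K = sum(b * Km for b, Km in zip(beta, base))

# a nonnegative combination of PSD kernels is still a valid PSD kernel
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-8
```

The different formulations surveyed in the paper then differ mainly in how the weights beta are regularized and optimized, which is what the unifying criterion makes explicit.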
Premise Selection for Mathematics by Corpus Analysis and Kernel Methods
Smart premise selection is essential when using automated reasoning as a tool
for large-theory formal proof development. A good method for premise selection
in complex mathematical libraries is the application of machine learning to
large corpora of proofs. This work develops learning-based premise selection in
two ways. First, a newly available minimal dependency analysis of existing
high-level formal mathematical proofs is used to build a large knowledge base
of proof dependencies, providing precise data for ATP-based re-verification and
for training premise selection algorithms. Second, a new machine learning
algorithm for premise selection based on kernel methods is proposed and
implemented. To evaluate the impact of both techniques, a benchmark consisting
of 2078 large-theory mathematical problems is constructed, extending the older
MPTP Challenge benchmark. The combined effect of the techniques results in a
50% improvement on the benchmark over the Vampire/SInE state-of-the-art system
for automated reasoning in large theories.
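The data flow of kernel-based premise selection can be sketched in miniature: represent each statement by its symbol occurrences and rank candidate premises by a kernel similarity to the conjecture. The premise names, symbol sets, and the simple intersection kernel below are all illustrative assumptions; the actual system learns from the proof-dependency knowledge base.

```python
# Toy premise ranking: symbol-occurrence features plus a set-overlap
# kernel. All names are hypothetical, for illustration only.
premises = {
    "add_comm":  {"+", "=", "nat"},
    "mul_assoc": {"*", "=", "nat"},
    "le_trans":  {"<=", "nat"},
}
conjecture = {"+", "*", "=", "nat"}

def overlap_kernel(a, b):
    # similarity = number of shared symbols
    return len(a & b)

ranked = sorted(premises,
                key=lambda p: overlap_kernel(premises[p], conjecture),
                reverse=True)
print(ranked)  # candidate premises, most similar to the conjecture first
```

A trained kernel method replaces this fixed similarity with one learned from which premises actually appeared in the minimal dependencies of existing proofs.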
The Healthy Human Blood Microbiome: Fact or Fiction?
The blood that flows perpetually through our veins and arteries performs numerous functions essential to our survival. Besides distributing oxygen, this vast circulatory system facilitates nutrient transport, deters infection and dispenses heat throughout our bodies. Since human blood has traditionally been considered to be an entirely sterile environment, comprising only blood cells, platelets and plasma, the detection of microbes in blood was consistently interpreted as an indication of infection. However, although a contentious concept, evidence for the existence of a healthy human blood microbiome is steadily accumulating. While the origins, identities and functions of these unanticipated micro-organisms remain to be elucidated, information on blood-borne microbial phylogeny is gradually increasing. Given recent advances in microbial hematology, we review current literature concerning the composition and origin of the human blood microbiome, focusing on bacteria and their role in the configuration of both the diseased and healthy human blood microbiomes. Specifically, we explore the ways in which dysbiosis in the supposedly innocuous blood-borne bacterial microbiome may stimulate pathogenesis. In addition to exploring the relationship between blood-borne bacteria and the development of complex disorders, we also address the matter of contamination, citing the influence of contaminants on the interpretation of blood-derived microbial datasets and urging the routine analysis of laboratory controls to ascertain the taxonomic and metabolic characteristics of environmentally derived contaminant taxa.
Inhibition in multiclass classification
The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions,
that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a
classification function that embodies unselective inhibition and train it in the large margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform better on larger areas of appropriate hyperparameters when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound to probabilistic exponential models and is Bayes consistent for 3-class problems.
These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant comparative advantage over other multiclass SVM approaches.
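The mechanism of unselective inhibition described above can be illustrated with a toy score transformation: each output is depressed in proportion to the total activation of its competitors, without any knowledge of which class is correct. This sketch is only an illustration of the mechanism; the paper's actual large-margin formulation is different.

```python
import numpy as np

# Toy unselective inhibition among classifier output scores: every
# output is depressed by the combined activation of the others.
def inhibit(scores, strength=0.5):
    scores = np.asarray(scores, dtype=float)
    others = scores.sum() - scores     # activation of competing outputs
    return scores - strength * others

raw = np.array([2.0, 1.0, 0.5])
out = inhibit(raw)
# the most active output still wins, but the margins between
# outputs change, which is where the claimed robustness enters
assert out.argmax() == raw.argmax()
```

Training such a function in the large-margin framework, as the abstract describes, then shapes these competitive margins rather than each output independently.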