Better subset regression
To find efficient screening methods for high dimensional linear regression
models, this paper studies the relationship between model fitting and screening
performance. Under a sparsity assumption, we show that a subset that includes
the true submodel always yields smaller residual sum of squares (i.e., has
better model fitting) than any subset that does not, in a general asymptotic
setting.
This indicates that, for screening important variables, we could follow a
"better fitting, better screening" rule, i.e., pick a "better" subset that has
better model fitting. To seek such a better subset, we consider the
optimization problem associated with best subset regression. An EM algorithm,
called orthogonalizing subset screening, and an accelerated version of it are
proposed to search for the best subset. Although the two algorithms cannot
guarantee that the subset they yield is the best, their monotonicity property
ensures that this subset fits better than the initial subsets generated by
popular screening methods, and it can thus achieve better screening
performance asymptotically. Simulation results show that our methods are very
competitive in high dimensional variable screening even for finite sample
sizes.
Comment: 24 pages, 1 figure
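The "better fitting, better screening" rule can be illustrated with a small sketch. This is a hypothetical toy example, not the paper's EM algorithm: it only compares the residual sum of squares (RSS) of an equal-sized subset that covers the true sparse submodel against one that misses an active variable, using ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional design: n = 50 samples, p = 200 predictors,
# only the first 3 predictors are truly active (sparsity assumption).
n, p = 50, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(n)

def rss(cols):
    """Residual sum of squares of the least-squares fit on the given columns."""
    coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    resid = y - X[:, cols] @ coef
    return float(resid @ resid)

# A subset that contains the true submodel {0, 1, 2} ...
covering = [0, 1, 2, 10, 11]
# ... versus an equal-sized subset that omits the active variable 2.
missing = [0, 1, 10, 11, 12]

# The covering subset fits better: rss(covering) < rss(missing).
```

The covering subset's residual is essentially pure noise, while the other subset's residual retains the signal of the omitted variable, so the RSS comparison ranks the covering subset first.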
Computing an Approximately Optimal Agreeable Set of Items
We study the problem of finding a small subset of items that is
\emph{agreeable} to all agents, meaning that all agents value the subset at
least as much as its complement. Previous work has shown worst-case bounds,
over all instances with a given number of agents and items, on the number of
items that may need to be included in such a subset. Our goal in this paper is
to efficiently compute an agreeable subset whose size approximates the size of
the smallest agreeable subset for a given instance. We consider three
well-known models for representing the preferences of the agents: ordinal
preferences on single items, the value oracle model, and additive utilities. In
each of these models, we establish virtually tight bounds on the approximation
ratio that can be obtained by algorithms running in polynomial time.
Comment: A preliminary version appeared in Proceedings of the 26th
International Joint Conference on Artificial Intelligence (IJCAI), 2017
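In the additive-utilities model, the defining condition is easy to state in code: a subset S is agreeable if every agent's total value for S is at least their total value for its complement. The sketch below checks this condition and finds the smallest agreeable subset by brute force; the `utilities` matrix is made up for illustration, and the exhaustive search is exponential, unlike the paper's polynomial-time approximation algorithms.

```python
from itertools import combinations

# Hypothetical additive utilities: one row per agent, one column per item.
utilities = [
    [5, 3, 2, 1, 1],  # agent 0
    [1, 4, 4, 2, 1],  # agent 1
]

def is_agreeable(subset, utilities):
    """True if every agent values `subset` at least as much as its complement."""
    n_items = len(utilities[0])
    complement = [j for j in range(n_items) if j not in subset]
    return all(
        sum(u[j] for j in subset) >= sum(u[j] for j in complement)
        for u in utilities
    )

def smallest_agreeable(utilities):
    """Brute-force smallest agreeable subset (exponential; illustration only)."""
    n_items = len(utilities[0])
    for k in range(n_items + 1):
        for subset in combinations(range(n_items), k):
            if is_agreeable(subset, utilities):
                return list(subset)
```

For the toy matrix above, no single item or pair of items satisfies both agents, so the smallest agreeable subset has three of the five items.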
Generalized Calogero models through reductions by discrete symmetries
We construct generalizations of the Calogero-Sutherland-Moser system by
appropriately reducing a classical Calogero model by a subset of its discrete
symmetries. Such reductions reproduce all known variants of these systems,
including some recently obtained generalizations of the spin-Sutherland model,
and lead to further generalizations of the elliptic model involving spins with
SU(n) non-invariant couplings.
Comment: 14 pages, LaTeX, no figure
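For context, the classical rational Calogero model that serves as the starting point for such reductions has the standard Hamiltonian (this is the textbook form, not a formula specific to this paper):
\[
H \;=\; \frac{1}{2}\sum_{i=1}^{N} p_i^2 \;+\; \sum_{i<j} \frac{g^2}{(x_i - x_j)^2},
\]
with the Sutherland variant obtained by replacing the rational pair potential with $g^2/\sin^2(x_i - x_j)$.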