Report on John Collier and His American Indian Policies, July 1947
This document, dated July 1947 and composed by Individuals Opposed to Exploitation of Indians, characterizes the policies and activities of former Commissioner of the United States (US) Bureau of Indian Affairs John Collier as impractical, iniquitous, and communistic, and claims they have led to "FACTIONALISM, DISSENTION, ENMITY AND HATRED" (emphasis in original) among the tribes.
The document describes Collier as a "Self-established [...] 'GREAT MESSIAH'" of minority groups, and lists six actions taken by Collier during his tenure as Commissioner of the US Indian Bureau, including the Wheeler-Howard Bill, the Inter-American Indian Institute, the National Indian Institute, and inserting Resolution No. 10 into the official minutes of the convention of the National Congress of American Indians in Denver, Colorado, in 1944, urging the US Congress to appropriate public funds for the expenses of the National Indian Institute.
The report also names D'Arcy McNickle and Ruth Muskrat Bronson as two individuals working for Collier who failed to support bills by indigenous delegates and instead asked for support for the US Bureau of Indian Affairs. The report concludes by stating that Collier has schemed to unite the indigenous peoples with the Bureau in a way that is not compatible with the best interests of those indigenous peoples.
Stochastic Discriminative EM
Stochastic discriminative EM (sdEM) is an online-EM-type algorithm for
discriminative training of probabilistic generative models belonging to the
exponential family. In this work, we introduce and justify this algorithm as a
stochastic natural gradient descent method, i.e. a method which accounts for
the information geometry in the parameter space of the statistical model. We
show how this learning algorithm can be used to train probabilistic generative
models by minimizing different discriminative loss functions, such as the
negative conditional log-likelihood and the hinge loss. The resulting models
trained by sdEM are always generative (i.e., they define a joint probability
distribution) and can consequently handle missing data and latent variables in
a principled way, both during learning and when making predictions.
predictions. The performance of this method is illustrated by several text
classification problems for which a multinomial naive Bayes and a latent
Dirichlet allocation based classifier are learned using different
discriminative loss functions.
Comment: UAI 2014 paper + supplementary material. In Proceedings of the
Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI 2014),
edited by Nevin L. Zhang and Jian Tian. AUAI Press.
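As a toy illustration of the discriminative-training idea in this abstract: in the log-linear (exponential-family) parameterization of multinomial naive Bayes, p(y|x) is a softmax over class scores, so the negative conditional log-likelihood can be minimized by stochastic gradient steps. The sketch below uses a plain stochastic gradient rather than the natural gradient that sdEM is built on; all names and data are illustrative, not the paper's code.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def sgd_discriminative_nb(X, y, n_classes, lr=0.1, epochs=20, seed=0):
    # Multinomial naive Bayes in log-linear form: p(y|x) is a softmax
    # over class scores b[y] + x . W[y]. Minimizing the negative
    # conditional log-likelihood by SGD trains the generative
    # parameters discriminatively (plain gradient here; sdEM instead
    # takes natural gradient steps using the information geometry).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((n_classes, d))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = softmax(b + W @ X[i])
            p[y[i]] -= 1.0                 # gradient of -log p(y|x)
            W -= lr * np.outer(p, X[i])
            b -= lr * p
    return W, b
```

Predictions then come from argmax over the class scores b + W @ x.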
Markov Network Structure Learning via Ensemble-of-Forests Models
Real world systems typically feature a variety of different dependency types
and topologies that complicate model selection for probabilistic graphical
models. We introduce the ensemble-of-forests model, a generalization of the
ensemble-of-trees model. Our model enables structure learning of Markov random
fields (MRF) with multiple connected components and arbitrary potentials. We
present two approximate inference techniques for this model and demonstrate
their performance on synthetic data. Our results suggest that the
ensemble-of-forests approach can accurately recover sparse, possibly
disconnected MRF topologies, even in the presence of non-Gaussian dependencies
and/or low sample size. We applied the ensemble-of-forests model to learn the
structure of perturbed signaling networks of immune cells and found that these
frequently exhibit non-Gaussian dependencies with disconnected MRF topologies.
In summary, we expect that the ensemble-of-forests model will enable MRF
structure learning in other high dimensional real world settings that are
governed by non-trivial dependencies.
Comment: 13 pages, 6 figures.
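The ensemble-of-forests model generalizes the ensemble-of-trees model; the classic tree-structured building block is the Chow-Liu algorithm, which fits a maximum-weight spanning tree over pairwise mutual information. A minimal sketch for discrete data follows (just the underlying tree learner, not the paper's forest inference procedure):

```python
import numpy as np
from itertools import combinations

def chow_liu_tree(X, n_states=2):
    # Chow-Liu: estimate pairwise mutual information from data,
    # then take a maximum-weight spanning tree (Kruskal) over it.
    n, d = X.shape

    def mi(a, b):
        joint = np.zeros((n_states, n_states))
        for i in range(n):
            joint[X[i, a], X[i, b]] += 1
        joint /= n
        pa, pb = joint.sum(1), joint.sum(0)
        m = 0.0
        for u in range(n_states):
            for v in range(n_states):
                if joint[u, v] > 0:
                    m += joint[u, v] * np.log(joint[u, v] / (pa[u] * pb[v]))
        return m

    edges = sorted(((mi(a, b), a, b) for a, b in combinations(range(d), 2)),
                   reverse=True)
    parent = list(range(d))  # union-find for Kruskal

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            tree.append((a, b))
    return tree
```

Stopping Kruskal early (or dropping low-MI edges) yields a forest rather than a tree, which is the kind of disconnected structure the abstract is concerned with.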
Adaptive Monotone Shrinkage for Regression
We develop an adaptive monotone shrinkage estimator for regression models
with the following characteristics: i) dense coefficients with small but
important effects; ii) an a priori ordering that indicates the probable predictive
importance of the features. We capture both properties with an empirical Bayes
estimator that shrinks coefficients monotonically with respect to their
anticipated importance. This estimator can be rapidly computed using a version
of the Pool-Adjacent-Violators algorithm. We show that the proposed monotone
shrinkage approach is competitive with the class of all Bayesian estimators
that share the prior information. We further observe that the estimator also
minimizes Stein's unbiased risk estimate. Along with our key result that the
estimator mimics the oracle Bayes rule under an order assumption, we also prove
that the estimator is robust. Even without the order assumption, our estimator
mimics the best performance of a large family of estimators that includes the
least squares estimator, the constant-λ ridge estimator, the James-Stein
estimator, etc. All the theoretical results are non-asymptotic. Simulation
results and data analysis from a model for text processing are provided to
support the theory.
Comment: Appearing in Uncertainty in Artificial Intelligence (UAI) 2014.
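The Pool-Adjacent-Violators (PAV) algorithm mentioned above computes the monotone (isotonic) least-squares fit of a sequence in linear time by repeatedly pooling adjacent blocks that violate the ordering. A minimal sketch of the standard algorithm (the paper applies a version of it to shrinkage factors, not shown here):

```python
def pav(y, w=None):
    # Pool-Adjacent-Violators: the nondecreasing sequence closest to
    # y in (weighted) least squares. Adjacent blocks that violate
    # monotonicity are merged into their weighted mean.
    n = len(y)
    if w is None:
        w = [1.0] * n
    blocks = []  # each block: (total_weight, mean, length)
    for wi, yi in zip(w, y):
        wt, mean, cnt = float(wi), float(yi), 1
        while blocks and blocks[-1][1] >= mean:
            pw, pm, pc = blocks.pop()
            tw = pw + wt
            mean = (pw * pm + wt * mean) / tw
            wt, cnt = tw, pc + cnt
        blocks.append((wt, mean, cnt))
    out = []
    for wt, mean, cnt in blocks:
        out.extend([mean] * cnt)
    return out
```

With unit weights the fit preserves the sum of the input, and the output is nondecreasing by construction.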
Efficient Bayesian Nonparametric Modelling of Structured Point Processes
This paper presents a Bayesian generative model for dependent Cox point
processes, alongside an efficient inference scheme which scales as if the point
processes were modelled independently. We can handle missing data naturally,
infer latent structure, and cope with large numbers of observed processes. A
further novel contribution enables the model to work effectively in higher
dimensional spaces. Using this method, we achieve vastly improved predictive
performance on both 2D and 1D real data, validating our structured approach.
Comment: Presented at UAI 2014. Bibtex: @inproceedings{structcoxpp14_UAI,
Author = {Tom Gunter and Chris Lloyd and Michael A. Osborne and Stephen J.
Roberts}, Title = {Efficient Bayesian Nonparametric Modelling of Structured
Point Processes}, Booktitle = {Uncertainty in Artificial Intelligence (UAI)},
Year = {2014}}
Matroid Bandits: Fast Combinatorial Optimization with Learning
A matroid is a notion of independence in combinatorial optimization which is
closely related to computational efficiency. In particular, it is well known
that the maximum of a constrained modular function can be found greedily if and
only if the constraints are associated with a matroid. In this paper, we bring
together the ideas of bandits and matroids, and propose a new class of
combinatorial bandits, matroid bandits. The objective in these problems is to
learn how to maximize a modular function on a matroid. This function is
stochastic and initially unknown. We propose a practical algorithm for solving
our problem, Optimistic Matroid Maximization (OMM); and prove two upper bounds,
gap-dependent and gap-free, on its regret. Both bounds are sublinear in time
and at most linear in all other quantities of interest. The gap-dependent upper
bound is tight and we prove a matching lower bound on a partition matroid
bandit. Finally, we evaluate our method on three real-world problems and show
that it is practical.
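The greedy property the abstract invokes can be sketched directly: scan items by decreasing weight and keep each one whose addition preserves independence, which is optimal exactly when the constraint is a matroid. The sketch below (illustrative names, using a partition matroid as the example constraint) shows the offline greedy step that OMM wraps in bandit feedback:

```python
from collections import Counter

def greedy_matroid_max(items, weight, independent):
    # Greedy maximization of a modular (additive) function under a
    # matroid constraint: take items in decreasing weight order,
    # keeping each one that leaves the chosen set independent.
    chosen = []
    for x in sorted(items, key=weight, reverse=True):
        if independent(chosen + [x]):
            chosen.append(x)
    return chosen

def make_partition_indep(group_of, cap):
    # Partition matroid: a set is independent if it contains at
    # most `cap` items from each group.
    def independent(S):
        counts = Counter(group_of[x] for x in S)
        return all(c <= cap for c in counts.values())
    return independent
```

In the bandit setting the weights are unknown, so OMM replaces `weight` with optimistic (upper-confidence) estimates updated from observed feedback.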
Fast Ridge Regression with Randomized Principal Component Analysis and Gradient Descent
We propose a new two-stage algorithm, LING, for large-scale regression
problems. LING has the same risk as the well-known ridge regression under the
fixed design setting and can be computed much faster. Our experiments have
shown that LING performs well in terms of both prediction accuracy and
computational efficiency compared with other large scale regression algorithms
like Gradient Descent, Stochastic Gradient Descent and Principal Component
Regression, on both simulated and real datasets.
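The abstract does not spell out LING's two stages, so the sketch below shows only the principal component regression baseline it is compared against, with an exact SVD standing in for a randomized one. This is a toy reference implementation, not the paper's algorithm:

```python
import numpy as np

def pcr(X, y, k):
    # Principal component regression: center the data, project X
    # onto its top-k right singular vectors, solve least squares in
    # that subspace, and map the coefficients back.
    Xc = X - X.mean(0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                          # top-k principal directions
    Z = Xc @ Vk                            # reduced design matrix
    gamma = np.linalg.lstsq(Z, yc, rcond=None)[0]
    beta = Vk @ gamma                      # back to original features
    intercept = y.mean() - X.mean(0) @ beta
    return beta, intercept
```

With k equal to the full rank this reduces to ordinary least squares; smaller k trades bias for variance, much as ridge regularization does.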
Improved Densification of One Permutation Hashing
The existing work on densification of one permutation hashing reduces the
query processing cost of the (K, L)-parameterized Locality Sensitive Hashing
(LSH) algorithm with minwise hashing, from O(dKL) to merely O(d + KL),
where d is the number of nonzeros of the data vector, K is the number of
hashes in each hash table, and L is the number of hash tables. While that is
a substantial improvement, our analysis reveals that the existing densification
scheme is sub-optimal. In particular, there is not enough randomness in that
procedure, which affects its accuracy on very sparse datasets.
In this paper, we provide a new densification procedure which is provably
better than the existing scheme. This improvement is more significant for very
sparse datasets which are common over the web. The improved technique has the
same cost of O(d + KL) for query processing, thereby making it strictly
preferable over the existing procedure. Experimental evaluations on public
datasets, in the task of hashing based near neighbor search, support our
theoretical findings.
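A rough sketch of one permutation hashing with the rotation-style densification this paper improves on (the "existing scheme"): empty bins borrow the value of the nearest non-empty bin to their right, with an offset per hop so borrowed values stay distinguishable. Bin layout, the offset constant, and function names below are simplified assumptions for illustration, not the paper's exact construction:

```python
import random

def one_perm_hash(nonzeros, D, k, seed=0):
    # One permutation hashing: permute the D coordinates once, split
    # them into k equal bins, and keep the minimum permuted position
    # in each bin (None marks an empty bin). Assumes k divides D.
    rng = random.Random(seed)
    perm = list(range(D))
    rng.shuffle(perm)
    bins = [None] * k
    bin_size = D // k
    for i in nonzeros:
        b, v = divmod(perm[i], bin_size)
        if bins[b] is None or v < bins[b]:
            bins[b] = v
    return bins

def densify(bins, bin_size):
    # Rotation-style densification: fill each empty bin from the
    # nearest non-empty bin to its right (circularly), adding an
    # offset C per hop. Assumes at least one bin is non-empty. The
    # paper's improvement injects extra randomness into this step.
    k = len(bins)
    C = bin_size + 1
    out = list(bins)
    for b in range(k):
        if out[b] is None:
            t, hops = (b + 1) % k, 1
            while bins[t] is None:
                t, hops = (t + 1) % k, hops + 1
            out[b] = bins[t] + hops * C
    return out
```

The densified bins act as k minwise-style hashes from a single permutation, which is where the O(d + KL) query cost comes from.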
Biological Individuals
The impressive variation amongst biological individuals generates many complexities in addressing the simple-sounding question, "What is a biological individual?" A distinction between evolutionary and physiological individuals is useful in thinking about biological individuals, as is attention to the kinds of groups, such as superorganisms and species, that have sometimes been thought of as biological individuals. More fully understanding the conceptual space that biological individuals occupy also involves considering a range of other concepts, such as life, reproduction, and agency. There has been a focus in some recent discussions by both philosophers and biologists on how evolutionary individuals are created and regulated, as well as continuing work on the evolution of individuality.