Benefits of Alaska Native Corporations and the SBA 8(a) Program to Alaska Natives and Alaska
Senator Begich’s office asked ISER for assistance in assembling information to document
the social and economic status of Alaska Natives and the benefits of the 8(a) program.
The purpose is to brief Missouri Senator McCaskill and her committee, which is reviewing
the status of ANC contracts awarded under SBA’s 8(a) program. This review was
triggered by a 2006 GAO report recommending increased SBA oversight of 8(a)
contracting activity. Highlights of the GAO report are provided in Tab A.1; a letter dated
May 15, 2009, from Senators Begich and Murkowski to Senator McCaskill, outlining
their concerns, is provided in Tab A.2.
As the Congressional Research Service report (Tab A.3) explains, the Small Business
Administration’s 8(a) program targeting socially and economically disadvantaged
individuals was operating under executive authority from about 1970, and under statutory
authority starting in 1978. A series of amendments from 1986 to 1992 recognized Alaska
Native Corporations (ANCs) as socially and economically disadvantaged for purposes of
program eligibility, exempted them from limitations on the number of qualifying
subsidiaries, from some restrictions on size and minimum time in business, and from the
ceiling on amounts for sole-source contracts. Between 1988 and 2005, the number of 8(a)
qualified ANC subsidiaries grew from one to 154 subsidiaries owned by 49 ANCs. The
dollar amount of 8(a) contracts to ANCs grew substantially, reaching $1.1 billion in
2004, approximately 80 percent of which was in sole-source contracts. (GAO
Highlights, Tab A.1)
The remainder of this briefing book is divided into three sections. Section 2 addresses
changes in the social and economic status of Alaska Natives from 1970--the year before
the enactment of the Alaska Native Claims Settlement Act and the subsequent creation of
the ANCs--to the present. ISER’s report on the “Status of Alaska Natives 2004” (Tab
B.1) finds that despite substantial improvements in social and economic conditions
among Alaska Natives, they still lag well behind other Alaskans in employment, income,
education, health status and living conditions. A collection of more recent analyses
updates the social and economic indicators to 2008. There were many concurrent changes
throughout this dynamic period of Alaska’s history, and we cannot attribute all the
improvements to the ANCs, though it is clear that they have played an important catalytic role. In
the final part of section 2 we attempt to provide some historical context for understanding
the role ANCs have played in improving the well-being of Alaska Natives.
Section 3 documents the growth in ANCs and their contributions to Alaska Native
employment, income, social and cultural programs and wellbeing, and their major
contributions to the Alaska economy and society overall.
Section 4 looks specifically at the 8(a) program. Although there are a handful of 8(a)
firms with large federal contracts, the majority are small, village-based corporations
engaged in enterprise development in very challenging conditions. A collection of six
case studies illustrates the barriers to business development these small firms face and the
critical leverage that 8(a) contracting offers them.
Lipschitz Adaptivity with Multiple Learning Rates in Online Learning
We aim to design adaptive online learning algorithms that take advantage of
any special structure that might be present in the learning task at hand, with
as little manual tuning by the user as possible. A fundamental obstacle that
comes up in the design of such adaptive algorithms is to calibrate a so-called
step-size or learning rate hyperparameter depending on variance, gradient
norms, etc. A recent technique promises to overcome this difficulty by
maintaining multiple learning rates in parallel. This technique has been
applied in the MetaGrad algorithm for online convex optimization and the Squint
algorithm for prediction with expert advice. However, in both cases the user
still has to provide in advance a Lipschitz hyperparameter that bounds the norm
of the gradients. Although this hyperparameter is typically not available in
advance, tuning it correctly is crucial: if it is set too small, the methods
may fail completely; but if it is taken too large, performance deteriorates
significantly. In the present work we remove this Lipschitz hyperparameter by
designing new versions of MetaGrad and Squint that adapt to its optimal value
automatically. We achieve this by dynamically updating the set of active
learning rates. For MetaGrad, we further improve the computational efficiency
of handling constraints on the domain of prediction, and we remove the need to
specify the number of rounds in advance.
Comment: 22 pages. To appear in COLT 2019.
Dynamic Ad Allocation: Bandits with Budgets
We consider an application of multi-armed bandits to internet advertising
(specifically, to dynamic ad allocation in the pay-per-click model, with
uncertainty on the click probabilities). We focus on an important practical
issue: advertisers are constrained in how much money they can spend on
their ad campaigns. This issue has not been considered in the prior work on
bandit-based approaches for ad allocation, to the best of our knowledge.
We define a simple, stylized model where an algorithm picks one ad to display
in each round, and each ad has a \emph{budget}: the maximal amount of money
that can be spent on this ad. This model admits a natural variant of UCB1, a
well-known algorithm for multi-armed bandits with stochastic rewards. We derive
strong provable guarantees for this algorithm.
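To make the model concrete, here is a minimal sketch of a budget-aware UCB1 in Python. This is an illustration of the setup described in the abstract, not the paper's algorithm or analysis: the pay-per-click accounting, the availability rule, and all names are assumptions made for the example.

```python
import math
import random

def budgeted_ucb1(click_probs, cost_per_click, budgets, n_rounds, seed=0):
    """UCB1 variant for pay-per-click ad allocation: each ad has a budget,
    and an ad leaves the candidate set once it can no longer afford a click."""
    rng = random.Random(seed)
    k = len(click_probs)
    pulls = [0] * k        # times each ad was displayed
    clicks = [0] * k       # clicks observed per ad
    spent = [0.0] * k      # money charged per ad so far
    revenue = 0.0
    for t in range(1, n_rounds + 1):
        # ads whose remaining budget covers one more click
        avail = [i for i in range(k) if spent[i] + cost_per_click[i] <= budgets[i]]
        if not avail:
            break
        untried = [i for i in avail if pulls[i] == 0]
        if untried:
            i = untried[0]  # display every ad once before trusting estimates
        else:
            # UCB1 index: empirical click-through rate + confidence radius
            i = max(avail, key=lambda j: clicks[j] / pulls[j]
                    + math.sqrt(2 * math.log(t) / pulls[j]))
        pulls[i] += 1
        if rng.random() < click_probs[i]:
            # pay-per-click: the advertiser is charged only when a click occurs
            clicks[i] += 1
            spent[i] += cost_per_click[i]
            revenue += cost_per_click[i]
    return revenue, spent
```

The only change from plain UCB1 is the `avail` filter: an ad is removed from the candidate set as soon as its budget cannot cover another click, which is the budget constraint the abstract introduces.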
Second-order Quantile Methods for Experts and Combinatorial Games
We aim to design strategies for sequential decision making that adjust to the
difficulty of the learning problem. We study this question both in the setting
of prediction with expert advice, and for more general combinatorial decision
tasks. We are not satisfied with just guaranteeing minimax regret rates, but we
want our algorithms to perform significantly better on easy data. Two popular
ways to formalize such adaptivity are second-order regret bounds and quantile
bounds. The underlying notions of 'easy data', which may be paraphrased as "the
learning problem has small variance" and "multiple decisions are useful", are
synergetic. But even though there are sophisticated algorithms that exploit one
of the two, no existing algorithm is able to adapt to both.
In this paper we outline a new method for obtaining such adaptive algorithms,
based on a potential function that aggregates a range of learning rates (which
are essential tuning parameters). By choosing the right prior we construct
efficient algorithms and show that they reap both benefits by proving the first
bounds that are both second-order and incorporate quantiles.
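The idea of a potential that aggregates a range of learning rates can be sketched for the expert setting. The following is an illustrative Squint-style variant using a uniform prior over a small discrete grid of learning rates; the grid, the prior, and the function names are simplifying assumptions for the example, not the paper's exact construction.

```python
import math

def squint_weights(R, V, etas):
    """Expert weights from a potential that averages over a discrete grid of
    learning rates. R[k] is expert k's cumulative instantaneous regret,
    V[k] the cumulative squared instantaneous regret."""
    K = len(R)
    w = [sum(eta * math.exp(eta * R[k] - eta ** 2 * V[k]) for eta in etas)
         for k in range(K)]
    total = sum(w)
    return [wk / total for wk in w]

def run_squint(loss_matrix, etas):
    """Prediction with expert advice on losses in [0, 1]."""
    K = len(loss_matrix[0])
    R = [0.0] * K
    V = [0.0] * K
    total_loss = 0.0
    for losses in loss_matrix:
        p = squint_weights(R, V, etas)
        alg_loss = sum(pi * li for pi, li in zip(p, losses))
        total_loss += alg_loss
        for k in range(K):
            r = alg_loss - losses[k]  # instantaneous regret w.r.t. expert k
            R[k] += r
            V[k] += r * r
    return total_loss, squint_weights(R, V, etas)
```

Because each learning rate contributes a term `eta * exp(eta * R - eta**2 * V)` to the potential, no single step size has to be tuned in advance: whichever rate best fits the realized regret and variance dominates the average.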
Fighting Bandits with a New Kind of Smoothness
We define a novel family of algorithms for the adversarial multi-armed bandit
problem, and provide a simple analysis technique based on convex smoothing. We
prove two main results. First, we show that regularization via the
\emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the
minimax regret. Second, we show that a wide class of
perturbation methods achieve near-optimal regret whenever the perturbation distribution has a bounded hazard rate. For example,
the Gumbel, Weibull, Frechet, Pareto, and Gamma distributions all satisfy this
key property.
Comment: In Proceedings of NIPS, 2015.
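For reference, EXP3, which the abstract notes is recovered as a special case of Tsallis-entropy regularization, can be sketched as follows. The step size and helper names are illustrative assumptions, and this is the classical algorithm rather than anything specific to the paper.

```python
import math
import random

def exp3(get_loss, n_arms, n_rounds, eta, seed=0):
    """Plain EXP3: exponential weights over importance-weighted loss
    estimates; only the played arm's loss is observed each round."""
    rng = random.Random(seed)
    L_hat = [0.0] * n_arms  # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(n_rounds):
        m = min(L_hat)  # shift before exponentiating, for numerical stability
        w = [math.exp(-eta * (L - m)) for L in L_hat]
        s = sum(w)
        p = [wi / s for wi in w]
        u, arm, acc = rng.random(), n_arms - 1, 0.0
        for i, pi in enumerate(p):  # sample an arm from p
            acc += pi
            if u < acc:
                arm = i
                break
        loss = get_loss(t, arm)  # loss in [0, 1] of the played arm only
        total_loss += loss
        L_hat[arm] += loss / p[arm]  # unbiased estimate of the arm's loss
    return total_loss, L_hat
```

Dividing the observed loss by the probability of playing the arm keeps the cumulative estimates unbiased despite the bandit feedback, which is the mechanism the smoothing analysis in the abstract generalizes.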
PAC-Bayesian Analysis of Martingales and Multiarmed Bandits
We present two alternative ways to apply PAC-Bayesian analysis to sequences
of dependent random variables. The first is based on a new lemma that makes it
possible to bound expectations of convex functions of certain dependent random
variables by expectations of the same functions of independent Bernoulli random
variables. This lemma provides an alternative to the Hoeffding-Azuma inequality
for bounding the concentration of martingale values. Our second approach is
based on integrating the Hoeffding-Azuma inequality with PAC-Bayesian analysis.
We also introduce a way to apply PAC-Bayesian analysis in situations of limited
feedback. We combine the new tools to derive PAC-Bayesian generalization and
regret bounds for the multiarmed bandit problem. Although our regret bound is
not yet as tight as state-of-the-art regret bounds based on other
well-established techniques, our results significantly expand the range of
potential applications of PAC-Bayesian analysis and introduce a new analysis
tool to reinforcement learning and many other fields, where martingales and
limited feedback are encountered.