Preservation: What Is It Good for?
The Article proceeds as follows: Part A defines the preservation doctrine; Part B recounts its history; Part C explains its purpose; Part D describes the appellate process in New York; Part E describes the statutory rules of the New York Court of Appeals; Part F describes how New York's preservation rules have loosened since 2009; Part G presents a statistical analysis of the consequences of that loosening; and Part H shows how loosening the rules of preservation affects the efficiency of appellate courts.
The Effect of e-Business on Supply Chain Strategy
Internet technology has forced companies to redefine their business models so as to improve extended-enterprise performance, a shift popularly called e-business. The focus has been on improving extended-enterprise transactions, including intra-organizational, Business-to-Consumer (B2C), and Business-to-Business (B2B) transactions. This shift in corporate focus allowed a number of companies to employ a hybrid approach, the Push-Pull supply chain paradigm. In this article we review and analyze the evolution of supply chain strategies from the traditional Push to Pull and finally to the hybrid Push-Pull approach. The analysis motivates the development of a framework that allows companies to identify the appropriate supply chain strategy depending on product characteristics. Finally, we introduce new opportunities that support this supply chain paradigm.
Uplift Modeling with Multiple Treatments and General Response Types
Randomized experiments have been used to assist decision-making in many
areas. They help people select the optimal treatment for the test population
with statistical guarantees. However, subjects can show significant
heterogeneity in response to treatments. The problem of customizing treatment
assignment based on subject characteristics is known as uplift modeling,
differential response analysis, or personalized treatment learning in the
literature. A key feature of uplift modeling is that the data is unlabeled. It
is impossible to know whether the chosen treatment is optimal for an individual
subject because response under alternative treatments is unobserved. This
presents a challenge to both the training and the evaluation of uplift models.
In this paper we describe how to obtain an unbiased estimate of the key
performance metric of an uplift model, the expected response. We present a new
uplift algorithm which creates a forest of randomized trees. The trees are
built with a splitting criterion designed to directly optimize their uplift
performance based on the proposed evaluation method. Both the evaluation method
and the algorithm apply to an arbitrary number of treatments and general response
types. Experimental results on synthetic data and industry-provided data show
that our algorithm leads to significant performance improvement over other
applicable methods.
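The unbiased-evaluation idea can be illustrated with inverse-propensity weighting, a standard estimator for randomized data: each subject whose randomized assignment happens to match the policy's recommendation is reweighted by the inverse of its assignment probability. A minimal sketch in our own notation, not necessarily the paper's exact estimator:

```python
import numpy as np

def expected_response(treatments, responses, policy, propensities):
    """Unbiased estimate of the expected response if treatments were
    assigned by `policy`, from data gathered in a randomized experiment
    with known assignment probabilities (propensities).

    Only subjects whose randomized treatment matches the policy's
    recommendation contribute, reweighted by 1/propensity."""
    match = (treatments == policy)
    weights = np.where(match, 1.0 / propensities, 0.0)
    return np.mean(weights * responses)

# Toy example: two treatments assigned uniformly at random (p = 0.5).
rng = np.random.default_rng(0)
n = 100_000
t = rng.integers(0, 2, size=n)                               # randomized arm
y = np.where(t == 1, 2.0, 1.0) + rng.normal(0, 0.1, size=n)  # response
policy = np.ones(n, dtype=int)                               # "always treat"
est = expected_response(t, y, policy, np.full(n, 0.5))
```

Here `est` converges to 2.0, the true expected response under the "always treat" policy, even though each subject's response under the unchosen arm is never observed.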
A Practically Competitive and Provably Consistent Algorithm for Uplift Modeling
Randomized experiments have been critical tools of decision making for
decades. However, subjects can show significant heterogeneity in response to
treatments in many important applications. Therefore it is not enough to simply
know which treatment is optimal for the entire population. What we need is a
model that correctly customizes treatment assignment based on subject
characteristics. The problem of constructing such models from randomized
experiment data is known as uplift modeling in the literature. Many algorithms
have been proposed for uplift modeling and some have generated promising
results on various data sets. Yet little is known about the theoretical
properties of these algorithms. In this paper, we propose a new tree-based
ensemble algorithm for uplift modeling. Experiments show that our algorithm can
achieve competitive results on both synthetic and industry-provided data. In
addition, by properly tuning the "node size" parameter, our algorithm is proved
to be consistent under mild regularity conditions. This is the first consistent
algorithm for uplift modeling that we are aware of. Comment: Accepted by the 2017 IEEE International Conference on Data Mining.
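The flavor of a tree-based uplift criterion, and the role of the "node size" parameter, can be shown with a toy squared-difference splitting score; this is our own illustrative sketch, not the paper's exact splitting rule:

```python
import numpy as np

def uplift_split_score(x, t, y, threshold, min_node_size=10):
    """Score a candidate split of feature x at `threshold` by how much
    the estimated treatment effect (treated mean minus control mean)
    differs between the two child nodes -- a simple squared-difference
    uplift criterion."""
    left = x <= threshold
    if left.sum() < min_node_size or (~left).sum() < min_node_size:
        return -np.inf  # "node size" guard: too few samples to estimate
    def effect(mask):
        # within-node treatment effect: treated mean minus control mean
        return y[mask & (t == 1)].mean() - y[mask & (t == 0)].mean()
    return (effect(left) - effect(~left)) ** 2

# Toy data: the treatment helps only when x > 0.5.
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 1, n)
t = rng.integers(0, 2, n)
y = t * (x > 0.5) + rng.normal(0, 0.1, n)
good = uplift_split_score(x, t, y, 0.5)   # separates the two regimes
bad = uplift_split_score(x, t, y, 0.9)    # mixes them
```

The split at 0.5 scores higher because it cleanly separates the region where the treatment works from the region where it does not; raising `min_node_size` trades variance in the per-node effect estimates for coarser trees, which is the bias/variance knob behind the consistency result.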
Online Pricing with Offline Data: Phase Transition and Inverse Square Law
This paper investigates the impact of pre-existing offline data on online
learning, in the context of dynamic pricing. We study a single-product dynamic
pricing problem over a selling horizon of T periods. The demand in each
period is determined by the price of the product according to a linear demand
model with unknown parameters. We assume that before the start of the selling
horizon, the seller already has some pre-existing offline data. The offline
data set contains n samples, each of which is an input-output pair consisting
of a historical price and an associated demand observation. The seller wants to
utilize both the pre-existing offline data and the sequential online data to
minimize the regret of the online learning process.
We characterize the joint effect of the size, location and dispersion of the
offline data on the optimal regret of the online learning process.
Specifically, the size, location and dispersion of the offline data are
measured by the number of historical samples n, the distance between the
average historical price and the optimal price δ, and the standard
deviation of the historical prices σ, respectively. We characterize the
optimal regret in closed form, and design a learning algorithm based on the
"optimism in the face of uncertainty" principle, whose regret is optimal up to
a logarithmic factor. Our results reveal surprising transformations of the
optimal regret rate with respect to the size of the offline data, which we
refer to as phase transitions. In addition, our results demonstrate that the
location and dispersion of the offline data also have an intrinsic effect on
the optimal regret, and we quantify this effect via the inverse-square law. Comment: Forthcoming in Management Science.
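The setting can be made concrete with a minimal sketch: a linear demand model fit by least squares over pooled offline and online samples, followed by a greedy plug-in price. This is for illustration only; the paper's algorithm uses the optimism principle rather than this greedy rule, and all names and numbers here are ours:

```python
import numpy as np

def fit_demand(prices, demands):
    """OLS fit of a linear demand model D(p) = a - b*p + noise,
    pooling offline and online observations into one design matrix."""
    X = np.column_stack([np.ones_like(prices), -prices])
    a, b = np.linalg.lstsq(X, demands, rcond=None)[0]
    return a, b

def greedy_price(a, b):
    """Revenue p*D(p) = p*(a - b*p) is maximized at p = a / (2b)."""
    return a / (2 * b)

rng = np.random.default_rng(2)
a_true, b_true = 10.0, 1.0               # true demand: D(p) = 10 - p
# Offline data: historical prices clustered around 8 (low dispersion).
off_p = rng.normal(8.0, 0.5, 50)
off_d = a_true - b_true * off_p + rng.normal(0, 0.5, 50)
# A few online price experiments add the dispersion OLS needs.
on_p = np.array([3.0, 5.0, 7.0])
on_d = a_true - b_true * on_p + rng.normal(0, 0.5, 3)
a_hat, b_hat = fit_demand(np.concatenate([off_p, on_p]),
                          np.concatenate([off_d, on_d]))
p_hat = greedy_price(a_hat, b_hat)       # true optimum is 10/(2*1) = 5
```

The example hints at why location and dispersion matter: offline prices tightly clustered far from the optimum pin down the demand line poorly near the optimal price, so online experimentation must supply the missing information.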
Learning to Optimize under Non-Stationarity
We introduce algorithms that achieve state-of-the-art \emph{dynamic regret}
bounds for the non-stationary linear stochastic bandit setting, which captures natural
applications such as dynamic pricing and ads allocation in a changing
environment. We show how the difficulty posed by the non-stationarity can be
overcome by a novel marriage between stochastic and adversarial bandits
learning algorithms. Defining d, B_T, and T as the problem dimension, the
\emph{variation budget}, and the total time horizon, respectively, our main
contributions are the tuned Sliding Window UCB (\texttt{SW-UCB}) algorithm,
which attains the optimal dynamic regret, and the tuning-free
bandit-over-bandit (\texttt{BOB}) framework built on top of the \texttt{SW-UCB}
algorithm, which attains the best known dynamic regret without knowing the
variation budget in advance.
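The sliding-window idea behind \texttt{SW-UCB} can be illustrated on the simpler multi-armed (rather than linear) bandit: means and counts are computed only over the most recent rounds, so estimates track a drifting environment. A sketch under that simplification, with all names and constants our own:

```python
import math
import random

def sliding_window_ucb(pull, n_arms, horizon, window, c=1.0):
    """UCB where arm statistics come only from the last `window` rounds,
    so stale observations are forgotten and the policy can re-identify
    the best arm after the environment changes."""
    history = []   # (arm, reward) pairs, most recent last
    total = 0.0
    for step in range(horizon):
        recent = history[-window:]
        counts = [0] * n_arms
        sums = [0.0] * n_arms
        for arm, r in recent:
            counts[arm] += 1
            sums[arm] += r
        def ucb(a):
            if counts[a] == 0:
                return float("inf")   # force exploration of stale arms
            bonus = c * math.sqrt(
                math.log(min(step + 1, window) + 1) / counts[a])
            return sums[a] / counts[a] + bonus
        arm = max(range(n_arms), key=ucb)
        r = pull(arm, step)
        history.append((arm, r))
        total += r
    return total

# Environment whose best arm switches halfway through the horizon.
random.seed(0)
def pull(arm, step):
    best = 0 if step < 1000 else 1
    mean = 0.9 if arm == best else 0.1
    return mean + random.gauss(0, 0.05)

reward = sliding_window_ucb(pull, n_arms=2, horizon=2000, window=200)
```

After the switch, old observations of the formerly-best arm age out of the window within roughly `window` rounds, so the policy recovers; a fixed-window UCB without forgetting would keep pulling the stale arm. The window length plays the role that the variation-budget tuning plays in the abstract above.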