Regularized Ordinal Regression and the ordinalNet R Package
Regularization techniques such as the lasso (Tibshirani 1996) and elastic net
(Zou and Hastie 2005) can be used to improve regression model coefficient
estimation and prediction accuracy, as well as to perform variable selection.
Ordinal regression models are widely used in applications where the use of
regularization could be beneficial; however, these models are not included in
many popular software packages for regularized regression. We propose a
coordinate descent algorithm to fit a broad class of ordinal regression models
with an elastic net penalty. Furthermore, we demonstrate that each model in
this class generalizes to a more flexible form, for instance to accommodate
unordered categorical data. We introduce an elastic net penalty class that
applies to both model forms. Additionally, this penalty can be used to shrink a
non-ordinal model toward its ordinal counterpart. Finally, we introduce the R
package ordinalNet, which implements the algorithm for this model class.
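The core update of coordinate descent with an elastic net penalty can be sketched on a plain linear model (simpler than the ordinal models the paper treats). The soft-thresholding step below is the standard elastic net coordinate update; the data, penalty values, and function names are invented for illustration.

```python
import numpy as np

def soft_threshold(rho, t):
    """Soft-thresholding operator S(rho, t) = sign(rho) * max(|rho| - t, 0)."""
    return np.sign(rho) * np.maximum(np.abs(rho) - t, 0.0)

def elastic_net_cd(X, y, lam, alpha, n_iter=100):
    """Coordinate descent for
        (1/2n)||y - Xb||^2 + lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||_2^2).
    Assumes columns of X are standardized (mean 0, variance 1)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                          # residual, kept up to date
    for _ in range(n_iter):
        for j in range(p):
            # partial correlation of coordinate j with its partial residual;
            # uses x_j^T x_j / n == 1 for standardized columns
            rho = (X[:, j] @ r) / n + beta[j]
            new_bj = soft_threshold(rho, lam * alpha) / (1.0 + lam * (1.0 - alpha))
            r += X[:, j] * (beta[j] - new_bj)  # incremental residual update
            beta[j] = new_bj
    return beta

# toy data: 3 informative predictors out of 10 (purely illustrative)
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(0)) / X.std(0)
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = elastic_net_cd(X, y, lam=0.3, alpha=0.8)
```

With these settings the l1 part of the penalty zeroes out the seven noise coefficients while the informative ones survive, shrunk toward zero by both penalty components.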
Approximate IPA: Trading Unbiasedness for Simplicity
When Perturbation Analysis (PA) yields unbiased sensitivity estimators for
expected-value performance functions in discrete event dynamic systems, it can
be used for performance optimization of those functions. However, even when PA
is known to be unbiased, the complexity of its estimators often scales poorly
with the system's size. The purpose of this paper is to suggest an alternative
approach to optimization that balances precision against computing effort by
trading complicated, unbiased PA estimators for simple, biased approximate
estimators. Furthermore, we provide guidelines, largely based on the Stochastic
Flow Modeling framework, for developing such estimators. We suggest
that if the relative error (or bias) is not too large, then optimization
algorithms such as stochastic approximation converge to a (local) minimum just
like in the case where no approximation is used. We apply this approach to an
example of balancing loss with buffer-cost in a finite-buffer queue, and prove
a crucial upper bound on the relative error. This paper presents the initial
study of the proposed approach, and we believe that if the idea gains traction
then it may lead to a significant expansion of the scope of PA in optimization
of discrete event systems.
Comment: 8 pages, 8 figures
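The claim that stochastic approximation still converges near a (local) minimum when the gradient estimator carries a small bias can be illustrated with a toy Robbins-Monro recursion. The cost function, bias magnitude, noise level, and step sizes below are all invented for illustration and are not taken from the paper.

```python
import random

def biased_sa(x0=0.0, bias=0.05, noise=0.5, steps=20000, seed=1):
    """Robbins-Monro recursion x_{k+1} = x_k - a_k * g_k, where g_k is a
    noisy, slightly biased estimate of f'(x) for f(x) = (x - 2)^2.
    The exact gradient is 2*(x - 2); we add a small constant bias."""
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        grad_est = 2.0 * (x - 2.0) + bias + noise * rng.gauss(0.0, 1.0)
        a_k = 1.0 / k                  # diminishing step sizes (Robbins-Monro)
        x -= a_k * grad_est
    return x

x_final = biased_sa()
```

With an additive bias of 0.05 the iterate settles near the perturbed root x = 1.975 rather than the true minimizer x = 2, i.e. the limit is displaced by an amount proportional to the bias, consistent with the paper's suggestion that a small relative error leaves convergence essentially intact.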
Transport or Store? Synthesizing Flow-based Microfluidic Biochips using Distributed Channel Storage
Flow-based microfluidic biochips have attracted much attention in the EDA
community due to their miniaturized size and execution efficiency. Previous
research, however, still follows the traditional computing model with a
dedicated storage unit, which actually becomes a bottleneck of the performance
of biochips. In this paper, we propose the first architectural synthesis
framework considering distributed storage constructed temporarily from
transportation channels to cache fluid samples. Since distributed storage can
be accessed more efficiently than a dedicated storage unit and channels can
switch easily between the roles of transportation and storage, biochips with
this distributed computing architecture can achieve a higher execution
efficiency even with fewer resources. Experimental results confirm that the
execution efficiency of a bioassay can be improved by up to 28% while the
number of valves in the biochip can be reduced effectively.
Comment: ACM/IEEE Design Automation Conference (DAC), June 201
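A back-of-the-envelope model of why caching in channels can help: with a dedicated storage unit, every cached intermediate fluid pays a round trip to storage and back, while distributed channel storage caches it in place for only a cheap channel reconfiguration. All timings, operation counts, and function names below are hypothetical, invented for illustration rather than taken from the paper.

```python
def makespan_dedicated(n_cached, t_op, t_move, n_ops):
    """Hypothetical model: every cached intermediate adds a transport to
    the dedicated storage unit plus a transport back to the mixing site."""
    return n_ops * t_op + n_cached * 2 * t_move

def makespan_distributed(n_cached, t_op, n_ops, t_reconfig):
    """Hypothetical model: channel storage caches fluids in place; only a
    cheap valve reconfiguration is paid per cached intermediate."""
    return n_ops * t_op + n_cached * t_reconfig

# illustrative numbers: 12 operations, 5 cached intermediates
ded = makespan_dedicated(n_cached=5, t_op=4.0, t_move=3.0, n_ops=12)
dist = makespan_distributed(n_cached=5, t_op=4.0, n_ops=12, t_reconfig=0.5)
# ded  -> 78.0
# dist -> 50.5
```

Under these invented numbers the distributed architecture shortens the makespan simply by eliminating the storage round trips, which is the intuition behind the improvement the abstract reports.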
Implicit sampling for path integral control, Monte Carlo localization, and SLAM
The applicability and usefulness of implicit sampling in stochastic optimal
control, stochastic localization, and simultaneous localization and mapping
(SLAM), is explored; implicit sampling is a recently-developed
variationally-enhanced sampling method. The theory is illustrated with
examples, and it is found that implicit sampling is significantly more
efficient than current Monte Carlo methods in test problems for all three
applications
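Implicit sampling maps reference Gaussian samples into the high-probability region of a target by solving F(x) - min F = z^2/2, where F = -log of the (unnormalized) target, then reweights by the Jacobian of the map. A one-dimensional sketch, for an invented target F(x) = x^2/2 + x^4/4 chosen so the map has a closed form:

```python
import numpy as np

# target: p(x) ∝ exp(-F(x)),  F(x) = x^2/2 + x^4/4,  min F = 0 at x = 0
def Fprime(x):
    return x + x**3

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)        # reference Gaussian samples

# solve F(x) = z^2/2 for x with sign(x) = sign(z):
# with u = x^2:  u^2/4 + u/2 = z^2/2  =>  u = -1 + sqrt(1 + 2 z^2)
u = -1.0 + np.sqrt(1.0 + 2.0 * z**2)
x = np.sign(z) * np.sqrt(u)

# importance weight is the Jacobian |dx/dz| = |z / F'(x)|; the exp(-F) and
# Gaussian density factors cancel because F(x) = z^2/2 by construction
w = np.abs(z) / np.abs(Fprime(x))
w[np.abs(z) < 1e-12] = 1.0              # limit z -> 0: |dx/dz| -> 1

# self-normalized importance estimates of target moments
mean = np.sum(w * x) / np.sum(w)        # ~ 0 by symmetry of the target
second = np.sum(w * x**2) / np.sum(w)   # E[x^2] under the target
```

Because every sample lands where F is small, the weights stay well behaved, which is the mechanism behind the efficiency gains the abstract reports; in higher dimensions the solve is done numerically rather than in closed form.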