Towards Machine Wald
The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to "think" as humans do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models remains to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity.
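As a hedged illustration of the decision-theoretic formulation alluded to above (the notation is ours, for illustration, not quoted from the paper): in Wald's framework an optimal estimator is a minimax solution,

\[
\theta^\star \in \operatorname*{arg\,min}_{\theta \in \Theta} \; \sup_{\mu \in \mathcal{A}} \; \mathcal{E}(\theta, \mu),
\]

where $\Theta$ is the space of admissible estimators/models, $\mathcal{A}$ is the (typically infinite-dimensional) set of scenarios consistent with the available data, constitutive relations, and assumptions, and $\mathcal{E}(\theta, \mu)$ is the risk of $\theta$ under scenario $\mu$. Making this worst-case optimization computable is precisely where the discretization issue raised in point (2) arises.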
Incremental proximal methods for large scale convex optimization
We consider the minimization of a sum $\sum_{i=1}^{m} f_i(x)$ consisting of a large number of convex component functions $f_i$. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate of convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning.
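As a minimal sketch of the randomized incremental proximal iteration described above (the quadratic components, diminishing step size, and closed-form proximal step are our assumptions for illustration, not the paper's exact algorithm): each iteration samples one component $f_i(x) = \tfrac{1}{2}(a_i^\top x - b_i)^2$ and applies its proximal operator.

import numpy as np

def incremental_proximal(A, b, alpha0=1.0, num_iters=5000, seed=0):
    """Randomized incremental proximal method for
    minimize_x  sum_i 0.5 * (a_i^T x - b_i)^2,
    using the closed-form prox of each quadratic component."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(num_iters):
        alpha = alpha0 / (k + 1)          # diminishing step size
        i = rng.integers(m)               # pick one component at random
        a, bi = A[i], b[i]
        # prox_{alpha f_i}(x) = x - alpha * a * (a^T x - b_i) / (1 + alpha * ||a||^2)
        x = x - alpha * a * (a @ x - bi) / (1.0 + alpha * (a @ a))
    return x

# Example: approximately recover the least-squares solution of a random system.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
b = A @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.standard_normal(100)
print(incremental_proximal(A, b))

Unlike a pure (sub)gradient step, the proximal step is an implicit update and remains stable for larger step sizes, which is one motivation for the combinations of gradient and proximal iterations studied in the paper.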
Quantification of Fibrin in Blood Thrombi Formed in Hemodialysis Central Venous Catheters: A Pilot Study on 43 CVCs
PURPOSE: Fibrin deposition and thrombotic occlusion represent a serious cause of access dysfunction in hemodialysis central venous catheters (CVCs). The aim of this work was to define and apply a method for imaging and quantifying fibrin in thrombi formed in the side holes of CVCs.
METHODS: Forty-three CVCs removed from a cohort of dialyzed patients were analyzed in this pilot study. Hematoxylin and eosin and a modified Carstairs' staining were applied to permanent thrombus sections. Fluorescence microscopy and image analysis were performed to quantify the amount of fibrin.
RESULTS: Highly fluorescent areas were invariably associated with fibrin by the Carstairs method. The deposition of concentric layers of fibrin and erythrocytes was easily identified by fluorescence microscopy, revealing the growth pattern of the thrombus. The fibrin amount in diabetic patients was significantly higher than that in nondiabetic patients, with median (interquartile range) values of 51% (47-68%) and 44% (30-54%), respectively (p = 0.032). No significant difference in fibrin content was found when grouping data by catheter type, permanence time, insertion site, or dialysis vintage. Higher variability in fibrin values was found in thrombi from CVCs removed after 1-15 days than in those removed after 16-60 days. A trend toward increasing fibrin amount in thrombi was noted with higher blood platelet count at CVC insertion.
CONCLUSIONS: The analytical method presented here proved to be a rapid and effective way to quantify the fibrin content of thrombi formed on CVCs, with potential application in future clinical studies.
Positive polynomial matrices for LPV controller synthesis
Positive polynomial matrices and linear matrix inequalities (LMIs) can be used to design linear parameter-varying (LPV) controllers that depend polynomially on the scheduling parameters and are robust to polynomial parametric uncertainty. The salient features of the approach are (a) the ability to design a controller of order and structure fixed a priori; (b) the use of a transfer-function, or polynomial, modeling framework that bypasses difficulties typically encountered with canonical state-space representations of LPV controllers; and (c) the existence of a user-friendly Matlab interface for modeling this class of LMI problems. The main limitation of the approach is the choice of a nominal, or central, characteristic polynomial around which the design is carried out.
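As a hedged illustration of the positivity-via-LMI idea underlying the approach (a toy scalar case, not the paper's polynomial-matrix machinery or its Matlab interface; the cvxpy tooling below is our choice): a univariate polynomial $p(t)$ of even degree is nonnegative on the real line if and only if it admits a positive semidefinite Gram matrix $Q$ with $p(t) = v(t)^\top Q\, v(t)$ for the monomial basis $v(t)$, which is an LMI feasibility problem.

import cvxpy as cp

# Toy check: is p(t) = 1 + 2 t^2 + t^4 nonnegative on the real line?
# Coefficients of p, ordered by degree 0..4.
coeffs = [1.0, 0.0, 2.0, 0.0, 1.0]

# Gram matrix Q for the monomial basis v(t) = [1, t, t^2]:
# p(t) = v(t)^T Q v(t) with Q positive semidefinite is an LMI.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [Q >> 0]
for k in range(5):
    # The coefficient of t^k collects all Q[i, j] with i + j = k.
    constraints.append(
        sum(Q[i, k - i] for i in range(3) if 0 <= k - i <= 2) == coeffs[k]
    )

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
# Status 'optimal' means a PSD Gram matrix exists, certifying p(t) >= 0 for all t.
print(problem.status)

The controller-synthesis setting replaces the scalar Gram matrix with positive polynomial matrices in the scheduling parameters, but the computational primitive is the same kind of LMI feasibility/optimization problem.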