
    Master of Science

    This thesis presents validation results for a modified heavy metal panel assay known as the HYMET4 Blood Panel. It lays out the importance of metal testing in the clinical setting and presents the motivations that prompted improvement of the current assay. We validated the measurement of arsenic, cadmium, lead, and mercury in whole blood using an Agilent isocratic HPLC system and autosampler as the sample introduction system, herein referred to as Isocratic Pump Direct Injection (IPDI). The autosampler accommodates two 45-vial holders, which increases sample throughput. Sample preparation, sample introduction, and data analysis parameters were modified. The validation studies conducted were imprecision, sensitivity, accuracy, Analytical Measurement Range (AMR) or linearity, recovery, and carryover. EDTA, gold, DMSA, and DMPS were used to study mercury stability in solution. The data from the validation studies of all four metals in the panel were analyzed, and the results of the imprecision, sensitivity, accuracy, AMR, recovery, and carryover studies are promising. Of the four additives used in the mercury stability study, DMPS gave the best results. The validation results suggest that the modifications made to the HYMET4 Blood Panel assay have substantially improved it. The successful validation also indicates that the improved assay will increase laboratory throughput through the use of the two 45-vial holders, will increase the sensitivity of results with the newer analytical system, and will reduce the required sample volume by one fifth, lowering laboratory costs and the specimen volume drawn from patients. Based on comparison data, it has been proposed that using a Cetac autosampler would make the improved assay more robust, since the Cetac autosampler is engineered specifically for trace element testing whereas the IPDI system is engineered for biochemical analysis.
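    The imprecision and recovery figures reported in validations like this one follow standard formulas. The sketch below is a minimal illustration of how within-run imprecision (%CV) and percent recovery could be computed from replicate measurements; the replicate values, units, and target concentration are hypothetical and are not taken from the thesis.

```python
import numpy as np

# Hypothetical replicate measurements (ug/L) of a spiked whole-blood control;
# these numbers are illustrative only, not data from the thesis.
replicates = np.array([4.8, 5.1, 4.9, 5.2, 5.0])
target_concentration = 5.0  # nominal spiked concentration (ug/L)

# Within-run imprecision expressed as percent coefficient of variation (%CV).
cv_percent = 100.0 * replicates.std(ddof=1) / replicates.mean()

# Percent recovery: mean measured value relative to the nominal target.
recovery_percent = 100.0 * replicates.mean() / target_concentration

print(f"%CV: {cv_percent:.1f}%, recovery: {recovery_percent:.1f}%")
```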

    Robust computation of linear models by convex relaxation

    Consider a dataset of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers. This work describes a convex optimization problem, called REAPER, that can reliably fit a low-dimensional model to this type of data. This approach parameterizes linear subspaces using orthogonal projectors, and it uses a relaxation of the set of orthogonal projectors to reach the convex formulation. The paper provides an efficient algorithm for solving the REAPER problem, and it documents numerical experiments which confirm that REAPER can dependably find linear structure in synthetic and natural data. In addition, when the inliers lie near a low-dimensional subspace, there is a rigorous theory that describes when REAPER can approximate this subspace.
    Comment: Formerly titled "Robust computation of linear models, or How to find a needle in a haystack"
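    As a concrete illustration of the relaxation described above, the sketch below sets up a REAPER-style problem with a generic convex solver (cvxpy): the subspace is encoded by a symmetric matrix P constrained to the relaxed projector set (0 ⪯ P ⪯ I with trace d), and the objective sums the distances of the points to the range of P. This is a direct transcription of the relaxation for illustration, not the paper's efficient algorithm; the synthetic data and dimensions are made up.

```python
import numpy as np
import cvxpy as cp

# Synthetic data: inliers near a 2-dimensional subspace of R^5 plus a few outliers.
rng = np.random.default_rng(0)
n, d = 5, 2
basis = rng.standard_normal((n, d))
inliers = (basis @ rng.standard_normal((d, 40))).T + 0.01 * rng.standard_normal((40, n))
outliers = 3.0 * rng.standard_normal((10, n))
X = np.vstack([inliers, outliers])

# Relaxed projector: symmetric, 0 <= P <= I in the PSD order, trace equal to d.
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> 0, np.eye(n) - P >> 0, cp.trace(P) == d]

# Minimize the summed distances of the points to the subspace encoded by P.
objective = cp.Minimize(sum(cp.norm((np.eye(n) - P) @ x) for x in X))
cp.Problem(objective, constraints).solve()

# Recover an orthonormal basis from the top eigenvectors of the solved P.
eigvals, eigvecs = np.linalg.eigh(P.value)
estimated_basis = eigvecs[:, -d:]
```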

    Algorithms and Hardness for Robust Subspace Recovery

    We consider a fundamental problem in unsupervised learning called subspace recovery: given a collection of $m$ points in $\mathbb{R}^n$, if many but not necessarily all of these points are contained in a $d$-dimensional subspace $T$, can we find it? The points contained in $T$ are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds $T$ when it contains more than a $\frac{d}{n}$ fraction of the points. Hence, for say $d = n/2$, this estimator is both easy to compute and well-behaved when there is a constant fraction of outliers. We prove that it is Small Set Expansion hard to find $T$ when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas, including small set expansion, matroid theory, and functional analysis, that we make use of here.
    Comment: Appeared in Proceedings of COLT 201
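    For orientation only, the sketch below shows a naive RANSAC-style baseline for the same recovery task: repeatedly sample d points, form their span, and keep the candidate subspace that numerically contains the most points. This is not the algorithm from the paper, and it is the kind of heuristic that can fail against adversarial outliers; the function name and tolerance are illustrative assumptions.

```python
import numpy as np

def ransac_subspace(X, d, trials=200, tol=1e-6):
    # Generic RANSAC-style baseline (not the paper's algorithm).
    # X: (m, n) array whose rows are points; d: target subspace dimension.
    m, n = X.shape
    rng = np.random.default_rng(0)
    best_basis, best_count = None, -1
    for _ in range(trials):
        idx = rng.choice(m, size=d, replace=False)
        # Orthonormal basis for the span of the d sampled points.
        Q, _ = np.linalg.qr(X[idx].T)
        # Distance of every point to that candidate subspace.
        resid = np.linalg.norm(X.T - Q @ (Q.T @ X.T), axis=0)
        count = int(np.sum(resid < tol))
        if count > best_count:
            best_basis, best_count = Q, count
    return best_basis, best_count
```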

    Non-convex Optimization for Machine Learning

    A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for practitioners, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these heuristics are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. It leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
    Comment: The official publication is available from now publishers via http://dx.doi.org/10.1561/220000005
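    To make the "projected gradient descent" heuristic mentioned above concrete, the sketch below applies it to one standard non-convex problem, sparse linear regression: after each gradient step, the projection keeps only the k largest-magnitude coordinates (iterative hard thresholding). The problem choice, step-size rule, and names are illustrative assumptions, not taken from the monograph.

```python
import numpy as np

def iht(A, y, k, step=None, iters=200):
    """Projected gradient descent (iterative hard thresholding) for
    min ||A x - y||^2 subject to x having at most k nonzeros (non-convex)."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)              # gradient of the smooth loss
        x = x - step * grad                   # gradient step
        keep = np.argsort(np.abs(x))[-k:]     # projection: keep k largest entries
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```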