72 research outputs found

    Optimizing Water Consumption Using Crop Water Production Functions


    Learning without the Phase: Regularized PhaseMax Achieves Optimal Sample Complexity

    The problem of estimating an unknown signal, x_0 ∈ R^n, from a vector y ∈ R^m consisting of m magnitude-only measurements of the form y_i = |a_i x_0|, where the a_i's are the rows of a known measurement matrix A, is a classical problem known as phase retrieval. This problem arises when measuring the phase is costly or altogether infeasible. In many applications in machine learning, signal processing, statistics, etc., the underlying signal has certain structure (sparse, low-rank, finite alphabet, etc.), opening up the possibility of recovering x_0 from a number of measurements smaller than the ambient dimension, i.e., m < n. Ideally, one would like to recover the signal from a number of phaseless measurements that is on the order of the "degrees of freedom" of the structured x_0. To this end, inspired by the PhaseMax algorithm, we formulate a convex optimization problem whose objective function relies on an initial estimate of the true signal and also includes an additive regularization term to encourage structure. The new formulation is referred to as regularized PhaseMax. We analyze the performance of regularized PhaseMax to find the minimum number of phaseless measurements required for perfect signal recovery. The results are asymptotic and are stated in terms of the geometrical properties (such as the Gaussian width) of certain convex cones. When the measurement matrix has i.i.d. Gaussian entries, we show that our proposed method is indeed order-wise optimal, allowing perfect recovery from a number of phaseless measurements that is only a constant factor away from the degrees of freedom. We explicitly compute this constant factor, in terms of the quality of the initial estimate, by deriving the exact phase transition. The theory closely matches empirical results from numerical simulations.
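    A minimal sketch of a regularized-PhaseMax-style convex program is given below, assuming a formulation that maximizes alignment with the initial estimate, adds an ℓ1 penalty as the structure-encouraging regularizer, and keeps the phaseless measurements as the constraints |a_i x| ≤ y_i. The solver, dimensions, penalty weight, and choice of ℓ1 are illustrative assumptions, not the paper's exact formulation or code.

```python
# Sketch of a regularized-PhaseMax-style convex program (illustrative, not the
# authors' code): maximize alignment with an initial estimate, penalize the l1
# norm to encourage sparsity, subject to |a_i x| <= y_i for every measurement.
import numpy as np
import cvxpy as cp

n, m, k = 200, 120, 10                 # ambient dimension, measurements, sparsity (illustrative)
rng = np.random.default_rng(0)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))        # i.i.d. Gaussian measurement matrix
y = np.abs(A @ x_true)                 # magnitude-only (phaseless) measurements

x_init = x_true + 0.5 * rng.standard_normal(n)   # crude initial estimate (assumed available)
lam = 0.1                                        # regularization weight (tuning assumed)

x = cp.Variable(n)
objective = cp.Maximize(x_init @ x - lam * cp.norm1(x))
constraints = [cp.abs(A @ x) <= y]
cp.Problem(objective, constraints).solve()

print("relative recovery error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

    Recovery in such a sketch is only expected when the number of measurements is large enough relative to the sparsity and the initial estimate is sufficiently correlated with the true signal, in line with the phase-transition behavior the abstract describes.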

    The Performance Analysis of Generalized Margin Maximizer (GMM) on Separable Data

    Logistic models are commonly used for binary classification tasks. The success of such models has often been attributed to their connection to maximum-likelihood estimators. It has been shown that the gradient descent algorithm, when applied to the logistic loss, converges to the max-margin classifier (a.k.a. hard-margin SVM). The performance of the max-margin classifier has recently been analyzed. Inspired by these results, in this paper we present and study a more general setting, where the underlying parameters of the logistic model possess certain structures (sparse, block-sparse, low-rank, etc.), and introduce a more general framework (referred to as the "Generalized Margin Maximizer", GMM). While classical max-margin classifiers minimize the ℓ₂-norm of the parameter vector subject to linearly separating the data, GMM minimizes an arbitrary convex function of the parameter vector. We provide a precise analysis of the performance of GMM via the solution of a system of nonlinear equations. We also provide a detailed study of three special cases: (1) ℓ₂-GMM, which is the max-margin classifier, (2) ℓ₁-GMM, which encourages sparsity, and (3) ℓ_∞-GMM, which is often used when the parameter vector has binary entries. Our theoretical results are validated by extensive simulation results across a range of parameter values, problem instances, and model structures.
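    As a rough illustration of the framework sketched above, the following convex program minimizes a convex function of the parameter vector subject to classifying every training point with margin at least one; swapping the norm gives the ℓ₁ and ℓ_∞ variants. The data generation, solver, and unit-margin normalization are illustrative assumptions, not the paper's setup.

```python
# Rough GMM-style sketch (illustrative, not the paper's code): minimize a
# convex function f(w) subject to every point being separated with margin >= 1.
import numpy as np
import cvxpy as cp

d, n = 20, 100
rng = np.random.default_rng(1)
w_gen = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_gen)                        # labels separable by construction

w = cp.Variable(d)
f = cp.norm(w, 2)                             # l2-GMM: the hard-margin SVM
# f = cp.norm(w, 1)                           # l1-GMM: encourages sparsity
# f = cp.norm(w, "inf")                       # l_inf-GMM: suits binary-valued parameters
constraints = [cp.multiply(y, X @ w) >= 1]    # linear separation with unit margin
cp.Problem(cp.Minimize(f), constraints).solve()

print("smallest training margin:", float(np.min(y * (X @ w.value))))
```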

    Modeling biochar-soil depth dependency on fecal coliform straining under subsurface drip irrigation

    Funding Information: This work was supported by Shahrekord University, Iran. N. Sepehrnia is funded by a Marie Skłodowska-Curie Individual Fellowship, United Kingdom, under grant agreement No. 101026287. We acknowledge the University of Aberdeen, UK, for supporting this project. Peer reviewed. Publisher PDF.

    Estimation of soil water retention curve in semi-arid areas using fractal dimension

    The soil water retention curve (SWRC) is one of the important hydraulic functions in water flow modeling and solute transport in porous media. Direct measurement of the SWRC is time-consuming and expensive, so different models have been developed to describe it. In this study, a model based on fractal theory was derived to estimate the water retention curve. The fractal dimension of the SWRC (DSWRC) was determined for 130 soil samples covering a wide range of soil textures, and a simple relation was sought between this parameter and easily available soil properties such as clay, silt, and sand contents, lime percentage, and bulk density by applying multiple linear regression analysis. The measured DSWRC for 110 soil samples was used for the regression analysis, and the remaining 20 soil samples were used for model validation. The regression analysis showed a linear relationship between DSWRC and clay content, silt content, and soil bulk density, with goodness of fit R² = 0.909, while lime content did not show any significant effect on the prediction of the SWRC. It can therefore be concluded that estimating the SWRC in calcareous soils using DSWRC obtained from easily measured soil properties is a good, rapid, and reliable alternative for estimating the soil hydraulic properties of these areas.
    Keywords: Fractal model; Lime percent; Regression analysis; Soil water retention curve
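    A minimal sketch of the kind of multiple linear regression described above is shown below, with synthetic data standing in for the measured samples; the coefficient values, noise level, and 110/20 calibration-validation split are illustrative assumptions, not the authors' data or fitted pedotransfer function.

```python
# Sketch of fitting D_SWRC from clay, silt, and bulk density by multiple linear
# regression (illustrative synthetic data, not the study's measurements).
import numpy as np

rng = np.random.default_rng(42)
n = 130
clay = rng.uniform(5, 60, n)             # clay content, %
silt = rng.uniform(5, 60, n)             # silt content, %
bulk_density = rng.uniform(1.1, 1.7, n)  # bulk density, g/cm^3
d_swrc = (2.6 + 0.004 * clay + 0.002 * silt + 0.05 * bulk_density
          + 0.01 * rng.standard_normal(n))   # assumed relation plus noise

X = np.column_stack([np.ones(n), clay, silt, bulk_density])
fit, val = slice(0, 110), slice(110, None)   # 110 samples to fit, 20 to validate

coef, *_ = np.linalg.lstsq(X[fit], d_swrc[fit], rcond=None)
pred = X[val] @ coef

ss_res = np.sum((d_swrc[val] - pred) ** 2)
ss_tot = np.sum((d_swrc[val] - d_swrc[val].mean()) ** 2)
print("validation R^2:", round(1 - ss_res / ss_tot, 3))
```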

    Estimation of soil water retention curve using fractal dimension

    The soil water retention curve (SWRC) is a fundamental hydraulic property used mainly to study flow and transport in soils and to calculate plant-available water. Since direct measurement of the SWRC is time-consuming and expensive, different models have been developed to estimate it. In this study, a fractal-based model was developed to predict the SWRC. A wide range of soil textures (130 soil samples) was used to determine the fractal dimension of the SWRC (DSWRC). Moreover, SWRC pedotransfer functions were established based on easily available soil properties such as particle size distribution and bulk density by applying multiple linear regression analysis. The measured DSWRC for 110 soil samples was used for function parameterization and the remaining 20 samples were used for model validation. The results show that DSWRC correlates linearly with clay content, silt content, and soil bulk density (r² = 0.909). The SWRC can therefore be easily and concisely estimated by the proposed fractal-based functions.
    Keywords: Fractal model; Pedotransfer functions; Regression analysis; Soil water retention curve
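    To make the fractal idea concrete, the snippet below evaluates one common fractal water retention form, θ(h) = θs·(h_a/h)^(3−D) for h ≥ h_a (a Tyler–Wheatcroft-type parameterization); both this functional form and the parameter values are assumptions for illustration and are not necessarily the exact model derived in the paper.

```python
# Illustrative fractal-type water retention curve (assumed Tyler-Wheatcroft
# form, not necessarily the paper's exact model).
import numpy as np

def fractal_swrc(h, theta_s=0.45, h_a=10.0, d_swrc=2.85):
    """Volumetric water content at matric suction h (same units as h_a)."""
    h = np.asarray(h, dtype=float)
    return theta_s * (h_a / np.maximum(h, h_a)) ** (3.0 - d_swrc)

suctions = np.logspace(1, 4.2, 6)   # roughly 10 cm to 16000 cm of water
print(np.round(fractal_swrc(suctions), 3))
```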
