186 research outputs found

    Estimation Diversity and Energy Efficiency in Distributed Sensing

    Distributed estimation based on measurements from multiple wireless sensors is investigated. It is assumed that a group of sensors observe the same quantity in independent additive observation noises with possibly different variances. The observations are transmitted using amplify-and-forward (analog) transmissions over non-ideal fading wireless channels from the sensors to a fusion center, where they are combined to generate an estimate of the observed quantity. Assuming that the Best Linear Unbiased Estimator (BLUE) is used by the fusion center, the equal-power transmission strategy is first discussed, where the system performance is analyzed by introducing the concepts of estimation outage and estimation diversity, and it is shown that there is an achievable diversity gain on the order of the number of sensors. Optimal power allocation strategies are then considered for two cases: minimum distortion under power constraints, and minimum power under distortion constraints. In the first case, it is shown that by turning off bad sensors, i.e., sensors with bad channels and poor observation quality, an adaptive power gain can be achieved without sacrificing diversity gain. Here, the adaptive power gain is similar to the array gain achieved in Multiple-Input Single-Output (MISO) multi-antenna systems when channel conditions are known to the transmitter. In the second case, the sum power is minimized under a zero-outage estimation distortion constraint, and some related energy efficiency issues in sensor networks are discussed. Comment: To appear in IEEE Transactions on Signal Processing.
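    As an illustration of the fusion rule referred to above, the sketch below shows a minimal BLUE combiner for a common scalar observed in independent additive noise. It assumes the fusion center knows each sensor's effective noise variance (observation plus channel noise); the variable names and toy data are illustrative and not taken from the paper.

```python
import numpy as np

def blue_estimate(observations, noise_variances):
    """Best Linear Unbiased Estimator of a common scalar observed by several
    sensors in independent additive noise.

    Weights are inversely proportional to each sensor's effective noise
    variance, so noisier (or badly faded) sensors contribute less.
    """
    w = 1.0 / np.asarray(noise_variances, dtype=float)
    x = np.asarray(observations, dtype=float)
    estimate = np.sum(w * x) / np.sum(w)
    variance = 1.0 / np.sum(w)   # variance of the BLUE estimate
    return estimate, variance

# Example: five sensors observing theta = 2.0 with different noise levels
rng = np.random.default_rng(0)
theta = 2.0
sigma2 = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
obs = theta + rng.normal(0.0, np.sqrt(sigma2))
print(blue_estimate(obs, sigma2))
```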

    A Proper Orthogonal Decomposition-based inverse material parameter optimization method with applications to cardiac mechanics

    We are currently witnessing the advent of a revolutionary new tool for biomedical research. Complex mathematical models of "living cells" are being arranged into representative tissue assemblies and utilized to produce models of integrated tissue and organ function. This enables more sophisticated simulation tools that allow for greater insight into disease and guide the development of modern therapies. The development of realistic computer models of mechanical behaviour for soft biological tissues, such as cardiac tissue, depends on the formulation of appropriate constitutive laws and accurate identification of their material parameters. The main focus of this contribution is to investigate a Proper Orthogonal Decomposition with Interpolation (PODI) based method for inverse material parameter optimization in the field of cardiac mechanics. Material parameters are calibrated for a left-ventricular and a bi-ventricular human heart model during the diastolic filling phase. The calibration method combines a MATLAB-based Levenberg-Marquardt algorithm with the in-house PODI-based software ORION. The calibration results are then compared against the full-order solution, which is obtained using an in-house code based on the element-free Galerkin method and is assumed to be the exact solution. The results obtained from this novel calibration method demonstrate that PODI drastically reduces computation time while maintaining a level of accuracy similar to that of the conventional approach.
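    The in-house ORION solver and element-free Galerkin code are not publicly available, so the sketch below only illustrates the structure of such a calibration loop: a reduced-order forward model is wrapped in a residual function and handed to a Levenberg-Marquardt solver (here SciPy's, standing in for the MATLAB implementation the abstract mentions). The toy forward model and parameter names are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_material_parameters(reduced_forward_model, observed_displacements, p0):
    """Inverse parameter identification by least-squares fitting.

    `reduced_forward_model(params)` is assumed to return the displacement
    field predicted by a PODI-style reduced-order model for the diastolic
    filling simulation; here it is a placeholder for the in-house solver.
    """
    def residual(params):
        return reduced_forward_model(params) - observed_displacements

    # method="lm" selects SciPy's Levenberg-Marquardt implementation
    result = least_squares(residual, p0, method="lm")
    return result.x

# Toy stand-in forward model: two stiffness-like parameters, synthetic "data"
def toy_model(p):
    grid = np.linspace(0.0, 1.0, 20)
    return p[0] * grid + p[1] * grid**2

target = toy_model(np.array([1.5, 0.3]))
print(calibrate_material_parameters(toy_model, target, p0=np.array([1.0, 0.0])))
```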

    Estimation of Default Probabilities with Support Vector Machines

    Predicting default probabilities is important for firms and banks to operate successfully and to estimate their specific risks. There are many reasons to use nonlinear techniques for predicting bankruptcy from financial ratios. Here we propose the so-called Support Vector Machine (SVM) to estimate default probabilities of German firms. Our analysis is based on the Creditreform database. The results reveal that the eight most important predictors of bankruptcy for these German firms belong to the ratios of activity, profitability, liquidity, leverage, and the percentage of incremental inventories. Based on the performance measures, the SVM tool can predict a firm's default risk and identify insolvent firms more accurately than the benchmark logit model. The sensitivity investigation and a corresponding visualization tool reveal that the classifying ability of the SVM appears to be superior over a wide range of the SVM parameters. Based on the nonparametric Nadaraya-Watson estimator, the expected returns predicted by the SVM for regression have a significant positive linear relationship with the risk scores obtained for classification. This evidence is stronger than empirical results for the CAPM based on a linear regression and confirms that higher risks need to be compensated by higher potential returns.
    Keywords: Support Vector Machine, Bankruptcy, Default Probabilities Prediction, Expected Profitability, CAPM.
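    The Creditreform data are proprietary, so the sketch below only shows the general shape of such a model: an RBF-kernel SVM fitted to synthetic financial-ratio features, with Platt scaling used to turn decision values into default-probability estimates. The library choice (scikit-learn) and all data are stand-ins, not the study's actual setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for eight financial ratios (activity, profitability,
# liquidity, leverage, ...) and a binary solvency label.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# RBF-kernel SVM; probability=True enables Platt scaling so that
# predict_proba returns default-probability estimates.
model = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=1.0, gamma="scale", probability=True),
)
model.fit(X, y)
print(model.predict_proba(X[:5])[:, 1])   # estimated default probabilities
```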

    Sufficient Conditions for Feasibility and Optimality of Real-Time Optimization Schemes - I. Theoretical Foundations

    The idea of iterative process optimization based on collected output measurements, or "real-time optimization" (RTO), has gained much prominence in recent decades, with many RTO algorithms being proposed, researched, and developed. While the essential goal of these schemes is to drive the process to its true optimal conditions without violating any safety-critical, or "hard", constraints, no generalized, unified approach for guaranteeing this behavior exists. In this two-part paper, we propose an implementable set of conditions that can enforce these properties for any RTO algorithm. The first part of the work is dedicated to the theory behind the sufficient conditions for feasibility and optimality (SCFO), together with their basic implementation strategy. RTO algorithms enforcing the SCFO are shown to perform as desired in several numerical examples - allowing for feasible-side convergence to the plant optimum where algorithms not enforcing the conditions would fail. Comment: Working paper; supplementary material available at: http://infoscience.epfl.ch/record/18807
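    The SCFO themselves are laid out in the paper; the sketch below is only a generic version of the measurement-based loop they are meant to safeguard, with a simple filter-and-reject step standing in for the actual conditions. The plant, constraint, and gain values are all illustrative.

```python
def rto_loop(plant_cost, plant_constraint, u0, candidate_step,
             n_iter=20, filter_gain=0.3):
    """Generic iterative real-time optimization loop.

    At each iteration some RTO algorithm proposes a new input; the update is
    damped by a filter gain and accepted only if the measured hard constraint
    remains satisfied -- a crude stand-in for the feasibility guarantees the
    SCFO are designed to provide.
    """
    u = float(u0)
    for _ in range(n_iter):
        u_candidate = u + filter_gain * (candidate_step(u) - u)
        if plant_constraint(u_candidate) <= 0.0:   # hard constraint g(u) <= 0
            u = u_candidate
    return u, plant_cost(u)

# Toy plant: minimize (u - 3)^2 subject to u <= 2, where the proposal
# ignores the constraint and always suggests the unconstrained optimum.
cost = lambda u: (u - 3.0) ** 2
g = lambda u: u - 2.0
proposal = lambda u: 3.0
print(rto_loop(cost, g, u0=0.0, candidate_step=proposal))
```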

    A Polynomial Algorithm for a NP-hard to solve Optimization Problem

    Since Markowitz in 1952 described an efficient and practical way of finding the optimal portfolio allocation in the normally distributed case, a lot of progress in several directions has been made. The main objective of this thesis is to replace the original risk measure of the Markowitz setting by a more suitable one, Value-at-Risk (VaR). In addressing the optimal allocation problem in a slightly more general setting, while still allowing for a large number of different asset classes, an efficient algorithm is developed for finding the exact solution in the case of specially distributed losses. Applying this algorithm to even more general loss distributions yields a solution that does not necessarily match the VaR optimum exactly. However, in this case, upper bounds for the Euclidean distance between the exact optimum and the output of the proposed algorithm are given. An investigation of these upper bounds shows that, in general, the algorithm produces quite good approximations to the VaR optimum. Finally, an application of a stochastic branch & bound algorithm to the current problem is discussed.
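    The thesis's polynomial algorithm for specially distributed losses is not reproduced here; the sketch below only illustrates the objective being optimized, namely the portfolio Value-at-Risk evaluated by historical (scenario-based) simulation. The scenario data and confidence level are invented for the example.

```python
import numpy as np

def portfolio_var(weights, loss_scenarios, alpha=0.95):
    """Value-at-Risk of a portfolio at confidence level alpha.

    `loss_scenarios` is an (n_scenarios, n_assets) array of per-asset losses;
    the portfolio loss in each scenario is the weighted sum, and VaR is the
    alpha-quantile of that loss distribution.
    """
    portfolio_losses = loss_scenarios @ np.asarray(weights)
    return np.quantile(portfolio_losses, alpha)

# Toy example: three asset classes, 10,000 simulated loss scenarios
rng = np.random.default_rng(2)
losses = rng.normal(loc=[0.0, 0.01, 0.02],
                    scale=[0.01, 0.03, 0.06],
                    size=(10_000, 3))
w = np.array([0.5, 0.3, 0.2])            # allocation summing to one
print(portfolio_var(w, losses, alpha=0.95))
```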

    Integrated Optimization And Learning Methods Of Predictive And Prescriptive Analytics

    A typical decision problem optimizes one or more objectives subject to a set of constraints on its decision variables. Most real-world decision problems contain uncertain parameters. The exponential growth of data availability, ease of access to computational power, and more efficient optimization techniques have paved the way for machine learning tools to effectively predict these uncertain parameters. Traditional machine learning models measure the quality of predictions based on the closeness between true and predicted values and ignore the decision problems in which the predicted values will be treated as if they were the true values. Standard approaches that pass point estimates from machine learning models into decision problems as replacements for the uncertain parameters lose the connection between the predictive and prescriptive tasks. Recently developed methods to strengthen the bond between predictive and prescriptive tasks still rely on either a first-predict-then-optimize strategy or approximation techniques for integrating the two tasks. We develop an integrated framework for performing predictive and prescriptive analytics concurrently to realize the best prescriptive performance under uncertainty. This framework is applicable to all prescriptive tasks involving uncertainty. Further, it is scalable to handle integrated predictive and prescriptive tasks with reasonable computational effort and enables users to apply decomposition algorithms for large-scale problems. The framework also accommodates prediction tasks ranging from simple regression to more complex black-box neural network models. The integrated optimization framework is composed of two integration approaches. The first approach integrates regression-based prediction and mathematical programming-based prescription tasks as a bilevel program. While the lower-level problem prescribes decisions based on the predicted outcome for a specific observation, the upper level evaluates the quality of those decisions with respect to the true values. The upper-level objective can be considered a prescriptive error, and the goal is to minimize this error. In order to achieve the same performance on external (test) data sets as on internal (training) data sets, we offer different approaches to control the prescription generalization error associated with out-of-sample observations. We develop a decomposition algorithm for large-scale problems by leveraging a progressive hedging algorithm to solve the resulting bilevel formulation. The second approach integrates the learning of neural network-based prediction and optimization tasks as a nested neural network. While the predictive neural network proposes decisions based on predicted outcomes, the prescriptive neural network evaluates the quality of the predicted decisions with respect to the true values. We also propose a weight initialization process for nested neural networks and build a decomposition algorithm for large-scale problems. Our results for the example problems validate the performance of the proposed integrated predictive and prescriptive optimization and training frameworks. On custom-generated synthetic data sets, the proposed methods surpass all of the first-predict-then-optimize approaches and recently developed approximate integration methods on both in-sample and out-of-sample data sets. We also observe how the proposed approach to controlling the generalization error improves results on out-of-sample data sets. Custom-generated synthetic data pairs at different levels of correlation and non-linearity graphically show how the different methods converge to each other.
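    The bilevel and nested-neural-network formulations above are specific to the dissertation and are not reproduced here. The sketch below only illustrates the underlying idea of training a predictor against the downstream prescriptive cost rather than the prediction error, using a newsvendor-style cost, a linear forecast, and subgradient descent; the cost coefficients and data are invented for the example.

```python
import numpy as np

def prescriptive_loss_grad(w, X, d, c_under=4.0, c_over=1.0):
    """Subgradient of the average newsvendor cost with respect to the linear
    weights w, when the prescribed order quantity is the forecast z = X @ w."""
    z = X @ w
    # d(cost)/dz: -c_under where we under-ordered, +c_over where we over-ordered
    dcost_dz = np.where(z < d, -c_under, c_over)
    return X.T @ dcost_dz / len(d)

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(400), rng.normal(size=400)])     # intercept + one feature
d = 10.0 + 3.0 * X[:, 1] + rng.normal(scale=1.0, size=400)    # true demand

# "Integrated" training: update the predictor with the prescriptive subgradient
w = np.zeros(2)
for _ in range(2000):
    w -= 0.05 * prescriptive_loss_grad(w, X, d)

# Because shortages cost more than leftovers (4 vs. 1), the learned forecast is
# biased upward relative to a plain least-squares fit of the demand.
print(w, np.linalg.lstsq(X, d, rcond=None)[0])
```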

    Robust construction of differential emission measure profiles using a regularized maximum likelihood method

    Extreme-ultraviolet (EUV) observations provide considerable insight into evolving physical conditions in the active solar atmosphere. For a prescribed density and temperature structure, it is straightforward to construct the corresponding differential emission measure profile ξ(T), such that ξ(T) dT is proportional to the emissivity from plasma in the temperature range [T, T + dT]. Here we study the inverse problem of obtaining a valid ξ(T) profile from a set of EUV spectral line intensities observed at a pixel within a solar image. Our goal is to introduce and develop a regularized maximum likelihood (RML) algorithm designed to address the mathematically ill-posed problem of constructing differential emission measure profiles from a discrete set of EUV intensities in specified wavelength bands, specifically those observed by the Atmospheric Imaging Assembly (AIA) on the NASA Solar Dynamics Observatory. The RML method combines features of maximum likelihood and regularized approaches used by other authors. It is also guaranteed to produce a positive definite differential emission measure profile. We evaluate and compare the effectiveness of the method against other published algorithms, using both simulated data generated from parametric differential emission profile forms, and AIA data from a solar eruptive event on 2010 November 3. Similarities and differences between the differential emission measure profiles and maps reconstructed by the various algorithms are discussed. The RML inversion method is mathematically rigorous, computationally efficient, and produces acceptable measures of performance in the following three key areas: fidelity to the data, accuracy in the reconstruction, and robustness in the presence of data noise. As such, it shows considerable promise for computing differential emission measure profiles from datasets of discrete spectral lines.
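    The paper's RML algorithm itself is not reproduced here. As a rough illustration of the kind of inversion involved, the sketch below recovers a non-negative, regularized ξ profile from band intensities g ≈ K ξ using Tikhonov-regularized non-negative least squares; the Gaussian "response" matrix is a made-up stand-in for the actual AIA temperature response functions, and the method differs from the authors' regularized maximum likelihood approach.

```python
import numpy as np
from scipy.optimize import nnls

def regularized_positive_dem(K, g, lam=0.1):
    """Recover a non-negative DEM-like profile xi from band intensities g,
    where g is approximately K @ xi, by solving the Tikhonov-regularized
    non-negative least-squares problem:
        min ||K xi - g||^2 + lam ||xi||^2,   subject to xi >= 0.
    """
    n_temps = K.shape[1]
    K_aug = np.vstack([K, np.sqrt(lam) * np.eye(n_temps)])
    g_aug = np.concatenate([g, np.zeros(n_temps)])
    xi, _ = nnls(K_aug, g_aug)
    return xi

# Toy setup: 6 broad "bands" (stand-ins for AIA responses) over 25 temperature bins
logT = np.linspace(5.5, 7.5, 25)
centers = np.linspace(5.8, 7.2, 6)
K = np.exp(-0.5 * ((logT[None, :] - centers[:, None]) / 0.25) ** 2)

xi_true = np.exp(-0.5 * ((logT - 6.4) / 0.15) ** 2)          # single-peaked DEM
g = K @ xi_true + 0.01 * np.random.default_rng(4).normal(size=6)
print(regularized_positive_dem(K, g, lam=0.05).round(3))
```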