
    Imposing Economic Constraints in Nonparametric Regression: Survey, Implementation and Extension

    Economic conditions such as convexity, homogeneity, homotheticity, and monotonicity are important assumptions on, or consequences of assumptions on, the economic functionals to be estimated. Recent research has seen a renewed interest in imposing constraints in nonparametric regression. We survey the available methods in the literature, discuss the challenges that arise when implementing these methods empirically, and extend an existing method to handle general nonlinear constraints. A heuristic discussion of the empirical implementation of methods that use sequential quadratic programming is provided, and simulated and empirical evidence on the differences between constrained and unconstrained nonparametric regression surfaces is presented. Keywords: identification, concavity, Hessian, constraint weighted bootstrapping, earnings function
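
    As a concrete illustration of the sequential quadratic programming route mentioned above, the sketch below imposes monotonicity on a Nadaraya-Watson estimator by re-weighting observations, in the spirit of constraint weighted bootstrapping, and solves the resulting problem with an SQP routine (SciPy's SLSQP). It is a minimal sketch under assumed choices: the Gaussian kernel, the bandwidth h, the evaluation grid, and the simulated data are all illustrative, not taken from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def nw_fit(xg, x, y, p, h=0.3):
        """Weighted Nadaraya-Watson estimate on grid xg with observation weights p."""
        k = np.exp(-0.5 * ((xg[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
        w = k * p[None, :]
        return (w @ y) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 80))
    y = np.log(1 + 5 * x) + rng.normal(0, 0.2, 80)       # monotone truth plus noise
    xg = np.linspace(0.05, 0.95, 40)
    p0 = np.full(x.size, 1.0 / x.size)                   # uniform observation weights

    def objective(p):
        return np.sum((p - p0) ** 2)                     # stay close to uniform weights

    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        # require the re-weighted fit to be nondecreasing on the grid
        {"type": "ineq", "fun": lambda p: np.diff(nw_fit(xg, x, y, p))},
    ]
    res = minimize(objective, p0, method="SLSQP", constraints=cons,
                   bounds=[(0, 1)] * x.size)
    m_constrained = nw_fit(xg, x, y, res.x)              # monotone regression curve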

    A hybrid multigrid technique for computing steady-state solutions to supersonic flows

    Recently, Li and Sanders introduced a class of finite difference schemes to approximate generally discontinuous solutions to hyperbolic systems of conservation laws. These equations have the form u_t + ∇·f(u) = 0, together with relevant boundary conditions. When modelling hypersonic spacecraft reentry, the differential equations above are frequently given by the compressible Euler equations coupled with a nonequilibrium chemistry model. For these applications, steady-state solutions are often sought, and many tens to hundreds of supercomputer hours can be devoted to a single simulation in three space dimensions. The primary difficulty is the inability to rapidly and reliably capture the steady state. In these notes, we demonstrate that a particular variant of the schemes presented can be combined with a particular multigrid approach to capture steady-state solutions to the compressible Euler equations in one space dimension. We show that the rate of convergence to steady state from this multigrid implementation is vastly superior to the traditional approach of artificial time relaxation. Moreover, we demonstrate virtual grid independence: the rate of convergence does not depend on the degree of spatial grid refinement.
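
    To make the grid-independence claim concrete, here is a minimal sketch of a geometric multigrid V-cycle for a 1-D model steady-state problem, -u'' = f on (0,1) with u(0) = u(1) = 0. This is not the authors' scheme for the Euler equations; the weighted Jacobi smoother, full-weighting restriction, linear interpolation, and cycle counts are illustrative assumptions. The residual after each V-cycle drops by a roughly fixed factor regardless of the grid size, which is the behaviour contrasted above with artificial time relaxation.

    import numpy as np

    def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
        """Weighted Jacobi sweeps for -u'' = f (interior points only)."""
        for _ in range(sweeps):
            u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] - 2 * u[1:-1] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        n = u.size - 1                          # number of intervals (a power of two)
        u = smooth(u, f, h)
        if n <= 2:
            return smooth(u, f, h, sweeps=50)   # tiny grid: relax to convergence
        r = residual(u, f, h)
        rc = np.zeros(n // 2 + 1)               # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
        e = np.zeros_like(u)                    # linear interpolation of the correction
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)

    n = 256
    h = 1.0 / n
    xs = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * xs)         # exact solution is sin(pi x)
    u = np.zeros(n + 1)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(residual(u, f, h))))    # residual shrinks by a near-constant factor per cycle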

    Multiple feature-enhanced synthetic aperture radar imaging

    Non-quadratic regularization based image formation is a recently proposed framework for feature-enhanced radar imaging. Specific image formation techniques in this framework have so far focused on enhancing one type of feature, such as strong point scatterers or smooth regions. However, many scenes contain a number of such features. We develop an image formation technique that simultaneously enhances multiple types of features by posing the problem as one of sparse signal representation based on overcomplete dictionaries. Because the reflectivities in SAR are complex-valued, our new approach is designed to sparsely represent the magnitude of the complex-valued scattered field in terms of multiple features, which turns image reconstruction into a joint optimization problem over the representation of the magnitude and the phase of the underlying field reflectivities. We formulate the mathematical framework needed for this method and propose an iterative solution for the corresponding joint optimization problem. We demonstrate the effectiveness of this approach on various SAR images.
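
    The sketch below illustrates the dictionary idea on a real-valued 1-D magnitude profile: spike (identity) atoms model point scatterers, wide Gaussian atoms model smooth regions, and sparse coefficients over the combined overcomplete dictionary are recovered with iterative soft thresholding. It is only an illustration of sparse representation over a composite dictionary; the dictionary, the l1 solver, and the toy data are assumptions, and the joint magnitude-phase optimization of the actual method is not reproduced.

    import numpy as np

    n = 200
    x = np.arange(n)
    truth = np.zeros(n)
    truth[50], truth[120] = 4.0, 3.0                        # strong point scatterers
    truth += 1.5 * np.exp(-0.5 * ((x - 150) / 15.0) ** 2)   # a smooth region
    y = truth + 0.1 * np.random.default_rng(1).normal(size=n)

    # Overcomplete dictionary: identity atoms (spikes) plus wide Gaussian atoms (smooth parts)
    centers = np.arange(0, n, 5)
    gauss = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 10.0) ** 2)
    D = np.hstack([np.eye(n), gauss])

    lam = 0.05
    step = 1.0 / np.linalg.norm(D, 2) ** 2                  # 1/L step for ISTA
    a = np.zeros(D.shape[1])
    for _ in range(500):                                    # iterative soft thresholding
        a = a - step * (D.T @ (D @ a - y))
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    recon = D @ a                                           # feature-enhanced magnitude estimate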

    Regularisation methods for imaging from electrical measurements

    In Electrical Impedance Tomography (EIT) the conductivity of an object is estimated from boundary measurements. An array of electrodes is attached to the surface of the object, current stimuli are applied via these electrodes, and the resulting voltages are measured. The process of estimating the conductivity as a function of space inside the object from voltage measurements at the surface is called reconstruction. Mathematically the EIT reconstruction is a nonlinear inverse problem, the stable solution of which requires regularisation methods. Most common regularisation methods impose that the reconstructed image should be smooth. Such methods confer stability to the reconstruction process, but limit the capability of describing sharp variations in the sought parameter. In this thesis two new methods of regularisation are proposed. The first method, Gaussian anisotropic regularisation, enhances the reconstruction of sharp conductivity changes occurring at the interface between a contrasting object and the background. As such changes are step changes, reconstruction with traditional smoothing regularisation techniques is unsatisfactory. The Gaussian anisotropic filtering works by incorporating prior structural information. The approximate knowledge of the shapes of contrasts allows us to relax the smoothness in the direction normal to the expected boundary. The construction of Gaussian regularisation filters that express such directional properties on the basis of the structural information is discussed, and the results of numerical experiments are analysed. The method gives good results when the actual conductivity distribution is in accordance with the prior information. When the conductivity distribution violates the prior information the method is still capable of properly locating the regions of contrast. The second part of the thesis is concerned with regularisation via the total variation functional. This functional allows the reconstruction of discontinuous parameters. The properties of the functional are briefly introduced, and an application to image denoising is shown. As the functional is non-differentiable, numerical difficulties are encountered in its use. The aim is therefore to propose an efficient numerical implementation for application in EIT. Several well known optimisation methods are analysed as possible candidates, by theoretical considerations and by numerical experiments, and are shown to be inefficient. The application of recent optimisation methods called primal-dual interior point methods is then analysed by theoretical considerations and by numerical experiments, and an efficient and stable algorithm is developed. Numerical experiments demonstrate the capability of the algorithm in reconstructing sharp conductivity profiles.
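
    Since the total variation functional is introduced via image denoising, a small sketch of that model problem may help. It minimises 0.5*||u - y||^2 + lam*||Du||_1 for a 1-D signal with a first-order primal-dual (Chambolle-Pock) scheme rather than the primal-dual interior point method developed in the thesis; the step sizes, the piecewise-constant test signal, and the noise level are illustrative assumptions.

    import numpy as np

    def tv_denoise_1d(y, lam=1.0, n_iter=300, tau=0.25, sigma=0.25):
        """Minimise 0.5*||u - y||^2 + lam*||Du||_1, with D the forward difference."""
        u = y.copy()
        u_bar = u.copy()
        p = np.zeros(y.size - 1)                                      # dual variable
        for _ in range(n_iter):
            p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)        # dual projection
            div = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))  # D^T p
            u_new = (u - tau * div + tau * y) / (1.0 + tau)           # primal proximal step
            u_bar = 2.0 * u_new - u                                   # over-relaxation
            u = u_new
        return u

    rng = np.random.default_rng(2)
    signal = np.concatenate([np.zeros(60), np.ones(60), 0.3 * np.ones(60)])
    noisy = signal + 0.15 * rng.normal(size=signal.size)
    denoised = tv_denoise_1d(noisy, lam=0.5)       # sharp jumps preserved, noise suppressed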

    A comparison of two closely-related approaches to aerodynamic design optimization

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain the gradients needed for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods appear quite different, they are shown to differ essentially in the order in which the continuous problem is discretized and the calculus is applied. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
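
    A minimal sketch of the cost comparison behind both approaches, assuming a linear model problem rather than the paper's duct flow: for a discrete state equation A u = B a and a cost J(u), a single adjoint solve yields the whole design gradient, whereas finite differences need one extra state solve per design variable. The matrix, source term, and cost function below are illustrative assumptions.

    import numpy as np

    n, m = 50, 5                                     # state dimension, number of design variables
    rng = np.random.default_rng(3)
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))             # fixed discrete operator
    B = rng.normal(size=(n, m))                      # source depends linearly on the design a
    u_target = np.sin(np.linspace(0.0, np.pi, n))

    def solve_state(a):
        return np.linalg.solve(A, B @ a)

    def cost(a):
        u = solve_state(a)
        return 0.5 * np.sum((u - u_target) ** 2)

    def adjoint_gradient(a):
        u = solve_state(a)
        lam = np.linalg.solve(A.T, u - u_target)     # one adjoint solve
        return B.T @ lam                             # dJ/da = B^T lambda

    a0 = rng.normal(size=m)
    g_adj = adjoint_gradient(a0)
    g_fd = np.array([(cost(a0 + 1e-6 * np.eye(m)[k]) - cost(a0)) / 1e-6 for k in range(m)])
    print(np.max(np.abs(g_adj - g_fd)))              # agreement to finite-difference accuracy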

    Computational aspects of a three dimensional non-intrusive particle motion tracking system

    Development of a technique for non-intrusive particle motion tracking in three dimensions is considered. This technique is based on the principle of magnetic induction. In particular, the determination of the position and orientation of the particle from the information gathered is the principal focus of this thesis. The development of such a system is motivated by the need to understand the flow patterns of granular material, which is of critical importance in dealing with problems associated with bulk solids flows occurring in almost all industries and in natural geological events. A study of current diagnostic techniques reveals the limitations in their ability to track the motion of an individual particle in a mass flow of other particles; these techniques fail when the particle must be tracked in three dimensions in a non-intrusive manner. The diagnostic technique we consider results in an unconstrained minimization problem for an overdetermined system of nonlinear equations. The Levenberg-Marquardt algorithm is used to solve such a system to predict the location of the particle. The viability of this technique is established through simulated and actual experimental results. Practical problems such as the effect of noise are considered. Directions for future work are provided.
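
    The sketch below illustrates the inverse step described above in a heavily simplified form: a particle position is recovered from an overdetermined set of nonlinear sensor readings by Levenberg-Marquardt least squares. The 1/r^3 signal model, the eight-sensor layout, and the noise level are illustrative assumptions, not the actual magnetic-induction forward model, and orientation is not estimated here.

    import numpy as np
    from scipy.optimize import least_squares

    sensors = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                        [1., 1., 0.], [1., 0., 1.], [0., 1., 1.], [1., 1., 1.]])

    def forward(pos):
        """Signal at each sensor for a particle at pos (simplified 1/r^3 decay)."""
        r = np.linalg.norm(sensors - pos, axis=1)
        return 1.0 / r ** 3

    true_pos = np.array([0.3, 0.6, 0.4])
    rng = np.random.default_rng(4)
    measured = forward(true_pos) * (1.0 + 0.01 * rng.normal(size=len(sensors)))

    def residuals(pos):
        return forward(pos) - measured               # 8 equations, 3 unknowns

    fit = least_squares(residuals, x0=np.array([0.5, 0.5, 0.5]), method="lm")
    print(fit.x)                                     # estimated particle position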

    A Flexible Method for Empirically Estimating Probability Functions

    This paper presents a hyperbolic trigonometric (HT) transformation procedure for empirically estimating a cumulative distribution function (cdf), from which the probability density function (pdf) can be obtained by differentiation. Maximum likelihood (ML) is the appropriate estimation technique, but a particularly appealing feature of the HT transformation as opposed to other zero-one transformations is that the transformed cdf can be fitted with ordinary least squares (OLS) regression. Although the OLS estimates are biased and inconsistent, they are usually very close to the ML estimates; thus, using the OLS estimates as starting values greatly facilitates the numerical search procedures needed to obtain ML estimates, which have desirable asymptotic properties. The procedure is no more difficult to use than unconstrained nonlinear regression. Advantages of the procedure over alternative procedures for fitting probability functions are discussed in the manuscript. Use of the procedure is illustrated by application to two sets of yield response data. Keywords: Research Methods/Statistical Methods
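
    A minimal sketch of the two-step idea, under the assumption that the transformation has the form F(y) = 0.5*(1 + tanh(a + b*y)) (the exact HT form used in the paper may differ): the transformed empirical cdf is linear in (a, b) and is fitted by OLS, and those estimates start a numerical maximum likelihood search for the implied density. The simulated yield data and the optimiser choice are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    y = np.sort(rng.normal(loc=50.0, scale=8.0, size=200))    # stand-in for yield data

    # Step 1: OLS on the transformed empirical cdf, since arctanh(2F - 1) = a + b*y
    F_hat = (np.arange(1, y.size + 1) - 0.5) / y.size
    z = np.arctanh(2.0 * F_hat - 1.0)
    X = np.column_stack([np.ones_like(y), y])
    a_ols, b_ols = np.linalg.lstsq(X, z, rcond=None)[0]

    # Step 2: ML for the implied density f(y) = 0.5*b*(1 - tanh(a + b*y)^2)
    def neg_log_lik(theta):
        a, b = theta
        if b <= 0:
            return np.inf
        t = np.tanh(a + b * y)
        return -np.sum(np.log(0.5 * b * (1.0 - t ** 2)))

    ml = minimize(neg_log_lik, x0=[a_ols, b_ols], method="Nelder-Mead")
    print("OLS start:", a_ols, b_ols, "ML estimates:", ml.x)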

    Globally Convergent Algorithms for Maximum a Posteriori Transmission Tomography

    This paper reviews and compares three maximum likelihood algorithms for transmission tomography. One of these algorithms is the EM algorithm, one is based on a convexity argument devised by De Pierro in the context of emission tomography, and one is an ad hoc gradient algorithm. The algorithms enjoy desirable local and global convergence properties and combine gracefully with Bayesian smoothing priors. Preliminary numerical testing of the algorithms on simulated data suggests that the convex algorithm and the ad hoc gradient algorithm are computationally superior to the EM algorithm; this superiority stems from the larger number of exponentiations required by the EM algorithm. The convex and gradient algorithms are well adapted to parallel computing. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86016/1/Fessler101.pd
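
    For orientation, the sketch below writes down the Poisson transmission log-likelihood that these algorithms maximise and takes plain gradient-ascent steps on it with a nonnegativity projection. It illustrates the likelihood model only, not the paper's EM, convex, or specific gradient algorithms; the toy system matrix, blank-scan counts, and step size are assumptions.

    import numpy as np

    rng = np.random.default_rng(6)
    n_pix, n_rays = 64, 200
    A = rng.uniform(0.0, 0.2, size=(n_rays, n_pix))     # toy system matrix (ray intersection lengths)
    mu_true = np.abs(rng.normal(0.1, 0.05, n_pix))      # true attenuation map
    b = np.full(n_rays, 1.0e4)                          # blank-scan (unattenuated) counts
    y = rng.poisson(b * np.exp(-A @ mu_true))           # transmission measurements

    def log_lik(mu):
        line = A @ mu
        return np.sum(-b * np.exp(-line) - y * line)    # Poisson log-likelihood, constants dropped

    mu = np.full(n_pix, 0.05)
    step = 1.0e-7
    for _ in range(500):
        grad = A.T @ (b * np.exp(-A @ mu) - y)          # gradient of the log-likelihood
        mu = np.maximum(mu + step * grad, 0.0)          # ascent step, keep attenuation nonnegative
    print(log_lik(mu))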