
    A Path Algorithm for Constrained Estimation

    Many least squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current paper proposes a new path following algorithm for quadratic programming based on exact penalization. Similar penalties arise in $\ell_1$ regularization in model selection. Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to $\infty$, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the lasso and generalized lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. Comment: 26 pages, 5 figures
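    As a rough illustration of the exact penalty idea (not the paper's sweep-operator path algorithm), the Python sketch below re-solves an absolute-value-penalized least squares problem on an increasing grid of penalty constants; the toy problem, the use of scipy.optimize.minimize, and the penalty grid are assumptions made purely for the example.

        # Illustrative sketch of exact (absolute-value) penalization for a small
        # constrained least squares problem: minimize ||x - b||^2 subject to
        # x1 + x2 + x3 <= 1.  This is NOT the paper's sweep-operator path algorithm;
        # it simply re-solves the penalized problem on a grid of penalty constants
        # and shows the solution hitting the constraint at a finite penalty value.
        import numpy as np
        from scipy.optimize import minimize

        b = np.array([2.0, 2.0, 2.0])      # unconstrained solution is b itself
        c = np.ones(3)                     # constraint: c @ x <= 1

        def penalized_objective(x, rho):
            violation = max(c @ x - 1.0, 0.0)          # absolute-value (exact) penalty
            return np.sum((x - b) ** 2) + rho * violation

        x = b.copy()                                   # start at the unconstrained solution
        for rho in [0.0, 1.0, 2.0, 5.0, 10.0]:         # increasing penalty constants
            x = minimize(penalized_objective, x, args=(rho,), method="Nelder-Mead").x
            print(f"rho = {rho:5.1f}  x = {np.round(x, 3)}  c@x - 1 = {c @ x - 1:+.3f}")

    For this toy problem the constraint becomes active once the penalty constant exceeds the optimal multiplier (here 10/3), after which the penalized solution coincides with the constrained one.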

    Atypical late-time singular regimes accurately diagnosed in stagnation-point-type solutions of 3D Euler flows

    We revisit, both numerically and analytically, the finite-time blowup of the infinite-energy solution of 3D Euler equations of stagnation-point-type introduced by Gibbon et al. (1999). By employing the method of mapping to regular systems, presented in Bustamante (2011) and extended to the symmetry-plane case by Mulungye et al. (2015), we establish a curious property of this solution that was not observed in early studies: before but near singularity time, the blowup goes from a fast transient to a slower regime that is well resolved spectrally, even at mid-resolutions of $512^2$. This late-time regime has an atypical spectrum: it is Gaussian rather than exponential in the wavenumbers. The analyticity-strip width decays to zero in a finite time, albeit so slowly that it remains well above the collocation-point scale for all simulation times $t < T^* - 10^{-9000}$, where $T^*$ is the singularity time. Reaching such a proximity to singularity time is not possible in the original temporal variable, because floating point double precision ($\approx 10^{-16}$) creates a `machine-epsilon' barrier. Due to this limitation on the \emph{original} independent variable, the mapped variables now provide an improved assessment of the relevant blowup quantities, crucially with acceptable accuracy at an unprecedented closeness to the singularity time: $T^* - t \approx 10^{-140}$.
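    The analyticity-strip diagnostic underlying these statements can be illustrated with a short sketch (not the authors' code): fit the logarithm of a synthetic energy spectrum with an exponential-tail model, whose decay rate plays the role of the analyticity-strip width, and with a Gaussian-tail model, then compare the residuals. The synthetic spectrum, the fitting forms, and the wavenumber range are assumptions for illustration only.

        # Illustrative analyticity-strip-style diagnostic (not the authors' code).
        # Fit log E(k) of a synthetic spectrum with an exponential-tail model,
        #   log E(k) ~ c0 + c1*log(k) - 2*delta*k   (delta = analyticity-strip width),
        # and with a Gaussian-tail model,
        #   log E(k) ~ g0 + g1*log(k) - a*k**2,
        # and compare which model describes the tail better.
        import numpy as np

        k = np.arange(1.0, 257.0)
        E = k**-4 * np.exp(-(0.05 * k) ** 2)     # synthetic spectrum with a Gaussian tail
        logE = np.log(E)

        def fit(design):
            coef, *_ = np.linalg.lstsq(design, logE, rcond=None)
            rss = np.sum((design @ coef - logE) ** 2)
            return coef, rss

        coef_exp, rss_exp = fit(np.column_stack([np.ones_like(k), np.log(k), -2.0 * k]))
        coef_gau, rss_gau = fit(np.column_stack([np.ones_like(k), np.log(k), -k**2]))

        print("exponential-tail fit residual:", rss_exp)
        print("Gaussian-tail fit residual:   ", rss_gau)
        print("delta from the exponential-tail fit:", coef_exp[2])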

    On relative errors of floating-point operations: optimal bounds and applications

    Rounding error analyses of numerical algorithms are most often carried out via repeated applications of the so-called standard models of floating-point arithmetic. Given a round-to-nearest function fl and barring underflow and overflow, such models bound the relative errors E_1(t) = |t - fl(t)|/|t| and E_2(t) = |t - fl(t)|/|fl(t)| by the unit roundoff u. This paper investigates the possibility and the usefulness of refining these bounds, both in the case of an arbitrary real t and in the case where t is the exact result of an arithmetic operation on some floating-point numbers. We show that E_1(t) and E_2(t) are optimally bounded by u/(1 + u) and u, respectively, when t is real or, under mild assumptions on the base and the precision, when t = x ± y or t = xy with x, y two floating-point numbers. We prove that while this remains true for division in base β > 2, smaller, attainable bounds can be derived for both division in base β = 2 and square root. This set of optimal bounds is then applied to the rounding error analysis of various numerical algorithms: in all cases, we obtain significantly shorter proofs of the best-known error bounds for such algorithms, and/or improvements on these bounds themselves.
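    As a concrete (and purely illustrative) companion to these bounds, the sketch below empirically checks E_1(t) <= u/(1 + u) and E_2(t) <= u in IEEE binary64 arithmetic, where the unit roundoff is u = 2^-53, using Python's exact rational arithmetic on products of random doubles; the sampling scheme is an assumption, not part of the paper.

        # Empirical check (illustrative only, binary64, round-to-nearest) of the
        # bounds discussed above: E1(t) = |t - fl(t)|/|t| <= u/(1 + u) and
        # E2(t) = |t - fl(t)|/|fl(t)| <= u, with u = 2**-53 the unit roundoff.
        import random
        from fractions import Fraction

        u = Fraction(1, 2**53)
        max_e1 = Fraction(0)
        max_e2 = Fraction(0)

        random.seed(0)
        for _ in range(10_000):
            x = random.uniform(-1.0, 1.0)
            y = random.uniform(-1.0, 1.0)
            t = Fraction(x) * Fraction(y)      # exact product of two doubles
            fl_t = Fraction(x * y)             # correctly rounded double product
            if t == 0:
                continue
            max_e1 = max(max_e1, abs(t - fl_t) / abs(t))
            max_e2 = max(max_e2, abs(t - fl_t) / abs(fl_t))

        print("max E1 =", float(max_e1), "  bound u/(1+u) =", float(u / (1 + u)))
        print("max E2 =", float(max_e2), "  bound u       =", float(u))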

    Accurate and Efficient Expression Evaluation and Linear Algebra

    We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: Most of our results will use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as $x+y+z$, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings. Comment: 49 pages, 6 figures, 1 table
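    One classical way to realize an 'accurate operation' such as x+y+z outside plain Traditional Model arithmetic is via error-free transformations; the sketch below uses the standard TwoSum transformation for a compensated three-term sum. This is meant only as an illustration of such a primitive, not as the paper's decision procedure or library.

        # Illustrative building block for "accurate operations" such as x+y+z:
        # the classical TwoSum error-free transformation, one standard way such
        # primitives are implemented (this is not taken from the surveyed paper).
        def two_sum(a, b):
            """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
            s = a + b
            bb = s - a
            e = (a - (s - bb)) + (b - bb)
            return s, e

        def accurate_three_sum(x, y, z):
            """Compute x + y + z with a compensated final addition."""
            s, e1 = two_sum(x, y)
            t, e2 = two_sum(s, z)
            return t + (e1 + e2)               # fold the rounding errors back in

        # Example where naive left-to-right evaluation loses all significant digits:
        x, y, z = 1e16, 1.0, -1e16
        print("naive:   ", (x + y) + z)        # prints 0.0 in double precision
        print("accurate:", accurate_three_sum(x, y, z))   # prints 1.0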

    Adaptive smartphone-based sensor fusion for estimating competitive rowing kinematic metrics.

    Competitive rowing highly values boat position and velocity data for real-time feedback during training, racing and post-training analysis. The ubiquity of smartphones with embedded position (GPS) and motion (accelerometer) sensors motivates their possible use in these tasks. In this paper, we investigate the use of two real-time digital filters to achieve highly accurate yet reasonably priced measurements of boat speed and distance traveled. Both filters combine acceleration and location data to estimate boat distance and speed; the first uses a complementary frequency response-based filter technique, the second a Kalman filter formalism that includes adaptive, real-time estimates of effective accelerometer bias. The estimates of distance and speed from both filters were validated and compared with accurate reference data from a differential GPS system with better than 1 cm precision and a 5 Hz update rate, in experiments using two subjects (an experienced club-level rower and an elite rower) in two different boats on a 300 m course. Compared with single channel (smartphone GPS only) measures of distance and speed, the complementary filter improved the accuracy and precision of boat speed, boat distance traveled, and distance per stroke by 44%, 42%, and 73%, respectively, while the Kalman filter improved the accuracy and precision of boat speed, boat distance traveled, and distance per stroke by 48%, 22%, and 82%, respectively. Both filters demonstrate promise as general purpose methods to substantially improve estimates of important rowing performance metrics.
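    A minimal complementary-filter sketch of the kind of accelerometer/GPS fusion described above is shown below; the sample rates, the filter gain, and the synthetic stroke-like data are assumptions for illustration and are not the parameters or data used in the study.

        # Minimal complementary-filter sketch for fusing accelerometer and GPS speed
        # (illustrative only; rates, gain, and synthetic data are assumptions).
        import numpy as np

        dt = 0.02                    # accelerometer sample period (50 Hz, assumed)
        gps_every = 10               # assume a GPS speed fix every 10 samples (5 Hz)
        alpha = 0.98                 # trust the integrated accelerometer at high frequency

        t = np.arange(0.0, 60.0, dt)
        true_speed = 4.0 + 0.5 * np.sin(2 * np.pi * 0.6 * t)   # stroke-like oscillation
        accel = np.gradient(true_speed, dt) + np.random.default_rng(1).normal(0.0, 0.3, t.size)
        gps_speed = true_speed + np.random.default_rng(2).normal(0.0, 0.25, t.size)

        est = np.zeros_like(t)
        est[0] = gps_speed[0]
        for i in range(1, t.size):
            predicted = est[i - 1] + accel[i] * dt   # high-frequency path: integrate acceleration
            if i % gps_every == 0:                   # low-frequency path: blend in the GPS fix
                est[i] = alpha * predicted + (1.0 - alpha) * gps_speed[i]
            else:
                est[i] = predicted

        gps_err = gps_speed[::gps_every] - true_speed[::gps_every]
        fused_err = est - true_speed
        print("RMS speed error, GPS alone           :", np.sqrt(np.mean(gps_err ** 2)))
        print("RMS speed error, complementary filter:", np.sqrt(np.mean(fused_err ** 2)))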

    Enhancing spatial resolution of remotely sensed data for mapping freshwater environments

    Freshwater environments are important for ecosystem services and biodiversity. These environments are subject to many natural and anthropogenic changes, which influence their quality; therefore, regular monitoring is required for their effective management. High biotic heterogeneity, elongated land/water interaction zones, and logistic difficulties with access make field-based monitoring on a large scale expensive, inconsistent and often impractical. Remote sensing (RS) is an established mapping tool that overcomes these barriers. However, complex and heterogeneous vegetation and spectral variability due to water make freshwater environments challenging to map using remote sensing technology. Satellite images available for New Zealand were reviewed in terms of cost and spectral and spatial resolution. Particularly promising image data sets for freshwater mapping include QuickBird and SPOT-5. However, for mapping freshwater environments a combination of images is required to obtain high spatial, spectral, radiometric, and temporal resolution. Data fusion (DF) is a framework of data processing tools and algorithms that combines images to improve spectral and spatial qualities. A range of DF techniques were reviewed and tested for performance using panchromatic and multispectral QuickBird (QB) images of a semi-aquatic environment on the southern shores of Lake Taupo, New Zealand. In order to discuss the mechanics of different DF techniques, a classification consisting of three groups was used: (i) spatially-centric, (ii) spectrally-centric and (iii) hybrid. Subtract resolution merge (SRM) is a hybrid technique, and this research demonstrated that for a semi-aquatic QuickBird image it outperformed Brovey transformation (BT), principal component substitution (PCS), local mean and variance matching (LMVM), and optimised high pass filter addition (OHPFA). However, some limitations were identified with SRM, which included the requirement for predetermined band weights and the over-representation of spatial edges in the NIR bands due to their high spectral variance. This research developed three modifications to the SRM technique that addressed these limitations. These were tested on QuickBird, SPOT-5, and Vexcel aerial digital images, as well as a scanned coloured aerial photograph. A visual qualitative assessment and a range of spectral and spatial quantitative metrics were used to evaluate these modifications; these included spectral correlation and root mean squared error (RMSE), Sobel filter based spatial edge RMSE, and unsupervised classification. The first modification addressed the issue of predetermined spectral weights and explored two alternative regression methods (Least Absolute Deviation and Ordinary Least Squares) to derive image-specific band weights for use in SRM. Both methods were found equally effective; however, OLS was preferred as it was more efficient than LAD in computing the band weights. The second modification used a pixel block averaging function on high resolution panchromatic images to derive spatial edges for data fusion. This eliminated the need for spectral band weights, minimised spectral infidelity, and enabled the fusion of multi-platform data. The third modification addressed the issue of over-represented spatial edges by introducing a sophisticated contrast and luminance index to develop a new normalising function. This improved the spatial representation of the NIR band, which is particularly important for mapping vegetation.
    A combination of the second and third modifications of SRM was effective in simultaneously minimising the overall spectral infidelity and undesired spatial errors for the NIR band of the fused image. This new method has been labelled Contrast and Luminance Normalised (CLN) data fusion, and has been demonstrated to make a significant contribution in fusing multi-platform, multi-sensor, multi-resolution, and multi-temporal data. This contributes to improvements in the classification and monitoring of freshwater environments using remote sensing.
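    As a generic illustration of detail-injection data fusion (in the spirit of the high pass filter addition family mentioned above, and emphatically not the SRM or CLN methods developed in this thesis), the sketch below injects high-pass spatial detail from a panchromatic band into an upsampled multispectral band; the synthetic arrays and the box-filter size are assumptions.

        # Generic high-pass-filter-addition style pan-sharpening sketch
        # (illustrative only; synthetic arrays stand in for real QuickBird bands).
        import numpy as np

        rng = np.random.default_rng(0)
        pan = rng.random((200, 200))            # high-resolution panchromatic band
        ms_lowres = rng.random((50, 50))        # one multispectral band, 4x coarser resolution

        # Upsample the multispectral band to the panchromatic grid (nearest neighbour).
        ms_up = np.kron(ms_lowres, np.ones((4, 4)))

        # Extract spatial detail from the panchromatic band with a simple box low-pass filter.
        def box_blur(img, size=5):
            pad = size // 2
            padded = np.pad(img, pad, mode="edge")
            out = np.zeros_like(img)
            for dy in range(size):
                for dx in range(size):
                    out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return out / (size * size)

        high_pass = pan - box_blur(pan)         # spatial edges of the panchromatic band
        fused = ms_up + high_pass               # inject the detail into the spectral band

        print(fused.shape, float(fused.mean()))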
