4 research outputs found

    Reducing the Influence of Tiny Normwise Relative Errors on Performance Profiles

    No full text
    It is a widespread but little-noticed phenomenon that the normwise relative error ‖x − y‖/‖x‖ of vectors x and y of floating point numbers of the same precision, where y is an approximation to x, can be many orders of magnitude smaller than the unit roundoff. We analyze this phenomenon and show that in the ∞-norm it happens precisely when x has components of widely varying magnitude and every component of x of largest magnitude agrees with the corresponding component of y. Performance profiles are a popular way to compare competing algorithms according to particular measures of performance. We show that performance profiles based on normwise relative errors can give a misleading impression due to the influence of zero or tiny errors. We propose a transformation that reduces the influence of these extreme errors in a controlled manner, while preserving the monotonicity of the underlying data and leaving the performance profile unchanged at its left end-point. Numerical examples with both artificial and genuine data illustrate the benefits of the transformation.
