
    VizMark: Benchmarking Visibility Preprocessing

    We present a new means of comparing visibility algorithms: a standard reference solution against which new and existing visibility algorithms can be objectively tested. The reference solution employs an optimised ray casting algorithm to calculate visibility accurately. Because this calculation carries excessive computational overhead, a parallel implementation reduces the time needed to produce the reference solution. The benchmarker component determines the accuracy of the algorithm under test using a number of image error metrics that take into account the quality of the final, rendered image. This paper discusses the components that make up the VizMark system and how each is tested.
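The abstract does not specify which image error metrics the benchmarker uses, so the sketch below stands in with two common per-pixel measures (RMSE and mean absolute error) to illustrate how a rendered image might be scored against the reference solution; the function name and metric choice are assumptions, not VizMark's actual interface.

```python
import numpy as np

def image_error_metrics(reference: np.ndarray, test: np.ndarray) -> dict:
    """Score a rendered test image against the reference solution.

    RMSE and mean absolute error are illustrative stand-ins; the
    metrics actually used by VizMark are not given in the abstract.
    """
    if reference.shape != test.shape:
        raise ValueError("images must have identical dimensions")
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return {
        "rmse": float(np.sqrt(np.mean(diff ** 2))),  # penalises large errors
        "mae": float(np.mean(np.abs(diff))),         # average per-pixel error
    }
```

A benchmarker of this kind would run the candidate visibility algorithm, render the resulting visible set, and compare that image against the parallel ray-cast reference with metrics like these.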

    Toward an improved error metric

    In many computer vision algorithms, the well-known Euclidean or SSD (sum of squared differences) metric is prevalent and is justified from a maximum-likelihood perspective when the additive noise is Gaussian. However, the Gaussian noise assumption is often invalid. Previous research has found that other metrics, such as the double exponential metric or the Cauchy metric, provide better results, in accordance with the maximum-likelihood approach. In this paper, we examine different error metrics and provide a theoretical approach to deriving a rich set of nonlinear estimations. Our results on image databases show that noise estimation based on the proposed error metric analysis is more robust.
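The maximum-likelihood correspondence the abstract invokes can be sketched as follows: SSD is the ML estimator under Gaussian noise, the sum of absolute differences under double exponential (Laplacian) noise, and a log-based metric under Cauchy noise. The scale parameter `c` and the toy residuals below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ssd(residuals: np.ndarray) -> float:
    """Sum of squared differences: ML metric for Gaussian noise."""
    return float(np.sum(residuals ** 2))

def sad(residuals: np.ndarray) -> float:
    """Sum of absolute differences: ML metric for double exponential noise."""
    return float(np.sum(np.abs(residuals)))

def cauchy_metric(residuals: np.ndarray, c: float = 1.0) -> float:
    """Cauchy metric (up to constants): ML metric for Cauchy noise."""
    return float(np.sum(np.log(1.0 + (residuals / c) ** 2)))

# A single outlier inflates SSD dramatically, while the Cauchy metric
# grows only logarithmically -- the robustness the abstract refers to.
clean = np.array([0.1, -0.2, 0.15])
with_outlier = np.append(clean, 10.0)
```

Comparing `ssd(with_outlier) - ssd(clean)` against `cauchy_metric(with_outlier) - cauchy_metric(clean)` shows the quadratic metric absorbing the outlier's full squared magnitude while the Cauchy metric barely moves, which is why the latter yields more robust estimation under heavy-tailed noise.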