8 research outputs found

    A new algorithm for calculating perceived colour difference of images

    Faithful colour reproduction of digital images requires a reliable measure for comparing such images in order to evaluate reproduction performance. Conventional methods apply CIE colorimetry-based colour difference equations, such as CIELAB, CMC, CIE94 and CIEDE2000, to complex images on a pixel-by-pixel basis and take the overall colour difference to be the average of the per-pixel differences. This approach is simple and straightforward, but often does not represent the colour difference perceived by the human visual system. This paper proposes a new algorithm for calculating the overall colour difference between a reproduced image and its original. The results show that the new metric provides a quantitative measure that corresponds more closely to the colour difference perceived by the human visual system.
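    For orientation, here is a minimal Python sketch of the conventional pixel-averaging baseline described above (not the paper's proposed algorithm), using scikit-image's CIEDE2000 implementation and assuming float sRGB images in [0, 1] with identical shapes:

    ```python
    # Conventional baseline: convert both images to CIELAB and average the
    # per-pixel colour difference. This illustrates the approach the paper
    # argues against, not its new metric.
    import numpy as np
    from skimage.color import rgb2lab, deltaE_ciede2000

    def mean_pixelwise_delta_e(original_rgb: np.ndarray, reproduction_rgb: np.ndarray) -> float:
        """Average CIEDE2000 colour difference over all pixels."""
        lab_orig = rgb2lab(original_rgb)
        lab_repr = rgb2lab(reproduction_rgb)
        return float(deltaE_ciede2000(lab_orig, lab_repr).mean())
    ```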

    A hybrid of fuzzy theory and quadratic function for estimating and refining transmission map

    © TÜBİTAK. In photographs captured in outdoor environments, particles in the air cause light attenuation and degrade image quality; this effect is especially pronounced in hazy conditions. In this study, a fuzzy-theory-based method is proposed to estimate the transmission map of a single image. To overcome the problem of oversaturation in dehazed images, a quadratic-function-based method is proposed to refine the transmission map. In addition, the color vector of the atmospheric light is estimated using the top 1% brightest area of the image. Finally, the dehazed image is reconstructed using the transmission map and the estimated atmospheric light. Experimental results demonstrate that the proposed hybrid method performs better than existing methods in terms of color oversaturation, visibility, and quantitative evaluation.
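    The reconstruction step follows the standard haze formation model I(x) = J(x)·t(x) + A·(1 − t(x)), so the scene radiance is recovered as J = (I − A)/max(t, t0) + A. A minimal sketch, assuming float RGB images in [0, 1] and a transmission map supplied by some estimator (the paper's fuzzy estimation and quadratic refinement are not reproduced here; the exact top-1% selection rule and the floor t0 are assumptions):

    ```python
    # Sketch of the standard haze model inversion the paper builds on.
    import numpy as np

    def estimate_atmospheric_light(hazy: np.ndarray) -> np.ndarray:
        """Mean color of the top 1% brightest pixels, per channel."""
        intensity = hazy.mean(axis=2)
        thresh = np.percentile(intensity, 99)
        return hazy[intensity >= thresh].mean(axis=0)

    def dehaze(hazy: np.ndarray, transmission: np.ndarray, t0: float = 0.1) -> np.ndarray:
        """Invert I = J*t + A*(1 - t) given a (refined) transmission map."""
        a = estimate_atmospheric_light(hazy)
        t = np.clip(transmission, t0, 1.0)[..., None]  # floor t to avoid blow-up
        return np.clip((hazy - a) / t + a, 0.0, 1.0)
    ```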

    DWT based digital watermarking fidelity and robustness evaluation

    An Image Adaptive Watermarking method based on the Discrete Wavelet Transform is presented in this paper. The robustness and fidelity of the proposed method are evaluated, and the method is compared to state-of-the-art watermarking techniques from the literature. For the evaluation of watermark transparency, an image fidelity factor based on a perceptual distortion metric is introduced; for the evaluation of watermark robustness against JPEG compression and resizing, a degradation factor is introduced. The new fidelity metric allows a perceptually aware, objective quantification of image fidelity. The suitability of the proposed metric for the fidelity evaluation of still-image watermarking is supported by simulation results.

    V Workshop de Computación Gráfica, Imágenes y Visualización. Red de Universidades con Carreras en Informática (RedUNCI).
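    As a rough illustration of DWT-domain embedding in general (the paper's image-adaptive scheme and its fidelity and degradation factors are not reproduced), here is a minimal additive-embedding sketch using PyWavelets; the choice of subband, the Haar wavelet, and the strength parameter alpha are all assumptions:

    ```python
    # Generic additive watermarking in a level-1 DWT detail subband.
    import numpy as np
    import pywt

    def embed_watermark(image: np.ndarray, watermark: np.ndarray, alpha: float = 0.05) -> np.ndarray:
        """Additively embed a watermark into one detail subband (illustrative)."""
        ca, (ch, cv, cd) = pywt.dwt2(image, "haar")
        wm = np.resize(watermark, cv.shape)   # tile/crop watermark to subband size
        cv_marked = cv + alpha * wm           # additive embedding in DWT domain
        return pywt.idwt2((ca, (ch, cv_marked, cd)), "haar")
    ```

    Embedding in a detail subband rather than the approximation subband is a common trade-off: it keeps the visible distortion low while leaving the mark recoverable after moderate compression.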

    Color Image Fidelity Metrics Evaluated Using Image Distortion Maps

    A number of color image fidelity metrics were evaluated by looking at how well they predict the visibility of image reproduction errors on perceptual image distortion maps. The image distortion maps are obtained empirically: subjects examined image pairs consisting of an original and a reproduction, and marked locations on the reproduction that differed detectably from the original. We refer to the distribution of error marks by the subjects as image distortion maps. The empirically obtained image distortion maps are compared to the visible difference calculated using (1) the widely used root mean square error (point-by-point RMS) computed in uncalibrated RGB values, (2) the point-by-point CIELAB ΔE94 values (CIE, 1994), and (3) S-CIELAB ΔE94, a spatial extension of the CIELAB ΔE metric. The uncalibrated RMS metric did not predict the perceptual image distortion data very well. The CIELAB ΔE94 metric is a calibrated color fidelity metric, and gave better predictions. The S-CIELAB metric, whi..
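    A minimal sketch of the first two predictors as per-pixel distortion maps, assuming float RGB images in [0, 1]; the spatial pre-filtering stage that distinguishes S-CIELAB from plain CIELAB is omitted here:

    ```python
    # Two of the three predictors compared above, as per-pixel maps.
    import numpy as np
    from skimage.color import rgb2lab, deltaE_ciede94

    def pointwise_rms(original: np.ndarray, reproduction: np.ndarray) -> np.ndarray:
        """Per-pixel RMS error across the three (uncalibrated) RGB channels."""
        return np.sqrt(((original - reproduction) ** 2).mean(axis=2))

    def pointwise_delta_e94(original: np.ndarray, reproduction: np.ndarray) -> np.ndarray:
        """Per-pixel CIELAB Delta E 94 distortion map."""
        return deltaE_ciede94(rgb2lab(original), rgb2lab(reproduction))
    ```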

    Low Latency Rendering with Dataflow Architectures

    The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to those actions. It is a product of the various processing, transport and buffering delays present in any current computer system. For many computer-mediated applications latency can be distracting, but it is not critical to the utility of the application. Synthetic environments, on the other hand, attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world: the user makes predictions about how their actions affect the world, and sees these predictions realised. By facilitating the formation of this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induces perceptions such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is that the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency.

    The main research questions we ask are how the characteristics of dataflow computing can be leveraged to improve the temporal fidelity of the visual stimuli, and what implications this has for other aspects of fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesized but unexplored. We also test for a previously unconsidered effect of latency on higher-level cognitive functions. To do this, we create prototype image generators for interactive systems and virtual reality using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience of the real, perceptible benefits of alternative rendering approaches, but also of the implications when they are subject to the constraints of real systems. We quantify the differences between our systems and traditional systems using latency and objective image fidelity measures, and we use our novel systems to perform user studies into the effects of latency. Our high-performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies.

    The low latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low latency apparatus achieves lower latencies and higher fidelities than traditional systems; the conditions under which it can do this are highly constrained, however. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high speed local loop, such as an image distortion stage. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
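    As a back-of-envelope illustration of how the processing, transport and buffering delays mentioned above accumulate into end-to-end latency (the numbers and stages are assumptions for a hypothetical 90 Hz double-buffered pipeline, not figures from the thesis):

    ```python
    # Toy motion-to-photon latency budget: sum of stage delays plus buffering.
    FRAME_MS = 1000 / 90  # ~11.1 ms frame period at 90 Hz

    stages_ms = {
        "tracking": 2.0,        # sensor sampling and pose estimation
        "simulation": 3.0,      # application/scene update
        "render": FRAME_MS,     # GPU renders the frame
        "scanout": FRAME_MS,    # display scans the frame out
    }
    buffering_ms = FRAME_MS     # double buffering can hold a frame one period

    total_ms = sum(stages_ms.values()) + buffering_ms
    print(f"estimated motion-to-photon latency: {total_ms:.1f} ms")
    ```

    Even with modest per-stage costs, the buffered frame-based pipeline contributes several full frame periods, which is why minimising that portion of the pipeline is the focus of the low latency apparatuses described above.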

    Anales del XIII Congreso Argentino de Ciencias de la Computación (CACIC)

    Contents: computer architectures; embedded systems; service-oriented architectures (SOA); communication networks; heterogeneous networks; advanced networks; wireless networks; mobile networks; active networks; administration and monitoring of networks and services; Quality of Service (QoS, SLAs); computer security, authentication, and privacy; infrastructure for digital signatures and digital certificates; vulnerability analysis and detection; operating systems; P2P systems; middleware; grid infrastructure; integration services (Web Services or .NET). Red de Universidades con Carreras en Informática (RedUNCI).