
    Perception and Mitigation of Artifacts in a Flat Panel Tiled Display System

    Flat panel displays continue to dominate the display market. Larger, higher resolution flat panel displays are now in demand for scientific, business, and entertainment purposes. Manufacturing such large displays is currently difficult and expensive. Alternatively, larger displays can be constructed by tiling smaller flat panel displays. While this approach may prove to be more cost effective, appropriate measures must be taken to achieve visual seamlessness and uniformity. In this project we conducted a set of experiments to study the perception and mitigation of image artifacts in tiled display systems. In the first experiment we used a prototype tiled display to investigate its current viability and to understand what critical perceptible visual artifacts exist in this system. Based on word frequencies in the survey responses, the most disruptive perceived artifacts were ranked. On the basis of these findings, we conducted a second experiment to test the effectiveness of image processing algorithms designed to mitigate some of the most distracting artifacts without changing the physical properties of the display system. Still images were processed using several algorithms and evaluated by observers using magnitude scaling. Participants in the experiment noticed a statistically significant improvement in image quality from one of the two algorithms. Similar testing should be conducted to evaluate the effectiveness of the algorithms on video content. While much work still needs to be done, the contributions of this project should enable the development of an image processing pipeline to mitigate perceived artifacts in flat panel display systems and provide the groundwork for extending such a pipeline to real-time applications.

    Scene-Dependency of Spatial Image Quality Metrics

    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. Strictly speaking, both are only applicable to linear systems, since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels, since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance “in the field” (i.e. capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart, which is more representative of natural scene content than the abovementioned test charts. This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTF) and NPSs (SPD-NPS). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture system and visual scene-dependency: their MTF and NPS parameters are replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions are replaced with contextual detection (cCSF) or contextual discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and Visual log NEQ) that implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was traded off, however, against measurement bias; most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics were discussed, as well as their practical implementation and relevant applications.
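    For context, noise equivalent quanta combine the MTF and NPS into a single frequency-dependent signal-to-noise description; the formulation below is a minimal sketch of the kind of quantity a log NEQ metric builds on, assuming a mean signal level \(\mu\) and integration over the spatial frequencies \(\nu\) passed by the system (the exact normalisation, weighting and limits used in the thesis may differ):

        \[
        \mathrm{NEQ}(\nu) = \frac{\mu^{2}\,\mathrm{MTF}^{2}(\nu)}{\mathrm{NPS}(\nu)},
        \qquad
        Q_{\log\mathrm{NEQ}} = \int_{\nu_{\min}}^{\nu_{\max}} \log_{10}\mathrm{NEQ}(\nu)\,\mathrm{d}\nu .
        \]

    A visual variant would additionally weight the integrand by a contrast sensitivity function, and the scene-and-process-dependent forms substitute SPD-MTF and SPD-NPS measurements into the same expression.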

    Quantitative electroluminescence measurements of PV devices

    Electroluminescence (EL) imaging is a fast and comparatively low-cost method for spatially resolved analysis of photovoltaic (PV) devices. A silicon CCD or InGaAs camera is used to capture the near-infrared radiation emitted from a forward-biased PV device. EL images can be used to identify defects, such as cracks and shunts, and also to map physical parameters, such as series resistance. The lack of suitable image processing routines often prevents automated and setup-independent quantitative analysis. This thesis provides a tool-set, rather than a specific solution, to address this problem. Comprehensive and novel procedures to calibrate imaging systems, to evaluate image quality, to normalize images and to extract features are presented. For image quality measurement, the signal-to-noise ratio (SNR) is conventionally obtained from a set of EL images, and its spatial average depends on the size of the background area within the EL image. In this work the SNR is calculated spatially resolved and as a (background-independent) averaged parameter using only one EL image and no additional information about the imaging system. This thesis presents additional methods to measure image sharpness spatially resolved and introduces a new parameter to describe resolvable object size. This allows images of different resolutions and different sharpness to be equalised, enabling artefact-free comparison. The flat field image scales the emitted EL signal to the detected image intensity. It is often measured by imaging a homogeneous light source, such as a red LCD screen, at close distance to the camera lens. This measurement, however, only partially removes vignetting, the main contributor to the flat field. This work quantifies the vignetting correction quality and introduces more sophisticated vignetting measurement methods. Outdoor EL imaging in particular often includes perspective distortion of the measured PV device. This thesis presents methods to automatically detect and correct this distortion, including intensity correction due to different irradiance angles. Single-time effects and hot pixels are image artefacts that can impair EL image quality and can conceivably be confused with cell defects; their detection and removal is also described. The methods presented enable direct pixel-by-pixel comparison of EL images of the same device taken at different measurement and exposure times, even if imaged by different contractors. EL statistics correlating cell intensity to crack length and PV performance parameters are extracted from EL images and dark I-V curves. This allows spatially resolved performance measurement without the need for laborious flash tests to measure the light I-V curve. This work aims to convince the EL community of certain calibration and imaging routines that will allow setup-independent, automatable, standardised and therefore comparable results. Recognizing the benefits of EL imaging for quality control and failure detection, this work paves the way towards cheaper and more reliable PV generation. The code used in this work is made available to the public as a library and an interactive graphical application for scientific image processing.
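    As an illustration of the flat-field step described above, the sketch below normalises an EL image by a flat-field frame to reduce vignetting; it is a minimal example rather than the published tool-set, and the optional dark-frame handling and all names are assumptions:

        import numpy as np

        def flat_field_correct(el_image, flat_image, dark_image=None):
            # Normalise an EL image by a flat-field frame to reduce vignetting.
            # All inputs are 2-D arrays of equal shape; names are illustrative.
            el = el_image.astype(float)
            flat = flat_image.astype(float)
            if dark_image is not None:           # optional dark-frame subtraction
                el = el - dark_image
                flat = flat - dark_image
            gain = flat / flat.mean()            # per-pixel relative sensitivity
            return el / np.clip(gain, 1e-6, None)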

    Mitigation of contrast loss in underwater images

    The quality of an underwater image is degraded by the effects of light scattering in water, namely resolution loss and contrast loss. Contrast loss is the main degradation problem in underwater images and is caused by optical back-scatter. A method is proposed to improve the contrast of an underwater image by mitigating the effect of optical back-scatter after image acquisition. The proposed method is based on the inverse of an underwater image model, which is validated experimentally in this work. It suggests that the recovered image can be obtained by subtracting the intensity due to optical back-scatter from each degraded image pixel and then scaling the remainder by a factor determined by the optical extinction. Three filters are proposed to estimate the optical back-scatter in a degraded image; among these, the BS-CostFunc filter performs best. The physical model of optical extinction indicates that the extinction can be calculated once the level of optical back-scatter is known. Results from simulations with synthetic images and experiments with real constrained monochrome images indicate that the maximum optical back-scatter estimation error is less than 5%. The proposed algorithm can significantly improve the contrast of a monochrome underwater image. Results of simulations with synthetic colour images and experiments with real constrained colour images indicate that the proposed method is applicable to colour images with colour fidelity. However, for colour images in wide spectral bands, such as RGB, although the colour of the improved images is similar to that of the reference images, the improved images are darker than the reference images in terms of intensity. This darkness is caused by the effect of noise on the estimation errors.
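    The inverse model summarised above (subtract the back-scatter contribution, then rescale for extinction) can be written compactly. The following is a minimal sketch of that description, where the back-scatter intensity B and the extinction-related transmission factor t are illustrative symbols rather than the thesis notation:

        import numpy as np

        def recover_underwater(degraded, backscatter, transmission):
            # Invert a simple underwater image model I = J * t + B:
            # recovered image J = (I - B) / t.
            t = np.clip(transmission, 1e-6, None)        # avoid division by zero
            recovered = (degraded.astype(float) - backscatter) / t
            return np.clip(recovered, 0.0, None)         # keep intensities non-negative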

    Color to gray conversions for stereo matching

    The thesis belongs to the Computer Graphics and Computer Vision fields; it addresses the problem of converting color images to grayscale with the aim of improving results in the context of stereo matching. Many state-of-the-art color-to-grayscale conversion algorithms have been implemented, evaluated and tested in the stereo-matching context, and a new ad hoc algorithm has been proposed that optimizes the conversion process by evaluating the whole set of images to be matched simultaneously.
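    The abstract does not detail the proposed algorithm, but the idea of choosing a single conversion by evaluating the whole image set jointly can be illustrated with a simple sketch: search over RGB channel weights and keep the combination that maximises a contrast measure summed over all images to be matched. The grid search and gradient-energy criterion below are illustrative assumptions, not the thesis method:

        import numpy as np
        from itertools import product

        def joint_gray_weights(images, step=0.1):
            # images: list of H x W x 3 float arrays (e.g. a stereo pair).
            # Returns one set of RGB weights chosen jointly for the whole set.
            best_w, best_score = (0.299, 0.587, 0.114), -np.inf
            for wr, wg in product(np.arange(0.0, 1.0 + 1e-9, step), repeat=2):
                wb = 1.0 - wr - wg
                if wb < 0:
                    continue
                score = 0.0
                for img in images:
                    gray = wr * img[..., 0] + wg * img[..., 1] + wb * img[..., 2]
                    gy, gx = np.gradient(gray)
                    score += np.sum(gx ** 2 + gy ** 2)   # gradient energy as a contrast proxy
                if score > best_score:
                    best_w, best_score = (wr, wg, wb), score
            return best_w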

    A simplified HDR image processing pipeline for digital photography

    High Dynamic Range (HDR) imaging has revolutionized digital imaging. It allows capture, storage, manipulation, and display of the full dynamic range of the captured scene. As a result, it has spawned whole new possibilities for digital photography, from the photorealistic to the hyper-real. With all these advantages, the technique is expected to replace conventional 8-bit Low Dynamic Range (LDR) imaging in the future. However, HDR results in an even more complex imaging pipeline, including new techniques for capturing, encoding, and displaying images. The goal of this thesis is to bridge the gap between the conventional imaging pipeline and the HDR pipeline in as simple a way as possible. We make three contributions. First, we show that a simple extension of gamma encoding suffices as a representation to store HDR images. Second, we show that gamma, as a control for image contrast, can be ‘optimally’ tuned on a per-image basis. Lastly, we show that a general tone curve, with detail preservation, suffices to tone-map an image (there is only a limited need for expensive spatially varying tone mappers). All three of our contributions are evaluated psychophysically. Together they support our general thesis that an HDR workflow similar to that already used in photography might be used. This said, we believe the adoption of HDR into photography is perhaps less difficult than it is sometimes posed to be.
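    The first two contributions can be illustrated with a short sketch: linear HDR values are stored through a simple gamma curve, and the gamma is chosen per image. The mid-tone targeting criterion and normalisation by the image maximum are assumptions made for illustration only, not the optimisation used in the thesis:

        import numpy as np

        def gamma_encode(hdr, gamma):
            # Encode linear HDR values into [0, 1] with a gamma curve.
            x = hdr.astype(float) / hdr.max()
            return x ** (1.0 / gamma)

        def per_image_gamma(hdr, target_mean=0.45):
            # Choose gamma so the geometric-mean luminance encodes near a
            # target mid-tone value (illustrative per-image tuning criterion).
            x = hdr.astype(float) / hdr.max()
            geo_mean = np.exp(np.mean(np.log(x + 1e-6)))
            return np.log(geo_mean) / np.log(target_mean)   # solves geo_mean ** (1/gamma) == target_mean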

    Instruments for the assessment of image quality experience (Kuvanlaatukokemuksen arvioinnin instrumentit)

    This dissertation describes the instruments available for image quality evaluation, develops new methods for subjective image quality evaluation and provides image and video databases for the assessment and development of image quality assessment (IQA) algorithms. The contributions of the thesis are based on six original publications. The first publication introduced the VQone toolbox for subjective image quality evaluation. It created a platform for free-form experimentation with standardized image quality methods and was the foundation for later studies. The second publication focused on the dilemma of reference in subjective experiments by proposing a new method for image quality evaluation: the absolute category rating with dynamic reference (ACR-DR). The third publication presented a database (CID2013) in which 480 images were evaluated by 188 observers using the ACR-DR method proposed in the prior publication. Providing databases of image files along with their quality ratings is essential in the field of IQA algorithm development. The fourth publication introduced a video database (CVD2014) based on having 210 observers rate 234 video clips. The temporal aspect of the stimuli creates peculiar artifacts and degradations, as well as challenges to experimental design and video quality assessment (VQA) algorithms. When the CID2013 and CVD2014 databases were published, most state-of-the-art I/VQA algorithms had been trained on and tested against databases created by degrading an original image or video with a single distortion at a time. The novel aspect of CID2013 and CVD2014 was that they consisted of multiple concurrent distortions. To facilitate communication and understanding among professionals in various fields of image quality as well as among non-professionals, an attribute lexicon of image quality, the image quality wheel, was presented in the fifth publication of this thesis. Reference wheels and terminology lexicons have a long tradition in sensory evaluation contexts, such as taste experience studies, where they are used to facilitate communication among interested stakeholders; however, such an approach has not been common in visual experience domains, especially in studies on image quality. The sixth publication examined how the free descriptions given by the observers influenced the ratings of the images. Understanding how various elements, such as perceived sharpness and naturalness, affect subjective image quality can help to explain the decision-making processes behind image quality evaluation. Knowing the impact of each preferential attribute can then be used for I/VQA algorithm development; certain I/VQA algorithms already incorporate low-level human visual system (HVS) models.
    The dissertation examines and develops new methods for image quality evaluation and provides image and video databases for testing and developing image quality assessment (IQA) algorithms. What is experienced as beautiful and pleasant is a psychologically interesting question, and the work is also relevant to industry in the development of camera image quality. The dissertation comprises six publications that examine the topic from different perspectives. Publication I developed an application, made freely available to researchers, for collecting observers' ratings of presented images; it made it possible to test standardized image quality evaluation methods and to develop new ones based on them, creating the foundation for later studies. Publication II developed a new image quality evaluation method that uses a serial presentation of images to give observers an impression of the quality variation before the actual rating; this was found to reduce the spread of the results and to discriminate smaller image quality differences. Publication III describes a database containing quality ratings of 480 images given by 188 observers, together with the associated image files. Such databases are a valuable tool when developing algorithms for the automatic assessment of image quality: they are needed, for example, as training material for AI-based algorithms and for comparing the performance of different algorithms, and the better an algorithm's predictions correlate with the quality ratings given by people, the better its performance can be said to be. Publication IV presents a database containing quality ratings of 234 video clips given by 210 observers, together with the associated video files. Because of the temporal dimension, the artifacts in video stimuli differ from those in still images, which brings its own challenges for video quality assessment (VQA) algorithms. The stimuli of earlier databases were created, for example, by progressively blurring a single image, so they contain only one distortion at a time; the databases presented here differ in that they contain several concurrent distortions, whose interaction can be significant for image quality. Publication V presents the image quality wheel, a lexicon of image quality concepts compiled by analysing 39,415 verbal descriptions of image quality produced by 146 observers. Lexicons have a long tradition in sensory evaluation research, but they have not previously been developed for image quality. Publication VI examined how the concepts given by the observers influence the evaluation of image quality; for example, the rated sharpness or naturalness of images helps in understanding the decision-making processes behind quality evaluation, and this knowledge can be used in the development of image and video quality assessment (I/VQA) algorithms.
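    The benchmarking principle mentioned above (the better an algorithm's predictions correlate with observers' ratings, the better its performance) is commonly computed as linear and rank-order correlation. A minimal sketch, assuming arrays of algorithm predictions and mean opinion scores from a database such as CID2013:

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        def benchmark_iqa(predicted, mos):
            # predicted: algorithm scores; mos: subjective mean opinion scores.
            predicted = np.asarray(predicted, dtype=float)
            mos = np.asarray(mos, dtype=float)
            plcc, _ = pearsonr(predicted, mos)    # linear correlation
            srocc, _ = spearmanr(predicted, mos)  # rank-order correlation
            return plcc, srocc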

    Image Quality Evaluation in Lossy Compressed Images

    This research focuses on the quantification of image quality in lossy compressed images, exploring the impact of digital artefacts and scene characteristics upon image quality evaluation. A subjective paired comparison test was implemented to assess the perceived quality of JPEG 2000 against baseline JPEG over a range of different scene types. Interval scales were generated for both algorithms, which indicated a subjective preference for JPEG 2000, particularly at low bit rates, and these were confirmed by an objective distortion measure. The subjective results did not follow this trend for some scenes, however, and both algorithms were found to be scene dependent as a result of the artefacts produced at high compression rates. The scene dependencies were explored from the interval scale results, which allowed scenes to be grouped according to their susceptibilities to each of the algorithms. Groupings were correlated with scene measures applied in a linked study. A pilot study was undertaken to explore perceptibility thresholds of JPEG 2000 compression for the same set of images. This work was developed into a further experiment to investigate the thresholds of perceptibility and acceptability of higher resolution JPEG 2000 compressed images. A set of images was captured using a professional-level full-frame Digital Single Lens Reflex camera, using a raw workflow and a carefully controlled image-processing pipeline. The scenes were quantified using a set of simple scene metrics to classify them according to whether they were average, higher than average, or lower than average for a number of scene properties known to affect image compression and perceived image quality; these were used to make a final selection of test images. Image fidelity was investigated using the method of constant stimuli to quantify perceptibility thresholds and just noticeable differences (JNDs) of perceptibility. Thresholds and JNDs of acceptability were also quantified to explore suprathreshold quality evaluation. The relationships between the two thresholds were examined and correlated with the results from the scene measures to identify more or less susceptible scenes. It was found that the level of, and difference between, the two thresholds was an indicator of scene dependency and could be predicted by certain types of scene characteristics. A third study implemented the soft copy quality ruler as an alternative psychophysical method, by matching the quality of compressed images to a set of images varying in a single attribute, separated by known JND increments of quality. The imaging chain and image processing workflow were evaluated using objective measures of tone reproduction and spatial frequency response. An alternative approach to the creation of ruler images was implemented and tested, and the resulting quality rulers were used to evaluate a subset of the images from the previous study. The quality ruler was found to be successful in identifying scene susceptibilities and observer sensitivity. The fourth investigation explored the implementation of four image quality metrics: the Modular Image Difference Metric, the Structural Similarity Metric, the Multi-Scale Structural Similarity Metric and the Weighted Structural Similarity Metric. The metrics were tested against the subjective results, and all were found to exhibit a linear correlation in terms of predicting image quality.
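    As an example of the structural similarity family of metrics evaluated in the fourth investigation, the snippet below computes an SSIM score between a reference image and its compressed counterpart using an off-the-shelf implementation; the file names are placeholders and this is not the processing pipeline used in the study:

        from skimage import color, io
        from skimage.metrics import structural_similarity

        ref = color.rgb2gray(io.imread("reference.png"))          # placeholder file names
        deg = color.rgb2gray(io.imread("compressed_jp2.png"))
        score = structural_similarity(ref, deg, data_range=1.0)   # rgb2gray outputs floats in [0, 1]
        print(f"SSIM = {score:.4f}")                               # closer to 1.0 means less visible distortion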

    Image quality assessment: utility, beauty, appearance


    BCR’s CDP Digital Imaging Best Practices, Version 2.0

    This is the published version. These Best Practices, also referred to as the CDP Best Practices, have been created through the collaboration of working groups drawn from library, museum and archive practitioners. Version 1 was created through funding from the Institute for Museum and Library Services through a grant to the University of Denver and the Colorado Digitization Program in 2003. Version 2 of the guidelines was published by BCR in 2008 and represents a significant update of practices under the leadership of their CDP Digital Imaging Best Practices Working Group. The intent has been to help standardize and share protocols governing the implementation of digital projects. The result of these collaborations is a set of best practice documents that cover issues such as digital imaging, Dublin Core metadata and digital audio. These best practice documents are intended to help with the design and implementation of digitization projects. Because they were collaboratively designed by experts in the field, they include the best available information and have been field tested and proven in practice. These best practice documents are an ongoing collaborative project, and LYRASIS will add information and new documents as they are developed.