
    Spectral imaging of human portraits and image quality

    This dissertation addresses the problem of capturing spectral images of human portraits and evaluating the image quality of spectral images. A new spectral imaging approach is proposed for spectral images of human portraits. A thorough statistical analysis is performed on spectral reflectances from various races and different parts of the face. A spectral imaging system has been designed and calibrated for human portraits; the calibrated system can represent not only facial skin but also the spectra of lips, eyes and hair from various races. The generated spectral images can be applied to color-imaging system design and analysis. To evaluate the image quality of spectral imaging systems, a visual psychophysical image quality experiment was performed. The spectral images were simulated based on real spectral imaging systems, and meaningful image quality results were obtained for spectral images generated from different systems. To bridge the gap between physical measures and subjective visual perceptions of image quality, four image distortion factors were defined. Image quality metrics were derived and evaluated using statistical and multivariate analysis; the metrics correlate highly with subjective assessments of image quality, and the contribution of each distortion factor to image quality was evaluated. Extending work initiated by other researchers in MCSL, this dissertation also contributes, together with those researchers, to building a publicly accessible database of spectral images, Lippmann2000

    A computer implementation of an orthonormal expansion method for digital image noise suppression

    Images are usually corrupted by noise from various sources: noise in the recording medium (e.g. film grain noise) and noise introduced in the transmission channel. Noise degrades the visual quality of images and obscures their detail. One of the major noise sources for images recorded on film is film grain noise. An orthonormal expansion algorithm for digital image noise suppression is implemented. The objective is to preserve as much sharpness, and produce as few artifacts, in the processed image as possible. The method sections an image into non-overlapping blocks. Each block is treated as a matrix that is decomposed into a sum of outer products of its singular vectors. The coefficient of each outer product is modified by a scaling function and the matrix is reconstructed. The resulting image shows a reduction in noise. The two major problems in the method are: 1. blocking artifacts due to the sectioned processing, and 2. the trade-off between noise suppression and loss of sharpness. By separating the image into low-frequency and high-frequency components and processing only the latter, the method reduces the blocking artifacts to an invisible level. To obtain the optimal trade-off between noise suppression and loss of sharpness, systematic variations of the coefficient scaling function were used to process the image. The best choice of scaling function is found to be [1 − (σᵢ/aᵢ)³], which is slightly different from the least-square-error estimate, [1 − (σᵢ/aᵢ)²]
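
The block-wise singular-value processing described above can be sketched in a few lines; this is a minimal NumPy illustration, in which the block size, the noise parameter sigma, and the omission of the low/high-frequency split are simplifications rather than details taken from the thesis:

```python
import numpy as np

def denoise_block(block, sigma, power=3):
    """Shrink the singular values of one image block.

    Each coefficient a_i is scaled by [1 - (sigma / a_i)**power],
    clipped at zero so components weaker than the noise are removed.
    """
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    scale = np.clip(1.0 - (sigma / np.maximum(s, 1e-12)) ** power, 0.0, 1.0)
    return U @ np.diag(s * scale) @ Vt

def denoise_image(image, sigma, block=8, power=3):
    """Process an image in non-overlapping blocks (no low/high-frequency
    split here, so blocking artifacts are not suppressed as in the thesis)."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            out[y:y+block, x:x+block] = denoise_block(
                image[y:y+block, x:x+block].astype(float), sigma, power)
    return out
```

With sigma set to zero the reconstruction is exact; with sigma far above every singular value, all coefficients are suppressed.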

    Human Jury Assessment of Image Quality as a Measurement: Modeling with Bayes Network

    Image quality assessment has traditionally been performed manually, with human jury assessment as the reference. Because jury voting is costly and not fully consistent, it is desirable to replace it with instrumental measurements that can predict jury assessment reliably. However, the high uncertainty of jury assessments and their sensitivity to image context make this difficult for instrumental measurements. Previous research has shown that modeling with a Bayesian network can resolve some of these problems. A Bayesian network is a belief network: a causal-model representation of a multivariate probability distribution that describes the relationships between interacting nodes in terms of conditional independence. Through conditioning and marginalization operations, the conditional probabilities of unmeasured elements, and their uncertainty, can be estimated in a Bayes network. In this thesis we consider a pre-existing four-layer Bayes network consisting of both qualitative and quantitative components, and we assess the probabilities of quality elements rated by jurors based on instrumental measurement values. To analyze and quantify the relationship between perceptual quality elements and instrumental measurements, we calculated mutual information from the available data set. Based on the mutual information calculations and the Kullback-Leibler distance measure, we investigated the sensitivity of the network and validated a feasible network model whose parameters were selected to minimize the uncertainties of the chosen Bayes network
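
The mutual-information step described above reduces, in the discrete case, to a computation over a contingency table; a minimal sketch (the table below is invented for illustration, not data from the thesis):

```python
import numpy as np

def mutual_information(joint_counts):
    """I(X;Y) in bits, estimated from a contingency table of counts."""
    p = joint_counts / joint_counts.sum()   # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of X (rows)
    py = p.sum(axis=0, keepdims=True)       # marginal of Y (columns)
    mask = p > 0                            # convention: 0 * log 0 = 0
    return float((p[mask] * np.log2(p[mask] / (px @ py)[mask])).sum())

# Toy table: rows = binned instrumental measurement, columns = jury category.
counts = np.array([[30.0, 5.0, 1.0],
                   [6.0, 25.0, 4.0],
                   [2.0, 7.0, 20.0]])
print(mutual_information(counts))           # clearly dependent, well above 0
print(mutual_information(np.ones((3, 3))))  # independent table, ~0.0
```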

    Kuvanlaatukokemuksen arvioinnin instrumentit (Instruments for assessing the experience of image quality)

    This dissertation describes the instruments available for image quality evaluation, develops new methods for subjective image quality evaluation and provides image and video databases for the assessment and development of image quality assessment (IQA) algorithms. The contributions of the thesis are based on six original publications. The first publication introduced the VQone toolbox for subjective image quality evaluation. It created a platform for free-form experimentation with standardized image quality methods and was the foundation for later studies. The second publication focused on the dilemma of reference in subjective experiments by proposing a new method for image quality evaluation: the absolute category rating with dynamic reference (ACR-DR). The third publication presented a database (CID2013) in which 480 images were evaluated by 188 observers using the ACR-DR method proposed in the prior publication. Providing databases of image files along with their quality ratings is essential in the field of IQA algorithm development. The fourth publication introduced a video database (CVD2014) based on having 210 observers rate 234 video clips. The temporal aspect of the stimuli creates peculiar artifacts and degradations, as well as challenges to experimental design and video quality assessment (VQA) algorithms. When the CID2013 and CVD2014 databases were published, most state-of-the-art I/VQAs had been trained on and tested against databases created by degrading an original image or video with a single distortion at a time. The novel aspect of CID2013 and CVD2014 was that they consisted of multiple concurrent distortions. To facilitate communication and understanding among professionals in various fields of image quality as well as among non-professionals, an attribute lexicon of image quality, the image quality wheel, was presented in the fifth publication of this thesis. 
Reference wheels and terminology lexicons have a long tradition in sensory evaluation contexts, such as taste experience studies, where they are used to facilitate communication among interested stakeholders; however, such an approach has not been common in visual experience domains, especially in studies on image quality. The sixth publication examined how the free descriptions given by the observers influenced their ratings of the images. Understanding how various elements, such as perceived sharpness and naturalness, affect subjective image quality can help to explain the decision-making processes behind image quality evaluation. Knowing the impact of each preferential attribute can then be used for I/VQA algorithm development; certain I/VQA algorithms already incorporate low-level human visual system (HVS) models.

Finnish abstract: This dissertation examines and develops new methods for image quality evaluation and provides image and video databases for testing and developing image quality assessment (IQA) algorithms. What is experienced as beautiful and pleasant is a psychologically interesting question, and the work is also relevant to industry through the development of camera image quality. The dissertation comprises six publications that examine the topic from different perspectives. Publication I developed an application, freely available to researchers, for collecting people's ratings of displayed images. It made it possible to test standardized image quality evaluation methods and to develop new ones based on them, laying the foundation for the later studies. Publication II developed a new image quality evaluation method that uses a serial presentation of images to give observers an impression of the quality variation before the actual rating; this was found to reduce the variance of the results and to discriminate smaller image quality differences. Publication III describes a database containing quality ratings of 480 images by 188 observers, together with the associated image files. Such databases are a valuable tool for developing algorithms for automatic image quality assessment: they are needed, among other things, as training material for AI-based algorithms and for comparing the performance of different algorithms. The better an algorithm's predictions correlate with human quality ratings, the better its performance can be said to be. Publication IV presents a database containing quality ratings of 234 video clips by 210 observers, together with the associated video files. Because of the temporal dimension, the artifacts in video stimuli differ from those in still images, which poses its own challenges for video quality assessment (VQA) algorithms. The stimuli of earlier databases were created, for example, by gradually blurring a single image, so they contain only one distortion dimension; the databases presented here differ in containing multiple concurrent distortions whose interaction can matter significantly for image quality. Publication V presents the image quality wheel, a lexicon of image quality concepts compiled by analyzing 39,415 verbal descriptions of image quality produced by 146 people. Such lexicons have a long tradition in sensory evaluation research but had not previously been developed for image quality. Publication VI examined how the concepts given by the evaluators influence image quality ratings; for example, the rated sharpness or naturalness of images helps in understanding the decision-making processes underlying quality evaluation. This knowledge can be used, for example, in the development of image and video quality assessment (I/VQA) algorithms
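
Benchmarking an I/VQA algorithm against such a database amounts to correlating its predictions with the mean opinion scores (MOS); a minimal sketch with invented scores (SROCC and PLCC are the two customary figures; the numbers below are not from CID2013 or CVD2014):

```python
from scipy.stats import pearsonr, spearmanr

# Invented example: MOS for six images and the predictions of a
# hypothetical IQA algorithm for the same six images.
mos        = [2.1, 3.4, 4.8, 1.2, 3.9, 2.7]
prediction = [0.30, 0.55, 0.90, 0.10, 0.45, 0.70]

srocc, _ = spearmanr(mos, prediction)  # rank (monotonic) agreement
plcc, _ = pearsonr(mos, prediction)    # linear agreement
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
```

A higher SROCC means the algorithm orders the images more nearly as the observers did, which is the performance criterion the abstract describes.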

    A Log NEQ based comparison of several silver halide and electronic pictorial imaging systems

    A protocol for determining log NEQ and a new metric, the Equivalent Image Quality (EIQ), are presented for the generalized experimental evaluation of noise-related monochrome pictorial image quality. These metrics are then applied to the evaluation of a pictorial CCD camera and several pictorial films. Emphasis is placed on the development and verification of experimental techniques that do not require elaborate equipment or support facilities. Data analysis is conducted using only commonly available software packages and personal computers. Conclusions are drawn concerning the performance of CCD-based and silver-halide imaging systems that allow for objective comparison of the images they produce, and the fundamental differences in the characteristics and requirements of the two systems as applied to pictorial imaging are noted
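
The NEQ computation underlying such a protocol can be sketched generically; the form below is the standard S²·MTF²/NPS definition with illustrative curves, not the thesis's measured data or its exact film-specific formulation:

```python
import numpy as np

def neq(mtf, nps, large_area_signal=1.0):
    """Noise-equivalent quanta at each spatial frequency.

    Generic detector form NEQ(u) = S^2 * MTF(u)^2 / NPS(u); how S is
    defined (e.g. folding in the film characteristic-curve gradient)
    depends on the system and is not taken from the thesis.
    """
    return (large_area_signal * np.asarray(mtf)) ** 2 / np.asarray(nps)

# Illustrative curves, not measured data:
freq = np.linspace(0.5, 20.0, 40)  # spatial frequency, cycles/mm
mtf = np.exp(-freq / 10.0)         # assumed smoothly falling MTF
nps = np.full_like(freq, 1e-4)     # assumed flat noise power spectrum
log_neq = np.log10(neq(mtf, nps))  # the "log NEQ" curve vs. frequency
```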

    Perceptual Image Quality Of Launch Vehicle Imaging Telescopes

    A large fleet (numbering in the hundreds) of high-quality telescopes is used for tracking and imaging launch vehicles during ascent from Cape Canaveral Air Force Station and Kennedy Space Center. A maintenance tool has been developed for use with these telescopes. The tool requires rankings of telescope condition in terms of the ability to generate useful imagery; it is thus a case of ranking telescope conditions on the basis of the perceptual image quality of their imagery. Perceptual image quality metrics that are well correlated with observer opinions of image quality have been available for several decades. However, these are quite limited in their applications, not being designed to compare different optical systems. The perceptual correlation of the metrics implies that a constant image quality curve (such as the boundary between two qualitative categories labeled excellent and good) would have a constant value of the metric. This is not the case if the optical system parameters (such as object distance or aperture diameter) are varied. No published data on such direct variation are available, and this dissertation presents an investigation of the perceptual metric responses as system parameters are varied. This investigation leads to some non-intuitive conclusions. The perceptual metrics are reviewed, as are more common metrics and their inability to perform in the manner necessary for this research. Perceptual test methods are also reviewed, as is the human visual system. Image formation theory is presented in a non-traditional form, yielding the surprising result that perceptual image quality is invariant under changes in focal length if the final displayed image remains constant. Experimental results are presented of changes in perceived image quality as aperture diameter is varied. Results are analyzed, and shortcomings in the process and metrics are discussed.
Using the test results, predictions are made about the form of the metric response to object-distance variations, and subsequent testing was conducted to validate the predictions. The utility of the results, limitations of applicability, and the immediate ability to further generalize the results are presented

    A Comparison study of input scanning resolution requirements for AM and FM screening

    The advent of computers and their impact on the graphic arts and printing industry has changed, and will continue to change, the methodology of working and the workflow of prepress operations. The conversion of analog materials (prints, artwork, transparencies, studio work) into a digital format requires the use of scanners or digital cameras, coupled with knowledge of output requirements as they relate to client expectations. The chosen input sampling ratio (sampling rate in relation to halftone screening) affects output quality, as well as many aspects of prepress workflow efficiency. The ability to predict printed results begins with the correct conversion of originals into digital information and then an appropriate conversion into the output materials for the intended press condition. This conversion of originals into digital information can be broken down into four general components. First, the image must be scanned to the size of the final output. Second, the input sampling ratio must be determined in relation to the screening requirements of the job; this ratio should be appropriate to the needs of the printing condition for the final press sheet. Third, the highlight, highlight-to-midtone and shadow placement points must be determined in order to achieve correct tone reproduction. Fourth, decisions must be made as to the image correction system to be employed in order to obtain consistent digital files from the scanner and prepress workflow. Factors relating to image correction and enhancement include gray balance, color cast correction, dot gain, ink trapping, hue error and unsharp masking, all areas that affect quality. These are generally applied from within software packages that work with the scanner, or from within image manipulation software after the digital conversion is complete. The answer to the question of what input sampling ratio is necessary for traditional AM screening has traditionally been based on the Nyquist sampling theorem.
The basis for determining input sampling ratio requirements for frequency-modulated (FM) screening is less clear. The Nyquist theorem (originally from electrical engineering and communications research) has been applied to the graphic arts, leading to the general acceptance of a standard 2:1 ratio for most prepress scanning work; the ratio means that the sampling rate should be twice the screen frequency. This thesis set out to determine whether there are differences in input sampling ratio scanning requirements, based on the screen frequency selection (100 lpi AM, 175 lpi AM and 21 µm FM were used in this study), when generating films and/or plates for printing, that might question this interpretation of the Nyquist sampling theorem as it relates to the graphic arts. Five images were tonally balanced over three different screening frequencies and six different sampling ratios. A reference image was generated for each condition using the Nyquist sampling ratio of 2:1. Observers were then asked to rate the images in terms of quality against the standard. Statistical analysis was then applied to the data in order to observe interactions, similarities and differences. A pilot study was first run in order to determine the amount of unsharp masking to use on the images that would be manipulated in the main study. Seven images were presented, from which four were selected for the final study. Thirty observers were asked for their preference on the amount of sharpening to use. It was found that for this condition (seven images) observers preferred the same amount of sharpening for the 175 lpi AM and 21 µm FM screens, but slightly more sharpening for the 100 lpi AM screen. This information was then applied to the main study images. An additional, previously published image was added after the pilot study, as it contained elements not found in the other images; the unsharp masking applied to this image was the same as at the time of publication.
The main study focused on the interaction of image type, screen frequency and variations of the input scanner sampling ratio as they relate to output. The results indicated that image type, sampling ratio and the sampling ratio-frequency interaction were factors, but frequency alone was not. However, viewing the interaction chart of frequency and sampling ratio for the 175 lpi AM and 21 µm FM screens alone, an insignificant difference was indicated (at a 95% confidence level). The conclusion can therefore be drawn that at the higher screen frequencies tested in this study, viewer observations showed that the input sampling ratios should be the same for 175 lpi AM and 21 µm FM screens: continuous-tone originals should be scanned at a sampling ratio of 1.75:1. This answers the question of whether FM screening technology can withstand a further-reduced input sampling ratio and maintain quality; this study finds that it cannot. At the lower screen ruling of 100 lpi, the input scanner sampling ratio requirement, based on viewer preferences for the five images presented, can be reduced to 1.5:1
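
The sampling-ratio arithmetic behind these recommendations is straightforward: the required input scanning resolution is the screen ruling multiplied by the sampling ratio (and by any enlargement factor). A small sketch; the enlargement parameter is an added convenience for illustration, not a variable from the study:

```python
def scan_resolution(screen_lpi, sampling_ratio, enlargement=1.0):
    """Input scanning resolution in ppi for a target halftone screen ruling."""
    return screen_lpi * sampling_ratio * enlargement

# Traditional Nyquist-based 2:1 rule versus the ratios supported here:
print(scan_resolution(175, 2.0))    # 350.0 ppi under the 2:1 rule
print(scan_resolution(175, 1.75))   # 306.25 ppi at the 1.75:1 ratio
print(scan_resolution(100, 1.5))    # 150.0 ppi at the lower 100 lpi ruling
print(scan_resolution(175, 1.75, enlargement=2.0))  # 612.5 ppi at 200% size
```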

    An Evaluation of Photo CD's resolving power in scanning various-speed films for archival purposes

    While the advantages of digital archiving are numerous, the process has been slow to be implemented because of several limitations, particularly high costs. Photo CD shows great promise as an archiving technology, not only because of its cost-effectiveness but also its speed, multi-resolution format and efficient compression. Organizations that are beginning to construct digital archives of their resources are doing so to the tune of several million images. When archiving such large quantities of images, one wants to anticipate as many future uses as possible to avoid further scanning costs. Of all potential uses for an archived image, printing on coated stock with a fine line screen will have among the highest resolution requirements. Although the Photo CD master format offers much flexibility, there is some concern that the format does not provide enough resolution for commercial-grade printing, especially at greater enlargement percentages. In these cases, better results may be achieved with Pro Photo CD, which is more expensive and much slower but provides four times the resolution of the master Photo CD. However, simply having more resolution does not necessarily translate into improved image quality. The benefit of the added resolution likely depends on the speed of the film and whether there really is more information in the emulsion to be captured. For films above a certain speed, the graininess of the film may offset the extra resolution provided by Pro Photo CD, and no improvement in image quality will be gained. The film speed at which this occurs is currently unknown. Testing the scan quality of various film speeds at 16 Base and 64 Base can help define the boundary of when Pro Photo CD offers a real advantage, if any, for archiving 35mm film. The findings supply some guidelines for organizations faced with deciding how to use Photo CD most appropriately for archival purposes.
To this end, three films of varying speeds and resolving powers were chosen: Ektachrome Lumiere 100, Ektachrome Professional 100 and Ektachrome Elite 200. Two test objects were obtained: an RIT alphanumeric resolution target and a continuous-tone photograph containing objects with fine detail. Scans of these chromes were made with both Photo CD and Pro Photo CD scanners. An objective analysis was made by observing the smallest levels resolved on the resolution target for each of the films at both the 16 Base and the 64 Base resolutions. A subjective analysis was conducted by a panel of respondents making paired-comparison judgments of image quality for the continuous-tone test images. It was hypothesized that differences between the 16 Base scan and the 64 Base scan would be detected only with the Ektachrome Lumiere 100 film in both the objective and subjective analyses; no differences were expected between the two resolutions for the other two films, because it was theorized that the added noise introduced by the higher grain of the faster films would offset the extra resolution provided by the 64 Base scan. The results did not concur with this hypothesis. Instead, differences were noted for all of the tested films in both the objective and subjective evaluations. The data from the alphanumeric resolution target show an improvement in resolving power with the 64 Base resolution for all three films tested. In the subjective test, an increase was also observed in the large enlargement of the Professional 100 and the Elite 200 films. Both of these results indicate that none of the films tested contained enough noise to offset the benefit of the added resolution. It should be noted that the differences observed were slight. In terms of recommendations for choosing among Photo CD options for digital archives, the following guidelines can be concluded.
All films with speeds of 100 or less will see some, though slight, added benefit from the extra resolution provided by Pro Photo CD. Films with an ISO speed rating of 200 that use fine-grain technology, such as Kodak's T-GRAIN, will also benefit from the added resolution. However, in either case the benefit does not seem to matter subjectively unless the image is enlarged beyond the standard dimensional limits of 16 Base. Thus, unless large-format reproduction of archived images is likely, Pro Photo CD scans are not necessary
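
Paired-comparison judgments such as these are commonly converted to an interval quality scale with Thurstone's Case V; a minimal sketch (the win matrix is invented, and the clipping of unanimous proportions is a common practical workaround, not a detail from this thesis):

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """Interval scale values from a paired-comparison win matrix.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Case V: z-transform the preference proportions, average per stimulus.
    """
    trials = wins + wins.T
    p = np.where(trials > 0, wins / np.where(trials > 0, trials, 1), 0.5)
    np.fill_diagonal(p, 0.5)
    p = np.clip(p, 0.01, 0.99)  # keep z-scores finite for unanimous pairs
    return norm.ppf(p).mean(axis=1)

# Invented data: 10 observers compared three scans A, B, C pairwise.
wins = np.array([[0, 9, 9],
                 [1, 0, 8],
                 [1, 2, 0]], dtype=float)
scale = thurstone_case_v(wins)  # A ranks highest, C lowest
```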

    Better Images: Understanding and Measuring Subjective Image-Quality

    The objective in this thesis was to examine the psychological process of image-quality estimation, specifically focusing on people who are naïve in this respect and on how they estimate high-quality images. Quality estimation in this context tends to be a preference task, and to be subjective. The aim in this thesis is to enhance understanding of viewing behaviour and estimation rules in the subjective assessment of image-quality. On a more general level, the intention is to shed light on estimation processes in preference tasks. An Interpretation-Based Quality (IBQ) method was therefore developed to investigate the rules used by naïve participants in their quality estimations. It combines qualitative and quantitative approaches, and complements standard methods of image-quality measurement. The findings indicate that the content of the image influences perceptions of its quality: it influences how the interaction between the content and the changing image features is interpreted (Study 1). The IBQ method was also used to create three subjective quality dimensions: naturalness of colour, darkness and sharpness (Study 2). These dimensions were used to describe the performance of camera components. The IBQ also revealed individual differences in estimation rules: the participants differed as to whether they included interpretation of the changes perceived in an image in their estimations or whether they just commented on them (Study 4). Viewing behaviour was measured to enable examination of the task properties as well as the individual differences. Viewing behaviour was compared in two tasks that are commonly used in studies on image-quality estimation: the estimation of difference and the estimation of difference in quality (Study 3). The results showed that viewing behaviour differed even in two magnitude-estimation tasks with identical material. 
When estimating quality, the participants concentrated mainly on the semantically important areas of the image, whereas in the difference-estimation task they also examined wider areas. Further examination of the quality-estimation task revealed individual differences in viewing behaviour and in the importance these viewing-behaviour groups attached to the interpretation of changes in their estimations (Study 4). It seems that people engaged in a subjective preference-estimation task use different estimation rules, which is also reflected in their viewing behaviour. The findings reported in this thesis indicate that: 1) people are able to describe the basis of their quality estimations even without training when they are allowed to use their own vocabulary; 2) the IBQ method has the potential to reveal the rules used in quality estimation; 3) changes in instructions influence the way people search for information in the images; and 4) there are individual differences in terms of rules and viewing behaviour in quality-estimation tasks.

Finnish abstract: This dissertation deals with the subjective image-quality estimation process, focusing in particular on how people untrained in image-quality evaluation estimate high-quality images. Image quality here refers to factors related to the processing of the image. The aim is to increase understanding of the image-quality estimation process and its measurement. Image-quality estimation has generally focused on obtaining a single rating of quality, or a single rating on some predefined scale. In that case we do not know what an untrained evaluator would have paid attention to, or on what grounds they would have judged the image. To investigate this, we developed a method for examining the grounds people use in their estimations. People described their grounds freely, and when allowed to use their own vocabulary they were also consistent in their estimations. The method was also used to identify subjective image-quality dimensions: naturalness of colour, darkness and sharpness. The second part of the dissertation deals with the image-quality estimation task as a process. We investigated how a small change in the instructions given to participants changes the way they view an image while making the related estimations. The task was to estimate the differences visible between two images, either by the magnitude of the differences or by the difference in image quality. When estimating quality, attention focused more on semantically meaningful regions, whereas when estimating differences a wider area was taken into account. We also examined individual differences in image-quality estimation. The participants could be divided into three groups on the basis of their viewing behaviour, and these groups also differed in how much they based their estimations on impressions arising from the perceived changes in image quality: some concentrated on judging image quality by its attributes, while others based their judgments on the impressions those attributes created in relation to the image's message. Estimating high image quality is often a matter of preference-based judgment. It is then important to let people use their own concepts, and to take into account that even the smallest factors, such as the wording of the questions and differences between individuals, influence the estimations. This dissertation provides the means to examine that estimation process

    The pictures we like are our image: continuous mapping of favorite pictures into self-assessed and attributed personality traits

    Flickr allows its users to tag the pictures they like as “favorite”. As a result, many users of the popular photo-sharing platform produce galleries of favorite pictures. This article proposes new approaches, based on Computational Aesthetics, capable of inferring the personality traits of Flickr users from these galleries. In particular, the approaches map low-level features extracted from the pictures into numerical scores corresponding to the Big-Five traits, both self-assessed and attributed. The experiments were performed on 60,000 pictures tagged as favorite by 300 users (the PsychoFlickr Corpus). The results show that it is possible to predict both self-assessed and attributed traits beyond chance. In line with the state of the art in Personality Computing, the latter are predicted more effectively (correlation up to 0.68 between actual and predicted traits)
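
The mapping from low-level features to trait scores can be sketched as a linear regression evaluated by the same actual-versus-predicted correlation the article reports; everything below (feature count, noise level, train/test split) is synthetic illustration, not the PsychoFlickr data or the authors' actual models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: 300 "users", 10 aggregated low-level picture
# features each, and one trait score per user (all invented).
X = rng.normal(size=(300, 10))
w_true = rng.normal(size=10)
trait = X @ w_true + rng.normal(scale=0.5, size=300)

# Fit ordinary least squares on half the users, predict the other half.
w, *_ = np.linalg.lstsq(X[:150], trait[:150], rcond=None)
pred = X[150:] @ w

# Effectiveness as correlation between actual and predicted trait scores,
# the same figure of merit the article quotes (up to 0.68 on real data).
r = np.corrcoef(trait[150:], pred)[0, 1]
```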