1,310 research outputs found

    Computer mediated colour fidelity and communication

    Developments in technology have meant that computer-controlled imaging devices are becoming more powerful and more affordable. Despite their increasing prevalence, computer-aided design and desktop publishing software has failed to keep pace, leading to disappointing colour reproduction across different devices. Although there has been a recent drive to incorporate colour management functionality into modern computer systems, in general this is limited in scope and fails to properly consider the way in which colours are perceived. Furthermore, differences in viewing conditions or representation severely impede the communication of colour between groups of users. The approach proposed here is to provide WYSIWYG colour across a range of imaging devices through a combination of existing device characterisation and colour appearance modelling techniques. In addition, to further facilitate colour communication, various common colour notation systems are defined by a series of mathematical mappings. This enables both the implementation of computer-based colour atlases (which have a number of practical advantages over physical specifiers) and the interrelation of colours represented in hitherto incompatible notations. Together with the proposed solution, details are given of a computer system which has been implemented; the system was used by textile designers for a real task. Prior to undertaking this work, designers were interviewed to ascertain where colour played an important role in their work and where it was found to be a problem. A summary of the findings of these interviews is given, together with a survey of existing approaches to the problems of colour fidelity and communication in colour computer systems. As background to this work, the topics of colour science and colour imaging are introduced.
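
    As an illustrative aside (not taken from the thesis), the kind of mathematical mapping used to interrelate device colour and perceptual notations can be sketched as a standard sRGB → CIE XYZ → CIELAB conversion. The matrix, D65 white point and transfer function below are the published sRGB values; the thesis's own characterisation and colour appearance models are more sophisticated.

```python
# Minimal sketch: device RGB (assumed sRGB) -> CIE XYZ -> CIELAB, D65 white.
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.9505, 1.0000, 1.0890])  # Xn, Yn, Zn

def srgb_to_xyz(rgb):
    """rgb in [0, 1]: undo the sRGB transfer function, then apply the matrix."""
    rgb = np.asarray(rgb, dtype=float)
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M_SRGB_TO_XYZ @ linear

def xyz_to_lab(xyz):
    """CIE 1976 L*a*b* relative to the D65 white point."""
    def f(t):
        return np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / WHITE_D65)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

print(xyz_to_lab(srgb_to_xyz([0.5, 0.2, 0.7])))  # L*, a*, b* for a purple sample
```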

    Blind image quality assessment: from heuristic-based to learning-based

    Image quality assessment (IQA) plays an important role in numerous digital image processing applications, including image compression, image transmission, and image restoration. The goal of objective IQA is to develop computational models that can predict image quality in a way that is consistent with human perception. Compared with subjective quality evaluations such as psycho-visual tests, objective IQA metrics have the advantage of predicting image quality automatically, effectively, and in a timely manner. This thesis focuses on a particular type of objective IQA, blind IQA (BIQA), where the developed methods not only achieve objective IQA but are also able to assess the perceptual quality of digital images without access to their pristine reference counterparts. Firstly, a novel blind image sharpness evaluator is introduced in Chapter 3, which leverages discrepancy measures of structural degradation. Secondly, a "completely blind" quality assessment metric for gamut-mapped images is designed in Chapter 4, which does not need subjective quality scores during model training. Thirdly, a general-purpose BIQA method is presented in Chapter 5, which can evaluate the quality of digital images without prior knowledge of the types of distortions. Finally, in Chapter 6, a deep neural network-based general-purpose BIQA method is proposed, which is fully data driven and trained in an end-to-end manner. In summary, four BIQA methods are introduced in this thesis: the first three are heuristic-based and the last one is learning-based. Unlike the heuristic-based ones, the learning-based method does not involve manually engineered feature design.
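
    For flavour only, a minimal heuristic no-reference sharpness score is sketched below using the common variance-of-the-Laplacian baseline. This is an assumed stand-in to illustrate what a heuristic BIQA measure looks like, not the structural-degradation discrepancy measure developed in Chapter 3.

```python
# No-reference sharpness baseline: variance of the Laplacian response.
import numpy as np

def laplacian_sharpness(gray):
    """gray: 2-D float array; a higher score indicates a sharper image."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):              # 'valid' 3x3 convolution, no padding
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3.0
print(laplacian_sharpness(sharp) > laplacian_sharpness(blurred))  # True
```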

    Individualized Models of Colour Differentiation through Situation-Specific Modelling

    In digital environments, colour is used for many purposes: for example, to encode information in charts, signify missing field information on websites, and identify active windows and menus. However, many people have inherited, acquired, or situationally-induced Colour Vision Deficiency (CVD), and therefore have difficulty differentiating many colours. Recolouring tools have been developed that modify interface colours to make them more differentiable for people with CVD, but these tools rely on models of colour differentiation that do not represent the majority of people with CVD. As a result, existing recolouring tools do not help most people with CVD. To solve this problem, I developed Situation-Specific Modelling (SSM) and applied it to colour differentiation to develop the Individualized model of Colour Differentiation (ICD). SSM utilizes an in-situ calibration procedure to measure a particular user’s abilities within a particular situation, and a modelling component to extend the calibration measurements into a full representation of the user’s abilities. The ICD applies in-situ calibration to measure a user’s unique colour differentiation abilities, and contains a modelling component that is capable of representing the colour differentiation abilities of almost any individual with CVD. This dissertation presents four versions of the ICD and one application of the ICD to recolouring. First, I describe the development and evaluation of a feasibility implementation of the ICD that tests the viability of the SSM approach. Second, I present revised calibration and modelling components of the ICD that reduce the calibration time from 32 minutes to two minutes. Next, I describe the third and fourth ICD versions, which improve the applicability of the ICD to recolouring tools by reducing the colour differentiation prediction time and increasing the power of each prediction. Finally, I present a new recolouring tool (ICDRecolour) that uses the ICD model to steer the recolouring process. In a comparative evaluation, ICDRecolour achieved 90% colour matching accuracy for participants with a wide range of CVDs, 20% better than existing recolouring tools. By modelling the colour differentiation abilities of a particular user in a particular environment, the ICD enables recolouring tools to be extended to help most people with CVD, thereby reducing the difficulties that people with CVD experience when using colour in digital environments.
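
    A toy analogue of the idea, assumed here purely for illustration and far simpler than the ICD, is to calibrate a per-user colour-difference threshold in CIELAB and expose it as the differentiability predicate a recolouring tool would query.

```python
# Hypothetical per-user differentiation model from in-situ calibration trials.
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def calibrate_threshold(trials):
    """trials: (lab_a, lab_b, user_could_tell_apart) tuples gathered in situ.
    Returns the smallest difference the user distinguished, as a crude threshold."""
    seen = [delta_e(a, b) for a, b, told_apart in trials if told_apart]
    return min(seen) if seen else float("inf")

def differentiable(lab1, lab2, threshold):
    return delta_e(lab1, lab2) >= threshold

trials = [((50, 10, 10), (50, 12, 10), False),
          ((50, 10, 10), (50, 25, 10), True)]
t = calibrate_threshold(trials)                    # 15.0 for this toy data
print(differentiable((60, 0, 0), (60, 20, 0), t))  # True: 20 >= 15
```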

    Algorithms for compression of high dynamic range images and video

    The recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Furthermore, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given the above observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data enables improved compression efficiency. The proposed novel approaches to the compression of metadata for the tone mapping operator are also shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for the synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
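
    The backward-compatible two-layer idea can be sketched as follows, assuming a simple global Reinhard-style tone curve; the CODEC, tone-mapping-operator approximation and perceptually uniform colourspaces developed in the thesis are considerably more elaborate.

```python
# Two-layer HDR coding sketch: SDR base layer plus a log-luminance residual.
import numpy as np

def tone_map(luma_hdr):
    """Global operator L / (1 + L): compresses HDR luminance into [0, 1)."""
    return luma_hdr / (1.0 + luma_hdr)

def inverse_tone_map(base):
    return base / np.maximum(1.0 - base, 1e-6)   # inverse of L / (1 + L)

def encode_two_layer(luma_hdr):
    base = tone_map(luma_hdr)                    # backward-compatible SDR layer
    residual = np.log2(luma_hdr + 1e-6) - np.log2(inverse_tone_map(base) + 1e-6)
    return base, residual   # in a real codec the base is quantised, so the residual
                            # carries whatever the lossy SDR layer discarded

def decode_two_layer(base, residual):
    return inverse_tone_map(base) * np.exp2(residual)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])
base, residual = encode_two_layer(hdr)
print(np.allclose(decode_two_layer(base, residual), hdr, rtol=1e-3))  # True
```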

    Façade colour and aesthetic response: Examining patterns of response within the context of urban design and planning policy in Sydney

    The overall aim of this research was to examine aesthetic response to façade colour. Drawing on a range of theories and studies from environment-behaviour studies (EBS), Nasar’s (1994) probabilistic model of aesthetic response to building attributes provided a theoretical framework within which to examine patterns of response. Prompted by the Development Control Plan for Sydney Regional Environmental Plan: Sydney Harbour Catchment (NSWDOP, 2005), this research also linked its aims and methods to planning policy in Sydney. The main research questions focussed on whether changes in aesthetic response are associated with variations in façade colour, and whether changes in judgements about building size, congruity and preference are associated with differences in façade colour. A quasi-experimental research design was used to examine patterns of aesthetic response. The independent variable was represented by four façade colours in two classifications. An existing process, environmental colour mapping, was augmented with digital technology and used to isolate, identify and manipulate the independent variable and to prepare the visual stimuli (Foote, 1983; Iijima, 1995; Lenclos, 1977; Porter, 1997). Façade colour classifications were created from extant colour theories (including those of Albers, 1963; Hard & Sivik, 2001; and Itten, 1961). The façade colour classifications were further developed using F-sort and Q-sort methodology (Amin, 2000; Miller, Wiley & Wolfe, 1986; Stephenson, 1953). Ten dependent variables, linked to overall aesthetic response, were drawn from studies relating to environmental evaluation, building congruity and preference (Groat, 1992; Janssens, 2001; Russell, 1988; Russell, 2003; Russell, Ward & Pratt, 1981; Wohlwill & Harris, 1980). The dependent variables were presented in the form of a semantic differential rating scale, and a sample group of 288 participants evaluated the visual stimuli. The Latin-square technique was used for the controlled presentation of visual stimuli. Factor analysis, correlation analysis and analysis of variance were applied to the data. The findings indicate that variations in aesthetic response are associated with differences in façade colour. Judgements about building size varied by up to 5%, and buildings featuring contrasting façade colours were judged to be larger and more dominant. Judgements about a building’s congruity varied by up to 13%, and buildings that featured harmonious colours were considered to be more congruous. Preference varied, and harmonious façade colours were not necessarily preferred over contrasting façade colours. The outcomes from this research suggest that a new approach to façade colour within the context of planning policy may be appropriate. A model of façade colour evaluation is presented and, unlike current planning guidelines, the model allows for a participatory approach to façade colour evaluation and specification. The model allows for factors that may influence aesthetic response to façade colour (such as contextual, perceptual and idiographic factors) as well as variation in architectural expression with respect to façade colour.

    Optimising Light Source Spectrum to Reduce the Energy Absorbed by Objects

    Light is used to illuminate objects in the built environment. Humans can only observe the light reflected from an object; light absorbed by an object turns into heat and does not contribute to visibility. Since the spectral output of new lighting technologies can be tuned, it is possible to imagine a lighting system that detects the colours of objects and emits customised light to minimise the absorbed energy. Previous optimisation studies investigated the use of narrowband LEDs to maximise the efficiency and colour quality of a light source. While those studies aimed to tune a white light source for general use, the lighting system proposed here minimises the energy consumed by lighting by detecting the colours of objects and emitting customised light onto each coloured part of an object. This thesis investigates the feasibility of absorption-minimising light source spectra and their impact on the colour appearance of objects and on energy consumption. Two computational studies were undertaken to form the theoretical basis of the absorption-minimising light source spectra. Computational simulations show that theoretical single-peak spectra can lower energy consumption by around 38 % to 62 %, and double-peak test spectra can deliver energy savings of up to 71 %, without causing colour shifts. In these studies, standard reference illuminants, theoretical test spectra and coloured test samples were used. These studies are followed by empirical evidence collected from two psychophysical experiments. Data from the experiments show that observers find the colour appearance of objects equally natural and attractive under spectrally optimised spectra and under reference white light sources. An increased colour difference is, to a certain extent, found acceptable, which allows even higher energy savings. However, the translucent nature of some objects may negatively affect the results.
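
    The underlying energy argument can be sketched with assumed spectra (not the thesis's data): the power absorbed by an object is the part of the source spectrum it does not reflect, so a source concentrated where the object's reflectance is high wastes far less energy on that object.

```python
# Fraction of emitted power absorbed by an object, for two candidate sources.
import numpy as np

wavelengths = np.arange(400, 701, 10)   # nm, uniform sampling of the visible range

def absorbed_fraction(source_spd, reflectance):
    """source_spd and reflectance sampled on `wavelengths`; reflectance in [0, 1]."""
    absorbed = np.sum(source_spd * (1.0 - reflectance))
    emitted = np.sum(source_spd)
    return absorbed / emitted

# A crude 'red' object: high reflectance only above ~600 nm.
reflectance = np.where(wavelengths >= 600, 0.8, 0.1)

broadband = np.ones(wavelengths.shape)                        # flat white source
red_peak = np.exp(-0.5 * ((wavelengths - 630) / 15.0) ** 2)   # narrowband source near the reflectance peak

print(absorbed_fraction(broadband, reflectance))  # ~0.65: most of a white source is absorbed
print(absorbed_fraction(red_peak, reflectance))   # ~0.20: far less is wasted on this object
```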

    Personalized Color Vision Deficiency Friendly Image Generation

    Approximately 350 million people, a proportion of around 8%, suffer from color vision deficiency (CVD). While image generation algorithms have been highly successful in synthesizing high-quality images, CVD populations are unintentionally excluded from the target users and have difficulty understanding the generated images as normal viewers do. Although a straightforward baseline can be formed by combining generation models with recolor compensation methods as post-processing, the CVD friendliness of the resulting images is still limited, since the input image content of recolor methods is not CVD-oriented and remains fixed during the recolor compensation process. Besides, CVD populations cannot be fully served, since the varying degrees of CVD are often neglected in recoloring methods. To address these issues, we introduce a personalized CVD-friendly image generation algorithm distinguished by two key features: (i) the ability to produce CVD-oriented images that align with the needs of CVD populations, and (ii) the capacity to generate continuous personalized images for people with various degrees of CVD by disentangling the color representation based on a triple-latent structure. Quantitative and qualitative experiments affirm the effectiveness of the proposed image generation model, demonstrating its practicality and superior performance compared to standard generation models and combination baselines across multiple datasets.
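
    As a loose illustration of severity-continuous personalization, assumed here and much simpler than the triple-latent generative model, one can blend between the original colors and a fully compensated version according to a severity parameter; `full_compensation` below is a hypothetical placeholder for any recoloring routine.

```python
# Severity-parameterised blend between original and compensated colours.
import numpy as np

def full_compensation(rgb_image):
    """Hypothetical placeholder: push some red-green contrast into the blue channel."""
    out = rgb_image.copy()
    out[..., 2] = np.clip(out[..., 2] + 0.5 * (out[..., 0] - out[..., 1]), 0.0, 1.0)
    return out

def personalised_image(rgb_image, severity):
    """severity in [0, 1]: 0 leaves the image unchanged, 1 applies full compensation."""
    return (1.0 - severity) * rgb_image + severity * full_compensation(rgb_image)

img = np.random.default_rng(1).random((4, 4, 3))
mild, severe = personalised_image(img, 0.3), personalised_image(img, 1.0)
print(np.abs(mild - img).mean() < np.abs(severe - img).mean())  # True: milder CVD, milder change
```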

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space. We visualize each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and to examine how the shape of an object influences this perception. Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study to compare our prototype with a traditional material design system that uses a gray-scale background and synthetic lighting. The results demonstrate that with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map the virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method can provide a fabrication recipe with higher reconstruction accuracy over a large fitting gamut. We demonstrate the efficacy of our solution by comparing our reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.
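
    The multi-dimensional scaling step behind the perceptual space can be illustrated with classical MDS on a matrix of pairwise dissimilarities, such as might be aggregated from crowd-sourced comparisons; this is a generic sketch under assumed data, not the exact pipeline used in the dissertation.

```python
# Classical MDS: embed n items from an n x n pairwise dissimilarity matrix.
import numpy as np

def classical_mds(dissimilarity, dims=2):
    d2 = np.asarray(dissimilarity, float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    b = -0.5 * j @ d2 @ j                          # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:dims]       # keep the largest eigenvalues
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Four 'materials' whose dissimilarities match points on a line at 0, 1, 2, 3.
d = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], float)
coords = classical_mds(d, dims=1)
print(np.round(np.abs(coords[:, 0] - coords[0, 0]), 2))  # [0. 1. 2. 3.], up to sign and shift
```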

    Color image quality measures and retrieval

    The focus of this dissertation is mainly on color images, especially images with lossy compression. Issues related to color quantization, color correction, color image retrieval and color image quality evaluation are addressed. A no-reference color image quality index is proposed. A novel color correction method applied to low bit-rate JPEG images is developed. A novel method for content-based image retrieval based upon combined feature vectors of shape, texture, and color similarities is suggested. In addition, an image-specific color reduction method is introduced, which allows a 24-bit JPEG image to be shown on an 8-bit color monitor with a 256-color display. The reduction in download and decode time mainly comes from a smart encoder that incorporates the proposed color reduction method after the color space conversion stage. To summarize, the methods that have been developed fall into two categories: visual representation and image quality measures. Three algorithms are designed for visual representation: (1) an image-based visual representation for color correction on low bit-rate JPEG images. Previous studies on color correction mainly address color image calibration among devices; little attention was paid to compressed images, whose color distortion is evident at low JPEG bit rates. In this dissertation, a lookup table algorithm is designed based on the loss of PSNR at different compression ratios. (2) A feature-based representation for content-based image retrieval, a concatenated vector of color, shape, and texture features from a region of interest (ROI). (3) An image-specific 256-color (8-bit) reproduction for color reduction from 16 million colors (24 bits). By inserting the proposed color reduction method into a JPEG encoder, the image size can be further reduced and the transmission time shortened; this smart encoder also enables its decoder to spend less time decoding. Three algorithms are designed for image quality measurement (IQM): (1) a referenced IQM based upon image representation in a very low-dimensional space. Previous IQMs operate in high-dimensional domains, including the spatial and frequency domains; in this dissertation, a low-dimensional IQM based on random projection is designed that preserves the accuracy of high-dimensional IQMs. (2) A no-reference image blurring metric: based on the edge gradient, the degree of image blur can be measured. (3) A no-reference color IQM based upon colorfulness, contrast and sharpness.
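
    One ingredient of such a no-reference color IQM can be sketched concretely: the widely used colorfulness statistic of Hasler and Süsstrunk (2003), computed from the opponent channels rg = R - G and yb = (R + G)/2 - B. It is shown here for illustration only; the dissertation's combined metric also weighs contrast and sharpness.

```python
# Colorfulness statistic from opponent-channel means and standard deviations.
import numpy as np

def colorfulness(rgb):
    """rgb: H x W x 3 array with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g
    yb = 0.5 * (r + g) - b
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return float(sigma + 0.3 * mu)

rng = np.random.default_rng(2)
vivid = rng.random((32, 32, 3))                        # saturated random colours
gray = np.repeat(rng.random((32, 32, 1)), 3, axis=2)   # achromatic image
print(colorfulness(vivid) > colorfulness(gray))        # True: the gray image scores ~0
```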

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the release on the market of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, issues remain regarding how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, the thesis presents a thorough investigation of the existing literature and previous research, carried out systematically to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the role of familiarity with the observed place, the role of the environment characteristics shown to the viewer, and the role of the display used for the remote observation of the virtual environment are investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when the observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a great improvement for remote observation when combined with eye-tracked VR headsets. For this purpose, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
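
    The eye-adapted HDR idea can be sketched, under assumed names and a deliberately simple tone curve rather than the system built in the thesis, as re-exposing each HDR frame around the luminance at the tracked gaze point before compression, so the region being looked at stays well exposed.

```python
# Gaze-driven exposure followed by a simple global tone curve.
import numpy as np

def eye_adapted_tonemap(hdr_luma, gaze_rc, key=0.18, window=15):
    """hdr_luma: H x W luminance map; gaze_rc: (row, col) of the tracked gaze."""
    r, c = gaze_rc
    patch = hdr_luma[max(r - window, 0):r + window, max(c - window, 0):c + window]
    adapt = np.exp(np.mean(np.log(patch + 1e-6)))    # log-average luminance at the gaze
    scaled = key * hdr_luma / adapt                  # exposure adapted to the gaze region
    return scaled / (1.0 + scaled)                   # compress into [0, 1)

# Scene with a dark left half and a very bright right half.
hdr = np.concatenate([np.full((64, 32), 0.05), np.full((64, 32), 50.0)], axis=1)
dark_gaze = eye_adapted_tonemap(hdr, (32, 8))
bright_gaze = eye_adapted_tonemap(hdr, (32, 56))
print(dark_gaze[32, 8] > bright_gaze[32, 8])  # True: the dark region is readable when looked at
```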