
    Escaping RGBland: Selecting Colors for Statistical Graphics

    Statistical graphics are often augmented by color coding the information contained in some variable. When this involves shading areas (and not only points or lines), e.g., in bar plots, pie charts, mosaic displays, or heatmaps, it is important that the colors are perceptually based and do not introduce optical illusions or systematic bias. Here, we discuss how the perceptually based Hue-Chroma-Luminance (HCL) color space can be used to derive suitable color palettes for coding categorical data (qualitative palettes) and numerical variables (sequential and diverging palettes).
    Series: Research Report Series / Department of Statistics and Mathematics
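
    HCL as used here is commonly taken to be the polar form of CIELUV: hue is the angle, chroma the radius, and luminance the L* axis. As a rough illustration (not the authors' implementation; the D65 white point and sRGB matrix are standard constants, while the particular hue/chroma/luminance trajectory is an assumption chosen for the example), a sequential palette can be built by holding hue fixed and varying chroma and luminance monotonically, converting each HCL triple to sRGB:

```python
import numpy as np

# D65 white point in CIE u'v' (standard values).
UN, VN = 0.1978398, 0.4683363

def hcl_to_srgb(h, c, l):
    """Convert one HCL (polar CIELUV) color to sRGB in [0, 1], clipping out-of-gamut values."""
    if l <= 0:
        return np.zeros(3)
    # HCL -> CIELUV
    h_rad = np.deg2rad(h)
    u, v = c * np.cos(h_rad), c * np.sin(h_rad)
    # CIELUV -> XYZ (relative to D65, Y_n = 1)
    y = ((l + 16.0) / 116.0) ** 3 if l > 8 else l * (3.0 / 29.0) ** 3
    up = u / (13.0 * l) + UN
    vp = v / (13.0 * l) + VN
    x = y * 9.0 * up / (4.0 * vp)
    z = y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp)
    # XYZ -> linear sRGB (standard matrix), clipped to the displayable range
    rgb_lin = np.array([
        3.2406 * x - 1.5372 * y - 0.4986 * z,
        -0.9689 * x + 1.8758 * y + 0.0415 * z,
        0.0557 * x - 0.2040 * y + 1.0570 * z,
    ])
    rgb_lin = np.clip(rgb_lin, 0.0, 1.0)
    # sRGB gamma encoding
    return np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1.0 / 2.4) - 0.055)

def sequential_palette(n, hue=260.0, c_range=(80.0, 10.0), l_range=(30.0, 90.0)):
    """Sequential palette: fixed hue, chroma and luminance varied monotonically."""
    cs = np.linspace(*c_range, n)
    ls = np.linspace(*l_range, n)
    return [hcl_to_srgb(hue, c, l) for c, l in zip(cs, ls)]

print(np.round(sequential_palette(5), 3))
```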

    Redefining A in RGBA: Towards a Standard for Graphical 3D Printing

    Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions, and a standard observer, translucency (encoded by A) is not linked to any measurable physical or perceptual quantity. Thus, reproducing translucency encoded by A is open to interpretation. In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links optical material properties with perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible that preserves the translucency appearance if an object is rescaled for printing. Second, the value of A for a real (potentially non-homogeneous) material can be determined by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts. Finally, we conduct visual experiments employing the method of constant stimuli and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.
    Comment: 20 pages (incl. appendices), 20 figures. Version with higher quality images: https://cloud-ext.igd.fraunhofer.de/s/pAMH67XjstaNcrF (main article) and https://cloud-ext.igd.fraunhofer.de/s/4rR5bH3FMfNsS5q (appendix). Supplemental material including code: https://cloud-ext.igd.fraunhofer.de/s/9BrZaj5Uh5d0cOU/downloa
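
    The abstract states that A for a real material can be obtained by minimizing a distance between light-transport measurements of that material and simulated measurements of the homogeneous reference materials. The sketch below only illustrates that fitting step in its simplest form; `simulate_reference` is a hypothetical stand-in (a real implementation would simulate the reference absorption/scattering coefficients), and all numbers are invented:

```python
import numpy as np

# Hypothetical placeholder: simulated light-transport measurements (e.g., transmittance
# at several sample thicknesses) for the homogeneous reference material indexed by a
# single parameter `a` in [0, 1]. A real implementation would use a radiative-transfer
# simulation of the reference absorption/scattering coefficients.
def simulate_reference(a, thicknesses):
    # Illustrative stand-in only: exponential falloff whose rate depends on a.
    return np.exp(-(1.0 - a) * 5.0 * thicknesses)

def estimate_alpha(measured, thicknesses, n_grid=1001):
    """Pick the reference-material parameter whose simulated response is closest
    (in least-squares distance) to the measurements of the real material."""
    grid = np.linspace(0.0, 1.0, n_grid)
    dists = [np.sum((simulate_reference(a, thicknesses) - measured) ** 2) for a in grid]
    return grid[int(np.argmin(dists))]

thicknesses = np.array([0.5, 1.0, 2.0, 4.0])     # mm, illustrative
measured = np.array([0.70, 0.50, 0.26, 0.07])    # invented spectrophotometer readings
print("estimated reference-material parameter:", estimate_alpha(measured, thicknesses))
```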

    An intuitive control space for material appearance

    Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, or novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly, in order to support and encourage further research in this direction.
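
    To illustrate the kind of mapping described (radial basis functions from perceptual attributes to a PCA-based BRDF representation), here is a minimal sketch using a generic RBF interpolator on synthetic stand-in data; the attribute count, PCA dimensionality, and all values are placeholders, not the paper's dataset or trained functionals:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-ins: 400 materials, 6 high-level perceptual attributes, and a
# 5-dimensional PCA representation of each material's BRDF. Real data would come
# from the crowdsourced ratings and the PCA of the extended BRDF dataset.
n_materials, n_attributes, n_pca = 400, 6, 5
attributes = rng.uniform(0.0, 1.0, size=(n_materials, n_attributes))
pca_coeffs = rng.normal(size=(n_materials, n_pca))      # placeholder targets
pca_basis = rng.normal(size=(n_pca, 90 * 90))           # placeholder BRDF basis
pca_mean = rng.normal(size=90 * 90)

# RBF functional: perceptual attributes -> PCA coefficients.
functional = RBFInterpolator(attributes, pca_coeffs, kernel="thin_plate_spline",
                             smoothing=1e-3)

# "Edit" a material by asking for a specific attribute combination,
# then reconstruct a BRDF-like vector from the PCA model.
target_attributes = np.array([[0.8, 0.2, 0.5, 0.9, 0.1, 0.4]])
coeffs = functional(target_attributes)
brdf_vector = pca_mean + coeffs @ pca_basis
print(brdf_vector.shape)
```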

    Choosing Colors for Geometric Graphs via Color Space Embeddings

    Graph drawing research traditionally focuses on producing geometric embeddings of graphs satisfying various aesthetic constraints. After the geometric embedding is specified, there is an additional step that is often overlooked or ignored: assigning display colors to the graph's vertices. We study the additional aesthetic criterion of assigning distinct colors to vertices of a geometric graph so that the colors assigned to adjacent vertices are as different from one another as possible. We formulate this as a problem involving perceptual metrics in color space and we develop algorithms for solving this problem by embedding the graph in color space. We also present an application of this work to a distributed load-balancing visualization problem.
    Comment: 12 pages, 4 figures. To appear at 14th Int. Symp. Graph Drawing, 200
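
    A minimal sketch of the embedding idea: place vertices in color space so that adjacent vertices end up far apart. The version below is an assumption-laden toy, not the paper's algorithms; it uses a box-shaped approximation of CIELAB, Euclidean distance as a stand-in for a perceptual metric, and a simple repulsion heuristic:

```python
import numpy as np

def color_embed(edges, n_vertices, iters=500, step=2.0, seed=0):
    """Push adjacent vertices apart inside a rough L*a*b* bounding box.
    Repulsion heuristic only; not an exact or gamut-aware solver."""
    rng = np.random.default_rng(seed)
    lo = np.array([20.0, -60.0, -60.0])   # assumed L*, a*, b* bounds
    hi = np.array([90.0, 60.0, 60.0])
    pos = rng.uniform(lo, hi, size=(n_vertices, 3))
    for _ in range(iters):
        forces = np.zeros_like(pos)
        for u, v in edges:
            d = pos[u] - pos[v]
            dist = np.linalg.norm(d) + 1e-9
            push = d / dist / dist          # repulsion decays with distance
            forces[u] += push
            forces[v] -= push
        pos = np.clip(pos + step * forces, lo, hi)
    return pos

# Toy example: a 4-cycle; adjacent vertices should end up far apart in Lab.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
lab_colors = color_embed(edges, 4)
for u, v in edges:
    print(u, v, round(float(np.linalg.norm(lab_colors[u] - lab_colors[v])), 1))
```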

    GreenVis: Energy-Saving Color Schemes for Sequential Data Visualization on OLED Displays

    The organic light emitting diode (OLED) display has recently become popular in the consumer electronics market. Unlike current LCD technology, an OLED display emits light from the pixels themselves and does not need an external backlight as the illumination source. In this paper, we offer an approach to reducing power consumption on OLED displays for sequential data visualization. First, we create a multi-objective optimization approach that finds the most energy-saving color scheme for given levels of visual perceptual difference. Second, we apply the model in two situations: pre-designed color schemes and auto-generated color schemes. Third, our experimental results show that the energy-saving sequential color scheme reduces power consumption by 17.2% for pre-designed color schemes. For auto-generated color schemes, it saves 21.9% of energy in comparison to the reference color scheme for sequential data.
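
    The two ingredients in such an optimization are a per-pixel power model and a perceptual-difference constraint between consecutive palette colors. The sketch below is only illustrative: the per-channel power weights are invented (real values are panel-specific), and a crude RGB distance stands in for the perceptual-difference level, but it shows why darker, green-leaning ramps tend to cost less energy on OLEDs than bright gray ramps:

```python
import numpy as np

def srgb_to_linear(c):
    """Undo sRGB gamma so power scales roughly with emitted light."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Assumed per-channel power weights; actual values depend on the OLED panel.
POWER_WEIGHTS = np.array([0.4, 0.3, 0.7])

def palette_power(palette_srgb):
    """Relative power of displaying each palette color over equal areas."""
    return float(np.sum(srgb_to_linear(palette_srgb) @ POWER_WEIGHTS))

def min_step_difference(palette_srgb):
    """Smallest RGB difference between consecutive colors (crude stand-in
    for the perceptual-difference constraint)."""
    p = np.asarray(palette_srgb, dtype=float)
    return float(np.min(np.linalg.norm(np.diff(p, axis=0), axis=1)))

bright_ramp = [(x, x, x) for x in np.linspace(0.3, 1.0, 5)]          # light grays: costly
dark_ramp = [(0.0, 0.8 * x, 0.0) for x in np.linspace(0.2, 1.0, 5)]  # dark greens: cheaper
for name, pal in [("bright ramp", bright_ramp), ("dark ramp", dark_ramp)]:
    print(name, "power:", round(palette_power(pal), 2),
          "min step:", round(min_step_difference(pal), 2))
```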

    Evaluation of color representation for texture analysis

    Texture in image material has been a topic of research for more than 50 years, but color has mostly been ignored in that work. This study compares 70 different configurations for texture analysis, using four features. The configurations combine: (i) a gray-value texture descriptor, the co-occurrence matrix, and a color texture descriptor, the color correlogram; (ii) six color spaces; and (iii) several quantization schemes. A combination of three classifiers was used to classify the output of the configurations on the VisTex texture database. The results indicate that a coarse HSV color space quantization can substantially improve texture recognition compared to various other gray-value and color quantization schemes.
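
    As a rough illustration of the winning ingredient reported above, the sketch below quantizes an image into a coarse HSV grid and computes a co-occurrence matrix over the quantized labels, from which standard texture features can be derived. The bin counts, pixel offset, and the random stand-in image are assumptions, not the study's exact configurations:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def quantize_hsv(rgb_image, bins=(6, 3, 3)):
    """Coarse HSV quantization: map each pixel to a single bin index
    (assumed 6x3x3 = 54 bins here)."""
    hsv = rgb_to_hsv(np.clip(rgb_image, 0.0, 1.0))
    idx = np.zeros(hsv.shape[:2], dtype=int)
    mult = 1
    for channel in (2, 1, 0):            # combine V, S, H indices into one code
        b = bins[channel]
        q = np.minimum((hsv[..., channel] * b).astype(int), b - 1)
        idx += q * mult
        mult *= b
    return idx, bins[0] * bins[1] * bins[2]

def cooccurrence(labels, n_levels, dx=1, dy=0):
    """Co-occurrence matrix of quantized labels for one pixel offset."""
    a = labels[:labels.shape[0] - dy, :labels.shape[1] - dx]
    b = labels[dy:, dx:]
    m = np.zeros((n_levels, n_levels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m

rng = np.random.default_rng(0)
image = rng.uniform(size=(64, 64, 3))            # stand-in for a VisTex texture
labels, n = quantize_hsv(image)
glcm = cooccurrence(labels, n)
energy = float(np.sum((glcm / glcm.sum()) ** 2))  # one classic co-occurrence feature
print(glcm.shape, round(energy, 4))
```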

    Fast Color Space Transformations Using Minimax Approximations

    Color space transformations are frequently used in image processing, graphics, and visualization applications. In many cases, these transformations are complex nonlinear functions, which prohibits their use in time-critical applications. In this paper, we present a new approach called Minimax Approximations for Color-space Transformations (MACT). We demonstrate MACT on three commonly used color space transformations. Extensive experiments on a large and diverse image set, and comparisons with well-known multidimensional lookup table interpolation methods, show that MACT achieves an excellent balance among four criteria: ease of implementation, memory usage, accuracy, and computational speed.
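
    The general idea is to replace an expensive nonlinear step of a color space transformation with a cheap polynomial whose maximum error is controlled. As a hedged illustration (not the MACT construction itself; a dense least-squares fit in the Chebyshev basis is used here as a stand-in for a true minimax/Remez fit), the cube-root nonlinearity from the XYZ-to-CIELAB transform is approximated by a degree-5 polynomial and the worst-case error reported:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Nonlinear step of the XYZ -> CIELAB transform (above the linear-segment threshold).
def f(t):
    return np.cbrt(t)

# Fit a low-degree Chebyshev-basis approximation on [eps, 1]; a dense least-squares
# fit is a convenient stand-in for a true minimax fit and behaves similarly here.
eps = (6.0 / 29.0) ** 3
t = np.linspace(eps, 1.0, 20001)
coeffs = C.chebfit(t, f(t), deg=5)

approx = C.chebval(t, coeffs)
max_abs_err = float(np.max(np.abs(approx - f(t))))
print("degree-5 approximation, max abs error on [eps, 1]:", max_abs_err)
```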

    A Similarity Measure for Material Appearance

    We present a model to measure the similarity in appearance between different materials, which correlates with human similarity judgments. We first create a database of 9,000 rendered images depicting objects with varying materials, shape, and illumination. We then gather data on perceived similarity from crowdsourced experiments; our analysis of over 114,840 answers suggests that a shared perception of appearance similarity does indeed exist. We feed this data to a deep learning architecture with a novel loss function, which learns a feature space for materials that correlates with such perceived appearance similarity. Our evaluation shows that our model outperforms existing metrics. Last, we demonstrate several applications enabled by our metric, including appearance-based search for material suggestions, database visualization, clustering and summarization, and gamut mapping.
    Comment: 12 pages, 17 figures
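
    The abstract does not spell out the loss, so the sketch below only shows the general pattern of learning an embedding whose distances track similarity judgments: a small PyTorch encoder trained with a standard triplet margin loss on (anchor, similar, dissimilar) batches. The architecture, loss, and data are illustrative assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

# Small illustrative encoder; the paper's architecture and custom loss differ.
class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        z = self.net(x)
        return nn.functional.normalize(z, dim=1)   # unit-length embeddings

encoder = Encoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)         # standard stand-in for the paper's loss
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# Fake batch: anchor renderings, a material judged similar, and one judged dissimilar.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optimizer.step()
print(float(loss))
```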