1,541 research outputs found

    Harnessing Collaborative Technologies: Helping Funders Work Together Better

    This report was produced through a joint research project of the Monitor Institute and the Foundation Center. The research included an extensive literature review on collaboration in philanthropy, detailed analysis of trends from a recent Foundation Center survey of the largest U.S. foundations, interviews with 37 leading philanthropy professionals and technology experts, and a review of over 170 online tools. The report tells the story of how new tools are changing the way funders collaborate. It includes three primary sections: an introduction to emerging technologies and the changing context for philanthropic collaboration; an overview of collaborative needs and tools; and recommendations for improving the collaborative technology landscape. A "Key Findings" executive summary serves as a companion piece to this full report.

    A Similarity Measure for Material Appearance

    We present a model to measure the similarity in appearance between different materials, which correlates with human similarity judgments. We first create a database of 9,000 rendered images depicting objects with varying materials, shapes, and illumination. We then gather data on perceived similarity from crowdsourced experiments; our analysis of over 114,840 answers suggests that a shared perception of appearance similarity does indeed exist. We feed this data to a deep learning architecture with a novel loss function, which learns a feature space for materials that correlates with such perceived appearance similarity. Our evaluation shows that our model outperforms existing metrics. Finally, we demonstrate several applications enabled by our metric, including appearance-based search for material suggestions, database visualization, clustering and summarization, and gamut mapping. Comment: 12 pages, 17 figures.
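    The abstract does not spell out the architecture, so the sketch below is only a hedged illustration of how a feature space correlating with perceived similarity might be trained, using a standard triplet loss in PyTorch. The backbone, embedding size, margin, and triplet sampling are all assumptions made for illustration, not the paper's actual design or its novel loss.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MaterialEmbedder(nn.Module):
    """Maps a rendered material image to a point in feature space."""
    def __init__(self, dim=128):
        super().__init__()
        base = models.resnet18(weights=None)  # assumed backbone, not the paper's
        base.fc = nn.Linear(base.fc.in_features, dim)
        self.net = base

    def forward(self, x):
        # L2-normalize so Euclidean distance behaves like a similarity metric
        return nn.functional.normalize(self.net(x), dim=1)

model = MaterialEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)  # stand-in for the paper's novel loss

# anchor/pos/neg: image batches where crowd judgments rated (anchor, pos)
# as more similar in appearance than (anchor, neg); random tensors here
anchor, pos, neg = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = loss_fn(model(anchor), model(pos), model(neg))
loss.backward()
```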

    An intuitive control space for material appearance

    Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this paper, we present an intuitive control space for predictable editing of captured BRDF data, which allows for artistic creation of plausible novel material appearances, bypassing the difficulty of acquiring novel samples. We first synthesize novel materials, extending the existing MERL dataset up to 400 mathematically valid BRDFs. We then design a large-scale experiment, gathering 56,000 subjective ratings on the high-level perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals mapping the perceptual attributes to an underlying PCA-based representation of BRDFs. We show that our functionals are excellent predictors of the perceived attributes of appearance. Our control space enables many applications, including intuitive material editing of a wide range of visual properties, guidance for gamut mapping, analysis of the correlation between perceptual attributes, and novel appearance similarity metrics. Moreover, our methodology can be used to derive functionals applicable to classic analytic BRDF representations. We release our code and dataset publicly in order to support and encourage further research in this direction.
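    As a rough illustration of the mapping described above, the following sketch fits an RBF functional from perceptual attribute ratings to a PCA-based representation, then edits a material along one attribute. The data shapes, attribute count, and kernel choice are assumptions for illustration; the paper trains networks of radial basis functions on its 56,000 crowdsourced ratings rather than this single interpolator.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
brdfs = rng.normal(size=(400, 4096))        # stand-in for flattened BRDF data
ratings = rng.uniform(0, 1, size=(400, 6))  # 6 hypothetical perceptual attributes

# Underlying low-dimensional representation of the BRDF dataset
pca = PCA(n_components=20)
coeffs = pca.fit_transform(brdfs)

# Functional: perceptual attributes -> PCA coefficients
f = RBFInterpolator(ratings, coeffs, kernel='thin_plate_spline')

# Edit a material by moving along one perceptual axis, then reconstruct
edited = ratings[0].copy()
edited[2] += 0.3  # e.g. increase a "glossiness"-like attribute
new_brdf = pca.inverse_transform(f(edited[None, :]))
```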

    Choosing Colors for Geometric Graphs via Color Space Embeddings

    Graph drawing research traditionally focuses on producing geometric embeddings of graphs satisfying various aesthetic constraints. After the geometric embedding is specified, there is an additional step that is often overlooked or ignored: assigning display colors to the graph's vertices. We study the additional aesthetic criterion of assigning distinct colors to the vertices of a geometric graph so that the colors assigned to adjacent vertices are as different from one another as possible. We formulate this as a problem involving perceptual metrics in color space, and we develop algorithms for solving it by embedding the graph in color space. We also present an application of this work to a distributed load-balancing visualization problem. Comment: 12 pages, 4 figures. To appear at the 14th Int. Symp. Graph Drawing, 2006.
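    To make the idea concrete, here is a toy sketch that spreads a graph's vertices through an approximate CIELAB cube by repelling adjacent vertices with simple gradient steps. It is deliberately cruder than the paper's color-space embedding algorithms: the box bounds, step size, and update rule are all illustrative assumptions.

```python
import numpy as np

def color_graph(n, edges, iters=500, step=2.0, seed=0):
    """Assign each of n vertices a Lab color, pushing adjacent ones apart."""
    rng = np.random.default_rng(seed)
    lo = np.array([0.0, -80.0, -80.0])   # L in [0,100]; a, b roughly [-80,80]
    hi = np.array([100.0, 80.0, 80.0])
    pos = rng.uniform(lo, hi, size=(n, 3))
    for _ in range(iters):
        grad = np.zeros_like(pos)
        for u, v in edges:
            d = pos[u] - pos[v]
            dist = np.linalg.norm(d) + 1e-9
            grad[u] += d / dist**2  # repulsion decays with distance,
            grad[v] -= d / dist**2  # so well-separated pairs barely move
        pos = np.clip(pos + step * grad, lo, hi)
    return pos  # rows are Lab colors, one per vertex

lab = color_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```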

    Convolutional Color Constancy

    Color constancy is the problem of inferring the color of the light that illuminated a scene, usually so that the illumination color can be removed. Because this problem is underconstrained, it is often solved by modeling the statistical regularities of the colors of natural objects and illumination. In contrast, in this paper we reformulate color constancy as a 2D spatial localization task in a log-chrominance space, thereby allowing us to apply techniques from object detection and structured prediction to the problem. By directly learning how to discriminate between correctly white-balanced images and poorly white-balanced images, our model is able to improve performance on standard benchmarks by nearly 40%.
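    The log-chrominance reformulation can be sketched directly: a global tint of the illuminant becomes a translation of the image's log-chrominance histogram, so estimating the illuminant amounts to localizing that shift in 2D. The minimal NumPy sketch below builds such a histogram; the bin count and range are illustrative assumptions, and the learned discriminative filter that is convolved with the histogram is omitted.

```python
import numpy as np

def log_chroma_histogram(rgb, bins=64, lim=2.0):
    """rgb: (H, W, 3) linear image with values > 0."""
    eps = 1e-6
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    u = np.log(g + eps) - np.log(r + eps)  # log(G/R)
    v = np.log(g + eps) - np.log(b + eps)  # log(G/B)
    hist, _, _ = np.histogram2d(u.ravel(), v.ravel(),
                                bins=bins,
                                range=[[-lim, lim], [-lim, lim]])
    # A tint of the illuminant shifts this histogram; estimating the
    # illuminant means finding that 2D shift.
    return hist / hist.sum()

img = np.random.rand(128, 128, 3) + 1e-3  # stand-in for a linear photo
h = log_chroma_histogram(img)
```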

    New software for comparing the color gamuts generated by printing technologies

    In the color industry, it is vital to know the color gamut of a given device. Several tools for visualizing and comparing color gamuts are available, but each has some drawbacks. The aim of this work was therefore to develop and validate new software for comparing the color gamuts generated by printing devices; we also developed an automated color measurement system. The software simultaneously represents the gamuts in the 3D CIELAB space. It also calculates the Gamut Comparison Index and the volume using two algorithms (Convex Hull and Alpha Shapes). To evaluate the performance of our software, we first compared the results it obtained for the color gamuts with those from other comparison methods, such as representation in the CIE 1931 chromaticity diagram or other color spaces. Next, we used Interactive Color Correction in 3 Dimensions (ICC3D) software to compare the gamut representations and volumes. Our software allowed us to identify differences between color gamuts that were not discriminated by other methods. This new software will enable the study and comparison of gamuts generated by different printing technologies and using different printing substrates, International Color Consortium profiles, inks, and light sources, thereby helping to achieve high-quality color images.
    Funding: Optics Group (FQM151, University of Granada); University of Granada (pre-doctoral contract, Training Programme for Research Staff, FPU); funding for open access charge: University of Granada/CBU
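    As an illustration of the volume computation mentioned above, the sketch below estimates and compares two gamut volumes in CIELAB using a convex hull via SciPy. The patch data are synthetic stand-ins, only the Convex Hull variant is shown (SciPy has no built-in alpha-shape volume), and the Gamut Comparison Index itself is not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
# Stand-ins for measured Lab values of printed color patches
lab_a = rng.uniform([20, -60, -60], [90, 60, 60], size=(500, 3))
lab_b = rng.uniform([25, -50, -50], [85, 55, 55], size=(500, 3))

vol_a = ConvexHull(lab_a).volume  # hull volume in cubic CIELAB units
vol_b = ConvexHull(lab_b).volume
print(f"gamut A: {vol_a:.0f} cubic CIELAB units")
print(f"gamut B: {vol_b:.0f} ({100 * vol_b / vol_a:.1f}% of A)")
```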