
    Sharpness rules

    A large-scale psychophysical experiment was performed to examine the effects of simultaneous variations of image parameters on perceived image sharpness. The goal of this experiment was to uncover some of the rules of image sharpness perception. A paired comparison paradigm was used to compare images differing in resolution, contrast, noise, and sharpening. In total, 50 people performed over 140,000 observations. The results indicate several very interesting tradeoffs among the parameters of contrast, noise, resolution, and spatial sharpening. An interval scale of image sharpness was created and then used to test the results of several existing models of color and spatial vision. The ultimate goal of this experiment, along with the visual modeling, is to obtain a mathematical model of perceived image quality.
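The abstract does not say how the interval scale was derived from the paired-comparison data; one standard method for this is Thurstone's Case V scaling, sketched below. The win-matrix layout and the 0.01 clamping threshold (used to keep z-scores finite when one stimulus wins every comparison) are illustrative assumptions, not details from the paper:

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Interval-scale values from paired-comparison data (Thurstone Case V).

    wins[i][j] = number of times stimulus i was judged sharper than j.
    Returns one scale value per stimulus; only differences are meaningful.
    """
    n = len(wins)
    nd = NormalDist()
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # Clamp proportions away from 0 and 1 so inv_cdf stays finite.
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            z[i][j] = nd.inv_cdf(p)
    # Case V: the scale value of stimulus i is the mean of its row of z-scores.
    return [sum(row) / n for row in z]
```

With three stimuli where each "sharper" image wins 90 of 100 comparisons against the next, the recovered scale values are evenly spaced and correctly ordered.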

    Reliability, Sufficiency, and the Decomposition of Proper Scores

    Scoring rules are an important tool for evaluating the performance of probabilistic forecasting schemes. In the binary case, strictly proper scoring rules allow for a decomposition into terms related to the resolution and to the reliability of the forecast. This fact is particularly well known for the Brier score. In this paper, this result is extended to forecasts for finite-valued targets. Both resolution and reliability are shown to have a positive effect on the score. It is demonstrated that resolution and reliability are directly related to forecast attributes which are desirable on grounds independent of the notion of scores. This finding can be considered an epistemological justification of measuring forecast quality by proper scores. A link is provided to the original work of DeGroot et al. (1982), extending their concepts of sufficiency and refinement. The relation to the conjectured sharpness principle of Gneiting et al. (2005a) is elucidated. Comment: v1: 9 pages, submitted to International Journal of Forecasting; v2: 12 pages, significant change of contents, stronger focus on decomposition, extensive comments on and extensions of earlier work, in particular sufficiency.
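For the binary case mentioned above, the decomposition is the classical Murphy decomposition of the Brier score, BS = reliability - resolution + uncertainty. A minimal sketch (the ten-bin grouping is an illustrative choice; the identity is exact when all forecasts within a bin coincide):

```python
def brier_decomposition(forecasts, outcomes, n_bins=10):
    """Murphy decomposition of the Brier score for binary outcomes.

    Returns (reliability, resolution, uncertainty), where the Brier score
    satisfies BS = reliability - resolution + uncertainty (exactly, when
    forecasts within each bin are identical).
    """
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    uncertainty = base_rate * (1 - base_rate)

    # Group forecast/outcome pairs by forecast bin.
    bins = {}
    for f, o in zip(forecasts, outcomes):
        k = min(int(f * n_bins), n_bins - 1)
        bins.setdefault(k, []).append((f, o))

    reliability = resolution = 0.0
    for items in bins.values():
        nk = len(items)
        f_bar = sum(f for f, _ in items) / nk  # mean forecast in bin
        o_bar = sum(o for _, o in items) / nk  # observed frequency in bin
        reliability += nk / n * (f_bar - o_bar) ** 2
        resolution += nk / n * (o_bar - base_rate) ** 2
    return reliability, resolution, uncertainty
```

Reliability penalizes miscalibration (forecast probabilities disagreeing with observed frequencies), while resolution rewards forecasts that separate the bins' observed frequencies from the base rate, matching the signs in the decomposition.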

    Sharp benefit-to-cost rules for the evolution of cooperation on regular graphs

    We study two of the simple rules on finite graphs under the death-birth updating and the imitation updating discovered by Ohtsuki, Hauert, Lieberman and Nowak [Nature 441 (2006) 502-505]. Each rule specifies a payoff-ratio cutoff point for the magnitude of fixation probabilities of the underlying evolutionary game between cooperators and defectors. We view the Markov chains associated with the two updating mechanisms as voter model perturbations. We then present a first-order approximation for fixation probabilities of general voter model perturbations on finite graphs, subject to small perturbation, in terms of the voter model fixation probabilities. In the context of regular graphs, we obtain algebraically explicit first-order approximations for the fixation probabilities of cooperators distributed as certain uniform distributions. These approximations lead to a rigorous proof that both of the rules of Ohtsuki et al. are valid and are sharp. Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/) at http://dx.doi.org/10.1214/12-AAP849 by the Institute of Mathematical Statistics (http://www.imstat.org).
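The two cutoff rules proved sharp here can be stated compactly: on a regular graph of degree k, death-birth updating favors cooperation when the benefit-to-cost ratio exceeds k, and imitation updating when it exceeds k + 2 (the rules of Ohtsuki et al. as given in the Nature paper). A minimal encoding:

```python
def db_favors_cooperation(b, c, k):
    """Death-birth updating on a regular graph of degree k:
    cooperation is favored iff the benefit-to-cost ratio b/c exceeds k."""
    return b / c > k

def im_favors_cooperation(b, c, k):
    """Imitation updating on a regular graph of degree k:
    cooperation is favored iff b/c exceeds k + 2."""
    return b / c > k + 2
```

Sharpness of the rules means these inequalities are exact cutoffs: for b/c below the threshold the corresponding update rule does not favor cooperation.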

    Analyzing image-text relations for semantic media adaptation and personalization

    Progress in semantic media adaptation and personalisation requires that we know more about how different media types, such as texts and images, work together in multimedia communication. To this end, we present our ongoing investigation into image-text relations. Our idea is that the ways in which the meanings of images and texts relate in multimodal documents, such as web pages, can be classified on the basis of low-level media features, and that this classification should be an early processing step in systems targeting semantic multimedia analysis. In this paper we present the first empirical evidence that humans can predict something about the main theme of a text from an accompanying image, and that this prediction can be emulated by a machine via analysis of low-level image features. We close by discussing how these findings could impact on applications for news adaptation and personalisation, and how they may generalise to other kinds of multimodal documents and to applications for semantic media retrieval, browsing, adaptation and creation.

    Fuzzy control system for a remote focusing microscope

    Space Station Crew Health Care System procedures require the use of an on-board microscope whose slide images will be transmitted for analysis by ground-based microbiologists. Focusing of microscope slides is low on the list of crew priorities, so NASA is investigating the option of telerobotic focusing controlled by the microbiologist on the ground, using continuous video feedback. However, even at Space Station distances, the transmission time lag may disrupt the focusing process, severely limiting the number of slides that can be analyzed within a given bandwidth allocation. Substantial time could be saved if on-board automation could pre-focus each slide before transmission. The authors demonstrate the feasibility of on-board automatic focusing using a fuzzy logic rule-based system to bring the slide image into focus. The original prototype system was produced in under two months and at low cost. Slide images are captured by a video camera, then digitized by gray-scale value. A software function calculates an index of 'sharpness' based on gray-scale contrasts. The fuzzy logic rule-based system uses feedback to set the microscope's focusing control in an attempt to maximize sharpness. The system as currently implemented performs satisfactorily in focusing a variety of slide types at magnification levels ranging from 10x to 1000x. Although feasibility has been demonstrated, the system's performance and usability could be improved substantially in four ways: by upgrading the quality and resolution of the video imaging system (including the use of full color); by empirically defining and calibrating the index of image sharpness; by letting the overall focusing strategy vary depending on user-specified parameters; and by fine-tuning the fuzzy rules, set definitions, and procedures used.
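The focusing loop described above can be sketched in a few lines. The sharpness index below (mean absolute difference between neighboring gray-scale values) and the exhaustive search over focus positions are illustrative stand-ins, since the abstract specifies neither the paper's actual contrast index nor the fuzzy controller's rule base:

```python
def sharpness(image):
    """Gray-scale contrast index: mean absolute difference between
    horizontally and vertically adjacent pixels (higher = sharper)."""
    rows, cols = len(image), len(image[0])
    total = count = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                total += abs(image[r][c] - image[r][c + 1])
                count += 1
            if r + 1 < rows:
                total += abs(image[r][c] - image[r + 1][c])
                count += 1
    return total / count

def autofocus(capture, positions):
    """Return the focus position that maximizes the sharpness index.

    `capture` maps a focus position to a digitized gray-scale image
    (a stand-in for the camera and digitizer); exhaustive search here
    stands in for the fuzzy feedback controller of the paper.
    """
    return max(positions, key=lambda p: sharpness(capture(p)))
```

A feedback controller would instead adjust the focus incrementally based on whether sharpness rose or fell, which matters when each capture is expensive; the maximization objective is the same.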

    Adiabaticity and spectral splits in collective neutrino transformations

    Neutrinos streaming off a supernova core transform collectively by neutrino-neutrino interactions, leading to "spectral splits" where an energy E_split divides the transformed spectrum sharply into parts of almost pure but different flavors. We present a detailed description of the spectral split phenomenon, which is conceptually and quantitatively understood in an adiabatic treatment of neutrino-neutrino effects. Central to this theory is a self-consistency condition in the form of two sum rules (integrals over the neutrino spectra that must equal certain conserved quantities). We provide explicit analytic and numerical solutions for various neutrino spectra. We introduce the concept of the adiabatic reference frame and elaborate on the relative adiabatic evolution. Violating adiabaticity leads to the spectral split being "washed out". The sharpness of the split appears to be represented by a surprisingly universal function. Comment: 20 pages, revtex, 13 figures.

    Evaluating epidemic forecasts in an interval format

    For practical reasons, many forecasts of case, hospitalization and death counts in the context of the current COVID-19 pandemic are issued in the form of central predictive intervals at various levels. This is also the case for the forecasts collected in the COVID-19 Forecast Hub (https://covid19forecasthub.org/). Forecast evaluation metrics like the logarithmic score, which has been applied in several infectious disease forecasting challenges, are then not available, as they require full predictive distributions. This article provides an overview of how established methods for the evaluation of quantile and interval forecasts can be applied to epidemic forecasts in this format. Specifically, we discuss the computation and interpretation of the weighted interval score, a proper score that approximates the continuous ranked probability score. It can be interpreted as a generalization of the absolute error to probabilistic forecasts and allows for a decomposition into a measure of sharpness and penalties for over- and underprediction.
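A sketch of the scores discussed above: the interval score for a central (1 - alpha) prediction interval decomposes into a sharpness term (the interval width) plus over- and underprediction penalties, and the weighted interval score (WIS) combines the median and several such intervals with weights w_0 = 1/2 and w_k = alpha_k / 2:

```python
def interval_score(l, u, y, alpha):
    """Interval score of a central (1 - alpha) prediction interval [l, u]
    for observation y: width (sharpness) plus penalties when y falls
    outside the interval."""
    width = u - l                                    # sharpness term
    underpred = (2 / alpha) * (l - y) if y < l else 0.0
    overpred = (2 / alpha) * (y - u) if y > u else 0.0
    return width + underpred + overpred

def weighted_interval_score(median, intervals, y):
    """WIS from a predictive median and a list of (alpha, l, u) central
    intervals, using the standard weights w_0 = 1/2, w_k = alpha_k / 2,
    under which the WIS approximates the CRPS and reduces to the
    absolute error when no intervals are supplied."""
    total = 0.5 * abs(y - median)
    for alpha, l, u in intervals:
        total += (alpha / 2) * interval_score(l, u, y, alpha)
    return total / (len(intervals) + 0.5)
```

With these weights, reporting only a median gives WIS = |y - median|, which is the sense in which the score generalizes the absolute error to probabilistic forecasts.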