14 research outputs found

    Optimal prefilters for display enhancement

    Creating images from a set of discrete samples is arguably the most common operation in computer graphics and image processing, lying, for example, at the heart of rendering and image downscaling techniques. Traditional tools for this task are based on classic sampling theory and are modeled under mathematical conditions which are, in most cases, unrealistic; for example, sinc reconstruction – required by Shannon's theorem in order to recover a signal exactly – is impossible to achieve in practice because LCD displays perform a box-like interpolation of the samples. Moreover, when an image is made for a human to look at, it will necessarily undergo some modifications due to the human optical system and all the neural processes involved in vision. Finally, image processing practitioners noticed that sinc prefiltering – also required by Shannon's theorem – often leads to visually unpleasant images. From these facts, we can deduce that we cannot guarantee, via classic sampling theory, that the signal we see on a display is the best representation of the original image we had in the first place. In this work, we propose a novel family of image prefilters based on modern sampling theory, and on a simple model of how the human visual system perceives an image on a display. The use of modern sampling theory guarantees us that the perceived image, based on this model, is indeed the best representation possible, at virtually no computational overhead. We analyze the spectral properties of these prefilters, showing that they offer the possibility of trading off aliasing and ringing, while guaranteeing that images look sharper than those generated with both classic and state-of-the-art filters. Finally, we compare our prefilters against other solutions in a selection of applications which include Monte Carlo rendering and image downscaling, also giving directions on how to apply them in different contexts.

    Displaying images from a discrete set of samples is certainly one of the most common operations in computer graphics and image processing. Traditional tools for this task are based on Shannon's theorem and are modeled under mathematical conditions that are, in most cases, unrealistic; for example, sinc reconstruction – required by Shannon's theorem to recover a signal exactly – is impossible in practice, since LCD displays perform a reconstruction closer to an interpolation with a box kernel. Moreover, image processing practitioners have noticed that sinc prefiltering – also required by Shannon's theorem – generally leads to visually unpleasant images due to the ringing phenomenon: oscillations near discontinuities in the image. From these facts, we deduce that it is not possible to guarantee, via traditional sampling and reconstruction tools, that the image we observe on a digital display is the best representation of the original image. In this work, we propose a family of prefilters based on generalized sampling theory and on a model of how the optical system of the human eye modifies an image. Proposed by Unser and Aldroubi (1994), generalized sampling theory is more general than Shannon's theorem, and shows how signals can be prefiltered and reconstructed using kernels other than the sinc. We model the eye's optical system as a camera with a finite aperture and a thin lens, which, although simple, is sufficient for our purposes. Besides guaranteeing optimal approximation when the samples are reconstructed by a display and the image is filtered with the model of the human optical system, generalized sampling theory guarantees that these operations are extremely efficient, all linear in the number of input pixels. We also analyze the spectral properties of these filters and of similar techniques in the literature, showing that a good trade-off between aliasing and ringing (the main artifacts in image sampling and reconstruction) can be obtained, while guaranteeing that the final images are sharper than those generated by existing techniques. Finally, we show applications of our technique to image enhancement, adaptation to different viewing distances, image downscaling, and Monte Carlo rendering of synthetic images.
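
    The prefilters described above build on the least-squares (orthogonal-projection) machinery of generalized sampling. As a rough sketch of that machinery only – the notation below is mine, and the paper's actual prefilters additionally fold in the eye-optics model – if the perceived reconstruction kernel is \psi (e.g. the display's box kernel blurred by the eye's point-spread function), the approximation of an image f that minimizes the L^2 error in the space spanned by the integer translates of \psi is obtained by prefiltering f with the dual kernel \tilde\psi before sampling:

    \[ P f(x) = \sum_{k \in \mathbb{Z}} \big\langle f, \tilde\psi(\cdot - k) \big\rangle\, \psi(x - k), \qquad \hat{\tilde\psi}(\omega) = \frac{\hat\psi(\omega)}{\sum_{n \in \mathbb{Z}} \lvert \hat\psi(\omega + 2\pi n) \rvert^{2}}, \]

    where hats denote Fourier transforms and a unit sampling step is assumed.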

    Sampling—50 Years After Shannon

    This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we re-interpret Shannon's sampling procedure as an orthogonal projection onto the subspace of bandlimited functions. We then extend the standard sampling paradigm to the representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler—and possibly more realistic—interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) pre-filters that are not necessarily ideal lowpass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary; e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multi-wavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
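
    As a worked rendering of the re-interpretation mentioned above (standard textbook material, not quoted from the paper): with a unit sampling step, the functions \mathrm{sinc}(x - k), k \in \mathbb{Z}, form an orthonormal basis of the space of bandlimited finite-energy signals, so ideal lowpass prefiltering followed by sampling and sinc reconstruction computes the orthogonal projection

    \[ P f(x) = \sum_{k \in \mathbb{Z}} \langle f, \mathrm{sinc}(\cdot - k) \rangle \, \mathrm{sinc}(x - k), \]

    and the shift-invariant generalization replaces \mathrm{sinc} by another generator \varphi (a B-spline, say) together with its dual analysis function.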

    Radiocarbon Evidence for the Importance of Surface Vegetation on Fermentation and Methanogenesis in Contrasting Types of Boreal Peatlands

    We found a consistent distribution pattern for radiocarbon in dissolved organic carbon (DOC), dissolved inorganic carbon (DIC), and methane replicated across spatial and temporal scales in northern peatlands from Minnesota to Alaska. The 14C content of DOC is relatively modern throughout the peat column, to depths of 3 m. In sedge-dominated peatlands, the 14C contents of the products of respiration, CH4 and DIC, are essentially the same and are similar to that of DOC. In Sphagnum- and woody plant-dominated peatlands with few sedges, however, the respiration products are similar but intermediate between the 14C contents of the solid-phase peat and the DOC. Preliminary data indicate qualitative differences in the pore water DOC, depending on the extent of sedge cover, consistent with the hypothesis that the DOC in sedge-dominated peatlands is more reactive than DOC in peatlands where Sphagnum or other vascular plants dominate. These data are supported by molecular-level analysis of DOC by ultrahigh-resolution mass spectrometry that suggests more dramatic changes with depth in the composition of DOC in the sedge-dominated peatland pore waters relative to changes observed in DOC where Sphagnum dominates. The higher reactivity of DOC from sedge-dominated peatlands may be a function of either different source materials or environmental factors that are related to the abundance of sedges in peatlands.

    Least-Squares Image Resizing Using Finite Differences

    We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach can be realized thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives.
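
    The algorithm above handles analysis B-splines of any degree n with a per-pixel cost that is independent of the scaling factor. The snippet below is only a minimal illustration of that property for the degree-0 (box) case, using a running sum; the function and variable names are mine, not the paper's.

    import numpy as np

    def box_resize_1d(signal, out_len):
        """Least-squares 1-D resize with degree-0 (box) analysis/synthesis splines.

        Each output sample is the mean of the input over its footprint, computed
        from a running (cumulative) sum so the cost per output sample is O(1),
        independent of the scaling factor -- the degree-0 special case of the
        finite-difference idea described in the abstract.
        """
        n = len(signal)
        scale = n / out_len                          # input samples per output sample
        # cumulative integral of the piecewise-constant input: csum[k] = sum(signal[:k])
        csum = np.concatenate(([0.0], np.cumsum(signal)))

        def integral(x):
            # integral of the piecewise-constant signal over [0, x), with 0 <= x <= n
            k = int(np.floor(x))
            frac = x - k
            return csum[k] + (signal[k] * frac if k < n else 0.0)

        out = np.empty(out_len)
        for i in range(out_len):
            a, b = i * scale, (i + 1) * scale
            out[i] = (integral(b) - integral(a)) / scale   # mean over the footprint
        return out

    # toy usage: shrink a ramp from 10 to 4 samples
    print(box_resize_1d(np.arange(10, dtype=float), 4))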

    Façonnement de l'Interférence en vue d'une Optimisation Globale d'un Système Moderne de Communication

    A communication is impulsive whenever the information-bearing signal is burst-like in time. Examples of the impulsive concept are: impulse-radio signals, that is, wireless signals occurring within short intervals of time; optical signals conveyed by photons; speech signals represented by sound pressure variations; pulse-position modulated electrical signals; a sequence of arrival/departure events in a queue; neural spike trains in the brain. Understanding impulsive communications requires identifying what is peculiar to this transmission paradigm, that is, different from traditional continuous communications. In order to address the problem of understanding impulsive vs. non-impulsive communications, the framework of investigation must include the following aspects: the different interference statistics directly following from the impulsive signal structure; the different interaction of the impulsive signal with the physical medium; the actual possibility for impulsive communications of coding information into the time structure, relaxing the implicit assumption made in continuous transmissions that time is a mere support. This thesis partially addresses a few of the above issues, and draws future lines of investigation. In particular, we studied: multiple access channels where each user adopts time-hopping spread-spectrum; systems using a specific prefilter at the transmitter side, namely the transmit matched filter (also known as time reversal), particularly suited for ultrawide bandwidths; the distribution function of interference for impulsive systems in several different settings.

    A communication is impulsive whenever the information-bearing signal is intermittent in time and transmission occurs in bursts. Examples of the impulsive concept are: impulse-radio signals, that is, signals that are very short in time; optical signals used in telecommunication systems; certain acoustic signals and, in particular, the pulses produced by the glottal system; pulse-position modulated electrical signals; a sequence of events in a queue; neural spike trains in the nervous system. This transmission paradigm is different from traditional continuous communications, and understanding impulsive communications is therefore essential. In order to tackle the problem of impulsive communications, the research framework must include the following aspects: the interference statistics that follow directly from the impulsive signal structure; the interaction of the impulsive signal with the physical medium; the possibility for impulsive communications of coding information into the time structure. This thesis addresses some of the above questions and outlines directions for future research. In particular, we studied: a multiple access system in which the users adopt time-hopping spread-spectrum signals to communicate with a common receiver; a system with a prefilter at the transmitter, more precisely a transmit matched filter, also known as time reversal in the ultra-wideband literature; an interference model for impulsive signals.
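
    A minimal numpy sketch of the transmit matched filter (time reversal) idea mentioned above, using a toy channel chosen purely for illustration: prefiltering the transmission with the time-reversed channel makes the effective end-to-end response the channel autocorrelation, which concentrates the received energy into a single peak.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy multipath channel impulse response (illustrative, not from the thesis).
    h = rng.normal(size=32) * np.exp(-np.arange(32) / 8.0)

    # Transmit matched filter ("time reversal"): prefilter with the time-reversed,
    # conjugated channel, normalised to unit energy.
    prefilter = h[::-1].conj() / np.linalg.norm(h)

    # Effective channel = channel autocorrelation: its magnitude peaks at zero lag,
    # i.e. at the centre sample of the full convolution.
    effective = np.convolve(prefilter, h)
    print("peak at sample", int(np.argmax(np.abs(effective))), "of", len(effective) - 1)
    print("peak value", float(effective[np.argmax(np.abs(effective))]))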

    Designing a Colour Filter for Making Cameras more Colorimetric

    If a camera were to capture colour like a human observer, fundamentally, it should sense light the way the human visual system does. It is necessary either to replicate the human visual sensitivity responses or to reproduce the three-number colour representations - e.g. CIE XYZ tristimulus values - to obtain an accurate colour measurement. In practice, however, camera sensors generally deviate from the ideal sensitivities of the human visual system. Consequently, the colour triplets a camera records are device-dependent and generally differ from the standard observer tristimulus values. The colorimetric performance can be improved either by correcting camera responses to the reference ground-truth values using sophisticated mathematical transformations or by using more imaging sensors/filters to capture more information about the incident light. These methods have their disadvantages: the former increases the computational complexity and the latter increases the system complexity and the overall cost. In this thesis, we aim to make the digital camera capture colours more like human visual perception by placing a colour filter in front of the camera so as to alter its spectral sensitivity functions as desired. The central contribution of this study is to carefully design a colour filter for a given camera so that the ‘filter+camera’ system with the new sensitivities becomes almost colorimetric, i.e. it records colour triplets that can be linearly transformed to the ground-truth XYZ tristimulus values. The starting point for this thesis is to design the filter that makes the filtered camera best satisfy the Luther condition, i.e. the new effective camera sensitivity functions after filtering are a linear combination of the colour matching functions of the human visual system. Under this condition, the camera can capture any incoming colour signal accurately in the sense that the captured RGBs are almost a linear transform of the XYZ tristimulus values. Next, we reformulate the problem of finding the optimal filter so that it targets the more generalised Vora-Value goodness measure. The Vora-Value, by definition, measures the similarity between the vector spaces spanned by the spectral sensitivities of a camera and by the XYZ colour matching functions underpinning the human visual system. The Vora-Value has the advantage that the best filter is related to the target human visual space and not to fixed coordinates (e.g. the XYZ and RGB colour matching functions have different coordinate values but span the same vector space). As well as developing a method that finds a filter maximising the Vora-Value (i.e. making the vector spaces most similar), we examine the relationship between the Vora-Value and Luther-condition optimisations. We show that the Luther-condition optimisation also maximises the Vora-Value if we find the filter that makes a linear combination of the camera sensitivities most similar to an orthonormal linear transform of XYZ. This is an important result, as the Luther optimisation is much simpler to implement and faster to execute, so we can use the simpler Luther-condition formulation to maximise the Vora-Value with a more straightforward algorithm. A strength and weakness of the Luther and Vora-Value optimisations is that they assume - as an explicit part of their formulations - that all spectra are equally likely. But this is not the case in real imaging applications.

    So we extend our filter design algorithms in a data-driven manner, optimising for the best colorimetric estimates given a collection of illuminants and surface reflectance data. Our extended method uses quadratic programming, which allows us to add linear inequality constraints to the problem formulation. We show how to find filters that are smooth and have bounded transmittance (e.g. transmit at least 50% of the light) across the spectrum. Constraints like these make the filters more practical and could make them easier to manufacture. We show that we can find smooth and highly transmissive colour filters that, when placed in front of a digital camera, make the camera significantly more colorimetric and hence usable for colour measurement applications that demand high colour accuracy.
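
    To make the Luther-condition idea concrete, here is a minimal alternating least-squares sketch (illustrative code of my own, not the thesis' algorithm; the Vora-Value, data-driven and quadratic-programming formulations described above are more sophisticated). It looks for a per-wavelength filter f such that the filtered sensitivities diag(f) @ Q are as close as possible to some linear transform of the XYZ colour matching functions.

    import numpy as np

    def luther_filter(Q, X, iters=200):
        """Alternating least squares (with simple projections) for
        min over f, M of || diag(f) @ Q - X @ M ||_F^2.

        Q : (n_wavelengths, 3) camera spectral sensitivities
        X : (n_wavelengths, 3) CIE XYZ colour matching functions
        Returns a transmittance-like filter f (peak-normalised) and the 3x3 map M.
        """
        f = np.ones(Q.shape[0])
        for _ in range(iters):
            # best 3x3 mapping M for the current filter (ordinary least squares)
            M, *_ = np.linalg.lstsq(X, f[:, None] * Q, rcond=None)
            target = X @ M
            # best filter value at each wavelength for the current M (closed form)
            f = np.sum(Q * target, axis=1) / (np.sum(Q * Q, axis=1) + 1e-12)
            f = np.clip(f, 0.0, None)            # a physical filter cannot be negative
            f = f / max(f.max(), 1e-12)          # keep it interpretable as a transmittance
        return f, M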

    A design guide for energy-efficient research laboratories


    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically operate on images captured in real-world environments, which means that the images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
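
    As a loose illustration of the push-pull idea only (a rectified excitatory response inhibited by a fraction of the rectified opposite-polarity response; the actual CORF operator is considerably more elaborate), a simplified preprocessing step might look like the sketch below. All names and parameter values are assumptions made for the sketch.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def push_pull_map(image, sigma=1.0, alpha=0.8, pull_scale=2.0):
        """Simplified push-pull-style delineation map (not the full CORF operator)."""
        # 'push': rectified centre-surround (difference-of-Gaussians) response
        dog = gaussian_filter(image, sigma) - gaussian_filter(image, 2.0 * sigma)
        push = np.maximum(dog, 0.0)
        # 'pull': rectified opposite-polarity response at a broader scale; a fraction
        # of it is subtracted, attenuating responses that are excited by noise
        dog_wide = (gaussian_filter(image, pull_scale * sigma)
                    - gaussian_filter(image, 2.0 * pull_scale * sigma))
        pull = np.maximum(-dog_wide, 0.0)
        return np.maximum(push - alpha * pull, 0.0)

    # The CNN (e.g. AlexNet) would then be trained and tested on these maps
    # instead of (or stacked with) the raw images.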

    NASA Tech Briefs, Spring 1984

    Topics include: NASA TU Services: Technology Utilization services that can assist you in learning about and applying NASA technology; New Product Ideas: a summary of selected innovations of value to manufacturers for the development of new products; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Life Sciences; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences.