
    Optimal prefilters for display enhancement

    Creating images from a set of discrete samples is arguably the most common operation in computer graphics and image processing, lying, for example, at the heart of rendering and image downscaling techniques. Traditional tools for this task are based on classic sampling theory and are modeled under mathematical conditions which are, in most cases, unrealistic; for example, sinc reconstruction – required by Shannon's theorem in order to recover a signal exactly – is impossible to achieve in practice because LCD displays perform a box-like interpolation of the samples. Moreover, when an image is made for a human to look at, it necessarily undergoes modifications due to the human optical system and the neural processes involved in vision. Finally, image processing practitioners have noticed that sinc prefiltering – also required by Shannon's theorem – often leads to visually unpleasant images. From these facts, we can deduce that classic sampling theory cannot guarantee that the signal we see on a display is the best representation of the original image we had in the first place. In this work, we propose a novel family of image prefilters based on modern sampling theory and on a simple model of how the human visual system perceives an image on a display. Modern sampling theory guarantees that the perceived image, under this model, is indeed the best possible representation, at virtually no computational overhead. We analyze the spectral properties of these prefilters, showing that they offer the possibility of trading off aliasing and ringing while guaranteeing that images look sharper than those generated with both classic and state-of-the-art filters. Finally, we compare them against other solutions in a selection of applications, including Monte Carlo rendering and image downscaling, and give directions on how to apply them in different contexts.

    Displaying images from a discrete set of samples is certainly one of the most common operations in computer graphics and image processing. Traditional tools for this task are based on Shannon's theorem and are modeled under mathematical conditions that are, in most cases, unrealistic; for example, sinc reconstruction – required by Shannon's theorem to recover a signal exactly – is impossible in practice, since LCD displays perform a reconstruction closer to interpolation with a box kernel. Moreover, image processing practitioners have noticed that sinc prefiltering – also required by Shannon's theorem – generally leads to visually unpleasant images due to the phenomenon of ringing: oscillations near regions of discontinuity in the image. From these facts, we deduce that it is not possible to guarantee, using traditional sampling and reconstruction tools, that the image we observe on a digital display is the best representation of the original image. In this work, we propose a family of prefilters based on generalized sampling theory and on a model of how the optical system of the human eye modifies an image. Proposed by Unser and Aldroubi (1994), generalized sampling theory is more general than Shannon's theorem and shows how signals can be prefiltered and reconstructed using kernels other than the sinc. We model the optical system of the eye as a camera with a finite aperture and a thin lens, which, while simple, is sufficient for our purposes. Besides guaranteeing an optimal approximation when the samples are reconstructed by a display and the image is filtered by the model of the human optical system, generalized sampling theory guarantees that these operations are extremely efficient, all linear in the number of input pixels. We also analyze the spectral properties of these filters and of similar techniques from the literature, showing that it is possible to obtain a good trade-off between aliasing and ringing (the main artifacts in image sampling and reconstruction) while guaranteeing that the final images are sharper than those generated by existing techniques. Finally, we show applications of our technique to image enhancement, adaptation to different viewing distances, image downscaling, and Monte Carlo rendering of synthetic images.
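
    To make the generalized-sampling idea above concrete, here is a minimal sketch (mine, not the paper's actual prefilter): for a display that reconstructs with a box kernel, degree-0 B-splines form an orthonormal basis, so the least-squares-optimal prefilter reduces to averaging over each pixel cell. The paper's prefilters additionally fold in the model of the eye's optics, which is omitted here.

        import numpy as np

        def box_optimal_downscale(img, factor):
            """Downscale a grayscale image by an integer factor via cell averaging.

            For box (LCD-like) reconstruction, cell averaging is the L2-optimal
            prefilter, since the box basis is orthonormal.
            """
            h, w = img.shape
            h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
            cells = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
            return cells.mean(axis=(1, 3))               # orthogonal projection onto the box space

        img = np.random.rand(256, 256)                   # stand-in for a real image
        small = box_optimal_downscale(img, 4)            # 64x64 result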

    A flexible heterogeneous video processor system for television applications

    A new video processing architecture for high-end TV applications is presented, featuring a flexible heterogeneous multi-processor architecture that executes video tasks in parallel and independently. The signal flow graph and the processors are programmable, enabling optimal picture quality for different TV display modes. The concept is verified by an experimental chip design. The architecture allows several video streams to be processed and displayed in parallel and in a programmable way, each with an individual signal quality.
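
    Purely as a software analogy (not the chip's actual programming model), the sketch below shows what a programmable signal flow graph amounts to: per-stream pipelines of video tasks that can be reconfigured for different display modes and run independently of one another. All task and stream names are hypothetical.

        import numpy as np

        # Placeholder video tasks; real tasks would be filters, scalers, etc.
        def denoise(frame):
            return frame                                 # identity stand-in

        def downscale(frame):
            return frame[::2, ::2]                       # naive 2x decimation

        # The "signal flow graph": one reconfigurable pipeline per video stream.
        flow_graphs = {
            "main_picture": [denoise],
            "picture_in_picture": [denoise, downscale],
        }

        def process_frame(stream, frame):
            for task in flow_graphs[stream]:             # tasks run per stream, independently
                frame = task(frame)
            return frame

        out = process_frame("picture_in_picture", np.random.rand(480, 640))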

    Small-kernel image restoration

    The goal of image restoration is to remove degradations that are introduced during image acquisition and display. Although image restoration is a difficult task that requires considerable computation, in many applications the processing must be performed significantly faster than is possible with traditional algorithms implemented on conventional serial architectures. As demonstrated in this dissertation, digital image restoration can be efficiently implemented by convolving an image with a small kernel. Small-kernel convolution is a local operation that requires relatively little processing and can be easily implemented in parallel. A small-kernel technique must compromise effectiveness for efficiency, but if the kernel values are well chosen, small-kernel restoration can be very effective.

    This dissertation develops a small-kernel image restoration algorithm that minimizes expected mean-square restoration error. The derivation of the mean-square-optimal small kernel parallels that of the Wiener filter, but accounts for explicit spatial constraints on the kernel. This development is thorough and rigorous, but conceptually straightforward: the mean-square-optimal kernel is conditioned only on a comprehensive end-to-end model of the imaging process and on the spatial constraints. The end-to-end digital imaging system model accounts for the scene, acquisition blur, sampling, noise, and display reconstruction. The determination of kernel values is directly conditioned on the specific size and shape of the kernel. Experiments presented in this dissertation demonstrate that small-kernel image restoration requires significantly less computation than a state-of-the-art implementation of the Wiener filter, yet the optimal small kernel yields comparable restored images.

    The mean-square-optimal small-kernel algorithm, like most other image restoration algorithms, requires a characterization of the image acquisition device (i.e., an estimate of the device's point spread function or optical transfer function). This dissertation describes an original method for accurately determining this characterization. The method extends the traditional knife-edge technique to deal explicitly with the fundamental sampled-system considerations of aliasing and sample/scene phase. Results for both simulated and real imaging systems demonstrate the accuracy of the method.
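
    As a rough illustration of the constrained derivation described above, the following 1D sketch solves the normal equations for a kernel restricted to a 5-tap support; the structure parallels the Wiener filter but confines the solution to the kernel's support. The exponential scene autocorrelation, Gaussian blur, and white-noise model are my assumptions, not the dissertation's actual system model.

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.signal import fftconvolve

        # Assumed models: exponential scene autocorrelation r_ff[m] = rho^|m|,
        # Gaussian acquisition blur, white noise of variance sigma2.
        rho, M = 0.95, 64
        lags = np.arange(-M, M + 1)
        r_ff = rho ** np.abs(lags)

        psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
        psf /= psf.sum()                                 # normalized Gaussian blur
        sigma2 = 0.01                                    # white-noise variance

        # Observation autocorrelation: (autocorrelation of psf) convolved with
        # r_ff, plus sigma2 at lag 0 (index M).
        r_gg = fftconvolve(r_ff, np.correlate(psf, psf, mode="full"), mode="same")
        r_gg[M] += sigma2
        # Scene/observation cross-correlation: r_ff convolved with the flipped psf.
        r_fg = fftconvolve(r_ff, psf[::-1], mode="same")

        # Normal equations restricted to a 5-tap support:
        #   sum_t k[t] r_gg[s - t] = r_fg[s]  for s in the support.
        K = 5
        half = K // 2
        A = toeplitz(r_gg[M:M + K])                      # r_gg is symmetric, so Toeplitz suffices
        b = r_fg[M - half:M + half + 1]
        k = np.linalg.solve(A, b)
        print("mean-square-optimal 5-tap kernel:", np.round(k, 4))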

    Multiresolution models in image restoration and reconstruction with medical and other applications


    Festschrift zum 60. Geburtstag von Wolfgang Strasser

    Get PDF
    This Festschrift is dedicated to Prof. Dr.-Ing. Dr.-Ing. E.h. Wolfgang Straßer on the occasion of his 60th birthday. A number of scientists in the field of computer graphics, all of whom come from the "Tübingen school", have contributed essays to this volume, in part together with their students. The contributions range from object reconstruction from image features, through physical simulation, to rendering and visualization, and from theoretically oriented essays to practical present and future applications. This thematic variety vividly illustrates the breadth and diversity of the science of computer graphics as practiced at the Straßer chair in Tübingen. The fact alone that ten computer graphics professors at universities and universities of applied sciences come from Tübingen shows Professor Straßer's formative influence on the computer graphics landscape in Germany. That several physicists and mathematicians among them were won over to this field in Tübingen is above all thanks to his commitment and his charisma. Besides the high esteem for Professor Straßer's scientific achievements, his personality certainly played a decisive part in the authors' spontaneous willingness to contribute to this Festschrift. With exceptionally great personal dedication he supports students, doctoral candidates, and habilitation candidates, arranges research contacts through his rich international connections, and thus creates outstanding conditions for independent scientific work. With their contributions, the authors wish to give Wolfgang Straßer pleasure, and they combine their thanks with the wish to continue to share in his professionally and personally rich and enriching work.

    Modeling the television process

    Also issued as: Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986. Includes bibliographical references. Supported in part by members of the Center for Advanced Television Studies. By Michael Anthony Isnardi.

    Efficient algorithms for arbitrary sample rate conversion with application to wave field synthesis

    Arbitrary sample rate conversion (ASRC) is used in many fields of digital signal processing to alter the sampling rate of discrete-time signals by arbitrary, potentially time-varying ratios. This thesis investigates efficient algorithms for ASRC and proposes several improvements. First, closed-form descriptions for the modified Farrow structure and for Lagrange interpolators are derived that are directly applicable to algorithm design and analysis. Second, efficient implementation structures for ASRC algorithms are investigated. Third, this thesis considers coefficient design methods that are optimal for a selectable error norm and optional design constraints. Finally, the performance of different algorithms is compared across several performance metrics. This enables the selection of ASRC algorithms that meet the requirements of an application with minimal complexity. Wave field synthesis (WFS), a high-quality spatial sound reproduction technique, is the main application considered in this work. For WFS, sophisticated ASRC algorithms improve the quality of moving sound sources. However, the improvements proposed in this thesis are not limited to WFS, but are applicable to general-purpose ASRC problems.

    Arbitrary sample rate conversion (ASRC) methods enable changing the sampling rate of discrete-time signals by arbitrary, time-varying ratios, and are used in many applications of digital signal processing. This work investigates the use of ASRC methods in wave field synthesis (WFS), a technique for high-quality, spatially accurate audio reproduction. ASRC algorithms can markedly improve the reproduction quality of moving sound sources in WFS. However, the large number of simultaneous ASRC operations required in a WFS reproduction system usually rules out the direct application of high-quality algorithms.

    Several contributions are presented to solve this problem. The complexity of WFS signal processing is reduced significantly by a suitable partitioning of the ASRC algorithms that enables efficient reuse of intermediate results. This permits high-quality sample rate conversion at a complexity comparable to that of simple conventional ASRC algorithms. This partitioning scheme, however, also places additional demands on ASRC algorithms and requires trade-offs between performance measures such as algorithmic complexity, memory footprint, and memory bandwidth.

    Several measures are proposed to improve ASRC algorithms and implementation structures. First, closed-form analytic descriptions of the continuous frequency response are introduced for several classes of ASRC structures. In particular, compact representations are derived for Lagrange interpolators, the modified Farrow structure, and combinations of oversampling with continuous-time resampling functions; these both provide insight into the behavior of these filters and allow direct use in design methods. A second focus is coefficient design for these structures, in particular optimal design with respect to a chosen error norm and optional design conditions and constraints. In contrast to previous approaches, such optimal design methods are also presented for multistage ASRC structures that combine integer-ratio oversampling with continuous-time resampling functions. For this class of structures, a set of matched resampling functions is proposed which, combined with the developed optimal design methods, enables significant quality improvements.

    The variety of ASRC structures and their design parameters is a main difficulty in selecting a method suited to a given application. Evaluation and performance comparisons therefore form a third focus. The influence of different design parameters on the achievable quality of ASRC algorithms is investigated, and the required effort is characterized in terms of several performance metrics as a function of design quality. In this way, the results of this work are not limited to WFS but are usable in a wide range of arbitrary sample rate conversion applications.
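
    As an illustration of the kind of structure discussed above, here is a minimal sketch (my own convention, not the thesis's implementation) of ASRC with a cubic Lagrange interpolator evaluated in Farrow form: four fixed FIR branches combine the input samples around each output position, and Horner's rule evaluates the interpolation polynomial at the fractional delay.

        import numpy as np

        def farrow_lagrange_asrc(x, ratio, n_out):
            """Resample x at positions t_n = n * ratio (ratio = f_in / f_out)."""
            xp = np.concatenate(([x[0]], x, [x[-1], x[-1]]))      # pad for the 4-tap window
            y = np.empty(n_out)
            for n in range(n_out):
                t = n * ratio
                i = int(t)                     # integer part of the read position
                mu = t - i                     # fractional delay in [0, 1)
                a, b, c, d = xp[i], xp[i + 1], xp[i + 2], xp[i + 3]   # x[i-1..i+2]
                # Farrow branches: fixed combinations of the four samples.
                c0 = b
                c1 = -a / 3 - b / 2 + c - d / 6
                c2 = a / 2 - b + c / 2
                c3 = -a / 6 + b / 2 - c / 2 + d / 6
                y[n] = ((c3 * mu + c2) * mu + c1) * mu + c0    # Horner evaluation at mu
            return y

        # Example: convert a 1 kHz tone from 44.1 kHz to 48 kHz.
        fs_in, fs_out = 44100.0, 48000.0
        x = np.sin(2 * np.pi * 1000.0 * np.arange(2048) / fs_in)
        y = farrow_lagrange_asrc(x, fs_in / fs_out, int(2048 * fs_out / fs_in) - 3)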

    Improving Filtering for Computer Graphics

    When drawing images onto a computer screen, the information in the scene is typically more detailed than can be displayed. Most objects, however, will not be close to the camera, so details have to be filtered out, or anti-aliased, when the objects are drawn on the screen. I describe new methods for filtering images and shapes with high fidelity while using computational resources as efficiently as possible. Vector graphics are everywhere, from 3D polygons to 2D text and maps for navigation software. Because of these numerous applications, having a fast, high-quality rasterizer is important. I developed a method for analytically rasterizing shapes using wavelets. This approach allows me to produce accurate 2D rasterizations of images and 3D voxelizations of objects, the latter being the first step in 3D printing. I later improved my method to handle more filters. The resulting algorithm creates higher-quality images than commercial software such as Adobe Acrobat and is several times faster than the most highly optimized commercial products. The quality of texture filtering also has a dramatic impact on the quality of a rendered image. Textures are images applied to 3D surfaces, which typically cannot be mapped to the 2D space of an image without introducing distortions. For situations in which it is impossible to change the rendering pipeline, I developed a method for precomputing image filters over 3D surfaces. When the pipeline can be changed, I show that it is possible to improve the quality of texture sampling significantly in real-time rendering while using the same memory bandwidth as traditional methods.
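
    A toy 1D example (mine, not the dissertation's wavelet rasterizer) of the core idea behind analytic filtering: computing the exact box-filter coverage of each pixel by an edge removes the jaggies that one-sample-per-pixel rasterization produces.

        import numpy as np

        edge = 10.37                       # hypothetical edge position, in pixel units
        xs = np.arange(20)                 # left endpoints of 20 unit-width pixels

        # Point sampling: one sample at each pixel center gives a hard, aliased step.
        point_sampled = (xs + 0.5 < edge).astype(float)

        # Analytic box filtering: exact coverage of the pixel [x, x+1] by the
        # half-plane x < edge, i.e. the step function convolved with a box kernel.
        analytic = np.clip(edge - xs, 0.0, 1.0)

        print(point_sampled[9:12])         # [1. 0. 0.]    -> jaggy transition
        print(analytic[9:12])              # [1. 0.37 0.]  -> smooth partial coverage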