7 research outputs found

    Morphological Antialiasing and Topological Reconstruction

    Morphological antialiasing is a post-processing approach that does not require the computation of additional samples. The original algorithm acts as a non-linear filter and is ill-suited to massively parallel hardware architectures. We redesigned the initial method using multiple passes, with, in particular, a new approach to line length computation. We also introduce the notion of topological reconstruction into the method to correct the weaknesses of post-processing antialiasing techniques. Our method runs as a pure post-process filter, providing full-image antialiasing at high framerates and competing with traditional MSAA.
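
    As a rough sketch of the post-process idea (not the authors' multi-pass GPU pipeline), the toy Python filter below detects luminance discontinuities and blends across them; the 0.1 threshold and the box blur are arbitrary stand-ins for the paper's edge-pattern classification and line-length-based coverage weights.

        import numpy as np

        def postprocess_aa(img, threshold=0.1):
            """Toy post-process antialiasing: find luminance edges, then
            blend each edge pixel with its neighbors. Illustrative only;
            morphological AA classifies edge patterns and computes
            per-pixel coverage from line lengths instead of blurring."""
            lum = img @ np.array([0.299, 0.587, 0.114])    # per-pixel luminance
            # Mark discontinuities with the right and bottom neighbors.
            edge_h = np.abs(np.diff(lum, axis=1)) > threshold
            edge_v = np.abs(np.diff(lum, axis=0)) > threshold
            mask = np.zeros(lum.shape, dtype=bool)
            mask[:, :-1] |= edge_h; mask[:, 1:] |= edge_h
            mask[:-1, :] |= edge_v; mask[1:, :] |= edge_v
            # 3x3 box blur, applied only where an edge was detected.
            padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
            blurred = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                          for dy in range(3) for dx in range(3)) / 9.0
            out = img.copy()
            out[mask] = blurred[mask]
            return out

    Because the whole pass is data-parallel per pixel, even this crude version maps naturally onto GPU hardware, which is the property the multi-pass redesign is after.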

    New Directions in Subband Coding

    Two very different subband coders are described. The first is a modified dynamic bit-allocation subband coder (D-SBC) designed for variable-rate coding situations and easily adaptable to noisy channel environments. It can operate at rates as low as 12 kb/s and still give good quality speech. The second coder is a 16-kb/s waveform coder, based on a combination of subband coding and vector quantization (VQ-SBC). The key feature of this coder is its short coding delay, which makes it suitable for real-time communication networks. The speech quality of both coders has been enhanced by adaptive postfiltering. The coders have been implemented on a single AT&T DSP32 signal processor.
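
    For illustration only, here is a minimal two-band Python sketch of the subband-coding idea: a Haar QMF split plus an energy-driven bit allocation. The filter bank, allocation rule, and quantizer are simplified stand-ins, not the D-SBC or VQ-SBC designs described above.

        import numpy as np

        def haar_analysis(x):
            """Split a signal into low/high bands with the 2-tap Haar QMF pair."""
            x = x[:len(x) // 2 * 2]                    # force even length
            low  = (x[0::2] + x[1::2]) / np.sqrt(2)
            high = (x[0::2] - x[1::2]) / np.sqrt(2)
            return low, high

        def haar_synthesis(low, high):
            """Perfect-reconstruction inverse of haar_analysis."""
            x = np.empty(2 * len(low))
            x[0::2] = (low + high) / np.sqrt(2)
            x[1::2] = (low - high) / np.sqrt(2)
            return x

        def quantize(band, bits):
            """Uniform quantizer with 2**bits levels over the band's range."""
            if bits <= 0:
                return np.zeros_like(band)
            peak = np.max(np.abs(band)) + 1e-12
            step = 2 * peak / 2**bits
            return np.round(band / step) * step

        def dynamic_sbc(x, total_bits=6):
            """Toy dynamic bit allocation: give each band bits in rough
            proportion to its log-energy, a crude stand-in for the
            per-frame allocation rule of a real D-SBC."""
            low, high = haar_analysis(x)
            e = np.array([np.mean(low**2), np.mean(high**2)]) + 1e-12
            share = np.log2(e) - np.log2(e).min() + 1
            bits = np.round(total_bits * share / share.sum()).astype(int)
            return haar_synthesis(quantize(low, bits[0]), quantize(high, bits[1]))

    On speech-like signals most of the energy, and therefore most of the bits, lands in the low band, which is the intuition behind allocating bits dynamically rather than fixing them per band.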

    A journey in a procedural volume: Optimization and filtering of Perlin noise

    Perlin noise is the most widely used tool in procedural texture synthesis. It is a simple and fast method to enhance the amount of detail or to render natural materials without using storage resources. However, the technique is very sensitive to aliasing artifacts, especially when composed with shape and color functions. Moreover, it is computationally intensive and can become slow, especially when generating procedural density volumes in real time. This study analyzes the properties of Perlin noise in order to control the appearance of artifacts and optimize the computational cost. We present a method for computing maximum and minimum frequency thresholds per noise component, propose an approach for handling non-linear transforms of the noise, and describe an optimization method for volume generation.
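
    To make the frequency-threshold idea concrete, here is a hedged Python sketch using 1-D value noise as a stand-in for Perlin gradient noise: octaves whose base frequency exceeds the Nyquist limit of the sampling footprint are simply dropped. The noise function and all parameters are invented for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        _lattice = rng.uniform(-1, 1, 256)            # random lattice values

        def value_noise(x):
            """1-D value noise: smooth interpolation of random lattice
            values. A stand-in for Perlin gradient noise."""
            i = np.floor(x).astype(int)
            f = x - i
            t = f * f * (3 - 2 * f)                   # smoothstep fade
            a, b = _lattice[i % 256], _lattice[(i + 1) % 256]
            return a + t * (b - a)

        def band_limited_fbm(x, pixel_size, octaves=8, lacunarity=2.0, gain=0.5):
            """Sum noise octaves, but drop any octave whose base frequency
            exceeds the Nyquist limit of the sampling footprint -- the
            same idea as a per-component maximum frequency threshold."""
            f_max = 0.5 / pixel_size                  # Nyquist frequency
            total, freq, amp = np.zeros_like(x), 1.0, 1.0
            for _ in range(octaves):
                if freq > f_max:                      # this octave would alias
                    break
                total += amp * value_noise(x * freq)
                freq *= lacunarity
                amp *= gain
            return total

        # e.g. band_limited_fbm(np.linspace(0, 4, 256), pixel_size=4 / 256)

    Dropping octaves above the threshold both suppresses aliasing and cuts the cost of the deepest, most expensive octaves, which is why the frequency analysis doubles as an optimization.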

    Improving Filtering for Computer Graphics

    When drawing images onto a computer screen, the information in the scene is typically more detailed than can be displayed. Most objects, however, will not be close to the camera, so details have to be filtered out, or anti-aliased, when the objects are drawn on the screen. I describe new methods for filtering images and shapes with high fidelity while using computational resources as efficiently as possible. Vector graphics are everywhere, from drawing 3D polygons to 2D text and maps for navigation software. Because of its numerous applications, having a fast, high-quality rasterizer is important. I developed a method for analytically rasterizing shapes using wavelets. This approach allows me to produce accurate 2D rasterizations of images and 3D voxelizations of objects, which is the first step in 3D printing. I later improved my method to handle more filters. The resulting algorithm creates higher-quality images than commercial software such as Adobe Acrobat and is several times faster than the most highly optimized commercial products. The quality of texture filtering also has a dramatic impact on the quality of a rendered image. Textures are images that are applied to 3D surfaces, which typically cannot be mapped to the 2D space of an image without introducing distortions. For situations in which it is impossible to change the rendering pipeline, I developed a method for precomputing image filters over 3D surfaces. If I can also change the pipeline, I show that it is possible to improve the quality of texture sampling significantly in real-time rendering while using the same memory bandwidth as traditional methods.
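
    As one concrete point of reference for the texture-filtering discussion (this is not the author's wavelet rasterizer or precomputed surface filters), the Python sketch below builds a box-filtered mip chain and blends levels by pixel footprint, the standard trilinear scheme that such work aims to improve on.

        import numpy as np

        def build_mipmaps(tex):
            """Box-filtered mip chain for a square, power-of-two texture."""
            mips = [tex.astype(float)]
            while mips[-1].shape[0] > 1:
                t = mips[-1]
                mips.append((t[0::2, 0::2] + t[1::2, 0::2]
                             + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0)
            return mips

        def trilinear_sample(mips, u, v, footprint):
            """Sample at (u, v) in [0,1)^2: pick the mip level whose texel
            size matches the pixel footprint (in UV units), then blend the
            two adjacent levels. Nearest-neighbor fetch within a level
            keeps the sketch short; real filters also interpolate
            bilinearly inside each level."""
            base = mips[0].shape[0]
            lod = np.clip(np.log2(max(footprint * base, 1.0)), 0, len(mips) - 1)
            lo, frac = int(np.floor(lod)), lod - np.floor(lod)
            hi = min(lo + 1, len(mips) - 1)

            def fetch(level):
                n = mips[level].shape[0]
                return mips[level][min(int(v * n), n - 1), min(int(u * n), n - 1)]

            return (1 - frac) * fetch(lo) + frac * fetch(hi)

    Because the footprint is generally anisotropic on a 3D surface, this isotropic level selection over-blurs in one direction, which is exactly the kind of distortion that better texture sampling schemes target.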

    Novel Digital Alias-Free Signal Processing Approaches to FIR Filtering Estimation

    This thesis develops a new methodology for filtering continuous-time bandlimited signals and piecewise-continuous signals from their discrete-time samples. Unlike existing state-of-the-art filters, my filters are not adversely affected by aliasing, allowing designers to flexibly select the sampling rates of the processed signal to reach the required accuracy of signal filtering, rather than meeting the stiff and often demanding constraints imposed by the classical theory of digital signal processing (DSP). The impact of this thesis is cost reduction of alias-free sampling, filtering and other digital processing blocks, particularly when the processed signals have sparse and unknown spectral support. Novel approaches are proposed which can mitigate the negative effects of aliasing thanks to the use of nonuniform random/pseudorandom sampling and processing algorithms. As such, the proposed approaches belong to the family of digital alias-free signal processing (DASP). Three main approaches are considered: total random (ToRa), stratified (StSa) and antithetical stratified (AnSt) random sampling techniques. First, I introduce a finite impulse response (FIR) filter estimator for each of the three considered techniques. In addition, a generalised estimator that encompasses the three filter estimators is proposed. Then, the statistical properties of all estimators are investigated to assess their quality. Properties such as expected value, bias, variance, convergence rate, and consistency are all inspected, and a closed-form mathematical expression is devised for the variance of each estimator. Furthermore, the quality of the proposed estimators is examined in two main cases related to the smoothness of the filter convolution's integrand function, g(t,τ) := x(τ)h(t−τ), and its first two derivatives. The first main case is continuous and differentiable g(t,τ), g′(t,τ), and g″(t,τ). The second main case covers all possible instances where some or all of these functions are piecewise-continuous with a finite number of bounded discontinuities. The results prove that all considered filter estimators are unbiased and consistent; hence the variances of the estimators converge to zero beyond a certain number of sample points. However, the convergence rate depends on the selected estimator and on which smoothness case is considered. In the first case (continuous g(t,τ) and its derivatives), the ToRa, StSa and AnSt filter estimators converge uniformly at rates of N^-1, N^-3, and N^-5 respectively, where 2N is the total number of sample points. More interestingly, in the second main case, the convergence rates of the StSa and AnSt estimators are maintained even if there are discontinuities in the first-order derivative (FOD) of g(t,τ) with respect to τ (for the StSa estimator) or in the second-order derivative (SOD) with respect to τ (for AnSt). These rates drop to N^-2 and N^-4 (for StSa and AnSt, respectively) if the zero-order derivative (ZOD) (for StSa) or the FOD (for AnSt) is piecewise-continuous. Finally, if the ZOD of g(t,τ) is piecewise-continuous, the uniform convergence rate of the AnSt estimator further drops to N^-2.
    For practical reasons, I also introduce the use of the three estimators in a special situation where the input signal is pseudorandomly sampled from an otherwise uniform and dense grid. An FIR filter model with an oversampled finite-duration impulse response, aligned in time with the grid, is proposed; it is meant to be stored in a lookup table in the implemented filter's memory to save processing time. A synchronised convolution sum is then computed to estimate the filter output. Finally, a new unequally spaced Lagrange interpolation-based rule is proposed. This composite 3-nonuniform-sample (C3NS) rule is employed to estimate the area under the curve (AUC) of an integrand function, rather than the simple rectangular rule. I then compare the convergence rates of estimators based on the two interpolation rules. The proposed C3NS estimator outperforms the rectangular-rule estimators at the expense of higher computational complexity; this extra cost is justifiable for applications where more accurate estimation is required.
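
    A minimal numeric sketch of the three sampling schemes, with invented toy choices of x, h, t and T: each scheme estimates the convolution integral y(t) = ∫ g(t,τ) dτ from 2N sample points, and because this g is smooth (the first main case), the observed rms errors should shrink roughly at the rates quoted above.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy smooth integrand g(tau) = x(tau) * h(t - tau) at a fixed t.
        T, t = 1.0, 0.7
        x = lambda tau: np.sin(2 * np.pi * 3 * tau)
        h = lambda s: np.exp(-4 * s**2)               # smooth kernel: g is C-infinity
        g = lambda tau: x(tau) * h(t - tau)

        grid = np.linspace(0, T, 200001)
        y_true = np.mean(g((grid[:-1] + grid[1:]) / 2)) * T   # fine midpoint reference

        def tora(N):   # total random: 2N i.i.d. uniform sample points
            return T * np.mean(g(rng.uniform(0, T, 2 * N)))

        def stsa(N):   # stratified: one uniform point in each of 2N strata
            edges = np.linspace(0, T, 2 * N + 1)
            return (T / (2 * N)) * np.sum(g(rng.uniform(edges[:-1], edges[1:])))

        def anst(N):   # antithetical stratified: a mirrored pair per stratum
            edges = np.linspace(0, T, N + 1)
            tau = rng.uniform(edges[:-1], edges[1:])
            mirrored = edges[:-1] + edges[1:] - tau   # reflect about stratum center
            return (T / (2 * N)) * np.sum(g(tau) + g(mirrored))

        for N in (50, 100, 200):
            line = f"N={N:4d}"
            for name, est in (("ToRa", tora), ("StSa", stsa), ("AnSt", anst)):
                rmse = np.sqrt(np.mean([(est(N) - y_true)**2 for _ in range(300)]))
                line += f"   {name} rms err {rmse:.1e}"
            print(line)

    Doubling N should cut the ToRa error by roughly sqrt(2), the StSa error by roughly 2*sqrt(2), and the AnSt error by roughly 4*sqrt(2), matching variance rates of N^-1, N^-3 and N^-5.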

    Compression and Subjective Quality Assessment of 3D Video

    In recent years, three-dimensional television (3D TV) has been broadly considered as the successor to existing traditional two-dimensional television (2D TV) sets. With its capability of offering a dynamic and immersive experience, 3D video (3DV) is expected to expand conventional video in several applications in the near future. However, 3D content requires more than a single view to deliver the depth sensation to the viewers and this, inevitably, increases the bitrate compared to the corresponding 2D content. This need drives the research trend in the video compression field towards more advanced and more efficient algorithms. Currently, Advanced Video Coding (H.264/AVC) is the state-of-the-art video coding standard, developed by the Joint Video Team of ISO/IEC MPEG and ITU-T VCEG. This codec has been widely adopted in various applications and products such as TV broadcasting, video conferencing, mobile TV, and Blu-ray Disc. One important extension of H.264/AVC, namely Multiview Video Coding (MVC), addressed the compression of multiple views by taking into consideration the inter-view dependency between different views of the same scene. H.264/AVC with its MVC extension (H.264/MVC) can be used for encoding either conventional stereoscopic video, including only two views, or multiview video, including more than two views. In spite of the high performance of H.264/MVC, a typical multiview video sequence requires a huge amount of storage space, which is proportional to the number of offered views. The available views are still limited, and research has been devoted to synthesizing an arbitrary number of views using multiview video plus depth maps (MVD). This process is mandatory for auto-stereoscopic displays (ASDs), where many views are required at the viewer side and there is no way to transmit such a relatively huge number of views with currently available broadcasting technology. Therefore, to satisfy the growing demand for 3D-related applications, it is mandatory to further decrease the bitrate by introducing new and more efficient algorithms for compressing multiview video and depth maps. This thesis tackles 3D content compression for different formats, i.e., stereoscopic video and depth-enhanced multiview video. The stereoscopic video compression algorithms introduced in this thesis mostly focus on proposing different types of asymmetry between the left and right views. This means reducing the quality of one view compared to the other view, aiming to achieve a better subjective quality than the symmetric case (the reference) under the same bitrate constraint. The proposed algorithms for optimizing depth-enhanced multiview video compression include both texture compression schemes and depth map coding tools. Some of the coding schemes proposed for this format include asymmetric quality between the views. Knowing that objective metrics are not able to accurately estimate the subjective quality of stereoscopic content, it is suggested to perform subjective quality assessment to evaluate different codecs. Moreover, when the concept of asymmetry is introduced, the Human Visual System (HVS) performs a fusion process which is not completely understood. Therefore, another important aspect of this thesis is conducting several subjective tests and reporting the subjective ratings to evaluate the perceived quality of the proposed coded content against the references.
    Statistical analysis is carried out in the thesis to assess the validity of the subjective ratings and to determine the best-performing test cases.
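
    As a toy numeric illustration of the quality-asymmetry idea (the thesis itself evaluates real H.264/MVC-based codecs with subjective tests), the Python sketch below quantizes one view more coarsely than the other; the signals, the i.i.d. entropy proxy for rate, and the hand-tuned step sizes are all invented, chosen so the two configurations land at roughly matched total rates.

        import numpy as np

        rng = np.random.default_rng(2)
        left = rng.normal(size=(64, 64))                   # stand-in for one view
        right = left + 0.05 * rng.normal(size=(64, 64))    # views are highly correlated

        def code(view, step):
            """Uniform quantization plus an entropy proxy for the bitrate."""
            q = np.round(view / step)
            _, counts = np.unique(q, return_counts=True)
            p = counts / counts.sum()
            rate = -(p * np.log2(p)).sum() * q.size        # bits, i.i.d. entropy proxy
            mse = np.mean((q * step - view) ** 2)
            return rate, mse

        # Symmetric: equal step sizes. Asymmetric: right view coded coarsely.
        for name, steps in (("symmetric", (0.25, 0.25)), ("asymmetric", (0.15, 0.60))):
            (rl, dl), (rr, dr) = code(left, steps[0]), code(right, steps[1])
            print(f"{name:10s} total rate {rl + rr:8.0f} bits"
                  f"   MSE left {dl:.4f}  right {dr:.4f}")

    At a similar total rate the asymmetric configuration makes one view markedly better and the other markedly worse; the binocular-fusion argument is that perceived quality tends to follow the better view, which is what the subjective tests in the thesis are designed to measure.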

    Randomized sampling and multiplier-less filtering

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 151-153).
    This thesis considers the benefits of randomization in two fundamental signal processing techniques: sampling and filtering. The first part develops randomized non-uniform sampling as a method to mitigate the effects of aliasing. Randomization of the sampling times is shown to convert the aliasing error due to uniform under-sampling into uncorrelated, shapeable noise. In certain applications, especially perceptual ones, this form of error may be preferable. Two sampling structures are developed in this thesis. In the first, denoted simple randomized sampling, non-white sampling processes can be designed to frequency-shape the error spectrum so that its power is minimized in the band of interest. In the second model, denoted filtered randomized sampling, a pre-filter, post-filter, and the sampling process can be designed to further frequency-shape the error and improve performance. The thesis develops design techniques using parametric binary process models to optimize the performance of randomized non-uniform sampling. In addition, a detailed second-order error analysis, including performance bounds and results from simulation, is presented for each type of sampling. The second part of this thesis develops randomization as a method to improve the performance of multiplier-less FIR filters. Static multiplier-less filters, even when carefully designed, exhibit frequency distortion compared to the desired continuous-valued filter. Replacing each static tap with a binary random process is shown to mitigate this distortion, converting the error into uncorrelated, shapeable noise. As with randomized sampling, in certain applications this form of error may be preferable. This thesis presents an FIR Direct Form I randomized multiplier-less filter structure denoted binary randomized filtering (BRF). In its most general form, BRF incorporates over-sampling combined with a tapped delay line that changes in time according to a binary vector process. The time and tap correlation of the binary vector process can be designed to improve the error performance. The thesis develops design techniques using parametric binary vector process models to do so. In addition, a detailed second-order error analysis, including performance bounds, error scaling with over-sampling, and results from simulation, is presented for the various forms of BRF.
    by Sourav R. Dey. Ph.D.
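
    A small Python experiment in the spirit of the first part, with an invented 9 Hz tone and an 8 Hz nominal sampling rate: on a uniform grid the tone aliases into a coherent 1 Hz component indistinguishable from a real tone, while randomizing the sample times spreads that alias into a broadband noise floor.

        import numpy as np

        rng = np.random.default_rng(3)

        # A 9 Hz tone sampled at a nominal 8 Hz rate: well above Nyquist,
        # so a uniform grid aliases it onto 1 Hz.
        f_sig, f_s, N = 9.0, 8.0, 512
        x = lambda t: np.cos(2 * np.pi * f_sig * t)

        uniform = np.arange(N) / f_s
        jittered = uniform + rng.uniform(-0.5, 0.5, N) / f_s   # randomized times

        def mag_at(times, f):
            """Magnitude of the direct (nonuniform) DTFT estimate at frequency f."""
            return np.abs(np.sum(x(times) * np.exp(-2j * np.pi * f * times))) / len(times)

        for name, times in (("uniform", uniform), ("jittered", jittered)):
            print(f"{name:8s}  |X(1 Hz)| = {mag_at(times, 1.0):.3f}"
                  f"   |X(9 Hz)| = {mag_at(times, 9.0):.3f}")

        # Uniform sampling shows an alias at 1 Hz as strong as the true tone;
        # jittering collapses that alias toward an O(1/sqrt(N)) noise floor
        # while the true 9 Hz component survives intact.

    The residual noise floor is the "uncorrelated shapeable noise" of the abstract; the thesis goes further by designing non-white sampling processes that push this noise power out of the band of interest.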