    Binaural HRTF Based Spatialisation: New Approaches and Implementation

    New approaches to Head Related Transfer Function (HRTF) based artificial spatialisation of audio are presented and discussed in this paper. A brief summary of the topic of audio spatialisation and HRTF interpolation is offered, followed by an appraisal of the existing minimum phase HRTF interpolation method. Novel alternatives are then suggested which approach the problem of phase interpolation more directly. The first technique, based on magnitude interpolation and phase truncation, aims to use the empirical HRTFs without the need for complex data preparation or manipulation, while minimizing any approximations that may be introduced by data transformations. A second approach augments a functionally based phase model with low-frequency non-linear frequency scaling based on the empirical HRTFs, allowing a more accurate phase representation of the perceptually more relevant lower-frequency end of the spectrum. This more complex approach is deconstructed from an implementation point of view. Testing of both algorithms is then presented, which highlights their success and their favorable performance over minimum phase plus delay methods.
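The first technique described above can be illustrated in a few lines. This is a minimal sketch under assumed details not given in the abstract: two measured head-related impulse responses at adjacent directions, magnitudes blended linearly in the frequency domain, and the phase "truncated" to that of the nearer measurement rather than interpolated.

```python
import numpy as np

def interpolate_hrtf(h_a, h_b, t):
    """Blend two HRIRs measured at adjacent directions, 0 <= t <= 1.

    Magnitude spectra are interpolated linearly; the phase spectrum is
    truncated to that of the nearer measured HRTF (no phase blending).
    """
    H_a, H_b = np.fft.rfft(h_a), np.fft.rfft(h_b)
    mag = (1.0 - t) * np.abs(H_a) + t * np.abs(H_b)   # linear magnitude blend
    phase = np.angle(H_a if t < 0.5 else H_b)         # phase truncation
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(h_a))

# Toy impulse responses standing in for measured HRIRs
h_a = np.zeros(64); h_a[3] = 1.0
h_b = np.zeros(64); h_b[5] = 0.8
h_mid = interpolate_hrtf(h_a, h_b, 0.25)
```

At t = 0 the scheme reproduces the first measurement exactly, which is one motivation for avoiding data transformations such as minimum-phase conversion.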

    View-dependent precomputed light transport using non-linear Gaussian function approximations

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. Includes bibliographical references (p. 43-46). By Paul Elijah Green, S.M.
    We propose a real-time method for rendering rigid objects with complex view-dependent effects under distant all-frequency lighting. Existing precomputed light transport approaches can render rich global illumination effects, but high-frequency view-dependent effects such as sharp highlights remain a challenge. We introduce a new representation of the light transport operator based on sums of Gaussians. The non-linear parameters of the representation allow for 1) arbitrary bandwidth, because scale is encoded as a direct parameter; and 2) high-quality interpolation across view and mesh triangles, because we interpolate the average direction of the incoming light, thereby preventing linear cross-fading artifacts. However, fitting the precomputed light transport data to this new representation requires solving a non-linear regression problem that is more involved than traditional linear and non-linear (truncation) approximation techniques. We present a new data fitting method based on optimization that includes energy terms aimed at enforcing good interpolation. We demonstrate that our method achieves high visual quality for a small storage cost and fast rendering time.
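The two properties claimed for the Gaussian representation can be sketched concretely. This is an illustrative sketch, not the thesis's actual formulation: it assumes a standard spherical-Gaussian lobe a * exp(lam * (v . mu - 1)), where the scale parameter lam directly controls bandwidth, and a hypothetical `interp_lobe_direction` helper that interpolates the lobe's direction (renormalized lerp) instead of cross-fading two lobes.

```python
import numpy as np

def spherical_gaussian(v, mu, lam, a):
    # Lobe centered on unit direction mu; lam is a *direct* scale
    # parameter, so bandwidth is arbitrary (property 1 in the abstract).
    return a * np.exp(lam * (np.dot(v, mu) - 1.0))

def interp_lobe_direction(mu0, mu1, t):
    # Interpolate the direction of the lobe itself (property 2), rather
    # than linearly cross-fading two lobes, which produces ghost highlights.
    mu = (1.0 - t) * mu0 + t * mu1
    return mu / np.linalg.norm(mu)

z = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
mu_mid = interp_lobe_direction(z, x, 0.5)     # halfway direction, unit length
peak = spherical_gaussian(z, z, 8.0, 2.0)     # lobe value at its own center
```

Fitting sums of such lobes to precomputed transport data is the non-linear regression problem the abstract refers to; the sketch only shows why the parameterization interpolates well.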

    Neural View-Interpolation for Sparse Light Field Video

    We suggest representing light field (LF) videos as "one-off" neural networks (NNs), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, an NN LF will likely have lower quality than a same-sized pixel-basis representation. Second, only a few training examples, e.g., 9 views per frame, are available for sparse LF videos. Third, there is no generalization across LFs, only across view and time. Consequently, a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, an NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is interpolatable: if the image output for sparse view coordinates is plausible, it is for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.