
    Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order and is extended to a higher-order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next-level finer grid is overlaid on the coarse base grid with an arbitrary orientation.
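
    The paper itself treats two-dimensional overlaid grids with shock-capturing schemes; purely as a one-dimensional sketch of the conservative constraint, the zeroth-order transfer below gives each receiver cell the overlap-weighted average of the donor cells, so the integral of the transferred quantity is preserved (the grid setup and all names are illustrative, not taken from the paper):

        import numpy as np

        def conservative_transfer_1d(donor_edges, donor_vals, recv_edges):
            """Zeroth-order conservative transfer of cell averages from a donor grid
            to a receiver grid: each receiver cell takes the overlap-weighted average
            of the donor cells it intersects, so sum(value * cell_width) is preserved
            over the common interval."""
            recv_vals = np.zeros(len(recv_edges) - 1)
            for i in range(len(recv_edges) - 1):
                a, b = recv_edges[i], recv_edges[i + 1]
                total = 0.0
                for j in range(len(donor_edges) - 1):
                    lo = max(a, donor_edges[j])
                    hi = min(b, donor_edges[j + 1])
                    if hi > lo:                          # overlapping portion only
                        total += donor_vals[j] * (hi - lo)
                recv_vals[i] = total / (b - a)
            return recv_vals

        # coarse donor grid overlaid by a finer receiver grid
        donor_edges = np.linspace(0.0, 1.0, 5)            # 4 coarse cells
        donor_vals  = np.array([1.0, 2.0, 4.0, 3.0])      # cell averages
        recv_edges  = np.linspace(0.0, 1.0, 9)            # 8 fine cells
        recv_vals   = conservative_transfer_1d(donor_edges, donor_vals, recv_edges)
        # conservation check: both integrals agree
        print(np.sum(donor_vals * np.diff(donor_edges)),
              np.sum(recv_vals * np.diff(recv_edges)))

    A higher-order variant would reconstruct slopes within each donor cell and integrate them over the overlaps, in the spirit of the interpolation and subcell decomposition mentioned above.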

    Estimation Of Multiple Local Orientations In Image Signals

    Local orientation estimation can be posed as the problem of finding the minimum grey-level variance axis within a local neighbourhood. In 2D image signals, this corresponds to the eigensystem analysis of a 2×2 tensor, which yields valid results for single orientations. We describe extensions to multiple overlaid orientations, which may be caused by transparent objects, crossings, bifurcations, corners, etc. Multiple orientation detection is based on the eigensystem analysis of an appropriately extended tensor, yielding so-called mixed orientation parameters. These mixed orientation parameters can be regarded as another tensor built from the sought individual orientation parameters. We show how the mixed orientation tensor can be decomposed into the individual orientations by finding the roots of a polynomial. Applications include directional filtering and interpolation, feature extraction for corners or crossings, and signal separation.
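
    For context, here is a minimal sketch of the single-orientation baseline the abstract starts from: the 2×2 structure tensor of a patch is accumulated from image gradients, and the eigenvector of its smaller eigenvalue gives the minimum grey-level variance axis. The multiple-orientation extension via the mixed orientation tensor is not reproduced; the patch and angle values are illustrative.

        import numpy as np

        def single_orientation(patch):
            """Dominant local orientation of a 2D patch from the eigensystem of the
            2x2 structure tensor J = [[sum gx*gx, sum gx*gy], [sum gx*gy, sum gy*gy]].
            Returns the angle (radians, modulo pi) of the minimum-variance axis,
            i.e. the eigenvector belonging to the smaller eigenvalue."""
            gy, gx = np.gradient(patch.astype(float))            # gradients along rows, cols
            jxx, jxy, jyy = (gx * gx).sum(), (gx * gy).sum(), (gy * gy).sum()
            theta_grad = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # dominant gradient axis
            return theta_grad + np.pi / 2.0                      # perpendicular: edge axis

        # synthetic patch: stripes whose normal points along angle 0.3 rad
        y, x = np.mgrid[0:64, 0:64]
        stripes = np.sin(0.5 * (x * np.cos(0.3) + y * np.sin(0.3)))
        print(single_orientation(stripes))   # close to 0.3 + pi/2 ~ 1.87 rad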

    Audibility and Interpolation of Head-Above-Torso Orientation in Binaural Technology

    Head-related transfer functions (HRTFs) incorporate fundamental cues required for human spatial hearing and are often applied to auralize results obtained from room acoustic simulations. HRTFs are typically available for various directions of sound incidence and a fixed head-above-torso orientation (HATO). If, in interactive auralizations, HRTFs are exchanged according to the head rotations of a listener, the auralization result most often corresponds to a listener turning head and torso simultaneously, while, in reality, listeners usually turn their head independently above a fixed torso. In the present study, we show that accounting for HATO produces clearly audible differences, thereby suggesting the relevance of correct HATO when aiming at perceptually transparent binaural synthesis. Furthermore, we addressed the efficient representation of variable HATO in interactive acoustic simulations using spatial interpolation. To this end, we evaluated two different approaches: interpolating between HRTFs with identical torso-to-source but different head-to-source orientations (head interpolation) and interpolating between HRTFs with the same head-to-source but different torso-to-source orientations (torso interpolation). Torso interpolation turned out to be more robust against increasing interpolation step width: here, the median threshold of audibility for the head-above-torso resolution was about 25 degrees, whereas with head interpolation it was about 10 degrees. Additionally, we tested a non-interpolation approach (nearest neighbor) as a suitable means for mobile applications with limited computational capacities.
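
    The study compares head and torso interpolation of measured HRTFs; purely as a generic illustration, the sketch below selects or linearly blends head-related impulse responses over a discrete HATO grid, covering the nearest-neighbor fallback mentioned for mobile devices and a simple interpolation. Function and array names are hypothetical, and sample-wise blending is a simplification of proper HRTF interpolation.

        import numpy as np

        def select_hrir(hrir_grid, hato_angles_deg, hato_query_deg, interpolate=True):
            """Pick or linearly blend an HRIR for a requested head-above-torso
            orientation (HATO) from a discrete measurement grid.

            hrir_grid:       (n_angles, n_samples) array, one impulse response per angle
            hato_angles_deg: sorted 1D array of measured HATO angles in degrees
            hato_query_deg:  requested HATO in degrees
            interpolate=False reproduces a nearest-neighbor strategy; True blends the
            two neighbouring measurements sample by sample."""
            a = np.asarray(hato_angles_deg, dtype=float)
            q = float(hato_query_deg)
            if not interpolate:
                return hrir_grid[np.argmin(np.abs(a - q))]
            i = np.clip(np.searchsorted(a, q), 1, len(a) - 1)
            w = (q - a[i - 1]) / (a[i] - a[i - 1])
            return (1.0 - w) * hrir_grid[i - 1] + w * hrir_grid[i]

        # toy usage: HATO measured every 25 degrees, query at 12 degrees
        grid = np.random.randn(5, 256)                       # stand-in HRIRs
        angles = np.array([-50.0, -25.0, 0.0, 25.0, 50.0])
        h = select_hrir(grid, angles, 12.0)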

    The What-And-Where Filter: A Spatial Mapping Neural Network for Object Recognition and Image Understanding

    The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation compared with size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition. Advanced Research Projects Agency (ONR-N00014-92-J-4015, AFOSR 90-0083); British Petroleum (89-A-1204); National Science Foundation (IRI-90-00530, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)
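
    As a loose, generic illustration of the "multiscale array of oriented detectors" mentioned above (not the authors' Where filter, which adds competitive and interpolative interactions across position, orientation, and size), the sketch below filters an image with Gabor kernels across orientations and scales; all names and parameter choices are assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        def gabor_kernel(size, wavelength, theta, sigma):
            """Real (even) Gabor kernel: one oriented detector at a given scale."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
                    * np.cos(2 * np.pi * xr / wavelength))

        def oriented_responses(image, thetas, scales):
            """Responses of a multiscale array of oriented detectors, shaped
            (n_scales, n_thetas, H, W). A per-pixel argmax over the first two axes
            gives a crude joint (size, orientation) label at each position."""
            H, W = image.shape
            out = np.zeros((len(scales), len(thetas), H, W))
            for si, s in enumerate(scales):
                size = int(6 * s) | 1                  # odd kernel width ~ 6 sigma
                for ti, th in enumerate(thetas):
                    k = gabor_kernel(size, wavelength=4 * s, theta=th, sigma=s)
                    out[si, ti] = fftconvolve(image, k, mode="same")
            return out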

    Design of an FPGA-based smart camera and its application towards object tracking : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Electronics and Computer Engineering at Massey University, Manawatu, New Zealand

    Smart cameras and hardware image processing are not new concepts, yet despite the fact that both have existed for several decades, not much literature has been presented on the design and development process of hardware-based smart cameras. This thesis will examine and demonstrate the principles needed to develop a smart camera on hardware, based on the experience gained from developing an FPGA-based smart camera. The smart camera is implemented on a Terasic DE0 FPGA development board, using Terasic’s 5 megapixel GPIO camera. The algorithm operates at 120 frames per second at a resolution of 640×480 by utilising a modular streaming approach. Two case studies will be explored in order to demonstrate the development techniques established in this thesis. The first case study will develop the global vision system for a robot soccer implementation. The algorithm will identify and calculate the positions and orientations of each robot and the ball. Like many robot soccer implementations, each robot has colour patches on top to identify it and to aid in finding its orientation. The ball is a single solid colour that is completely distinct from the colour patches. Due to the presence of uneven light levels, a YUV-like colour space labelled YC1C2 is used in order to make the colour values more light invariant. The colours are then classified, and a connected-components algorithm is used to segment the colour patches. The shapes of the classified patches are then used to identify the individual robots, and a CORDIC function is used to calculate the orientation. The second case study will investigate an improved colour segmentation design: a new HSY colour space is developed by remapping the Cartesian YC1C2 coordinates to a polar coordinate system. This provides improved colour segmentation results by allowing for variations in colour value caused by uneven light patterns and changing light levels.
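
    Both the CORDIC orientation step and the YC1C2-to-HSY remapping reduce to a Cartesian-to-polar conversion, which FPGAs typically realise with shift-and-add CORDIC iterations. The floating-point sketch below only illustrates the idea (a real fixed-point pipeline differs); the function name and iteration count are assumptions.

        import math

        def cordic_vectoring(x, y, iterations=16):
            """CORDIC in vectoring mode: rotate (x, y) onto the positive x-axis with
            shift-and-add style steps while accumulating the rotation angle. Returns
            (gain * magnitude, angle in radians); the gain approaches ~1.6468.
            Applied to the (C1, C2) chroma pair, the angle acts as hue and the scaled
            magnitude as saturation of an HSY-like colour space."""
            angle = 0.0
            if x < 0:                                   # quadrant correction
                angle = -math.pi if y < 0 else math.pi
                x, y = -x, -y
            for i in range(iterations):
                d = 1.0 if y < 0 else -1.0              # rotate so that y goes to zero
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                angle -= d * math.atan(2.0 ** -i)
            return x, angle

        print(cordic_vectoring(1.0, 1.0))               # ~ (1.6468 * sqrt(2), pi/4)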

    Deep Reflectance Maps

    Undoing the image formation process and thereby decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials, and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images. Project page: http://homes.esat.kuleuven.be/~krematas/DRM
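
    The indirect scheme above first predicts per-pixel surface orientation and then fills a sparse reflectance map; the paper learns this interpolation, but a classical scattered-data stand-in (sketched below with invented shapes and names) makes the idea concrete: observed colours are scattered into an orientation-indexed domain and the gaps are interpolated.

        import numpy as np
        from scipy.interpolate import griddata

        def sparse_reflectance_map(normals, colors, resolution=64):
            """Scatter per-pixel colours into a 2D domain indexed by the (nx, ny)
            components of the camera-facing unit normal, then fill the gaps by
            scattered-data interpolation to get a dense reflectance map.

            normals: (N, 3) unit normals with nz > 0
            colors:  (N, 3) RGB observations at the same pixels
            returns: (resolution, resolution, 3) reflectance map"""
            pts = normals[:, :2]                         # project onto the nx-ny disc
            gx, gy = np.meshgrid(np.linspace(-1, 1, resolution),
                                 np.linspace(-1, 1, resolution))
            lin = np.stack([griddata(pts, colors[:, c], (gx, gy), method="linear")
                            for c in range(3)], axis=-1)
            near = np.stack([griddata(pts, colors[:, c], (gx, gy), method="nearest")
                             for c in range(3)], axis=-1)
            # linear inside the convex hull of the samples, nearest-neighbour outside
            return np.where(np.isnan(lin), near, lin)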

    Modelling Aspects of Planar Multi-Mode Antennas for Direction-of-Arrival Estimation

    Multi-mode antennas are an alternative to classical antenna arrays, and hence a promising emerging sensor technology for a vast variety of applications in the areas of array signal processing and digital communications. An unsolved problem is to describe the radiation pattern of multi-mode antennas in closed analytic form based on calibration measurements or on electromagnetic field (EMF) simulation data. As a solution, we investigate two modeling methods: one is based on the array interpolation technique (AIT), the other on wavefield modeling (WM). Both methods are able to accurately interpolate quantized EMF data of a given multi-mode antenna, in our case a planar four-port antenna developed for the 6-8.5 GHz range. Since the modeling methods inherently depend on parameter sets, we investigate the influence of the parameter choice on the accuracy of both models. Furthermore, we evaluate the impact of modeling errors on coherent maximum-likelihood direction-of-arrival (DoA) estimation given different model parameters. Numerical results are presented for a single polarization component. Simulations reveal that the estimation bias introduced by model errors depends on the chosen model parameters. Finally, we provide optimized sets of AIT and WM parameters for the multi-mode antenna under investigation. With these parameter sets, EMF data samples can be reproduced in interpolated form with high angular resolution.
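
    The array interpolation technique fits, over a calibration sector, a linear mapping from the steering vectors of an ideal virtual array onto the measured or EMF-simulated port responses; the least-squares sketch below shows that fit for a single polarization. Array sizes, spacing, and all names are assumptions rather than the paper's configuration.

        import numpy as np

        def fit_ait_matrix(measured, angles_rad, n_virtual=8, spacing_wavelengths=0.5):
            """Least-squares fit of the AIT mapping matrix B that maps the steering
            vectors A of an ideal virtual uniform linear array onto the calibration
            responses M of the real antenna: minimise ||M - B @ A||_F.

            measured:   (n_ports, n_angles) complex calibration responses
            angles_rad: (n_angles,) calibration directions
            returns:    B with shape (n_ports, n_virtual); B @ a_virtual(theta) then
                        serves as an interpolated response model for DoA estimators."""
            angles_rad = np.asarray(angles_rad, dtype=float)
            k = np.arange(n_virtual)[:, None]
            A = np.exp(2j * np.pi * spacing_wavelengths * k * np.sin(angles_rad)[None, :])
            # solve A.T @ B.T = M.T in the least-squares sense
            B_T, *_ = np.linalg.lstsq(A.T, measured.T, rcond=None)
            return B_T.T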