    Ultra wideband antenna array processing under spatial aliasing

    Given a certain transmission frequency, the Shannon spatial sampling limit defines an upper bound on the antenna element spacing. Beyond this bound, the resulting ambiguity prevents correct estimation of the signal parameters (i.e., the array manifold crosses itself). This spacing limit is inversely proportional to the transmission frequency; therefore, to cover a wider spectral support, the element spacing should be decreased. However, practical implementations of closely spaced elements suffer a detrimental increase in electromagnetic mutual coupling among the sensors. Furthermore, decreasing the spacing reduces the angular resolution of the array. In this dissertation, the problem of Direction of Arrival (DOA) estimation of broadband sources is addressed when the element spacing of a Uniform Linear Array (ULA) exceeds this limit. It is shown that the aliasing ambiguity can be resolved by exploiting the frequency diversity of the broadband sources. An algorithm based on the Maximum Likelihood Estimator (MLE) is proposed to estimate the transmitted data signal and the DOA of each source. Subsequently, a subspace-based algorithm is developed and the problem of order estimation is discussed. The adopted signaling framework assumes a subband-hopping transmission in order to resolve the problems of source association and system identification. The proposed algorithms relax the stringent maximum element-spacing constraint tied to the upper bound of the transmission frequency and suggest that, under some mild constraints, the element spacing can be conveniently increased. An approximate expression for the estimation error has also been derived to gauge the behavior of the proposed algorithms. Confirmatory simulations show that the performance gain of the proposed setup is potentially significant, specifically when the transmitters are closely spaced and the Signal to Noise Ratio (SNR) is low, which makes it applicable to license-free communication.
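The spacing bound and the manifold ambiguity it guards against can be illustrated numerically. The sketch below uses hypothetical parameters (frequency, spacing, array size are invented for the example, not taken from the dissertation) and shows two distinct arrival angles mapping to the same narrowband ULA steering vector once the spacing exceeds half a wavelength:

```python
import numpy as np

# Illustrative sketch (hypothetical parameters): once the element spacing d
# exceeds lambda/2, two distinct arrival angles map to the same point on
# the array manifold of a narrowband ULA.
c = 3e8              # propagation speed (m/s)
f = 3e9              # transmission frequency (Hz)
lam = c / f          # wavelength (0.1 m)
d = 0.75 * lam       # spacing above the lambda/2 Shannon limit
M = 8                # number of array elements

def steering(theta_deg):
    """Narrowband ULA steering vector for arrival angle theta (degrees)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * (d / lam) * np.arange(M) * np.sin(theta))

# An aliased angle satisfies sin(theta2) = sin(theta1) - lam/d, which
# shifts every inter-element phase by an exact multiple of 2*pi.
theta1 = 40.0
theta2 = np.rad2deg(np.arcsin(np.sin(np.deg2rad(theta1)) - lam / d))

ambiguous = np.allclose(steering(theta1), steering(theta2))
print(theta2, ambiguous)  # a clearly different angle, yet ambiguous is True
```

With a broadband source, the aliased angle changes with frequency while the true angle does not, which is the frequency-diversity observation the proposed algorithms exploit.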

    Acoustic event detection and localization using distributed microphone arrays

    Automatic acoustic scene analysis is a complex task that involves several functionalities: detection (time), localization (space), separation, recognition, etc. This thesis focuses on both acoustic event detection (AED) and acoustic source localization (ASL) when several sources may be simultaneously present in a room. In particular, the experimental work is carried out in a meeting-room scenario. Unlike previous works that either employed models of all possible sound combinations or additionally used video signals, this thesis tackles the problem of temporally overlapping sounds by exploiting the signal diversity that results from using multiple microphone-array beamformers. The core of this thesis is a rather computationally efficient approach that consists of three processing stages. In the first, a set of (null-)steering beamformers carries out diverse partial signal separations, using multiple arbitrarily located linear microphone arrays, each composed of a small number of microphones. In the second stage, each beamformer output goes through a classification step, which uses models for all the targeted sound classes (HMM-GMM, in the experiments). Then, in a third stage, the classifier scores, whether intra- or inter-array, are combined using a probabilistic criterion (like MAP) or a machine-learning fusion technique (the fuzzy integral (FI), in the experiments). The above-mentioned processing scheme is applied in this thesis to a set of problems of increasing complexity, which are defined by the assumptions made regarding the identities (plus time endpoints) and/or positions of the sounds. In fact, the thesis report starts with the problem of unambiguously mapping identities to positions, continues with AED (positions assumed) and ASL (identities assumed), and ends with the integration of AED and ASL into a single system, which does not need any assumption about identities or positions.
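As a toy illustration of the third stage, a product-rule (MAP-style) combination of per-array classifier scores can be sketched as follows; the class names and score values are invented for the example and are not the thesis's experimental classes:

```python
import numpy as np

# Hypothetical sketch of score-level fusion across arrays with the product
# rule: posteriors from independent branches are multiplied and renormalized.
classes = ["speech", "door_knock", "keyboard"]  # invented class set

# One row per array/classifier branch, one column per class (rows sum to 1).
scores = np.array([
    [0.5, 0.3, 0.2],
    [0.4, 0.5, 0.1],
    [0.3, 0.6, 0.1],
])

fused = np.prod(scores, axis=0)   # product rule across arrays
fused /= fused.sum()              # renormalize to a posterior
decision = classes[int(np.argmax(fused))]
print(decision)  # "door_knock"
```

Note how the product rule lets two confident branches outvote one uncertain branch; a fuzzy-integral fusion, as used in the experiments, would additionally weight branches by a learned measure of their reliability.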
The evaluation experiments are carried out in a meeting-room scenario where two sources are temporally overlapped; one of them is always speech and the other is an acoustic event from a pre-defined set. Two different databases are used: one produced by merging signals actually recorded in the UPC's department smart-room, and the other consisting of overlapping sound signals directly recorded in the same room in a rather spontaneous way. From the experimental results with a single array, it can be observed that the proposed detection system performs better than either the model-based system or a blind-source-separation-based system. Moreover, the product-rule-based combination and the FI-based fusion of the scores resulting from the multiple arrays improve the accuracies further. On the other hand, the posterior position assignment is performed with a very small error rate. Regarding ASL, and assuming an accurate AED system output, the 1-source localization performance of the proposed system is slightly better than that of the widely used SRP-PHAT system working in an event-based mode, and it performs significantly better than the latter in the more complex 2-source scenario. Finally, though the joint system suffers a slight degradation in classification accuracy with respect to the case where the source positions are known, it shows the advantage of carrying out the two tasks, recognition and localization, with a single system, and it allows the inclusion of information about the prior probabilities of the source positions. It is also worth noticing that, although the acoustic scenario used for experimentation is rather limited, the approach and its formalism were developed for a general case, where the number and identities of sources are not constrained.
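For reference, the correlation core underlying the SRP-PHAT baseline mentioned above is the PHAT-weighted generalized cross-correlation. A minimal single-microphone-pair sketch on synthetic signals (the delay value, signal length, and random seed are arbitrary choices, not experimental settings):

```python
import numpy as np

# Minimal GCC-PHAT sketch: estimate the delay between one microphone pair.
rng = np.random.default_rng(0)
delay = 8                                          # true delay in samples
x = rng.standard_normal(4096)                      # reference channel
y = np.concatenate([np.zeros(delay), x[:-delay]])  # delayed channel

n = 2 * len(x)                             # zero-pad against circular wrap
X = np.fft.rfft(x, n)
Y = np.fft.rfft(y, n)
cross = Y * np.conj(X)
cross /= np.maximum(np.abs(cross), 1e-12)  # PHAT weighting: keep phase only
cc = np.fft.irfft(cross, n)

lag = int(np.argmax(cc))
if lag > n // 2:                           # map to a signed lag
    lag -= n
print(lag)  # 8
```

SRP-PHAT sums such PHAT-weighted correlations over all microphone pairs for each candidate position and picks the position with the highest steered response power.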