
    Idealized computational models for auditory receptive fields

    This paper presents a theory by which idealized models of auditory receptive fields can be derived in a principled, axiomatic manner from a set of structural properties that enable invariance of receptive field responses under natural sound transformations and ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters, as well as a novel family of generalized Gammatone filters with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or the combination of a time-causal generalized Gammatone filter over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can either be separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are uniquely determined by the assumptions. It is demonstrated how the presented framework allows for the computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
    Comment: 55 pages, 22 figures, 3 tables
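    As a concrete reference point for the Gammatone family mentioned above, the classical Gammatone filter has the impulse response g(t) = t^(n-1) e^(-2πbt) cos(2πft + φ) for t ≥ 0 and zero otherwise. The Python sketch below evaluates this standard form; it is not code from the paper, and the parameter values are illustrative assumptions only.

```python
import numpy as np

def gammatone_ir(t, n=4, b=125.0, f=1000.0, phi=0.0):
    """Classical Gammatone impulse response
        g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t + phi),   t >= 0
    with order n, bandwidth parameter b (Hz), centre frequency f (Hz) and
    phase phi.  Parameter values here are illustrative, not taken from the paper."""
    t = np.asarray(t, dtype=float)
    envelope = np.where(t >= 0.0, t**(n - 1) * np.exp(-2.0 * np.pi * b * t), 0.0)
    return envelope * np.cos(2.0 * np.pi * f * t + phi)

# Example: 20 ms of the impulse response sampled at 16 kHz
fs = 16000
t = np.arange(0, 0.02, 1.0 / fs)
ir = gammatone_ir(t)
```

    Increasing the order n lengthens the temporal envelope and narrows the spectral response, which is the kind of selectivity-versus-delay trade-off that the generalized Gammatone family described in the abstract is designed to control.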

    Techniques to Improve the Efficiency of Data Transmission in Cable Networks

    The cable television (CATV) networks, since their introduction in the late 1940s, have become a crucial part of the broadcasting industry. To keep up with growing demands from subscribers, cable networks nowadays not only provide television programs but also deliver two-way interactive services such as telephone, high-speed Internet and social TV features. A new standard for CATV networks is released every five to six years to satisfy the growing demands of the mass market. From this perspective, this thesis is concerned with three main aspects of the continuing development of cable networks: (i) efficient implementations of backward-compatibility functions from the old standard, (ii) addressing and providing solutions for technically challenging issues in the current standard, and (iii) looking for prospective features that can be implemented in a future standard.

    Since 1997, five different versions of the digital CATV standard have been released in North America. A new standard often contains major improvements over the previous one. The latest version of the standard, DOCSIS 3.1 (released in late 2013), is packed with state-of-the-art technologies and supports approximately ten times the traffic of the previous standard, DOCSIS 3.0 (released in 2008). Backward compatibility is a must-have function for cable networks. In particular, to facilitate the migration from older standards to a newer one, the backward-compatible functions of the old standards must remain in newer-standard products. More importantly, to keep the implementation cost low, these inherited backward-compatible functions must be redesigned to take advantage of the latest technology and algorithms.

    To improve the backward-compatibility functions, the first contribution of the thesis focuses on redesigning the pulse-shaping filter by exploiting infinite impulse response (IIR) filter structures as an alternative to the conventional finite impulse response (FIR) structures. Comprehensive comparisons show that more economical filters with better performance can be obtained by the proposed design algorithm, which considers a hybrid parameterization of the filter's transfer function in combination with a constraint that keeps the pole radii less than 1.

    The second contribution of the thesis is a new fractional timing estimation algorithm based on peak detection by log-domain interpolation (a generic numerical sketch of this interpolation step follows this abstract). When compared with the commonly used timing detection method based on parabolic interpolation, the proposed algorithm yields more accurate estimates at a comparable implementation cost.

    The third contribution of the thesis is a technique to estimate the multipath channel for DOCSIS 3.1 cable networks. DOCSIS 3.1 is markedly different from prior generations of CATV networks in that OFDM/OFDMA is employed to create a spectrally efficient signal. In order to demodulate such a signal effectively, the receiver must estimate and track the multipath channel, and the estimation must be highly accurate because extremely dense constellations such as 4096-QAM and possibly 16384-QAM can be used in DOCSIS 3.1. The conventional OFDM channel estimators available in the literature either do not perform satisfactorily or are not suitable for the DOCSIS 3.1 channel. The novel channel estimation technique proposed in this thesis iteratively searches for the parameters of the channel paths. The proposed technique not only substantially enhances the channel estimation accuracy, but also, at no additional cost, accurately identifies the delay of each echo in the system. This echo-delay information is valuable for proactive maintenance of the network.

    The fourth contribution of this thesis is a novel scheme that allows OFDM transmission without a cyclic prefix (CP). The OFDM structure in the current DOCSIS 3.1 does not achieve the maximum throughput when the channel has multipath components. The multipath channel causes inter-symbol interference (ISI), which is commonly mitigated by employing a CP. The CP acts as a guard interval that, while successfully protecting the signal from ISI, reduces the transmission throughput. The problem is more severe in the downstream direction, where the throughput of the entire system is determined by the user with the worst channel. To solve this problem, the thesis proposes major alterations to the current DOCSIS 3.1 OFDM/OFDMA structure: a pair of Nyquist filters at the transceivers and an efficient time-domain equalizer (TEQ) at the receiver reduce ISI to a negligible level without the need for a CP. Simulation results demonstrate that, by incorporating the proposed alterations into the DOCSIS 3.1 downstream channel, the system can achieve the maximum throughput over a wide range of multipath channel conditions.
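    The fractional timing estimator in the second contribution can be illustrated with the standard three-point peak interpolation: fit a parabola through the coarse correlation peak and its two neighbours, optionally on the logarithm of the samples, which is exact when the peak is Gaussian-shaped. This is a generic sketch of that idea, not the thesis's algorithm; all names and test values are illustrative.

```python
import numpy as np

def fractional_peak_offset(y, use_log=False):
    """Estimate the fractional position of the peak of a sampled correlation
    magnitude |y| by fitting a parabola through the coarse peak and its two
    neighbours.  With use_log=True the fit is done on log|y|, which is exact
    for a Gaussian-shaped peak (the 'log-domain interpolation' idea)."""
    y = np.abs(np.asarray(y, dtype=float))
    k = int(np.argmax(y))                        # coarse (integer) peak index
    if k == 0 or k == len(y) - 1:
        return float(k)                          # cannot interpolate at the edges
    ym, y0, yp = y[k - 1], y[k], y[k + 1]
    if use_log:
        ym, y0, yp = np.log(ym), np.log(y0), np.log(yp)
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)   # fractional offset in (-0.5, 0.5)
    return k + delta

# Toy example: Gaussian-shaped correlation peak centred at 10.3 samples
n = np.arange(32)
corr = np.exp(-0.5 * ((n - 10.3) / 2.0) ** 2)
print(fractional_peak_offset(corr, use_log=False))  # close, but slightly biased
print(fractional_peak_offset(corr, use_log=True))   # ~10.3 (exact for a Gaussian peak)
```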
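    The throughput cost of the cyclic prefix that motivates the fourth contribution follows from simple arithmetic: a CP of N_cp samples prepended to an N_fft-sample symbol leaves only N_fft/(N_fft + N_cp) of the air time for payload. A minimal sketch, with symbol and CP lengths chosen for illustration rather than taken from a specific DOCSIS 3.1 profile:

```python
def cp_efficiency(n_fft, n_cp):
    """Fraction of each OFDM symbol that carries payload when a cyclic
    prefix of n_cp samples is prepended to an n_fft-sample symbol."""
    return n_fft / (n_fft + n_cp)

# Illustrative numbers only (not necessarily a DOCSIS 3.1 profile)
for n_cp in (256, 512, 1024):
    eff = cp_efficiency(8192, n_cp)
    print(f"FFT=8192, CP={n_cp:4d}: efficiency = {eff:.3f} "
          f"({(1 - eff) * 100:.1f}% of air time spent on the CP)")
```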

    Gaussian Mixture Reduction for Bayesian Target Tracking in Clutter

    The Bayesian solution for tracking a target in clutter naturally results in a target-state Gaussian mixture probability density function (pdf), which is a sum of weighted Gaussian pdfs, or mixture components. As new tracking measurements are received, the number of mixture components grows without bound, and eventually a reduced-component approximation of the original Gaussian mixture pdf is necessary to evaluate the target-state pdf efficiently while maintaining good tracking performance. Many approximation methods exist, but they are either ad hoc or use rather crude approximation techniques. Recent studies have shown that a measure-function-based mixture reduction algorithm (MRA) can generate a high-quality reduced-component approximation to the original target-state Gaussian mixture pdf. To date, the Integral Square Error (ISE) cost-function-based MRA has been shown to provide better tracking performance than any previously published algorithm for Bayesian tracking in heavy clutter. Research conducted for this thesis has led to the development of a new measure function, the Correlation Measure (CM), which gauges the similarity between a full- and a reduced-component Gaussian mixture pdf. This new measure function is implemented in an MRA and tested in a simulated scenario of a single target in heavy clutter. Results indicate that the CM MRA performs slightly better than the ISE cost-function-based MRA, but only by a small margin.
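    For context on the measure functions discussed above, the ISE between two Gaussian mixtures has a closed form because the integral of a product of two Gaussian pdfs is itself a Gaussian evaluation: ∫ N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2). The one-dimensional Python sketch below computes the ISE this way; it is illustrative only, the thesis's Correlation Measure is not reproduced here, and the mixture parameters are made up.

```python
import numpy as np

def _gauss(x, mu, var):
    """Value of a 1-D Gaussian pdf N(x; mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def _cross(wa, ma, va, wb, mb, vb):
    """Integral of the product of two weighted 1-D Gaussian mixtures, using
    the identity  int N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2)."""
    return sum(a * b * _gauss(m1, m2, v1 + v2)
               for a, m1, v1 in zip(wa, ma, va)
               for b, m2, v2 in zip(wb, mb, vb))

def ise(wf, mf, vf, wg, mg, vg):
    """Integral Square Error  int (f(x) - g(x))^2 dx  between mixtures f and g."""
    return (_cross(wf, mf, vf, wf, mf, vf)
            - 2.0 * _cross(wf, mf, vf, wg, mg, vg)
            + _cross(wg, mg, vg, wg, mg, vg))

# Toy example: a 3-component mixture f and a 2-component approximation g
wf, mf, vf = [0.5, 0.3, 0.2], [0.0, 0.2, 3.0], [1.0, 1.2, 0.8]
wg, mg, vg = [0.8, 0.2], [0.1, 3.0], [1.1, 0.8]
print(ise(wf, mf, vf, wg, mg, vg))   # small value => close approximation
```

    A correlation-style similarity could be formed by normalising the cross term, e.g. J_fg / sqrt(J_ff · J_gg), but whether this matches the thesis's CM definition is not assumed here.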

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, focusing on methods that employ digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, together with approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Parameter Estimation from Time-Series Data with Correlated Errors: A Wavelet-Based Method and its Application to Transit Light Curves

    We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another whose power spectral density varies as 1/f^γ. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the midtransit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for midtransit times and truer estimates of their uncertainties.
    Comment: Accepted in ApJ. Illustrative code may be found at http://www.mit.edu/~carterja/code/. 17 pages
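    The key idea of the wavelet-based likelihood is that a 1/f^γ process is approximately diagonalised by a discrete wavelet transform, so the likelihood factorises over wavelet coefficients with scale-dependent variances and can be evaluated in O(N). The sketch below uses a plain Haar transform and a simplified variance prescription (a white term plus a red term proportional to 2^(-γm)); it mirrors the structure of the method but does not reproduce the paper's exact normalisations (see the code link above for the authors' implementation). All names and parameter values here are illustrative.

```python
import numpy as np

def haar_details(x):
    """Full Haar decomposition of a length-2^M signal.
    Returns the detail-coefficient arrays (finest scale first) and the
    single remaining scaling coefficient."""
    a = np.asarray(x, dtype=float)
    assert a.size > 0 and (a.size & (a.size - 1)) == 0, "length must be a power of two"
    details = []
    while a.size > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # detail (difference) coefficients
        a = (even + odd) / np.sqrt(2.0)               # approximation (average) coefficients
    return details, a

def wavelet_loglike(residuals, sigma_white, sigma_red, gamma):
    """Gaussian log-likelihood of model residuals under white + 1/f^gamma noise,
    treating the Haar coefficients as independent, with variance
    sigma_white^2 + sigma_red^2 * 2^(-gamma * m), where m increases towards
    finer scales (fine scales carry less of the red-noise power).
    A simplified stand-in for the paper's exact prescription."""
    details, scaling = haar_details(residuals)
    n_scales = len(details)
    logL = 0.0
    for i, d in enumerate(details):
        m = n_scales - i                              # details[0] is the finest scale
        var = sigma_white**2 + sigma_red**2 * 2.0 ** (-gamma * m)
        logL += -0.5 * np.sum(d**2 / var + np.log(2.0 * np.pi * var))
    var0 = sigma_white**2 + sigma_red**2 * 2.0 ** (-gamma)   # coarsest-scale stand-in
    logL += -0.5 * np.sum(scaling**2 / var0 + np.log(2.0 * np.pi * var0))
    return float(logL)

# Toy usage: 256 residual samples from some transit-model fit
rng = np.random.default_rng(0)
resid = rng.normal(scale=0.001, size=256)
print(wavelet_loglike(resid, sigma_white=0.001, sigma_red=0.0005, gamma=1.0))
```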