
    Audio-assisted movie dialogue detection

    An audio-assisted system is investigated that detects whether a movie scene is a dialogue or not. The system is based on actor indicator functions, i.e., functions that define whether an actor speaks at a certain time instant. In particular, the cross-correlation and the magnitude of the corresponding cross-power spectral density of a pair of indicator functions are input to various classifiers, such as voted perceptrons, radial basis function networks, random trees, and support vector machines, for dialogue/non-dialogue detection. To boost classifier efficiency, AdaBoost is also exploited. The aforementioned classifiers are trained using ground-truth indicator functions determined by human annotators for 41 dialogue and another 20 non-dialogue audio instances. For testing, actual indicator functions are derived by applying audio activity detection and actor clustering to audio recordings. 23 instances are randomly chosen among the aforementioned instances, 17 of which correspond to dialogue scenes and 6 to non-dialogue ones. Accuracy ranging between 0.739 and 0.826 is reported. © 2008 IEEE
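
As an illustration of the feature extraction described above, the sketch below computes the cross-correlation of a pair of binary actor indicator functions and the magnitude of the corresponding cross-power spectral density (obtained here via the FFT of the cross-correlation, i.e., the Wiener-Khinchin relation). The function name and the mean-removal step are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def dialogue_features(a, b):
    """Cross-correlation and cross-power spectral density magnitude for a
    pair of binary actor indicator functions (illustrative sketch; the
    paper's exact feature extraction may differ)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Full cross-correlation of the mean-removed indicator sequences;
    # zero lag sits at index len(b) - 1.
    xcorr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    # Cross-power spectral density via the FFT of the cross-correlation;
    # only its magnitude is used as a classifier input.
    cpsd_mag = np.abs(np.fft.rfft(xcorr))
    return xcorr, cpsd_mag

# Two actors alternating speech turns, as in a dialogue scene: their
# indicator functions anti-correlate at zero lag.
actor1 = np.array([1, 1, 0, 0, 1, 1, 0, 0])
actor2 = np.array([0, 0, 1, 1, 0, 0, 1, 1])
xc, cp = dialogue_features(actor1, actor2)
```

Turn-taking between two speakers shows up as a negative peak of the cross-correlation at zero lag, which is what makes these features informative for dialogue detection.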


    Multirate Frequency Transformations: Wideband AM-FM Demodulation with Applications to Signal Processing and Communications

    The AM-FM (amplitude and frequency modulation) signal model finds numerous applications in image processing, communications, and speech processing. The traditional approaches towards demodulation of signals in this category are the analytic signal approach, frequency tracking, and the energy operator approach. These approaches, however, assume that the amplitude and frequency components are slowly time-varying, i.e., narrowband, and incur significant demodulation error in wideband scenarios. In this thesis, we extend a two-stage approach towards wideband AM-FM demodulation, which combines multirate frequency transformations (MFT), enacted through a combination of multirate systems, with traditional demodulation techniques such as the Teager-Kaiser energy separation algorithm (ESA), to large wideband-to-narrowband conversion factors. The MFT module comprises multirate interpolation and heterodyning and converts the wideband AM-FM signal into a narrowband signal, while the demodulation module, such as ESA, demodulates the narrowband signal into constituent amplitude and frequency components that are then transformed back to yield estimates for the wideband signal. This MFT-ESA approach is then applied to the problems of: (a) wideband image demodulation and fingerprint demodulation, where multidimensional energy separation is employed, (b) wideband first-formant demodulation in vowels, and (c) wideband CPM demodulation with partial response signaling, to demonstrate its validity in both monocomponent and multicomponent scenarios as an effective AM-FM signal demodulation and analysis technique for image processing, speech processing, and communications applications.
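
The narrowband demodulation stage can be illustrated with the discrete Teager-Kaiser energy operator and the DESA-2 energy separation estimator. This is a generic textbook sketch of the ESA step only, assuming the MFT front end (interpolation and heterodyning) has already produced a narrowband signal; it is not the thesis's full pipeline.

```python
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy operator: Psi[x](n) = x(n)^2 - x(n-1) x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x):
    """DESA-2 energy separation: estimate the instantaneous amplitude and
    frequency (rad/sample) of a narrowband AM-FM signal."""
    psi_x = teager(x)
    y = x[2:] - x[:-2]              # symmetric difference x(n+1) - x(n-1)
    psi_y = teager(y)
    psi_x = psi_x[1:-1]             # align both operators on n = 2 .. N-3
    eps = 1e-12                     # guard against division by zero
    omega = 0.5 * np.arccos(np.clip(1.0 - psi_y / (2.0 * psi_x + eps), -1.0, 1.0))
    amp = 2.0 * psi_x / np.sqrt(psi_y + eps)
    return amp, omega

# For a pure cosine of amplitude 1.5 at 0.2 rad/sample, DESA-2 recovers
# both components essentially exactly.
n = np.arange(400)
x = 1.5 * np.cos(0.2 * n)
amp, omega = desa2(x)
```

For wideband signals the same estimator degrades, which is exactly the motivation for the MFT conversion to a narrowband signal before this step.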

    Communications Biophysics

    Contains research objectives and summary of research on nine research projects split into four sections.
    National Institutes of Health (Grant 5 R01 NS11000-03)
    National Institutes of Health (Grant 1 P01 NS13126-01)
    National Institutes of Health (Grant 1 R01 NS11153-01)
    National Institutes of Health (Grant 2 R01 NS10916-02)
    Harvard-M.I.T. Rehabilitation Engineering Center
    U.S. Department of Health, Education, and Welfare (Grant 23-P-55854)
    National Institutes of Health (Grant 1 R01 NS11680-01)
    National Institutes of Health (Grant 5 R01 NS11080-03)
    M.I.T. Health Sciences Fund (Grant 76-07)
    National Institutes of Health (Grant 5 T32 GM07301-02)
    National Institutes of Health (Grant 5 T01 GM01555-10)

    Sensory Communication

    Contains table of contents for Section 2, an introduction, reports on eleven research projects, and a list of publications.
    National Institutes of Health Grant 5 R01 DC00117
    National Institutes of Health Grant 5 R01 DC00270
    National Institutes of Health Contract 2 P01 DC00361
    National Institutes of Health Grant 5 R01 DC00100
    National Institutes of Health Contract 7 R29 DC00428
    National Institutes of Health Grant 2 R01 DC00126
    U.S. Air Force - Office of Scientific Research Grant AFOSR 90-0200
    U.S. Navy - Office of Naval Research Grant N00014-90-J-1935
    National Institutes of Health Grant 5 R29 DC00625
    U.S. Navy - Office of Naval Research Grant N00014-91-J-1454
    U.S. Navy - Office of Naval Research Grant N00014-92-J-181

    Applications of nonuniform sampling in wideband multichannel communication systems

    This research is an investigation into utilising randomised sampling in communication systems to ease the sampling-rate requirements of digitally processing narrowband signals residing within a wide range of monitored frequencies. By harnessing the aliasing-suppression capabilities of such sampling schemes, it is shown that certain processing tasks, namely spectrum sensing, can be performed at significantly lower sampling rates than those demanded by uniform-sampling-based digital signal processing. The latter imposes sampling frequencies of at least twice the monitored bandwidth, regardless of the spectral activity within it; aliasing can otherwise result in irresolvable processing problems, as the spectral support of the present signal is a priori unknown. Lower sampling rates use the resources (such as power) of the processing module(s) more efficiently and avoid the possible need for premium specialised high-cost DSP hardware, especially if the handled bandwidth is considerably wide. A number of randomised sampling schemes are examined, and appropriate spectral analysis tools are used to furnish their salient features. The adopted periodogram-type estimators are tailored to each of the schemes, and their statistical characteristics are assessed for stationary and cyclostationary signals. Their ability to alleviate the bandwidth limitation of uniform sampling is demonstrated, and the smeared-aliasing defect that accompanies randomised sampling is also quantified. In employing the aforementioned analysis tools, a novel wideband spectrum sensing approach is introduced. It permits the simultaneous sensing of a number of non-overlapping spectral subbands constituting a wide range of monitored frequencies. The operational sampling rates of the sensing procedure are not limited or dictated by the monitored bandwidth, in contrast to uniform-sampling-based techniques.
    Prescriptive guidelines are developed to ensure that the proposed technique satisfies certain detection probabilities predefined by the user. These recommendations address the trade-off between the required sampling rate and the length of the signal observation window (sensing time) in a given scenario. Various aspects of the introduced multiband spectrum sensing approach are investigated and its applicability is highlighted.
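
A toy sketch of the sensing idea, assuming totally random sampling (instants drawn uniformly over the observation window): a periodogram-type estimate is evaluated directly at arbitrary frequencies, so a tone can be located at an average sampling rate far below the Nyquist rate of the monitored band. The estimator here is the plain direct form, not the thesis's scheme-tailored versions, and the signal and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sampling_periodogram(x_of_t, duration, n_samples, freqs):
    """Periodogram-type spectral estimate from totally random sampling:
    instants drawn uniformly over the window, with no uniform grid.
    Toy stand-in for the scheme-tailored estimators discussed above."""
    t = np.sort(rng.uniform(0.0, duration, n_samples))
    x = x_of_t(t)
    # Direct evaluation at arbitrary frequencies; randomisation replaces
    # coherent alias replicas with a smeared (noise-like) floor.
    E = np.exp(-2j * np.pi * np.outer(freqs, t))
    return np.abs(E @ x) ** 2 / n_samples

# A tone at 410 Hz inside a 500 Hz monitored band, sensed from only
# 200 random samples over 1 s (average rate well below the 1000 Hz
# uniform-sampling requirement).
tone = lambda t: np.cos(2 * np.pi * 410.0 * t)
freqs = np.arange(0.0, 500.0, 5.0)
P = random_sampling_periodogram(tone, 1.0, 200, freqs)
peak = freqs[np.argmax(P)]
```

The tone stands out as a sharp peak over the smeared-aliasing floor, which is the effect the sensing guidelines above trade off against sampling rate and observation window length.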

    Speech Enhancement By Exploiting The Baseband Phase Structure Of Voiced Speech For Effective Non-Stationary Noise Estimation

    Speech enhancement is one of the most important and challenging issues in the speech communication and signal processing field. It aims to minimize the effect of additive noise on the quality and intelligibility of the speech signal. Speech quality is a measure of the noise remaining after processing and of how pleasant the resulting speech sounds, while intelligibility refers to the accuracy with which the speech is understood. Speech enhancement algorithms are designed to remove the additive noise with minimum speech distortion.

    The task of speech enhancement is challenging due to the lack of knowledge about the corrupting noise. Hence, the most challenging task is to estimate the noise which degrades the speech. Several approaches have been adopted for noise estimation, which mainly fall under two categories: single-channel algorithms and multiple-channel algorithms. Accordingly, speech enhancement algorithms are also broadly classified as single- and multiple-channel enhancement algorithms.

    In this thesis, speech enhancement is studied in the acoustic and modulation domains, along with both amplitude and phase enhancement. We propose a noise estimation technique based on spectral sparsity, detected by using the harmonic property of the voiced segments of speech. We estimate the frame-to-frame phase difference of the clean speech from the available corrupted speech. This estimated frame-to-frame phase difference is used as a means of detecting noise-only frequency bins even in voiced frames. This yields better noise estimates for highly non-stationary noises such as babble, restaurant, and subway noise. This noise estimate, along with the phase difference as an additional prior, is used to extend the standard spectral subtraction algorithm. We also verify the effectiveness of this noise estimation technique when used with the Minimum Mean Squared Error Short-Time Spectral Amplitude (MMSE-STSA) speech enhancement algorithm. The combination of MMSE-STSA and spectral subtraction results in further improvement of speech quality.
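
The extension described above builds on standard magnitude spectral subtraction. A minimal single-frame sketch of that textbook baseline (with the usual over-subtraction factor and spectral floor; parameter values and the synthetic frame are illustrative, and the thesis's phase-based noise estimator is not shown) is:

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag, alpha=2.0, beta=0.01):
    """Plain magnitude spectral subtraction on one STFT frame.
    noisy: complex spectrum of a noisy frame; noise_mag: estimated
    noise magnitude spectrum for the same frame."""
    mag = np.abs(noisy)
    phase = np.angle(noisy)
    # Over-subtract by alpha, then floor at beta * noise_mag to limit
    # musical-noise artifacts from negative magnitudes.
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * noise_mag)
    # The standard method keeps the noisy phase; the thesis argues the
    # frame-to-frame phase difference also carries usable information.
    return clean_mag * np.exp(1j * phase)

# One synthetic frame: a speech-dominated bin plus flat additive noise,
# with a perfect flat noise estimate.
spec = np.zeros(8, dtype=complex)
spec[2] = 10.0 + 0j          # speech-dominated bin
spec += 1.0                  # additive "noise" of magnitude 1 per bin
noise_est = np.ones(8)
enhanced = spectral_subtraction(spec, noise_est)
```

Noise-only bins are driven down to the spectral floor while the speech-dominated bin survives, which is why the quality of the noise estimate dominates the result.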

    Sensory Communication

    Contains table of contents for Section 2 and reports on five research projects.
    National Institutes of Health Contract 2 R01 DC00117
    National Institutes of Health Contract 1 R01 DC02032
    National Institutes of Health Contract 2 P01 DC00361
    National Institutes of Health Contract N01 DC22402
    National Institutes of Health Grant R01-DC001001
    National Institutes of Health Grant R01-DC00270
    National Institutes of Health Grant 5 R01 DC00126
    National Institutes of Health Grant R29-DC00625
    U.S. Navy - Office of Naval Research Grant N00014-88-K-0604
    U.S. Navy - Office of Naval Research Grant N00014-91-J-1454
    U.S. Navy - Office of Naval Research Grant N00014-92-J-1814
    U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-94-C-0087
    U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-93-C-0055
    U.S. Navy - Office of Naval Research Grant N00014-93-1-1198
    National Aeronautics and Space Administration/Ames Research Center Grant NCC 2-77