
    Signal processing techniques for analysis of heart sounds and electrocardiograms

    Audible heart sounds represent less than 5% of the vibrational energy associated with the cardiac cycle. In this study, experiments have been conducted to explore the feasibility of examining cardiac vibration by means of a single display encompassing the entire bandwidth of the oscillations and relating components at different frequencies. Zero-phase-shift digital filtering is shown to be required in producing such displays, which extend from a recognizable phonocardiogram at one frequency extreme to a recognizable apexcardiogram at the other. Certain features in mid-systole and early diastole, observed by means of this technique, appear not to have been previously described. Frequency modulation of an audio-frequency sinusoid by a complex signal is shown to be effective in generating sounds analogous to that signal and containing the same information, but occupying a bandwidth suitable to optimum human auditory perception. The generation of such sounds using an exponential-response voltage-controlled oscillator is found to be most appropriate for converting amplitude as well as frequency changes in the original signal into pitch changes in the new sounds, utilizing the human auditory system's more acute discrimination of pitch changes than amplitude changes. Pseudologarithmic compression of the input signal is shown to facilitate emphasis in the converted sounds upon changes at high or low amplitudes in the original signal. A noise-control circuit has been implemented for amplitude modulation of the converted signal to de-emphasize sounds arising from portions of the input signal below a chosen amplitude threshold. This method is shown to facilitate the transmission of analogs of audible and normally inaudible sounds over standard telephone channels, and to permit the slowing down of the converted sounds with no loss of information due to decreased frequencies. The approximation of an arbitrary waveform by a piecewise-linear (PL) function is shown to permit economical digital storage in parametric form. Fourier series and Fourier transforms may be readily calculated directly from the PL breakpoint parameters without further approximation, and the number of breakpoints needed to define the PL approximation is significantly lower than the number of uniformly-spaced samples required to satisfy the Nyquist sampling criterion; aliasing problems are shown not to arise. Thus data compression is feasible by this means without recourse to a parametric model defined for the signal (e.g., speech) being processed. Methods of automatic adaptive PL sampling and waveform reconstruction are discussed, and microcomputer algorithms implemented for this purpose are described in detail. Examples are given of the application of PL techniques to electrocardiography, phonocardiography, and the digitization of speech.
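
    A minimal sketch of the zero-phase-shift digital filtering idea referred to above, using forward-backward filtering (this is not the authors' original implementation, which predates these libraries; the sampling rate, cutoff frequency, and test signal are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # assumed sampling rate of the cardiac vibration signal, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of synthetic data
# Low-frequency "apexcardiogram-like" motion plus a higher-frequency "heart-sound-like" component.
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 40.0 * t)

# Low-pass Butterworth filter; the order and 20 Hz cutoff are illustrative choices only.
b, a = butter(N=4, Wn=20.0 / (fs / 2), btype="low")

# filtfilt runs the filter forward and then backward, so the net phase shift is zero and
# components at different frequencies keep their relative timing in a combined display.
x_low = filtfilt(b, a, x)
x_high = x - x_low               # crude complementary band for display alongside the low band
```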

    Different Techniques and Algorithms for Biomedical Signal Processing

    This paper is intended to give a broad overview of the complex area of biomedical signals and their processing. It contains sufficient theoretical material to provide some understanding of the techniques involved for the researcher in the field. This paper consists of two parts: feature extraction and pattern recognition. The first part provides a basic understanding of how the time-domain signals of a patient are converted to the frequency domain for analysis. The second part provides a basis for understanding the theoretical and practical approaches to the development of neural network models and their implementation in modeling biological systems.
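
    A minimal sketch of the time-domain-to-frequency-domain conversion described above (the paper does not prescribe a particular routine; the sampling rate and synthetic signal are assumptions for illustration):

```python
import numpy as np

fs = 250.0                          # assumed sampling rate in Hz, typical for an ECG recording
t = np.arange(0, 4.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.0 * t) + 0.2 * np.random.randn(t.size)  # synthetic 1 Hz rhythm plus noise

# Real-input FFT gives the spectrum from 0 Hz up to the Nyquist frequency fs/2.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
magnitude = np.abs(spectrum) / signal.size

peak_hz = freqs[np.argmax(magnitude[1:]) + 1]   # skip the DC bin when locating the dominant frequency
print(f"Dominant frequency: {peak_hz:.2f} Hz")
```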

    An information theoretic characterisation of auditory encoding.

    The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content.
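
    A hedged sketch of the entropy metric referred to above, estimated from the relative frequencies of pitches in a sequence (the pitch sequences below are illustrative assumptions, not the study's stimuli):

```python
import numpy as np
from collections import Counter

def shannon_entropy(pitches):
    """Estimate Shannon entropy (bits per symbol) from a sequence of discrete pitches."""
    counts = Counter(pitches)
    n = len(pitches)
    probs = np.array([c / n for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

# Illustrative sequences (MIDI note numbers).
low_entropy_seq = [60, 60, 60, 60, 62, 60, 60, 60]     # highly redundant
high_entropy_seq = [60, 67, 62, 71, 59, 65, 72, 64]    # all pitches distinct

print(shannon_entropy(low_entropy_seq))    # about 0.54 bits per symbol
print(shannon_entropy(high_entropy_seq))   # 3.0 bits per symbol
```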

    Auscultating heart and breath sounds through patients’ gowns: who does this and does it matter?

    Background: Doctors are taught to auscultate with the stethoscope applied to the skin, but in practice may be seen applying the stethoscope to the gown. Objectives: To determine how often doctors auscultate heart and breath sounds through patients’ gowns, and to assess the impact of this approach on the quality of the sounds heard. Methods: A sample of doctors in the west of Scotland were sent an email in 2014 inviting them to answer an anonymous questionnaire about how they auscultated heart and breath sounds. Normal heart sounds from two subjects were recorded through skin, through skin and gown, and through skin, gown and dressing gown. These were played to doctors, unaware of the origin of each recording, who completed a questionnaire about the method and quality of the sounds they heard. Results: 206 of 445 (46%) doctors completed the questionnaire. 124 (60%) stated that they listened to patients’ heart sounds, and 156 (76%) to patients’ breath sounds, through patients’ gowns. Trainees were more likely to do this compared with consultants (OR 3.39, 95% CI 1.74 to 6.65). Doctors of all grades considered this practice affected the quality of the sounds heard. 32 doctors listened to the recorded heart sounds. 23 of the 64 (36%) skin and 23 of the 64 (36%) gown recordings were identified. The majority of doctors (74%) could not differentiate between skin and gown recordings, but could tell them apart from the double-layer recordings (p=0.02). Trainees were more likely to hear artefactual added sounds (p=0.04). Conclusions: Many doctors listen to patients’ heart and breath sounds through hospital gowns, at least occasionally. In a short test, most doctors could not distinguish between sounds heard through a gown or skin. Further work is needed to determine the impact of this approach to auscultation on the identification of murmurs and added sounds.

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of a technique, from the graphical to the audio domain, for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
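
    A rough sketch of how task progress might be mapped to a spatial (stereo) position for a spatialised audio progress bar (the article's actual implementation is not reproduced here; the constant-power pan law and tone parameters are assumptions):

```python
import numpy as np

def pan_gains(progress):
    """Constant-power left/right gains for progress in [0, 1]: 0 = hard left, 1 = hard right."""
    angle = progress * (np.pi / 2)
    return np.cos(angle), np.sin(angle)

def progress_tone(progress, fs=44100, duration=0.2, freq=440.0):
    """Short stereo tone whose apparent position tracks task progress."""
    t = np.arange(int(fs * duration)) / fs
    tone = 0.2 * np.sin(2 * np.pi * freq * t)
    left, right = pan_gains(progress)
    return np.column_stack((left * tone, right * tone))   # shape (n_samples, 2)

# Example: a background download at 75% completion sounds mostly from the right channel.
frame = progress_tone(0.75)
```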