Audio Compression using a Modified Vector Quantization algorithm for Mastering Applications
Audio data compression is used to reduce the transmission bandwidth and storage requirements of audio data. It is the second stage in the audio mastering process, with audio equalization being the first. Compression algorithms such as BSAC, MP3 and AAC are used as reference standards in this paper. The main challenge in audio compression is compressing the signal at low bit rates: algorithms that perform well at low bit rates are generally not dominant at higher bit rates, and vice versa. This paper proposes a modified vector quantization algorithm that produces a scalable bit stream with a number of fine-grained layers of audio fidelity. The modified algorithm is used to build a scalable perceptual audio coder in which the quantization and encoding stages carry out the psychoacoustic and arithmetic data reduction; since practically all the data removed during the prediction phases at the encoder side is restored to the audio signal at the decoder, it is the quantization stage that is modified to produce the scalable bit stream. The modified algorithm performs well at both low and high bit rates. Subjective evaluations were carried out by audio professionals using the MUSHRA test, and the mean normalized scores at various bit rates were recorded and compared with those of the reference algorithms.
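The paper's modified algorithm is not reproduced here, but the baseline vector quantization it builds on is easy to sketch: frames of samples are mapped to the nearest entry of a trained codebook, and only the codebook indices are transmitted. Below is a minimal single-layer illustration using SciPy's k-means utilities; the frame length, codebook size, and function names are assumptions for illustration, not the paper's.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

# Hypothetical parameters: 8-sample frames, 256-entry codebook (8-bit indices).
FRAME = 8
CODEBOOK_SIZE = 256

def train_codebook(signal, frame=FRAME, k=CODEBOOK_SIZE):
    """Cluster signal frames with k-means; the centroids form the codebook."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    codebook, _ = kmeans(frames.astype(float), k)
    return codebook

def encode(signal, codebook, frame=FRAME):
    """Replace each frame by the index of its nearest codebook vector."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    indices, _ = vq(frames.astype(float), codebook)
    return indices  # one small integer per frame instead of `frame` samples

def decode(indices, codebook):
    """Look the indices back up in the codebook and concatenate the frames."""
    return codebook[indices].reshape(-1)
```

A scalable bit stream such as the paper describes would stack several quantization layers of this kind, each additional decoded layer refining fidelity; this sketch shows only one layer.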
Machine Learning Methods with Noisy, Incomplete or Small Datasets
In many machine learning applications, the available datasets are incomplete, noisy, or affected by artifacts. In supervised scenarios, the label information may be of low quality: training sets can be unbalanced, labels can be noisy, and related problems arise. Moreover, in practice it is very common that the available data samples are not enough to derive useful supervised or unsupervised classifiers. These issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to help disseminate new ideas for solving this challenging problem and to provide clear examples of application in real scenarios.
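The book spans many methods, but one simple and widely used mitigation for unbalanced training sets, one of the label-quality problems mentioned above, is class re-weighting. Here is a minimal sketch using scikit-learn; the toy dataset and all parameters are invented for illustration and are not tied to any chapter of the book.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Invented toy dataset with a 95%/5% class imbalance.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" scales each class inversely to its frequency,
# so the rare class is not drowned out during training.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(balanced_accuracy_score(y_te, clf.predict(X_te)))
```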
Machine Learning and Data Mining Applications in Power Systems
This Special Issue was intended as a forum for advancing research that applies machine-learning and data-mining methods to the development of modern electric power systems, grids and devices, smart grids and protection devices, as well as to the development of tools for more accurate and efficient power system analysis. Conventional signal processing is no longer adequate to extract all the relevant information from distorted signals through filtering, estimation, and detection to facilitate decision-making and control actions. Machine learning, data mining, optimization techniques, efficient numerical algorithms, distributed signal processing, and statistical signal detection and estimation may help to solve contemporary challenges in modern power systems. The increased use of digital information and control technology can improve the grid’s reliability, security, and efficiency; the dynamic optimization of grid operations; demand response; the incorporation of demand-side resources and integration of energy-efficient resources; distribution automation; and the integration of smart appliances and consumer devices. Signal processing offers the tools needed to convert measurement data into information, and to transform information into actionable intelligence. This Special Issue includes fifteen articles, authored by international research teams from several countries.
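As one concrete instance of turning measurement data into information, the sketch below estimates the harmonic content of a distorted 50 Hz waveform with an FFT; the synthetic signal and all parameters are chosen only for illustration and are not drawn from the Special Issue.

```python
import numpy as np

FS, F0 = 10_000, 50      # sampling rate (Hz), fundamental frequency (Hz)
t = np.arange(FS) / FS   # one second of samples

# Synthetic distorted waveform: fundamental plus 3rd and 5th harmonics.
v = (np.sin(2*np.pi*F0*t)
     + 0.20*np.sin(2*np.pi*3*F0*t)
     + 0.10*np.sin(2*np.pi*5*F0*t))

spectrum = np.abs(np.fft.rfft(v)) * 2 / len(v)  # single-sided amplitude
freqs = np.fft.rfftfreq(len(v), d=1/FS)

for h in (1, 3, 5):      # with a 1 s window, harmonics fall on integer-Hz bins
    bin_ = h * F0
    print(f"harmonic {h} ({freqs[bin_]:.0f} Hz): amplitude {spectrum[bin_]:.3f}")
```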
Recommended from our members
Time-domain Compressive Beamforming for Medical Ultrasound Imaging
Over the past 10 years, Compressive Sensing has gained a lot of visibility in the medical imaging research community. Its most compelling feature is the ability to perform perfect reconstruction of under-sampled signals using l1-minimization. That counter-intuitive feature has a cost, of course: the missing information is compensated for by a priori knowledge of the signal, under certain mathematical conditions. The technology is currently used in some commercial MRI scanners to increase the acquisition rate, decreasing discomfort for the patient while increasing patient turnover. For echography, applications could range from fast 3D echocardiography to simplified, cheaper echography systems.
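To make the l1-minimization step concrete, here is a minimal sketch of recovering a sparse signal from under-sampled linear measurements with ISTA (iterative soft-thresholding), one standard solver for this problem; the sizes, sparsity level, and regularization weight are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                      # signal length, measurements, sparsity

# Ground-truth sparse signal and a random Gaussian sensing matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                            # under-sampled measurements (m << n)

# ISTA: gradient step on ||Ax - y||^2, then soft-thresholding (the l1 prox).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```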
Real-time ultrasound imaging scanners have been available for nearly 50 years. Over those 50 years, much has changed in their architecture, electronics, and technologies, yet one component remains: the beamformer. From analog to software beamformers, the technology has evolved and brought much diversity to the world of beam formation. Currently, most commercial scanners probe tissue with a sequence of focalized ultrasonic pulses. The time between two consecutive focalized pulses is not compressible, which limits the frame rate: one must wait for a pulse to propagate from the probe to the deepest imaged point and back before firing a new pulse.
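This round-trip constraint can be quantified with a back-of-the-envelope calculation; the depth and line count below are typical textbook values, not figures from the thesis.

```python
C = 1540.0      # speed of sound in soft tissue, m/s
DEPTH = 0.15    # imaging depth, m
N_LINES = 128   # focused transmits per frame in line-by-line imaging

round_trip = 2 * DEPTH / C                 # ~195 microseconds per pulse
focused_fps = 1 / (N_LINES * round_trip)   # ~40 frames/s, line by line
single_wave_fps = 1 / round_trip           # ~5100 frames/s, one transmit/frame

print(f"focused: {focused_fps:.0f} fps, single-wave: {single_wave_fps:.0f} fps")
```

Under these assumptions the single-transmit bound is roughly 128 times the line-by-line frame rate, consistent with the 100-fold gain reported below for single-wave imaging.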
In this work, we outline the development of a novel software beamforming technique based on Compressive Sensing. Time-domain Compressive Beamforming (t-CBF) uses computational models and regularization to reconstruct de-cluttered ultrasound images. One of its main features is the use of a single transmit wave to insonify the tissue. Single-wave imaging brings high frame rates to the modality, allowing a physician, for example, to see precisely the movements of the heart walls or valves during a heart cycle. t-CBF takes into account the geometry of the probe as well as its physical parameters to improve resolution and attenuate artifacts commonly seen in single-wave imaging, such as side lobes.
In this thesis, we define a mathematical framework for the beamforming of ultrasonic data that is compatible with Compressive Sensing. We then investigate its capabilities in terms of resolution and super-resolution on simple simulations. Finally, we adapt t-CBF to real-life ultrasonic data; in particular, we reconstruct 2D cardiac images at a frame rate 100-fold higher than typical values.