201 research outputs found

    Remote Heart Rate Estimation Using Consumer-Grade Cameras

    There are many ways in which the remote, non-contact detection of the human heart rate might be useful, especially if it can be done with inexpensive equipment such as consumer-grade cameras. Many studies and experiments in recent years have sought to reliably determine the heart rate from video footage of a person, typically through temporal filtering and examination of the frequency spectrum. This study attempts to answer questions about the noise sources that prevent these methods from estimating the heart rate. Other statistical processes are examined for their use in reducing the noise in the system. Methods for locating the skin of a moving individual are explored and used to acquire the heart rate. Alternative methods borrowed from other fields are also introduced to determine whether they have merit for remote heart rate detection.
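
    As a minimal illustration of the temporal-filtering and frequency-spectrum approach mentioned above (not the specific pipeline of this study), the sketch below assumes a per-frame mean green-channel value extracted from a tracked skin region and picks the dominant spectral peak inside a plausible heart-rate band; the function and parameter names (estimate_heart_rate, green_means, frame_rate) are illustrative.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def estimate_heart_rate(green_means, frame_rate):
            """Estimate heart rate (BPM) from a per-frame mean skin-pixel trace."""
            signal = np.asarray(green_means, dtype=float)
            signal -= signal.mean()                        # remove the DC component

            # Temporal band-pass filter over a plausible heart-rate band
            # (about 0.7-4.0 Hz, i.e. 42-240 beats per minute).
            low, high = 0.7, 4.0
            b, a = butter(3, [low, high], btype="band", fs=frame_rate)
            filtered = filtfilt(b, a, signal)

            # Frequency-spectrum examination: dominant peak inside the band.
            spectrum = np.abs(np.fft.rfft(filtered))
            freqs = np.fft.rfftfreq(len(filtered), d=1.0 / frame_rate)
            band = (freqs >= low) & (freqs <= high)
            return 60.0 * freqs[band][np.argmax(spectrum[band])]

        # Synthetic check: a 72-BPM pulse sampled at 30 fps with added noise.
        fps, bpm = 30.0, 72.0
        t = np.arange(0, 30, 1.0 / fps)
        trace = 0.5 * np.sin(2 * np.pi * bpm / 60.0 * t) + np.random.normal(0, 0.3, t.size)
        print(round(estimate_heart_rate(trace, fps), 1))   # close to 72.0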

    New Strategies for Single-channel Speech Separation


    Hybrid solutions to instantaneous MIMO blind separation and decoding: narrowband, QAM and square cases

    Future wireless communication systems are expected to support high data rates and high-quality transmission for growing multimedia applications. The drive to increase channel throughput has led in recent years to multiple-input multiple-output (MIMO) and blind equalization techniques, and blind MIMO equalization has therefore attracted great interest. Both system performance and computational complexity play important roles in real-time communications, and reducing the computational load while providing accurate performance is a main challenge in present systems. In this thesis, a hybrid method that offers affordable complexity with good performance for blind equalization in large-constellation MIMO systems is proposed first. Computational cost is saved in both the signal separation part and the signal detection part. First, based on the characteristics of quadrature amplitude modulation (QAM) signals, an efficient and simple nonlinear function for Independent Component Analysis is introduced. Second, using the idea of sphere decoding, we restrict the soft channel information to a sphere, overcoming the so-called curse of dimensionality of the Expectation Maximization (EM) algorithm while enhancing the final results. Mathematically, we demonstrate that in digital communication settings the EM algorithm exhibits Newton-like convergence. Despite the widespread use of forward error coding (FEC), most MIMO blind channel estimation techniques ignore its presence and instead make the simplifying assumption that the transmitted symbols are uncoded. However, FEC induces code structure in the transmitted sequence that can be exploited to improve blind MIMO channel estimates. In the final part of this work, we exploit iterative channel estimation and decoding for blind MIMO equalization. Experiments show the improvements achievable by exploiting the coding structure, and that the method can approach the performance of a BCJR equalizer with perfect channel information over a reasonable SNR range. All results are confirmed experimentally for the example of blind equalization in block-fading MIMO systems.
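
    The thesis's specific ICA nonlinearity and its sphere-constrained EM detector are not reproduced here. As a generic illustration of the blind separation stage for instantaneous MIMO mixtures of QAM signals, the sketch below runs a kurtosis-based complex FastICA with deflation (a standard choice, not necessarily the one proposed in the thesis) on a random 2x2 mixture of 16-QAM streams; all names and parameters are illustrative.

        import numpy as np

        def complex_fastica(X, n_sources, iters=200, seed=0):
            """Kurtosis-based complex FastICA with deflation on whitened data."""
            rng = np.random.default_rng(seed)
            # Whitening: decorrelate and normalise the observations.
            R = X @ X.conj().T / X.shape[1]
            d, E = np.linalg.eigh(R)
            V = E @ np.diag(d ** -0.5) @ E.conj().T      # whitening matrix
            Z = V @ X
            ws = []
            for _ in range(n_sources):
                w = rng.standard_normal(n_sources) + 1j * rng.standard_normal(n_sources)
                w /= np.linalg.norm(w)
                for _ in range(iters):
                    y = w.conj() @ Z                     # current source estimate
                    u = np.abs(y) ** 2                   # contrast g(u) = u, g'(u) = 1
                    w_new = (Z * (y.conj() * u)).mean(axis=1) - 2.0 * u.mean() * w
                    for wj in ws:                        # deflation: keep rows orthogonal
                        w_new -= np.vdot(wj, w_new) * wj
                    w_new /= np.linalg.norm(w_new)
                    done = abs(np.vdot(w_new, w)) > 1 - 1e-10
                    w = w_new
                    if done:
                        break
                ws.append(w)
            return np.conj(np.array(ws)), V              # rows give y_k = w_k^H z

        # Demo: blindly separate two 16-QAM streams mixed by an unknown 2x2 channel.
        rng = np.random.default_rng(1)
        levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10)
        S = rng.choice(levels, (2, 5000)) + 1j * rng.choice(levels, (2, 5000))
        A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
        W, V = complex_fastica(A @ S, n_sources=2)
        print(np.round(np.abs(W @ V @ A), 2))            # ~ permutation matrix (phase/order ambiguity)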

    Optimization of Coding of AR Sources for Transmission Across Channels with Loss


    Advances in Sonar Technology

    The demand to explore the largest and also one of the richest parts of our planet, the advances in signal processing promoted by an exponential growth in computing power, and a thorough study of sound propagation in the underwater realm have led to remarkable advances in sonar technology in recent years. The work at hand is a sum of the knowledge of several authors who have contributed to various aspects of sonar technology. This book intends to give a broad overview of the advances in sonar technology in recent years that resulted from the authors' research effort in both sonar systems and their applications. It is intended for scientists and engineers from a variety of backgrounds, and even those who have never had contact with sonar technology before will find an easy introduction to the topics and principles presented here.

    Implementing Linear Predictive Coding based on a statistical model for LTE fronthaul

    This thesis studies the application of Linear Predictive Coding (LPC) in the downlink of the Long Term Evolution (LTE) fronthaul, which comprises the BBU and the RRH. The scheme can act as an additional module in the existing system. Today, the transmission of a single complex sample from the BBU to the RRH consumes 30 bits. The research of this thesis analyzes the application of linear prediction theory to the LTE downlink transmission, where it works as a compression scheme that reduces these 30 bits to a lower value while fulfilling the Error Vector Magnitude (EVM) requirement stated in the LTE standards of the 3rd Generation Partnership Project (3GPP). As 4G LTE and the upcoming access technologies will deal with a large number of data samples in transmission, it is an advantage if those data samples can be compressed without destroying the information content. LPC has proved to be a very effective method for speech compression in audio applications. In this thesis, the same compression logic is applied to the digital data samples of LTE and the results are analyzed. It is found that, if LPC is applied properly to LTE, it is possible to compress data samples efficiently and transmit them from the BBU to the RRH with fewer bits. At the RRH these compressed data samples can be processed and the main information can be reconstructed, with additional quantization error and noise; this is expected because LPC is a lossy compression method. A statistical model is established to generate a table of linear prediction filter coefficients which is present at both the BBU and the RRH when compression and decompression of the data samples are performed. The entropy is also calculated in order to analyze the compression achievable on the actual error vector after applying a compression code such as Huffman coding; the specific coding technique is left as future research.
Due to the growth in the number of users and faster communication methods, mobile operators have to use the allocated resources more efficiently to meet user demands. Like any other system, mobile communication networks go through a series of updates over time; in mobile communication these updates are known as "Releases". The transition from 3rd Generation (3G) to 4th Generation (4G) took place with Release 8 in 2008. Many new techniques were introduced in 4G in order to use the available resources more efficiently and improve quality of service (QoS). LTE, more commonly known as the 4G communication system, deals with a much larger amount of data traffic than any previous technology, so it is of utmost importance that operators use the allocated bandwidth efficiently to serve the ever-increasing number of users. LTE can handle this large amount of data thanks to the OFDM modulation technique, which ensures better quality of communication: in OFDM, the spectrum consists of multiple frequency sub-bands stacked together as a whole, which do not interfere with one another. The LTE structure is different from any previous system. In telecommunication systems, a single unit handles all the data traffic to and from the transmitter and the receiver; this module is called the base station.
In LTE, the base station is divided into two parts, namely the Baseband Unit (BBU) and the Radio Unit (RU): almost all the data processing takes place at the BBU, while the RU acts as both transmitter and receiver when data is exchanged with a mobile device. In recent years a new type of architecture has been proposed, called C-RAN (Cloud Radio Access Network). In C-RAN, the BBU and the RU are placed at two different locations: multiple BBUs can be grouped together at a single site called the BBU pool, whereas the RUs are placed at separate sites far from the BBU pool and connected to it via optical fiber. In this structure the RU is known as the RRH (Remote Radio Head), since it is separated from the BBU. One main advantage of such a structure is that only the RRH needs to be placed near the users, while the BBU can be kept on the network operator's premises; this also helps reduce the operator's operating and maintenance costs in many ways. Since LTE imposes a massive amount of data traffic on the fronthaul (almost ten times the actual information data once error-correcting coding, control signals, etc. are added), it is very important to compress this traffic before it is sent from the BBU to the RU. If good compression is achieved, it becomes possible to accommodate more users with the available resources. Although analog signals are used to transmit a message from the transmitter to the receiver over a medium, the signal must be converted to digital form to be passed from one processing block to the next over the connecting link. The main purpose of this thesis is to apply a compression technique that minimizes the number of bits needed to represent each data sample transmitted from the BBU to the RRH. The compression technique used in this thesis employs a module that uses a certain number of previous data sample values to predict the next data sample. The predicted sample is compared with the actual sample and their difference is formed; because this difference has a low magnitude, it can be represented with fewer bits in the digital domain and then transmitted over the link to the RRH. At the RRH, the same prediction module uses these received low-magnitude samples to reconstruct the original data samples that were intended to be sent in the first place. For the prediction module to function properly, it is very important to set up the filter values, known as the prediction coefficients, which are responsible for predicting data samples that closely resemble the original ones. These coefficients are calculated by a statistical method so that they can be used for any random data sample vector in LTE. This thesis studies the performance of applying this prediction technique in LTE. To assess the efficiency of the compression technique, certain parameters are calculated in various simulations and compared with the values specified by the main research bodies of LTE. It is found that the compression technique works well in LTE, as the simulation results support the validity of the scheme.
It also proves that it is possible to introduce this compression technique as an extension to upcoming upgrades of LTE, and that this would facilitate accommodating more users with the available infrastructure resources.
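
    The thesis's statistical coefficient table, quantiser design, and entropy coding are not reproduced here. The sketch below only illustrates the predict / difference / quantise / reconstruct loop and the EVM figure of merit, under illustrative assumptions: a fourth-order predictor fitted on a training symbol (standing in for the shared BBU/RRH coefficient table), a 7-bit uniform quantiser per I/Q component of the residual, and an oversampled-OFDM signal standing in for the LTE baseband.

        import numpy as np

        def lpc_coefficients(x, order):
            """Forward prediction coefficients from the complex Yule-Walker equations."""
            n = len(x)
            r = np.array([np.vdot(x[:n - k], x[k:]) for k in range(order + 1)]) / n
            R = np.array([[r[i - j] if i >= j else np.conj(r[j - i])
                           for j in range(order)] for i in range(order)])
            return np.linalg.solve(R, r[1:order + 1])    # x_hat[n] = sum_k a[k] x[n-k]

        def dpcm_encode_decode(x, a, bits, step):
            """Closed-loop DPCM: quantise the prediction residual with `bits` bits
            per I/Q component and rebuild the samples the way the RRH would."""
            order, half = len(a), 2 ** (bits - 1)
            recon = np.zeros_like(x)
            for n in range(len(x)):
                hist = recon[max(0, n - order):n][::-1]  # reconstructed past samples
                pred = np.dot(a[:len(hist)], hist)       # predicted next sample
                err = x[n] - pred                        # residual to be transmitted
                qi = np.clip(np.round(err.real / step), -half, half - 1)
                qq = np.clip(np.round(err.imag / step), -half, half - 1)
                recon[n] = pred + (qi + 1j * qq) * step  # what the RRH reconstructs
            return recon, 2 * bits * len(x)              # payload size in bits

        rng = np.random.default_rng(0)

        def ofdm_symbol(n_fft=1024, n_used=300):
            """Band-limited stand-in for one oversampled LTE OFDM symbol."""
            qpsk = (rng.choice([-1.0, 1.0], n_used) + 1j * rng.choice([-1.0, 1.0], n_used)) / np.sqrt(2)
            spec = np.zeros(n_fft, complex)
            spec[1:n_used + 1] = qpsk                    # contiguous occupied band
            return np.fft.ifft(spec) * np.sqrt(n_fft)

        a = lpc_coefficients(ofdm_symbol(), order=4)     # shared "coefficient table"
        x = ofdm_symbol()                                # symbol to be transported
        recon, payload = dpcm_encode_decode(x, a, bits=7, step=np.std(x) / 16)
        evm = 100 * np.linalg.norm(x - recon) / np.linalg.norm(x)
        print(f"{payload / len(x):.0f} bits per sample (vs. 30 uncompressed), EVM = {evm:.2f} %")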

    Physical layer authentication for wireless communications

    Supervisor: 姜 暁

    Model-based speech enhancement for hearing aids
