
    Scenedesmus biomass productivity and nutrient removal from wet market wastewater, a bio-kinetic study

    The current study aims to investigate the production of microalgae biomass as a function of different wet market wastewater ratios (10, 25, 50, 75 and 100%) and Scenedesmus sp. initial concentrations (10⁴, 10⁵, 10⁶ and 10⁷ cells/mL) through the phycoremediation process. Biomass production, total nitrogen (TN), total phosphorus (TP) and total organic compounds (TOC) were determined daily. The pseudo-first-order kinetic model was used to measure the potential of Scenedesmus sp. to remove nutrients, while the Verhulst logistic kinetic model was used to study the growth kinetics. The study revealed that the maximum productivity of Scenedesmus sp. biomass (98.54 mg/L/day) was recorded at an initial concentration of 10⁶ cells/mL in 50% wet market wastewater, where the highest removals of TP, TN and TOC were also obtained (85, 90 and 65%, respectively). Total protein and lipid contents in the biomass produced in the wet market wastewater were higher than in the biomass produced in BBM (41.7 vs. 37.4% and 23.2 vs. 19.2%, respectively). GC–MS results confirmed the detection of 44 compounds in the biomass from the wet market wastewater compared to four compounds in the BBM biomass. These compounds have several applications in pharmaceutical and personal care products, the chemical industry and antimicrobial activity. These findings indicate the applicability of wet market wastewater as a production medium for microalgae biomass.
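The two models named above have simple closed forms, so a small illustration may help. The sketch below (Python, using hypothetical daily measurements rather than the study's data) fits the Verhulst logistic growth curve to biomass and a pseudo-first-order decay to a nutrient concentration; the parameter names and example values are assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code) of the two kinetic models named in the
# abstract, fitted to hypothetical daily measurements with SciPy.
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, x0, xmax, mu):
    """Verhulst logistic growth: biomass X(t) in mg/L."""
    return (x0 * xmax * np.exp(mu * t)) / (xmax - x0 + x0 * np.exp(mu * t))

def pseudo_first_order(t, c0, k):
    """Pseudo-first-order nutrient decay: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

# Hypothetical daily observations (day 0..10); real data would come from the cultures.
t = np.arange(11)
biomass = np.array([55, 80, 120, 180, 260, 360, 470, 580, 670, 730, 770], float)  # mg/L
tn = np.array([40, 33, 27, 22, 18, 15, 12, 10, 8.5, 7.2, 6.1], float)             # mg/L

growth_params, _ = curve_fit(verhulst, t, biomass, p0=[50, 800, 0.5])
removal_params, _ = curve_fit(pseudo_first_order, t, tn, p0=[40, 0.2])

x0, xmax, mu = growth_params
c0, k = removal_params
print(f"logistic: X0={x0:.1f} mg/L, Xmax={xmax:.1f} mg/L, mu={mu:.3f} 1/day")
print(f"pseudo-first-order: C0={c0:.1f} mg/L, k={k:.3f} 1/day")
# For the logistic model, peak productivity is mu * Xmax / 4 (mg/L/day).
print(f"max productivity ~ {mu * xmax / 4:.1f} mg/L/day")
```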

    Cancellation of Towing Ship Interference in Passive SONAR in a Shallow Ocean Environment

    Towed array sonars are preferred for detecting stealthy underwater targets that emit faint acoustic signals in the ocean, especially in shallow waters. However, the towing ship, being close to the array, behaves as a loud target, introducing additional interfering signals into the array and severely affecting the detection and classification of potential targets. Cancelling this underlying interference is a challenging task and is investigated in this paper for a shallow-ocean operational scenario, where the problem is more critical due to the multipath phenomenon. A method based on space-time adaptive processing that exploits the eigenvector analysis of the spatio-temporal covariance matrix is proposed for suppressing tow-ship interference and thus improving target detection. The developed algorithm learns the interference patterns in the presence of target signals to mitigate the interference across azimuth and to remove the spectral leakage of own-ship noise. The algorithm is statistically analyzed through a set of relevant metrics and is tested on simulated data equivalent to the data received by a towed linear array of acoustic sensors in a shallow ocean. The results indicate a reduction of 20-25 dB in the tow-ship interference power, while the detection of long-range, low-SNR targets remains largely unaffected with minimal power loss. In addition, it is demonstrated that the spectral leakage of the tow ship onto multiple beams across the azimuth, due to multipath, is also alleviated, leading to superior classification capabilities. The robustness of the proposed algorithm is validated by an open-ocean experiment in the shallow coastal region of the Arabian Sea off Kochi, India, which produced results in close agreement with the simulations. A comparison of the simulation and experimental results with the existing PCI and ECA methods is also carried out, suggesting that the proposed method is quite effective in suppressing the tow-ship interference and is immensely beneficial for the detection and classification of long-range targets.
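As a rough illustration of the eigenvector-based idea, the sketch below builds a toy snapshot model of a line array, forms the sample spatial covariance matrix, and projects the data away from its dominant eigenvector(s) to suppress a loud near-broadside interferer. It is a simplified spatial-only stand-in for the paper's space-time adaptive processing; the array geometry, interference rank and signal levels are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's algorithm) of suppressing a loud
# interferer by projecting array snapshots away from the dominant eigenvectors
# of the sample covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snapshots = 32, 2000
d_over_lambda = 0.5  # half-wavelength element spacing

def steering(theta_deg):
    n = np.arange(n_sensors)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg)))

# Loud tow-ship-like interference plus a weak target and sensor noise (toy data).
tow = 10.0 * steering(5.0)[:, None] * (rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots))
tgt = 0.3 * steering(-40.0)[:, None] * (rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots))
noise = (rng.standard_normal((n_sensors, n_snapshots)) + 1j * rng.standard_normal((n_sensors, n_snapshots))) / np.sqrt(2)
x = tow + tgt + noise

# Sample covariance and its eigendecomposition.
R = x @ x.conj().T / n_snapshots
eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
n_interf = 1                                   # assumed interference subspace rank
U_i = eigvecs[:, -n_interf:]                   # dominant eigenvector(s)
P = np.eye(n_sensors) - U_i @ U_i.conj().T     # orthogonal projection

x_clean = P @ x                                # interference-suppressed snapshots

def beam_power_db(data, theta_deg):
    w = steering(theta_deg) / n_sensors
    y = w.conj() @ data
    return 10 * np.log10(np.mean(np.abs(y) ** 2))

print("tow-ship beam power before/after:", beam_power_db(x, 5.0), beam_power_db(x_clean, 5.0))
print("target beam power before/after:", beam_power_db(x, -40.0), beam_power_db(x_clean, -40.0))
```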

    Universal Approximation of Linear Time-Invariant (LTI) Systems through RNNs: Power of Randomness in Reservoir Computing

    Recurrent neural networks (RNNs) are known to be universal approximators of dynamic systems under fairly mild and general assumptions, making them good tools for processing temporal information. However, RNNs usually suffer from vanishing and exploding gradients during standard RNN training. Reservoir computing (RC), a special RNN in which the recurrent weights are randomized and left untrained, has been introduced to overcome these issues and has demonstrated superior empirical performance in fields as diverse as natural language processing and wireless communications, especially in scenarios where training samples are extremely limited. However, the theoretical grounding to support this observed performance has not developed at the same pace. In this work, we show that RNNs can provide universal approximation of linear time-invariant (LTI) systems. Specifically, we show that RC can universally approximate a general LTI system. We present a clear signal processing interpretation of RC and utilize this understanding in the problem of simulating a generic LTI system through RC. Under this setup, we analytically characterize the optimal probability distribution function for generating the recurrent weights of the underlying RNN of the RC. We provide extensive numerical evaluations to validate the optimality of the derived distribution of the recurrent weights of the RC for the LTI system simulation problem. Our work results in clear signal processing-based model interpretability of RC and provides a theoretical explanation for the power of randomness in setting, instead of training, RC's recurrent weights. It further provides a complete analytical characterization of the optimum untrained recurrent weights, marking an important step towards explainable machine learning (XML), which is extremely important for applications where training samples are limited.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
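To make the reservoir computing setting concrete, the following sketch builds a linear reservoir with random, untrained recurrent weights and trains only the linear readout to imitate a short FIR (LTI) system. It is a generic echo-state-style construction under my own assumptions (reservoir size, spectral radius, ridge regularization), not the optimal weight distribution derived in the paper.

```python
# Minimal sketch: random untrained recurrent weights, trained linear readout,
# used to simulate a target LTI (FIR) system.
import numpy as np

rng = np.random.default_rng(1)

# Target LTI system: a short FIR filter h.
h = np.array([0.6, -0.3, 0.15, 0.05])

# Random reservoir: recurrent matrix W (scaled for stability) and input weights w_in.
N = 50
W = rng.standard_normal((N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.8 < 1
w_in = rng.standard_normal(N)

def run_reservoir(u):
    """Drive the linear reservoir with input u and collect its states."""
    s = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        s = W @ s + w_in * ut                     # linear state update, weights untrained
        states[t] = s
    return states

# Training data: white-noise input and the target system's response.
u_train = rng.standard_normal(5000)
y_train = np.convolve(u_train, h)[: len(u_train)]
S = run_reservoir(u_train)

# Only the readout is trained (ridge-regularized least squares).
lam = 1e-6
w_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y_train)

# Test on fresh input.
u_test = rng.standard_normal(2000)
y_true = np.convolve(u_test, h)[: len(u_test)]
y_hat = run_reservoir(u_test) @ w_out
print("test NMSE:", np.mean((y_hat - y_true) ** 2) / np.mean(y_true ** 2))
```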

    Enhancing brain-computer interfacing through advanced independent component analysis techniques

    A brain-computer interface (BCI) is a direct communication system between a brain and an external device in which the messages or commands sent by an individual do not pass through the brain's normal output pathways but are detected through brain signals. Some severe motor impairments, such as amyotrophic lateral sclerosis, head trauma, spinal injuries and other diseases, may cause patients to lose their muscle control and become unable to communicate with the outside environment. Currently, no effective cure or treatment has been found for these diseases, so using a BCI system to rebuild the communication pathway becomes a possible alternative solution. Among the different types of BCIs, the electroencephalogram (EEG) based BCI is becoming a popular system due to EEG's fine temporal resolution, ease of use, portability and low set-up cost. However, EEG's susceptibility to noise is a major obstacle to developing a robust BCI. Signal processing techniques such as coherent averaging, filtering, the FFT and AR modelling are used to reduce the noise and extract components of interest. However, these methods process the data in the observed mixture domain, which mixes the components of interest with noise. This limitation means that the extracted EEG signals may still contain noise residue or, conversely, that the removed noise may still contain part of the EEG signal. Independent Component Analysis (ICA), a Blind Source Separation (BSS) technique, is able to extract relevant information from noisy signals and separate the underlying sources into independent components (ICs). The most common assumption of ICA is that the source signals are unknown and statistically independent; under this assumption, ICA is able to recover the source signals. Since the ICA concept appeared in the fields of neural networks and signal processing in the 1980s, many ICA applications in telecommunications, biomedical data analysis, feature extraction, speech separation, time-series analysis and data mining have been reported in the literature. In this thesis, several ICA techniques are proposed to address two major issues for BCI applications: reducing the recording time needed, in order to speed up the signal processing, and reducing the number of recording channels while improving the final classification performance, or at least keeping it the same as the current performance. These improvements would make BCI a more practical prospect for everyday use. The thesis first defines BCI and the diverse BCI models based on different control patterns. After the general idea of ICA is introduced, along with some modifications to it, several new ICA approaches are proposed. The practical work in this thesis starts with preliminary analyses of the Southampton BCI pilot datasets using basic and then advanced signal processing techniques. The proposed ICA techniques are then presented on a multi-channel event-related potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel spontaneous-activity based BCI. The final ICA approach examines the possibility of using ICA based on just one or a few channel recordings for an ERP-based BCI. The novel ICA approaches for BCI systems presented in this thesis show that ICA is able to accurately and repeatedly extract the relevant information buried within noisy signals, and that the signal quality is enhanced so that even a simple classifier can achieve good classification accuracy.
In the ERP-based BCI application, after multichannel ICA, data using only eight averaged epochs achieved 83.9% classification accuracy, whereas data processed by coherent averaging reached only 32.3% accuracy. In the spontaneous-activity based BCI, the multi-channel ICA algorithm effectively extracts discriminatory information from two types of single-trial EEG data; the classification accuracy is improved by about 25%, on average, compared to the performance on the unpreprocessed data. The single-channel ICA technique on the ERP-based BCI produces much better results than a lowpass filter, and an appropriate number of averages improves the signal-to-noise ratio of the P300 activity, which helps achieve better classification. These advantages will lead to a reliable and practical BCI for use outside the clinical laboratory.
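A minimal illustration of ICA-based EEG cleaning is sketched below using scikit-learn's FastICA on a toy multichannel recording contaminated by a blink-like artefact. The montage size, artefact model and the kurtosis rule for selecting the artefact component are all assumptions for illustration, not the thesis's actual pipeline.

```python
# Minimal sketch (assumed data shapes, not the thesis pipeline) of ICA-based
# artefact removal on multichannel EEG using scikit-learn's FastICA.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_channels, n_samples = 8, 4096                   # hypothetical montage

# Toy EEG: background activity mixed with a large blink-like artefact component.
eeg = 5e-6 * rng.standard_normal((n_channels, n_samples))
blink = np.zeros(n_samples)
blink[::512] = 1.0
blink = np.convolve(blink, np.hanning(64), mode="same") * 100e-6
eeg += np.outer(np.linspace(1.0, 0.1, n_channels), blink)   # blink projects onto all channels

# FastICA expects samples x features, so transpose to (n_samples, n_channels).
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg.T)                # independent components, one per column

# Identify the artefact component, e.g. the one with the largest kurtosis
# (blinks are very peaky); zero it out and back-project.
bad = int(np.argmax(kurtosis(sources, axis=0)))
sources[:, bad] = 0.0
eeg_clean = ica.inverse_transform(sources).T      # channels x samples again

print("removed component:", bad, "cleaned shape:", eeg_clean.shape)
```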

    Modeling Of Power Line Communication Channel For Automatic Meter Reading System With LDPC Codes

    In this era of modernization, one of the promising emerging technologies is the Power Line Communication (PLC) system. Previous research has mostly studied PLC channel modeling for indoor applications; however, studying it for outdoor applications, such as Automatic Meter Reading (AMR) systems, is also vital. Moreover, standardization bodies have considered the use of LDPC codes only for indoor systems. Thus, in this paper, not only do we model the PLC channel for AMR applications, but we also apply an LDPC coding scheme to the system. To accomplish these objectives, we first model the PLC-AMR channel, including the multipath phenomenon. Additionally, the PLC noise usually occurring in the channel is modeled. The modulation technique applied is BPSK, and the performance of the system with varying load impedances is compared. The coded system consists of irregular LDPC codes with two different constructions of the parity-check matrix, namely one by Radford Neal and a reduced-size DVB-S2 construction. The performances of the respective systems are then compared, and using the Radford Neal LDPC codes, the performance is analyzed for varied code rates.
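As a hedged illustration of a multipath PLC channel in the frequency domain, the sketch below uses a commonly cited multipath transfer-function form (Zimmermann/Dostert style) with made-up path gains and lengths, and passes BPSK symbols through it with additive Gaussian noise. The parameter values, the band and the noise model are assumptions and need not match the channel or AMR scenario modeled in the paper; the LDPC coding stage is omitted.

```python
# Minimal sketch: a commonly used multipath PLC transfer-function form plus
# uncoded BPSK transmission over it (illustrative parameters only).
import numpy as np

# Zimmermann/Dostert-style multipath model:
# H(f) = sum_i g_i * exp(-(a0 + a1*f^k)*d_i) * exp(-j*2*pi*f*d_i/v)
def plc_transfer_function(f, paths, a0=1e-3, a1=4e-10, k=1.0, v=1.5e8):
    H = np.zeros_like(f, dtype=complex)
    for g_i, d_i in paths:                       # (gain, path length in metres)
        attenuation = np.exp(-(a0 + a1 * f ** k) * d_i)
        delay = np.exp(-1j * 2 * np.pi * f * d_i / v)
        H += g_i * attenuation * delay
    return H

f = np.linspace(1e5, 20e6, 1024)                 # hypothetical 0.1-20 MHz band
paths = [(0.64, 200.0), (0.38, 222.4), (-0.15, 244.8), (0.05, 267.5)]
H = plc_transfer_function(f, paths)

# BPSK symbols pushed through the per-frequency channel gain with AWGN
# (impulsive PLC noise would be added on top in a fuller model).
rng = np.random.default_rng(3)
bits = rng.integers(0, 2, f.size)
tx = 2 * bits - 1
snr_db = 10.0
noise_std = np.sqrt(np.mean(np.abs(H) ** 2) / (2 * 10 ** (snr_db / 10)))
rx = H * tx + noise_std * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
bits_hat = (np.real(rx * np.conj(H)) > 0).astype(int)   # coherent detection
print("uncoded BER over the modelled channel:", np.mean(bits_hat != bits))
```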

    Robust text independent closed set speaker identification systems and their evaluation

    PhD Thesis. This thesis focuses upon text-independent closed-set speaker identification. The contributions relate to evaluation studies in the presence of various types of noise and handset effects. Extensive evaluations are performed on four databases. The first contribution is in the context of the use of the Gaussian Mixture Model-Universal Background Model (GMM-UBM) with original speech recordings from only the TIMIT database. Four main simulations for Speaker Identification Accuracy (SIA) are presented, including different fusion strategies: late fusion (score based), early fusion (feature based), early-late fusion (a combination of feature and score based), late fusion using concatenated static and dynamic features (features with temporal derivatives such as first-order delta and second-order delta-delta, namely acceleration, features), and finally fusion of statistically independent normalized scores. The second contribution is again based on the GMM-UBM approach. Comprehensive evaluations of the effect of Additive White Gaussian Noise (AWGN) and Non-Stationary Noise (NSN) (with and without a G.712 type handset) upon identification performance are undertaken. In particular, three NSN types with varying Signal-to-Noise Ratios (SNRs) were tested, corresponding to street traffic, a bus interior and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely mean, maximum and linear weighted sum fusion. The databases employed were TIMIT, SITW and NIST 2008; 120 speakers were selected from each database to yield 3,600 speech utterances. The third contribution is based on the use of the I-vector: four combinations of I-vectors with 100 and 200 dimensions were employed. Various fusion techniques using maximum, mean, weighted sum and cumulative fusion with the same I-vector dimension were then used to improve the SIA. Similarly, both interleaved and concatenated I-vector fusion were exploited to produce 200 and 400 I-vector dimensions. The system was evaluated on four different databases using 120 speakers from each database. The TIMIT, SITW and NIST 2008 databases were evaluated for various types of NSN, namely street-traffic NSN, bus-interior NSN and crowd-talking NSN, and the G.712 type handset at 16 kHz was also applied. As recommendations from the study, in terms of the GMM-UBM approach, mean fusion is found to yield the overall best performance in terms of the SIA with noisy speech, whereas linear weighted sum fusion is overall best for original database recordings. However, in the I-vector approach the best SIA was obtained from the weighted sum and the concatenated fusion.
    Ministry of Higher Education and Scientific Research (MoHESR), and the Iraqi Cultural Attaché, Al-Mustansiriya University, Al-Mustansiriya University College of Engineering in Iraq, for supporting my PhD scholarship.
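The late score-fusion rules compared in the thesis (mean, maximum and linear weighted sum) are easy to state in a few lines; the sketch below applies them to hypothetical normalized score matrices from two subsystems and reports the resulting Speaker Identification Accuracy. The data and weights are invented for illustration only.

```python
# Minimal sketch (illustrative only) of late score fusion: mean, maximum and
# linear weighted-sum fusion of normalized scores from two subsystems.
import numpy as np

def fuse_scores(score_sets, rule="mean", weights=None):
    """score_sets: list of (n_trials, n_speakers) score matrices from different
    subsystems; returns fused (n_trials, n_speakers) scores."""
    S = np.stack(score_sets)                    # (n_systems, n_trials, n_speakers)
    if rule == "mean":
        return S.mean(axis=0)
    if rule == "max":
        return S.max(axis=0)
    if rule == "weighted":
        w = np.asarray(weights, float)
        w = w / w.sum()
        return np.tensordot(w, S, axes=1)
    raise ValueError(rule)

# Hypothetical scores from a static-feature and a delta-feature subsystem.
rng = np.random.default_rng(4)
n_trials, n_speakers = 200, 120
truth = rng.integers(0, n_speakers, n_trials)

def toy_scores(strength):
    s = rng.standard_normal((n_trials, n_speakers))
    s[np.arange(n_trials), truth] += strength   # true speaker scores a bit higher
    return s

static_scores, delta_scores = toy_scores(2.0), toy_scores(1.5)

for rule, kw in [("mean", {}), ("max", {}), ("weighted", {"weights": [0.6, 0.4]})]:
    fused = fuse_scores([static_scores, delta_scores], rule, **kw)
    sia = np.mean(fused.argmax(axis=1) == truth) * 100
    print(f"{rule:8s} fusion SIA: {sia:.1f}%")
```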

    Design and development of mobile channel simulators using digital signal processing techniques

    A mobile channel simulator can be constructed either in the time domain, using a tapped delay line filter, or in the frequency domain, using the time-variant transfer function of the channel. Transfer function modelling has many advantages over impulse response modelling. Although the transfer function channel model has been envisaged by several researchers as an alternative to the commonly employed tapped delay line model, so far it has not been implemented. In this work, channel simulators for single-carrier and multicarrier OFDM systems based on the time-variant transfer function of the channel have been designed and implemented using DSP techniques in SIMULINK. For the single-carrier system, the simulator was based on Bello's transfer function channel model. Bello speculated that about 10Bτ_max frequency domain branches might result in a very good approximation of the channel (where B is the signal bandwidth and τ_max is the maximum excess delay of the multipath channel). The simulation results showed that 10B/B_c branches (where B_c is the coherence bandwidth) gave close agreement with the tapped delay line model; this number is π times higher than the previously speculated 10Bτ_max. For the multicarrier OFDM system, the simulator was based on the physical (PHY) layer standard for IEEE 802.16-2004 Wireless Metropolitan Area Network (WirelessMAN) and employed measured channel transfer functions in the 2.5 GHz and 3.5 GHz bands in the simulations. The channel was implemented in the frequency domain by carrying out point-wise multiplication of the spectrum of the OFDM time-domain signal with the channel transfer function. The simulator was employed to study the BER performance of rate-1/2 and rate-3/4 coded systems with QPSK and 16-QAM constellations under a variety of measured channel transfer functions. The performance over the frequency-selective channel depended mainly upon the frequency-domain fading and the channel coding rate.
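The frequency-domain channel implementation described above amounts to multiplying each OFDM symbol's spectrum by the channel transfer function. The sketch below demonstrates the equivalence with the time-domain tapped-delay-line view for a cyclic-prefixed symbol; the FFT size, cyclic prefix length and channel taps are assumptions, not the thesis's measured channels.

```python
# Minimal sketch (my assumptions about symbol layout) of applying a channel
# transfer function in the frequency domain to one OFDM symbol, compared with
# the time-domain tapped-delay-line (convolution) view.
import numpy as np

rng = np.random.default_rng(5)
n_fft, cp_len = 256, 32                            # hypothetical OFDM numerology

# Random QPSK data on all subcarriers.
data = (2 * rng.integers(0, 2, n_fft) - 1 + 1j * (2 * rng.integers(0, 2, n_fft) - 1)) / np.sqrt(2)

# A channel transfer function sampled on the FFT grid (here from synthetic taps,
# standing in for a measured transfer function).
taps = np.array([0.8, 0.0, 0.4 + 0.2j, 0.0, 0.1])
H = np.fft.fft(taps, n_fft)

# Frequency-domain channel: point-wise multiplication of the symbol spectrum by H.
rx_freq = H * data

# Equivalent time-domain view: IFFT, cyclic prefix, linear convolution, CP removal.
tx_time = np.fft.ifft(data)
tx_cp = np.concatenate([tx_time[-cp_len:], tx_time])
rx_time = np.convolve(tx_cp, taps)[cp_len:cp_len + n_fft]
rx_freq_from_time = np.fft.fft(rx_time)

print("max difference between the two channel implementations:",
      np.max(np.abs(rx_freq - rx_freq_from_time)))
```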

    Digital pre-distortion of radio frequency digital to analog converters in a DOCSIS application

    Community Antenna Television Network (CATV) cable systems are a very common way for subscribers to access the internet and download data. The transmitters that send the signals to subscribers must conform to a very stringent specification known as DOCSIS. Using traditional high-frequency design techniques to meet this specification often leads to a lengthy and difficult production process in which several calibrations have to be made. In order to send a digitally modulated signal that conforms to the DOCSIS specification, some form of conversion between the discrete digital domain and the analog domain must occur; a digital-to-analog converter (DAC) is used to accomplish this. In recent years, the clocking or sampling frequency available for DACs has been rapidly increasing, and the clocking frequency is directly proportional to the bandwidth that can be transmitted. DACs with exceptionally high clocking frequencies can be referred to as radio frequency (RF) DACs. The clocking frequency of these devices has now progressed to the point where direct digital synthesis can be used for a DOCSIS transmitter without any analog frequency conversion stages. Since RF DACs are real devices, the output is not a perfect representation of the discrete signal sent to them: unwanted distortion is added that can be measured at the analog output. Removing this distortion, or at least significantly reducing it, can be the difference between meeting and not meeting the DOCSIS specification. This thesis explores the use of these devices in this application. The basic structure of DACs, as well as the distortion signals themselves, is investigated in order to develop a method by which the distortion can be removed, ideally in a way that is suitable for integration into a transmitter architecture and meets the specification. The frequency response of the major distortion products across the DOCSIS band is measured. A way to match these frequency responses is then needed so that a cancellation signal can be created that removes the distortion. A method is developed that uses an iterative algorithm to find filter coefficients whose frequency response matches that of the distortion signals as closely as possible. Since these cancellation signals are added to the discrete signal to be transmitted before the interface with the RF DAC, the process is known as pre-distortion. The generated coefficients are used in digital filters as part of a pre-distortion design. Tests are performed with discrete signals that are close approximations of a DOCSIS signal that would be sent to a subscriber. Measured results show a decrease in the power of the targeted distortion signals; the reduction in distortion level is enough that the DOCSIS specification is met for all test signals.
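One way to picture the coefficient-matching step is a frequency-domain least-squares FIR fit: given a measured distortion frequency response, find tap weights whose response approximates it, then use that filter to synthesize a cancellation signal that is subtracted before the DAC interface. The sketch below does exactly that with a synthetic target response; it is a plain least-squares fit under my own assumptions, not the iterative algorithm developed in the thesis.

```python
# Minimal sketch: least-squares FIR design matching a (synthetic) distortion
# frequency response, followed by subtraction of the modelled distortion from
# the transmit samples before the DAC interface.
import numpy as np

# Hypothetical measured distortion response on a grid of normalized frequencies
# (0 .. Nyquist); in practice this would come from lab measurements.
n_points, n_taps = 256, 33
w = np.linspace(0, np.pi, n_points)
target = 0.02 * np.exp(-1j * w * 10) * (1 + 0.5 * np.cos(3 * w))

# E @ h gives the FIR frequency response; solve min_h ||E h - target||^2 with
# real coefficients by stacking real and imaginary parts.
E = np.exp(-1j * np.outer(w, np.arange(n_taps)))
A = np.vstack([E.real, E.imag])
b = np.concatenate([target.real, target.imag])
h, *_ = np.linalg.lstsq(A, b, rcond=None)          # real FIR coefficients

fit = E @ h
print("worst-case magnitude error of the fit:", np.max(np.abs(fit - target)))

# Pre-distortion step: filter a (hypothetical) estimate of the distortion-producing
# signal with h and subtract it from the transmit samples before the RF DAC.
rng = np.random.default_rng(6)
tx = rng.standard_normal(4096)
distortion_estimate = np.convolve(tx, h, mode="same")
tx_predistorted = tx - distortion_estimate
print("pre-distorted signal length:", tx_predistorted.size)
```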

    Advanced OFDM systems for terrestrial multimedia links

    Recently, there has been considerable discussion about new wireless technologies and standards able to achieve high data rates. Thanks to recent advances in digital signal processing and Very Large Scale Integration (VLSI) technologies, the initial obstacles encountered in the implementation of Orthogonal Frequency Division Multiplexing (OFDM) modulation schemes, such as massive complex multiplications and high-speed memory accesses, no longer exist. OFDM offers strong multipath protection due to the insertion of the guard interval; in particular, the OFDM-based DVB-T standard has proved to offer excellent performance for the broadcasting of multimedia streams with bitrates over ten megabits per second in difficult terrestrial propagation channels, for fixed and portable applications. Nevertheless, for mobile scenarios, improving the receiver design is not enough to achieve error-free transmission, especially in the presence of deep shadow and multipath fading, and some modifications of the standard can be envisaged. To address long- and medium-range applications like live mobile wireless television production, some further modifications are required to adapt the modulated bandwidth and fully exploit channels up to 24 MHz wide. For these reasons, an extended OFDM system is proposed that offers variable bandwidth, improved protection against shadow and multipath fading, and enhanced robustness thanks to the insertion of deep time interleaving coupled with a powerful concatenated error correction scheme based on turbo codes. The system parameters and the receiver architecture have been described in C++ and verified with extensive simulations. In particular, the study of the receiver algorithms aimed to achieve the optimal tradeoff between performance and complexity. Moreover, the modulation/demodulation chain has been implemented in VHDL and a prototype system has been manufactured. Ongoing field trials are demonstrating the ability of the proposed system to successfully overcome the impairments due to mobile terrestrial channels, such as multipath and shadow fading. For short-range applications, Time-Division Multiplexing (TDM) is an efficient way to share the radio resource between multiple terminals. The main modulation parameters for a TDM system are discussed, and it is shown that the 802.16a TDM OFDM physical layer fulfils the application requirements; some practical examples are given. A pre-distortion method is proposed that exploits the reciprocity of the radio channel to perform a partial channel inversion, achieving improved performance with no modifications to existing receivers.
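As an illustration of the partial channel inversion idea, the sketch below pre-compensates each OFDM subcarrier at the transmitter with a regularized inverse of the channel estimated on the reverse link (assuming reciprocity), so an unmodified receiver can simply slice the received subcarriers. The regularization rule, power normalization and channel model are my assumptions, not the scheme proposed in the thesis.

```python
# Minimal sketch (my interpretation, not the proposed scheme's exact rule) of
# partial channel inversion at the transmitter for a TDM OFDM link.
import numpy as np

rng = np.random.default_rng(7)
n_sc = 256                                         # hypothetical number of subcarriers

# Channel estimated on the reverse link (assumed equal to the forward link).
taps = (rng.standard_normal(6) + 1j * rng.standard_normal(6)) / np.sqrt(12)
H = np.fft.fft(taps, n_sc)

# QPSK data and regularized ("partial") per-subcarrier inversion.
data = (2 * rng.integers(0, 2, n_sc) - 1 + 1j * (2 * rng.integers(0, 2, n_sc) - 1)) / np.sqrt(2)
eps = 0.1 * np.mean(np.abs(H) ** 2)                # regularization limits power boost in deep fades
precoder = np.conj(H) / (np.abs(H) ** 2 + eps)
precoder /= np.sqrt(np.mean(np.abs(precoder * data) ** 2))   # keep average transmit power at 1

tx = precoder * data
rx = H * tx + 0.05 * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))

# The unmodified receiver just slices the received subcarriers.
bits_hat = np.stack([np.real(rx) > 0, np.imag(rx) > 0])
bits = np.stack([np.real(data) > 0, np.imag(data) > 0])
print("BER with partial channel inversion:", np.mean(bits_hat != bits))
```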