362 research outputs found

    Signals and Images in Sea Technologies

    Get PDF
    Life below water is the 14th Sustainable Development Goal (SDG) envisaged by the United Nations and is aimed at conserving and sustainably using the oceans, seas, and marine resources for sustainable development. It is not difficult to argue that signal and image technologies may play an essential role in achieving the foreseen targets linked to SDG 14. Besides increasing the general knowledge of ocean health by means of data analysis, methodologies based on signal and image processing can be helpful in environmental monitoring, in protecting and restoring ecosystems, in finding new sensor technologies for green routing and eco-friendly ships, in providing tools for implementing best practices for sustainable fishing, as well as in defining frameworks and intelligent systems for enforcing sea law and making the sea a safer and more secure place. Imaging is also a key element for the exploration of the underwater world for a variety of purposes, ranging from the predictive maintenance of sub-sea pipelines and other infrastructure projects, to the discovery, documentation, and protection of sunken cultural heritage. The scope of this Special Issue encompasses investigations into ICT techniques and approaches, in particular the study and application of signal- and image-based methods, and the exploration of the advantages of their application in the areas mentioned above.

    Signal processing techniques for mobile multimedia systems

    Get PDF
    Recent trends in wireless communication systems show a significant demand for the delivery of multimedia services and applications over mobile networks - mobile multimedia - such as video telephony, multimedia messaging, mobile gaming, and interactive and streaming video. However, despite the ongoing development of key communication technologies that support these applications, the communication resources and bandwidth available to wireless/mobile radio systems are often severely limited. It is well known that these bottlenecks are inherently due to the processing capabilities of mobile transmission systems, and the time-varying nature of wireless channel conditions and propagation environments. Therefore, new ways of processing and transmitting multimedia data over mobile radio channels have become essential; this is the principal focus of this thesis. In this work, the performance and suitability of various signal processing techniques and transmission strategies for the communication of multimedia data over wireless/mobile radio links are investigated. The proposed transmission systems for multimedia communication employ different data encoding schemes which include source coding in the wavelet domain, transmit diversity coding (space-time coding), and adaptive antenna beamforming (eigenbeamforming). By integrating these techniques into a robust communication system, the quality (SNR, etc.) of multimedia signals received on mobile devices is maximised while mitigating the fast fading and multi-path effects of mobile channels. To support the transmission of high data-rate multimedia applications, the well-known multi-carrier transmission technology Orthogonal Frequency Division Multiplexing (OFDM) has been implemented. As shown in this study, this results in significant performance gains when combined with other signal processing techniques such as space-time block coding (STBC).
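The Alamouti two-antenna code is the standard instance of the space-time block coding mentioned above; a minimal noiseless sketch of its encoding and linear combining (an illustration of the classic scheme, not the thesis's OFDM simulation framework) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send over two time slots from two antennas.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat-fading channel gains from antennas 1 and 2 (assumed constant
# over the two slots, as the Alamouti scheme requires).
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)

# Slot 1: antennas transmit (s1, s2); slot 2: (-s2*, s1*).
r1 = h1 * s1 + h2 * s2                       # received in slot 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)    # received in slot 2

# Linear combining decouples the symbols, each scaled by the array gain.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
```

Because the combining decouples the two symbols, each is recovered with the full diversity gain |h1|² + |h2|² and no interference from the other.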
To optimise signal transmission, a novel unequal adaptive modulation scheme for the communication of multimedia data over MIMO-OFDM systems has been proposed. In this system, discrete wavelet transform/subband coding is used to compress data into their respective low-frequency and high-frequency components. Unlike traditional methods, however, the low-frequency data are processed and modulated separately as they are more sensitive to the distortion effects of mobile radio channels. To make use of a desirable subchannel state, such that the quality (SNR) of the multimedia data recovered at the receiver is optimised, we employ a lookup matrix-adaptive bit and power allocation (LM-ABPA) algorithm. Apart from improving the spectral efficiency of OFDM, the modified LM-ABPA scheme sorts and allocates subcarriers with the highest SNR to low-frequency data and the remaining subcarriers to the least important data. To maintain a target system SNR, the LM-ABPA loading scheme assigns appropriate signal constellation sizes and transmit power levels (modulation type) across all subcarriers and is adapted to the varying channel conditions such that the average system error rate (SER/BER) is minimised. When configured for a constant data-rate load, simulation results show significant performance gains over non-adaptive systems. In addition to the above studies, the simulation framework developed in this work is applied to investigate the performance of other signal processing techniques for multimedia communication, such as blind channel equalization, and to examine the effectiveness of a secure communication system based on a logistic chaotic generator (LCG) for chaos shift keying (CSK).
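The core of the unequal allocation idea, namely sorting subcarriers by instantaneous SNR and handing the best ones to the sensitive low-frequency subband, can be sketched as follows (a simplified illustration; the actual LM-ABPA algorithm additionally adapts constellation size and transmit power per subcarrier):

```python
import numpy as np

rng = np.random.default_rng(1)

n_sub = 16                             # OFDM subcarriers
snr_db = rng.uniform(0, 30, n_sub)     # per-subcarrier SNR estimates

# Wavelet/subband split: assume half the payload is the sensitive
# low-frequency data, the rest is high-frequency detail.
n_low = n_sub // 2

# Rank subcarriers from best to worst SNR.
order = np.argsort(snr_db)[::-1]

low_band_carriers = np.sort(order[:n_low])    # best SNR -> low-freq data
high_band_carriers = np.sort(order[n_low:])   # remainder -> detail data
```

Every subcarrier carrying low-frequency data then has at least the SNR of any subcarrier carrying detail data, which is the protection ordering the scheme relies on.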

    Locating and extracting acoustic and neural signals

    Get PDF
    This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activities inside the brain auditory structure that are correlated to the tinnitus pathology. Specifically, an acoustic-modeling-based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation is concerned with applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated to the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
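Locating a source from a small number of microphones typically starts from inter-microphone time-delay estimates; a generic cross-correlation sketch (a textbook building block, not the authors' PSS/BSLS method) is:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 8000                        # sample rate in Hz (assumed)
sig = rng.normal(size=1024)      # broadband source signal

true_delay = 37                  # delay in samples between the two mics
mic1 = sig
mic2 = np.concatenate([np.zeros(true_delay), sig])[: len(sig)]

# Full cross-correlation; its peak index gives the inter-mic lag.
xcorr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(xcorr)) - (len(sig) - 1)

# The lag converts to a path-length difference via the speed of sound,
# which constrains the source to a hyperbola in 2D (one per mic pair).
tdoa_seconds = lag / fs
```

Intersecting the hyperbolae from several microphone pairs yields the source position, which is why even a minimal array can localize a source in 3D.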

    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    Full text link
    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
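The Canonical Polyadic model approximates a third-order tensor by a sum of R rank-one terms, T ≈ Σ_r a_r ∘ b_r ∘ c_r; a bare-bones alternating least squares fit, using the standard mode-unfolding conventions, might look like this (an illustrative sketch, not production code):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product; row index runs as (x_row, y_row)."""
    R = X.shape[1]
    return np.einsum("ir,jr->ijr", X, Y).reshape(-1, R)

def cpd_als(T, R, n_iter=500, seed=0):
    """Fit a rank-R CP decomposition of a 3-way tensor by ALS."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(I, R))
    B = rng.normal(size=(J, R))
    C = rng.normal(size=(K, R))
    # Mode-n unfoldings, columns ordered to match khatri_rao above.
    T1 = T.transpose(0, 2, 1).reshape(I, K * J)
    T2 = T.transpose(1, 2, 0).reshape(J, K * I)
    T3 = T.transpose(2, 1, 0).reshape(K, J * I)
    for _ in range(n_iter):
        # Each factor solves a linear least squares problem in turn.
        A = T1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = T2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = T3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Recover the factors of an exactly rank-2 tensor.
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.normal(size=(d, 2)) for d in (4, 5, 6))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cpd_als(T, R=2)
T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

The mild-conditions uniqueness claim in the abstract is what makes this useful: unlike a matrix factorization, the recovered factors match the generating ones up to column permutation and scaling.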

    Adaptive antenna array beamforming using a concatenation of recursive least square and least mean square algorithms

    Get PDF
    In recent years, adaptive or smart antennas have become a key component for various wireless applications, such as radar, sonar and cellular mobile communications including worldwide interoperability for microwave access (WiMAX). They lead to an increase in the detection range of radar and sonar systems, and the capacity of mobile radio communication systems. These antennas are used as spatial filters for receiving the desired signals coming from a specific direction or directions, while minimizing the reception of unwanted signals emanating from other directions. Because of its simplicity and robustness, the LMS algorithm has become one of the most popular adaptive signal processing techniques adopted in many applications, including antenna array beamforming. Over the last three decades, several improvements have been proposed to speed up the convergence of the LMS algorithm. These include the normalized LMS (NLMS), the variable-length LMS algorithm, transform-domain algorithms, and more recently the constrained-stability LMS (CSLMS) algorithm and the modified robust variable step size LMS (MRVSS) algorithm. Yet another approach for attempting to speed up the convergence of the LMS algorithm without having to sacrifice too much of its error floor performance is through the use of a variable step size LMS (VSSLMS) algorithm. All the published VSSLMS algorithms make use of an initial large adaptation step size to speed up the convergence. Upon approaching the steady state, smaller step sizes are then introduced to decrease the level of adjustment, hence maintaining a lower error floor. This convergence improvement increases the complexity from 2N for the LMS algorithm to 9N for the MRVSS algorithm, where N is the number of array elements. An alternative to the LMS algorithm is the RLS algorithm.
Although higher complexity is required for the RLS algorithm compared to the LMS algorithm, it can achieve faster convergence and, thus, better performance than the LMS algorithm. Improvements have also been made to the RLS algorithm family to enhance tracking ability as well as stability. Examples are the adaptive forgetting factor RLS (AFF-RLS) algorithm, the variable forgetting factor RLS (VFFRLS) algorithm and the extended recursive least squares (EX-KRLS) algorithm. The multiplication complexities of the VFFRLS, AFF-RLS and EX-KRLS algorithms are 2.5N^2 + 3N + 20, 9N^2 + 7N, and 15N^3 + 7N^2 + 2N + 4 respectively, while the RLS algorithm requires 2.5N^2 + 3N. All the above well-known algorithms require an accurate reference signal for their proper operation. In some cases, several additional operating parameters must be specified. For example, MRVSS needs twelve predefined parameters; as a result, its performance depends highly on the input signal. In this study, two adaptive beamforming algorithms have been proposed: the recursive least square - least mean square (RLMS) algorithm, and the least mean square - least mean square (LLMS) algorithm. These algorithms have been proposed to meet future beamforming requirements, such as a very high convergence rate, robustness to noise and flexible modes of operation. The RLMS algorithm makes use of two individual algorithm stages, based on the RLS and LMS algorithms, connected in tandem via an array image vector. The LLMS algorithm, on the other hand, is a simpler version of the RLMS algorithm.
It makes use of two LMS algorithm stages instead of the RLS-LMS combination used in the RLMS algorithm. Unlike other adaptive beamforming algorithms, in both of these algorithms the error signal of the second algorithm stage is fed back and combined with the error signal of the first algorithm stage to form an overall error signal used to update the tap weights of the first algorithm stage. Upon convergence, usually after a few iterations, the proposed algorithms can be switched to the self-referencing mode, in which the algorithm outputs are fed back to replace their reference signals. In moving-target applications, the array image vector, F, should also be updated to the new position; this scenario is also studied for both proposed algorithms. A simple and effective method for calculating the required array image vector is also proposed. Moreover, since the RLMS and LLMS algorithms employ the array image vector in their operation, they can be used to generate fixed beams by pre-setting the values of the array image vector to the specified direction. The convergence of the RLMS and LLMS algorithms is analyzed for two different operation modes, namely with an external reference and with self-referencing. Array image vector calculations, ranges of step-size values for stable operation, fixed beam generation, and fixed-point arithmetic have also been studied in this thesis. All of these analyses have been confirmed by computer simulations for different signal conditions. Computer simulation results show that both proposed algorithms are superior in convergence performance to algorithms such as the CSLMS, MRVSS, LMS, VFFRLS and RLS algorithms, and are quite insensitive to variations in input SNR and in the actual step-size values used. Furthermore, the RLMS and LLMS algorithms remain stable even when their reference signals are corrupted by additive white Gaussian noise (AWGN). In addition, they are robust when operating in the presence of Rayleigh fading.
Finally, the fidelity of the signal at the output of beamformers based on the proposed algorithms is demonstrated by means of the resultant error vector magnitude (EVM) values and scatter plots. It is also shown that an implementation of an eight-element uniform linear array using the proposed algorithms with a wordlength of nine bits is sufficient to achieve performance close to that provided by full precision.
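Both proposed schemes build on the standard complex LMS weight update w(n+1) = w(n) + μ x(n) e*(n); a single-stage narrowband LMS beamformer for a uniform linear array (a classic baseline, not the two-stage RLMS/LLMS cascade itself; the array geometry, step size and signal model are illustrative assumptions) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)

N = 8          # array elements, half-wavelength spacing (assumed)
mu = 0.01      # LMS step size (assumed)
n_snap = 2000  # training snapshots

def steering(theta_deg, n=N):
    """Half-wavelength ULA steering vector for arrival angle theta."""
    k = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(n))

a_sig = steering(10.0)     # desired signal direction
a_int = steering(-40.0)    # interferer direction

w = np.zeros(N, dtype=complex)
for _ in range(n_snap):
    s = rng.choice([1, -1]) + 0j                       # known BPSK reference
    i = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # interference
    noise = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    x = a_sig * s + a_int * i + noise   # array snapshot
    y = np.vdot(w, x)                   # beamformer output, w^H x
    e = s - y                           # error against the reference
    w = w + mu * x * np.conj(e)         # complex LMS weight update

# After convergence, the array gain toward the signal should dominate
# the gain toward the interferer (a spatial null forms on the latter).
gain_sig = abs(np.vdot(w, a_sig))
gain_int = abs(np.vdot(w, a_int))
```

The dependence on an accurate reference s is exactly the limitation noted above; the self-referencing mode of the proposed algorithms replaces that reference with the algorithm's own output after convergence.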

    Research on high-resolution and high-sensitivity imaging of scatterer distributions in medical ultrasound (医用超音波における散乱体分布の高解像かつ高感度な画像化に関する研究)

    Get PDF
    Ultrasound imaging is an effective method widely used in medical diagnosis and NDT (non-destructive testing). In particular, ultrasound imaging plays an important role in medical diagnosis due to its safety, noninvasiveness, low cost and real-time operation compared with other medical imaging techniques. In general, however, ultrasound imaging suffers more speckle and lower definition than MRI (magnetic resonance imaging) and X-ray CT (computerized tomography). Therefore, it is important to improve ultrasound imaging quality. This study makes three new proposals. The first is the development of a high-sensitivity transducer that utilizes piezoelectric charge directly for FET (field effect transistor) channel control. The second is a method for estimating the distribution of small scatterers in living tissue using the empirical Bayes method. The third is a super-resolution imaging method for scatterers with strong reflection, such as organ boundaries and blood vessel walls. The specific description of each chapter is as follows. Chapter 1: The fundamental characteristics and the main applications of ultrasound are discussed, then the advantages and drawbacks of medical ultrasound are highlighted. Based on the drawbacks, the motivations and objectives of this study are stated. Chapter 2: To overcome the disadvantages of medical ultrasound, we advanced our study in two directions: a new transducer design improves the acquisition modality itself, while new signal processing improves the acquired echo data. The conventional techniques related to these two directions are reviewed. Chapter 3: For high-performance piezoelectric reception, a structure was proposed that enables direct coupling of a PZT (lead zirconate titanate) element to the gate of a MOSFET (metal-oxide semiconductor field-effect transistor), providing a device called the PZT-FET that acts as an ultrasound receiver.
The PZT-FET was analyzed experimentally in terms of its reception sensitivity, dynamic range and -6 dB reception bandwidth. The proposed PZT-FET receiver offers high sensitivity and a wide dynamic range compared with a typical ultrasound transducer. Chapter 4: In medical ultrasound imaging, speckle patterns caused by reflection interference from small scatterers in living tissue are often suppressed by various methodologies. However, accurate imaging of small scatterers is important in diagnosis; therefore, we investigated the influence of speckle patterns on ultrasound imaging using empirical Bayesian learning. Since small scatterers are spatially correlated and thereby constitute a microstructure, we assume that scatterers are distributed according to an AR (auto-regressive) model with unknown parameters. Under this assumption, the AR parameters are estimated by maximizing the marginal likelihood function, and the scatterer distribution is estimated as the MAP (maximum a posteriori) estimate. The performance of our method is evaluated by simulations and experiments. Through the results, we confirmed that the band-limited echo has sufficient information on the AR parameters and that the power spectrum of the echoes from the scatterers is properly extrapolated. Chapter 5: The medical ultrasound imaging of strongly reflecting scatterers based on the MUSIC algorithm is the main subject of Chapter 5. Previously, we proposed a super-resolution ultrasound imaging method based on multiple TRs (transmissions/receptions) with different carrier frequencies, called SCM (super-resolution FM-chirp correlation method). In order to reduce the number of TRs required for the SCM, the method was extended to an SA (synthetic aperture) version called SA-SCM. However, since super-resolution processing is performed on each line of data obtained by RBF (reception beamforming) in the SA-SCM, image discontinuities tend to occur in the lateral direction.
Therefore, a new method called SCM-weighted SA is proposed; in this version the SCM is performed on each transducer element, and the SCM result is then used as the weight for the RBF. The SCM-weighted SA can generate multiple B-mode images, one for each carrier frequency, and the appropriate low-frequency images among them have no grating lobes. For further improvement, instead of simple averaging, the SCM is applied again to the SCM-weighted SA results for all frequencies; this is called SCM-weighted SA-SCM. We evaluated the effectiveness of all the methods by simulations and experiments. From the results, it can be confirmed that the extension of the SCM framework helps ultrasound imaging reduce grating lobes, achieve super-resolution and improve the SNR (signal-to-noise ratio). Chapter 6: A discussion of the overall content of the thesis as well as suggestions for further development, together with the remaining problems, are summarized. 首都大学東京 (Tokyo Metropolitan University), 2019-03-25, 博士(工学) (Doctor of Engineering)
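The classical narrowband MUSIC algorithm underlying the SCM estimates source directions from the noise subspace of the sample covariance; a direction-of-arrival sketch for a uniform linear array (the textbook form, operating on array snapshots rather than on the FM-chirp echo data the SCM uses) is:

```python
import numpy as np

rng = np.random.default_rng(4)

N, M = 10, 2                            # sensors, sources
angles_true = np.array([-20.0, 25.0])   # source directions in degrees

def steering(theta_deg):
    """Half-wavelength ULA steering vector."""
    k = np.pi * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(N))

# Simulate snapshots: two uncorrelated sources plus sensor noise.
n_snap = 400
A = np.stack([steering(t) for t in angles_true], axis=1)   # N x M
S = rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap))
X = A @ S + 0.1 * (rng.normal(size=(N, n_snap)) + 1j * rng.normal(size=(N, n_snap)))

# Sample covariance and its noise subspace (the N-M smallest eigenvectors).
R = X @ X.conj().T / n_snap
eigval, eigvec = np.linalg.eigh(R)   # eigh sorts eigenvalues ascending
En = eigvec[:, : N - M]

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal
# to the noise subspace.
grid = np.arange(-90.0, 90.0, 0.5)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Pick the two largest local maxima as the direction estimates.
locmax = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
idx = np.where(locmax)[0] + 1
est = np.sort(grid[idx[np.argsort(p[idx])[-2:]]])
```

The peaks are far narrower than the array's diffraction-limited beamwidth, which is the "super-resolution" property the SCM inherits.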

    Brain signal analysis in space-time-frequency domain : an application to brain computer interfacing

    Get PDF
    In this dissertation, advanced methods for electroencephalogram (EEG) signal analysis in the space-time-frequency (STF) domain, with applications to eye-blink (EB) artifact removal and brain computer interfacing (BCI), are developed. Two methods for EB artifact removal from EEGs are presented, which incorporate the estimated spatial signatures of the EB artifacts into, respectively, a signal extraction framework and a robust beamforming framework. In the developed signal extraction algorithm, the EB artifacts are extracted as uncorrelated signals from EEGs. The algorithm utilizes the spatial signatures of the EB artifacts as a priori knowledge in the signal extraction stage; the spatial distributions are identified using the STF model of EEGs. In the robust beamforming approach, a novel space-time-frequency/time-segment (STF-TS) model for EEGs is first introduced. The estimated spatial signatures of the EBs are then taken into account in order to restore the artifact-contaminated EEG measurements. Both algorithms are evaluated using simulated and real EEGs and are shown to produce results comparable to those of conventional approaches. Finally, an effective paradigm for BCI is introduced. In this approach, prior physiological knowledge of spectrally band-limited steady-state movement-related potentials is exploited. The results validate the method. EThOS - Electronic Theses Online Service, United Kingdom
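Once an artifact's spatial signature a has been estimated, the simplest removal is an orthogonal projection of every multichannel sample off that direction, x_clean = (I - a aᵀ / ‖a‖²) x. The toy sketch below (a stand-in for the thesis's signal-extraction and robust-beamforming machinery, with a assumed known and all data synthetic) also removes any brain activity lying along a, which is the cost the more sophisticated methods aim to reduce:

```python
import numpy as np

rng = np.random.default_rng(5)

n_ch, n_samp = 16, 1000          # EEG channels, time samples (assumed)

# Hypothetical ground truth: background brain activity plus an eye-blink
# artifact with a fixed spatial signature across channels.
brain = rng.normal(size=(n_ch, n_samp))
a = rng.normal(size=n_ch)                   # EB spatial signature
blink = np.zeros(n_samp)
blink[200:260] = 30.0 * np.hanning(60)      # large blink transient
eeg = brain + np.outer(a, blink)

# Project each time sample off the artifact direction.
P = np.eye(n_ch) - np.outer(a, a) / (a @ a)
cleaned = P @ eeg
```

After the projection, the component of the data along a is exactly zero, so the blink transient disappears while activity orthogonal to a is untouched.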