367 research outputs found

    Speech recognition in noise using weighted matching algorithms


    Autoregressive Spectral Estimation in Noise with Application to Speech Analysis


    Applications of fuzzy counterpropagation neural networks to non-linear function approximation and background noise elimination

    This research develops a novel ANN model that incorporates the fuzzy set approach and can perform non-linear function approximation. The model serves as the basic structure of an adaptive filter that can operate in an unknown environment through a learning mechanism suited to the speech enhancement process. The learning capability of the ANN is expected to reduce the time and cost of designing adaptive filters based on the fuzzy set approach. Combining both techniques may yield a learnable system that can tackle the vagueness of the changing environment in which the adaptive filter operates. The proposed model, called the Fuzzy Counterpropagation Network (Fuzzy CPN), has fast learning capability and a self-growing structure. It is applied to non-linear function approximation, chaotic time series prediction and background noise elimination.
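    The self-growing behaviour described above can be sketched in a few lines. The toy model below is an assumed simplification of my own (1-D prototype growth plus fuzzy-membership blending), not the thesis's actual Fuzzy CPN architecture or training rule:

```python
import numpy as np

class FuzzyCPN:
    """Self-growing counterpropagation-style approximator (1-D toy).

    Illustrative only: a prototype (Kohonen-layer unit) is grown whenever
    no existing unit lies within `radius` of the input, and the output is
    a fuzzy-membership-weighted blend of the stored target values
    (Grossberg-layer weights).
    """

    def __init__(self, radius=0.3):
        self.radius = radius
        self.protos = []    # stored input prototypes
        self.targets = []   # associated output values

    def fit_sample(self, x, y):
        if self.protos:
            d = np.abs(np.array(self.protos) - x)
            if d.min() < self.radius:
                return                    # an existing unit covers x
        self.protos.append(float(x))      # self-growing structure
        self.targets.append(float(y))

    def predict(self, x):
        d = np.abs(np.array(self.protos) - x)
        mu = np.exp(-(d / self.radius) ** 2)   # fuzzy memberships
        return float(mu @ np.array(self.targets) / mu.sum())

# Approximate sin(x) on [0, 2*pi] from streamed samples
net = FuzzyCPN(radius=0.3)
for xi in np.linspace(0.0, 2.0 * np.pi, 200):
    net.fit_sample(xi, np.sin(xi))
```

Because units are created only where no prototype exists yet, the network grows to cover the input range in a single pass, which is the sense in which such models learn fast.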

    The severity of stages estimation during hemorrhage using error correcting output codes method

    As beneficial components with critical impact, computer-aided decision making systems have infiltrated many fields, such as economics, medicine, architecture and agriculture. Their latent capability for facilitating human work propels the high-speed development of such systems, and the effective decisions they provide greatly reduce the expense of labor, energy, budget, etc. The computer-aided decision making system for traumatic injuries is one such system, supplying suggestive opinions when dealing with injuries resulting from accidents, battle, or illness. Its functions may involve judging the type of illness, triaging the wounded according to battle injuries, deciding the severity of symptoms for illness or injuries, and managing resources in the context of traumatic events. The proposed computer-aided decision making system aims at estimating the severity of blood volume loss. Specifically, severe hemorrhage, which accompanies many traumatic injuries, is a potentially life-threatening, significant loss of blood volume that requires immediate treatment and results in decreased blood and oxygen perfusion of vital organs. Hemorrhage and blood loss can occur at different levels, such as mild, moderate, or severe. The proposed system will assist physicians by estimating information such as the severity of blood volume loss and hemorrhage, so that timely measures can be taken not only to save lives but also to reduce long-term complications and the cost caused by mismatched operations and treatments. The general framework of the proposed research contains three tasks, and many novel and transformative concepts are integrated into the system. The first is the preprocessing of the raw signals: adaptive filtering is adopted and customized to filter noise, and two detection algorithms (QRS complex detection and systolic/diastolic wave detection) are designed. The second is feature extraction: the proposed system combines features from the time domain, the frequency domain, nonlinear analysis, and multi-model analysis to better represent the patterns that appear when hemorrhage happens. Third, a machine learning algorithm is designed for pattern classification. This novel algorithm, a new version of error correcting output codes (ECOC), is designed and investigated for high accuracy and real-time decision making, and its features and characteristics are essential for the proposed computer-aided trauma decision making system. The proposed system is tested against the Lower Body Negative Pressure (LBNP) dataset, and the results indicate its accuracy and reliability.
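    The classification stage rests on the standard ECOC idea: each class receives a redundant binary codeword, one binary learner is trained per codeword bit, and decoding assigns the class whose codeword is closest in Hamming distance to the predicted bit string. The sketch below illustrates that generic scheme with a toy centroid-based bit learner and invented data; it is not the thesis's modified ECOC algorithm or the LBNP dataset:

```python
import numpy as np

class CentroidBit:
    """Toy binary learner: nearest of two class-group centroids."""
    def fit(self, X, b):
        self.c0 = X[b == 0].mean(axis=0)
        self.c1 = X[b == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

class ECOC:
    def __init__(self, codebook):
        self.codebook = np.asarray(codebook)   # shape (K, n_bits)
    def fit(self, X, y):
        bits = self.codebook[y]                # per-sample bit targets
        self.learners = [CentroidBit().fit(X, bits[:, j])
                         for j in range(self.codebook.shape[1])]
        return self
    def predict(self, X):
        B = np.column_stack([h.predict(X) for h in self.learners])
        # Hamming distance of each predicted bit string to every codeword
        ham = (B[:, None, :] != self.codebook[None, :, :]).sum(axis=2)
        return ham.argmin(axis=1)

# Three severity levels (mild/moderate/severe) with a redundant 5-bit
# code of minimum Hamming distance 3, so one flipped bit is corrected
codebook = [[0, 0, 0, 1, 1],
            [0, 1, 1, 0, 0],
            [1, 0, 1, 0, 1]]
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
y = rng.integers(0, 3, 300)
X = centers[y] + 0.3 * rng.normal(size=(300, 2))
model = ECOC(codebook).fit(X, y)
acc = float((model.predict(X) == y).mean())
```

The redundancy is the point: with minimum codeword distance 3, a single misbehaving bit classifier does not change the decoded class.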

    Signal validation in electroencephalography research


    Novel linear and nonlinear optical signal processing for ultra-high bandwidth communications

    The thesis is articulated around the theme of ultra-wide bandwidth single channel signals. It focuses on the two main topics of transmission and processing of information by techniques compatible with high baudrates. The processing schemes introduced combine new linear and nonlinear optical platforms such as Fourier-domain programmable optical processors and chalcogenide chip waveguides, as well as the concept of neural networks. Transmission of data is considered in the context of medium distance links of Optical Time Division Multiplexed (OTDM) data subject to environmental fluctuations. We experimentally demonstrate simultaneous compensation of differential group delay and multiple orders of dispersion at symbol rates of 640 Gbaud and 1.28 Tbaud. Signal processing at high bandwidth is envisaged both in the case of elementary post-transmission analog error mitigation and in the broader field of optical computing for high-level operations ("optical processor"). A key innovation is the introduction of a novel four-wave mixing scheme implementing a dot-product operation between wavelength multiplexed channels. In particular, it is demonstrated for low-latency, hash-key based all-optical error detection in links encoded with advanced modulation formats. Finally, the work presents groundbreaking concepts for the compact implementation of an optical neural network as a programmable multi-purpose processor. The experimental architecture can implement neural networks with several nodes on a single optical nonlinear transfer function, implementing functions such as analog-to-digital conversion. The distinctive contribution of the thesis lies in new approaches to optical signal processing that potentially enable high-level operations using simple optical hardware and limited cascading of components.
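    The hash-key error-detection idea can be mirrored digitally. The thesis computes the dot product optically via four-wave mixing between wavelength-multiplexed channels; the sketch below only reproduces the principle in software, with invented key vectors and modulus:

```python
import numpy as np

rng = np.random.default_rng(7)
symbols = rng.integers(0, 4, 64)         # e.g. indices of QPSK symbols
keys = rng.integers(1, 5, (4, 64))       # invented integer key vectors

def hash_key(data, keys, mod=7):
    """Short hash: dot product with each key vector, reduced mod 7."""
    return (keys @ data) % mod

sent_hash = hash_key(symbols, keys)      # sent alongside the payload

received = symbols.copy()
received[10] += 1                        # a single corrupted symbol
error_detected = not np.array_equal(hash_key(received, keys), sent_hash)
```

Because the hash is a handful of dot products rather than a full retransmission-and-compare, a receiver can flag corrupted blocks with low latency, which is the property the all-optical implementation exploits.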

    Digital watermark technology in security applications

    With the rising emphasis on security and the number of fraud related crimes around the world, authorities are looking for new technologies to tighten the security of identity. Among many modern electronic technologies, digital watermarking has unique advantages in enhancing document authenticity. At the current stage of development, digital watermarking technologies are not as mature as competing technologies for supporting identity authentication systems. This work presents improvements in the performance of two classes of digital watermarking techniques and investigates the issue of watermark synchronisation. Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences that have a high level of orthogonality to the cover vector. The Hadamard Matrix based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so that watermarking capacity and image fidelity can be improved compared to using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences, and the advantages of both classes of orthogonalisation methods are significant. Another watermarking method introduced in the thesis is based on writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which have the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively, and comparisons are made between the orthogonal and non-orthogonal codes used in this watermarking method. It is found that fidelity and robustness are contradictory and cannot be optimised simultaneously. Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity and robustness. From two different viewpoints, the conclusions are not the same. From a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity advantage is also significant. However, from the power ratio point of view, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations. Watermark synchronisation is first provided by high contrast frames around the watermarked image. Edge detection filters are used to detect the high contrast borders of the captured image; by scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frames, and the estimated regression function yields the rotation angle as the slope of the rotated frames. Scaling is corrected by re-sampling the upright image to the original size. A theoretically studied method that can synchronise the captured image to sub-pixel accuracy is also presented. Using invariant transforms and the "symmetric phase only matched filter", the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array elements reveal rotation, translation and scaling information through two filtering processes.
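    The benefit of an orthogonal spreading sequence can be seen in a generic spread-spectrum sketch. The greedy sign-flipping step below is a stand-in of my own, not the thesis's "Sorting and Cancelling" or "Hadamard Matrix Search" methods; it merely drives the cover/sequence correlation toward zero so that the blind correlation detector sees almost no host interference:

```python
import numpy as np

rng = np.random.default_rng(1)
cover = rng.normal(10.0, 2.0, 256)       # stand-in for cover-image pixels
seq = rng.choice([-1.0, 1.0], 256)       # pseudo-random spreading sequence

# Greedy quasi-orthogonalisation: flipping seq[i] changes the
# correlation <cover, seq> by -2 * cover[i] * seq[i], so repeatedly
# take the flip that lands the correlation closest to zero.
while True:
    c = cover @ seq
    after_flip = c - 2.0 * cover * seq   # correlation after each candidate flip
    i = int(np.argmin(np.abs(after_flip)))
    if abs(after_flip[i]) >= abs(c):
        break                            # no flip improves: quasi-orthogonal
    seq[i] = -seq[i]

alpha = 0.5                              # embedding strength
bit = 1                                  # payload bit in {-1, +1}
marked = cover + alpha * bit * seq       # additive spread-spectrum embedding

# Blind correlation detector: with <cover, seq> ~ 0 the host no longer
# interferes, so the statistic concentrates around alpha * bit
stat = (marked @ seq) / len(seq)
decoded = 1 if stat > 0 else -1
```

With a pseudo-random sequence the host term (cover @ seq)/N would add noise of comparable size to the payload term; quasi-orthogonality shrinks it, which is why fidelity (small alpha) and reliable detection can coexist.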

    Algorithms and systems for home telemonitoring in biomedical applications

    During the past decades, the interest of the healthcare community has shifted from the simple treatment of diseases towards prevention and the maintenance of a healthy lifestyle. This approach is associated with reduced costs for health systems, which face constantly increasing expenditures due to reduced mortality from chronic diseases and to progressive population ageing. Nevertheless, the high costs related to hospitalizing patients for monitoring procedures that could be better performed at home hamper the full implementation of this approach in a traditional way. Information and Communication Technology can provide a solution by implementing a care model closer to the patient, crossing the physical boundaries of the hospitals and thus also reaching those patients who, because of geographical or social conditions, cannot access health services as readily as others. This is the case of telemonitoring systems, whose aim is to provide monitoring services for health-related parameters at a distance, by means of custom-designed electronic devices. In this thesis, the specific issues associated with two telemonitoring applications are presented, along with the proposed solutions and the achieved results. The first telemonitoring application considered is fetal electrocardiography. Non-invasive fetal electrocardiography is the recording of the fetal heart's electrical activity using electrodes placed on the maternal abdomen. It can provide important diagnostic parameters, such as the beat-to-beat heart rate variability, whose recurring analysis would be useful in assessing and monitoring fetal health during pregnancy. Long-term electrocardiographic monitoring is supported by the absence of any collateral effects for both the mother and the fetus. This application has been tackled from several perspectives, mainly acquisition and processing.
    From the acquisition viewpoint, a study on different skin treatments, disposable commercial electrodes and textile electrodes was performed with the aim of improving signal acquisition quality while simplifying the measurement setup. From the processing viewpoint, different algorithms were developed to extract the fetal heart rate, starting from an on-line ICA algorithm or exploiting a subtractive approach to work on recordings acquired with a reduced number of electrodes. The latter took part in the international "PhysioNet/Computing in Cardiology Challenge" in 2013, entering the top ten best-performing open-source algorithms. An improved version of this algorithm is also presented, which would rank 5th and 4th in the final standings for fetal heart rate and fetal RR interval measurement performance among the open-source challenge entries, taking into account both official and unofficial entrants. The research in this field was carried out in collaboration with the Pediatric Cardiology Unit of the Hospital G. Brotzu in Cagliari, for the acquisition of non-invasive fetal ECG signals from pregnant volunteers. The second telemonitoring application considered is the telerehabilitation of the hand. The execution of rehabilitation exercises has been proven effective in recovering hand functionality in a wide variety of invalidating diseases, but the lack of standardization and of continuous medical control often leads patients to neglect these therapeutic procedures. Telemonitoring the rehabilitation sessions would allow the physician to closely follow the patients' progress and compliance with the prescribed adapted exercises.
    This application led to the development of a sensorized telerehabilitation system for the execution and objective monitoring of therapeutic exercises at the patient's home, and of the telemedicine infrastructure that gives the physician the opportunity to monitor patients' progress through parameters summarizing their performance. The proposed non-CE-marked medical device, patent pending, underwent a clinical trial, reviewed and approved by the Italian Public Health Department, involving 20 patients with Rheumatoid Arthritis and 20 with Systemic Sclerosis, randomly assigned to the experimental or the control arm and enrolled for 12 weeks in a home rehabilitation program. The trial, carried out in collaboration with the Rheumatology Department of the Policlinico Universitario of Cagliari, revealed promising results in terms of hand functionality recovery, highlighting greater improvements for the patients enrolled in the experimental arm, who used the proposed telerehabilitation system, than for those in the control arm, who performed similar rehabilitation exercises using common objects.
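    The subtractive approach to fetal ECG extraction can be illustrated on synthetic data: locate the maternal beats, average them into a template, and subtract the template at every beat so that the small fetal component survives in the residual. This is a generic template-subtraction sketch with invented waveforms, not the thesis's algorithm or its PhysioNet challenge entry:

```python
import numpy as np

fs = 500                                   # sampling rate (Hz), invented
t = np.arange(0.0, 10.0, 1.0 / fs)

def beat_train(t, rate_hz, width, amp):
    """Crude ECG stand-in: Gaussian 'R waves' repeating at a fixed rate."""
    sig = np.zeros_like(t)
    for p in np.arange(0.2, t[-1], 1.0 / rate_hz):
        sig += amp * np.exp(-((t - p) ** 2) / (2.0 * width ** 2))
    return sig

maternal = beat_train(t, 1.2, 0.02, 1.0)   # ~72 bpm, large amplitude
fetal = beat_train(t, 2.3, 0.01, 0.15)     # ~138 bpm, small amplitude
abdominal = maternal + fetal               # simplified abdominal recording

# 1) Detect maternal R peaks: local maxima above an amplitude threshold
half = int(0.1 * fs)                       # 100 ms half-window
peaks = [i for i in range(half, len(t) - half)
         if abdominal[i] > 0.5
         and abdominal[i] == abdominal[i - half:i + half].max()]

# 2) Average maternal beat template from the detected beats
template = np.mean([abdominal[i - half:i + half] for i in peaks], axis=0)

# 3) Subtract the template at every maternal beat
residual = abdominal.copy()
for i in peaks:
    residual[i - half:i + half] -= template

# The residual now approximates the fetal contribution
err = float(np.abs(residual - fetal).max())
```

Real abdominal recordings add noise, baseline wander and maternal beat-shape variability, which is what the thesis's adaptive processing chain has to cope with; the sketch only conveys why subtraction leaves the fetal signal behind.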