
    Detection of sparse targets with structurally perturbed echo

    In this paper, a novel algorithm is proposed to achieve robust high-resolution detection in sparse multipath channels. Currently used sparse reconstruction techniques are not immediately applicable in multipath channel modeling. The performance of standard compressed sensing formulations based on discretization of the multipath channel parameter space degrades significantly when the actual channel parameters deviate from the assumed discrete set of values. To alleviate this off-grid problem, we make use of particle swarm optimization (PSO) to perturb each grid point that resides in each multipath component cluster. Orthogonal matching pursuit (OMP) is used to reconstruct the sparse multipath components in a greedy fashion. Extensive simulation results quantify the performance gain and robustness obtained by the proposed algorithm against the off-grid problem faced in sparse multipath channels. © 2013 Elsevier Inc.
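    The abstract describes a greedy OMP loop whose selected grid points are then perturbed off-grid. A minimal sketch of that loop follows, assuming a hypothetical make_atom function (mapping a delay-Doppler parameter to its dictionary atom) and a refine hook that stands in for the paper's PSO-based perturbation; it is an illustration of the structure, not the authors' implementation.

        import numpy as np

        def omp_with_perturbation(y, make_atom, grid, n_iter=5, refine=None):
            # Greedy OMP over a parameter grid; refine is an optional hook
            # (a PSO step in the paper) that nudges the chosen grid point off-grid.
            residual = y.copy()
            selected, atoms = [], []
            for _ in range(n_iter):
                # pick the grid parameter whose atom correlates best with the residual
                scores = [abs(np.vdot(make_atom(p), residual)) for p in grid]
                p_best = grid[int(np.argmax(scores))]
                if refine is not None:
                    p_best = refine(p_best, residual, make_atom)  # off-grid refinement
                atoms.append(make_atom(p_best))
                selected.append(p_best)
                # least-squares re-fit over all selected atoms, then update the residual
                A = np.column_stack(atoms)
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                residual = y - A @ coef
            return selected, coef, residual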

    Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm

    Blind source separation based on bio-inspired intelligence optimization involves a large amount of computation. To address this problem, we propose an effective blind source separation algorithm based on the artificial bee colony (ABC) algorithm. In the proposed algorithm, the covariance ratio of the signals is used as the objective function, and the artificial bee colony algorithm is used to optimize it. Each source signal component that is separated out is then removed from the mixtures using the deflation method, and all the source signals can be recovered by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm achieves a significant improvement in both computational cost and separation quality compared with previous algorithms.
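    The abstract does not spell out the exact covariance-ratio criterion or the ABC update rules, so the sketch below is only a hedged illustration of the overall extract-then-deflate structure: a placeholder ratio of lag-1 autocovariance to variance serves as the objective, and a plain random search stands in for the ABC optimizer.

        import numpy as np

        def placeholder_objective(w, X):
            # Placeholder covariance-ratio criterion (not the paper's exact form):
            # lag-1 autocovariance of the extracted signal divided by its variance.
            s = w @ X
            return abs(np.mean(s[1:] * s[:-1])) / (np.var(s) + 1e-12)

        def extract_one_source(X, n_candidates=2000, rng=np.random.default_rng(0)):
            # Random-search stand-in for the ABC optimizer: find the unit vector w
            # that maximizes the objective and return the extracted component.
            best_w, best_val = None, -np.inf
            for _ in range(n_candidates):
                w = rng.standard_normal(X.shape[0])
                w /= np.linalg.norm(w)
                val = placeholder_objective(w, X)
                if val > best_val:
                    best_w, best_val = w, val
            return best_w @ X, best_w

        def deflate(X, s):
            # Deflation step: remove the recovered component from every mixture row.
            return X - np.outer(X @ s / (s @ s), s)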

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. Studying cardiac function requires collecting a variety of signals; in practice, heart monitoring methods such as electrocardiography (ECG) and photoplethysmography (PPG) are commonly used. Although several methods for monitoring fetal and maternal health exist, research is underway to enhance their mobility, accuracy, automation, and noise resistance so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research.

    The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and to extract the desired signal from a noisy one with negative SNR (i.e., where the noise power is greater than the signal power). ECG and PPG signals are sensitive to noise from a variety of sources, which increases the risk of misinterpretation and interferes with the diagnostic process; the noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a model of the noise or the desired signal, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the goal of this step is to develop a robust algorithm that can estimate the noise with a BSS method, even when the SNR is negative, and remove it using an adaptive filter.

    The second objective concerns monitoring of maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is difficult, time-consuming, and susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects; moreover, it does not need calibration, is not sensitive to the heart rate variability of the mother or fetus, and can handle low-SNR conditions.

    Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues and, in contrast to other solutions, uses only the PPG signal. Using only PPG for blood pressure is more convenient, since a single sensor on the finger is required and its acquisition is more resilient to errors due to movement.

    The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults and is then trained to detect anomalies in FECG. Only the most influential samples are selected from the training set, which leads to training with the least effort.

    Because of physician shortages and rural geography, pregnant women's ability to get prenatal care might be improved through remote monitoring, especially when access to prenatal care is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) at a higher ratio than the state of the art and decompress them quickly without distortion. The proposed compression is fast thanks to its time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, so that the signal is not distorted for diagnostic purposes while a high compression ratio is achieved.

    In summary, the components of an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother are denoised using the deconvolution strategy, and compression is then employed to transmit the signals. The trained CycleGAN model is used to extract the FECG from the MECG, and the model trained with active transfer learning detects anomalies in both MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus and to fill in reports such as the partogram.
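    The abstract attributes the compression speed to a time-domain B-spline representation. The snippet below is a minimal sketch of that idea using SciPy's smoothing-spline routines, not the thesis's actual algorithm: an ECG-like segment is replaced by spline knots and coefficients, and the signal is recovered by evaluating the spline; the smoothing parameter shown is an arbitrary illustrative choice.

        import numpy as np
        from scipy.interpolate import splrep, splev

        def bspline_compress(signal, fs, smooth=0.01):
            # Fit a smoothing cubic B-spline; the returned (knots, coefficients, degree)
            # tuple is what would be stored or transmitted instead of the raw samples.
            t = np.arange(len(signal)) / fs
            return splrep(t, signal, k=3, s=smooth * len(signal))

        def bspline_decompress(tck, n_samples, fs):
            # Evaluate the stored spline back onto a uniform time grid.
            t = np.arange(n_samples) / fs
            return splev(t, tck)

        # Toy usage: a noisy sine stands in for an ECG segment.
        fs = 250
        x = np.sin(2 * np.pi * 1.2 * np.arange(4 * fs) / fs) + 0.05 * np.random.randn(4 * fs)
        tck = bspline_compress(x, fs)
        x_hat = bspline_decompress(tck, len(x), fs)
        ratio = x.size / (len(tck[0]) + len(tck[1]))  # crude compression-ratio estimate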

    Development of new array signal processing techniques using swarm intelligence

    Ph.D. thesis, Department of Electrical and Electronics Engineering, Bilkent University, Ankara, 2010. In this thesis, novel array signal processing techniques are proposed for identification of multipath communication channels based on cross ambiguity function (CAF) calculation, swarm intelligence and compressed sensing (CS) theory. The first technique detects the presence of multipath components by integrating the CAFs of each antenna output in the array and iteratively estimates the direction-of-arrivals (DOAs), time delays and Doppler shifts of a known waveform. The second technique, called particle swarm optimization-cross ambiguity function (PSO-CAF), makes use of the CAF calculation to transform the received antenna array outputs to the delay-Doppler domain for efficient exploitation of the delay-Doppler diversity of the multipath components. Clusters of multipath components are identified by simple amplitude thresholding in the delay-Doppler domain, and PSO is used to estimate the parameters of the multipath components in each cluster. The third proposed technique combines CS theory, swarm intelligence and CAF computation. The performance of standard CS formulations based on discretization of the multipath channel parameter space degrades significantly when the actual channel parameters deviate from the assumed discrete set of values. To alleviate this “off-grid” problem, a novel technique based on PSO is proposed, which can also be used in applications other than multipath channel identification. The performance of the proposed techniques is verified on both synthetic and real data.
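    The cross ambiguity function that all three techniques build on has a standard discrete form, CAF(tau, nu) = sum_n r[n] s*[n - tau] exp(-j 2 pi nu n / fs), evaluated over a delay-Doppler grid. The sketch below implements that textbook form directly (with a circular shift for the delay, for brevity); it illustrates the quantity being computed rather than the thesis's optimized implementation.

        import numpy as np

        def cross_ambiguity(received, reference, delays, dopplers, fs):
            # Discrete cross ambiguity function on a delay (samples) x Doppler (Hz) grid.
            n = np.arange(len(received))
            caf = np.zeros((len(delays), len(dopplers)), dtype=complex)
            for i, d in enumerate(delays):
                shifted = np.roll(reference, d)  # delayed copy of the known waveform
                for j, f in enumerate(dopplers):
                    caf[i, j] = np.sum(received * np.conj(shifted)
                                       * np.exp(-2j * np.pi * f * n / fs))
            return np.abs(caf)  # peaks indicate multipath components in delay-Doppler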

    Orbiting Rainbows: Optical Manipulation of Aerosols and the Beginnings of Future Space Construction

    Our objective is to investigate the conditions to manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an ultra-lightweight surface with useful and adaptable electromagnetic characteristics, for instance in the optical, RF, or microwave bands. Inspired by the light scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology to enable a new vision of space system architecture, with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. See Figure 1 for a scenario of application of this concept. The solution that we propose is to construct an optical system in space in which the nonlinear optical properties of a cloud of micron-sized particles are shaped into a specific surface by light pressure, allowing it to form a very large and lightweight aperture of an optical system, hence reducing overall mass and cost. Other potential advantages offered by the cloud properties as an optical system include possible combinations of properties (combined transmit/receive), variable focal length, combined refractive and reflective lens designs, and hyperspectral imaging. A cloud of highly reflective micron-sized particles acting coherently in a specific electromagnetic band, just like an aerosol in suspension in the atmosphere, would reflect the Sun's light much like a rainbow; the only difference from an atmospheric or industrial aerosol is the absence of the supporting fluid medium. This new concept is based on recent understanding of the physics of optical manipulation of small particles in the laboratory and on the engineering of distributed ensembles of spacecraft clouds to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large-scale NASA mission applications and develop new technology in the areas of astrophysical imaging systems and remote sensing, because the cloud can operate as an adaptive optical imaging sensor. While establishing the feasibility of constructing a single aperture out of the cloud is the main topic of this work, it is clear that multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space and enable challenging goals such as exoplanet detection. Furthermore, this effort could establish the feasibility of key issues related to the material properties, remote manipulation, and autonomy characteristics of the cloud in orbit. Several types of endeavors (science missions) could be enabled by this approach: new astrophysical imaging systems, exoplanet searches, very large apertures offering unprecedented resolution to discern continents and other important features of other planets, hyperspectral imaging, adaptive systems, spectroscopic imaging through a planetary limb, and stable optical systems operating from the Lagrange points. Future micro-miniaturization might hold the promise of further extending our dust-aperture concept to other, even more capable smart-dust concepts.

    Reports on industrial information technology. Vol. 12

    The 12th volume of Reports on Industrial Information Technology presents some selected results of research achieved at the Institute of Industrial Information Technology during the last two years. These results have contributed to many cooperative projects with partners from academia and industry and cover current research interests including signal and image processing, pattern recognition, distributed systems, powerline communications, automotive applications, and robotics.

    Prediction model of alcohol intoxication from facial temperature dynamics based on K-means clustering driven by evolutionary computing

    Alcohol intoxication is a significant phenomenon affecting many social areas, including work procedures and car driving. Alcohol causes certain side effects, including changes in the facial thermal distribution, which may enable the contactless identification and classification of alcohol-intoxicated people. We adopted a multiregional segmentation procedure to identify and classify symmetrical facial features, which reliably reflect the facial-temperature variations while subjects are drinking alcohol. Such a model can objectively track alcohol intoxication in the form of a facial temperature map. In this paper, we propose a segmentation model based on a clustering algorithm driven by a modified version of Artificial Bee Colony (ABC) evolutionary optimization, with the goal of extracting facial temperature features from infrared (IR) images. This model allows symmetric clusters to be defined, identifying facial temperature structures corresponding to intoxication. The ABC algorithm serves as the optimization process that supplies the clustering method with an optimal cluster distribution, so that the individual areas linked with gradual alcohol intoxication are best approximated. In our analysis, we considered a set of twenty volunteers whose IR images were taken to capture the process of alcohol intoxication. The proposed method performs multiregional segmentation, classifying the individual spatial temperature areas into segmentation classes. Besides modelling single IR images, the proposed method allows dynamic tracking of the alcohol-temperature features throughout the intoxication process, from the sober state up to the maximum observed intoxication level.
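    As a hedged illustration of the clustering idea described above (and not the paper's modified ABC), the sketch below drives a one-dimensional temperature clustering with a generic population-based centroid search: the fitness is the within-cluster squared distance, and each pixel is then assigned to its nearest centroid to form the segmentation map.

        import numpy as np

        def within_cluster_sse(centroids, values):
            # Fitness guiding the search: total squared distance of each temperature
            # value to its nearest centroid (lower is better).
            d = np.abs(values[:, None] - centroids[None, :])
            return np.sum(np.min(d, axis=1) ** 2)

        def evolve_centroids(values, k=5, pop=20, iters=200, rng=np.random.default_rng(1)):
            # Population-based centroid search, a simplified stand-in for the paper's
            # modified ABC: perturb candidate centroid sets and keep improvements.
            lo, hi = values.min(), values.max()
            population = rng.uniform(lo, hi, size=(pop, k))
            fitness = np.array([within_cluster_sse(c, values) for c in population])
            for _ in range(iters):
                for i in range(pop):
                    trial = population[i] + rng.normal(0, 0.05 * (hi - lo), size=k)
                    f = within_cluster_sse(trial, values)
                    if f < fitness[i]:
                        population[i], fitness[i] = trial, f
            return population[np.argmin(fitness)]

        def segment(image, centroids):
            # Assign each pixel temperature to its nearest centroid (segmentation map).
            return np.argmin(np.abs(image[..., None] - centroids), axis=-1)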

    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks, and aims to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise new, successful approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    A Review on Dimension Reduction Techniques in Data Mining

    Real-world data, such as images and speech signals, is high-dimensional, with many dimensions needed to represent each instance. High-dimensional data makes it more complex to detect and exploit the relationships among attributes. Dimensionality reduction is a technique used to reduce this complexity when analyzing high-dimensional data. Many methodologies are used to find the critical dimensions of a dataset, significantly reducing the number of dimensions relative to the original input data. Dimensionality reduction methods fall into two types: feature extraction and feature selection. Feature extraction is a form of dimensionality reduction that derives important new features from the input dataset, whereas feature selection retains a subset of the original dimensions. Both supervised and unsupervised approaches to dimensionality reduction are available. One purpose of this survey is to provide an adequate understanding of the dimensionality reduction techniques that exist currently and to indicate which of the described methods is applicable for a given set of parameters and conditions. This paper surveys the schemes most commonly used for dimensionality reduction of high-dimensional datasets. A comparative analysis of the surveyed methodologies is also carried out, based on which the best methodology for a certain type of dataset can be chosen. Keywords: data mining, dimensionality reduction, clustering, feature selection, curse of dimensionality, critical dimension
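    The two families the review distinguishes can be contrasted in a few lines. The sketch below is an illustrative toy example (synthetic data, arbitrary choice of k): feature extraction builds new dimensions via PCA, while feature selection keeps a subset of the original columns, here chosen by variance.

        import numpy as np

        rng = np.random.default_rng(42)
        X = rng.standard_normal((200, 10))                 # 200 samples, 10 dimensions
        X[:, 3] = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)  # a redundant feature
        k = 5

        # Feature extraction: PCA keeps the top-k directions of maximum variance.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        X_pca = Xc @ Vt[:k].T                              # new 5-dimensional representation

        # Feature selection: keep the k original columns with the largest variance.
        top_cols = np.argsort(X.var(axis=0))[::-1][:k]
        X_sel = X[:, top_cols]                             # subset of original dimensions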