90 research outputs found

    Adaptive and autonomous protocol for spectrum identification and coordination in ad hoc cognitive radio network

    The decentralised structure of wireless ad hoc networks makes them well suited to quick and easy deployment in military and emergency situations. Consequently, this thesis gives special attention to this form of network. Cognitive Radio (CR) is defined as a radio capable of identifying its spectral environment and optimally adjusting its transmission parameters to achieve an interference-free communication channel. A CR system makes Dynamic Spectrum Access (DSA) feasible, and CR has been proposed as a candidate solution to the challenge of spectrum scarcity: it addresses the challenge by providing DSA to unlicensed (secondary) users. The introduction of this new and efficient spectrum management technique has, however, opened up several challenges in the wireless ad hoc network of interest here, the Cognitive Radio Ad Hoc Network (CRAHN). These challenges, which form the specific focus of this thesis, are as follows. First, the existing spectrum sensing techniques perform poorly in low Signal-to-Noise Ratio (SNR) conditions. Secondly, the CRAHN lacks a central coordination entity for spectrum allocation and information exchange. Lastly, existing Medium Access Control (MAC) protocols such as IEEE 802.11 were designed for homogeneous spectrum usage and static spectrum allocation. This thesis addresses these challenges by first combining the Wavelet-based Scale Space Filtering (WSSF) algorithm with Otsu's multi-threshold algorithm to form an Adaptive and Autonomous Wavelet-Based Scale Space Filter (AWSSF) for Primary User (PU) sensing in CR. The combined algorithm improves detection in low SNR conditions compared to energy detectors (EDs) and other spectrum sensing techniques in the literature.
The AWSSF therefore met the performance requirement of the IEEE 802.22 standard, unlike the other approaches compared, and was thus considered viable for application in CR. Next, a new approach to control channel selection in the CRAHN environment using the Ant Colony System (ACS) was proposed. The algorithm reduces the complex objective of selecting a control channel from an overly large spectrum space to a path-finding problem in a graph. Pheromone trails, proportional to channel reward and computed from received signal strength and channel availability, guide the construction of the selection scheme. Simulation results showed ACS to be a feasible solution for optimal dynamic control channel selection. Finally, a new channel hopping algorithm for the selection of a control channel in the CRAHN was presented. It adopts the bio-mimicry concept to develop a swarm intelligence based mechanism that guides nodes to select a common control channel within a bounded time for the purpose of establishing communication. Closed-form expressions for the upper bound of the time to rendezvous (TTR) and the expected TTR (ETTR) on a common control channel were derived for various network scenarios. The algorithm provides improved performance in comparison to the Jump-Stay and Enhanced Jump-Stay rendezvous algorithms, and simulation results validate the claim of improved TTR. Based on the results obtained, it was concluded that the proposed system contributes positively to ongoing research in the CRAHN.
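
The ACS-based control channel selection described above can be pictured with a minimal simulation. All channel statistics, parameter values, and the exact reward formula below are illustrative assumptions, not taken from the thesis; the sketch only shows how pheromone trails proportional to a reward computed from received signal strength and availability can guide selection.

```python
import random

# Hypothetical channel statistics (RSSI in dBm, availability in [0, 1]);
# the values are invented for illustration.
channels = {
    1: {"rssi": -60.0, "avail": 0.90},
    2: {"rssi": -75.0, "avail": 0.60},
    3: {"rssi": -55.0, "avail": 0.95},
    4: {"rssi": -80.0, "avail": 0.40},
}

def reward(ch):
    # Normalise RSSI to (0, 1] and weight by availability; one plausible
    # choice of reward, not necessarily the thesis's exact formula.
    stats = channels[ch]
    rssi_norm = (stats["rssi"] + 100.0) / 100.0
    return rssi_norm * stats["avail"]

def acs_select(n_ants=50, n_iter=100, rho=0.1, q0=0.9, seed=0):
    rng = random.Random(seed)
    tau = {ch: 1.0 for ch in channels}  # pheromone trail per channel
    for _ in range(n_iter):
        for _ in range(n_ants):
            if rng.random() < q0:        # exploitation: strongest trail
                ch = max(tau, key=lambda c: tau[c] * reward(c))
            else:                        # biased exploration
                total = sum(tau[c] * reward(c) for c in channels)
                r, acc = rng.random() * total, 0.0
                for c in channels:
                    acc += tau[c] * reward(c)
                    if acc >= r:
                        ch = c
                        break
            # local update: evaporate and deposit proportional to reward
            tau[ch] = (1 - rho) * tau[ch] + rho * reward(ch)
        # global update reinforces the best channel found so far
        # (reward is deterministic in this toy model, so it is known)
        best = max(channels, key=reward)
        tau[best] = (1 - rho) * tau[best] + rho * reward(best)
    return max(tau, key=tau.get)

print("selected control channel:", acs_select())
```

Over the iterations the pheromone on each channel converges toward its reward, so the strongest trail ends up on the channel with the best combination of signal strength and availability.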

    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals must be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research. The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure.
Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a noise or desired-signal model, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate noise, even when the SNR is negative, using the BSS method and remove it with an adaptive filter. The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is difficult and time-consuming, and the calibration itself is susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low-SNR conditions. Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal.
Using only PPG for blood pressure measurement is more convenient, since a single sensor on the finger is used, and its acquisition is more resilient to movement error. The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. A model is first trained to detect ECG anomalies in adults and later fine-tuned to detect anomalies in FECG. Only the most influential samples from the training set are selected for training, which minimizes the annotation effort. Because of physician shortages and rural geography, remote monitoring might improve pregnant women's access to prenatal care, especially where access is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) at a higher ratio than the state of the art and decompress them quickly without distortion. The proposed compression is fast thanks to the time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, achieving a high compression ratio without distorting the signal for diagnostic purposes.
In summary, components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy. Compression can then be employed to transmit the signals. The trained CycleGAN model can be used to extract the FECG from the MECG, and the model trained using active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus, and also to fill in reports such as the partogram.
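
The active-learning idea in the fourth objective, training with only the most influential samples, can be illustrated with a toy uncertainty-sampling loop. The one-dimensional data, the nearest-centroid classifier, and the margin-based uncertainty score below are stand-ins chosen for brevity; the thesis itself works with FECG beats and transfer learning from an adult-ECG model.

```python
import random

# Synthetic 1-D "beat features": class 0 centred at 0.0, class 1 at 4.0.
rng = random.Random(1)
pool = ([(rng.gauss(0.0, 1.0), 0) for _ in range(100)]
        + [(rng.gauss(4.0, 1.0), 1) for _ in range(100)])
rng.shuffle(pool)

def centroids(labelled):
    # Class centroids of the currently annotated samples.
    return {lab: sum(x for x, y in labelled if y == lab)
                 / sum(1 for _, y in labelled if y == lab)
            for lab in (0, 1)}

def predict(c, x):
    return min(c, key=lambda lab: abs(x - c[lab]))

def margin(c, x):
    # Difference between the two centroid distances: small = uncertain.
    d = sorted(abs(x - c[lab]) for lab in c)
    return d[1] - d[0]

# Seed with one annotated sample per class.
labelled = [next(s for s in pool if s[1] == 0),
            next(s for s in pool if s[1] == 1)]
unlabelled = [s for s in pool if s not in labelled]

for _ in range(10):  # annotation budget: query the most uncertain sample
    c = centroids(labelled)
    s = min(unlabelled, key=lambda s: margin(c, s[0]))
    unlabelled.remove(s)
    labelled.append(s)  # the "oracle" (annotator) supplies the label

c = centroids(labelled)
accuracy = sum(predict(c, x) == y for x, y in pool) / len(pool)
print(f"{len(labelled)} labels used, accuracy {accuracy:.2f}")
```

With only a dozen annotations the classifier separates the two classes well, because each query is spent on a sample near the current decision boundary rather than on easy, redundant ones.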

    On Particle Swarm Optimization for MIMO Channel Estimation


    Emotion Classification through Nonlinear EEG Analysis Using Machine Learning Methods

    Background: Emotion recognition, as a subset of affective computing, has received considerable attention in recent years. Emotions are key to human-computer interaction. The electroencephalogram (EEG) is considered a valuable physiological source of information for classifying emotions; however, it exhibits complex and chaotic behavior.
    Methods: In this study, an attempt is made to extract important nonlinear features from EEGs with the aim of emotion recognition. We also take advantage of machine learning methods, such as evolutionary feature selection and committee machines, to enhance the classification performance. Classification was performed with respect to both arousal and valence.
    Results: The results suggest that the proposed method is successful and comparable to previous work. A recognition rate of 90% was achieved, and the most significant features are reported. The final classification scheme was applied to two different databases, our recorded EEGs and a benchmark dataset, to evaluate the suggested approach.
    Conclusion: Our findings confirm the effectiveness of using nonlinear features and a combination of classifiers. The results are also discussed from different points of view to better understand brain dynamics as emotions change. This study reveals useful insights into emotion classification and brain behavior during emotion elicitation.
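
One nonlinear EEG feature widely used in this line of work is the Higuchi fractal dimension, which quantifies the signal complexity the abstract alludes to. The sketch below is a generic implementation of Higuchi's (1988) estimator, not the study's exact feature set, and the test signals are synthetic stand-ins for EEG epochs.

```python
import math
import random

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            num = (n - 1 - m) // k  # number of increments at this offset
            if num < 1:
                continue
            length = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                         for i in range(1, num + 1))
            # Higuchi's normalisation of the curve length
            lk.append(length * (n - 1) / (num * k * k))
        logs.append((math.log(1.0 / k), math.log(sum(lk) / len(lk))))
    # least-squares slope of log L(k) against log(1/k) is the dimension
    mx = sum(p[0] for p in logs) / len(logs)
    my = sum(p[1] for p in logs) / len(logs)
    num = sum((p[0] - mx) * (p[1] - my) for p in logs)
    den = sum((p[0] - mx) ** 2 for p in logs)
    return num / den

rng = random.Random(0)
line = [0.01 * i for i in range(1000)]          # smooth signal -> FD near 1
noise = [rng.gauss(0, 1) for _ in range(1000)]  # white noise  -> FD near 2
print(round(higuchi_fd(line), 2), round(higuchi_fd(noise), 2))
```

A feature like this, computed per channel and epoch, would then feed the evolutionary feature selection and the committee of classifiers described above.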

    Parametric array calibration

    The subject of this thesis is the development of parametric methods for the calibration of array shape errors. Two physical scenarios are considered: online calibration (self-calibration) using far-field sources and offline calibration using near-field sources. Maximum likelihood (ML) estimators are employed to estimate the errors. However, the well-known computational complexity of optimizing the objective function of ML estimators demands effective and efficient optimization algorithms. A novel space-alternating generalized expectation-maximization (SAGE)-based algorithm is developed to optimize the objective function of the conditional maximum likelihood (CML) estimator for far-field online calibration. Through data augmentation, joint direction of arrival (DOA) estimation and array calibration can be carried out by a computationally simple search procedure. Numerical experiments show that the proposed method outperforms the existing method for closely located signal sources and is robust to large shape errors. In addition, the accuracy of the proposed procedure attains the Cramér-Rao bound (CRB). A global optimization algorithm, particle swarm optimization (PSO), is employed to optimize the objective function of the unconditional maximum likelihood (UML) estimator for far-field online calibration and near-field offline calibration. A new technique, decaying diagonal loading (DDL), is proposed to enhance the performance of PSO at high signal-to-noise ratio (SNR) by dynamically lowering the loading level, based on the counter-intuitive observation that the global optimum of the UML objective function is more prominent at lower SNR. Numerical simulations demonstrate that the UML estimator optimized by PSO with DDL is optimally accurate, robust to large shape errors, and free of the initialization problem. In addition, the DDL technique is applicable to a wide range of array processing problems where the UML estimator is employed and can be coupled with different global optimization algorithms.
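
Since PSO itself is a standard algorithm, the optimization step can be pictured with a generic PSO minimiser. The objective below is the Rastrigin test function standing in for the multimodal UML likelihood surface; the DDL modification (gradually decaying a loading term added to the diagonal of the array covariance matrix) is deliberately omitted, and all parameter values are conventional PSO defaults rather than the thesis's settings.

```python
import math
import random

def pso_minimize(f, dim=2, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, bound=5.12, seed=0):
    """Plain global-best PSO: velocities blend inertia, a pull toward
    each particle's personal best, and a pull toward the swarm best."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-bound, bound) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(bound, max(-bound, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = list(xs[i]), val
                if val < gval:
                    gbest, gval = list(xs[i]), val
    return gbest, gval

def rastrigin(x):
    # Multimodal stand-in objective; the thesis instead optimises the
    # UML likelihood over the array shape errors.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

best, val = pso_minimize(rastrigin)
print("best point:", [round(b, 3) for b in best], "value:", round(val, 4))
```

The appeal in this context is that PSO needs no gradient or initial estimate of the shape errors, which is exactly why the thesis pairs it with the UML objective.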

    Adaptive Equalization Based on Particle Swarm Optimization Techniques


    Mehrdimensionale Kanalschätzung für MIMO-OFDM (Multidimensional Channel Estimation for MIMO-OFDM)

    Digital wireless communication started in the 1990s with the widespread deployment of GSM. Since then, wireless systems have evolved dramatically. Current wireless standards approach the goal of an omnipresent communication system, which fulfils the wish to communicate with anyone, anywhere, at any time. Nowadays, the acceptance of smartphones and tablets is huge, and the mobile internet is the core application. Given the current growth, the estimated data traffic in wireless networks in 2020 might be 1000 times higher than that of 2010, exceeding 127 exabytes. Unfortunately, the available radio spectrum is scarce and hence needs to be utilized efficiently. Key technologies such as multiple-input multiple-output (MIMO), orthogonal frequency-division multiplexing (OFDM), and various MIMO precoding techniques increase the theoretically achievable channel capacity considerably and are used in the majority of wireless standards. On the one hand, MIMO-OFDM promises substantial diversity and/or capacity gains. On the other hand, the complexity of optimum maximum-likelihood detection grows exponentially and is thus not sustainable. Additionally, the required signaling overhead increases with the number of antennas and thereby reduces the bandwidth efficiency. Iterative receivers, which jointly carry out channel estimation and data detection, are a potential enabler to reduce the pilot overhead and approach optimum capacity, often at reduced complexity. In this thesis, a graph-based receiver is developed which iteratively performs joint data detection and channel estimation. The proposed multi-dimensional factor graph introduces transfer nodes that exploit the correlation of adjacent channel coefficients in an arbitrary number of dimensions (e.g. time, frequency, and space).
This establishes a simple and flexible receiver structure that facilitates soft channel estimation and data detection in multi-dimensional dispersive channels, and supports arbitrary modulation and channel coding schemes. However, the factor graph exhibits suboptimal cycles. In order to reach the maximum performance, the message exchange schedule, the process of combining messages, and the initialization are adapted. Unlike conventional approaches, which merge nodes of the factor graph to avoid cycles, the proposed message combining methods mitigate the impairing effects of short cycles and retain a low computational complexity. Furthermore, a novel detection algorithm is presented, which combines tree-based MIMO detection with a Gaussian detector. The resulting detector, termed Gaussian tree search detection, integrates well within the factor graph framework and further reduces the overall complexity of the receiver. Additionally, particle swarm optimization (PSO) is investigated for the purpose of initial channel estimation. The bio-inspired algorithm is particularly interesting because of its fast convergence to a reasonable MSE and its versatile adaptation to a variety of optimization problems. It is especially suited for initialization since no a priori information is required. A cooperative approach to PSO is proposed for large-scale antenna implementations, as well as a multi-objective PSO for time-varying frequency-selective channels. The performance of the multi-dimensional graph-based soft iterative receiver is evaluated by means of Monte Carlo simulations. The achieved results are compared to the performance of an iterative state-of-the-art receiver. It is shown that a similar or better performance is achieved at a lower complexity. An appealing feature of iterative semi-blind channel estimation is that the supported pilot spacings may exceed the limits given by the Nyquist-Shannon sampling theorem.
In this thesis, a relation between pilot spacing and channel code is formulated. Depending on the chosen channel code and code rate, the maximum spacing approaches the proposed “coded sampling bound”.
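
The Nyquist-Shannon limit that the coded sampling bound relaxes can be made concrete with a short back-of-the-envelope computation. Viewed along the frequency axis, the channel transfer function varies no faster than its maximum delay allows, so conventional pilot-aided estimation must sample it at least that densely. The delay and subcarrier-spacing numbers below are illustrative LTE-like values, not parameters from the thesis.

```python
# Sampling the channel transfer function every s * delta_f Hz resolves
# delays up to 1 / (s * delta_f) without aliasing in the delay domain,
# so the pilot spacing s must satisfy s <= 1 / (tau_max * delta_f).
tau_max = 5e-6    # maximum channel excess delay in seconds (assumed)
delta_f = 15e3    # subcarrier spacing in Hz (LTE-like, assumed)
max_spacing = 1.0 / (tau_max * delta_f)
print(f"pilot required at most every {int(max_spacing)}th subcarrier")
```

A semi-blind iterative receiver with a sufficiently strong channel code can, per the abstract, tolerate pilot spacings beyond this classical limit, up to the coded sampling bound.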

    Smart models to improve agrometeorological estimations and predictions

    The world population, which is constantly growing, is estimated to reach 9.7 billion people by 2050. This increase, combined with the rise in living standards and the climate emergency (rising temperatures, intensification of the water cycle, etc.), presents us with the enormous challenge of managing increasingly scarce resources in a sustainable way. The agricultural sector must face important challenges such as improving natural resource management, reducing environmental degradation, and ensuring food and nutritional security.
All of this is conditioned by water scarcity and aridity, limiting factors in crop production. To guarantee sustainable agricultural production under these conditions, all decisions must be based on knowledge, innovation, and the digitization of agriculture, so as to ensure the resilience of agroecosystems, especially in arid, semi-arid, and dry sub-humid environments where the water deficit is structural. Therefore, this work focuses on improving the precision of current agrometeorological models by applying artificial intelligence techniques. These models can provide accurate estimates and predictions of key variables such as precipitation, solar radiation, and reference evapotranspiration. In this way, it is possible to promote more sustainable agricultural strategies, for example by reducing water and energy consumption. In addition, the number of measurements required as input parameters for these models has been reduced, making them more accessible and applicable in rural areas and developing countries that cannot afford the high cost of installing, calibrating, and maintaining complete automatic weather stations. This approach can help provide valuable information to technicians, farmers, managers, and policy makers in key water and agricultural planning areas. This doctoral thesis has developed and validated new methodologies based on artificial intelligence to improve the precision of crucial variables in the agrometeorological field: precipitation, solar radiation, and reference evapotranspiration. Specifically, prediction systems and gap-filling models for precipitation at different scales have been modeled using neural networks. Models for estimating solar radiation using only thermal parameters have also been developed and validated in areas with climatic characteristics similar to those of the training location, without needing to be geographically in the same region or country.
Similarly, models for estimating and predicting reference evapotranspiration at the local and regional level have been developed using only temperature data for the entire process: regionalization, training, and validation. Finally, an open-source Python library (AgroML) has been created and released internationally to facilitate the development and application of artificial intelligence models, not only in the agrometeorological sector but for any supervised model that improves decision-making in other areas of interest.
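
As a point of reference for temperature-only evapotranspiration modelling, the classical Hargreaves-Samani equation likewise estimates reference evapotranspiration from temperature extremes alone, plus tabulated extraterrestrial radiation. The sketch below is this standard baseline, not AgroML's API or the thesis's AI models, and the input values are invented for illustration.

```python
import math

def hargreaves_samani_et0(t_min, t_max, ra_mj):
    """Reference evapotranspiration (mm/day) from the Hargreaves-Samani
    (1985) equation; ra_mj is extraterrestrial radiation in MJ m-2 day-1,
    normally read from tables given latitude and day of year."""
    t_mean = (t_min + t_max) / 2.0
    # 0.408 converts MJ m-2 day-1 into mm/day of evaporation equivalent.
    return 0.0023 * 0.408 * ra_mj * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Illustrative summer day: 15-30 degC with Ra = 30 MJ m-2 day-1 (assumed).
et0 = hargreaves_samani_et0(15.0, 30.0, 30.0)
print(f"ET0 = {et0:.2f} mm/day")
```

With these inputs the formula yields roughly 4.4 mm/day, a plausible mid-summer value; temperature-only AI models of the kind developed in the thesis are typically benchmarked against baselines like this one.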