
    A survey of the application of soft computing to investment and financial trading


    Integer-forcing architectures: cloud-radio access networks, time-variation and interference alignment

    Next-generation wireless communication systems will need to contend with many active mobile devices, each of which will require a very high data rate. To cope with this growing demand, network deployments are becoming denser, leading to higher interference between active users. Conventional architectures aim to mitigate this interference through careful design of signaling and scheduling protocols. Unfortunately, these methods become less effective as the device density increases. One promising option is to enable cellular basestations (i.e., cell towers) to jointly process their received signals for decoding users’ data packets as well as to jointly encode their data packets to the users. This joint processing architecture is often enabled by a cloud radio access network that links the basestations to a central processing unit via dedicated connections. One of the main contributions of this thesis is a novel end-to-end communications architecture for cloud radio access networks as well as a detailed comparison to prior approaches, both via theoretical bounds and numerical simulations. Recent work has shown that the following high-level approach has numerous advantages: each basestation quantizes its observed signal and sends it to the central processing unit for decoding, which in turn generates signals for the basestations to transmit and sends quantized versions back to them. This thesis follows an integer-forcing approach that uses the fact that, if codewords are drawn from a linear codebook, then their integer-linear combinations are themselves codewords. Overall, this architecture requires integer-forcing channel coding from the users to the central processing unit and back, which handles interference between the users’ codewords, as well as integer-forcing source coding from the basestations to the central processing unit and back, which handles correlations between the basestations’ analog signals. Prior work on integer-forcing has proposed and analyzed channel coding strategies as well as a source coding strategy from the basestations to the central processing unit, and this thesis proposes a source coding strategy for the other direction. Iterative algorithms are developed to optimize the parameters of the proposed architecture, which involve real-valued beamforming and equalization matrices and integer-valued coefficient matrices in a quadratic objective. Beyond the cloud radio setting, it is argued that the integer-forcing approach is a promising framework for interference alignment between multiple transmitter-receiver pairs. In this scenario, the goal is to align the interfering data streams so that, from the perspective of each receiver, there appears to be only a single effective interferer. Integer-forcing interference alignment accomplishes this objective by having each receiver recover two linear combinations that can then be solved for the desired signal and the sum of the interference. Finally, this thesis investigates the impact of channel coherence on the integer-forcing strategy via numerical simulations.
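    The integer coefficient search mentioned above can be made concrete. In the compute-and-forward/integer-forcing literature, the choice of integer coefficients for a single-antenna multiple-access example reduces to minimizing a quadratic form. Below is a minimal, illustrative Python sketch: brute force over small integers stands in for the lattice-reduction methods a practical system would use, and the matrix M follows the standard MMSE form from the literature rather than anything specific to this thesis.

```python
import itertools
import numpy as np

def best_integer_coeffs(M, amax=3):
    """Brute-force search for a nonzero integer vector a minimizing the
    quadratic form a^T M a (toy stand-in for the integer programs arising
    in integer-forcing; real systems use LLL-style lattice reduction)."""
    n = M.shape[0]
    best, best_val = None, np.inf
    for a in itertools.product(range(-amax, amax + 1), repeat=n):
        a = np.array(a)
        if not a.any():
            continue                      # skip the all-zero vector
        val = a @ M @ a
        if val < best_val:
            best, best_val = a, val
    return best, best_val

# Example: MMSE-form matrix for a 2-user real-valued channel h at a given SNR.
snr, h = 100.0, np.array([1.0, 1.4])
M = np.eye(2) - (snr / (1.0 + snr * (h @ h))) * np.outer(h, h)
a, val = best_integer_coeffs(M)
rate = 0.5 * np.log2(1.0 / val)           # achievable computation rate
print(f"a = {a}, rate = {rate:.2f} bits/use")
```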

    Sleep Stage Classification: A Deep Learning Approach

    Sleep occupies a significant part of human life, and the diagnosis of sleep-related disorders is of great importance. To record specific physical and electrical activities of the brain and body, a multi-parameter test called polysomnography (PSG) is normally used. The visual process of sleep stage classification is time-consuming, subjective and costly. To improve the accuracy and efficiency of sleep stage classification, automatic classification algorithms have been developed. In this research work, we focused on the pre-processing (filtering boundaries and de-noising algorithms) and classification steps of automatic sleep stage classification. The main motivation for this work was to develop a pre-processing and classification framework that cleans the input EEG signal without manipulating the original data, thus enhancing the learning stage of deep learning classifiers. For pre-processing EEG signals, a lossless adaptive artefact removal method was proposed. Unlike other works that used artificial noise, we used real EEG data contaminated with EOG and EMG to evaluate the proposed method. The proposed adaptive algorithm led to a significant enhancement in the overall classification accuracy. In the classification area, we evaluated the performance of the most common sleep stage classifiers using a comprehensive set of features extracted from PSG signals. Considering the challenges and limitations of conventional methods, we proposed two deep learning-based methods for classification of sleep stages, based on a Stacked Sparse AutoEncoder (SSAE) and a Convolutional Neural Network (CNN). The proposed methods performed more efficiently by eliminating the need for the conventional feature selection and feature extraction steps, respectively. Moreover, although our systems were trained with fewer samples than similar studies, they were able to achieve state-of-the-art accuracy and higher overall sensitivity.
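    As a sense of scale for the CNN route, which learns features directly from the raw signal, here is a minimal PyTorch sketch that maps 30-second single-channel EEG epochs to five sleep stages. The layer sizes and input length (3000 samples, i.e. 30 s at an assumed 100 Hz) are illustrative choices, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class SleepStageCNN(nn.Module):
    """Minimal 1-D CNN sketch: raw EEG epoch in, stage logits out."""
    def __init__(self, n_stages=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),       # fixed-size summary of the epoch
        )
        self.classifier = nn.Linear(32 * 16, n_stages)

    def forward(self, x):                   # x: (batch, 1, 3000)
        z = self.features(x)
        return self.classifier(z.flatten(1))

logits = SleepStageCNN()(torch.randn(4, 1, 3000))   # -> shape (4, 5)
```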

    Timing-Error Tolerance Techniques for Low-Power DSP: Filters and Transforms

    Low-power Digital Signal Processing (DSP) circuits are critical to commercial System-on-Chip design for battery-powered devices. Dynamic Voltage Scaling (DVS) of digital circuits can reclaim worst-case supply voltage margins for delay variation, reducing power consumption. However, removing static margins without compromising robustness is tremendously challenging, especially in an era of escalating reliability concerns due to continued process scaling. The Razor DVS scheme addresses these concerns by ensuring robustness using explicit timing-error detection and correction circuits. Nonetheless, the design of low-complexity and low-power error correction is often challenging. In this thesis, the Razor framework is applied to fixed-precision DSP filters and transforms. The inherent error tolerance of many DSP algorithms is exploited to achieve very low-overhead error correction. Novel error correction schemes for DSP datapaths are proposed, with very low-overhead circuit realisations. Two new approximate error correction approaches are proposed. The first is based on an adapted sum-of-products form that prevents errors in intermediate results from reaching the output, while the second forces errors to occur only in the less significant bits of each result by shaping the critical path distribution. A third approach achieves exact error correction using time borrowing techniques on critical paths. Unlike previously published approaches, all three proposed approaches are suitable for high clock frequency implementations, as demonstrated with fully placed and routed FIR, FFT and DCT implementations in 90nm and 32nm CMOS. Design issues and theoretical modelling are presented for each approach, along with SPICE simulation results demonstrating power savings of 21–29%. Finally, the design of a baseband transmitter in 32nm CMOS for the Spectrally Efficient FDM (SEFDM) system is presented. SEFDM systems offer bandwidth savings compared to Orthogonal FDM (OFDM), at the cost of increased complexity and power consumption, which is quantified with the first VLSI architecture.
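    The rationale behind confining timing errors to less significant bits can be illustrated in software. The sketch below is a toy behavioural model, not anything from the thesis: it injects random bit flips into the outputs of an integer FIR filter and measures how the output SNR depends on which bit position the errors land in.

```python
import numpy as np

rng = np.random.default_rng(0)
taps = np.array([1, 4, 6, 4, 1], dtype=np.int64)       # toy integer FIR
x = rng.integers(-2**11, 2**11, size=4096)
y_ref = np.convolve(x, taps)

def inject_bit_errors(y, bit, rate, rng):
    """Flip one bit of randomly chosen outputs, mimicking timing errors;
    confining flips to low-order bits models the effect of the proposed
    critical-path shaping (an illustrative model, not the circuit)."""
    y = y.copy()
    hits = rng.random(y.size) < rate
    y[hits] ^= 1 << bit
    return y

for bit in (0, 4, 12):     # LSB errors vs. errors in more significant bits
    y_err = inject_bit_errors(y_ref, bit, rate=0.01, rng=rng)
    snr = 10 * np.log10(np.sum(y_ref.astype(float)**2) /
                        np.sum((y_err - y_ref).astype(float)**2))
    print(f"errors at bit {bit}: output SNR {snr:.1f} dB")
```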

    Measurement, optimisation and control of particle properties in pharmaceutical manufacturing processes

    The understanding and optimisation of particle properties connected to their structure and morphology is a common objective for particle engineering applications, either to improve material handling in the manufacturing process or to influence Critical Quality Attributes (CQAs) linked to product performance. This work aims to demonstrate experimental means to support a rational development approach for pharmaceutical particulate systems, with a specific focus on droplet drying platforms such as spray drying. Micro-X-ray tomography (micro-XRT) is widely applied in areas such as the geosciences and biomedical sciences to enable a three-dimensional investigation of specimens. Chapter 4 elaborates on practical aspects of micro-XRT for the quantitative analysis of pharmaceutical solid products, with an emphasis on the implemented image processing and analysis methodologies. Potential applications of micro-XRT in the pharmaceutical manufacturing process range from the characterisation of single crystals to fully formulated oral dosage forms. The extracted quantitative information can be utilised to directly inform product design and production for process development or optimisation. The non-destructive nature of micro-XRT analysis can further be employed to investigate structure-performance relationships, which might provide valuable insights for modelling approaches. Chapter 5 demonstrates the applicability of micro-XRT to the analysis of ibuprofen capsules as a multi-particulate system, each capsule containing a population of approximately 300 pellets. In-depth analysis of the collected micro-XRT image data allowed the extraction of more than 200 features quantifying aspects of the pellets’ size, shape, porosity, surface and orientation. Feature selection and machine learning methods enabled the detection of broken pellets within a classification model. The classification model achieved an accuracy of more than 99.55% and a minimum precision of 86.20%, validated with a test dataset of 886 pellets from three capsules. The combination of single droplet drying (SDD) experiments with subsequent micro-XRT analysis was used for a quantitative investigation of the particle design space and is described in Chapter 6. The implemented platform was applied to investigate the solidification of formulated metformin hydrochloride particles using D-mannitol and hydroxypropyl methylcellulose within a selected, pragmatic particle design space. The results indicate a significant impact of hydroxypropyl methylcellulose in reducing liquid evaporation rates and particle drying kinetics. The morphology and internal structure of the formulated particles after drying are dominated by a crystalline core of D-mannitol, partially suppressed with increasing hydroxypropyl methylcellulose additions. The characterisation of formulated metformin hydrochloride particles with increasing polymer content demonstrated the importance of an early-stage quantitative assessment of formulation-related particle properties. A reliable and rational spray drying development approach needs to assess parameters of the compound system as well as of the process itself in order to define a well-controlled and robust operational design space. Chapter 7 presents strategies for process implementation to produce peptide-based formulations via spray drying, demonstrated using s-glucagon as a model peptide. The process implementation was supported by an initial characterisation of the lab-scale spray dryer, assessing a range of relevant independent process variables including drying temperature and feed rate. The platform response was captured with available and in-house developed Process Analytical Technology. A B-290 Mini-Spray Dryer was used to verify the development approach and to implement the pre-designed spray drying process. Information on the particle formation mechanism observed in SDD experiments was utilised to interpret the characteristics of the spray-dried material.
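    The broken-pellet detection step in Chapter 5, per-pellet features in and a binary label out, follows a standard supervised pattern. Here is a minimal scikit-learn sketch on synthetic stand-in data; the feature columns, label rule and RandomForest choice are illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score

# Hypothetical feature table: one row per pellet, columns standing in for
# the ~200 micro-XRT features (volume, sphericity, porosity, surface, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))
y = ((X[:, 0] + 0.5 * X[:, 3] +
      rng.normal(scale=0.3, size=2000)) > 1.5).astype(int)   # 1 = "broken"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred),
      "precision:", precision_score(y_te, pred))
```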

    Transmission strategies for broadband wireless systems with MMSE turbo equalization

    This monograph details efficient transmission strategies for single-carrier wireless broadband communication systems employing iterative (turbo) equalization. In particular, the first part focuses on the design and analysis of low-complexity and robust MMSE-based turbo equalizers operating in the frequency domain. Accordingly, several novel receiver schemes are presented which improve the convergence properties and error performance over existing turbo equalizers. The second part discusses concepts and algorithms that aim to increase the power and spectral efficiency of the communication system by efficiently exploiting the available resources at the transmitter side based upon the channel conditions. The challenging issue encountered in this context is how the transmission rate and power can be optimized while guaranteeing a specific convergence constraint of the turbo equalizer.

    This thesis is concerned with the design and analysis of efficient transmission concepts for wireless broadband single-carrier communication systems with iterative (turbo) equalization and channel decoding. This comprises, on the one hand, the development of low-complexity receiver-side frequency-domain equalizers based on the principle of Soft Interference Cancellation Minimum Mean Squared Error (SC-MMSE) filtering and, on the other hand, the design of transmitter-side algorithms that exploit channel state information to improve the bandwidth and power efficiency of single- and multi-user systems with multiple antennas (Multiple-Input Multiple-Output, MIMO). The first part of this work presents a general framework for turbo equalization based on linear MMSE estimation, nonlinear MMSE estimation, and combined MMSE and maximum a posteriori (MAP) estimation. In this context, two new receiver concepts are introduced that improve performance and convergence over existing SC-MMSE turbo equalizers in various channel environments. The first receiver, PDA SC-MMSE, combines the Probabilistic Data Association (PDA) approach with the well-known SC-MMSE equalizer. In contrast to SC-MMSE, PDA SC-MMSE uses internal decision feedback, so that interference suppression takes into account not only the a priori information from the channel decoder but also soft decisions from previous detection steps. Thanks to this additional internal decision feedback, PDA SC-MMSE achieves a substantial performance gain over SC-MMSE in spatially uncorrelated MIMO channels without significantly increasing the complexity of the equalizer. The second receiver, hybrid SC-MMSE, couples group-based SC-MMSE frequency-domain filtering with MAP detection. This receiver has scalable computational complexity and is highly robust to spatial correlation in MIMO channels. Numerical results from simulations based on channel-sounder measurements in multi-user channels with strong spatial correlation convincingly demonstrate the superiority of the hybrid SC-MMSE approach over the conventional SC-MMSE-based receiver. The second part examines the influence of system and channel model parameters on the convergence behaviour of the presented iterative receivers using so-called correlation charts. Semi-analytical computation of the equalizer and channel decoder correlation functions yields a simple rule for predicting the bit error probability of SC-MMSE and PDA SC-MMSE turbo equalizers in MIMO fading channels. In addition, two bounds on the outage probability of the receivers are presented. The semi-analytical method and the derived bounds enable low-effort estimation and optimization of the performance of the iterative system. The third and final part investigates rate and power allocation strategies for communication systems with conventional iterative SC-MMSE receivers. First, the problem of maximizing the instantaneous sum rate for a two-user channel with fixed power allocation is considered, subject to the convergence of the iterative receiver. Using the area theorem for Extrinsic Information Transfer (EXIT) functions, an upper bound on the achievable rate region is derived. Based on this bound, a simple algorithm is developed that selects, for each user from a set of predefined channel codes of different rates, the code that improves the instantaneous throughput of the multi-user system. Besides instantaneous rate allocation, an outage-based rate allocation approach is also developed, in which the channel codes for the users are selected such that a given outage probability of the iterative receiver is met. Furthermore, a new design criterion for irregular convolutional codes is derived that reduces the outage probability of turbo SC-MMSE systems and thus increases the reliability of the data transmission. A range of simulation results for capacity and throughput calculations are presented that confirm the effectiveness of the proposed algorithms and optimization methods in multi-user channels. Finally, various measures for minimizing the transmit power in single-user systems with transmitter-side Singular Value Decomposition (SVD) based precoding are examined. It is shown that a method that optimizes the transmitter's power levels with respect to the bit error rate of the iterative receiver outperforms conventional power allocation schemes.
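    To make the core receiver operation concrete, below is a simplified single-antenna Python sketch of one soft-interference-cancellation MMSE frequency-domain equalization pass for a cyclic-prefixed single-carrier block. The formulas follow the commonly published SC-MMSE FDE form; the function name, toy channel and simplifications are assumptions, and nothing here captures the thesis's multi-antenna PDA or hybrid variants.

```python
import numpy as np

def sc_mmse_fde(y, h, x_bar, sigma2):
    """One SC-MMSE frequency-domain equalization pass (simplified:
    single antenna, unit-energy symbols, cyclic-prefixed block).
    y: received block, h: channel impulse response,
    x_bar: soft symbol estimates fed back from the decoder."""
    N = len(y)
    H = np.fft.fft(h, N)                       # channel frequency response
    Y, X_bar = np.fft.fft(y), np.fft.fft(x_bar)
    v = max(1.0 - np.mean(np.abs(x_bar) ** 2), 1e-12)   # residual variance
    W = np.conj(H) / (v * np.abs(H) ** 2 + sigma2)      # MMSE filter
    mu = np.mean(W * H)
    Z = W * (Y - H * X_bar) + mu * X_bar       # cancel interference, re-insert
    return np.fft.ifft(Z)

# First iteration uses no a priori feedback (x_bar = 0); later iterations
# pass in soft symbols computed from the decoder's output LLRs.
rng = np.random.default_rng(4)
x = rng.choice([-1.0, 1.0], size=64)           # BPSK block
h = np.array([1.0, 0.5, 0.2])                  # toy multipath channel
y = np.fft.ifft(np.fft.fft(h, 64) * np.fft.fft(x)).real \
    + rng.normal(scale=0.1, size=64)
z = sc_mmse_fde(y, h, np.zeros(64), sigma2=0.01)
```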

    Humanoid Robots

    For many years, human beings have tried in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with growing technological advances grounded in theoretical and experimental research, it has become possible, to some extent, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, most of them autonomous systems, but also to deepen our knowledge of the systems that form the human body, with a view to possible applications in rehabilitation technology, bringing together studies related not only to robotics but also to biomechanics, biomimetics and cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by various researchers worldwide, which analyse and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision and locomotion.

    Reinforcement learning in a multi-agent framework for pedestrian simulation

    The objective of this thesis is to use Reinforcement Learning to generate plausible simulations of pedestrians in different environments. Methodology: a multi-agent framework has been developed in which each virtual agent learns a navigation behaviour through interaction with the virtual world it inhabits alongside the other agents. The virtual world is simulated with a physics engine (ODE) calibrated with human pedestrian parameters taken from the literature. The framework is flexible and allows different learning algorithms to be used (specifically Q-Learning and Sarsa(lambda)) in combination with different state-space generalisation techniques (specifically vector quantization and tile coding). Fundamental diagrams (speed/density relationship), density maps, chronograms and performance measures (in terms of the percentage of agents that manage to reach the goal) are used as tools to analyse the learned behaviours. Conclusions: after a battery of experiments in different scenarios (six distinct scenarios in total) and the corresponding analysis of the results, the conclusions are as follows: plausible pedestrian behaviours were achieved; the behaviours are robust to scaling and exhibit abstraction capabilities (behaviours at the tactical and planning levels); the learned behaviours are able to generate emergent collective behaviours; and the comparison with another standard pedestrian model (Helbing's model), together with the analyses carried out at the level of fundamental diagrams, indicates that the learned dynamics are coherent and similar to real pedestrian dynamics.
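    The learning loop each agent runs is, at its core, a standard temporal-difference update over a discretised state space. Below is a minimal tabular Q-learning sketch; the state and action sizes and the toy environment are placeholders standing in for the quantised pedestrian state and the ODE-simulated world.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 256, 8   # e.g. quantised (goal bearing, local density) x motion choices
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Hypothetical stand-in environment: returns (next_state, reward, done).
    A real run would query the physics-simulated world instead."""
    next_state = int(rng.integers(n_states))
    reached_goal = next_state == 0
    return next_state, (1.0 if reached_goal else -0.01), reached_goal

for episode in range(200):
    s, done = int(rng.integers(1, n_states)), False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])    # Q-learning TD update
        s = s2
```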

    Dynamic Generalisation of Continuous Action Spaces in Reinforcement Learning: A Neurally Inspired Approach

    Institute for Adaptive and Neural Computation. Award number: 98318242.
    This thesis is about the dynamic generalisation of continuous action spaces in reinforcement learning problems. The standard Reinforcement Learning (RL) account provides a principled and comprehensive means of optimising a scalar reward signal in a Markov Decision Process. However, the theory itself does not directly address the imperative issue of generalisation which naturally arises as a consequence of large or continuous state and action spaces. A current thrust of research is aimed at fusing the generalisation capabilities of supervised (and unsupervised) learning techniques with the RL theory. An example par excellence is Tesauro’s TD-Gammon. Although much effort has gone into researching ways to represent and generalise over the input space, much less attention has been paid to the action space. This thesis first considers the motivation for learning real-valued actions, and then proposes a set of key properties desirable in any candidate algorithm addressing generalisation of both input and action spaces. These properties include: provision of adaptive and online generalisation, adherence to the standard theory with a central focus on estimating expected reward, provision for real-valued states and actions, and full support for a real-valued discounted reward signal. Of particular interest are issues pertaining to robustness in non-stationary environments, scalability, and efficiency for real-time learning in applications such as robotics. Since exploring the action space is discovered to be a potentially costly process, the system should also be flexible enough to enable maximum reuse of learned actions. A new approach is proposed which succeeds for the first time in addressing all of the key issues identified. The algorithm, which is based on the ubiquitous self-organising map, is analysed and compared with other techniques including those based on the backpropagation algorithm. The investigation uncovers some important implications of the differences between these two particular approaches with respect to RL. In particular, the distributed representation of the multi-layer perceptron is judged to be something of a double-edged sword offering more sophisticated and more scalable generalising power, but potentially causing problems in dynamic or non-equiprobable environments, and tasks involving a highly varying input-output mapping. The thesis concludes that the self-organising map can be used in conjunction with current RL theory to provide real-time dynamic representation and generalisation of continuous action spaces. The proposed model is shown to be reliable in non-stationary, unpredictable and noisy environments and judged to be unique in addressing and satisfying a number of desirable properties identified as important to a large class of RL problems.
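    As a flavour of how a self-organising map can carry a continuous action space, here is a toy single-state ("bandit") Python sketch in which each map unit stores an action prototype and a reward estimate, and the neighbourhood update pulls prototypes toward actions that beat expectations. The update rules and constants are illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_side, action_dim = 5, 2
n_units = n_side * n_side
protos = rng.uniform(-1, 1, size=(n_units, action_dim))   # action prototypes
q = np.zeros(n_units)                                     # reward estimates
alpha_q, alpha_som, sigma, eps = 0.2, 0.1, 1.5, 0.2
grid = np.array([(i, j) for i in range(n_side) for j in range(n_side)], float)

def reward(a):
    # Toy task: the best action is (0.3, -0.5).
    return -np.linalg.norm(a - np.array([0.3, -0.5]))

for t in range(3000):
    # epsilon-greedy choice of a map unit, then local exploration noise
    u = int(rng.integers(n_units)) if rng.random() < eps else int(q.argmax())
    a = protos[u] + rng.normal(scale=0.05, size=action_dim)
    r = reward(a)
    if r > q[u]:
        # action beat the unit's estimate: pull the neighbourhood toward it
        d2 = ((grid - grid[u]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                # SOM neighbourhood
        protos += alpha_som * h[:, None] * (a - protos)
    q[u] += alpha_q * (r - q[u])                          # estimate update

print("best learned action:", protos[int(q.argmax())])   # approx (0.3, -0.5)
```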