165 research outputs found

    Basic Space Mapping: A Retrospective and its Application to Design Optimization of Nonlinear RF and Microwave Circuits

    Space mapping (SM) is one of the most powerful and computationally efficient design optimization methodologies in RF and microwave engineering. Its impressive evolution in terms of algorithmic variations and diverse engineering applications is well documented. Most SM-based design optimization cases, including those solved by advanced and sophisticated space mapping formulations, have been demonstrated on linear frequency-domain microwave circuits. In this paper, we provide a brief retrospective on the emergence of space mapping, including its initial impression on a worldwide authority on nonlinear microwave circuit simulation and design: Prof. Vittorio Rizzoli. We briefly review some of the most fundamental space mapping optimization concepts and emphasize their applicability to nonlinear transient-domain microwave circuit design optimization. We illustrate this with a typical problem of high-speed digital signal conditioning: the physical design of a set of CMOS inverters driving an FR4 printed circuit board interconnect.
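    The basic space mapping loop the abstract reviews can be sketched on a toy problem. Everything below (the shifted-quadratic "fine" and "coarse" models and the brute-force 1-D minimizer) is invented for illustration; only the aggressive space mapping update itself follows the technique the paper names.

```python
# Minimal sketch of aggressive space mapping (ASM) on a 1-D toy problem.
# The "fine" and "coarse" models are illustrative stand-ins, not the
# CMOS/FR4 interconnect problem from the paper.

def fine(x):          # expensive, accurate model (toy: shifted quadratic)
    return (x - 1.2) ** 2

def coarse(x):        # cheap, approximate model
    return (x - 1.0) ** 2

def argmin_scan(f, lo=-3.0, hi=3.0, n=6001):
    # brute-force 1-D minimizer, good enough for this sketch
    best_x, best_v = lo, f(lo)
    for i in range(1, n):
        x = lo + (hi - lo) * i / (n - 1)
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def aggressive_space_mapping(x0, iters=10):
    xc_star = argmin_scan(coarse)          # optimum of the cheap model
    x = x0
    for _ in range(iters):
        fx = fine(x)                       # one fine-model evaluation per step
        # parameter extraction: coarse input that reproduces the fine response
        p = argmin_scan(lambda z: abs(coarse(z) - fx))
        x = x - (p - xc_star)              # ASM update (unit mapping Jacobian)
        if abs(p - xc_star) < 1e-6:
            break
    return x
```

    For these toy models the mapping is a constant shift, so the loop lands on the fine optimum after a single fine-model evaluation, which is the economy space mapping is built around.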

    Eye-Height/Width Prediction using Artificial Neural Networks from S-Parameters with Vector Fitting

    Artificial neural networks (ANNs) have been used to model microwave and RF devices over the years. Conventionally, S-parameters of microwave/RF designs are used as the inputs of neural network models to predict the electrical properties of the designs. However, using the S-parameters directly as inputs to the ANN results in a large number of inputs, which slows down the training and configuration process. In this paper, a new method is proposed that first decomposes the S-parameters into poles and residues using vector fitting; the poles and residues are then used as the input data during configuration and training of the neural networks. Test cases show that an ANN trained using the proposed method is able to predict the eye heights and eye widths of typical interconnect structures with minimal error, while showing a significant speed improvement over the conventional method.
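    The compression idea can be sketched with the linear half of vector fitting: with candidate poles fixed, the residues of a pole-residue model follow from a least-squares solve, and the handful of resulting (pole, residue) values replaces hundreds of raw frequency samples as ANN inputs. The synthetic two-pole response below is invented for illustration; full vector fitting also relocates the poles iteratively, which this sketch omits.

```python
# Sketch of the feature-compression step: approximate a sampled response by
# H(s) ~ sum_k r_k / (s - p_k) + d, then feed the few (p_k, r_k) values to the
# ANN instead of the raw samples. Poles are held fixed here for simplicity.
import numpy as np

def fit_residues(s, H, poles):
    # linear least squares for residues r_k and constant d, poles fixed
    A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
    x, *_ = np.linalg.lstsq(A, H, rcond=None)
    return x[:-1], x[-1].real

# synthetic "S-parameter" trace built from two known stable poles
true_poles = np.array([-1.0 + 5.0j, -1.0 - 5.0j])
true_res   = np.array([ 0.5 - 0.2j,  0.5 + 0.2j])
s = 1j * np.linspace(0.1, 20.0, 400)          # 400 raw samples per curve
H = sum(r / (s - p) for r, p in zip(true_res, true_poles)) + 0.1

res, d = fit_residues(s, H, true_poles)
# 400 complex samples compressed to 2 poles + 2 residues + 1 constant
```

    The ANN input vector then has a size set by the model order, not by the frequency sweep length, which is the source of the training speedup the abstract reports.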

    Digital Predistortion of 5G Millimeter-Wave Active Phased Arrays using Artificial Neural Networks


    Optics for AI and AI for Optics

    Artificial intelligence is deeply involved in our daily lives, reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks in both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today's telecommunications) is penetrating short-reach connections down to the chip level, thus meeting AI technology and creating numerous opportunities. This book is about the marriage of optics and AI and how each can benefit from the other. Optics facilitates on-chip neural networks based on fast optical computing and energy-efficient interconnects and communications. In turn, AI provides efficient tools to address the challenges of today's optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers from both academia and industry to discuss the challenges and solutions in each of the respective fields.

    Technology Directions for the 21st Century

    The Office of Space Communications (OSC) is tasked by NASA to conduct a planning process to meet NASA's science mission and other communications and data processing requirements. A set of technology trend studies was undertaken by Science Applications International Corporation (SAIC) for OSC to identify quantitative data that can be used to predict the performance of electronic equipment in the future to assist in the planning process. Only commercially available, off-the-shelf technology was included. For each technology area considered, the current state of the technology is discussed, future applications that could benefit from use of the technology are identified, and likely future developments of the technology are described. The impact of each technology area on NASA operations is presented together with a discussion of the feasibility and risk associated with its development. An approximate timeline is given for the next 15 to 25 years to indicate the anticipated evolution of capabilities within each of the technology areas considered. This volume contains four chapters: one each on technology trends for database systems, computer software, neural and fuzzy systems, and artificial intelligence. The principal study results are summarized at the beginning of each chapter.

    High-speed Channel Analysis and Design using Polynomial Chaos Theory and Machine Learning

    With the exponential increase in the data rates of high-speed serial channels, their efficient and accurate analysis and design has become crucially important. Signal integrity analysis of these channels is often done with eye diagram analysis, which shows the jitter and noise of the channel. Conventional methods for this type of analysis are either exorbitantly time- and memory-consuming, or only applicable to linear time-invariant (LTI) systems. On the other hand, recent advancements in numerical methods and machine learning have shown great potential for the analysis and design of high-speed electronics. In this dissertation we therefore introduce two novel approaches for efficient eye analysis, based on machine learning and numerical techniques. These methods focus on data-dependent jitter and noise and on intersymbol interference. In the first approach, a complete surrogate model of the channel is trained using a short transient simulation. This model is based on Polynomial Chaos theory. It can directly and quickly provide the distribution of the jitter and other statistics of the eye diagram, and it also provides an estimation of the full eye diagram. The second method targets faster analysis when we are interested in the worst-case eye width, eye height, and inner eye opening, which conventional eye analysis would only reach if its transient simulation were continued for an arbitrarily long time. The proposed approach quickly finds the data patterns resulting in the worst signal integrity, and hence the most closed eye. This method is based on Bayesian optimization. Although the majority of the contributions of this dissertation concern analysis, for the sake of completeness the final portion of this work is dedicated to the design of high-speed channels with machine learning, since interference and the complex interactions in modern channels have made their design challenging and time-consuming as well.
    The proposed design approach focuses on the inverse design of a CTLE: the desired eye height and eye width are given, and the algorithm finds the corresponding peaking and DC gain of the CTLE. This approach is based on invertible neural networks, whose main advantage is the ability to provide multiple solutions when the answer to the inverse problem is not unique. Numerical examples are provided to evaluate the efficiency and accuracy of the proposed approaches. The results show up to 11.5X speedup for direct estimation of the jitter distribution using the PC surrogate model, and up to 23X speedup using the worst-case eye analysis approach; the inverse design of the CTLE also shows promising results.
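    The surrogate idea behind the first approach can be sketched in miniature: expand the response in an orthogonal polynomial basis matched to the input distribution, fit the coefficients by regression, and read statistics straight off the coefficients. The scalar "eye metric" g() and the single uniform input are toy stand-ins, not the dissertation's channel model; a uniform input pairs with the Legendre basis under the Wiener-Askey scheme.

```python
# Minimal polynomial-chaos (PC) surrogate fitted by least-squares regression.
import numpy as np
from numpy.polynomial import legendre

def g(u):                       # toy stand-in for an eye-height response
    return 1.0 + 0.5 * u + 0.2 * u**2

order = 4
u = np.linspace(-1, 1, 200)                   # training samples
V = legendre.legvander(u, order)              # Legendre design matrix
coef, *_ = np.linalg.lstsq(V, g(u), rcond=None)

# the PC mean is the P_0 coefficient, since E[P_k(U)] = 0 for k >= 1, U~U(-1,1)
pc_mean = coef[0]
mc_mean = g(np.random.default_rng(0).uniform(-1, 1, 200_000)).mean()
```

    The appeal is exactly what the abstract claims: once the cheap regression is done, moments of the response come from the coefficients directly, with no long transient or Monte Carlo sweep.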

    Computational aspects of cellular intelligence and their role in artificial intelligence.

    The work presented in this thesis is concerned with an exploration of the computational aspects of the primitive intelligence associated with single-celled organisms. The main aim is to explore this Cellular Intelligence and its role within Artificial Intelligence. The findings of an extensive literature search into the biological characteristics, properties and mechanisms associated with Cellular Intelligence, its underlying machinery (Cell Signalling Networks), and the existing computational methods used to capture it are reported. The results of this search are then used to fashion the development of a versatile new connectionist representation, termed the Artificial Reaction Network (ARN). The ARN belongs to the branch of Artificial Life known as Artificial Chemistry and has properties in common with both Artificial Intelligence and Systems Biology techniques, including Artificial Neural Networks, Artificial Biochemical Networks, Gene Regulatory Networks, Random Boolean Networks, Petri Nets, and S-Systems. The thesis outlines the following original work. The ARN is used to model the chemotaxis pathway of Escherichia coli and is shown to capture emergent characteristics associated with this organism and with Cellular Intelligence more generally. The computational properties of the ARN and its applications in robotic control are explored by combining functional motifs found in biochemical networks to create temporally changing waveforms which control the gaits of limbed robots. This system is then extended into a complete control system by combining pattern recognition with limb control in a single ARN. The results show that the ARN can offer increased flexibility over existing methods. Multiple distributed cell-like ARN-based agents, termed Cytobots, are created. These are first used to simulate aggregating cells based on the slime mould Dictyostelium discoideum. The Cytobots are shown to capture emergent behaviour arising from multiple stigmergic interactions.
    Applications of Cytobots within swarm robotics are investigated by applying them to benchmark search problems and to the task of cleaning up a simulated oil spill. The results are compared to those of established optimization algorithms using similar cell-inspired strategies, and to other robotic agent strategies. Consideration is given to the advantages and disadvantages of the technique, and suggestions are made for future work in the area. The report concludes that the Artificial Reaction Network is a versatile and powerful technique with applications both in the simulation of chemical systems and in robotic control, where it can offer a higher degree of flexibility and computational efficiency than benchmark alternatives. Furthermore, it provides a tool which may throw further light on the origins and limitations of the primitive intelligence associated with cells.
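    A flavour of the emergent behaviour such biochemical-network models capture is perfect adaptation in E. coli chemotaxis: activity spikes when the stimulus changes, then returns to baseline regardless of the stimulus level. The sketch below is a generic integral-feedback adaptation motif in the spirit of the Barkai-Leibler model, not the ARN formalism itself; all constants are invented.

```python
# Toy integral-feedback adaptation motif (simplified chemotaxis-style
# behaviour; not the thesis's Artificial Reaction Network).

def simulate(ligand_step=2.0, k=0.5, dt=0.01, steps=4000):
    m = 0.0                      # methylation-like memory variable
    trace = []
    for i in range(steps):
        L = ligand_step if i * dt >= 5.0 else 0.0   # step stimulus at t = 5
        a = L - m                # receptor activity
        m += k * a * dt          # memory integrates the activity (feedback)
        trace.append(a)
    return trace

act = simulate()
# activity jumps right after the step, then adapts back toward zero
```

    Because the memory variable integrates activity, the steady state forces activity back to zero for any constant stimulus, which is the "perfect adaptation" signature chemotaxis models aim to reproduce.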

    Connectionist models and figurative speech

    This paper contains an introduction to connectionist models. Then we focus on the question of how novel figurative usages of descriptive adjectives may be interpreted in a structured connectionist model of conceptual combination. The suggestion is that inferences drawn from an adjective's use in familiar contexts form the basis for all possible interpretations of the adjective in a novel context. The more plausible of the possibilities, it is speculated, are reinforced by some form of one-shot learning, rendering the interpretative process obsolete after only one (memorable) encounter with a novel figure of speech.
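    The one-shot reinforcement idea can be illustrated with the simplest connectionist device available: a single Hebbian outer-product update that binds an adjective pattern to its chosen interpretation, so later encounters retrieve the reading directly. The random bipolar patterns are invented for illustration and stand in for the paper's structured representations.

```python
# One-shot Hebbian association: a single weight update stores the link
# between an adjective pattern and its selected interpretation.
import numpy as np

rng = np.random.default_rng(1)
adjective      = np.sign(rng.standard_normal(32))   # pattern for the adjective in a novel context
interpretation = np.sign(rng.standard_normal(32))   # the plausible reading chosen once

W = np.zeros((32, 32))
W += np.outer(interpretation, adjective) / 32       # single (one-shot) Hebbian update

recalled = np.sign(W @ adjective)                   # later encounter: direct recall
```

    After the single update, presenting the adjective pattern reproduces the stored interpretation without rerunning the interpretive search, which is the sense in which one memorable encounter makes the process obsolete.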

    Assist-as-needed EMG-based control strategy for wearable powered assistive devices

    Master's dissertation in Biomedical Engineering (specialization in Medical Electronics). Robotic-based gait rehabilitation and assistance using Wearable Powered Assistive Devices (WPADs), such as orthoses and exoskeletons, has been growing in the rehabilitation area as a means to recover and augment the motor function of neurologically impaired subjects. These WPADs should provide personalized assistance, since physical condition and muscular fatigue vary from patient to patient. In this field, electromyography (EMG) signals have been used to control WPADs given their ability to infer the user's motion intention. However, under motor disability conditions EMG signals present lower magnitudes than under healthy conditions, so WPADs driven by EMG signals alone may not provide the assistance the patient requires. The main goal of this dissertation is the development of an Assist-As-Needed (AAN) EMG-based control strategy for future integration into a Smart Active Orthotic System (SmartOs). To achieve this goal, the following elements were developed and validated: (i) an EMG system to acquire muscle activity signals from the most relevant muscles during motion of the ankle joint; (ii) a machine learning-based tool for ankle joint torque estimation, to serve as the reference in the AAN EMG-based control strategy; and (iii) a tool for real EMG-based torque estimation using the Tibialis Anterior (TA) and Gastrocnemius Lateralis (GASL) muscles and real ankle joint angles. The EMG system showed satisfactory pattern correlations with a commercial system.
    The reference ankle joint torque was generated from predicted reference ankle joint kinematics, walking speed information (from 1 to 4 km/h) and anthropometric data (body height from 1.51 m to 1.83 m and body mass from 52.0 kg to 83.7 kg), using five machine learning algorithms: Support Vector Regression (SVR), Random Forest (RF), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN). The CNN provided the best performance, predicting the reference ankle joint torque with fitting curves ranging from 74.7 to 89.8 % and Normalized Root Mean Square Errors (NRMSEs) between 3.16 and 8.02 %. EMG-based torque estimation would benefit from a higher number of muscles, since EMG data from the TA and GASL alone are not enough to estimate the real ankle joint torque.
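    The regression setup behind the reference-torque tool can be sketched with synthetic data over the same input ranges the abstract states: walking speed 1 to 4 km/h, height 1.51 to 1.83 m, mass 52.0 to 83.7 kg. The "ground-truth" torque law, the noise level, and the plain linear model below are invented stand-ins for the real gait dataset and the CNN; only the inputs and the NRMSE score mirror the dissertation.

```python
# Toy sketch of the torque-estimation pipeline: features -> regressor -> NRMSE.
import numpy as np

rng = np.random.default_rng(42)
n = 500
speed  = rng.uniform(1.0, 4.0, n)        # walking speed, km/h
height = rng.uniform(1.51, 1.83, n)      # body height, m
mass   = rng.uniform(52.0, 83.7, n)      # body mass, kg
# synthetic "ground-truth" ankle torque with noise (invented law, N*m)
torque = 0.8 * mass * speed / height + rng.normal(0.0, 1.0, n)

# linear least-squares regressor on hand-picked features
X = np.column_stack([speed, height, mass, mass * speed / height, np.ones(n)])
w, *_ = np.linalg.lstsq(X, torque, rcond=None)
pred = X @ w

# the dissertation's error metric: RMSE normalized by the output range
nrmse = np.sqrt(np.mean((pred - torque) ** 2)) / (torque.max() - torque.min())
```

    Swapping the linear solve for an SVR, RF, MLP, LSTM, or CNN changes only the middle step; the feature ranges and the NRMSE evaluation stay the same.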