
    Genetic-algorithm-optimized neural networks for gravitational wave classification

    Gravitational-wave detection strategies are based on a signal analysis technique known as matched filtering. Despite the success of matched filtering, its computational cost has prompted recent interest in developing deep convolutional neural networks (CNNs) for signal detection. Designing these networks remains a challenge, as most procedures adopt a trial-and-error strategy to set the hyperparameter values. We propose a new method for hyperparameter optimization based on genetic algorithms (GAs). We compare six different GA variants and explore different choices for the GA-optimized fitness score. We show that the GA can discover high-quality architectures when the initial hyperparameter seed values are far from a good solution, as well as refine already-good networks. For example, when starting from the architecture proposed by George and Huerta, the network optimized over the 20-dimensional hyperparameter space has 78% fewer trainable parameters while obtaining an 11% increase in accuracy for our test problem. Using genetic algorithm optimization to refine an existing network should be especially useful if the problem context (e.g., statistical properties of the noise, signal model, etc.) changes and one needs to rebuild a network. In all of our experiments, we find the GA discovers significantly less complicated networks than the seed network, suggesting it can be used to prune wasteful network structures. While we have restricted our attention to CNN classifiers, our GA hyperparameter optimization strategy can be applied within other machine learning settings.
    Comment: 25 pages, 8 figures, and 2 tables; Version 2 includes an expanded discussion of our hyperparameter optimization mode
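    To make the approach concrete, the sketch below shows one generic GA loop over a small CNN hyperparameter space. It is a minimal illustration, not the authors' code: the search space, the truncation-selection scheme, and the dummy fitness function are all assumptions standing in for training a candidate network and returning its validation accuracy.

```python
# Minimal sketch of GA-based hyperparameter search (illustrative, not the
# paper's implementation). A real fitness function would train a candidate
# CNN and return its validation accuracy.
import random

SEARCH_SPACE = {
    "n_filters": [8, 16, 32, 64],        # filters per convolutional layer
    "kernel_size": [4, 8, 16, 32],       # 1-D kernel widths
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene drawn from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    # Resample each gene with probability `rate`.
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(fitness, pop_size=20, generations=10, n_elite=4):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:n_elite]  # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - n_elite)]
        pop = elite + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    # Dummy fitness for demonstration only; stands in for validation accuracy.
    demo_fitness = lambda h: h["n_filters"] - 1000 * abs(h["learning_rate"] - 1e-3)
    print(evolve(demo_fitness))
```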

    Deep learning-based space-time coding wireless MIMO receiver optimization.

    Doctoral Degree. University of KwaZulu-Natal, Durban.
    With the high demand for high data throughput and reliable wireless links to cater for real-time or low-latency mobile application services, the wireless research community has developed wireless multiple-input multiple-output (MIMO) architectures that cater to these stringent quality of service (QoS) requirements. For wireless link reliability, spatial diversity in wireless MIMO architectures is used to increase the link reliability; beyond this, space-time block coding schemes may be used to further increase reliability by adding time diversity to the wireless link. Our research centres on the optimization of resources used in decoding space-time block coded wireless signals. There are two categories of space-time block codes (STBCs), namely orthogonal and non-orthogonal codes. We concentrate on two non-orthogonal STBC schemes: uncoded space-time labeling diversity (USTLD) and the Golden code. Both exhibit advantages over the orthogonal Alamouti STBC, although orthogonal schemes have the benefit of simple linear optimal detection, whereas non-orthogonal schemes require more complex non-linear optimal detection. For detection to occur optimally at the receiver side of a space-time block coded wireless MIMO link, we need to perform channel estimation and decoding optimally.
    USTLD has a coding gain advantage over the Alamouti STBC, meaning it can deliver higher wireless link reliability for the same spectral efficiency. Despite this advantage, to the best of our knowledge, the literature has concentrated on USTLD transmission under the assumption that the receiver has full, error-free knowledge of the wireless channel. We therefore study USTLD wireless MIMO transmission with imperfect channel estimation. The traditional least-squares (LS) and minimum mean squared error (MMSE) estimators used in the literature for pilot-assisted channel estimation require full knowledge of the transmitted pilot symbols and/or the second-order statistics of the wireless channel, which may not always be available. We therefore propose blind channel estimation, facilitated by a deep learning model, that requires no prior knowledge of the channel's second-order statistics, the transmitted pilot symbols, or the average noise power. We also derive an optimal number of pilot symbols that may be used for USTLD wireless MIMO channel estimation without compromising wireless link reliability. Monte Carlo simulations show that the error rate performance of USTLD transmission is not compromised even when using only 20% of the Zadoff-Chu sequence pilot symbols required by the traditional LS and MMSE channel estimators, for both 16-QAM and 16-PSK baseband modulation.
    The Golden code is an STBC scheme with spatial multiplexing gain over the Alamouti scheme, meaning it can deliver higher spectral efficiencies for the same link reliability as the Alamouti scheme. The Alamouti scheme has been implemented in modern wireless standards because it adds time diversity to wireless MIMO links at low decoding complexity. The Golden code adds time diversity and improves wireless MIMO spectral efficiency, but at the cost of much higher decoding complexity relative to the Alamouti scheme; because of this, it has not been widely adopted in modern wireless standards. We therefore propose analytical and deep learning-based sphere-decoding algorithms to lower the number of detection floating-point operations (FLOPs) and the decoding latency of the Golden code under low- and high-density M-ary quadrature amplitude modulation (M-QAM) baseband transmissions, whilst maintaining near-optimal error rate performance. The proposed sphere-decoding algorithms achieve up to a 99% reduction in Golden code detection FLOPs at low SNR, relative to the sphere-decoder with sorted detection subsets (SD-SDS), whilst maintaining the error rate performance. For high-density M-QAM Golden code transmission, the proposed analytical and deep learning sphere-decoders reduce decoding latency by up to 70%, relative to the SD-SDS decoder, without diminishing the error rate performance.
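    For context, the classical pilot-assisted least-squares baseline that the thesis compares against can be sketched in a few lines of NumPy. The dimensions, the unit-modulus pilots (stand-ins for Zadoff-Chu sequences), and the noise level below are illustrative assumptions, not the thesis configuration.

```python
# Sketch of pilot-assisted least-squares MIMO channel estimation, the
# classical baseline mentioned above (illustrative parameters throughout).
import numpy as np

def ls_channel_estimate(Y, X_pilot):
    """Y: (n_rx, n_p) received pilots; X_pilot: (n_tx, n_p) known pilots.
    Returns H_hat (n_rx, n_tx) minimizing ||Y - H X_pilot||_F^2."""
    return Y @ np.linalg.pinv(X_pilot)

rng = np.random.default_rng(0)
n_tx, n_rx, n_p = 2, 2, 8
# Rayleigh-fading channel and unit-modulus pilot symbols.
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
X = np.exp(1j * np.pi * rng.integers(0, 4, (n_tx, n_p)) / 2)
noise = 0.05 * (rng.standard_normal((n_rx, n_p)) + 1j * rng.standard_normal((n_rx, n_p)))
Y = H @ X + noise

H_hat = ls_channel_estimate(Y, X)
print("relative error:", np.linalg.norm(H - H_hat) / np.linalg.norm(H))
```

    Note what the estimator needs: the transmitted pilots X must be known exactly at the receiver, which is precisely the requirement the proposed blind deep-learning estimator removes.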

    Complexity, Emergent Systems and Complex Biological Systems: Complex Systems Theory and Biodynamics. [Edited book by I.C. Baianu, with listed contributors (2011)]

    An overview is presented of system dynamics (the study of the behaviour of complex systems), dynamical systems in mathematics, dynamic programming in computer science and control theory, complex systems biology, neurodynamics, and psychodynamics.

    Data analysis and source modelling for LISA

    [No abstract available]

    Advanced photonic and electronic systems WILGA 2018

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers around 400 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. Wilga is a good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over Wilga is held by the Elektronika technical journal (SEP), IJET, and Proceedings of SPIE; the latter editorial series publishes more than 200 papers from Wilga annually. Wilga 2018 was the XLII edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. This article is a digest of selected works presented during the Wilga 2018 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445; WILGA 2018 works were published in Proc. SPIE vol. 10808.

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures
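    As a flavour of the reinforcement-learning methods such surveys cover, the toy sketch below applies a stateless (bandit-style) Q-learning update to dynamic channel selection in a cognitive-radio setting. The channel idle probabilities and reward model are invented purely for illustration, not taken from the article.

```python
# Toy stateless Q-learning (multi-armed bandit) for channel selection;
# illustrative of RL in cognitive radio, not from the surveyed article.
import random

N_CHANNELS = 4
P_IDLE = [0.9, 0.6, 0.3, 0.8]   # hypothetical per-channel idle probabilities
Q = [0.0] * N_CHANNELS           # one Q-value per channel (single state)
alpha, eps = 0.1, 0.1            # learning rate and exploration rate

for _ in range(5000):
    if random.random() < eps:                       # explore
        a = random.randrange(N_CHANNELS)
    else:                                           # exploit best estimate
        a = max(range(N_CHANNELS), key=Q.__getitem__)
    reward = 1.0 if random.random() < P_IDLE[a] else 0.0  # 1 = successful transmission
    Q[a] += alpha * (reward - Q[a])                 # incremental value update

print([round(q, 2) for q in Q])  # Q-values approach the idle probabilities
```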

    Learning Approaches to Analog and Mixed Signal Verification and Analysis

    The increased integration and interaction of analog and digital components within a system has amplified the need for a fast, automated, combined analog and digital verification methodology. Many automated characterization, test, and verification methods are used in practice for digital circuits, but analog and mixed-signal circuits suffer from long simulation times brought on by transistor-level analysis. Due to the substantial number of simulations required to properly characterize and verify an analog circuit, many undetected issues manifest themselves in the manufactured chips. Creating behavioral models, circuit abstractions of analog components, helps reduce simulation time, which allows for faster exploration of the design space. Traditionally, creating behavioral models for non-linear circuits is a manual process that relies heavily on design knowledge for proper parameter extraction and circuit abstraction; it requires a high level of circuit knowledge and often fails to capture critical effects stemming from block interactions and second-order device effects. For this reason, it is of interest to extract the models directly from the SPICE-level descriptions so that these effects and interactions can be properly captured. As devices are scaled down, process variations have a more profound effect on circuit behaviors and performance. Creating behavioral models from SPICE-level descriptions, which include input parameters and a large process variation space, is a non-trivial task.
    In this dissertation, we focus on addressing various problems related to the design automation of analog and mixed-signal circuits. Analog circuits are typically highly specialized and fine-tuned to fit the desired specifications for any given system, reducing the reusability of circuits from design to design. This hinders the automation of various aspects of analog design, test, and layout. At the core of many automation techniques, simulations or data collection are required. Unfortunately, for some complex analog circuits, a single simulation may take many days, prohibiting any type of behavior characterization or verification of the circuit. This leads to the first fundamental problem in the automation of analog devices: how can we reduce the simulation cost while maintaining the robustness of transistor-level simulations? As analog circuits can vary vastly from one design to the next and are hardly ever composed of standard library-based building blocks, the second fundamental question is how to create automated processes general enough to be applied to all or most circuit types. Finally, what circuit characteristics can we utilize to enhance the automation procedures? The objective of this dissertation is to explore these questions and provide suitable evidence that they can be answered.
    We begin by exploring machine learning techniques to model the design space using minimal simulation effort. Circuit partitioning is employed to reduce the complexity of the machine learning algorithms. Using the same partitioning algorithm, we further explore the behavior characterization of analog circuits undergoing process variation; the partitioning is general enough to be used with any CMOS-based analog circuit. The insights gained from behavioral modeling during behavior characterization are used to improve simulation through event propagation, input space search, and complexity and information measurements. The reduction of the input space and the behavioral modeling of low-complexity, low-information primitive elements reduce the simulation time of large analog and mixed-signal circuits by 50-75%. The method is extended and applied to assist in analyzing analog circuit layout.
    All of the proposed methods are implemented on analog circuits ranging from small benchmark circuits to large, highly complex, and specialized circuits. The proposed dependency-based partitioning of large analog circuits in the time domain allows for fast identification of highly sensitive transistors and provides a natural division of circuit components. Modeling analog circuits in the time domain with this partitioning technique and SVM learning algorithms allows for very fast transient behavior predictions, three orders of magnitude faster than traditional simulators, while maintaining 95% accuracy. Analog verification can be explored through a reduction of simulation time by utilizing the partitions, information and complexity measures, and input space reduction. Behavioral models are created using supervised learning techniques for detected primitive elements. We show the effectiveness of the method on four analog circuits, where the simulation time is decreased by 55-75%. Utilizing the reduced simulation method, critical nodes can be found quickly and efficiently; the nodes found using this method match those found by an experienced layout engineer, but are detected automatically given the design and input specifications. The technique is further extended to find the tolerance of transistors to both process variation and power supply fluctuation. This information allows for corrections in layout overdesign or guidance in placing noise-reducing components such as guard rings or decoupling capacitors. The proposed approaches significantly reduce the simulation time required to perform these tasks traditionally, maintain high accuracy, and can be automated.
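    A hedged sketch of the SVM-based behavioral modeling idea follows: fit a support vector regressor to input/output samples of a circuit partition, then use it as a fast surrogate for transistor-level simulation. The tapped-delay-line features and the synthetic nonlinear target are stand-ins; in practice the training data would come from SPICE traces.

```python
# Sketch of an SVM behavioral model for a circuit partition (synthetic data
# stands in for SPICE traces; hyperparameters are illustrative).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
k, n = 4, 2000  # feature window length, number of time samples
u = rng.uniform(-1.0, 1.0, n + k)                     # input waveform samples
X = np.stack([u[i:i + n] for i in range(k)], axis=1)  # last k inputs per step
y = np.tanh(2.0 * X @ np.array([0.5, 0.3, 0.15, 0.05]))  # stand-in nonlinear response

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:1500], y[:1500])                         # train on part of the trace
err = np.mean((model.predict(X[1500:]) - y[1500:]) ** 2)
print("held-out MSE:", err)
```

    Once trained, evaluating the regressor on new input waveforms costs a handful of kernel evaluations per time step, which is the source of the speed-up over transistor-level simulation.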

    Applied Mathematics and Computational Physics

    As faster and more efficient numerical algorithms become available, the understanding of the physics and the mathematical foundations behind these new methods will play an increasingly important role. This Special Issue provides a platform for researchers from both academia and industry to present novel computational methods that have engineering and physics applications.

    Machine learning for particle identification in the LHCb detector

    The LHCb experiment is a specialised b-physics experiment at the Large Hadron Collider at CERN. It has a broad physics programme, with the primary objective being the search for CP violation that would explain the matter-antimatter asymmetry of the Universe. LHCb studies very rare phenomena, making it necessary to process millions of collision events per second to gather enough data in a reasonable time frame; software and data analysis tools are therefore essential for the success of the experiment. Particle identification (PID) is a crucial ingredient of most LHCb results, and its quality depends strongly on the data processing algorithms. This dissertation aims to leverage recent advances in machine learning to improve PID at LHCb. The thesis contribution consists of four essential parts related to LHCb internal projects. Muon identification aims to quickly separate muons from other charged particles using only information from the muon subsystem. The second contribution is a method that takes into account a priori information on label noise and improves the accuracy of a machine learning model for classification of such data; data of this kind are common in high-energy physics and, in particular, are used to develop data-driven muon identification methods. Global PID combines information from different subdetectors into a single set of PID variables. Cherenkov detector fast simulation aims to improve the speed of simulating the PID variables in Monte Carlo.
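    As a rough illustration of the global-PID idea (not the LHCb implementation), one can train a single classifier on combined subdetector responses; the three features and the Gaussian toy data below are placeholders for actual detector variables.

```python
# Toy "global PID" classifier combining mock subdetector features
# (synthetic placeholders, not LHCb variables).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 5000
# Columns stand in for e.g. muon-system, RICH, and calorimeter responses.
X_mu = np.column_stack([rng.normal(1.0, 0.5, n),
                        rng.normal(0.2, 0.3, n),
                        rng.normal(2.0, 0.4, n)])
X_pi = np.column_stack([rng.normal(0.0, 0.5, n),
                        rng.normal(0.8, 0.3, n),
                        rng.normal(0.5, 0.4, n)])
X = np.vstack([X_mu, X_pi])
y = np.r_[np.ones(n), np.zeros(n)]   # 1 = muon, 0 = pion

idx = rng.permutation(len(y))        # shuffle before the train/test split
X, y = X[idx], y[idx]
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X[:8000], y[:8000])
print("held-out accuracy:", clf.score(X[8000:], y[8000:]))
```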