12,503 research outputs found

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning and have been utilized by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, but are vital to the analysis and visualization of complex data sets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".

    Overview of high-speed TDM-PON beyond 50 Gbps per wavelength using digital signal processing [Invited Tutorial]

    The recent evolution of passive optical network standards and related research activities for physical layer solutions that achieve bit rates well above 10 Gbps per wavelength (lambda) is discussed. We show that the advancement toward 50, 100, and 200 Gbps/lambda will certainly require the widespread adoption of advanced digital signal processing (DSP) technologies for linear, and possibly nonlinear, equalization and for forward error correction. We start by reviewing in detail the current standardization activities in the International Telecommunication Union and the Institute of Electrical and Electronics Engineers, and then we present a comparison of the DSP approaches for traditional direct detection solutions and for future coherent detection approaches. (c) 2022 Optica Publishing Group.
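    As a concrete illustration of the kind of linear equalization the tutorial refers to, the sketch below adapts a feed-forward equalizer (FFE) with the least-mean-squares (LMS) algorithm against a known training sequence. It is a hedged outline only: the tap count, step size, and toy PAM-4 channel are illustrative assumptions, not parameters taken from the paper, and the paper does not prescribe this particular implementation.

```python
# Sketch of a linear feed-forward equalizer (FFE) adapted with LMS, one of the
# DSP blocks discussed for >50 Gbps/lambda direct-detection receivers.
# All numeric values are illustrative assumptions.
import numpy as np

def lms_ffe(rx, training, num_taps=15, mu=5e-3):
    """Adapt FFE taps so the filtered received samples track known training symbols."""
    taps = np.zeros(num_taps)
    taps[num_taps // 2] = 1.0                 # start from a pass-through filter
    out = np.zeros(len(training))
    for n in range(len(training)):
        window = rx[n:n + num_taps][::-1]     # received samples feeding the filter
        if len(window) < num_taps:
            break                             # not enough samples left
        y = taps @ window                     # equalizer output
        e = training[n] - y                   # error against the known symbol
        taps += mu * e * window               # LMS tap update
        out[n] = y
    return taps, out

# Toy usage: PAM-4 symbols through a short ISI channel, then equalized.
rng = np.random.default_rng(1)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=2000)
channel = np.array([0.15, 0.7, 0.15])         # simple inter-symbol interference
rx = np.convolve(symbols, channel, mode="full")
taps, equalized = lms_ffe(rx, symbols)
```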

    Bank regulation and the network paradigm : policy implications for developing and transition economies

    Current issues in banking policy range from the need to construct basic institutions and incentive structures in transition economies to the challenges posed by the increasingly complex interactions involved in contemporary banking. Drawing on recent experience, the authors outline the basic regulatory framework needed to reduce bank failures. Theoreticians note that banking increasingly displays network characteristics that may call for corrective action yet also render policy intervention ineffective or counterproductive. Networks are susceptible to externalities, exhibit redundancy (which ensures that flows cannot be obstructed by blocking a single path), and tend to adapt to disturbances in complex ways. Regulation is therefore justified, but the complexity of the network makes successful interventions hard to design. Supervision has a role, and the authors outline the basic regulatory measures needed, but the blurring of boundaries between banking and the rest of the financial network places an upper bound on the effectiveness of supervision. The authors conclude that although bank failures (mitigated by deposit insurance to protect small savers) must be accepted in designing banking policy, the social cost of bank failure is not as high as is sometimes thought.

    A Comparison of Quaternion Neural Network Backpropagation Algorithms

    This research paper focuses on quaternion neural networks (QNNs), a type of neural network wherein the weights, biases, and input values are all represented as quaternion numbers. Previous studies have shown that QNNs outperform real-valued neural networks in basic tasks and have potential in high-dimensional problem spaces. However, research on QNNs has been fragmented, with contributions from different mathematical and engineering domains leading to unintentional overlap in the QNN literature. This work aims to unify existing research by evaluating four distinct QNN backpropagation algorithms, including the novel GHR-calculus backpropagation algorithm, and providing concise, scalable implementations of each algorithm in a modern compiled programming language. Additionally, the authors apply a robust Design of Experiments (DoE) methodology to compare the accuracy and runtime of each algorithm. The experiments demonstrate that the Clifford Multilayer Perceptron (CMLP) learning algorithm yields statistically significant improvements in network test set accuracy while maintaining runtime comparable to the other three algorithms across four distinct regression tasks. By unifying existing research and comparing different QNN training algorithms, this work establishes a state-of-the-art baseline and provides important insights into the potential of QNNs for solving high-dimensional problems.
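    For readers unfamiliar with quaternion-valued arithmetic, the sketch below shows the Hamilton product and the forward pass of a single quaternion neuron, the common building block underlying the backpropagation algorithms compared in this work. It is an illustrative Python outline under assumed conventions (the paper's own implementations use a modern compiled language), and the function names are placeholders rather than the authors' code.

```python
# Illustrative sketch: Hamilton product and one quaternion-valued neuron.
# Quaternions are stored as NumPy arrays (w, x, y, z); names are placeholders.
import numpy as np

def hamilton_product(p, q):
    """Multiply two quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,   # real part
        w1*x2 + x1*w2 + y1*z2 - z1*y2,   # i component
        w1*y2 - x1*z2 + y1*w2 + z1*x2,   # j component
        w1*z2 + x1*y2 - y1*x2 + z1*w2,   # k component
    ])

def quaternion_neuron(inputs, weights, bias):
    """Sum of quaternion weight-input products plus a quaternion bias
    (the split activation applied in most QNNs is omitted here)."""
    acc = bias.astype(float).copy()
    for w, x in zip(weights, inputs):
        acc += hamilton_product(w, x)
    return acc

# Toy usage: two quaternion inputs, two quaternion weights, one bias.
x = [np.array([1.0, 0.5, -0.2, 0.1]), np.array([0.0, 1.0, 0.0, 0.0])]
w = [np.array([0.3, 0.1, 0.0, -0.4]), np.array([0.5, 0.0, 0.2, 0.0])]
b = np.array([0.1, 0.0, 0.0, 0.0])
print(quaternion_neuron(x, w, b))
```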

    Codebook Features: Sparse and Discrete Interpretability for Neural Networks

    Understanding neural networks is challenging in part because of the dense, continuous nature of their hidden states. We explore whether we can train neural networks to have hidden states that are sparse, discrete, and more interpretable by quantizing their continuous features into what we call codebook features. Codebook features are produced by finetuning neural networks with vector quantization bottlenecks at each layer, producing a network whose hidden features are the sum of a small number of discrete vector codes chosen from a larger codebook. Surprisingly, we find that neural networks can operate under this extreme bottleneck with only modest degradation in performance. This sparse, discrete bottleneck also provides an intuitive way of controlling neural network behavior: first, find codes that activate when the desired behavior is present, then activate those same codes during generation to elicit that behavior. We validate our approach by training codebook Transformers on several different datasets. First, we explore a finite state machine dataset with far more hidden states than neurons. In this setting, our approach overcomes the superposition problem by assigning states to distinct codes, and we find that we can make the neural network behave as if it is in a different state by activating the code for that state. Second, we train Transformer language models with up to 410M parameters on two natural language datasets. We identify codes in these models representing diverse, disentangled concepts (ranging from negative emotions to months of the year) and find that we can guide the model to generate different topics by activating the appropriate codes during inference. Overall, codebook features appear to be a promising unit of analysis and control for neural networks and interpretability. Our codebase and models are open-sourced at https://github.com/taufeeque9/codebook-features
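    To make the vector quantization bottleneck concrete, the sketch below replaces a hidden activation with the sum of its k most similar codebook vectors, which is the sparse, discrete representation the abstract describes. It is a minimal NumPy illustration under assumed shapes and an assumed cosine-similarity code selection; the authors' actual implementation is the open-sourced repository linked above.

```python
# Minimal sketch of a codebook bottleneck: quantize a hidden state into the
# sum of its k most similar codes. Shapes and the similarity measure are
# assumptions for illustration, not the released implementation.
import numpy as np

def codebook_bottleneck(hidden, codebook, k=8):
    """Return the quantized activation and the indices of the active codes.

    hidden:   activation vector of shape (d,)
    codebook: learned code vectors of shape (num_codes, d)
    """
    # Cosine similarity between the hidden state and every code.
    sims = codebook @ hidden
    sims /= np.linalg.norm(codebook, axis=1) * np.linalg.norm(hidden) + 1e-9
    top_k = np.argsort(sims)[-k:]              # k most similar codes
    quantized = codebook[top_k].sum(axis=0)    # sparse, discrete reconstruction
    return quantized, top_k

# Toy usage: 512 codes of dimension 64, one random activation.
rng = np.random.default_rng(0)
codes = rng.normal(size=(512, 64))
h = rng.normal(size=64)
h_quantized, active_codes = codebook_bottleneck(h, codes)
```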