
    Energy Harvesting Wireless Communications: A Review of Recent Advances

    This article summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail, covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes. Comment: To appear in the IEEE Journal on Selected Areas in Communications (Special Issue: Wireless Communications Powered by Energy Harvesting and Wireless Energy Transfer).
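    A toy sketch of the energy-causality constraint at the heart of the transmission scheduling problems this survey covers: a node can never spend more energy than it has harvested so far. The harvest sequence, the spend-half-the-buffer policy, and the log(1 + p) rate model are illustrative assumptions, not a policy from the survey.

    import math

    harvests = [2.0, 0.0, 3.0, 1.0, 0.0]    # energy arriving in each slot
    battery, total_bits = 0.0, 0.0
    for e in harvests:
        battery += e                        # arrivals are credited first
        p = battery / 2.0                   # naive policy: spend half the buffer
        battery -= p                        # energy causality is never violated
        total_bits += math.log2(1.0 + p)    # AWGN-style rate for transmit power p
    print(f"throughput: {total_bits:.2f} bits per unit bandwidth")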

    Generalization on the Unseen, Logic Reasoning and Degree Curriculum

    This paper considers the learning of logical (Boolean) functions with a focus on the generalization on the unseen (GOTU) setting, a strong case of out-of-distribution generalization. This is motivated by the fact that the rich combinatorial nature of data in certain reasoning tasks (e.g., arithmetic/logic) makes representative data sampling challenging, and learning successfully under GOTU gives a first vignette of an 'extrapolating' or 'reasoning' learner. We then study how different network architectures trained by (S)GD perform under GOTU and provide both theoretical and experimental evidence that for a class of network models including instances of Transformers, random features models, and diagonal linear networks, a min-degree interpolator is learned on the unseen. We also provide evidence that other instances with larger learning rates or mean-field networks reach leaky min-degree solutions. These findings lead to two implications: (1) we provide an explanation of the length generalization problem (e.g., Anil et al. 2022); (2) we introduce a curriculum learning algorithm called Degree-Curriculum that learns monomials more efficiently by incrementing supports. Comment: To appear in ICML 2023.
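    A toy GF(2) analogue of the GOTU setting and the min-degree interpolator, for intuition only: the paper's analysis concerns real-valued networks and monomial degree, whereas this sketch brute-forces Boolean functions and ANF degree. The target, holdout split, and problem size are assumptions chosen for illustration.

    from itertools import product

    n = 3
    points = list(product((0, 1), repeat=n))
    target = lambda x: x[0] & x[1]            # the degree-2 monomial x0*x1
    seen = [x for x in points if x[0] == 0]   # GOTU: hold out all of x0 = 1
    idx = lambda x: sum(b << k for k, b in enumerate(x))

    def anf_degree(tt):
        """ANF degree of a Boolean function given as a truth table (Mobius transform)."""
        a, step = list(tt), 1
        while step < len(a):
            for i in range(0, len(a), 2 * step):
                for j in range(i + step, i + 2 * step):
                    a[j] ^= a[j - step]
            step *= 2
        return max((bin(s).count("1") for s in range(len(a)) if a[s]), default=0)

    # Brute-force every Boolean function on n bits that agrees with the
    # target on the seen half; keep one of minimum ANF degree.
    best = None
    for bits in range(1 << (1 << n)):
        tt = [(bits >> i) & 1 for i in range(1 << n)]
        if all(tt[idx(x)] == target(x) for x in seen):
            d = anf_degree(tt)
            if best is None or d < best[0]:
                best = (d, tt)

    d, tt = best
    errors = sum(tt[idx(x)] != target(x) for x in points if x[0] == 1)
    print(f"min-degree fit: degree {d}, unseen errors {errors}/4")

    Here the min-degree fit is the all-zero function: it matches every seen point yet misclassifies half of the unseen ones, a toy version of why a min-degree bias can hurt out-of-distribution generalization.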

    On the Application of PSpice for Localised Cloud Security

    The work reported in this thesis commenced with a review of methods for creating random binary sequences for encoding data locally by the client before storing in the Cloud. The first method reviewed investigated evolutionary computing software which generated noise-producing functions from natural noise, a highly speculative idea, since noise is stochastic. Nevertheless, a function was created which generated noise to seed chaos oscillators which produced random binary sequences, and this research led to a circuit-based one-time pad key chaos encoder for encrypting data. Circuit-based delay chaos oscillators, initialised with sampled electronic noise, were simulated in a linear circuit simulator called PSpice. Many simulation problems were encountered because of the nonlinear nature of chaos but were solved by creating new simulation parts, tools and simulation paradigms. Simulation data from a range of chaos sources was exported and analysed using Lyapunov methods, which identified two sources that produced one-time pad (OTP) sequences with maximum entropy. This led to an encoding system which generated unlimited, unique, random one-time pad encryption keys of effectively infinite period, matched in length to the plaintext data. The keys were studied for maximum entropy and passed a suite of stringent internationally-accepted statistical tests for randomness. A prototype containing two delay chaos sources initialised by electronic noise was produced on a double-sided printed circuit board and produced more than 200 Mbits of OTPs. As shown by Vladimir Kotelnikov in 1941 and Claude Shannon in 1945, one-time pad encryption is theoretically perfect and unbreakable, provided specific rules are adhered to. Two other techniques for generating random binary sequences were researched: a Chua chaos oscillator incorporating a new circuit element, the memristor, and a fractional-order Lorenz chaos system with order less than three. Quantum computing will present many problems for cryptographic system security in the near future. The only existing encoding system that will resist quantum cryptanalysis is the unconditionally secure one-time pad.
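    For reference, a minimal sketch of the one-time pad primitive the thesis generates keys for: XOR with a truly random key as long as the message. Here Python's os.urandom stands in for the chaos-based key generator; perfect secrecy holds only if the key is truly random, at least as long as the plaintext, and never reused.

    import os

    message = b"attack at dawn"
    key = os.urandom(len(message))                 # fresh key, same length as message
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
    assert recovered == message                    # XOR with the key undoes itself
    print(ciphertext.hex())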

    Unity and Plurality of the European Cycle

    We apply uni- and multivariate unobserved components models to the study of European growth cycles. The multivariate setting makes it possible to search for similar or, more strongly, common components among national GDP series (quarterly data from 1960 to 1999). Three successive ways of exhibiting the European cycle converge satisfactorily: the direct decomposition of the aggregate European GDP; the aggregation of the member countries' national cycles; and the search for common components between these national cycles. The European aggregate fluctuations reveal two distinct cyclical components, identified with the classical Juglar (decennial, related to investment) and Kitchin (triennial, related to inventories) cycles. The European Juglar cycle cannot be reduced to a single common component of the national cycles: it has dimension at least three, and can be understood as the interference of three elementary and independent sequences of stochastic shocks that correspond to the European geographical division. The euro-zone is not yet an optimal currency area, as the shocks generating the European cycles are not completely symmetrical. Studying the sequences of innovations extracted from the models shows that euro-zone vulnerability to strong shocks, and the asymmetry of these shocks, tended to decrease over the last decades, but this trend is neither regular nor irreversible. Keywords: (a)symmetrical shocks; common factors; European integration; growth cycles; stochastic trends; structural time series models.
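    A minimal univariate sketch of the kind of decomposition described here, using statsmodels' UnobservedComponents (local linear trend plus a stochastic cycle) on simulated quarterly data; the data, parameters, and settings are assumptions for illustration, not the paper's multivariate model.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    t = np.arange(160)                          # 40 years of quarterly data
    trend = 0.5 * t                             # smooth growth component
    cycle = 2.0 * np.sin(2 * np.pi * t / 32)    # an 8-year, Juglar-like cycle
    y = trend + cycle + rng.normal(0, 0.5, t.size)

    # Local linear trend plus a stochastic, damped cycle.
    mod = sm.tsa.UnobservedComponents(y, level="local linear trend",
                                      cycle=True, stochastic_cycle=True,
                                      damped_cycle=True)
    res = mod.fit(disp=False)

    est = res.cycle.smoothed                    # smoothed cyclical component
    print("corr(true cycle, estimated cycle):",
          round(float(np.corrcoef(cycle, est)[0, 1]), 2))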

    Error-correction on non-standard communication channels

    Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check decoder is modified to give an improvement of over 1 dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes, formed from low-density parity-check codewords with regular markers inserted within them, are studied. It is shown that a marker code with iterative decoding performs close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied: for example, if a telemetry system involves a slow channel, some error correction is often needed quickly, while the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high-rate low-density parity-check code. The convolutional code has good early decoding performance, and the high-rate low-density parity-check code efficiently cleans up remaining errors after the entire block is received. Simulations of the block code show a gain of 1.5 dB over a standard NASA code.
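    An illustrative sketch of the insertion/deletion channel model and the marker-insertion step only; the iterative LDPC-based decoding studied in the thesis is beyond a short snippet, and the marker pattern, spacing, and event probabilities here are assumptions.

    import random

    random.seed(1)
    MARKER, PERIOD = [0, 1], 5          # known marker pattern and its spacing

    def add_markers(bits):
        """Insert the known marker before every PERIOD-th data bit."""
        out = []
        for i, b in enumerate(bits):
            if i % PERIOD == 0:
                out += MARKER
            out.append(b)
        return out

    def channel(bits, p_del=0.02, p_ins=0.02):
        """Random uncorrelated insertions/deletions at unknown positions."""
        out = []
        for b in bits:
            r = random.random()
            if r < p_del:
                continue                          # deletion: bit is lost
            out.append(b)
            if r > 1 - p_ins:
                out.append(random.randint(0, 1))  # insertion of a random bit
        return out

    data = [random.randint(0, 1) for _ in range(40)]
    tx = add_markers(data)
    rx = channel(tx)
    print(len(tx), "bits sent,", len(rx), "received:",
          len(rx) - len(tx), "net length change")

    The receiver exploits the fact that the marker bits are known a priori: mismatches against the expected markers localise where synchronisation was lost, which is what the iterative decoder then resolves.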

    Stream ciphers and linear complexity

    Master's thesis (Master of Science).

    Aspects of local linear complexity

    The concept of linear complexity is important in cryptography, and in particular in the study of stream ciphers. There are two varieties of linear complexity: global linear complexity, which applies to infinite periodic binary sequences, and local linear complexity, which applies to binary sequences of finite length. This thesis is concerned primarily with the latter. The local linear complexity of a finite binary sequence can be computed using the Berlekamp-Massey algorithm. Chapter 2 deals with a number of aspects of this algorithm. The Berlekamp-Massey algorithm also yields the linear complexity profile of a binary sequence. Linear complexity profiles are discussed in Chapter 3, and a number of associated enumeration results are obtained. In Chapter 4 it is shown that if the bits of a binary sequence satisfy certain conditions, expressible as a set of linear equations, then the linear complexity profile of the sequence will be restricted in some way. These restrictions take the form of conditions on the heights of the jumps in the profile. The final chapter deals with the randomness testing of binary sequences. Statistical tests for randomness based on linear complexity profiles are derived, and it is demonstrated how these tests can identify the non-randomness in the sequences discussed in the preceding chapter.
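    A sketch of the Berlekamp-Massey algorithm over GF(2), the tool the thesis uses for local linear complexity; it returns both the linear complexity of a finite binary sequence and its linear complexity profile (the complexity of every prefix). The test sequence is arbitrary.

    def berlekamp_massey(s):
        """Linear complexity and profile of the 0/1 sequence s over GF(2)."""
        n = len(s)
        C, B = [0] * (n + 1), [0] * (n + 1)   # current / previous connection polys
        C[0] = B[0] = 1
        L, m, profile = 0, -1, []
        for N in range(n):
            # Discrepancy: next output of the current LFSR vs the sequence.
            d = s[N]
            for i in range(1, L + 1):
                d ^= C[i] & s[N - i]
            if d:
                T = C[:]
                shift = N - m
                for i in range(n + 1 - shift):
                    C[i + shift] ^= B[i]
                if 2 * L <= N:                # register length must grow
                    L, m, B = N + 1 - L, N, T
            profile.append(L)
        return L, profile

    L, prof = berlekamp_massey([0, 0, 1, 0, 1, 1, 1, 0])
    print("linear complexity:", L)
    print("profile:", prof)

    The jumps in the printed profile are exactly the quantities Chapter 4 constrains: a 'good' random sequence hugs the N/2 line, and large or missing jumps signal the structure that the Chapter 5 statistical tests detect.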

    Modelling individual variability in cognitive development

    Investigating variability in reasoning tasks can provide insights into key issues in the study of cognitive development. These include the mechanisms that underlie developmental transitions, and the distinction between individual differences and developmental disorders. We explored the mechanistic basis of variability in two connectionist models of cognitive development, a model of the Piagetian balance scale task (McClelland, 1989) and a model of the Piagetian conservation task (Shultz, 1998). For the balance scale task, we began with a simple feed-forward connectionist model and training patterns based on McClelland (1989). We investigated computational parameters, problem encodings, and training environments that contributed to variability in development, both across groups and within individuals. We report on the parameters that affect the complexity of reasoning and the nature of 'rule' transitions exhibited by networks learning to reason about balance scale problems. For the conservation task, we took the task structure and problem encoding of Shultz (1998) as our base model. We examined the computational parameters, problem encodings, and training environments that contributed to variability in development, in particular examining the parameters that affected the emergence of abstraction. We relate the findings to existing cognitive theories on the causes of individual differences in development.
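    A minimal NumPy sketch of a feed-forward network trained by gradient descent on balance scale problems, loosely in the spirit of McClelland (1989); the one-hot encoding, single hidden layer, and training regime are illustrative assumptions, not the original model. Varying the hidden layer size, learning rate, or initial weights is the kind of parameter manipulation the paper explores as a source of individual variability.

    import numpy as np

    rng = np.random.default_rng(0)

    def encode(w_l, d_l, w_r, d_r):
        """One-hot encode weight (1-5) and distance (1-5) on each side."""
        x = np.zeros(20)
        for i, v in enumerate((w_l, d_l, w_r, d_r)):
            x[5 * i + v - 1] = 1.0
        return x

    # All non-balanced problems, labelled 1 if the left side tips down.
    X, y = [], []
    for w_l in range(1, 6):
        for d_l in range(1, 6):
            for w_r in range(1, 6):
                for d_r in range(1, 6):
                    torque = w_l * d_l - w_r * d_r
                    if torque != 0:
                        X.append(encode(w_l, d_l, w_r, d_r))
                        y.append(1.0 if torque > 0 else 0.0)
    X, y = np.array(X), np.array(y)

    # One hidden layer, sigmoid activations, full-batch gradient descent.
    H, lr = 20, 0.5
    W1 = rng.normal(0, 0.5, (20, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, H);       b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        h = sig(X @ W1 + b1)                  # hidden activations
        p = sig(h @ W2 + b2)                  # P(left side tips down)
        g = (p - y) / len(y)                  # grad of mean cross-entropy
        W2 -= lr * h.T @ g; b2 -= lr * g.sum()
        gh = np.outer(g, W2) * h * (1 - h)    # backprop through hidden layer
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

    p = sig(sig(X @ W1 + b1) @ W2 + b2)
    print("training accuracy:", round(float(((p > 0.5) == (y > 0.5)).mean()), 3))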