
    Reconfiguration for Fault Tolerance and Performance Analysis

    Architecture reconfiguration, the ability of a system to alter the active interconnection among modules, has a history of different purposes and strategies. Its purposes develop from the relatively simple desire to formalize procedures that all processes have in common, to reconfiguration for improved fault tolerance, to reconfiguration for performance enhancement, whether through simply maximizing system utilization or through sophisticated notions of matching topology to the specific needs of a given process. Strategies range from straightforward redundancy by means of an identical backup system to intricate structures employing multistage interconnection networks. The present discussion surveys the more important contributions to developments in reconfigurable architecture. The strategy here is in a sense to approach the field from an historical perspective, with the goal of developing a more coherent theory of reconfiguration. First, the Turing and von Neumann machines are discussed from the perspective of system reconfiguration, and it is seen that this early important theoretical work contains little that anticipates reconfiguration. Then some early developments in reconfiguration are analyzed, including the work of Estrin and associates on the fixed-plus-variable restructurable computer system, the attempt to theorize about configurable computers by Miller and Cocke, and the work of Reddi and Feustel on their restructurable computer system. The discussion then focuses on the most sustained systems for fault tolerance and performance enhancement that have been proposed. An attempt will be made to define fault tolerance and to investigate some of the strategies used to achieve it. By investigating four different systems, the Tandem computer, the C.vmp system, the Extra Stage Cube, and the Gamma network, the move from dynamic redundancy to reconfiguration is observed. Then reconfiguration for performance enhancement is discussed. A survey of some proposals is attempted; then the discussion focuses on the most sustained systems that have been proposed: PASM, the DC architecture, the Star local network, and the NYU Ultracomputer. The discussion is organized around a comparison of control, scheduling, communication, and network topology. Finally, comparisons are drawn between fault tolerance and performance enhancement, in order to clarify the notion of reconfiguration and to reveal the common ground of fault tolerance and performance enhancement as well as the areas in which they diverge. An attempt is made in the conclusion to derive from this survey and analysis some observations on the nature of reconfiguration, as well as some remarks on areas where further research is necessary.

    Applications of Coding Theory to Massive Multiple Access and Big Data Problems

    The broad theme of this dissertation is the design of schemes that admit low-complexity iterative algorithms for some new problems arising in massive multiple access and big data. Although bipartite Tanner graphs and low-complexity iterative algorithms such as peeling and message-passing decoders are very popular in the channel coding literature, they are not as widely used in these areas of study, and this dissertation serves as an important step toward bridging that gap. The contributions of this dissertation can be categorized into the following three parts. In the first part, a timely and interesting multiple access problem for a massive number of uncoordinated devices is considered, wherein the base station is interested only in recovering the list of messages without regard to the identity of the respective sources. A coding scheme with polynomial encoding and decoding complexities is proposed for this problem, the two main features of which are (i) a close-to-optimal coding scheme for the T-user Gaussian multiple access channel and (ii) a successive interference cancellation decoder. The proposed coding scheme not only improves on the performance of the previously best known coding scheme by ≈ 13 dB but is only ≈ 6 dB away from the random Gaussian coding information rate. In the second part, Construction-D lattices are constructed in which the underlying linear codes are nested binary spatially coupled low-density parity-check (SC-LDPC) codes with uniform left and right degrees. It is shown that the proposed lattices achieve the Poltyrev limit under multistage belief propagation decoding. Leveraging this result, lattice codes constructed from these lattices are applied to the three-user symmetric interference channel. For channel gains within 0.39 dB of the very strong interference regime, the proposed lattice coding scheme with the iterative belief propagation decoder, for target error rates of ≈ 10^-5, is only 2.6 dB away from the Shannon limit. The third part focuses on support recovery in compressed sensing and on the nonadaptive group testing (GT) problem. Prior to this work, sensing schemes based on left-regular sparse bipartite graphs and iterative recovery algorithms based on the peeling decoder were proposed for these problems. These schemes require O(K log N) and Ω(K log K log N) measurements, respectively, to recover the sparse signal with high probability (w.h.p.), where N and K denote the dimension and sparsity of the signal, respectively (K ≪ N). Also, the number of measurements required to recover at least a (1 - ε) fraction of defective items w.h.p. (approximate GT) is shown to be c_ε K log(N/K). In this dissertation, instead of left-regular bipartite graphs, sensing schemes based on left-and-right-regular bipartite graphs are analyzed. It is shown that this design strategy yields superior and sharper results. For the support recovery problem, the number of measurements is reduced to the optimal lower bound of Ω(K log(N/K)). Similarly, for approximate GT, the proposed scheme requires only c_ε K log(N/K) measurements. For probabilistic GT, the proposed scheme requires O(K log K log(N/K)) measurements, which is only a log K factor away from the best known lower bound of Ω(K log(N/K)). Apart from the asymptotic regime, the proposed schemes also demonstrate significant improvement in the required number of measurements for finite values of K and N.
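
    As an informal illustration of the peeling-style decoding mentioned above, the sketch below runs a toy noiseless group-testing experiment on a randomly drawn sparse bipartite graph: negative tests certify all of their items as non-defective, and any positive test whose other items are already certified pins down its remaining item as defective. The parameters N, K, M, and d are made-up example values, and the construction is a generic simplification, not the dissertation's actual left-and-right-regular scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy parameters (not taken from the dissertation).
N, K, M, d = 400, 8, 120, 6      # items, defectives, tests, tests per item

# Sparse bipartite test matrix: each item participates in d tests chosen at random.
A = np.zeros((M, N), dtype=bool)
for item in range(N):
    A[rng.choice(M, size=d, replace=False), item] = True

defective = np.zeros(N, dtype=bool)
defective[rng.choice(N, size=K, replace=False)] = True
y = (A.astype(int) @ defective.astype(int)) > 0   # noiseless OR of the defectives in each test

# Peeling-style decoding on the bipartite graph.
status = np.zeros(N, dtype=int)                   # 0 unknown, -1 non-defective, +1 defective
status[A[~y].any(axis=0)] = -1                    # items touched by any negative test are clean
changed = True
while changed:
    changed = False
    for t in np.flatnonzero(y):                   # every positive test
        unknown = np.flatnonzero(A[t] & (status == 0))
        if len(unknown) == 1:                     # only one unresolved item: it must be defective
            status[unknown[0]] = 1
            changed = True

print("recovered defectives:", np.flatnonzero(status == 1))
print("true defectives:     ", np.flatnonzero(defective))
```

    As with the approximate GT guarantees discussed above, such a decoder may leave a small fraction of items unresolved rather than misclassify them.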

    Integer-Forcing Linear Receivers

    Linear receivers are often used to reduce the implementation complexity of multiple-antenna systems. In a traditional linear receiver architecture, the receive antennas are used to separate out the codewords sent by each transmit antenna, which can then be decoded individually. Although easy to implement, this approach can be highly suboptimal when the channel matrix is near singular. This paper develops a new linear receiver architecture that uses the receive antennas to create an effective channel matrix with integer-valued entries. Rather than attempting to recover transmitted codewords directly, the decoder recovers integer combinations of the codewords according to the entries of the effective channel matrix. The codewords are all generated using the same linear code, which guarantees that these integer combinations are themselves codewords. Provided that the effective channel is full rank, these integer combinations can then be digitally solved for the original codewords. This paper focuses on the special case where there is no coding across transmit antennas and no channel state information at the transmitter(s), which corresponds either to a multi-user uplink scenario or to single-user V-BLAST encoding. In this setting, the proposed integer-forcing linear receiver significantly outperforms conventional linear architectures such as the zero-forcing and linear MMSE receivers. In the high SNR regime, the proposed receiver attains the optimal diversity-multiplexing tradeoff for the standard MIMO channel with no coding across transmit antennas. It is further shown that in an extended MIMO model with interference, the integer-forcing linear receiver achieves the optimal generalized degrees-of-freedom. Comment: 40 pages, 16 figures, to appear in the IEEE Transactions on Information Theory.
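
    To make the "digitally solved" step concrete, here is a minimal numeric sketch under simplifying assumptions: the per-stream decoders are taken to have recovered, without error, integer combinations of the codewords modulo a prime q, and the integer coefficient matrix is full rank modulo q. The alphabet Z_251, the example matrix A, and the helper inv_mod_prime are illustrative choices for this sketch, not details from the paper.

```python
import numpy as np

q = 251                                          # prime alphabet size assumed for this sketch

def inv_mod_prime(A, q):
    """Invert an integer matrix modulo a prime q by Gaussian elimination."""
    n = A.shape[0]
    M = np.concatenate([A % q, np.eye(n, dtype=np.int64)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col] % q != 0)
        M[[col, pivot]] = M[[pivot, col]]
        M[col] = (M[col] * pow(int(M[col, col]), -1, q)) % q
        for r in range(n):
            if r != col:
                M[r] = (M[r] - M[r, col] * M[col]) % q
    return M[:, n:]

rng = np.random.default_rng(1)
n_tx, blocklen = 3, 8
X = rng.integers(0, q, size=(n_tx, blocklen))    # stand-ins for the per-antenna codewords
A = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 2]])  # full-rank integer coefficient matrix (example)

# What the per-stream decoders would hand over: integer combinations of codewords, mod q.
U = (A @ X) % q

# Digitally solving the full-rank integer system recovers the original codewords.
X_hat = (inv_mod_prime(A, q) @ U) % q
assert np.array_equal(X_hat, X)
```

    In the actual receiver the integer matrix is chosen to match the channel, so that the non-integer residue of the effective channel acts as noise; that channel-dependent selection is the part this sketch deliberately leaves out.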

    Beam Pattern Optimization Method for Subarray-Based Hybrid Beamforming Systems

    Massive multiple-input multiple-output (MIMO) systems operating at millimeter-wave (mmWave) frequencies promise to satisfy the demand for higher data rates in mobile communication networks. A practical challenge that arises is the calibration in amplitude and phase of these massive MIMO systems, as the antenna elements are too densely packed to provide a separate calibration branch for measuring them independently. Over-the-air (OTA) calibration methods are viable solutions to this problem. In contrast to previous works, the OTA calibration method presented here is investigated and optimized for subarray-based hybrid beamforming (SBHB) systems. SBHB systems represent an efficient architectural solution to realize massive MIMO systems. Moreover, based on OTA scattering parameter measurements, the ambiguities of the phase shifters are exploited and two criteria to optimize the beam pattern are formulated. Finally, the optimization criteria are examined in measurements utilizing a novel SBHB receiver system operating at 27.8 GHz.
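
    As a generic illustration of why beam-pattern criteria matter when beams are formed with discrete phase shifters (this is not the paper's optimization method), the sketch below compares the array factor of a small uniform linear subarray steered with ideal continuous phases against one restricted to B-bit quantized settings. The array size, bit width, element spacing, and steering angle are arbitrary example values.

```python
import numpy as np

# Hypothetical uniform linear subarray with half-wavelength spacing and B-bit phase shifters.
N, B = 8, 3
d_over_lambda = 0.5
theta_steer = np.deg2rad(20.0)

n = np.arange(N)
ideal_phase = -2 * np.pi * d_over_lambda * n * np.sin(theta_steer)
step = 2 * np.pi / 2**B
quant_phase = np.round(ideal_phase / step) * step        # nearest realisable phase settings

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)          # observation angles
steering = np.exp(1j * 2 * np.pi * d_over_lambda * np.outer(np.sin(theta), n))

def pattern_db(weights):
    """Normalised array factor in dB over the observation grid."""
    return 20 * np.log10(np.abs(steering @ weights) / N + 1e-12)

g_ideal = pattern_db(np.exp(1j * ideal_phase))
g_quant = pattern_db(np.exp(1j * quant_phase))

i_steer = np.argmin(np.abs(theta - theta_steer))
print(f"gain towards {np.rad2deg(theta_steer):.1f} deg: "
      f"ideal {g_ideal[i_steer]:.2f} dB, quantised {g_quant[i_steer]:.2f} dB")
```

    A beam-pattern criterion of the kind the abstract mentions could, for example, search over the realisable phase settings for the combination that maximizes gain in the intended direction or suppresses the worst sidelobe.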

    Dynamic Systolization for Developing Multiprocessor Supercomputers

    A dynamic network approach is introduced for developing reconfigurable systolic arrays or wavefront processors. This allows one to design very powerful and flexible processors to be used in a general-purpose, reconfigurable, and fault-tolerant multiprocessor computer system. The concepts of macro-dataflow and multitasking can be integrated to handle variable-resolution granularities in computationally intensive algorithms. A multiprocessor architecture, Remps, is proposed based on these design methodologies. The Remps architecture is generalized from the Cedar, HEP, Cray X-MP, Trac, NYU Ultracomputer, S-1, Pumps, CHiP, and SAM projects. Our goal is to provide a multiprocessor research model for developing design methodologies, multiprocessing and multitasking supports, dynamic systolic/wavefront array processors, interconnection networks, reconfiguration techniques, and performance analysis tools. These system design and operational techniques should be useful to those who are developing or evaluating multiprocessor supercomputers.
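
    As background for the term "systolic array" (a generic textbook illustration, not the Remps design or any of the projects listed above), the sketch below simulates an output-stationary systolic grid: skewed rows of A enter from the left, skewed columns of B enter from the top, and each processing element performs one multiply-accumulate per time step as the operands flow past it.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary N x N systolic array computing C = A @ B.

    Row i of A is fed from the left and column j of B from the top, each skewed
    by one step per index, so A[i, k] and B[k, j] meet in PE (i, j) at time
    i + j + k, where they are multiplied and accumulated locally.
    """
    N = A.shape[0]
    a_reg = np.zeros((N, N))          # operands flowing rightwards through the PEs
    b_reg = np.zeros((N, N))          # operands flowing downwards through the PEs
    C = np.zeros((N, N))
    for t in range(3 * N - 2):
        a_in = np.array([A[i, t - i] if 0 <= t - i < N else 0.0 for i in range(N)])
        b_in = np.array([B[t - j, j] if 0 <= t - j < N else 0.0 for j in range(N)])
        a_reg = np.column_stack([a_in, a_reg[:, :-1]])   # shift right, inject at column 0
        b_reg = np.vstack([b_in, b_reg[:-1, :]])         # shift down, inject at row 0
        C += a_reg * b_reg                               # one multiply-accumulate per PE per step
    return C

rng = np.random.default_rng(0)
A, B = rng.random((4, 4)), rng.random((4, 4))
assert np.allclose(systolic_matmul(A, B), A @ B)
```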

    A Computational Framework for Efficient Reliability Analysis of Complex Networks

    With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies, as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to properly treat all uncertainties. The survival signature is a recent development built to effectively analyse complex networks that far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena such as dependencies, common causes of failure, and imprecise probabilities without reevaluating the network structure. This cumulative dissertation presents several key improvements to the survival signature ecosystem, focused on the structural evaluation of the system as well as the modelling of component failures. A new method is presented in which (inter)-dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families, it is possible to account for varying dependence effects. The graph-based design of vine copulas synergizes well with the typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasizing the ability to represent complicated scenarios with a range of dependent failure modes. The numerical effort required to analytically compute the survival signature is prohibitive for large complex systems. This work presents two methods for the approximation of the survival signature. In the first approach, system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method is able to accurately approximate the survival signature with very small errors while drastically reducing computational demand. Several simple test systems, as well as two real-world situations, are used to show the accuracy and performance. However, with increasing network size and complexity this technique also reaches its limits. A second method is presented in which the numerical demand is further reduced. Here, instead of approximating the whole survival signature, only a few strategically selected values are computed using Monte Carlo simulation and used to build a surrogate model based on normalized radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore on the network reliability. Because a few data points are sufficient to build the interval predictor model, it allows even larger systems to be analysed. With the rising complexity of not just the system but also the individual components themselves comes the need for the components to be modelled as subsystems in a system-of-systems approach.
    A study is presented in which a previously developed framework for resilience decision-making is adapted to multidimensional scenarios where the subsystems are represented by survival signatures. The survival signature of the subsystems can be computed ahead of the resilience analysis due to the inherent separation of structural information. This enables efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function without reevaluating the system structure. In addition to the advancements in the field of the survival signature, this work also presents a new framework for uncertainty quantification, developed as a package in the Julia programming language called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is in constant development, and its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
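
    The separation of structure from probabilistic information that the survival signature provides can be illustrated with a deliberately small example: a five-component bridge network (not the IEEE reliability test system used in the dissertation). The signature Phi(l) is computed once from the structure function alone; the component lifetime model, here i.i.d. exponential lifetimes chosen purely for illustration, is supplied afterwards to obtain the system survival function.

```python
from itertools import combinations
from math import comb, exp

# Structure function of a 5-component bridge network between a source and a sink:
# components 1, 2 are the input edges, 4, 5 the output edges, 3 the bridge.
def works(up):                      # up: set of functioning component indices 1..5
    return ({1, 4} <= up or {2, 5} <= up or
            {1, 3, 5} <= up or {2, 3, 4} <= up)

m = 5

# Survival signature: Phi(l) = P(system works | exactly l components work),
# obtained purely from the structure; no probabilistic information is needed yet.
phi = [sum(works(set(c)) for c in combinations(range(1, m + 1), l)) / comb(m, l)
       for l in range(m + 1)]

# Probabilistic part, supplied separately: i.i.d. exponential component lifetimes.
lam = 0.1
def survival(t):
    r = exp(-lam * t)               # P(one component still works at time t)
    return sum(phi[l] * comb(m, l) * r**l * (1 - r)**(m - l) for l in range(m + 1))

print("Phi:", [round(p, 3) for p in phi])
print(f"P(system survives t = 10): {survival(10.0):.4f}")
```

    Because Phi never has to be recomputed, the lifetime model can be swapped for dependent, imprecise, or simulation-based descriptions, which is exactly the property the approximation and surrogate methods described above exploit.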