    Analysis of the variation of horizontal stresses and strains in bedded deposits in the eastern and Midwestern United States

    The variation of the horizontal stress magnitude in bedded deposits in the eastern and Midwestern United States is analyzed with respect to site depth and rock elastic modulus using data from 40 sites. To develop adequate regression models with the elastic modulus, zones with sufficiently uniform strains must be established. A low strain zone encompassing much of the eastern United States and a high strain zone encompassing a portion of southern West Virginia are delineated. In each zone, the regression model with the elastic modulus as the independent variable explains about 85 percent of the maximum horizontal stress variation. In general, the minimum horizontal stress is much less dependent on the elastic modulus. Although the site depths range from 275 to 2,300 ft, depth explains only about 15 percent of the maximum horizontal stress variation and is apparently not a significant independent factor.
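
    As a minimal sketch of the kind of regression described above, the following Python snippet fits a maximum-horizontal-stress-versus-elastic-modulus line and reports the fraction of variation explained (R^2). All numbers are hypothetical placeholders, not the study's data.

        import numpy as np

        # Hypothetical site measurements (NOT the study's data):
        # elastic modulus E (10^6 psi) and maximum horizontal stress (psi).
        E = np.array([1.2, 2.5, 3.1, 4.0, 4.8, 5.5])
        sigma_h = np.array([900.0, 1600.0, 2100.0, 2500.0, 3000.0, 3400.0])

        # Least-squares fit: sigma_h = a + b * E
        b, a = np.polyfit(E, sigma_h, 1)

        # Coefficient of determination, the "percent of variation explained"
        pred = a + b * E
        ss_res = np.sum((sigma_h - pred) ** 2)
        ss_tot = np.sum((sigma_h - np.mean(sigma_h)) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"sigma_h = {a:.1f} + {b:.1f} * E, R^2 = {r2:.3f}")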

    Uncorrectable sequences and telecommand

    The purpose of a tail sequence for command link transmission units is to fail to decode, so that the command decoder will begin searching for the start of the next unit. A tail sequence used by several missions and recommended for this purpose by the Consultative Committee for Space Data Systems is analyzed. A single channel error can cause this sequence to decode. An alternative sequence requiring at least two channel errors before it can possibly decode is presented. (No sequence requiring more than two channel errors before it can possibly decode exists for this code.)

    Some easily analyzable convolutional codes

    Convolutional codes have played and will play a key role in the downlink telemetry systems of many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief drawbacks of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms of its power series expansion. This step is quite hard, and even for many codes of relatively short constraint length it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although these codes have relatively low rates at large constraint lengths, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
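
    The exponential-complexity first step mentioned above can be made concrete. The sketch below finds the free distance of a small rate-1/n feedforward convolutional code by running Dijkstra's algorithm over the encoder's state graph, looking for the minimum-weight path that leaves the zero state and returns to it. The generator-polynomial convention used here is one common choice, not necessarily the paper's.

        import heapq

        def free_distance(gens, K):
            """Free distance of a rate-1/n feedforward convolutional code.

            gens: generator polynomials as integers (bit i taps register bit i);
            K: constraint length. Dijkstra over the 2^(K-1) encoder states.
            """
            def step(state, bit):
                reg = (bit << (K - 1)) | state      # K bits: new input + memory
                w = sum(bin(reg & g).count("1") & 1 for g in gens)
                return reg >> 1, w

            # Start on the state reached by an initial '1' input, and search
            # for the cheapest return to state 0 (skips the all-zero path).
            start, w0 = step(0, 1)
            dist = {start: w0}
            pq = [(w0, start)]
            while pq:
                d, s = heapq.heappop(pq)
                if s == 0:
                    return d
                if d > dist.get(s, float("inf")):
                    continue  # stale queue entry
                for bit in (0, 1):
                    t, w = step(s, bit)
                    if d + w < dist.get(t, float("inf")):
                        dist[t] = d + w
                        heapq.heappush(pq, (d + w, t))

        # The standard K=3 code with generators (7, 5) octal has free distance 5.
        print(free_distance([0b111, 0b101], 3))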

    Optical deep space communication via relay satellite

    The possible use of an optical link for high-rate data transmission from a deep space vehicle to an Earth-orbiting relay satellite, with RF links envisioned for the relay-to-Earth leg, was studied. A preliminary link analysis is presented for initial sizing of optical components and power levels in terms of achievable data rates and feasible range distances. Modulation formats are restricted to pulsed laser operation, involving both coded and uncoded schemes. The advantage of an optical link over present RF deep space link capabilities is shown. The problems of acquisition, pointing, and tracking with narrow optical beams are presented and discussed. Mathematical models of beam trackers are derived, aiding the design of such systems to minimize beam pointing errors. The expected orbital geometry between the spacecraft and the relay satellite, and its impact on beam pointing dynamics, are discussed.
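
    For a feel of the preliminary link analysis described above, here is a first-cut free-space optical link budget in Python, using diffraction-limited antenna gains and a photons-per-bit requirement to estimate an achievable data rate. Every parameter value is a hypothetical placeholder, not a figure from the study.

        import math

        # Hypothetical parameters for a first-cut optical link sizing
        # (illustrative only; not the study's numbers).
        P_t = 1.0              # transmitter laser power, W
        wavelength = 1.064e-6  # m (e.g., a Nd:YAG laser)
        D_t, D_r = 0.2, 1.0    # transmit/receive aperture diameters, m
        eta = 0.5              # combined optics/pointing efficiency
        R = 5 * 1.496e11       # range: 5 AU, in meters
        rho = 10.0             # required detected photons per bit (scheme-dependent)

        h, c = 6.626e-34, 2.998e8

        # Diffraction-limited antenna gains and free-space loss
        G_t = (math.pi * D_t / wavelength) ** 2
        G_r = (math.pi * D_r / wavelength) ** 2
        L_fs = (wavelength / (4 * math.pi * R)) ** 2

        P_r = P_t * eta * G_t * G_r * L_fs        # received signal power, W
        photon_rate = P_r / (h * c / wavelength)  # detected photons per second
        print(f"P_r = {P_r:.3e} W, rate ~ {photon_rate / rho:.3e} bit/s")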

    Rate-compatible protograph LDPC code families with linear minimum distance

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
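
    A protograph-based LDPC code is typically realized by "lifting" a small base matrix, replacing each edge with a Z x Z circulant permutation. The sketch below illustrates that generic construction (single edges only, shift values chosen arbitrarily); it is not the patented method itself.

        import numpy as np

        def lift_protograph(base, Z, shifts):
            """Lift a protograph to a quasi-cyclic LDPC parity-check matrix.

            base: 0/1 base (proto)matrix of shape (m, n), single edges only;
            Z: lifting (circulant) size;
            shifts: integer circulant shift per base edge, same shape as base.
            Generic illustration, not the patented construction.
            """
            m, n = base.shape
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    if base[i, j]:
                        # Circulant permutation: identity with rotated columns
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
            return H

        # Toy example: a 2x4 protograph lifted by Z = 4
        base = np.array([[1, 1, 1, 0],
                         [0, 1, 1, 1]])
        shifts = np.array([[0, 1, 2, 0],
                           [0, 3, 1, 2]])
        H = lift_protograph(base, 4, shifts)
        print(H.shape)  # (8, 16)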

    Recent advances in coding theory for near error-free communications

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression schemes; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

    Transfer function bounds on the performance of turbo codes

    In this article we apply transfer function bounding techniques to obtain upper bounds on the bit-error rate for maximum likelihood decoding of turbo codes constructed with random permutations. These techniques are applied to two turbo codes with constraint length 3 and later extended to other codes. The performance predicted by these bounds is compared with simulation results. The bounds are useful in estimating the 'error floor' that is difficult to measure by simulation, and they provide insight into how to lower this floor. More refined bounds are needed for accurate performance measures at lower signal-to-noise ratios.
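
    For BPSK on the AWGN channel, the transfer function bound referred to above reduces to a union bound over the code's distance spectrum: P_b <= sum over d of B_d * Q(sqrt(2 d R Eb/N0)), where B_d collects the information-bit multiplicities at distance d. A minimal sketch, with a hypothetical truncated spectrum standing in for coefficients that would come from the transfer function:

        import math

        def q_func(x):
            """Gaussian tail Q(x) via the complementary error function."""
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        def union_bound_ber(spectrum, rate, ebno_db):
            """Union bound on ML-decoding BER from a distance spectrum.

            spectrum: dict mapping Hamming distance d -> B_d, the average
            information-bit error multiplicity at distance d normalized by
            the interleaver length (here placeholder values; in practice
            these come from the code's transfer function).
            """
            ebno = 10.0 ** (ebno_db / 10.0)
            return sum(B_d * q_func(math.sqrt(2.0 * d * rate * ebno))
                       for d, B_d in spectrum.items())

        # Hypothetical truncated spectrum, for illustration only
        spectrum = {6: 1e-3, 8: 5e-3, 10: 2e-2}
        for snr_db in (1.0, 2.0, 3.0):
            print(snr_db, union_bound_ber(spectrum, 1/3, snr_db))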

    Model for Quantitative Estimation of Functionality Influence on the Final Value of a Software Product

    The gap between software development requirements and the available resources of software developers continues to widen. This requires changes in how software development is organized. This study introduces a quantitative software development management methodology that estimates the relative importance of functionalities and the risk of retaining or abandoning them, which together determine the final value of the software product. The final value of a software product is interpreted as a function of its requirements and functionalities, represented as a computational graph (called a software product graph). The software product graph allows the relative importance of functionalities to be estimated by calculating the corresponding partial derivatives of the value function. The risk of not implementing a functionality is estimated as the resulting reduction in the product's final value. The model was applied to two EU projects, CareHD and vINCI. In vINCI, the functionalities with the most significant added value to the application were developed based on the implemented model, and those contributing the least value were abandoned. Optimization was not implemented in the CareHD project, which proceeded as initially designed; consequently, only 71% of CareHD's potential value was achieved. The proposed model enables rational management and organization of software product development, with real-time quantitative evaluation of functionality impacts and assessment of the risks of omitting functionalities without significant impact. Quantitative evaluation of these impacts and risks is possible with the proposed algorithm, which is the core of the model. The model is thus a tool for the rational organization and development of software products.
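
    The importance-and-risk computation lends itself to a small numerical illustration. Below, a hypothetical value function stands in for the software product graph; importance is the partial derivative of value with respect to a functionality's implementation level, and omission risk is the value lost when that level is set to zero. The function and its weights are invented for illustration, not taken from the paper.

        def partial(f, x, i, h=1e-6):
            """Central-difference estimate of df/dx_i at x."""
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            return (f(xp) - f(xm)) / (2 * h)

        # Hypothetical product value function over functionality levels x,
        # standing in for the paper's software product graph.
        def value(x):
            core, reporting, ui = x
            return 0.6 * core + 0.3 * reporting * core + 0.1 * ui

        x0 = [1.0, 1.0, 1.0]  # all functionalities fully implemented
        for i, name in enumerate(["core", "reporting", "ui"]):
            importance = partial(value, x0, i)
            # Risk of omission ~ final value lost when x_i is set to zero
            x_drop = list(x0)
            x_drop[i] = 0.0
            risk = value(x0) - value(x_drop)
            print(f"{name}: importance {importance:.2f}, omission risk {risk:.2f}")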

    Minimal trellises for linear block codes and their duals

    We consider the problem of finding a trellis for a linear block code that minimizes one or more measures of trellis complexity for a fixed permutation of the code. We examine constraints on trellises, including relationships between the minimal trellis of a code and that of its dual. We identify the primitive structures that can appear in a minimal trellis and relate them to the corresponding structures in the minimal trellis of the dual code.
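
    For a fixed coordinate order, the state-space dimensions of the minimal trellis follow from past and future subcode dimensions: s_i = k - k_past(i) - k_future(i), which reduces to rank arithmetic over GF(2). A sketch of that standard computation, using the (7,4) Hamming code as an example:

        import numpy as np

        def gf2_rank(M):
            """Rank of a binary matrix over GF(2) by Gaussian elimination."""
            M = M.copy() % 2
            r = 0
            for c in range(M.shape[1]):
                pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
                if pivot is None:
                    continue
                M[[r, pivot]] = M[[pivot, r]]   # bring pivot row up
                for i in range(M.shape[0]):
                    if i != r and M[i, c]:
                        M[i] ^= M[r]            # clear the rest of the column
                r += 1
            return r

        def state_profile(G):
            """Minimal-trellis state dimensions for generator matrix G.

            s_i = rank(G[:, :i]) + rank(G[:, i:]) - k, an equivalent form of
            k - dim(past subcode) - dim(future subcode) at each position i.
            """
            k, n = G.shape
            return [gf2_rank(G[:, :i]) + gf2_rank(G[:, i:]) - k
                    for i in range(n + 1)]

        # Example: a systematic (7,4) Hamming code
        G = np.array([[1, 0, 0, 0, 0, 1, 1],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1, 0],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
        print(state_profile(G))  # [0, 1, 2, 2, 3, 2, 1, 0] for this G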

    Encoders for block-circulant LDPC codes

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block-circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
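
    Block-circulant matrices make encoding cheap because multiplying by a circulant is a cyclic convolution. The sketch below encodes with a generic block-circulant generator matrix; the block sizes and matrices are toy values for illustration, not the patented encoder.

        import numpy as np

        def circulant_mul(first_row, m_block):
            """Multiply a length-Z binary block by a Z x Z circulant over GF(2).

            The circulant is given by its first row; the product is the
            cyclic convolution of the block with that row.
            """
            out = np.zeros(len(first_row), dtype=np.uint8)
            for k in np.flatnonzero(first_row):
                out ^= np.roll(m_block, k)
            return out

        def encode(msg_blocks, G_rows):
            """Encode with a block-circulant generator matrix.

            msg_blocks: list of length-Z binary message blocks;
            G_rows[i][j]: first row of the (i, j) circulant block of G.
            A generic sketch of block-circulant encoding, not the patented design.
            """
            n_blocks = len(G_rows[0])
            cw = [np.zeros_like(msg_blocks[0]) for _ in range(n_blocks)]
            for i, m in enumerate(msg_blocks):
                for j in range(n_blocks):
                    cw[j] ^= circulant_mul(G_rows[i][j], m)
            return cw

        # Toy systematic example with Z = 4: identity block plus a parity block
        msg = [np.array([1, 0, 1, 1], dtype=np.uint8)]
        G_rows = [[np.array([1, 0, 0, 0], dtype=np.uint8),
                   np.array([1, 1, 0, 0], dtype=np.uint8)]]
        print(encode(msg, G_rows))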