175 research outputs found

    Node Synchronization for the Viterbi Decoder

    Motivated by the needs of NASA's Voyager 2 mission, in this paper we describe an algorithm which detects and corrects losses of node synchronization in convolutionally encoded data. This algorithm, which would be implemented as a hardware device external to a Viterbi decoder, makes statistical decisions about node synch based on the hard-quantized undecoded data stream. We will show that in a worst-case Voyager environment, our method will detect and correct a true loss of synch (thought to be a very rare event) within several hundred bits; many of the resulting outages will be corrected by the outer Reed-Solomon code. At the same time, the mean time between false alarms is on the order of several years, independent of the signal-to-noise ratio.
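    The kind of statistic the abstract describes can be illustrated with a toy rate-1/2 convolutional code. The Python sketch below uses illustrative generator polynomials 7 and 5 (octal), not the actual Voyager code (which has constraint length 7), and exploits the fact that the two output streams of a rate-1/2 encoder satisfy a fixed parity relation: convolving each stream with the opposite generator and summing gives an all-zero syndrome when node sync is correct, but roughly half-ones when the stream is slipped by one bit.

```python
import numpy as np

def conv_encode(bits, g1=0b111, g2=0b101):
    # Rate-1/2, constraint-length-3 convolutional encoder.
    # Generators 7 and 5 (octal) are illustrative only -- the Voyager
    # code itself is the NASA-standard rate-1/2, constraint-length-7 code.
    # Output is the interleaved hard-bit stream y1[0], y2[0], y1[1], ...
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | int(b)) & 0b111
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return np.array(out)

def sync_metric(stream):
    # Fraction of nonzero syndrome bits for the (7, 5) code above.
    # For a correctly synchronized, error-free stream the syndrome
    # y1*g2 + y2*g1 (mod 2) is identically zero; a one-bit slip makes
    # the syndrome bits look like coin flips (about half ones).
    y1, y2 = stream[0::2], stream[1::2]
    s = (np.convolve(y1, [1, 0, 1]) + np.convolve(y2, [1, 1, 1])) % 2
    return s[: len(y1)].mean()
```

    On noiseless data the in-sync metric is exactly zero; with channel errors it stays small while the out-of-sync metric remains near one half, which is the gap a threshold detector of this kind can exploit.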

    A mixed MAP/MLSE receiver for convolutional coded signals transmitted over a fading channel

    Copyright © 2002 IEEE. This paper addresses the problem of estimating a rapidly fading convolutionally coded signal such as might be found in a wireless telephony or data network. We model both the channel gain and the convolutionally coded signal as Markov processes and, thus, the noisy received signal as a hidden Markov process (HMP). Two now-classical methods for estimating finite-state hidden Markov processes are the Viterbi (1967) algorithm and the a posteriori probability (APP) filter. A hybrid recursive estimation procedure is derived whereby one hidden process (the encoder state in our application) is estimated using a Viterbi-type (i.e., sequence-based) cost and the other (the fading process) using an APP-based cost such as maximum a posteriori probability. The paper presents the new algorithm as applied specifically to this problem but also formulates the problem in a more general setting. The algorithm is derived in this general setting using reference probability methods. Using simulations, performance of the optimal scheme is compared with a number of suboptimal techniques: decision-directed Kalman and HMP predictors, and Kalman filter and HMP filter per-survivor processing techniques.
    Langford B. White and Robert J. Elliot
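    The two classical estimators the abstract contrasts can be sketched on a toy finite-state hidden Markov process: the Viterbi algorithm returns the single most likely state sequence, while the APP (forward-backward) filter returns per-time marginal posteriors. The sketch below uses a generic two-state chain with made-up transition and emission matrices, not the paper's fading-channel model.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    # Most likely state *sequence* (sequence-based, MLSE-style estimate).
    # pi: initial probabilities, A: transition matrix, B: emission matrix;
    # all entries assumed strictly positive so the logs are finite.
    T, n = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)      # cand[i, j]: i -> j
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):             # trace back
        path.append(int(back[t][path[-1]]))
    return path[::-1]

def app_filter(pi, A, B, obs):
    # A posteriori probability (forward-backward) marginals P(x_t | obs).
    T = len(obs)
    alpha = np.zeros((T, len(pi)))
    beta = np.ones((T, len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                     # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):            # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

    Per-survivor processing, one of the suboptimal baselines the abstract mentions, would instead attach a channel estimator (e.g., a Kalman filter) to each surviving Viterbi path rather than running the two recursions separately.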

    Multi-user receiver structures for direct sequence code division multiple access


    An investigation into jamming GSM systems through exploiting weaknesses in the control channel forward error correction scheme

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering (Electrical), 2017. The ability to communicate effectively is of key importance in military scenarios. The ability to interfere with these communications is a useful tool for gaining a competitive advantage by disrupting enemy communications and protecting allied troops against threats such as remotely detonated explosives. By reducing the number of corrupt bits required through customised error patterns, the transmission time required by a jammer can be reduced without sacrificing effectiveness. To this end, a MATLAB simulation of the GSM control channel forward error correction scheme is tested against four jamming methodologies and three bit corruption techniques. These methodologies are aimed at minimising the number of transmitted jamming bits required from a jammer to prevent communications on the channel. By using custom error patterns it is possible to target individual components of the forward error correction scheme and bypass others. A random error approach is implemented to test the system against random errors on the channel, a burst error approach is implemented to test the convolutional code against burst errors, and two proposed custom error patterns are implemented aimed at exploiting the Fire code's error detection method. The burst error pattern approach required the fewest transmitted jamming bits. The system also shows improvements over control channel jamming techniques in the current literature.

    A survey of digital television broadcast transmission techniques

    This paper is a survey of the transmission techniques used in digital television (TV) standards worldwide. With the increase in the demand for High-Definition (HD) TV, video-on-demand and mobile TV services, there was a real need for more bandwidth-efficient, flawless and crisp video quality, which motivated the migration from analogue to digital broadcasting. In this paper we present a brief history of the development of TV and then we survey the transmission technology used in different digital terrestrial, satellite, cable and mobile TV standards in different parts of the world. First, we present the Digital Video Broadcasting standards developed in Europe for terrestrial (DVB-T/T2), for satellite (DVB-S/S2), for cable (DVB-C) and for hand-held transmission (DVB-H). We then describe the Advanced Television System Committee standards developed in the USA both for terrestrial (ATSC) and for hand-held transmission (ATSC-M/H). We continue by describing the Integrated Services Digital Broadcasting standards developed in Japan for terrestrial (ISDB-T) and satellite (ISDB-S) transmission, and then present the International System for Digital Television (ISDTV), which was developed in Brazil by adopting the ISDB-T physical layer architecture. Following the ISDTV, we describe the Digital Terrestrial television Multimedia Broadcast (DTMB) standard developed in China. Finally, as a design example, we highlight the physical layer implementation of the DVB-T2 standard.

    The mobile satellite service (MSS) systems for global personal communications

    Worldwide interest has arisen in personal communications via satellite systems. The recently proposed mobile satellite service (MSS) systems are categorized into four areas: geostationary earth orbit (GEO) systems, medium earth orbit (MEO) systems, low earth orbit (LEO) systems, and highly elliptical orbit (HEO) systems. Most of the systems in each category are introduced and explained, including some technical details. The communication links and orbital constellations of some systems are analyzed and compared across categories and across individual systems. Some economic aspects of the systems are mentioned. The regulatory issues around frequency spectrum allocation and the current technical trends in these systems are summarized.

    Manifold Learning Approaches to Compressing Latent Spaces of Unsupervised Feature Hierarchies

    Field robots encounter dynamic unstructured environments containing a vast array of unique objects. In order to make sense of the world in which they are placed, they collect large quantities of unlabelled data with a variety of sensors. Producing robust and reliable applications depends entirely on the ability of the robot to understand the unlabelled data it obtains. Deep Learning techniques have had a high level of success in learning powerful unsupervised representations for a variety of discriminative and generative models. Applying these techniques to problems encountered in field robotics remains a challenging endeavour. Modern Deep Learning methods are typically trained with a substantial labelled dataset, while datasets produced in a field robotics context contain limited labelled training data. The primary motivation for this thesis stems from the problem of applying large scale Deep Learning models to field robotics datasets that are label poor. While the lack of labelled ground truth data drives the desire for unsupervised methods, the need for improving the model scaling is driven by two factors: performance and computational requirements. When utilising unsupervised layer outputs as representations for classification, the classification performance increases with layer size. Scaling up models with multiple large layers of features is problematic, as the size of each subsequent hidden layer scales with the size of the previous layer. This quadratic scaling, and the associated time required to train such networks, has prevented adoption of large Deep Learning models beyond cluster computing. The contributions in this thesis are developed from the observation that parameters or filter elements learnt in Deep Learning systems are typically highly structured, and contain related elements. Firstly, the structure of unsupervised filters is utilised to construct a mapping from the high dimensional filter space to a low dimensional manifold. This creates a significantly smaller representation for subsequent feature learning. This mapping, and its effect on the resulting encodings, highlights the need for the ability to learn highly overcomplete sets of convolutional features. Driven by this need, the unsupervised pretraining of Deep Convolutional Networks is developed to include a number of modern training and regularisation methods. These pretrained models are then used to provide initialisations for supervised convolutional models trained on low quantities of labelled data. By utilising pretraining, a significant increase in classification performance on a number of publicly available datasets is achieved. In order to apply these techniques to outdoor 3D Laser Illuminated Detection And Ranging data, we develop a set of resampling techniques to provide uniform input to Deep Learning models. The features learnt in these systems outperform the high effort hand engineered features developed specifically for 3D data. The representation of a given signal is then reinterpreted as a combination of modes that exist on the learnt low dimensional filter manifold. From this, we develop an encoding technique that allows the high dimensional layer output to be represented as a combination of low dimensional components. This allows the growth of subsequent layers to depend only on the intrinsic dimensionality of the filter manifold and not the number of elements contained in the previous layer. Finally, the resulting unsupervised convolutional model, the encoding frameworks and the embedding methodology are used to produce a new unsupervised learning strategy that is able to encode images in terms of overcomplete filter spaces, without producing an explosion in the size of the intermediate parameter spaces. This model produces classification results on par with state-of-the-art models, yet requires significantly fewer computational resources and is suitable for use in the constrained computation environment of a field robot.

    Computational Intelligence and Complexity Measures for Chaotic Information Processing

    This dissertation investigates the application of computational intelligence methods in the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm in computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems. These metrics make it possible to distinguish order from disorder in these systems. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing computational algorithms are designed and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase space samples and the buffer overflow problem in algorithmic complexity measure estimation in slow dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system like a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier. This quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, as data strings grow their normalized complexity approaches zero, with a faster decreasing rate observed for periodic responses. Algorithmic complexity analysis is performed on a class of certain rate convolutional encoders. The degree of diffusion in random-like patterns is measured. Simulation evidence indicates that algorithmic complexity associated with a particular class of 1/n-rate code increases with the increase of the encoder constraint length. This occurs in parallel with the increase of error correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free distance value.
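    The behaviour the abstract describes (normalized complexity near one for random sequences and approaching zero for periodic ones) can be sketched with the classic Lempel-Ziv (1976) production complexity, one common choice of algorithmic complexity estimator; the dissertation's exact estimator may differ. Here the phrase count c(n) is normalized by n / log2(n), the asymptotic phrase count of a random binary sequence:

```python
import math

def lz76_complexity(s):
    # Lempel-Ziv (1976) production complexity: the number of distinct
    # phrases found while scanning the sequence left to right
    # (Kaspar-Schuster style iteration).
    n = len(s)
    if n <= 1:
        return n
    i, k, l = 0, 1, 1          # search pointer, match length, phrase start
    k_max, c = 1, 1            # longest match so far, phrase count
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:      # matched to the end of the string
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:         # exhausted the prefix: new phrase complete
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def normalized_complexity(s):
    # c(n) normalized by n / log2(n): close to one for random binary
    # data, and tending to zero for periodic data as the string grows.
    n = len(s)
    return lz76_complexity(s) * math.log2(n) / n
```

    For example, a constant string parses into just two phrases and a period-2 string into three, so their normalized complexity collapses as the string grows, while a pseudorandom string of the same length stays near one, matching the classifier behaviour described above.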

    Inter-carrier interference mitigation for underwater acoustic communications

    Communicating at a high data rate through the ocean is challenging. Such communications must be acoustic in order to travel long distances. The underwater acoustic channel has a long delay spread, which makes orthogonal frequency division multiplexing (OFDM) an attractive communication scheme. However, the underwater acoustic channel is highly dynamic, which has the potential to introduce significant inter-carrier interference (ICI). This thesis explores a number of means for mitigating ICI in such communication systems. One method that is explored is directly adapted linear turbo ICI cancellation. This scheme uses linear filters in an iterative structure to cancel the interference. Also explored is on-off keyed (OOK) OFDM, which is a signal designed to avoid ICI.
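    The origin of ICI can be seen in a small numerical sketch: a time-varying channel gain multiplies the OFDM symbol in the time domain, which smears it in the frequency domain, leaking energy from each subcarrier into its neighbours. The fragment below uses a pure complex-exponential "Doppler" rotation as a deliberately simplified stand-in for a real underwater channel:

```python
import numpy as np

def ici_leakage(doppler_cycles, n=64, k=5):
    # Transmit a single active subcarrier (index k, chosen arbitrarily)
    # through a channel whose complex gain rotates by `doppler_cycles`
    # cycles over one n-subcarrier OFDM symbol, then measure the
    # fraction of received energy that leaks off that subcarrier.
    x = np.zeros(n, dtype=complex)
    x[k] = 1.0
    tx = np.fft.ifft(x)                       # time-domain OFDM symbol
    t = np.arange(n) / n
    rx = np.fft.fft(tx * np.exp(2j * np.pi * doppler_cycles * t))
    p = np.abs(rx) ** 2
    return 1.0 - p[k] / p.sum()               # off-carrier energy fraction
```

    With a static channel the leakage is zero because the subcarriers remain orthogonal; a rotation of half a subcarrier spacing over one symbol already pushes more than half of the energy off the intended carrier, which is what motivates explicit ICI cancellation schemes like those studied in this thesis.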

    The Eureka 147 digital audio broadcasting system adapted to the U.S.

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 83-85). By Nupur Gupta. M.Eng.