
    Cramer-Rao bounds in the estimation of time of arrival in fading channels

    This paper computes the Cramer-Rao bounds for time-of-arrival estimation in a multipath Rice and Rayleigh fading scenario, conditioned on a previous estimation of a set of propagation channels, since these channel estimates (correlations between the received signal and the pilot sequence) are sufficient statistics for the estimation of delays. Furthermore, channel estimation is a constitutive block in receivers, so we can take advantage of this information to improve timing estimation by using time and space diversity. The received signal is modeled as coming from a scattering environment that disperses the signal in both space and time. Spatial scattering is modeled with a Gaussian distribution and temporal dispersion as an exponential random variable. The impact of the sampling rate, the roll-off factor, the spatial and temporal correlation among channel estimates, the number of channel estimates, and the use of multiple sensors in the receiver antenna is studied and related to the problem of mobile subscriber positioning. To our knowledge, this model is the only one of its kind to relate space-time diversity to the accuracy of timing estimation. Peer Reviewed. Postprint (published version).
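As a hedged illustration of the bound this abstract refers to (not the paper's fading-channel derivation), the sketch below computes the classical CRB for the delay of a single known pulse in AWGN, where the Fisher information is I(tau) = (2/N0) * integral of |s'(t)|^2 dt. The pulse shape, width, and noise level are illustrative assumptions.

```python
import numpy as np

def toa_crb(pulse, dt, n0):
    """CRB for the delay of a known pulse in AWGN (single path, no fading).
    Fisher information: I(tau) = (2 / N0) * integral |s'(t)|^2 dt."""
    ds = np.gradient(pulse, dt)              # numerical derivative s'(t)
    fisher = (2.0 / n0) * np.sum(ds ** 2) * dt
    return 1.0 / fisher

# Gaussian pulse with illustrative width and noise level
dt = 1e-3
t = np.arange(-5.0, 5.0, dt)
s = np.exp(-t ** 2 / (2 * 0.5 ** 2))
crb = toa_crb(s, dt, n0=1.0)
# Doubling the pulse energy doubles the Fisher information, halving the bound
crb_double_energy = toa_crb(np.sqrt(2.0) * s, dt, n0=1.0)
```

This shows the basic scaling the paper builds on: the bound tightens with signal energy and with the effective bandwidth (the energy of s'(t)).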

    Carrier Frequency Offset Estimation for OFDM Systems using Repetitive Patterns

    This paper deals with Carrier Frequency Offset (CFO) estimation for OFDM systems using repetitive patterns in the training symbol. A theoretical comparison based on the Cramer-Rao Bound (CRB) is presented for two kinds of CFO estimation methods. The comparison shows that CFO estimation performance can be improved by exploiting both the repetition property and the exact training symbol, rather than the repetition property alone. The selection of Q (the number of repetition patterns) is discussed for both situations as well. Moreover, for the case that exploits both the repetition and the exact training symbol, a new numerical procedure for Maximum-Likelihood (ML) estimation is designed to reduce computational complexity. Analysis and numerical results are also given, demonstrating the conclusions of this paper.
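To make the repetition idea concrete, here is a minimal noise-free sketch of the classic repetition-only estimator with Q = 2 identical halves (a Moose-style correlator, not necessarily the paper's method): the CFO rotates the second half by pi * eps relative to the first, so the angle of the half-symbol correlation recovers eps. FFT size and CFO value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # FFT size (illustrative)
eps_true = 0.3              # CFO in units of the subcarrier spacing
half = rng.standard_normal(N // 2) + 1j * rng.standard_normal(N // 2)
x = np.concatenate([half, half])                # training symbol, Q = 2 halves
n = np.arange(N)
r = x * np.exp(2j * np.pi * eps_true * n / N)   # noise-free CFO rotation

# The CFO advances the phase by pi * eps between the two identical halves
corr = np.sum(r[N // 2:] * np.conj(r[:N // 2]))
eps_hat = np.angle(corr) / np.pi
```

With Q = 2 the acquisition range is |eps| < 1; the paper's point is that also exploiting the known training-symbol values can beat this repetition-only approach in accuracy.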

    A Tight Bound for Probability of Error for Quantum Counting Based Multiuser Detection

    Future wired and wireless communication systems will employ pure or combined Code Division Multiple Access (CDMA) techniques, as in the European 3G mobile UMTS and Power Line Telecommunication systems; several 4G proposals also include, e.g., multi-carrier (MC) CDMA. Earlier studies exposed the drawbacks of single-user detectors (SUD), which are widely employed in narrowband IS-95 CDMA systems, and motivated the development of suitable multiuser detection schemes to increase robustness against interference. However, only suboptimal solutions are currently available because of the rather high complexity of optimal detectors. One possible receiver technology is quantum-assisted computing, which allows a high level of parallelism in computation. The first commercial devices are expected within the next few years, which coincides with the advent of 3G and 4G systems. In this paper we analyze the error probability and give tight bounds, in both static and dynamically changing environments, for a novel quantum-computation-based Quantum Multiuser Detection (QMUD) algorithm employing the quantum counting algorithm, which provides an optimal solution. Comment: presented at IEEE ISIT 2002, 7 pages, 2 figures
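The error probability of quantum counting is governed by its phase-estimation subroutine. As a classical, hedged illustration (a textbook computation, not the paper's QMUD analysis), the sketch below evaluates the standard outcome distribution of t-qubit phase estimation on an eigenphase phi; the probability mass on the two grid points nearest phi is at least 8/pi^2, which is the kind of quantity such error bounds control. Function name and parameter values are illustrative.

```python
import numpy as np

def qpe_distribution(phi, t):
    """Outcome probabilities of t-qubit quantum phase estimation on an
    eigenphase phi in [0, 1) -- the core subroutine of quantum counting."""
    M = 2 ** t
    delta = phi - np.arange(M) / M           # distance to each grid point
    num = np.sin(np.pi * M * delta) ** 2
    den = np.sin(np.pi * delta) ** 2
    with np.errstate(divide="ignore", invalid="ignore"):
        p = num / (M ** 2 * den)             # Fejer-kernel form
    p[np.isclose(den, 0.0)] = 1.0            # phi exactly on the grid
    return p

# Example: phi = 0.3 with t = 4 counting qubits (illustrative values)
probs = qpe_distribution(0.3, 4)
```

Increasing t sharpens this distribution around the true phase, which is how more counting qubits trade circuit depth for a lower probability of a large counting error.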

    Self-tuning routine alarm analysis of vibration signals in steam turbine generators

    This paper presents a self-tuning framework for knowledge-based diagnosis of routine alarms in steam turbine generators. The techniques provide a novel basis for initialising and updating the time-series feature extraction parameters used in automated decision support for vibration events caused by operational transients. The data-driven nature of the algorithms allows machine-specific characteristics of individual turbines to be learned and reasoned about. The paper provides a case study illustrating the routine alarm paradigm and the applicability of systems using such techniques.

    Self-tuning diagnosis of routine alarms in rotating plant items

    Condition monitoring of rotating plant items in the energy generation industry is often achieved through examination of vibration signals. Engineers use this data to monitor the operation of turbine generators, gas circulators and other key plant assets. A common approach in such monitoring is to trigger an alarm when a vibration deviates from a predefined envelope of normal operation. This limit-based approach, however, generates a large volume of alarms not indicative of system damage or concern, such as operational transients that result in temporary increases in vibration. In the nuclear generation context, all alarms on rotating plant assets must be analysed and subjected to auditable review. The analysis of these alarms is often undertaken manually, on a case-by-case basis, but recent developments in monitoring research have brought forward the use of intelligent systems techniques to automate parts of this process. A knowledge-based system (KBS) has been developed to automatically analyse routine alarms, where the underlying cause can be attributed to observable operational changes. The initialisation and ongoing calibration of such systems, however, is a problem, as normal machine state is not uniform throughout asset life due to maintenance procedures and the wear of components. In addition, different machines will exhibit differing vibro-acoustic dynamics. This paper proposes a self-tuning knowledge-driven analysis system for routine alarm diagnosis across the key rotating plant items within the nuclear context common to the UK. Such a system has the ability to automatically infer the causes of routine alarms, and provide auditable reports to the engineering staff.
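A minimal sketch of the two ideas the abstract combines, assuming a simple percentile-based threshold (the actual KBS parameters and tuning rules are not described here): the limit-based envelope check that generates the alarms, and a data-driven, machine-specific way to learn the envelope from normal-operation data. All names and values are illustrative.

```python
import numpy as np

def learn_envelope(normal_vibration, q=99.5):
    """Learn a machine-specific alarm threshold from normal-operation
    vibration amplitudes (the percentile choice is illustrative)."""
    return float(np.percentile(normal_vibration, q))

def limit_alarm(sample, threshold):
    """Limit-based check: trigger when vibration breaches the envelope."""
    return bool(sample > threshold)

rng = np.random.default_rng(1)
baseline = rng.normal(1.0, 0.1, 10_000)        # hypothetical healthy data
threshold = learn_envelope(baseline)
transient_alarm = limit_alarm(1.9, threshold)  # operational transient spike
quiet = limit_alarm(1.0, threshold)            # normal operation, no alarm
```

Re-estimating the threshold from recent healthy data is one way such a system could track maintenance-induced shifts and component wear, which is the calibration problem the paper addresses.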

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate limited resources to VIN, due to tight computational constraints. Therefore, we answer the following question: under limited resources, what are the most relevant visual cues to maximize the performance of visual-inertial navigation? Our approach has four key ingredients. First, it is task-driven, in that the selection of the visual cues is guided by a metric quantifying the VIN performance. Second, it exploits the notion of anticipation, since it uses a simplified model for forward-simulation of robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for the selection of the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In the easy scenarios, our approach outperforms appearance-based feature selection in terms of localization errors. In the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers. Comment: 20 pages, 7 figures, 2 tables
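The greedy-with-guarantees ingredient can be sketched generically. The code below is a hedged stand-in, not the paper's VIN metric: it runs plain greedy maximization of a monotone submodular set function (here a toy coverage utility over hypothetical features), which is the setting in which the (1 - 1/e) approximation guarantee holds. All names and data are illustrative.

```python
def greedy_select(candidates, utility, k):
    """Greedy maximization of a monotone submodular set function.
    Submodularity guarantees at least (1 - 1/e) of the optimal value."""
    chosen = []
    for _ in range(k):
        # Pick the candidate with the largest marginal gain
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: utility(chosen + [c]))
        chosen.append(best)
    return chosen

# Toy coverage utility standing in for the VIN performance metric
landmarks = {"f1": {1, 2, 3}, "f2": {3, 4}, "f3": {4, 5, 6, 7}, "f4": {1, 7}}
def coverage(subset):
    return len(set().union(*(landmarks[c] for c in subset)) if subset else set())

picked = greedy_select(list(landmarks), coverage, k=2)
```

Each round costs one utility evaluation per remaining candidate, which is why the approach stays cheap enough for the tight per-frame budget the abstract describes.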