
    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
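    The Slepian–Wolf theorem underpinning DVC states that two correlated sources X and Y can be encoded losslessly and independently as long as the rates satisfy R_X >= H(X|Y), R_Y >= H(Y|X), and R_X + R_Y >= H(X, Y). As a minimal illustration of these bounds (a toy example, not taken from the paper), the following Python sketch evaluates the rate region for a pair of correlated binary sources:

        import numpy as np

        def entropy(p):
            """Shannon entropy (bits) of a probability vector, ignoring zero entries."""
            p = np.asarray(p, dtype=float).ravel()
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)))

        # Toy joint distribution of two correlated binary sources X and Y:
        # X is uniform, and Y equals X except for a 10% "crossover" error.
        p_joint = np.array([[0.45, 0.05],   # rows: X = 0, 1
                            [0.05, 0.45]])  # cols: Y = 0, 1

        H_XY = entropy(p_joint)             # joint entropy H(X, Y)
        H_X = entropy(p_joint.sum(axis=1))  # marginal entropy H(X)
        H_Y = entropy(p_joint.sum(axis=0))  # marginal entropy H(Y)
        H_X_given_Y = H_XY - H_Y            # conditional entropy H(X|Y)
        H_Y_given_X = H_XY - H_X            # conditional entropy H(Y|X)

        # Slepian-Wolf bounds for lossless separate encoding with joint decoding.
        print(f"R_X       >= {H_X_given_Y:.3f} bits/sample")
        print(f"R_Y       >= {H_Y_given_X:.3f} bits/sample")
        print(f"R_X + R_Y >= {H_XY:.3f} bits/sample")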

    Authentication of Fingerprint Scanners

    To counter certain security threats in biometric authentication systems, particularly in portable devices (e.g., phones and laptops), we have developed a technology for automated authentication of fingerprint scanners of exactly the same type, manufacturer, and model. The technology uses unique, persistent, and unalterable characteristics of the fingerprint scanners to detect attacks on the scanners, for example, detecting when an image that contains the fingerprint pattern of the legitimate user and was acquired with the authentic fingerprint scanner is replaced by another image that still contains the fingerprint pattern of the legitimate user but was acquired with another, unauthentic fingerprint scanner. The technology uses the conventional authentication steps of enrolment and verification, each of which can be implemented in a portable device, a desktop, or a remote server. The technology is extremely accurate, computationally efficient, robust in a wide range of conditions, requires no hardware modifications, and can be added (as a software add-on) to systems already manufactured and placed into service. We have also implemented the technology in a demonstration prototype for both area and swipe scanners.
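    The abstract does not specify which scanner characteristics are used. Scanner-identification schemes of this general kind often compare a noise residual extracted from a query image against a reference pattern built during enrolment; the sketch below is purely a hypothetical illustration of that idea (the denoising step, helper names, and threshold are assumptions, not the paper's method):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(image, sigma=1.0):
            """Crude noise residual: image minus a smoothed version of itself."""
            image = np.asarray(image, dtype=float)
            return image - gaussian_filter(image, sigma)

        def enroll(images):
            """Reference pattern: average residual over several enrolment images."""
            return np.mean([noise_residual(im) for im in images], axis=0)

        def verify(image, reference, threshold=0.05):
            """Accept the scanner if the query residual correlates with the reference."""
            r = noise_residual(image).ravel()
            f = reference.ravel()
            corr = np.corrcoef(r, f)[0, 1]
            return corr >= threshold, corr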

    Sparsity in Linear Predictive Coding of Speech


    A novel constant quality rate control scheme for object- based encoding


    Integrity Determination for Image Rendering Vision Navigation

    This research addresses the lack of quantitative integrity approaches for vision navigation that rely on the use of image or image rendering techniques. The ability to provide quantifiable integrity is a critical aspect of utilizing vision systems as a viable means of precision navigation. This research describes the development of two unique approaches for determining uncertainty and integrity for a vision-based, precision, relative navigation system, and is based on the concept of using a single-camera vision system, such as an electro-optical (EO) or infrared (IR) imaging sensor, to monitor for unacceptably large and potentially unsafe relative navigation errors. The first approach formulates the integrity solution by means of discrete detection methods, in which the system monitors for conditions where the platform is outside a defined operational area, thus preventing hazardously misleading information (HMI). The second approach uses generalized Bayesian inference, in which a full probability density function (pdf) of the estimated navigation state is determined. These integrity approaches are demonstrated, in the context of an aerial refueling application, to provide extremely high levels of navigation integrity (on the order of 10⁻⁶). Additionally, various sensitivity analyses show the robustness of these integrity approaches to various vision sensor effects and sensor trade-offs.
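    As a rough illustration of the second, Bayesian approach (the Gaussian-mixture posterior and the alert limit below are illustrative assumptions, not values from the research), the integrity risk can be read off the posterior pdf of the navigation error as the probability mass lying outside an alert limit:

        import numpy as np
        from scipy import stats

        # Hypothetical posterior pdf of relative-position error (metres),
        # represented as a two-component Gaussian mixture: a tight "nominal"
        # mode plus a small, heavy-tailed "fault" mode.
        weights = np.array([0.999, 0.001])
        means = np.array([0.0, 4.0])
        sigmas = np.array([0.3, 3.0])

        alert_limit = 10.0  # metres; error magnitudes beyond this are hazardous

        # Integrity risk = posterior probability that |error| exceeds the alert limit.
        risk = sum(w * (stats.norm.sf(alert_limit, m, s) + stats.norm.cdf(-alert_limit, m, s))
                   for w, m, s in zip(weights, means, sigmas))
        print(f"P(|error| > {alert_limit} m) = {risk:.2e}")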

    Non-Radiative Calibration of Active Antenna Arrays

    Antenna arrays offer significant benefits for modern wireless communication systems, but they remain difficult and expensive to produce. One of the impediments to utilising them is maintaining knowledge of the precise amplitude and phase relationships between the elements of the array, which are sensitive to errors, particularly when each element of the array is connected to its own transceiver. These errors arise from multiple sources such as manufacturing errors, mutual coupling between the elements, thermal effects, component ageing, and element location errors. The calibration problem for antenna arrays is primarily the identification of the amplitude and phase mismatch, followed by the use of this information for correction. This thesis presents a novel measurement-based calibration approach that uses a fixed structure allowing each element of the array to be measured. The measurement structure is based around multiple sensors, interleaved with the elements of the array, to provide a scalable structure with multiple measurement paths to almost all of the elements of the array. This structure is used by comparison-based calibration algorithms, so that each element of the array can be calibrated while mitigating the impact of the additional measurement hardware on the calibration accuracy. The approach was validated on an experimental test-bed representative of a typical telecommunications base station. Calibration accuracies of ±0.5 dB and ±5° were achieved for all but one amplitude outlier of 0.55 dB. The performance is limited only by the quality of the coupler design. The calibration approach has also been demonstrated for wideband signal calibration.
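    As a simplified illustration of comparison-based calibration (not the thesis's specific algorithm or measurement structure), relative amplitude and phase corrections can be estimated by comparing the complex gain measured for each element against a chosen reference element and inverting the mismatch:

        import numpy as np

        rng = np.random.default_rng(0)
        n_elements = 8

        # Hypothetical complex gains of each element/transceiver path, as seen
        # through the measurement sensors (true gain plus small measurement noise).
        true_gain = rng.normal(1.0, 0.1, n_elements) * np.exp(1j * rng.normal(0.0, 0.2, n_elements))
        measured = true_gain * (1 + 0.01 * (rng.normal(size=n_elements) + 1j * rng.normal(size=n_elements)))

        # Comparison-based calibration: express every element relative to element 0,
        # then apply the inverse of the mismatch as a correction weight.
        reference = measured[0]
        mismatch = measured / reference      # relative amplitude/phase mismatch
        correction = 1.0 / mismatch          # weights that equalise the array

        residual = (true_gain / true_gain[0]) * correction
        print("residual amplitude error (dB):", 20 * np.log10(np.abs(residual)))
        print("residual phase error (deg):  ", np.degrees(np.angle(residual)))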

    Techniques for Managing Grid Vulnerability and Assessing Structure

    As power systems increasingly rely on renewable power sources, generation fluctuations play a greater role in operation. These unpredictable changes shift the system operating point, potentially causing transmission lines to overheat and sag. Any attempt to anticipate line thermal constraint violations due to renewable generation shifts must address the temporal nature of temperature dynamics, as well as changing ambient conditions. An algorithm for assessing vulnerability in an operating environment should also have solution guarantees and scale well to large systems. A method for quantifying and responding to system vulnerability to renewable generation fluctuations is presented. In contrast to existing methods, the proposed temporal framework captures system changes and line temperature dynamics over time. The non-convex quadratically constrained quadratic program (QCQP) associated with this temporal framework may be reliably solved via a proposed series of transformations. Case studies demonstrate the method's effectiveness for anticipating line temperature constraint violations due to small shifts in renewable generation. The method is also useful for quickly identifying optimal generator dispatch adjustments for cooling an overheated line, making it well-suited for use in power system operation.

    Development and testing of the temporal deviation scanning method involves time series data and system structure. Time series data are widely available, but publicly available data are often synthesized. Well-known time series analysis techniques are used to assess whether given data are realistic. Bounds from the signal processing literature are used to identify, characterize, and isolate the quantization noise that exists in many commonly used electric load profile datasets. Just as straightforward time series analysis can detect unrealistic data and quantization noise, so graph theory may be employed to identify unrealistic features of transmission networks. A small set of unweighted graph metrics is used on a large set of test networks to reveal unrealistic connectivity patterns in transmission grids. These structural anomalies often arise due to network reduction, and are shown to exist in multiple publicly available test networks.

    The aforementioned study of system structure suggested a means of improving the performance of algorithms that solve the semidefinite relaxation of the optimal power flow problem (SDP OPF). It is well known that SDP OPF performance improves when the semidefinite constraint is decomposed along the lines of the maximal cliques of the underlying network graph. Further improvement is possible by merging some cliques together, trading off between the number of decomposed constraints and their sizes. Potential for improvement over the existing greedy clique merge algorithm is shown. A comparison of clique merge algorithms demonstrates that approximate problem size may not be the most important consideration when merging cliques.

    The last subject of interest is the ubiquitous load-tap-changing (LTC) transformer, which regulates voltage in response to changes in generation and load. Unpredictable and significant changes in wind cause LTCs to tap more frequently, reducing their lifetimes. While voltage regulation at renewable sites can resolve this issue for nearby sub-transmission LTCs, upstream transmission-level LTCs must then tap more to offset the reactive power flows that result. A simple test network is used to illustrate this trade-off between transmission LTC and sub-transmission LTC tap operations as a function of wind-farm voltage regulation and device setpoints. The trade-off calls for more nuanced voltage regulation policies that balance tap operations between LTCs.

    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155266/1/kersulis_1.pd
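    As a minimal illustration of the clique decomposition step mentioned above (not the thesis's merge algorithm), the maximal cliques of a chordal extension of the network graph can be obtained with networkx; their sizes determine the sizes of the decomposed semidefinite blocks in SDP OPF, and merging heavily overlapping cliques trades larger blocks for fewer coupling constraints:

        import networkx as nx

        # Hypothetical small transmission network (bus pairs joined by branches).
        branches = [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5), (5, 6), (6, 3)]
        G = nx.Graph(branches)

        # Chordal extension: add fill-in edges so every cycle longer than 3 has a chord.
        H, _ = nx.complete_to_chordal_graph(G)

        # Maximal cliques of the chordal graph define the decomposed blocks.
        cliques = [set(c) for c in nx.chordal_graph_cliques(H)]
        print("clique sizes:", sorted(len(c) for c in cliques))

        # A naive merge heuristic (illustrative only): merge two cliques when they
        # share more than half of the smaller clique's buses.
        merged = []
        for c in sorted(cliques, key=len, reverse=True):
            for m in merged:
                if len(c & m) > len(c) // 2:
                    m |= c
                    break
            else:
                merged.append(c)
        print("merged clique sizes:", sorted(len(c) for c in merged))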

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
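    The robustness argument is easy to see numerically: a gross outlier drags a Gaussian log-likelihood down far more than a heavy-tailed Student's t log-likelihood. A small sketch (illustrative parameters, not the paper's observation model):

        import numpy as np
        from scipy import stats

        # Feature samples with one gross outlier, e.g. from a failed human detection.
        samples = np.array([-0.4, 0.1, 0.3, -0.2, 0.0, 8.0])

        # Log-likelihood under a standard Gaussian vs. a Student's t with 3 degrees
        # of freedom (same location and scale).
        ll_gauss = stats.norm.logpdf(samples, loc=0.0, scale=1.0)
        ll_t = stats.t.logpdf(samples, df=3, loc=0.0, scale=1.0)

        print("per-sample log-likelihoods (Gaussian): ", np.round(ll_gauss, 2))
        print("per-sample log-likelihoods (Student t):", np.round(ll_t, 2))
        # The outlier at 8.0 costs the Gaussian about -32.9 but the t only about -7.2,
        # so a single bad feature no longer dominates the observation probability.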