
    Systematic hybrid analog/digital signal coding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206).

    This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel through which the source is sent unprocessed, and a secondary rate-constrained digital channel through which the source is processed prior to transmission. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which observes both the analog and digital channel outputs. The methods described in this thesis are important to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme that devotes a fraction of the available bandwidth to the analog source and the rest to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR. A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding.
    Focusing on the case of a Gaussian source, a Gaussian channel, and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has an analogous component in conventional source coding. The space rotation can take many forms, such as linear block transforms, lapped transforms, or subband decomposition, for all of which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, whose performance is very near that of the optimal systematic scalar quantizer. Analogous to entropy coding in conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding.

    by Richard J. Barron. Ph.D.
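    The nested lattice scalar quantizer admits a compact illustration. The toy simulation below is a sketch under assumed parameters (the step sizes, channel model, and decoding rule are illustrative, not taken from the thesis): the encoder transmits only the fine-quantization index modulo a coarse lattice over the digital channel, and the decoder uses the noisy analog observation as side information to resolve the coarse ambiguity.

```python
import numpy as np

rng = np.random.default_rng(0)

delta = 0.25        # fine quantizer step
M = 8               # fine cells nested in one coarse cell (3 bits/sample)
coarse = M * delta  # coarse lattice period

def encode(x):
    # Fine-quantize, then keep only the index modulo M: this residue
    # is the only information sent over the digital channel.
    return int(np.round(x / delta)) % M

def decode(idx, y):
    # Reconstruction candidates congruent to idx modulo the coarse lattice,
    # centered near the analog observation y.
    k = np.round((y - idx * delta) / coarse)
    cands = idx * delta + coarse * np.arange(k - 1, k + 2)
    # The side information y resolves which coarse cell the source was in.
    return cands[np.argmin(np.abs(cands - y))]

x = rng.normal(0, 1, 10000)          # Gaussian source
y = x + rng.normal(0, 0.1, 10000)    # noisy analog channel output
xhat = np.array([decode(encode(xi), yi) for xi, yi in zip(x, y)])
mse_digital = np.mean((xhat - x) ** 2)  # with the 3-bit digital side channel
mse_analog = np.mean((y - x) ** 2)      # analog observation alone
```

    Because the digital channel need only distinguish M = 8 residues rather than every fine cell, the rate stays low, yet the reconstruction error approaches that of a full fine quantizer whenever the analog noise is small relative to the coarse cell width.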

    TIME AND LOCATION FORENSICS FOR MULTIMEDIA

    In the modern era, vast quantities of digital information are available in the form of audio, image, video, and other sensor recordings. These recordings may contain metadata describing important information such as the time and location of recording. Because the stored information can be easily modified using readily available digital editing software, determining the authenticity of a recording is of utmost importance, especially for critical applications such as law enforcement, journalism, and national and business intelligence. In this dissertation, we study novel environmental signatures induced by power networks, known as Electrical Network Frequency (ENF) signals, which become embedded in multimedia data at the time of recording. The ENF fluctuates slightly over time around its nominal value of 50 Hz or 60 Hz. The major trend of these fluctuations remains consistent across an entire power grid, even when measured at physically distant geographical locations. We investigate the use of ENF signals for applications such as estimation and verification of the time and location of a recording's creation, and we develop a theoretical foundation to support ENF-based forensic analysis.

    In the first part of the dissertation, the presence of ENF signals in visual recordings captured in electrically powered lighting environments is demonstrated. The source of ENF signals in visual recordings is shown to be the invisible flickering of indoor lighting sources such as fluorescent and incandescent lamps. Techniques to extract ENF signals from such recordings demonstrate a high correlation between the ENF fluctuations obtained from indoor lighting and those from the power mains supply recorded at the same time. Applications of ENF signal analysis to tampering detection in surveillance video recordings, and to forensic binding of the audio and visual tracks of a video, are also discussed.
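    A minimal sketch of ENF extraction and matching (illustrative only; the dissertation's actual extraction techniques are not reproduced here) tracks the dominant spectral peak near the nominal frequency in successive short windows, then correlates the resulting frequency track against a reference. In real lighting-flicker footage the ENF typically appears at harmonics of the mains frequency (e.g., 100 or 120 Hz); the synthetic example places it at 60 Hz for simplicity.

```python
import numpy as np

def extract_enf(signal, fs, nominal=60.0, band=1.0, win_s=8.0):
    """Track the dominant frequency near `nominal` Hz in successive windows."""
    n = int(win_s * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    mask = (freqs > nominal - band) & (freqs < nominal + band)
    track = []
    for start in range(0, len(signal) - n + 1, n):
        spec = np.abs(np.fft.rfft(signal[start:start + n] * np.hanning(n)))
        track.append(freqs[mask][np.argmax(spec[mask])])
    return np.array(track)

# Synthetic "recording" whose mains hum drifts slowly around 60 Hz.
fs = 1000
t = np.arange(0, 120, 1 / fs)
drift = 0.2 * np.sin(2 * np.pi * t / 30)              # slow ENF fluctuation
hum = np.sin(2 * np.pi * np.cumsum(60 + drift) / fs)  # frequency-modulated hum
recording = hum + 0.5 * np.random.default_rng(1).normal(size=t.size)

enf_query = extract_enf(recording, fs)  # track from the noisy recording
enf_ref = extract_enf(hum, fs)          # clean "power mains" reference
r = np.corrcoef(enf_query, enf_ref)[0, 1]
```

    A high correlation between query and reference tracks is the basic matching criterion; in practice, spectral peak interpolation and overlapping windows give finer frequency and time resolution than this coarse argmax.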
    In the second part, an analytical model is developed to gain an understanding of the behavior of ENF signals. It is demonstrated that ENF signals can be modeled using a time-varying autoregressive process. The performance of the proposed model is evaluated for a timestamp verification application. Based on this model, an improved algorithm for ENF matching between a reference signal and a query signal is provided. The proposed approach is shown to improve matching performance compared with matching performed directly on the ENF signals. Another application of the proposed model, learning power grid characteristics, is also explicated: these characteristics are learnt by using the model parameters as features to train a classifier that determines the creation location of a recording among candidate grid-regions.

    The last part of the dissertation demonstrates that differences exist between ENF signals recorded at the same time in the same grid-region. These differences can be extracted using a suitable filtering mechanism and follow a relationship with the distance between locations. Based on this observation, two localization protocols are developed to identify the location of a recording within a grid-region, using ENF signals captured at anchor locations. The localization accuracies of the proposed protocols are then compared. Challenges in using the proposed technique to estimate the creation location of multimedia recordings within the same grid, along with efficient and resilient trilateration strategies in the presence of outliers and malicious anchors, are also discussed.
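    The time-varying autoregressive idea can be sketched as fitting AR coefficients over sliding windows of an ENF track and using the coefficient vectors as features. The formulation below is an illustrative least-squares fit under assumed model order and window sizes, not the dissertation's algorithm.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit: x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    N = len(x)
    lags = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(lags, x[p:], rcond=None)
    return a

def tvar_features(enf, p=2, win=200, hop=100):
    """AR coefficients per sliding window: one feature vector per window."""
    return np.array([ar_fit(enf[s:s + win], p)
                     for s in range(0, len(enf) - win + 1, hop)])

# Sanity check: recover the coefficients of a known stationary AR(2) process.
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.normal(0, 0.1)

a = ar_fit(x, 2)          # expect approximately [1.5, -0.7]
feats = tvar_features(x)  # coefficient trajectory over time
```

    Feature vectors like these can be fed to any standard classifier; the point made in the abstract is that grid-dependent ENF dynamics survive in the model parameters.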

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of energy, electronics, communications, computers, and control. ICICS gives innovators opportunities to identify new avenues for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, enabling them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    Proceedings of ICMMB2014


    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content a preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.