
    Compact Digital Predistortion for Multi-band and Wide-band RF Transmitters

    This thesis focuses on developing a compact digital predistortion (DPD) system that reduces the power consumption DPD adds to a transmitter. It explores new theory and techniques to relax the requirements on the number of training samples and on the sampling rate of the feedback ADCs in DPD systems. First, a new theory about the information carried by training samples is introduced; it connects the generalized error of the DPD estimation algorithm with the statistical properties of the modulated signals. Secondly, building on this theory, the work introduces a compressed sample selection method that reduces the number of training samples by selecting only the minimal set of samples satisfying the known probability information. The number of training samples and of complex multiplication operations required for coefficient estimation can be reduced by more than ten times without additional computational resources. Thirdly, the thesis proves that, in theory, a DPD system using memory-polynomial behavioural models and least-squares (LS) estimation can operate with any sampling rate of the feedback samples; the principle, implementation and practical concerns of this undersampling DPD, which uses a lower-sampling-rate ADC, are then introduced. Finally, the observation bandwidth of DPD systems is extended by the proposed multi-rate track-and-hold circuits and their associated algorithm. By tuning several parameters of the ADC and the corresponding DPD algorithm, a multi-GHz observation bandwidth is achieved using only a 61.44 MHz ADC, and extensive experimental tests demonstrate satisfactory linearization performance for multi-band and contiguous wideband RF transmitter applications.
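    The LS estimation of a memory-polynomial model mentioned above can be sketched in a few lines of NumPy. This is an illustrative indirect-learning arrangement, not the thesis's actual implementation; the nonlinearity order `K`, memory depth `M` and function names are assumptions:

    ```python
    import numpy as np

    def memory_polynomial_matrix(x, K=5, M=3):
        """Basis matrix for a memory-polynomial behavioural model.
        x: complex baseband samples; K: nonlinearity order; M: memory
        depth. K and M are illustrative choices, not thesis values."""
        N = len(x)
        cols = []
        for m in range(M):  # delayed copies of the signal
            xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
            for k in range(1, K + 1):  # envelope powers |x|^(k-1)
                cols.append(xm * np.abs(xm) ** (k - 1))
        return np.column_stack(cols)

    def estimate_dpd_coeffs(pa_out, pa_in):
        """Indirect-learning LS fit: model the post-inverse that maps
        the PA output back to its input, then reuse it as the
        predistorter."""
        Phi = memory_polynomial_matrix(pa_out)
        coeffs, *_ = np.linalg.lstsq(Phi, pa_in, rcond=None)
        return coeffs
    ```

    Applying `memory_polynomial_matrix(x) @ coeffs` to the desired signal before the PA then approximately cancels the PA's nonlinearity.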

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey was conducted of video data compression methods described in the open literature, examining compression methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in a high-speed imaging system. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Weaknesses in ENT Battery Design

    Randomness testing is a key tool for analysing the quality of true (physical) random and pseudo-random number generators. A wide variety of tests are designed for this purpose, i.e., to analyse the goodness of the sequences produced. These tests are grouped into sets called suites or batteries. Batteries must be designed in such a way that the tests forming them are independent, provide wide coverage, and are computationally efficient. One such battery is the well-known ENT battery, which provides four measures and the value of a statistic (corresponding to the chi-square goodness-of-fit test). In this paper, we show that this battery presents some vulnerabilities and, therefore, must be redefined to solve the detected problems.
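    Two of the quantities ENT reports are easy to reproduce, which helps in reasoning about what such a battery can and cannot detect. The following is a minimal reimplementation sketch, not ENT's own code; function names are illustrative:

    ```python
    import math
    from collections import Counter

    def ent_measures(data):
        """Two ENT-style measures on a byte sequence: Shannon entropy
        in bits per byte, and the chi-square statistic against a
        uniform distribution over the 256 byte values."""
        n = len(data)
        counts = Counter(data)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        expected = n / 256
        chi2 = sum((counts.get(b, 0) - expected) ** 2 / expected
                   for b in range(256))
        return entropy, chi2
    ```

    A perfectly uniform byte stream scores 8.0 bits/byte with a chi-square of 0; note that a trivially structured sequence such as 0, 1, 2, ..., 255 repeated also achieves these "ideal" values, which hints at why single-statistic batteries need careful design.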

    ATM network impairment to video quality

    Includes bibliographical reference

    Frequency diversity wideband digital receiver and signal processor for solid-state dual-polarimetric weather radars

    2012 Summer. Includes bibliographical references. The recent surge in the use of solid-state transmitters for weather radar systems has revolutionized research in meteorology. Solid-state transmitters allow transmission at low peak powers without losing range resolution by enabling the use of pulse compression waveforms. In this research, a novel frequency-diversity wideband waveform is proposed and realized to mitigate the low sensitivity of solid-state radars and to alleviate the blind-range problem tied to the longer pulse compression waveforms. Recent developments in the computing landscape permit the design of wideband digital receivers that can process this novel waveform on Field Programmable Gate Array (FPGA) chips. In terms of signal processing, wideband systems are generally characterized by the fact that the bandwidth of the signal of interest is comparable to the sampled bandwidth; that is, a band of frequencies must be selected and filtered out of a comparable spectral window in which the signal might occur. The development of such a wideband digital receiver opens exciting research opportunities for improved estimation of precipitation measurements in higher-frequency systems such as X-, Ku- and Ka-band radars, satellite-borne radars and other solid-state ground-based radars. This research describes the unique challenges associated with the design of a multi-channel wideband receiver. The receiver consists of twelve channels which simultaneously downconvert and filter the digitized intermediate-frequency (IF) signal for radar data processing. Product processing for the multi-channel digital receiver mandates a software and network architecture that generates and archives a single meteorological product profile, culled from multi-pulse profiles, at an increased data rate.
The multi-channel digital receiver also continuously samples the transmit pulse for calibration of radar receiver gain and transmit power. It has been successfully deployed as a key component of the recently developed National Aeronautics and Space Administration (NASA) Global Precipitation Measurement (GPM) Dual-Frequency Dual-Polarization Doppler Radar (D3R). The D3R is the principal ground validation instrument for the precipitation measurements of the Dual-frequency Precipitation Radar (DPR) onboard the GPM Core Observatory satellite scheduled for launch in 2014. The D3R system employs two widely separated frequencies at Ku- and Ka-bands that together make measurements of precipitation types needing higher sensitivity, such as light rain, drizzle and snow. This research describes the unique design space for configuring the digital receiver for the D3R at several processing levels. Finally, it presents analysis and results obtained by employing the multi-carrier waveforms on the D3R during the 2012 GPM Cold-Season Precipitation Experiment (GCPEx) campaign in Canada.
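The pulse-compression principle underlying this receiver, a long low-peak-power pulse collapsed to a narrow peak by matched filtering, can be sketched as follows. The chirp parameters and function names are illustrative assumptions, not details of the D3R waveform:

```python
import numpy as np

def lfm_chirp(n=128, bw=0.5):
    """Linear-FM (chirp) pulse; `bw` is the normalized sweep bandwidth
    in cycles/sample (illustrative parameters)."""
    t = np.arange(n)
    return np.exp(1j * np.pi * bw * t ** 2 / n)

def pulse_compress(echo, tx):
    """Matched filtering: correlate the received echo with the
    transmitted pulse. The peak of the output marks the target delay,
    with a width set by the sweep bandwidth rather than the pulse
    length, which restores range resolution."""
    return np.convolve(echo, np.conj(tx[::-1]), mode="full")
```

For a noiseless echo delayed by d samples, the correlation peak lands at index d + len(tx) - 1 of the full convolution output.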

    Predicting room acoustical behavior with the ODEON computer model


    Designing new network adaptation and ATM adaptation layers for interactive multimedia applications

    Multimedia services, audiovisual applications composed of a combination of discrete and continuous data streams, will be a major part of the traffic flowing in the next generation of high-speed networks. The cornerstones of multimedia are Asynchronous Transfer Mode (ATM), foreseen as the technology for the future Broadband Integrated Services Digital Network (B-ISDN), and audio and video compression algorithms such as MPEG-2 that reduce applications' bandwidth requirements. The powerful desktop computers available today can seamlessly integrate network access and applications, bringing the new multimedia services to home and business users. Among these services, those based on multipoint capabilities are expected to play a major role.    Interactive multimedia applications, unlike traditional data transfer applications, have stringent simultaneous requirements in terms of loss and delay jitter due to the nature of audiovisual information. In addition, such stream-based applications deliver data at a variable rate, in particular if constant quality is required.    ATM is able to integrate traffic of different natures within a single network, creating interactions of different types that translate into delay jitter and loss. Traditional protocol layers do not have the appropriate mechanisms to provide the required network quality of service (QoS) for such interactive variable bit rate (VBR) multimedia multipoint applications. This lack of functionality calls for the design of protocol layers with the appropriate functions to handle the stringent requirements of multimedia.    This thesis contributes to the solution of this problem by proposing new Network Adaptation and ATM Adaptation Layers for interactive VBR multimedia multipoint services.    The foundations on which these new multimedia protocol layers are built are twofold: the requirements of real-time multimedia applications and the nature of compressed audiovisual data.
On this basis, we present a set of design principles we consider mandatory for a generic Multimedia AAL (MAAL) capable of handling interactive VBR multimedia applications in point-to-point as well as multicast environments. These design principles are then used as a foundation to derive a first set of functions for the MAAL, namely: cell loss detection via sequence numbering, packet delineation, dummy cell insertion, and cell loss correction via RSE FEC techniques.    The proposed functions, partly based on theoretical studies, are implemented and evaluated in a simulated environment. Performance is evaluated from the network point of view using classic metrics such as cell and packet loss. We also study the behavior of the cell loss process in order to evaluate the efficiency to be expected from the proposed cell loss correction method, and we discuss the difficulties of mapping network QoS parameters to user QoS parameters for multimedia applications, especially for video information. In order to present a complete performance evaluation that is also meaningful to the end user, we use the MPQM metric to map the obtained network performance results to the user level, evaluating both the impact that cell loss has on video and the improvement achieved with the MAAL.    All performance results are compared to an equivalent implementation based on AAL5, as specified by the current ITU-T and ATM Forum standards.    An AAL is by definition generic; to fully exploit its functionality, a protocol layer is needed that efficiently interfaces the network and the applications. This role is devoted to the Network Adaptation Layer.    The Network Adaptation Layer (NAL) we propose aims to efficiently interface the applications to the underlying network so as to achieve reliable but low-overhead transmission of video streams.
Since this requires a priori knowledge of the structure of the information to be transmitted, we propose that the NAL be codec-specific.    The NAL targets interactive multimedia applications. These applications share a set of common requirements independent of the encoding scheme used, which calls for the definition of a set of design principles that should be shared by any NAL, even if the implementation of the functions themselves is codec-specific. On the basis of these design principles, we derive the two main functions that NALs have to perform: the segmentation and reassembly of data packets, and selective data protection.    On this basis, we develop an MPEG-2-specific NAL. It provides perceptual syntactic information protection (PSIP), which results in an intelligent, minimum-overhead protection of video information. The PSIP takes advantage of the hierarchical organization of compressed video data, common to the majority of compression algorithms, to perform selective data protection based on the perceptual relevance of the syntactic information.    Transmission over the combined NAL-MAAL layers shows significant improvement in terms of cell loss ratio (CLR) and perceptual quality compared to equivalent transmissions over AAL5 with the same overhead.    The usage of the MPQM as a performance metric, one of the main contributions of this thesis, leads to a very interesting observation: the experimental results show that, even for unexpectedly high CLRs, the average perceptual quality remains close to the original value. The economic potential of this observation is significant. Given that the data flows are VBR, network utilization can be improved by means of statistical multiplexing, and the cost per communication can therefore be reduced by increasing the number of connections with minimal loss in quality.
This conclusion could not have been derived without the combined usage of perceptual and network QoS metrics, which together unveil the economic potential of perceptually protected streams.    The proposed concepts are finally tested in a real environment, where a proof-of-concept implementation of the MAAL shows behavior close to the simulated results, thereby validating the proposed multimedia protocol layers.
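The interplay of sequence numbering and FEC-based cell loss correction described above can be illustrated with a deliberately simplified single-erasure code (one XOR parity cell per group), a stand-in for the RSE FEC the thesis actually uses; all names and the group size `k` are invented for illustration:

```python
from functools import reduce

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length cells."""
    return bytes(x ^ y for x, y in zip(a, b))

def protect(cells, k=4):
    """Append one XOR parity cell per group of k equal-length cells:
    a single-erasure code, simpler than the RSE FEC described above."""
    out = []
    for i in range(0, len(cells), k):
        group = cells[i:i + k]
        out.extend(group)
        out.append(reduce(xor_bytes, group))
    return out

def recover(received, k=4):
    """`received` holds (sequence number, payload) pairs; gaps in the
    numbering reveal lost cells, and XOR of the surviving cells of a
    group reconstructs a single lost cell per group."""
    got = dict(received)
    n_groups = max(got) // (k + 1) + 1
    data = []
    for g in range(n_groups):
        seqs = range(g * (k + 1), (g + 1) * (k + 1))
        present = [got[s] for s in seqs if s in got]
        missing = [s for s in seqs if s not in got]
        if missing:  # at most one loss per group is correctable
            got[missing[0]] = reduce(xor_bytes, present)
        data.extend(got[s] for s in seqs[:-1])  # drop the parity cell
    return data
```

A real RSE code corrects multiple erasures per group at the cost of more parity cells; the detection-by-sequence-number mechanism is the same.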

    Quality-oriented adaptation scheme for multimedia streaming in local broadband multi-service IP networks

    The research reported in this thesis proposes, designs and tests the Quality-Oriented Adaptation Scheme (QOAS), an application-level adaptive scheme that offers high-quality multimedia services to home residences and business premises via local broadband IP networks in the presence of other traffic of different types. QOAS uses a novel client-located grading scheme that maps the values, variations and variation patterns of network-related parameters (e.g. delay, jitter, loss rate) to application-level scores that describe the quality of delivery. This grading scheme also involves an objective metric that estimates the end-user perceived quality, increasing its effectiveness. A server-located arbiter takes content and rate adaptation decisions based on these quality scores, which are the only information sent as feedback by the clients. QOAS has been modelled, implemented and tested through simulations, and an instantiation of it has been realized in a prototype system. Performance was assessed in terms of estimated end-user perceived quality, network utilisation, loss rate and the number of customers served by a fixed infrastructure. The influence of variations in the parameters used by QOAS and of network-related characteristics was studied. The scheme's adaptive reaction was tested with background traffic of different types, sizes and variation patterns, and in the presence of concurrent multimedia streaming processes subject to user interaction. The results show that the performance of QOAS was very close to that of an ideal adaptive scheme. In comparison with other adaptive schemes, QOAS allows for a significant increase in the number of simultaneous users while maintaining good end-user perceived quality. These results are verified by a set of subjective tests performed on viewers using the prototype system.
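    The client-side grading and server-side arbitration loop described above can be sketched as follows. The weights, thresholds and function names are hypothetical placeholders chosen for illustration; QOAS's actual grading scheme and its perceptual-quality metric are more elaborate:

    ```python
    def quality_score(loss_rate, jitter_ms, delay_ms):
        """Client-located grading sketch: map network-level measurements
        to one application-level quality score in [1, 5]. The weights
        below are invented, not taken from the thesis."""
        penalty = 40.0 * loss_rate + 0.02 * jitter_ms + 0.005 * delay_ms
        return max(1.0, 5.0 - penalty)

    def adapt(stream_level, score, max_level, up=4.5, down=3.0):
        """Server-located arbiter sketch: step the content/rate level up
        on good scores, down on bad ones (hypothetical thresholds)."""
        if score >= up and stream_level < max_level:
            return stream_level + 1
        if score <= down and stream_level > 0:
            return stream_level - 1
        return stream_level
    ```

    The key design point this mirrors is that only the scalar score crosses the network as feedback, keeping the uplink overhead minimal regardless of how many parameters the client monitors.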