57 research outputs found

    Dispersive Fourier Transformation for Versatile Microwave Photonics Applications

    Abstract: Dispersive Fourier transformation (DFT) uses chromatic dispersion to map the broadband spectrum of an ultrashort optical pulse into a time-stretched waveform whose intensity profile mirrors the spectrum. Owing to its capability for continuous, pulse-by-pulse spectroscopic measurement and manipulation, DFT has become an emerging technique for ultrafast signal generation and processing and for high-throughput real-time measurements, where the speed of traditional optical instruments falls short. In this paper, the principle and implementation methods of DFT are first introduced, and recent developments in employing the DFT technique for widespread microwave photonics applications are presented, with emphasis on real-time spectroscopy, microwave arbitrary waveform generation, and microwave spectrum sensing. Finally, possible future research directions for DFT-based microwave photonics techniques are discussed as well.
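
    To make the wavelength-to-time mapping at the heart of DFT concrete, the sketch below is a minimal illustration; the dispersion value, center wavelength, and Gaussian test spectrum are assumed examples, not values from the paper. Each spectral component is mapped to a delay t ≈ D_total·(λ − λ0), so the stretched temporal intensity profile mirrors the optical spectrum.

```python
import numpy as np

# Illustrative DFT (wavelength-to-time) mapping.  Each spectral component of a
# broadband pulse accumulates a delay proportional to its offset from the
# center wavelength, so the time-stretched intensity profile mirrors the
# optical spectrum.  All parameter values are assumed examples.

D_total = 1000.0       # accumulated dispersion, ps/nm (e.g. ~100 ps/(nm*km) over 10 km)
lambda0 = 1550.0       # center wavelength, nm
wl = np.linspace(lambda0 - 5.0, lambda0 + 5.0, 2001)        # optical wavelengths, nm

# Example input spectrum: Gaussian envelope with a weak spectral ripple,
# standing in for the broadband spectrum of an ultrashort pulse.
spectrum = np.exp(-((wl - lambda0) / 2.0) ** 2)
spectrum *= 1.0 + 0.2 * np.cos(2.0 * np.pi * (wl - lambda0) / 0.5)

# Wavelength-to-time mapping: the dispersed pulse reaching a photodetector and
# a real-time digitizer reproduces the spectrum on a time axis.
time_ps = D_total * (wl - lambda0)          # arrival time of each component, ps
stretched = spectrum                        # detected intensity vs. time

# A single-shot spectrum is recovered by rescaling time back to wavelength.
wl_recovered = lambda0 + time_ps / D_total
assert np.allclose(wl_recovered, wl)
print(f"stretched-pulse duration: {time_ps[-1] - time_ps[0]:.0f} ps")
```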

    Optimized techniques for real-time microwave and millimeter wave SAR imaging

    Microwave and millimeter wave synthetic aperture radar (SAR)-based imaging techniques, used for nondestructive evaluation (NDE), have shown tremendous usefulness for the inspection of a wide variety of complex composite materials and structures. Studies were performed to optimize uniform and nonuniform sampling (i.e., measurement positions), since existing formulations of SAR resolution and sampling criteria do not account for all of the physical characteristics of a measurement (e.g., a 2D limited-size aperture, the electric field decreasing with distance from the measuring antenna, etc.), and nonuniform sampling criteria support sampling below the Nyquist rate. The results of these studies demonstrate optimum sampling given design requirements and fully explain the dependence of resolution on the sampling criteria. This work was then extended to manually selected and nonuniformly distributed samples, so that the intelligence of the user may be exploited by observing SAR images being updated in real time. Furthermore, a novel reconstruction method was devised that uses components of the SAR algorithm to advantageously exploit the inherent spatial information contained in the data, resulting in a superior final SAR image. In addition, better SAR images can be obtained if multiple frequencies are utilized rather than a single frequency; to this end, the design of an existing microwave imaging array was modified to support multiple-frequency measurement. Lastly, the data of interest in such an array may be corrupted by coupling among the closely spaced elements, resulting in images with an increased level of artifacts. A method for correcting or pre-processing the data using an adaptation of a correlation-canceling technique is presented as well. --Abstract, page iii
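
    For reference, the sketch below collects the conventional textbook sampling and resolution rules that this work re-examines; the quarter-wavelength step, the far-field resolution expressions, and the Ka-band numbers are generic assumptions, not results from the thesis.

```python
# Conventional monostatic SAR design rules used as a point of reference; the
# thesis argues these ignore, e.g., the finite 2-D aperture and the decay of
# the field with distance from the antenna.  All numbers are assumed examples.

C = 299_792_458.0      # speed of light, m/s

def wavelength(f_hz: float) -> float:
    return C / f_hz

def max_uniform_step(f_max_hz: float) -> float:
    """Nyquist-based spatial sampling step for a wide-beam monostatic scan:
    roughly a quarter of the shortest wavelength."""
    return wavelength(f_max_hz) / 4.0

def cross_range_resolution(f_center_hz: float, standoff_m: float, aperture_m: float) -> float:
    """Far-field cross-range resolution for a synthetic aperture of the given length."""
    return wavelength(f_center_hz) * standoff_m / (2.0 * aperture_m)

def range_resolution(bandwidth_hz: float) -> float:
    """Down-range resolution obtained from a multi-frequency (swept) measurement."""
    return C / (2.0 * bandwidth_hz)

if __name__ == "__main__":
    f_lo, f_hi = 26.5e9, 40.0e9                 # assumed Ka-band sweep
    print(f"max uniform sampling step: {max_uniform_step(f_hi) * 1e3:.2f} mm")
    print(f"cross-range resolution:    {cross_range_resolution((f_lo + f_hi) / 2, 0.10, 0.15) * 1e3:.2f} mm")
    print(f"down-range resolution:     {range_resolution(f_hi - f_lo) * 1e3:.2f} mm")
```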

    Design of high speed folding and interpolating analog-to-digital converter

    High-speed, low-resolution analog-to-digital converters (ADCs) are key elements in the read channel of optical and magnetic data storage systems. The required resolution is about 6-7 bits, while the sampling-rate and effective-resolution-bandwidth requirements increase with each generation of storage system. Folding is a technique to reduce the number of comparators used in the flash architecture: by means of an analog preprocessing circuit in a folding A/D converter, the number of comparators can be reduced significantly. Folding architectures exhibit low power and low latency as well as the ability to run at high sampling rates. Folding ADCs that employ interpolation schemes to generate extra folding waveforms are called "folding and interpolating ADCs" (F&I ADCs). The aim of this research is to increase the input bandwidth of high-speed, low-latency F&I ADCs. Behavioral models are developed to analyze the bandwidth limitation at the architecture level. A front-end sample-and-hold (S/H) unit is employed to tackle the frequency-multiplication problem, which is intrinsic to all F&I ADCs. Current-mode signal processing is adopted to increase the bandwidth of the folding amplifiers and interpolators, which are the bottleneck of the whole system. An operational transconductance amplifier (OTA)-based folding amplifier, a current-mirror-based interpolator, and a very low-impedance fast current comparator are proposed and designed to carry out the current-mode signal processing. A new bit-synchronization scheme is proposed to correct the error caused by the delay difference between the coarse and fine channels. A prototype chip was designed and fabricated in a 0.35 μm CMOS process to verify these ideas. The S/H and F&I ADC prototype is realized in a 0.35 μm double-poly CMOS process (only one poly is used). Integral nonlinearity (INL) is 1.0 LSB and differential nonlinearity (DNL) is 0.6 LSB at 110 kHz. The ADC occupies 1.2 mm² of active area and dissipates 200 mW (excluding 70 mW for the S/H) from a 3.3 V supply. At a 300 MS/s sampling rate, the ADC achieves no less than 6 ENOB with input signals below 60 MHz. This is the highest input bandwidth (60 MHz) reported in the literature for this type of CMOS ADC at similar resolution and sample rate.
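
    The folding principle can be illustrated with a simple behavioral model, in the spirit of the architecture-level models mentioned above; the ideal zig-zag transfer characteristic and the 3-bit-coarse/4-bit-fine split below are assumptions for illustration, not the circuit described in the dissertation.

```python
# Behavioral sketch of folding: an N-bit flash ADC needs 2**N - 1 comparators,
# whereas a folding preprocessor lets a small coarse converter decide which
# fold the input lies in while a fine converter digitizes the folded residue,
# cutting the comparator count dramatically.  Assumed ideal model only.

def folded_signal(vin, vref, folds):
    """Ideal zig-zag folding characteristic: the output ramps up across even
    input segments of width vref/folds and ramps back down across odd ones."""
    width = vref / folds
    seg = int(vin // width)
    residue = vin - seg * width
    return residue if seg % 2 == 0 else width - residue

def folding_adc(vin, vref, coarse_bits, fine_bits):
    """Combine the coarse code (which fold) with the fine code (position in fold)."""
    folds = 1 << coarse_bits
    width = vref / folds
    coarse = min(int(vin // width), folds - 1)
    fine_levels = 1 << fine_bits
    fine = min(int(folded_signal(vin, vref, folds) / width * fine_levels), fine_levels - 1)
    if coarse % 2 == 1:          # odd folds ramp downward, so mirror the fine code
        fine = fine_levels - 1 - fine
    return (coarse << fine_bits) | fine

if __name__ == "__main__":
    vref = 2.0
    for v in (0.13, 0.77, 1.21, 1.93):
        ideal = int(v / vref * 128)              # reference 7-bit flash code
        print(f"vin = {v:.2f} V  ->  code {folding_adc(v, vref, 3, 4):3d}  (ideal {ideal})")
```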

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in various applications emphasize high dynamic range and low spurious spectral content. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement, whether for certain applications or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling – the short-channel effects – loom large as technology strides into the deep-submicron regime.
The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, each with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification.
This thesis is unique in that it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it incorporates circuit techniques and algorithms that enhance the resolution and attainable sample rate of an A/D converter, and that enhance testing and debugging so that errors can be detected dynamically, faults isolated and confined, and errors recovered and compensated for continuously. The power efficiency attainable at high resolution in a multi-step converter, achieved by combining parallelism and calibration and by exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18 µm CMOS process.
Lower power-supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those specified at the end of the design process. Such defects can cause various types of malfunction, depending on the IC topology and the nature of the defect. To relieve the burden that the ever-increasing costs of testing and debugging complex mixed-signal electronic systems place on IC design and manufacturing, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT, and BIST methodologies.
Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced through new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and of the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of the effects of excessive process-parameter variation. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation-function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up any part of the conversion time: it works continuously with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09 µm CMOS process.
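
    The abstract does not spell out the adaptive filtering algorithm, but a plain least-mean-squares (LMS) update has exactly the properties claimed: a handful of multiply-accumulate operations per iteration and no correlation-function estimates or matrix inversions. The sketch below is an assumed illustration that uses LMS to track an unknown inter-stage gain from ordinary conversion data; the gain values, step size, and noise level are made up for the example.

```python
import numpy as np

# Assumed illustration: an LMS update tracking an unknown inter-stage gain of a
# two-step converter from normal conversion data, so the digital outputs can be
# corrected continuously.  Each iteration costs only a few multiply-accumulates
# and needs no correlation matrices or matrix inversions.

rng = np.random.default_rng(0)

true_gain = 3.97        # actual residue-amplifier gain (nominal 4.0)
gain_est = 4.00         # initial estimate used by the digital correction
mu = 1e-3               # LMS step size

for _ in range(100_000):
    residue = rng.uniform(-0.5, 0.5)                          # stage residue (arbitrary units)
    backend = true_gain * residue + rng.normal(scale=1e-3)    # fine-ADC observation, with noise
    error = backend - gain_est * residue                      # prediction error
    gain_est += mu * error * residue                          # LMS update: O(1) work per sample

print(f"estimated gain: {gain_est:.4f}  (true value {true_gain})")
```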

    A procedure for testing the significance of orbital tuning of the martian polar layered deposits

    Layered deposits of dusty ice in the martian polar caps have been hypothesized to record climate changes driven by orbitally induced variations in the distribution of incoming solar radiation. Attempts to identify such an orbital signal by tuning a stratigraphic sequence of polar layered deposits (PLDs) to match an assumed forcing introduce a risk of identifying spurious matches between unrelated records. We present an approach for evaluating the significance of matches obtained by orbital tuning, and investigate the utility of this approach for identifying orbital signals in the Mars PLDs. Using a set of simple models for ice and dust accumulation driven by insolation, we generate synthetic PLD stratigraphic sequences with nonlinear time-depth relationships. We then use a dynamic time warping algorithm to attempt to identify an orbital signal in the modeled sequences, and apply a Monte Carlo procedure to determine whether this match is significantly better than a match to a random sequence that contains no orbital signal. For simple deposition mechanisms in which dust deposition rate is constant and ice deposition rate varies linearly with insolation, we find that an orbital signal can be confidently identified if at least 10% of the accumulation time interval is preserved as strata. Addition of noise to our models raises this minimum preservation requirement, and we expect that more complex deposition functions would generally also make identification more difficult. In light of these results, we consider the prospects for identifying an orbital signal in the actual PLD stratigraphy, and conclude that this is feasible even with a strongly nonlinear relationship between stratigraphic depth and time, provided that a sufficient fraction of time is preserved in the record and that ice and dust deposition rates vary predictably with insolation. Independent age constraints from other techniques may be necessary, for example, if an insufficient amount of time is preserved in the stratigraphy.
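
    A minimal sketch of the matching and significance-testing machinery is given below: a textbook dynamic time warping cost plus a small Monte Carlo null distribution. The toy forcing, the power-law time-depth distortion, the noise level, and the number of null trials are assumptions for illustration, not the models or parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def dtw_cost(a, b):
    """Classic dynamic time warping alignment cost between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy stand-ins: an "insolation" forcing, a synthetic stratigraphic profile
# built from a nonlinearly warped, subsampled copy of that forcing plus noise,
# and a Monte Carlo null distribution from random series of the same variance.
t = np.linspace(0.0, 1.0, 200)
forcing = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 3.5 * t)
depth = np.linspace(0.0, 1.0, 80) ** 1.5                    # nonlinear time-depth relationship
strata = np.interp(depth, t, forcing) + rng.normal(scale=0.1, size=depth.size)

observed = dtw_cost(strata, forcing)
null = [dtw_cost(rng.normal(scale=strata.std(), size=strata.size), forcing) for _ in range(50)]
p_value = (1 + sum(c <= observed for c in null)) / (1 + len(null))
print(f"DTW cost {observed:.1f}, Monte Carlo p ~= {p_value:.3f}")
```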

    A hardware-embedded, delay-based PUF engine designed for use in cryptographic and authentication applications

    Cryptographic and authentication applications in application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs), as well as codes for the activation of on-chip features, require the use of embedded secret information. The generation of secret bitstrings using physical unclonable functions, or PUFs, provides several distinct advantages over conventional methods, including the elimination of costly non-volatile memory and the potential to increase the number of random bits available to applications. In this dissertation, a Hardware-Embedded Delay PUF (HELP) is proposed that is designed to leverage path-delay variations that occur in the core logic macros of a chip to create random bitstrings. A thorough discussion is provided of the operational details of an embedded path-timing structure called REBEL, which HELP uses to provide the timing functionality that serves as the entropy source for the cryptographic quality of the bitstrings. Further details of the FPGA-based implementation used to prove the viability of the HELP PUF concept are included, along with a discussion of the evolution of the techniques employed in realizing the final PUF engine design. The bitstrings produced by a set of 30 FPGA boards are evaluated with regard to several statistical quality metrics, including uniqueness, randomness, and stability. The stability characteristics of the bitstrings are evaluated by subjecting the FPGAs to commercial-grade temperature and power-supply-voltage variations. In particular, this work evaluates the reproducibility of the bitstrings generated at 0 °C, 25 °C, and 70 °C, and under 10% variations of the rated supply voltage. A pair of error-avoidance schemes is proposed that provides significant improvements to the HELP PUF's resilience against bit-flip errors in the bitstrings.
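
    The uniqueness and stability metrics named above are commonly computed as inter-chip and intra-chip Hamming distances; the sketch below does exactly that on synthetic bitstrings. The 30-chip/2048-bit sizes and the 3% bit-flip rate are assumptions standing in for the real FPGA data, not measurements from this work.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data: one enrollment bitstring per "chip" plus a
# regenerated bitstring per chip with a small bit-flip probability standing in
# for temperature/voltage stress.  Sizes and flip rate are assumptions.
n_chips, n_bits, p_flip = 30, 2048, 0.03
enrolled = rng.integers(0, 2, size=(n_chips, n_bits), dtype=np.uint8)
regenerated = enrolled ^ (rng.random((n_chips, n_bits)) < p_flip)

def hd_fraction(a, b):
    """Fraction of bit positions that differ between two equal-length bitstrings."""
    return float(np.mean(a != b))

# Uniqueness: mean inter-chip Hamming distance (ideal value 0.5).
inter = [hd_fraction(enrolled[i], enrolled[j])
         for i in range(n_chips) for j in range(i + 1, n_chips)]

# Stability: mean intra-chip Hamming distance between enrollment and
# regeneration under stress (ideal value 0.0).
intra = [hd_fraction(enrolled[i], regenerated[i]) for i in range(n_chips)]

print(f"uniqueness  (mean inter-chip HD): {np.mean(inter):.3f}")
print(f"instability (mean intra-chip HD): {np.mean(intra):.3f}")
```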

    Proceedings of the Third International Mobile Satellite Conference (IMSC 1993)

    Satellite-based mobile communications systems provide voice and data communications to users over a vast geographic area. The users may communicate via mobile or hand-held terminals, which may also provide access to terrestrial cellular communications services. While the first and second International Mobile Satellite Conferences (IMSC) mostly concentrated on technical advances, this Third IMSC also focuses on the increasing worldwide commercial activities in Mobile Satellite Services. Because of the large service areas provided by such systems, it is important to consider political and regulatory issues in addition to technical and user-requirements issues. Topics covered include: the direct broadcast of audio programming from satellites; spacecraft technology; regulatory and policy considerations; advanced system concepts and analysis; propagation; and user requirements and applications.

    Eighth International Workshop on Laser Ranging Instrumentation

    The Eighth International Workshop on Laser Ranging Instrumentation was held in Annapolis, Maryland, in May 1992 and was sponsored by the NASA Goddard Space Flight Center in Greenbelt, Maryland. The workshop is held once every two to three years under differing institutional sponsorship and provides a forum for participants to exchange information on the latest developments in satellite and lunar laser ranging hardware, software, science applications, and data analysis techniques. The satellite laser ranging (SLR) technique provides sub-centimeter-precision range measurements to artificial satellites and to the Moon. The data have applications across a wide range of Earth and lunar science issues, including precise orbit determination, terrestrial reference frames, geodesy, geodynamics, oceanography, time transfer, lunar dynamics, gravity, and relativity.

    Advanced scanners and imaging systems for earth observations

    Assessments of present and future sensors and sensor-related technology are reported, along with a description of user needs and applications. Five areas are covered: (1) electromechanical scanners, (2) self-scanned solid-state sensors, (3) electron beam imagers, (4) sensor-related technology, and (5) user applications. Recommendations, charts, system designs, technical approaches, and bibliographies are included for each area.