
    Chaos-Based Bitwise Dynamical Pseudorandom Number Generator on FPGA

    In this paper, a new pseudorandom number generator (PRNG) based on the logistic map has been proposed. To prevent the system from falling into short-period orbits and to increase the randomness of the generated sequences, the proposed algorithm dynamically changes the parameters of the chaotic system. This PRNG has been implemented in a Virtex 7 field-programmable gate array (FPGA) with 32-bit fixed-point precision, using a total of 510 lookup tables (LUTs) and 120 registers. The sequences generated by the proposed algorithm have been subjected to the National Institute of Standards and Technology (NIST) randomness tests, passing all of them. By comparing the randomness with the sequences generated by a raw 32-bit logistic map, it is shown that, using only an additional 16% of LUTs, the proposed PRNG achieves much better performance in terms of randomness, increasing the NIST passing rate from 0.252 to 0.989. Finally, the proposed bitwise dynamical PRNG is compared with other previously proposed chaos-based realizations, showing a great improvement in terms of resources and randomness.
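    The paper's exact parameter-update rule is not given in the abstract; the following Python sketch only illustrates the general idea of a bitwise dynamical logistic-map PRNG (a fixed-point logistic map whose control parameter is perturbed each iteration, with one output bit taken per step). The constants, the perturbation rule and the function names are illustrative assumptions, not the published design.

        # Illustrative sketch only: fixed-point logistic map with a dynamically
        # perturbed control parameter; one pseudorandom bit is emitted per iteration.
        FRAC_BITS = 32              # fractional bits, matching the 32-bit fixed-point datapath
        ONE = 1 << FRAC_BITS        # fixed-point representation of 1.0

        def logistic_step(x, r):
            # x in [0, ONE); r is the control parameter scaled by ONE (around 3.99 * ONE)
            return (r * ((x * (ONE - x)) >> FRAC_BITS)) >> FRAC_BITS

        def dynamical_prng(seed, r0, perturb_mask, n_bits):
            """Return n_bits pseudorandom bits from a logistic map with a drifting parameter."""
            x, r, out = seed, r0, []
            for _ in range(n_bits):
                x = logistic_step(x, r)
                # Hypothetical dynamical update: jitter the low bits of the parameter
                # with the current state to avoid short-period orbits.
                r = r0 ^ (x & perturb_mask)
                out.append((x >> (FRAC_BITS - 1)) & 1)   # take the top fractional bit
            return out

        bits = dynamical_prng(seed=0x3A5F19C7, r0=int(3.99 * ONE), perturb_mask=0xFFFF, n_bits=128)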

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling – the short-channel effects – are also looming large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show the trend that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis is in a sense unique, as it covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency achievable at high resolution in a multi-step converter by combining parallelism and calibration and exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. Those defects can cause various types of malfunction, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging of complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT and BIST methodologies.
Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good estimates of the parameters for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancing calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up a part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
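    The abstract does not name the adaptive filtering algorithm, but its description (a small, fixed number of operations per iteration, no correlation estimates, no matrix inversions) matches an LMS-style stochastic-gradient update. The Python sketch below shows such an error estimator in generic form; the tap count, step size and test signals are illustrative assumptions, not the thesis implementation.

        import numpy as np

        # Generic LMS adaptive estimator: each iteration costs only a few
        # multiply-accumulates per tap and needs no correlation matrix or inversion.
        def lms_estimate(x, d, n_taps=4, mu=0.01):
            """Adapt weights w so that w @ [x[n], x[n-1], ...] tracks the reference d[n]."""
            w = np.zeros(n_taps)
            for n in range(n_taps - 1, len(x)):
                u = x[n - n_taps + 1:n + 1][::-1]   # most recent n_taps input samples
                e = d[n] - w @ u                    # instantaneous estimation error
                w = w + mu * e * u                  # stochastic-gradient weight update
            return w

        # Toy usage: recover a small, hypothetical linear mismatch between two paths.
        rng = np.random.default_rng(0)
        x = rng.standard_normal(10_000)
        true_w = np.array([0.95, 0.02, 0.0, 0.0])
        d = np.convolve(x, true_w)[:len(x)]
        print(lms_estimate(x, d))                   # approaches true_w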

    Enhanced receiver architectures for processing multi GNSS signals in a single chain : based on partial differential equations mathematical model

    The focus of our research is on designing a new architecture (RF front-end and digital) for processing multi GNSS signals in a single receiver chain. The motivation is to save on the overhead cost (size, processing time and power consumption) of implementing multiple signal receivers side-by-side on-board Smartphones. This thesis documents the new multi-signal receiver architecture that we have designed. Based on this architecture, we have achieved/published eight novel contributions. Six of these implementations focus on multi GNSS signal receivers, and the last two are for multiplexing Bluetooth and GPS received signals in a single processing chain. We believe our work, in terms of the new innovative techniques achieved, is a major contribution to the commercial world, especially that of Smartphones. Savings in both silicon size and processing time will be highly beneficial for reducing costs but, more importantly, for conserving the energy of the battery. We are proud that we have made this significant contribution to both industry and the scientific research and development arena. The first part of the work focuses on the two GNSS signal detection front-end approaches that were designed to explore the availability of the L1 band of GPS, Galileo and GLONASS at an early stage. This is so that the receiver devotes appropriate resources to acquire them. The first approach is based on folding the carrier frequency of all three GNSS signals with their harmonics to the First Nyquist Zone (FNZ), as described by the BandPass Sampling Receiver technique (BPSR). Consequently, there is a unique power distribution of these folded signals, based on the signals actually present, that can be detected to alert the digital processing parts to acquire them. A Volterra series model is used to estimate the existing power in the FNZ by extracting the kernels of these folded GNSS signals, if available. The second approach filters out the right-side lobe of the GLONASS signal and the left-side lobe of the Galileo signal prior to the folding process in our BPSR implementation. This filtering is important to enable non-overlapping folding of these two signals with the GPS signal in the FNZ. The simulation results show that adopting these two approaches can save much valuable acquisition processing time. Our Orthogonal BandPass Sampling Receiver and Orthogonal Complex BandPass Sampling Receiver are two methods designed to capture any two wireless signals simultaneously and use a single channel in the digital domain to process them, including tracking and decoding, concurrently. The novelty of the two receivers is centred on the Orthogonal Integrated Function (OIF) that continuously harmonises the two received signals to form a single orthogonal signal, allowing the “tracking and decoding” to be carried out by a single digital channel. These receivers employ a Hilbert Transform for shifting one of the input signals by 90 degrees. Then, the BPSR technique is used to fold back the two received signals to the same reference frequency in the FNZ. Results show that these methods also reduce the sampling frequency to a rate proportional to the maximum bandwidth of the input signals, instead of the sum of their bandwidths. Two combined GPS L1CA and L2C signal acquisition channels are designed based on applying the idea of the OIF to improve on the power consumption and implementation complexity of the existing combination methods and also to enhance the acquisition sensitivity.
This is achieved by removing the Doppler frequency of the two signals; our methods add the in-phase component of the L2C signal to the in-phase component of the L1CA signal, which is then shifted by 90 degrees before being added to the remaining components of these two signals, resulting in an orthogonal form of the combined signals. This orthogonal signal is then fed to our developed version of the parallel-code-phase-search engine. Our simulation results illustrate that the acquisition sensitivity of these signals is successfully improved by 5.0 dB, which is necessary for acquiring weak signals in harsh environments. The last part of this work focuses on the tracking stage when multiplexing Bluetooth and L1CA GPS signals in a single channel based on the concept of the OIF, where the tracking channel can be shared between the two signals without losing lock or degrading its performance. Two approaches are designed for integrating the two signals based on the mathematical analysis of the main function of the tracking channel, which is the Phase-Locked Loop (PLL). A mathematical model consisting of a set of differential equations has been developed to evaluate the PLL when it is used to track and demodulate two signals simultaneously. The simulation results show that the implementation of our approaches reduces the size and processing time by almost half.
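    The OIF itself is not specified in the abstract; the Python sketch below only illustrates the underlying idea of sharing one digital channel between two signals folded to the same reference frequency: one signal is shifted by 90 degrees with a Hilbert transform, the two are summed, and the single combined stream is later separated by projecting onto the in-phase and quadrature axes. The sample rate, carrier frequency and stand-in data signals are assumptions made purely for illustration.

        import numpy as np
        from scipy.signal import hilbert

        fs, f0 = 20e6, 2.5e6                         # assumed sample rate and folded carrier
        t = np.arange(0, 2e-3, 1 / fs)
        d1 = np.sign(np.cos(2 * np.pi * 1e3 * t))    # stand-in data on signal 1
        d2 = np.sign(np.sin(2 * np.pi * 2e3 * t))    # stand-in data on signal 2

        s1 = d1 * np.cos(2 * np.pi * f0 * t)         # both signals folded to the same carrier
        s2 = d2 * np.cos(2 * np.pi * f0 * t)
        combined = s1 + np.imag(hilbert(s2))         # s2 shifted by 90 degrees, then summed

        def lowpass(x, n=200):
            # crude moving-average low-pass filter, enough to reject the 2*f0 terms
            return np.convolve(x, np.ones(n) / n, mode="same")

        # A single channel recovers each data stream by projecting on orthogonal axes.
        rec_d1 = lowpass(combined * 2 * np.cos(2 * np.pi * f0 * t))
        rec_d2 = lowpass(combined * 2 * np.sin(2 * np.pi * f0 * t))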

    Index to NASA tech briefs, 1971

    The entries are listed by category, subject, author, originating source, source number/Tech Brief number, and Tech Brief number/source number. There are 528 entries.

    Advanced Technique and Future Perspective for Next Generation Optical Fiber Communications

    The optical fiber communication industry has gained unprecedented opportunities and achieved rapid progress in recent years. However, with the increase in data transmission volume and growing transmission demands, the optical communication field still needs to be upgraded to better meet the challenges of future development. Artificial intelligence technology in optical communication and optical networks is still in its infancy, but existing achievements show great application potential. In the future, with the further development of artificial intelligence technology, AI algorithms combining channel characteristics and physical properties will shine in optical communication. This reprint introduces some recent advances in optical fiber communication and optical networks, and provides alternative directions for the development of next-generation optical fiber communication technology.

    Middle Atmosphere Program. Handbook for MAP, volume 20

    Various topics related to investigations of the middle atmosphere are discussed. Numerical weather prediction, performance characteristics of weather profiling radars, determination of gravity wave and turbulence parameters, case studies of gravity-wave propagation, turbulence and diffusion due to gravity waves, the climatology of gravity waves, mesosphere-stratosphere-troposphere radar, antenna arrays, and data management techniques are among the topics discussed.

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression.

    Design and Implementation of a Novel Flash ADC for Ultra Wide Band Applications

    This dissertation presents the design and implementation of a novel flash ADC architecture for ultra wide band applications. The advancement in wireless technology takes us into a world without wires. Most wireless communication systems use digital signal processing to transmit as well as receive information. Real-world signals are analog. Due to the processing complexity of the analog signal, it is converted to digital form so that processing becomes easier. Development in the digital signal processor field has been rapid due to the advancement in integrated circuit technology over the last decade. Therefore, the analog-to-digital converter acts as an interface between analog signals and digital signal processing systems. The continuous speed enhancement of wireless communication systems places huge demands on the speed and power specifications of high-speed, low-resolution analog-to-digital converters. Even though wired technology is a primary mode of communication, the quality and efficiency of wireless technology allow it to be applied to biomedical applications, in-home services and even radar applications. These applications rely heavily on wireless technology to send and receive information at high speed with great accuracy. Ultra Wideband (UWB) technology is well suited to these applications. A UWB signal has a bandwidth of at least 500 MHz or a fractional bandwidth of at least 25 percent of its centre frequency. The two technology standards used in UWB are multiband orthogonal frequency division multiplexing ultra wideband (MB-OFDM) and carrier-free direct-sequence ultra wideband (DS-UWB). The ADC is the core of any UWB receiver. Generally, a high-speed flash ADC is used in a DS-UWB receiver. Two different flash ADC architectures are proposed in this thesis for DS-UWB applications. The first design is a high-speed five-bit flash ADC architecture with a sampling rate of 5 GS/s. The design is verified using the CADENCE tool with CMOS 90 nm technology. The total power dissipation of the ADC is 8.381 mW from a power supply of 1.2 V. The die area of the proposed flash ADC is 186 μm × 210 μm (0.039 mm²). The proposed flash ADC is analysed and compared with other designs in the literature having the same resolution, and it is concluded that it has the highest speed of operation with medium power dissipation. The second design is a reconfigurable five-bit flash ADC architecture with a sampling rate of 1.25 GS/s. The design is verified using the CADENCE tool with UMC 180 nm technology. The total power dissipation of the ADC is 11.71 mW from a power supply of 1.8 V. The die area of the implementation is 432 μm × 720 μm (0.31104 mm²). The chip tape-out of the proposed reconfigurable flash ADC has been made for fabrication.
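    The abstract describes flash converters without detailing the encoder; the Python sketch below is only a behavioural model of the generic flash ADC architecture class (a comparator bank against a resistor-ladder reference followed by thermometer-to-binary encoding). The reference voltage, thresholds and code mapping are illustrative assumptions, not the circuits proposed in the thesis.

        import numpy as np

        N_BITS = 5
        VREF = 1.2                                    # assumed full-scale reference (volts)
        LEVELS = 2 ** N_BITS

        # Resistor-ladder taps: 2^N - 1 comparator thresholds across the input range.
        thresholds = VREF * np.arange(1, LEVELS) / LEVELS

        def flash_adc(vin):
            """Convert one analog sample to a 5-bit output code."""
            thermometer = vin > thresholds            # comparator bank (thermometer code)
            code = int(np.count_nonzero(thermometer)) # ones-counting thermometer-to-binary step
            return min(code, LEVELS - 1)              # clamp at full scale

        print([flash_adc(v) for v in [0.0, 0.3, 0.61, 1.19]])   # prints [0, 7, 16, 31]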

    Objective Approaches to Single-Molecule Time Series Analysis

    Single-molecule spectroscopy has provided a means to uncover pathways and heterogeneities that were previously hidden beneath the ensemble average. Such heterogeneity, however, is often obscured by the artifacts of experimental noise and the occurrence of undesired processes within the experimental medium. This has subsequently resulted in the need for new analytical methodologies. It is particularly important that objectivity be maintained in the development of new analytical methodology so that bias is not introduced and the results are not improperly characterized. The research presented herein identifies two such sources of experimental uncertainty and constructs objective approaches to reduce their effects on the experimental results. The first, photoblinking, arises from the occupation of dark electronic states within the probe molecule, resulting in experimental data that is distorted by its contribution. A method based on Bayesian inference is developed, and is found to nearly eliminate photoblinks from the experimental data while minimally affecting the remaining data and maintaining objectivity. The second source of uncertainty is electronic shot noise, which arises as a result of Poissonian photon collection. A method based on wavelet decomposition is constructed and applied to simulated and experimental data. It is found that, while making only one assumption, that photon collection is indeed a Poisson process, up to 75% of the shot-noise contribution may be removed from the experimental signal by the wavelet-based procedure. Lastly, in an effort to connect model-based approaches such as molecular dynamics simulation to model-free approaches that rely solely on the experimental data, a coarse-grained molecular model of a molecular ionic fluorophore diffusing within an electrostatically charged polymer brush is constructed and characterized. It is found that, while the characteristics of the coarse-grained simulation compare well with atomistic simulations, the model is lacking in its representation of the electrostatically driven behavior of the experimental system.
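    The wavelet procedure developed in the thesis is not described in detail in the abstract; the Python sketch below only illustrates a common generic recipe for shot-noise reduction under the same single assumption of Poisson photon counting: a variance-stabilizing (Anscombe) transform followed by soft thresholding of wavelet detail coefficients. The wavelet, decomposition level and threshold are illustrative choices, not those used in the work.

        import numpy as np
        import pywt

        def denoise_photon_trace(counts, wavelet="sym8", level=5):
            # Anscombe transform: Poisson counts -> approximately unit-variance Gaussian noise.
            y = 2.0 * np.sqrt(np.asarray(counts, dtype=float) + 3.0 / 8.0)

            coeffs = pywt.wavedec(y, wavelet, level=level)
            thr = np.sqrt(2.0 * np.log(len(y)))          # universal threshold for unit variance
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            y_hat = pywt.waverec(coeffs, wavelet)[:len(y)]

            # Approximate inverse Anscombe transform back to count units.
            return (y_hat / 2.0) ** 2 - 3.0 / 8.0

        # Toy usage: a slowly varying emission rate observed through Poisson counting.
        rng = np.random.default_rng(1)
        rate = 20 + 10 * np.sin(np.linspace(0, 6 * np.pi, 4096))
        smooth = denoise_photon_trace(rng.poisson(rate))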