252 research outputs found

    New fast arctangent approximation algorithm for generic real-time embedded applications

    Get PDF
    Fast and accurate arctangent approximations are used in several contemporary applications, including embedded systems, signal processing, radar, and power systems. Three main approximation techniques are well established in the literature, varying in their accuracy and resource utilization: the iterative coordinate rotation digital computer (CORDIC), lookup-table (LUT)-based methods, and rational formulae. This paper presents a novel technique that combines the advantages of the rational-formula and LUT approximation methods. The new algorithm exploits the pseudo-linear region around the zero point of the tangent function to estimate a reduced-input arctangent through a modified rational approximation, and then refers this estimate back to its original value using miniature LUTs. A new 2nd-order rational approximation formula is introduced for the first time in this work and benchmarked against existing alternatives, as it improves the performance of the new algorithm. The eZDSP-F28335 platform has been used for practical implementation and validation of the proposed technique. The contributions of this work are summarized as follows: (1) a new approximation algorithm with high precision and application-based flexibility; (2) a new rational approximation formula that outperforms literature alternatives within the algorithm at higher accuracy requirements; and (3) a practical evaluation index for rational approximations in the literature. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. Funding: The publication of this article was funded by the Qatar National Library.
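
The paper's own 2nd-order formula and LUT fold-back are not reproduced in the abstract, so the sketch below only illustrates the general recipe it builds on: fold the argument into the pseudo-linear region |x| <= 1, apply a small rational approximation, then map the result back. It uses a standard textbook approximation rather than the authors' formula, and all names and constants are illustrative.

```python
import math

def atan_rational(x: float) -> float:
    """Approximate atan(x) for any real x using a classic rational
    approximation on |x| <= 1 plus the reciprocal identity for |x| > 1.
    This is NOT the paper's new formula; it is a standard stand-in
    (atan(t) ~ t / (1 + 0.28 t^2), error on the order of a few mrad)."""
    sign = -1.0 if x < 0 else 1.0
    ax = abs(x)
    if ax <= 1.0:
        # pseudo-linear region around zero: small rational approximation
        y = ax / (1.0 + 0.28 * ax * ax)
    else:
        # fold large arguments back into [0, 1]: atan(ax) = pi/2 - atan(1/ax)
        t = 1.0 / ax
        y = math.pi / 2.0 - t / (1.0 + 0.28 * t * t)
    return sign * y

if __name__ == "__main__":
    worst = max(abs(atan_rational(x) - math.atan(x))
                for x in (i / 100.0 - 50.0 for i in range(10001)))
    print(f"max abs error over [-50, 50]: {worst:.2e} rad")  # roughly 5e-3 rad
```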

    Digital instrumentation for the measurement of high spectral purity signals

    Get PDF
    Improvements in electronic technology in recent years have allowed the application of digital techniques in time and frequency metrology, where low noise and high accuracy are required, yielding flexibility in system implementation and setup. This results in measurement systems with extended capabilities, additional functionality and ease of use. The Analog to Digital Converters (ADCs) and Digital to Analog Converters (DACs), as the system front-end, set the ultimate noise performance of the system. Characterizing the noise of these components makes it possible to assess the feasibility of implementing new techniques and to select components appropriate to the application requirements. Moreover, most commercial FPGA-based platforms are clocked by quartz oscillators whose accuracy and frequency stability are not suitable for many time and frequency applications. In this case, it is possible to take advantage of the internal Phase Locked Loop (PLL) to generate the internal clock from an external frequency reference. However, the PLL phase noise can degrade the oscillator stability and thereby limit the performance of the entire system, making the PLL a critical component of digital instrumentation. The information currently available in the literature describes in depth the features of these devices at frequency offsets far from the carrier, whereas the behaviour close to the carrier is the more important concern for time and frequency applications. In this frame, my PhD work focuses on understanding the limitations of the critical blocks of digital instrumentation for time and frequency metrology. The aim is to characterize the noise introduced by these blocks so as to predict their effects in a specific application. This is done by modeling the noise introduced by each component and by describing the components in terms of general and technical parameters. The parameters of the models are identified and extracted with methods chosen according to each component's operation. This work was validated by characterizing a commercially available platform, Red Pitaya. This platform is an open-source embedded system whose resolution and speed (14 bit, 125 MSps) are reasonably close to the state of the art of ADCs and DACs (16 bit, 350 MSps or 14 bit, 1 GSps or 3 GSps) and are potentially sufficient for the implementation of a complete instrument. The characterization results establish the noise limitations of the platform and provide guidelines for instrumentation design. Based on the results of the noise characterization, a digital instrument for frequency transfer over a fiber link was implemented on the Red Pitaya platform. In this project, a digital implementation for the detection and compensation of the phase noise induced by the fiber is proposed. The beat note, which represents the fiber length variations, is acquired directly with a high-speed ADC followed by a fully digital phase detector. Based on the characterization results, a limitation of the phase noise measurement imposed by the PLL was expected. First measurements of this implementation were performed using the 150 km-long buried fibers, placed in the same cables, between INRiM and the Laboratoire Souterrain de Modane (LSM) on the Italy-France border. The two fibers are joined together at LSM to obtain a 300 km loop with both ends at INRiM.
From these results, the noise introduced by the digital system was verified to be in agreement with the characterization results. Further tests and improvements will be performed to obtain a finished system, which is intended to be used on the 642 km Italian Link for Frequency and Time from Turin to Florence and on its planned extension to the rest of Italy. Currently, a higher-performance platform is being assessed by applying the tools and concepts developed during the PhD. The purpose of this project is the implementation of a state-of-the-art phasemeter whose architecture is based on the DAC. In order to estimate the ultimate performance of the instrument, the DAC characterization is under development and preliminary measurements are also reported here.
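
As a rough illustration of the "high-speed ADC followed by a fully digital phase detector" signal path described above, the sketch below extracts the phase of a sampled beat note by I/Q demodulation against a digital local oscillator, low-pass filtering, and phase unwrapping. The sampling rate, beat frequency, filter and noise model are placeholders, not parameters of the actual instrument.

```python
import numpy as np

fs = 125e6          # placeholder sample rate (Red Pitaya-class ADC)
f_beat = 20e6       # placeholder nominal beat-note frequency
n = 2**16

t = np.arange(n) / fs
# synthetic beat note with a slow phase drift standing in for fiber noise
true_phase = 0.3 * np.sin(2 * np.pi * 37.0 * t)
x = np.cos(2 * np.pi * f_beat * t + true_phase) + 0.01 * np.random.randn(n)

# I/Q demodulation against a digital local oscillator at the nominal frequency
lo = np.exp(-1j * 2 * np.pi * f_beat * t)
iq = x * lo

# crude low-pass: moving average to reject the 2*f_beat component
k = 64
iq_f = np.convolve(iq, np.ones(k) / k, mode="same")

# phase detector output: unwrapped argument of the filtered I/Q signal
phase = np.unwrap(np.angle(iq_f))
print("recovered phase std (rad):", np.std(phase - phase.mean()))
```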

    Architectures for soft-decision decoding of non-binary codes

    Full text link
    This thesis studies the design of non-binary decoders for error correction in modern high-speed communication systems. The goal is to propose low-complexity solutions for decoding algorithms based on non-binary low-density parity-check (NB-LDPC) codes and on Reed-Solomon codes, with the aim of implementing efficient hardware architectures. The first part of the thesis analyses the bottlenecks in NB-LDPC decoding algorithms and decoder architectures and proposes low-complexity, high-speed solutions based on symbol flipping. First, flooding-schedule solutions are studied with the objective of reaching the highest possible throughput without regard to coding gain. Two different decoders based on clipping and blocking techniques are proposed; however, their maximum frequency is limited by excessive wiring. For this reason, several methods for reducing the routing problems of NB-LDPC codes are explored, and a partial-broadcast architecture for symbol-flipping algorithms that mitigates routing congestion is proposed. Since the fastest flooding-schedule solutions are suboptimal in terms of error-correction capability, serial-schedule solutions were designed with the aim of reaching higher speed while keeping the coding gain of the original symbol-flipping algorithms. Two serial-schedule algorithms and architectures are presented, reducing area and increasing the maximum achievable speed. Finally, the symbol-flipping algorithms are generalized, and it is shown that some particular cases can achieve a coding gain close to that of the Min-sum and Min-max algorithms with lower complexity; an efficient architecture is also proposed, halving the area compared with a direct-mapping solution. In the second part of the thesis, soft-decision Reed-Solomon decoding algorithms are compared, concluding that the low-complexity Chase (LCC) algorithm is the most efficient solution when high speed is the main goal. However, LCC schemes are based on interpolation, which introduces hardware limitations due to its complexity. In order to reduce complexity without modifying the error-correction capability, a soft-decision LCC scheme based on hard-decision algorithms is proposed, and an efficient architecture is designed for this new scheme. García Herrero, FM. (2013). Architectures for soft-decision decoding of non-binary codes [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33753
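
As a small illustration of the Chase-style step that LCC decoders build on (not the thesis's proposed scheme), the sketch below generates the 2^eta test vectors that a low-complexity Chase decoder would pass to a hard-decision Reed-Solomon decoder: it takes the eta least reliable symbols and tries both the best and the second-best candidate in each of those positions. All names and the reliability model are illustrative.

```python
from itertools import product

def lcc_test_vectors(hard, second_best, reliability, eta):
    """Enumerate the 2**eta Chase test vectors.

    hard        : list of hard-decision symbols (one per codeword position)
    second_best : list of the second most likely symbol per position
    reliability : list of reliability metrics (lower = less reliable)
    eta         : number of least reliable positions to vary
    """
    # indices of the eta least reliable symbols
    weak = sorted(range(len(hard)), key=lambda i: reliability[i])[:eta]
    for choice in product((0, 1), repeat=eta):
        vec = list(hard)
        for pos, flip in zip(weak, choice):
            if flip:
                vec[pos] = second_best[pos]
        yield vec

# toy usage: 8 positions over GF(16), eta = 3 -> 8 candidate vectors
hard = [3, 7, 1, 12, 0, 5, 9, 2]
second = [4, 6, 2, 11, 1, 5, 8, 3]
rel = [0.9, 0.2, 0.8, 0.1, 0.95, 0.3, 0.7, 0.6]
for v in lcc_test_vectors(hard, second, rel, eta=3):
    print(v)  # each vector would be fed to a hard-decision RS decoder
```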

    High-performance and hardware-aware computing: proceedings of the second International Workshop on New Frontiers in High-performance and Hardware-aware Computing (HipHaC'11), San Antonio, Texas, USA, February 2011; (in conjunction with HPCA-17)

    Get PDF
    High-performance system architectures are increasingly exploiting heterogeneity. The HipHaC workshop aims at combining new aspects of parallel, heterogeneous, and reconfigurable microprocessor technologies with concepts of high-performance computing and, particularly, numerical solution methods. Compute- and memory-intensive applications can only benefit from the full hardware potential if all features on all levels are taken into account in a holistic approach.

    A Monte Carlo simulation framework for performance evaluation of a proton imaging system without front trackers

    Get PDF
    Today, radiotherapy is one of the main methods for cancer treatment. It irradiates a tumour with a prescribed dose according to a dose plan designed to cover the tumour cells while sparing surrounding healthy tissue and organs at risk as much as possible. As radiotherapy has improved and increased the survival rates for several types of cancer over the years, the reduction of long- and short-term side-effects has become an important focus in modern radiotherapy. One of the most severe side-effects is secondary cancer, which can occur decades after treatment. Particle therapy has the potential to reduce the risk of long- and short-term side-effects by reducing the irradiated volume and enabling more sparing of the healthy tissue surrounding the tumour. This is possible because charged particles stopping in matter deposit an increased dose at the end of their range. To ensure accurate treatment with particles, it is imperative that the particles stop inside the tumour volume. Today, particle therapy dose plans are based on X-ray computed tomography (CT) images of the patient that are converted to Relative Stopping Power (RSP) to calculate how the particles will deposit dose inside the patient. Because of the calibration involved and the difference between photon and charged-particle interactions in matter, this conversion is associated with uncertainties of up to 3.5% in some cases; this can result in a misplacement of the distal dose of several mm and necessitates the inclusion of treatment margins around the tumour volume. Proton CT is an imaging method that circumvents this conversion step by using protons as the imaging particle and calculating the RSP directly for dose-planning purposes, so proton CT can potentially make particle therapy even more accurate. Proton CT uses a proton beam with sufficient initial energy to pass through the patient and enter a detector that measures the residual energy of each proton. The energy loss of each proton is then used to reconstruct a volumetric stopping-power map of the patient for dose planning. Because charged particles scatter in matter, path estimations, i.e. the most likely path of each individual proton as it traverses the patient, are needed to distribute the energy-loss locations more accurately. This typically requires two sets of position-sensitive detector systems (tracker planes), one upstream (front) and one downstream (rear) of the patient, to measure the proton entrance and exit positions for Most Likely Path (MLP) estimation. Since proton imaging does not yet exist in the clinic, one idea for adapting a proton CT detector assembly to proton therapy treatment rooms, and bringing proton imaging a step closer to clinical implementation, is to remove the front trackers and instead rely on pencil beam scanning and rear trackers (a single-sided imaging setup) for the path estimations. A GEANT4/GATE-based Monte Carlo (MC) simulation environment was designed to provide the MC framework needed to investigate proton imaging setups both with and without front trackers. The MC-calculated proton positions on the position-sensitive tracker planes are used to reconstruct proton radiographs and proton CT images. The MC simulation environment is based on pencil beam scanning irradiating standardized Catphan® phantoms for spatial-resolution and RSP-accuracy investigations, as well as a clinically relevant paediatric head phantom.
The pencil beam spot-size and spot-spacing parameters were varied in the single-sided setup to identify and study their effect on the MLP and on image quality, while a conventional proton imaging setup with both front and rear trackers (double-sided) was used as a gold standard in the comparisons. The quality of the reconstructed proton radiographs and proton CT images was quantified in terms of spatial resolution and RSP accuracy to evaluate the proton imaging setups under investigation. The impact on most-likely-path estimation and radiograph image quality was also investigated when modifying the pencil beam spot size (e.g. 7 mm and 3 mm full width at half maximum, FWHM) and the spot spacing (0.5, 1, and 2 times the FWHM) during pencil beam scanning. The practical use of the MC simulation framework was exemplified by modelling the proton CT Digital Tracking Calorimeter (DTC) prototype that has been designed and is under construction by the Bergen proton CT collaboration. The DTC is a single-sided imaging setup consisting of multiple layers of ALPIDE pixel sensors and aluminium energy absorbers, and it was modelled in the MC simulation framework with accurate material budgets. The DTC was investigated in terms of the resulting MLP accuracy and image quality using its expected tracker position resolution and RSP resolution. Simulations with the FLUKA MC code were performed to investigate the radiation environment the detector assembly is expected to be exposed to during irradiation of the patient. Potential radiation damage and effects such as single-event upsets in the radiation-sensitive FPGA readout electronics of the DTC were estimated from the FLUKA-calculated particle fluence and dose deposited in the FPGAs. When the sensitive FPGAs were placed at a distance of 100 cm or more perpendicular to the DTC, the readout was found to be radiation hard enough to remain operational for over 30 years without considerable radiation effects. The ALPIDE, in terms of its documented design limitations, was also found to be radiation hard enough to survive in this radiation environment for over 30 years. Image-quality analysis in the form of spatial resolution and RSP accuracy revealed that a single-sided proton imaging setup such as the Bergen DTC has the potential to be used for dose-planning purposes: the spatial resolution exceeded 3 line pairs per cm in the Catphan® CTP528 phantom module, and the RSP deviation from the reference values of the materials in the Catphan® CTP404 phantom module was below 0.5%. However, the proton CT reconstructed image of the paediatric head phantom showed that future studies focused on dose plans based on proton CT images are needed to better evaluate the impact of using a single-sided proton imaging setup. The MC simulation framework for proton imaging and image analysis is expected to be usable in future proton imaging studies by modifying the proton imaging setups and evaluating the resulting proton radiograph and proton CT image quality.
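
The central measurement described above converts each proton's energy loss into a water-equivalent path length (WEPL) before image reconstruction. The sketch below illustrates that step with the Bragg-Kleeman range-energy relation for water; the alpha and p values are approximate literature constants, and the function is an illustrative assumption rather than part of the thesis framework.

```python
# Bragg-Kleeman range-energy relation for protons in water: R(E) ~ ALPHA * E**P
# (ALPHA and P are approximate literature values; E in MeV, R in cm)
ALPHA = 0.0022
P = 1.77

def wepl_cm(e_in_mev: float, e_out_mev: float) -> float:
    """Water-equivalent path length from the energy lost in the patient,
    i.e. the difference of the residual ranges in water before and after."""
    if e_out_mev < 0 or e_out_mev > e_in_mev:
        raise ValueError("residual energy must satisfy 0 <= E_out <= E_in")
    return ALPHA * (e_in_mev ** P - e_out_mev ** P)

# example: a 230 MeV proton exits the patient with 120 MeV of residual energy
print(f"WEPL ~ {wepl_cm(230.0, 120.0):.1f} cm of water")  # roughly 23 cm
```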

    Small FPGA polynomial approximations with 3-bit coefficients and low-precision estimations of the powers of x

    Get PDF
    This paper presents small FPGA implementations of low-precision polynomial approximations of functions without multipliers. Our method uses degree-2 or degree-3 polynomial approximations with at most 3-bit coefficients and low-precision estimations of the powers of x. Here, we denote by 3-bit coefficients values with at most 3 non-zero and possibly non-contiguous signed bits (e.g. 1.0010001̄). This leads to very small operators by replacing the costly multipliers with a small number of additions. Our method provides approximations with very low average error and is suitable for signal processing applications.
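
To make the multiplier-free idea concrete, here is a small fixed-point sketch: each coefficient is written as at most three signed powers of two, so every coefficient multiplication reduces to shifts and additions, and the power of x is computed from a truncated, low-precision version of x. The coefficient values and word lengths are illustrative, not taken from the paper.

```python
FRAC = 12  # fixed-point fractional bits (illustrative word length)

def mul_by_3bit_coeff(x_fx: int, signed_powers) -> int:
    """Multiply a fixed-point value by a coefficient expressed as at most
    three signed powers of two, e.g. [(+1, -1), (+1, -4), (-1, -7)] for
    2**-1 + 2**-4 - 2**-7. Only shifts and adds are needed (no multiplier)."""
    acc = 0
    for sign, exp in signed_powers:
        term = x_fx << exp if exp >= 0 else x_fx >> -exp
        acc += sign * term
    return acc

def poly2_no_mult(x_fx: int, c0_fx: int, c1_sp, c2_sp, x2_bits: int = 6) -> int:
    """Evaluate c0 + c1*x + c2*x^2 with 3-bit coefficients and a
    low-precision (truncated) estimate of x^2."""
    # low-precision square: keep only the top bits of x before squaring
    x_trunc = (x_fx >> (FRAC - x2_bits)) << (FRAC - x2_bits)
    x2_fx = (x_trunc * x_trunc) >> FRAC      # small squarer on the truncated input
    return c0_fx + mul_by_3bit_coeff(x_fx, c1_sp) + mul_by_3bit_coeff(x2_fx, c2_sp)

# illustrative coefficients (not from the paper)
c0 = int(0.101 * (1 << FRAC))
c1 = [(+1, -1), (+1, -4), (-1, -7)]   # ~ 0.5547
c2 = [(-1, -3), (+1, -6), (+1, -9)]   # ~ -0.1074
x = int(0.73 * (1 << FRAC))
print(poly2_no_mult(x, c0, c1, c2) / (1 << FRAC))
```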

    Fast, area-efficient 32-bit LNS for computer arithmetic operations

    Get PDF
    The logarithmic number system (LNS) has been proposed as an alternative to floating point. Multiplication, division and square-root operations are accomplished with fixed-point arithmetic, but addition and subtraction are considerably more challenging. Recent work has demonstrated that these operations, too, can be done with speed and accuracy similar to their floating-point equivalents, but the necessary circuitry is complex. In particular, it is dominated by the need for large lookup tables for the storage of a non-linear function. This thesis describes the architectures required to implement a newly designed approach for producing a fast and area-efficient 32-bit LNS arithmetic unit. The designs are based on two different algorithms. First, a new cotransformation procedure is introduced for the singularity region encountered when performing subtractions, a technique that requires less total storage than the cotransformation method used in the previous LNS architecture. Secondly, an improvement to an existing interpolation process is proposed that also reduces the total tables to an extent that allows their easy synthesis in logic. Consequently, the total delays in the system can be significantly reduced. Comparison with the previous best LNS design and with floating-point units shows that the new LNS architecture offers significantly better speed while sustaining its accuracy within the floating-point limit. In addition, its implementation is more economical than the previous best LNS system and almost equivalent to an existing floating-point arithmetic unit. Funding: University Malaysia Perlis; Ministry of Higher Education, Malaysia.
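
As background to why LNS addition needs a stored non-linear function while multiplication does not, the sketch below represents values as (sign, log2 of the magnitude): products become fixed-point additions of the logs, whereas sums require evaluating log2(1 ± 2^d), the kind of function that cotransformation and interpolation schemes tabulate and approximate. The representation and function names are illustrative, not the thesis's architecture.

```python
import math

def to_lns(v: float):
    """Represent a non-zero real as (sign, log2|v|)."""
    return (1 if v >= 0 else -1), math.log2(abs(v))

def from_lns(s: int, e: float) -> float:
    return s * 2.0 ** e

def lns_mul(a, b):
    # multiplication is just a fixed-point addition of the exponents
    return a[0] * b[0], a[1] + b[1]

def lns_add(a, b):
    """Addition needs the non-linear Gaussian-logarithm functions
    log2(1 + 2**d) and log2(1 - 2**d) with d <= 0; in hardware these
    are the lookup-table / interpolation targets."""
    (sa, ea), (sb, eb) = a, b
    if ea < eb:                       # make 'a' the larger magnitude
        (sa, ea), (sb, eb) = (sb, eb), (sa, ea)
    d = eb - ea                       # d <= 0
    if sa == sb:
        return sa, ea + math.log2(1.0 + 2.0 ** d)
    # subtraction case: singular as d -> 0 (nearly equal magnitudes);
    # exact cancellation would yield zero, which LNS cannot represent
    return sa, ea + math.log2(1.0 - 2.0 ** d)

x, y = to_lns(6.5), to_lns(-2.25)
print(from_lns(*lns_mul(x, y)))   # -14.625
print(from_lns(*lns_add(x, y)))   #   4.25
```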

    Applications of MATLAB in Science and Engineering

    Get PDF
    The book consists of 24 chapters illustrating a wide range of areas where MATLAB tools are applied. These areas include mathematics, physics, chemistry and chemical engineering, mechanical engineering, biological (molecular biology) and medical sciences, communication and control systems, digital signal, image and video processing, and system modeling and simulation. Many interesting problems have been included throughout the book, and its contents will be beneficial for students and professionals in a wide range of fields.

    Generation of Level 1 Data Products and Validating the Correctness of Currently Available Release 04 Data for the GRACE Follow-On Laser Ranging Interferometer

    Get PDF
    The satellite pair of the Gravity Recovery and Climate Experiment (GRACE) Follow-On orbits the Earth while their inter-satellite distance changes are measured with an accuracy never reached before. This is achieved with the first Laser Ranging Interferometer (LRI) that operates between two distant spacecraft. The mission is based on a US-German collaboration for investigating Earth's gravitational field and its temporal variations. The LRI was developed with the involvement of the Albert Einstein Institute (AEI), and the instrument has been running reliably for about 3 years now. The AEI has an interest in verifying and validating the LRI Level 1 data products, to ensure that the officially provided LRI data (Release 04 or v04) is correct and useful for gravity field determination. Level 1 data results from the raw telemetry of the spacecraft and serves as an intermediate step before the actual gravity field solutions can be created. Furthermore, the Level 1 data is divided into Level 1A and Level 1B products, where Level 1B is the result of further processing of Level 1A. The author of this thesis has implemented a processing chain in the existing framework for data processing and data analysis at the AEI. The new processing chain generates alternative LRI Level 1A data products and especially the LRI1B product; these are referred to as v50 data. The data sets of v04 and v50 were compared in order to identify discrepancies between the two versions. It turns out that the LRI Level 1A v04 products show some minor imperfections, such as a few missing data packets, incorrect units, or time frame identifiers that do not match the product description. However, the LRI phase measurements within the LRI1A product are provided correctly, and these are the most important data for deriving a correct LRI1B product and the subsequent gravity field solutions. In the case of LRI1B, the range measurement in v50 shows a lower noise level on some individual days than v04. This might be related to instrument reboots, incorrect clock data, and jumps in the phase measurement, which result, for example, from thruster activation and were probably not completely removed from the v04 data. In summary, this thesis first introduces some theoretical basics of laser interferometry and the relativistic effects that occur in space. Afterwards, the GRACE Follow-On mission and the functionality of the LRI are presented in detail. Furthermore, the different levels of data processing are discussed and the LRI Level 1A and LRI1B processing steps are explained. Finally, the differences between v04 and v50, and their origins, are clarified.
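
The v04/v50 comparison described above amounts to differencing two versions of the same time series on a common time grid and examining the residuals. The sketch below shows that generic step with synthetic data; the sampling rate, variable names and noise levels are assumptions, not the actual LRI1B product layout.

```python
import numpy as np

# synthetic stand-ins for two versions of a biased-range time series,
# sampled at the same epochs (the real products would be read from files)
t = np.arange(0, 86400, 0.5)                     # one day at an assumed 2 Hz rate
range_v04 = np.sin(2 * np.pi * t / 5400.0) + 1e-6 * np.random.randn(t.size)
range_v50 = np.sin(2 * np.pi * t / 5400.0) + 5e-7 * np.random.randn(t.size)

# difference the two versions epoch by epoch and summarize the residual noise
residual = range_v50 - range_v04
print("RMS of v50 - v04 residual:", np.sqrt(np.mean(residual**2)))
print("largest single-epoch discrepancy:", np.max(np.abs(residual)))
```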