
    Real Coded Genetic Algorithm for Design of IIR Digital Filter with Conflicting Objectives


    Development of Urban Electric Bus Drivetrain

    The development of the drivetrain for a new series of urban electric buses is presented. The traction and design properties of several drive variants are compared, and the efficiency of the drive was evaluated through simulation of vehicle rides based on data from real bus lines in Prague. The results of the design work and of the simulation calculations are presented in the paper.
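    A toy sketch of the kind of ride simulation described above (the paper uses real Prague line profiles and measured drivetrain data; the drive cycle, masses, and coefficients below are placeholders): integrating the longitudinal dynamics over a speed profile gives the traction energy per kilometre.

        import numpy as np

        # Placeholder drive cycle: accelerate, cruise, brake, dwell (1 s steps).
        v = np.concatenate([np.linspace(0, 12, 20), np.full(40, 12.0),
                            np.linspace(12, 0, 15), np.zeros(25)])   # m/s
        dt = 1.0

        m = 18000.0           # vehicle mass, kg (placeholder)
        g, crr = 9.81, 0.008  # rolling-resistance coefficient (placeholder)
        rho, cd_a = 1.2, 5.0  # air density and drag area Cd*A (placeholder)
        eta = 0.85            # drivetrain efficiency (placeholder)

        a = np.gradient(v, dt)
        f_trac = m * a + m * g * crr + 0.5 * rho * cd_a * v ** 2     # tractive force, N
        p_wheel = f_trac * v                                         # wheel power, W
        # battery supplies p/eta when driving and recovers eta*p when braking
        p_batt = np.where(p_wheel > 0, p_wheel / eta, p_wheel * eta)

        energy_kwh = np.sum(p_batt * dt) / 3.6e6
        distance_km = np.sum(v * dt) / 1000.0
        print("consumption (kWh/km):", energy_kwh / distance_km)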

    Stochastic Formal Correctness of Numerical Algorithms

    We provide a framework to bound the probability that the error accumulated by a numerical algorithm remains below a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit our framework and cover the common practices of systems that evolve over long periods of time. For the first two applications we compute the number of bits that remain continuously significant with a probability of failure around one in a billion, whereas worst-case analysis concludes that no significant bit remains. We use PVS because such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.
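    As a rough illustration of the kind of bound such inequalities provide (a sketch only, not the report's own formulas, which are developed and checked in PVS): if the per-operation errors $e_i$ are independent with zero mean and $|e_i| \le \epsilon$, the accumulated error $E_n = \sum_{i=1}^{n} e_i$ satisfies $\mathrm{Var}(E_n) \le n\epsilon^2$, so Chebyshev's inequality (Markov's inequality applied to $E_n^2$) gives

        \Pr\bigl(|E_n| \ge t\bigr) \;\le\; \frac{n\,\epsilon^2}{t^2},

    and Kolmogorov's maximal inequality extends the same bound to the running maximum $\max_{k \le n} |E_k|$, which is the "never above a given threshold" event the framework reasons about.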

    Design and multiplierless realization of digital synthesis filters for hybrid-filter-bank A/D converters

    This paper studies the optimal least-squares and minimax design and realization of digital synthesis filters for hybrid-filter-bank analog-to-digital converters (HFB ADCs) to meet a given spurious-free dynamic range (SFDR). The design of the finite-impulse-response synthesis filters is formulated as a second-order cone-programming problem, which is convex and allows linear and quadratic constraints such as peak aliasing error to be incorporated. The fixed coefficients of the designed synthesis filters are efficiently implemented using sum-of-power-of-two (SOPOT) coefficients, while the internal word length used for the intermediate data is minimized using geometric programming. The main sources of error are analyzed, and a new formula for the SFDR in terms of these errors is derived. The effects of component variations of the analog analysis filters on the HFB ADC are also addressed by means of two new robust HFB ADC design algorithms based on stochastic-uncertainty and worst-case-uncertainty models. Design results show that the proposed approach offers more flexibility and better performance than conventional methods in achieving a given SFDR, and that the robust design algorithms are more robust to parameter uncertainties than the nominal design, in which the uncertainties are not taken into account. © 2009 IEEE.
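    As a rough illustration of the SOPOT idea (a sketch only; the paper selects the SOPOT terms jointly with the SFDR and word-length constraints rather than with this greedy rule), a fixed coefficient can be approximated by a few signed power-of-two terms so that each multiplication reduces to shifts and additions:

        # Greedy sum-of-power-of-two (SOPOT) approximation of one coefficient.
        def sopot(coeff, n_terms=3, k_min=-10, k_max=1):
            terms, residual = [], coeff
            for _ in range(n_terms):
                # pick the signed power of two closest to the current residual
                best = min(
                    (s * 2.0 ** k for k in range(k_min, k_max + 1) for s in (-1, 1)),
                    key=lambda t: abs(residual - t),
                )
                terms.append(best)
                residual -= best
            return terms, sum(terms)

        terms, approx = sopot(0.7071)
        print(terms, approx)   # e.g. [0.5, 0.25, -0.03125] -> 0.71875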

    Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    Advanced LIGO's raw detector output needs to be calibrated to compute the dimensionless strain h(t). Calibrated strain data are produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector's feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors into the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary mainly because of dropouts in the online calibrated data and because of improvements identified in the calibration models or filters.
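    A minimal sketch of the kind of operation such a correction stage performs (the actual correction filters, sample rates, and time-dependent factors come from the detector's calibration model, not from this example): an FIR correction filter applied to the front-end strain stream, scaled by a slowly varying correction factor.

        import numpy as np

        fs = 16384                                   # front-end sampling rate, Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        h_frontend = np.sin(2 * np.pi * 35 * t)      # stand-in for the front-end h(t) stream

        # Hypothetical FIR correction filter (a short low-pass used here as a
        # placeholder for the real sensing/actuation correction filters).
        fir = np.hanning(65)
        fir /= fir.sum()

        # Hypothetical slowly varying time-dependent correction factor (kappa).
        kappa = 1.0 + 0.01 * np.sin(2 * np.pi * 0.1 * t)

        h_corrected = kappa * np.convolve(h_frontend, fir, mode="same")
        print(h_corrected.shape)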

    On adaptive filter structure and performance

    SIGLE record. Available from the British Library Document Supply Centre, DSC:D75686/87 / BLDSC, United Kingdom.

    A VHDL Core for Intrinsic Evolution of Discrete Time Filters with Signal Feedback

    The design of an Evolvable Machine VHDL Core is presented, representing a discrete-time processing structure capable of supporting control system applications. The VHDL Core is implemented in an FPGA and is interfaced with an evolutionary algorithm implemented in firmware on a Digital Signal Processor (DSP) to create an evolvable system platform. The salient features of this architecture are presented. The capability to implement IIR filter structures is demonstrated along with the results of the intrinsic evolution of a filter. The robustness of the evolved filter design is tested and its unique characteristics are described.
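    A software-only sketch of the kind of evolutionary loop described above (the platform evaluates candidates intrinsically on the FPGA; here a hypothetical fitness function compares a candidate biquad's frequency response to a target response in simulation):

        import numpy as np
        from scipy.signal import freqz

        rng = np.random.default_rng(0)
        w = np.linspace(0, np.pi, 256)
        target = (w < np.pi / 4).astype(float)       # hypothetical low-pass target response

        def fitness(c):
            b, a = c[:3], np.concatenate(([1.0], c[3:]))
            if np.any(np.abs(np.roots(a)) >= 1.0):   # reject unstable candidates
                return np.inf
            _, h = freqz(b, a, worN=w)
            return np.mean((np.abs(h) - target) ** 2)

        # Simple (mu + lambda) evolution over five biquad coefficients: b0 b1 b2 a1 a2.
        pop = rng.normal(0.0, 0.3, size=(40, 5))
        for _ in range(200):
            children = pop + rng.normal(0.0, 0.05, size=pop.shape)   # mutation
            both = np.vstack([pop, children])
            scores = np.array([fitness(c) for c in both])
            pop = both[np.argsort(scores)[:40]]                      # survivor selection

        print("best fitness:", fitness(pop[0]))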

    Digital Signal Processing and Machine Learning System Design using Stochastic Logic

    University of Minnesota Ph.D. dissertation, July 2017. Major: Electrical/Computer Engineering. Advisor: Keshab Parhi. 1 computer file (PDF); xxii, 172 pages.
    Digital signal processing (DSP) and machine learning systems play a crucial role in the fields of big data and artificial intelligence. The hardware design of these systems is critical to meeting stringent application requirements such as extremely small size, low power consumption, and high reliability. Following the path of Moore's Law, the density and performance of hardware systems have improved dramatically at an exponential pace. The increase in the number of transistors on a chip, the main driver of this increase in density, also causes a rapid increase in circuit complexity. Low area consumption is therefore one of the key challenges for IC design, especially for portable devices. Another important challenge for hardware design is reliability. A chip fabricated using nanoscale complementary metal-oxide-semiconductor (CMOS) technologies is prone to errors caused by fluctuations in threshold voltage, supply voltage, doping levels, aging, timing errors, and soft errors. The design of nanoscale failure-resistant systems is currently of significant interest, especially as the technology scales below 10 nm. Stochastic Computing (SC) is a novel approach to address these challenges in system and circuit design. This dissertation considers the design of digital signal processing and machine learning systems in stochastic logic. Stochastic implementations of finite impulse response (FIR) and infinite impulse response (IIR) filters based on various lattice structures are presented. Implementations of complex functions such as trigonometric, exponential, and sigmoid functions are derived from truncated versions of their Maclaurin series expansions. We also present stochastic computation of polynomials using stochastic subtractors and factorization. Machine learning systems in stochastic logic, including the artificial neural network (ANN) and the support vector machine (SVM), are also presented.
    First, we propose novel implementations of linear-phase FIR filters in stochastic logic. The proposed design is based on lattice structures. Compared to direct-form linear-phase FIR filters, linear-phase lattice filters require twice the number of multipliers but the same number of adders; the hardware complexities of the stochastic implementations of the direct-form and lattice structures are comparable. We also propose stochastic implementations of IIR filters using lattice structures, in which the states are orthogonal and uncorrelated, and present stochastic IIR filters using the basic, normalized, and modified lattice structures. Simulation results demonstrate high signal-to-error ratio and fault tolerance in these structures, and hardware synthesis results show that these filter structures require lower hardware area and power than two's-complement realizations.
    Second, we present stochastic logic implementations of complex arithmetic functions based on truncated versions of their Maclaurin series expansions. It is shown that a polynomial can be implemented using multiple levels of NAND gates based on Horner's rule if the coefficients alternate in sign and their magnitudes are monotonically decreasing. Truncated Maclaurin series expansions of arithmetic functions are used to generate polynomials that satisfy these constraints. The inputs and outputs of these functions are represented in unipolar format. A polynomial that does not satisfy these constraints can still be implemented with Horner's rule if each of its factors satisfies them. Format conversion is proposed for arithmetic functions whose input and output are represented in different formats, such as $\cos\pi x$ for $x\in[0,1]$ and $\mathrm{sigmoid}(x)$ for $x\in[-1,1]$; polynomials are transformed into equivalent forms that naturally exploit format conversions. The proposed stochastic logic circuits outperform the well-known Bernstein-polynomial-based and finite-state-machine (FSM) based implementations, and in most cases their hardware complexity and critical path are lower.
    Third, we address subtraction and polynomial computation in unipolar stochastic logic. It is shown that stochastic computation of polynomials can be implemented using a stochastic subtractor and factorization. Two approaches are proposed to compute subtraction in the stochastic unipolar representation. In the first approach, the subtraction operation is approximated by cascading multiple levels of OR and AND gates; the accuracy of the approximation improves as the number of stages increases. In the second approach, the stochastic subtraction is implemented using a multiplexer and a stochastic divider. We propose stochastic computation of polynomials using factorization, with stochastic implementations of first-order and second-order factors for different locations of the polynomial roots. Experimental results show that the proposed stochastic logic circuits require less hardware than the previous stochastic polynomial implementation based on Bernstein polynomials.
    Finally, this thesis presents novel architectures for machine-learning classifiers in stochastic logic. Three types of classifiers are considered: the linear support vector machine (SVM), the artificial neural network (ANN), and the radial basis function (RBF) SVM. These architectures are validated using seizure prediction from electroencephalogram (EEG) signals as an application example. To improve the accuracy of the proposed stochastic classifiers, a data-oriented linear transform of the input data is proposed for EEG signal classification with linear SVM classifiers. Classification accuracies are reported for the proposed stochastic implementations and for traditional binary implementations on datasets from two patients. With the proposed linear transform of the input data, the accuracy of the proposed stochastic linear SVM improves by 3.88% and 85.49% for the datasets from patient-1 and patient-2, respectively. Compared to a conventional binary implementation, the accuracy of the proposed stochastic ANN improves by 5.89% for the datasets from patient-1; for patient-2, the accuracy of the proposed stochastic ANN improves by 7.49% with the proposed linear transform of the input data. In addition, compared to the traditional binary linear SVM and ANN, the hardware complexity, power consumption, and critical path of the proposed stochastic implementations are reduced significantly.
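    A behavioral sketch of unipolar stochastic computation (a software simulation only, assuming independent Bernoulli bit streams rather than the hardware stream generators used in the dissertation): multiplication of two unipolar values is a bitwise AND of their streams, and scaled addition is a multiplexer.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 1_000_000   # stream length; accuracy improves with longer streams

        def stream(p, n=N):
            # Unipolar stochastic stream: P(bit = 1) = p, with p in [0, 1].
            return rng.random(n) < p

        x, y = 0.3, 0.6
        sx, sy = stream(x), stream(y)

        # Multiplication: AND of two independent streams encodes p_x * p_y.
        prod = (sx & sy).mean()

        # Scaled addition: a multiplexer with a 0.5 select stream encodes (p_x + p_y) / 2.
        sel = stream(0.5)
        add = np.where(sel, sx, sy).mean()

        print(prod, x * y)        # ~0.18
        print(add, (x + y) / 2)   # ~0.45

    The dissertation's Horner's-rule circuits chain gates of this kind, with NAND stages providing the alternating-sign coefficients, to evaluate truncated Maclaurin polynomials directly on the bit streams.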

    FPGA-Based Degradation and Reliability Monitor for Underground Cables

    The online Remaining Useful Life (RUL) estimation of underground cables and their reliability analysis require obtaining the cable failure-time probability distribution. Monte Carlo (MC) simulations of complex thermal heating and electro-thermal degradation models can be employed for this analysis, but uncertainties need to be considered in the simulations to produce accurate RUL expectation values and confidence margins for the results. The process requires performing large simulation sets based on past temperature or load measurements and future load predictions. Field Programmable Gate Arrays (FPGAs) permit accelerating the simulations for live analysis, but the thermal models involved are difficult to implement directly in hardware logic. A new standalone FPGA architecture is proposed for fast, on-site degradation and reliability analysis of underground cables based on MC simulation, and the effect of load uncertainties on the predicted cable End Of Life (EOL) is analyzed from the results.
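    A minimal software sketch of the Monte Carlo idea (the thermal and electro-thermal degradation models in the paper are far more detailed; the aging law and load statistics below are placeholders): sample uncertain load profiles, propagate them through a crude thermal/aging model, and read off the distribution of the predicted EOL.

        import numpy as np

        rng = np.random.default_rng(42)
        n_sims, n_weeks = 2000, 60 * 52                    # 60-year horizon, weekly load samples

        # Placeholder model: weekly load factor is uncertain, conductor temperature
        # rises with the square of the load, and insulation life is consumed at a
        # rate that doubles for every 10 degC above the 90 degC rating.
        load = rng.normal(0.9, 0.15, size=(n_sims, n_weeks))
        temp_c = 40.0 + 60.0 * np.clip(load, 0.0, None) ** 2
        rate = 2.0 ** ((temp_c - 90.0) / 10.0)
        damage = np.cumsum(rate / (30.0 * 52.0), axis=1)   # rated life ~30 years at 90 degC

        failed = damage[:, -1] >= 1.0                      # runs reaching end of life in time
        eol_years = np.argmax(damage >= 1.0, axis=1)[failed] / 52.0

        print("fraction failed within horizon:", failed.mean())
        print("median EOL (years):", np.median(eol_years))
        print("5th / 95th percentile EOL (years):", np.percentile(eol_years, [5, 95]))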