
    A Real-Time Thermal Monitoring System Intended for Embedded Sensors Interfaces

    ABSTRACT: This paper proposes a real-time thermal monitoring method using embedded integrated sensor interfaces dedicated to industrial integrated system applications. Industrial sensor interfaces are complex systems that involve analog and mixed signals, and several parameters can influence their performance. These include the presence of heat sources near sensitive integrated circuits, and various heat transfer phenomena need to be considered. This creates a need for real-time thermal monitoring and management. Indeed, controlling transient temperature gradients or temperature differential variations, as well as predicting possible induced thermal shocks and stresses at the early design phases of advanced integrated circuits and systems, is essential. This paper addresses the growing requirements of microelectronics applications in several areas that experience fast variations in high power density and thermal gradient differences caused by the implementation of different systems on the same chip, such as new-generation 5G circuits. To mitigate adverse thermal effects, a real-time prediction algorithm is proposed and validated using the MCUXpresso tool applied to a Freescale embedded sensor board: the embedded sensor is programmed into the FRDM-KL26Z board to monitor and predict its temperature profile in real time. Based on discrete temperature measurements, the embedded system is used to predict, in advance, overheating situations in the embedded integrated circuit (IC). The results confirm the peak detection capability of the proposed algorithm, which satisfactorily predicts thermal peaks in the FRDM-KL26Z board as modeled with a finite element thermal analysis tool (the Numerical Integrated elements for System Analysis (NISA) tool), to gauge the level of local thermomechanical stresses that may be induced. The FPGA implementation and comparison measurements are also presented. This work provides a solution to the thermal stresses and local system overheating that have been a major concern for integrated sensor interface designers when designing integrated circuits in various high-performance technologies or for harsh environments.
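    As an illustration of the kind of prediction step described above, namely flagging an overheating situation in advance from discrete temperature samples, a minimal Python sketch could look as follows; the window length, sampling period, prediction horizon, and temperature threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def predict_overheat(samples, dt, horizon, threshold):
    """Fit a line to the most recent temperature samples and extrapolate
    `horizon` seconds ahead; flag a predicted overheating event if the
    extrapolated temperature reaches `threshold` (degrees C)."""
    t = np.arange(len(samples)) * dt              # timestamps of the window
    slope, intercept = np.polyfit(t, samples, 1)  # simple linear trend
    predicted = slope * (t[-1] + horizon) + intercept
    return predicted, predicted >= threshold

# Illustrative use: 1 Hz sampling, predict 5 s ahead, 85 degC limit
window = [62.0, 63.1, 64.5, 66.2, 68.0, 70.1]
print(predict_overheat(window, dt=1.0, horizon=5.0, threshold=85.0))
```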

    Microprocessor based signal processing techniques for system identification and adaptive control of DC-DC converters

    PhD Thesis. Many industrial and consumer devices rely on switch mode power converters (SMPCs) to provide a reliable, well regulated DC power supply. A poorly performing power supply can potentially compromise the characteristic behaviour, efficiency, and operating range of the device. To ensure accurate regulation of the SMPC, optimal control of the power converter output is required. However, SMPC uncertainties such as component variations and load changes will affect the performance of the controller. To compensate for these time-varying problems, there is increasing interest in employing real-time adaptive control techniques in SMPC applications. It is important to note that many adaptive controllers constantly tune and adjust their parameters based upon on-line system identification. In the area of system identification and adaptive control, Recursive Least Squares (RLS) methods provide promising results in terms of fast convergence rate, small prediction error, accurate parametric estimation, and simple adaptive structure. Despite being popular, RLS methods often have limited application in low cost systems, such as SMPCs, because their computationally heavy calculations demand significant hardware resources and, in turn, may require a high-specification microprocessor to implement successfully. For this reason, this thesis presents research into lower-complexity adaptive signal processing and filtering techniques for on-line system identification and control of SMPC systems. The thesis presents the novel application of a Dichotomous Coordinate Descent (DCD) algorithm for the system identification of a dc-dc buck converter. Two unique applications of the DCD algorithm are proposed: system identification and self-compensation of a dc-dc SMPC. Firstly, specific attention is given to parameter estimation of the dc-dc buck SMPC. The method is computationally efficient and uses an infinite impulse response (IIR) adaptive filter as the plant model. Importantly, the proposed method is able to identify the parameters quickly and accurately, thus offering an efficient hardware solution which is well suited to real-time applications. Secondly, a new alternative adaptive scheme that does not depend entirely on estimating the plant parameters is embedded with the DCD algorithm. The proposed technique is based on a simple adaptive filter method and uses a one-tap finite impulse response (FIR) prediction error filter (PEF). Experimental and simulation results clearly show the DCD technique can be optimised to achieve comparable performance to classic RLS algorithms. However, it is computationally superior, making it an ideal candidate technique for low cost microprocessor based applications. Iraq Ministry of Higher Education.
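    To make the Dichotomous Coordinate Descent idea concrete, the sketch below shows a generic cyclic DCD solver for the normal equations R h = beta, of the kind used in RLS-type identification; it is a rough illustration under assumed design parameters (amplitude range H, number of bit levels Mb, update budget Nu), not the exact formulation developed in the thesis.

```python
import numpy as np

def dcd_solve(R, beta, H=2.0, Mb=8, Nu=64):
    """Cyclic Dichotomous Coordinate Descent: approximately solves
    R h = beta using only sign tests and power-of-two step sizes,
    which is what makes it cheap on low-cost hardware."""
    N = len(beta)
    h = np.zeros(N)
    r = beta.astype(float).copy()   # residual r = beta - R @ h
    alpha, updates = H, 0
    for _ in range(Mb):             # one pass per bit level
        alpha /= 2.0
        improved = True
        while improved and updates < Nu:
            improved = False
            for p in range(N):
                if abs(r[p]) > (alpha / 2.0) * R[p, p]:
                    s = 1.0 if r[p] > 0 else -1.0
                    h[p] += s * alpha           # power-of-two coefficient update
                    r -= s * alpha * R[:, p]    # keep the residual consistent
                    updates += 1
                    improved = True
                    if updates >= Nu:
                        break
    return h
```

    Checking the result against an exact solver such as np.linalg.solve on a small positive definite R shows the approximation improving as Mb and Nu grow, while the per-update cost stays limited to additions and comparisons.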

    Slight-Delay Shaped Variable Bit Rate (SD-SVBR) Technique for Video Transmission

    The aim of this thesis is to present a new shaped Variable Bit Rate (VBR) for video transmission, which plays a crucial role in delivering video traffic over the Internet. This is motivated by the surge of video media applications over the Internet and by the fact that video traffic is typically highly bursty, which leads to Internet bandwidth fluctuation. The new shaping algorithm, referred to as Slight-Delay Shaped Variable Bit Rate (SD-SVBR), is aimed at controlling the video rate for video application transmission. It is designed based on the Shaped VBR (SVBR) algorithm and was implemented in the Network Simulator 2 (ns-2). The SVBR algorithm is devised for real-time video applications, but it has several limitations and weaknesses due to its embedded estimation or prediction processes. SVBR faces several problems, such as unwanted sharp decreases in data rate, buffer overflow, periods of low data rate, and cyclical negative fluctuation. The new algorithm is capable of producing a high data rate and, at the same time, a video sequence with better quantization parameter (QP) stability. In addition, the data rate is shaped efficiently to prevent unwanted sharp increments or decrements and to avoid buffer overflow. To achieve this, SD-SVBR employs three strategies: processing the next Group of Pictures (GoP) video sequence to obtain the QP-to-data-rate list, dimensioning the data rate to a higher utilization of the leaky bucket, and implementing a QP smoothing method by carefully measuring the effects of following the previous QP value. However, this algorithm has to be combined with a network feedback algorithm to produce better overall video rate control. A combination of several video clips with varied video rates was used to evaluate SD-SVBR performance. The results showed that SD-SVBR achieves an impressive overall Peak Signal-to-Noise Ratio (PSNR) value. In addition, in almost all cases it achieves a high video rate without buffer overflow, utilizes the buffer well and, interestingly, still obtains smoother QP fluctuation.
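    The leaky-bucket dimensioning step mentioned above can be pictured with a minimal sketch: before committing to a candidate encoded-frame size, check that adding it to the bucket would not overflow, then drain at the channel rate. The bucket capacity, drain rate, and frame interval below are placeholder values, not those used in SD-SVBR.

```python
class LeakyBucket:
    """Minimal leaky-bucket model used to test whether a candidate
    encoded-frame size (in bits) can be admitted without overflow."""

    def __init__(self, capacity_bits, drain_rate_bps, frame_interval_s):
        self.capacity = capacity_bits
        self.drain_per_frame = drain_rate_bps * frame_interval_s
        self.fill = 0.0

    def admits(self, frame_bits):
        """Would this frame keep the bucket within capacity?"""
        return self.fill + frame_bits <= self.capacity

    def push(self, frame_bits):
        """Add the frame, then drain for one frame interval."""
        self.fill = max(0.0, self.fill + frame_bits - self.drain_per_frame)

# Illustrative: 2 Mbit bucket, 4 Mbit/s channel, 25 fps video
bucket = LeakyBucket(2_000_000, 4_000_000, 1 / 25)
for size in (150_000, 180_000, 220_000):
    if bucket.admits(size):
        bucket.push(size)
```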

    Efficient Neural Network Implementations on Parallel Embedded Platforms Applied to Real-Time Torque-Vectoring Optimization Using Predictions for Multi-Motor Electric Vehicles

    The combination of machine learning and heterogeneous embedded platforms enables new potential for developing sophisticated control concepts which are applicable to the field of vehicle dynamics and ADAS. This interdisciplinary work provides enabler solutions, ultimately implementing fast predictions using neural networks (NNs) on field programmable gate arrays (FPGAs) and graphics processing units (GPUs), while applying them to a challenging application: torque vectoring on a multi-electric-motor vehicle for enhanced vehicle dynamics. The foundation motivating this work is provided by discussing multiple domains of the technological context as well as the constraints related to the automotive field, which contrast with the attractiveness of exploiting the capabilities of new embedded platforms to apply advanced control algorithms to complex control problems. In this particular case, we target enhanced vehicle dynamics on a multi-motor electric vehicle, benefiting from the greater degrees of freedom and controllability offered by such powertrains. Considering the constraints of the application and the implications of the selected multivariable optimization challenge, we propose an NN to provide batch predictions for real-time optimization. This leads to the major contribution of this work: efficient NN implementations on two intrinsically parallel embedded platforms, a GPU and an FPGA, following an analysis of the theoretical and practical implications of their different operating paradigms, in order to efficiently harness their computing potential while gaining insight into their peculiarities. The achieved results exceed expectations and additionally provide a representative illustration of the strengths and weaknesses of each kind of platform. Consequently, having shown the applicability of the proposed solutions, this work also contributes valuable enablers for further developments following similar fundamental principles. Some of the results presented in this work are related to activities within the 3Ccar project, which has received funding from the ECSEL Joint Undertaking under grant agreement No. 662192. This Joint Undertaking received support from the European Union's Horizon 2020 research and innovation programme and Germany, Austria, Czech Republic, Romania, Belgium, United Kingdom, France, Netherlands, Latvia, Finland, Spain, Italy, Lithuania. This work was also partly supported by the project ENABLES3, which received funding from the ECSEL Joint Undertaking under grant agreement No. 692455-2.
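    As a schematic illustration of the batch-prediction idea, evaluating one small feedforward NN over many candidate control inputs at once, the NumPy sketch below shows the data-parallel structure that maps naturally onto a GPU or an FPGA pipeline; the layer sizes, activation, and batch size are assumptions for illustration, not the actual torque-vectoring network.

```python
import numpy as np

def batch_forward(X, weights, biases):
    """Evaluate a small fully connected network on a whole batch of
    candidate inputs in one pass; each matrix product is data-parallel,
    which is what GPU/FPGA implementations exploit."""
    A = X
    for W, b in zip(weights[:-1], biases[:-1]):
        A = np.tanh(A @ W + b)           # hidden layers
    return A @ weights[-1] + biases[-1]  # linear output layer

# Illustrative network: 6 inputs -> 16 -> 16 -> 4 outputs, evaluated
# for 512 candidate torque distributions in a single batched call.
rng = np.random.default_rng(0)
shapes = [(6, 16), (16, 16), (16, 4)]
weights = [rng.standard_normal(s) * 0.1 for s in shapes]
biases = [np.zeros(s[1]) for s in shapes]
candidates = rng.standard_normal((512, 6))
predictions = batch_forward(candidates, weights, biases)   # shape (512, 4)
```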

    Methods of Technical Prognostics Applicable to Embedded Systems

    The main aim of the thesis is to provide a comprehensive overview of technical prognostics, which is applied in condition-based maintenance based on continuous device monitoring and remaining-useful-life estimation, especially in the field of complex equipment and machinery. Nowadays, technical prognostics is still an evolving discipline with a limited number of real applications, and not all of its methods are sufficiently accurate and applicable to embedded systems; it is not yet as well developed as technical diagnostics, which is fairly well mapped and deployed in real systems. The thesis provides an overview of basic methods applicable to the prediction of remaining useful life, together with metrics that help compare the different approaches both in terms of accuracy and in terms of computational/deployment cost. One of the research cores consists of recommendations and a guide for selecting the appropriate prognostic method with regard to the prognostic criteria. The second research core introduces the particle filtering framework suitable for model-based prognostics, with verification of its implementation and a comparison. The main research core of the thesis presents a case study on the highly topical subject of Li-Ion battery health monitoring and prognostics with respect to continuous monitoring. The case study demonstrates the model-based prognostic process and compares possible approaches for estimating both the time remaining before battery discharge and capacity fade, and it also examines possible influences on battery degradation. The work further includes a basic verification of the Li-Ion battery model and the design of the prognostic process, and the proposed methodology is verified on real measured data.
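    The particle-filtering framework referred to above can be outlined in a few lines: propagate a population of particles through an assumed degradation model, weight each one by how well it explains the newest measurement, and resample when the weights degenerate. The random-walk state model and noise levels below are illustrative assumptions, not the battery model used in the thesis.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement,
                         process_std=1e-3, meas_std=0.01, rng=None):
    """One bootstrap particle-filter update for a scalar degradation
    state (e.g. a capacity-fade indicator) observed through noise."""
    rng = rng or np.random.default_rng()
    # 1. Propagate: assumed random walk on the degradation state
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # 2. Weight by the likelihood of the new measurement
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # 3. Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```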

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities which otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smartphone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling modes that would guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction is evolving from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular techniques for prediction and classification in the general context of computing systems, with an emphasis on multicore processors. The paper is far from comprehensive, but it will help readers interested in employing prediction in the optimization of multicore processor systems.
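    As a toy example of the proactive, prediction-driven optimization discussed above, the sketch below uses an exponentially weighted moving average of recent core utilisation to select a DVFS mode ahead of time; the mode table, smoothing factor, and mapping are hypothetical and not taken from any surveyed system.

```python
# Hypothetical DVFS operating points: (frequency in MHz, relative voltage)
DVFS_MODES = [(400, 0.8), (800, 0.9), (1200, 1.0), (1600, 1.1)]

class UtilisationPredictor:
    """Exponentially weighted moving-average forecast of core utilisation."""

    def __init__(self, alpha=0.4):
        self.alpha = alpha
        self.estimate = 0.0

    def update(self, observed_util):
        """Fold in the latest observation and return the new forecast."""
        self.estimate = self.alpha * observed_util + (1 - self.alpha) * self.estimate
        return self.estimate

def choose_mode(predicted_util):
    """Map a predicted utilisation in [0, 1] onto a frequency/voltage pair."""
    index = min(int(predicted_util * len(DVFS_MODES)), len(DVFS_MODES) - 1)
    return DVFS_MODES[index]

predictor = UtilisationPredictor()
for util in (0.2, 0.35, 0.7, 0.9):
    print(choose_mode(predictor.update(util)))
```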

    Online Learning Algorithm for Time Series Forecasting Suitable for Low Cost Wireless Sensor Networks Nodes

    Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To this end, the paper describes how to implement an Artificial Neural Network (ANN) algorithm on a low-cost system-on-chip to develop an autonomous intelligent wireless sensor network. The paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It performs model training with every new data sample that arrives at the system, without saving enormous quantities of data to create a historical database as is usual, i.e., without previous knowledge. Consequently, to validate the approach, a simulation study using a Bayesian baseline model was carried out and compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm on a simple architecture with very few hardware resources. Comment: 28 pages, published 21 April 2015 in MDPI's journal "Sensors".
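    The on-line learning scheme described above, training with every new sample instead of storing a historical database, can be outlined as a single stochastic backpropagation update on a one-hidden-layer network; the network size, learning rate, and window length below are illustrative assumptions, not the configuration used on the 8051 MCU.

```python
import numpy as np

class OnlineMLP:
    """One-hidden-layer network trained on-line: each new (window, target)
    pair triggers exactly one backpropagation update, so no historical
    database has to be kept in memory."""

    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.standard_normal(n_hidden) * 0.1
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2         # scalar output

    def update(self, x, target):
        """One squared-error backprop step on a single sample."""
        error = self.predict(x) - target
        grad_h = error * self.W2 * (1 - self.h ** 2)
        self.W2 -= self.lr * error * self.h
        self.b2 -= self.lr * error
        self.W1 -= self.lr * np.outer(x, grad_h)
        self.b1 -= self.lr * grad_h
        return error

# Illustrative: forecast the next indoor temperature from the last 4 samples
net = OnlineMLP(n_in=4, n_hidden=6)
error = net.update(np.array([21.0, 21.2, 21.5, 21.7]), target=21.9)
```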