3,840 research outputs found

    Accurate Calibration of Power Measurements from Internal Power Sensors on NVIDIA Jetson Devices

    Full text link
    Power efficiency is a crucial consideration for embedded systems design, particularly in the field of edge computing and IoT devices. This study aims to calibrate the power measurements obtained from the built-in sensors of NVIDIA Jetson devices, facilitating the collection of reliable and precise power consumption data in real time. To achieve this goal, accurate power readings are obtained using external hardware, and a regression model is proposed to map the sensor measurements to the true power values. Our results provide insights into the accuracy and reliability of the built-in power sensors for various Jetson edge boards and highlight the importance of calibrating their internal power readings. Specifically, the internal sensors underestimate the actual power by up to 50% in most cases, but this calibration reduces the error to within 3%. By making the internal sensor data usable for precise online assessment of power and energy figures, the regression models presented in this paper have practical applications, for both practitioners and researchers, in accurately designing energy-efficient and autonomous edge services. (Comment: 5 pages, 5 figures, IEEE Edge 2023 Conference)
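
    A minimal sketch of the calibration idea described in this abstract: fit a regression that maps the internal sensor readings to externally measured reference power. The sample values and the use of a simple linear model are assumptions for illustration, not the paper's actual data or model.

    ```python
    # Sketch: calibrate internal Jetson power readings against an external reference.
    # The sample values below are hypothetical; the paper's model may differ.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_percentage_error

    internal_w = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 7.5])    # built-in sensor readings (hypothetical), W
    external_w = np.array([3.9, 6.2, 7.4, 9.8, 12.9, 14.1])  # external power analyzer (hypothetical), W

    model = LinearRegression().fit(internal_w.reshape(-1, 1), external_w)
    calibrated = model.predict(internal_w.reshape(-1, 1))

    print(f"gain={model.coef_[0]:.3f}, offset={model.intercept_:.3f} W")
    print(f"error after calibration: {100 * mean_absolute_percentage_error(external_w, calibrated):.1f}%")
    ```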

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous promising automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML-based automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Biointegrated and wirelessly powered implantable brain devices: a review

    Get PDF
    Implantable neural interfacing devices have contributed significantly to neural engineering by enabling the recording of low-frequency oscillations of small populations of neurons, known as local field potentials, as well as high-frequency action potentials of individual neurons. Despite remarkable recent progress, conventional neural modulation systems are still unable to achieve the desired chronic in vivo implantation. The main constraint arises from mechanical and physical differences between implants and brain tissue, which initiate an inflammatory reaction and glial scar formation that reduce recording and stimulation quality. Furthermore, traditional approaches based on rigid and tethered neural devices cause substantial tissue damage and impede the natural behaviour of the animal, thus hindering chronic in vivo measurements. Enabling fully implantable neural devices therefore requires biocompatibility, wireless power/data capability, biointegration using thin and flexible electronics, and chronic recording properties. This paper reviews biocompatibility and design approaches for developing biointegrated and wirelessly powered implantable neural devices in animals aimed at long-term neural interfacing, and outlines current challenges toward developing the next generation of implantable neural devices.

    Forecasting and Prediction of Solar Energy Generation using Machine Learning Techniques

    Get PDF
    The growing demand for renewable energy sources, especially wind and solar power, has increased the need for precise forecasts of energy production. Machine learning (ML) techniques offer a powerful way to address this problem, and this thesis uses ML to estimate solar energy production, with the goal of improving decision-making processes through the analysis of large datasets and the generation of accurate forecasts. Solar meteorological data are analysed methodologically using regression, time series analysis, and deep learning algorithms. The study demonstrates how effectively ML-based forecasting anticipates future solar energy output. Quantitative evaluations show high prediction accuracy and validate the techniques used. For example, a key observation was that the Multiple Linear Regression method demonstrates reasonable predictive ability, with moderate Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values, yet slightly lower R-squared values compared to other methods. The study also provides a reflective analysis of the significance, methodological reliability, and generalizability of the results, as well as a summary of its limitations and recommendations for further study. The conclusion discusses implications for broader applications across energy sectors and emphasizes the critical role that ML-based forecasting plays in predicting solar energy generation. By utilizing renewable energy sources such as solar power, this approach aims to lessen dependency on non-renewable resources and pave the way for a more sustainable future.
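
    A small sketch of the Multiple Linear Regression baseline and the MAE/RMSE/R-squared metrics named in this abstract. The feature names and the synthetic data are assumptions for illustration; the thesis's dataset and model configuration are not reproduced here.

    ```python
    # Sketch: multiple linear regression for solar output, evaluated with MAE, RMSE and R^2.
    # Features (e.g. irradiance, temperature, humidity) and data are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 3))                                        # synthetic meteorological features
    y = 4.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)   # synthetic output power

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    pred = LinearRegression().fit(X_train, y_train).predict(X_test)

    print(f"MAE  = {mean_absolute_error(y_test, pred):.3f}")
    print(f"RMSE = {np.sqrt(mean_squared_error(y_test, pred)):.3f}")
    print(f"R^2  = {r2_score(y_test, pred):.3f}")
    ```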

    Scalable and Efficient Methods for Uncertainty Estimation and Reduction in Deep Learning

    Full text link
    Neural networks (NNs) can achieve high performance in various fields such as computer vision and natural language processing. However, deploying NNs in resource-constrained safety-critical systems is challenging due to prediction uncertainty caused by out-of-distribution (OOD) data and hardware non-idealities. To address these challenges, this paper summarizes the (4th year) PhD thesis work that explores scalable and efficient methods for uncertainty estimation and reduction in deep learning, with a focus on Computation-in-Memory (CIM) using emerging resistive non-volatile memories. We tackle the inherent uncertainties arising from out-of-distribution inputs and hardware non-idealities, which are crucial for maintaining functional safety in automated decision-making systems. Our approach encompasses problem-aware training algorithms, novel NN topologies, and hardware co-design solutions, including dropout-based binary Bayesian Neural Networks leveraging spintronic devices and variational inference techniques. These innovations significantly enhance OOD data detection, inference accuracy, and energy efficiency, thereby contributing to the reliability and robustness of NN implementations.
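
    To illustrate the dropout-based Bayesian idea mentioned above in its simplest software form, the sketch below uses Monte Carlo dropout: dropout stays active at inference and several stochastic forward passes yield a predictive mean and an uncertainty signal. This shows only the general dropout-as-approximate-Bayesian-inference principle, not the thesis's binary BNNs, spintronic devices, or CIM co-design.

    ```python
    # Sketch: Monte Carlo dropout for uncertainty estimation (illustrative only).
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 10),
    )

    def mc_dropout_predict(model, x, n_samples=30):
        model.train()                       # keep dropout stochastic at inference time
        with torch.no_grad():
            samples = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
        mean = samples.mean(dim=0)          # predictive mean over classes
        var = samples.var(dim=0)            # per-class variance as an uncertainty signal
        return mean, var

    x = torch.randn(4, 16)                  # dummy input batch
    mean, var = mc_dropout_predict(model, x)
    print(mean.shape, var.mean().item())
    ```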

    Potential and Challenges of Analog Reconfigurable Computation in Modern and Future CMOS

    Get PDF
    In this work, the feasibility of floating-gate technology in analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for providing devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer SNN in which the adaptive properties of FGTs are taken advantage of. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
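
    For readers unfamiliar with STDP, the snippet below sketches the standard pairwise learning rule that the FGT synapses realise behaviourally: pre-before-post spike pairs potentiate a weight, post-before-pre pairs depress it, with exponentially decaying magnitude. The constants are illustrative assumptions and do not come from the thesis, which implements the rule in analog circuitry rather than software.

    ```python
    # Behavioural sketch of pairwise STDP (constants are assumed, not from the thesis).
    import numpy as np

    A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
    TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed)

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (spike times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:    # pre before post -> potentiation
            return A_PLUS * np.exp(-dt / TAU_PLUS)
        else:          # post before pre -> depression
            return -A_MINUS * np.exp(dt / TAU_MINUS)

    print(stdp_dw(10.0, 15.0))   # positive update
    print(stdp_dw(15.0, 10.0))   # negative update
    ```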

    Predictive maintenance of rotational machinery using deep learning

    Get PDF
    This paper describes an implementation of a deep learning-based predictive maintenance (PdM) system for industrial rotational machinery, built upon a long short-term memory (LSTM) autoencoder and regression analysis. The autoencoder identifies anomalous patterns, while the regression model, based on the autoencoder’s output, estimates the machine’s remaining useful life (RUL). Unlike prior PdM systems that depend on labelled historical data, the developed system does not require it, as it is based on an unsupervised deep learning model, enhancing its adaptability. The paper also explores a robust condition monitoring system that collects machine operational data, including vibration and current parameters, and transmits them to a database via a Bluetooth Low Energy (BLE) network. Additionally, the study demonstrates the integration of this PdM system within a web-based framework, promoting its adoption across various industrial settings. Tests confirm the system's ability to accurately identify faults, highlighting its potential to reduce unexpected downtime and enhance machinery reliability.
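
    A minimal sketch in the spirit of the LSTM-autoencoder anomaly detector described above: train on windows of normal vibration/current data and flag windows whose reconstruction error exceeds a threshold. Layer sizes, window length, and the percentile-based threshold are assumptions, not the paper's configuration, and the RUL regression stage is omitted.

    ```python
    # Sketch: LSTM autoencoder for anomaly detection on sensor windows (assumed configuration).
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    window, n_features = 50, 2            # e.g. 50 time steps of vibration + current (assumed)

    model = keras.Sequential([
        layers.LSTM(32, input_shape=(window, n_features)),   # encoder: compress the window
        layers.RepeatVector(window),                         # repeat latent vector for the decoder
        layers.LSTM(32, return_sequences=True),
        layers.TimeDistributed(layers.Dense(n_features)),    # reconstruct the window
    ])
    model.compile(optimizer="adam", loss="mae")

    healthy = np.random.rand(256, window, n_features).astype("float32")  # dummy "normal" data
    model.fit(healthy, healthy, epochs=3, batch_size=32, verbose=0)

    # Flag windows whose reconstruction error exceeds a high percentile of the training error.
    train_err = np.mean(np.abs(model.predict(healthy, verbose=0) - healthy), axis=(1, 2))
    threshold = np.percentile(train_err, 99)

    def is_anomalous(batch):
        err = np.mean(np.abs(model.predict(batch, verbose=0) - batch), axis=(1, 2))
        return err > threshold
    ```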