
    In-cylinder combustion-based virtual emissions sensing

    The development of a real-time, on-board measurement of exhaust emissions from heavy-duty engines would offer tremendous advantages in on-board diagnostics and engine control. In the absence of suitable measurement hardware, an alternative is the development of software-based predictive approaches. This study demonstrates the feasibility of using in-cylinder pressure-based variables as the inputs to predictive neural networks that are then used to predict engine-out exhaust gas emissions. Specifically, a large steady-state engine operation data matrix provides the necessary information for training a successful predictive network while at the same time eliminating errors produced by the dispersive and time-delay effects of the emissions measurement system, which includes the exhaust system, the dilution tunnel, and the emissions analyzers. The steady-state training conditions allow for the correlation of time-averaged in-cylinder combustion variables to the engine-out gaseous emissions. A back-propagation neural network is then capable of learning the relationships between these variables and the measured gaseous emissions, with the ability to interpolate between steady-state points in the matrix. The networks were then validated using the transient Federal Test Procedure cycle and in-cylinder combustion parameters gathered in real time through the use of an acquisition system based on a digital signal processor. The predictive networks for NOx and CO2 proved highly successful, while those for HC and CO were not as effective. Problems with the HC and CO networks included very low measured levels and validation data that fell beyond the training matrix boundary during transient engine operation.
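
    The sketch below is a minimal illustration of the predictive approach, not the authors' implementation: a small back-propagation network is trained on a hypothetical steady-state matrix of time-averaged in-cylinder variables and queried at an interpolated operating point. The feature set, the data values, and the use of scikit-learn are all assumptions made for illustration.

```python
# Illustrative sketch only: a small back-propagation network mapping
# time-averaged in-cylinder combustion variables to engine-out NOx.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical steady-state training matrix: one row per engine operating point.
# Columns: peak cylinder pressure [bar], location of peak pressure [deg ATDC],
# indicated mean effective pressure [bar], engine speed [rpm], fuelling [mg/stroke].
X_train = np.array([
    [120.0,  8.0, 10.5, 1200,  85.0],
    [140.0, 10.0, 14.0, 1500, 110.0],
    [160.0, 12.0, 18.0, 1800, 140.0],
    [100.0,  6.0,  7.5, 1000,  60.0],
])
y_train = np.array([3.1, 4.6, 6.2, 2.0])   # measured steady-state NOx [g/kWh]

# A single hidden layer trained by back-propagation; input scaling keeps the
# optimisation well conditioned.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=5000, random_state=0))
model.fit(X_train, y_train)

# Prediction at an interpolated operating point inside the training matrix.
x_new = np.array([[130.0, 9.0, 12.0, 1350, 95.0]])
print("Predicted NOx [g/kWh]:", model.predict(x_new))
```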

    Automated Scratchpad Mapping and Allocation for Embedded Processors

    Embedded system-on-chip processors such as the Texas Instruments C66 DSP and the IBM Cell provide the programmer with a software-controlled on-chip memory to supplement a traditional but simple two-level cache. By decomposing data sets and their corresponding workload into small subsets that fit within this on-chip memory, the processor can potentially achieve equivalent or better performance, power efficiency, and area efficiency than with its sophisticated cache. However, program-controlled on-chip memory requires a shift in the responsibility for management and allocation from the hardware to the programmer. Specifically, this requires the explicit mapping of program arrays to specific types of on-chip memory structures and the addition of supporting code that allocates and manages the on-chip memory. Previous work on tiling focuses on automated loop transformations but is hardware agnostic and does not incorporate a performance model of the underlying memory design. In this work we explore the relationship between mapping and allocation of tiles for stencil loops and linear algebra kernels on the Texas Instruments Keystone II DSP platform.
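
    As a rough sketch of the kind of decomposition described above (not the paper's toolflow), the snippet below tiles a 1-D 3-point stencil so that each tile plus its halo fits a fixed-size software-managed buffer standing in for scratchpad memory. The buffer size, halo width, and the explicit copies emulating DMA transfers are assumptions for illustration.

```python
# Illustrative sketch only: decomposing a 1-D 3-point stencil into tiles that
# fit a fixed-size software-managed "scratchpad" buffer, with explicit copy-in
# standing in for DMA transfers. Sizes are hypothetical.
import numpy as np

SCRATCHPAD_WORDS = 1024          # assumed on-chip capacity for the input tile
HALO = 1                         # a 3-point stencil needs one halo element per side
TILE = SCRATCHPAD_WORDS - 2 * HALO

def stencil_tiled(src: np.ndarray) -> np.ndarray:
    dst = np.empty_like(src)
    n = src.size
    for start in range(0, n, TILE):
        stop = min(start + TILE, n)
        # "DMA in": stage the tile plus halo into the scratchpad buffer.
        lo, hi = max(start - HALO, 0), min(stop + HALO, n)
        spad = src[lo:hi].copy()
        # Compute on on-chip data only.
        for i in range(start, stop):
            j = i - lo
            left = spad[j - 1] if i > 0 else spad[j]
            right = spad[j + 1] if i < n - 1 else spad[j]
            dst[i] = 0.25 * left + 0.5 * spad[j] + 0.25 * right
        # "DMA out" is the write to dst above; a real mapping would stage it too.
    return dst

x = np.linspace(0.0, 1.0, 10000)
print(stencil_tiled(x)[:5])
```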

    Data dependent energy modelling for worst case energy consumption analysis

    Safely meeting Worst Case Energy Consumption (WCEC) criteria requires accurate energy modeling of software. We investigate the impact of instruction operand values upon energy consumption in cacheless embedded processors. Existing instruction-level energy models typically use measurements from random input data, providing estimates unsuitable for safe WCEC analysis. We examine probabilistic energy distributions of instructions and propose a model for composing instruction sequences using distributions, enabling WCEC analysis on program basic blocks. The worst case is predicted with statistical analysis. Further, we verify that the energy of embedded benchmarks can be characterised as a distribution, and compare our proposed technique with other methods of estimating energy consumption.
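
    A minimal sketch of the composition idea, under the simplifying assumption that per-instruction energy draws are independent: discretised energy distributions for a few hypothetical instruction classes are convolved along a basic block, and a high quantile of the resulting distribution is read off as a statistical worst-case estimate. All energy values and probabilities are invented for illustration.

```python
# Illustrative sketch only: composing per-instruction energy distributions into
# a basic-block distribution by convolution (assuming independent draws), then
# reading a high quantile as a statistical worst-case estimate.
# Discretised energy distributions (picojoules -> probability) per instruction class.
DIST = {
    "add": {10: 0.6, 12: 0.3, 15: 0.1},
    "mul": {25: 0.5, 30: 0.4, 38: 0.1},
    "ld":  {40: 0.7, 55: 0.2, 70: 0.1},
}

def convolve(a: dict, b: dict) -> dict:
    """Distribution of the sum of two independent discrete energy draws."""
    out = {}
    for ea, pa in a.items():
        for eb, pb in b.items():
            out[ea + eb] = out.get(ea + eb, 0.0) + pa * pb
    return out

def block_distribution(instrs):
    """Energy distribution of a straight-line basic block."""
    dist = {0: 1.0}
    for op in instrs:
        dist = convolve(dist, DIST[op])
    return dist

def quantile(dist: dict, q: float) -> float:
    """Smallest energy value whose cumulative probability reaches q."""
    acc = 0.0
    for e in sorted(dist):
        acc += dist[e]
        if acc >= q:
            return e
    return max(dist)

bb = ["ld", "mul", "add", "add", "ld"]
print("99.99th-percentile energy estimate (pJ):", quantile(block_distribution(bb), 0.9999))
```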

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need for a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.
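
    A minimal sketch of how the two ideas might combine, with all numbers invented for illustration: per-instruction probabilities of surviving a soft error (from the one-time architectural characterisation) are weighted by static execution counts to score the application's overall probability of correct execution, assuming independent faults.

```python
# Illustrative sketch only: combining per-instruction probabilities of correct
# execution in the presence of a soft error with static execution counts of the
# target program to score its overall probability of success. All values are
# hypothetical.
import math

# One-time architectural characterisation: P(correct | a soft error strikes
# while this instruction class executes).
P_CORRECT = {"add": 0.92, "branch": 0.80, "load": 0.70, "store": 0.75}

# Static profile of the application: estimated execution count per instruction class.
PROFILE = {"add": 5000, "branch": 1200, "load": 2500, "store": 1800}

def program_success_probability(p_correct, profile, p_error_per_instr=1e-6):
    """Probability the whole run is correct, assuming independent faults."""
    log_p = 0.0
    for op, count in profile.items():
        # Each executed instruction is either not hit by an error, or hit and
        # still produces a correct state with probability p_correct[op].
        p_instr = (1.0 - p_error_per_instr) + p_error_per_instr * p_correct[op]
        log_p += count * math.log(p_instr)
    return math.exp(log_p)

print("P(correct execution) =", program_success_probability(P_CORRECT, PROFILE))
```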

    High-Level Synthesis Hardware Design for FPGA-Based Accelerators: Models, Methodologies, and Frameworks

    Hardware accelerators based on field-programmable gate array (FPGA) and system-on-chip (SoC) devices have gained attention in recent years. One of the main reasons is that these devices contain reconfigurable logic, which makes them well suited to boosting the performance of applications. High-level synthesis (HLS) tools facilitate the creation of FPGA code from a high level of abstraction, using different directives to obtain an optimized hardware design based on performance metrics. However, the complexity of the design space depends on factors such as the number of directives used in the source code, the available resources in the device, and the clock frequency. Design space exploration (DSE) techniques comprise the evaluation of multiple implementations with different combinations of directives to obtain a design with a good compromise between the different metrics. This paper presents a survey of models, methodologies, and frameworks proposed for metric estimation, FPGA-based DSE, and power consumption estimation on FPGA/SoC. The main features, limitations, and trade-offs of these approaches are described. We also present the integration of existing models and frameworks in diverse research areas and identify the different challenges to be addressed.
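
    As a toy illustration of directive-based DSE (not any specific framework surveyed here), the sketch below enumerates combinations of unroll, pipeline, and array-partition directives, scores each with a made-up analytic latency/area model, and keeps the Pareto-optimal designs. The directive values and the metric model are assumptions for illustration; in practice the estimates would come from the surveyed models or from HLS reports.

```python
# Illustrative sketch only: exhaustive directive-combination DSE with a
# Pareto filter over estimated latency and LUT usage. The metric model is a
# hypothetical stand-in for the estimation approaches surveyed in the paper.
import itertools

UNROLL = [1, 2, 4, 8]
PIPELINE = [False, True]
PARTITION = [1, 2, 4]

def estimate(unroll, pipeline, partition):
    """Hypothetical analytic model: (latency in cycles, LUT count)."""
    latency = 1024 / (unroll * (2 if pipeline else 1))
    luts = 500 * unroll * partition + (800 if pipeline else 0)
    return latency, luts

def pareto_front(points):
    """Keep points not dominated in both latency and area."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

designs = [((u, pl, pa), estimate(u, pl, pa))
           for u, pl, pa in itertools.product(UNROLL, PIPELINE, PARTITION)]
front = pareto_front([m for _, m in designs])
for cfg, m in designs:
    if m in front:
        print("unroll=%d pipeline=%s partition=%d -> latency=%g cycles, LUTs=%d"
              % (cfg[0], cfg[1], cfg[2], m[0], m[1]))
```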

    Routine procedure for the assessment of rail-induced vibration

    Railway-induced ground-borne vibration is among the most common and widespread sources of perceptible environmental vibration, adversely affecting human activity and the operation of sensitive equipment. The rising demand for building new railway lines or upgrading existing lines to meet increasing traffic flows has furthered the need for adequate vibration assessment tools during scheme planning and design. In recent years, studies of rail and ground dynamics have produced many vibration prediction techniques, which have given rise to a variety of procedures for estimating rail-induced vibration in adjacent buildings. Each method shows potential for application at different levels of complexity and at different stages of a scheme. However, for the majority of the procedures significant challenges arise in obtaining the required input data, which can compromise their routine use in Environmental Impact Assessment (EIA). Moreover, as the majority of prediction procedures do not provide levels of uncertainty (i.e. the expected spread of the data), little is known about their effectiveness. Additionally, some procedures are restricted in that they require specific modelling approaches or proprietary software. Therefore, from an industrial point of view there is a need for a robust and flexible rail-induced vibration EIA procedure that can be routinely used with a degree of confidence. Based on an existing framework for assessing rail-induced vibration offered by the US Federal Transit Administration (FTA), this project investigates, revises and establishes an empirical procedure, capable of predicting rail-induced vibration in nearby buildings, that can be routinely applied by the sponsoring company. Special attention is given to the degree of variability inherent in rail-induced vibration prediction, bringing forward the degrees of uncertainty at all levels (i.e. measurement, analysis and scenario characterisation) that may affect the procedure's performance. The research shows diminishing confidence when predicting absolute rail-induced vibration levels. It was found that the ground-to-transducer coupling method, a critical step in acquiring data for characterising the ground, can affect the results by as much as 10 dB. The ground decay rate, when derived through transfer functions, was also shown to vary significantly according to the assessment approach. The extent to which track conditions, which are difficult to account for, can affect predictions is also shown: variability in vibration levels of up to 10 dB in some frequency bands was found to occur simply due to track issues. The thesis offers general curves that represent modern UK buildings; however, a 15 dB variation should be expected. For urban areas, where the ground structure is significantly heterogeneous, the thesis proposes an empirical modelling technique capable of shortening the FTA procedure whilst maintaining the uncertainty levels within limits. Based on these findings and acknowledging the inherent degree of variability mentioned above, this study proposes a resilient empirical vibration analysis model whose flexibility is established by balancing the significance of each modelling component against the uncertainty levels likely to arise due to randomness in the system.
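
    Purely as an illustration of the kind of empirical, additive-in-decibels prediction chain discussed above (not the thesis procedure itself), the sketch below sums a hypothetical source term, propagation term, and building coupling adjustment for one frequency band and attaches an uncertainty allowance of the order reported in the text; every numerical value is an assumption.

```python
# Illustrative sketch only, assuming an FTA-style empirical chain in which a
# building vibration level is predicted by summing decibel terms: a measured
# source term, a ground propagation term, and a building coupling adjustment,
# plus an uncertainty allowance. All numerical values are hypothetical.

def predict_vibration_level(source_db, propagation_db, coupling_db,
                            uncertainty_db=10.0):
    """Return (best estimate, upper bound) in VdB for one 1/3-octave band."""
    level = source_db + propagation_db + coupling_db
    return level, level + uncertainty_db

# Hypothetical terms for a single frequency band.
best, upper = predict_vibration_level(source_db=38.0,
                                      propagation_db=22.0,
                                      coupling_db=-5.0)
print("Predicted level: %.1f VdB (upper bound %.1f VdB)" % (best, upper))
```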

    A Fast Digital Integrator for magnetic measurements

    In this work, the Fast Digital Integrator (FDI), conceived for characterizing the dynamic behaviour of superconducting magnets and measuring fast transients of magnetic fields at the European Organization for Nuclear Research (CERN) and other high-energy physics research centres, is presented. The FDI was developed within a cooperation between the Magnet Tests and Measurements group at CERN and the Department of Engineering of the University of Sannio. The drawbacks of the main high-performance analog-to-digital architectures, such as Sigma-Delta converters and analog integrators, in reducing measurement time are overcome by founding the design on (i) a new generation of successive-approximation converters, for high resolution (18-bit) at high rate (500 kS/s), (ii) a digital signal processor, for on-line down-sampling by integrating the input signal, (iii) a custom time base, built on a Universal Time Counter, for reducing time-domain uncertainty, and (iv) a PXI board, for a high bus transfer rate as well as noise and heat immunity. A metrological analysis, aimed at verifying the effect of the main uncertainty sources, systematic errors, and design parameters on the instrument performance, is presented. In particular, the results of an analytical study, a preliminary numerical analysis, and a comprehensive multi-factor analysis carried out to confirm the instrument design are reported. The selection of physical components and the FDI implementation on a PXI board according to the conceptual architecture described above are then highlighted. The on-line integration algorithm, developed on the DSP to achieve a real-time Nyquist bandwidth of 125 kHz on the flux, is described. C++ classes for remote control of the FDI, developed as part of a new software framework, the Flexible Framework for Magnetic Measurements, conceived for managing a wide spectrum of magnetic measurement techniques, are also described. Experimental results of the metrological and throughput characterization of the FDI are reported. In the metrological characterization, the FDI, working as a digitizer and as an integrator, was assessed by means of static, dynamic, and time-base tests. In integrator operation, typical values of ±7 ppm static integral nonlinearity, ±3 ppm 24-h stability, and 108 dB signal-to-noise-and-distortion ratio at 10 Hz over a Nyquist bandwidth of 125 kHz were observed. The actual throughput rate was measured by a dedicated PXI bus analysis procedure, showing typical values of 1 MB/s. Finally, the experimental campaign carried out at the CERN superconducting-magnet test facilities for the on-field qualification of the FDI is illustrated. The FDI was included in a measurement station also employing a new generation of fast transducers, and the performance of this station was compared with that of the previous standard station used in series tests for qualifying LHC magnets. All the results highlight the FDI's full capability of acting as the new de facto standard for high-performance magnetic measurements at CERN and other high-energy physics research centres.
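
    As a rough sketch of what the DSP's on-line integration stage does conceptually (not the FDI firmware), the snippet below integrates a simulated coil-voltage stream into flux with the trapezoidal rule and down-samples the 500 kS/s input by a configurable factor; the decimation factor, the integration rule, and the test signal are assumptions for illustration.

```python
# Illustrative sketch only: on-line integration of a coil voltage into magnetic
# flux with simultaneous down-sampling of the 500 kS/s stream. Trapezoidal
# integration and all parameters here are assumptions.
import numpy as np

FS = 500_000          # ADC sample rate [S/s]
DECIMATION = 2        # output one flux sample per DECIMATION voltage samples

def integrate_and_decimate(voltage: np.ndarray, fs: float = FS,
                           m: int = DECIMATION) -> np.ndarray:
    """Running flux (V*s) evaluated at the down-sampled rate fs/m."""
    dt = 1.0 / fs
    # Trapezoidal running integral of the coil voltage.
    flux = np.concatenate(([0.0],
                           np.cumsum(0.5 * (voltage[1:] + voltage[:-1]) * dt)))
    return flux[::m]

# Hypothetical test signal: a 10 Hz sinusoidal voltage seen by the coil.
t = np.arange(0, 0.1, 1.0 / FS)
v = 1e-3 * np.sin(2 * np.pi * 10 * t)
phi = integrate_and_decimate(v)
print("Output rate:", FS / DECIMATION, "S/s; samples:", phi.size)
```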