
    TriCheck: Memory Model Verification at the Trisection of Software, Hardware, and ISA

    Memory consistency models (MCMs), which govern inter-module interactions in a shared memory system, are a significant, yet often under-appreciated, aspect of system design. MCMs are defined at the various layers of the hardware-software stack, requiring thoroughly verified specifications, compilers, and implementations at the interfaces between layers. Current verification techniques evaluate segments of the system stack in isolation, such as proving compiler mappings from a high-level language (HLL) to an ISA or proving validity of a microarchitectural implementation of an ISA. This paper makes a case for full-stack MCM verification and provides a toolflow, TriCheck, capable of verifying that the HLL, compiler, ISA, and implementation collectively uphold MCM requirements. The work showcases TriCheck's ability to evaluate a proposed ISA MCM in order to ensure that each layer and each mapping is correct and complete. Specifically, we apply TriCheck to the open-source RISC-V ISA, seeking to verify accurate, efficient, and legal compilations from C11. We uncover under-specifications and potential inefficiencies in the current RISC-V ISA documentation and identify possible solutions for each. As an example, we find that a RISC-V-compliant microarchitecture allows 144 outcomes forbidden by C11 to be observed out of 1,701 litmus tests examined. Overall, this paper demonstrates the necessity of full-stack verification for detecting MCM-related bugs in the hardware-software stack.
    Comment: Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems
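
    As a concrete illustration of the kind of litmus test such a toolflow enumerates, the classic "store buffering" pattern below is written with C11 seq_cst atomics, for which C11 forbids the outcome r0 == 0 && r1 == 0. This is a minimal hypothetical sketch, not the paper's actual test harness.

        /* Store buffering (SB) litmus test; compile with -pthread.
           C11 forbids r0 == 0 && r1 == 0 when all accesses are seq_cst. */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        atomic_int x, y;
        int r0, r1;

        void *t0(void *arg) {
            atomic_store_explicit(&x, 1, memory_order_seq_cst);
            r0 = atomic_load_explicit(&y, memory_order_seq_cst);
            return NULL;
        }

        void *t1(void *arg) {
            atomic_store_explicit(&y, 1, memory_order_seq_cst);
            r1 = atomic_load_explicit(&x, memory_order_seq_cst);
            return NULL;
        }

        int main(void) {
            for (int i = 0; i < 100000; i++) {
                atomic_store(&x, 0);
                atomic_store(&y, 0);
                pthread_t a, b;
                pthread_create(&a, NULL, t0, NULL);
                pthread_create(&b, NULL, t1, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                if (r0 == 0 && r1 == 0)  /* forbidden by C11 */
                    printf("forbidden outcome at iteration %d\n", i);
            }
            return 0;
        }

    On a correct toolchain and implementation the message never prints; a compiler mapping or microarchitecture that admits such an outcome exhibits exactly the class of full-stack MCM bug the paper reports.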

    autoAx: An Automatic Design Space Exploration and Circuit Building Methodology utilizing Libraries of Approximate Components

    Approximate computing is an emerging paradigm for developing highly energy-efficient computing systems such as various accelerators. In the literature, many libraries of elementary approximate circuits have already been proposed to simplify the design process of approximate accelerators. Because these libraries contain tens to thousands of approximate implementations for a single arithmetic operation, it is intractable to find an optimal combination of approximate circuits in the library even for an application consisting of a few operations. An open problem is how to effectively combine circuits from these libraries to construct complex approximate accelerators. This paper proposes a novel methodology for searching, selecting and combining the most suitable approximate circuits from a set of available libraries to generate an approximate accelerator for a given application. To enable fast design space generation and exploration, the methodology utilizes machine learning techniques to create computational models estimating the overall quality of processing and hardware cost without performing full synthesis at the accelerator level. Using the methodology, we construct hundreds of approximate accelerators (for a Sobel edge detector) showing different but relevant trade-offs between the quality of processing and hardware cost, and identify a corresponding Pareto frontier. Furthermore, when searching for approximate implementations of a generic Gaussian filter consisting of 17 arithmetic operations, the proposed approach allows us to identify approximately 10^3 highly important implementations from 10^23 possible solutions in a few hours, while an exhaustive search would take four months on a high-end processor.
    Comment: Accepted for publication at the Design Automation Conference 2019 (DAC'19), Las Vegas, Nevada, US
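
    The selection step at the heart of such a flow is Pareto-dominance filtering over estimated (quality, cost) pairs. Below is a minimal sketch in C; the candidate values and struct fields are hypothetical stand-ins for the methodology's ML-based estimates.

        #include <stdio.h>

        struct candidate { double error; double cost; };

        /* a dominates b: no worse in both metrics, strictly better in one */
        static int dominates(struct candidate a, struct candidate b) {
            return a.error <= b.error && a.cost <= b.cost &&
                   (a.error < b.error || a.cost < b.cost);
        }

        int main(void) {
            struct candidate c[] = {
                {0.010, 120.0}, {0.020, 80.0}, {0.015, 150.0},
                {0.005, 200.0}, {0.030, 60.0},
            };
            int n = (int)(sizeof c / sizeof c[0]);
            for (int i = 0; i < n; i++) {
                int dominated = 0;
                for (int j = 0; j < n; j++)
                    if (j != i && dominates(c[j], c[i]))
                        dominated = 1;
                if (!dominated)
                    printf("Pareto-optimal: error=%.3f cost=%.1f\n",
                           c[i].error, c[i].cost);
            }
            return 0;
        }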

    Modeling and analysis of semiconductor manufacturing processes using petri nets

    This thesis addresses the issues in modeling and analysis of multichip module (MCM) manufacturing processes using Petri nets. Building such graphical and mathematical models is a crucial step towards understanding MCM technologies and enhancing their application scope. In this thesis, the application of Petri nets is presented with top-down and bottom-up approaches. First, the theory of Petri nets is summarized, covering its basic notation and properties. After that, the capability of calculating and analyzing Petri nets with deterministic timing information is extended to meet the requirements of the MCM models. Then, using top-down refinement and system decomposition, MCM models are built from an abstract view down to concrete systems with timing information. In this process, reduction theory based on multiple-input, single-output modules for deterministic Petri nets is applied to analyze the cycle time of the Petri net models. In addition, this thesis is significant in its use of reduction theory derived for timed marked graphs, an important class of Petri nets.
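
    The cycle-time analysis rests on a standard property of deterministic timed marked graphs: the steady-state cycle time equals the maximum, over all directed circuits, of the circuit's total delay divided by its token count. A small sketch with made-up circuits, not a model from the thesis:

        #include <stdio.h>

        struct circuit { double total_delay; int tokens; };

        int main(void) {
            /* illustrative circuits of a timed marked graph */
            struct circuit c[] = {
                { 12.0, 2 },  /* processing loop: 12 time units, 2 tokens */
                {  9.0, 1 },  /* transport loop:   9 time units, 1 token  */
            };
            double cycle_time = 0.0;
            for (int i = 0; i < 2; i++) {
                double ratio = c[i].total_delay / c[i].tokens;
                if (ratio > cycle_time)
                    cycle_time = ratio;
            }
            printf("steady-state cycle time = %.2f\n", cycle_time); /* 9.00 */
            return 0;
        }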

    Use data mining to improve student retention in HE - a case study

    Data mining combines machine learning, statistics and visualization techniques to discover and extract knowledge. One of the biggest challenges that higher education faces is improving student retention (National Audit Office, 2007). Student retention has become an indicator of academic performance and enrolment management. Our project uses data mining and natural language processing technologies to monitor students, analyse their academic behaviour and provide a basis for efficient intervention strategies. Our aim is to identify potential problems as early as possible and to follow up with intervention options to enhance student retention. In this paper we discuss how data mining can help spot students 'at risk', evaluate course or module suitability, and tailor interventions to increase student retention.
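
    A minimal sketch of the kind of "at risk" score such a system might produce; the logistic model, features and weights here are hypothetical, not taken from the project.

        /* link with -lm */
        #include <math.h>
        #include <stdio.h>

        /* risk = sigmoid(w0 + w1*attendance + w2*avg_grade); weights made up */
        static double risk_score(double attendance, double avg_grade) {
            double z = 4.0 - 3.0 * attendance - 0.05 * avg_grade;
            return 1.0 / (1.0 + exp(-z));
        }

        int main(void) {
            double r = risk_score(0.40, 40.0); /* 40% attendance, mean grade 40 */
            printf("risk = %.2f -> %s\n", r,
                   r > 0.5 ? "flag for intervention" : "no action");
            return 0;
        }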

    Analyzing the memory ordering models of the Apple M1

    The Apple M1 ARM processor family incorporates two memory consistency models: the conventional weak ARM memory ordering and the total store ordering (TSO) model from the x86 architecture, used by Apple's x86 emulator, Rosetta 2. The presence of both memory ordering models on the same hardware enables us to thoroughly benchmark and compare their performance characteristics and worst-case workloads. In this paper, we assess the performance implications of TSO on the Apple M1 processor architecture. Based on the multi-threaded workloads of the SPEC CPU 2017 floating-point benchmark suite, our findings indicate that TSO is, on average, 8.94 percent slower than ARM's weaker memory ordering. Through synthetic benchmarks, we further explore the workloads that experience the most significant performance degradation under TSO. We also take a deeper look at the specific atomic instructions provided by the ARMv8.3 specification and their synchronization overheads.
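
    The overhead question can be probed with a micro-benchmark like the sketch below, which times a sequentially consistent atomic increment against a plain one; the iteration count and timing method are arbitrary choices, and this is not the paper's harness.

        #include <stdatomic.h>
        #include <stdio.h>
        #include <time.h>

        #define N 100000000L

        static double elapsed_ns(struct timespec a, struct timespec b) {
            return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        }

        int main(void) {
            struct timespec t0, t1;
            atomic_long a = 0;
            volatile long p = 0;  /* volatile so the loop is not elided */

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                atomic_fetch_add_explicit(&a, 1, memory_order_seq_cst);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double atomic_ns = elapsed_ns(t0, t1);

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                p = p + 1;        /* non-atomic baseline */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double plain_ns = elapsed_ns(t0, t1);

            printf("atomic: %.2f ns/op, plain: %.2f ns/op\n",
                   atomic_ns / N, plain_ns / N);
            return 0;
        }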

    Reliability-Oriented Strategies for Multichip Module Based Mission Critical Industry Applications

    Availability is defined as the portion of time a system remains operational to serve its purpose. In mission critical applications (MCA), the availability of power converters is a determining factor in ensuring continued productivity and avoiding financial losses. Multichip modules (MCM) are widely adopted in such applications due to their high power density and reduced price; however, the high number of dies inside a compact package results in critical thermal deviations among them. Moreover, uneven power flow, inhomogeneous cooling and accumulated degradation can produce thermal deviation among modules, increasing temperature differences and subjecting specific subsets of devices to extra temperature. High temperatures influence multiple failure mechanisms in power modules, especially under highly dynamic load profiles, so the higher failure probability of the hottest dies drastically reduces the reliability of mission critical power converters. This work therefore investigates reliability-oriented solutions for the design and thermal management of MCM-based power converters applied in mission critical applications. The first contribution is the integration of a die-level thermal and probabilistic analysis into the design for reliability (DFR) procedure, whereby the temperature and failure probability of each die are taken into account during reliability modeling. It is demonstrated that the die-level analysis obtains a more realistic system-level reliability for MCM-based power converters. Thereafter, three novel die-level thermal balancing strategies, based on a modified MCM with additional gate-emitter connections, are proposed and investigated. It is shown that the temperature imbalances inside the MCM can be overcome and the maximum temperature reduced by up to 8%.
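
    The die-level idea can be stated compactly: each die gets its own failure distribution from its own temperature, and the module survives only if every die survives. A sketch using a Weibull distribution with a crude Arrhenius-style temperature derating; all parameter values are illustrative, not the thesis's models.

        /* link with -lm */
        #include <math.h>
        #include <stdio.h>

        /* per-die Weibull reliability; scale eta shrinks with temperature */
        static double die_reliability(double t_hours, double temp_c) {
            double beta = 2.0;                                   /* shape */
            double eta  = 1.0e5 * exp(-0.05 * (temp_c - 100.0)); /* hours */
            return exp(-pow(t_hours / eta, beta));
        }

        int main(void) {
            double temps[6] = {110, 112, 115, 118, 121, 130}; /* deg C */
            double r_sys = 1.0;
            for (int i = 0; i < 6; i++)            /* series system */
                r_sys *= die_reliability(20000.0, temps[i]);
            printf("module reliability at 20000 h: %.4f\n", r_sys);
            return 0;
        }

    The hottest die dominates the product, which is why balancing die temperatures raises system-level reliability.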

    Performance and emissions of compression ignition engine fueled with preheated blend of vegetable oil

    Nowadays, the use of fossil fuel in compression ignition engines has become a problem. Straight vegetable oil is one of the most reliable fuels suitable for diesel engines. The scope of this study is focused on the performance and emissions of straight vegetable oil from the grocery store, compared with crude palm oil from the UTHM pilot plant. S5, S10 and S15 straight vegetable oil fuels are used in this experiment. These straight vegetable oils are also compared with natural diesel in a compression-ignition engine. The test is conducted with the UTHM dynamometer located at the automotive lab. The vegetable oil is tested for density, kinematic viscosity, water content, acid value and flash point. Performance and emissions are measured in terms of brake power, flywheel torque, and exhaust content (hydrocarbon, carbon monoxide, carbon dioxide, oxygen content and smoke opacity). Results obtained show that the flywheel torque produced by the biodiesel fuels is less than that of natural diesel (ND). The biodiesel emission results show better emissions compared to the ND fuel: the CO2, CO, HC and O2 content released from the biodiesel fuels is clearly lower than from the ND fuel. At low engine speed, the biodiesel smoke opacity content is quite high compared to the ND and the other fuel types.
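
    The performance figures rest on the standard dynamometer relation P = 2*pi*N*T/60 between brake power, engine speed and torque; the sample readings below are made up, not measurements from the study.

        #include <stdio.h>

        #define PI 3.14159265358979

        /* brake power [W] from speed N [rev/min] and torque T [N*m] */
        static double brake_power_w(double rpm, double torque_nm) {
            return 2.0 * PI * rpm * torque_nm / 60.0;
        }

        int main(void) {
            double p = brake_power_w(2500.0, 85.0);
            printf("brake power = %.2f kW\n", p / 1000.0); /* ~22.25 kW */
            return 0;
        }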

    On the Reliability of Machine Learning Models for Survival Analysis When Cure Is a Possibility

    In classical survival analysis, it is assumed that all individuals will experience the event of interest. However, if there is a proportion of subjects who will never experience the event, then a standard survival approach is not appropriate, and cure models should be considered instead. This paper deals with the problem of adapting a machine learning approach for classical survival analysis to a situation where cure (i.e., not suffering the event) is a possibility. Specifically, a brief review of cure models and recent machine learning methodologies is presented, and an adaptation of machine learning approaches to account for cured individuals is introduced. In order to validate the proposed methods, we present an extensive simulation study in which we compare the performance of the adapted machine learning algorithms with existing cure models. The results show the good behavior of the semiparametric or the nonparametric approaches, depending on the simulated scenario. The practical utility of the methodology is showcased through two real-world dataset illustrations. In the first one, the results show the gain of using the nonparametric mixture cure model approach. In the second example, the results show the poor performance of some machine learning methods for small sample sizes.
    This project was funded by the Xunta de Galicia (Axencia Galega de Innovación), COVID-19 research project presented in ISCIII IN845D 2020/26, Operational Program FEDER Galicia 2014–2020; by the Centro de Investigación de Galicia "CITIC", funded by Xunta de Galicia and the European Union European Regional Development Fund (ERDF)-Galicia 2014–2020 Program, by grant ED431G 2019/01; and by the Spanish Ministerio de Economía y Competitividad (research projects PID2019-109238GB-C22 and PID2021-128045OA-I00). ALC was sponsored by the BEATRIZ GALINDO JUNIOR Spanish Grant from MICINN (Ministerio de Ciencia e Innovación) with code BGP18/00154. ALC was partially supported by the MICINN Grant PID2020-113578RB-I00 and partially by Xunta de Galicia (Grupos de Referencia Competitiva ED431C-2020-14). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
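
    For reference, the mixture cure model the abstract builds on has the standard form from the cure-model literature, with a cured fraction \pi that never experiences the event and a latency survival function S_u for the uncured:

        \[
          S(t) = \pi + (1 - \pi)\, S_u(t),
          \qquad \lim_{t \to \infty} S(t) = \pi > 0,
        \]

    so the survival curve plateaus at the cure probability instead of decaying to zero, which is exactly the feature a standard survival method fails to capture.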