1,715 research outputs found

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chip (SoCs). As fabrication technology scales down, chips are becoming less reliable, and fault tolerance therefore incurs increasing power and performance costs. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact the design, manufacturing, and testing costs, as well as system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured against the IEC 61508 functional safety standard) and tight power and performance constraints.

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. Systems-on-chip (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation couples a QEMU CPU emulator to a register-transfer-level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. First, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Second, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP-block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
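    The co-simulation idea above can be sketched in miniature: software (here a plain Python function standing in for driver code running under QEMU) exercises a cycle-based hardware model (standing in for the RTL simulator) through a memory-mapped register interface. All register names and offsets are hypothetical, invented for illustration; the thesis's actual QEMU-to-RTL bridge is far more involved.

```python
# Toy sketch of co-simulation-based pre-validation: a "driver" talks to a
# cycle-accurate hardware model through memory-mapped registers, and the
# simulation loop advances the model one clock cycle at a time.
# All names/offsets are hypothetical, not taken from the thesis.

class TimerModel:
    """Stand-in for an RTL timer block, stepped cycle by cycle."""
    CTRL, COUNT = 0x0, 0x4              # register offsets (hypothetical)

    def __init__(self):
        self.regs = {self.CTRL: 0, self.COUNT: 0}

    def step(self):                     # advance one clock cycle
        if self.regs[self.CTRL] & 1:    # enable bit set -> count up
            self.regs[self.COUNT] += 1

    def read(self, off):
        return self.regs[off]

    def write(self, off, val):
        self.regs[off] = val

def driver_unit_test(hw):
    """A software unit test run against the hardware model early,
    instead of waiting for a costly system-level prototype."""
    hw.write(TimerModel.CTRL, 1)        # driver enables the timer
    for _ in range(10):                 # co-simulation loop: tick the model
        hw.step()
    return hw.read(TimerModel.COUNT)

print(driver_unit_test(TimerModel()))   # -> 10
```

    Running the driver's unit test against the model like this is exactly the kind of early hardware/software integration check the thesis argues for, just reduced to a toy scale.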

    Beyond the Hype: A Real-World Evaluation of the Impact and Cost of Machine Learning-Based Malware Detection

    There is a lack of scientific testing of commercially available malware detectors, especially those that boast accurate classification of never-before-seen (i.e., zero-day) files using machine learning (ML). The result is that the efficacy of, and gaps among, the available approaches are opaque, inhibiting end users from making informed network security decisions and researchers from targeting gaps in current detectors. In this paper, we present a scientific evaluation of four market-leading malware detection tools to assist an organization with two primary questions: (Q1) To what extent do ML-based tools accurately classify never-before-seen files without sacrificing detection ability on known files? (Q2) Is it worth purchasing a network-level malware detector to complement host-based detection? We tested each tool against 3,536 total files (2,554 or 72% malicious, 982 or 28% benign), including over 400 zero-day malware samples, and tested with a variety of file types and delivery protocols. We present statistical results on detection time and accuracy, consider complementary analysis (using multiple tools together), and provide two novel applications of a recent cost-benefit evaluation procedure by Iannaconne & Bridges that incorporates all the above metrics into a single quantifiable cost. While the ML-based tools are more effective at detecting zero-day files and executables, the signature-based tool may still be an overall better option. Both network-based tools provide substantial (simulated) savings when paired with either host tool, yet both show poor detection rates on protocols other than HTTP and SMTP. Our results show that all four tools have near-perfect precision but alarmingly low recall, especially on file types other than executables and office files: 37% of malware tested, including all polyglot files, went undetected.
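    The "near-perfect precision but alarmingly low recall" claim can be sanity-checked from the reported corpus figures. The true-positive and false-negative counts below follow from the abstract (2,554 malicious files, 37% undetected); the false-positive count is an assumed small value standing in for "near-perfect precision", not a number from the paper.

```python
# Back-of-the-envelope precision/recall from the abstract's figures.
malicious, benign = 2554, 982      # reported corpus composition
fn = round(0.37 * malicious)       # 37% of malware missed -> 945
tp = malicious - fn                # 1609 detected
fp = 5                             # assumed: near-zero false alarms

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.3f} recall={recall:.3f}")
# -> precision=0.997 recall=0.630
```

    Even with an essentially perfect false-positive rate, missing over a third of the malware leaves recall around 0.63, which is the gap the paper highlights.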

    Security Operations Centers: A Holistic View on Problems and Solutions

    Since Security Operations Centers (SOCs) were first implemented, they have strived to protect the organization and constituency they serve from all manner of Information Technology (IT) security threats. As SOCs have evolved to carry out this mission as effectively and efficiently as possible, they have struggled with changes and upgrades to their foundational elements of people, processes, and technology. While most relevant literature focuses on a single challenge a SOC faces, or on one aspect of one problem, the authors of this paper performed a literature review to identify and discuss the top current and future challenges that SOCs face, as well as the top current and future solutions to these problems.

    High quality testing of grid style power gating

    This paper shows that existing delay-based testing techniques for power gating exhibit fault-coverage loss due to unconsidered delays introduced by the structure of the virtual-voltage power distribution network (VPDN). To restore this loss, which can reach up to 70.3% on stuck-open faults, we propose design-for-testability (DFT) logic that accounts for the impact of the VPDN on fault coverage, constituting the proper interface between the VPDN and the DFT. The proposed logic can easily be implemented on top of existing DFT solutions, and its overhead is optimized by an algorithm that offers trade-off flexibility between test-application time and hardware overhead. Through physical-layout SPICE simulations, we show complete fault-coverage recovery on stuck-open faults and a 43.2% test-application-time improvement compared to a previously proposed DFT technique. To the best of our knowledge, this paper presents the first analysis of the VPDN impact on test quality.

    Fault Tolerant Nanosatellite Computing on a Budget

    In this contribution, we present a CubeSat-compatible on-board computer (OBC) architecture that offers strong fault tolerance to enable the use of such spacecraft in critical and long-term missions. We describe in detail the design of our OBC's breadboard setup, and document its composition from the component level all the way down to the software level. Fault tolerance in this OBC is achieved not through radiation hardening but through intelligent software. The OBC ages gracefully, makes use of FPGA reconfiguration and mixed criticality, and can dynamically adapt to changing performance requirements throughout a space mission. We developed a proof of concept with several Xilinx UltraScale and UltraScale+ FPGAs. With the smallest Kintex UltraScale+ KU3P device, we achieve 1.94 W total power consumption at 300 MHz, well within the power-budget range of current 2U CubeSats. To our knowledge, this is the first scalable, COTS-based, and widely reproducible OBC solution that can offer strong fault coverage even for small CubeSats. To reproduce this OBC architecture, no custom-written, proprietary, or protected IP is needed, and the required design tools are available free of charge to academics. All COTS components required to construct this architecture can be purchased on the open market and are affordable even for academic and scientific CubeSat developers.
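    The abstract does not spell out the software fault-tolerance mechanism, so purely as a generic illustration of software-implemented fault masking (not the authors' design): the classic approach is to run redundant replicas and vote on their outputs, so that a fault corrupting one replica is outvoted by the others.

```python
# Generic software fault-masking sketch: majority voting over redundant
# replica outputs (e.g., triple modular redundancy). Illustrative only;
# not the mechanism described in the paper.
from collections import Counter

def vote(results):
    """Return the majority value among replica outputs; raise if no
    majority exists (an uncorrectable disagreement)."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: uncorrectable fault")
    return value

# One of three replicas returns a corrupted result; the voter masks it.
print(vote([42, 42, 7]))  # -> 42
```

    With three replicas, any single corrupted output is masked; a disagreement among all three is detected but not correctable, which is why such schemes are typically paired with recovery actions such as reconfiguration.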