
    Trojans in Early Design Steps—An Emerging Threat

    Hardware Trojans inserted by malicious foundries during integrated circuit manufacturing have received substantial attention in recent years. In this paper, we focus on a different type of hardware Trojan threat: attacks in the early steps of the design process. We show that third-party intellectual property cores and CAD tools constitute realistic attack surfaces and that even the system specification can be targeted by adversaries. We discuss the devastating damage potential of such attacks, the applicable countermeasures against them, and their deficiencies.

    On Silicon Group Elements Ejected by Supernovae Type Ia

    There is compelling evidence that the peak brightness of a Type Ia supernova is affected by the electron fraction Ye at the time of the explosion. The electron fraction is set by the aboriginal composition of the white dwarf and the reactions that occur during pre-explosive convective burning. To date, determining the makeup of the white dwarf progenitor has relied on indirect proxies, such as the average metallicity of the host stellar population. In this paper, we present analytical calculations supporting the idea that the electron fraction of the progenitor systematically influences the nucleosynthesis of silicon-group ejecta in Type Ia supernovae. In particular, we suggest that the abundances generated in quasi-nuclear statistical equilibrium are preserved during the subsequent freeze-out. This potentially allows recovery of Ye at explosion from the abundances inferred from an observed spectrum. We show that measurements of the 28Si, 32S, 40Ca, and 54Fe abundances can be used to construct Ye in the silicon-rich regions of the supernova. If these four abundances are determined exactly, they are sufficient to recover Ye to 6 percent. This is because these isotopes dominate the composition of both silicon-rich and iron-rich material in quasi-nuclear statistical equilibrium. Analytical analysis shows that the 28Si abundance is insensitive to Ye, the 32S abundance has a nearly linear trend with Ye, and the 40Ca abundance has a nearly quadratic trend with Ye. We verify these trends with post-processing of 1D models and show that they are reflected in model synthetic spectra.
    Comment: Submitted to the Ap
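    The electron fraction of a composition follows directly from its isotopic mass fractions via Ye = Σᵢ (Zᵢ/Aᵢ) Xᵢ. The sketch below illustrates this relation for the four isotopes named in the abstract; the mass-fraction values in the example are illustrative and not taken from the paper.

    ```python
    # Ye = sum_i (Z_i / A_i) * X_i, where X_i are mass fractions,
    # Z_i proton numbers, and A_i mass numbers.
    ISOTOPES = {
        "28Si": (14, 28),
        "32S": (16, 32),
        "40Ca": (20, 40),
        "54Fe": (26, 54),
    }

    def electron_fraction(mass_fractions):
        """Compute Ye from a dict mapping isotope name -> mass fraction."""
        return sum(x * ISOTOPES[iso][0] / ISOTOPES[iso][1]
                   for iso, x in mass_fractions.items())

    # Pure 28Si (Z/A = 0.5) gives Ye = 0.5; admixing the neutron-rich
    # 54Fe (Z/A ~ 0.481) pulls Ye below 0.5, which is why the 54Fe
    # abundance is a sensitive probe of Ye in silicon-rich material.
    print(electron_fraction({"28Si": 1.0}))              # 0.5
    print(electron_fraction({"28Si": 0.9, "54Fe": 0.1}))
    ```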

    A Test Vector Minimization Algorithm Based on Delta Debugging for Post-Silicon Validation of PCIe Root Port

    In silicon hardware design, such as the design of PCIe devices, design verification is an essential part of the design process, whereby devices are subjected to a series of tests that verify their functionality. However, manual debugging is still widely used in post-silicon validation and is a major bottleneck in the validation process: a large number of test vectors has to be analyzed, which slows the process down. To address this, a test vector minimizer algorithm is proposed that eliminates redundant test vectors that do not contribute to the reproduction of a test failure, thereby improving debug throughput. The proposed methodology is inspired by the Delta Debugging algorithm, which has been used in automated software debugging but not in post-silicon hardware debugging. The minimizer operates on the principle of binary partitioning of the test vectors, iteratively testing each subset (or its complement) on a post-silicon System Under Test (SUT) to identify and eliminate redundant test vectors. Test results using test vector sets containing deliberately introduced erroneous test vectors show that the minimizer is able to isolate the erroneous vectors. In test cases containing up to 10,000 test vectors, the minimizer requires about 16 ns per test vector when only one erroneous test vector is present. In a test case with 1,000 vectors including erroneous vectors, the same minimizer requires about 140 μs per injected erroneous test vector. Thus, the minimizer's CPU consumption is significantly smaller than the typical runtime of a test on the SUT. The factors that most significantly impact the performance of the algorithm are the number of erroneous test vectors and their distribution (spacing); the effects of the total number of test vectors and of the position of the erroneous vectors are relatively minor. The minimization algorithm is therefore most effective when there are only a few erroneous test vectors within a large set of test vectors.
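    The binary-partitioning idea behind the minimizer can be sketched as a simplified ddmin-style reduction loop. This is an illustrative sketch, not the thesis implementation: the `fails` callback stands in for running a candidate vector subset on the SUT and reporting whether the failure still reproduces.

    ```python
    def minimize(vectors, fails):
        """Reduce `vectors` to a smaller subset for which
        `fails(subset)` is still True (failure still reproduces)."""
        n = 2  # start by splitting into two partitions
        while len(vectors) >= 2:
            size = len(vectors) // n
            chunks = [vectors[i:i + size] for i in range(0, len(vectors), size)]
            reduced = False
            for i in range(len(chunks)):
                # Try dropping one chunk: test the complement on the SUT.
                complement = [v for j, c in enumerate(chunks) if j != i for v in c]
                if complement and fails(complement):
                    vectors = complement        # chunk i was redundant
                    n = max(n - 1, 2)
                    reduced = True
                    break
            if not reduced:
                if n >= len(vectors):
                    break                       # single-vector granularity reached
                n = min(n * 2, len(vectors))    # refine the partitioning
        return vectors

    # Example: a failure that reproduces only when vector 7 is present
    # is isolated from a 10-vector test case.
    trace = list(range(10))
    print(minimize(trace, lambda s: 7 in s))    # [7]
    ```

    With two interacting erroneous vectors (say, the failure needs both 3 and 8), the same loop converges to the pair, matching the observation that performance degrades with the number and spacing of erroneous vectors.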

    Spectral sequences of Type Ia supernovae. I. Connecting normal and sub-luminous SN Ia and the presence of unburned carbon

    Type Ia supernovae are generally agreed to arise from thermonuclear explosions of carbon-oxygen white dwarfs. The actual path to explosion, however, remains elusive, with numerous plausible parent systems and explosion mechanisms suggested. Observationally, type Ia supernovae have multiple subclasses, distinguished by their lightcurves and spectra. This raises the question of whether these subclasses reflect multiple mechanisms occurring in nature, or instead explosions with a large but continuous range of physical properties. We revisit the idea that normal and 91bg-like supernovae can be understood as part of a spectral sequence in which changes in temperature dominate. Specifically, we find that a single ejecta structure is sufficient to provide reasonable fits of both the normal type Ia supernova SN~2011fe and the 91bg-like SN~2005bl, provided that the luminosity and thus temperature of the ejecta are adjusted appropriately. This suggests that the outer layers of the ejecta are similar, providing some support for a common explosion mechanism. Our spectral sequence also helps to shed light on the conditions under which carbon can be detected in pre-maximum SN~Ia spectra: we find that emission from iron can "fill in" the carbon trough in cool SN~Ia. This may indicate that the outer ejecta layers of events in which carbon is detected are relatively metal-poor compared to those of events where carbon is not detected.

    Pre-validation of SoC via hardware and software co-simulation

    Abstract. System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer-level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. First, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Second, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP-block level with actual hardware models, a task previously possible only with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
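    At its core, the coupling described above forwards the emulated CPU's memory-mapped bus accesses to the RTL simulator and advances the hardware clock for each transaction. The sketch below is a deliberately simplified illustration of that bridge pattern; `RtlModel`, `CoSimBridge`, and the fixed cycle cost are hypothetical stand-ins, not the thesis's QEMU/RTL interface.

    ```python
    class RtlModel:
        """Stand-in for a cycle-accurate RTL register block (hypothetical)."""
        def __init__(self):
            self.regs = {0x00: 0, 0x04: 0}
            self.cycle = 0

        def tick(self):
            self.cycle += 1  # advance the model by one clock cycle

    class CoSimBridge:
        """Forwards CPU-emulator bus accesses to the RTL model,
        advancing the clock a fixed number of cycles per access."""
        def __init__(self, rtl, cycles_per_access=2):
            self.rtl = rtl
            self.cycles_per_access = cycles_per_access

        def write(self, addr, value):
            for _ in range(self.cycles_per_access):
                self.rtl.tick()
            self.rtl.regs[addr] = value

        def read(self, addr):
            for _ in range(self.cycles_per_access):
                self.rtl.tick()
            return self.rtl.regs[addr]

    # A driver unit test can now run against the "hardware" model:
    bridge = CoSimBridge(RtlModel())
    bridge.write(0x04, 0xABCD)
    assert bridge.read(0x04) == 0xABCD
    assert bridge.rtl.cycle == 4  # two accesses, two cycles each
    ```

    The key property this illustrates is that the same driver code exercises the hardware model cycle by cycle, so integration errors surface in a unit test rather than on a system-level prototype.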

    Silicon-Organic Hybrid (SOH) Mach-Zehnder Modulators for 100 Gbit/s On-Off Keying

    Electro-optic modulators for high-speed on-off keying (OOK) are key components of short- and medium-reach interconnects in data-center networks. Besides small footprint and cost-efficient large-scale production, small drive voltages and ultra-low power consumption are of paramount importance for such devices. Here we demonstrate that the concept of silicon-organic hybrid (SOH) integration is perfectly suited to meeting these challenges. The approach combines the unique processing advantages of large-scale silicon photonics with the unrivalled electro-optic (EO) coefficients obtained by molecular engineering of organic materials. In our proof-of-concept experiments, we demonstrate generation and transmission of OOK signals with line rates of up to 100 Gbit/s using a 1.1 mm-long SOH Mach-Zehnder modulator (MZM) which features a π-voltage of only 0.9 V. This experiment represents not only the first demonstration of 100 Gbit/s OOK on the silicon photonic platform, but also the lowest drive voltage and energy consumption ever demonstrated at this data rate for a semiconductor-based device. We support our experimental results with a theoretical analysis and show that the nonlinear transfer characteristic of the MZM can be exploited to overcome bandwidth limitations of the modulator and of the electric driver circuitry. The devices are fabricated in a commercial silicon photonics line and can hence be combined with the full portfolio of standard silicon photonic devices. We expect that high-speed power-efficient SOH modulators may have a transformative impact on short-reach optical networks, enabling compact transceivers with unprecedented energy efficiency at the heart of future Ethernet interfaces at Tbit/s data rates.
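    The nonlinearity the authors exploit is the ideal MZM power transfer characteristic, T(V) = cos²(πV / 2Vπ): the curve is flat near its rails, so voltage errors in a band-limited drive signal barely change the optical output there. A short numerical sketch (using the paper's Vπ = 0.9 V; the drive values are illustrative):

    ```python
    import math

    V_PI = 0.9  # pi-voltage of the SOH MZM, in volts (from the paper)

    def mzm_transmission(v):
        """Ideal MZM power transfer: T(V) = cos^2(pi * V / (2 * V_PI))."""
        return math.cos(math.pi * v / (2 * V_PI)) ** 2

    # The rails are flat: a residual swing of 10% of V_PI around the
    # "one" level costs only a few percent of optical power, so a
    # rounded (band-limited) electrical drive still yields sharp
    # optical ones and zeros.
    assert abs(mzm_transmission(0.0) - 1.0) < 1e-12         # full "one"
    assert abs(mzm_transmission(V_PI) - 0.0) < 1e-12        # full "zero"
    assert mzm_transmission(0.1 * V_PI) > 0.97              # overshoot absorbed
    assert abs(mzm_transmission(0.5 * V_PI) - 0.5) < 1e-12  # quadrature point
    ```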

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobiles, laptops, and computers to home automation, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that the devices fulfill the designer's expectations under variable conditions has become a great challenge, requiring considerable design time and effort. Whenever an error is encountered, the process is restarted; it is therefore desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify the functionality through physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions towards a fast and automated debugging solution for reconfigurable hardware. During the research work, we used an obstacle avoidance system for robotic vehicles as a use case to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data which permits cycle-accurate replay. This methodology ensures the capture of permanent as well as intermittent errors in the implemented design.
    The contribution also describes a solution to enhance hardware observability: it is proposed to utilize processor-configurable concentration networks, to employ debug-data compression to transmit the data more efficiently, and to partially reconfigure the debugging system at run-time, saving the time required for design re-compilation and preserving timing closure. The second contribution presents a solution for communication-centric designs; solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal selection methodology to identify the signals that are most helpful during the debugging process. A connectivity generation tool is also presented which can map the identified signals to the debugging system. The fourth contribution presents an automated error detection solution which can help capture permanent as well as intermittent errors without continuous monitoring of debugging data. The proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging. We present a novel idea of using a recurrent neural network for debugging when a golden reference is available for training the network, and extend the idea to designs where a golden reference is not present.
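    The first contribution's core idea, a lossless per-cycle trace of selected signals that can be replayed against a golden reference, can be sketched in a few lines. This is an illustrative simplification; the thesis implementation adds concentration networks, debug-data compression, and run-time reconfiguration, none of which appear here.

    ```python
    class TraceBuffer:
        """Illustrative lossless trace: records selected signal values
        every cycle so a failure window can be replayed cycle-accurately."""
        def __init__(self, signals):
            self.signals = signals
            self.trace = []   # one {signal: value} sample per cycle

        def capture(self, sample):
            """Record the current values of the traced signals."""
            self.trace.append({s: sample[s] for s in self.signals})

        def replay(self, reference):
            """Compare the trace against a golden-reference run and
            return the cycle numbers where any traced signal diverges."""
            return [c for c, (got, exp) in enumerate(zip(self.trace, reference))
                    if got != exp]

    # An intermittent error that flips 'valid' only at cycle 2 is
    # pinpointed exactly, because every cycle was captured losslessly.
    tb = TraceBuffer(["valid", "data"])
    golden = [{"valid": 1, "data": i} for i in range(4)]
    for c, s in enumerate(golden):
        tb.capture({**s, "valid": 0 if c == 2 else s["valid"]})
    print(tb.replay(golden))   # [2]
    ```

    Capturing every cycle rather than sampling is what makes intermittent errors, which may appear in only one cycle out of millions, reproducible during replay.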