7,621 research outputs found

    Exploring the Mysteries of System-Level Test

    System-level test, or SLT, is an increasingly important step in today's integrated circuit testing flows. Broadly speaking, SLT aims to execute functional workloads in operational modes. In this paper, we consolidate available knowledge about what SLT precisely is and why it is used despite its considerable cost and complexity. We discuss the types of failures covered by SLT, and outline approaches to quality assessment, test generation, and root-cause diagnosis in the context of SLT. Observing that the theoretical understanding of all these questions has not yet reached the maturity of the more conventional structural and functional test methods, we outline new and promising directions for methodical developments that leverage recent findings from software engineering.
    Comment: 7 pages, 2 figures

    Spartan Daily, March 12, 1990

    Volume 94, Issue 31
    https://scholarworks.sjsu.edu/spartandaily/7961/thumbnail.jp

    Spartan Daily, April 2, 1986

    Volume 86, Issue 40
    https://scholarworks.sjsu.edu/spartandaily/7429/thumbnail.jp

    Spartan Daily, November 15, 1988

    Volume 91, Issue 52
    https://scholarworks.sjsu.edu/spartandaily/7778/thumbnail.jp

    Spartan Daily, October 11, 1989

    Volume 93, Issue 28
    https://scholarworks.sjsu.edu/spartandaily/7888/thumbnail.jp

    Spartan Daily, September 22, 1989

    Volume 93, Issue 15
    https://scholarworks.sjsu.edu/spartandaily/7875/thumbnail.jp

    Spartan Daily, February 14, 1984

    Volume 82, Issue 12
    https://scholarworks.sjsu.edu/spartandaily/7130/thumbnail.jp

    Analyzing the Stability of Relative Performance Differences Between Cloud and Embedded Environments

    In recent years, the automotive industry has shifted towards the software-defined vehicle. Ensuring the correct behaviour of critical and non-critical software functions, such as those found in Autonomous Driving/Driver Assistance subsystems, requires extensive software testing. Using embedded hardware for these tests is either very expensive or prohibitively slow relative to the industry's fast development cycles. To reduce development bottlenecks, test frameworks that leverage the scalability of cloud environments are an essential part of the development process. However, relying on more performant cloud hardware for the majority of tests means that performance problems only become apparent in later development phases, when the software is deployed to the real target. If, on the other hand, the performance relation between executing in the cloud and on the embedded target can be approximated with sufficient precision, the expressiveness of the executed tests can be improved. Moreover, as a fully integrated system consists of a large number of software components that, at any given time, exhibit an unknown mix of best-/average-/worst-case behaviour, it is critical to know whether the performance relation differs depending on the inputs. In this paper, we examine the relative performance differences between a physical ARM-based chipset and a cloud-based ARM virtual machine, using a generic benchmark and two algorithms representative of typical automotive workloads, each modified to generate best-/average-/worst-case behaviour in a reproducible and controlled way. We determine that the performance difference factor is between 1.8 and 3.6 for synthetic benchmarks and around 2.0-2.8 for the more representative benchmarks. These results indicate that it may be possible to relate cloud to embedded performance with acceptable precision, especially when workload characterization is taken into account.
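    To make the measurement idea concrete, here is a minimal sketch in Python, assuming an illustrative insertion-sort workload and hypothetical helper names rather than the paper's actual benchmarks. The same script would be run on both the embedded target and the cloud VM; dividing the embedded median by the cloud median per behaviour class estimates the performance difference factor:

    import random
    import statistics
    import time

    def insertion_sort(data):
        # O(n) on presorted input (best case), O(n^2) on reversed input (worst case).
        a = list(data)
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def make_input(case, n=2000):
        # Steer the workload towards best-/average-/worst-case behaviour
        # in a reproducible, controlled way.
        data = list(range(n))
        if case == "average":
            random.shuffle(data)
        elif case == "worst":
            data.reverse()
        return data

    def median_runtime(data, n_runs=10):
        # Median wall-clock time of sorting `data` over n_runs repetitions.
        samples = []
        for _ in range(n_runs):
            start = time.perf_counter()
            insertion_sort(data)
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    for case in ("best", "average", "worst"):
        print(f"{case}: {median_runtime(make_input(case)):.6f} s")

    Comparing the per-case factors obtained from the two environments would then show whether the cloud-to-embedded relation is stable across behaviour classes, which is the stability question the paper investigates.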

    Spartan Daily, April 11, 1988

    Volume 90, Issue 42
    https://scholarworks.sjsu.edu/spartandaily/7701/thumbnail.jp