
    On Timing Model Extraction and Hierarchical Statistical Timing Analysis

    In this paper, we investigate the challenges of applying Statistical Static Timing Analysis (SSTA) in a hierarchical design flow, where modules supplied by IP vendors are used to hide design details for IP protection and to reduce the complexity of design and verification. For the three basic circuit types, combinational, flip-flop-based and latch-controlled, we propose methods to extract timing models that contain interfacing constraints as well as compressed internal constraints. Using these compact timing models, the runtime of full-chip timing analysis can be reduced while circuit details from IP vendors are not exposed. We also propose a method to reconstruct the correlation between modules during full-chip timing analysis. This correlation cannot be incorporated into the timing models because it depends on the layout of the corresponding modules in the chip. In addition, we investigate how to apply the extracted timing models with the reconstructed correlation to evaluate the performance of the complete design. Experiments demonstrate that, using the extracted timing models and the reconstructed correlation, full-chip timing analysis can be several times faster than analyzing the flattened circuit directly, while the accuracy of statistical timing analysis is still well maintained.
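    To make the statistical setting concrete, the sketch below shows how correlated delays are commonly manipulated in SSTA using a canonical first-order (linear Gaussian) form, with a statistical ADD and a Clark-style MAX. This is a generic illustration of the underlying arithmetic, not the paper's extraction or correlation-reconstruction algorithm; all variable names and numbers are assumptions.

```python
# Minimal SSTA arithmetic in canonical first-order form:
#   d = mean + a . dX + r * dR
# where dX are shared (correlated) unit-Gaussian variation sources and
# dR is an independent unit-Gaussian term.
import numpy as np
from scipy.stats import norm

class CanonicalDelay:
    def __init__(self, mean, a, r):
        self.mean = float(mean)            # nominal delay
        self.a = np.asarray(a, float)      # sensitivities to global variations
        self.r = float(r)                  # purely random part

    def sigma(self):
        return np.sqrt(self.a @ self.a + self.r ** 2)

    def __add__(self, other):              # statistical ADD: sensitivities add up
        return CanonicalDelay(self.mean + other.mean,
                              self.a + other.a,
                              np.hypot(self.r, other.r))

def stat_max(x, y):
    """Clark's approximation of max(x, y), re-expressed in canonical form."""
    sx, sy = x.sigma(), y.sigma()
    rho = (x.a @ y.a) / (sx * sy)                        # correlation via shared sources
    theta = np.sqrt(max(sx**2 + sy**2 - 2*rho*sx*sy, 1e-12))
    alpha = (x.mean - y.mean) / theta
    phi, Phi = norm.pdf(alpha), norm.cdf(alpha)
    mean = x.mean*Phi + y.mean*(1 - Phi) + theta*phi
    var = ((sx**2 + x.mean**2)*Phi + (sy**2 + y.mean**2)*(1 - Phi)
           + (x.mean + y.mean)*theta*phi - mean**2)
    a = Phi*x.a + (1 - Phi)*y.a                          # tightness-weighted sensitivities
    r = np.sqrt(max(var - a @ a, 0.0))
    return CanonicalDelay(mean, a, r)

# two module arrival times sharing the same two global variation sources
m1 = CanonicalDelay(1.00, [0.05, 0.02], 0.03)
m2 = CanonicalDelay(0.95, [0.04, 0.03], 0.02)
print("ADD mean:", (m1 + m2).mean, " MAX mean:", round(stat_max(m1, m2).mean, 4))
```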

    Statistical Power Supply Dynamic Noise Prediction in Hierarchical Power Grid and Package Networks

    One of the most crucial design challenges for high-performance systems-on-chip is coping with power supply noise caused by high operating frequencies, the large number of functional blocks, and technology scaling. In contrast to traditional post-physical-design static voltage drop analysis, this work focuses on a priori dynamic voltage drop evaluation. It takes into account transient currents and on-chip and package RLC parasitics while exploring the power grid design solution space: design countermeasures can thus be defined early, and long post-physical-design verification cycles can be shortened. As shown by an extensive set of results, a carefully extracted and modular grid library assures realistic evaluation of the impact of parasitics on noise and facilitates power network construction; furthermore, statistical analysis guarantees a correct current envelope evaluation, and Spice simulations confirm the reliability of the results.
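    As a minimal illustration of a priori dynamic voltage-drop estimation, the sketch below simulates a single power-grid node behind a lumped grid/package resistance with on-chip decoupling capacitance, driven by a randomized switching-current pulse, and reports percentiles of the worst drop over many current samples. All element values and the current model are assumptions for illustration, not the paper's extracted grid library or flow.

```python
# Backward-Euler transient of a lumped power-grid node:
#   C * dV/dt = (VDD - V)/R - i(t)
# followed by a Monte Carlo sweep over random peak switching currents
# to obtain a statistical envelope of the worst dynamic voltage drop.
import numpy as np

VDD, R, C = 1.0, 0.05, 5e-9        # supply (V), grid + package resistance (ohm), decap (F)
h, steps = 1e-11, 400              # time step (s), number of time steps

def worst_drop(i_peak):
    v, worst = VDD, 0.0
    for k in range(steps):
        i_load = i_peak * max(0.0, 1 - abs(k - 100) / 100)       # triangular current pulse
        v = (C / h * v + VDD / R - i_load) / (C / h + 1 / R)     # implicit update
        worst = max(worst, VDD - v)
    return worst

rng = np.random.default_rng(0)
peaks = rng.normal(1.0, 0.2, 1000).clip(min=0.0)                 # sampled peak currents (A)
drops = np.array([worst_drop(i) for i in peaks])
print(f"mean drop {drops.mean()*1e3:.1f} mV, 99th percentile {np.percentile(drops, 99)*1e3:.1f} mV")
```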

    Graviton mass bounds from space-based gravitational-wave observations of massive black hole populations

    Space-based gravitational-wave detectors, such as LISA or a similar ESA-led mission, will offer unique opportunities to test general relativity. We study the bounds that space-based detectors could place on the graviton Compton wavelength \lambda_g = h/(m_g c) by observing multiple inspiralling black hole binaries. We show that while observations of individual inspirals will yield mean bounds \lambda_g ~ 3x10^15 km, the combined bound from observing ~50 events in a two-year mission is about ten times better: \lambda_g ~ 3x10^16 km (m_g ~ 4x10^-26 eV). The bound improves faster than the square root of the number of observed events, because typically a few sources provide constraints as much as three times better than the mean. This result is only mildly dependent on the details of black hole formation and detector characteristics. The bound achievable in practice should be one order of magnitude better than this figure (and hence almost competitive with the static, model-dependent bounds from gravitational effects on cosmological scales), because our calculations ignore the merger/ringdown portion of the waveform. The observation that an ensemble of events can appreciably improve the bounds that individual binaries set on \lambda_g applies to any theory whose deviations from general relativity are parametrized by a set of global parameters.
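    A toy numerical illustration of the ensemble effect described above: assume each inspiral yields a Gaussian constraint, consistent with zero, on a graviton-mass-like parameter, so the per-event upper bound scales with its error sigma_i and events combine by inverse-variance weighting. With a broad (here log-normal) spread of per-event errors, the combined bound improves over the mean single-event bound by more than sqrt(N), qualitatively matching the abstract; the error distribution is an assumed stand-in for the source population, not the paper's Fisher-matrix calculation.

```python
# Toy combination of per-event bounds: each event i contributes a Gaussian
# constraint (consistent with zero) with error sigma[i]; stacking events adds
# inverse variances, so a few unusually well-measured events dominate the result.
import numpy as np

rng = np.random.default_rng(1)
n_events = 50
sigma = rng.lognormal(mean=0.0, sigma=1.0, size=n_events)   # assumed per-event errors

mean_single = sigma.mean()                                   # typical single-event bound
combined = 1.0 / np.sqrt(np.sum(sigma ** -2))                # inverse-variance combination

print(f"mean single-event bound       : {mean_single:.3f} (arbitrary units)")
print(f"combined bound from {n_events} events : {combined:.3f}")
print(f"improvement factor            : {mean_single / combined:.1f}  vs sqrt(N) = {n_events ** 0.5:.1f}")
```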

    Video summarization by group scoring

    In this paper, a new model for user-centered video summarization is presented. Its main use case is the involvement of more than one expert in generating the final video summary. The approach consists of three major steps. First, the video frames are scored by a group of operators. Next, the assigned scores are averaged to produce a single value for each frame. Lastly, the highest-scored video frames, along with the corresponding audio and textual content, are extracted and inserted into the summary. The effectiveness of this approach has been evaluated by comparing the video summaries generated by this system against the results from a number of automatic summarization tools that use different modalities for abstraction.
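    The three-step scheme lends itself to a very small sketch: operators score each frame, the scores are averaged per frame, and the highest-scored frames are selected in temporal order. The array shapes and the fixed summary length are illustrative assumptions; selection of the accompanying audio and textual content is omitted.

```python
# Group-scoring summarizer: average per-frame scores over operators and keep
# the highest-scored frames in their original temporal order.
import numpy as np

def group_summary(scores: np.ndarray, summary_len: int) -> np.ndarray:
    """scores: (n_operators, n_frames) matrix of per-frame scores.
    Returns indices of the selected frames in temporal order."""
    frame_scores = scores.mean(axis=0)               # step 2: average over operators
    top = np.argsort(frame_scores)[-summary_len:]    # step 3: pick the best frames
    return np.sort(top)                              # restore temporal order

# three operators scoring a 10-frame clip; keep the 3 best frames
scores = np.array([[2, 5, 1, 4, 3, 5, 2, 1, 4, 3],
                   [1, 4, 2, 5, 3, 4, 2, 1, 5, 3],
                   [2, 5, 1, 4, 2, 5, 3, 1, 4, 2]], dtype=float)
print(group_summary(scores, 3))                      # -> [1 5 8]
```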