6,142 research outputs found

    Flat branches and pressure amorphization

    After summarizing the phenomenology of pressure amorphization (PA), we present a theory of PA based on the notion that one or more branches of the phonon spectrum soften and flatten with increasing pressure. The theory expresses the anharmonic dynamics of the flat branches in terms of local modes, represented by lattice Wannier functions, which are in turn used to construct an effective Hamiltonian. When the low-pressure structure becomes metastable with respect to the high-pressure equilibrium phase and the relevant branches are sufficiently flat, transformation into an amorphous phase is shown to be kinetically favored because of the exponentially large number of both amorphous phases and reaction pathways. In effect, the critical-size nucleus for the first-order phase transition is found to be reduced to a single unit cell, or nearly so. Random nucleation into symmetrically equivalent local configurations characteristic of the high-pressure structure is then shown to overwhelm any possible domain growth, and an "amorphous" structure results.
    Comment: 8 pages with 3 postscript figures embedded; Proceedings of the 4th International Discussion Meeting on Relaxations in Complex Systems, Hersonissos, Heraklion, Crete, June 16-23, ed. K. L. Ngai, Special Issues of the Journal of Non-Crystalline Solids, 200
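    As a purely schematic illustration (the symbols below are not taken from the paper), an effective Hamiltonian built from local modes typically combines an anharmonic on-site potential per cell with weak intersite couplings, for example:

```latex
% Schematic local-mode effective Hamiltonian (illustrative form only;
% the on-site coefficients a(P), b and the couplings J_{ij} are hypothetical)
H_{\mathrm{eff}} = \sum_i \left[ \frac{p_i^{2}}{2m}
      + \frac{a(P)}{2}\,u_i^{2} + \frac{b}{4}\,u_i^{4} \right]
      + \frac{1}{2}\sum_{i \neq j} J_{ij}\, u_i\, u_j
```

    Here u_i is the local-mode (lattice Wannier) amplitude in cell i; a flat branch corresponds to small intersite couplings J_{ij}, and pressure-induced softening corresponds to a(P) decreasing toward or through zero, which produces many nearly degenerate local configurations.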

    Liquidation, bailout, and bail-in: insolvency resolution mechanisms and bank lending

    We present a dynamic, continuous-time model in which risk averse inside equityholders set a bank’s lending, payout, and financing policies, and the exposure of bank assets to crashes. We examine whether bailouts encourage excessive lending and risk-taking compared to liquidation or bail-ins with debt-to-equity conversion or debt write-downs. The effects of the prevailing insolvency resolution mechanism (IRM) on the probability of insolvency, loss in default, and the bank’s value suggest no single IRM is a panacea. We show how a bailout fund financed through a tax on bank dividends resolves bailouts without public money and without distorting insiders’ incentives.
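    A minimal Monte Carlo sketch of the kind of dynamics involved (purely illustrative; the drift, crash intensity, and insolvency threshold below are hypothetical parameters, not the paper's calibration):

```python
import numpy as np

# Illustrative simulation of bank asset value with crash exposure and an
# insolvency threshold; all parameter values are hypothetical.
rng = np.random.default_rng(0)
T, dt, n_paths = 10.0, 0.01, 10_000
mu, sigma = 0.04, 0.15                 # drift and diffusion of assets
crash_rate, crash_size = 0.10, 0.30    # Poisson crash intensity and loss fraction
debt = 0.85                            # face value of debt (insolvency threshold)

assets = np.ones(n_paths)
insolvent = np.zeros(n_paths, dtype=bool)
for _ in range(int(T / dt)):
    alive = ~insolvent
    shocks = rng.standard_normal(alive.sum())
    jumps = rng.random(alive.sum()) < crash_rate * dt
    assets[alive] *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks)
    assets[alive] *= np.where(jumps, 1.0 - crash_size, 1.0)
    insolvent |= assets < debt

print("simulated insolvency probability:", insolvent.mean())
```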

    Anti-arrhythmic effects of hypercalcemia in hyperkalemic, Langendorff-perfused mouse hearts


    Is the standard SF-12 Health Survey valid and equivalent for a Chinese population?

    Introduction: The Chinese are the world's largest ethnic group, but few health-related quality of life (HRQoL) measures have been tested on them. The aim of this study was to determine if the standard SF-12 was valid and equivalent for a Chinese population. Methods: The SF-36 data of 2410 Chinese adults randomly selected from the general population of Hong Kong (HK) were analysed. The Chinese (HK) specific SF-12 items and scoring algorithm were derived from the HK Chinese population data by multiple regressions. The SF-36 PCS and MCS scores were used as criteria to assess the content and criterion validity of the SF-12. The standard and Chinese (HK) specific SF-12 PCS and MCS scores were compared for equivalence. Results: The standard SF-12 explained 82% and 89% of the variance of the SF-36 PCS and MCS scores, respectively, and the effect size differences between the standard SF-36 and SF-12 scores were less than 0.3. Six of the Chinese (HK) specific SF-12 items were different from those of the standard SF-12, but the effect size differences between the Chinese (HK) specific and standard SF-12 scores were mostly less than 0.3. Conclusions: The standard SF-12 was valid and equivalent for the Chinese, which would enable more Chinese to be included in clinical trials that measure HRQoL. © Springer 2005.
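    As a rough sketch of the derivation approach described above (synthetic data and hypothetical names; this is not the authors' actual scoring algorithm), item weights can be obtained by multiple regression against the SF-36 summary score, after which explained variance and a standardized effect-size difference are computed:

```python
import numpy as np

# Synthetic illustration: regress a criterion summary score (e.g. SF-36 PCS)
# on item responses to derive SF-12-style weights, then compare the two
# scores. Data, sizes, and variable names are hypothetical.
rng = np.random.default_rng(1)
n, n_items = 2410, 12
items = rng.integers(1, 6, size=(n, n_items)).astype(float)   # 12 item responses
true_w = rng.normal(size=n_items)
pcs_36 = items @ true_w + rng.normal(scale=2.0, size=n)       # criterion score

# Multiple regression: ordinary least squares with an intercept
X = np.column_stack([np.ones(n), items])
coef, *_ = np.linalg.lstsq(X, pcs_36, rcond=None)
pcs_12 = X @ coef                                              # SF-12-style score

r2 = 1 - np.sum((pcs_36 - pcs_12) ** 2) / np.sum((pcs_36 - pcs_36.mean()) ** 2)
effect_size = (pcs_12.mean() - pcs_36.mean()) / pcs_36.std(ddof=1)
print(f"variance explained: {r2:.2f}, effect-size difference: {effect_size:.2f}")
```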

    Distributed algorithms for optimal power flow problem

    Optimal power flow (OPF) is an important problem for power generation, and it is in general non-convex. With the growing use of renewable energy, it is desirable that OPF be solved efficiently enough for its solution to be used in real time. With some special network structure, e.g. trees, the problem has been shown to have a zero duality gap, and the convex dual problem yields the optimal solution. In this paper, we propose a primal and a dual algorithm to coordinate the smaller subproblems decomposed from the convexified OPF. We can arrange the subproblems to be solved sequentially and cumulatively in a central node or solved in parallel in distributed nodes. We test the algorithms on IEEE radial distribution test feeders, some random tree-structured networks, and the IEEE transmission system benchmarks. Simulation results show that the computation time can be improved dramatically with our algorithms over the centralized approach of solving the problem without decomposition, especially in tree-structured problems. The computation time grows linearly with the problem size with the cumulative approach, while the distributed one can have size-independent computation time.
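    A minimal sketch of the dual-decomposition idea behind such coordination (a toy problem with invented quadratic costs and a single coupling constraint; not the paper's OPF formulation or algorithms):

```python
import numpy as np

# Toy dual decomposition: n local generators with quadratic costs must meet
# a shared demand. Each subproblem is solved locally given the current price
# (dual variable), which is then updated by a subgradient step.
a = np.array([1.0, 2.0, 0.5])      # quadratic cost coefficients: f_i(x) = a_i * x^2
demand = 10.0
lam, step = 0.0, 0.2

for _ in range(200):
    # Local subproblems: min_x a_i x^2 - lam * x  ->  x_i = lam / (2 a_i)
    x = lam / (2 * a)
    # Price (dual) update toward satisfying the coupling constraint sum(x) = demand
    lam += step * (demand - x.sum())

print("dispatch:", np.round(x, 3), "total:", round(x.sum(), 3), "price:", round(lam, 3))
```

    Each local update uses only the shared price, which is why the subproblems can be solved in parallel on distributed nodes or accumulated sequentially at a central node.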

    On practical adequate test suites for integrated test case prioritization and fault localization

    An effective integration between testing and debugging should address how well testing and fault localization can work together productively. In this paper, we report an empirical study on the effectiveness of using adequate test suites for fault localization. We also investigate the integration of test case prioritization and statistical fault localization with a postmortem analysis approach. Our results on 16 test case prioritization techniques and four statistical fault localization techniques show that, although much advancement has been made in the last decade, test adequacy criteria are still insufficient for supporting effective fault localization. We also find that branch-adequate test suites are more likely than statement-adequate test suites to support statistical fault localization effectively. © 2011 IEEE.
    The 11th International Conference on Quality Software (QSIC 2011), Madrid, Spain, 13-14 July 2011. In International Conference on Quality Software Proceedings, 2011, p. 21-3
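    For context, statistical fault localization ranks program entities by a suspiciousness score computed from the coverage spectra of passed and failed test runs; the sketch below uses the Ochiai formula on invented data (one common metric, not necessarily one of the four techniques studied in the paper):

```python
import math

# Toy coverage spectra: for each statement, the set of tests that cover it,
# plus the set of failing tests. Data are invented for illustration.
coverage = {                       # statement -> ids of covering test cases
    "s1": {0, 1, 2, 3},
    "s2": {1, 3},
    "s3": {2, 3},
}
failed = {3}                       # ids of failing test cases
total_failed = len(failed)

def ochiai(covering_tests):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep))."""
    ef = len(covering_tests & failed)      # failing tests covering the statement
    ep = len(covering_tests - failed)      # passing tests covering the statement
    denom = math.sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

ranking = sorted(coverage, key=lambda s: ochiai(coverage[s]), reverse=True)
print([(s, round(ochiai(coverage[s]), 3)) for s in ranking])
```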

    Precise propagation of fault-failure correlations in program flow graphs

    Statistical fault localization techniques find suspicious faulty program entities in programs by comparing passed and failed executions. Existing studies show that such techniques can be promising in locating program faults. However, coincidental correctness and execution crashes may make program entities indistinguishable in the execution spectra under study, or cause inaccurate counting, thus severely affecting the precision of existing fault localization techniques. In this paper, we propose a BlockRank technique, which calculates, contrasts, and propagates the mean edge profiles between passed and failed executions to alleviate the impact of coincidental correctness. To address the issue of execution crashes, BlockRank identifies suspicious basic blocks by modeling how each basic block contributes to failures, apportioning its fault relevance to surrounding basic blocks in proportion to the rates of successful transition observed in passed and failed executions. BlockRank is empirically shown to be more effective than nine representative techniques on four real-life medium-sized programs. © 2011 IEEE.
    Proceedings of the 35th IEEE Annual International Computer Software and Applications Conference (COMPSAC 2011), Munich, Germany, 18-22 July 2011, p. 58-6
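    A rough sketch of the propagation idea (hypothetical control-flow graph, transition rates, and scores; not the BlockRank formulas): each block keeps part of its suspiciousness and passes the rest to its successors in proportion to observed transition rates:

```python
# Hypothetical illustration: propagate block suspiciousness along control-flow
# edges weighted by how often each transition was taken.
edges = {                          # block -> {successor: transition rate}
    "b1": {"b2": 0.7, "b3": 0.3},
    "b2": {"b4": 1.0},
    "b3": {"b4": 1.0},
    "b4": {},
}
score = {"b1": 0.1, "b2": 0.8, "b3": 0.2, "b4": 0.4}   # initial suspiciousness

alpha = 0.5                        # fraction of a block's score kept locally
for _ in range(10):                # a few propagation rounds
    new_score = {b: alpha * s for b, s in score.items()}
    for block, succs in edges.items():
        for succ, rate in succs.items():
            new_score[succ] += (1 - alpha) * score[block] * rate
    score = new_score

print({b: round(s, 3) for b, s in score.items()})
```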

    CARISMA: a context-sensitive approach to race-condition sample-instance selection for multithreaded applications

    Dynamic race detectors can explore multiple thread schedules of a multithreaded program over the same input to detect data races. Although existing sampling-based precise race detectors reduce overheads effectively so that lightweight precise race detection can be performed in testing or post-deployment environments, they are ineffective in detecting races if the sampling rates are low. This paper presents CARISMA to address this problem. CARISMA exploits the insight that along an execution trace, a program may potentially handle many accesses to the memory locations created at the same site for similar purposes. Iterating over multiple execution trials of the same input, CARISMA estimates and distributes the sampling budgets among such location creation sites, and probabilistically collects a fraction of all accesses to the memory locations associated with such sites for subsequent race detection. Our experiment shows that, compared with PACER on the same platform and at the same sampling rate (such as 1%), CARISMA is significantly more effective. © 2012 ACM.
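    A simplified sketch of the budget-distribution idea (invented numbers and names; not CARISMA's estimator): give each memory-location creation site an equal share of a global sampling budget, so that rarely accessed sites are sampled densely and hot sites sparsely, then sample each access with the resulting per-site probability:

```python
import random

# Hypothetical illustration of distributing a global sampling budget across
# memory-location creation sites and sampling accesses probabilistically.
random.seed(42)
est_accesses = {"siteA": 100_000, "siteB": 5_000, "siteC": 500}  # estimated from prior trials
global_rate = 0.01                                               # overall 1% sampling budget
budget_per_site = global_rate * sum(est_accesses.values()) / len(est_accesses)
prob = {s: min(1.0, budget_per_site / n) for s, n in est_accesses.items()}

def maybe_record(site):
    """Decide whether to record this access for later race detection."""
    return random.random() < prob[site]

sampled = {s: sum(maybe_record(s) for _ in range(n)) for s, n in est_accesses.items()}
print({s: round(p, 4) for s, p in prob.items()}, sampled)
```

    Under this equal-share scheme the expected number of recorded accesses stays near the global budget, while sparsely used creation sites receive a much higher per-access probability than hot ones.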