
    Outcome of patients with stable angina pectoris treated with or without percutaneous coronary intervention

    Background: To assess the outcome of patients with stable angina pectoris treated with percutaneous coronary intervention versus medically treated patients. Methods: Eighty patients with stable angina pectoris and coronary stenosis confirmed by coronary angiography were treated with (n = 31) or without (n = 49) percutaneous coronary intervention in our department. All patients received optimal medical therapy and were followed up for a period of 24 months. Results: Baseline clinical characteristics, including risk factors for coronary heart disease and coronary lesion type, did not differ between the two groups (all p > 0.05). There was no significant difference in major adverse cardiac events (22.4% vs. 22.6%) during the 24-month follow-up between the two groups (p > 0.05). Conclusions: Percutaneous coronary intervention did not provide additional benefit in terms of 24-month major adverse outcomes in this group of patients with stable angina pectoris receiving standard medical treatment. (Cardiol J 2008; 15: 226-229)

    Dynamic Pricing for Airline Revenue Management under Passenger Mental Accounting

    Mental accounting is a far-reaching concept that is often used to explain various kinds of irrational behavior in human decision making. This paper investigates dynamic pricing problems in single-flight and multiple-flight settings, respectively, where passengers may be affected by mental accounting. We analyze the dynamic pricing problems by means of dynamic programming and obtain the optimal pricing strategies. Further, we show analytically that the passengers' mental accounting depth has a positive effect on the flight's expected revenue in the single-flight setting, and illustrate numerically that it has a positive effect on the optimal prices in the multiple-flight setting.
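    The abstract does not specify the demand model, so the following is only a minimal sketch of how a finite-horizon dynamic program for single-flight pricing with a reference-price ("mental accounting") effect might look. The purchase-probability function, the BETA depth parameter, and all numerical values are illustrative assumptions, not the paper's formulation.

```python
# Minimal finite-horizon DP sketch for single-flight dynamic pricing,
# assuming a purchase probability that falls with price and rises with
# the gap between a reference price and the posted price, scaled by a
# hypothetical "mental accounting depth" parameter BETA. The demand
# model and parameter values are illustrative, not the paper's model.
import numpy as np

PRICES = np.array([100.0, 150.0, 200.0, 250.0])   # candidate fares
T, C = 30, 10                                      # selling periods, seats
REF_PRICE = 180.0                                  # passengers' reference price (assumed)
BETA = 0.5                                         # mental accounting depth (assumed)

def purchase_prob(price: float) -> float:
    """Toy logit demand: a fare below the reference price feels like a 'gain'."""
    gain = BETA * (REF_PRICE - price)
    return 1.0 / (1.0 + np.exp(-(gain - 0.02 * price) / 50.0))

# V[t, c] = maximum expected revenue with t periods and c seats remaining.
V = np.zeros((T + 1, C + 1))
policy = np.zeros((T, C + 1))                      # optimal fare for each state
for t in range(1, T + 1):
    for c in range(1, C + 1):
        best = -np.inf
        for p in PRICES:
            q = purchase_prob(p)
            # Bellman recursion: sell one seat with prob. q, otherwise carry over.
            val = q * (p + V[t - 1, c - 1]) + (1 - q) * V[t - 1, c]
            if val > best:
                best, policy[t - 1, c] = val, p
        V[t, c] = best

print("Expected revenue over the horizon:", V[T, C])
```

    In this toy model, increasing BETA raises the perceived gain from discounted fares, which is one simple way the "mental accounting depth" could enter a demand function; the paper's analytical results rest on its own, more specific formulation.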

    Split, Encode and Aggregate for Long Code Search

    Code search with natural language plays a crucial role in reusing existing code snippets and accelerating software development. Thanks to Transformer-based pretraining models, the performance of code search has improved significantly compared to traditional information retrieval (IR) based models. However, due to the quadratic complexity of multi-head self-attention, there is a limit on the input token length. For efficient training on standard GPUs like the V100, existing pretrained code models, including GraphCodeBERT, CodeBERT, and RoBERTa (code), take only the first 256 tokens by default, which makes them unable to represent the complete information of long code exceeding 256 tokens. Unlike a long text paragraph, which can be regarded as a whole with complete semantics, the semantics of long code is discontinuous, as a piece of long code may contain different code modules. It is therefore unreasonable to apply long-text processing methods directly to long code. To tackle the long-code problem, we propose SEA (Split, Encode and Aggregate for Long Code Search), which splits long code into code blocks, encodes these blocks into embeddings, and aggregates them to obtain a comprehensive long-code representation. With SEA, we can directly use Transformer-based pretraining models to model long code without changing their internal structure or re-pretraining. Leveraging abstract syntax tree (AST) based splitting and attention-based aggregation methods, SEA achieves significant improvements in long code search performance. We also compare SEA with two sparse Transformer methods. With GraphCodeBERT as the encoder, SEA achieves an overall mean reciprocal rank score of 0.785, which is 10.1% higher than GraphCodeBERT on the CodeSearchNet benchmark.
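    As a rough illustration of the split-encode-aggregate idea, the sketch below chunks a long snippet into fixed-size token blocks, encodes each block with a pretrained encoder, and pools the block embeddings with a softmax weighting. The sliding-window splitter and the unlearned pooling weights are stand-ins for the paper's AST-based splitting and learned attention aggregation; the checkpoint name and helper functions are assumptions made for the example, not SEA's actual implementation.

```python
# Sketch of split-encode-aggregate for long code, assuming a Hugging Face
# encoder such as microsoft/graphcodebert-base. The fixed-size splitter and
# the unlearned pooling weights are simplifications of SEA's AST-based
# splitting and attention-based aggregation.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/graphcodebert-base"   # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def split_code(code: str, block_len: int = 256) -> list[str]:
    """Split long code into fixed-size token blocks (stand-in for AST splitting)."""
    tokens = tokenizer.tokenize(code)
    blocks = [tokens[i:i + block_len] for i in range(0, len(tokens), block_len)]
    return [tokenizer.convert_tokens_to_string(b) for b in blocks]

def encode_blocks(blocks: list[str]) -> torch.Tensor:
    """Encode each block independently; use its first ([CLS]) vector as the embedding."""
    batch = tokenizer(blocks, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0, :]        # (num_blocks, hidden)

def aggregate(block_embs: torch.Tensor) -> torch.Tensor:
    """Attention-style pooling: softmax over block scores, then a weighted sum."""
    scores = block_embs @ block_embs.mean(dim=0)  # similarity to the mean block
    weights = torch.softmax(scores, dim=0)
    return (weights.unsqueeze(-1) * block_embs).sum(dim=0)

long_code = "def f(x):\n    return x * 2\n" * 200   # toy snippet longer than 256 tokens
code_vec = aggregate(encode_blocks(split_code(long_code)))
print(code_vec.shape)                                # a single long-code embedding
```

    The resulting vector can be compared against a query embedding (for example by cosine similarity) exactly as with a standard 256-token encoder, which is the point of SEA: the backbone model is reused unchanged and only the splitting and aggregation steps are added around it.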