
    Calculating incremental risk charges: The effect of the liquidity horizon

    The recent incremental risk charge addition to the Basel (1996) market risk amendment requires banks to estimate, separately, the default and migration risk of their trading portfolios that are exposed to credit risk. The new regulation requires the total regulatory charge for trading books to be computed as the sum of the market risk capital and the incremental risk charge for credit risk. In contrast to the Basel II models for the banking book, no model is prescribed, and banks can use internal models to calculate the incremental risk charge. A key component in the calculation of incremental risk charges is the choice of the liquidity horizon for traded credits. In this paper we explore the effect of the liquidity horizon on the incremental risk charge. Specifically, we consider a sample of 28 bonds with different ratings and liquidity horizons to evaluate the impact of the choice of liquidity horizon for a given rating class of credits. We find that when choosing the liquidity horizon for a particular credit, two important effects need to be considered. First, for bonds with short liquidity horizons there is a mitigation effect: trading the bond frequently prevents it from suffering further downgrades. Second, there is the possibility of multiple defaults. Of these two effects, the multiple-default effect will generally be more pronounced for non-investment-grade credits, as the probability of default is severe even over short liquidity periods. For medium investment grade credits these two effects will in general offset each other, and the incremental risk charge will be approximately the same across liquidity horizons. For high-quality investment grade credits the effect of multiple defaults is low for short liquidity horizons, as the frequent trading effectively prevents severe downgrades. Keywords: credit risk; incremental risk charge; liquidity horizon; Basel III
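    As a rough illustration of the two effects, the following Monte Carlo sketch simulates one year of default losses under the constant-level-of-risk assumption: the position is rolled into a fresh credit of the initial rating at the end of each liquidity horizon, so short horizons limit downgrade drift but permit several defaults per year. All numbers (transition matrix, LGD, quantile) are hypothetical placeholders, not the paper's calibration.

```python
# Hedged sketch: effect of the liquidity horizon on an IRC-style 99.9% loss
# quantile. The transition matrix and LGD are hypothetical, not the paper's.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical quarterly rating-transition matrix over [IG, sub-IG, Default];
# each row sums to 1.
P_q = np.array([
    [0.97, 0.025, 0.005],
    [0.05, 0.90,  0.05],
    [0.00, 0.00,  1.00],
])
LGD, N, START = 0.6, 200_000, 0   # loss given default, paths, initial rating

def yearly_losses(liq_quarters: int) -> np.ndarray:
    """One-year default losses when the position is rolled into a fresh credit
    of the initial rating after each liquidity horizon."""
    n_rolls = 4 // liq_quarters                       # horizons per year
    P_h = np.linalg.matrix_power(P_q, liq_quarters)   # transitions per horizon
    pd_h = P_h[START, -1]                             # default prob per horizon
    # Horizons are i.i.d. after rebalancing, so the default count is binomial:
    # short horizons allow several defaults per year (multiple-default effect),
    # long horizons let intra-horizon downgrades raise pd_h (no mitigation).
    return LGD * rng.binomial(n_rolls, pd_h, size=N)

for lq in (1, 2, 4):   # 3-, 6- and 12-month liquidity horizons
    irc = np.quantile(yearly_losses(lq), 0.999)   # IRC-style 99.9% quantile
    print(f"liquidity horizon {3 * lq:>2} months: IRC ~ {irc:.2f}")
```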

    Going the Extra Mile to Increase the Wilder School’s Student Enrollment

    To align with the themes and goals of VCU Quest 2028: One VCU Together We Transform, the Wilder School has prepared and will implement a schoolwide strategic plan to guide its development to 2028. In this strategic plan, one of the metrics of student success is student enrollment, which is of paramount importance. After the background introduction, this report gives an overview of the student enrollment data of the five academic programs within the Wilder School over the past five years and takes a snapshot of the 2022 MURP student demographic data. Afterward, it briefly introduces the enrollment management strategies of other selected universities and makes a set of recommendations on how to go the extra mile in boosting the URSP Program's graduate student enrollment through different recruitment strategies. Finally, the report proposes a budget estimate to implement this plan, lays out the implementation priorities, and draws conclusions.

    A Review and Analysis of Service Level Agreements and Chargebacks in the Retail Industry

    Purpose: This study examines service level agreements (SLAs) in the retail industry and uses empirical data to draw conclusions on relationships between SLA parameters and retailer financial performance. Design/methodology/approach: Based on prior SLA theories, hypotheses about the impacts of SLA confidentiality, choice of chargeback mechanisms, and chargeback penalty on retailer inventory turnover are tested. Findings: Retailer inventory turnover could vary by the level of SLA confidentiality, and the variation of retailer inventory turnovers could be explained by chargeback penalty. Research limitations/implications: The research findings may not be readily applicable to SLAs outside of the retail industry. Also, most conclusions were drawn from publicly available SLAs. Practical implications: The significant relationships between SLA parameters and retailer inventory turnover imply that a retailer could improve its financial performance by leveraging its SLA design. Originality/value: Not only does this study contribute to the understanding of retail SLA design in practice, but it also extends prior theories by investigating the implications of SLA design on retailer inventory turnover.
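    Hypothesis tests of this kind could, for instance, be run as a simple regression of retailer inventory turnover on SLA parameters. The sketch below uses hypothetical data and column names with the statsmodels OLS API; the study's actual dataset and model specification are not reproduced here.

```python
# Hedged sketch: do SLA confidentiality and chargeback penalty explain retailer
# inventory turnover? Data and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sample: one row per retailer SLA.
df = pd.DataFrame({
    "inventory_turnover": [8.1, 6.4, 9.3, 5.2, 7.7, 10.0, 4.9, 6.8],
    "confidential":       [1,   0,   1,   0,   1,   1,    0,   0],    # SLA kept private?
    "penalty_pct":        [3.0, 1.0, 4.5, 0.5, 2.0, 5.0,  1.5, 1.0],  # chargeback %
})

# OLS: turnover regressed on SLA confidentiality and chargeback penalty.
model = smf.ols("inventory_turnover ~ confidential + penalty_pct", data=df).fit()
print(model.summary())   # t-tests on each SLA parameter
```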

    Examining Challenges Facing SMEs Businesses in Dar Es Salaam Tanzania: A Case Study of Ilala and Kinondoni Municipalities.

    The analysis of different SME definitions worldwide reveals that it is very difficult to arrive at a common definition. In fact, one study by Auciello (1975) covering 75 countries found that more than 75 definitions were used in the target countries. This demonstrates very well that there is no commonly accepted definition of SMEs. SME businesses range from very small micro-firms, run by one or two persons and growing slowly or not at all, to fast-growing medium businesses earning millions of dollars and employing as many as 250 employees (Fjose et al., 2010). SMEs all over the world, and in Tanzania in particular, can be easily established since their requirements in terms of capital, technology, management and even utilities are not as demanding as is the case for large enterprises. They are regarded as the backbone of economic growth in almost all developed and developing countries. In Tanzania, SMEs contribute over 30% of GDP and employ 3-4 million people, which is 20-30% of the total labour force (Kazungu, Ndiege and Matolo, 2013). This paper therefore identifies several challenges facing SMEs, such as access to finance, which is consistently reported as one of the major obstacles to SME growth and development: only 20% of African SMEs have a line of credit from a financial institution. It then suggests technical approaches to solving these challenges in the context of Tanzania's developing economy. Keywords: Small and Medium Enterprises (SMEs), Gross Domestic Product (GDP), Challenges. DOI: 10.7176/JESD/11-17-07 Publication date: September 30th 202

    Training Curricula for Open Domain Answer Re-Ranking

    In precision-oriented tasks like answer ranking, it is more important to rank many relevant answers highly than to retrieve all relevant answers. It follows that a good ranking strategy would be to learn how to identify the easiest correct answers first (i.e., assign a high ranking score to answers that have characteristics that usually indicate relevance, and a low ranking score to those with characteristics that do not), before incorporating more complex logic to handle difficult cases (e.g., semantic matching or reasoning). In this work, we apply this idea to the training of neural answer rankers using curriculum learning. We propose several heuristics to estimate the difficulty of a given training sample. We show that the proposed heuristics can be used to build a training curriculum that down-weights difficult samples early in the training process. As the training process progresses, our approach gradually shifts to weighting all samples equally, regardless of difficulty. We present a comprehensive evaluation of our proposed idea on three answer ranking datasets. Results show that our approach leads to superior performance of two leading neural ranking architectures, namely BERT and ConvKNRM, using both pointwise and pairwise losses. When applied to a BERT-based ranker, our method yields up to a 4% improvement in MRR and a 9% improvement in P@1 (compared to the model trained without a curriculum). This results in models that can achieve comparable performance to more expensive state-of-the-art techniques. Comment: Accepted at SIGIR 2020 (long paper)
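    A minimal sketch of the weighting scheme described above (a paraphrase of the idea, not the authors' released code): each sample's loss is scaled by a difficulty-based weight early on, and the weight is annealed toward 1 so that all samples eventually count equally. The difficulty score is assumed to come from one of the paper's heuristics.

```python
# Hedged sketch of difficulty-weighted curriculum training. `difficulty` in
# [0, 1] is assumed to come from a heuristic such as a rank-based estimate;
# this is a paraphrase of the described idea, not the paper's implementation.
import numpy as np

def curriculum_weight(difficulty: np.ndarray, step: int, total_steps: int) -> np.ndarray:
    """Down-weight difficult samples early; anneal linearly to uniform weights."""
    progress = min(step / total_steps, 1.0)           # 0 at start, 1 at end
    easy_weight = 1.0 - difficulty                    # easy samples weigh more
    return (1.0 - progress) * easy_weight + progress  # converges to 1 for all

def weighted_loss(per_sample_loss, difficulty, step, total_steps):
    w = curriculum_weight(difficulty, step, total_steps)
    return float(np.mean(w * per_sample_loss))        # curriculum-scaled batch loss

# Example: a hard sample (difficulty 0.9) barely contributes at step 0,
# but counts fully once the curriculum phase is over.
loss = np.array([0.7, 1.2]); diff = np.array([0.1, 0.9])
print(weighted_loss(loss, diff, step=0,    total_steps=1000))
print(weighted_loss(loss, diff, step=1000, total_steps=1000))
```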

    Shipment sizing for autonomous trucks of road freight

    Unprecedented endeavors have been made to take autonomous trucks to the open road. This study aims to provide relevant information on autonomous truck technology and to help logistics managers gain insight into assessing optimal shipment sizes for autonomous trucks. Empirical data on estimated autonomous truck costs are collected to help revise classic, conceptual models of assessing optimal shipment sizes. Numerical experiments are conducted to illustrate the optimal shipment size when varying the autonomous truck technology cost and transportation lead time reduction. Autonomous truck technology can cost as much as 70% of the price of a truck. Logistics managers using classic models that disregard the additional cost could underestimate the optimal shipment size for autonomous trucks. This study also predicts the possibility of inventory centralization in the supply chain network. The findings are based on information collected from trade articles and academic journals in the domain of logistics management. Other technical or engineering discussions on autonomous trucks are not included in the literature review. Logistics managers must consider the latest cost information when deciding on shipment sizes of road freight for autonomous trucks. When the economies of scale in autonomous technology prevail, the classic economic order quantity solution might again suffice as a good approximation for optimal shipment size. This study shows that some models in the literature might no longer be applicable after the introduction of autonomous trucks. We also develop a new cost expression that is a function of the lead time reduction by adopting autonomous trucks.
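    For intuition, the economic order quantity (EOQ) trade-off the abstract refers to can be sketched as below. The per-dispatch technology uplift and all numbers are hypothetical placeholders, not the paper's revised cost expression; the sketch only shows why ignoring an added fixed cost per shipment underestimates the optimal shipment size.

```python
# Hedged EOQ-style sketch: how an extra per-dispatch technology cost could
# shift the optimal shipment size. Textbook EOQ with hypothetical numbers,
# not the paper's actual model.
import math

D = 50_000        # annual demand (units)
K_truck = 400.0   # classic fixed cost per shipment ($)
h = 2.5           # holding cost per unit per year ($)

def eoq(fixed_cost_per_shipment: float) -> float:
    """Classic EOQ: Q* = sqrt(2 * D * K / h)."""
    return math.sqrt(2 * D * fixed_cost_per_shipment / h)

# If autonomous technology adds a fixed amortized cost to each dispatch
# (hypothetically, a 70% uplift), K rises and the optimal Q* rises with it.
K_autonomous = K_truck * 1.7
print(f"classic    Q* = {eoq(K_truck):8.0f} units")
print(f"autonomous Q* = {eoq(K_autonomous):8.0f} units")  # larger shipments
```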

    End-to-End Retrieval with Learned Dense and Sparse Representations Using Lucene

    The bi-encoder architecture provides a framework for understanding machine-learned retrieval models based on dense and sparse vector representations. Although these representations capture parametric realizations of the same underlying conceptual framework, their respective implementations of top-k similarity search require the coordination of different software components (e.g., inverted indexes, HNSW indexes, and toolkits for neural inference), often knitted together in complex architectures. In this work, we ask the following question: What's the simplest design, in terms of requiring the fewest changes to existing infrastructure, that can support end-to-end retrieval with modern dense and sparse representations? The answer appears to be that Lucene is sufficient, as we demonstrate in Anserini, a toolkit for reproducible information retrieval research. That is, effective retrieval with modern single-vector neural models can be efficiently performed directly in Java on the CPU. We examine the implications of this design for information retrieval researchers pushing the state of the art as well as for software engineers building production search systems.
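    Conceptually, both retrieval paths reduce to top-k scoring of inner products over the same bi-encoder abstraction; only the data structure answering the query differs. The toy sketch below (plain Python, not Anserini or Lucene code) shows dense brute-force scoring next to sparse scoring through an inverted index.

```python
# Hedged conceptual sketch: dense and sparse bi-encoder retrieval are both
# top-k inner-product search. All documents, terms, and weights are made up.
import heapq
from collections import defaultdict

# --- dense path: brute-force dot products over document vectors ---
def dense_topk(q, doc_vecs, k=2):
    scores = {d: sum(qi * di for qi, di in zip(q, v)) for d, v in doc_vecs.items()}
    return heapq.nlargest(k, scores.items(), key=lambda x: x[1])

# --- sparse path: inverted index mapping term -> [(doc, weight)] postings ---
def sparse_topk(q_weights, inverted_index, k=2):
    scores = defaultdict(float)
    for term, qw in q_weights.items():               # only matching postings scored
        for doc, dw in inverted_index.get(term, []):
            scores[doc] += qw * dw
    return heapq.nlargest(k, scores.items(), key=lambda x: x[1])

doc_vecs = {"d1": [0.2, 0.9], "d2": [0.8, 0.1]}
index = {"lucene": [("d1", 1.3)], "hnsw": [("d1", 0.4), ("d2", 2.1)]}
print(dense_topk([0.5, 0.5], doc_vecs))
print(sparse_topk({"hnsw": 1.0}, index))
```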

    Improving Mechanical Ventilator Clinical Decision Support Systems with A Machine Learning Classifier for Determining Ventilator Mode

    Clinical decision support systems (CDSS) will play an increasing role in improving the quality of medical care for critically ill patients. However, due to limitations in current informatics infrastructure, CDSS do not always have complete information on the state of supporting physiologic monitoring devices, which can limit the input data available to CDSS. This is especially true in the use case of mechanical ventilation (MV), where current CDSS have no knowledge of critical ventilation settings, such as ventilation mode. To enable MV CDSS to make accurate recommendations related to ventilator mode, we developed a highly performant machine learning model that is able to perform per-breath classification of 5 of the most widely used ventilation modes in the USA with an average F1-score of 97.52%. We also show how our approach makes methodologic improvements over previous work and that it is highly robust to missing data caused by software/sensor error.
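    A per-breath mode classifier along these lines might look like the scikit-learn sketch below. The synthetic features, the five-mode label set, and the random-forest choice are illustrative assumptions, since the abstract does not state the paper's actual features or model.

```python
# Hedged sketch: per-breath ventilation-mode classification. Features, labels,
# and the random-forest model are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-breath features (e.g., peak pressure, tidal volume, flow
# shape statistics) and one of 5 mode labels (e.g., VC, PC, PS, SIMV, PRVC).
X = rng.normal(size=(5000, 8))
y = rng.integers(0, 5, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Average (macro) F1 across the 5 modes, the metric quoted in the abstract.
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```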