
    A Comparative Study of Bootstrapping Techniques for Inventory Control

    Setting correct inventory levels is an important business consideration: it minimises inventory investment while ensuring sufficient stock to meet customer demand. Inventory management has a significant impact on both the financial and customer service aspects of a business. Selecting appropriate inventory levels requires that products’ lead time demand be accurately estimated in order to calculate the reorder point. The purpose of this study was to empirically determine whether bootstrapping methods used to estimate the lead time demand distribution and calculate the reorder point could match or even outperform a standard parametric approach. The two bootstrapping methods compared in this research were variations of those presented by Bookbinder and Lordahl [1989] and do Rego and de Mesquita [2015]. These were compared to the standard parametric approach common in practice, which uses the Normal distribution to model lead time demand. The three reorder point calculation methods were each incorporated into inventory policy simulations using data supplied by a South African automotive spare parts business. The simulations covered a period of twelve months and were repeated for multiple service levels ranging from 70 to 99 percent. Results were compared at a high level as well as for groups of items identified using segmentation techniques that considered different demand and lead time characteristics. The key findings were that the Normal approximation method was far superior in terms of the service level metric, while the variation of the Bookbinder and Lordahl [1989] method adopted in this study offered possible cost benefits at lower service levels.
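
    The abstract contrasts a Normal approximation of lead time demand with bootstrapped estimates of its distribution. As a rough illustration of the general idea (not the authors' exact methods or data), the sketch below resamples hypothetical daily demand over resampled lead times and reads the reorder point off the empirical quantile at the target service level, alongside the textbook Normal approximation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical history: daily demand observations and replenishment lead times (days).
daily_demand = np.array([12, 9, 15, 7, 11, 14, 10, 8, 13, 16, 9, 12])
lead_times = np.array([4, 5, 6, 5, 4, 7])

def reorder_point_normal(service_level: float) -> float:
    """Standard parametric approach: Normal approximation of lead time demand."""
    mu_d, sd_d = daily_demand.mean(), daily_demand.std(ddof=1)
    mu_l, sd_l = lead_times.mean(), lead_times.std(ddof=1)
    # Mean and variance of lead time demand when both demand and lead time are random.
    mu_ltd = mu_d * mu_l
    var_ltd = mu_l * sd_d**2 + (mu_d**2) * sd_l**2
    return mu_ltd + norm.ppf(service_level) * np.sqrt(var_ltd)

def reorder_point_bootstrap(service_level: float, n_boot: int = 10_000) -> float:
    """Non-parametric approach: bootstrap lead time demand, then take its quantile."""
    ltd = np.empty(n_boot)
    for b in range(n_boot):
        lt = rng.choice(lead_times)                       # resample a lead time
        ltd[b] = rng.choice(daily_demand, size=lt).sum()  # resample demand over it
    return np.quantile(ltd, service_level)                # empirical quantile = reorder point

for sl in (0.70, 0.90, 0.99):
    print(sl, round(reorder_point_normal(sl), 1), round(reorder_point_bootstrap(sl), 1))
```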

    Application of Machine Learning Algorithms to Actuarial Ratemaking within Property and Casualty Insurance

    A scientific pricing assessment is essential for maintaining viable customer relationship management (CRM) solutions for various stakeholders, including consumers, insurance intermediaries, and insurers. The thesis examines research problems surrounding the ratemaking process, including relaxing the conventional loss model assumptions of homogeneity and independence. It identifies three major research scopes within multiperil insurance settings: heterogeneity of consumer behaviour in pricing decisions, loss trending under non-linearity and temporal dependence, and loss modelling in the presence of inflationary pressure.
    Consumer heterogeneity in pricing decisions is examined using a demand- and loyalty-based strategy. A hybrid decision tree classification framework is implemented that combines a semi-supervised learning model, a variable selection technique, and a partitioning approach with different treatment effects in order to achieve adequate risk profiling. The thesis also explores supervised tree learning for highly imbalanced, overlapping classes with a non-linear response-predictor relationship. The two-phase classification framework is applied to an owner-occupied property portfolio from a personal insurance brokerage powered by a digital platform within the Canadian market. A hybrid three-phase tree algorithm, which combines conditional inference trees, a random forest wrapped by the Boruta algorithm, and model-based recursive partitioning under a multinomial generalized linear model, is proposed to study the price sensitivity ranking of digital consumers. The empirical results suggest a well-defined segmentation of digital consumers with differential price sensitivity. Further, for the highly imbalanced and overlapping classes, a resampling technique was combined with the decision tree algorithm, providing a more principled way to overcome the classification problem than traditional multinomial regression. The resulting segmentation identifies a high-sensitivity consumer group, for which premium rate reductions are recommended to reduce the churn rate, and an insensitive group, for which a strategy of increasing the premium rate is expected to have only a slight impact on the closing ratio and retention rate.
    Incurred insurance losses exhibit irregular characteristics such as temporal dependence, nonlinear relationships between dependent and independent variables, seasonal variation, and a mixture distribution resulting from the implicit claim inflation component. With such characteristics, the severity and frequency components may exhibit a trending pattern that changes over time and never repeats. This can have a profound impact on the experience rating model, where the pure premium and the rate relativities of tariff classes are likely to be under- or over-estimated. A discussion of the pros and cons of the conventional loss trending approach leads to an alternative framework for the loss cost structure: the conventional pure premium is split into base severity and severity deflator random variables using a do(·) operator from causal inference, and the components are modelled separately on different time bases using a semiparametric generalized additive model (GAM) with spline curves. To make full use of the calendar-year claim inflation effect and improve the efficiency of severity trending, the thesis refines the claim inflation estimation by adapting Taylor’s [86] separation method, which estimates the inflation index from a loss development triangle. In the second phase of developing the severity trend model, the base severity and severity deflator are integrated under a new generalized mechanism known as Discount, Model, and Trend (DMT). This two-phase modelling is designed to overcome the effect of the mixture distribution on the final trend estimates. A simulation study constructed from the claims paid development triangle of a Canadian Insurtech broker’s houseowners/householders portfolio was used to analyse severity trend movement predictions, and showed that the conventional framework understates severity trends more than the separation-cum-DMT framework does.
    GAM provides a flexible and effective mechanism for modelling nonlinear time series in studies of the frequency loss trend. However, GAM assumes that residuals are independent and identically distributed (iid), while frequency loss time series can be correlated at adjacent time points. The thesis therefore introduces the Generalized Additive Model with Seasonal Autoregressive term (GAMSAR), which accounts for temporal dependence and seasonal variation in order to improve prediction confidence intervals. Parameters of the GAMSAR model are estimated by maximum partial likelihood using a modified Newton’s method developed by Yang et al. [97], and the goodness-of-fit of GAM and GAMSAR is compared in a simulation study. The simulation results show that the mean estimates from GAM differ greatly from their true values, whereas the proposed GAMSAR model is superior, especially in the presence of seasonal variation. A further comparison between GAMSAR and the Generalized Additive Model with Autoregressive term (GAMAR) developed by Yang et al. [97] shows, via the coverage rate of the 95% confidence interval, that the GAMSAR model can incorporate nonlinear trend effects as well as capture serial correlation between observations. In the empirical analysis, a claim dataset of personal property insurance obtained from digital brokers in Canada is used to show that GAMSAR(1)12 captures the periodic dependence structure of the data more precisely than standard regression models. The proposed frequency and severity trend models support the thesis’s goal of establishing a scientific approach to pricing that is robust under different trending processes.
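
    To make the "discount, then trend" intuition above concrete, the toy sketch below simulates severities as a base severity times a calendar-period inflation deflator and compares a trend fitted on raw versus deflated values; the simulated data, the log-linear trend fit, and the assumed-known inflation index are deliberate simplifications standing in for the separation method, GAM smoothers, and DMT mechanism described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(60)

# Hypothetical data: a mild underlying severity trend plus calendar-period claim inflation.
base_severity = 1_000 * np.exp(0.002 * months) * rng.lognormal(0.0, 0.10, months.size)
inflation_index = np.exp(0.004 * months)      # deflator, assumed known here for illustration
observed_severity = base_severity * inflation_index

def log_linear_trend(y):
    """Fit log(severity) = a + b*t by least squares and return the monthly trend b."""
    X = np.column_stack([np.ones_like(months, dtype=float), months])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return coef[1]

trend_raw = log_linear_trend(observed_severity)                         # inflation mixed into the trend
trend_deflated = log_linear_trend(observed_severity / inflation_index)  # "discount" first, then trend

print(f"trend on raw severities:      {trend_raw:.4f} per month")
print(f"trend on deflated severities: {trend_deflated:.4f} per month")
```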

    Analysis and forecasting of asset quality, risk management and financial stability for the Greek banking system

    The increase in non-performing loans (NPLs) during the financial crisis of 2008, which evolved into a fiscal crisis, as well as the risk of a medium-term increase due to the COVID-19 pandemic, has called into question the robustness of many banks and the financial stability of the whole sector. For the banking sector, the management of non-performing loans represents the most significant challenge, as their stock has reached unprecedented levels and the deterioration in asset quality is widespread. Addressing the problem of non-performing loans with the assistance of credit risk modeling is important from both a micro- and a macro-prudential perspective, since it would not only improve the financial soundness and capital adequacy of the banking sector, but also free up funds to be directed to other, more productive sectors of the economy. This thesis extends earlier research by employing a short-term monitoring system with the aim of forecasting “failures”, i.e. NPL creation. Such a monitoring system allows the risk of a “failure” to change over time, measuring the likelihood of “failure” given the survival time and a set of explanatory variables. Cox proportional hazards models and survival trees can thus be usefully employed to forecast NPLs in the Greek corporate sectors.
    The research aims of this thesis fall into two domains. The first is the investigation of the determinants that contribute to NPL formation. Two GAMLSS models are tested: a linear GAMLSS model and a nonlinear semi-parametric GAMLSS model that includes smoothing functions to capture potential nonlinear relationships of the explanatory variables and model the parameters more favorably. The explanatory variables consist of credit risk, macroeconomic, bank-specific, and supervisory and market variables, while the response variable is non-performing loans. The second aim is to establish whether Cox proportional hazards models and survival tree models can forecast NPLs for loans provided in specific corporate sectors in Greece, using the most granular data set of corporate borrowers. By evaluating a series of Cox models, a short-term monitoring system has been created to forecast “failures”, i.e. NPL creation. The Cox proportional hazards regression models incorporate time-to-event information on a timeline described by the survival function, indicating the probability that a loan becomes an NPL by time t. The time period runs from the origination of the loan until its “death”, i.e. its termination, incorporating an “in between” observation point. The event occurs when the loan is first “infected”, i.e. becomes an NPL. For the survival trees, the data set was divided into smaller subsets, which are easier to model separately and hence yield improved overall performance; such models can then be combined with different machine learning techniques. Predictors (or covariates) are defined as the sectors of the Greek economy, and the model is fitted both for the whole sample and for the sample of early terminated loans.
    The thesis is organized as follows. Chapter 1 (Introduction) addresses the role of banks in financial intermediation, the evolution of credit risk and some issues regarding the Greek banking sector. Chapter 2 reviews the literature on improving the predictive performance of different credit risk assessment methods. Chapter 3 outlines the competitive conditions in the banking sector and examines whether the increase in concentration has affected competition in the Greek banking system. Chapter 4 addresses funding and liquidity conditions in the Greek banking sector. Chapter 5 covers the selection of the aggregate sample and the results and analysis of the GAMLSS models used for determining NPLs. Chapter 6 introduces the granular database on Large Exposures, from which the panel sample of corporate borrowers used in the forecasting models is derived. Chapter 7 contains the application of the Cox models and decision trees, the estimation procedure, parameters, model fit, estimation results and empirical findings. Chapter 8 evaluates the models and their applicability, as well as the implications for further research. Finally, the conclusion summarizes my contribution to the research community and my recommendations to the banking industry.
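
    As a minimal sketch of the survival setup described above (not the thesis's actual specification or data), the snippet below fits a Cox proportional hazards model with the lifelines package, where the duration is months from loan origination, the event flag marks the loan becoming an NPL, and the sector and macro covariates are hypothetical placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical loan-level data: duration in months from origination,
# event = 1 if the loan became non-performing, plus illustrative covariates.
loans = pd.DataFrame({
    "duration_months": [14, 36, 22, 48, 9, 30, 41, 18, 27, 33, 12, 45],
    "became_npl":      [1,  0,  1,  0,  1, 0,  0,  1,  1,  0,  1,  0],
    "sector_trade":    [1,  0,  1,  0,  0, 1,  0,  1,  0,  1,  1,  0],   # sector indicator
    "leverage":        [0.8, 0.4, 0.45, 0.3, 0.9, 0.85, 0.35, 0.75, 0.6, 0.5, 0.7, 0.55],
    "gdp_growth":      [-2.1, 1.5, 0.6, 2.0, -3.0, -0.5, 1.2, -1.0, 0.3, 1.8, -1.5, 0.9],
})

# Fit the Cox proportional hazards model.
cph = CoxPHFitter()
cph.fit(loans, duration_col="duration_months", event_col="became_npl")
cph.print_summary()

# Survival function S(t): probability a loan has not yet become an NPL by month t.
new_loans = loans.drop(columns=["duration_months", "became_npl"]).head(2)
print(cph.predict_survival_function(new_loans, times=[12, 24, 36]))
```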

    Maintenance Management of Wind Turbines

    “Maintenance Management of Wind Turbines” considers the main concepts and the state of the art, as well as advances and case studies, on this topic. Maintenance is a critical variable for competitiveness in industry and, together with operations, the most important variable in the wind energy industry. The correct management of corrective, predictive and preventive policies for any wind turbine is therefore required. The content also includes original research works on topics complementary to other sub-disciplines, such as economics, finance, marketing, decision and risk analysis, engineering, etc., in the maintenance management of wind turbines. This book focuses on real case studies, covering topics such as failure detection and diagnosis, fault trees and related sub-disciplines (e.g., FMECA, FMEA, etc.). Most of them link these topics with financial, scheduling, resource and downtime considerations in order to increase productivity, profitability, maintainability, reliability, safety and availability, and to reduce costs and downtime in a wind turbine. Advances in mathematics, models, computational techniques, dynamic analysis, etc., are employed in maintenance management analytics throughout the book. Finally, the book considers computational techniques, dynamic analysis, probabilistic methods, and mathematical optimization techniques that are expertly blended to support the analysis of multi-criteria decision-making problems with defined constraints and requirements.

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical. Because anatomical and functional structure varies across subjects, image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, such as the matrix Fisher-von Mises distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., combinations of spatially distant voxels are penalized. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
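
    The abstract treats functional alignment as the estimation of an orthogonal transformation with a matrix Fisher-von Mises prior. As a simplified point of reference, the sketch below computes the unregularised building block of that idea, an orthogonal Procrustes alignment between two subjects' simulated voxel-by-time matrices, without the prior or the full probabilistic model.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)

# Simulated functional data: voxels x time points for two subjects sharing a signal.
n_voxels, n_timepoints = 50, 120
shared_signal = rng.standard_normal((n_voxels, n_timepoints))
true_rotation = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))[0]
subject_a = shared_signal + 0.1 * rng.standard_normal((n_voxels, n_timepoints))
subject_b = true_rotation @ shared_signal + 0.1 * rng.standard_normal((n_voxels, n_timepoints))

# Estimate the orthogonal transform R minimising ||subject_a.T @ R - subject_b.T||_F.
R, _ = orthogonal_procrustes(subject_a.T, subject_b.T)
aligned_a = (subject_a.T @ R).T

# Alignment should shrink the Frobenius distance between the two subjects.
print(np.linalg.norm(subject_a - subject_b), np.linalg.norm(aligned_a - subject_b))
```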

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor-pair correlation and, in addition, can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
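
    For readers unfamiliar with the models being compared, the sketch below builds one common parameterisation of the proper CAR precision matrix for a hypothetical four-site map and draws a latent spatial effect from it; the DAGAR model instead derives its precision from a directed acyclic ordering of the sites, which is what gives its ρ the average neighbor-pair correlation interpretation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical adjacency matrix W for 4 sites arranged on a line: 1-2-3-4.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))   # diagonal matrix of neighbour counts

rho, tau = 0.6, 1.0          # spatial dependence and precision parameters
Q = tau * (D - rho * W)      # proper CAR precision: phi ~ N(0, Q^{-1})

# Draw one realisation of the latent spatial effect phi.
phi = rng.multivariate_normal(mean=np.zeros(4), cov=np.linalg.inv(Q))
print(phi)
```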

    A novel statistical signal processing approach for analysing high volatile expression profiles.

    The aim of this research is to introduce new advanced statistical methods for analysing gene expression profiles and thereby enhance our understanding of the spatial gradients of the proteins produced by genes in a gene regulatory network (GRN). To that end, this research makes three main contributions. The thesis focuses on the segmentation network (SN) in Drosophila melanogaster and on the bicoid gene (bcd), the critical input to this network. The first contribution is a new noise filtering and signal processing algorithm based on Singular Spectrum Analysis (SSA) for extracting the bicoid gene signal. Using the proposed SSA algorithm, which is based on the minimum variance estimator, the extraction of the bcd signal from its noisy profile is considerably improved compared to the most widely accepted model, Synthesis Diffusion Degradation (SDD). The results are evaluated via both simulation studies and empirical data. Since this research sets out to introduce an improved signal extraction approach, it is essential to compare the proposed method with other well-known and widely used signal processing models; the results are therefore compared with a range of parametric and non-parametric methods, and the comparison confirms that the SSA technique outperforms them. Building on this, the second contribution optimises SSA signal extraction using several novel computational methods, including window length and eigenvalue identification approaches, Sequential and Hybrid SSA, and SSA based on Colonial Theory. Each introduced method improves a particular aspect of the SSA signal extraction procedure. The third and final contribution extracts the regulatory role of the maternal effect genes in the SN using a variety of causality detection techniques. The hybrid algorithm developed here successfully recovers interactions that have previously been confirmed via laboratory experiments and therefore suggests a new analytical view of GRNs.
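
    As a bare-bones illustration of the basic SSA decomposition referred to above (not the minimum-variance estimator or the Colonial Theory and hybrid variants developed in the thesis), the sketch below embeds a noisy, hypothetical expression-like profile into a trajectory matrix, takes its SVD, and reconstructs the signal from the leading components by diagonal averaging; the window length and number of components are arbitrary choices.

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Basic SSA: embed, decompose with SVD, reconstruct the leading components."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: each column is a lagged window of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in range(n_components))
    # Diagonal averaging (Hankelisation) back to a 1-D series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        recon[col:col + window] += X_hat[:, col]
        counts[col:col + window] += 1
    return recon / counts

# Hypothetical noisy expression-like profile along a spatial axis.
x = np.linspace(0, 1, 200)
clean = np.exp(-((x - 0.3) / 0.15) ** 2)            # smooth gradient-like signal
noisy = clean + 0.1 * np.random.default_rng(3).standard_normal(x.size)
extracted = ssa_reconstruct(noisy, window=40, n_components=2)
print(np.corrcoef(clean, extracted)[0, 1])          # should be close to 1
```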

    Renewable Energy Resource Assessment and Forecasting

    In recent years, several projects and studies have been launched towards the development and use of new methodologies to assess, monitor, and support clean forms of energy. Accurate estimation of the available energy potential is of primary importance, but is not always easy to achieve. The present Special Issue on ‘Renewable Energy Resource Assessment and Forecasting’ aims to provide a holistic approach to the above issues by presenting multidisciplinary methodologies and tools that are able to support research projects and meet today’s technical, socio-economic, and decision-making needs. In particular, research papers, reviews, and case studies on the following subjects are presented: wind, wave and solar energy; biofuels; resource assessment of combined renewable energy forms; numerical models for renewable energy forecasting; integrated forecasting systems; energy for buildings; sustainable development; resource analysis tools and statistical models; and extreme value analysis and forecasting for renewable energy resources.