    Boosting insights in insurance tariff plans with tree-based machine learning methods

    Pricing actuaries typically operate within the framework of generalized linear models (GLMs). With the upswing of data analytics, our study focuses on machine learning methods to develop full tariff plans built from both the frequency and severity of claims. We adapt the loss functions used in the algorithms such that the specific characteristics of insurance data are carefully incorporated: highly unbalanced count data with excess zeros and varying exposure on the frequency side, combined with scarce but potentially long-tailed data on the severity side. A key requirement is the need for transparent and interpretable pricing models which are easily explainable to all stakeholders. We therefore focus on machine learning with decision trees: starting from simple regression trees, we work towards more advanced ensembles such as random forests and boosted trees. We show how to choose the optimal tuning parameters for these models in an elaborate cross-validation scheme, present visualization tools to obtain insights from the resulting models, and evaluate the economic value of these new modeling approaches. Boosted trees outperform the classical GLMs, allowing the insurer to form profitable portfolios and to guard against potential adverse risk selection.
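
    As a concrete illustration of the frequency side of such a tariff plan, the sketch below fits a Poisson boosted tree tuned by cross-validation, handling exposure in the standard actuarial way (frequency as target, exposure as case weight). It is a minimal sketch using scikit-learn, not the authors' implementation; the file, column, and feature names are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code) of a claim-frequency model:
# a Poisson boosted tree with exposure handled via case weights.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("policies.csv")                           # hypothetical data
features = ["driver_age", "vehicle_power", "bonus_malus"]  # illustrative
X = df[features]
y = df["claim_count"] / df["exposure"]                     # observed frequency
w = df["exposure"]                                         # exposure as weight

# Poisson deviance matches unbalanced count data with excess zeros.
search = GridSearchCV(
    HistGradientBoostingRegressor(loss="poisson"),
    param_grid={"max_depth": [2, 3, 4], "learning_rate": [0.05, 0.1]},
    scoring="neg_mean_poisson_deviance",
    cv=5,
)
search.fit(X, y, sample_weight=w)

# Expected claim count per policy = predicted frequency * exposure.
expected_counts = search.predict(X) * w
```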

    Rigorously assessing software reliability and safety

    This paper summarises the state of the art in the assessment of software reliability and safety ("dependability") and describes some promising developments. A sound demonstration of very high dependability is still impossible before operation of the software, but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.
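
    To make the difficulty concrete, the textbook frequentist bound below (a standard calculation, not a method taken from the paper) relates the number of failure-free demands observed in testing to the failure probability that can be claimed at a given confidence level.

```python
# Textbook frequentist bound: after n failure-free demands, the largest
# per-demand failure probability p consistent with confidence level c
# satisfies (1 - p)^n = 1 - c.
def pfd_upper_bound(n: int, confidence: float) -> float:
    """Upper confidence bound on the probability of failure on demand."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# Demonstrating pfd <= 1e-4 at 99% confidence already needs ~46,000
# failure-free demands, which is why very high dependability is so hard
# to demonstrate before operation.
print(pfd_upper_bound(46_000, 0.99))  # ~1.0e-4
```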

    Constrained Optimization in Simulation: A Novel Approach

    This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the simulation inputs must meet prespecified constraints, including the constraint that the inputs be integer. The proposed heuristic combines (i) experimental design to specify the simulation input combinations, (ii) Kriging (also called spatial correlation modeling) to analyze the global simulation input/output data that result from this experimental design, and (iii) integer nonlinear programming to estimate the optimal solution from the Kriging metamodels. The heuristic is applied to an (s, S) inventory system and a realistic call-center simulation model, and compared with the popular commercial heuristic OptQuest embedded in ARENA versions 11 and 12. These two applications show that the novel heuristic outperforms OptQuest in terms of search speed (it moves faster towards high-quality solutions) and consistency of the solution quality.
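
    Under simplifying assumptions of our own, the three ingredients can be sketched as follows: a Latin hypercube design rounded to integer inputs, Gaussian-process (Kriging) metamodels for the objective and the constrained output, and a brute-force integer search standing in for a full integer nonlinear programming solver. The `simulate` function and all bounds and targets below are hypothetical stand-ins.

```python
# (i) space-filling design, (ii) Kriging metamodels, (iii) integer search.
import itertools
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(x):
    s, S = x  # e.g. reorder point and order-up-to level (toy model)
    obj = 0.10 * s + 0.05 * S + np.random.normal(0.0, 0.1)   # cost output
    con = 100 - s - 0.5 * S + np.random.normal(0.0, 0.5)     # constrained output
    return obj, con

lo, hi = np.array([1, 10]), np.array([20, 80])               # integer bounds
unit = qmc.LatinHypercube(d=2, seed=0).random(30)            # (i) design
X = np.unique(np.rint(qmc.scale(unit, lo, hi)).astype(int), axis=0)
Y = np.array([simulate(x) for x in X])

# (ii) one Kriging metamodel per simulation output.
gp_obj = GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, 0])
gp_con = GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, 1])

# (iii) enumerate integer points, keep those whose predicted constrained
# output meets the target, minimize the predicted objective over them.
grid = np.array(list(itertools.product(range(lo[0], hi[0] + 1),
                                       range(lo[1], hi[1] + 1))))
feasible = grid[gp_con.predict(grid) <= 60.0]                # target: 60
best = feasible[np.argmin(gp_obj.predict(feasible))]
print("estimated optimum:", best)
```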

    Multidisciplinary Design Optimization for Space Applications

    Multidisciplinary Design Optimization (MDO) has been increasingly studied in aerospace engineering, mainly with the purpose of reducing monetary and schedule costs. The traditional design approach of optimizing each discipline separately and manually iterating to achieve good solutions is replaced by exploiting the interactions between the disciplines and concurrently optimizing every subsystem. The target of the research was the development of a flexible software suite capable of concurrently optimizing the design of a rocket propellant launch vehicle for multiple objectives. The advantages of global and local searches are combined both in the MDO architecture and in the selected and self-developed optimization methodologies, which are compared according to computational-efficiency and performance criteria. Results are critically analyzed to identify the most suitable optimization approach for the targeted MDO problem.
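
    The global-plus-local combination mentioned above can be illustrated generically: a population-based global search locates a promising basin, and a gradient-based local optimizer refines the result. The sketch below uses SciPy on the Rastrigin test function as a cheap stand-in for the launch-vehicle model; it shows the pattern only, not the thesis's self-developed methodologies.

```python
# Generic global-then-local hybrid on a multimodal test function.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

bounds = [(-5.12, 5.12)] * 4

# Global stage: explore the landscape for a promising basin.
global_result = differential_evolution(rastrigin, bounds, seed=0)

# Local stage: polish the best point found by the global search.
local_result = minimize(rastrigin, global_result.x,
                        method="L-BFGS-B", bounds=bounds)
print(local_result.x, local_result.fun)
```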

    Evaluating testing methods by delivered reliability

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
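
    A toy Monte Carlo version of this comparison can be sketched under strong simplifying assumptions of our own: a program has independent defects, debug testing detects each defect with equal per-test probability, and operational testing detects each defect in proportion to its operational failure rate; a detected defect is removed. This is an illustrative caricature, not the paper's models.

```python
# Toy model (our assumptions, not the paper's): k independent defects;
# defect i contributes p_op[i] to the operational failure probability.
# Each test independently detects defect i with probability p_detect[i];
# detected defects are removed. Delivered unreliability = total p_op
# mass of the surviving defects.
import numpy as np

rng = np.random.default_rng(0)
k = 20
p_op = rng.dirichlet(np.ones(k)) * 0.05   # operational failure rates
p_dbg = np.full(k, 0.05 / k)              # uniform "debug" detectability

def delivered_unreliability(p_detect, n_tests, n_runs=2000):
    total = 0.0
    for _ in range(n_runs):
        detected = rng.random((n_tests, k)) < p_detect
        survived = ~detected.any(axis=0)
        total += p_op[survived].sum()
    return total / n_runs

# Operational testing preferentially finds the defects that matter most
# in the field, so it tends to deliver lower residual unreliability here.
print("debug      :", delivered_unreliability(p_dbg, 500))
print("operational:", delivered_unreliability(p_op, 500))
```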

    Uncertainty and Error in Combat Modeling, Simulation, and Analysis

    Due to the infrequent and competitive nature of combat, several challenges arise when developing a predictive simulation. First, there is limited data with which to validate such analysis tools. Second, many aspects of combat modeling are highly uncertain and not knowable. This research develops a comprehensive set of techniques for the treatment of uncertainty and error in combat modeling and simulation analysis. First, Evidence Theory is demonstrated as a framework for representing epistemic uncertainty in combat modeling output. Next, a novel method for sensitivity analysis of uncertainty in Evidence Theory is developed. This method generates marginal cumulative plausibility functions (CPFs) and cumulative belief functions (CBFs) and prioritizes the contribution of each factor by the Wasserstein distance (also known as the Kantorovich or Earth Mover's distance) between the CBF and CPF. Using this method, a rank ordering of the simulation input factors can be produced with respect to uncertainty. Lastly, a procedure is developed for prioritizing the impact of modeling choices on simulation output uncertainty in settings where multiple models are employed. This analysis provides insight into the overall sensitivities of the system with respect to multiple modeling choices.
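
    For a single output quantity, the CBF/CPF construction and the distance that drives the ranking can be sketched as follows, assuming the epistemic output is given as a finite set of focal intervals with masses (all numbers illustrative); this is our reading of the construction, not code from the dissertation.

```python
# Body of evidence for one output: focal intervals [lo, hi] with masses m.
# CBF(y) = Bel((-inf, y]) counts the mass of intervals lying entirely
# below y; CPF(y) = Pl((-inf, y]) counts the mass of intervals overlapping
# (-inf, y]. Their Wasserstein-1 distance is the area between the two
# curves, which for interval evidence reduces to sum(m * (hi - lo)).
import numpy as np

focal = np.array([[1.0, 3.0], [2.0, 6.0], [5.0, 7.0]])  # illustrative
mass = np.array([0.5, 0.3, 0.2])

def cbf(y):
    return mass[focal[:, 1] <= y].sum()

def cpf(y):
    return mass[focal[:, 0] <= y].sum()

# Numeric area between CPF and CBF (left Riemann sum) vs. closed form.
ys = np.linspace(0.0, 8.0, 4001)
gap = np.array([cpf(y) - cbf(y) for y in ys])
numeric_w1 = gap[:-1].sum() * (ys[1] - ys[0])
closed_form = float((mass * (focal[:, 1] - focal[:, 0])).sum())
print(numeric_w1, closed_form)  # both ~ 2.6
```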

    Applications of Environment-Based Design (EBD) Methodology

    A product’s environments play a significant role in its development: any alteration in the environment surrounding a product leads to changes in its features. Hence, a systematic procedure for analyzing a product’s environments is a crucial need for industry. The Environment-Based Design (EBD) methodology describes the environment of the product (excluding the product itself) and presents a rational approach to analyzing it. To achieve an efficient product design and development process, EBD utilizes several tools and techniques, chiefly the Recursive Object Model (ROM) diagram, cause-and-effect analysis, life-cycle analysis, and asking the right questions and answering them. In this research, we demonstrate EBD’s capabilities for product evolution analysis, complex product development, and human-centered product development. To show EBD’s competence in product evolution analysis, we conduct a case study of braking-system evolution by analyzing the environments around braking systems. We then perform an environment analysis of aerospace design methodology in order to propose a novel design methodology for the aerospace industry. Finally, we propose a course-scheduling model based on an environment analysis of academic schedules and verify the model using Concordia University’s courses.