12,436 research outputs found

    Payments per claim model of outstanding claims reserve based on fuzzy linear regression

    Uncertainty in factors such as inflation, together with ambiguity in historical data and variable values, leads to ambiguity in the assessment of outstanding claims reserves. The payments per claim model can only perform point estimation, whereas fuzzy linear regression, which is grounded in fuzzy theory, can deal with uncertainty in the data directly. This paper therefore proposes a payments per claim model based on fuzzy linear regression. The linear programming method and the fuzzy least squares method are used to estimate the parameters of the fuzzy regression equation, and the estimates are introduced into the payments per claim model to obtain the predicted reserve for each accident year. Comparing this result with that of the traditional payments per claim model, we find that the payments per claim model whose fuzzy regression parameters are estimated by the linear programming method is more effective. The model gives the width of the compensation amount for each accident year and, in addition, solves the problem that the traditional payments per claim model cannot measure dynamic changes in reserves.
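The linear-programming estimation of fuzzy regression parameters mentioned above can be sketched with Tanaka's classic possibilistic formulation: each coefficient is a symmetric triangular fuzzy number (center, spread), and the total spread is minimized subject to every observation lying inside its predicted interval. This is a generic illustration, not the paper's actual model, and the data values are invented.

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_linear_regression(x, y):
    """Tanaka-style possibilistic regression y ~ (c0, s0) + (c1, s1) * x.

    Coefficients are symmetric triangular fuzzy numbers (center, spread).
    Minimizes the total spread subject to every observation lying inside
    the predicted interval [center - spread, center + spread].
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    ax = np.abs(x)
    n = len(x)

    # Decision variables: z = [c0, c1, s0, s1]
    cost = [0.0, 0.0, float(n), float(ax.sum())]   # total spread to minimize

    A_ub, b_ub = [], []
    for xi, axi, yi in zip(x, ax, y):
        # y_i <= (c0 + c1*x_i) + (s0 + s1*|x_i|)
        A_ub.append([-1.0, -xi, -1.0, -axi]); b_ub.append(-yi)
        # y_i >= (c0 + c1*x_i) - (s0 + s1*|x_i|)
        A_ub.append([1.0, xi, -1.0, -axi]);   b_ub.append(yi)

    bounds = [(None, None), (None, None), (0, None), (0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    c0, c1, s0, s1 = res.x
    return (c0, s0), (c1, s1)

# Invented payments-per-claim data by development year
x = [1, 2, 3, 4, 5]
y = [10.2, 12.1, 13.8, 16.3, 17.9]
(c0, s0), (c1, s1) = fuzzy_linear_regression(x, y)

# Every observation lies inside the fuzzy prediction band (up to solver
# tolerance), and the band width is what the model reports per year.
for xi, yi in zip(x, y):
    centre = c0 + c1 * xi
    spread = s0 + s1 * abs(xi)
    assert centre - spread - 1e-6 <= yi <= centre + spread + 1e-6
```

The spread `s0 + s1*|x|` is the interval width the abstract refers to: instead of a point estimate per accident year, the model reports a band.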

    A First Look at the Crypto-Mining Malware Ecosystem: A Decade of Unrestricted Wealth

    Illicit crypto-mining leverages resources stolen from victims to mine cryptocurrencies on behalf of criminals. While recent works have analyzed one side of this threat, i.e., web-browser cryptojacking, only commercial reports have partially covered binary-based crypto-mining malware. In this paper, we conduct the largest measurement of crypto-mining malware to date, analyzing approximately 4.5 million malware samples (1.2 million malicious miners) over a period of twelve years, from 2007 to 2019. Our analysis pipeline applies both static and dynamic analysis to extract information from the samples, such as wallet identifiers and mining pools. Together with OSINT data, this information is used to group samples into campaigns. We then analyze publicly available payments sent to the wallets from mining pools as a reward for mining, and estimate profits for the different campaigns. All of this is done in a fully automated fashion, which enables us to produce measurement-based findings of illicit crypto-mining at scale. Our profit analysis reveals campaigns with multi-million earnings, associating over 4.4% of Monero with illicit mining. We analyze the infrastructure related to the different campaigns, showing that a high proportion of this ecosystem is supported by underground economies such as Pay-Per-Install services. We also uncover novel techniques that allow criminals to run successful campaigns. Comment: A shorter version of this paper appears in the Proceedings of the 19th ACM Internet Measurement Conference (IMC 2019). This is the full version.
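The campaign-grouping step, samples that share a wallet identifier collapsed into one campaign, can be sketched as a union-find pass. The sample names, wallets, and data layout below are invented; the paper's actual pipeline also folds in mining pools and OSINT evidence.

```python
from collections import defaultdict

def group_campaigns(samples):
    """Group samples into campaigns: two samples belong to the same
    campaign if they share any mining-wallet identifier (transitively).
    `samples` maps sample id -> set of wallet identifiers (hypothetical)."""
    parent = {s: s for s in samples}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    wallet_owner = {}
    for sample_id, wallets in samples.items():
        for w in wallets:
            if w in wallet_owner:
                union(sample_id, wallet_owner[w])
            else:
                wallet_owner[w] = sample_id

    campaigns = defaultdict(set)
    for sample_id in samples:
        campaigns[find(sample_id)].add(sample_id)
    return list(campaigns.values())

samples = {
    "s1": {"walletA"},
    "s2": {"walletA", "walletB"},   # bridges walletA and walletB
    "s3": {"walletB"},
    "s4": {"walletC"},
}
groups = group_campaigns(samples)   # {s1, s2, s3} and {s4}
```

The transitive sharing matters: s1 and s3 have no wallet in common, but both share one with s2, so all three end up in a single campaign.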

    Risk pricing practices in finance, insurance and construction

    A review of current risk pricing practices in the financial, insurance and construction sectors is conducted through a comprehensive literature review. The purpose was to inform a study on risk and price in the tendering processes of contractors: specifically, how contractors take account of risk when they are calculating their bids for construction work. The reference to mainstream literature was in view of construction management research as a field of application rather than a fundamental academic discipline. Analytical models are used for risk pricing in the financial sector, while certain mathematical laws and principles of insurance are used to price risk in the insurance sector. Construction contractors and practitioners, by contrast, are described as traditionally pricing allowances for project risk using mechanisms such as intuition and experience. Project risk analysis models have proliferated in recent years; however, they are rarely used because of the problems practitioners face when confronted with them. A discussion of practices across the three sectors shows that the construction industry does not approach risk according to the sophisticated mechanisms of the other two sectors. This is not a poor situation in itself; however, knowledge transfer from finance and insurance can help construction practitioners, and formal risk models for contractors should also be informed by the commercial exigencies and unique characteristics of the construction sector.

    Beyond the Formalism Debate: Expert Reasoning, Fuzzy Logic, and Complex Statutes

    Formalists and antiformalists continue to debate the utility of using legislative history and current social values to interpret statutes. Lost in the debate, however, is a clear model of how judges actually make decisions. Rather than focusing on complex problems presented by actual judicial decisions, formalists and antiformalists concentrate on stylized examples of simple statutes. In this Article, Professors Adams and Farber construct a more functional model of judicial decisionmaking by focusing on complex problems. They use cognitive psychological research on expert reasoning and techniques from an emerging area in the field of artificial intelligence, fuzzy logic, to construct their model. To probe the complex interactions between judicial interpretation, the business and legal communities, and the legislature, the authors apply their model to two important bankruptcy cases written by prominent formalist judges. Professors Adams and Farber demonstrate how cognitive psychology and fuzzy logic can reveal the reasoning processes that both formalist and antiformalist judges use to interpret complex statutes. To apply formalist rules, judges need to recognize the aspects of a case that trigger relevant rules. Cognitive psychologists have researched expert reasoning using this type of diagnostic process. Once the judge identifies the appropriate rules, she will often find they point in conflicting directions. Fuzzy logic provides a model of how to analyze such conflicts. Next, Professors Adams and Farber consider how these models of judicial decisionmaking inform efforts to improve statutory interpretation of complex statutes. They reason that expert decisionmaking builds on pattern recognition skills and fuzzy maps, both the result of intensive repeated experience. 
The authors explain that cases involving complex statutory interpretation frequently involve competing considerations, and that the implicit understandings of field insiders tend to be entrenched and difficult to displace. Consequently, Professors Adams and Farber argue that judges in specialty courts, such as the Bankruptcy Courts, are probably in a better position than generalist appellate judges to interpret complex statutes. Generalist judges should approach complex statutory issues with a strong degree of deference to the local culture of the field. Professors Adams and Farber conclude the Article with speculation on how fuzzy logic could be used in a more quantitative way to model legal problems. They note that computer modeling may ultimately provide insight into the subtle process of judicial practical reasoning, moving away from the false dichotomy often drawn between formalist and antiformalist approaches to practical judicial decisionmaking.
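The conflict-analysis idea can be illustrated with the simplest fuzzy connectives: each canon of interpretation applies to a reading only to a degree in [0, 1], and competing readings are compared by aggregating those degrees rather than by declaring a binary winner. The aggregation below uses Zadeh's min t-norm, and every membership value is invented for illustration; the Article's model is far richer.

```python
def support(memberships):
    """Fuzzy AND across the canons a reading relies on (Zadeh min t-norm):
    a reading is only as well supported as its weakest required canon."""
    return min(memberships)

# Invented membership degrees: how strongly each canon fits the case.
reading_a = support([0.9, 0.6])   # clear text, but weaker fit with purpose
reading_b = support([0.7, 0.8])   # moderate text, better fit with purpose

best = "B" if reading_b > reading_a else "A"
```

On these numbers reading A's strong textual support does not carry the day, because its purposive fit is the weakest link; that graded trade-off is what a crisp rules-conflict model cannot express.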

    On being balanced in an unbalanced world

    This paper examines the case of a procurement auction for a single project, in which the breakdown of the winning bid into its component items determines the value of the payments subsequently made to the bidder as the work progresses. Unbalanced bidding, or bid skewing, involves distributing mark-up unevenly among the component items so as to derive increased benefit for the unbalancer without changing the total bid. One form of unbalanced bidding, termed Front Loading (FL), is thought to be widespread in practice: it involves overpricing the work items that occur early in the project and underpricing those that occur later in order to enhance the bidder's cash flow. Naturally, auctioneers attempt to protect themselves from the effects of unbalancing, typically by reserving the right to reject a bid that has been detected as unbalanced. As a result, models have been developed both to unbalance bids and to detect unbalanced bids, but virtually nothing is known of their use or success. This is of particular concern for the detection methods as, without testing, there is no way of knowing the extent to which unbalanced bids remain undetected or balanced bids are falsely detected as unbalanced. This paper reports on a simulation study aimed at demonstrating the likely effects of unbalanced bid detection models in a deterministic environment involving FL unbalancing in a Texas DOT detection setting, in which bids are deemed to be unbalanced if an item exceeds a maximum (or fails to reach a minimum) ‘cut-off’ value determined by the Texas method. A proportion of bids are automatically and maximally unbalanced over a long series of simulated contract projects, and the profits and detection rates of both the balancers and unbalancers are compared. 
The results show that, as expected, balanced bids are often incorrectly detected as unbalanced, with the rate of (mis)detection increasing with the proportion of FL bidders in the auction. It is also shown that, while the profit for balanced bidders remains the same irrespective of the number of FL bidders involved, the FL bidders' profit increases with the proportion of FL bidders present in the auction. Sensitivity tests show the results to be generally robust, with (mis)detection rates increasing further when there are fewer bidders in the auction and when more data are averaged to determine the baseline value, but becoming smaller or larger with increased cut-off values and increased cost and estimate variability depending on the number of FL bidders involved. The FL bidder's expected benefit from unbalancing, on the other hand, increases when there are fewer bidders in the auction. It also increases when the cut-off rate and discount rate are increased, when there is less variability in the costs and their estimates, and when less data are used in setting the baseline values.
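A minimal version of this style of cut-off detection can be sketched as below. The baseline here is the across-bidder mean unit price per item, and the ±25% multipliers are illustrative stand-ins, not the actual Texas method parameters; the bid figures are invented.

```python
def detect_unbalanced(bids, lower=0.75, upper=1.25):
    """Flag, per bidder, the item indices priced outside cut-off bands
    around a baseline (here: the mean unit price across all bidders).
    Multipliers and baseline rule are illustrative assumptions."""
    n_items = len(next(iter(bids.values())))
    baseline = [sum(b[i] for b in bids.values()) / len(bids)
                for i in range(n_items)]
    flagged = {}
    for bidder, prices in bids.items():
        flagged[bidder] = [
            i for i, p in enumerate(prices)
            if not lower * baseline[i] <= p <= upper * baseline[i]
        ]
    return flagged

bids = {
    "balanced":  [100, 100, 100],
    "frontload": [160, 100, 40],   # overprice early items, underprice late
}
flags = detect_unbalanced(bids)
```

Note how the front-loaded bid drags the item baselines with it: the late item's baseline drops to 70, so the balanced bid's honest 100 also falls outside the band and is flagged. This is exactly the misdetection effect the simulation study measures.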

    Simplifying Administration of Health Insurance

    Reviews definitions and estimates of the insurance system's administrative costs and efforts to reduce them. Examines the potential of various reform proposals to simplify or further complicate the system. Includes data on estimated administrative costs.

    Insurance Product Development

    The project's objective was to develop and price a new insurance product. Rollercoaster insurance was chosen for its uniqueness. Data on injury numbers, rider counts, and deaths were necessary for pricing but were primarily private, so the risk rate and ridership were modeled from limited parks' data. A portfolio was developed using the modeled data; the purpose of the portfolio was to share the risk and reduce premiums. After adding in expense and profit, a final premium was determined for each rollercoaster.
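The pricing logic described, an expected loss grossed up by expense and profit loadings, can be sketched as follows. All rates, counts, and loadings below are invented, not the project's actual figures.

```python
def coaster_premium(riders_per_year, injury_rate, avg_claim_cost,
                    expense_load=0.25, profit_load=0.10):
    """Pure premium (expected annual loss) grossed up by expense and
    profit loadings. Every figure here is hypothetical."""
    expected_loss = riders_per_year * injury_rate * avg_claim_cost
    return expected_loss * (1 + expense_load + profit_load)

# 1.2M riders/year, 3-in-a-million injury rate, $50k average claim
premium = coaster_premium(1_200_000, 3e-6, 50_000)
```

Pooling many coasters into a portfolio leaves each expected loss unchanged, but with roughly independent risks the relative volatility of the total shrinks (roughly as 1/sqrt(n)), which is what allows the pooled premiums to carry a smaller risk margin.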

    Analysis of limitations and metrology weaknesses of enterprise architecture (EA) measurement solutions & proposal of a COSMIC-based approach to EA measurement

    The literature on enterprise architecture (EA) posits that EA is of considerable value for organizations. However, while the EA literature documents a number of proposals for EA measurement solutions, there is little evidence-based research supporting their achievements and limitations. This thesis aims at helping the EA community to understand the existing trends in EA measurement research and to recognize the existing gaps, limitations, and weaknesses in EA measurement solutions. Furthermore, this thesis aims to assist the EA community to design EA measurement solutions based on measurement and metrology best practices. The research goal of this thesis is to contribute to the EA body of knowledge by shaping new perspectives for future research avenues in EA measurement research. To achieve the research goal, the following research objectives are defined:
    1. To classify the EA measurement solutions into specific categories in order to identify research themes and explain the structure of the research area.
    2. To evaluate the EA measurement solutions from a measurement and metrology perspective.
    3. To identify the measurement and metrology issues in EA measurement solutions.
    4. To propose a novel EA measurement approach based on measurement and metrology guidelines and best practices.
    To achieve the first objective, this thesis conducts a systematic mapping study (SMS) to help understand the state of the art of EA measurement research and classify the research area in order to acquire a general understanding of the existing research trends. To achieve the second and third objectives, this thesis conducts a systematic literature review (SLR) to evaluate the EA measurement solutions from a measurement and metrology perspective and, hence, to reveal the weaknesses of EA measurement solutions and propose relevant solutions to these weaknesses. 
To perform this evaluation, we develop an evaluation process that combines the components of evaluation theory with the concepts of measurement and metrology best practices, such as ISO 15939. To achieve the fourth objective, we propose a mapping between two international standards:
    • COSMIC - ISO/IEC 19761: a method for measuring the functional size of software.
    • ArchiMate: a modelling language for EA.
    This mapping results in a novel EA measurement approach that overcomes the weaknesses and limitations found in the existing EA measurement solutions. The research results demonstrate that:
    1. The current publications on EA measurement trend toward an increased focus on the “enterprise IT architecting” school of thought, lack the rigorous terminology found in science and engineering, and show limited adoption of knowledge from other disciplines in the proposals of EA measurement solutions.
    2. There is a lack of attention to attaining appropriate metrology properties in EA measurement proposals: all of the proposals score poorly on metrology coverage, in both their theoretical and empirical definitions.
    3. The proposed EA measurement approach is practical for EA practitioners and easy for organizations to adopt.
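The COSMIC side of the proposed mapping can be made concrete: ISO/IEC 19761 sizes a functional process by counting its data movements (Entry, Exit, Read, Write), each worth 1 COSMIC Function Point (CFP). The process and its movements below are invented; deriving them systematically from ArchiMate model elements is precisely what the thesis's mapping rules are for.

```python
from collections import Counter

def cosmic_size(data_movements):
    """Size of one functional process in CFP: the count of its data
    movements, which must each be Entry, Exit, Read, or Write."""
    kinds = Counter(data_movements)
    unknown = set(kinds) - {"Entry", "Exit", "Read", "Write"}
    if unknown:
        raise ValueError(f"not COSMIC data movements: {unknown}")
    return sum(kinds.values())   # 1 CFP per data movement

# A hypothetical "register claim" process identified from an ArchiMate
# model: one triggering Entry, two Reads, one Write, one confirming Exit.
size = cosmic_size(["Entry", "Read", "Read", "Write", "Exit"])
```

Because the measure is a simple count of well-defined movement types, it satisfies the metrology properties (clear empirical and theoretical definitions) that the thesis finds lacking in earlier EA measurement proposals.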