
    Operational risk management and new computational needs in banks

    Basel II banking regulation introduces new needs for computational schemes. These involve both optimal stochastic control and large-scale simulation of decision processes for preventing low-frequency, high-loss-impact events. This paper first states the problem and presents its parameters. It then spells out the equations that represent rational risk management behavior and link the variables together: Lévy processes are used to model operational risk losses where calibration from historical loss databases is possible; where it is not, qualitative variables such as the quality of the business environment and of internal controls can capture both cost-side and profit-side impacts. Other control variables include the business growth rate and the efficiency of risk mitigation. The economic value of a policy is maximized by solving the resulting Hamilton-Jacobi-Bellman (HJB) type equation. Computational complexity arises from embedded interactions between three levels:
    * programming a globally optimal dynamic expenditure budget in the Basel II context;
    * arbitraging between the cost of risk-reduction policies (as measured by organizational qualitative scorecards and insurance buying) and the impact of the incurred losses themselves, which implies modeling the efficiency of the process through which forward-looking threat-minimization measures actually reduce stochastic losses;
    * and optimally allocating budget according to profitability across subsidiaries and business lines.
    The paper next reviews the different types of approaches that can be envisaged for deriving a sound budgetary policy for operational risk management based on this HJB equation. It is argued that while this complex, high-dimensional problem can be solved with the usual simplifications (Galerkin approach, imposing Merton-form solutions, viscosity approach, ad hoc utility functions that yield closed-form solutions, etc.), the main interest of the model lies in exploring scenarios in an adaptive learning framework (MDP, partially observed MDP, Q-learning, neuro-dynamic programming, greedy algorithms, etc.). This makes more sense from a management point of view, and the solutions are more easily communicated to, and accepted by, operational-level staff in banks through the explicit scenarios that can be derived. This kind of approach combines computational techniques such as POMDPs, stochastic control theory, and learning algorithms under uncertainty and incomplete information. The paper concludes by presenting the benefits of such a consistent computational approach to managing budgets, as opposed to an operational risk management policy made up of disconnected expenditures. Such consistency satisfies the qualifying criteria for banks to apply for the AMA (Advanced Measurement Approach), which allows large economies of regulatory capital charge under the Basel II Accord.
    Keywords: operational risk management, HJB equation, Lévy processes, budget optimization, capital allocation
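    The abstract does not reproduce the HJB equation itself. For orientation only, the generic form commonly written for a controlled jump-diffusion whose jumps follow a Lévy measure nu (with value function V, control u, running profit f, drift b, volatility sigma, and jump size gamma, all placeholder notation rather than the paper's own) is:

```latex
0 = \partial_t V(t,x) + \sup_{u}\Big\{ f(x,u) + b(x,u)\,\partial_x V(t,x)
    + \tfrac{1}{2}\,\sigma^2(x,u)\,\partial_{xx} V(t,x)
    + \int_{\mathbb{R}\setminus\{0\}} \big[ V(t,\,x+\gamma(x,u,z)) - V(t,x)
    - \gamma(x,u,z)\,\partial_x V(t,x)\,\mathbf{1}_{\{|z|<1\}} \big]\, \nu(\mathrm{d}z) \Big\}
```

    The adaptive learning alternatives mentioned in the abstract (MDP, POMDP, Q-learning) replace this continuous-time PDE with a discretized decision process whose value function is approximated from simulated scenarios rather than solved in closed form.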

    Dynamical Interactions with Electronic Instruments

    This paper examines electronic instruments that incorporate dynamical systems, where the behaviour of the instrument depends not only upon the immediate input to the instrument but also on the past input. Five instruments are presented as case studies: Michel Waisvisz’ Crackle-box, Dylan Menzies’ Spiro, the no-input mixing desk, the author’s Feedback Joypad, and microphone-loudspeaker feedback. Links are suggested between the sonic affordances of each instrument and the dynamical mechanisms embedded in them. These affordances are contrasted with those of non-dynamical instruments such as the Theremin and sample-based instruments. This is discussed in the context of contemporary, material-oriented approaches to composition and particularly free improvisation, where elements such as unpredictability and instability are often of interest, and the process of exploration and discovery is an important part of the practice.
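    As a minimal sketch of the distinction the abstract draws (not code from the paper; the function names, the tanh nonlinearity, and the feedback coefficient are illustrative assumptions), a memoryless mapping can be contrasted with a stateful one whose output is fed back into its own input:

```python
import numpy as np

def static_instrument(x):
    """Memoryless mapping: each output sample depends only on the current input."""
    return np.tanh(2.0 * x)

def feedback_instrument(x, a=0.9):
    """Dynamical mapping: past output re-enters the present, so the same
    gesture can sound different depending on the instrument's history."""
    y = np.zeros_like(x)
    state = 0.0
    for n, xn in enumerate(x):
        state = np.tanh(xn + a * state)  # feedback of the previous output
        y[n] = state
    return y

gesture = np.concatenate([np.zeros(100), 0.5 * np.ones(100)])
print(static_instrument(gesture)[-1])    # fully determined by the final input value
print(feedback_instrument(gesture)[-1])  # shaped by the accumulated internal state
```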

    Capturing complexity in clinician case-mix: classification system development using GP and physician associate data.

    Background: Few case-mix classification systems for primary care settings are applicable when considering the optimal clinical skill mix to provide services. Aim: To develop a case-mix classification system (CMCS) and test its impact on analyses of patient outcomes by clinician type, using example data from physician associates' (PAs) and GPs' consultations with same-day appointment patients. Design & setting: Secondary analysis of controlled observational data from six general practices employing PAs and six matched practices not employing PAs in England. Method: Routinely collected patient consultation records (PA n = 932, GP n = 1154) were used to design the CMCS (combining problem codes, disease register data, and free text); to describe the case-mix; and to assess the impact of statistical adjustment for the CMCS on the comparison of outcomes of consultations with PAs and with GPs. Results: A CMCS was developed by extending a system that classified only 18.6% (213/1147) of the presenting problems in this study's data. The CMCS differentiated the presenting patient's level of need or complexity as acute, chronic, minor problem or symptom, prevention, or process of care, applied hierarchically. Combining patient-level and consultation-level measures resulted in a higher classification of acuity and complexity for 639 (30.6%) of the patient cases in this sample than using the consultation level alone. The CMCS was a key adjustment in modelling the study's main outcome measure, that is, the rate of repeat consultation. Conclusion: This CMCS assisted in classifying the differences in case-mix between professions, thereby allowing fairer assessment of the potential for role substitution and task shifting in primary care, but it requires further validation.
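    To illustrate what "applied hierarchically" can mean in practice, the sketch below assigns each consultation to the highest-priority category that matches. The five categories come from the abstract; the field names, code prefixes, free-text flags, and precedence rules are hypothetical placeholders, not the study's actual classification rules.

```python
# Hypothetical hierarchical case-mix assignment (illustrative only).
CATEGORY_ORDER = ["acute", "chronic", "minor problem or symptom",
                  "prevention", "process of care"]

def classify_consultation(problem_codes, on_disease_register, free_text_flags):
    """Return the highest-priority category that applies to a consultation."""
    matches = set()
    if any(code.startswith("A") for code in problem_codes):   # placeholder acute codes
        matches.add("acute")
    if on_disease_register:
        matches.add("chronic")
    if "self-limiting" in free_text_flags:
        matches.add("minor problem or symptom")
    if "screening" in free_text_flags or "immunisation" in free_text_flags:
        matches.add("prevention")
    if "medication review" in free_text_flags:
        matches.add("process of care")
    for category in CATEGORY_ORDER:          # hierarchical application
        if category in matches:
            return category
    return "unclassified"

print(classify_consultation(["K86"], on_disease_register=True,
                            free_text_flags={"medication review"}))  # -> "chronic"
```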

    Social learning strategies modify the effect of network structure on group performance

    The structure of communication networks is an important determinant of the capacity of teams, organizations, and societies to solve policy, business, and science problems. Yet previous studies have reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into account the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity, copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member, copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve the greatest performance on simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.
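    The two learning rules described above can be stated compactly as update functions on an arbitrary contact network. The sketch below is not the paper's code; the ring network, payoffs, and sample size are placeholder assumptions used only to show the conformity rule versus the best-member rule.

```python
# Illustrative sketch of the two social learning strategies on a contact network.
import random
from collections import Counter

def conformity_update(solutions, neighbors, i, sample_size=3):
    """Copy the most frequent solution in a small sample of i's contacts."""
    sample = random.sample(neighbors[i], min(sample_size, len(neighbors[i])))
    counts = Counter(solutions[j] for j in sample)
    return counts.most_common(1)[0][0]

def best_member_update(solutions, payoffs, neighbors, i):
    """Copy the solution of the contact with the highest payoff, if it beats i's own."""
    best = max(neighbors[i], key=lambda j: payoffs[j])
    return solutions[best] if payoffs[best] > payoffs[i] else solutions[i]

# Toy example: four agents on a ring network
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
solutions = {0: "A", 1: "B", 2: "B", 3: "A"}
payoffs   = {0: 0.2, 1: 0.9, 2: 0.5, 3: 0.1}
print(conformity_update(solutions, neighbors, 0))
print(best_member_update(solutions, payoffs, neighbors, 0))
```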