Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk?
Recently, the Basel Committee on Banking Supervision proposed to replace all
approaches to operational risk capital, including the Advanced Measurement
Approach (AMA), with a simple formula referred to as the Standardised
Measurement Approach (SMA). This paper discusses and studies the weaknesses and
pitfalls of the SMA, such as instability, risk insensitivity, super-additivity,
and the implicit relationship between the SMA capital model and systemic risk
in the banking sector.
We also discuss issues with the closely related operational risk
Capital-at-Risk (OpCar) model proposed by the Basel Committee, which is the
precursor to the SMA. In conclusion, we advocate maintaining the AMA internal
model framework and suggest, as an alternative, a number of standardisation
recommendations that could be considered to unify internal modelling of
operational risk. The findings and views presented in this paper have been
discussed with and supported by many OpRisk practitioners and academics in
Australia, Europe, the UK, and the USA, and recently at the OpRisk Europe 2016
conference in London.
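The super-additivity pitfall can be illustrated with a toy capital formula. The bucket thresholds and marginal coefficients below are hypothetical, not those of the actual Basel proposal; the point is only that any convex, piecewise-linear map from a business indicator to capital charges more for a merged bank than for the same business split across two banks:

```python
def toy_capital(bi):
    """Toy SMA-style capital: piecewise-linear in a business indicator (BI).

    Bucket limits and marginal coefficients are hypothetical, chosen only to
    make the function convex (increasing marginal rates), which is what
    produces super-additivity.
    """
    buckets = [(1000.0, 0.12), (3000.0, 0.15), (float("inf"), 0.18)]
    capital, lower = 0.0, 0.0
    for upper, rate in buckets:
        if bi > lower:
            capital += (min(bi, upper) - lower) * rate
        lower = upper
    return capital

# Two stand-alone banks versus the same business merged into one:
separate = toy_capital(800.0) + toy_capital(900.0)  # 96 + 108 = 204
merged = toy_capital(1700.0)                        # 120 + 105 = 225
assert merged > separate  # super-additivity: merging raises total capital
```

The same convexity also means a bank could reduce total required capital by splitting itself up, which is one of the structural concerns the paper raises.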
Holistic Measures for Evaluating Prediction Models in Smart Grids
The performance of prediction models is often based on "abstract metrics"
that estimate the model's ability to limit residual errors between the observed
and predicted values. However, meaningful evaluation and selection of
prediction models for end-user domains requires holistic and
application-sensitive performance measures. Inspired by energy consumption
prediction models used in the emerging "big data" domain of Smart Power Grids,
we propose a suite of performance measures to rationally compare models along
the dimensions of scale independence, reliability, volatility and cost. We
include both application independent and dependent measures, the latter
parameterized to allow customization by domain experts to fit their scenario.
While our measures are generalizable to other domains, we offer an empirical
analysis using real energy use data for three Smart Grid applications:
planning, customer education and demand response, which are relevant for energy
sustainability. Our results underscore the value of the proposed measures to
offer a deeper insight into models' behavior and their impact on real
applications, which benefit both data mining researchers and practitioners.

Comment: 14 pages, 8 figures. Accepted and to appear in IEEE Transactions on
Knowledge and Data Engineering, 2014. Authors' final version. Copyright
transferred to IEEE.
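As an illustration of what a scale-independent measure buys over a scale-dependent one, the sketch below contrasts RMSE with its coefficient of variation (CVRMSE) on two load series of very different magnitudes. The series and function names are illustrative, not the paper's actual measure suite:

```python
import math

def rmse(actual, predicted):
    """Root mean square error: scale-dependent."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def cvrmse(actual, predicted):
    """Coefficient of variation of RMSE: RMSE normalised by the mean of the
    observations, so models for loads of different magnitude are comparable."""
    return rmse(actual, predicted) / (sum(actual) / len(actual))

# A household-scale and a feeder-scale load with proportionally equal errors:
house = ([1.0, 2.0, 3.0], [1.1, 2.2, 3.3])
feeder = ([100.0, 200.0, 300.0], [110.0, 220.0, 330.0])

print(rmse(*house), rmse(*feeder))      # differ by a factor of 100
print(cvrmse(*house), cvrmse(*feeder))  # identical: errors are 10% of scale
```

Ranking models by RMSE would make the feeder model look far worse than the household model even though both miss by the same 10%; a scale-independent measure removes that distortion when comparing across consumers.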
Microservice Transition and its Granularity Problem: A Systematic Mapping Study
Microservices have gained wide recognition and acceptance in software
industries as an emerging architectural style for autonomic, scalable, and more
reliable computing. The transition to microservices has been highly motivated
by the need for better alignment of technical design decisions with improving
value potentials of architectures. Despite microservices' popularity, research
still lacks a disciplined understanding of the transition and consensus on the
principles and activities underlying the "micro-ing" of architectures. In this paper,
we report on a systematic mapping study that consolidates various views,
approaches and activities that commonly assist in the transition to
microservices. The study aims to provide a better understanding of the
transition; it also contributes a working definition of the transition and
technical activities underlying it. We term the transition and technical
activities leading to microservice architectures as microservitization. We then
shed light on a fundamental problem of microservitization: microservice
granularity and reasoning about its adaptation as first-class entities. This
study reviews the state of the art and practice in reasoning about
microservice granularity, covering the modelling approaches, aspects
considered, and the guidelines and processes used. It also identifies
opportunities for future research and development related to reasoning about
microservice granularity.

Comment: 36 pages including references, 6 figures, and 3 tables
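A common starting point when reasoning about granularity is to score a candidate decomposition by cohesion and coupling over an operation-dependency graph. The metric and the example decomposition below are a hypothetical toy, not one drawn from the mapped studies:

```python
def cohesion_coupling(assignment, calls):
    """Score a candidate service decomposition.

    assignment: operation -> service name (hypothetical example input).
    calls: list of (caller, callee) operation pairs.
    Returns (cohesion, coupling): the fractions of calls that stay inside one
    service versus crossing service boundaries.
    """
    internal = sum(1 for a, b in calls if assignment[a] == assignment[b])
    total = len(calls)
    return internal / total, (total - internal) / total

# Two candidate splits of four operations into services:
calls = [("create_order", "reserve_stock"),
         ("create_order", "charge_card"),
         ("charge_card", "send_receipt")]

coarse = {"create_order": "shop", "reserve_stock": "shop",
          "charge_card": "shop", "send_receipt": "shop"}
fine = {"create_order": "orders", "reserve_stock": "inventory",
        "charge_card": "payments", "send_receipt": "payments"}

print(cohesion_coupling(coarse, calls))  # (1.0, 0.0): monolith-like service
print(cohesion_coupling(fine, calls))    # finer grain trades cohesion away
```

Treating granularity as a first-class entity means scores like these would be computed and acted on continuously as the system and its workload evolve, rather than fixed once at design time.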
Short-Term Electricity Demand Forecasting with Machine Learning
Project Work presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics

An accurate short-term load forecast (STLF) is one of the most critical inputs for power plant
unit-commitment planning. STLF reduces the overall planning uncertainty added by the intermittent
production of renewable sources; thus, it helps to minimize the hydro-thermal electricity production
costs in a power grid. Although there is some research in the field and even several research
applications, there is a continual need to improve forecasts. This project proposes a set of machine
learning (ML) models to improve the accuracy of 168-hour forecasts. The developed models employ
features from multiple sources, such as historical load, weather, and holidays. Of the five ML models
developed and tested in various load profile contexts, the Extreme Gradient Boosting Regressor
(XGBoost) algorithm showed the best results, surpassing previous historical weekly predictions based
on neural networks. Additionally, because XGBoost models are based on an ensemble of decision
trees, the models are easier to interpret, which provided a relevant additional result: the
features' importance in the forecast.
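The multi-source feature setup the abstract describes can be sketched as follows. The specific lags (24 h and 168 h) and calendar fields are illustrative assumptions, not the project's actual feature set, and a real pipeline would add the weather and holiday inputs mentioned above:

```python
from datetime import datetime, timedelta

def make_features(hourly_load, start, horizon=168):
    """Build one feature row per hour of a 168-hour forecast window.

    hourly_load: historical load such that hourly_load[-1] is the last
    observed hour before `start`. Lag choices and calendar encoding are
    illustrative, not the thesis's actual feature set.
    """
    rows = []
    for h in range(horizon):
        ts = start + timedelta(hours=h)
        rows.append({
            "hour_of_day": ts.hour,
            "day_of_week": ts.weekday(),
            # Yesterday's load is only observed for the first 24 target hours:
            "lag_24h": hourly_load[-24 + h] if h < 24 else None,
            # Same hour last week is always observed for a 168-hour horizon:
            "lag_168h": hourly_load[-168 + h],
        })
    return rows

history = [100.0 + i % 24 for i in range(24 * 14)]  # two synthetic weeks
rows = make_features(history, start=datetime(2024, 1, 8, 0))
print(len(rows), rows[0]["day_of_week"])  # 168 rows; 2024-01-08 is a Monday
```

Rows like these would then be fed to a gradient-boosted tree regressor (e.g. XGBoost), whose per-feature split gains give the feature-importance result the abstract highlights.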
Autonomic State Management for Optimistic Simulation Platforms
We present the design and implementation of an autonomic state manager (ASM) tailored for integration within optimistic parallel discrete event simulation (PDES) environments based on the C programming language and the Executable and Linkable Format (ELF), and developed for execution on x86-64 architectures. With ASM, the state of any logical process (LP), namely the individual (concurrent) simulation unit that is part of the simulation model, is allowed to be scattered over dynamically allocated memory chunks managed via the standard API (e.g., malloc/free). Also, the application programmer is not required to provide any serialization/deserialization module in order to take a checkpoint of the LP state, or to restore it in case a causality error occurs during the optimistic run, or to provide indications on which portions of the state are updated by event processing, so as to allow incremental checkpointing. All these tasks are handled by ASM in a fully transparent manner via (A) runtime identification (with chunk-level granularity) of the memory map associated with the LP state, and (B) runtime tracking of the memory updates occurring within chunks belonging to the dynamic memory map. The co-existence of the incremental and non-incremental log/restore modes is achieved via dual versions of the same application code, transparently generated by ASM via compile/link-time facilities. Also, the dynamic selection of the best-suited log/restore mode is actuated by ASM on the basis of an innovative modeling/optimization approach which takes into account the stability of each operating mode with respect to variations of the model/environmental execution parameters.
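The chunk-level log/restore mechanism can be sketched in a few lines. This is a simplified, language-neutral illustration with hypothetical names throughout; ASM itself operates transparently on C heap chunks via ELF-level instrumentation rather than through an explicit write API:

```python
class ChunkStateManager:
    """Toy chunk-level checkpointing: each 'chunk' of LP state is a mutable
    list registered with the manager, standing in for a malloc'd chunk."""

    def __init__(self):
        self.chunks = {}    # chunk id -> live chunk (list)
        self.dirty = set()  # chunks written since the last checkpoint
        self.log = {}       # chunk id -> saved copy

    def register(self, cid, chunk):
        """Add a chunk to the LP's dynamic memory map."""
        self.chunks[cid] = chunk
        self.dirty.add(cid)

    def write(self, cid, index, value):
        """State updates funnel through here, mimicking ASM's transparent
        runtime tracking of memory writes within tracked chunks."""
        self.chunks[cid][index] = value
        self.dirty.add(cid)

    def checkpoint(self):
        """Incremental log: copy only chunks updated since the last snapshot."""
        for cid in self.dirty:
            self.log[cid] = list(self.chunks[cid])
        self.dirty.clear()

    def restore(self):
        """Roll back on a causality error: overwrite live chunks in place."""
        for cid, saved in self.log.items():
            self.chunks[cid][:] = saved
        self.dirty.clear()

mgr = ChunkStateManager()
mgr.register("lp0:chunk0", [0, 0, 0])
mgr.checkpoint()
mgr.write("lp0:chunk0", 1, 42)   # speculative event processing
mgr.restore()                    # causality error detected: roll back
print(mgr.chunks["lp0:chunk0"])  # [0, 0, 0]
```

The dirty-set bookkeeping is what makes the log incremental: a non-incremental mode would simply copy every registered chunk at each checkpoint, which is the trade-off ASM's mode-selection model arbitrates at runtime.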