Autonomic computing architecture for SCADA cyber security
Cognitive computing refers to intelligent computing platforms based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that, like the human brain, learn about their environment and can autonomously predict an impending anomalous situation. IBM first used the term "Autonomic Computing" in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept is inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Such a system should therefore be able to protect itself against both malicious attacks and unintended operator mistakes.
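The self-* properties of an autonomic system are commonly realised with a monitor-analyse-plan-execute (MAPE) control loop over a shared knowledge base. A minimal sketch follows; the class, the threshold, and the baseline-deviation anomaly score are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of a MAPE-style autonomic loop for a SCADA-like
# component. All names and the anomaly heuristic are illustrative.

class AutonomicManager:
    def __init__(self, threshold=0.8):
        self.threshold = threshold   # anomaly score above which we act
        self.history = []            # shared knowledge base of readings

    def monitor(self, sensor_reading):
        """Collect raw telemetry from the managed element."""
        self.history.append(sensor_reading)
        return sensor_reading

    def analyze(self, reading):
        """Score how far the reading deviates from the running baseline."""
        if len(self.history) < 2:
            return 0.0
        baseline = sum(self.history[:-1]) / (len(self.history) - 1)
        return abs(reading - baseline) / (abs(baseline) + 1e-9)

    def plan(self, score):
        """Choose a self-protecting action when the score is high."""
        return "isolate" if score > self.threshold else "continue"

    def execute(self, action):
        return action

    def step(self, reading):
        return self.execute(self.plan(self.analyze(self.monitor(reading))))

mgr = AutonomicManager(threshold=0.5)
print(mgr.step(10.0))   # no baseline yet -> "continue"
print(mgr.step(10.2))   # close to baseline -> "continue"
print(mgr.step(25.0))   # large deviation -> "isolate"
```

In a real deployment the analyse step would be a learned anomaly-detection model rather than a baseline deviation, but the loop structure is the same.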
Architecture of Environmental Risk Modelling: for a faster and more robust response to natural disasters
Demands on the disaster response capacity of the European Union are likely to
increase, as the impacts of disasters continue to grow both in size and
frequency. This has resulted in intensive research on issues concerning
spatially-explicit information and modelling and their multiple sources of
uncertainty. Geospatial support is one of the forms of assistance frequently
required by emergency response centres along with hazard forecast and event
management assessment. Robust modelling of natural hazards requires dynamic
simulations under an array of multiple inputs from different sources.
Uncertainty is associated with meteorological forecasts and with the
calibration of model parameters. Software uncertainty also derives from the data
transformation models (D-TM) needed for predicting hazard behaviour and its
consequences. On the other hand, social contributions have recently been
recognized as valuable in raw-data collection and mapping efforts traditionally
dominated by professional organizations. Here an architecture overview is
proposed for adaptive and robust modelling of natural hazards, following the
Semantic Array Programming paradigm to also include the distributed array of
social contributors called Citizen Sensor in a semantically-enhanced strategy
for D-TM modelling. The modelling architecture proposes a multicriteria
approach for assessing the array of potential impacts with qualitative rapid
assessment methods based on a Partial Open Loop Feedback Control (POLFC) schema
and complementing more traditional and accurate a-posteriori assessment. We
discuss the computational aspect of environmental risk modelling using
array-based parallel paradigms on High Performance Computing (HPC) platforms,
so that the implications of urgency can be incorporated into the system
(Urgent-HPC).
Comment: 12 pages, 1 figure, 1 text box, presented at the 3rd Conference of Computational Interdisciplinary Sciences (CCIS 2014), Asuncion, Paraguay
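The POLFC schema mentioned in the abstract re-plans at each step: it commits only to the current action, chosen to minimise expected cost over the present forecast ensemble, and revises the plan as new observations arrive. A minimal sketch, in which the cost model, the candidate actions, and the forecast values are illustrative assumptions:

```python
# Illustrative POLFC-style decision step: pick the action minimizing
# expected cost over an ensemble of uncertain hazard forecasts, apply
# it, then re-plan once the next observation arrives. The cost model,
# actions, and forecast numbers below are made-up assumptions.

def expected_cost(action, forecast_ensemble):
    """Mean cost over the ensemble: unmet hazard intensity is penalized,
    plus the fixed price of taking the action itself."""
    return sum(max(f - action["capacity"], 0) * 10 + action["price"]
               for f in forecast_ensemble) / len(forecast_ensemble)

def polfc_step(actions, forecast_ensemble):
    """Open-loop best action given only current information."""
    return min(actions, key=lambda a: expected_cost(a, forecast_ensemble))

actions = [
    {"name": "monitor only",        "capacity": 0,  "price": 0},
    {"name": "pre-position teams",  "capacity": 5,  "price": 20},
    {"name": "full mobilisation",   "capacity": 20, "price": 80},
]

# Ensemble of hazard-intensity forecasts (e.g. from meteorological models)
ensemble = [2.0, 6.0, 12.0]
print(polfc_step(actions, ensemble)["name"])   # -> "pre-position teams"
```

The "partial" aspect is that the plan is open-loop over the forecast horizon, yet the loop is closed at every step by re-running `polfc_step` with the updated ensemble.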
Microservice Transition and its Granularity Problem: A Systematic Mapping Study
Microservices have gained wide recognition and acceptance in software
industries as an emerging architectural style for autonomic, scalable, and more
reliable computing. The transition to microservices has been highly motivated
by the need for better alignment of technical design decisions with improving
value potentials of architectures. Despite microservices' popularity, research
still lacks disciplined understanding of transition and consensus on the
principles and activities underlying "micro-ing" architectures. In this paper,
we report on a systematic mapping study that consolidates various views,
approaches and activities that commonly assist in the transition to
microservices. The study aims to provide a better understanding of the
transition; it also contributes a working definition of the transition and
technical activities underlying it. We term the transition and technical
activities leading to microservice architectures as microservitization. We then
shed light on a fundamental problem of microservitization: microservice
granularity and reasoning about its adaptation as first-class entities. This
study reviews state-of-the-art and -practice related to reasoning about
microservice granularity; it reviews modelling approaches, aspects considered,
guidelines and processes used to reason about microservice granularity. This
study identifies opportunities for future research and development related to
reasoning about microservice granularity.
Comment: 36 pages including references, 6 figures, and 3 tables
Self-Learning Cloud Controllers: Fuzzy Q-Learning for Knowledge Evolution
Cloud controllers aim at responding to application demands by automatically
scaling the compute resources at runtime to meet performance guarantees and
minimize resource costs. Existing cloud controllers often resort to scaling
strategies that are codified as a set of adaptation rules. However, for a cloud
provider, applications running on top of the cloud infrastructure are more or
less black-boxes, making it difficult at design time to define optimal or
pre-emptive adaptation rules. Thus, the burden of taking adaptation decisions
often is delegated to the cloud application. Yet, in most cases, application
developers in turn have limited knowledge of the cloud infrastructure. In this
paper, we propose learning adaptation rules during runtime. To this end, we
introduce FQL4KE, a self-learning fuzzy cloud controller. In particular, FQL4KE
learns and modifies fuzzy rules at runtime. The benefit is that for designing
cloud controllers, we do not have to rely solely on precise design-time
knowledge, which may be difficult to acquire. FQL4KE empowers users to specify
cloud controllers by simply adjusting weights representing priorities in system
goals instead of specifying complex adaptation rules. The applicability of
FQL4KE has been experimentally assessed as part of the cloud application
framework ElasticBench. The experimental results indicate that FQL4KE
outperforms our previously developed fuzzy controller without learning
mechanisms and the native Azure auto-scaling.
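The learning mechanism at the heart of such a controller can be sketched as a Q-learning update over discretised workload states, with scaling decisions as actions. The states, actions, reward function, and environment dynamics below are illustrative assumptions, not FQL4KE's actual fuzzy rule base:

```python
import random

# Simplified Q-learning sketch for an auto-scaling cloud controller.
# States discretize observed load; actions change the instance count.
# The reward trades SLA risk against resource cost; every specific
# here is an illustrative assumption, not FQL4KE's design.

STATES = ["low", "medium", "high"]   # coarse (fuzzified) load levels
ACTIONS = [-1, 0, +1]                # remove / keep / add an instance

def reward(state, action):
    """Reward matching capacity to load; penalize mismatches."""
    if state == "high" and action == +1:
        return 1.0
    if state == "low" and action == -1:
        return 1.0
    if state == "medium" and action == 0:
        return 1.0
    return -1.0

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for episode in range(500):
    s = random.choice(STATES)
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    r = reward(s, a)
    s_next = random.choice(STATES)   # stand-in for workload dynamics
    best_next = max(Q[(s_next, act)] for act in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Greedy policy after learning: scale direction per load level
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in STATES}
print(policy)
```

FQL4KE's fuzzy variant additionally weights updates by the membership degree of each fuzzy state, so several rules learn from one observation; the plain tabular update above shows only the underlying learning step.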