The Performability Manager
The authors describe the performability manager, a distributed-system component that contributes to more effective and efficient use of system components and prevents quality-of-service (QoS) degradation. The performability manager dynamically reconfigures distributed systems whenever needed, both to recover from failures and to let the system evolve over time and incorporate new functionality. Large systems require dynamic reconfiguration to support change without shutting down the complete system. A distributed-system monitor is needed to verify QoS; monitoring a distributed system is difficult because of synchronization problems and small differences in clock speeds. The authors describe the functionality and operation of the performability manager, both informally and formally. Throughout the paper they illustrate the approach with an example distributed application: an ANSAware-based number translation service (NTS) from the intelligent networks (IN) area.
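The abstract notes that clock-speed differences make distributed monitoring hard but does not detail the paper's mechanism. As a minimal illustrative sketch of one standard remedy (not necessarily the one used here), Lamport logical clocks order monitored events without relying on synchronized physical clocks:

```python
# Illustrative sketch only: Lamport logical clocks, a standard way to
# order events in a distributed monitor without synchronized physical
# clocks. This is not necessarily the paper's mechanism.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        """Advance the clock for an internal event."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp an outgoing message."""
        self.time += 1
        return self.time

    def receive(self, msg_time):
        """Merge a received timestamp: take the max, then tick."""
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two monitored nodes exchanging one message:
a, b = LamportClock(), LamportClock()
a.local_event()   # a.time == 1
t = a.send()      # a.time == 2
b.receive(t)      # b.time == 3, correctly ordered after the send
```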
Experimental analysis of computer system dependability
This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
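The survey names importance sampling as a technique for accelerating Monte Carlo simulation of rare events such as failures. A minimal sketch of the idea, on an assumed toy problem (estimating the tiny probability that a standard normal "stress" exceeds a high threshold) rather than anything from the survey itself:

```python
# Illustrative sketch of importance sampling: sample from a proposal
# distribution centered on the rare failure region, then reweight by the
# likelihood ratio p(x)/q(x) to keep the estimate unbiased.
import numpy as np

rng = np.random.default_rng(0)
threshold, n = 4.0, 100_000

# Naive Monte Carlo: almost no samples land in the failure region.
naive = (rng.standard_normal(n) > threshold).mean()

# Importance sampling: draw from N(threshold, 1), which hits the region
# often, and reweight each sample by p(x)/q(x).
x = rng.normal(threshold, 1.0, n)
weights = np.exp(-x**2 / 2) / np.exp(-(x - threshold)**2 / 2)
is_est = np.mean((x > threshold) * weights)

print(f"naive:      {naive:.2e}")   # typically 0.00e+00 at this n
print(f"importance: {is_est:.2e}")  # close to the true value, ~3.17e-05
```

The reweighting is what lets a modest sample size resolve a probability on the order of 10^-5, which is exactly the acceleration the survey refers to.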
Microservice Transition and its Granularity Problem: A Systematic Mapping Study
Microservices have gained wide recognition and acceptance in the software industry as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been strongly motivated by the need to align technical design decisions more closely with the value potential of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying the "micro-ing" of architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches, and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and of the technical activities underlying it. We term the transition and the technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class concern. This study reviews the state of the art and state of practice related to reasoning about microservice granularity; it reviews the modelling approaches, aspects considered, guidelines, and processes used to reason about microservice granularity. It also identifies opportunities for future research and development related to reasoning about microservice granularity.
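The study surveys how granularity is modelled rather than prescribing a method. As a hypothetical toy sketch of one side of the granularity trade-off (none of these names or numbers come from the paper), candidate decompositions can be scored by the cross-service calls they induce on a fixed call graph:

```python
# Hypothetical toy sketch (not from the study): scoring candidate
# microservice decompositions by the cross-service coupling they induce.
# Lower scores mean fewer inter-service calls for the same call graph.

# Call graph between operations: (caller, callee, calls per request).
calls = [("checkout", "pricing", 3), ("checkout", "inventory", 2),
         ("pricing", "inventory", 1), ("inventory", "audit", 1)]

def coupling(decomposition):
    """Sum the call weights that cross a service boundary."""
    owner = {op: svc for svc, ops in decomposition.items() for op in ops}
    return sum(w for a, b, w in calls if owner[a] != owner[b])

coarse = {"shop": ["checkout", "pricing", "inventory", "audit"]}
fine = {"checkout": ["checkout"], "pricing": ["pricing"],
        "inventory": ["inventory", "audit"]}

print(coupling(coarse))  # 0: one big service, no remote calls
print(coupling(fine))    # 6: finer granularity pays in coupling
```

A real granularity model would weigh this coupling cost against cohesion, team ownership, and scaling needs, which is the multi-aspect reasoning the study maps out.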
Can Component/Service-Based Systems Be Proved Correct?
Component-oriented and service-oriented approaches have gained strong enthusiasm in industry and academia, with particular interest in service-oriented approaches. A component is a software entity with given functionalities, made available by a provider and used to build other applications within which it is integrated. The service concept and its use in web-based application development have had a huge impact on reuse practices; accordingly, a considerable share of software architectures has been influenced, and these architectures are moving towards service-oriented architectures. Applications therefore (re)use services that are available elsewhere, and many applications interact, without knowing each other, using services available via service servers and their published interfaces and functionalities. Industry proposes, through various consortia, languages, technologies, and standards. Academic work is also being undertaken on the semantics and formalisation of component- and service-based systems. We consider both streams of work in order to raise research concerns that will help in building quality software. Are there new challenging problems with respect to service-based software construction? Moreover, what are the links and the advances compared to distributed systems?
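The paper poses its correctness question at the level of semantics and formalisation. As a lightweight illustrative sketch of one practical step in that direction (not the paper's formalism; the operation and its conditions are hypothetical), runtime-checked contracts attach pre- and postconditions to a published service operation:

```python
# Minimal illustrative sketch (not the paper's formalism): runtime-checked
# contracts on a service operation. The operation and conditions below are
# hypothetical examples.
import functools

def contract(pre, post):
    """Wrap a service operation with pre/postcondition checks."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            assert post(result), f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda balance, amount: 0 < amount <= balance,
          post=lambda new_balance: new_balance >= 0)
def debit(balance: float, amount: float) -> float:
    """Hypothetical service operation: withdraw amount from balance."""
    return balance - amount

print(debit(100.0, 30.0))  # 70.0
# debit(10.0, 30.0) raises AssertionError: precondition violated
```

Runtime checks only detect violations on executed paths; the static, compositional proofs the paper asks about would require verifying such contracts over all behaviours of the assembled components.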