Scalable analysis of stochastic process algebra models
The performance modelling of large-scale systems using discrete-state approaches is
fundamentally hampered by the well-known problem of state-space explosion, which
causes exponential growth of the reachable state space as a function of the number
of components constituting the model. Because they are mapped onto
continuous-time Markov chains (CTMCs), models described in the stochastic process
algebra PEPA are no exception. This thesis presents a deterministic continuous-state
semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying
mathematics for the performance evaluation. This is suitable for models consisting
of large numbers of replicated components, as the ODE problem size is insensitive
to the actual population levels of the system under study. Furthermore, the ODE is
given an interpretation as the fluid limit of a properly defined CTMC model when the
initial population levels go to infinity. This framework allows the use of existing results
which give error bounds to assess the quality of the differential approximation. Performance
indices such as throughput, utilisation, and average response
time are interpreted deterministically as functions of the ODE solution and are
related to corresponding reward structures in the Markovian setting.
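The fluid interpretation described above can be made concrete with a toy example. The Python sketch below (not taken from the thesis; the model, rates, and population sizes are illustrative assumptions) integrates the population ODEs of a simple client-server system, with a min() term playing the role of the synchronisation rate between client and server populations:

```python
# Fluid (ODE) approximation of a toy client-server model, in the spirit of the
# deterministic continuous-state semantics described above. All rates and
# population sizes here are illustrative assumptions, not from the thesis.

def fluid_step(idle, wait, r, mu, servers, dt):
    """One forward-Euler step of the population ODEs.

    Idle clients issue requests at rate r each; the server pool completes
    requests at rate mu per busy server, with at most `servers` requests
    in service at once -- hence the min(), the fluid analogue of
    synchronisation between the two component populations.
    """
    completions = mu * min(wait, servers)
    requests = r * idle
    return (idle + dt * (completions - requests),
            wait + dt * (requests - completions))

def solve(n_clients=100, servers=10, r=0.5, mu=2.0, dt=0.001, t_end=20.0):
    """Integrate the two-equation ODE system to (approximate) steady state."""
    idle, wait = float(n_clients), 0.0
    t = 0.0
    while t < t_end:
        idle, wait = fluid_step(idle, wait, r, mu, servers, dt)
        t += dt
    return idle, wait

idle, wait = solve()
# Deterministic throughput estimate: the completion rate at the ODE solution.
throughput = 2.0 * min(wait, 10)
print(f"idle≈{idle:.2f} wait≈{wait:.2f} throughput≈{throughput:.2f}")
```

Note that the ODE system has two equations regardless of `n_clients`: this is the insensitivity of the problem size to population levels that the abstract refers to.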
The differential interpretation of PEPA provides a framework that is conceptually
analogous to established approximation methods in queueing networks based on mean-value
analysis, as both approaches aim at reducing the computational cost of the analysis
by providing estimates for the expected values of the performance metrics of interest.
The relationship between these two techniques is examined in more detail in
a comparison between PEPA and the Layered Queueing Network (LQN) model. General
patterns of translation of LQN elements into corresponding PEPA components are
applied to a substantial case study of a distributed computer system. This model is
analysed using stochastic simulation to gauge the soundness of the translation. Furthermore,
it is subjected to a series of numerical tests to compare execution runtimes
and accuracy of the PEPA differential analysis against the LQN mean-value approximation
method.
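For context, the mean-value analysis that the comparison refers to can be sketched for a closed, single-class queueing network. The recursion below is the standard exact MVA algorithm; the service demands, think time, and population are illustrative assumptions, not figures from the case study:

```python
# Exact mean-value analysis (MVA) for a closed, single-class queueing
# network -- the style of approximation the LQN comparison above targets.
# The demands and population below are illustrative assumptions.

def mva(demands, n_customers, think_time=0.0):
    """Exact MVA recursion.

    demands[k] -- total service demand at queueing station k (seconds)
    Returns (system throughput, total response time, per-station queue lengths).
    """
    queue = [0.0] * len(demands)          # Q_k(0) = 0: empty network
    x = 0.0
    r = 0.0
    for n in range(1, n_customers + 1):
        # Residence time at each station: demand inflated by the mean
        # queue length an arriving customer sees (arrival theorem).
        resid = [d * (1.0 + q) for d, q in zip(demands, queue)]
        r = sum(resid)
        x = n / (r + think_time)          # system throughput
        queue = [x * rk for rk in resid]  # Little's law per station
    return x, r, queue

# Example: CPU and disk demands of 0.2 s and 0.1 s, 10 customers, 1 s think time.
throughput, resp, queues = mva([0.2, 0.1], 10, think_time=1.0)
print(f"X≈{throughput:.3f} jobs/s, R≈{resp:.3f} s")
```

Like the differential analysis of PEPA, the cost of this recursion depends on the number of stations and customers, not on the size of an underlying state space.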
Finally, this thesis discusses the major elements of the development of a
software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment
for PEPA, including modules for static analysis, explicit state-space exploration,
numerical solution of the steady-state equilibrium of the Markov chain, stochastic
simulation, the differential analysis approach herein presented, and a graphical
framework for model editing and visualisation of performance evaluation results.
Recent advances in Petri nets and concurrency
CEUR Workshop Proceedings
Modelling parallel database management systems for performance prediction
Abstract unavailable; please refer to the PDF.
Performance requirements verification during software systems development
Requirements verification refers to the assurance that the implemented system reflects the specified requirements, and it is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance, and the verification of performance requirements is known as software performance evaluation. This thesis examines the performance evaluation of software systems, a hugely valuable task, especially in the early stages of a software project's development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies work by transforming the architectural models familiar in the software engineering field into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies. These are designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology.
With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study on a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying backgrounds and levels of experience, drawn from different sectors of the software development industry. The study results showed acceptance of this methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, we have to define frameworks that deploy performance requirements validation in the context of software engineering. The agile development paradigm was the result of changes in the overall environment of the IT and business worlds. Agile techniques are based on iterative development, in which requirements, designs and developed programmes evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
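To make concrete the kind of estimate such a generated performance model yields, the following minimal sketch treats a single annotated software resource as an M/M/1 queue using textbook formulas. The scenario and numbers are hypothetical; this does not reproduce the UML-EQN/UML-JMT tool chain:

```python
# Minimal sketch of a design-time estimate from a generated performance
# model: one component treated as an M/M/1 queue. Textbook formulas only;
# the annotation values below are hypothetical.

def mm1_metrics(arrival_rate, service_rate):
    """Return (utilisation, mean response time, mean number in system)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate           # utilisation
    resp = 1.0 / (service_rate - arrival_rate)  # mean response time
    n = rho / (1.0 - rho)                       # mean number in system
    return rho, resp, n

# A designer's annotation might say: 40 requests/s arrive at a component
# that can serve 50 requests/s.
rho, resp, n = mm1_metrics(40.0, 50.0)
print(f"utilisation={rho:.0%} response={resp * 1000:.0f} ms in-system={n:.1f}")
# → utilisation=80% response=100 ms in-system=4.0
```

Even this trivial model gives the early design guidance the abstract describes: the component is 80% utilised, so modest growth in the arrival rate would sharply degrade response time.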
Parameter dependencies for reusable performance specifications of software components
To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages and corresponding model transformations, which allow a reusable description of how the performance of software components depends on their usage profiles. Predictions based on these new methods can support performance-related design decisions.
Specification and refinement of software connectors
Doctoral thesis in Informatics (field of knowledge: Foundations of Computing). Modern computer-based systems essentially rely on the cooperation of
distributed, heterogeneous components organized into open software architectures
that, moreover, can survive in loosely-coupled environments and be easily adapted
to changing application requirements. Such is the case, for example, of applications
designed to take advantage of the increased computational power provided
by massively parallel systems or of the whole business of Internet-based software
development.
In order to develop such systems in a systematic way, the focus of development
methods has shifted, over the last decade, from functional to structural issues:
both data and processes are encapsulated into software units which are connected
into large systems resorting to a number of techniques intended to support reusability
and modifiability.
Actually, the complexity and ubiquity achieved by software in present times
make the availability of both technologies and sound methods to drive its development
more imperative than ever. Programming ‘in–the–large’, component–based
programming and software architecture become popular expressions which embody
this sort of concerns and correspond to driving forces in current software engineering.
In such a context this thesis aims at introducing formal models for software connectors
as well as the corresponding notions of equivalence and refinement upon
which calculation principles for reasoning and transforming connector-based software
architectures can be developed. This research adopts an exogenous coordination
point of view in order to deal with components’ temporal and spatial decoupling
and, therefore, to provide support for looser levels of inter-component dependency.
The thesis also characterises a notion of behavioural interface for components and services. Interfaces and connectors are put together to form configurations, an
abstraction for representing software architectures.
A prototype implementation of a subset of the proposed models is provided, in
the form of a HASKELL library, as a proof of concept. Furthermore, the thesis reports
on a case study in which exogenous coordination is applied to the specification of
interactive systems.
Fundação para a Ciência e a Tecnologia (FCT), SFRH/BD/11083/200
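The exogenous coordination style this abstract describes can be illustrated with a toy sketch. The thesis's prototype is a HASKELL library; the Python fragment below is only an illustrative analogue, in which a one-place buffered channel mediates between a producer and a consumer that never reference each other:

```python
# Toy sketch of exogenous coordination: the producer and consumer below know
# nothing about each other; the connector placed between their ports decides
# how data flows. The one-place buffer echoes channel types common in
# exogenous coordination models; this Python fragment is illustrative and
# is not the thesis's Haskell library.

class Fifo1:
    """A one-place buffered channel: writes succeed when empty, reads when full."""
    def __init__(self):
        self.slot = None

    def write(self, value):
        if self.slot is not None:
            return False          # buffer full: write refused this round
        self.slot = value
        return True

    def read(self):
        value, self.slot = self.slot, None
        return value

# Producer output and consumer input are plain data; neither side names the other.
pending = [f"msg{i}" for i in range(3)]
consumed = []

channel = Fifo1()
# The coordination 'glue' drives both ends, realising temporal decoupling:
# the producer can emit only when the connector accepts, the consumer only
# when the connector offers.
while pending or channel.slot is not None:
    if pending and channel.write(pending[0]):
        pending.pop(0)
    if channel.slot is not None:
        consumed.append(channel.read())

print(consumed)   # → ['msg0', 'msg1', 'msg2']
```

Because all interaction policy lives in the connector, swapping `Fifo1` for a different channel (e.g. a lossy or synchronous one) changes the architecture's behaviour without touching either component, which is the kind of connector-level reasoning the thesis formalises.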
- …