Component-based software engineering: a quantitative approach
Thesis presented for the degree of Doctor in Informatics at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Background: Claims in Component-Based Development (CBD) are often supported only by qualitative expert opinion, rather than by quantitative data. This contrasts with normal practice in other sciences, where sound experimental validation of claims is standard. Experimental Software Engineering (ESE) aims to bridge this gap. Unfortunately, experimental validation efforts are often hard to replicate and compare, which hampers building up the body of knowledge in CBD.
Objectives: In this dissertation our goals are (i) to contribute to the evolution of ESE with respect to the replicability and comparability of experimental work, and (ii) to apply our proposals to CBD, thus contributing to its deeper and sounder understanding.
Techniques: We propose a process model for ESE, aligned with current experimental
best practices, and combine this model with a measurement technique called
Ontology-Driven Measurement (ODM). ODM is aimed at improving the state of practice
in metrics definition and collection, by making metrics definitions formal and executable, without sacrificing their usability. ODM uses standard technologies that can be well adapted to current integrated development environments.
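As a rough illustration of what "formal and executable" metric definitions mean, the sketch below models a toy component ontology as plain Python classes and defines metrics as executable queries over its instances. All class and metric names here are hypothetical; ODM itself formalizes metrics over ontologies using standard technologies (e.g. constraint languages over metamodels) rather than hand-written code.

```python
# Illustrative sketch only: a toy component ontology and metrics defined
# as executable queries over it. Names are hypothetical, not ODM's actual
# ontologies or formalization.

class Interface:
    def __init__(self, name, operations):
        self.name = name
        self.operations = operations  # list of operation names

class Component:
    def __init__(self, name, provided, required):
        self.name = name
        self.provided = provided  # interfaces offered to clients
        self.required = required  # interfaces this component depends on

def provided_operations(component):
    """Executable metric: total operations across provided interfaces."""
    return sum(len(i.operations) for i in component.provided)

def coupling(component):
    """Executable metric: number of required interfaces."""
    return len(component.required)

logger = Component(
    "Logger",
    provided=[Interface("ILog", ["info", "warn", "error"])],
    required=[Interface("IClock", ["now"])],
)
print(provided_operations(logger))  # 3
print(coupling(logger))             # 1
```

Because the metric is ordinary executable code over an explicit ontology, two researchers applying it to the same component instances must obtain the same values, which is the replicability property ODM targets.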
Results: Our contributions include the definition and preliminary validation of a process model for ESE and the proposal of ODM for supporting metrics definition and
collection in the context of CBD. We use both the process model and ODM to perform
a series of experimental works in CBD, including the cross-validation of a component metrics set for JavaBeans, a case study on the influence of practitioners' expertise in a sub-process of component development (component code inspections), and an observational study on reusability patterns of pluggable components (Eclipse plug-ins).
These experimental works involved proposing, adapting, or selecting adequate ontologies, as well as formally defining metrics upon each of those ontologies.
Limitations: Although our experimental work covers a variety of component models and, orthogonally, both process and product, the plethora of opportunities for using our quantitative approach to CBD is far from exhausted.
Conclusions: The main contribution of this dissertation is the illustration, through
practical examples, of how we can combine our experimental process model with ODM to support the experimental validation of claims in the context of CBD, in a repeatable and comparable way. In addition, the techniques proposed in this dissertation
are generic and can be applied to other software development paradigms.
Departamento de Informática of the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa (FCT/UNL); Centro de Informática e Tecnologias da Informação of the FCT/UNL; Fundação para a Ciência e Tecnologia through the STACOS project (POSI/CHS/48875/2002); The Experimental Software Engineering Network (ESERNET); Association Internationale pour les Technologies Objets (AITO); Association for Computing Machinery (ACM).
Hazard Relation Diagrams - Definition and Evaluation
The development process of safety-critical, software-intensive embedded systems is characterized by the need to identify, as early as possible during safety assessment, so-called hazards, which during operation may lead to harm in the form of death or injury of humans, or damage or destruction of external systems. In order to improve the safety of the system during operation, mitigations are conceived for each hazard and documented during requirements engineering by means of hazard-mitigating requirements. These hazard-mitigating requirements must be adequate in the sense that they must specify the functionality required by the stakeholders and must render the system sufficiently safe during operation with regard to the identified hazards.
The adequacy of hazard-mitigating requirements is determined during requirements validation. Yet, this validation is burdened by the fact that hazards and contextual information about hazards are a work product of safety assessment, whereas hazard-mitigating requirements are a work product of requirements engineering. These work products are typically poorly integrated, such that the information needed to determine the adequacy of hazard-mitigating requirements is not available to stakeholders during validation. In consequence, there is the risk that inadequate hazard-mitigating requirements remain covert and the system is falsely considered sufficiently safe.
In this dissertation, an approach was developed which visualizes hazards, contextual information about hazards, hazard-mitigating requirements, and their specific dependencies in graphical models. The approach hence renders this information accessible to stakeholders during validation. In addition, an automated approach to generate these graphical models was developed and prototypically implemented. Moreover, the benefits of using these graphical models during validation of hazard-mitigating requirements were investigated and established by means of four detailed empirical experiments.
The dissertation at hand hence contributes towards the integration of the work products of safety assessment and requirements engineering, with the purpose of supporting the validation of the adequacy of hazard-mitigating requirements.
MiSFIT: Mining Software Fault Information and Types
As software becomes more important to society, the number, age, and complexity of software systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these systems. Organizations can use data from manual fault classification to meet their process-improvement needs, but many lack the expertise or resources to implement such classification correctly.
This dissertation addresses the need for automated software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of similar nature. The resulting classifications show good agreement for common software faults with no manual effort.
To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults in an open source project. MiSFIT clusters the faults based on these changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
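The idea of clustering faults by the syntax of their repair can be illustrated with a small sketch: each repair is represented as a bag of change-type tokens, and repairs are greedily grouped by cosine similarity. The token names and the threshold-based grouping are hypothetical simplifications for illustration, not the MiSFIT implementation.

```python
# Illustrative sketch only: grouping fault repairs by the syntax of the
# change. The change-type tokens and the greedy threshold clustering are
# simplifications, not the MiSFIT tool itself.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of change-type tokens."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(changes, threshold=0.5):
    """Greedily assign each change to the first cluster whose representative
    (its first member) is similar enough; otherwise start a new cluster."""
    clusters = []
    for tokens in changes:
        vec = Counter(tokens)
        for c in clusters:
            if cosine(vec, c[0]) >= threshold:
                c.append(vec)
                break
        else:
            clusters.append([vec])
    return clusters

repairs = [
    ["add-null-check", "if-stmt"],
    ["add-null-check", "if-stmt", "return"],
    ["fix-off-by-one", "for-loop"],
]
groups = cluster(repairs)
print(len(groups))  # 2: the two null-check repairs group together
```

Here the two syntactically similar null-check repairs fall into one cluster and the loop-bound repair into another, mirroring the dissertation's step of manually inspecting a sample from each cluster to validate the grouping.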
The Decision to Undertake Vocational Higher Education in Shipping and Logistics in the UK
This work investigates the decision to study shipping and logistics at advanced levels in
the UK. Documented evidence reports and analyses the perceptions of students on
vocational courses in shipping, transport and logistics and investigates why they chose
their particular fields of study.
A range of instruments is presented to analyse how students perceived that they had
arrived at their study decisions, including national surveys of undergraduates in maritime
business, postgraduates in shipping and logistics and professionals contemplating updating
short courses. Qualitative, quantitative and mapping methods are presented along with
perceptions of relevant professional outcome roles and other factors.
Exploratory approaches to proposing and evaluating alternative teaching approaches,
aimed at raising students' perception of the nature of professional skills requirements,
were predicated on identifying and defining local student schemata and tailoring aids to
their specific learning and teaching requirements.
A cognitive mapping approach enabled comparisons of perceptions between postgraduates,
whose individual beliefs, after being mapped and modelled as a directed network, were
analysed, and differences between maps were quantified. Quantitative pairwise map
comparisons included 54 individuals generating 1430 synchronal comparisons in one
cohort and four diachronal cohort comparisons. These revealed that distance measures
constrained by the numbers of transmitters or receivers, and the strength of relationships
where appropriate, formed the best discriminators.
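As a hedged illustration of what a pairwise map comparison might involve (the thesis's specific distance measures are more refined than this), the sketch below models each cognitive map as a set of directed links, identifies transmitters (nodes with outgoing links only) and receivers (nodes with incoming links only), and computes a normalised symmetric-difference distance between two maps. The concept names are invented examples.

```python
# Illustrative sketch only: a simple pairwise distance between two cognitive
# maps modelled as directed edge sets. The thesis's actual measures, which
# are constrained by transmitter/receiver counts and link strengths, differ.

def transmitters(edges):
    """Nodes with outgoing links only (no incoming links)."""
    heads = {h for h, _ in edges}
    tails = {t for _, t in edges}
    return heads - tails

def receivers(edges):
    """Nodes with incoming links only (no outgoing links)."""
    heads = {h for h, _ in edges}
    tails = {t for _, t in edges}
    return tails - heads

def map_distance(a, b):
    """Normalised symmetric-difference distance between two edge sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a ^ b) / len(union) if union else 0.0

m1 = {("fees", "debt"), ("debt", "career choice")}
m2 = {("fees", "debt"), ("salary", "career choice")}
print(map_distance(m1, m2))  # 2 differing links out of 3 -> about 0.667
```

Scaling such a pairwise computation over every pair in a cohort is what produces the large comparison counts reported above.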
Empirical and theoretical explanations of maps and attempts to compare particular
subgroups and explain differences were often inconclusive. A unified social cognitive
theory of career and academic interest, choice and performance generated useful
propositions relating to how individuals manage issues of self-efficacy, expected outcomes
from decisions, and their personal goals. Substantive work revealed problems of conflicting
domains between students' verbatim statements, which were only weakly coincident with
theoretical concepts. It is concluded that mapping is most powerful when based on
qualitative analysis of the richness and diversity of individual perceptions; this implies
that no simple standard decision process is operating and hence no single recruitment
marketing device is apparent.
In applying and disseminating findings, proposals were made, where possible, to assist
organisations promoting careers awareness and recruitment into relevant professions and
university-based vocational courses; these were published by relevant professional bodies.
Risks of Discrimination through the Use of Algorithms. A study compiled with a grant from the Federal Anti-Discrimination Agency
Algorithms, including artificial intelligence, are used in a variety of ways to differentiate people, services, products or positions. This study uses examples to illustrate the technical and organisational causes of discrimination risks and analyses the resulting forms of discrimination. Its particular focus is on the social risks of algorithmic differentiation and automated decision-making, including injustice by generalisation, treatment of people as mere objects, restrictions on the free development of personality and on informational self-determination, accumulation effects and growing inequality, as well as risks to societal goals of equality or social policy. In these cases there is a need for reforms of anti-discrimination and data protection law, but also for societal deliberation on which kinds of algorithmic differentiation a society considers acceptable in order to protect fundamental rights and values. Last but not least, the study discusses tasks for anti-discrimination agencies and equality bodies, ranging from the identification and proof of algorithm-based discrimination to preventive and cooperative actions.
A Framework for Exploiting Emergent Behaviour to capture 'Best Practice' within a Programming Domain
Inspection is a formalised process for reviewing an artefact in software engineering.
It is proven to significantly reduce defects, to ensure that what is delivered is what is
required, and that the finished product is effective and robust.
Peer code review is a less formal inspection of code, normally classified as
inadequate or substandard Inspection. Although it has an increased risk of not
locating defects, it has been shown to improve the knowledge and programming
skills of its participants.
This thesis examines the process of peer code review, comparing it to Inspection,
and attempts to describe how an informal code review can improve the knowledge
and skills of its participants by deploying an agent oriented approach.
During a review the participants discuss defects, recommendations and solutions, or
more generally their own experience. It is this instant adaptability to new
information that gives the review process the ability to improve knowledge. This
observed behaviour can be described as the emergent behaviour of the group of
programmers during the review.
The wider distribution of knowledge is currently only performed by programmers
attending other reviews. To maximise the benefits of peer code review, a
mechanism is needed by which the findings from one team can be captured and
propagated to other reviews / teams throughout an establishment.
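One way such a capture-and-propagate mechanism could be organised is sketched below as a toy publish/subscribe channel between review teams. All names here are hypothetical; this is a stand-in for, not a description of, the dissertation's multi-agent prototype, which uses adaptive software agents rather than a fixed channel.

```python
# Illustrative sketch only: a toy publish/subscribe channel through which
# findings from one review team propagate to all other registered teams.
# Names are hypothetical; the prototype described in this thesis instead
# uses learning software agents as the communication substrate.

class ReviewChannel:
    def __init__(self):
        self.teams = []

    def register(self, team):
        self.teams.append(team)

    def publish(self, source, finding):
        # Propagate a finding to every team except the one that reported it.
        for team in self.teams:
            if team is not source:
                team.receive(finding)

class ReviewTeam:
    def __init__(self, name, channel):
        self.name = name
        self.knowledge = []
        channel.register(self)

    def report(self, channel, finding):
        self.knowledge.append(finding)
        channel.publish(self, finding)

    def receive(self, finding):
        self.knowledge.append(finding)

channel = ReviewChannel()
team_a = ReviewTeam("A", channel)
team_b = ReviewTeam("B", channel)
team_a.report(channel, "unchecked return value in I/O wrapper")
print(team_b.knowledge)  # the finding has propagated to team B
```

A static channel like this only moves findings around; the agent-based design described next is needed because the information traded is dynamic and the interactions are unstructured.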
A prototype multi-agent system is developed with the aim of capturing the emergent
properties of a team of programmers. As the interactions between the team members
are unstructured and the information traded is dynamic, a distributed adaptive system
is required to provide communication channels for the team and to provide a
foundation for the knowledge shared. Software agents are capable of adaptivity and
learning. Multi-agent systems are particularly effective at being deployed within
distributed architectures and are believed to be able to capture emergent behaviour.
The prototype system illustrates that the learning mechanism within the software
agents provides a solid foundation upon which the ability to detect defects can be
learnt. It also demonstrates that the multi-agent approach is well suited to providing
free-flowing communication of ideas between programmers, not only to achieve the
sharing of defects and solutions but also at a high enough level to capture social
information. It is assumed that this social information is a measure of one element of
the review process's emergent behaviour.
The system is capable of monitoring the team-perceived abilities of programmers,
those who are influential on the programming style of others, and the issues upon
which programmers agree or disagree. If the disagreements are classified as
unimportant or stylistic issues, can it not therefore be assumed that all agreements
are concepts of "Best Practice"?
The conclusion is reached that code review is not a substandard Inspection but is in
fact complementary to the Inspection model, as the latter improves the process of
locating and identifying bugs while the former improves the knowledge and skill of
the programmers, and therefore the chance of bugs not being encoded to start with.
The prototype system demonstrates that it is possible to capture best practice from a
review team and that agents are well suited to the task. The performance criteria of
such a system have also been captured.
The prototype system has also shown that a reliable level of learning can be attained
for a real world task. The innovative way of concurrently deploying multiple agents
which use different approaches to achieve the same goal shows remarkable
robustness when learning from small example sets.
The novel way in which autonomy is promoted within the agents' design but
constrained within the agent community allows the system to provide a sufficiently
flexible communications structure to capture emergent social behaviour, whilst
ensuring that the agents remain committed to their own goals.
Research and evidence based environmental health
Environmental health (EH) professionals have often spoken of the need to become more research active (Burke et al., 2002; McCarthy, 1996) and make their work more evidence based, but to date little has been written about how to achieve this in practice. This chapter is therefore written as an introductory guide to research for EH professionals, students, and policy makers. By developing your knowledge it is hoped you will feel more confident navigating the world of research; motivated towards making your own work more evidence based; and enthused about contributing to the evidence base from which others can learn. This chapter is not a research methods textbook, a step by step guide to research or evidence based environmental health, nor does it seek to make definitive statements about these complex areas. However it highlights the most important issues regarding research in environmental health, considers the importance of research to the environmental health profession and provides useful signposts towards further resources.
The chapter is divided into three sections. The first defines evidence based environmental health and explains why it remains a priority for EH professionals. The second section explores the key stages of environmental health research and provides guidance on developing your reading skills. The final section suggests ways to become more research active and evidence based, acknowledging the many challenges EH professionals face and concluding with a vision for evidence based environmental health. The chapter ends with an annex including a glossary of environmental health research terms, a list of references, and suggested further reading.