84 research outputs found
Towards the formalisation of use case maps
Formal specification of software systems has shown great promise. Criticism of the end
results of formal methods, that is, of their ability to produce quality software products, is rare. Instead,
reasons have been advanced to explain why the adoption of the technique in industry
remains limited. Some of the reasons are:
• Steep learning curve; formal techniques are said to be hard to use.
• Lack of a step-by-step construction mechanism and poor guidance.
• Difficulty of integrating the technique into existing software processes.
Z is arguably one of the most successful formal specification techniques, and it was extended to
Object-Z to accommodate object-orientation. The Z notation is based on first-order logic
and a strongly typed fragment of Zermelo-Fraenkel set theory. Some attempts have been
made to couple Z with semi-formal notations such as UML. However, the coupling of
Object-Z (and also Z) with the Use Case Maps (UCMs) notation remains to be explored.
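To give a flavour of the Z notation mentioned above, a minimal state schema and operation schema for a hypothetical scenario store might be sketched as follows (using the zed-csp LaTeX package; the given set, schema names and predicates are illustrative only, not taken from this dissertation):

```latex
% Requires the zed-csp (or fuzz) LaTeX package.
\begin{zed}
  [SCENARIO]
\end{zed}

% State schema: the set of known scenarios and the subset currently active.
\begin{schema}{ScenarioStore}
  scenarios : \power SCENARIO \\
  active : \power SCENARIO
\where
  active \subseteq scenarios
\end{schema}

% Operation schema: add a new scenario, leaving the active set unchanged.
\begin{schema}{AddScenario}
  \Delta ScenarioStore \\
  s? : SCENARIO
\where
  s? \notin scenarios \\
  scenarios' = scenarios \cup \{ s? \} \\
  active' = active
\end{schema}
```

The strongly typed declarations above the \where line and the predicates below it illustrate the set-theoretic style that the dissertation's transformation framework targets.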
A Use Case Map (UCM) is a scenario-based visual notation facilitating the requirements
definition of complex systems. A UCM may be generated either from a set of informal
requirements, or from use cases normally expressed in natural language. UCMs have the
potential to bring more clarity into the functional description of a system. It may furthermore
eliminate possible errors in the user requirements. But UCMs are not suitable to reason
formally about system behaviour.
In this dissertation, we aim to demonstrate that a UCM can be transformed into Z and
Object-Z, by providing a transformation framework. Through a case study, the impact of
using UCM as an intermediate step in the process of producing a Z and Object-Z specification
is explored. The aim is to improve on the constructivity of Z and Object-Z, provide more
guidance, and address the issue of integrating them into the existing software requirements
engineering process.
Computer Science: M.Sc. (Computer Science), D.Phil. (Computer Science)
3D geological models and their hydrogeological applications : supporting urban development : a case study in Glasgow-Clyde, UK
Urban planners and developers in some parts of the United Kingdom can now access geodata in an easy-to-retrieve and understandable format. 3D attributed geological framework models and associated GIS outputs, developed by the British Geological Survey (BGS), provide a predictive tool for planning site investigations for some of the UK's largest regeneration projects in the Thames and Clyde River catchments.
Using the 3D models, planners can get a 3D preview of properties of the subsurface using virtual cross-section and borehole tools in visualisation software, allowing critical decisions to be made before any expensive site investigation takes place, and potentially saving time and money. 3D models can integrate artificial and superficial deposits and bedrock geology, and can be used for recognition of major resources (such as water, thermal energy, and sand and gravel), for example in buried valleys, groundwater modelling and assessing impacts of underground mining. A preliminary groundwater recharge and flow model for a pilot area in Glasgow has been developed using the 3D geological models as a framework.
This paper focuses on the River Clyde and the Glasgow conurbation, and the BGS's Clyde Urban Super-Project (CUSP) in particular, which supports major regeneration projects in and around the City of Glasgow in the West of Scotland.
Formalising non-functional requirements embedded in user requirements notation (URN) models
The growing need for computer software in different sectors of activity (health, agriculture,
industry, education, aeronautics, science and telecommunications), together with
society's increasing reliance on information technology, is placing a heavy
and fast-growing demand on complex, high-quality software systems. In this regard,
expectations have centred on non-functional requirements (NFRs) engineering and formal methods.
Despite their common objective, these techniques have in most cases evolved separately.
NFRs engineering proceeds firstly by deriving measures to evaluate the quality of the constructed
software (product-oriented approach), and secondly by improving the engineering
process (process-oriented approach). With the ability to combine the analysis of both functional
and non-functional requirements, Goal-Oriented Requirements Engineering (GORE)
approaches have become the de facto leading requirements engineering methods. Through
refinement/operationalisation, they propose means to satisfy NFRs encoded in softgoals at an
early phase of software development. On the other hand, formal methods have so far kept
their promise to eliminate errors in software artefacts and produce high-quality software products,
and are therefore particularly solicited for safety- and mission-critical systems, for which
a single error may cause great loss, including the loss of human life.
This thesis introduces the concept of Complementary Non-functional action (CNF-action)
to extend the analysis and development of NFRs beyond the traditional goals/softgoals
analysis, based on refinement/operationalisation, and to propagate the influence of NFRs
to other software construction phases. Mechanisms are also developed to integrate the formal
technique Z/Object-Z into the standardised User Requirements Notation (URN), to
formalise GRL models describing functional and non-functional requirements, to propagate
CNF-actions of the formalised NFRs to UCMs, to facilitate the URN construction process,
and to improve the quality of URN models.
School of Computing: D.Phil. (Computer Science)
Visual language representation for use case evolution and traceability
The primary goal of this research is to assist non-technical stakeholders involved in requirements engineering with a comprehensible method for managing changing requirements within a specific domain. An important part of managing evolving requirements over time is maintaining a temporal ordering of the changes and supporting traceability of the modifications. This research defines a semi-formal syntactic and semantic definition of such a method using a visual language, RE/TRAC (Requirements Evolution with Traceability), and a supporting formal semantic notation, RE/TRAC-SEM. RE/TRAC-SEM is an ontological specification employing a combination of models, including verbal definitions, set theory and a string-language specification, RE/TRAC-CF. The language RE/TRAC-CF enables the separation of the syntactic description of the visual language from the semantic meaning of the model, permitting varying target representations and taking advantage of existing efficient parsing algorithms for context-free grammars. As an application of the RE/TRAC representation, this research depicts the hierarchical step-wise refinement of UML use case diagrams to demonstrate evolving system requirements. In the current arena of software development, where systems are described using platform-independent models (PIMs) that emphasize the front-end design process, requirements and design documents, including the use cases, have become the primary artifacts of the system. The management of requirements evolution has therefore become even more critical in the creation and maintenance of systems.
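The idea of a context-free string encoding that separates a visual language's syntax from its semantics can be sketched with a toy example. The grammar and the refinement-tree encoding below are invented for illustration and are not the actual RE/TRAC-CF grammar:

```python
# Toy context-free encoding of a use-case refinement tree, in the spirit of
# a string language like RE/TRAC-CF (the grammar here is illustrative only):
#
#   tree ::= NAME [ "(" tree { "," tree } ")" ]
#
# A simple recursive-descent parser turns the string into a tree that any
# target representation (rendering, analysis, traceability) can then walk.
import re

TOKEN = re.compile(r"[A-Za-z0-9_.]+|[(),]")

def parse(text):
    tokens = TOKEN.findall(text)
    pos = 0

    def node():
        nonlocal pos
        name = tokens[pos]
        pos += 1
        children = []
        if pos < len(tokens) and tokens[pos] == "(":
            pos += 1  # consume "("
            children.append(node())
            while pos < len(tokens) and tokens[pos] == ",":
                pos += 1  # consume ","
                children.append(node())
            pos += 1  # consume ")"
        return (name, children)

    return node()

# "UC1 is refined into UC1.1 and UC1.2; UC1.2 is further refined."
tree = parse("UC1(UC1.1,UC1.2(UC1.2.1))")
```

Because the parser produces a plain tree, the semantic interpretation (e.g. temporal ordering of changes) can be defined independently of the concrete string syntax, which is the separation the thesis attributes to RE/TRAC-CF.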
Hong Kong hospitals - the geographical implications of a hospital philosophy.
The pressures exerted on hospital facilities in Hong Kong
by an ageing population with increasing expectations
are compounded by continued population growth.
Hospitals have clearly failed to deal with rising demand
and, as a consequence, are commonly perceived to be in a
state of crisis. In this respect, most comment has centred
on the overall quantity and quality of provision,
assessed largely in terms of technical care and hotel
conditions.
This thesis highlights the additional issue of the spatial
inequality of provision in a rapidly changing urban scene.
In extending discussion to the "appropriateness" of new
hospital provision, the thesis examines the relationship
that hospitals have with their client populations. This
involves not only their geographical location, but also
their interaction with other health care providers in the
urban space and, most importantly, the roles which
hospitals have been assigned.
The thesis explores the link between the function of a
hospital and the principles on which the hospital system
is based, arguing that the system is not merely a product
of a particular politico-economic setting, but also of a
history of influences, not least of which has been the
need to mediate between the diverse cultures and
traditions of Hong Kong.
Guiding principles concerning the role and functioning of
hospitals can be collectively described as a "hospital
philosophy". Because it has arisen out of diverse
influences, such a guiding philosophy may be susceptible
to change, even though basic economic and political
relations remain essentially unaltered. Since a hospital
philosophy can affect location decisions and the way in
which the hospital interacts over space, any change in
philosophy may have spatial implications.
The thesis assesses the extent to which the philosophy can
be successfully altered from within the system by paying
particular attention to the relationship between one
hospital, which has proclaimed an alternative approach,
and the area which that hospital serves.
Also examined are the Government's own plans for changing
the operation of hospital services in the 1990s and their
spatial implications, assessing the extent to which they
reflect a significant change in outlook towards hospital
care.
Value-based Requirements Engineering: Exploring Innovative e-Commerce Ideas
Computer software is increasingly becoming part of the products and services that companies offer to their customers. Consider, for example, a music shop on the world-wide-web: software ensures that customers still have a shopping experience, even though they are not physically in the shop. Even better, a customer need not order a physical music CD but can listen to the purchased music immediately after the purchase. Special software then transports the music over the Internet and plays it directly in the consumer's home. Internet shops are one example of the commercial exploitation of an innovative technology such as the world-wide-web. Characteristic of such ideas is the strong coupling between business aspects (for example marketing and consumer behaviour) on the one hand and technological means on the other. The central question in my dissertation is how such new, technology-intensive business ideas can be developed. To study this question, I worked for an extended period at a large Dutch consultancy firm and at Cisco Systems, a company often cited as a textbook example of the commercial exploitation of the Internet. This consultancy context gave me the opportunity to explore a number of business ideas in practice and to learn how such projects can be carried out. In addition, during my research I was affiliated part-time with the Vrije Universiteit Amsterdam in order to analyse, study and structure these projects. The result is a practical method, called e3value, that can be used to explore new e-business ideas. In short, my method examines the economic and technical feasibility of an e-business idea.
Recent e-business history has made clear that many ideas have failed, and have even led to bankruptcy, because the offered service or product could hardly be exploited commercially or technically. Characteristic of e3value is that we construct three models of the e-business idea. A model is a somewhat more formal description of an often vaguely formulated idea. Such a more formal description enforces precision, which contributes to a better understanding of the original e-business idea. Another advantage is that it becomes possible to reason (automatically) about, for example, the potential for profitability. The three models each describe an important aspect of an e-business idea: the creation of economic value, the business processes required for it, and the software components that play a role in them. With this last aspect, e3value does justice to the importance of information technology in innovative e-business ideas. Besides the e3value method itself, my dissertation discusses and analyses a number of innovative e-business ideas. The main merits of e3value turn out to be a clear exposition of the idea and the possibility of reasoning about its business potential.
Akkermans, J.M. [Promotor]
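The kind of automatic profitability reasoning that a value model enables can be sketched in a few lines. The actors, value objects and amounts below are invented examples in the spirit of the music-shop scenario, not taken from the actual e3value tooling:

```python
# Minimal sketch of net-value reasoning over a set of value exchanges,
# in the spirit of an e3value value model. All actors and amounts are
# invented for illustration.
from collections import defaultdict

# (from_actor, to_actor, value_object, monetary_value_in_euros)
exchanges = [
    ("customer", "music_shop", "payment", 10.0),
    ("music_shop", "customer", "track_download", 0.0),
    ("music_shop", "rights_holder", "royalty", 4.0),
]

def net_value(exchanges):
    """Net monetary inflow per actor, summed over all value exchanges."""
    balance = defaultdict(float)
    for src, dst, _obj, amount in exchanges:
        balance[src] -= amount  # value flows out of the source actor
        balance[dst] += amount  # and into the destination actor
    return dict(balance)

result = net_value(exchanges)
```

An actor whose balance stays negative across all scenarios has no economic incentive to participate, which is exactly the kind of early feasibility check the abstract describes.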
Automated Testing: Requirements Propagation via Model Transformation in Embedded Software
Testing is the most common activity for validating software systems and plays a key role in the software development process. In general, the software testing phase takes around 40-70% of the effort, time and cost. The area has been well researched over a long period of time. Unfortunately, while many researchers have found methods of reducing time and cost during the testing process, a number of important related issues, such as generating test cases from UCM scenarios and validating them, still need to be researched.
As a result, ensuring that embedded software behaves correctly is non-trivial, especially when testing with limited resources and seeking compliance with safety-critical software standards. It thus becomes imperative to adopt an approach or methodology based on tools and best engineering practices to improve the testing process. This research addresses the problem of testing embedded software with limited resources as follows.
First, a reverse-engineering technique is exercised on legacy software tests, aiming to discover a feasible transformation from the test layer to the test-requirement layer. The feasibility of transforming the legacy test cases into an abstract model is shown, along with a forward-engineering process to regenerate the test cases in a selected test language.
Second, a new model-driven testing technique based on different granularity levels (MDTGL) is introduced to generate test cases. The new approach uses models to manage the
complexity of the system under test (SUT). Automatic model transformation is applied to automate test-case development, which is a tedious, error-prone and recurrent software development task.
Third, the model transformations that automate the development of test cases in the MDTGL methodology are validated against an industrial testing process using an embedded software specification. To enable the validation, a set of timed and functional requirements is introduced. Two case studies are run on an embedded system to generate test cases. The effectiveness of the two testing approaches is determined and contrasted according to the generation of test cases and the correctness of the generated workflow. Compared to several techniques, our new approach generated useful and effective test cases with far fewer resources in terms of time and labour.
Finally, to enhance the applicability of MDTGL, the methodology is extended with the creation of a trace model that records traceability links among the generated testing artifacts. These links, often mandated by software development standards, enable support for traceability visualization, model-based coverage analysis and result evaluation.
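The core pattern of the last two steps, generating test cases from a requirements model while recording traceability links, can be sketched as follows. The requirement IDs, steps and naming scheme are invented for illustration and are not the actual MDTGL transformation:

```python
# Sketch of model-to-test transformation with a trace model, in the spirit
# of (but not identical to) the MDTGL approach. All IDs and steps are
# invented examples.

# Simplified requirements model: requirement ID -> ordered scenario steps.
requirements = {
    "REQ-1": ["power_on", "self_test"],
    "REQ-2": ["power_on", "send_frame", "await_ack"],
}

def generate_tests(reqs):
    """Derive one test case per requirement and record traceability links."""
    tests, trace = [], []
    for i, (req_id, steps) in enumerate(sorted(reqs.items()), start=1):
        test_id = f"TC-{i}"
        tests.append({"id": test_id, "steps": steps})
        trace.append((req_id, test_id))  # link: requirement -> generated test
    return tests, trace

tests, trace = generate_tests(requirements)
```

Keeping the trace pairs alongside the generated tests is what later enables coverage analysis (which requirements lack tests) and result evaluation (which requirement a failing test implicates).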
Finding the pathology of major depression through effects on gene interaction networks
The disease signature of major depressive disorder is distributed across multiple physical scales and investigative specialties, including genes, cells and brain regions. No single mechanism or pathway currently implicated in depression can reproduce its diverse clinical presentation, which compounds the difficulty in finding consistently disrupted molecular functions. We confront these key roadblocks to depression research - multi-scale and multi-factor pathology - by conducting parallel investigations at the levels of genes, neurons and brain regions, using transcriptome networks to identify collective patterns of dysfunction. Our findings highlight how the collusion of multi-system deficits can form a broad-based, yet variable pathology behind the depressed phenotype. For instance, in a variant of the classic lethality-centrality relationship, we show that in neuropsychiatric disorders including major depression, differentially expressed genes are pushed out to the periphery of gene networks. At the level of cellular function, we develop a molecular signature of depression based on cross-species analysis of human and mouse microarrays from depression-affected areas, and show that these genes form a tight module related to oligodendrocyte function and neuronal growth/structure. At the level of brain-region communication, we find a set of genes and hormones associated with the loss of feedback between the amygdala and anterior cingulate cortex, based on a novel assay of interregional expression synchronization termed "gene coordination". These results indicate that in the absence of a single pathology, depression may be created by dysynergistic effects among genes, cell-types and brain regions, in what we term the "floodgate" model of depression. 
Beyond our specific biological findings, these studies indicate that gene interaction networks are a coherent framework in which to understand the faint expression changes found in depression and other complex neuropsychiatric disorders.
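The periphery effect described above, differentially expressed genes sitting at less connected positions in the network, can be illustrated with a toy connectivity comparison. The genes and edges below are invented; real analyses use genome-scale networks and proper centrality statistics:

```python
# Toy illustration of the "pushed to the periphery" observation: compare the
# average degree (number of interaction partners) of differentially
# expressed (DE) genes with the rest of a gene interaction network.
# Gene names and edges are invented for illustration.

edges = [
    ("HUB1", "G1"), ("HUB1", "G2"), ("HUB1", "G3"), ("HUB1", "DE1"),
    ("G1", "G2"), ("G2", "G3"),
    ("DE2", "G3"),
]
de_genes = {"DE1", "DE2"}

def degrees(edges):
    """Degree of every node in an undirected edge list."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

deg = degrees(edges)
de_avg = sum(deg[g] for g in de_genes) / len(de_genes)
others = [g for g in deg if g not in de_genes]
other_avg = sum(deg[g] for g in others) / len(others)
# In this toy network the DE genes have markedly lower average degree,
# i.e. they occupy the periphery rather than the hubs.
```

This is a variant of the lethality-centrality comparison the abstract mentions: instead of asking whether essential genes are hubs, it asks whether disease-associated genes avoid them.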
Research and technology: 1994 annual report of the John F. Kennedy Space Center
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, the John F. Kennedy Space Center is placing increasing emphasis on its advanced technology development program. This program encompasses the efforts of the Engineering Development Directorate laboratories, most of the KSC operations contractors, academia, and selected commercial industries - all working in a team effort within their own areas of expertise. This edition of the Kennedy Space Center Research and Technology 1994 Annual Report covers efforts of all these contributors to the KSC advanced technology development program, as well as our technology transfer activities. The Technology Programs and Commercialization Office (DE-TPO), (407) 867-3017, is responsible for publication of this report and should be contacted for any desired information regarding the advanced technology program
Monocular depth estimation in images and sequences using occlusion cues
When humans observe a scene, they are able to perfectly distinguish the different parts composing it. Moreover, humans can easily reconstruct the spatial position of these parts and conceive a consistent structure. The mechanisms involving visual perception have been studied since the beginning of neuroscience but, still today, not all the processes composing it are known.
In usual situations, humans can make use of three different methods to estimate the scene structure. The first is the so-called divergence, which makes use of both eyes. When objects lie in front of the observer at distances of up to a hundred metres, subtle differences in the image formed in each eye can be used to determine depth. When objects are not in the field of view of both eyes, other mechanisms must be used. In these cases, both visual cues and previously learned information can be used to determine depth. Even though these mechanisms are less accurate than divergence, humans can almost always infer the correct depth structure when using them. As examples of visual cues, occlusion, perspective or object size provide a great deal of information about the structure of the scene. A priori information depends on each observer, but it is normally used subconsciously by humans to recognise commonly known regions such as the sky, the ground or different types of objects.
In recent years, as technology has become able to handle the processing burden of vision systems, much effort has been devoted to designing automated scene-interpretation systems. In this thesis we address the problem of depth estimation using only one point of view and only occlusion depth cues. The objective is to detect the occlusions present in the scene and combine them with a segmentation system so as to generate a relative depth-order map for the scene. We explore both static and dynamic situations: single images, frames within sequences, and full video sequences. For the case where a full image sequence is available, a system exploiting motion information to recover the depth structure is also designed. Results are promising and competitive with the state-of-the-art literature, but there is still much room for improvement when compared to human depth perception.
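The basic inference behind occlusion cues, "if region A occludes region B, then A is closer than B", amounts to a topological sort of the pairwise occlusion relations. A minimal sketch, with invented regions and cues (not the thesis's actual segmentation-based system):

```python
# Sketch: turn pairwise occlusion cues into a relative depth order.
# If A occludes B, A is in front of B; a topological sort of the occlusion
# graph yields a consistent front-to-back ordering (assuming the cues are
# acyclic). Regions and relations below are invented for illustration.
from graphlib import TopologicalSorter

# Each region maps to the set of regions it occludes.
occludes = {
    "person": {"car"},
    "car": {"building"},
    "building": {"sky"},
}

# graphlib treats the mapped set as *predecessors*, so feeding the
# occluder -> occluded map directly makes static_order() run back-to-front;
# reversing it gives the front-to-back depth order.
order = list(TopologicalSorter(occludes).static_order())
front_to_back = list(reversed(order))
```

With inconsistent (cyclic) cues, `TopologicalSorter` raises `CycleError`, which is one way such a system can detect that its occlusion detections contradict each other.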