    Towards an assessment framework of reuse: A Knowledge Level Analysis Approach

    Assessing whether a software component is suitable for reuse is a complex process, even though software systems are typically developed as assemblies of existing components. The complexity stems from a lack of clarity on how to compare the cost of adapting an existing component against the cost of developing it from scratch. The pursuit of reuse can often lead to excessive rework and adaptation, or to developing suites of components that end up neglected. This paper is an important step towards modelling the complex reuse assessment process. To assess the success factors that can underpin reuse, we analyze the cognitive factors that underlie developers' behavior during their decision-making when attempting to reuse. This analysis is the first building block of a broader aim to synthesize a framework that institutes activities during the software development lifecycle to support reuse.
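
    The abstract frames reuse assessment as a cost comparison between adapting an existing component and building anew, but does not give a formula. The sketch below is only an illustrative decision rule under assumed cost inputs (understanding, modification, integration, testing effort); it is not the authors' model.

        # Illustrative only: a naive reuse-vs-build comparison, NOT the paper's model.
        # All cost fields are hypothetical inputs a team might estimate in person-days.
        from dataclasses import dataclass

        @dataclass
        class AdaptationCost:
            understand: float   # effort to understand the existing component
            modify: float       # effort to change it to fit new requirements
            integrate: float    # effort to wire it into the target system
            test: float         # effort to re-verify the adapted component

            def total(self) -> float:
                return self.understand + self.modify + self.integrate + self.test

        def prefer_reuse(adapt: AdaptationCost, build_from_scratch: float,
                         margin: float = 1.0) -> bool:
            """Reuse only if adaptation is cheaper than building, with a safety margin
            (margin > 1 penalises reuse to reflect the rework risk the abstract notes)."""
            return adapt.total() * margin < build_from_scratch

        # Example: adaptation costs 14 person-days vs. 20 to build; reuse wins even with a 1.2 margin.
        print(prefer_reuse(AdaptationCost(3, 5, 4, 2), 20.0, margin=1.2))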

    Centering ontologies in agent oriented software engineering processes

    Real-time task attributes and temporal constraints

    Real-time tasks need attributes for monitoring their execution and performing recovery actions in case of failures. Temporal constraints are a class of real-time task attributes that relate the status of the task to temporal entities. Violating temporal constraints can produce consequences of unknown severity. This paper is part of our ongoing research on constraints in real-time multi-agent systems. We discuss the importance of temporal constraints and present a task model that explicitly represents them. We also present preliminary results from our initial implementation in the domain of Meeting Schedules Management involving multiple users assisted by agents.
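
    The abstract describes a task model that makes temporal constraints explicit by relating a task's status to temporal entities such as deadlines. The paper's actual model is not reproduced here; the following is a minimal sketch of one plausible representation, with all field names assumed.

        # Minimal sketch of a task record with an explicit temporal constraint.
        # Field names (deadline, started_at, status) are assumptions, not the paper's model.
        from dataclasses import dataclass
        from datetime import datetime, timedelta

        @dataclass
        class RealTimeTask:
            name: str
            started_at: datetime
            deadline: datetime          # temporal entity the task status is checked against
            status: str = "running"     # e.g. "running", "done", "failed"

            def violates_deadline(self, now: datetime) -> bool:
                """A temporal constraint: the task must be done before its deadline."""
                return self.status != "done" and now > self.deadline

            def recover(self) -> None:
                """Placeholder recovery action triggered when the constraint is violated."""
                self.status = "failed"

        start = datetime(2024, 1, 1, 9, 0)
        task = RealTimeTask("confirm meeting room", start, start + timedelta(minutes=30))
        if task.violates_deadline(start + timedelta(minutes=45)):
            task.recover()
        print(task.status)  # -> "failed"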

    DM model transformations framework

    Metamodelling produces a 'metamodel' capable of generalizing the domain. A metamodel gathers all domain concepts and their relationships and enables partitioning a domain problem into sub-problems. Decision makers can then develop a variety of domain solution models by mixing and matching solutions to the sub-problems identified using the metamodel. A repository of domain knowledge structured using the metamodel would allow models generated at a higher level to be transformed to a lower level according to the scope of the problem at hand. In this paper, we show how a process of mixing and matching disaster management actions can be accomplished using our Disaster Management Metamodel (DMM). The paper describes DM model transformations underpinned by DMM and illustrates how they benefit DM users in creating appropriate DM solution models from extant partial solutions.
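
    The abstract describes transforming models from a higher abstraction level to a lower one, guided by a metamodel. DMM itself is not reproduced here; the sketch below only illustrates the general idea of a rule-driven refinement step, with the concept names and the mapping invented for illustration.

        # Illustrative sketch of a metamodel-guided refinement step (not DMM itself).
        # The high-level concepts and their lower-level refinements are invented examples.
        REFINEMENTS = {
            # high-level DM concept  ->  lower-level actions it expands into
            "Evacuation": ["IdentifyEvacuationRoutes", "NotifyResidents", "StageTransport"],
            "FloodWarning": ["MonitorRiverGauges", "IssueAlert"],
        }

        def transform(high_level_model: list[str]) -> list[str]:
            """Expand each high-level concept into lower-level actions where a mapping exists."""
            low_level_model = []
            for concept in high_level_model:
                low_level_model.extend(REFINEMENTS.get(concept, [concept]))
            return low_level_model

        print(transform(["FloodWarning", "Evacuation"]))
        # -> ['MonitorRiverGauges', 'IssueAlert', 'IdentifyEvacuationRoutes', ...]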

    Computationally efficient ontology selection in software requirement planning

    Understanding the needs of stakeholders and prioritizing requirements are vital steps in the development of any software application, and tools that support these steps play a critical role in the success of the resulting software. Given that critical role, this paper presents a computationally efficient approach to ontology selection in software requirement planning. The key point guiding the underlying design is that, once gathered, requirements need to be processed by decomposition towards the generation of a specified system design. A representational framework allows the expression of high-level abstract conceptions under a single schema, which may then be made explicit in terms of axiomatic relations and expressed in a suitable ontology. Initial experimental results indicate that our framework for filtered selection of a suitable ontology operates in a computationally efficient manner.
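
    The abstract claims a computationally efficient, filtered selection of a suitable ontology but does not spell out the filtering criteria. The snippet below is only a generic sketch of a cheap filter-then-rank pipeline; scoring by requirement-term coverage is an assumed, illustrative criterion, not the paper's algorithm.

        # Generic filter-then-rank sketch for ontology selection (not the paper's algorithm).
        # Scoring by requirement-term coverage is an assumed, illustrative criterion.
        def select_ontology(requirement_terms: set[str],
                            ontologies: dict[str, set[str]],
                            min_coverage: float = 0.3) -> str | None:
            """Cheaply filter ontologies by term coverage, then return the best-covering one."""
            best_name, best_score = None, 0.0
            for name, concepts in ontologies.items():
                score = len(requirement_terms & concepts) / max(len(requirement_terms), 1)
                if score >= min_coverage and score > best_score:
                    best_name, best_score = name, score
            return best_name

        ontologies = {
            "booking-ontology": {"reservation", "customer", "payment", "room"},
            "logistics-ontology": {"shipment", "route", "vehicle"},
        }
        print(select_ontology({"customer", "payment", "invoice"}, ontologies))
        # -> 'booking-ontology'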

    Learning and discovery in incremental knowledge acquisition

    Knowledge Based Systems (KBS) have been actively investigated since the early period of AI. There are four common methods of building expert systems: modeling approaches, programming approaches, case-based approaches and machine-learning approaches. One particular technique is Ripple Down Rules (RDR), which may be classified as an incremental case-based approach. Knowledge needs to be acquired from experts in the context of individual cases viewed by them. In the RDR framework, the expert adds a new rule based on the context of an individual case. This task is simple and affects the expert's workflow only minimally. The rule added fixes an incorrect interpretation made by the KBS but with minimal impact on the KBS's previous correct performance. This provides incremental improvement. Despite these strengths of RDR, there are some limitations, including rule redundancy, lack of intermediate features and lack of models. This thesis addresses these RDR limitations by applying automatic learning algorithms to reorganize the knowledge base, to learn intermediate features and possibly to discover domain models. The redundancy problem occurs because rules are created in particular contexts when they should have more general application. We address this limitation by reorganizing the knowledge base and removing redundant rules. Removal of redundant rules should also reduce the number of future knowledge acquisition sessions. Intermediate features improve modularity, because the expert can deal with features in groups rather than individually. In addition to the manual creation of intermediate features for RDR, we propose the automated discovery of intermediate features to speed up the knowledge acquisition process by generalizing existing rules. Finally, the Ripple Down Rules approach facilitates rapid knowledge acquisition as it can be initialized with a minimal ontology. Despite this minimal modeling, we propose that a more developed knowledge model can be extracted from an existing RDR KBS. This may be useful when applying an RDR KBS to other applications. The most useful of these three developments was the automated discovery of intermediate features, which made a significant difference to the number of knowledge acquisition sessions required.
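
    Ripple Down Rules stores knowledge as rules with exception rules added in the context of the case that exposed an error, so a wrong conclusion is patched locally without disturbing earlier correct behaviour. The thesis's exact structures are not reproduced here; the following is a minimal, generic single-classification RDR-style sketch to make the incremental-correction idea concrete.

        # Minimal single-classification Ripple Down Rules sketch (illustrative, generic;
        # not the thesis implementation). A rule fires if its condition holds; a firing
        # rule defers to the first of its exception rules that also fires.
        class RDRNode:
            def __init__(self, condition, conclusion, cornerstone=None):
                self.condition = condition      # function: case(dict) -> bool
                self.conclusion = conclusion    # label returned if this rule gives the answer
                self.cornerstone = cornerstone  # the case that motivated adding this rule
                self.exceptions = []            # refinement rules added later

            def classify(self, case):
                if not self.condition(case):
                    return None
                for exc in self.exceptions:
                    result = exc.classify(case)
                    if result is not None:
                        return result           # an exception overrides this rule
                return self.conclusion

            def add_exception(self, condition, conclusion, cornerstone):
                """Incremental knowledge acquisition: patch a wrong conclusion in context."""
                self.exceptions.append(RDRNode(condition, conclusion, cornerstone))

        # Default rule: everything is "normal" unless an exception says otherwise.
        root = RDRNode(lambda c: True, "normal")
        root.add_exception(lambda c: c["temp"] > 38.0, "fever", cornerstone={"temp": 39.2})
        print(root.classify({"temp": 36.8}))  # -> normal
        print(root.classify({"temp": 39.5}))  # -> fever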

    Customising agent based analysis towards analysis of disaster management knowledge

    © 2016 Dedi Iskandar Inan, Ghassan Beydoun and Simon Opper. In developed countries such as Australia, recurring disasters (e.g. floods) are covered by dedicated document repositories of Disaster Management Plans (DISPLANs), and by supporting doctrine and processes used to prepare organisations and communities for disasters. These are maintained on an ongoing cyclical basis and form a key information source for community education, engagement and awareness programmes in the preparation for and mitigation of disasters. DISPLANs, generally in semi-structured text document format, are then accessed and activated during the response and recovery to incidents to coordinate emergency service and community safety actions. However, accessing the appropriate plan and the specific knowledge within the text document across its conceptual areas in a timely manner, and sharing activities between stakeholders, requires intimate domain knowledge of the plan contents and their development. This paper describes progress on an ongoing project with the NSW State Emergency Service (NSW SES) to convert DISPLANs into a collection of knowledge units stored in a unified repository, with the goal of forming the basis of a future knowledge sharing capability. All Australian emergency services, covering a wide range of hazards, develop DISPLANs of varying structure and intent; in general the plans are created as instances of a template, for example those developed centrally under the NSW and Victorian SES state planning policies. In this paper, we illustrate how, by using selected templates as part of an elaborate agent-based process, we can apply agent-oriented analysis more efficiently to convert extant DISPLANs into a centralised repository. The repository is structured as a layered abstraction according to the Meta Object Facility (MOF). The work is illustrated using DISPLANs along the flood-prone Murrumbidgee River in central NSW.

    Arctic smoke - aerosol characteristics during a record smoke event in the European Arctic and its radiative impact

    In early May 2006 a record high air pollution event was observed at Ny-Ålesund, Spitsbergen. An atypical weather pattern established a pathway for the rapid transport of biomass burning aerosols from agricultural fires in Eastern Europe to the Arctic. Atmospheric stability was such that the smoke was constrained to low levels, within 2 km of the surface, during the transport. A description of this smoke event in terms of transport and main aerosol characteristics can be found in Stohl et al. (2007). This study puts emphasis on the radiative effect of the smoke. The aerosol number size distribution was characterised by lognormal parameters, with an accumulation mode centered around 165–185 nm and a geometric standard deviation of almost 1.6. Nucleation and small Aitken mode particles were almost completely suppressed within the smoke plume measured at Ny-Ålesund. Chemical and microphysical aerosol information obtained at Mt. Zeppelin (474 m a.s.l.) was used to derive input parameters for a one-dimensional radiation transfer model to explore the radiative effects of the smoke. The daily mean heating rate calculated on 2 May 2006 for the average size distribution and measured chemical composition reached 0.55 K day⁻¹ at 0.5 km altitude for the assumed external mixture of the aerosols, with much higher heating rates (1.7 K day⁻¹) for an internal mixture. In comparison, a case study for March 2000, using the regional climate model HIRHAM, showed that the local climatic effect of Arctic haze amounts to a maximum of 0.3 K day⁻¹ of heating at 2 km altitude (Treffeisen et al., 2005).
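
    The abstract characterises the smoke by a lognormal accumulation mode (median diameter roughly 165–185 nm, geometric standard deviation about 1.6). For reference, the standard lognormal number size distribution these parameters plug into is given below; the numerical values come from the abstract, while the functional form is the usual textbook expression rather than anything specific to this study.

        % Standard lognormal number size distribution for one aerosol mode:
        % N is the mode number concentration, D_g the median (geometric mean) diameter
        % (about 165--185 nm here), and \sigma_g the geometric standard deviation (about 1.6).
        \[
        \frac{dN}{d\ln D} \;=\; \frac{N}{\sqrt{2\pi}\,\ln\sigma_g}
        \exp\!\left[-\frac{(\ln D - \ln D_g)^2}{2\,\ln^2\sigma_g}\right]
        \]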

    Document management and retrieval for specialised domains : an evolutionary user-based approach

    Browsing marked-up documents by traversing hyperlinks has become probably the most important means by which documents are accessed, both via the World Wide Web (WWW) and organisational Intranets. However, there is a pressing demand for document management and retrieval systems that deal appropriately with the massive number of documents available. There are two classes of solution: general search engines, whether for the WWW or an Intranet, which make little use of specific domain knowledge; and hand-crafted specialised systems, which are costly to build and maintain. The aim of this thesis was to develop a document management and retrieval system suitable for small communities as well as individuals in specialised domains on the Web, allowing users to easily create and maintain their own organisation of documents while ensuring continual improvement in the retrieval performance of the system as it evolves. The system developed is based on the free annotation of documents by users and is browsed using the concept lattice of Formal Concept Analysis (FCA). A number of annotation support tools were developed to aid the annotation process so that a suitable system evolved. Experiments were conducted in using the system to assist in finding staff and student home pages at the School of Computer Science and Engineering, University of New South Wales. Results indicated that the annotation tools provided a good level of assistance, so that documents were easily organised, and that a lattice-based browsing structure that evolves in an ad hoc fashion provided good retrieval performance. An interesting result suggested that although an established external taxonomy can be useful in proposing annotation terms, users appear to be very selective in their use of the terms proposed. Results also supported the hypothesis that the concept lattice of FCA helped take users beyond a narrow search to find other useful documents. In general, lattice-based browsing was considered a more helpful method than Boolean queries or hierarchical browsing for searching a specialised domain. We conclude that the concept lattice of Formal Concept Analysis, supported by annotation techniques, is a useful way of supporting the flexible, open management of documents required by individuals, small communities and specialised domains. It seems likely that this approach can be readily integrated with other developments, such as further improvements in search engines and the use of semantically marked-up documents, and provide a unique advantage in supporting autonomous management of documents by individuals and groups, in a way that is closely aligned with the autonomy of the WWW.
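
    The thesis browses annotated documents via the concept lattice of Formal Concept Analysis, in which each formal concept pairs a set of documents with the set of annotation terms they all share. The small sketch below computes such concepts from a toy document-term context using the standard derivation operators; it is only a generic FCA illustration with invented example data, not the thesis system.

        # Toy Formal Concept Analysis: enumerate the formal concepts of a document/term context.
        # Generic illustration of the standard derivation operators, not the thesis system.
        from itertools import chain, combinations

        context = {  # document -> set of annotation terms (invented example data)
            "staff_homepage": {"person", "staff", "research"},
            "student_homepage": {"person", "student"},
            "project_page": {"research", "project"},
        }

        def common_terms(docs):                        # A' : terms shared by all docs in A
            return set.intersection(*(context[d] for d in docs)) if docs else \
                   set().union(*context.values())

        def docs_with(terms):                          # B' : docs containing all terms in B
            return {d for d, t in context.items() if terms <= t}

        def concepts():
            """A formal concept is a (docs, terms) pair closed under both operators."""
            found = set()
            for docs in chain.from_iterable(combinations(context, r)
                                            for r in range(len(context) + 1)):
                terms = common_terms(set(docs))
                extent = frozenset(docs_with(terms))
                found.add((extent, frozenset(terms)))
            return found

        for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
            print(sorted(extent), "<->", sorted(intent))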