1,217 research outputs found

    Speak Clearly, If You Speak at All; Carve Every Word Before You Let It Fall: Problems of Ambiguous Terminology in eLearning System Development

    This paper addresses issues associated with the development of eLearning software systems. The development of software systems in general is a highly complex process, and a number of methodologies and models have been developed to help address some of these complexities. Generally, the first stage in most development processes is the gathering of requirements, which involves elicitation from end-users. This process is made more complex by problems associated with ambiguous terminology. Types of ambiguous terminology include homonymous, polysemous and inaccurate terms. This range of ambiguous terminology can cause significant misunderstandings in the requirements gathering process, which in turn can lead to software systems that do not meet the requirements of the end-users. This research explores some of the more common terms that can be ambiguously interpreted in the development of eLearning systems, and suggests software engineering approaches to help alleviate the potentially erroneous outcomes of these ambiguities.
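    The abstract's core problem, a single term carrying several domain senses, can be made concrete with a small sketch. All terms and senses below are invented for illustration; the paper's actual term list is not reproduced here.

```python
# Hypothetical glossary of eLearning terms, each mapped to its possible senses.
# A term with more than one sense is a candidate for misinterpretation during
# requirements elicitation.
GLOSSARY = {
    "course": ["a single module of study", "a full programme of study"],
    "assessment": ["a graded examination", "any feedback activity"],
    "tutor": ["a human instructor", "an automated help agent"],
    "module": ["a unit of teaching", "a software component"],
}

def flag_ambiguous_terms(requirement: str) -> dict:
    """Return glossary terms in the requirement that carry more than one sense."""
    words = {w.strip(".,;").lower() for w in requirement.split()}
    return {term: senses for term, senses in GLOSSARY.items()
            if term in words and len(senses) > 1}

flags = flag_ambiguous_terms("The tutor assigns an assessment for each module.")
# "tutor", "assessment" and "module" are flagged for disambiguation with the client.
```

    A shared, agreed glossary of this kind is one of the simpler software engineering mitigations the abstract alludes to.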

    Information systems flexibility


    A critical analysis of two refactoring tools

    This study provides a critical analysis of refactoring by surveying the refactoring tools in IDEA and Eclipse. Ways are discussed to locate targets for refactorings, via detection of code smells from static code analysis in IDEA and during the compilation process in Eclipse. New code smells are defined, as well as the refactorings needed to remove them. The impacts the code smells have on design are well documented. Considerable effort is made to describe how these code smells and their refactorings can be used to improve design. Practical methods are provided to detect code smells in large projects such as Sun’s JDK. The methodology includes a classification scheme to categorise code smells by their value and complexity, to handle large projects more efficiently. Additionally, a detailed analysis is performed on the evolution of the JDK from a maintainability point of view; code smells are used to measure maintainability in this instance. Dissertation (MSc (Computer Science))--University of Pretoria, 2008.
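    As an illustration of the kind of static detection the abstract describes, here is a minimal, hypothetical detector for the classic "Long Method" smell. The threshold is invented, and this is not the dissertation's Java-based tooling, only a sketch of the idea of locating refactoring targets by static analysis.

```python
import ast

def long_methods(source: str, threshold: int = 20):
    """Return (name, line_count) for functions whose bodies exceed threshold lines."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Span of the function body, from first to last statement.
            span = node.body[-1].end_lineno - node.body[0].lineno + 1
            if span > threshold:
                smells.append((node.name, span))
    return smells

# A 25-statement function trips the detector; a one-line function does not.
smelly = "def f():\n" + "\n".join(f"    step_{i} = {i}" for i in range(25))
findings = long_methods(smelly)  # -> [("f", 25)]
```

    A classification scheme like the one the study proposes would then rank such findings by value and complexity before any refactoring (e.g. Extract Method) is applied.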

    Bioinformatic Investigations Into the Genetic Architecture of Renal Disorders

    Modern genomic analysis has a significant bioinformatic component due to the high volume of complex data that is involved. During investigations into the genetic components of two renal diseases, we developed two software tools. // Genome-Wide Association Studies (GWAS) datasets may be genotyped on different microarrays and subject to different annotation, leading to a mosaic case-control cohort that has inherent errors, primarily due to strand mismatching. Our software REMEDY seeks to detect and correct strand designation of input datasets, as well as filtering for common sources of noise such as structural and multi-allelic variants. We performed a GWAS on a large cohort of Steroid-sensitive nephrotic syndrome samples; the mosaic input datasets were pre-processed with REMEDY prior to merging and analysis. Our results show that REMEDY significantly reduced noise in GWAS output results. REMEDY outperforms existing software as it has significantly more features available such as auto-strand designation detection, comprehensive variant filtering and high-speed variant matching to dbSNP. // The second tool supported the analysis of a newly characterised rare renal disorder: Polycystic kidney disease with hyperinsulinemic hypoglycemia (HIPKD). Identification of the underlying genetic cause led to the hypothesis that a change in chromatin looping at a specific locus affected the aetiology of the disease. We developed LOOPER, a software suite capable of predicting chromatin loops from ChIP-Seq data to explore the possible conformations of chromatin architecture in the HIPKD genomic region. LOOPER predicted several interesting functional and structural loops that supported our hypothesis. We then extended LOOPER to visualise ChIA-PET and ChIP-Seq data as a force-directed graph to show experimental structural and functional chromatin interactions. Next, we re-analysed the HIPKD region with LOOPER to show experimentally validated chromatin interactions. 
We first confirmed our original predicted loops and subsequently discovered that the local genomic region has many more chromatin features than first thought.
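    The strand-designation problem REMEDY addresses can be sketched in outline. This is an assumption-laden toy, not REMEDY's interface: a variant genotyped on the opposite strand reports the complement of the reference alleles, so flipping restores agreement, while A/T and G/C variants are strand-ambiguous and are typically filtered out.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def flip(alleles):
    """Complement both alleles, i.e. re-express them on the opposite strand."""
    return tuple(COMPLEMENT[a] for a in alleles)

def reconcile(dataset_alleles, reference_alleles):
    """Return ('ok' | 'flip' | 'drop', alleles) for one biallelic variant."""
    if set(dataset_alleles) in ({"A", "T"}, {"G", "C"}):
        return "drop", dataset_alleles          # strand-ambiguous: filter out
    if set(dataset_alleles) == set(reference_alleles):
        return "ok", dataset_alleles            # already on the reference strand
    if set(flip(dataset_alleles)) == set(reference_alleles):
        return "flip", flip(dataset_alleles)    # opposite strand: complement
    return "drop", dataset_alleles              # genuine mismatch: filter out
```

    Applying a rule of this shape to every variant in each input dataset before merging is what keeps a mosaic multi-microarray cohort from introducing systematic strand noise into the GWAS.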

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirement for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992) and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding. Components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects. The overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment.
Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
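    The central idea, holding bindings in a configuration model outside component implementations so that policy can change them at runtime, can be sketched as follows. The registry and component names are invented for illustration and are not the thesis's actual tools.

```python
class Registry:
    """A toy configuration model: maps required interfaces to current providers."""

    def __init__(self):
        self._providers = {}   # interface name -> component instance

    def register(self, interface, component):
        self._providers[interface] = component

    def bind(self, interface):
        """Resolve an interface to its current provider at call time (late binding)."""
        return self._providers[interface]

class LocalLogger:
    def log(self, msg): return f"local:{msg}"

class RemoteLogger:
    def log(self, msg): return f"remote:{msg}"

registry = Registry()
registry.register("Logger", LocalLogger())
first = registry.bind("Logger").log("start")    # resolved to the local component
registry.register("Logger", RemoteLogger())     # a policy change rebinds the interface
second = registry.bind("Logger").log("start")   # same client code, new provider
```

    Because the client resolves "Logger" through the registry on every call rather than holding a static reference, a management policy can relocate the provider without touching the client's implementation, which is the pluggability the thesis argues existing environments lack.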

    A VISUAL DESIGN METHOD AND ITS APPLICATION TO HIGH RELIABILITY HYPERMEDIA SYSTEMS

    This work addresses the problem of the production of hypermedia documentation for applications that require high reliability, particularly technical documentation in safety critical industries. One requirement of this application area is for the availability of a task-based organisation, which can guide and monitor such activities as maintenance and repair. In safety critical applications there must be some guarantee that such sequences are correctly presented. Conventional structuring and design methods for hypermedia systems do not allow such guarantees to be made. A formal design method that is based on a process algebra is proposed as a solution to this problem. Design methods of this kind need to be accessible to information designers. This is achieved by use of a technique already familiar to them: the storyboard. By development of a storyboard notation that is syntactically equivalent to a process algebra, a bridge is made between information design and computer science, allowing formal analysis and refinement of the specification drafted by information designers. Process algebras produce imperative structures that do not map easily into the declarative formats used for some hypermedia systems, but can be translated into concurrent programs. This translation process, into a language developed by the author, called ClassiC, is illustrated, and the properties that make ClassiC a suitable implementation target discussed. Other possible implementation targets are evaluated, and a comparative illustration given of translation into another likely target, Java.
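    The guarantee being sought, that a safety-critical task sequence can only be presented in its prescribed order, can be illustrated with a toy sequential process of the kind a process algebra would prescribe. The step names are invented; this is not the thesis's notation or ClassiC code.

```python
class Procedure:
    """A prescribed sequence of maintenance steps; out-of-order steps are refused."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.position = 0

    def present(self, step):
        """Allow a step only if it is the next one in the prescribed sequence."""
        if self.position < len(self.steps) and step == self.steps[self.position]:
            self.position += 1
            return True
        return False

    @property
    def complete(self):
        return self.position == len(self.steps)

repair = Procedure(["isolate power", "remove cover", "replace fuse", "refit cover"])
ok = repair.present("isolate power")    # correct first step, accepted
bad = repair.present("replace fuse")    # out of sequence, rejected
```

    A conventional hypermedia link structure cannot enforce this by itself; deriving the navigation structure from a formal sequential specification is what makes the guarantee checkable.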

    A meta-semantic language for smart component-adapters

    The issues confronting the software development community today are significantly different from the problems it faced only a decade ago. Advances in software development tools and technologies during the last two decades have greatly enhanced the ability to leverage large amounts of software for creating new applications through the reuse of software libraries and application frameworks. The problems facing organizations today are increasingly focused around systems integration and the creation of information flows. Software modeling based on the assembly of reusable components to support software development has not been successfully implemented on a wide scale. Several models for reusable software components have been suggested which primarily address the wiring-level connectivity problem. While this is considered necessary, it is not sufficient to support an automated process of component assembly. Two critical issues that remain unresolved are: (1) semantic modeling of components, and (2) a deployment process that supports automated assembly. The first issue can be addressed through domain-based standardization that would make it possible for independent developers to produce interoperable components based on a common set of vocabulary and understanding of the problem domain. This is important not only for providing a semantic basis for developing components but also for the interoperability between systems. The second issue is important for two reasons: (a) to eliminate the need for developers to be involved in the final assembly of software components, and (b) to provide a basis for the development process to be potentially driven by the user. To resolve these two remaining issues, a late binding mechanism between components based on meta-protocols is required.
In this dissertation we address the above issues by proposing a generic framework for the development of software components and an interconnection language, COMPILE, for the specification of software systems from components. The computational model of the COMPILE language is based on late and dynamic binding of the components' control, data, and function properties. The use of asynchronous callbacks for method invocation allows control binding among components to be late and dynamic. Data exchanged between components is defined through the use of a meta-language that can describe the semantics of the information but without being bound to any specific programming language type representation. Late binding to functions is accomplished by maintaining domain-based semantics as component meta-information. This information allows clients of components to map a generic requested service to specific functions.
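    Two of the ideas described, asynchronous callbacks for late control binding and mapping a generic requested service to a specific function via component meta-information, can be sketched together. COMPILE's real syntax is not shown; every name below is invented for illustration.

```python
class Component:
    """A toy component whose services are resolved through meta-information."""

    def __init__(self, metainfo):
        self.metainfo = metainfo   # generic service name -> concrete method name

    def request(self, service, arg, callback):
        """Resolve the generic service at call time and reply via a callback,
        so the caller never binds statically to a concrete method."""
        method = getattr(self, self.metainfo[service])
        callback(method(arg))

class Thermometer(Component):
    def __init__(self):
        super().__init__({"read-sensor": "read_celsius"})

    def read_celsius(self, sensor_id):
        return {"sensor": sensor_id, "value": 21.5}

results = []
Thermometer().request("read-sensor", "t1", results.append)
```

    Because the client names only the domain-level service ("read-sensor") and receives the reply through a callback, both the control binding and the function binding stay late: a different component could satisfy the same request with a differently named implementation, provided its meta-information advertises the service.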