
    Knowledge-based systems and geological survey

    This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, the field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as: field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; storage, preservation, and dissemination of digital records.
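
    A rough sketch of the data organisation the abstract argues for, using hypothetical class and attribute names (nothing below comes from the paper itself): spatial objects carry their own class, part-of hierarchy and relationships, the ontology holds reference information per object class, and a map becomes just one visualisation drawn from a selection of objects rather than a fixed sheet.

```python
# Hedged sketch only: hypothetical names, not the paper's actual system framework.
from dataclasses import dataclass, field

@dataclass
class ObjectClass:
    """Ontology entry: reference information about one class of geological object."""
    name: str                      # e.g. "fault", "lithostratigraphic unit"
    parent: str | None = None      # class hierarchy, e.g. "unit" specialises "rock body"
    reference_info: dict = field(default_factory=dict)

@dataclass
class SpatialObject:
    """A mapped object, independent of any map sheet, scale or fixed boundary."""
    ident: str
    object_class: str                                              # key into the ontology
    geometry: object = None                                        # polygon/surface from any spatial library
    parts: list["SpatialObject"] = field(default_factory=list)     # part-of hierarchy
    relations: list[tuple[str, str]] = field(default_factory=list) # e.g. ("overlies", "unit-7")

def visualise_as_map(objects: list[SpatialObject], extent) -> list[SpatialObject]:
    """A map is one visualisation: select whatever objects fall in an arbitrary extent."""
    return [o for o in objects if o.geometry is not None]  # a real test would intersect `extent`
```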

    A Product Line Systems Engineering Process for Variability Identification and Reduction

    Software Product Line Engineering has attracted attention in the last two decades due to its promising capabilities to reduce costs and time to market through reuse of requirements and components. In practice, developing system-level product lines in a large-scale company is not an easy task, as there may be thousands of variants and multiple disciplines involved. The manual reuse of legacy system models at domain engineering to build reusable system libraries and configurations of variants to derive target products can be infeasible. To tackle this challenge, a Product Line Systems Engineering process is proposed. Specifically, the process extends research on the System Orthogonal Variability Model to support hierarchical variability modeling with formal definitions; utilizes Systems Engineering concepts and legacy system models to build the hierarchy for the variability model and to identify essential relations between variants; and finally, analyzes the identified relations to reduce the number of variation points. The process, which is automated by computational algorithms, is demonstrated through an illustrative example on generalized Rolls-Royce aircraft engine control systems. To evaluate the effectiveness of the process in the reduction of variation points, it is further applied to case studies in different engineering domains at different levels of complexity. Subject to system model availability, reductions of 14% to 40% in the number of variation points are demonstrated in the case studies.
    Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201
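
    One plausible reading of the reduction step, sketched below with made-up names (the paper's formal definitions and algorithms are not reproduced here): if every variant of a parent variation point already determines the choice at a child variation point, the child adds no real decision and can be dropped.

```python
# Hedged sketch only: a toy hierarchical variability model and one possible
# reduction rule, not the paper's actual algorithm.
from dataclasses import dataclass, field

@dataclass
class VariationPoint:
    name: str
    variants: set[str]
    children: list["VariationPoint"] = field(default_factory=list)

def reduce_variation_points(vp: VariationPoint, implies: set[tuple[str, str]]) -> VariationPoint:
    """implies holds pairs (parent_variant, child_vp_name) meaning: choosing that
    parent variant fully determines the choice at the named child variation point."""
    kept = []
    for child in vp.children:
        child = reduce_variation_points(child, implies)
        fully_determined = all((v, child.name) in implies for v in vp.variants)
        if not fully_determined:
            kept.append(child)   # still a genuine decision, keep the variation point
    vp.children = kept
    return vp

# Hypothetical example: the monitoring choice is implied by the control-mode choice.
engine = VariationPoint("control-mode", {"single-channel", "dual-channel"},
                        [VariationPoint("monitoring", {"basic", "extended"})])
reduced = reduce_variation_points(engine, {("single-channel", "monitoring"),
                                           ("dual-channel", "monitoring")})
print([c.name for c in reduced.children])   # -> [] : "monitoring" was folded away
```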

    An Empirical Study of a Repeatable Method for Reengineering Procedural Software Systems to Object-Oriented Systems

    This paper describes a repeatable method for reengineering a procedural system to an object-oriented system. The method uses coupling metrics to assist a domain expert in identifying candidate objects. An application of the method to a simple program is given, and the effectiveness of the various coupling metrics is discussed. We perform a detailed comparison of our repeatable method with an ad hoc, manual reengineering effort based on the same procedural program. The repeatable method was found to be effective for identifying objects. It produced code that was much smaller, more efficient, and passed more regression tests than the ad hoc method. Analysis of object-oriented metrics indicated both simpler code and less variability among classes for the repeatable method.
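
    As a flavour of how coupling information can suggest candidate objects (the concrete metrics and procedure are the paper's; the crude counting rule below is only an illustrative stand-in), one could group each procedure with the global record it references most:

```python
# Hedged illustration only: a simple data-coupling count, not the paper's metrics.
from collections import defaultdict

def candidate_objects(uses):
    """uses: (procedure, global_record) pairs, one per reference found in the code.
    Returns {record: [procedures]}: each record grouped with the procedures most
    strongly coupled to it -- a starting point a domain expert would then review."""
    counts = defaultdict(lambda: defaultdict(int))
    for proc, record in uses:
        counts[proc][record] += 1
    grouping = defaultdict(list)
    for proc, per_record in counts.items():
        most_coupled = max(per_record, key=per_record.get)
        grouping[most_coupled].append(proc)
    return dict(grouping)

print(candidate_objects([("read_cfg", "Config"), ("read_cfg", "Config"),
                         ("apply_cfg", "Config"), ("log_msg", "Logger")]))
# -> {'Config': ['read_cfg', 'apply_cfg'], 'Logger': ['log_msg']}
```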

    An overview of Mirjam and WeaveC

    In this chapter, we elaborate on the design of an industrial-strength aspect-oriented programming language and weaver for large-scale software development. First, we present an analysis of the requirements of a general-purpose aspect-oriented language that can handle crosscutting concerns in ASML software. We also outline a strategy for working with aspects in large-scale software development processes. In our design, we both re-use existing aspect-oriented language abstractions and propose new ones to address the issues that we identified in our analysis. The quality of the code ensured by the realized language and weaver has a positive impact on both maintenance effort and lead time in the first-line software development process. As evidence, we present a short evaluation of the language and weaver as applied today in the software development process of ASML.
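
    For readers unfamiliar with the idea, the toy sketch below shows the kind of crosscutting concern (here, tracing) that an aspect weaver factors out of the base code and re-applies at matching join points; it is a generic illustration in Python and does not reflect Mirjam syntax or the WeaveC implementation.

```python
# Generic illustration of aspect weaving, not Mirjam/WeaveC syntax.
import functools

def tracing_advice(func):
    """Advice applied around a matched join point: log entry and exit."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"enter {func.__name__}")
        try:
            return func(*args, **kwargs)
        finally:
            print(f"exit {func.__name__}")
    return wrapper

def weave(functions, pointcut):
    """Toy 'weaver': wrap every function whose name matches the pointcut predicate."""
    return {name: (tracing_advice(f) if pointcut(name) else f)
            for name, f in functions.items()}

woven = weave({"process_lot": lambda: "done", "helper": lambda: "done"},
              pointcut=lambda name: name.startswith("process_"))
print(woven["process_lot"]())   # traced; "helper" stays unwoven
```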

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class and where to draw the line between its public and private details can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
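
    To make the kind of heuristic concrete, here is a minimal sketch with made-up metric names and thresholds; the tool's actual rules, and the Marinescu detection strategies it is compared against, are more elaborate.

```python
# Hedged sketch: illustrative heuristics only, with hypothetical thresholds.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    methods: int            # total methods
    accessors: int          # plain getters/setters among them
    complexity: int         # e.g. summed cyclomatic complexity of the methods
    foreign_data_used: int  # attributes of other classes accessed directly

def looks_like_data_class(m: ClassMetrics, max_behaviour: int = 2) -> bool:
    # Almost nothing but accessors -> state with little behaviour of its own.
    return m.methods - m.accessors <= max_behaviour

def looks_like_god_class(m: ClassMetrics, complexity_limit: int = 75,
                         foreign_limit: int = 5) -> bool:
    # Very complex and reaching into many other classes' data -> domineering class.
    return m.complexity > complexity_limit and m.foreign_data_used > foreign_limit

for m in (ClassMetrics("Order", 6, 5, 4, 0), ClassMetrics("OrderManager", 40, 2, 120, 12)):
    flags = [label for label, hit in (("data class", looks_like_data_class(m)),
                                      ("god class", looks_like_god_class(m))) if hit]
    print(m.name, flags)
```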

    Artificial Intelligence for the Financial Services Industry: What Challenges Organizations to Succeed?

    As a research field, artificial intelligence (AI) has existed for several years. More recently, technological breakthroughs, coupled with the fast availability of data, have brought AI closer to commercial use. Internet giants such as Google, Amazon, Apple or Facebook invest significantly in AI, thereby underlining its relevance for business models worldwide. For the highly data-driven finance industry, AI is of intense interest within pilot projects; still, few AI applications have been implemented so far. This study analyzes drivers and inhibitors of a successful AI application in the finance industry based on panel data comprising 22 semi-structured interviews with experts in AI in finance. As a theoretical lens, we structured our results using the TOE framework. Guidelines for applying AI successfully reveal AI-specific role models and process competencies as crucial until trained algorithms reach a quality level at which AI applications can operate without human intervention and moral concerns.