42,069 research outputs found
Knowledge-based systems and geological survey
This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, the field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as: field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; storage, preservation, and dissemination of digital records.
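The review above stays at the conceptual level; purely as a hedged illustration of the "hierarchies of spatial objects and their relationships" it envisages, the Python sketch below models such a hierarchy with named relations. The class, field and relation names (GeoObject, children, "cuts") are assumptions for illustration, not anything specified in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class GeoObject:
    """A spatial geological object in a hierarchy (e.g. survey area > formation > bed)."""
    name: str
    object_class: str                        # reference class, e.g. "formation", "fault"
    children: list["GeoObject"] = field(default_factory=list)
    relations: dict[str, list["GeoObject"]] = field(default_factory=dict)  # e.g. "overlies", "cuts"

    def add_relation(self, kind: str, other: "GeoObject") -> None:
        """Record a named relationship to another spatial object."""
        self.relations.setdefault(kind, []).append(other)

    def walk(self, depth: int = 0):
        """Depth-first traversal of the hierarchy."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

# Illustrative use: a survey area containing a formation cut by a fault.
area = GeoObject("Survey area 42", "map_area")
formation = GeoObject("Coal Measures", "formation")
fault = GeoObject("Boundary Fault", "fault")
area.children += [formation, fault]
fault.add_relation("cuts", formation)

for depth, obj in area.walk():
    print("  " * depth + f"{obj.object_class}: {obj.name}")
```

A map view would then be just one traversal and rendering of this graph, consistent with the review's point that maps become one means of visualisation among several.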
A Product Line Systems Engineering Process for Variability Identification and Reduction
Software Product Line Engineering has attracted attention in the last two decades due to its promising capabilities to reduce costs and time to market through reuse of requirements and components. In practice, developing system-level product lines in a large-scale company is not an easy task, as there may be thousands of variants and multiple disciplines involved. The manual reuse of legacy system models at domain engineering to build reusable system libraries, and the manual configuration of variants to derive target products, can be infeasible. To tackle this challenge, a Product Line Systems Engineering process is proposed. Specifically, the process extends research on the System Orthogonal Variability Model to support hierarchical variability modeling with formal definitions; utilizes Systems Engineering concepts and legacy system models to build the hierarchy for the variability model and to identify essential relations between variants; and finally, analyzes the identified relations to reduce the number of variation points. The process, which is automated by computational algorithms, is demonstrated through an illustrative example on generalized Rolls-Royce aircraft engine control systems. To evaluate its effectiveness in reducing variation points, the process is further applied to case studies in different engineering domains at different levels of complexity. Subject to system model availability, reductions of 14% to 40% in the number of variation points are demonstrated in the case studies.
Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201
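The abstract does not spell out the reduction algorithm; purely as a hedged sketch of the general idea, the Python code below merges variation points whose variants mutually require one another, so that the merged point offers the same choices while the model has fewer points. The data structures, the "requires" relation and the merge rule are assumptions for illustration, not the paper's actual process.

```python
from dataclasses import dataclass, field

@dataclass
class VariationPoint:
    """A point in the system model where alternative variants can be bound."""
    name: str
    variants: set[str] = field(default_factory=set)

def reduce_variation_points(vps: list[VariationPoint],
                            requires: set[tuple[str, str]]) -> list[VariationPoint]:
    """Merge variation points whose variants mutually require each other.

    `requires` holds directed (a, b) pairs meaning 'selecting variant a forces
    selecting variant b'. If every variant of vp2 is tied both ways to some
    variant of vp1, vp2 adds no independent choice and the two points collapse.
    """
    merged: list[VariationPoint] = []
    absorbed: set[str] = set()
    for i, vp1 in enumerate(vps):
        if vp1.name in absorbed:
            continue
        combined = VariationPoint(vp1.name, set(vp1.variants))
        for vp2 in vps[i + 1:]:
            if vp2.name in absorbed:
                continue
            mutually_tied = all(
                any((a, b) in requires and (b, a) in requires for a in vp1.variants)
                for b in vp2.variants
            )
            if mutually_tied:
                combined.variants |= vp2.variants
                absorbed.add(vp2.name)
        merged.append(combined)
    return merged

# Illustrative use: two variation points whose variants always co-occur collapse into one.
vps = [VariationPoint("fuel_metering", {"single_lane", "dual_lane"}),
       VariationPoint("monitoring", {"basic", "redundant"})]
requires = {("single_lane", "basic"), ("basic", "single_lane"),
            ("dual_lane", "redundant"), ("redundant", "dual_lane")}
print([(vp.name, sorted(vp.variants)) for vp in reduce_variation_points(vps, requires)])
```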
An Empirical Study of a Repeatable Method for Reengineering Procedural Software Systems to Object-Oriented Systems
This paper describes a repeatable method for reengineering a procedural system to an object-oriented system. The method uses coupling metrics to assist a domain expert in identifying candidate objects. An application of the method to a simple program is given, and the effectiveness of the various coupling metrics is discussed. We perform a detailed comparison of our repeatable method with an ad hoc, manual reengineering effort based on the same procedural program. The repeatable method was found to be effective for identifying objects. It produced code that was much smaller and more efficient, and that passed more regression tests than the ad hoc method. Analysis of object-oriented metrics indicated both simpler code and less variability among classes for the repeatable method.
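The coupling metrics themselves are not defined in the abstract; the sketch below is a minimal, hedged example of one plausible metric, common coupling through shared global variables, used to group procedures into candidate objects. The function names and the threshold are assumptions for illustration only.

```python
from itertools import combinations

def shared_global_coupling(proc_globals: dict[str, set[str]]) -> dict[tuple[str, str], int]:
    """Common-coupling metric: number of global variables two procedures both touch."""
    return {
        (p, q): len(proc_globals[p] & proc_globals[q])
        for p, q in combinations(sorted(proc_globals), 2)
    }

def candidate_objects(proc_globals: dict[str, set[str]], threshold: int = 2) -> list[set[str]]:
    """Cluster procedures whose pairwise coupling meets the threshold into candidate objects."""
    clusters: list[set[str]] = []
    for (p, q), score in shared_global_coupling(proc_globals).items():
        if score < threshold:
            continue
        for cluster in clusters:
            if p in cluster or q in cluster:
                cluster.update({p, q})
                break
        else:
            clusters.append({p, q})
    return clusters

# Illustrative use: procedures sharing stack state form one candidate object.
procs = {
    "push": {"stack_buf", "stack_top"},
    "pop": {"stack_buf", "stack_top"},
    "log_error": {"err_count"},
}
print(candidate_objects(procs))   # [{'push', 'pop'}]
```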
Emergence of ERPII Characteristics within an ERP integration context
It is widely accepted that Enterprise Resource Planning (ERP) can provide organizations with efficiency and productivity gains, in terms of aggregating and streamlining internal business processes. It is also well understood that embarking upon the implementation of such an IT project presents many risks and challenges to the incumbent corporation, as witnessed by numerous cases in the normative IS literature on this subject. Through the description of a case study organization's ERP integration experiences, the authors highlight the emergence of those characteristics which define the componentization and extension of ERP functionalities (i.e. so-called ERPII), in terms of a failed ERP-led Enterprise Application Integration (EAI) implementation within an industrial products organization. As a result of the exploratory research approach used, it is hoped that the definition of such factors will provide an insight into the development and management of such technology investments.
An overview of Mirjam and WeaveC
In this chapter, we elaborate on the design of an industrial-strength aspect-oriented programming language and weaver for large-scale software development. First, we present an analysis of the requirements of a general-purpose aspect-oriented language that can handle crosscutting concerns in ASML software. We also outline a strategy for working with aspects in large-scale software development processes. In our design, we both re-use existing aspect-oriented language abstractions and propose new ones to address the issues that we identified in our analysis. The quality of the code ensured by the realized language and weaver has a positive impact on both maintenance effort and lead time in the first-line software development process. As evidence, we present a short evaluation of the language and weaver as applied today in the software development process of ASML.
A heuristic-based approach to code-smell detection
Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems which compares the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
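The tool's actual heuristics and thresholds are not given in the abstract; the following Python sketch only illustrates what metric-based data-class and god-class detection can look like, with invented measurements and cut-offs standing in for whatever the plug-in really uses.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Simple per-class measurements a detector plug-in might collect."""
    name: str
    methods: int           # total methods
    accessors: int         # getters/setters
    fields: int
    classes_used: int      # outgoing coupling

def is_data_class(m: ClassMetrics) -> bool:
    """Mostly state, little behaviour: accessors dominate and few non-accessor methods."""
    non_accessors = m.methods - m.accessors
    return m.fields >= 3 and non_accessors <= 2 and m.accessors >= m.methods * 0.6

def is_god_class(m: ClassMetrics) -> bool:
    """Over-complicated, domineering class: many methods and wide coupling."""
    return m.methods >= 30 and m.classes_used >= 10

# Illustrative use: a record-like class and an oversized controller.
record = ClassMetrics("CustomerRecord", methods=8, accessors=7, fields=6, classes_used=1)
manager = ClassMetrics("OrderManager", methods=45, accessors=5, fields=20, classes_used=18)
print(is_data_class(record), is_god_class(manager))   # True True
```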
Artificial Intelligence for the Financial Services Industry: What Challenges Organizations to Succeed?
As a research field, artificial intelligence (AI) has existed for several years. More recently, technological breakthroughs, coupled with the fast availability of data, have brought AI closer to commercial use. Internet giants such as Google, Amazon, Apple or Facebook invest significantly in AI, thereby underlining its relevance for business models worldwide. For the highly data-driven finance industry, AI is of intense interest within pilot projects; still, few AI applications have been implemented so far. This study analyzes drivers and inhibitors of a successful AI application in the finance industry based on panel data comprising 22 semi-structured interviews with experts in AI in finance. As a theoretical lens, we structured our results using the TOE framework. Guidelines for applying AI successfully reveal AI-specific role models and process competencies as crucial until trained algorithms reach a quality level at which AI applications can operate without human intervention and moral concerns.
An ontological approach for recovering legacy business content
Legacy Information Systems (LIS) pose a challenge for many organizations. On one hand, LIS are viewed as aging systems needing replacement; on the other hand, years of accumulated business knowledge have made these systems mission-critical. Current approaches, however, are often criticized for being overly dependent on technology and for ignoring the business knowledge which resides within LIS. In this light, this paper proposes a means of capturing the business knowledge in a technology-agnostic manner and transforming it in a way that reaps the benefits of clear semantic expression; this transformation is achieved via the careful use of ontology. The approach, called Content Sophistication (CS), aims to provide a model of the business that more closely adheres to the semantics and relationships of objects existing in the real world. The approach is illustrated via an example taken from a case study concerning the renovation of a large financial system, and its outcome is technology-agnostic models that show improvements along several dimensions.
- …