
    Domain modelling and the co-design of business rules in the telecommunication business area.

    This paper discusses the development of an enterprise domain model in an environment where part of the domain knowledge is vague and not yet formalised in company-wide business rules. The domain model was developed for a young company starting in the telecommunications sector. The company relied on a number of stand-alone business support systems and sought a way to integrate them. It opted for the development of an enterprise-wide domain model to serve as an integration layer coordinating the stand-alone applications. A specific feature of the company was that it could build up its information infrastructure from scratch, so that many aspects of its business were still in the process of being defined. The paper highlights parts of the Enterprise Model where business rules had to be co-designed together with the domain model. A result of this effort was that the company gained more insight into important domain knowledge and developed a common understanding across functional areas of its way of doing business.
    Keywords: domain modelling; business rules; object-oriented analysis; business process modelling
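    To make the idea of co-designing business rules alongside an object-oriented domain model concrete, here is a minimal illustrative sketch (not taken from the paper): a domain entity validated against named business rules that can be refined as the domain knowledge is formalised. The entity, rule name, and threshold are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical domain entity; the name and fields are illustrative only.
@dataclass
class Subscription:
    customer_id: str
    line_count: int

# A business rule modelled as a named predicate over a domain entity,
# so it can be co-designed and refined alongside the domain model.
@dataclass
class BusinessRule:
    name: str
    check: Callable[[Subscription], bool]

max_lines_rule = BusinessRule(
    name="max_lines_per_customer",
    check=lambda s: s.line_count <= 10,   # assumed threshold
)

def validate(entity: Subscription, rules: list[BusinessRule]) -> list[str]:
    """Return the names of the rules violated by the entity."""
    return [r.name for r in rules if not r.check(entity)]

print(validate(Subscription("C42", 12), [max_lines_rule]))  # ['max_lines_per_customer']
```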

    Special Session on Industry 4.0

    No abstract available

    Verification of Agent-Based Artifact Systems

    Artifact systems are a novel paradigm for specifying and implementing business processes described in terms of interacting modules called artifacts. Artifacts consist of data and lifecycles, accounting respectively for the relational structure of the artifacts' states and their possible evolutions over time. In this paper we put forward artifact-centric multi-agent systems, a novel formalisation of artifact systems in the context of multi-agent systems operating on them. Unlike the usual process-based models of services, the semantics we give explicitly accounts for the data structures on which artifact systems are defined. We study the model checking problem for artifact-centric multi-agent systems against specifications written in a quantified version of temporal-epistemic logic expressing the knowledge of the agents in the exchange. We begin by noting that the problem is undecidable in general. We then identify two noteworthy restrictions, one syntactic and one semantic, that enable us to find bisimilar finite abstractions and therefore reduce the model checking problem to the instance on finite models. Under these assumptions we show that the model checking problem for these systems is EXPSPACE-complete. We then introduce artifact-centric programs, compact and declarative representations of the programs governing both the artifact system and the agents. We show that, while these in principle generate infinite-state systems, under natural conditions their verification problem can be solved on finite abstractions that can be effectively computed from the programs. Finally, we exemplify the theoretical results of the paper through a mainstream procurement scenario from the artifact systems literature.
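    As a rough illustration of the artifact notion described above (relational data plus a lifecycle), the following sketch models an artifact as a record with a small state machine governing its evolution. The purchase-order states and events are assumptions loosely inspired by the procurement scenario, not the paper's formal model.

```python
from dataclasses import dataclass, field

# Assumed lifecycle: state -> {event -> next state}; not the paper's model.
LIFECYCLE = {
    "created":   {"submit": "submitted"},
    "submitted": {"approve": "approved", "reject": "rejected"},
    "approved":  {"ship": "shipped"},
}

@dataclass
class PurchaseOrder:
    order_id: str
    items: dict = field(default_factory=dict)   # relational data: item -> quantity
    state: str = "created"                      # lifecycle state

    def apply(self, event: str) -> None:
        """Evolve the artifact's lifecycle; illegal events raise an error."""
        try:
            self.state = LIFECYCLE[self.state][event]
        except KeyError:
            raise ValueError(f"event '{event}' not allowed in state '{self.state}'")

po = PurchaseOrder("PO-1", {"router": 3})
po.apply("submit")
po.apply("approve")
print(po.state)  # approved
```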

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour, and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
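    The rule-based style of event correlation discussed in the report can be pictured with a minimal sketch like the one below: a rule fires when a set of events co-occurs within a sliding time window over the event stream. The event types, rule name, and window length are illustrative assumptions only, not taken from the report.

```python
from collections import deque

WINDOW_SECONDS = 60  # assumed correlation window

class CorrelationRule:
    def __init__(self, name, required_events):
        self.name = name
        self.required_events = set(required_events)

    def matches(self, window_events):
        # Rule fires if every required event type appears in the window.
        return self.required_events <= {ev for _, ev in window_events}

# Hypothetical rule: a SIM change closely followed by a high-value call.
sim_swap_rule = CorrelationRule("possible_sim_swap",
                                {"sim_change", "high_value_call"})

def correlate(events, rules):
    """events: iterable of (timestamp, event_type); yields (timestamp, rule name)."""
    window = deque()
    for ts, ev in events:
        window.append((ts, ev))
        # Drop events that have fallen out of the correlation window.
        while window and ts - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        for rule in rules:
            if rule.matches(window):
                yield ts, rule.name

stream = [(0, "sim_change"), (30, "high_value_call"), (500, "high_value_call")]
print(list(correlate(stream, [sim_swap_rule])))  # fires only at t=30
```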

    Formal certification and compliance for run-time service environments

    With the increased awareness of the security and safety of services in on-demand distributed service provisioning (such as the recent adoption of Cloud infrastructures), certification and compliance checking of services is becoming a key element of service engineering. Existing certification techniques tend to support mainly design-time checking of service properties and tend not to support run-time monitoring and progressive certification in the service execution environment. In this paper we discuss an approach which provides both design-time and run-time behavioural compliance checking for a services architecture, through enabling a progressive event-driven model-checking technique. Providing an integrated approach to certification and compliance is a challenge; however, using analysis and monitoring techniques, we present such an approach for ongoing compliance checking.
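    A very small sketch of the event-driven flavour of run-time compliance checking described above: a property automaton is advanced as service events arrive, and any event that breaks the expected order is flagged as a violation. The property, its states, and the event names are assumptions for illustration, not the paper's technique.

```python
# Assumed compliance property: a request must be authorised before invocation.
# state -> {event -> next state}; any other event in a state is a violation.
PROPERTY = {
    "idle":       {"request": "requested"},
    "requested":  {"authorise": "authorised"},
    "authorised": {"invoke": "idle"},
}

def monitor(events):
    """Yield (event, verdict) pairs; verdict is 'ok' or 'violation'."""
    state = "idle"
    for event in events:
        nxt = PROPERTY.get(state, {}).get(event)
        if nxt is None:
            yield event, "violation"   # event not allowed in the current state
        else:
            state = nxt
            yield event, "ok"

trace = ["request", "invoke", "authorise"]   # 'invoke' arrives before 'authorise'
print(list(monitor(trace)))
```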

    An agent-based fuzzy cognitive map approach to the strategic marketing planning for industrial firms

    This is the post-print version of the final paper published in Industrial Marketing Management. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2013 Elsevier B.V.
    Industrial marketing planning is a typical example of an unstructured decision-making problem due to the large number of variables to consider and the uncertainty imposed on those variables. Although abundant studies have identified barriers and facilitators of effective industrial marketing planning in practice, the literature still lacks practical tools and methods that marketing managers can use for the task. This paper applies fuzzy cognitive maps (FCM) to industrial marketing planning. In particular, an agent-based inference method is proposed to overcome the dynamic-relationship, time-lag, and reusability issues of FCM evaluation. The MACOM simulator is also developed to help marketing managers conduct what-if scenarios and see the impacts of possible changes on the variables defined in an FCM that represents the industrial marketing planning problem. The simulator is applied to an industrial marketing planning problem for a global software service company in South Korea. This study has practical implications as it supports marketing managers in industrial marketing planning involving a large number of variables and their cause–effect relationships. It also contributes to FCM theory by providing an agent-based method for the inference of FCM. Finally, MACOM provides academics in the industrial marketing management discipline with a tool for developing and pre-verifying a conceptual model based on the qualitative knowledge of marketing practitioners.
    Ministry of Education, Science and Technology (Korea)
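    For readers unfamiliar with FCM inference, the sketch below shows the standard iterative squashing-function update behind this kind of what-if simulation. The concepts, weights, and plain update rule are illustrative assumptions; the paper's contribution is an agent-based inference method, not this basic scheme.

```python
import math

concepts = ["price", "promotion", "demand"]
# W[i][j]: assumed causal influence of concept i on concept j.
W = [
    [0.0, 0.0, -0.6],   # price -> demand (negative influence)
    [0.0, 0.0,  0.7],   # promotion -> demand (positive influence)
    [0.0, 0.0,  0.0],
]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(activations):
    """One FCM update: a_j <- f(a_j + sum_i w_ij * a_i)."""
    return [
        sigmoid(activations[j] + sum(W[i][j] * activations[i]
                                     for i in range(len(concepts))))
        for j in range(len(concepts))
    ]

# What-if scenario: start with a high price, run the map until it settles.
a = [0.9, 0.2, 0.5]
for _ in range(20):
    a = step(a)
print(dict(zip(concepts, (round(v, 2) for v in a))))
```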