117 research outputs found

    Business rules based legacy system evolution towards service-oriented architecture.

    Get PDF
    Enterprises can be empowered to live up to their potential of becoming dynamic, agile and real-time. Service orientation is emerging from the amalgamation of a number of key business, technology and cultural developments. Three essential trends in particular are coming together to create a revolutionary new breed of enterprise, the service-oriented enterprise (SOE): (1) continuous performance management of the enterprise; (2) the emergence of business process management; and (3) advances in standards-based service-oriented infrastructures. This thesis focuses on the resulting three-layered architecture: it builds on a service-oriented architecture framework, adds a process layer that brings technology and business together, and completes it with a corporate performance layer that continually monitors and improves the performance indicators of global enterprises. This architecture provides a novel business context in which to apply the important technical idea of service orientation, moving it from an interesting tool for engineers to a vehicle for business managers to fundamentally improve their businesses.
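
    A minimal sketch of the three-layer SOE idea, written in Python: a service layer exposes callable services, a process layer composes them into business processes, and a corporate performance layer aggregates process outcomes into indicators. All class, service and field names below are illustrative assumptions, not the framework defined in the thesis.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ServiceLayer:
    """Standards-based services exposed by the enterprise."""
    services: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def invoke(self, name: str, payload: dict) -> dict:
        return self.services[name](payload)


@dataclass
class ProcessLayer:
    """Business processes expressed as ordered service invocations."""
    services: ServiceLayer

    def run(self, steps: List[str], payload: dict) -> dict:
        for step in steps:
            payload = self.services.invoke(step, payload)
        return payload


@dataclass
class PerformanceLayer:
    """Continuously aggregates process outcomes into a simple indicator."""
    completed: int = 0
    total_value: float = 0.0

    def record(self, outcome: dict) -> None:
        self.completed += 1
        self.total_value += outcome.get("value", 0.0)

    @property
    def average_value(self) -> float:
        return self.total_value / self.completed if self.completed else 0.0


if __name__ == "__main__":
    services = ServiceLayer({
        "check_credit": lambda p: {**p, "approved": p["value"] < 10_000},
        "book_order": lambda p: {**p, "booked": p.get("approved", False)},
    })
    processes = ProcessLayer(services)
    kpis = PerformanceLayer()
    outcome = processes.run(["check_credit", "book_order"], {"value": 2_500.0})
    kpis.record(outcome)
    print(outcome, kpis.average_value)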

    Continuous Process Auditing (CPA): an Audit Rule Ontology Approach to Compliance and Operational Audits

    Get PDF
    Continuous Auditing (CA) has been investigated over time and is, to some extent, already practised within financial and transactional auditing as part of continuous assurance and monitoring. Enterprise Information Systems (EIS) that run their activities in the form of processes require continuous auditing of those processes, invoking the actions specified in policies and rules continuously and sometimes in real time. This leads to the question: how closely can continuous auditing mimic the auditing procedures performed by auditing professionals? We investigate these questions through Continuous Process Auditing (CPA), relying on the heterogeneous activities of processes in the EIS as well as detecting exceptions and evidence in current and historic databases to provide audit assurance.
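
    A minimal sketch of the rule-driven continuous auditing idea described above: policy checks are encoded as audit rules, events streaming from an EIS are evaluated as they arrive, and violations are collected as audit exceptions/evidence. The rule names, thresholds and event fields are illustrative assumptions, not the audit rule ontology of the paper.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class AuditRule:
    name: str
    check: Callable[[dict], bool]  # returns True when the event complies


def continuous_audit(events: Iterable[dict], rules: List[AuditRule]) -> List[dict]:
    """Evaluate every incoming process event against every rule."""
    exceptions = []
    for event in events:
        for rule in rules:
            if not rule.check(event):
                exceptions.append({"rule": rule.name, "event": event})
    return exceptions


if __name__ == "__main__":
    rules = [
        AuditRule("payment-needs-approval",
                  lambda e: e["type"] != "payment" or e.get("approved", False)),
        AuditRule("amount-below-limit",
                  lambda e: e.get("amount", 0) <= 5_000),
    ]
    stream = [
        {"type": "payment", "amount": 1_200, "approved": True},
        {"type": "payment", "amount": 9_800, "approved": False},
    ]
    for exc in continuous_audit(stream, rules):
        print("audit exception:", exc)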

    COLLABORATIVE RULE-BASED PROACTIVE SYSTEMS: MODEL, INFORMATION SHARING STRATEGY AND CASE STUDIES

    Get PDF
    The Proactive Computing paradigm provides us with a new way to make the multitude of computing systems, devices and sensors spread throughout our modern environment work for, and benefit, human beings by being active on our behalf. In this paradigm, users are placed on top of the interactive loop and the underlying IT systems are automated to perform even the most complex tasks in a more autonomous way. This dissertation focuses on providing further means, at both the theoretical and the applied level, to design and implement proactive systems. It shows how smart mobile, wearable and/or server applications can be developed with the proposed Rule-Based Middleware Model for computing proactively and for operating on multiple platforms. In order to represent and reason about the information that the proactive system needs to know about the environment in which it performs its computations, a new technique called the Proactive Scenario is proposed. As an extension of its scope and properties, and to achieve global reasoning over interconnected proactive systems, a new collaborative technique called the Global Proactive Scenario is then proposed. Furthermore, to show their potential, three real-world case studies of (collaborative) proactive systems are explored to validate the proposed development methodology and its related technological framework in domains such as e-Learning, e-Business and e-Health. Results from these experiments confirm that software applications designed along the lines of the proposed rule-based proactive system model, together with the concepts of local and global proactive scenarios, are capable of actively searching for the information they need, of automating tasks and procedures that do not require the user's input, of detecting various changes in their context and taking measures to adapt to them, addressing the needs of the people who use these systems, and of performing collaboration and global reasoning over multiple proactive engines spread across different networks.
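
    A minimal sketch of the rule-based proactive idea: event-condition-action style rules watch a shared context and act on the user's behalf as soon as a context change satisfies their condition, without waiting for user input. The engine, rule and low-battery scenario below are illustrative assumptions, not the dissertation's middleware or its case studies.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ProactiveRule:
    name: str
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], None]


class ProactiveEngine:
    def __init__(self, rules: List[ProactiveRule]):
        self.rules = rules
        self.context: Dict = {}

    def update_context(self, key: str, value) -> None:
        """Record a context change and immediately re-evaluate all rules."""
        self.context[key] = value
        for rule in self.rules:
            if rule.condition(self.context):
                rule.action(self.context)


if __name__ == "__main__":
    engine = ProactiveEngine([
        ProactiveRule(
            "dim-screen-on-low-battery",
            condition=lambda ctx: ctx.get("battery", 100) < 20,
            action=lambda ctx: print("proactively dimming screen, battery at",
                                     ctx["battery"]),
        ),
    ])
    engine.update_context("battery", 45)  # condition not met, nothing happens
    engine.update_context("battery", 15)  # rule fires proactively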

    Model driven design and data integration in semantic web information systems

    Get PDF
    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications, in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel there is an evolution from a Web of documents into a Web of 'knowledge', as a growing number of data owners share their data sources with a growing audience. This opens up potential new applications for these data sources, including scenarios in which these datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offer solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but this leaves the issue of semantic interoperability. In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWE methods allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application. Moreover, we implemented a framework called Hydragen, which executes the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined: the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure. This DM is not Hera-S specific; any Semantic Web source representation can serve as the DM, as long as its content can be queried with the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. To help designers build their Web applications we introduce a toolset, Hera Studio, which allows the different models to be built graphically. Hera Studio also provides additional functionality such as model checking and deployment of the models in Hydragen. Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore at runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM. Aspect-orientation allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is by reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality by the AMACONT engine.
AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation. Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extensible storage layer in our database of choice, Sesame. However, even though this is theoretically possible, much of the actual data integration issue remains. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to look at this issue in isolation. We present a framework called Relco, which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaborative techniques, which together provide strong clues as to which concepts are most likely related. To prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. The first is a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available via a map view, a timeline and a graph view. Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee, an electronic TV guide and recommender. TV-guide data was integrated and enriched with semantically structured data from several sources, and recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated, including scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications that use context-based personalization. This can be done with a context-based user profiling platform that can also be used for user-model data exchange between mobile applications using technologies like Relco. The final application scenario is from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved by a user-modeling component framework in which any application can store user-model information, and which can also be used for the exchange of user-model data.
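
    A minimal sketch of the kind of syntactic matching that Relco-style concept alignment builds on: labels of concepts from two ontologies are compared with a string-similarity measure, and pairs scoring above a threshold become candidate mappings. It uses only the Python standard library; the URIs, labels and threshold are illustrative assumptions, not Relco's actual technique set (which also combines semantic and collaborative clues).

from difflib import SequenceMatcher
from itertools import product
from typing import Dict, List, Tuple


def label_similarity(a: str, b: str) -> float:
    """Simple syntactic similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def candidate_mappings(source: Dict[str, str], target: Dict[str, str],
                       threshold: float = 0.7) -> List[Tuple[str, str, float]]:
    """Return (source URI, target URI, score) pairs above the threshold."""
    matches = []
    for (s_uri, s_label), (t_uri, t_label) in product(source.items(), target.items()):
        score = label_similarity(s_label, t_label)
        if score >= threshold:
            matches.append((s_uri, t_uri, score))
    return sorted(matches, key=lambda m: m[2], reverse=True)


if __name__ == "__main__":
    museum = {"ex:Painting": "painting", "ex:Sculptor": "sculptor"}
    tvguide = {"tv:Paintings": "paintings", "tv:Presenter": "presenter"}
    for mapping in candidate_mappings(museum, tvguide):
        print(mapping)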

    Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination

    Full text link
    Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making and a wide range of Artificial Intelligence (AI) services across major corporations such as Google, Walmart, and Airbnb. KGs complement Machine Learning (ML) algorithms by providing data context and semantics, thereby enabling further inference and question-answering capabilities. The integration of KGs with neuronal learning (e.g., Large Language Models (LLMs)) is currently a topic of active research, commonly named neuro-symbolic AI. Despite the numerous benefits that can be accomplished with KG-based AI, its growing ubiquity within online services may result in the loss of self-determination for citizens, a fundamental societal issue. The more we rely on these technologies, which are often centralised, the less citizens will be able to determine their own destinies. To counter this threat, AI regulation, such as the European Union (EU) AI Act, is being proposed in certain regions. Such regulation sets out what technologists need to do, raising questions such as: How can the output of AI systems be trusted? What is needed to ensure that the data fuelling these artefacts, and their inner workings, are transparent? How can AI be made accountable for its decision-making? This paper conceptualises the foundational topics and research pillars needed to support KG-based AI for self-determination. Drawing upon this conceptual framework, challenges and opportunities for citizen self-determination are illustrated and analysed in a real-world scenario. As a result, we propose a research agenda aimed at accomplishing the recommended objectives.
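
    A minimal sketch of one way a knowledge graph can make an AI output more trustworthy and accountable: a claim produced by an ML component is only accepted, and can be explained, when a supporting triple exists in the KG. The triples, URIs and claim below are illustrative assumptions, not the paper's framework.

from typing import List, Tuple

Triple = Tuple[str, str, str]

# Toy knowledge graph stored as subject-predicate-object triples.
KG: List[Triple] = [
    ("ex:Walmart", "ex:headquarteredIn", "ex:Bentonville"),
    ("ex:Bentonville", "ex:locatedIn", "ex:Arkansas"),
]


def supported(claim: Triple, kg: List[Triple]) -> bool:
    """A claim is trusted only when the KG contains a matching triple."""
    return claim in kg


def explain(claim: Triple, kg: List[Triple]) -> str:
    if supported(claim, kg):
        return f"supported by KG triple {claim}"
    return "no supporting evidence in the KG; flag for human review"


if __name__ == "__main__":
    ml_claim = ("ex:Walmart", "ex:headquarteredIn", "ex:Bentonville")
    print(explain(ml_claim, KG))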

    A Review of Rule Learning Based Intrusion Detection Systems and Their Prospects in Smart Grids

    Get PDF

    A framework for SLA-centric service-based Utility Computing

    Get PDF
    Service-oriented Utility Computing paves the way towards the realization of service markets, which promise metered services through negotiable Service Level Agreements (SLAs). A market does not necessarily imply a simple buyer-seller relationship; rather it is the culmination point of a complex chain of stakeholders, with a hierarchical integration of value along each link in the chain. In service value chains, services corresponding to different partners are aggregated in a producer-consumer manner, resulting in hierarchical structures of added value. SLAs are contracts between service providers and service consumers that ensure the expected Quality of Service (QoS) to the different stakeholders at various levels in this hierarchy. This thesis addresses the challenge of realizing an SLA-centric infrastructure to enable service markets for Utility Computing. Service Level Agreements play a pivotal role throughout the life cycle of service aggregation: the activities of service selection and service negotiation, followed by the hierarchical aggregation and validation of services in the service value chain, require SLAs as an enabling technology. This research aims at an SLA-centric framework in which requirement-driven selection of services, flexible SLA negotiation, hierarchical SLA aggregation and validation, and related issues such as privacy, trust and security are formalized, and in which prototypes of the service selection model and the validation model are implemented. The formal model for user-driven service selection is implemented with Branch and Bound and heuristic algorithms. The formal model is then extended to SLA negotiation for configurable services of varying granularity, in order to balance the interests of service consumers and service providers. The possibility of service aggregation opens new business opportunities in the evolving landscape of an IT-based service economy. An SLA, as a unit of business relationships, helps establish innovative topologies for business networks; one example is the composition of computational services to construct services of larger granularity, giving room to business models based on service aggregation, Composite Service Provision and Reselling. This research introduces and formalizes the notions of SLA Choreography and hierarchical SLA aggregation in connection with the underlying service choreography, to realize SLA-centric service value chains and business networks. SLA Choreography and aggregation pose new challenges regarding description, management, maintenance, validation, trust, privacy and security. The aggregation and validation models for SLA Choreography introduce concepts such as: SLA Views, to protect the privacy of stakeholders; a hybrid trust model, to foster business among unknown partners; and a PKI security mechanism coupled with a rule-based validation system, to enable distributed queries across heterogeneous boundaries. A distributed rule-based hierarchical SLA validation system is designed to demonstrate the practical significance of these notions.
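
    A minimal sketch of hierarchical SLA aggregation along a service value chain: the SLA a composite provider (or reseller) can offer is derived from the SLAs of the aggregated services, here by summing response-time bounds and multiplying availabilities. These aggregation rules and figures are simplifying assumptions for illustration, not the thesis's formal SLA Choreography model.

import math
from dataclasses import dataclass
from typing import List


@dataclass
class SLA:
    provider: str
    response_time_ms: float  # upper bound promised to the consumer
    availability: float      # fraction of time the service is up


def aggregate(composite_provider: str, parts: List[SLA]) -> SLA:
    """Derive the SLA a composite service can safely offer from its parts."""
    return SLA(
        provider=composite_provider,
        response_time_ms=sum(p.response_time_ms for p in parts),
        availability=math.prod(p.availability for p in parts),
    )


if __name__ == "__main__":
    compute = SLA("ComputeCo", response_time_ms=40, availability=0.999)
    storage = SLA("StoreCo", response_time_ms=25, availability=0.995)
    print(aggregate("ResellerCo", [compute, storage]))
    # SLA(provider='ResellerCo', response_time_ms=65, availability=0.994005)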

    Optimization and inference under fuzzy numerical constraints

    Get PDF
    Extensive research has been done in the areas of Constraint Satisfaction with discrete/integer and real domain ranges. Multiple platforms and systems to deal with these kinds of domains have been developed and appropriately optimized. Nevertheless, due to the incomplete and possibly vague nature of real-life problems, modeling a crisp and adequately strict satisfaction problem may not always be easy or even appropriate. The problem of modeling incomplete knowledge, or of solving an incomplete/relaxed representation of a problem, is a much harder issue to tackle. Additionally, practical modeling requirements and search optimizations require specific domain knowledge in order to be implemented, making the creation of a more generic optimization framework an even harder problem. In this thesis, we study the problem of modeling and utilizing incomplete and fuzzy constraints, as well as possible optimization strategies. As constraint satisfaction problems usually contain hard-coded constraints based on specific problem and domain knowledge, we investigate whether strategies and generic heuristics exist for inferring new constraint rules. Such additional rules could optimize the search process by imposing stricter constraints and thus pruning the search space, or provide useful insight to the researcher concerning the nature of the investigated problem.
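
    A minimal sketch of a fuzzy numerical constraint: instead of a crisp bound such as x <= 10, a membership function returns a degree of satisfaction in [0, 1], and a candidate assignment is ranked by the minimum degree over all constraints, a common fuzzy CSP semantics. The constraint shapes, variables and values are illustrative assumptions, not the models developed in the thesis.

from typing import Callable, Dict, List

# A fuzzy constraint maps an assignment to a satisfaction degree in [0, 1].
Constraint = Callable[[Dict[str, float]], float]


def roughly_at_most(var: str, bound: float, tolerance: float) -> Constraint:
    """Fully satisfied up to `bound`, decaying linearly to 0 at bound + tolerance."""
    def degree(assignment: Dict[str, float]) -> float:
        x = assignment[var]
        if x <= bound:
            return 1.0
        if x >= bound + tolerance:
            return 0.0
        return 1.0 - (x - bound) / tolerance
    return degree


def satisfaction(assignment: Dict[str, float], constraints: List[Constraint]) -> float:
    """Overall degree of an assignment: the weakest constraint dominates."""
    return min(c(assignment) for c in constraints)


if __name__ == "__main__":
    constraints = [
        roughly_at_most("cost", 100.0, 20.0),   # cost should be about <= 100
        roughly_at_most("delay", 5.0, 5.0),     # delay should be about <= 5
    ]
    candidates = [{"cost": 95.0, "delay": 7.0}, {"cost": 110.0, "delay": 4.0}]
    best = max(candidates, key=lambda a: satisfaction(a, constraints))
    print(best, satisfaction(best, constraints))  # first candidate, degree 0.6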