15 research outputs found

    A Model-Based Approach for the Management of Electronic Invoices

    The globalized market pushes companies to expand their business boundaries to a whole new level. To support this environment efficiently, business transactions must be executed over the Internet. However, several factors complicate this process, among them the current state of electronic invoicing. Electronic invoice adoption is not widespread because of the format fragmentation arising from national regulations. In this paper we present an approach based on Model-Driven Engineering (MDE) techniques and abstractions for supporting the core functions of invoice management systems. We compare our solution with traditional implementations and analyze the advantages MDE can bring to this specific domain.
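
    As a purely illustrative sketch of the idea an MDE approach builds on, namely a single platform-independent invoice model from which country-specific formats are generated, the following Python fragment defines such a model and two transformations to invented formats. It is not the system described in the paper, and all names are assumptions made for the example.

```python
# Illustrative only: a platform-independent invoice model that is
# "transformed" into two hypothetical national formats. Names and
# formats are invented for the example, not taken from the paper.
from dataclasses import dataclass

@dataclass
class InvoiceModel:          # platform-independent model (PIM)
    number: str
    supplier: str
    customer: str
    total: float
    currency: str

def to_format_a(inv: InvoiceModel) -> str:
    # hypothetical XML-like national format
    return (f"<Invoice id='{inv.number}'><Seller>{inv.supplier}</Seller>"
            f"<Buyer>{inv.customer}</Buyer>"
            f"<Amount currency='{inv.currency}'>{inv.total}</Amount></Invoice>")

def to_format_b(inv: InvoiceModel) -> str:
    # hypothetical flat, line-oriented national format
    return "\n".join([f"INV;{inv.number}", f"SUP;{inv.supplier}",
                      f"CUS;{inv.customer}", f"AMT;{inv.total};{inv.currency}"])

inv = InvoiceModel("2024-001", "ACME Ltd", "Contoso SA", 1250.0, "EUR")
print(to_format_a(inv))
print(to_format_b(inv))
```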

    Validation Framework for RDF-based Constraint Languages

    In this thesis, a validation framework is introduced that makes it possible to execute RDF-based constraint languages consistently on RDF data and to formulate constraints of any type. The framework reduces the representation of constraints to the absolute minimum, is grounded in formal logic, consists of a small, lightweight vocabulary, ensures consistent validation results, and enables constraint transformations for each constraint type across RDF-based constraint languages.
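
    The thesis's framework is not reproduced here. As an unrelated, minimal illustration of the underlying idea that many RDF constraints can be checked by translating them into queries that return violating resources, the following sketch uses the rdflib Python library to express the constraint "every foaf:Person must have a foaf:name" as a SPARQL query over a toy graph.

```python
# Minimal sketch: an RDF constraint checked as a SPARQL query that
# returns the violating resources. The data and the constraint are
# toy examples, not part of the thesis's validation framework.
from rdflib import Graph

data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .
ex:alice a foaf:Person ; foaf:name "Alice" .
ex:bob   a foaf:Person .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Constraint: every foaf:Person must have a foaf:name.
violations = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person WHERE {
        ?person a foaf:Person .
        FILTER NOT EXISTS { ?person foaf:name ?name }
    }
""")
for row in violations:
    print("violation: missing foaf:name for", row.person)  # ex:bob
```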

    A Semantic Framework for Declarative and Procedural Knowledge

    In any scientific domain, the full set of data and programs has reached an "-ome" status, i.e. it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e. information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge), without any restriction on what and where they are respectively suitable and available in the Web, as well as for realizing automatic and semantic-driven workflows of Web applications elaborating Web resources.

    This thesis attempts to provide a synthesis among Semantic Web technologies, Ontology Research, Knowledge and Workflow Management. Such a synthesis is represented by Resourceome, a Web-based framework consisting of two components which strictly interact with each other: an ontology-based and domain-independent knowledge management system (Resourceome KMS), relying on a knowledge model where resource and operational knowledge are contextualized in any domain, and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS).

    The Resourceome KMS and the Resourceome WMS are exploited in order to realize semantic-driven formulations of workflows, where activities are semantically linked to any involved resource. On the whole, combining the use of domain ontologies and workflow techniques, Resourceome provides a flexible domain and operational knowledge organization, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
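
    The abstract describes an architecture of two interacting components: a knowledge management system holding ontology-described resources, and a workflow system whose activities reference those resources. The following toy sketch only illustrates that coupling; all names are invented and it bears no relation to the actual Resourceome implementation.

```python
# Purely illustrative: workflow activities reference ontology terms
# describing the resources and applications they use, so the workflow
# can be composed and checked at the semantic level. Names are invented
# and unrelated to the actual Resourceome KMS/WMS implementation.
ontology = {                       # domain + operational knowledge (KMS side)
    "Sequence":  {"kind": "resource"},
    "Alignment": {"kind": "resource"},
    "AlignTool": {"kind": "application", "input": "Sequence", "output": "Alignment"},
}

workflow = [                       # semantic-driven workflow (WMS side)
    {"activity": "fetch", "produces": "Sequence"},
    {"activity": "align", "uses": "AlignTool", "consumes": "Sequence", "produces": "Alignment"},
]

def check_workflow(workflow, ontology):
    available = set()
    for step in workflow:
        tool = ontology.get(step.get("uses", ""), {})
        needed = step.get("consumes") or tool.get("input")
        if needed and needed not in available:
            raise ValueError(f"{step['activity']}: missing resource {needed}")
        available.add(step["produces"])
    return "workflow is semantically consistent"

print(check_workflow(workflow, ontology))
```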

    Control of geodata on GML-format

    The main goal of this thesis is to show how the program SOSI-kontroll, a tool for checking geodata formats, can be modernized so that it can also validate and quality-assure geodata in the GML format. The study evaluates which methods are best suited for performing this control in an expedient way. The starting point is the three well-known steps for checking XML documents: syntax validation, schema validation and Schematron validation. It turns out that these three steps are not sufficient for an adequate control of the format. The thesis also investigates which existing functionality of SOSI-kontroll is desirable to carry over into a program that checks the GML format. It is shown that the syntax check needs to be extended: it should examine whether GML documents, application schemas and Schematron schemas are well-formed XML and whether the content of these files follows the given specifications. The geometry checks performed at the schema level also turn out to be insufficient, so an additional geometry control is needed. The implemented control extracts all geometry from the GML document and lets the user decide what to examine or verify. A further result of the thesis is that more detailed reporting is desirable: in addition to the information from the syntax, schema and Schematron checks, the report should include a statistics, control and quality report. One objective was to determine what information software for controlling GML must provide to the user; a developer of such a program should find the answer in this thesis.
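
    The three validation steps discussed above (syntax, schema, Schematron) can be illustrated with the lxml Python library. This is only a sketch of the general pipeline, not SOSI-kontroll, and the file names are placeholders.

```python
# A minimal sketch of the three validation steps: well-formedness,
# application-schema validation, and Schematron rule checking.
from lxml import etree
from lxml.isoschematron import Schematron

def check_gml(gml_path: str, xsd_path: str, sch_path: str) -> None:
    # Step 1: syntax - is the document well-formed XML?
    try:
        doc = etree.parse(gml_path)
    except etree.XMLSyntaxError as err:
        print("syntax error:", err)
        return

    # Step 2: schema - does it conform to the GML application schema?
    schema = etree.XMLSchema(etree.parse(xsd_path))
    if not schema.validate(doc):
        for entry in schema.error_log:
            print("schema error:", entry.message)

    # Step 3: Schematron - do the business rules hold?
    schematron = Schematron(etree.parse(sch_path))
    if not schematron.validate(doc):
        print("schematron rules violated")

check_gml("data.gml", "appschema.xsd", "rules.sch")  # placeholder file names
```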

    Assessment of IT Infrastructures: A Model Driven Approach

    Several approaches for evaluating IT infrastructure architectures have been proposed, mainly by suppliers and consulting firms. However, they lack a unified view of these architectures on which all stakeholders can ground the decision-making process, and which would facilitate comparability as well as the verification of best-practice adoption. The main goal of this dissertation is to propose a model-based approach to mitigate this problem. A metamodel named SDM (System Definition Model), expressed in the UML (Unified Modeling Language), is used to represent structural and operational knowledge about the infrastructures. This metamodel is instantiated automatically by capturing the configurations of existing distributed architectures, using a proprietary tool together with a transformation tool built in the scope of this dissertation. The quantitative evaluation is performed with the M2DM (Meta-Model Driven Measurement) approach, which uses OCL (Object Constraint Language) to formulate the required metrics. This proposal is expected to make IT infrastructures more understandable to all stakeholders (IT architects, application developers, testers, operators and maintenance teams) and to allow them to express their management and evolution strategies. To illustrate the use of the proposed approach, we assess the complexity of some real cases from both diachronic and synchronic perspectives.
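
    In the dissertation the metrics are formulated in OCL over the SDM metamodel. As a rough, purely illustrative analogue, the following Python fragment computes one invented metric over a toy infrastructure model; the classes and the metric are assumptions made for the example, not part of SDM.

```python
# Toy analogue of a metamodel-driven metric, recast in plain Python
# for readability. All names are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    services: list[str] = field(default_factory=list)

@dataclass
class Infrastructure:
    nodes: list[Node] = field(default_factory=list)

    def avg_services_per_node(self) -> float:
        # OCL-style reading: self.nodes.services->size() / self.nodes->size()
        return sum(len(n.services) for n in self.nodes) / len(self.nodes)

infra = Infrastructure([Node("web01", ["nginx", "app"]), Node("db01", ["postgres"])])
print(infra.avg_services_per_node())   # 1.5
```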

    Managing the consistency of distributed documents

    Many businesses produce documents as part of their daily activities: software engineers produce requirements specifications, design models, source code, build scripts and more; business analysts produce glossaries, use cases, organisation charts, and domain ontology models; service providers and retailers produce catalogues, customer data, purchase orders, invoices and web pages. What these examples have in common is that the content of documents is often semantically related: source code should be consistent with the design model, a domain ontology may refer to employees in an organisation chart, and invoices to customers should be consistent with stored customer data and purchase orders. As businesses grow and documents are added, it becomes difficult to manually track and check the increasingly complex relationships between documents. The problem is compounded by current trends towards distributed working, either over the Internet or over a global corporate network in large organisations. This adds complexity as related information is not only scattered over a number of documents, but the documents themselves are distributed across multiple physical locations.

    This thesis addresses the problem of managing the consistency of distributed and possibly heterogeneous documents. “Documents” is used here as an abstract term, and does not necessarily refer to a human-readable textual representation. We use the word to stand for a file or data source holding structured information, like a database table, or some source of semi-structured information, like a file of comma-separated values or a document represented in a hypertext markup language like XML [Bray et al., 2000]. Document heterogeneity comes into play when data with similar semantics is represented in different ways: for example, a design model may store a class as a rectangle in a diagram whereas a source code file will embed it as a textual string; and an invoice may contain an invoice identifier that is composed of a customer name and date, both of which may be recorded and managed separately.

    Consistency management in this setting encompasses a number of steps. Firstly, checks must be executed in order to determine the consistency status of documents. Documents are inconsistent if their internal elements hold values that do not meet the properties expected in the application domain or if there are conflicts between the values of elements in multiple documents. The results of a consistency check have to be accumulated and reported back to the user. And finally, the user may choose to change the documents to bring them into a consistent state.

    The current generation of tools and techniques is not always sufficiently equipped to deal with this problem. Consistency checking is mostly tightly integrated or hardcoded into tools, leading to problems with extensibility with respect to new types of documents. Many tools do not support checks of distributed data, insisting instead on accumulating everything in a centralized repository. This may not always be possible, due to organisational or time constraints, and can represent excessive overhead if the only purpose of integration is to improve data consistency rather than deriving any additional benefit.

    This thesis investigates the theoretical background and practical support necessary to support consistency management of distributed documents. It makes a number of contributions to the state of the art, and the overall approach is validated in significant case studies that provide evidence of its practicality and usefulness.
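
    A minimal sketch, in Python, of the consistency-checking cycle described above: documents from different sources are checked against a rule, and violations are accumulated into a report. The documents, the rule and all field names are invented for illustration and do not come from the thesis.

```python
# Illustrative cross-document consistency check: invoices (one "document")
# must refer to customers that exist in a separate customer store.
invoices = [                      # e.g. parsed from XML invoices
    {"id": "INV-7", "customer": "C42", "total": 99.0},
    {"id": "INV-8", "customer": "C99", "total": 10.0},
]
customers = {"C42": {"name": "ACME Ltd"}}   # e.g. a database table

def check_invoices(invoices, customers):
    report = []                    # accumulate results for the user
    for inv in invoices:
        if inv["customer"] not in customers:
            report.append(f"{inv['id']}: unknown customer {inv['customer']}")
    return report

for finding in check_invoices(invoices, customers):
    print("inconsistent:", finding)   # INV-8: unknown customer C99
```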

    Interoperability of Enterprise Software and Applications


    Model morphisms (MoMo) to enable language independent information models and interoperable business networks

    MSc dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master degree in Electrical and Computer Engineering. With the advent of globalisation, the opportunities for collaboration have become more evident, with the effect of enlarging business networks. In such conditions, a key to enterprise success is reliable communication with all partners. Therefore, organisations have been searching for flexible integrated environments to better manage their services and product life cycles, where their software applications can be easily integrated independently of the platform in use. However, with so many different information models and implementation standards being used, interoperability problems arise. Moreover, organisations are themselves at different levels of technological maturity, and a solution that is good for one can be too advanced for another, or vice versa. This dissertation responds to these needs by proposing a high-level meta-model to be used across the entire business network, making it possible to abstract individual models from their specificities and increasing language independence and interoperability, while keeping the integrity of all enterprise legacy software intact. The strategy presented allows an incremental construction of mappings to achieve a gradual integration. To accomplish this, the author proposes Model Driven Architecture (MDA) based technologies for the development of traceable transformations and the execution of automatic model morphisms.
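
    As a loose illustration of what a model morphism does, mapping instances of one partner's information model onto another's, the following Python sketch applies a declarative field mapping. The two record layouts and the mapping are invented; the dissertation's meta-model and MDA transformations are not reproduced here.

```python
# Illustrative "model morphism": a declarative mapping that translates
# instances of one information model into another. All names invented.
supplier_record = {"prod_id": "P-1", "desc": "Bolt M8", "price_eur": 0.12}

# mapping: target field -> source field, or a conversion function
mapping = {
    "sku":   "prod_id",
    "label": "desc",
    "price": lambda r: round(r["price_eur"] * 100),  # euros -> cents
}

def morph(record: dict, mapping: dict) -> dict:
    return {tgt: src(record) if callable(src) else record[src]
            for tgt, src in mapping.items()}

print(morph(supplier_record, mapping))
# {'sku': 'P-1', 'label': 'Bolt M8', 'price': 12}
```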

    Specification, validation and satisfiability of hybrid constraints by reduction to temporal logic

    In recent years, many fields of computer science have been transformed by the introduction of a new vision of how a system is designed and used, called the declarative approach. In contrast to the so-called imperative approach, which consists in describing, by means of a formal language, the operations to be performed to obtain a result, the declarative approach instead suggests describing the desired result without specifying how this "goal" is to be reached. The declarative approach can be seen as the continuation of a trend that has been under way since the beginnings of computing, aiming to solve problems by manipulating concepts at ever higher levels of abstraction. The move to a declarative paradigm raises certain problems, however: current tools are poorly suited to declarative use. Three fundamental questions must be resolved in order to adopt this new paradigm: expressing constraints in a formal language, validating these constraints over a structure, and finally constructing a structure that satisfies a given constraint. This thesis studies these three problems from the standpoint of mathematical logic. We will see that by using a logic as the formal foundation of a language of "goals", the questions of validating and constructing a structure translate into two fundamental and widely studied mathematical questions, model checking and satisfiability. Using two concrete contexts as motivation, network management and service-oriented architectures, the work shows that mathematical logic can be used to describe, verify and construct network configurations or compositions of web services. The research culminates in the development of the logic CTL-FO+, which makes it possible to express constraints on data, on the sequence of operations of a system, and so-called "hybrid" constraints combining both. A reduction of CTL-FO+ to the temporal logic CTL allows existing verification tools to be reused efficiently. Author keywords: formal methods, web services, networks.
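
    As an illustration of what a "hybrid" (data-aware temporal) constraint looks like, and of the standard idea that first-order quantifiers over a finite, known domain can be unfolded into plain CTL, consider the following sketch. It is not the exact reduction developed in the thesis; the formula and the domain are invented.

```latex
% Illustration only: a data-aware constraint and its unfolding over a
% finite domain D = {a, b}, yielding an ordinary CTL formula.
\[
\varphi \;=\; \mathbf{AG}\,\Bigl(\forall x \in D :\; \mathit{request}(x) \rightarrow \mathbf{AF}\, \mathit{response}(x)\Bigr)
\]
\[
\text{unfolding over } D=\{a,b\}: \quad
\mathbf{AG}\,\bigl((\mathit{request}_a \rightarrow \mathbf{AF}\,\mathit{response}_a) \wedge
                   (\mathit{request}_b \rightarrow \mathbf{AF}\,\mathit{response}_b)\bigr)
\]
```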

    Ontological analysis of means-end links

    The i* community has produced several main dialects and dozens of variations in the definition of the i* language. Differences can be found not just in the representation of new concepts but in the very core of the i* language. In previous work we have tackled this issue mainly from a syntactic point of view, using metamodels and syntax-based model interoperability frameworks. In this paper, we go one step further and consider the use of foundational ontologies in general, and UFO in particular, as a way to clarify the meaning of core i* constructs and as the basis for proposing a normative definition. We focus here on one of the most characteristic i* constructs, namely means-end links.
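
    As a toy illustration of the construct under discussion, the following fragment encodes a means-end link as found in many i* dialects, where the means is a task and the end is a goal; dialects differ on precisely which element types are allowed, which is part of what the paper analyses. This is not the paper's UFO-based formalisation.

```python
# Toy metamodel fragment for a means-end link; types and names are
# illustrative and do not reproduce any particular i* dialect exactly.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str

@dataclass
class Task:
    name: str

@dataclass
class MeansEndLink:
    means: Task     # how the goal can be achieved
    end: Goal       # the goal being achieved

link = MeansEndLink(means=Task("Book ticket online"), end=Goal("Ticket obtained"))
print(f"{link.means.name} --means-end--> {link.end.name}")
```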