
    Error propagation metrics from XMI

    This work describes the production of an application, Error Propagation Metrics from XMI, which can extract, process and display software design metrics from XMI files. The tool archives these design metrics in a standard XML format defined by a metric document type definition. XMI is a flavour of XML allowing the description of UML models. As such, the XMI representation of a software design will include information from which a variety of software design metrics can be extracted. These metrics are potentially useful in improving the software design process, either during the early stages of design, if a suitable XMI-enabled modelling tool is deployed, or in comparing completed software projects by extracting design metrics from UML models reverse engineered from the implemented source code. The tool is able to derive error propagation metrics from test XMI files created from UML sequence and state diagrams and from reverse-engineered Java source code. However, variation was observed between the XMI representations generated by different software design tools, limiting the ability of the tool to process XMI from all sources. Furthermore, it was noted that subtle differences between UML design representations might have a marked effect on the quality of the metrics derived. In conclusion, to validate the usefulness of the metrics that can be extracted from XMI files, it would be useful to follow well-documented design projects throughout the total design and implementation process. Alternatively, the tool might be used to compare metrics from well-matched design implementations. In either case, design metrics will only be of true value to software engineers if they can be associated empirically with a validated measure of system quality.
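
    As a rough illustration of the kind of extraction described above, the sketch below computes a simple fan-out count per class from an XMI file using Python's standard library. Everything in it is an assumption for illustration: the namespace string and the UML:Class and UML:Dependency element and attribute names are one possible XMI vocabulary among the many that, as the abstract notes, differ between modelling tools, and fan-out merely stands in for the thesis's error propagation metrics.

        # A minimal sketch, not the thesis's tool: count outgoing
        # dependencies (fan-out) per class in an XMI file. All element and
        # attribute names are assumptions; XMI vocabularies vary by tool.
        import xml.etree.ElementTree as ET
        from collections import Counter

        UML = "{org.omg.xmi.namespace.UML}"  # assumed XMI 1.x UML namespace

        def fan_out(xmi_path):
            root = ET.parse(xmi_path).getroot()
            # Map each class id to its name (assumed xmi.id / name attributes).
            names = {c.get("xmi.id"): c.get("name")
                     for c in root.iter(UML + "Class")}
            counts = Counter()
            for dep in root.iter(UML + "Dependency"):
                client = dep.get("client")  # assumed: id of the dependent class
                if client in names:
                    counts[names[client]] += 1
            return counts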

    Web service composition: A survey of techniques and tools

    Web services are a consolidated reality of the modern Web, with a tremendous and still-growing impact on everyday computing tasks. They have turned the Web into the largest, most accepted, and most dynamic distributed computing platform ever. Yet the use and integration of Web services into composite services or applications, a delicate and conceptually non-trivial task, has still not unleashed its full potential. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks, in terms of concepts, models, languages, productivity support techniques, and tools, is required. Such a framework is necessary to enable the effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.

    Development of a conceptual graphical user interface framework for the creation of XML metadata for digital archives

    This dissertation is motivated by the DFG-sponsored Jonas Cohn Archive digitization project at Steinheim-Institut, whose aim was to preserve, and provide digital access to, structured handwritten historical archive material on Neo-Kantian philosophy scattered across the correspondence, diaries and private journals kept by Jonas Cohn and written to and by him. The dissertation describes a framework for processing and presenting multi-standard digital archive material. A set of standard markup schemata and semantic bibliographic descriptions was chosen to illustrate the multi-standard, and hence semantically heterogeneous, digital archiving process. The standards include the Text Encoding Initiative (TEI), the Metadata Encoding and Transmission Standard (METS) and the Metadata Object Description Schema (MODS); these best illustrate the structural contrast between the systematic archive, the digitized archive and digitized text standards. Furthermore, combined digital preservation and presentation approaches offer not only the digitized texts but also variably sized, metadata-structured images of the archive documents, enabling virtual visualization. State-of-the-art applications focus solely on one of these structural areas, neglecting the compound idea of a virtual digital archive. This work describes the requirements analysis for managing multi-structured, and therefore multi-standard, digital archival artefacts in textual and image form. In addition to the architecture and design, an infrastructure suitable for processing, managing and presenting such scholarly archives is sought, as a digital framework for the preservation of, and access to, digitized cultural resources. The proposed solution therefore includes the instrumentation of a conglomerate of existing and novel XML technologies for transformations, based in a centralized application. The archive can then be managed via a client-server application, thereby focusing archival activities on structured data collection and information preservation, illustrated in the dissertation by the:
    • Development of a prototype data model allowing the integration of the relevant markup schemata
    • Implementation of a prototype client-server application handling archive processing, management and presentation, based on the data model already mentioned
    • Development and implementation of a role-based archive access user interface
    Furthermore, as an infrastructural development serving expert archivists from the humanities, the dissertation explores methods of binding the existing XML metadata creation process to other programming languages. In doing so, it opens further channels for simplifying the metadata creation process through graphical user interfaces. To this end, the Java programming language, its Swing and AWT graphical user interface libraries, and the associated relational persistency and enterprise client-server architecture constitute a suitable environment for integrating XML metadata into mainstream computing. Hence the implementation of Java XML data binding as part of the metadata creation framework is part and parcel of the proposed solution.

    This work arises from the DFG-funded project for the digitization of the Jonas Cohn Archive at Steinheim-Institut, whose aim is to preserve a structured selection of the philosopher Jonas Cohn's manuscripts in digital form and to ease access to them. The dissertation describes a framework for the digital processing and presentation of digitized archive content and its metadata, structured according to more than one description standard. A selection of standard markup schemata and semantic bibliographic descriptions was made in order to present the problems arising from the accommodation of several standards, and thus from the semantic heterogeneity of the digitization process. This selection includes, among others, the Text Encoding Initiative (TEI), the Metadata Encoding and Transmission Standard (METS) and the Metadata Object Description Schema (MODS) as examples of description standards. These standards are best suited to presenting the structural and semantic differences between the standards of an archive that is to be digitized systematically and semantically. In addition, the approach combines the digital preservation and presentation of digitized texts and of metadata-structured images of the archive content; this enables a virtual presentation of the digital archive. A large number of known digitization applications pursue only one of the two structuring goals, preservation or presentation, so that the idea of a fully virtual digital archive is neglected. The focus of this work is the description of a management infrastructure for the capture and markup of multi-standard metadata for digital manuscript collections. In addition to the architecture and design, a suitable infrastructure is sought for the capture, processing and presentation of scholarly archives, as a digital framework for access to, and the preservation of, cultural assets. The solution proposed here therefore envisages the use of existing and new XML technologies, tied together in a centralized application. Within the dissertation, the structuring of the archive is thus operated via a client-server application, and the preservation measures are worked out as a process. The work pursues several objectives:
    • The development of a prototype data model integrating the relevant markup schemata
    • The implementation of a prototype client-server application for the processing, capture and presentation of the archives on the basis of the data model described
    • The development, implementation and assessment of a user interface for interaction with the framework by means of an expert evaluation
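
    The data binding mentioned above can be sketched compactly: a plain object is populated (for example from a GUI form) and only then serialized to standard XML, so the archivist never edits raw markup. The dissertation uses Java XML data binding for this; the Python sketch below, with an invented ArchiveItem class and a deliberately tiny field selection, illustrates the principle against a minimal MODS record.

        # A minimal sketch of XML data binding for metadata creation: bind a
        # plain object to a MODS record. ArchiveItem and its fields are
        # invented for illustration; real MODS records carry far more detail.
        import xml.etree.ElementTree as ET
        from dataclasses import dataclass

        MODS_NS = "http://www.loc.gov/mods/v3"

        @dataclass
        class ArchiveItem:
            title: str
            author: str
            date_issued: str

        def to_mods(item: ArchiveItem) -> ET.Element:
            ET.register_namespace("mods", MODS_NS)
            mods = ET.Element(f"{{{MODS_NS}}}mods")
            title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
            ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = item.title
            name = ET.SubElement(mods, f"{{{MODS_NS}}}name")
            ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = item.author
            origin = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
            ET.SubElement(origin, f"{{{MODS_NS}}}dateIssued").text = item.date_issued
            return mods

        record = to_mods(ArchiveItem("Letter to Jonas Cohn", "Unknown", "1912"))
        print(ET.tostring(record, encoding="unicode"))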

    Formal Digital Description of Production Equipment Modules for supporting System Design and Deployment

    The requirements for production systems are moving towards higher flexibility, adaptability and reactivity. Increasing volatility in global and local economies, shorter product life cycles and the ever-increasing number of product variants arising from product customization have led to a demand for production systems which can respond more rapidly to these changing requirements. Therefore, whenever a new product, or product variant, enters production, the production system designer must be able to create an easily reconfigurable production system which not only meets the User Requirements (UR) but is quick and cost-efficient to set up. Modern production systems must be able to integrate new product variants with minimum effort. In the event of a product changeover or an unforeseen incident, such as the mechanical failure of a production resource, it must be possible to reconfigure the production system smoothly and seamlessly by adding, removing or altering the resources. Ideally, auto-configuration should obviate the need to manually re-programme the system once it has been reconfigured.

    The cornerstone of any solution to the above-mentioned challenges is the concept of being able to create formalised, comprehensive descriptions of all production resources. Providing universally recognised digital representations of all the multifarious resources used in a production system would enable a standardised exchange of information between the different actors involved in building a new production system. Such freely available and machine-readable information could also be utilised by the wide variety of software tools that come into play during the different life cycle phases of a production system, thus considerably extending its useful life. These digital descriptions would also offer a multi-faceted foundation for the reconfiguration of production systems. The production paradigms presented here would support state-of-the-art production systems, such as Reconfigurable Manufacturing Systems (RMSs), Holonic Manufacturing Systems (HMSs) and Evolvable Production Systems (EPSs).

    The methodological framework for this research is Design Research Methodology (DRM), supported by Systems Engineering, Action Research and case-based research. The first two were used to develop the concept and data models for the resource descriptions through a process of iterative development. The case-based research was used for verification, through the modelling and analysis of the two separate production systems used in this research. The concept on which this thesis is based rests on the triplicity of production system design: Product, Process and Resource. The processes are implemented through the capabilities of the resources, which are thus directly linked to the product requirements. The driving force behind this new approach to production system design is its strong emphasis on making production systems that can be reconfigured easily. Successful system reconfiguration can only be achieved, however, if all the required production resources can be quickly and easily compared to all the available production resources in one unified and universally accepted form. These descriptions must not only be able to capture all of a production system's capabilities, but must also include information about its interfaces, kinematics, technical properties and its control and communication abilities. The answer to this lies in the Emplacement Concept, which is described and developed in this thesis.

    The Emplacement Concept proposes the creation of a multi-layered Generic Model containing information about production resources in three different layers: the Abstract Module Description (AMD), the Module Description (MD) and the Module Instance Description (MID). Each of these layers has unique characteristics which can be utilised in the different phases of designing, commissioning and reconfiguring a production system. The AMD is the most abstract (general) descriptive layer and can be used for initial system design iterations. It ensures that the proposed resources for the production system are exchangeable and interchangeable, and thus guides the selection of production resources and the implementation (or reconfiguration) of a production system. The MD is the next level down and provides a more detailed description of the type of production resource, offering 'finer granularity' for the descriptions. The MID provides the finest level of granularity and contains invaluable information about the individual instances of a particular production resource. This research involves two practical implementations of the Generic Model, which are used to model and digitally represent all the production resources used in the two use-case environments. All the modules in the production systems (25 in all) were modelled and described with the data models developed here. In fact, we were able to freeze the data models after the first case study, as they did not need any major changes in order to model the production resources of the second use-case environment. This demonstrates the general applicability of the proposed approach for modelling modular production resources.

    The advantages of being able to describe production resources in a unified digital form are many and varied. For example, production systems which are described in this way are much more agile: they can react faster to changes in demand and can be reconfigured easily and quickly. The resource descriptions also improve the sustainability of production systems because they provide detailed information about the exact capabilities and characteristics of all the available resources. This means that production system designers are better placed to utilise ready-made modules (design by re-use). Being able to use readily available production modules improves Time to Market and Time to Volume, as new production systems can be built or reconfigured using tested and fully operational modules which can easily be integrated into an already operational production system. Finally, the resource descriptions are an essential source of information for auto-configuration tools, allowing automated or semi-automated production system design. However, harvesting the full benefits of all these outcomes requires that the tools used to create new production systems can understand and utilise the modular descriptions proposed by this concept. This, in turn, presupposes that all the formalised descriptions of the production modules provided here will be made publicly available and will form the basis for an ever-expanding library of such descriptions.
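
    The three description layers lend themselves to a short sketch in plain data classes, each layer narrowing the one above it, from an abstract capability down to one physical unit. The attributes and example values below are illustrative assumptions only; the thesis's data models also cover interfaces, kinematics, technical properties and control and communication abilities.

        # A minimal sketch of the Generic Model's three layers. All
        # attributes and example values are assumptions for illustration.
        from dataclasses import dataclass, field

        @dataclass
        class AbstractModuleDescription:   # AMD: the most general layer
            capability: str                # e.g. a required process capability

        @dataclass
        class ModuleDescription:           # MD: a concrete resource type
            amd: AbstractModuleDescription
            vendor: str
            model: str
            interfaces: list[str] = field(default_factory=list)

        @dataclass
        class ModuleInstanceDescription:   # MID: one physical unit
            md: ModuleDescription
            serial_number: str
            operating_hours: float = 0.0

        # Design can match required capabilities against AMDs first, then
        # narrow the choice down to MDs and finally to available MIDs.
        amd = AbstractModuleDescription("pick-and-place")
        md = ModuleDescription(amd, "ACME", "PG-70", ["24 V", "PROFINET"])
        unit = ModuleInstanceDescription(md, "SN-0042", operating_hours=1310.5)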

    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications, in which a growing number of designers offer new and interactive Web applications to people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel, there is also an evolution from a Web of documents into a Web of 'knowledge', as a growing number of data owners are sharing their data sources with a growing audience. This brings potential new applications for these data sources, including scenarios in which these datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offer solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but this leaves the issue of semantic interoperability.

    In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWE methods allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application. Moreover, we implemented a framework called Hydragen, which is able to execute the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined, i.e. the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure. This DM is not Hera-S specific; instead, any Semantic Web source representation can serve as DM, as long as its content can be queried with the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. To help designers build their Web applications we have introduced a toolset, Hera Studio, which allows the different models to be built graphically. Hera Studio also provides additional functionality, such as model checking and deployment of the models in Hydragen.

    Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore during runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM, which allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is in reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation.

    Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extendible storage layer in our database of choice, Sesame. However, even though such integration is theoretically possible, much of the actual data integration issue remains. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to look at it in isolation. We present a framework called Relco, which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaboration techniques, which together provide strong clues as to which concepts are most likely related. To prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. These include a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available via a map view, a timeline and a graph view; Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee, an electronic TV guide and recommender: TV guide data was integrated and enriched with semantically structured data from several sources, and recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated, including scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications using context-based personalization; this can be done with a context-based user profiling platform that can also be used for user model data exchange between mobile applications using technologies like Relco. The final application scenario is from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved by using a user modeling component framework in which any application can store user model information, and which can also be used for the exchange of user model data.
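
    The point that any SPARQL-queryable source can serve as a Domain Model admits a compact illustration. In the Python sketch below, rdflib stands in for the Sesame store mentioned above, and the tiny dataset and query are invented; Hydragen itself would issue comparable queries against its models on every page request.

        # A minimal sketch: an in-memory RDF graph acting as a Domain Model,
        # queried with SPARQL. Dataset and vocabulary are invented.
        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/dm/")
        g = Graph()
        g.add((EX.painting1, EX.title, Literal("The Night Watch")))
        g.add((EX.painting1, EX.artist, Literal("Rembrandt")))

        # The kind of query an Application Model unit would trigger.
        results = g.query("""
            PREFIX ex: <http://example.org/dm/>
            SELECT ?title ?artist
            WHERE { ?item ex:title ?title ; ex:artist ?artist . }
        """)
        for title, artist in results:
            print(title, "-", artist)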

    DSpace 1.8 manual


    Making Presentation Math Computable

    This open-access book addresses the issue of translating mathematical expressions from LaTeX to the syntax of Computer Algebra Systems (CAS). Over the past decades, especially in the domain of Science, Technology, Engineering, and Mathematics (STEM), LaTeX has become the de facto standard for typesetting mathematical formulae in publications. Since scientists are generally required to publish their work, LaTeX has become an integral part of today's publishing workflow. On the other hand, modern research increasingly relies on CAS to simplify, manipulate, compute, and visualize mathematics. However, existing LaTeX import functions in CAS are limited to simple arithmetic expressions and are, therefore, insufficient for most use cases. Consequently, the workflow of experimenting and publishing in the sciences often includes time-consuming and error-prone manual conversions between presentational LaTeX and computational CAS formats. To address the lack of a reliable and comprehensive translation tool between LaTeX and CAS, this thesis makes three contributions. First, it provides an approach to enhance LaTeX expressions with sufficient semantic information for translation into CAS syntaxes. Second, it demonstrates LaCASt, the first context-aware LaTeX-to-CAS translation framework. Third, it provides a novel approach to evaluating the performance of LaTeX-to-CAS translations on large-scale datasets, with automatic verification of equations in digital mathematical libraries.
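
    The gap described above is easy to reproduce with any existing CAS import function. The sketch below uses SymPy's LaTeX parser (which additionally requires the antlr4 Python runtime) purely to illustrate the task; it is not LaCASt, and like the import functions criticised above it only copes with fairly simple, semantically unambiguous expressions.

        # A minimal sketch of LaTeX-to-CAS translation using SymPy's parser
        # (requires the antlr4-python3-runtime package). This is not LaCASt.
        from sympy.parsing.latex import parse_latex

        expr = parse_latex(r"\frac{x^2 - 1}{x - 1}")
        print(expr)             # (x**2 - 1)/(x - 1)
        print(expr.simplify())  # x + 1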