
    The Lexicon Graph Model: a generic model for multimodal lexicon development

    Trippel T. The Lexicon Graph Model: a generic model for multimodal lexicon development. Bielefeld (Germany): Bielefeld University; 2006.

    The Lexicon Graph Model provides a model and framework for lexicons that can be corpus-based and contain multimodal information. The perspective is that of lexicon theory, looking at the underlying data structures of both lexicons and annotations; annotations come into view because they serve as the basis for building lexicons. The term lexicon in linguistics and artificial intelligence is used in different ways, covering traditional print dictionaries in book form, CD-ROM editions and web-based versions of the same, but also computerized resources of similar structure used by applications ranging from human-machine communication systems to spell checkers. In this work, lexicon is used as the most generic term covering all such lexical applications. Existing formalisms and approaches to lexicon development expose various problems with lexicons, for example combining existing lexicons into one, disambiguating ambiguities on different lexical levels, representing other modalities in a lexicon, and selecting the lexical key for lexicon entries. The present approach assumes that lexicons differ in content but not in their fundamental structure, so that different kinds of lexicons can be combined, free of duplicates, in a unification process; the result is a declarative lexicon. The underlying model is a graph, the Lexicon Graph, which is modeled analogously to the Annotation Graphs described by Bird and Liberman and can therefore be processed in a similar way.
The investigation of the lexicon formalism proceeds in four steps: the analysis of existing lexicons, the introduction of the Lexicon Graph Model as a generic representation for lexicons, the implementation and testing of the formalism in different contexts, and an evaluation of the formalism. It is shown that Annotation Graphs and Lexicon Graphs are indeed related, and not only in their formalism, and it is established which standards annotations have to meet to be usable for lexicon development.
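
    The abstract describes the data structure only in prose. As a minimal sketch of the idea, assuming nothing about the thesis's actual implementation, a lexicon graph with value-based nodes and labeled edges can be unified duplicate-free as follows (all class names, field names, and lexical levels below are hypothetical):

```python
from dataclasses import dataclass, field

# Minimal sketch of a lexicon graph: nodes are lexical items on some
# lexical level (orthography, phonemics, part of speech, ...), and
# labeled edges relate them. Names and levels are illustrative only.

@dataclass(frozen=True)
class LexNode:
    level: str   # lexical level, e.g. "orth", "phon", "pos"
    value: str   # the lexical content itself

@dataclass
class LexiconGraph:
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # (source, label, target)

    def add(self, src: LexNode, label: str, tgt: LexNode) -> None:
        self.nodes.update((src, tgt))
        self.edges.add((src, label, tgt))

    def merge(self, other: "LexiconGraph") -> None:
        # Unification of two lexicons: because nodes and edges are
        # value-based sets, identical entries collapse automatically,
        # which is one way to realize duplicate-free combination.
        self.nodes |= other.nodes
        self.edges |= other.edges

# Usage: two partial lexicons for the same word unify without duplicates.
g1, g2 = LexiconGraph(), LexiconGraph()
g1.add(LexNode("orth", "lexicon"), "pronounced", LexNode("phon", "ˈlɛksɪkən"))
g2.add(LexNode("orth", "lexicon"), "pronounced", LexNode("phon", "ˈlɛksɪkən"))
g2.add(LexNode("orth", "lexicon"), "category", LexNode("pos", "noun"))
g1.merge(g2)
print(len(g1.edges))  # 2, not 3: the shared edge was unified
```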

    OpenWeather: a peer-to-peer weather data transmission protocol

    The study of the weather is performed using instruments termed weather stations. These weather stations are distributed around the world, collecting data from different phenomena. Several weather organizations have deployed thousands of these instruments, creating large networks that collect weather data and deliver it for later processing at collection points. Nevertheless, the methodologies currently used to transmit weather data are based on protocols not designed for this purpose. The weather stations are thus limited by the data formats and protocols they use and cannot take advantage of the real-time data available on them. We study weather instruments, their technology, and their network capabilities in order to provide a solution to this problem. OpenWeather is the protocol we propose to provide a more efficient and reliable way to transmit weather data. We evaluate environmental factors, such as location and bandwidth availability, in order to design a protocol adapted to the requirements of automatic weather stations. A peer-to-peer architecture is proposed, along with a functional implementation of the OpenWeather protocol. The protocol is evaluated in a real scenario, providing hints for adapting it to a common automatic weather station. Also available at: http://lib.tkk.fi/Final_project/2011/urn100502.pd
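
    The abstract does not specify a wire format. Purely as an illustration of the kind of lightweight station-to-peer exchange such a protocol enables, here is a sketch assuming a JSON-over-UDP encoding; the message fields, station identifier, host, and port are invented for the example and are not the OpenWeather format:

```python
import json
import socket

# Hypothetical observation message; field names are illustrative only,
# not the actual OpenWeather wire format.
def make_observation(station_id: str, temp_c: float, pressure_hpa: float) -> bytes:
    return json.dumps({
        "station": station_id,
        "temp_c": temp_c,
        "pressure_hpa": pressure_hpa,
    }).encode("utf-8")

def send_to_peer(msg: bytes, host: str, port: int) -> None:
    # UDP keeps per-message overhead low, which matters on the
    # constrained links many automatic weather stations use.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (host, port))

# A peer that relays observations onward realizes the peer-to-peer
# aspect: every station can act as both producer and forwarder.
msg = make_observation("AWS-0001", 21.4, 1013.2)
send_to_peer(msg, "127.0.0.1", 9750)
```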

    IDL-XML based information sharing model for enterprise integration

    CIM is a mechanized approach to problem solving in an enterprise. Its basis is intercommunication between information systems, in order to provide a faster and more effective decision-making process. The results help minimize human error, improve overall productivity, and guarantee customer satisfaction. Most enterprises or corporations started implementing integration by adopting automated solutions in a particular process, department, or area, in isolation from the rest of the physical or intelligent process, with the result that systems and equipment could not share information with each other or with other computer systems. The goal in a manufacturing environment is to have a set of systems that interact seamlessly with each other within a heterogeneous object framework, overcoming the many barriers (language, platforms, and even physical location) that hinder information sharing. This study identifies the data needs of several information systems of a corporation and proposes a conceptual model to improve the information sharing process and thus Computer Integrated Manufacturing. The architecture proposed in this work provides a methodology for data storage, data retrieval, and data processing in order to provide integration at the enterprise level. There are four layers of interaction in the proposed IXA architecture. The name IXA (IDL-XML Architecture for Enterprise Integration) is derived from the standards and technologies used to define the layers and the corresponding functions of each layer. The first layer addresses the systems and applications responsible for data manipulation. The second layer provides the interface definitions to facilitate interaction between the applications of the first layer. In the third layer, data is structured using XML for storage, and the fourth layer is a central repository with its database management system.
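
    No concrete schemas are given for the four layers. As a sketch of the third layer's role only, the following shows how an application record from the first layer might be structured as XML before reaching the repository layer; the element and attribute names are assumptions, not the thesis's schema:

```python
import xml.etree.ElementTree as ET

# Layer 3 sketch: turn an application record (layer 1) into XML so the
# central repository (layer 4) can store it in a system-neutral form.
# Element and attribute names are illustrative, not from the thesis.
def record_to_xml(system: str, record: dict) -> str:
    root = ET.Element("record", attrib={"source-system": system})
    for name, value in record.items():
        field = ET.SubElement(root, "field", attrib={"name": name})
        field.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Usage: a work order produced by a shop-floor application.
print(record_to_xml("shop-floor", {"order": 4711, "part": "A-113", "qty": 25}))
```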

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, the existing web-based technologies are not yet fully exploiting the modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene-graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies which suggest that they are usable by end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
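
    The abstract does not publish 3D Repo's document schema. The sketch below only illustrates the general pattern it describes, holding scene-graph nodes as schemaless key-value documents and diffing them at node granularity; all field names are invented:

```python
# Sketch of scene-graph nodes as "schemaless" documents, as a
# document-oriented NoSQL store would hold them. Field names are
# illustrative; the actual 3D Repo schema is not given in the abstract.
node_rev1 = {
    "_id": "mesh-42",
    "revision": 1,
    "type": "mesh",
    "name": "engine_cowling",
    "vertices": 15320,
    "parent": "group-7",
}
node_rev2 = {**node_rev1, "revision": 2, "name": "engine_cowling_v2"}

def diff_nodes(old: dict, new: dict) -> dict:
    # Node-level delta: the granularity at which a visual 3D diff and
    # merge tool could present conflicts to the user.
    return {k: (old.get(k), v) for k, v in new.items() if old.get(k) != v}

print(diff_nodes(node_rev1, node_rev2))
# {'revision': (1, 2), 'name': ('engine_cowling', 'engine_cowling_v2')}
```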

    The XL Web Service Language: Concepts and Implementation

    The XL programming language has been built on two very simple premises. First, XML is emerging as the language used to describe and communicate complex data. Second, services provided via the internet are complex but loosely coupled and use XML. Services are not bound to a particular platform, computer, or application scenario. The interaction between service provider and consumer is based on the availability and reliability of interface descriptions and on adherence to internet standards such as HTTP and XML. The XL language provides the means to easily describe complex services based on the XML data model, the XML query language XQuery, and an XML storage model. In the following, the ideas behind the XL language, the language itself, and the XL runtime engine used as a prototype are described in detail. Furthermore, different statement processing concepts, different usage scenarios, and the non-functional requirements of the runtime engine itself are discussed.
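
    XL's own syntax is not reproduced in the abstract, so the following is deliberately not XL. It is a generic Python illustration of the pattern XL targets: a loosely coupled service that consumes an XML request, evaluates a query over the XML data model, and produces an XML response:

```python
import xml.etree.ElementTree as ET

# Generic illustration of the service pattern XL targets. This is NOT
# XL syntax, only an analogy in plain Python; XPath-style selection
# stands in for the XQuery layer.
def quote_service(request_xml: str) -> str:
    request = ET.fromstring(request_xml)
    items = request.findall("./item")
    total = sum(float(i.get("price")) * int(i.get("qty")) for i in items)
    response = ET.Element("quote", attrib={"total": f"{total:.2f}"})
    return ET.tostring(response, encoding="unicode")

print(quote_service('<order><item price="9.5" qty="3"/><item price="2" qty="10"/></order>'))
# <quote total="48.50" />
```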

    Analyzing Voice And Video Call Service Performance Over A Local Area Network

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2010. In this study, we present a detailed description of VoIP technology and discuss the most significant challenges of implementing voice communication over wireline and wireless networks. Widely used voice signaling protocols, such as H.323, the Session Initiation Protocol (SIP), Megaco, and MGCP, and video protocols such as H.261, H.263, and H.264, are described as well. Codec selection and the factors affecting VoIP quality of service are analyzed. We simulate a real network carrying voice, video, and data traffic simultaneously, with workstations randomly assigned to applications such as voice, video, and FTP. We also add a wireless network to the proposed wired network and examine the results. Scenarios covering the queuing mechanisms and codec selections chosen to provide optimal service quality are examined, and the simulation results obtained with OPNET are discussed.
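
    The abstract names the QoS factors without detailing them. One concrete example is interarrival jitter, which RTP-based voice traffic commonly estimates with the running filter defined in RFC 3550; the sketch below uses made-up packet timings, not data from the thesis:

```python
# Interarrival jitter as estimated for RTP media streams (RFC 3550):
# J(i) = J(i-1) + (|D(i-1, i)| - J(i-1)) / 16, where D is the change in
# transit time between consecutive packets. One factor, alongside codec
# choice and queuing, that drives VoIP quality of service.
def jitter_series(send_times, recv_times):
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
        yield jitter

# Hypothetical 20 ms voice packets with variable network delay (ms).
send = [0, 20, 40, 60, 80]
recv = [45, 68, 84, 109, 125]
print([round(j, 2) for j in jitter_series(send, recv)])
# [0.0, 0.19, 0.43, 0.71, 0.92]
```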

    Remote Attestation for Constrained Relying Parties

    In today's interconnected world, which contains a massive and rapidly growing number of devices, it is important to have security measures that detect unexpected or unwanted behavior of those devices. Remote attestation -- a procedure for evaluating the software and hardware properties of a remote entity -- is one of those measures. Remote attestation has long been used in Mobile Device Management solutions to assess the security of computers and smartphones. The rise of the Internet of Things (IoT) introduced a new research direction for attestation, one involving IoT devices. The current trend in academic research on attestation involves a powerful entity, called the "verifier", attesting and appraising a less powerful entity, called the "attester". However, academic works have not considered the opposite scenario, where a resource-constrained device needs to evaluate the security of more powerful devices. In addition, these works lack the notion of a "relying party" -- the entity that receives the attestation results computed by the verifier in order to determine the trustworthiness of the attester. There are many scenarios where a resource-constrained device might want to evaluate the trustworthiness of a more powerful device. For example, a sensor or wearable may need to assess the state of a smartphone before sending data to it, or a network router may allow only trusted devices to connect to the network. The aim of this thesis is to design an attestation procedure suitable for constrained relying parties. The attestation procedure is developed by analyzing possible attestation result formats found in industry, benchmarking the suitable formats, proposing and formally analyzing an attestation protocol for constrained relying parties, and implementing a prototype of a constrained relying party.
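
    The abstract leaves the result format and protocol to the thesis body. As a sketch of the relying-party side only, the following assumes the verifier hands over a compact MAC-protected attestation result that the constrained device merely authenticates and checks for freshness; the token format, field names, and key handling are illustrative assumptions, not the thesis design:

```python
import hashlib
import hmac
import json
import time

# Hypothetical trust anchor provisioned to the relying party; a real
# design could use COSE/CWT with an asymmetric verifier key instead.
VERIFIER_KEY = b"provisioned-verifier-key"
MAX_AGE_S = 60

def appraise(result_blob: bytes, tag: bytes) -> bool:
    # A single symmetric MAC check keeps the relying-party cost low,
    # which is the point of offloading appraisal to the verifier.
    expected = hmac.new(VERIFIER_KEY, result_blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False
    result = json.loads(result_blob)
    fresh = time.time() - result["iat"] <= MAX_AGE_S
    return fresh and result["status"] == "trusted"

# Usage: a token the verifier might have issued moments ago.
blob = json.dumps({"attester": "phone-123", "status": "trusted",
                   "iat": int(time.time())}).encode()
tag = hmac.new(VERIFIER_KEY, blob, hashlib.sha256).digest()
print(appraise(blob, tag))  # True
```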

    Automating System-Level Data-Interchange Software Through a System Interface Description Language

    Today's platforms, such as full mission simulators (FMSs), exhibit an unprecedented level of hardware and software system integration. In this context, system integrators face heterogeneous system interfaces which need to be aligned and interconnected in order to deliver a platform's intended capabilities. The aspect of system data exchanges alone is problematic, ranging from data misalignment up to multi-architecture environments running over varying kinds of communication protocols. Integrators face similar challenges when interoperating multiple platforms through distributed simulation environments, where each platform can be seen as a system with its own distinct interface. On the other hand, enabling system reuse across multiple platforms for product-line support is challenging for system suppliers, as they need to adapt system interfaces to heterogeneous platforms, thereby facing the same difficulties as integrators. Furthermore, introducing system interface changes in response to late business needs or unforeseen performance constraints, for instance, is even more arduous, as the impacts are hard to predict and their effects are often discovered late in the integration process. Consequently, this thesis tackles the need to simplify system integration and interoperability in order to reduce their associated costs and increase their effectiveness and efficiency. It is meant to bring new advances in the fields of system integration and system interoperability, notably by establishing a common taxonomy and by increasing the understanding of system interfaces, the various aspects impacting system data exchanges, multi-architecture environment considerations, and the factors enabling interface governance as well as system reuse.

    To this end, two research objectives have been formulated. The first objective aims at defining a language used to describe system interfaces and the various aspects surrounding their data exchanges. Three key aspects of system interfaces are therefore studied: the relevant language elements used to describe them, the modeling of system interfaces with the language, and the capture of multi-architecture considerations. The second objective aims at defining a method to automate the software responsible for system data exchanges as a way of simplifying the tasks involved in system integration and interoperability; model compilers and code-generation techniques are therefore studied. The demonstration of these objectives brings new advances in the state of the art of system integration and interoperability, culminating in a novel system interface description language, SIDL, used to capture system interfaces and the various aspects surrounding their data exchanges, as well as a new method for automating the system-level data-interchange software from system interfaces captured in this language. The advent of SIDL also contributes a new taxonomy providing a comprehensive perspective over system interoperability, as well as a common language which can be shared amongst stakeholders such as integrators, suppliers, and system experts. Being architecture-agnostic, SIDL provides a single architectural viewpoint overseeing all system interfaces and captures multi-architecture considerations, which had not been achieved prior to this work. Furthermore, a SIDL code generator is introduced which has the novelty of generating the data-interchange software from a richer pool of information, from high-level system relationships down to low-level protocol and encoding details. Because multi-architecture considerations are captured natively in SIDL, the code generator itself can be architecture-agnostic, making it reusable in other contexts. This thesis also paves the way for future research building upon its contributions and proposes a vision for software application development whose end goal is to push further the boundaries of simplifying and automating the tasks involved in system integration and interoperability.
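
    The abstract shows no SIDL syntax, so the sketch below invents a minimal dictionary-based interface description instead, only to illustrate the core idea of deriving data-interchange code from a declarative interface description rather than hand-writing it per system:

```python
import struct

# Illustration of generating data-interchange code from an interface
# description. The description format below is invented for this
# sketch; it is not SIDL, whose syntax the abstract does not show.
IFACE = {
    "message": "AircraftState",
    "encoding": ">",          # big-endian wire format (an assumption)
    "fields": [("latitude", "d"), ("longitude", "d"), ("altitude_ft", "i")],
}

def make_encoder(iface: dict):
    # "Model compiler" in miniature: derive the packing layout once
    # from the description, then emit a reusable encode function.
    fmt = iface["encoding"] + "".join(kind for _, kind in iface["fields"])
    names = [name for name, _ in iface["fields"]]
    def encode(**values) -> bytes:
        return struct.pack(fmt, *(values[n] for n in names))
    return encode

encode_state = make_encoder(IFACE)
wire = encode_state(latitude=45.5017, longitude=-73.5673, altitude_ft=35000)
print(len(wire))  # 20 bytes: two doubles + one int32
```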