
    An approach to map geography mark-up language data to resource description framework schema

    GML serves as the premier modeling language for representing geographic information tied to geographic locations. However, a known limitation of GML is its ability to integrate with the wide variety of geographical and GPS applications. Because GML stores data as coordinates and topology, integrating that data with applications on the Semantic Web requires mapping it to the Resource Description Framework (RDF) and Resource Description Framework Schema (RDFS). This paper presents an approach for mapping GML metadata to RDFS. The study focuses on a methodology for converting GML data into a richer, extended semantic representation, RDFS, since representation in RDF alone is not sufficient on the Semantic Web. First, we take a GML script from a case study and parse it with a GML parser to obtain an XML file. The XML file is then parsed with Java to produce a text file from which GML features are extracted and arranged in graph form. We then designed a prototype tool that maps these GML features to RDFS. The tool performs feature-by-feature mapping, and the results of mapping GML metadata to RDFS are presented in tabular form. © 2020, Springer Nature Singapore Pte Ltd.
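    The paper does not spell out the prototype tool's internals, but the core idea of turning parsed GML features into RDFS can be illustrated with a short, hedged sketch using the rdflib Python library. The feature dictionary, identifiers and the example.org namespace below are hypothetical stand-ins, not the authors' actual tool.

        # Minimal sketch: mapping a parsed GML feature to RDF/RDFS triples.
        # Feature layout and namespace are hypothetical, not the paper's tool.
        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/gml#")  # hypothetical namespace

        def feature_to_rdfs(feature: dict) -> Graph:
            """Emit an RDFS class for the feature type plus property triples."""
            g = Graph()
            g.bind("ex", EX)
            ftype = EX[feature["type"]]
            fnode = EX[feature["id"]]
            g.add((ftype, RDF.type, RDFS.Class))   # GML feature type -> RDFS class
            g.add((fnode, RDF.type, ftype))        # concrete feature -> instance of it
            for name, value in feature["attributes"].items():
                prop = EX[name]
                g.add((prop, RDF.type, RDF.Property))
                g.add((prop, RDFS.domain, ftype))
                g.add((fnode, prop, Literal(value)))
            return g

        # A GML Point feature whose coordinates stay literal, as in GML itself.
        g = feature_to_rdfs({"type": "Point", "id": "p42",
                             "attributes": {"pos": "52.52 13.40", "name": "Berlin"}})
        print(g.serialize(format="turtle"))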

    Ontology Based Data Access in Statoil

    Ontology Based Data Access (OBDA) is a prominent approach to querying databases in which an ontology exposes the data in a conceptually clear manner by abstracting away the technical schema-level details of the underlying data. The ontology is ‘connected’ to the data via mappings that allow queries posed over the ontology to be automatically translated into data-level queries executable by the underlying database management system. Despite a lot of attention from the research community, there are still few instances of real-world industrial use of OBDA systems. In this work we present data access challenges in the data-intensive petroleum company Statoil and our experience in addressing these challenges with OBDA technology. In particular, we have developed a deployment module to create ontologies and mappings from relational databases in a semi-automatic fashion; a query processing module to perform and optimise the translation of ontological queries into data queries and their execution over either a single DB or federated DBs; and a query formulation module to support query construction for engineers with a limited IT background. Our modules have been integrated into one OBDA system, deployed at Statoil, integrated with Statoil’s infrastructure, and evaluated with Statoil’s engineers and data.
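    The abstract's notion of mappings can be made concrete with a deliberately tiny sketch: each ontology class is tied to a SQL query over the source schema, so an ontological query unfolds into a union of those queries without the user ever seeing the schema. The class names, tables and subclass relation below are invented for illustration; production OBDA systems use R2RML-style mappings and far richer query rewriting.

        # Toy illustration of the OBDA idea: a mapping ties each ontology class
        # to a SQL query over the source schema. Names are invented examples.
        MAPPINGS = {
            "Wellbore":    "SELECT wb_id AS id FROM wellbore_dev WHERE active = 1",
            "Exploration": "SELECT we_id AS id FROM wellbore_exp",
        }

        def rewrite(classes: list[str]) -> str:
            """Rewrite a union of ontology class atoms into a single SQL query."""
            parts = [MAPPINGS[c] for c in classes]
            return "\nUNION\n".join(parts)

        # An engineer asks for "all wellbores"; subclass reasoning (Exploration
        # is a subclass of Wellbore) has already expanded the single class atom
        # into both mappings before the SQL rewriting step shown here.
        print(rewrite(["Wellbore", "Exploration"]))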

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management; to address the latest developments and future trends of the technologies; and to examine their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings which can contribute to academic research and also benefit the business and industrial communities. In the Internet and digital era, cross-media production and distribution are key developments and innovations, fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.

    Data Driven Adaptation of Heterogeneous Service-Oriented Processes

    In principle, the Data-Driven Process Adaptation (DDPA) approach is based on the concept of Dynamic Data Driven Application Systems (DDDAS) as stated by Darema in [8]. In accordance with the DDDAS notion, such systems support the utilization of appropriate information at specific decision points so as to make real systems more efficient. In this regard, DDPA accommodates the provision of adaptable service processes by exploiting information available in the process environment in addition to existing services. Adaptation in the context of our approach includes the identification and use of possible alternative execution paths for the achievement of the goals and sub-goals defined in a process; alternatives include the utilization of available related information and/or services (or service chains). Data-driven adaptation incorporates AI planning and context-aware computing techniques to support the identification of possible alternatives at deployment time. When calculating the possible alternatives, the goal of our approach is to reduce the number of execution steps, i.e. the number of process tasks, defined in the original process.
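    As a rough illustration of the planning step, the sketch below runs a breadth-first search over hypothetical services, each with preconditions and effects, and returns the alternative path with the fewest tasks that still reaches the goal. The services and facts are invented; the actual approach additionally folds context information into this search at deployment time.

        # Minimal planning sketch: BFS over service compositions to find the
        # alternative with the fewest tasks. Services and facts are hypothetical.
        from collections import deque

        # Each service: name -> (preconditions, effects), both as sets of facts.
        SERVICES = {
            "fetch_order":   (set(),                 {"order"}),
            "check_stock":   ({"order"},             {"stock_ok"}),
            "bulk_dispatch": ({"order"},             {"stock_ok", "shipped"}),
            "ship":          ({"order", "stock_ok"}, {"shipped"}),
        }

        def plan(initial: set, goal: set) -> list:
            """Return the shortest sequence of services whose effects reach the goal."""
            queue = deque([(frozenset(initial), [])])
            seen = {frozenset(initial)}
            while queue:
                state, path = queue.popleft()
                if goal <= state:
                    return path
                for name, (pre, eff) in SERVICES.items():
                    if pre <= state:
                        nxt = frozenset(state | eff)
                        if nxt not in seen:
                            seen.add(nxt)
                            queue.append((nxt, path + [name]))
            return []

        # BFS prefers the 2-step path via bulk_dispatch over the 3-step original.
        print(plan(set(), {"shipped"}))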

    Accelerating Event Stream Processing in On- and Offline Systems

    Due to a growing number of data producers and their ever-increasing data volume, the ability to ingest, analyze, and store potentially never-ending streams of data is a mission-critical task in today's data processing landscape. A widespread form of data streams are event streams, which consist of continuously arriving notifications about some real-world phenomena. For example, a temperature sensor naturally generates an event stream by periodically measuring the temperature and reporting it with measurement time in case of a substantial change from the previous measurement. In this thesis, we consider two kinds of event stream processing: online and offline. Online refers to processing events solely in main memory as soon as they arrive, while offline means processing event data previously persisted to non-volatile storage. Both modes are supported by widely used scale-out general-purpose stream processing engines (SPEs) like Apache Flink or Spark Streaming. However, such engines suffer from two significant deficiencies that severely limit their processing performance. First, for offline processing, they load the entire stream from non-volatile secondary storage and replay all data items into the associated online engine in order of their original arrival. While this naturally ensures unified query semantics for on- and offline processing, the costs for reading the entire stream from non-volatile storage quickly dominate the overall processing costs. Second, modern SPEs focus on scaling out computations across the nodes of a cluster, but use only a fraction of the available resources of individual nodes. This thesis tackles those problems with three different approaches. First, we present novel techniques for the offline processing of two important query types (windowed aggregation and sequential pattern matching). Our methods utilize well-understood indexing techniques to reduce the total amount of data to read from non-volatile storage. We show that this improves the overall query runtime significantly. In particular, this thesis develops the first index-based algorithms for pattern queries expressed with the Match_Recognize clause, a new and powerful language feature of SQL that has received little attention so far. Second, we show how to maximize resource utilization of single nodes by exploiting the capabilities of modern hardware. To this end, we develop a prototypical shared-memory CPU-GPU-enabled event processing system. The system provides implementations of all major event processing operators (filtering, windowed aggregation, windowed join, and sequential pattern matching). Our experiments reveal that regarding resource utilization and processing throughput, such a hardware-enabled system is superior to hardware-agnostic general-purpose engines. Finally, we present TPStream, a new operator for pattern matching over temporal intervals. TPStream achieves low processing latency and, in contrast to sequential pattern matching, is easily parallelizable even for unpartitioned input streams. This results in maximized resource utilization, especially for modern CPUs with multiple cores.
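    The offline argument can be illustrated with a hedged, self-contained sketch: keep a sorted secondary index over event timestamps so that a windowed aggregate touches only the matching slice of the persisted stream instead of replaying all of it. The data layout below is a simplification, not the thesis's actual index structures.

        # Sketch of the offline idea: a sorted timestamp index lets a windowed
        # aggregate skip most of the persisted stream. Layout is simplified.
        import bisect

        class PersistedStream:
            def __init__(self, events):
                # events: list of (timestamp, value) pairs in arrival order.
                self.events = events
                # Secondary index: timestamps in sorted order with positions.
                self.index = sorted((ts, pos) for pos, (ts, _) in enumerate(events))
                self.keys = [ts for ts, _ in self.index]

            def window_sum(self, start, end):
                """Aggregate only events whose timestamps fall in [start, end)."""
                lo = bisect.bisect_left(self.keys, start)
                hi = bisect.bisect_left(self.keys, end)
                return sum(self.events[pos][1] for _, pos in self.index[lo:hi])

        stream = PersistedStream([(1, 10.0), (4, 2.5), (2, 7.0), (9, 1.0)])
        print(stream.window_sum(2, 5))   # reads 2 events instead of all 4 -> 9.5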

    XATA 2006: XML: aplicações e tecnologias associadas

    This is the fourth conference on XML and Associated Technologies. The event has become a meeting point for everyone interested in the field, and it has been gratifying to see that participants enjoy it and try to return in later years. The core working group, the scientific committee, has also been enlarged, and all involved have collaborated willingly and with growing quality year after year. Writing this preface for the fourth time, I cannot resist sketching the evolution of XATA over these four years. 2003: In this first "meeting", about twenty papers were submitted, mostly authored or supervised by members of the organizing committee, which did not prevent strong participation and heated discussions. 2004: Participation from the Portuguese community was stronger, though the numbers were still modest. At this point we also invested in strong industry participation, which translated into an appreciable set of presentations of real-world cases. A formal review process for submitted papers was introduced. 2005: There was strong national and international uptake (from Spain and Brazil, which is all the more significant for an event that aims to privilege the Portuguese language). The geographic distribution within Portugal also grew, with more participating institutions. Several tasks were automated, such as the paper submission and review processes. 2006: In the current edition, and contrary to the national trend, there was significant growth. In every edition the organizing committee's goal has been to privilege scientific output and to give a voice to as many participants as possible. Accordingly, this year there will be no invited speakers, and the programme is entirely filled with presentations of the selected papers. Even so, there was a significant rejection rate, mainly due to the high number of submissions. This edition also introduced a day of tutorials, intended to give minimal competences to those who want to start working in the area and to let them follow the conference in a more informed way. Looking at the topics addressed across the four conferences, we can see an evolution towards greater maturity. While at the first meeting the papers addressed emerging problems in the use of the technology, at the second the main focus was on Web Services, a new XML-based technology; at the third, the emphasis was on building repositories, search engines and query languages; and in this fourth edition there is an almost homogeneous distribution across all topic areas, with papers even addressing scientific and technological aspects of the foundations of XML technology. We can therefore conclude that the technology, from the standpoint of use and application, has been mastered, and that the Portuguese community is beginning to contribute to the underlying science.

    Metadata management services for spatial data infrastructure

    Working with spatial data is "daily bread" at the Department of Geography at Humboldt-Universität zu Berlin. The success of research projects, staff members' work and students' university routines depends on high-quality data and resources. A couple of years ago the department's own Spatial Data Infrastructure (SDI) was founded to organize and publish these resources and the corresponding metadata. This virtual infrastructure offers a geoportal that allows the user to discover, visualize and (re-)use the department's spatial and aspatial resources. Maintaining this cooperative network aims at synergy effects such as a reduction in the cost of acquiring new resources. Moreover, the SDI can be used to support teaching activities and serve as a practical example in the curriculum. Central to the SDI are metadata: they provide a comprehensive, structured description of the department's resources and are a core piece of the geoportal's functionality for discovering and identifying data. The department's metadata catalogue serves as a container for the structured organization of these metadata. The goal of this project is the implementation of a new metadata management system for the department's Spatial Data Infrastructure. The resulting prototype was to be developed following the user-centric SDI (third-generation SDI) paradigm. This approach treats the requirements and feedback of the (possible future) user community as highly important and calls for an implementation process with continuous user participation. The methods "Joint Application Design" and "Rapid Prototyping" both rely on active user participation and were chosen and applied to support this concept. As a consequence, user assessments, information and dissemination activities, and the design and analysis of questionnaires occupy a prominent part of this study; the most important decisions during the implementation process were based on user feedback. Beforehand, department members were divided into (possible future) "users" and "experts". A small group of experts was asked to discuss and make fundamental decisions about the department's SDI development, and the community of users was invited to informative events and asked to fill out a questionnaire about the geoportal's usability and interface design. These events were expected to raise user interest, foster a user community and user participation, and provide information about the usage and benefits of the department's SDI. The SDI, as a communication and cooperation network, benefits from these activities in the long run.
    A preliminary software evaluation and the assessment of user requirements led to the decision that GeoNetwork open source was the most promising software to replace the department's current metadata management system. The technical development and implementation of the GeoNetwork prototype and its interfaces were accompanied by continuous feedback loops in accordance with the concept of "Rapid Prototyping": each new version of the prototype is followed by a presentation to users and the collection of feedback, which sets the agenda for further development. Members of the expert group were continuously invited to participate in the SDI implementation process. Discussions of fundamental SDI issues are meant to foster team building and bind the experts to the project; they are the ones who will take over custodianship of resources and metadata and therefore play central roles in maintaining the department's SDI. The thesis at hand describes the planning, design, realization and results of implementing this metadata management system prototype using methods built on user participation and feedback. Drawing on this particular case study, it discusses how these methods combine with a user-centric SDI approach and what that implies for user satisfaction and long-term SDI success. The final chapter offers a discussion of the implementation process and closes with an outlook on the possible short- and long-term development of the Department of Geography's SDI node.
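    Since GeoNetwork open source exposes a standard CSW (Catalogue Service for the Web) endpoint, metadata discovery of the kind the geoportal offers can be sketched with the OWSLib Python client. The endpoint URL and search term below are placeholders rather than the department's real deployment.

        # Hedged sketch: querying a GeoNetwork CSW endpoint with OWSLib.
        # URL and search term are placeholders, not the actual deployment.
        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")
        query = PropertyIsLike("csw:AnyText", "%landuse%")   # full-text filter
        csw.getrecords2(constraints=[query], maxrecords=10)
        for rec in csw.records.values():
            print(rec.title, "-", rec.identifier)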

    Spatiotemporal enabled Content-based Image Retrieval
