
    VGStore: A Multimodal Extension to SPARQL for Querying RDF Scene Graph

    Semantic Web technology has successfully equipped many RDF models with rich data representation methods. It also has the potential to represent and store multimodal knowledge bases such as multimodal scene graphs. However, most existing query languages, SPARQL in particular, barely explore implicit multimodal relationships such as semantic similarity and spatial relations. We first explored this issue by organizing a large-scale scene graph dataset, namely Visual Genome, in an RDF graph database. Based on the proposed RDF-stored multimodal scene graph, we extended SPARQL queries to answer questions involving relational reasoning about color, spatial relations, etc. A further demo (VGStore) shows the effectiveness of the customized queries and of displaying multimodal data. (Comment: ISWC 2022 Posters, Demos, and Industry Track)
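A minimal sketch of the kind of implicit spatial reasoning such an extended SPARQL query could expose, evaluated here in plain Python over bounding boxes. The object names, box values, and the `vg:leftOf` predicate are illustrative assumptions, not taken from Visual Genome or the VGStore vocabulary:

```python
# Sketch: evaluating an implicit spatial relation ("left of") over
# scene-graph objects with bounding boxes -- the kind of reasoning
# plain SPARQL leaves implicit. All names and values are illustrative.

def left_of(a, b):
    """True if box a lies entirely to the left of box b; boxes are (x, y, w, h)."""
    return a[0] + a[2] <= b[0]

objects = {
    "man":  (10, 40, 30, 80),
    "dog":  (60, 90, 25, 20),
    "tree": (120, 10, 40, 110),
}

# Analogue of a hypothetical query: SELECT ?o WHERE { :man vg:leftOf ?o }
left_of_man = sorted(o for o in objects
                     if o != "man" and left_of(objects["man"], objects[o]))
print(left_of_man)  # ['dog', 'tree']
```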

    EQL-CE: An Event Query Language for Connected Environment Management

    Recent technological advances have fueled the rise of connected environments (e.g., smart buildings and cities). Event Query Languages (EQLs) have been used to define (and later detect) events in these environments. However, existing languages are limited to the definition of event patterns and share the following limitations: (i) no consideration of the environment, sensor network, or application domain in their queries; (ii) no query types for defining and handling components and component instances; (iii) no support for the data and datatypes (e.g., scalar, multimedia) needed to define specific events; and (iv) difficulty coping with the dynamicity of the environments. To address these limitations, we propose an EQL specifically designed for connected environments, denoted EQL-CE. We describe its framework and detail its language, syntax, and queries. Finally, we illustrate the usage of EQL-CE with a smart mall example.
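A toy sketch of what a component- and datatype-aware event definition, as EQL-CE advocates, might look like once parsed. The event name, sensor identifiers, and threshold are hypothetical, not from the EQL-CE paper:

```python
# Sketch: an event definition tied to a sensor-network component and a
# datatype-aware condition, in the spirit of EQL-CE. Names are made up.
from dataclasses import dataclass

@dataclass
class EventDef:
    name: str
    component: str   # which component/instance the event concerns
    datatype: type   # supported datatype (scalar here; could be multimedia)
    threshold: float

    def matches(self, reading):
        return (reading["component"] == self.component
                and isinstance(reading["value"], self.datatype)
                and reading["value"] >= self.threshold)

smoke = EventDef("SmokeDetected", "smoke_sensor_1", float, 0.7)
readings = [
    {"component": "smoke_sensor_1", "value": 0.9},
    {"component": "smoke_sensor_1", "value": 0.2},
    {"component": "temp_sensor_3",  "value": 45.0},
]
print([r["value"] for r in readings if smoke.matches(r)])  # [0.9]
```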

    Model driven design and data integration in semantic web information systems

    The Web is quickly evolving in many ways. It has evolved from a Web of documents into a Web of applications, in which a growing number of designers share new and interactive Web applications with people all over the world. However, application design and implementation remain complex, error-prone and laborious. In parallel, there is an evolution from a Web of documents into a Web of 'knowledge', as a growing number of data owners share their data sources with a growing audience. This creates the potential for new applications of these data sources, including scenarios in which the datasets are reused and integrated with other existing and new data sources. However, the heterogeneity of these data sources in syntax, semantics and structure represents a great challenge for application designers. The Semantic Web is a collection of standards and technologies that offer solutions for at least the syntactic and some of the structural issues. It offers semantic freedom and flexibility, but this leaves the issue of semantic interoperability. In this thesis we present Hera-S, an evolution of the Model Driven Web Engineering (MDWE) method Hera. MDWE methods allow designers to create data-centric applications using models instead of programming. Hera-S especially targets Semantic Web sources and provides a flexible method for designing personalized adaptive Web applications. Hera-S defines several models that together define the target Web application. Moreover, we implemented a framework called Hydragen, which is able to execute the Hera-S models to run the desired Web application. Hera-S' core is the Application Model (AM), in which the main logic of the application is defined, i.e. the groups of data elements that form logical units or subunits, the personalization conditions, and the relationships between the units. Hera-S also uses a so-called Domain Model (DM) that describes the content and its structure.
However, this DM is not Hera-S specific; instead, any Semantic Web source representation can serve as the DM, as long as its content can be queried via the standardized Semantic Web query language SPARQL. The same holds for the User Model (UM). The UM can be used for personalization conditions, but also as a source of user-related content if necessary. In fact, the difference between DM and UM is conceptual, as their implementation within Hydragen is the same. Hera-S also defines a Presentation Model (PM), which defines presentation details of elements such as order and style. To help designers build their Web applications we introduce a toolset, Hera Studio, which allows the different models to be built graphically. Hera Studio also provides additional functionality such as model checking and deployment of the models in Hydragen. Both Hera-S and its implementation Hydragen are designed to be flexible regarding the use of models. To achieve this, Hydragen is a stateless engine that queries the models for relevant information at every page request. This allows the models and data to be changed in the datastore at runtime. We show that one way to exploit this flexibility is by applying aspect-orientation to the AM. Aspect-orientation allows us to dynamically inject functionality that pervades the entire application. Another way to exploit Hera-S' flexibility is in reusing specialized components, e.g. for presentation generation. We present a configuration of Hydragen in which we replace our native presentation generation functionality with the AMACONT engine. AMACONT provides more extensive multi-level presentation generation and adaptation capabilities, as well as aspect-orientation and a form of semantics-based adaptation. Hera-S was designed to allow the (re-)use of any (Semantic) Web data source. It even opens up the possibility of data integration at the back end, by using an extensible storage layer in our database of choice, Sesame.
However, even though theoretically possible, this still leaves much of the actual data integration issue unresolved. As this is a recurring issue in many domains, and a broader challenge than Hera-S design alone, we decided to look at it in isolation. We present a framework called Relco, which provides a language to express data transformation operations as well as a collection of techniques that can be used to (semi-)automatically find relationships between concepts in different ontologies. This is done with a combination of syntactic, semantic and collaboration techniques, which together provide strong clues as to which concepts are most likely related. To prove the applicability of Relco we explore five application scenarios in different domains for which data integration is a central aspect. One is a cultural heritage portal, Explorer, for which data from several data sources was integrated and made available through a map view, a timeline and a graph view. Explorer also allows users to provide metadata for objects via a tagging mechanism. Another application is SenSee: an electronic TV guide and recommender. TV-guide data was integrated and enriched with semantically structured data from several sources, and recommendations are computed by exploiting the underlying semantic structure. ViTa was a project in which several techniques for tagging and searching educational videos were evaluated, including scenarios in which user tags are related to an ontology, or to other tags, using the Relco framework. The MobiLife project targeted the facilitation of a new generation of mobile applications using context-based personalization. This can be done with a context-based user profiling platform that can also be used for user-model data exchange between mobile applications, using technologies like Relco.
The final application scenario is from the GRAPPLE project, which targeted the integration of adaptive technology into current learning management systems. A large part of this integration is achieved by using a user modeling component framework in which any application can store user model information, and which can also be used for the exchange of user model data.
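The statelessness the abstract describes, with the engine re-querying the models at every page request so that model edits take effect immediately, can be sketched as follows. The unit names, elements, and condition lambdas are toy stand-ins, not the real Hera-S Application Model:

```python
# Sketch of a Hydragen-style stateless engine: each request re-reads the
# application model, so runtime changes to the model take effect at the
# next request. Model content here is purely illustrative.

app_model = {
    "home":  {"elements": ["title", "intro"],
              "condition": lambda user: True},
    "admin": {"elements": ["user_list"],
              "condition": lambda user: user.get("role") == "admin"},
}

def render(page, user):
    unit = app_model[page]            # model queried fresh per request
    if not unit["condition"](user):   # personalization condition
        return []
    return unit["elements"]

print(render("admin", {"role": "admin"}))      # ['user_list']
app_model["home"]["elements"].append("news")   # change model at runtime
print(render("home", {}))                      # ['title', 'intro', 'news']
```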

    Towards Semantically Enabled Complex Event Processing


    Nomothesi@ API - reengineering the electronic platform

    The objective of this thesis is to contribute to the representation of legal knowledge and its integration into the area of Open Data in Greece, both from a technological perspective and in terms of transparency. Nomothesi@ is a platform that provides access to Greek legislation by means of a legal XML/RDF syntax and linked data. This new version of Nomothesi@ proposes replacing the previous XML standard for Greek legal documents with a new RDF one, a new Spring MVC architecture, and many REST services such as a SPARQL endpoint. Linking data is about interlinking and openly publishing Greek public data and legislative data across the EU in order to enhance e-government.
On these fundamentals, we tried to extend Nomothesi@ with a unified RDF Schema, in order to create a RESTful API that serves the rich semantic information Greek legislation has to offer and encourages further, more complex projects based on web services for searching and browsing legislation.

    Connected Information Management

    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several levels of disconnectedness: applications partition data into segregated islands, small notes don't fit into traditional application categories, navigating the data is different for each kind of data, and data is either available at a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these forms of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means for organization such as tagging. The heterogeneity of the data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with metadata. Generic means for organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse. All data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
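The REMM idea of deriving editing forms from a data description, much as a database front end derives forms from a table schema, can be sketched as follows. The `Note` class and its property list are hypothetical, not the actual REMM vocabulary:

```python
# Sketch of REMM-style form generation: input fields are derived from a
# class description rather than hand-coded per application. The schema
# below is illustrative, not REMM's real model.

note_schema = {
    "class": "Note",
    "properties": [("title", "string"),
                   ("created", "date"),
                   ("tags", "list")],
}

def form_fields(schema):
    """Render one labelled input field per declared property."""
    return [f"{name} <{dtype}>" for name, dtype in schema["properties"]]

print(form_fields(note_schema))
# ['title <string>', 'created <date>', 'tags <list>']
```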

    Deliverable D2.2 Specification of lightweight metadata models for multimedia annotation

    This deliverable presents a state-of-the-art and requirements analysis report for the LinkedTV metadata model, as part of WP2 of the LinkedTV project. More precisely, we first provide a comprehensive overview of numerous multimedia metadata formats and standards that have been proposed by various communities: the broadcast industry, the multimedia analysis industry, the news and photo industry, the web community, etc. We then derive a number of requirements for a LinkedTV metadata model. Next, we present what will become the LinkedTV metadata ontology: a set of built-in classes and properties added to a number of widely used vocabularies for representing the different metadata dimensions used in LinkedTV, namely legacy metadata, covering both broadcast information in the wide sense and content metadata, and multimedia analysis results at a very fine-grained level. We finally provide a set of useful SPARQL queries that have been evaluated in order to show the usefulness and expressivity of our proposed ontology.

    Image retrieval using automatic region tagging

    The task of tagging, annotating or labelling image content automatically with semantic keywords is a challenging problem. Automatically tagging images semantically, based on the objects that they contain, is essential for image retrieval. In addressing these problems, we explore techniques developed to combine textual descriptions of images with visual features, automatic region tagging and region-based ontology image retrieval. To evaluate the techniques, we use three corpora comprising: Lonely Planet travel guide articles with images, Wikipedia articles with images, and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of different retrieval similarity measures for the textual component. We also analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination for using textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better at capturing the semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine-learning algorithm. We focus on the synergy between ontologies and image annotations, with the aim of reducing the gap between image features and high-level semantics.
Ontologies ease information retrieval: they are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process, and conversely, keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that surrogates concepts derived from image feature descriptors. We test the usability of the constructed ontology by querying it via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontology and image annotations is possible, and that this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing the text accompanying images with visual features, automatic region tagging, and using an ontology to enrich the semantic meaning of the tagged image regions.
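The equal-weight linear combination the abstract describes is simply an unweighted average of per-feature similarity scores. A minimal sketch, with feature scores and candidate labels invented for illustration:

```python
# Sketch: equal-weight linear combination of per-feature similarity
# scores for region tagging -- all features contribute equally, with no
# learned weights. The candidate labels and scores are made up.

def combined_score(scores):
    """Equal-weight linear combination of per-feature similarities."""
    return sum(scores) / len(scores)

# Similarity of a query region to two candidate labels, one score per
# feature (e.g. two shape descriptors and a colour histogram):
candidates = {
    "goat": [0.8, 0.6, 0.7],
    "rock": [0.4, 0.5, 0.3],
}
best = max(candidates, key=lambda c: combined_score(candidates[c]))
print(best)  # goat
```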