    S-RDF: A New RDF Serialization Format for Better Storage Without Losing Human Readability

    Nowadays, RDF data is becoming increasingly popular on the Web thanks to advances in the Semantic Web and the Linked Open Data initiatives. Several works focus on transforming relational databases to RDF by storing related data in the N-Triples serialization format. However, these approaches do not take into account the existing normalization of the databases, since the N-Triples format allows data redundancy and does not enforce any normalization by itself. Moreover, the most widely used and recommended serialization formats, such as RDF/XML, Turtle, and HDT, either offer high human readability but waste storage capacity, or prioritize storage capacity while providing low human readability. To overcome these limitations, we propose a new serialization format, called S-RDF. By considering the structure (graph) and the values of the RDF data separately, S-RDF reduces the duplication of values by using unique identifiers. Results show an important improvement over the existing serialization formats in terms of storage (up to 71.66% w.r.t. N-Triples) and human readability.
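
    As a rough illustration of the idea (a minimal Python sketch over a toy triple set; this is not the authors' S-RDF implementation), the storage savings come from interning every distinct term once and expressing the graph structure over integer identifiers:

    # Minimal sketch: store each distinct term once and keep the graph
    # structure as triples of integer identifiers, so repeated values are
    # not duplicated as they would be in N-Triples.
    triples = [
        ("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:carol", "foaf:knows", "ex:bob"),   # "ex:bob" repeats
        ("ex:alice", "foaf:name", "Alice"),
    ]

    values = {}     # term -> unique integer identifier
    structure = []  # triples expressed over identifiers

    def intern(term):
        """Assign a unique id to each distinct term, reusing existing ids."""
        if term not in values:
            values[term] = len(values)
        return values[term]

    for s, p, o in triples:
        structure.append((intern(s), intern(p), intern(o)))

    print(values)     # each term stored exactly once
    print(structure)  # [(0, 1, 2), (3, 1, 2), (0, 4, 5)]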

    RiAiR: A Framework for Sensitive RDF Protection

    The Semantic Web and the Linked Open Data (LOD) initiatives promote the integration and combination of RDF data on the Web. In some cases, data need to be analyzed and protected before publication in order to avoid the disclosure of sensitive information. However, existing RDF protection techniques cannot ensure that sensitive information will not be discovered, since all RDF resources are linked in the Semantic Web and the combination of different datasets could produce or disclose unexpected sensitive information. In this context, we propose a framework, called RiAiR, which reduces the complexity of the RDF structure in order to decrease the interaction required from the expert user when classifying RDF data into identifiers, quasi-identifiers, etc. An intersection process suggests disclosure sources that could compromise the data. Moreover, through a generalization method, we decrease the connections among resources to comply with the main objectives of integration and combination in the Semantic Web. Results show the viability and high performance of the approach in a scenario with heterogeneous, linked datasets.
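
    The intersection step can be pictured with a small sketch (names and data are illustrative assumptions, not the RiAiR code): resources appearing both in the dataset to be published and in an external linked dataset are flagged as potential disclosure sources for the expert to review.

    def shared_resources(local_triples, external_triples):
        """Return resources appearing in both datasets."""
        local = {term for triple in local_triples for term in triple}
        external = {term for triple in external_triples for term in triple}
        return local & external

    local_ds = [("ex:patient1", "ex:livesIn", "ex:Lyon")]
    external_ds = [("ex:Lyon", "ex:population", "513000")]
    print(shared_resources(local_ds, external_ds))  # {'ex:Lyon'} -> review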

    Semantic Web Datatype Inference: Towards Better RDF Matching

    In the context of RDF document matching/integration, the datatype information attached to literal objects is an important aspect to analyze in order to better determine similar RDF documents. In this paper, we propose a datatype inference process based on four steps: (i) predicate information analysis (i.e., deduce the datatype from an existing range property); (ii) analysis of the object value itself by a pattern-matching process (i.e., recognize the object's lexical space); (iii) semantic analysis of the predicate name and its context; and (iv) generalization of numeric and binary datatypes to ensure integration. We evaluated the performance and the accuracy of our approach on datasets from DBpedia. Results show that the execution time of the inference process is linear and that its accuracy reaches up to 97.10%.
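
    Steps (ii) and (iv) can be sketched as follows (the patterns, their order, and the generalization table are assumptions for illustration, not the paper's exact rules):

    import re

    # Step (ii): recognize the lexical space of the object value.
    PATTERNS = [
        (re.compile(r"^(true|false)$", re.I), "xsd:boolean"),
        (re.compile(r"^[+-]?\d+$"), "xsd:integer"),
        (re.compile(r"^[+-]?\d*\.\d+$"), "xsd:decimal"),
        (re.compile(r"^\d{4}-\d{2}-\d{2}$"), "xsd:date"),
    ]

    # Step (iv): generalize numeric datatypes so documents using, e.g.,
    # xsd:integer and xsd:decimal can still be matched against each other.
    GENERALIZE = {"xsd:integer": "xsd:decimal", "xsd:decimal": "xsd:decimal"}

    def infer_datatype(value: str) -> str:
        for pattern, datatype in PATTERNS:
            if pattern.match(value):
                return GENERALIZE.get(datatype, datatype)
        return "xsd:string"  # fallback when no lexical space is recognized

    print(infer_datatype("42"))          # xsd:decimal (generalized)
    print(infer_datatype("2017-05-30"))  # xsd:date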

    Organization-Based Access Control Modeling for Cooperative Enterprises

    This work develops an organization- and ontology-based access control approach that allows users from different companies to share and access resources located in different places, by modeling roles, resources, and actions as ontologies. The process is semi-automatic, thanks to ontology similarity algorithms and the automatic creation of new access control rules, with administrator supervision of resource usage.
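
    A minimal sketch of such an access check (the flat rule table is a hypothetical simplification of the ontology-based model described above):

    rules = {
        # (organization, role, action, resource) -> permitted?
        ("companyA", "engineer", "read", "designDocs"): True,
        ("companyB", "engineer", "read", "designDocs"): True,  # shared resource
        ("companyB", "intern", "write", "designDocs"): False,
    }

    def is_permitted(org, role, action, resource):
        """Deny by default; permit only when an explicit rule allows it."""
        return rules.get((org, role, action, resource), False)

    print(is_permitted("companyB", "engineer", "read", "designDocs"))  # True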

    Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds

    This work explores the use of a new gesture-based interaction built on automatic recognition of Soundpainting, a structured gestural language. In the proposed approach, a composer (called a Soundpainter) performs Soundpainting gestures facing a Microsoft Kinect sensor. A gesture recognition system then captures the gestures and sends them to sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter had to improvise with 6 different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated, as well as the Soundpainter's user experience. In addition, a user evaluation study on using the proposed system in a learning context was conducted. Current results open up perspectives for the design of new artistic expressions based on automatic gesture recognition supported by the Soundpainting language.
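
    The recognition-to-sound pipeline can be sketched as follows (the gesture labels, confidence threshold, and generator call are illustrative assumptions, not the show's actual configuration):

    # Map recognized Soundpainting gestures to sounds or control actions.
    SOUND_FOR_GESTURE = {
        "whole_group": "pad_chord.wav",
        "play": "kick_loop.wav",
        "volume_up": None,  # control gesture: changes a parameter instead
    }

    def on_gesture_recognized(label: str, confidence: float) -> None:
        """Forward a recognized gesture to the sound generator."""
        if confidence < 0.8:  # assumed threshold to avoid false triggers
            return
        sample = SOUND_FOR_GESTURE.get(label)
        if sample is not None:
            print(f"trigger {sample}")  # stand-in for the generator call

    on_gesture_recognized("play", 0.93)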

    Interface malicieuse: Installation or Multimedia Performance?

    Presented at alt.IHM, Proceedings of the 31st Francophone Conference on Human-Machine Interaction (IHM 2019), Grenoble, France. Interface malicieuse is a performative installation project. A fixed camera and a microphone capture the room and what happens in it. The state of the digital system is defined by a set of transformations of the captured signals, together with their associated tuning parameters. Through its presence, or by manipulating the materials and physical objects in the room, the audience interacts directly with image and sound. This is the basic scheme of an interactive installation. The originality here is that, incognito among the spectators, there is an operator/performer who can modify the state of the system through signs, attitudes, or gestures. Through this interface, he directs the course of the performance, playing mischievously with the stability of the system and creating confusion between what the audience sees and what is actually happening in the room.

    A Study on the Simultaneous Use of Two Modalities for the Recognition of SoundPainting Gestures

    Nowadays, gestures are being adopted as a new modality in the field of Human-Computer Interaction (HCI), where physical movements of the whole body can perform an almost unlimited range of actions. Soundpainting is a language of artistic composition that has been in use for more than forty years. However, work on the recognition of SoundPainting gestures is limited and does not take into account the movements of the fingers and the hand, which constitute an essential part of SoundPainting. In this context, we conducted a study to explore the combination of 3D postures and muscle activity for the recognition of SoundPainting gestures. To carry out this study, we created a SoundPainting database of 17 gestures with data from two sensors (Kinect and Myo). We formulated four hypotheses concerning recognition accuracy. The results made it possible to characterize the best sensor according to the typology of the gesture, and to show that a "simple" combination of the two sensors does not necessarily improve recognition, that a combination of features is not necessarily more efficient than a single well-chosen feature, and finally that changing the acquisition frequency of the data provided by these sensors does not have a significant impact on gesture recognition.
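
    The "simple" combination tested in the study corresponds to naive early fusion of the two sensor streams, sketched below (feature dimensions are assumptions for illustration only):

    import numpy as np

    # Per-frame features from each sensor (assumed shapes).
    kinect_frame = np.random.rand(75)  # e.g., 25 joints x 3D coordinates
    myo_frame = np.random.rand(8)      # e.g., 8 EMG channels

    # Naive early fusion: concatenate and feed a single classifier.
    fused = np.concatenate([kinect_frame, myo_frame])
    print(fused.shape)  # (83,)

    # The study's finding: this naive concatenation does not necessarily
    # beat a single well-chosen feature set, so the fusion strategy should
    # be validated per gesture typology rather than assumed to help.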

    A Multi-modal Visual Emotion Recognition Method to Instantiate an Ontology

    Human emotion recognition from visual expressions is an important research area in computer vision and machine learning, owing to its significant scientific and commercial potential. Since visual expressions can be captured from different modalities (e.g., facial expressions, body posture, hand pose), multi-modal methods are becoming popular for analyzing human reactions. In contexts where human emotion detection is performed to associate emotions with certain events or objects, to support decision making or further analysis, it is useful to keep this information in semantic repositories, which offer a wide range of possibilities for implementing smart applications. We propose a multi-modal method for human emotion recognition and an ontology-based approach that stores the classification results in EMONTO, an extensible ontology for modeling emotions. The multi-modal method analyzes facial expressions, body gestures, and features from the body and the environment to determine an emotional state; it processes each modality with a specialized deep learning model and applies a fusion method. Our fusion method, called EmbraceNet+, consists of a branched architecture that integrates the EmbraceNet fusion method with other fusion methods. We experimentally evaluate our multi-modal method on an adaptation of the EMOTIC dataset. Results show that our method outperforms single-modal methods.
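
    A rough sketch of a branched fusion head in the spirit of the above (layer sizes and the averaging combination rule are assumptions; the actual EmbraceNet+ architecture differs):

    import torch
    import torch.nn as nn

    class BranchedFusion(nn.Module):
        def __init__(self, dims=(128, 64, 32), hidden=64, classes=7):
            super().__init__()
            # One projection branch per modality (face, body, context, ...).
            self.branches = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
            self.classifier = nn.Linear(hidden, classes)

        def forward(self, modalities):
            projected = [torch.relu(b(x))
                         for b, x in zip(self.branches, modalities)]
            fused = torch.stack(projected).mean(dim=0)  # simple average fusion
            return self.classifier(fused)

    model = BranchedFusion()
    inputs = [torch.randn(1, d) for d in (128, 64, 32)]
    print(model(inputs).shape)  # torch.Size([1, 7]) emotion logits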

    An Ontology for Modeling Cultural Heritage Knowledge in Urban Tourism

    Urban tourism information available on the Internet has been enormously relevant in motivating tourism in many countries. Many applications focus on promoting and preserving cultural heritage through urban tourism, which in turn demands a well-defined, standard model for representing the whole knowledge of this domain, thus ensuring interoperable and flexible applications. Current studies propose the use of ontologies to formally model such knowledge. Nonetheless, most of them represent only partial knowledge of cultural heritage or are restricted to an indoor perspective (i.e., museum ontologies). In this context, we propose the CURIOCITY ontology (Cultural Heritage for Urban Tourism in Indoor/Outdoor environments of the CITY) to represent cultural heritage knowledge based on UNESCO's definitions. The CURIOCITY ontology has a three-level architecture (Upper, Middle, and Lower ontologies), in keeping with its aim of modularity and distinct levels of specificity. In this paper, we describe in detail all modules of the CURIOCITY ontology and perform a comparative evaluation against state-of-the-art ontologies. Additionally, to demonstrate the suitability of the CURIOCITY ontology, we show several touristic services offered through a framework supported by the ontology. The framework includes an automatic population process that transforms a museum data repository (in CSV format) into RDF triples of the CURIOCITY ontology, automatically populating the CURIOCITY repository, as well as facilities to develop a set of tourism applications and services following UNESCO's definitions.
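
    The automatic population step can be pictured with a short rdflib sketch (the namespace, class, and property names are placeholders, not the actual CURIOCITY vocabulary):

    import csv
    import io
    from rdflib import Graph, Literal, Namespace, RDF

    CURIO = Namespace("http://example.org/curiocity#")  # placeholder namespace
    csv_data = io.StringIO("id,name,period\nm1,Moche vessel,100-800 AD\n")

    # Turn each museum CSV row into RDF triples and add them to the graph.
    g = Graph()
    for row in csv.DictReader(csv_data):
        artifact = CURIO[row["id"]]
        g.add((artifact, RDF.type, CURIO.CulturalObject))
        g.add((artifact, CURIO.name, Literal(row["name"])))
        g.add((artifact, CURIO.period, Literal(row["period"])))

    print(g.serialize(format="turtle"))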

    Multimodal Emotional Understanding in Robotics

    In the context of Human-Robot Interaction (HRI), emotional understanding is becoming more popular because it makes robots more humanized and user-friendly. Giving a robot the ability to recognize emotions presents several difficulties due to the limits of the robot's hardware and of the real-world environments in which it works. In this sense, an out-of-robot, multimodal approach can be the solution. This paper presents the implementation of a previously proposed multi-modal emotional system in the context of social robotics; it runs on a server and bases its prediction on four modalities as inputs (face, posture, body, and context features) captured through the robot's sensors, and the predicted emotion triggers changes in the robot's behavior. Working on a server makes it possible to overcome the robot's hardware limitations, at the cost of some communication delay. Working with several modalities allows complex real-world scenarios to be handled robustly and adaptively. This research focuses on analyzing, explaining, and arguing for the usability and viability of an out-of-robot, multimodal approach to emotional robots. Functionality tests were applied with the expected results, demonstrating that the entire proposed pipeline takes around two seconds; this delay is attributable to the deep learning models used, which can be improved. Regarding the HRI evaluations, a brief discussion of the remaining assessments is presented, explaining how difficult a thorough evaluation of this work can be. A demonstration of the system's functionality can be seen at https://youtu.be/MYYfazSa2N0
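
    The out-of-robot loop can be sketched as follows (the endpoint URL, payload schema, and HTTP transport are illustrative assumptions; the paper does not prescribe them):

    import requests  # assumed transport between the robot and the server

    SERVER_URL = "http://emotion-server.local:8000/predict"  # placeholder

    def recognize_and_react(face_img, posture, body, context):
        """Ship sensor captures to the server, then adapt robot behavior."""
        payload = {"face": face_img, "posture": posture,
                   "body": body, "context": context}
        # Round trip takes around two seconds per the abstract.
        response = requests.post(SERVER_URL, json=payload, timeout=5.0)
        emotion = response.json()["emotion"]
        print(f"robot behavior switched for emotion: {emotion}")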