3,845 research outputs found

    Interoperability between classification systems using metadata

    The First on-Line Metadata and Semantics Research Conference (MTSR'05): Approaches to advanced information systems, 21-30 November 2005. Metadata are structures which catalogue, classify, describe and articulate electronic information. The Subject element of Dublin Core is used for classification systems and subject headings. There are five ways of applying semantic interoperability: between controlled vocabularies in the same language; between controlled vocabularies in different languages and classification systems; between subject headings and classification systems; between classification systems; and between languages. The relations between these different types of standards or systems present different difficulties. The electronic information container, the Internet, encourages the drive towards interoperability of content analysis, whether between classification systems or subject headings. The organisation of information in physical formats has transferred its organisational forms to the structuring of electronic information; the digital format transforms the organisational form itself. If, in information, the message is the medium, then in organisation the structure is the medium.
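The crosswalk idea above — carrying both a classification number and a mapped subject heading in the Dublin Core Subject element — can be sketched as follows. The DDC-to-LCSH mappings and the record shape are illustrative assumptions, not an authoritative concordance.

```python
# Illustrative crosswalk between a classification system (DDC) and a
# subject-heading system (LCSH) via the Dublin Core Subject element.
# The mapping entries below are hypothetical examples only.

DDC_TO_LCSH = {
    "025.3": "Cataloging",
    "006.7": "Metadata",
    "004.678": "Internet",
}

def dc_subject_record(ddc_class):
    """Build a Dublin Core record whose Subject carries both the
    classification number and the mapped subject heading."""
    heading = DDC_TO_LCSH.get(ddc_class)
    if heading is None:
        return None
    return {
        "dc:subject": [
            {"scheme": "DDC", "value": ddc_class},
            {"scheme": "LCSH", "value": heading},
        ]
    }

record = dc_subject_record("006.7")
```

A consumer that understands either scheme can then resolve the same record, which is the interoperability property the abstract describes.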

    De-Fragmenting Knowledge: Using Metadata for Interconnecting Courses

    E-learning systems are often based on the notion of a "course": an interconnected set of resources aiming at presenting material related to a particular topic. Course authors do provide external links to related material; such external links are, however, "frozen" at the time of publication of the course. Metadata are useful for classifying and finding e-learning artifacts. In many cases, metadata are used by Learning Management Systems to import, export, sequence and present learning objects. The use of metadata by humans is in general limited to a search functionality, e.g. by authors who search for material that can be reused. We argue that metadata can be used to enrich the interconnection among courses, and to present to the student a richer variety of interconnected resources. We implemented a system that realises an instance of this idea.
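The interconnection idea above can be sketched with shared subject metadata: a student viewing one course is offered others that overlap in subject keywords. The course identifiers and keywords here are invented for illustration; the paper's actual system may use richer metadata.

```python
# Toy sketch: interconnect courses through shared subject metadata,
# instead of author-maintained ("frozen") external links.
# Course IDs and keywords are fabricated for illustration.

courses = {
    "DB101":  {"dc:subject": {"databases", "SQL"}},
    "WEB200": {"dc:subject": {"HTML", "SQL"}},
    "HIST10": {"dc:subject": {"history"}},
}

def related(course_id):
    """Return course IDs sharing at least one subject keyword."""
    subjects = courses[course_id]["dc:subject"]
    return sorted(cid for cid, meta in courses.items()
                  if cid != course_id and meta["dc:subject"] & subjects)

links = related("DB101")
```

Because the links are computed from metadata at view time, adding a new course with overlapping subjects extends the interconnections without editing existing courses.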

    Compliance Using Metadata

    Everybody talks about the data economy. Data is collected, stored, processed and re-used. In the EU, the GDPR creates a framework with conditions (e.g. consent) for the processing of personal data. But there are also other legal provisions containing requirements and conditions for the processing of data. Even today, most of these are hard-coded into workflows or database schemas, if at all. Data lakes are polluted with unusable data because nobody knows about usage rights or data quality. The approach presented here makes the data lake intelligent: it remembers usage limitations and promises made to the data subject or the contractual partner. Data can be used because the associated risk can be assessed. Such a system reacts easily to new requirements. If processing is recorded back into the data lake, this information makes it possible to prove compliance, which can be shown to authorities on demand as an audit trail. The concept is best exemplified by the SPECIAL project https://specialprivacy.eu (Scalable Policy-aware Linked Data Architecture For Privacy, Transparency and Compliance). SPECIAL has several use cases, but the basic framework is applicable beyond those cases.
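The core mechanism — records that remember their consented purposes, with every processing decision logged for audit — can be sketched minimally. The names (`Record`, `can_process`, `audit_log`) are illustrative assumptions, not part of the SPECIAL project's actual API.

```python
# Minimal sketch of a "policy-aware" data lake in the spirit described
# above: each record carries the purposes the data subject consented
# to, and every processing check is logged so compliance can later be
# demonstrated to an authority. Names are illustrative, not SPECIAL's API.

from dataclasses import dataclass

@dataclass
class Record:
    data: dict
    allowed_purposes: set

audit_log = []

def can_process(record, purpose):
    """Permit processing only for consented purposes; record every
    decision as an audit-trail entry."""
    ok = purpose in record.allowed_purposes
    audit_log.append({"purpose": purpose, "granted": ok})
    return ok

r = Record(data={"email": "a@example.org"},
           allowed_purposes={"billing"})

ok_billing = can_process(r, "billing")      # consented purpose
ok_marketing = can_process(r, "marketing")  # not consented
```

New legal requirements then mean changing policy metadata, not rewriting hard-coded workflow logic.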

    Using metadata for content indexing within an OER network

    This paper outlines the ICT solution for a metadata portal indexing open educational resources within a network of institutions. The network is aimed at blending academic and entrepreneurial knowledge, by enabling higher education institutions to publish various academic learning resources, e.g. video lectures, course planning materials, or thematic content, whereas enterprises can present different forms of expert knowledge, such as case studies, expert presentations on specific topics, demonstrations of software implementation in practice and the like. As these resources need to be discoverable, accessible and shared by potential learners across the learning environment, it is very important that they are well described and tagged in a standard way, in machine-readable form, by metadata. Only then can they be successfully used and reused, especially once a large number of these resources has accumulated, which makes it hard for the user to locate efficiently those of interest. The metadata set adopted in our approach relies on two standards: Dublin Core and Learning Object Metadata. The aim of the metadata and the corresponding metadata portal described in this paper is to provide structured access to information on open educational resources within the network.
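A record combining the two standards named above might look as follows. The exact field selection is an assumption for illustration; the portal's real schema may include further Dublin Core elements and LOM categories.

```python
# Illustrative sketch of a combined Dublin Core / LOM-style metadata
# record for an open educational resource. Field choices are
# assumptions, not the portal's actual schema.

def make_oer_record(title, resource_type, keywords):
    return {
        # Dublin Core elements
        "dc:title": title,
        "dc:type": resource_type,
        "dc:subject": keywords,
        # LOM educational category (illustrative subset)
        "lom:educational": {
            "learningResourceType": resource_type,
            "intendedEndUserRole": "learner",
        },
    }

rec = make_oer_record("Intro to Databases", "video lecture",
                      ["databases", "SQL"])
```

Because the record is machine-readable and standards-based, both the portal's search index and external harvesters can consume it without custom parsing.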

    Android Malware Characterization using Metadata and Machine Learning Techniques

    Android malware has emerged as a consequence of the increasing popularity of smartphones and tablets. While most previous work focuses on inherent characteristics of Android apps to detect malware, this study analyses indirect features and metadata to identify patterns in malware applications. Our experiments show that: (1) the permissions used by an application offer only moderate performance results; (2) other features publicly available at Android markets are more relevant in detecting malware, such as the application developer and certificate issuer; and (3) compact and efficient classifiers can be constructed for the early detection of malware applications prior to code inspection or sandboxing. Comment: 4 figures, 2 tables and 8 pages.
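The idea of classifying apps from market metadata rather than code can be sketched with a tiny nearest-centroid classifier over two hand-made features. The features and data points are fabricated for illustration; the paper's actual features include the application developer and certificate issuer, and its classifiers are trained on real market data.

```python
# Toy nearest-centroid classifier over metadata-derived features:
# (number of requested permissions, developer reputation score).
# All data here are fabricated for illustration only.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(x, benign_centroid, malware_centroid):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return ("malware"
            if dist2(x, malware_centroid) < dist2(x, benign_centroid)
            else "benign")

benign  = [(4, 0.9), (6, 0.8), (5, 0.95)]    # few permissions, reputable dev
malware = [(18, 0.1), (22, 0.2), (15, 0.05)] # many permissions, unknown dev

cb, cm = centroid(benign), centroid(malware)
label = classify((20, 0.15), cb, cm)
```

The point of the sketch is the pipeline shape — metadata in, label out, no code inspection — not the particular classifier, which in the paper would be a trained model.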

    Schema integration using metadata

    Includes bibliographical references (p. 10-11). Work was supported in part by Reuters and the International Financial Services Research Center at the Massachusetts Institute of Technology. Michael Siegel, Stuart E. Madnick.

    Improving Livestreaming Latency Using Metadata

    In live media streaming, the latency between media capture at a sender and playback at a receiver is optimized to improve user experience. However, media analysis and editing algorithms (like object/sound/speech recognition) that operate on the stream can introduce delays; thus, media may not be transmittable immediately after capture due to delays introduced by processing and potential modification. This disclosure describes techniques for variable-latency streaming, where the playback latency relative to the live edge varies during playback depending on instructions that are generated based on content analysis of the stream. The instructions can be multiplexed into the live stream as timed metadata samples and demultiplexed by the player application at the receiver. The user can set preferences that dictate whether and how the receiver follows the instructions.
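The receiver-side behaviour can be sketched as follows: timed metadata samples carry a requested latency, and the player applies the most recent instruction at or before the current position, subject to the user's preference. The message format and field names are invented for illustration.

```python
# Sketch of variable-latency playback driven by timed metadata.
# Field names (pts_ms, latency_ms) are illustrative assumptions.

DEFAULT_LATENCY_MS = 2000

def target_latency(metadata_samples, now_ms, user_allows=True):
    """Return the playback latency requested by the most recent
    metadata instruction at or before now_ms, if the user opted in."""
    if not user_allows:
        return DEFAULT_LATENCY_MS
    latency = DEFAULT_LATENCY_MS
    for sample in sorted(metadata_samples, key=lambda s: s["pts_ms"]):
        if sample["pts_ms"] <= now_ms:
            latency = sample["latency_ms"]
    return latency

stream = [
    {"pts_ms": 0,    "latency_ms": 2000},  # normal live latency
    {"pts_ms": 5000, "latency_ms": 6000},  # processing step needs headroom
]
```

Multiplexing the instructions as timed metadata keeps them frame-accurate and lets a player that ignores them (or a user who opts out) fall back to the default latency.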

    You are your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information

    Metadata are associated with most of the information we produce in our daily interactions and communication in the digital world. Yet, surprisingly, metadata are often still categorized as non-sensitive. Indeed, in the past, researchers and practitioners have mainly focused on the problem of identifying a user from the content of a message. In this paper, we use Twitter as a case study to quantify the uniqueness of the association between metadata and user identity and to understand the effectiveness of potential obfuscation strategies. More specifically, we analyze atomic fields in the metadata and systematically combine them in an effort to classify new tweets as belonging to an account, using different machine learning algorithms of increasing complexity. We demonstrate that, through the application of a supervised learning algorithm, we are able to identify any user in a group of 10,000 with approximately 96.7% accuracy. Moreover, if we broaden the scope of our search and consider the 10 most likely candidates, we increase the accuracy of the model to 99.22%. We also found that data obfuscation is hard and ineffective for this type of data: even after perturbing 60% of the training data, it is still possible to classify users with an accuracy higher than 95%. These results have strong implications for the design of metadata obfuscation strategies, for example for data set release, not only for Twitter but, more generally, for most social media platforms. Comment: 11 pages, 13 figures. Published in the Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM 2018), June 2018, Stanford, CA, USA.
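The underlying intuition — that a combination of metadata fields can act as a fingerprint — can be illustrated with a trivial exact-match index. The field names mirror Twitter's public metadata, but the accounts and values are fabricated, and the paper's actual method uses trained classifiers over many fields rather than exact matching.

```python
# Toy illustration of metadata as a fingerprint: if a small tuple of
# account-level fields is unique within a population, a new tweet
# carrying those fields re-identifies the account. Data fabricated.

FIELDS = ("followers_count", "friends_count", "account_created")

def fingerprint(meta):
    return tuple(meta[f] for f in FIELDS)

def build_index(accounts):
    index = {}
    for user_id, meta in accounts.items():
        index.setdefault(fingerprint(meta), []).append(user_id)
    return index

accounts = {
    "alice": {"followers_count": 120, "friends_count": 80,
              "account_created": "2015-03-01"},
    "bob":   {"followers_count": 5,   "friends_count": 40,
              "account_created": "2019-07-12"},
}

index = build_index(accounts)
new_tweet_meta = {"followers_count": 120, "friends_count": 80,
                  "account_created": "2015-03-01"}
candidates = index.get(fingerprint(new_tweet_meta), [])
```

This also hints at why obfuscation is hard: perturbing one field still leaves the remaining combination highly discriminative, which a learned classifier can exploit.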