
    Multimedia Standards

    The aim of this paper is to review some of the standards connected with multimedia and their metadata. We start with the MPEG family: MPEG-21 provides an open framework for multimedia delivery and consumption, while MPEG-7 is a multimedia content description standard. With the growth of the Internet, several formats were proposed for describing media scenes. Some of them are open standards, such as VRML, X3D, SMIL, SVG, MPEG-4 BIFS, MPEG-4 XMT, MPEG-4 LASeR, and COLLADA, published by ISO, W3C, and other bodies. Television has become the most important mass medium, and standards such as MHEG, DAVIC, Java TV, MHP, GEM, OCAP, and ACAP have been developed for it. Efficient video streaming is also presented. A large number of standards exist for representing audiovisual metadata; we cover the Material Exchange Format (MXF), the Digital Picture Exchange (DPX), and the Digital Cinema Package (DCP).

    Preface

    In recent years, the research community has demonstrated considerable interest in Digital Culture (DC) and its online presentation, accessibility, exploitation, preservation, and reuse through new tools and services to model, analyse, and visualize vast amounts of DC data. The focus is on the search for innovations, especially in areas relevant to data management and processing: innovative and creative tools for approaching cultural assets; applications and services for better access to and exploitation of the rich and diverse digital cultural heritage in a sustainable way; intelligent curation; creative use, reuse, and remixing; reinterpretation, study, understanding, and analysis; personalization, adaptation, and semantics; protection, restoration, and preservation. The research community deals with important issues of handling data that directly affect the economy (represented by the creative and re-creative industries), the public sector (cultural institutions such as museums, libraries, and galleries), education, and society as a whole. This special issue of Serdica Journal of Computing aims to contribute to a deeper understanding of DC.

    TECHNICAL OBSTACLES TO SPEECH INTELLIGIBILITY

    As voice communication increasingly relies on technological devices, understanding the intricate relationship between equipment properties and communication outcomes becomes imperative. The purpose of this study is to investigate the impact of the technical condition of communication equipment on the intelligibility of voice communications. To achieve this goal, qualitative analysis of voice signals was used, together with statistical methods for establishing the relevant dependencies. The importance of the technical characteristics of voice signals for speech intelligibility was considered. The influence of the technical parameters of electronic communication equipment and of the working environment on these characteristics, and thus on speech intelligibility, was discussed. On the basis of the research, conclusions were drawn highlighting the serious influence of the communication equipment's characteristics and the expertise of the service personnel on speech intelligibility.
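    As a minimal illustration of the statistical side of the study, the Python sketch below correlates a single equipment-related channel parameter with intelligibility scores; the variable names and numbers are hypothetical, not data from the study.

        import statistics  # statistics.correlation needs Python 3.10+

        # Hypothetical measurements: channel signal-to-noise ratio (dB)
        # and the fraction of words correctly understood at that SNR.
        snr_db = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
        intelligibility = [0.42, 0.61, 0.74, 0.85, 0.91, 0.95]

        # Pearson correlation quantifies how strongly intelligibility
        # depends on the equipment-determined channel quality.
        r = statistics.correlation(snr_db, intelligibility)
        print(f"Pearson r = {r:.3f}")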

    Creating a Probabilistic Model for WordNet

    We present a probabilistic model for extracting and storing information from WordNet and the British National Corpus. We map the data into a directed probabilistic graph that can be used to compute the conditional probability between a pair of English words. For example, the graph can be used to deduce that there is a 10% probability that someone who is interested in dogs is also interested in the word “canine”. We propose three ways of computing this probability, where the best results are achieved by performing multiple random walks in the graph. Unlike existing approaches that only process the structured data in WordNet, we process all available information, including the natural language descriptions. The available evidence is expressed as simple Horn clauses with probabilities and is then aggregated using a Markov Logic Network model to create the probabilistic graph. We experimentally validate the quality of the data on five benchmarks that contain collections of word pairs and their semantic similarity as determined by humans. In the experimental section, we show that our random walk algorithm with a logarithmic distance metric produces a higher correlation with the results of human judgment on three of the five benchmarks, and a better overall average correlation, than the current state-of-the-art algorithms.
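    The random-walk estimate described in this abstract can be illustrated with a short Python sketch. The edge probabilities below are toy values standing in for the weights the paper derives from WordNet, the British National Corpus, and the Markov Logic Network aggregation; the structure of the walk, not the numbers, is the point.

        import random
        from collections import defaultdict

        # Toy directed probabilistic graph: (source, target, probability).
        # In the paper these weights come from WordNet relations, the BNC,
        # and Horn-clause evidence aggregated with a Markov Logic Network.
        EDGES = [
            ("dog", "canine", 0.10),
            ("dog", "pet", 0.55),
            ("pet", "animal", 0.40),
            ("canine", "animal", 0.30),
        ]

        graph = defaultdict(list)
        for src, dst, p in EDGES:
            graph[src].append((dst, p))

        def random_walk_probability(start, target, walks=10000, max_steps=5):
            """Estimate P(target | start) as the fraction of random walks
            from `start` that reach `target`."""
            hits = 0
            for _ in range(walks):
                node = start
                for _ in range(max_steps):
                    r, acc, nxt = random.random(), 0.0, None
                    for dst, p in graph[node]:
                        acc += p
                        if r < acc:
                            nxt = dst
                            break
                    if nxt is None:  # remaining mass: the walk stops here
                        break
                    node = nxt
                    if node == target:
                        hits += 1
                        break
            return hits / walks

        print(random_walk_probability("dog", "canine"))  # roughly 0.10 here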

    Semantic Document Clustering Using Information from WordNet and DBPedia

    Semantic document clustering is a type of unsupervised learning in which documents are grouped together based on their meaning. Unlike traditional approaches that cluster documents based on common keywords, this technique can group documents that share no words in common, as long as they are on the same subject. We compute the similarity between two documents as a function of the semantic similarity between the words and phrases in the documents. We model information from WordNet and DBPedia as a probabilistic graph that can be used to compute the similarity between two terms. We experimentally validate our algorithm on the Reuters-21578 benchmark, which contains 11,362 newswire stories grouped into 82 categories by human judgment. We apply the k-means clustering algorithm to group the documents using two similarity metrics: one based on keyword matching and one that uses the probabilistic graph. We show that the second approach produces higher precision and recall, which corresponds to better alignment with the classification done by human experts.
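    A minimal sketch of this clustering pipeline is given below. Because a purely pairwise semantic metric has no vector-space centroid, the sketch uses k-medoids in place of k-means, and a stub similarity table stands in for the WordNet/DBPedia probabilistic graph; all document and term data are made up for illustration.

        import random

        # Stand-in for the graph-based similarity between two terms.
        SYNONYMS = {("dog", "puppy"), ("dog", "canine"), ("stock", "share")}

        def term_similarity(a, b):
            if a == b:
                return 1.0
            if (a, b) in SYNONYMS or (b, a) in SYNONYMS:
                return 0.8
            return 0.0

        def doc_similarity(d1, d2):
            """Average best-match similarity of d1's terms against d2."""
            if not d1 or not d2:
                return 0.0
            return sum(max(term_similarity(t1, t2) for t2 in d2) for t1 in d1) / len(d1)

        def cluster(docs, k, iters=10):
            """Similarity-based k-medoids clustering of term-set documents."""
            medoids = random.sample(range(len(docs)), k)
            for _ in range(iters):
                groups = [[] for _ in range(k)]
                for i, d in enumerate(docs):
                    best = max(range(k), key=lambda c: doc_similarity(d, docs[medoids[c]]))
                    groups[best].append(i)
                # Re-pick each medoid as the member most similar to its group.
                medoids = [
                    max(g, key=lambda m: sum(doc_similarity(docs[i], docs[m]) for i in g))
                    if g else medoids[j]
                    for j, g in enumerate(groups)
                ]
            return groups

        docs = [{"dog", "canine"}, {"puppy"}, {"stock", "market"}, {"share", "market"}]
        print(cluster(docs, k=2))  # typically separates animal from finance documents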

    Defining Network Activity Patterns Using First Order Temporal Logics

    Part of network management is collecting information about the activities that go on around a distributed system and analyzing it in real time, at a deferred moment, or both. One reason such information may be stored in log files and analyzed later is to data-mine it so that interesting, unusual, or abnormal patterns can be discovered. In this paper we propose defining patterns in network activity logs using a dialect of First Order Temporal Logic (FOTL), called First Order Temporal Logic with Duration Constraints (FOTLDC). This logic is powerful enough to describe most network activity patterns because it can handle both causal and temporal correlations. Existing results for data-mining patterns with a similar structure give us confidence that discovering FOTLDC patterns in network activity logs can be done efficiently.
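    The paper's formalism is not reproduced here, but the kind of pattern FOTLDC is meant to express, a causal correlation under a duration constraint, can be sketched over a toy event log in Python; all names below are illustrative.

        from dataclasses import dataclass

        @dataclass
        class Event:
            name: str
            time: float  # seconds since the start of the log

        def followed_by_within(log, first, second, max_delay):
            """Toy check of the pattern: every `first` event is followed
            by a `second` event within `max_delay` seconds."""
            for e in log:
                if e.name == first and not any(
                    f.name == second and 0 < f.time - e.time <= max_delay
                    for f in log
                ):
                    return False
            return True

        log = [Event("syn", 0.0), Event("syn_ack", 0.2),
               Event("syn", 5.0), Event("syn_ack", 5.1)]
        print(followed_by_within(log, "syn", "syn_ack", max_delay=1.0))  # True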

    The Third National Information Day: Open Access to Scientific Information

    The paper presents in brief the “Open Access Infrastructure for Research in Europe” project and what has been done in Bulgaria in the area of open access to scientific information.

    The Balkans in 2010: economic scenarios
