    Generalised Decision Level Ensemble Method for Classifying Multi-media Data

    In recent decades, multimedia data have been commonly generated and used in various domains, such as healthcare and social media, because of their ability to capture rich information. But as they are unstructured and separated, how to fuse and integrate multimedia datasets and then learn from them effectively has been a major challenge for machine learning. We present a novel generalised decision level ensemble method (GDLEM) that combines multimedia datasets at the decision level. After extracting features from each multimedia dataset separately, the method trains models independently on each media dataset and then employs a generalised selection function to choose the appropriate models to construct a heterogeneous ensemble. The selection function is defined as a weighted combination of two criteria: the accuracy of individual models and the diversity among the models. The framework is tested on multimedia data and compared with other heterogeneous ensembles. The results show that GDLEM is more flexible and effective.
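    The abstract does not spell out the selection function beyond its two criteria. A minimal sketch in Python, assuming a convex weight w between validation accuracy and pairwise disagreement (one common diversity measure) and a greedy selection order, neither of which is confirmed by the abstract:

        import numpy as np

        def disagreement(preds_a, preds_b):
            # Fraction of validation samples on which two models disagree
            # (an assumed diversity measure; the paper may use another).
            return np.mean(preds_a != preds_b)

        def select_models(predictions, accuracies, k, w=0.5):
            # Greedily pick k models scored by w * accuracy + (1 - w) * diversity.
            # predictions: dict name -> array of predicted labels on a validation set
            # accuracies:  dict name -> validation accuracy of that model
            selected = []
            candidates = set(predictions)
            while candidates and len(selected) < k:
                def score(name):
                    if not selected:
                        div = 0.0
                    else:
                        div = np.mean([disagreement(predictions[name], predictions[s])
                                       for s in selected])
                    return w * accuracies[name] + (1 - w) * div
                best = max(candidates, key=score)
                selected.append(best)
                candidates.remove(best)
            return selected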

    Living with the Semantic Gap: Experiences and remedies in the context of medical imaging

    Semantic annotation of images is a key concern for the newly emerged applications of semantic multimedia. Machine-processable descriptions of images make it possible to automate a variety of tasks, from search and discovery to composition and collage of image databases. However, the persistent problem of the semantic gap between low-level descriptors and the high-level interpretation of an image poses new challenges and needs to be addressed before the full potential of semantic multimedia can be realised. We explore the possibilities and lessons learnt with applied semantic multimedia from our engagement with medical imaging, where we deployed ontologies and a novel distributed architecture to provide semantic annotation, decision support, and methods for tackling the semantic gap problem.

    A survey on big multimedia data processing and management in smart cities

    Integration of embedded multimedia devices with powerful computing platforms, e.g., machine learning platforms, helps to build smart cities and transforms the concept of the Internet of Things into the Internet of Multimedia Things (IoMT). To provide different services to the residents of smart cities, IoMT technology generates big multimedia data. The management of big multimedia data is a challenging task for IoMT technology: without proper management, it is hard to maintain the consistency, reusability, and reconcilability of the big multimedia data generated in smart cities. Various machine learning techniques can be used for automatic classification of raw multimedia data and to allow machines to learn features and perform specific tasks. In this survey, we focus on the machine learning platforms that can be used to process and manage the big multimedia data generated by different applications in smart cities. We also highlight the limitations and research challenges that need to be considered when processing big multimedia data in real time.

    A compiler extension for parallelizing arrays automatically on the Cell heterogeneous processor

    This paper describes the approaches taken to extend an array programming language compiler using a Virtual SIMD Machine (VSM) model for parallelizing array operations on the Cell Broadband Engine heterogeneous machine. This development is part of ongoing work at the University of Glasgow on array compilers that benefit applications in many areas, such as graphics, multimedia, image processing, and scientific computation. Our extended compiler, which is built upon the VSM interface, eases the parallelization process by allowing automatic parallelization without the need for any annotations or process directives. The preliminary results demonstrate significant improvements, especially on data-intensive applications.
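    Neither the VSM interface nor the compiler transformation is shown in the abstract. As a loose, hypothetical analogy to the idea it describes, a whole-array operation split across processing elements with no annotations in the source program, here is a minimal Python sketch:

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def parallel_apply(op, a, workers=4):
            # Split one whole-array operation into chunks and run them in
            # parallel; the caller writes only the array expression, and the
            # runtime handles distribution (the annotation-free property the
            # VSM-based compiler automates for the Cell's processing elements).
            chunks = np.array_split(a, workers)
            with ThreadPoolExecutor(max_workers=workers) as pool:
                results = list(pool.map(op, chunks))
            return np.concatenate(results)

        # Usage: the "program" is just an array expression.
        x = np.arange(1_000_000, dtype=np.float32)
        y = parallel_apply(lambda c: np.sqrt(c) * 2.0 + 1.0, x)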

    The use of hypermedia to increase the productivity of software development teams

    Rapid progress in low-cost, commercial PC-class multimedia workstation technology will potentially have a dramatic impact on the productivity of distributed work groups of 50-100 software developers. Hypermedia/multimedia involves the seamless integration, in a graphical user interface (GUI), of a wide variety of data structures, including high-resolution graphics, maps, images, voice, and full-motion video. Hypermedia will normally require the manipulation of large dynamic files, for which relational database technology and SQL servers are essential. Basic machine architecture, special-purpose video boards, video equipment, optical memory, the software needed for animation, network technology, and the anticipated increase in productivity that will result from the introduction of hypermedia technology are covered. It is suggested that the cost of the hardware and software to support an individual multimedia workstation will be on the order of $10,000.

    Human-computer interaction: Guidelines for web animation

    Human-computer interaction in the large is an interdisciplinary area which attracts researchers, educators, and practitioners from many different fields. Human-computer interaction studies a human and a machine in communication; it draws on supporting knowledge from both the machine side and the human side. This paper is concerned with the human side of human-computer interaction and focuses on animation. The growing use of animation in Web pages testifies to the increasing ease with which such multimedia features can be created. This trend shows a commitment to animation that is often unmatched by the skill of the implementers. The paper presents a set of guidelines and tips to help designers prepare better and more effective Web sites. These guidelines are drawn from an extensive literature survey.

    Understanding How Image Quality Affects Deep Neural Networks

    Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high-quality image datasets, yet in practical applications the input images cannot be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of four state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortion: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.
    Comment: Final version will appear in IEEE Xplore in the Proceedings of the Conference on the Quality of Multimedia Experience (QoMEX), June 6-8, 2016.
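    The distortion pipeline described here is straightforward to reproduce. A minimal sketch, assuming Pillow and NumPy, that applies blur, additive Gaussian noise, reduced contrast, and JPEG compression before an image reaches a classifier; the classify callable is a hypothetical stand-in for any trained network, not the specific models evaluated in the paper:

        import io
        import numpy as np
        from PIL import Image, ImageEnhance, ImageFilter

        def blur(img, radius=3):
            return img.filter(ImageFilter.GaussianBlur(radius))

        def add_noise(img, sigma=20.0):
            arr = np.asarray(img).astype(np.float32)
            arr += np.random.normal(0.0, sigma, arr.shape)
            return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

        def reduce_contrast(img, factor=0.5):
            return ImageEnhance.Contrast(img).enhance(factor)

        def jpeg_compress(img, quality=10):
            buf = io.BytesIO()
            img.convert("RGB").save(buf, format="JPEG", quality=quality)
            buf.seek(0)
            return Image.open(buf)

        def accuracy_under(distort, images, labels, classify):
            # classify: hypothetical PIL-image -> label function (any trained model).
            hits = sum(classify(distort(img)) == lab for img, lab in zip(images, labels))
            return hits / len(images)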