
    Advanced content-based semantic scene analysis and information retrieval: the SCHEMA project

    The aim of the SCHEMA Network of Excellence is to bring together a critical mass of universities, research centres, industrial partners and end users in order to design a reference system for content-based semantic scene analysis, interpretation and understanding. Relevant research areas include content-based multimedia analysis and automatic annotation of semantic multimedia content, combined textual and multimedia information retrieval, the Semantic Web, the MPEG-7 and MPEG-21 standards, and user interfaces and human factors. In this paper, recent advances in content-based analysis, indexing and retrieval of digital media within the SCHEMA Network are presented. These advances will be integrated into the SCHEMA module-based, expandable reference system.

    Classification Modeling for Malaysian Blooming Flower Images Using Neural Networks

    Image processing is a rapidly growing research area of computer science, and image classification remains a challenging problem in computer vision. For flower images, the difficulty is mainly due to strong similarities in colour and texture across species. Properties of the image itself, such as variation in illumination, shadows on the object's surface, size, shape, rotation and position, background clutter, and the state of blooming or budding, can also affect the classification techniques used. This study aims to develop a classification model for Malaysian blooming flowers using a neural network trained with the back-propagation algorithm. Each flower image is represented by a Region of Interest (ROI) from which texture and colour features are extracted. A total of 960 images were extracted from 16 types of flowers. Each ROI was represented by three colour attributes (Hue, Saturation and Value) and four texture attributes (Contrast, Correlation, Energy and Homogeneity). In the training and testing phases, experiments were carried out to observe the classification performance of neural networks when difficult-to-learn patterns were duplicated (referred to as DOUBLE), which may help explain why some flower images are difficult for classifiers to learn. Results show that the overall accuracy of the neural network with DOUBLE is 96.3%, while on the actual data set it is 68.3%; the accuracy obtained with Logistic Regression on the actual data set is 60.5%. The Decision Tree results indicate that the highest accuracy obtained by Chi-Squared Automatic Interaction Detection (CHAID) and Exhaustive CHAID (EX-CHAID) is merely 42% with DOUBLE. These findings indicate that the neural network with the DOUBLE data set outperforms Logistic Regression and Decision Trees, and therefore shows potential for building a Malaysian blooming flower model. Future studies could focus on increasing the sample size and the ROI, which may lead to higher accuracy. The developed flower model can be used as part of a future Malaysian blooming flower recognition system in which colour and texture are needed for flower identification.
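
    The abstract describes a seven-value ROI representation (mean Hue, Saturation and Value plus four grey-level co-occurrence texture properties) fed to a back-propagation neural network. The sketch below is a minimal illustration of that pipeline, not the authors' code: the file handling, the MLP size and the use of OpenCV, scikit-image and scikit-learn are assumptions, and "DOUBLE" is shown simply as duplicating hard-to-learn training patterns.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.neural_network import MLPClassifier

def roi_features(bgr_roi):
    """Return [H, S, V, contrast, correlation, energy, homogeneity] for one ROI."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    h, s, v = (hsv[:, :, i].mean() for i in range(3))

    gray = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "correlation", "energy", "homogeneity")]
    return [h, s, v] + texture

def double_hard_patterns(X, y, hard_idx):
    """'DOUBLE': repeat the difficult-to-learn patterns in the training set."""
    return np.vstack([X, X[hard_idx]]), np.concatenate([y, y[hard_idx]])

# X: feature vectors for the 960 ROIs, y: labels for the 16 flower types
# (assumed already loaded). A small back-propagation network, size assumed:
clf = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd", max_iter=2000)
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```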

    Some Issues in the Art Image Database Systems

    In this paper we discuss several aspects of art image databases: the spread of multimedia art images; the main characteristics of art images; the main search models for art images; characteristics unique to art image retrieval; and the importance of the sensory and semantic gaps. In addition, we present several features of an art image database, such as image indexing, feature extraction, analysis at various levels of precision, and style classification. We emphasise colour features and their basis, painting analysis and painting styles. We also study which MPEG-7 descriptors are best suited to fine-art painting retrieval. An experimental system was developed to evaluate how these descriptors perform on 900 art images from several notable art periods. On the basis of our experiments, suggestions are given for improving the search and analysis of fine-art images.
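
    As a rough illustration of colour-descriptor-based retrieval of the kind evaluated above, the sketch below ranks paintings by the distance between coarse HSV histograms. This is only a stand-in for the MPEG-7 colour descriptors studied in the paper; the bin counts, the L1 distance and the use of OpenCV are assumptions.

```python
import cv2
import numpy as np

def hsv_signature(path, bins=(16, 4, 4)):
    """Coarse, normalised HSV histogram used as a global colour signature."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-9)

def rank_paintings(query_path, gallery_paths):
    """Rank gallery images by L1 distance to the query signature (closest first)."""
    q = hsv_signature(query_path)
    dists = [(p, float(np.abs(q - hsv_signature(p)).sum())) for p in gallery_paths]
    return sorted(dists, key=lambda t: t[1])
```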

    Content-based video copy detection using multimodal analysis

    Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Master's), Bilkent University, 2009. Includes bibliographical references (leaves 67-76).
    The huge and increasing amount of video broadcast over networks has raised the need for automatic video copy detection for copyright protection. Recent developments in multimedia technology have introduced content-based copy detection (CBCD) as a research field alternative to the watermarking approach for the identification of video sequences. This thesis presents a multimodal framework for matching video sequences using a three-step approach. First, a high-level face detector identifies facial frames/shots in a video clip; matching faces with extended body regions gives the flexibility to discriminate the same person (e.g., an anchorman or a political leader) in different events or scenes. In the second step, a spatiotemporal sequence matching technique is employed to match video clips/segments that are similar in terms of activity. Finally, the non-facial shots are matched using low-level visual features. In addition, we use a fuzzy logic approach to colour histogram extraction in order to detect shot boundaries in heavily manipulated video clips. Methods for detecting noise, frame drops and picture-in-picture transformation windows, and for extracting masks for still regions, are also proposed and evaluated. The proposed method was tested on the query and reference datasets of the CBCD task of TRECVID 2008, and our results were compared with those of the eight most successful techniques submitted to this task. Experimental results show that the proposed method performs better than most state-of-the-art techniques in terms of both effectiveness and efficiency.
    Küçüktunç, Onur. M.S.
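
    The shot-boundary step mentioned in the abstract relies on colour histograms computed per frame. The sketch below shows only that low-level building block, a simple histogram-difference detector; the thesis' fuzzy-logic membership functions are not reproduced, and the hue-only histogram, bin count and threshold are assumptions.

```python
import cv2
import numpy as np

def shot_boundaries(video_path, bins=32, threshold=0.5):
    """Return frame indices where the hue histogram changes abruptly."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, boundaries, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).flatten()
        hist /= hist.sum() + 1e-9
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(idx)  # large L1 jump => candidate shot boundary
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```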