2,690 research outputs found
Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.
Hybrid image representation methods for automatic image annotation: a survey
In most automatic image annotation systems, images are represented with low-level features using either global
methods or local methods. Global methods treat the entire image as a single unit. Local methods divide images into blocks, adopting fixed-size sub-image blocks as sub-units, or into regions, using segmented regions as sub-units. In contrast to typical automatic image annotation methods that use either global or local features exclusively, several recent methods incorporate both kinds of information, on the premise that combining the two levels of features is
beneficial for annotating images. In this paper, we provide a
survey of automatic image annotation techniques from the perspective of
one aspect, feature extraction, and, to complement
existing surveys in the literature, we focus on an emerging class of image annotation methods: hybrid methods that combine both global and local features for image representation.
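To make the global/local distinction concrete, here is a minimal sketch of a hybrid descriptor that concatenates a whole-image colour histogram (global) with per-block histograms (local). The function names, grid size, and bin counts are illustrative assumptions, not notation from the surveyed methods.

import numpy as np

def colour_histogram(image, bins=8):
    # Quantise each RGB channel into `bins` levels and count joint
    # occurrences over all pixels -- the "global" view when applied
    # to the whole image.
    q = image // (256 // bins)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def block_histograms(image, grid=4, bins=4):
    # Divide the image into a grid of fixed-size blocks and compute a
    # histogram per block -- the "local" view.
    h, w, _ = image.shape
    feats = [colour_histogram(image[i * h // grid:(i + 1) * h // grid,
                                    j * w // grid:(j + 1) * w // grid], bins)
             for i in range(grid) for j in range(grid)]
    return np.concatenate(feats)

def hybrid_feature(image):
    # Hybrid representation: concatenate global and local descriptors.
    return np.concatenate([colour_histogram(image), block_histograms(image)])

image = np.random.randint(0, 256, (64, 64, 3))   # stand-in for real pixels
print(hybrid_feature(image).shape)               # (512 + 16 * 64,) = (1536,)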
Techniques for effective and efficient fire detection from social media images
Social media could provide valuable information to support decision making in
crisis management, such as in accidents, explosions and fires. However, much of
the data from social media are images, which are uploaded at a rate that makes
it impossible for humans to analyze them all. Despite the many works on image
analysis, there are no studies of fire detection on social media. To fill this
gap, we propose the use and evaluation of a broad set of content-based image
retrieval and classification techniques for fire detection. Our main
contributions are: (i) the development of the Fast-Fire Detection method
(FFDnR), which combines a feature extractor with evaluation functions to support
instance-based learning, (ii) the construction of an annotated set of images
with ground-truth depicting fire occurrences -- the FlickrFire dataset, and
(iii) the evaluation of 36 efficient image descriptors for fire detection.
Using real data from Flickr, our results showed that FFDnR was able to achieve
a precision for fire detection comparable to that of human annotators.
Therefore, our work shall provide a solid basis for further developments on
monitoring images from social media.
Comment: 12 pages, Proceedings of the International Conference on Enterprise Information Systems. Specifically: Marcos Bedo, Gustavo Blanco, Willian Oliveira, Mirela Cazzolato, Alceu Costa, Jose Rodrigues, Agma Traina, Caetano Traina, 2015, Techniques for effective and efficient fire detection from social media images, ICEIS, 34-4
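As a hedged illustration of the instance-based approach described above (not the authors' actual FFDnR descriptor, thresholds, or any of the 36 evaluated descriptors), one can pair a simple colour-based descriptor with a k-nearest-neighbour vote:

import numpy as np

def fire_colour_descriptor(image):
    # Assumed heuristic: fraction of pixels in a rough flame colour range
    # (red dominant, green moderate, blue low).
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    mask = (r > 150) & (b < 100) & (r > g) & (g > b)
    return np.array([mask.mean()])

def knn_classify(query, train_feats, train_labels, k=3):
    # Instance-based learning: label a query image by majority vote
    # among its k nearest neighbours in descriptor space.
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = train_labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()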
The aceToolbox: low-level audiovisual feature extraction for retrieval and classification
In this paper we present an overview of a software platform developed within the aceMedia project, termed the aceToolbox, which provides global and local low-level feature extraction from audio-visual content. The toolbox is based on the MPEG-7 eXperimental Model (XM), with extensions to provide descriptor extraction from arbitrarily shaped image segments, thereby supporting local descriptors that reflect real image content. We describe the architecture of the toolbox and provide an overview of the descriptors supported to date. We also briefly describe the segmentation algorithm provided. We then demonstrate the usefulness of the toolbox in two different content-processing scenarios: similarity-based retrieval in large collections and scene-level classification of still images.
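A minimal sketch of the idea of extracting a descriptor from an arbitrarily shaped segment rather than the whole frame: restrict the computation to the pixels selected by a binary segment mask. This is an assumed illustration, not aceToolbox or MPEG-7 XM code.

import numpy as np

def masked_colour_histogram(image, mask, bins=8):
    # Compute a colour histogram over only the pixels belonging to an
    # arbitrarily shaped segment, given as a boolean mask.
    pixels = image[mask]                       # (n_pixels, 3) inside segment
    q = pixels // (256 // bins)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / max(hist.sum(), 1.0)

image = np.random.randint(0, 256, (48, 48, 3))
mask = np.zeros((48, 48), dtype=bool)
mask[10:30, 5:40] = True                       # stand-in for a real segment
print(masked_colour_histogram(image, mask).sum())  # approx. 1.0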
Video Data Visualization System: Semantic Classification And Personalization
We present in this paper an intelligent video data visualization tool, based
on semantic classification, for retrieving and exploring a large-scale corpus
of videos. Our work is based on semantic classification resulting from semantic
analysis of video. The resulting classes are projected into the visualization
space as a graph of nodes and edges: the nodes are the keyframes of video
documents, and the edges represent the relations between documents and document
classes. Finally, we construct a user profile, based on interaction with the
system, to better adapt the system to the user's preferences.
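A hedged sketch of the graph structure the abstract describes, with keyframes as nodes and document-to-class relations as edges; the identifiers and class names are made up for illustration.

import networkx as nx

G = nx.Graph()
# Nodes: keyframes of video documents (hypothetical identifiers).
for keyframe in ["video1_kf3", "video2_kf1", "video3_kf7"]:
    G.add_node(keyframe, kind="keyframe")
# Nodes: semantic classes obtained from the classification step.
for cls in ["sports", "news"]:
    G.add_node(cls, kind="class")
# Edges: relations between documents (via their keyframes) and classes.
G.add_edge("video1_kf3", "sports")
G.add_edge("video2_kf1", "sports")
G.add_edge("video3_kf7", "news")
print(sorted(G.neighbors("sports")))   # keyframes linked to this class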
Using a lightweight multimedia content model for semantic annotation
In this paper we discuss the use of a multimedia content model for automatic extraction of semantic metadata from multimedia content. We developed a modular and extensible framework to model the content features of multimedia data, and we describe how it can be integrated with other existing vocabularies. The goal of this model is to generate sufficient understanding of media content, its context, and its relation to domain knowledge to perform multimedia reasoning. We implemented a tool that analyzes and links low-level descriptions to higher-level, domain-specific semantic concepts by means of statistical learning and clustering analysis. Experimental results show that the approach performs well in visual concept prediction in images, and that it can be further augmented with other information sources such as contextual text or audio.
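As a minimal sketch of linking low-level descriptions to semantic concepts through clustering (an assumed stand-in for the paper's actual learning pipeline), one can cluster feature vectors with k-means and label each cluster by the majority concept among its annotated members:

import numpy as np
from sklearn.cluster import KMeans

def cluster_concept_map(features, concepts, n_clusters=5):
    # Cluster low-level feature vectors, then label each cluster with the
    # most frequent semantic concept among its annotated members.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    mapping = {}
    for c in range(n_clusters):
        members = [concepts[i] for i in range(len(concepts))
                   if km.labels_[i] == c]
        mapping[c] = max(set(members), key=members.count) if members else None
    return km, mapping

def predict_concept(km, mapping, feature):
    # Predict a concept for a new image via its nearest cluster.
    return mapping[int(km.predict(feature.reshape(1, -1))[0])]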
Video metadata extraction in a videoMail system
The world is swiftly adapting to visual communication. Online services like
YouTube and Vine show that video is no longer the domain of broadcast television alone.
Video is used for different purposes such as entertainment, information, education, and communication.
The rapid growth of today's video archives, with sparsely available editorial data, makes
retrieval a major problem. Humans perceive a video as a complex interplay of
cognitive concepts, so there is a need to build a bridge between numeric values and semantic concepts; this connection will facilitate the retrieval of videos by humans.
The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is tedious, subjective, and expensive;
therefore, automatic annotation is being actively studied.
In this thesis we focus on automatic annotation of multimedia content, namely
the use of information retrieval analysis techniques to automatically extract
metadata from video in a videomail system, and on the identification of text, people, actions, spaces, and objects, including animals and plants.
It will thus be possible to align the multimedia content with the text presented in the
email message, and to create applications for semantic video database indexing and retrieval.
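One basic building block for extracting such metadata is shot-boundary detection followed by keyframe selection. The sketch below uses a simple frame-difference threshold; it is an assumed baseline, not the method developed in the thesis.

import numpy as np

def keyframes_by_difference(frames, threshold=30.0):
    # frames: sequence of greyscale frames as 2-D numpy arrays.
    # A new shot (and thus a keyframe) is declared whenever the mean
    # absolute pixel difference between consecutive frames exceeds the
    # threshold -- a crude but standard baseline.
    keyframes = [0]                                   # first frame opens shot 1
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1]).mean()
        if diff > threshold:
            keyframes.append(i)
    return keyframes

frames = [np.random.randint(0, 256, (32, 32)) for _ in range(10)]
print(keyframes_by_difference(frames))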