BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis
Emergency events involving fire are potentially harmful, demanding fast and precise decision making. The use of crowdsourced images and videos in crisis management systems can aid in these situations by providing more information than verbal/textual descriptions. Due to the usually high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several color-model-based methods for fire detection in video. However, they are not adequate for still-image processing, because they can suffer from high false-positive rates. These methods also rely on parameters with little physical meaning, which makes fine-tuning a difficult task. In this context, we propose a novel fire detection method for still images that combines classification based on color features with texture classification on superpixel regions. Our method uses fewer parameters than previous works, easing the fine-tuning process. Results show the effectiveness of our method in reducing false positives while keeping precision comparable to state-of-the-art methods.
Comment: 8 pages, Proceedings of the 28th SIBGRAPI Conference on Graphics, Patterns and Images, IEEE Press
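The pipeline described above, a pixel-color classifier whose fire mask is then filtered by texture analysis on image regions, can be sketched roughly as follows. All reference colors, thresholds, and the block-based stand-in for superpixels are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative reference colors for "fire" pixels (RGB floats); the actual
# method learns its color model from labeled training data.
FIRE_COLORS = np.array([[255, 80, 0], [255, 160, 20], [230, 40, 10]], float)

def color_mask(img, thresh=80.0):
    """Mark pixels whose RGB value lies near any reference fire color."""
    d = np.linalg.norm(img[..., None, :] - FIRE_COLORS, axis=-1)  # (H, W, K)
    return d.min(axis=-1) < thresh

def texture_filter(img, mask, block=4, var_thresh=50.0):
    """Keep only blocks (stand-ins for superpixels) whose intensity variance
    is high enough: flames are textured, flat fire-colored surfaces are not."""
    gray = img.mean(axis=-1)
    out = np.zeros_like(mask)
    h, w = gray.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            g = gray[i:i+block, j:j+block]
            m = mask[i:i+block, j:j+block]
            if m.any() and g.var() > var_thresh:
                out[i:i+block, j:j+block] = m
    return out
```

On a synthetic image, a textured fire-colored region survives both stages, while a uniformly fire-colored region (e.g., an orange wall) passes the color stage but is rejected by the texture stage, which is the false-positive reduction the abstract describes.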
Automating the construction of scene classifiers for content-based video retrieval
This paper introduces a real-time automatic scene classifier for content-based video retrieval. In our envisioned approach, end users such as documentalists, rather than image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification is a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification (e.g., city, portraits, or countryside). The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, an alternative to having an image processing expert determine features a priori. We present results for experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
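The two-stage procedure above, classify patches, then classify the frequency vector of patch labels, can be sketched minimally as below. The patch prototypes, scene prototypes, and one-dimensional patch feature are made up for illustration; the paper's classifiers are learned from user-indicated examples:

```python
import numpy as np

# Stage-1 prototypes: three hypothetical patch classes on a toy 1-D feature.
PATCH_PROTOS = np.array([[0.1], [0.5], [0.9]])

def classify_patches(patches):
    """Stage 1: assign each patch feature vector to its nearest patch class."""
    d = np.abs(patches[:, None, :] - PATCH_PROTOS[None, :, :]).sum(-1)
    return d.argmin(axis=1)

def patch_histogram(labels, n_classes=3):
    """Build the normalized frequency vector of patch labels."""
    h = np.bincount(labels, minlength=n_classes).astype(float)
    return h / h.sum()

# Stage-2 prototypes: hypothetical label histograms per scene class.
SCENE_PROTOS = {"city": np.array([0.7, 0.2, 0.1]),
                "countryside": np.array([0.1, 0.2, 0.7])}

def classify_scene(hist):
    """Stage 2: nearest scene prototype on the patch-label histogram."""
    return min(SCENE_PROTOS, key=lambda s: np.abs(hist - SCENE_PROTOS[s]).sum())
```

The design choice is that stage 1 turns raw pixels into a small learned vocabulary, so stage 2 only needs a short frequency vector per image rather than the full frame.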
Fusing image representations for classification using support vector machines
To improve classification accuracy, different image representations are usually combined. This can be done using two different fusion schemes. In feature-level fusion, image representations are combined before the classification process. In classifier fusion, the decisions taken separately on the individual representations are fused into a final decision. In this paper, the main methods derived for both strategies are evaluated. Our experimental results show that classifier fusion performs better; specifically, Bayes belief integration is the best-performing strategy for the image classification task.
Comment: Image and Vision Computing New Zealand, 2009. IVCNZ '09. 24th International Conference, Wellington, New Zealand (2009)
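The two fusion schemes can be contrasted in a few lines. This is a generic sketch, not the paper's implementation: feature-level fusion concatenates representations before a single classifier, while the decision-level combiner shown here approximates Bayes-style integration by multiplying per-classifier class posteriors under a conditional-independence assumption:

```python
import numpy as np

def feature_fusion(reps):
    """Feature-level fusion: concatenate representations into one vector,
    which a single classifier would then consume."""
    return np.concatenate(reps)

def decision_fusion(posteriors):
    """Decision-level fusion in the spirit of Bayes belief integration:
    multiply each classifier's class posteriors elementwise and
    renormalize (assumes the classifiers err independently)."""
    p = np.prod(np.asarray(posteriors), axis=0)
    return p / p.sum()
```

For two classifiers reporting class posteriors [0.6, 0.4] and [0.7, 0.3], the product rule yields a fused posterior sharper than either input, which is the behavior that makes agreement between classifiers count for more than a single confident vote.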
Tongue Image Analysis for Diabetes Mellitus Diagnosis Based on SOM Kohonen
Tongue diagnosis is an important diagnostic method for evaluating the condition of internal organs by inspecting the appearance of the tongue. However, due to its qualitative, subjective, and experience-based nature, traditional tongue diagnosis has very limited application in clinical medicine. Moreover, traditional tongue diagnosis is concerned with identifying syndromes rather than with the connection between abnormal tongue appearances and diseases. This connection is not well understood in Western medicine, which greatly obstructs its wider use in the world. In this paper, we present a novel computerized tongue inspection method aiming to address these problems. First, two kinds of quantitative features, chromatic and textural measures, are extracted from tongue images using popular digital image processing techniques. Then, Kohonen self-organizing maps (SOMs) are employed to model the relationship between these quantitative features and diseases. The effectiveness of the method is tested on 35 patients affected by Diabetes Mellitus and 30 healthy volunteers, and the diagnostic results predicted by the trained SOM classifiers are compared with the HOMA-B index.
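A Kohonen self-organizing map of the kind used here can be sketched in a short training loop. The grid size, learning-rate schedule, neighborhood width, and data are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=50, lr=0.5, sigma=1.0, seed=0):
    """Train a tiny SOM: for each sample, find the best-matching unit (BMU)
    and pull it and its grid neighbors toward the sample, with a Gaussian
    neighborhood and a decaying learning rate."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    for _ in range(epochs):
        for x in data:
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), (h, w))
            g = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
        lr *= 0.95  # decay the learning rate each epoch
    return weights

def bmu_of(weights, x):
    """Map a feature vector to its best-matching map unit."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(d.argmin(), d.shape)
```

After training, map units act as prototypes; in the diagnostic setting, each unit would be labeled by the disease status of the training samples it attracts, and a new tongue's feature vector is classified by its BMU's label.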
Techniques for effective and efficient fire detection from social media images
Social media can provide valuable information to support decision making in crisis management, such as during accidents, explosions, and fires. However, much of the data from social media consists of images, which are uploaded at a rate that makes it impossible for human beings to analyze them all. Despite the many works on image analysis, there are no fire detection studies on social media images. To fill this gap, we propose the use and evaluation of a broad set of content-based image retrieval and classification techniques for fire detection. Our main contributions are: (i) the development of the Fast-Fire Detection method (FFDnR), which combines feature extractors and evaluation functions to support instance-based learning; (ii) the construction of an annotated set of images with ground truth depicting fire occurrences -- the FlickrFire dataset; and (iii) the evaluation of 36 efficient image descriptors for fire detection. Using real data from Flickr, our results show that FFDnR achieves a precision for fire detection comparable to that of human annotators. Therefore, our work provides a solid basis for further developments in monitoring images from social media.
Comment: 12 pages, Proceedings of the International Conference on Enterprise Information Systems. Specifically: Marcos Bedo, Gustavo Blanco, Willian Oliveira, Mirela Cazzolato, Alceu Costa, Jose Rodrigues, Agma Traina, Caetano Traina, 2015, Techniques for effective and efficient fire detection from social media images, ICEIS, 34-4
Object-Based Greenhouse Classification from GeoEye-1 and WorldView-2 Stereo Imagery
Remote sensing technologies have commonly been used to perform greenhouse detection and mapping. In this research, stereo pairs acquired by the very high-resolution optical satellites GeoEye-1 (GE1) and WorldView-2 (WV2) were utilized to carry out land cover classification of an agricultural area through an object-based image analysis approach, paying special attention to greenhouse extraction. The main novelty of this work lies in the joint use of single-source stereo-photogrammetrically derived heights and multispectral information from both panchromatic and pan-sharpened orthoimages. The main features tested in this research can be grouped into different categories, such as basic spectral information, elevation data (normalized digital surface model, nDSM), band indexes and ratios, texture, and shape geometry. Furthermore, spectral information was based on both single orthoimages and multiangle orthoimages. The overall accuracies attained by applying nearest-neighbor and support vector machine classifiers to the four multispectral bands of GE1 were very similar to those computed from WV2, for either four or eight multispectral bands. Height data, in the form of the nDSM, were the most important feature for greenhouse classification. The best overall accuracy values were close to 90%, and they were not improved by using multiangle orthoimages.
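The finding that nDSM height dominates greenhouse classification can be illustrated with a toy object-based nearest-neighbor classifier. The per-object feature vectors and class values below are invented for illustration; the point is that two classes with near-identical spectra separate cleanly once height is a feature:

```python
import numpy as np

# Toy per-object features: [NIR mean, red mean, nDSM height in meters].
# Labeled training objects; all values are illustrative, not from the study.
train_X = np.array([[0.35, 0.30, 3.2],   # greenhouse (plastic roof, ~3 m tall)
                    [0.34, 0.31, 0.1],   # bare soil (similar spectra, but flat)
                    [0.60, 0.10, 0.2],   # open-field crop
                    [0.37, 0.29, 3.0]])  # greenhouse
train_y = ["greenhouse", "soil", "crop", "greenhouse"]

def nn_classify(x, X=train_X, y=train_y):
    """Nearest-neighbor classification of an image object by its feature vector."""
    return y[np.linalg.norm(X - x, axis=1).argmin()]
```

An object spectrally identical to soil but ~3 m above ground lands on the greenhouse prototypes, mirroring the study's result that elevation data, not multiangle spectra, carried most of the discriminative power.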