Unsupervised Text Extraction from G-Maps
This paper presents a text extraction method for Google Maps and GIS maps/images. Because the approach is unsupervised, it requires no prior knowledge or training set describing the textual and non-textual parts. Fuzzy C-Means clustering is used for image segmentation, and the Prewitt method is used to detect edges. Connected component analysis and a gridding technique improve the correctness of the results. The proposed method reaches a 98.5% accuracy level on the experimental data sets.

Comment: Proc. IEEE Conf. #30853, International Conference on Human Computer Interactions (ICHCI'13), Chennai, India, 23-24 Aug., 2013
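The pipeline the abstract names (fuzzy C-means segmentation followed by Prewitt edge detection) can be sketched roughly as below. This is a minimal illustration of the two standard techniques, not the paper's implementation: the function names, the intensity-only clustering, and the parameter choices are all assumptions.

```python
import numpy as np

def fuzzy_c_means(pixels, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D intensity values: returns cluster
    centres and the (c x N) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, pixels.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ pixels) / um.sum(axis=1)        # weighted means
        d = np.abs(pixels[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

def prewitt_edges(img):
    """Gradient magnitude from the 3x3 Prewitt kernels,
    computed as a zero-padded correlation."""
    kx = np.array([[1, 0, -1]] * 3, float)   # horizontal-gradient kernel
    ky = kx.T                                # vertical-gradient kernel
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

In a map image, clustering separates text-coloured pixels from the background, and the edge map then sharpens character boundaries before connected component analysis.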
Use of colour for hand-filled form analysis and recognition
Colour information in form analysis is currently underutilised. As technology has advanced and computing costs have fallen, processing forms in colour has become practicable. This paper describes a novel colour-based approach to extracting filled data from colour form images. Images are first quantised to reduce colour complexity, and data is extracted by examining the colour characteristics of the images. The improved performance of the proposed method has been verified by comparing its processing time, recognition rate, extraction precision and recall rate to those of an equivalent black-and-white system.
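A minimal sketch of the colour-quantisation idea: quantise the image, then keep pixels whose colour is far from every known form colour. The paper would derive the form colours from the images themselves; here they are passed in as an assumption, and the uniform quantisation and L1 colour distance are illustrative choices.

```python
import numpy as np

def quantise(img, levels=4):
    """Uniform colour quantisation: map each 0-255 channel to `levels` bins,
    represented by each bin's midpoint."""
    step = 256 // levels
    return (img // step) * step + step // 2

def extract_filled(img, form_colours, tol=40):
    """Mask of pixels whose quantised colour differs from every known
    form colour (background + preprinted ink) by more than `tol`."""
    q = quantise(img).astype(int)
    mask = np.ones(img.shape[:2], bool)
    for c in form_colours:
        cq = quantise(np.array(c, dtype=np.uint8)).astype(int)
        dist = np.abs(q - cq).sum(axis=-1)   # L1 distance in RGB space
        mask &= dist > tol                   # keep only "foreign" ink
    return mask
```

On a real form, the surviving mask would contain the hand-filled entries, which can then be passed to recognition.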
Automated annotation of landmark images using community contributed datasets and web resources
A novel solution to the challenge of automatic image annotation is described. Given an image with GPS data of its location of capture, our system returns a semantically-rich annotation comprising tags which both identify the landmark in the image, and provide an interesting fact about it, e.g. "A view of the Eiffel Tower, which was built in 1889 for an international exhibition in Paris". This exploits visual and textual web mining in combination with content-based image
analysis and natural language processing. In the first stage, an input image is matched to a set of community contributed images (with keyword tags) on the basis of its GPS information and image classification techniques. The depicted landmark is inferred from the keyword tags for the matched set. The system then takes advantage of the information written about landmarks available on the web at large to extract a fact about the landmark in the image. We report component evaluation results from an implementation of our solution on a mobile device. Image localisation and matching offers 93.6% classification accuracy; the selection of appropriate tags for use in annotation performs well (F1M of 0.59), and it subsequently automatically identifies a correct toponym for use in captioning and fact extraction in 69.0% of the tested cases; finally, the fact extraction returns an interesting caption in 78% of cases.
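The first stage's GPS-based matching could be pictured as a radius query over geotagged community photos. The haversine shortlist below is an illustrative assumption about that step, not the system's actual matcher, which additionally applies image classification to the candidates.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0                      # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def gps_candidates(query, tagged_photos, radius_m=500):
    """Shortlist community photos captured within `radius_m` of the
    query image's GPS capture point."""
    lat, lon = query
    return [p for p in tagged_photos
            if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
```

The keyword tags of the shortlisted photos would then vote for the depicted landmark, which seeds the later toponym and fact-extraction stages.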