11 research outputs found

    Word Searching in Scene Image and Video Frame in Multi-Script Scenario using Dynamic Shape Coding

    Retrieval of text information from natural scene images and video frames is a challenging task due to inherent problems such as complex character shapes, low resolution and background noise. Available OCR systems often fail to retrieve such information from scene/video frames. Keyword spotting, an alternative way to retrieve information, performs efficient text searching in such scenarios. However, current word spotting techniques for scene/video images are script-specific and have mainly been developed for Latin script. This paper presents a novel word spotting framework using dynamic shape coding for text retrieval in natural scene images and video frames. The framework is designed to search for a query keyword across multiple scripts with the help of on-the-fly script-wise keyword generation for the corresponding script. We use a two-stage word spotting approach based on Hidden Markov Models (HMM) to detect the translated keyword in a given text line after identifying the script of the line. A novel unsupervised dynamic shape coding scheme groups characters of similar shape to avoid confusion and to improve text alignment. Next, the hypothesis locations are verified to improve retrieval performance. To evaluate the proposed system for keyword search in natural scene images and video frames, we consider two popular Indic scripts, Bangla (Bengali) and Devanagari, along with English. Inspired by the zone-wise recognition approach in Indic scripts [1], zone-wise text information is used to improve traditional word spotting performance in Indic scripts. For our experiments, a dataset consisting of scene images and video frames in English, Bangla and Devanagari scripts was used. The results obtained show the effectiveness of the proposed word spotting approach.
    Comment: Multimedia Tools and Applications, Springer
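    The abstract does not give implementation details of the unsupervised shape coding step, so the following is only a minimal sketch of the general idea: cluster per-character shape descriptors so that confusable glyphs share one code, then re-encode a query keyword as a shape-code sequence before HMM-based spotting. The descriptor, the cluster count n_codes and the helper names (shape_descriptor, build_shape_codes, encode_keyword) are illustrative assumptions, not the authors' method.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        def shape_descriptor(glyph):
            # Toy descriptor: sample the binary glyph image on an 8x8 grid and flatten it.
            h, w = glyph.shape
            ys = np.linspace(0, h - 1, 8).astype(int)
            xs = np.linspace(0, w - 1, 8).astype(int)
            return glyph[np.ix_(ys, xs)].astype(float).ravel()

        def build_shape_codes(char_glyphs, n_codes=20):
            # Cluster one reference glyph per character so similar shapes share a code.
            # char_glyphs: dict mapping character -> 2D binary glyph array.
            chars = list(char_glyphs)
            feats = np.stack([shape_descriptor(char_glyphs[c]) for c in chars])
            labels = AgglomerativeClustering(n_clusters=n_codes).fit_predict(feats)
            return dict(zip(chars, labels))  # character -> shape-code id

        def encode_keyword(word, shape_codes):
            # Translate a query keyword into its shape-code sequence for spotting.
            return [shape_codes[c] for c in word if c in shape_codes]

    In this reading, the HMM would be trained over shape-code observations rather than raw character labels, so visually confusable characters no longer compete during text-line alignment.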

    Recognizing and understanding user behaviors from screencasts

    Users interact with computers or mobile devices, producing user behaviors on screen. In the context of software engineering, analyzing user behavior enables many applications such as intelligent bug fixing, code completion and knowledge recommendation for developers. Such techniques can be extended to more general knowledge-worker environments in which users have to operate devices according to specific guidelines. Existing works rely heavily on software instrumentation to obtain user actions from operating systems, which is hard to deploy and maintain. In addition, given the security and privacy constraints of some scenarios, non-intrusiveness is a major requirement for the system. In this work, we leverage Computer Vision and Natural Language Processing techniques to recognize and understand user behaviors from screencasts, a non-intrusive and cross-platform method. We first recognize 10 categories of low-level user actions, such as mouse moving and text typing, and then summarize them into higher-level abstractions (i.e., line-granularity coding steps). We also interpret user interaction with applications by multi-task learning and generate structured language descriptions (i.e., command, widget and location). Finally, an unsupervised learning method is introduced for the GUI linting problem, which serves as a case study of user behavior analysis. To train the deep neural networks, we collect diverse video data from YouTube, Twitch and Bugzilla, and manually label it to build the dataset. The experimental results demonstrate the high performance of the proposed method, and the user study validates the practical applications of many downstream tasks.
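    The paper's network architecture is not described in the abstract; the sketch below only illustrates the shape of the first step, classifying a screencast frame pair (before/after an interaction) into one of the 10 low-level action categories mentioned above. The class name ActionClassifier, the two-frame input format and the layer sizes are assumptions for illustration.

        import torch
        import torch.nn as nn

        N_ACTIONS = 10  # low-level action categories, e.g. mouse move, type text

        class ActionClassifier(nn.Module):
            def __init__(self, n_actions=N_ACTIONS):
                super().__init__()
                # 6 input channels: two stacked RGB frames (before/after the action).
                self.features = nn.Sequential(
                    nn.Conv2d(6, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(64, n_actions)

            def forward(self, frame_pair):           # (batch, 6, H, W)
                x = self.features(frame_pair).flatten(1)
                return self.head(x)                  # logits over action categories

        model = ActionClassifier()
        logits = model(torch.randn(1, 6, 224, 224))  # one before/after frame pair
        action = logits.argmax(dim=1)                # predicted low-level action id

    Predicted action sequences of this kind could then be aggregated into the higher-level, line-granularity coding steps that the abstract describes.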