155 research outputs found

    A Novel Scheme for Video Similarity Detection

    Pattern Recognition

    Pattern recognition is a very broad research field. It involves elements as diverse as sensors, feature extraction, pattern classification, decision fusion, applications, and others. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small degree of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 works present and advocate recent achievements of their research in the field of pattern recognition.

    Speech data analysis for semantic indexing of video of simulated medical crises.

    The Simulation for Pediatric Assessment, Resuscitation, and Communication (SPARC) group within the Department of Pediatrics at the University of Louisville was established to enhance the care of children by using simulation-based educational methodologies to improve patient safety and strengthen clinician-patient interactions. After each simulation session, the physician must manually review and annotate the recordings and then debrief the trainees. The physician responsible for the simulation has recorded hundreds of videos and is seeking solutions that can automate the process. This dissertation introduces our system for efficient segmentation and semantic indexing of videos of medical simulations using machine learning methods. It provides the physician with automated tools to review important sections of the simulation by identifying who spoke, when, and with what emotion. Only audio information is extracted and analyzed because the quality of the image recording is low and the visual environment is static for most parts.

    Our proposed system includes four main components: preprocessing, speaker segmentation, speaker identification, and emotion recognition. The preprocessing consists of first extracting the audio component from the video recording and then extracting various low-level audio features to detect and remove silence segments. We investigate and compare two different approaches for this task: the first is threshold-based and the second is classification-based. The second main component of the proposed system detects speaker change points in order to segment the audio stream; we propose two fusion methods for this task. The speaker identification and emotion recognition components of our system are designed to provide users with the capability to browse the video and retrieve shots that identify "who spoke, when, and with what emotion" for further analysis. For this component, we propose two feature representation methods that map audio segments of arbitrary length to a feature vector with fixed dimensions. The first is based on soft bag-of-words (BoW) feature representations; in particular, we define three types of BoW based on crisp, fuzzy, and possibilistic voting. The second feature representation is a generalization of the BoW and is based on the Fisher Vector (FV). The FV uses the Fisher Kernel principle and combines the benefits of generative and discriminative approaches.

    The proposed feature representations are used within two learning frameworks. The first is supervised learning, which assumes that a large collection of labeled training data is available; within this framework, we use standard classifiers including K-nearest neighbor (K-NN), support vector machine (SVM), and Naive Bayes. The second framework is based on semi-supervised learning, where only a limited number of labeled training samples are available; here we use an approach based on label propagation.

    Our proposed algorithms were evaluated using 15 medical simulation sessions. The results were analyzed and compared to those obtained using state-of-the-art algorithms. We show that our proposed speech segmentation fusion algorithms and feature mappings outperform existing methods. We also integrated all proposed algorithms and developed a GUI prototype system for subjective evaluation. This prototype processes medical simulation video and provides the user with a visual summary of the different speech segments. It also allows the user to browse videos and retrieve scenes that answer semantic queries such as: who spoke and when? who interrupted whom? and what was the emotion of the speaker? The GUI prototype can also provide summary statistics for each simulation video, for example: how long did each person speak? What is the longest uninterrupted speech segment? Is there an unusually large number of pauses within the speech segments of a given speaker?
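
    As an illustration of the crisp and fuzzy bag-of-words mappings described above, here is a minimal sketch, assuming per-frame audio features such as MFCCs and a k-means codebook. The function names, the fuzziness parameter, and the codebook size are illustrative assumptions, not the dissertation's exact implementation.

```python
# A minimal sketch of crisp and fuzzy bag-of-words (BoW) mappings: variable-
# length sequences of per-frame audio features (e.g., MFCCs) are mapped to
# fixed-length vectors over a learned codebook. Names and parameters are
# illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(frame_features, k=64, seed=0):
    """Learn K codewords from a pool of per-frame feature vectors."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(frame_features)

def crisp_bow(segment, codebook):
    """Hard assignment: each frame votes for its nearest codeword."""
    words = codebook.predict(segment)                      # (n_frames,)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)                     # normalize to unit mass

def fuzzy_bow(segment, codebook, m=2.0):
    """Soft assignment: fuzzy c-means style memberships split each vote."""
    d = np.linalg.norm(segment[:, None, :] - codebook.cluster_centers_[None], axis=2)
    d = np.maximum(d, 1e-12)                               # avoid division by zero
    u = 1.0 / (d ** (2.0 / (m - 1.0)))                     # inverse-distance memberships
    u /= u.sum(axis=1, keepdims=True)                      # each frame's votes sum to 1
    hist = u.sum(axis=0)
    return hist / hist.sum()

# Usage: segments of arbitrary length all map to K-dimensional vectors.
rng = np.random.default_rng(0)
pool = rng.normal(size=(5000, 13))                         # stand-in for MFCC frames
cb = train_codebook(pool, k=64)
seg_short, seg_long = rng.normal(size=(40, 13)), rng.normal(size=(900, 13))
assert crisp_bow(seg_short, cb).shape == fuzzy_bow(seg_long, cb).shape == (64,)
```

    The point of the mapping is the fixed output dimension: a 40-frame and a 900-frame segment both become 64-dimensional vectors, so standard classifiers such as K-NN or SVM can consume them directly.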

    Information Extraction on Para-Relational Data.

    Para-relational data (such as spreadsheets and diagrams) refers to a type of nearly relational data that shares the important qualities of relational data but does not present itself in a relational format. Para-relational data often conveys highly valuable information and is widely used in many different areas. If we can convert para-relational data into the relational format, many existing tools can be leveraged for a variety of interesting applications, such as data analysis with relational query systems and data integration applications. This dissertation aims to convert para-relational data into a high-quality relational form with little user assistance. We have developed four standalone systems, each addressing a specific type of para-relational data. Senbazuru is a prototype spreadsheet database management system that extracts relational information from a large number of spreadsheets. Anthias is an extension of the Senbazuru system to convert a broader range of spreadsheets into a relational format. Lyretail is an extraction system to detect long-tail dictionary entities on webpages. Finally, DiagramFlyer is a web-based search system that obtains a large number of diagrams automatically extracted from web-crawled PDFs. Together, these four systems demonstrate that converting para-relational data into the relational format is possible today, and also suggest directions for future systems.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120853/1/chenzhe_1.pd
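
    As a toy illustration of the target transformation (not the Senbazuru/Anthias extraction pipeline itself, which infers the header structure automatically), the sketch below flattens a spreadsheet with a two-row hierarchical header into relational tuples; the sample grid and column names are invented for the example.

```python
# Flattening a spreadsheet with a hierarchical (two-row) header into
# relational tuples. The grid contents are invented for illustration.
grid = [
    ["Region", "2022", "2022", "2023", "2023"],   # header row 1: year groups
    ["",       "Q1",   "Q2",   "Q1",   "Q2"],     # header row 2: quarters
    ["North",  10,     12,     14,     9],
    ["South",  7,      8,      11,     13],
]

years, quarters = grid[0][1:], grid[1][1:]
tuples = [
    (row[0], year, quarter, value)                # one relational tuple per cell
    for row in grid[2:]
    for year, quarter, value in zip(years, quarters, row[1:])
]

for t in tuples:
    print(t)   # ('North', '2022', 'Q1', 10), ('North', '2022', 'Q2', 12), ...
```

    Once the data is in this (region, year, quarter, value) form, it can be loaded into any relational query system, which is precisely the payoff the abstract describes.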

    Feature binding of MPEG-7 Visual Descriptors Using Chaotic Series

    Due to advanced segmentation and tracking algorithms, a video can be divided into numerous objects. Segmentation and tracking algorithms output different low-level object features, resulting in a high-dimensional feature vector per object. The challenge is to generate a feature vector for each object that can be mapped to a human-understandable description, such as an object label (e.g., person, car). MPEG-7 provides visual descriptors to describe video contents. However, the MPEG-7 visual descriptors are generally highly redundant, and the feature coefficients in these descriptors need to be preprocessed for domain-specific applications. Ideally, the MPEG-7 visual descriptor based feature vector could be processed in a manner similar to functional simulations of human brain activity. An established link exists between the analysis of temporal brain oscillatory signals and the chaotic dynamics observed in the electroencephalography (EEG) of brain neurons. Neural signals in certain brain activities, which previously appeared to be noise, have been found to be behaviorally relevant and can be simulated using chaotic series. A chaotic series is either a finite-difference equation or an ordinary differential equation that exhibits non-random, irregular fluctuations of parameter values over time in a dynamical system. The dynamics in a chaotic series can be high- or low-dimensional, and the dimensionality can be deduced from the topological dimension of the attractor of the chaotic series. An attractor is manifested by the tendency of a non-linear finite-difference equation or an ordinary differential equation, under various but delimited conditions, to go to a reproducible active state and stay there.

    We propose a feature binding method, using chaotic series, to generate a new feature vector, C-MP7, to describe video objects. The proposed method treats MPEG-7 visual descriptor coefficients as dynamical systems. These dynamical systems are excited (similar to neuronal excitation) with either high- or low-dimensional chaotic series, and histogram-based clustering is then applied to the simulated chaotic series coefficients to generate C-MP7. The proposed feature binding yields a better feature vector with high-dimensional chaotic series simulation than with low-dimensional simulation, and improves on the MPEG-7 visual descriptor based feature vector. Diverse video objects are grouped into four generic classes (has_person, has_group_of_persons, has_vehicle, and has_unknown) to observe how well C-MP7 describes different video objects compared to the MPEG-7 feature vector. In C-MP7 with high-dimensional chaotic series simulation, 1) descriptor coefficients are reduced dynamically by up to 37.05%, compared to 10% in MPEG-7; 2) higher variance is achieved than with MPEG-7; 3) multi-class discriminant analysis of C-MP7 with the Fisher criterion shows greater binary class separation for clustered video objects than MPEG-7; and 4) C-MP7 provides particularly good clustering of video objects for the has_vehicle class against the other classes.

    To test C-MP7 in an application, we deploy a combination of multiple binary classifiers for video object classification. Related work on video object classification uses non-MPEG-7 features. We specifically observe the classification of challenging surveillance video objects, e.g., incomplete objects, partial occlusion, background overlapping, scale- and resolution-variant objects, and indoor/outdoor lighting variations. C-MP7 is used to train different classes of video objects. Object classification accuracy is verified with both low-dimensional and high-dimensional chaotic series based feature binding for C-MP7. Testing of diverse video objects with high-dimensional chaotic series simulation shows that 1) classification accuracy improves significantly, averaging 83% compared to 62% with MPEG-7; 2) excellent clustering of vehicle objects leads to above 99% accuracy for vehicles against all other objects; and 3) with diverse video objects, including objects from poor segmentation, C-MP7 is more robust as a feature vector for classification than MPEG-7. Initial results on sub-group classification of male and female video objects in the has_person class are also presented as subjective observations. Chaotic series properties have previously been used in video processing applications for compression and digital watermarking. To the best of our knowledge, this work is the first to use chaotic series for video object description and to apply it to object classification.
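
    The abstract does not specify which chaotic series or binding construction is used, so the following is only a minimal sketch of the general idea under stated assumptions: the logistic map stands in as a standard low-dimensional chaotic finite-difference equation, each descriptor coefficient seeds one trajectory, and pooled histograms over the trajectories form the bound feature vector. The map parameters, seeding scheme, and bin count are all illustrative assumptions.

```python
# A sketch of chaotic-series feature binding: each descriptor coefficient
# seeds a logistic-map trajectory, and pooled histograms over the simulated
# trajectories give a fixed-length bound feature. The exact C-MP7
# construction in the dissertation may differ.
import numpy as np

def logistic_series(x0, r=3.99, n=500, burn_in=100):
    """Iterate x_{t+1} = r * x_t * (1 - x_t), discarding transient steps."""
    x = float(np.clip(x0, 1e-6, 1 - 1e-6))   # keep the seed inside (0, 1)
    out = []
    for t in range(n + burn_in):
        x = r * x * (1.0 - x)
        if t >= burn_in:
            out.append(x)
    return np.asarray(out)

def bind_descriptor(coeffs, bins=16):
    """Excite one chaotic series per coefficient, then pool histograms."""
    coeffs = np.asarray(coeffs, dtype=float)
    # Normalize coefficients to (0, 1) so they are valid logistic-map seeds.
    lo, hi = coeffs.min(), coeffs.max()
    seeds = (coeffs - lo) / (hi - lo + 1e-12)
    hists = [np.histogram(logistic_series(s), bins=bins, range=(0, 1))[0]
             for s in seeds]
    pooled = np.sum(hists, axis=0).astype(float)
    return pooled / pooled.sum()              # fixed-length bound feature vector

# Usage: a stand-in descriptor of 80 coefficients maps to 16 pooled values,
# illustrating the dimensionality reduction the abstract reports.
rng = np.random.default_rng(1)
print(bind_descriptor(rng.normal(size=80)).shape)   # (16,)
```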