
    Unsupervised video summarization framework using keyframe extraction and video skimming

    Video is a robust source of information, and the consumption of online and offline videos has reached an unprecedented level in the last few years. A fundamental challenge of extracting information from videos is that a viewer has to go through the complete video to understand the context, as opposed to an image, where the viewer can extract information from a single frame. Beyond context understanding, it is almost impossible to create a universal summarized video for everyone, as each viewer has their own bias toward keyframes; in a soccer game, for example, a coach might consider frames containing information on player placement and technique, whereas a person with less knowledge of the game will focus more on frames showing goals and the scoreboard. Therefore, tackling video summarization through a supervised learning path would require extensive personalized labeling of data. In this paper, we attempt to solve video summarization through unsupervised learning by employing traditional vision-based algorithmic methodologies for accurate feature extraction from video frames. We also propose deep learning-based feature extraction followed by multiple clustering methods to find an effective way of summarizing a video through interesting keyframe extraction. We compare the performance of these approaches on the SumMe dataset and show that deep learning-based feature extraction performs better on dynamic-viewpoint videos. Comment: 5 pages, 3 figures. Technical Report
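
    As a rough illustration of the deep-feature-plus-clustering pipeline described above, the sketch below extracts pooled CNN features per sampled frame and picks the frame nearest each k-means centroid as a keyframe. The ResNet-18 backbone, sampling step, and cluster count are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch: deep features per sampled frame, then k-means; the
# frame nearest each cluster centroid serves as that segment's keyframe.
# Backbone, sampling step, and cluster count are illustrative choices.
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.cluster import KMeans

def sample_frames(video_path, step=30):
    """Grab every `step`-th frame from the video as an RGB array."""
    cap, frames, i = cv2.VideoCapture(video_path), [], 0
    ok, frame = cap.read()
    while ok:
        if i % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        ok, frame = cap.read()
        i += 1
    cap.release()
    return frames

def extract_keyframes(video_path, n_clusters=5):
    frames = sample_frames(video_path)
    backbone = models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()  # keep pooled 512-d features
    backbone.eval()
    prep = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    with torch.no_grad():
        feats = torch.stack([backbone(prep(f).unsqueeze(0)).squeeze(0)
                             for f in frames]).numpy()
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
    # Keyframe = sampled frame closest to each cluster centroid.
    keyframes = [int(np.argmin(np.linalg.norm(feats - c, axis=1)))
                 for c in km.cluster_centers_]
    return sorted(keyframes)
```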

    Investigating Information Structure of Phishing Emails Based on Persuasive Communication Perspective

    Current phishing filters depend on classifying messages based on textually discernible features, such as IP-based URLs or domain names, that can be easily extracted from a given phishing message. By the same token, however, those easily perceptible features can be easily manipulated by sophisticated phishers. It is therefore important to identify universal patterns of phishing messages as a basis for feature extraction and text classification. In this paper, we demonstrate that user perception of phishing messages can be characterized along the central and peripheral routes of information processing. We also present a method of formulating a quantitative model that represents the persuasive information structure of phishing messages. This paper contributes to phishing classification research by presenting the idea of a universal information structure grounded in persuasive communication theories.
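
    To make the central/peripheral-route idea concrete, here is a minimal sketch of a classifier built on counted persuasive cues. The cue lexicons and features are hypothetical placeholders, not the information structure actually derived in the paper.

```python
# Sketch of a phishing classifier scoring messages on persuasive cues
# (central vs. peripheral processing routes). The lexicons below are
# illustrative placeholders, not the paper's feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

CENTRAL_CUES = ["account", "verify", "suspended", "confirm", "update"]
PERIPHERAL_CUES = ["urgent", "immediately", "winner", "limited", "free"]

def persuasion_features(message):
    """Counts of central-route cues, peripheral-route cues, and length."""
    words = message.lower().split()
    central = sum(words.count(c) for c in CENTRAL_CUES)
    peripheral = sum(words.count(p) for p in PERIPHERAL_CUES)
    return [central, peripheral, len(words)]

def train_filter(messages, labels):
    """labels: 1 = phishing, 0 = legitimate."""
    X = np.array([persuasion_features(m) for m in messages])
    return LogisticRegression().fit(X, labels)
```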

    Weight-based Channel-model Matrix Framework provides a reasonable solution for EEG-based cross-dataset emotion recognition

    Cross-dataset emotion recognition, an extremely challenging task in EEG-based affective computing, is influenced by many factors, which makes universal models yield unsatisfactory results. Given the lack of research on EEG information decoding, we first analyzed the impact of different kinds of EEG information (individual, session, emotion, and trial) on emotion recognition through sample space visualization, quantification of sample aggregation phenomena, and energy pattern analysis on five public datasets. Based on these phenomena and patterns, we provide processing methods and interpretable accounts of the various EEG differences. Through analysis of emotional feature distribution patterns, we identified the Individual Emotional Feature Distribution Difference (IEFDD), which we consider the main factor limiting the stability of emotion recognition. After analyzing the limitations of the traditional modeling approach under IEFDD, we propose the Weight-based Channel-model Matrix Framework (WCMF). To reasonably characterize emotional feature distribution patterns, four weight extraction methods were designed, of which the correction T-test (CT) weight extraction method was optimal. Finally, the performance of WCMF was validated on cross-dataset tasks in two kinds of experiments simulating different practical scenarios, and the results show that WCMF delivers more stable and better emotion recognition. Comment: 18 pages, 12 figures, 8 tables
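
    A hedged sketch of the weight-based channel-model idea follows: one classifier per EEG channel, combined with weights derived from t-statistics contrasting two emotion classes. This approximates the spirit of the CT weight extraction, not its exact formulation or the paper's datasets.

```python
# Sketch of a weighted channel-model combination: per-channel SVMs
# whose decision scores are fused with t-statistic-derived weights.
# An approximation of the idea, not the paper's exact CT method.
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def channel_weights(X, y):
    """X: (trials, channels, features); y: binary emotion labels."""
    w = []
    for ch in range(X.shape[1]):
        t, _ = stats.ttest_ind(X[y == 0, ch].mean(axis=1),
                               X[y == 1, ch].mean(axis=1))
        w.append(abs(t))
    w = np.array(w)
    return w / w.sum()  # normalize so the weights form a convex fusion

def fit_channel_models(X, y):
    """One linear SVM per channel."""
    return [SVC(kernel="linear").fit(X[:, ch], y)
            for ch in range(X.shape[1])]

def weighted_predict(models, weights, X):
    """Fuse per-channel decision scores with the learned weights."""
    scores = np.stack([m.decision_function(X[:, ch])
                       for ch, m in enumerate(models)])
    return (weights @ scores > 0).astype(int)
```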

    Singular spectrum analysis and its application in Lamb wave-based damage detection

    This paper proposes a singular spectrum analysis (SSA) based feature extraction method for Lamb wave-based damage detection. SSA is used to decompose the acquired Lamb wave signal into an additive set of principal components, and a new universal approach for selecting the principal components is presented. The principal components that contain the least measurement noise and the most damage information are then used to detect local damage in an aluminum plate, and a new approach based on maximum likelihood analysis for damage signal decomposition is proposed. A genetic algorithm is adopted to maximize the similarity between the synthetic signal and the target signal. The experimental results show that the proposed method is capable of yielding accurate identification results from noisy measurements.
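
    Since SSA itself is a standard decomposition, a minimal sketch may help: embed the signal in a trajectory (Hankel) matrix, take its SVD, and map each rank-1 term back to a time series by diagonal averaging. Window length and which components to keep are left to the caller; the paper's selection criterion is not reproduced here.

```python
# Minimal SSA sketch: trajectory matrix -> SVD -> per-component
# reconstruction by diagonal averaging. Summing all components
# recovers the original signal exactly.
import numpy as np

def ssa_decompose(x, window):
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix of lagged copies of the signal.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Diagonal averaging: average each antidiagonal of the rank-1
        # matrix to map it back to a length-n time series.
        comp = np.array([np.mean(Xi[::-1].diagonal(d - window + 1))
                         for d in range(n)])
        components.append(comp)
    return np.array(components)  # components.sum(axis=0) == x
```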

    Universal Image Steganalytic Method

    In this paper we introduce a new universal steganalytic method for the JPEG file format that detects well-known as well as newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature-Based Steganalysis (FBS) was employed to identify the statistical changes caused by embedding secret data into an original image. The steganalyzer uses Support Vector Machine (SVM) classification to train a model that is later used to distinguish between a clean (cover) image and a steganographic image. The aim of the paper is to analyze the variation in detection accuracy (ACR) when detecting steganographic algorithms such as F5, Outguess, Model Based Steganography without deblocking, and JP Hide&Seek, which represent commonly used steganographic tools. A comparison of four feature vectors of different lengths, FBS(22), FBS(66), FBS(274), and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.
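
    The SVM classification stage the abstract describes can be sketched as follows, assuming the calibrated FBS feature vectors have already been extracted; the RBF kernel, regularization constant, and train/test split are illustrative choices, not the paper's settings.

```python
# Sketch of the SVM-based steganalyzer stage: given FBS feature
# vectors for cover and stego images, train a classifier and report
# detection accuracy (ACR). Feature extraction is assumed done.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_steganalyzer(X, y):
    """X: (n_images, n_features) FBS vectors; y: 0 = cover, 1 = stego."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X_tr, y_tr)
    acr = clf.score(X_te, y_te)  # held-out detection accuracy
    return clf, acr
```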

    Speaker recognition by means of restricted Boltzmann machine adaptation

    Restricted Boltzmann Machines (RBMs) have shown success in speaker recognition. In this paper, RBMs are investigated in a framework comprising universal model training and model adaptation. Taking advantage of the RBM unsupervised learning algorithm, a global model is trained on all available background data. This general speaker-independent model, referred to as the URBM, is further adapted to the data of a specific speaker to build a speaker-dependent model. To show its effectiveness, we have applied this framework to two different tasks: it has been used to discriminatively model target and impostor spectral features for classification, and it has been utilized to produce a vector-based representation of speakers. This vector-based representation, similar to an i-vector, can further be used for speaker recognition with either cosine scoring or Probabilistic Linear Discriminant Analysis (PLDA). The evaluation is performed on the core test condition of the NIST SRE 2006 database.
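
    A minimal sketch of the universal-model-plus-adaptation scheme follows, with scikit-learn's BernoulliRBM standing in for the RBM variant used in the paper; the flattened-weights speaker vector and cosine scoring are simplifying assumptions, not the paper's exact vector extraction.

```python
# Sketch: train a universal RBM on pooled background features, then
# continue training on one speaker's data to get a speaker-dependent
# model. Features are assumed scaled to [0, 1] for BernoulliRBM.
import numpy as np
from copy import deepcopy
from sklearn.neural_network import BernoulliRBM

def train_urbm(background_feats, n_hidden=256):
    """Universal RBM trained on all available background data."""
    urbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.01,
                        n_iter=20, random_state=0)
    return urbm.fit(background_feats)

def adapt(urbm, speaker_feats, n_epochs=5):
    """Adapt a copy of the universal model to one speaker's data."""
    model = deepcopy(urbm)
    for _ in range(n_epochs):
        model.partial_fit(speaker_feats)
    return model

def speaker_vector(model):
    # An i-vector-like representation from the adapted weights;
    # this flattening is an assumption for illustration.
    return model.components_.ravel()

def cosine_score(v1, v2):
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
```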

    Applying MDL to Learning Best Model Granularity

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on its granularity, for example the choice of precision of the parameters. Too high a precision generally involves modeling accidental noise, while too low a precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL, the best model is the one that most compresses a two-part code of the data set; this embodies "Occam's Razor." In two quite different experimental settings, the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment, the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is achieved for the learned parameter (length of the sampling interval) considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment, the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of hidden-layer nodes giving the best modeling performance. The optimal model (the one that extrapolates best on unseen examples) is predicted for the number of hidden nodes considered most likely by MDL, which again coincides with the best value found experimentally. Comment: LaTeX, 32 pages, 5 figures. Artificial Intelligence journal, to appear
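
    As a toy analogue of two-part MDL model selection, the sketch below picks a histogram granularity (bin count) by minimizing model bits plus data bits; the coding scheme is a textbook simplification, not the paper's experimental setup.

```python
# Toy two-part MDL: total code length = bits to describe the model
# (bin parameters) + bits to describe the data given the model.
# The granularity minimizing the sum is selected.
import numpy as np

def two_part_code_length(data, n_bins):
    counts, edges = np.histogram(data, bins=n_bins)
    n = len(data)
    probs = counts / n
    widths = np.diff(edges)
    nonzero = counts > 0
    # Data cost: negative log-likelihood (in bits) of the samples
    # under the histogram density estimate.
    data_bits = -np.sum(counts[nonzero] *
                        np.log2(probs[nonzero] / widths[nonzero]))
    # Model cost: ~ (1/2) log2(n) bits per estimated bin parameter.
    model_bits = 0.5 * n_bins * np.log2(n)
    return data_bits + model_bits

def best_granularity(data, candidates=range(2, 64)):
    """Return the bin count with the shortest two-part code."""
    return min(candidates, key=lambda k: two_part_code_length(data, k))
```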