
    MVGL analyser for multi-classifier based spam filtering system

    Over the last decade, with the rapid growth of the Internet and email, there has been a dramatic growth in spam. Spam is commonly defined as unsolicited email, and protecting email from the infiltration of spam is an important research issue. Classification algorithms have been used successfully to filter spam, but with a false-positive trade-off that is sometimes unacceptable to users. This paper presents an approach to overcome the burden of the GL (grey list) analyser, as a further refinement of our multi-classifier based classification model (Islam, M. and W. Zhou 2007). In this approach, we introduce a "majority voting grey list (MVGL)" analysing technique, which analyses the generated GL emails using the majority voting (MV) algorithm. We present two variations of the MV system: simple MV (SMV) and ranked MV (RMV). Our empirical evidence proves the improvement of this approach over the existing GL analyser of the multi-classifier based spam filtering process.
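    As a rough illustration (the abstract does not give the SMV/RMV details, so the signatures and weighting scheme below are assumptions), the two voting variants it names can be sketched as: simple majority voting counts each classifier's label equally, while ranked voting weights each classifier's vote, e.g. by its historical accuracy.

    ```python
    from collections import Counter

    def simple_majority_vote(predictions):
        # SMV sketch: the label emitted by the most classifiers wins.
        return Counter(predictions).most_common(1)[0][0]

    def ranked_majority_vote(predictions, weights):
        # RMV sketch (assumed scheme): each classifier's vote is scaled by a
        # rank-derived weight, so a few trusted classifiers can outvote many.
        scores = {}
        for label, w in zip(predictions, weights):
            scores[label] = scores.get(label, 0.0) + w
        return max(scores, key=scores.get)
    ```

    With equal weights the two functions agree; the ranked variant only changes the outcome when the weighted minority outscores the unweighted majority.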

    Current challenges in content based image retrieval by means of low-level feature combining

    The aim of this paper is to discuss a fusion of the two most popular low-level image features - colour and shape - in the context of content-based image retrieval. By combining them we can achieve much higher accuracy in various areas, e.g. pattern recognition, object representation, and image retrieval. To achieve this goal, two general strategies (sequential and parallel) for joining elementary queries are proposed. They are usually employed to construct a processing structure in which each image is decomposed into regions, based on shapes with characteristic properties - colour and its distribution. In the paper we provide an analysis of this proposition as well as exemplary results of its application to the content-based image retrieval problem. The original contribution of the presented work lies in the different fusions of several shape and colour descriptors (standard and non-standard ones), joined into parallel or sequential structures that give considerable improvements in content-based image retrieval. The novelty lies in the fact that many existing methods (even complex ones) work in a single domain (shape or colour), while the proposed approach joins features from different domains.
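    A minimal sketch of the two combination strategies named above, under assumed interfaces (the paper's actual descriptors and query structures are not specified here): a sequential structure uses one feature to narrow the candidate set and the other to rank it, while a parallel structure fuses both distances into a single score.

    ```python
    def sequential_query(images, shape_filter, colour_rank):
        # Sequential strategy sketch: the shape query prunes candidates,
        # then the colour descriptor orders the survivors.
        candidates = [img for img in images if shape_filter(img)]
        return sorted(candidates, key=colour_rank)

    def parallel_query(images, shape_dist, colour_dist, alpha=0.5):
        # Parallel strategy sketch: both descriptor distances contribute to
        # one fused score; alpha (assumed parameter) balances the domains.
        def score(img):
            return alpha * shape_dist(img) + (1 - alpha) * colour_dist(img)
        return sorted(images, key=score)
    ```

    The sequential form is cheaper when the first filter is selective; the parallel form avoids discarding images that match well in only one domain.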

    Bi-modal emotion recognition from expressive face and body gestures

    Psychological research findings suggest that humans rely on the combined visual channels of face and body more than on any other channel when they make judgments about human communicative behavior. However, most existing systems attempting to analyze human nonverbal behavior are mono-modal and focus only on the face. Research aiming to integrate gestures as an expressive means has only recently emerged. Accordingly, this paper presents an approach to automatic visual recognition of expressive face and upper-body gestures from video sequences, suitable for use in a vision-based affective multi-modal framework. Face and body movements are captured simultaneously using two separate cameras. For each video sequence, single expressive frames from both face and body are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from the individual modalities. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved a better recognition accuracy, outperforming classification using the individual facial or bodily modality alone. © 2006 Elsevier Ltd. All rights reserved.
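    The decision-level fusion mentioned above can be illustrated with a simple sum-rule over per-class posteriors from the two modality classifiers (a common scheme, assumed here; the paper's exact fusion rule is not given in the abstract):

    ```python
    def decision_level_fusion(face_probs, body_probs):
        # Sum-rule sketch: add the per-emotion posteriors produced
        # independently by the face and body classifiers, then pick
        # the emotion with the highest combined score.
        fused = {emo: face_probs[emo] + body_probs[emo] for emo in face_probs}
        return max(fused, key=fused.get)
    ```

    Feature-level fusion would instead concatenate the face and body feature vectors before training a single classifier; decision-level fusion keeps the modality classifiers independent and only combines their outputs.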

    Fusing face and body display for Bi-modal emotion recognition: Single frame analysis and multi-frame post integration

    This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously using two separate cameras. For each face and body image sequence, single "expressive" frames are selected manually for analysis and recognition of emotions. Firstly, individual classifiers are trained from the individual modalities for mono-modal emotion recognition. Secondly, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In the experiments performed, emotion classification using the two modalities achieved a better recognition accuracy, outperforming classification using the individual facial modality. We further extend the affect analysis to whole image sequences via a multi-frame post-integration approach over the single-frame recognition results. In our experiments, post integration based on the fusion of face and body has been shown to be more accurate than post integration based on the facial modality only. © Springer-Verlag Berlin Heidelberg 2005
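    One plausible reading of the multi-frame post integration described above (an assumption; the abstract does not specify the aggregation rule) is to average the per-frame class posteriors across the sequence and pick the emotion with the highest mean score:

    ```python
    def post_integrate(frame_probs):
        # Post-integration sketch: average each emotion's posterior over all
        # frames in the sequence, then take the argmax. Averaging smooths out
        # frames where a single snapshot is ambiguous or mislabeled.
        emotions = frame_probs[0].keys()
        n = len(frame_probs)
        avg = {emo: sum(f[emo] for f in frame_probs) / n for emo in emotions}
        return max(avg, key=avg.get)
    ```

    This is why sequence-level integration can beat single-frame recognition: an emotion that dominates most frames wins even if individual frames are misclassified.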