
    Fast Fight Detection

    Action recognition has become a hot topic within computer vision. However, the action recognition community has focused mainly on relatively simple actions like clapping, walking, jogging, etc. The detection of specific events with direct practical use, such as fights or aggressive behavior in general, has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios like prisons, psychiatric centers, or even embedded in camera phones. As a consequence, there is growing interest in developing violence detection algorithms. Recent work considered the well-known Bag-of-Words framework for the specific problem of fight detection. Under this framework, spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results in which high accuracy rates were achieved, the computational cost of extracting such features is prohibitive for practical applications. This work proposes a novel method to detect violent sequences. Features extracted from motion blobs are used to discriminate fight and non-fight sequences. Although the method is outperformed in accuracy by the state of the art, its computation time is significantly lower, making it amenable to real-time applications.
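
The core idea above, frame differencing followed by cheap statistics over the resulting motion blobs, can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' implementation: the particular blob statistics, the threshold, and the synthetic frames are all assumptions.

```python
# Hedged sketch: threshold the absolute difference of consecutive frames,
# label connected "motion blobs", and summarise them with a few cheap
# statistics that a downstream classifier could use to separate fight from
# non-fight clips. Frames here are synthetic noise for demonstration only.
import numpy as np
from scipy import ndimage

def motion_blob_features(prev_frame, frame, thresh=30):
    """Return simple statistics of motion blobs between two grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    labeled, n_blobs = ndimage.label(diff)          # connected motion regions
    if n_blobs == 0:
        return np.zeros(4)
    sizes = ndimage.sum(diff, labeled, index=range(1, n_blobs + 1))
    # blob count, mean/max blob area, fraction of moving pixels (assumed features)
    return np.array([n_blobs, sizes.mean(), sizes.max(), diff.mean()])

rng = np.random.default_rng(0)
prev_frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
print(motion_blob_features(prev_frame, frame))  # per-frame features, e.g. pooled over a clip for an SVM
```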

    Learning to Detect Violent Videos using Convolutional Long Short-Term Memory

    Developing a technique for the automatic analysis of surveillance videos in order to identify the presence of violence is of broad interest. In this work, we propose a deep neural network for the purpose of recognizing violent videos. A convolutional neural network is used to extract frame-level features from a video. The frame-level features are then aggregated using a variant of the long short-term memory that uses convolutional gates. The convolutional neural network along with the convolutional long short-term memory is capable of capturing localized spatio-temporal features, which enables the analysis of local motion taking place in the video. We also propose to use adjacent frame differences as the input to the model, thereby forcing it to encode the changes occurring in the video. The performance of the proposed feature extraction pipeline is evaluated on three standard benchmark datasets in terms of recognition accuracy. Comparison of the results obtained with state-of-the-art techniques reveals the promising capability of the proposed method in recognizing violent videos. Comment: Accepted at the International Conference on Advanced Video and Signal Based Surveillance (AVSS 2017).
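
As a rough illustration of the pipeline described above (adjacent-frame differences fed to a per-frame CNN whose features are aggregated by a convolutional LSTM), the Keras sketch below uses placeholder layer sizes and clip dimensions; it is not the authors' exact architecture.

```python
# Hedged sketch of a CNN + ConvLSTM violent-video classifier operating on
# adjacent-frame differences. Layer sizes, clip length and frame size are
# illustrative assumptions, not taken from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 16, 112, 112  # assumed clip length and frame size

def frame_differences(clip):
    """clip: (SEQ_LEN+1, H, W, 3) float32 -> (SEQ_LEN, H, W, 3) differences."""
    return clip[1:] - clip[:-1]

def build_model():
    inp = layers.Input(shape=(SEQ_LEN, H, W, 3))
    # Per-frame convolutional feature extractor (stand-in for a pretrained CNN)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, strides=2, activation="relu"))(inp)
    x = layers.TimeDistributed(layers.Conv2D(64, 3, strides=2, activation="relu"))(x)
    # Convolutional LSTM aggregates features over time while keeping spatial layout
    x = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=False)(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # violent vs. non-violent
    return models.Model(inp, out)

if __name__ == "__main__":
    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    dummy = np.random.rand(2, SEQ_LEN + 1, H, W, 3).astype("float32")
    diffs = np.stack([frame_differences(c) for c in dummy])
    print(model.predict(diffs).shape)  # (2, 1)
```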

    Detecting violent excerpts in movies using audio

    This thesis addresses the problem of automatically detecting violence in movie excerpts, based on audio and video features. A solution to this problem is relevant for a number of applications, including preventing children from being exposed to violence in existing media, which may avoid the development of violent behavior. We analyzed and extracted audio and video features directly from each movie excerpt and used them to classify the excerpt as violent or non-violent. In order to find the best feature set and achieve the best performance, our experiments use two different machine learning classifiers: Support Vector Machines (SVM) and Neural Networks (NN). We used a balanced subset of the existing ACCEDE database containing 880 movie excerpts manually tagged as violent or non-violent. During an early experimental stage, using the features originally included in the ACCEDE database, we tested the use of audio features alone, video features alone, and combinations of audio and video features. These results provided our baseline for further experiments using alternate audio features, extracted using available toolkits, and alternate video features, extracted using our own methods. Our most relevant conclusions are as follows: 1) audio features can be easily extracted using existing tools and have a strong impact on system performance; 2) in terms of video features, features related to motion and shot transitions within a scene have a greater impact than features related to color or luminance; 3) the best results are achieved by combining audio and video features. In general, the SVM classifier seems to work better for this problem, although the performance of both classifiers is similar for the best feature set.
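
A minimal sketch of the fusion-plus-SVM setup described above might look as follows; the feature dimensions, the audio/video descriptors, and the train/test split are assumptions for illustration, not the thesis' actual pipeline.

```python
# Hedged sketch of early fusion of audio and video features followed by an
# SVM, in the spirit of the experiments above. Feature extraction is out of
# scope; the arrays below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_excerpts = 880                                    # size of the balanced ACCEDE subset
audio_feats = rng.normal(size=(n_excerpts, 40))     # e.g. audio statistics (assumed dimensionality)
video_feats = rng.normal(size=(n_excerpts, 20))     # e.g. motion / shot-transition statistics (assumed)
labels = rng.integers(0, 2, size=n_excerpts)        # 1 = violent, 0 = non-violent

X = np.hstack([audio_feats, video_feats])           # early fusion by concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```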

    Is this Harmful? Learning to Predict Harmfulness Ratings from Video

    Automatically identifying harmful content in video is an important task with a wide range of applications. However, due to the difficulty of collecting high-quality labels as well as demanding computational requirements, the task has not had a satisfying general approach. Typically, only small subsets of the problem are considered, such as identifying violent content. In cases where the general problem is tackled, rough approximations and simplifications are made to deal with the lack of labels and the computational complexity. In this work, we identify and tackle these two main obstacles. First, we create a dataset of approximately 4000 video clips, annotated by professionals in the field. Second, we demonstrate that advances in video recognition enable training models on our dataset that consider the full context of the scene. We conduct an in-depth study of our modeling choices and find that we greatly benefit from combining the visual and audio modalities, and that pretraining on large-scale video recognition datasets and class-balanced sampling further improve performance. We additionally perform a qualitative study that reveals the heavily multi-modal nature of our dataset. Our dataset will be made available upon publication. Comment: 11 pages, 15 figures.
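
One of the training choices mentioned above, class-balanced sampling, can be illustrated with a short sketch; the class counts below are invented and the snippet is not tied to the authors' dataset or code.

```python
# Hedged sketch of class-balanced sampling: draw training clips with
# probability inversely proportional to their class frequency so that rare
# classes appear about as often as common ones in each batch.
import numpy as np

rng = np.random.default_rng(3)
labels = np.array([0] * 3200 + [1] * 600 + [2] * 200)  # assumed imbalanced class counts

class_counts = np.bincount(labels)
weights = 1.0 / class_counts[labels]   # per-clip sampling weight
weights /= weights.sum()               # normalise into a probability distribution

batch = rng.choice(len(labels), size=64, replace=True, p=weights)
print("class distribution in a balanced batch:", np.bincount(labels[batch]))
```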

    Multi-perspective cost-sensitive context-aware multi-instance sparse coding and its application to sensitive video recognition

    With the development of video-sharing websites, P2P, micro-blogs, mobile WAP websites, and so on, sensitive videos can be more easily accessed. Effective sensitive video recognition is necessary for web content security. Among web sensitive videos, this paper focuses on violent and horror videos. Based on color emotion and color harmony theories, we extract visual emotional features from videos. A video is viewed as a bag and each shot in the video is represented by a key frame, which is treated as an instance in the bag. Then, we combine multi-instance learning (MIL) with sparse coding to recognize violent and horror videos. The resulting MIL-based model can be updated online to adapt to changing web environments. We propose a cost-sensitive context-aware multi-instance sparse coding (MI-SC) method, in which the contextual structure of the key frames is modeled using a graph, and fusion between audio and visual features is carried out by extending the classic sparse coding into cost-sensitive sparse coding. We then propose a multi-perspective multi-instance joint sparse coding (MI-J-SC) method that handles each bag of instances from an independent perspective, a contextual perspective, and a holistic perspective. The experiments demonstrate that features with an emotional meaning are effective for violent and horror video recognition, and that our cost-sensitive context-aware MI-SC and multi-perspective MI-J-SC methods outperform traditional MIL methods and traditional SVM- and KNN-based methods.
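
To make the bag-of-key-frames formulation concrete, here is a hedged baseline sketch of multi-instance sparse coding: each key-frame instance is sparse-coded against a learned dictionary and the codes are max-pooled into a bag descriptor. The cost-sensitive, context-aware, and multi-perspective extensions (MI-SC, MI-J-SC) proposed in the paper are not reproduced, and all features here are synthetic.

```python
# Hedged sketch: videos as bags of key-frame feature vectors, sparse coding
# per instance, max-pooling over the bag, linear classification of the bag.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_videos, max_shots, feat_dim, n_atoms = 60, 8, 24, 48

# Illustrative visual-emotion features per key frame (assumed, not the paper's features)
bags = [rng.normal(size=(rng.integers(3, max_shots), feat_dim)) for _ in range(n_videos)]
labels = rng.integers(0, 2, size=n_videos)  # 1 = violent/horror, 0 = normal

# Learn a dictionary from all instances pooled together
all_instances = np.vstack(bags)
dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
dictionary = dico.fit(all_instances).components_

coder = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars", transform_alpha=0.5)

def bag_descriptor(instances):
    codes = np.abs(coder.transform(instances))  # sparse code per key frame
    return codes.max(axis=0)                    # max-pool codes over the bag

X = np.vstack([bag_descriptor(b) for b in bags])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```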

    A COMPUTATION METHOD/FRAMEWORK FOR HIGH LEVEL VIDEO CONTENT ANALYSIS AND SEGMENTATION USING AFFECTIVE LEVEL INFORMATION

    VIDEO segmentation facilitates efficient video indexing and navigation in large digital video archives. It is an important process in a content-based video indexing and retrieval (CBVIR) system. Many automated solutions performed segmentation by utilizing information about the "facts" of the video. These "facts" come in the form of labels that describe the objects captured by the camera. This type of solution was able to achieve good and consistent results for some video genres, such as news programs and informational presentations. The content format of this type of video is generally quite standard, and automated solutions were designed to follow these format rules. For example, in [1] the presence of news anchor persons was used as a cue to determine the start and end of a meaningful news segment. The same cannot be said for video genres such as movies and feature films, because the makers of these videos utilize different filming techniques to elicit certain affective responses from their target audience. Humans usually perform manual video segmentation by trying to relate changes in time and locale to discontinuities in meaning [2]. As a result, viewers usually have doubts about the boundary locations of a meaningful video segment due to their different affective responses. This thesis presents an entirely new view of the problem of high-level video segmentation. We developed a novel probabilistic method for affective-level video content analysis and segmentation. Our method has two stages. In the first stage, affective content labels are assigned to video shots by means of a dynamic Bayesian network (DBN). A novel hierarchical-coupled dynamic Bayesian network (HCDBN) topology is proposed for this stage. The topology is based on the pleasure-arousal-dominance (P-A-D) model of affect representation [3]. In principle, this model can represent a large number of emotions. In the second stage, the visual, audio, and affective information of the video is used to compute a statistical feature vector representing the content of each shot. Affective-level video segmentation is achieved by applying spectral clustering to the feature vectors. We evaluated the first stage of our proposal by comparing its emotion detection ability with the existing works in the field of affective video content analysis. To evaluate the second stage, we used the time adaptive clustering (TAC) algorithm as our performance benchmark. The TAC algorithm was the best high-level video segmentation method [2]; however, it is very computationally intensive. To accelerate its computation, we developed a modified TAC (modTAC) algorithm designed to be mapped easily onto a field-programmable gate array (FPGA) device. Both the TAC and modTAC algorithms were used as performance benchmarks for our proposed method. Since affective video content is a perceptual concept, segmentation performance and human agreement rates were used as our evaluation criteria. To obtain our ground-truth data and viewer agreement rates, a pilot panel study based on the work of Gross et al. [4] was conducted. Experimental results show the feasibility of our proposed method. For the first stage of our proposal, an average improvement of as high as 38% was achieved over previous works. As for the second stage, an improvement of as high as 37% was achieved over the TAC algorithm.
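
The second stage described above (spectral clustering of per-shot feature vectors) can be sketched as follows; the per-shot descriptors are synthetic placeholders, and the DBN-based affective labelling of the first stage is not shown.

```python
# Hedged sketch: summarise each shot with a feature vector (visual + audio +
# affective in the thesis; random here), cluster shots spectrally, and read
# segment boundaries off changes in cluster label along the shot timeline.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
n_shots, feat_dim, n_segments = 40, 12, 5

shot_features = rng.normal(size=(n_shots, feat_dim))  # assumed per-shot descriptors

clustering = SpectralClustering(n_clusters=n_segments, affinity="rbf", random_state=0)
shot_labels = clustering.fit_predict(shot_features)

# Convert cluster labels into contiguous segment boundaries along the shot timeline
boundaries = [i for i in range(1, n_shots) if shot_labels[i] != shot_labels[i - 1]]
print("segment boundaries at shot indices:", boundaries)
```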

    Alpha, Beta, Sigma: A Critical Analysis of Sigma Male Ideology

    In the online environment, the Manosphere has been identified as an unstructured network of groups who express harmful anti-feminist and anti-progressive views. Informally associated with the Manosphere, Sigma Male ideology has emerged as an allegedly unique classification of men who are successful and popular, but also silent and rebellious. Despite assertions that they adhere to their own principles, Sigma Male ideological expressions, as conveyed through video memes of select fictional role models, demonstrate that they are more intimately connected to the Manosphere than acknowledged. This research paper applies critical qualitative meme analysis to TikTok videos that feature the specific Sigma Male inspirational figure of Travis Bickle from Martin Scorsese’s 1976 film Taxi Driver. The objective of this study was to establish how Sigma Male representational practices reflect an ideology comparable to, or distinct from, that of the Manosphere. The resulting analysis of Sigma Male memes revealed that while their ideological perspectives correspond with the reactionary values of the Manosphere, they differ in being implicitly political. The ideological sentiments of Sigma Males are rather affectively charged and represent a point of political orientation where regressive political views are likely to develop.