
    Fast Fight Detection

    Action recognition has become a hot topic within computer vision. However, the action recognition community has focused mainly on relatively simple actions like clapping, walking, jogging, etc. The detection of specific events with direct practical use, such as fights or aggressive behavior in general, has been comparatively less studied. Such a capability may be extremely useful in some video surveillance scenarios like prisons, psychiatric centers, or even embedded in camera phones. As a consequence, there is growing interest in developing violence detection algorithms. Recent work considered the well-known Bag-of-Words framework for the specific problem of fight detection. Under this framework, spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results in which high accuracy rates were achieved, the computational cost of extracting such features is prohibitive for practical applications. This work proposes a novel method to detect violent sequences. Features extracted from motion blobs are used to discriminate fight and non-fight sequences. Although the method is outperformed in accuracy by the state of the art, its computation time is significantly faster, making it amenable to real-time applications.
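    The abstract does not spell out which blob descriptors are used, so the following is only a rough sketch of the kind of pipeline it describes: frame differencing, thresholding, and connected components ("motion blobs") whose simple shape statistics feed a fight/non-fight classifier. The function name, thresholds, and the four statistics are illustrative assumptions, not the authors' exact features.

    # Illustrative sketch (not the paper's exact pipeline): simple motion-blob
    # statistics from two consecutive frames, computed with OpenCV.
    import cv2
    import numpy as np

    def motion_blob_features(frame_prev, frame_curr, thresh=25, min_area=50):
        """Return simple statistics of motion blobs between two frames (illustrative)."""
        g1 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(g1, g2)                       # frame differencing
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        areas = np.array([cv2.contourArea(c) for c in contours
                          if cv2.contourArea(c) >= min_area], dtype=float)
        if areas.size == 0:
            return np.zeros(4)
        # blob count, total moving area, mean and max blob size
        return np.array([areas.size, areas.sum(), areas.mean(), areas.max()])

    Per-clip feature vectors built from such statistics (e.g., averaged over all frame pairs) would then be passed to an ordinary classifier; this keeps the per-frame cost far below dense spatio-temporal descriptors, which is the trade-off the abstract describes.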

    Deception Detection in Videos

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low-level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features, which have been widely used for action recognition, are also very good at predicting deception in videos. We fuse the scores of classifiers trained on IDT features and high-level micro-expressions to improve performance. MFCC (Mel-frequency Cepstral Coefficients) features from the audio domain also provide a significant boost in performance, while information from transcripts is not very beneficial for our system. Using various classifiers, our automated system obtains an AUC of 0.877 (10-fold cross-validation) when evaluated on subjects who were not part of the training set. Even though state-of-the-art methods use human annotations of micro-expressions for deception detection, our fully automated approach outperforms them by 5%. When combined with human annotations of micro-expressions, our AUC improves to 0.922. We also present results of a user study analyzing how well average humans perform on this task, what modalities they use for deception detection, and how they perform if only one modality is accessible. Our project page can be found at https://doubaibai.github.io/DARE/. (AAAI 2018.)
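    A minimal sketch of the score-level (late) fusion and cross-validated AUC evaluation the abstract mentions, assuming per-modality scores are already available; the fusion weights, array names, and toy data are illustrative assumptions, not the authors' setup.

    # Weighted score-level fusion of per-modality deception scores, evaluated with AUC.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def fuse_scores(score_idt, score_microexp, score_mfcc, weights=(0.4, 0.3, 0.3)):
        """Weighted average of per-modality scores (weights are illustrative)."""
        scores = np.vstack([score_idt, score_microexp, score_mfcc])   # 3 x n_clips
        w = np.asarray(weights)[:, None]
        return (w * scores).sum(axis=0)

    # toy usage with random scores for 20 clips
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=20)                                   # toy deception labels
    fused = fuse_scores(rng.random(20), rng.random(20), rng.random(20))
    print("AUC:", roc_auc_score(y, fused))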

    Efficient audio signal processing for embedded systems

    We investigated two design strategies that would allow us to efficiently process audio signals on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller," using a combination of bass extension and dynamic range compression. We also developed an audio energy reduction algorithm for loudspeaker power management by suppressing signal energy below the masking threshold. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field programmable analog array (FPAA). The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end: the machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. We also designed the circuits to implement the AdaBoost-based analog classifier.
    Ph.D. thesis. Committee Chair: Anderson, David; Committee Members: Hasler, Jennifer; Hunt, William; Lanterman, Aaron; Minch, Bradle
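    A hedged sketch of the AdaBoost-based feature selection idea described above: with decision-stump base learners, each boosting round splits on a single feature, so the union of split features gives the reduced feature set to implement in the analog front-end. The synthetic data, sizes, and label rule are illustrative assumptions, not the thesis's data.

    # AdaBoost with decision stumps as a feature selector (scikit-learn's default
    # base learner is a depth-1 tree, i.e., a stump).
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))                     # 200 frames, 16 candidate audio features
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)      # toy "target sound present" labels

    ada = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X, y)
    # each fitted stump splits on one feature; collect the distinct feature indices
    selected = sorted({int(stump.tree_.feature[0]) for stump in ada.estimators_
                       if stump.tree_.feature[0] >= 0})
    print("features kept for the analog front-end:", selected)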

    Information extraction from primary care visits to support patient-provider interactions

    The extent of electronic health record system usage in clinical settings has affected the dynamic between clinicians and patients and has thus been connected to physician morale and the quality of care patients receive. Recent research has also uncovered a correlation between physician burnout and negative physician attitudes toward electronic health record systems. In order to begin exploring the nature of the relationship between electronic health record usage, physician burnout, and patient care, it is necessary to first analyze patient-provider interactions within the context of verbal features such as turn-taking and non-verbal features such as eye contact. While previous works have sought to annotate non-verbal and verbal features via manual coding techniques and then analyze their impacts, we seek to automate the annotation process in order to create a more robust system of analysis in a less time-consuming fashion. This thesis focuses on physician gaze and speaking annotations, as these are non-verbal and verbal components of the interaction that can be connected to eye contact and turn-taking, respectively, features that have been linked in prior research to patient outcomes. Previously published work from within this project has demonstrated the viability of extracting image features in the form of YOLO-based person-positioning coordinates and optical flow summary statistics to inform the learning of physician gaze for two physicians and six patients with over 80% minimum accuracy. The work described in this thesis expands upon the previous findings by increasing the number of patients and physicians included in the analysis; by diversifying the classifiers to be more robust to new data; and by incorporating automatically extracted audio information in the form of mel-frequency cepstral coefficients and their derivatives, as well as an additional optical flow summary statistic, in order to make predictions regarding physician gaze and speaking annotations on a frame-by-frame basis. We thus illustrate a process of developing and implementing an automated system for labeling multiple videos of physician-patient interactions. In doing so, we demonstrate that audio and visual features can be combined to inform predictions of physician gaze and speaking annotations in both testing and sequential validation data. While our approach focuses on learning physician gaze and speaking annotations, the methodologies introduced can be extended to capture other aspects of the interaction, as well as to connect these interactions to patient ratings of clinical interactions, physician usage of electronic health record systems, and measures of physician burnout. Ultimately, the approaches presented in this thesis can aid the creation of an interactive system providing instantaneous feedback to providers during clinical visits, created with the intention of improving clinical care within the context of electronic health records so as to enhance care, improve patient outcomes, and reduce instances of physician burnout.
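    A rough sketch of the frame-level feature fusion described above: per-frame optical flow summary statistics on the video side and MFCCs with their deltas on the audio side, aligned to video frames and fed to a standard classifier. The flow parameters, number of coefficients, classifier choice, and the variable names X_frames/speaking_labels are illustrative assumptions; the thesis's exact feature set and models may differ.

    # Per-frame audio-visual features for gaze/speaking prediction (illustrative).
    import cv2
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def flow_stats(prev_gray, curr_gray):
        """Summary statistics of dense optical flow magnitude for one frame pair."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        return np.array([mag.mean(), mag.std(), mag.max()])

    def mfcc_per_frame(audio, sr, n_video_frames):
        """MFCCs and their deltas, resampled to one vector per video frame."""
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
        feats = np.vstack([mfcc, librosa.feature.delta(mfcc)])        # 26 x T
        idx = np.linspace(0, feats.shape[1] - 1, n_video_frames).astype(int)
        return feats[:, idx].T                                        # n_video_frames x 26

    # Concatenate [flow_stats | mfcc] per frame into X_frames, then train against
    # manual annotations (hypothetical names):
    # clf = RandomForestClassifier(n_estimators=200).fit(X_frames, speaking_labels)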

    Efficient smile detection by Extreme Learning Machine

    Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user experience analysis, and patient monitoring. As one of the most important and informative expressions, a smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach is proposed based on the Extreme Learning Machine (ELM). Faces are first detected and a holistic flow-based face registration is applied which does not require any manual labeling or key-point detection. An ELM is then used to train the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases, including real-world face images. Comparisons against benchmark classifiers, including the Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to state-of-the-art smile detectors, the proposed method achieves competitive results without preprocessing and manual registration.
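    For readers unfamiliar with the classifier, a minimal sketch of the generic Extreme Learning Machine recipe: a single hidden layer with random, fixed input weights and output weights solved in closed form. The hidden-layer size, activation, and decision threshold are illustrative; this is the standard ELM formulation, not necessarily the paper's exact configuration.

    # Minimal ELM for binary (e.g., smile / non-smile) classification.
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=500, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            # random projection followed by a sigmoid nonlinearity
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed random weights
            self.b = self.rng.normal(size=self.n_hidden)
            H = self._hidden(X)
            self.beta = np.linalg.pinv(H) @ y                           # closed-form output weights
            return self

        def predict(self, X):
            return (self._hidden(X) @ self.beta > 0.5).astype(int)

    Because only the output weights are learned, and in a single pseudoinverse step, training is typically much faster than iterative optimization, which is the efficiency argument behind ELM-based detectors.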

    Audio-assisted movie dialogue detection

    An audio-assisted system is investigated that detects whether a movie scene is a dialogue or not. The system is based on actor indicator functions, i.e., functions which define whether an actor speaks at a certain time instant. In particular, the cross-correlation and the magnitude of the corresponding cross-power spectral density of a pair of indicator functions are input to various classifiers, such as voted perceptrons, radial basis function networks, random trees, and support vector machines, for dialogue/non-dialogue detection. To boost classifier efficiency, AdaBoost is also exploited. The aforementioned classifiers are trained using ground-truth indicator functions determined by human annotators for 41 dialogue and another 20 non-dialogue audio instances. For testing, actual indicator functions are derived by applying audio activity detection and actor clustering to audio recordings. 23 instances are randomly chosen among the aforementioned instances, 17 of which correspond to dialogue scenes and 6 to non-dialogue ones. Accuracy ranging between 0.739 and 0.826 is reported. © 2008 IEEE
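    A hedged sketch of the feature computation described: given two binary actor indicator functions, compute their cross-correlation and the magnitude of their cross-power spectral density, and concatenate them as classifier input. The frame rate, window length, and normalization are illustrative assumptions.

    # Cross-correlation and cross-power spectral density features from a pair of
    # actor speaking-indicator functions (illustrative parameters).
    import numpy as np
    from scipy.signal import csd

    def dialogue_features(ind_a, ind_b, fs=25, nperseg=64):
        """Features from two binary indicator functions sampled at fs Hz."""
        a = ind_a - ind_a.mean()
        b = ind_b - ind_b.mean()
        xcorr = np.correlate(a, b, mode="full")
        xcorr /= (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)     # normalized cross-correlation
        _, pxy = csd(ind_a, ind_b, fs=fs, nperseg=nperseg)           # cross-power spectral density
        return np.concatenate([xcorr, np.abs(pxy)])                  # classifier input vector

    Alternating speakers in a dialogue tend to produce anti-correlated indicator functions, which is why these pairwise statistics carry the dialogue/non-dialogue signal.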

    Automatic Music Genre Classification of Audio Signals with Machine Learning Approaches

    Musical genre classification is put into context by explaining the structures in music and how they are analyzed and perceived by humans. The growth of music databases in personal collections and on the Internet has created great demand for music information retrieval, and especially for automatic musical genre classification. In this research we focus on combining information extracted from the audio signal rather than from different sources. This paper presents a comprehensive machine learning approach to the problem of automatic musical genre classification using the audio signal. The proposed approach uses two feature vectors, a Support Vector Machine classifier with a polynomial kernel function, and machine learning algorithms. More specifically, two feature sets representing frequency domain, temporal domain, cepstral domain and modulation frequency domain audio features are proposed. Using the proposed features, the SVM acts as a strong base learner in AdaBoost, so the performance of the SVM classifier cannot be improved by boosting. The final genre classification is obtained from the set of individual results according to a weighted-combination late fusion method, which outperformed the trained fusion method. Music genre classification accuracies of 78% and 81% are reported on the GTZAN dataset over ten musical genres and on the ISMIR2004 genre dataset over six musical genres, respectively. We observed higher classification accuracies with the ensembles than with the individual classifiers, and the improvements on the GTZAN and ISMIR2004 genre datasets are three percent on average. This ensemble approach shows that it is possible to improve classification accuracy by using different types of domain-based audio features.
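    A minimal sketch of the described combination: one polynomial-kernel SVM per feature set, with a weighted late fusion of their class probabilities. The fusion weights, kernel degree, and the assumption that the two feature matrices (X_spectral, X_modulation) are precomputed are all illustrative, not the paper's exact values.

    # Two polynomial-kernel SVMs, one per feature set, fused by weighted probabilities.
    import numpy as np
    from sklearn.svm import SVC

    def train_fused_genre_classifier(X_spectral, X_modulation, y, weights=(0.6, 0.4)):
        svm_a = SVC(kernel="poly", degree=3, probability=True).fit(X_spectral, y)
        svm_b = SVC(kernel="poly", degree=3, probability=True).fit(X_modulation, y)

        def predict(Xa, Xb):
            # weighted-combination late fusion of per-classifier genre probabilities
            proba = (weights[0] * svm_a.predict_proba(Xa)
                     + weights[1] * svm_b.predict_proba(Xb))
            return svm_a.classes_[np.argmax(proba, axis=1)]

        return predict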