9 research outputs found

    PhD Forum: Investigating the performance of a multi-modal approach to unusual event detection

    In this paper, we investigate the parameters underpinning our previously presented system for detecting unusual events in surveillance applications [1]. The system identifies anomalous events using an unsupervised, data-driven approach. During a training period, typical activities within a surveilled environment are modeled using multi-modal sensor readings. Significant deviations from the established model of regular activity can then be flagged as anomalous at run-time. Using this approach, the system can be deployed and automatically adapt for use in any environment without any manual adjustment. Experiments were carried out on two days of audio-visual data and evaluated using a manually annotated ground-truth. We investigate sensor fusion and quantitatively evaluate the performance gains over single-modality models. We also investigate different formulations of our cluster-based model of usual scenes, as well as the impact of dynamic thresholding on identifying anomalous events. Experimental results are promising, even when modeling is performed using very simple audio and visual features.
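    As a rough illustration of the cluster-based model of usual scenes and the dynamic threshold discussed above, the sketch below clusters training-period feature vectors and flags observations that fall far from every cluster; the feature choice, number of clusters and threshold rule are illustrative assumptions, not the authors' exact formulation.

        # Minimal sketch of a cluster-based "usual scene" model with an adaptive threshold.
        import numpy as np
        from sklearn.cluster import KMeans

        def train_usual_model(train_features, n_clusters=8):
            """Cluster multi-modal feature vectors gathered during the training period."""
            model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(train_features)
            train_dists = np.min(model.transform(train_features), axis=1)  # distance to nearest cluster
            return model, train_dists

        def anomaly_score(model, feature_vec):
            """Distance of a new observation to the closest 'usual' cluster centre."""
            return np.min(model.transform(np.asarray(feature_vec).reshape(1, -1)), axis=1)[0]

        def dynamic_threshold(recent_scores, k=3.0):
            """Threshold that adapts to recent activity: mean plus k standard deviations."""
            return np.mean(recent_scores) + k * np.std(recent_scores)

        # Usage: flag frame t as unusual if anomaly_score(model, x_t) > dynamic_threshold(recent_scores).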

    Unusual event detection in real-world surveillance applications

    Given the near-ubiquity of CCTV, there is significant ongoing research effort to apply image and video analysis methods together with machine learning techniques towards autonomous analysis of such data sources. However, traditional approaches to scene understanding remain dependent on training based on human annotations that need to be provided for every camera sensor. In this thesis, we propose an unusual event detection and classification approach which is applicable to real-world visual monitoring applications. The goal is to infer the usual behaviours in the scene and to judge the normality of the scene on the basis of the model created. The first requirement for the system is that it should not demand annotated data to train the system. Annotation of the data is a laborious task, and it is not feasible in practice to annotate video data for each camera as an initial stage of event detection. Furthermore, even obtaining training examples for the unusual event class is challenging due to the rarity of such events in video data. Another requirement for the system is online generation of results. In surveillance applications, it is essential to generate real-time results to allow a swift response by a security operator to prevent harmful consequences of unusual and antisocial events. The online learning capability also means that the model can be continuously updated to accommodate natural changes in the environment. The third requirement for the system is the ability to run the process indefinitely. These requirements are necessary for real-world surveillance applications, and approaches that conform to them need to be investigated. This thesis investigates unusual event detection methods that conform with these real-world requirements, through theoretical and experimental study of machine learning and computer vision algorithms.
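    The online-learning and indefinite-run requirements described above could, for instance, be met by an incremental model that scores each new observation and then folds it into the model; the sketch below uses scikit-learn's MiniBatchKMeans as a stand-in and is not the method developed in the thesis (the warm-up buffer and the threshold are placeholders).

        # Hypothetical illustration of online, annotation-free anomaly scoring.
        import numpy as np
        from sklearn.cluster import MiniBatchKMeans

        initial_buffer = np.random.rand(64, 16)       # placeholder for early, unlabelled feature vectors
        model = MiniBatchKMeans(n_clusters=8, random_state=0)
        model.partial_fit(initial_buffer)             # warm-up on unlabelled data only

        def process_observation(x, model, threshold=2.5):
            """Flag x if it is far from every learned cluster, then update the model online."""
            x = np.asarray(x).reshape(1, -1)
            unusual = np.min(model.transform(x), axis=1)[0] > threshold
            model.partial_fit(x)                      # constant-memory update, so the process can run indefinitely
            return unusual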

    Anti-social behavior detection in audio-visual surveillance systems

    In this paper we propose a general-purpose framework for detection of unusual events. The proposed system is based on the unsupervised method for unusual scene detection in webcam images that was introduced in [1]. We extend their algorithm to accommodate data from different modalities and introduce the concept of time-space blocks. In addition, we evaluate early and late fusion techniques for our audio-visual data features. The experimental results on 192 hours of data show that data fusion of audio and video outperforms using a single modality.
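    A minimal sketch of the two fusion strategies compared above is given below; the actual per-block audio and video features and the anomaly model are abstracted away.

        import numpy as np

        def early_fusion(audio_feat, video_feat):
            """Early fusion: concatenate the audio and video features of a time-space block into one vector."""
            return np.concatenate([audio_feat, video_feat])

        def late_fusion(audio_score, video_score, w_audio=0.5):
            """Late fusion: combine per-modality anomaly scores, here with a simple weighted mean."""
            return w_audio * audio_score + (1.0 - w_audio) * video_score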

    Low cost angular displacement sensors for biomechanical applications - a review

    In the general scientific quest for increased quality of life, a natural ambition is to know more about human body kinematics. Varied knowledge can be extracted from sensors placed on the human body, and through associated biomechanical parameter evaluation the causal connection between different biomechanical parameters and medical conditions can be inferred. From a biomechanical point of view, one of the most important parameters within the human body is the amplitude of angular movements of joints. Although many angular sensors are used in industry, particular characteristics such as small size, flexibility and appropriate attachment methods must be taken into consideration when estimating the amplitude of movement of human joints. This paper reviews the existing low-cost, easy-to-manipulate angular sensors listed in the scientific literature, which currently are or could be used in rehabilitation engineering, physiotherapy or biomechanical evaluations in sport. The review is carried out in terms of a classification based on the sensors’ working principles and includes resistive, capacitive, magnetic and piezoresistive sensors.
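    As a hedged example of how one class of these sensors is typically read out, the sketch below converts a resistive flex sensor reading into a joint angle via a voltage divider and a two-point linear calibration; the supply voltage, resistor value and linear model are illustrative assumptions rather than details from the review.

        # Illustrative readout for a resistive flex sensor in a voltage divider.
        V_SUPPLY = 3.3            # assumed supply voltage (V)
        R_FIXED = 10_000.0        # assumed fixed divider resistor (ohms)

        def flex_resistance(v_out):
            """Recover the sensor resistance from the divider output voltage."""
            return R_FIXED * (V_SUPPLY - v_out) / v_out

        def joint_angle_deg(v_out, r_straight, r_bent_90):
            """Map resistance linearly onto 0-90 degrees using two calibration readings."""
            r = flex_resistance(v_out)
            return 90.0 * (r - r_straight) / (r_bent_90 - r_straight)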

    Performance analysis and visualisation in tennis using a low-cost camera network

    We describe a novel system for tennis performance analysis that allows coaches to review games and provide detailed audio-visual feedback to tennis athletes. The basis for our system is a network of low-cost IP cameras surrounding the tennis court. Our system exploits the output of several visual analysis modules, including the tracking of players and the tennis ball, and the extraction of player silhouettes for 3D reconstruction. A range of intuitive tools within the interface allow tennis coaches to add 2D and 3D annotations to live video, view play from multiple perspectives, record audio commentary and compute game statistics in real-time. The result is a video file that can be used to provide personalised feedback to the players or for use as a teaching resource for others. While we focus on tennis in this work, we believe our system can be generalised to other sports and allow a range of non-professional sports clubs to provide high-quality feedback to their athletes.
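    One of the visual analysis modules mentioned above, player silhouette extraction, could be sketched with standard background subtraction as below; this is only an illustration using OpenCV, not the system's actual implementation.

        # Illustrative per-camera silhouette extraction via background subtraction (OpenCV).
        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

        def player_silhouette(frame):
            """Return a cleaned binary foreground mask for one court-camera frame."""
            mask = subtractor.apply(frame)
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)    # drop shadow pixels (value 127)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)         # remove small noise blobs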

    MedFit: a mobile application for recovering CVD patients

    The third phase of the recovery from cardiovascular disease (CVD) is an exercise-based rehabilitation programme. However, adherence to an exercise regime is typically not maintained by the patient for a variety of reasons such as lack of time, financial constraints, etc. In order to facilitate patients to perform their exercises from the comfort of their home and at their own convenience, we have developed a mobile application, termed MedFit. It provides access to a tailored suite of exercises along with easy-to-understand guidance from audio and video instructions. Two types of wearable sensors are utilized to allow motivational feedback to be provided to the user for self-monitoring and to provide near real-time feedback. Fitbit, a commercially available activity and fitness tracker, is used to provide in-depth feedback for self-monitoring over longer periods of time (e.g. day, week, month), whereas the Shimmer wireless sensing platform provides the data for near real-time feedback on the quality of the exercises performed. MedFit is a simple and intuitive mobile application designed to provide the motivation and tools for patients to help ensure faster recovery from the trauma caused by CVD. In this paper we describe the MedFit application as a demo submission to the 2nd MMHealth Workshop at ACM MM 2017.

    How interaction methods affect image segmentation: User experience in the task

    Interactive image segmentation is extensively used in photo editing when the aim is to separate a foreground object from its background so that it is available for various applications. The goal of the interaction is to get an accurate segmentation of the object with the minimal amount of human effort. To improve the usability and user experience of interactive image segmentation, we present three interaction methods and study the effect of each using both objective and subjective metrics, such as accuracy, amount of effort needed, cognitive load and preference of interaction method as voted by users. The novelty of this paper is twofold. First, the evaluation of interaction methods is carried out with objective metrics such as object and boundary accuracies in tandem with subjective metrics, to cross-check whether they support each other. Second, we analyze electroencephalography (EEG) data obtained from subjects performing the segmentation as an indicator of brain activity. The experimental results potentially give valuable cues for the development of easy-to-use yet efficient interaction methods for image segmentation.
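    The objective metrics referred to above could be computed roughly as follows, with object accuracy taken as region intersection-over-union and boundary accuracy as an F-score over mask contours within a pixel tolerance; the paper's exact definitions may differ.

        # Rough illustrations of region (object) and boundary accuracy for a segmentation mask.
        import numpy as np
        from scipy.ndimage import binary_dilation

        def object_accuracy(pred, gt):
            """Intersection-over-union between predicted and ground-truth object masks."""
            pred, gt = pred.astype(bool), gt.astype(bool)
            return np.logical_and(pred, gt).sum() / max(np.logical_or(pred, gt).sum(), 1)

        def boundary_f1(pred, gt, tol=2):
            """F-score between mask boundaries, matching pixels within `tol` pixels."""
            def boundary(mask):
                mask = mask.astype(bool)
                return mask & binary_dilation(~mask)     # pixels on the object's border
            bp, bg = boundary(pred), boundary(gt)
            precision = (bp & binary_dilation(bg, iterations=tol)).sum() / max(bp.sum(), 1)
            recall = (bg & binary_dilation(bp, iterations=tol)).sum() / max(bg.sum(), 1)
            return 2 * precision * recall / max(precision + recall, 1e-9)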

    Reduction of false alarms triggered by spiders/cobwebs in surveillance camera networks.

    The percentage of false alarms caused by spiders in automated surveillance can range from 20% to 50%. False alarms increase the workload of surveillance personnel validating the alarms and the maintenance labor cost associated with regular cleaning of webs. We propose a novel, cost-effective method to detect false alarms triggered by spiders/webs in surveillance camera networks. This is accomplished by building a spider classifier intended to be a part of the surveillance video processing pipeline. The proposed method uses a feature descriptor obtained by early fusion of blur and texture. The approach is sufficiently efficient for real-time processing and yet comparable in performance with more computationally costly approaches such as SIFT with bag-of-visual-words aggregation. The proposed method can eliminate 98.5% of false alarms caused by spiders in a data set supplied by an industry partner, with a false positive rate of less than 1%.
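    In the spirit of the early-fused blur and texture descriptor described above, a classifier for web-triggered alarms could be sketched as below; the blur measure, LBP parameters and choice of SVM are illustrative assumptions, not the exact pipeline of the paper.

        # Sketch of an early-fused blur + texture descriptor feeding a binary classifier.
        import cv2
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def blur_texture_descriptor(gray):
            """Concatenate a simple blur measure with a uniform-LBP texture histogram."""
            blur = cv2.Laplacian(gray, cv2.CV_64F).var()              # low variance suggests a blurred, out-of-focus region
            lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            return np.concatenate([[blur], hist])                     # early fusion of the two cues

        # clf = SVC(kernel="rbf").fit(train_descriptors, train_labels)   # spider/web vs. genuine alarm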

    Design and development of the medFit app: a mobile application for cardiovascular disease rehabilitation

    Rehabilitation from cardiovascular disease (CVD) usually requires lifestyle changes, especially an increase in exercise and physical activity. However, uptake and adherence to exercise is low for community-based programmes. We propose a mobile application that allows users to choose the type of exercise and complete it at a convenient time in the comfort of their own home. Grounded in a behaviour change framework, the application provides feedback and encouragement to continue exercising and to improve on previous results. The application also utilizes wearable wireless technologies in order to provide highly personalized feedback. The application can accurately detect if a specific exercise is being done, and count the associated number of repetitions, utilizing accelerometer or gyroscope signals. Machine learning models are employed to recognize individual local muscular endurance (LME) exercises, achieving overall accuracy of more than 98%. This technology allows near real-time personalized feedback to be provided, mimicking the feedback that the user might expect from an instructor. This is provided to motivate users to continue the recovery process.
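    A hedged sketch of the kind of processing described, recognizing the exercise from windowed accelerometer features and counting repetitions by peak detection, is given below; the window statistics, sampling rate and peak-detection parameters are assumptions, not the application's actual models.

        # Illustrative exercise-recognition features and repetition counting from accelerometer data.
        import numpy as np
        from scipy.signal import find_peaks

        def window_features(acc_xyz):
            """Simple per-axis statistics for one window of (N, 3) accelerometer samples."""
            return np.concatenate([acc_xyz.mean(axis=0), acc_xyz.std(axis=0),
                                   acc_xyz.min(axis=0), acc_xyz.max(axis=0)])

        def count_reps(acc_xyz, fs=50.0, min_period_s=1.0):
            """Count repetitions as prominent peaks in the smoothed acceleration magnitude."""
            mag = np.linalg.norm(acc_xyz, axis=1)
            win = max(int(fs // 4), 1)
            mag = np.convolve(mag - mag.mean(), np.ones(win) / win, mode="same")
            peaks, _ = find_peaks(mag, distance=int(fs * min_period_s), prominence=0.1)
            return len(peaks)

        # exercise = clf.predict([window_features(window)])[0]   # clf: a model trained on labelled LME exercise windows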