
    Human motion reconstruction using wearable accelerometers

    We address the problem of capturing human motion in scenarios where a traditional optical motion capture system is impractical. Such scenarios are relatively common: large spaces, outdoor settings and competitive sporting events all expose the limitations of such systems, namely the small physical area in which motion capture can be performed and the lack of robustness to lighting changes and occlusions. In this paper, we advocate the use of body-worn wireless accelerometers for reconstructing human motion, and to this end we outline a system that is more portable than traditional optical motion capture systems whilst producing naturalistic motion. Additionally, if information on the person's root position is available, an extended version of our algorithm can use it to correct positional drift.
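    The abstract does not detail the drift-correction step; a minimal sketch of one plausible approach (double integration of acceleration, then a hypothetical linear redistribution of the end-point error using known root positions) might look like:

    ```python
    import numpy as np

    def integrate_with_drift_correction(accel, dt, root_start, root_end):
        """Double-integrate acceleration to position, then remove linear drift
        so the trajectory matches the known start and end root positions.
        Illustrative sketch only; not the paper's actual algorithm."""
        vel = np.cumsum(accel * dt, axis=0)              # first integration: velocity
        pos = root_start + np.cumsum(vel * dt, axis=0)   # second integration: position
        n = len(pos)
        error = pos[-1] - root_end                       # accumulated positional drift
        # Distribute the end-point error linearly over the trajectory
        correction = np.outer(np.arange(1, n + 1) / n, error)
        return pos - correction
    ```

    The linear redistribution guarantees the corrected trajectory ends exactly at the supplied root position while leaving the start point untouched.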

    Detector adaptation by maximising agreement between independent data sources

    Traditional methods for creating classifiers have two main disadvantages. Firstly, it is time-consuming to acquire, or manually annotate, the training collection. Secondly, the data on which the classifier is trained may be over-generalised or too specific. This paper presents our investigations into overcoming both of these drawbacks simultaneously, through example applications in which two data sources train each other. This removes the need for supervised annotation or feedback, and allows rapid adaptation of the classifier to different data. Two applications are presented: one using thermal infrared and visual imagery to robustly learn changing skin models, and another using changes in saturation and luminance to learn shadow appearance parameters.
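    One direction of the mutual-training idea in the skin-model application can be sketched as follows: confident detections from the thermal modality (pixels above a body-temperature threshold) are used to adapt a visual skin-colour histogram. The threshold, bin count and learning rate here are assumptions for illustration, not the paper's parameters:

    ```python
    import numpy as np

    def update_skin_model(thermal, colour, hist, temp_thresh=35.0, bins=8, lr=0.1):
        """Adapt a hue histogram (skin model) using pixels the thermal
        channel confidently labels as skin. thermal: (H, W) temperatures;
        colour: (H, W) hue values in [0, 1); hist: normalised histogram."""
        mask = thermal > temp_thresh          # confident skin pixels from thermal
        if not mask.any():
            return hist                       # nothing confident; keep old model
        new_hist, _ = np.histogram(colour[mask], bins=bins, range=(0.0, 1.0))
        new_hist = new_hist / new_hist.sum()
        # Exponential adaptation keeps the model responsive to appearance change
        return (1 - lr) * hist + lr * new_hist
    ```

    Running the symmetric update in the other direction (visual detections refining a thermal model) closes the loop so the two sources train each other without manual annotation.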

    Comparison of fusion methods for thermo-visual surveillance tracking

    In this paper, we evaluate the appearance tracking performance of multiple fusion schemes that combine information from standard CCTV and thermal infrared spectrum video for the tracking of surveillance objects, such as people, faces, bicycles and vehicles. We show results on numerous real-world multimodal surveillance sequences, tracking challenging objects whose appearance changes rapidly. Based on these results we can determine the most promising fusion scheme.
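    Two of the simplest fusion schemes such a comparison might include can be sketched as follows; the scheme names and the per-pixel likelihood-map formulation are illustrative assumptions, not the paper's taxonomy:

    ```python
    import numpy as np

    def fuse_likelihoods(vis, ir, alpha=0.5, scheme="weighted"):
        """Fuse per-pixel appearance likelihood maps from visible (vis)
        and thermal (ir) imagery. Hypothetical sketch of two common schemes."""
        if scheme == "weighted":      # convex combination of the two modalities
            return alpha * vis + (1 - alpha) * ir
        if scheme == "product":       # product fusion, assuming independence
            return vis * ir
        raise ValueError(f"unknown scheme: {scheme}")
    ```

    Weighted fusion degrades gracefully when one modality is noisy, while product fusion sharply suppresses regions that either modality rejects; evaluating such trade-offs on real sequences is the point of the comparison.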

    Automatic camera selection for activity monitoring in a multi-camera system for tennis

    In professional tennis training matches, the coach needs to be able to view play from the most appropriate angle in order to monitor players' activities. In this paper, we describe and evaluate a system for automatic camera selection from a network of synchronised cameras within a tennis sporting arena. This work combines synchronised video streams from multiple cameras into a single summary video suitable for critical review by both tennis players and coaches. Using an overhead camera view, our system automatically determines the 2D tennis-court calibration, resulting in a mapping that relates a player's position in the overhead camera to their position and size in another camera view in the network. This allows the system to determine the appearance of a player in each of the other cameras and thereby choose the best view for each player via a novel technique. The video summaries are evaluated in end-user studies and shown to provide an efficient means of multi-stream visualisation for tennis player activity monitoring.
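    The overhead-to-side-camera mapping described above is typically expressed as a planar homography. A minimal sketch of applying such a mapping (the calibration matrix itself is assumed given; estimating it from the court markings is the part the system automates):

    ```python
    import numpy as np

    def project_point(H, xy):
        """Map a player's ground-plane position in the overhead view through
        a 3x3 homography H into another camera's image plane, using
        homogeneous coordinates."""
        x, y = xy
        p = H @ np.array([x, y, 1.0])   # lift to homogeneous, apply mapping
        return p[:2] / p[2]             # perspective divide back to 2D
    ```

    Given per-camera homographies, the system can evaluate the projected position and scale of each player in every view and pick the camera that renders them best.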

    Combining inertial and visual sensing for human action recognition in tennis

    In this paper, we present a framework for both the automatic extraction of the temporal location of tennis strokes within a match and the subsequent classification of each stroke as a serve, forehand or backhand. We employ low-cost visual sensing and low-cost inertial sensing to achieve these aims, whereby a single modality can be used, or a fusion of both classification strategies can be adopted if both modalities are available in a given capture scenario. This flexibility allows the framework to be applied across a variety of user scenarios and hardware infrastructures. Our proposed approach is quantitatively evaluated using data captured from elite tennis players. Results show highly accurate performance irrespective of input modality configuration.
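    The single-or-fused modality flexibility described above suggests a late-fusion scheme over per-stroke class probabilities. A hedged sketch (the fusion weight and probability-vector interface are assumptions, not the paper's design):

    ```python
    import numpy as np

    STROKES = ["serve", "forehand", "backhand"]

    def classify_stroke(p_visual=None, p_inertial=None, w=0.5):
        """Late fusion of stroke-class probability vectors from the visual
        and inertial modalities; either may be absent, in which case the
        remaining modality is used alone."""
        if p_visual is None and p_inertial is None:
            raise ValueError("need at least one modality")
        if p_visual is None:
            fused = np.asarray(p_inertial, dtype=float)
        elif p_inertial is None:
            fused = np.asarray(p_visual, dtype=float)
        else:
            # Weighted combination of the two classifiers' outputs
            fused = w * np.asarray(p_visual) + (1 - w) * np.asarray(p_inertial)
        return STROKES[int(np.argmax(fused))]
    ```

    Because fusion happens at the decision level, each modality's classifier can be trained and deployed independently, which is what makes the framework portable across hardware configurations.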

    Dublin City University at TRECVID 2008

    In this paper we describe our system and experiments performed for both the automatic search task and the event detection task in TRECVid 2008. For the automatic search task for 2008 we submitted 3 runs utilising only visual retrieval experts, continuing our previous work in examining techniques for query-time weight generation for data fusion and determining what we can get from global visual-only experts. For the event detection task we submitted results for 5 required events (ElevatorNoEntry, OpposingFlow, PeopleMeet, Embrace and PersonRuns) and 1 optional event (DoorOpenClose).

    Image processing for smart browsing of ocean colour data products and subsequent incorporation into a multi-modal sensing framework

    Ocean colour is defined as the water hue due to the presence of tiny plants containing the pigment chlorophyll, sediments and coloured dissolved organic material, and so water colour can provide valuable information on coastal ecosystems. The ‘Ocean Colour project’ collects data from various satellites (e.g. MERIS, MODIS) and makes this data available online. One method of searching the Ocean Colour project data is to visually browse level 1 and level 2 data. Users can search via location (regions), time and data type. They are presented with images covering chlorophyll, quasi-true colour and sea surface temperature (11 μm), and links to the source data. However, it is often preferable for users to search such a complex and large dataset by event, and to analyse the distribution of colour in an image before examining the source data. This will allow users to browse and search ocean colour data more efficiently and to include this information more seamlessly into a framework that incorporates sensor information from a variety of modalities. This paper presents a system for more efficient management and analysis of ocean colour data and suggests how this information can be incorporated into a multi-modal sensing framework for a smarter, more adaptive environmental sensor network.
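    One plausible descriptor for browsing the archive by colour distribution before opening the source data is a normalised histogram over a chlorophyll image; the bin count and value range here are illustrative assumptions:

    ```python
    import numpy as np

    def colour_descriptor(chlorophyll, bins=16):
        """Summarise an ocean-colour image as a normalised histogram of
        (scaled) chlorophyll concentration, a compact descriptor that can
        be indexed and compared without touching the full source data."""
        hist, _ = np.histogram(chlorophyll, bins=bins, range=(0.0, 1.0))
        return hist / hist.sum()
    ```

    Comparing such descriptors (e.g. by histogram distance) between dates would be one way to surface "events", sudden shifts in colour distribution, during smart browsing.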

    River water-level estimation using visual sensing

    This paper reports our initial work on the extraction of environmental information from images sampled from a camera deployed to monitor a river environment. It demonstrates very promising results for the use of a visual sensor in a smart multi-modal sensor network.
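    The abstract does not describe the estimation method; one simple sketch of visual water-level sensing is to locate the waterline along a calibrated image column (e.g. over a staff gauge) as the row of strongest vertical intensity change. This is a hypothetical illustration, not the paper's method:

    ```python
    import numpy as np

    def estimate_waterline_row(column):
        """Return the row index of the waterline in a single image column,
        taken as the position of the largest vertical intensity step."""
        grad = np.abs(np.diff(column.astype(float)))  # vertical gradient magnitude
        return int(np.argmax(grad)) + 1               # row just below the edge
    ```

    Mapping the detected row back to a physical water level then only requires the camera-to-gauge calibration.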