9 research outputs found

    Moving Object Detection and Tracking in Open-Air Test Bed

    In mobile and ubiquitous computing environments, acquiring contextual information about a user's situation is necessary to provide useful services. Although the definition of user context may change according to the situation or the service used, contextual information about who, where, and when is considered essential. We have built a test bed with multiple sensors (floor pressure sensors, RFID (radio frequency identification) tag systems, and cameras) to carry out experiments on detecting the positions of users and tracking their movement. The conventional camera-based background subtraction method was used for moving object detection and tracking. In this paper, we propose knowledge application and parameter adaptation in the background subtraction method. Results are presented showing that the proposed method decreases the detection errors.
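
    A minimal sketch of the conventional background subtraction baseline referred to above may clarify the setting. The running-average background model, the update rate, and the fixed threshold are illustrative assumptions; the abstract does not detail the knowledge application or the parameter adaptation proposed in the paper.

    import numpy as np

    ALPHA = 0.05   # background update rate (assumed)
    THRESH = 30.0  # foreground threshold in gray levels (assumed)

    def update_background(background, frame, alpha=ALPHA):
        """Exponential running average of grayscale frames."""
        return (1.0 - alpha) * background + alpha * frame

    def foreground_mask(background, frame, thresh=THRESH):
        """Mark pixels whose difference from the background exceeds the threshold."""
        return np.abs(frame.astype(np.float64) - background) > thresh

    # Usage: feed grayscale frames (2-D uint8 arrays) one by one.
    # background = frames[0].astype(np.float64)
    # for frame in frames[1:]:
    #     mask = foreground_mask(background, frame)
    #     background = update_background(background, frame)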

    Sign-language synthesis for mobile environments

    This paper describes the synthesis of sign-language animation for mobile environments. Sign language is synthesized using either the motion-capture or the motion-primitive method. An editing system can add facial expressions, mouth shapes, and gestures to the sign-language CG animation. The sign-language animation is displayed on PDA screens to inform the user in his or her mobile environment.
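
    A minimal sketch of the motion-primitive style of synthesis mentioned above, assuming a dictionary of pre-stored motion clips per word with facial-expression and mouth-shape overlays; all names here (MotionClip, the primitive dictionary, the overlay fields) are hypothetical and not taken from the paper.

    from dataclasses import dataclass, replace

    @dataclass
    class MotionClip:
        word: str
        frames: list           # e.g. joint-angle keyframes (hypothetical representation)
        face: str = "neutral"  # facial-expression label
        mouth: str = ""        # mouth-shape label

    def synthesize(words, primitives, face_overrides=None):
        """Concatenate per-word motion primitives, applying facial-expression overrides."""
        face_overrides = face_overrides or {}
        return [replace(primitives[w], face=face_overrides.get(w, primitives[w].face))
                for w in words]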

    Efficient coding and resonance spike identification for topside ionogram processing

    This paper describes an effective coding method for eliminating the redundancy contained in digital ionograms and an algorithm for identifying the resonance spikes that appear on topside ionograms. This work is a first step toward automatic profile reduction for obtaining ionospheric electron density profiles. Topside sounder data recorded by the ISIS-2 satellite are digitized and converted to digital ionograms. A quantitative comparison of data compression techniques based on run-length and predictive coding leads to the conclusion that the modified run-length coding method is the most effective and useful from a practical viewpoint. The simulation experiment results in self-consistent determination of the characteristic frequencies with good accuracy, except for ionograms with obscure resonance spikes.
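
    A minimal sketch of plain run-length coding applied to one binarized ionogram scan line illustrates the class of compression methods compared; the specific "modified" run-length variant found most effective is not described in the abstract and is not reproduced here.

    def run_length_encode(bits):
        """Encode a sequence of 0/1 samples as (value, run_length) pairs."""
        if not bits:
            return []
        runs = []
        prev, count = bits[0], 1
        for b in bits[1:]:
            if b == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = b, 1
        runs.append((prev, count))
        return runs

    def run_length_decode(runs):
        """Invert run_length_encode."""
        return [value for value, count in runs for _ in range(count)]

    # Example: a sparse echo trace compresses well.
    line = [0] * 12 + [1] * 3 + [0] * 20
    assert run_length_decode(run_length_encode(line)) == line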

    Spectral characteristics of radio noise at low and medium frequencies in the Antarctic topside ionosphere

    The ISIS topside sounder data obtained at Syowa Station, Antarctica, for the period from April 1976 to November 1977 are examined with emphasis on the noise spectra appearing in the Automatic Gain Control (AGC) data and on the ionograms. Noise events were observed on 16 out of 88 ISIS-1 passes and on 8 out of 138 ISIS-2 passes. At high altitudes near the ISIS-1 apogee, almost all of the noise events are due to auroral kilometric radiation (AKR). A special event of AKR observed in the dayside ionosphere is investigated in detail. The result shows that this cusp-associated AKR occurred in a large-scale region of electron density depletion where the ratio of the electron plasma frequency f_N to the electron gyrofrequency f_H ranges from 0.1 to 0.05. At altitudes below 2900 km, two types of noise were observed: whistler-mode noise and a noise band appearing between the local f_N and f_T (the upper hybrid resonance frequency). These noises are examined in connection with the local characteristic frequencies. The dependence of the noise intensities on the relationship between f_N and f_H is found to be in qualitative agreement with Maggs' power-flux calculation of electrostatic noise using plausible auroral electron beam models.
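
    The discussion above is organized around the local characteristic frequencies, so a small helper for computing them may be useful; the formulas and physical constants are standard, while the sample inputs are illustrative only and not taken from the paper.

    import math

    E = 1.602176634e-19      # elementary charge [C]
    ME = 9.1093837015e-31    # electron mass [kg]
    EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

    def plasma_frequency(n_e):
        """Electron plasma frequency f_N [Hz] for electron density n_e [m^-3]."""
        return math.sqrt(n_e * E**2 / (EPS0 * ME)) / (2.0 * math.pi)

    def gyrofrequency(b):
        """Electron gyrofrequency f_H [Hz] for magnetic field strength b [T]."""
        return E * b / (2.0 * math.pi * ME)

    def upper_hybrid_frequency(n_e, b):
        """Upper hybrid resonance frequency f_T = sqrt(f_N^2 + f_H^2) [Hz]."""
        return math.hypot(plasma_frequency(n_e), gyrofrequency(b))

    # Example with illustrative values only:
    # print(plasma_frequency(1e8) / gyrofrequency(1.6e-5))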

    Recognition of Local Features for Camera-based Sign Language Recognition System

    15th International Conference on Pattern Recognition, 3-7 Sept. 2000, Spain. A sign language recognition system needs to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. In this paper, we present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters obtained by a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance are classified into the same cluster in an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases.
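
    A minimal sketch of the clustering idea described above, assuming a PCA-based eigenspace and k-means clustering via scikit-learn; the dimensionality, the number of clusters, and the library choice are assumptions rather than the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def build_symbol_model(hand_images, n_components=20, n_clusters=64):
        """Fit an eigenspace and clusters from training hand images (N x H x W)."""
        x = hand_images.reshape(len(hand_images), -1).astype(np.float64)
        pca = PCA(n_components=n_components).fit(x)
        kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(pca.transform(x))
        return pca, kmeans

    def to_symbols(hand_images, pca, kmeans):
        """Map extracted hand images to cluster indices (the local-feature symbols)."""
        x = hand_images.reshape(len(hand_images), -1).astype(np.float64)
        return kmeans.predict(pca.transform(x))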

    Appearance-based Recognition of Hand Shapes for Sign Language in Low Resolution Image

    4th Asian Conference on Computer Vision, ACCV2000, January 8-11, 2000, Taiwan. A sign language recognition system needs to use information from both global features, such as hand movements and locations, and local features, such as hand shapes and orientations. We propose a system that acquires images of the body of a person performing sign language, selects possible words by detecting global features, and then narrows the choices down to one by using local features detected from the extracted low-resolution hand images. In this paper, we present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent a set of local features with a cluster, and a preliminary experiment was performed to verify this recognition method. The experimental result indicates that our proposed method is suitable for the sign language recognition system.
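
    A minimal sketch of the two-stage word selection described above: global features first select candidate words, and local-feature symbols then narrow the candidates down to one. The lexicon structure and the scoring functions are hypothetical placeholders, since the abstract does not specify them.

    def recognize_word(global_feats, local_symbols, lexicon,
                       global_score, local_score, n_candidates=5):
        """lexicon: {word: model}; the score functions return higher values for better matches."""
        # Stage 1: keep the words whose global-feature models match best.
        ranked = sorted(lexicon, key=lambda w: global_score(global_feats, lexicon[w]),
                        reverse=True)
        candidates = ranked[:n_candidates]
        # Stage 2: pick the candidate whose local-feature model matches best.
        return max(candidates, key=lambda w: local_score(local_symbols, lexicon[w]))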

    Recognition of Local Features for Camera-based Sign-Language Recognition System

    Special Section: Human Interface. A sign-language recognition system should use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We designed a system that first selects possible words by using the detected global features, then narrows the choices down to one by using the detected local features. In this paper, we describe an adequate local feature recognizer for a sign-language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters by using a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance can be classified into the same cluster in an eigenspace. Experimental results showed that our system can recognize a signed word even in two-handed and hand-to-hand contact cases.

    Recognition of Local Features of Sign Language Considering Differences in Appearance in Camera-Based Sign Language Recognition (カメラを用いた手話認識における見えの違いを考慮した手話の局所特徴認識)

    A sign-language recognition system should use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We designed a system that first selects possible words by using the detected global features, then narrows the choices down to one by using the detected local features. In this paper, we describe an adequate local feature recognizer for a sign-language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols corresponding to clusters by using a clustering technique. The clusters are created from a training set of extracted hand images so that images with a similar appearance can be classified into the same cluster in an eigenspace. Experimental results showed that our system can recognize a signed word even in two-handed and hand-to-hand contact cases.