
    Residual Inter-Contact Time for Opportunistic Networks with Pareto Inter-Contact Time: Two Nodes Case

    PDPTA'15: The 21st International Conference on Parallel and Distributed Processing Techniques and Applications, Jul 27-30, 2015, Las Vegas, NV, USA

    Opportunistic networks (OppNets) are appealing for many applications, such as wildlife monitoring, disaster relief, and mobile data offloading. In such a network, a message arriving at a mobile node can be transmitted to another mobile node when the two opportunistically move into each other's transmission range (called being in contact); after several similar multi-hop transmissions, the message finally reaches its destination. Therefore, for a given message, the interval from its arrival at a mobile node until that node next contacts another node constitutes an essential part of the message's overall delay. Studying the stochastic properties of this interval between two nodes thus lays a solid foundation for evaluating the overall message delay in OppNets. Note that this interval lies within the interval between two consecutive node contacts (the inter-contact time) and is therefore referred to as the residual inter-contact time. In this paper, we derive the closed-form distribution of the residual inter-contact time. First, we formulate the contact process of a pair of mobile nodes as a renewal process in which the inter-contact time follows the popular Pareto distribution. Then, based on renewal theory, we derive closed-form results for the transient distribution of the residual inter-contact time as well as its limiting distribution. Our theoretical results on the distribution of the residual inter-contact time are validated by simulations.
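    As a rough illustration of the renewal-process setup described in the abstract, the sketch below simulates contact epochs with Pareto-distributed inter-contact times and samples the residual time to the next contact at uniformly random observation instants. The function names and parameters are illustrative, not from the paper; the closing comment states the standard limiting mean residual for a renewal process, which holds for Pareto shape α > 2.

    ```python
    import bisect
    import random

    def pareto_sample(alpha, xm, rng):
        # Inverse-CDF sampling for Pareto(alpha, xm): F(x) = 1 - (xm / x)^alpha
        return xm / rng.random() ** (1.0 / alpha)

    def residual_samples(alpha, xm, n_cycles, n_obs, seed=0):
        rng = random.Random(seed)
        # Build one long renewal sample path of contact epochs.
        times, t = [], 0.0
        for _ in range(n_cycles):
            t += pareto_sample(alpha, xm, rng)
            times.append(t)
        total = times[-1]
        # Observe at uniformly random instants; record the time until the
        # next contact (the residual inter-contact time).
        out = []
        for _ in range(n_obs):
            u = rng.random() * total
            i = bisect.bisect_right(times, u)
            out.append(times[i] - u)
        return out

    # Limiting mean residual for inter-contact time T ~ Pareto(alpha, xm), alpha > 2:
    # E[R] = E[T^2] / (2 E[T]) = xm * (alpha - 1) / (2 * (alpha - 2))
    ```

    For α = 4 and xm = 1 the limiting mean residual is 0.75, noticeably larger than half the mean inter-contact time of 4/3, reflecting the inspection paradox for heavy-tailed renewals.
    
    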

    INDIVIDUAL DIFFERENCES IN BRAIN ACTIVITIES WHEN HUMAN WISHES TO LISTEN TO MUSIC CONTINUOUSLY USING NEAR-INFRARED SPECTROSCOPY

    This paper investigates individual differences in prefrontal cortex activity when a person wishes to listen to music, measured using near-infrared spectroscopy. The individual differences are confirmed by visualizing variations in the oxygenated hemoglobin level, with sensing positions located around the prefrontal cortex. The existence of individual differences was verified experimentally. The results show that the positions that become active while a subject feels a wish to listen to music differ between subjects, and that each subject's oxygenated hemoglobin level differs from its value when the subject does not feel such a wish. The results also show that it is possible to detect a wish to listen to music from changes in the oxygenated hemoglobin level. Furthermore, they suggest that the active positions differ between subjects because sensitivity and responses to the stimulus differ, and that these individual differences can be expressed as differences in active positions.

    EEG Analysis Method to Detect Unspoken Answers to Questions Using MSNNs

    Brain–computer interfaces (BCI) facilitate communication between the human brain and computational systems, additionally offering mechanisms for environmental control to enhance human life. The current study focused on the application of BCI to communication support, especially detecting unspoken answers to questions. Utilizing a multistage neural network (MSNN) with convolutional and pooling layers, the proposed method comprises a threefold approach: electroencephalogram (EEG) measurement, EEG feature extraction, and answer classification. The EEG signals of the participants are captured as they mentally respond with “yes” or “no” to the posed questions. Feature extraction is achieved through an MSNN composed of three distinct convolutional neural network models. The first model discriminates between EEG signals with and without discernible noise artifacts, whereas the subsequent two models are designated for feature extraction from EEG signals with or without such noise artifacts. Furthermore, a support vector machine is employed to classify the answers to the questions. The proposed method was validated via experiments using authentic EEG data. The mean and standard deviation of the proposed method's sensitivity and precision were 99.6% and 0.2%, respectively. These findings demonstrate the viability of attaining high accuracy in a BCI by preliminarily segregating the EEG signals based on the presence or absence of artifact noise, and they underscore the stability of such classification. Thus, the proposed method manifests the prospective advantage of separating EEG signals characterized by noise artifacts for enhanced BCI performance.
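    The staged structure described in the abstract (artifact detection, then one of two specialized feature extractors, then an SVM) can be sketched as a routing function. This is a minimal sketch of the control flow only: the four callables are hypothetical placeholders for trained models, not APIs from the paper.

    ```python
    def msnn_classify(eeg, noise_detector, feat_clean, feat_noisy, svm):
        """Classify one EEG epoch as an unspoken 'yes' or 'no'.

        noise_detector: stage-1 model, True if the epoch contains artifact noise.
        feat_clean / feat_noisy: stage-2 extractors trained on clean/noisy epochs.
        svm: final classifier mapping a feature vector to an answer label.
        """
        if noise_detector(eeg):
            feats = feat_noisy(eeg)   # route noisy epochs to the noisy-trained extractor
        else:
            feats = feat_clean(eeg)   # route clean epochs to the clean-trained extractor
        return svm(feats)             # final yes/no answer classification
    ```

    The point of the design, per the abstract, is that each stage-2 extractor sees only the kind of data it was trained on, which is what the routing above makes explicit.
    
    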

    Japanese sign language classification based on gathered images and neural networks

    This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with neural networks containing convolutional and pooling layers (CNNs). Gathered-image generation produces images based on mean images: for each block, the difference between the mean image and the JSL motion images is calculated, and the gathered image is assembled from the blocks having the maximum difference values. CNNs extract features from the gathered images, while a support vector machine for multi-class classification and a multilayer perceptron are employed to classify 20 JSL words. In experiments, the proposed method achieved a mean recognition accuracy of 94.1%. These results suggest that the proposed method can obtain sufficient information to classify the sample words.
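    The abstract only outlines the block-selection step, so the sketch below is one plausible reading of it, not the paper's implementation: for each block position, pick, from across the motion frames, the block whose summed absolute difference from the mean image is largest, and tile those blocks into the gathered image.

    ```python
    import numpy as np

    def gathered_image(frames, block=8):
        """Assemble a 'gathered image' from (T, H, W) grayscale motion frames.

        Assumes H and W are divisible by `block`. For each block position,
        the frame whose block differs most from the mean image is kept.
        """
        frames = np.asarray(frames, dtype=np.float64)
        mean_img = frames.mean(axis=0)
        _, H, W = frames.shape
        out = np.empty((H, W))
        for y in range(0, H, block):
            for x in range(0, W, block):
                blocks = frames[:, y:y + block, x:x + block]
                ref = mean_img[y:y + block, x:x + block]
                # Per-frame summed absolute difference from the mean image.
                diff = np.abs(blocks - ref).sum(axis=(1, 2))
                out[y:y + block, x:x + block] = blocks[diff.argmax()]
        return out
    ```

    The resulting single image condenses the most motion-distinctive regions of the sequence, which is what the CNN feature extractor then consumes.
    
    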

    Development of Eye Mouse Using EOG signals and Learning Vector Quantization Method

    Recognition of eye motions has attracted increasing attention from researchers worldwide in recent years. Compared with other body movements, eye motion is responsive and requires little physical strength. In particular, for patients with severe physical disabilities, eye motion may be the last spontaneous motion with which they can respond. To provide an efficient means of communication for patients, such as those with ALS (amyotrophic lateral sclerosis), who cannot move any muscles except those of the eyes, this paper proposes a system that uses EOG signals and the Learning Vector Quantization algorithm to recognize eye motions. Based on the recognition results, an API (application programming interface) is used to control cursor movements. This system could serve as a means of communication to help ALS patients.
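    For readers unfamiliar with Learning Vector Quantization, the sketch below implements the basic LVQ1 update rule on plain Python lists: the nearest prototype is pulled toward a training sample when their labels match and pushed away otherwise. This is a generic textbook LVQ1, shown only under the assumption that the paper uses a standard variant; the EOG feature extraction itself is not modeled here.

    ```python
    import random

    def train_lvq1(samples, labels, prototypes, proto_labels, lr=0.1, epochs=30, seed=0):
        rng = random.Random(seed)
        protos = [list(p) for p in prototypes]
        idx = list(range(len(samples)))
        for _ in range(epochs):
            rng.shuffle(idx)
            for i in idx:
                x, y = samples[i], labels[i]
                # Find the winning prototype (squared Euclidean distance).
                w = min(range(len(protos)),
                        key=lambda k: sum((a - b) ** 2 for a, b in zip(protos[k], x)))
                # LVQ1 rule: attract on label match, repel on mismatch.
                sign = 1.0 if proto_labels[w] == y else -1.0
                protos[w] = [p + sign * lr * (a - p) for p, a in zip(protos[w], x)]
        return protos

    def classify(x, protos, proto_labels):
        w = min(range(len(protos)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(protos[k], x)))
        return proto_labels[w]
    ```

    In an eye-mouse setting, each class label would correspond to an eye motion (e.g. up, down, left, right), and the predicted label would be mapped to a cursor command through the operating system's API.
    
    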

    Japanese Janken Recognition by Support Vector Machine Based on Electromyogram of Wrist

    In this paper, we propose a method that can discriminate hand motions. We measure the electromyogram (EMG) of the wrist using eight dry-type sensors. We focus on four motions: "Rock", "Scissors", "Paper", and "Neutral", where "Neutral" is a state in which the subject does nothing. In the proposed method, we apply a fast Fourier transform (FFT) to the measured EMG data and then remove the power-line hum noise. Next, we combine the sensor values based on a Gaussian function with variance 0.2 and mean 0. We then normalize the values by a linear transformation, rescaling them into the range from -1 to 1. Finally, a support vector machine (SVM) performs learning and discrimination to classify the motions. We conducted experiments with seven subjects; the average discrimination accuracy was 89.8%, compared with 77.1% for the previous method. Therefore, the proposed method is more accurate than the previous method. In future work, we will conduct an experiment discriminating the Japanese Janken of a subject whose data were not used for training.
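    The preprocessing chain in the abstract (FFT, hum removal, Gaussian-weighted sensor combination, linear rescaling to [-1, 1]) can be sketched as below. The sampling rate, hum frequency, notch width, and the way the Gaussian weight is spread across the eight sensors are all our assumptions for illustration; the paper specifies only the Gaussian's variance (0.2) and mean (0).

    ```python
    import numpy as np

    def preprocess_emg(emg, fs=200.0, hum=50.0, band=2.0):
        """Turn raw EMG (n_sensors, n_samples) into one SVM-ready feature vector."""
        emg = np.asarray(emg, dtype=np.float64)
        spec = np.fft.rfft(emg, axis=1)
        freqs = np.fft.rfftfreq(emg.shape[1], d=1.0 / fs)
        # Zero out the power-line hum and its neighborhood (assumed 50 Hz +/- 2 Hz).
        spec[:, np.abs(freqs - hum) < band] = 0.0
        amps = np.abs(spec)
        # Gaussian weighting across sensors: mean 0, variance 0.2 as in the paper;
        # placing the sensors on a [-1, 1] axis is our assumption.
        pos = np.linspace(-1.0, 1.0, emg.shape[0])[:, None]
        w = np.exp(-pos ** 2 / (2.0 * 0.2))
        feat = (amps * w).sum(axis=0)
        # Linear rescale of the combined spectrum into [-1, 1] for the SVM.
        lo, hi = feat.min(), feat.max()
        return 2.0 * (feat - lo) / (hi - lo) - 1.0
    ```

    The resulting vector would then be fed to a multi-class SVM (e.g. one-vs-one over the four motion classes) for training and discrimination.
    
    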