50 research outputs found

    SAFE: An EEG dataset for stable affective feature selection

    Get PDF
    An affective brain-computer interface (aBCI) is a direct communication pathway between the human brain and a computer, via which the computer tries to recognize the affective states of its user and respond accordingly. Because an aBCI introduces personal affective factors into human-computer interaction, it can potentially enrich the user's experience during the interaction. Successful emotion recognition plays a key role in such a system. State-of-the-art aBCIs leverage machine learning techniques that acquire affective electroencephalogram (EEG) signals from the user and calibrate a classifier to the user's affective patterns. Many studies have reported satisfactory recognition accuracy under this paradigm. However, affective neural patterns are volatile over time, even for the same subject, and recognition accuracy cannot be maintained over prolonged use of an aBCI without recalibration. Existing studies have overlooked the performance evaluation of aBCIs during long-term use. In this paper, we propose SAFE, an EEG dataset for stable affective feature selection. The dataset includes multiple recording sessions spanning several days for each subject, so that the long-term recognition performance of an aBCI can be evaluated. Based on this dataset, we demonstrate that the recognition accuracy of aBCIs deteriorates when recalibration is ruled out during long-term use. We then propose a stable feature selection method that chooses the most stable affective features, mitigating the accuracy deterioration and maximizing aBCI performance in the long run. We invite other researchers to test their aBCI algorithms on this dataset, and especially to evaluate the long-term performance of their methods.
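    The stable-feature-selection idea in the abstract can be sketched as ranking EEG features by how little their statistics drift across recording sessions. The criterion below (variance of per-session feature means) is an illustrative assumption for the sketch, not necessarily the paper's actual selection rule.

    ```python
    import numpy as np

    def stable_feature_selection(sessions, k):
        """Rank features by cross-session stability and keep the top k.

        sessions: list of arrays, each (n_trials, n_features), one per
        recording day. A feature is considered stable when its session-mean
        drifts little from day to day (a simple proxy criterion).
        """
        # Mean feature value within each session: (n_sessions, n_features)
        session_means = np.array([s.mean(axis=0) for s in sessions])
        # Lower variance of the session means = less drift = more stable
        drift = session_means.var(axis=0)
        return np.argsort(drift)[:k]  # indices of the k most stable features
    ```

    A classifier trained only on the selected feature indices would then, under this criterion, be less sensitive to day-to-day drift in the EEG signal.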

    PaLM 2 Technical Report

    Full text link
    We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor, PaLM. PaLM 2 is a Transformer-based model trained using a mixture of objectives. Through extensive evaluations on English, multilingual, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference than PaLM. This improved efficiency enables broader deployment while also allowing the model to respond faster, for a more natural pace of interaction. PaLM 2 demonstrates robust reasoning capabilities, exemplified by large improvements over PaLM on BIG-Bench and other reasoning tasks. PaLM 2 exhibits stable performance on a suite of responsible AI evaluations, and enables inference-time control over toxicity without additional overhead or impact on other capabilities. Overall, PaLM 2 achieves state-of-the-art performance across a diverse set of tasks and capabilities. When discussing the PaLM 2 family, it is important to distinguish between pre-trained models (of various sizes), fine-tuned variants of these models, and the user-facing products that use these models. In particular, user-facing products typically include additional pre- and post-processing steps. Additionally, the underlying models may evolve over time. Therefore, one should not expect the performance of user-facing products to exactly match the results reported in this report.

    Measurement of jet fragmentation in Pb+Pb and pp collisions at √s_NN = 2.76 TeV with the ATLAS detector at the LHC


    EEG-based emotion recognition using machine learning techniques

    No full text
    Electroencephalography (EEG)-based emotion recognition attempts to detect the affective states of humans directly from spontaneous EEG signals, bypassing the peripheral nervous system. In this thesis, we explore various machine learning techniques for EEG-based emotion recognition, focusing on three research gaps: (1) stable feature selection for recalibration-less affective brain-computer interfaces; (2) cross-subject transfer learning for calibration-less affective brain-computer interfaces; and (3) unsupervised feature learning for affective brain-computer interfaces. We propose several novel methods to address these gaps and validate them by experiments. Extensive comparisons between our methods and other existing methods justify their advantages. (Doctor of Philosophy thesis)

    Forecast horizon of multi-item dynamic lot size model with perishable inventory.

    No full text
    This paper studies a multi-item dynamic lot size problem for perishable products, where stock deterioration rates and inventory costs are age-dependent. We explore structural properties of an optimal solution under two cost structures and develop a dynamic programming algorithm that solves the problem in polynomial time when the number of products is fixed. We establish forecast horizon results that can help the operations manager decide the precise forecast horizon in a rolling decision-making process. Finally, based on a detailed test bed of instances, we obtain useful managerial insights on the impact of the deterioration rate and product lifetime on the length of the forecast horizon.
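    The multi-item perishable model described above generalizes the classical single-item dynamic lot size recursion (Wagner-Whitin). A minimal sketch of that base case, without perishability, multiple items, or age-dependent costs, is:

    ```python
    def wagner_whitin(demand, setup_cost, holding_cost):
        """Single-item dynamic lot sizing via the Wagner-Whitin DP, O(T^2).

        demand[t] is the demand of period t; each order incurs setup_cost,
        and carrying one unit for one period costs holding_cost.
        """
        T = len(demand)
        INF = float("inf")
        best = [0.0] + [INF] * T  # best[t] = min cost to cover periods 0..t-1
        for t in range(1, T + 1):
            for s in range(t):  # last order placed in period s covers s..t-1
                # Cost of carrying the demand of periods s..t-1 from period s
                hold = sum(holding_cost * (j - s) * demand[j] for j in range(s, t))
                best[t] = min(best[t], best[s] + setup_cost + hold)
        return best[T]
    ```

    With demand [10, 10], setup cost 5, and unit holding cost 1, ordering in each period (cost 10) beats a single order (cost 15); raising the setup cost to 20 flips the decision. The paper's algorithm additionally tracks inventory age, which this sketch omits.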

    Median forecast horizon as a function of lifetime and joint setup cost.


    Median forecast horizon as a function of lifetime and demand variability.


    Median forecast horizon as a function of grade and joint setup cost.


    The grades of deterioration rate.


    Median forecast horizon as a function of grade and demand growth.
