From M-ary Query to Bit Query: a new strategy for efficient large-scale RFID identification
Tag collision avoidance is viewed as one of the most important research problems in RFID communications, and bit-tracking technology has been widely embedded in query tree (QT) based algorithms to tackle this challenge. Existing solutions leave room to greatly improve reading performance because collision queries and empty queries are not fully exploited. In this paper, a bit query (BQ) strategy based M-ary query tree protocol (BQMT) is presented, which can not only eliminate idle queries but also separate collided tags into many small subsets and make full use of the collided bits. To further optimize reading performance, a modified dual prefixes matching (MDPM) mechanism is presented that allows multiple tags to respond in the same slot, significantly reducing the number of queries. Theoretical analysis and simulations validate the effectiveness of the proposed BQMT and MDPM, which outperform existing QT-based algorithms. The BQMT and MDPM can also be combined into BQMDPM to improve reading performance in system efficiency, total identification time, communication complexity, and average energy cost.
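To make the baseline concrete, below is a minimal Python sketch of the plain query-tree identification with bit tracking that BQMT builds on (not the paper's BQMT itself): the reader observes the superposed tag responses bit by bit, sees each position as '0', '1', or a collision 'x', and branches at the first collided bit. Tag IDs and lengths are illustrative.

```python
# A minimal sketch (not the paper's BQMT) of a query-tree RFID reader with
# bit tracking: responses are superposed bitwise, so the reader observes
# each bit as '0', '1', or 'x' (collision) and extends the query prefix
# at the first collided bit, skipping the bits all responders agree on.
import random

def superpose(responses):
    """Manchester-style view of simultaneous replies: 'x' where tags disagree."""
    return ''.join(bits[0] if len(set(bits)) == 1 else 'x'
                   for bits in zip(*responses))

def query_tree_identify(tags, id_len=8):
    identified, queries, stack = [], 0, ['']
    while stack:
        prefix = stack.pop()
        queries += 1
        responders = [t for t in tags if t.startswith(prefix)]
        if not responders:                       # idle (empty) query
            continue
        if len(responders) == 1:                 # readable query: one tag replies
            identified.append(responders[0])
            continue
        # collision query: bit tracking reveals the first disagreeing bit,
        # so the reader branches there instead of one bit at a time
        tail = superpose([t[len(prefix):] for t in responders])
        k = tail.index('x')
        stack += [prefix + tail[:k] + '0', prefix + tail[:k] + '1']
    return identified, queries

tags = {''.join(random.choice('01') for _ in range(8)) for _ in range(20)}
ids, n = query_tree_identify(list(tags))
print(f"identified {len(ids)} tags in {n} queries")
```

BQMT improves on this baseline by splitting collided tags into more than two subsets per query, which is what removes the idle queries the binary tree still issues.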
TeGit: Generating High-Quality Instruction-Tuning Data with Text-Grounded Task Design
High-quality instruction-tuning data is critical to improving LLM capabilities. Existing data collection methods are limited by unrealistic manual labeling costs or by the hallucination inherent in relying solely on LLM generation. To address these problems, this paper presents a scalable method for automatically collecting high-quality instruction-tuning data by training language models to design tasks based on human-written texts. Intuitively, grounding task generation in human-written text helps the model suppress hallucination. Unlike instruction back-translation-based methods, which directly take the given text as a response, we require the model to generate the \textit{instruction}, \textit{input}, and \textit{output} simultaneously to filter out noise. The results of automated and manual evaluation experiments demonstrate the quality of our dataset.
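As a rough illustration of the text-grounded idea, here is a hedged Python sketch: a model (a hypothetical `llm` callable standing in for any chat-completion API) is asked to emit instruction, input, and output together for a human-written passage, and triples whose output drifts from the source text are dropped. The JSON schema and the token-overlap grounding check are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of text-grounded task design: generate (instruction, input, output)
# jointly from a passage, then keep only triples grounded in that passage.
import json

PROMPT = (
    "Given the passage below, design one task grounded in it.\n"
    "Return JSON with keys 'instruction', 'input', 'output'.\n\nPassage:\n{text}"
)

def grounded(output: str, text: str, threshold: float = 0.5) -> bool:
    """Crude grounding check: enough output tokens must occur in the source."""
    out_tokens = output.lower().split()
    src = set(text.lower().split())
    return bool(out_tokens) and \
        sum(t in src for t in out_tokens) / len(out_tokens) >= threshold

def collect(texts, llm):
    dataset = []
    for text in texts:
        try:
            triple = json.loads(llm(PROMPT.format(text=text)))
        except (json.JSONDecodeError, TypeError):
            continue                                   # drop malformed generations
        if isinstance(triple, dict) and \
           {"instruction", "input", "output"} <= triple.keys() and \
           grounded(str(triple["output"]), text):
            dataset.append(triple)
    return dataset
```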
Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition
This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods that combine the EEG and facial expression detections using a sum rule or a product rule. Twenty healthy subjects participated in two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, both higher than those of facial expression detection (74.38%) or EEG detection (66.88%). The combination of facial expressions and EEG information for emotion recognition compensates for their defects as single information sources.
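A minimal sketch of the two decision-level fusion rules the abstract describes: class posteriors from the facial-expression classifier and the EEG classifier are combined by a sum rule or a product rule, and the fused prediction is the argmax. The probability vectors below are illustrative, not from the study.

```python
# Decision-level fusion of two classifiers' posteriors by sum or product rule.
import numpy as np

EMOTIONS = ["happiness", "neutral", "sadness", "fear"]

def fuse(p_face: np.ndarray, p_eeg: np.ndarray, rule: str = "sum") -> str:
    if rule == "sum":
        scores = p_face + p_eeg          # sum rule
    else:
        scores = p_face * p_eeg          # product rule
    return EMOTIONS[int(np.argmax(scores))]

p_face = np.array([0.55, 0.25, 0.15, 0.05])   # facial-expression posteriors
p_eeg  = np.array([0.30, 0.40, 0.20, 0.10])   # EEG posteriors
print(fuse(p_face, p_eeg, "sum"), fuse(p_face, p_eeg, "product"))
```

The product rule penalizes classes either modality considers unlikely, while the sum rule is more forgiving of one modality's low confidence, which is why the two fusions can disagree on borderline samples.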
Long-read sequencing reveals genomic structural variations that underlie creation of quality protein maize
Mutation of o2 doubles maize endosperm lysine content, but it causes an inferior kernel phenotype. Developing quality protein maize (QPM) by introgressing o2 modifiers (Mo2s) into the o2 mutant benefits millions of people in developing countries where maize is a primary protein source. Here, we report the genome sequence and annotation of a South African QPM line, K0326Y, assembled from single-molecule, real-time shotgun sequencing reads collinear with an optical map. We achieve an N50 contig length of 7.7 million bases (Mb) directly from long-read assembly, compared with 1.04 Mb for B73 and 1.48 Mb for Mo17. To characterize Mo2s, we map QTLs to chromosomes 1, 6, 7, and 9 using an F2 population derived from crossing K0326Y and W64Ao2. RNA-seq analysis of QPM and o2 endosperms reveals a group of differentially expressed genes that coincide with Mo2 QTLs, suggesting a potential role in vitreous endosperm formation.
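For readers unfamiliar with the N50 statistic cited above, here is a small worked example: sort contig lengths in descending order and report the length at which the running sum first reaches half the total assembly size. The contig lengths below are made up for illustration.

```python
# N50: the contig length at which the cumulative sum of descending-sorted
# contig lengths first reaches half the total assembly length.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half, running = sum(lengths) / 2, 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([7.7, 5.2, 3.1, 2.0, 1.4, 0.9, 0.5]), "Mb")  # -> 5.2 Mb
```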
Combining Facial Expressions and Electroencephalography to Enhance Emotion Recognition
Emotion recognition plays an essential role in human–computer interaction. Previous studies have investigated the use of facial expression and electroencephalogram (EEG) signals from a single modality for emotion recognition separately, but few have paid attention to a fusion between them. In this paper, we adopted a multimodal emotion recognition framework combining facial expression and EEG, based on a valence-arousal emotional model. For facial expression detection, we followed a transfer learning approach with multi-task convolutional neural network (CNN) architectures to detect the states of valence and arousal. For EEG detection, the two learning targets (valence and arousal) were detected by separate support vector machine (SVM) classifiers. Finally, two decision-level fusion methods, based on an enumerated weight rule or an adaptive boosting technique, were used to combine facial expression and EEG. In the experiment, the subjects were instructed to watch clips designed to elicit an emotional response and then reported their emotional state. We used two emotion datasets, the Database for Emotion Analysis using Physiological Signals (DEAP) and the MAHNOB human-computer interface (MAHNOB-HCI), to evaluate our method. In addition, we performed an online experiment to make our method more robust. We experimentally demonstrated that our method produces state-of-the-art results for binary valence/arousal classification on the DEAP and MAHNOB-HCI datasets. Moreover, in the online experiment, we achieved 69.75% accuracy for the valence space and 70.00% accuracy for the arousal space after fusion, each of which surpassed the best-performing single modality (69.28% for the valence space and 64.00% for the arousal space). The results suggest that combining facial expressions and EEG information for emotion recognition compensates for their defects as single information sources. The novelty of this work is as follows. First, we combined facial expression and EEG to improve the performance of emotion recognition. Second, we used transfer learning techniques to tackle the problem of limited data and achieve higher accuracy for facial expression. Finally, in addition to implementing the widely used fusion method based on enumerating different weights between the two models, we also explored a novel fusion method applying a boosting technique.
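A hedged Python sketch of the enumerated-weight fusion mentioned above: a weight w is swept over a grid, the CNN (face) and SVM (EEG) posteriors are mixed as w*p_face + (1-w)*p_eeg, and the w maximizing validation accuracy is kept. The arrays here are random stand-ins, not the study's data.

```python
# Enumerated-weight decision-level fusion of two binary classifiers.
import numpy as np

def best_weight(p_face, p_eeg, labels, steps=101):
    """p_face, p_eeg: (n_samples, 2) class posteriors; labels: (n_samples,)."""
    best_w, best_acc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, steps):      # enumerate candidate weights
        fused = w * p_face + (1 - w) * p_eeg
        acc = np.mean(fused.argmax(axis=1) == labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)              # illustrative validation labels
p_face = rng.dirichlet([1, 1], 200)           # stand-in CNN posteriors
p_eeg = rng.dirichlet([1, 1], 200)            # stand-in SVM posteriors
print(best_weight(p_face, p_eeg, labels))
```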
Online Reservation Intention of Tourist Attractions in the COVID-19 Context: An Extended Technology Acceptance Model
Travel reservation is an important way to improve tourist experiences and digitally manage tourist attractions in the COVID-19 context. However, few studies have focused on the online reservation intentions of tourist attractions and their influencing factors. Based on the technology acceptance model (TAM), two variables (perceived risk and government policy) are introduced to extend the theoretical model. This study investigates the influence of subjective norms, government policy, perceived usefulness, perceived ease of use, and perceived risk on reservation intentions for tourist attractions. An online survey was conducted in China, and 255 questionnaires were collected. The data were analysed using SPSS 26.0 and AMOS 28.0 to construct a structural equation model and perform path analysis. The findings show that (1) subjective norms have no significant impact on reservation behaviours in voluntary situations; (2) perceived usefulness positively affects tourists' reservation intention; and (3) perceived risk has a significant negative impact on reservation intention, and government policy is the main factor affecting tourists' reservation intentions. These findings enhance the understanding of tourists' reservation intentions and extend TAM theory. From a practical perspective, tourist attraction operators should continue to strengthen the reservation system and improve tourists' experiences to reduce perceived risk, while other stakeholders such as the government should strengthen cooperation, promote the reservation system, and create a good reservation atmosphere.
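For readers who want to reproduce a path model of this kind in code, here is a hedged sketch in Python (the study itself used AMOS; semopy is an open-source stand-in with lavaan-style model syntax). The column names and the CSV file are hypothetical placeholders for item-averaged questionnaire scores, and the paths shown are an illustrative reading of the extended TAM, not the study's exact specification.

```python
# Sketch of an extended-TAM structural model using semopy (AMOS stand-in).
import pandas as pd
import semopy

desc = """
intention ~ usefulness + ease_of_use + risk + policy + norms
usefulness ~ ease_of_use
"""

df = pd.read_csv("survey_scores.csv")   # hypothetical questionnaire data
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())                  # path coefficients and p-values
```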