2 research outputs found

    Adaptive Multi-Type Fingerprint Indoor Positioning and Localization Method Based on Multi-Task Learning and Weight Coefficients K-Nearest Neighbor

    No full text
    The complex indoor environment makes received fingerprints unreliable, which degrades indoor positioning and localization methods based on fingerprint data. This paper proposes an adaptive multi-type fingerprint indoor positioning and localization method based on multi-task learning (MTL) and Weight Coefficients K-Nearest Neighbor (WCKNN), which integrates magnetic field, Wi-Fi and Bluetooth fingerprints for positioning and localization. The MTL fuses the features of the different fingerprint types to uncover potential relationships between them and exploits the synergy between the tasks, which boosts positioning and localization performance. The WCKNN then predicts another position from the fingerprints in the class determined by the obtained location. The final position is obtained by fusing the predicted positions with a weighted average whose weights are the positioning errors provided by positioning error prediction models. Experimental results indicated that the proposed method achieved 98.58% accuracy in classifying locations, with a mean positioning error of 1.95 m.
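    The abstract describes two position estimates (one per branch) being combined by a weighted average driven by predicted positioning errors. The sketch below is a minimal Python illustration of that idea under assumptions of my own: the function names (wcknn_estimate, fuse_by_error), the Euclidean fingerprint distance, and the inverse-error weighting are illustrative choices, not the authors' exact formulation.

    ```python
    import numpy as np

    def wcknn_estimate(query_fp, db_fps, db_positions, k=3):
        """Weighted K-Nearest Neighbor position estimate from one fingerprint.

        query_fp:     1-D array, measured fingerprint (e.g. RSSI / magnetic values)
        db_fps:       2-D array, reference fingerprints, one row per survey point
        db_positions: 2-D array, (x, y) coordinates of each survey point
        """
        dists = np.linalg.norm(db_fps - query_fp, axis=1)
        idx = np.argsort(dists)[:k]
        w = 1.0 / (dists[idx] + 1e-6)   # closer fingerprints get larger weight coefficients
        w /= w.sum()
        return w @ db_positions[idx]

    def fuse_by_error(positions, predicted_errors):
        """Fuse several position estimates, weighting each by the positioning
        error given for it by an error prediction model (assumed inverse weighting)."""
        positions = np.asarray(positions, dtype=float)
        err = np.asarray(predicted_errors, dtype=float)
        w = 1.0 / (err + 1e-6)          # smaller predicted error -> larger weight
        w /= w.sum()
        return w @ positions

    # Example: fuse a (hypothetical) MTL estimate with a WCKNN estimate.
    p_mtl = np.array([12.1, 4.3])
    p_wcknn = np.array([11.6, 4.9])
    print(fuse_by_error([p_mtl, p_wcknn], predicted_errors=[2.0, 1.5]))
    ```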

    Affective feature knowledge interaction for empathetic conversation generation

    No full text
    A popular chatbot must generate natural and human-like responses, and the crucial capability is understanding and appreciating the emotions and demands expressed from the user's perspective. However, some empathetic dialogue generation models specialise only in commonsense and neglect emotion, which yields a one-sided understanding of the user's situation and prevents the model from expressing emotion well. In this paper, we propose a novel affective feature knowledge interactive model, named AFKI, to enhance response generation performance; it enriches the conversation history into an emotional interactive context by leveraging fine-grained emotional features and commonsense knowledge. Furthermore, we utilise an emotional interactive context encoder to learn higher-level affective interaction information and distill an emotional state feature that guides empathetic response generation. The emotional features capture the subtle differences in the user's emotional expression, and the commonsense knowledge improves the representation of affective information in the generated responses. Extensive experiments on the empathetic conversation task demonstrate that our model generates responses with higher emotion accuracy and stronger empathetic ability than baseline approaches for empathetic response generation.
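    The abstract outlines a pipeline in which the dialogue context is enriched with emotional features and commonsense knowledge, then an encoder distills an emotional state vector to condition generation. The PyTorch sketch below only illustrates that general idea; the module names, the simple gated fusion, and the mean-pooled state vector are assumptions for illustration and are not the published AFKI architecture.

    ```python
    import torch
    import torch.nn as nn

    class EmotionalContextEncoder(nn.Module):
        """Toy encoder: fuse context, emotion, and knowledge features, then
        distill an emotional-state vector to guide a response decoder."""

        def __init__(self, d_model=256, n_heads=4, n_layers=2):
            super().__init__()
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.gate = nn.Linear(3 * d_model, d_model)   # fuse the three feature streams
            self.state = nn.Linear(d_model, d_model)      # distilled emotional state

        def forward(self, context, emotion_feats, knowledge_feats):
            # All inputs: (batch, seq_len, d_model) pre-computed embeddings.
            g = torch.sigmoid(self.gate(
                torch.cat([context, emotion_feats, knowledge_feats], dim=-1)))
            enriched_in = g * context + (1 - g) * emotion_feats
            enriched = self.encoder(enriched_in)
            emo_state = torch.tanh(self.state(enriched.mean(dim=1)))
            return enriched, emo_state   # both would condition the response decoder

    # Shape check with random tensors standing in for real features.
    enc = EmotionalContextEncoder()
    ctx = torch.randn(2, 10, 256)
    enriched, emo_state = enc(ctx, torch.randn(2, 10, 256), torch.randn(2, 10, 256))
    print(enriched.shape, emo_state.shape)
    ```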