Cross-modal analysis of facial EMG in micro-expressions and data annotation algorithm
Authors
东子朝
刘烨
张建行
李婧婷
王俨
王甦菁
Publication date
1 January 2023
Abstract
For a long time, the small-sample problem has constrained the development of micro-expression analysis, and this problem ultimately stems from the difficulty of annotating micro-expression data. This study uses facial electromyography (EMG) as a technical means to propose solutions along three lines: automatic annotation, semi-automatic annotation, and annotation-free learning of micro-expression data. For automatic annotation, we propose a scheme based on distal facial EMG; for semi-automatic annotation, we propose automatic onset/offset frame annotation based on single-frame labeling; for the annotation-free setting, we propose a cross-modal self-supervised learning algorithm based on EMG signals. In addition, with the aid of the EMG modality, this study extends the investigation of mechanistic characteristics of micro-expressions, such as their presentation duration and amplitude.
Repository
Institutional Repository of Institute of Psychology, Chinese Academy of Sciences
oai:/ir.psych.ac.cn:311026/462...
Last updated on 11/06/2025