Kinect Depth Recovery via the Cooperative Profit Random Forest Algorithm
Depth maps captured by Kinect usually contain missing depth data. In this paper, we propose a novel method to recover the missing depth data under the guidance of the depth information of neighbouring pixels. In the proposed framework, a self-taught mechanism and a cooperative profit random forest (CPRF) algorithm are combined to predict the missing depth data based on the existing depth data and the corresponding RGB image. The proposed method overcomes a defect of traditional methods, which are prone to producing artifacts or blur at object edges. Experimental results on the Berkeley 3-D Object Dataset (B3DO) and the Middlebury benchmark dataset show that the proposed method outperforms existing methods for the recovery of missing depth data. In particular, it preserves the geometry of objects well.
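The CPRF algorithm itself is not publicly specified, so the sketch below is only a rough analogue of the described pipeline: it substitutes scikit-learn's RandomForestRegressor and an assumed self-taught loop in which each pass trains on pixels with valid depth (using RGB plus a local known-depth mean as guidance features) and fills the holes that have enough known neighbours. All feature choices and thresholds here are assumptions, not the paper's method.

```python
# Minimal sketch of self-taught, RGB-guided depth hole filling.
# A plain RandomForestRegressor stands in for the paper's CPRF (assumption).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestRegressor

def fill_depth(rgb, depth, max_iters=5):
    """rgb: (H, W, 3) floats in [0, 1]; depth: (H, W), zeros mark holes."""
    depth = depth.astype(np.float64).copy()
    for _ in range(max_iters):
        holes = depth == 0
        if not holes.any():
            break
        known = (~holes).astype(np.float64)
        # Mean depth over *known* 5x5 neighbours: uniform_filter returns
        # window means, so the ratio cancels the window size and yields
        # sum(known depth) / count(known pixels).
        mean_all = uniform_filter(depth, size=5)
        frac_known = uniform_filter(known, size=5)
        guide = np.where(frac_known > 0,
                         mean_all / np.maximum(frac_known, 1e-6), 0.0)
        ys, xs = np.indices(depth.shape)
        feats = np.dstack([rgb, guide[..., None],
                           xs[..., None] / depth.shape[1],
                           ys[..., None] / depth.shape[0]]).reshape(-1, 6)
        flat_depth = depth.ravel()
        train = ~holes.ravel()
        rf = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1)
        rf.fit(feats[train], flat_depth[train])
        # Self-taught step: fill only holes with enough known neighbours,
        # then treat those predictions as ground truth on the next pass.
        ready = (holes & (frac_known > 0.3)).ravel()
        if not ready.any():
            ready = holes.ravel()
        flat_depth[ready] = rf.predict(feats[ready])
    return depth
```

Filling holes from the boundary inward is what lets each pass reuse freshly predicted depth as guidance, which is one plausible reading of the "self-taught" mechanism.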
MGEED: A Multimodal Genuine Emotion and Expression Detection Database
Multimodal emotion recognition has attracted increasing interest from academia and industry in recent years, since it enables emotion detection using various modalities, such as facial expression images, speech and physiological signals. Although research in this field has grown rapidly, it remains challenging to create a multimodal database containing facial electrical information, owing to the difficulty of capturing natural and subtle facial expression signals such as optomyography (OMG) signals. To this end, we present the newly developed Multimodal Genuine Emotion and Expression Detection (MGEED) database, the first publicly available database containing facial OMG signals. MGEED comprises 17 subjects with over 150K facial images, 140K depth maps and physiological signals of several modalities, including OMG, electroencephalography (EEG) and electrocardiography (ECG) signals. The emotions of the participants are evoked by video stimuli and the data are collected by a multimodal sensing system. With the collected data, an emotion recognition method is developed based on multimodal signal synchronisation, feature extraction, fusion and emotion prediction. The results show that superior performance can be achieved by fusing the visual, EEG and OMG features. The database can be obtained from https://github.com/YMPort/MGEED.
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) through the project 4D Facial Sensing and Modelling under Grant EP/N025849/1.
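As a rough illustration of the feature extraction and fusion stages, the hypothetical sketch below concatenates per-window descriptors from synchronised visual, EEG and OMG streams. The specific features (EEG band powers, simple OMG statistics) and the SVM classifier are assumptions for illustration; the paper's actual features and predictor are not reproduced here.

```python
# Hypothetical feature-level fusion for emotion prediction, in the spirit
# of a synchronise -> extract -> fuse -> classify pipeline (assumptions).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def band_power(signal, fs, lo, hi):
    """Mean power of `signal` in the [lo, hi) Hz band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs < hi)
    return power[band].mean()

def fuse_features(visual_feat, eeg, omg, fs_eeg=250, fs_omg=1000):
    """Concatenate per-window features from each synchronised modality.

    visual_feat: precomputed facial-expression descriptor (1-D array)
    eeg, omg:    raw 1-D windows cut to the same time span (synchronised
                 upstream, e.g. via shared hardware timestamps).
    """
    eeg_feat = [band_power(eeg, fs_eeg, lo, hi)
                for lo, hi in [(4, 8), (8, 13), (13, 30)]]  # theta/alpha/beta
    omg_feat = [omg.mean(), omg.std(), band_power(omg, fs_omg, 20, 450)]
    return np.concatenate([visual_feat, eeg_feat, omg_feat])

# Usage: X rows are fused windows, y their emotion labels.
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```

Concatenation is the simplest fusion scheme; the same per-modality features could instead feed separate classifiers whose scores are combined, if decision-level fusion were preferred.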