6 research outputs found

    Underwater Target Detection and 3D Reconstruction System Based on Binocular Vision

    No full text
    To better solve the problem of target detection in the marine environment and to address the difficulty of 3D reconstruction of underwater targets, a binocular vision-based underwater target detection and 3D reconstruction system is proposed in this paper. Two optical sensors serve as the vision of the system. Firstly, denoising and color restoration are performed on the image sequence acquired by the system, and the underwater target is segmented and extracted according to image saliency using a super-pixel segmentation method. Secondly, to reduce mismatches, we improve the semi-global stereo matching method by strictly constraining the matching to the valid target area and then optimizing the basic disparity map within each super-pixel area using least-squares fitting interpolation. Finally, based on the optimized disparity map, the triangulation principle is used to calculate the three-dimensional data of the target, and the 3D structure and color information of the target can be rendered in MeshLab. The experimental results show that, for an underwater target of a specific size, the system achieves high measurement accuracy and a good 3D reconstruction result within a suitable distance range.
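
    As an illustration of the triangulation step described above, the following Python sketch back-projects an optimized disparity map into a 3D point cloud. The intrinsics (f_px, cx, cy), the baseline and the synthetic disparity values are hypothetical placeholders; the paper's calibration and disparity-refinement procedure are not reproduced here.

```python
# Illustrative sketch only: hypothetical camera parameters, not the paper's calibration.
import numpy as np

def disparity_to_point_cloud(disparity, f_px, baseline_m, cx, cy):
    """Triangulate a 3D point for every pixel with a valid (positive) disparity.

    disparity  : HxW float array of disparities in pixels
    f_px       : focal length in pixels (assumed identical for both cameras)
    baseline_m : distance between the two optical centres in metres
    cx, cy     : principal point of the reference (left) camera
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates

    valid = disparity > 0                            # ignore unmatched pixels
    z = np.zeros_like(disparity, dtype=float)
    z[valid] = f_px * baseline_m / disparity[valid]  # depth from triangulation: Z = f*B/d

    x = (u - cx) * z / f_px                          # back-project to the camera frame
    y = (v - cy) * z / f_px
    return np.stack([x[valid], y[valid], z[valid]], axis=1)  # N x 3 points

if __name__ == "__main__":
    disp = np.full((4, 4), 32.0)                     # toy 4x4 disparity map, 32 px everywhere
    pts = disparity_to_point_cloud(disp, f_px=800.0, baseline_m=0.12, cx=2.0, cy=2.0)
    print(pts.shape, pts[0])                         # depth = 800 * 0.12 / 32 = 3.0 m
```

    The resulting N x 3 array is the kind of point set that could then be exported for meshing and coloring in a tool such as MeshLab.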

    Hemispheric Asymmetry of Functional Brain Networks under Different Emotions Using EEG Data

    No full text
    Despite many studies reporting hemispheric asymmetry in the representation and processing of emotions, the essence of the asymmetry remains controversial. Brain network analysis based on electroencephalography (EEG) is a useful biological method for studying brain function. Here, EEG data were recorded while participants watched different emotional videos. According to the videos’ emotional categories, the data were divided into four categories: high arousal high valence (HAHV), low arousal high valence (LAHV), low arousal low valence (LALV) and high arousal low valence (HALV). The phase lag index was calculated as a connectivity index in the theta (4–7 Hz), alpha (8–13 Hz), beta (14–30 Hz) and gamma (31–45 Hz) bands. Hemispheric networks were constructed for each trial, and graph theory was applied to quantify their topological properties. Statistical analyses showed significant topological differences in the gamma band. The left hemispheric network showed a significantly higher clustering coefficient (Cp), global efficiency (Eg) and local efficiency (Eloc) and a lower characteristic path length (Lp) under HAHV emotion. The right hemispheric network showed significantly higher Cp and Eloc and lower Lp under HALV emotion. The results indicate that the left hemisphere was dominant for HAHV emotion, while the right hemisphere was dominant for HALV emotion. The research reveals the relationship between emotion and hemispheric asymmetry from the perspective of brain networks.
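
    A minimal sketch of the connectivity pipeline the abstract describes: band-pass filtering, phase lag index estimation via the Hilbert transform, and graph metrics (Cp, Lp) on a thresholded network. The sampling rate, channel count, edge-density threshold and the use of SciPy/NetworkX are illustrative assumptions, not the authors' exact settings.

```python
# Illustrative sketch only: hypothetical recording parameters and thresholding choices.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt
import networkx as nx

def band_filter(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter, e.g. gamma = (31, 45) Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, x, axis=-1)

def phase_lag_index(data):
    """PLI connectivity matrix for data shaped (channels, samples)."""
    phase = np.angle(hilbert(data, axis=-1))         # instantaneous phase per channel
    n = data.shape[0]
    pli = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phase[i] - phase[j]
            pli[i, j] = pli[j, i] = abs(np.mean(np.sign(np.sin(dphi))))
    return pli

def network_metrics(pli, density=0.3):
    """Keep the strongest edges and report Cp and Lp of the resulting graph."""
    n = pli.shape[0]
    iu = np.triu_indices(n, k=1)
    thresh = np.quantile(pli[iu], 1 - density)       # keep the top `density` fraction of edges
    g = nx.from_numpy_array((pli >= thresh) & ~np.eye(n, dtype=bool))
    cp = nx.average_clustering(g)
    lp = nx.average_shortest_path_length(g) if nx.is_connected(g) else np.inf
    return cp, lp

if __name__ == "__main__":
    fs, seconds, channels = 250, 10, 16              # hypothetical hemispheric montage
    eeg = np.random.randn(channels, fs * seconds)    # stand-in for one trial of EEG
    gamma = band_filter(eeg, fs, 31, 45)
    cp, lp = network_metrics(phase_lag_index(gamma))
    print(f"Cp={cp:.3f}, Lp={lp:.3f}")
```

    In the study's setting, such metrics would be computed separately for left- and right-hemisphere channel sets and then compared statistically across emotion categories.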

    Double-Camera Fusion System for Animal-Position Awareness in Farming Pens

    No full text
    In livestock breeding, continuous and objective monitoring of animals is infeasible to perform manually due to the large scale of breeding and the cost of labour. Computer vision technology can generate accurate, real-time information about individual animals or animal groups from video surveillance. However, frequent occlusion between animals and changes in appearance features caused by varying lighting conditions make single-camera systems less attractive. We propose a double-camera system and image registration algorithms to spatially fuse the information from different viewpoints and solve these issues. This paper presents a deformable learning-based registration framework in which the input image pairs are first linearly pre-registered. An unsupervised convolutional neural network is then employed to fit the mapping from one view to another, using a large number of unlabelled samples for training. The learned parameters are subsequently used in a semi-supervised network and fine-tuned with a small number of manually annotated landmarks. The actual pixel displacement error is introduced as a complement to an image similarity measure. The proposed fine-tuned method is evaluated on real farming datasets and achieves significantly lower registration errors than commonly used feature-based and intensity-based methods. The approach also reduces the registration time for an unseen image pair to less than 0.5 s. The proposed method provides a high-quality reference processing step for improving subsequent tasks such as multi-object tracking and behaviour recognition of animals for further analysis.
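
    A rough sketch of the kind of objective the semi-supervised fine-tuning stage implies: a moving view is warped by a dense displacement field, and an image-similarity term is complemented by a landmark displacement error measured in pixels. The warping function, loss weight and the toy shifted-image example are assumptions for illustration; the paper's network architecture is not reproduced here.

```python
# Illustrative sketch only: a toy warp and combined loss, not the authors' network.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, flow):
    """Warp `moving` (HxW) with a dense displacement field `flow` (2xHxW, in pixels)."""
    h, w = moving.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + flow[0], xx + flow[1]])          # sampling locations in `moving`
    return map_coordinates(moving, coords, order=1, mode="nearest")

def semi_supervised_loss(fixed, moving, flow, fixed_pts, moving_pts, w_lm=1.0):
    """Image similarity (MSE) plus mean landmark displacement error in pixels.

    fixed_pts / moving_pts : N x 2 arrays of corresponding (row, col) landmarks.
    The landmark term plays the role of the pixel displacement error that
    complements the image similarity measure during fine-tuning.
    """
    warped = warp(moving, flow)
    similarity = np.mean((fixed - warped) ** 2)

    # the flow maps fixed-image coordinates to their locations in the moving image
    rows, cols = fixed_pts[:, 0].astype(int), fixed_pts[:, 1].astype(int)
    predicted = fixed_pts + np.stack([flow[0][rows, cols], flow[1][rows, cols]], axis=1)
    landmark_err = np.mean(np.linalg.norm(predicted - moving_pts, axis=1))

    return similarity + w_lm * landmark_err

if __name__ == "__main__":
    fixed = np.random.rand(64, 64)
    moving = np.roll(fixed, 3, axis=1)                       # second view shifted 3 px to the right
    flow = np.zeros((2, 64, 64))
    flow[1] = 3.0                                            # displacement that re-aligns it with `fixed`
    fixed_pts = np.array([[20.0, 20.0], [40.0, 30.0]])
    moving_pts = fixed_pts + np.array([0.0, 3.0])
    print(semi_supervised_loss(fixed, moving, flow, fixed_pts, moving_pts))
```

    In a learning-based framework, `flow` would be predicted by the registration network and this combined loss would drive the fine-tuning with the small set of annotated landmarks.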
