16 research outputs found

    Human-human interaction recognition based on spatial and motion trend feature


    Online action recognition based on skeleton motion distribution


    Visual focus of attention estimation using eye center localization

    Estimating a person's visual focus of attention (VFOA) plays a crucial role in many practical systems, such as human-robot interaction. Extracting the VFOA cue is challenging because gaze directionality is difficult to recognize. In this paper, we propose an improved integrodifferential approach that represents gaze by efficiently and accurately localizing the eye center in lower-resolution images. The proposed method exploits both the drastic intensity change between the iris and the sclera and the grayscale value of the eye center. The number of kernels used to convolve the original eye-region image is optimized, and the eye center is located by searching for the maximum ratio derivative of the neighboring curve magnitudes in the convolution image. Experimental results confirm that the algorithm outperforms state-of-the-art methods in terms of computational cost, accuracy, and robustness to illumination changes.
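    A minimal sketch of the underlying integrodifferential idea is given below, assuming a grayscale eye-region image as input. The exhaustive candidate search and the ratio-of-circular-means test are simplified stand-ins for the paper's optimized kernel convolution, and the function names, radii, and scoring heuristic are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: integrodifferential-style eye-centre search on a grayscale
# eye-region image. The circular-mean ratio test approximates "maximum ratio
# derivative of the neighbor curve magnitudes"; radii and scoring are assumptions.
import numpy as np

def circular_mean(img, cx, cy, r, n_samples=64):
    """Mean grayscale intensity sampled on a circle of radius r around (cx, cy)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def locate_eye_center(img, r_min=3, r_max=12):
    """Pick the candidate centre whose neighbouring circular means change most
    sharply (dark iris against bright sclera), favouring dark candidate pixels."""
    best_score, best_center = -np.inf, None
    h, w = img.shape
    for cy in range(r_max, h - r_max):
        for cx in range(r_max, w - r_max):
            means = np.array([circular_mean(img, cx, cy, r)
                              for r in range(r_min, r_max)])
            # ratio of successive circular means peaks at the iris/sclera boundary
            ratios = means[1:] / (means[:-1] + 1e-6)
            score = ratios.max() - img[cy, cx] / 255.0  # darker centres score higher
            if score > best_score:
                best_score, best_center = score, (cx, cy)
    return best_center
```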

    Assembling convolution neural networks for automatic viewing transformation

    Images taken under different camera poses are rotated or distorted, which leads to a poor viewing experience. This paper proposes a new framework that automatically transforms images to a conformable view setting by assembling different convolutional neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image, and a novel projection-mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
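    As a rough illustration of the projection-mapping step, the sketch below warps an image onto a canonical fronto-parallel view given four image points on the estimated ground plane. The use of a single homography via OpenCV, the point set, and the target rectangle are assumptions standing in for the paper's assembled-CNN pipeline, not its actual algorithm.

```python
# Hedged sketch: rectify an image to a canonical view from an assumed
# ground-plane quadrilateral (e.g. produced by upstream networks).
import cv2
import numpy as np

def rectify_to_canonical_view(image, ground_plane_pts, out_size=(640, 480)):
    """Warp the image so the ground-plane quadrilateral maps onto an axis-aligned
    rectangle, i.e. a fronto-parallel ('conformable') view."""
    w, h = out_size
    src = np.asarray(ground_plane_pts, dtype=np.float32)   # 4 x 2, assumed given
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])      # canonical rectangle
    H = cv2.getPerspectiveTransform(src, dst)                # 3x3 homography
    return cv2.warpPerspective(image, H, (w, h))

# Hypothetical usage: pts come from the estimated referential ground plane.
# rectified = rectify_to_canonical_view(img, pts)
```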

    Multi-stage adaptive regression for online activity recognition

    Online activity recognition, which aims to detect and recognize activities instantly from a continuous video stream, is a key technology in human-robot interaction. However, the partial-observation problem, caused mainly by incomplete sequence acquisition, makes it highly challenging. This paper proposes a novel approach, named Multi-stage Adaptive Regression (MAR), for online activity recognition, with the main focus on addressing the partial-observation problem. Specifically, the MAR framework assembles overlapped activity observations to improve its robustness against arbitrary activity segments. Multiple score functions, each corresponding to a specific performance stage, are then learned collaboratively via an adaptive label strategy to enhance the power of discriminating similar partial activities. Moreover, the Online Human Interaction (OHI) database is constructed to evaluate online activity recognition in human-interaction scenarios. Extensive experimental evaluations on the Multi-Modal Action Detection (MAD) database and the OHI database show that the MAR method outperforms state-of-the-art approaches.
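    The sketch below illustrates the online scoring idea under simplified assumptions: per-stage linear score functions are taken as already learned offline, each segment is averaged into a fixed-length descriptor, and the window and stride values are placeholders. It is not the MAR training procedure itself.

```python
# Hedged sketch: score overlapping windows of a streaming feature sequence with
# one linear score function per performance stage; weights and sizes are assumed.
import numpy as np

def online_recognize(frame_features, stage_weights, window=30, stride=10):
    """frame_features: (T, D) array of per-frame features.
    stage_weights: list of (n_classes, D) matrices, one per performance stage.
    Returns a per-window class prediction for the streaming sequence."""
    predictions = []
    T = len(frame_features)
    for start in range(0, max(T - window + 1, 1), stride):
        segment = frame_features[start:start + window]
        x = segment.mean(axis=0)                           # simple segment descriptor
        # one score per (stage, class); the best stage explains the partial observation
        scores = np.stack([W @ x for W in stage_weights])  # (n_stages, n_classes)
        predictions.append(int(scores.max(axis=0).argmax()))
    return predictions
```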

    Altered Intrinsic Coupling between Functional Connectivity Density and Amplitude of Low-Frequency Fluctuation in Mild Cognitive Impairment with Depressive Symptoms

    Neuroimaging studies have demonstrated that major depressive disorder increases the risk of dementia in older individuals with mild cognitive impairment. We used resting-state functional magnetic resonance imaging to explore the intrinsic coupling patterns between the amplitude and synchronisation of low-frequency brain fluctuations, using the amplitude of low-frequency fluctuations (ALFF) and the functional connectivity density (FCD), in 16 patients who had mild cognitive impairment with depressive symptoms (D-MCI) (mean age: 69.6 ± 6.2 years) and 18 patients with nondepressed mild cognitive impairment (nD-MCI) (mean age: 72.1 ± 9.7 years). Coupling was quantified as the correlation between the ALFF values and their associated FCDs. The results showed that, compared with the nD-MCI group, ALFF values in the D-MCI group were higher in the left medial prefrontal cortex (mPFC) and lower in the right precentral gyrus (preCG), while FCD values were higher in the left medial temporal gyrus (MTG). Further correlation analyses demonstrated that, in the D-MCI group, the mPFC was negatively correlated with the MTG. These findings may relate to the characteristics of mood disorders in patients with MCI, and they offer further insight into the neuropathophysiology of MCI with depressive symptoms.
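    A minimal sketch of the coupling measure is given below, assuming per-voxel ALFF and FCD maps are already available for one subject. The across-voxel Pearson correlation is a direct reading of "coupling was quantified as the correlation between the ALFF values and their associated FCDs"; the mask handling and usage are illustrative assumptions.

```python
# Hedged sketch: ALFF-FCD coupling as a Pearson correlation across voxels
# inside a gray-matter mask; map computation itself is assumed done upstream.
import numpy as np

def alff_fcd_coupling(alff_map, fcd_map, mask):
    """Pearson correlation between ALFF and FCD over voxels inside the mask."""
    alff = alff_map[mask]
    fcd = fcd_map[mask]
    return np.corrcoef(alff, fcd)[0, 1]

# Hypothetical usage: maps and mask are 3-D arrays of matching shape.
# r = alff_fcd_coupling(alff, fcd, gm_mask.astype(bool))
```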