    Social experience does not abolish cultural diversity in eye movements.

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this claim has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed "Eastern" eye movement strategies, while approximately 25% of participants displayed "Western" strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that "culture" alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding of how the environment and individual experiences influence the mechanisms that govern visual processing is required.

    Inter-CubeSat Communication with V-band "Bull's eye" antenna

    We present the study of a simple communication scenario between two CubeSats using a V-band "Bull's eye" antenna that we designed for this purpose. The antenna has a −10 dB return-loss bandwidth of 0.7 GHz and a gain of 15.4 dBi at 60 GHz. Moreover, its low-profile shape makes it easy to integrate into a CubeSat chassis. The communication scenario study shows that, using 0.01 W VubiQ modules and V-band "Bull's eye" antennas, CubeSats can efficiently transmit data within a 500 MHz bandwidth at a 10⁻⁶ BER while separated by up to 98 m under ideal conditions, or 50 m under worst-case operating conditions (5° pointing misalignment in the E- and H-planes of the antenna, and 5° polarisation misalignment).
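    The ideal-conditions range claim can be sanity-checked with a standard Friis free-space link budget using the figures quoted in the abstract (0.01 W transmit power, 15.4 dBi antennas on both ends, 60 GHz carrier). This is only a generic textbook check, not the authors' own link-budget model; it ignores receiver noise figure, modulation, and the misalignment losses the paper accounts for.

    ```python
    import math

    def friis_rx_power_dbm(p_tx_w, g_tx_dbi, g_rx_dbi, freq_hz, dist_m):
        """Received power in dBm from the Friis free-space equation."""
        p_tx_dbm = 10 * math.log10(p_tx_w * 1000)  # watts -> dBm
        # Free-space path loss: 20*log10(4*pi*d*f/c)
        fspl_db = (20 * math.log10(dist_m)
                   + 20 * math.log10(freq_hz)
                   + 20 * math.log10(4 * math.pi / 3e8))
        return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

    # Figures from the abstract: 0.01 W modules, 15.4 dBi antennas, 60 GHz, 98 m
    rx_at_98m = friis_rx_power_dbm(0.01, 15.4, 15.4, 60e9, 98.0)
    # roughly -67 dBm reaches the receiver under ideal free-space conditions
    ```

    Whether −67 dBm suffices for a 10⁻⁶ BER depends on the receiver sensitivity of the VubiQ module, which the abstract does not state.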

    Face Hallucination With Finishing Touches

    Obtaining a high-quality frontal face image from a low-resolution (LR) non-frontal face image is of primary importance for many facial analysis applications. However, mainstream methods either focus on super-resolving near-frontal LR faces or on frontalizing non-frontal high-resolution (HR) faces. It is desirable to perform both tasks seamlessly for daily-life unconstrained face images. In this paper, we present a novel Vivid Face Hallucination Generative Adversarial Network (VividGAN) for simultaneously super-resolving and frontalizing tiny non-frontal face images. VividGAN consists of coarse-level and fine-level Face Hallucination Networks (FHnet) and two discriminators, i.e., Coarse-D and Fine-D. The coarse-level FHnet generates a frontal coarse HR face, and the fine-level FHnet then makes use of the facial component appearance prior, i.e., fine-grained facial components, to attain a frontal HR face image with authentic details. In the fine-level FHnet, we also design a facial component-aware module that adopts facial geometry guidance as clues to accurately align and merge the frontal coarse HR face and the prior information. Meanwhile, the two-level discriminators are designed to capture both the global outline of a face image and its detailed facial characteristics. The Coarse-D enforces the coarsely hallucinated faces to be upright and complete, while the Fine-D focuses on the finely hallucinated ones for sharper details. Extensive experiments demonstrate that our VividGAN achieves photo-realistic frontal HR faces, reaching superior performance in downstream tasks, i.e., face recognition and expression classification, compared with other state-of-the-art methods.
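    The coarse-to-fine data flow described above can be sketched in a few lines. This is purely a structural illustration of the pipeline shape, not the paper's networks: nearest-neighbour upsampling stands in for the learned coarse-level FHnet, and a weighted merge stands in for the component-aware fusion in the fine-level FHnet.

    ```python
    import numpy as np

    def coarse_fhnet(lr_face, scale=8):
        # Placeholder for the coarse-level network: nearest-neighbour
        # upsampling stands in for learned super-resolution + frontalization.
        return np.kron(lr_face, np.ones((scale, scale)))

    def fine_fhnet(coarse_hr, component_prior, alpha=0.7):
        # Placeholder for the fine-level network: a fixed weighted merge
        # stands in for the facial component-aware align-and-merge module.
        return alpha * coarse_hr + (1 - alpha) * component_prior

    lr = np.random.rand(16, 16)        # tiny non-frontal LR input (16x16)
    coarse = coarse_fhnet(lr)          # frontal coarse HR face (128x128)
    prior = np.random.rand(128, 128)   # fine-grained facial-component prior
    fine = fine_fhnet(coarse, prior)   # frontal HR face with merged details
    ```

    In the actual method, both stages are trained adversarially against Coarse-D and Fine-D respectively; here the two functions only fix the tensor shapes and the order of operations.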

    Face Recognition and Expression Classification

    In face image analysis, there are three basic research topics: face detection, face recognition, and expression classification. All are pattern classification tasks. Recently, manifold learning and nonlinear dimensionality reduction algorithms (e.g., Isomap, LLE, and Laplacian Eigenmaps) have been studied for their effectiveness on such tasks under varying conditions. We propose a novel manifold learning algorithm based on Riemannian Normal Coordinates (RNC). First, a simple geometric model is studied to show that face images under varying pose and lighting can form a curved manifold. Then, we present theoretical results showing that classification on a Riemannian manifold can be transferred onto its coordinate charts. Our method models the face manifold as an approximating simplicial complex, and the RNC of a data point is given by the inverse of the exponential map, which preserves distances along the radial geodesics. Experimental results demonstrate the excellent performance of our method.
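    The key operation above, the inverse of the exponential map (the log map), is easiest to see on a concrete manifold. The sketch below uses the unit sphere rather than the paper's face manifold or its simplicial-complex approximation: the log map at a base point returns a tangent vector whose length equals the geodesic distance, which is exactly the radial-distance-preserving property of Riemannian normal coordinates.

    ```python
    import numpy as np

    def sphere_log(p, q):
        """Log map on the unit sphere: the tangent vector at p whose length
        is the geodesic distance from p to q (the RNC of q in the chart at p)."""
        cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
        theta = np.arccos(cos_t)            # geodesic (great-circle) distance
        if theta < 1e-12:
            return np.zeros_like(p)
        v = q - cos_t * p                   # component of q tangent to the sphere at p
        return theta * v / np.linalg.norm(v)

    def sphere_exp(p, v):
        """Exponential map at p: walk distance |v| along the geodesic toward v."""
        t = np.linalg.norm(v)
        if t < 1e-12:
            return p.copy()
        return np.cos(t) * p + np.sin(t) * v / t

    p = np.array([0.0, 0.0, 1.0])           # base point (centre of the chart)
    q = np.array([1.0, 0.0, 0.0])           # a point a quarter-circle away
    v = sphere_log(p, q)                    # normal coordinate of q at p
    # |v| equals the geodesic distance pi/2, and exp_p inverts log_p
    ```

    A classifier can then operate on the flat coordinates `v` instead of on the curved manifold, which is the transfer-to-charts idea the abstract describes.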