
    Participation of Different Forces and Coeducation in Peking University: From Reports of Newspaper Media, 1918-1920

    It was an important achievement of the Women's Liberation Movement during the May Fourth New Culture Movement that Peking University lifted its ban on female students and implemented coeducation. In this period, newspaper media kept pace with the historical trend. As leaders of public opinion, newspapers intervened in, reported on, and publicized the lifting of the ban and the implementation of coeducation at Peking University. They promoted the involvement of different social forces: the president of Peking University, Cai Yuanpei; female intellectuals; and the teachers and students of Peking University. These forces played different roles in the trend, and through their interaction the lifting of the ban and the implementation of coeducation at Peking University began with appeals and debates and eventually ended in realization. Furthermore, it had a nationwide impact on the field of education. This not only reflects that coeducation in universities was a historical trend in the democratization and modernization of higher education, but also indicates that newspaper media played an indispensable role in the Women's Liberation Movement.

    4D Human Body Capture from Egocentric Video via 3D Scene Grounding

    We introduce the novel task of reconstructing a time series of second-person 3D human body meshes from monocular egocentric videos. The unique viewpoint and rapid embodied camera motion of egocentric videos raise additional technical barriers for human body capture. To address these challenges, we propose a simple yet effective optimization-based approach that leverages 2D observations of the entire video sequence and human-scene interaction constraints to estimate second-person human poses, shapes, and global motion grounded in the 3D environment captured from the egocentric view. We conduct detailed ablation studies to validate our design choices. Moreover, we compare our method with the previous state-of-the-art method for human motion capture from monocular video, and show that ours estimates more accurate human body poses and shapes under the challenging egocentric setting. In addition, we demonstrate that our approach produces more realistic human-scene interaction.
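The core idea of such an optimization-based approach can be illustrated with a toy sketch: jointly minimize a 2D reprojection term and a scene-grounding penalty. Everything below (pinhole camera, a flat ground plane, finite-difference descent) is an invented simplification, not the paper's actual formulation.

```python
import numpy as np

def project(points_3d, f=1.0):
    """Pinhole projection of (N, 3) camera-space points to (N, 2); assumes z > 0."""
    return f * points_3d[:, :2] / points_3d[:, 2:3]

def floor_penalty(points_3d, floor_y=0.0):
    """Quadratic penalty for joints below a hypothetical ground plane y = floor_y."""
    violation = np.minimum(points_3d[:, 1] - floor_y, 0.0)
    return float(np.sum(violation ** 2))

def fit_pose(obs_2d, init_3d, w_scene=10.0, steps=300, lr=0.01, eps=1e-5):
    """Minimize 2D reprojection error plus the scene penalty with
    finite-difference gradient descent (a toy stand-in for optimizing
    poses, shapes, and global motion against the 3D scene)."""
    def loss(p):
        return float(np.sum((project(p) - obs_2d) ** 2)) + w_scene * floor_penalty(p)
    x = init_3d.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros(x.size)
        base = loss(x)
        flat = x.ravel()  # view into x: nudging flat[i] nudges x
        for i in range(x.size):
            flat[i] += eps
            grad[i] = (loss(x) - base) / eps
            flat[i] -= eps
        x = x - lr * grad.reshape(x.shape)
    return x
```

In the real setting the variables would be body-model parameters over the whole sequence rather than free 3D points, and the scene constraint would come from the reconstructed environment rather than a flat plane.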

    Summer Outdoor Thermal Perception for the Elderly in a Comprehensive Park of Changsha, China

    Thermal perception is an important factor affecting the usage of outdoor spaces (e.g., urban parks). The elderly are the main visitors of urban parks; however, few studies have investigated their thermal perception in urban parks in summer. Taking a comprehensive urban park in Changsha, China, as an example, this study examined the thermal perception of the elderly and investigated the impacts of age, gender, and health status on thermal perception through field observation, questionnaires, and field measurement of meteorological variables. The results showed that: (1) The neutral physiological equivalent temperature (PET) was 24.48 °C, with a range of 21.99−26.97 °C. The comfortable PET was 25.41 °C, and the 90% acceptable PET range was 25.84−33.19 °C. (2) The neutral PET increased with the elderly's age (23.19 °C, 25.33 °C, and 25.36 °C, respectively, for people aged 60–69, 70–79, and ≥80 years old), and their thermal sensitivity also increased with age. (3) Moving to the shade provided by trees or buildings was the main thermal adaptation behavior of the elderly in the park in summer. This study extends the understanding of the outdoor thermal perception of the elderly in summer and can inform urban park planning and design to improve the thermal perception of elderly visitors in summer in Changsha, China.
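A common way neutral PET is derived in thermal-comfort studies of this kind is to regress mean thermal sensation votes (MTSV) against PET and read off where the fitted line crosses zero. The sketch below uses invented data and may not match this paper's exact procedure.

```python
import numpy as np

# Hypothetical survey data: PET bin centers (°C) and mean thermal
# sensation votes on a 7-point scale (-3 = cold, 0 = neutral, +3 = hot).
pet = np.array([18.0, 21.0, 24.0, 27.0, 30.0, 33.0])
mtsv = np.array([-0.9, -0.5, -0.1, 0.4, 0.8, 1.3])

# Fit MTSV = a * PET + b. The neutral PET is where the line crosses
# MTSV = 0; the neutral range is commonly taken as |MTSV| <= 0.5.
a, b = np.polyfit(pet, mtsv, 1)
neutral_pet = -b / a
neutral_range = ((-0.5 - b) / a, (0.5 - b) / a)
```

With real questionnaire data this yields values like the 24.48 °C neutral PET reported above; the slope `a` also quantifies thermal sensitivity, which the study found to increase with age.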

    Human Action Transfer Based on 3D Model Reconstruction

    We present a practical and effective method for human action transfer. Given a source action sequence and limited target information, we aim to transfer motion from the source to the target. Although recent works based on GANs or VAEs have achieved impressive results for action transfer in 2D, several problems remain unavoidable, such as distorted and discontinuous human body shapes and blurry cloth textures. In this paper, we address these problems from a novel 3D viewpoint. On the one hand, we design a skeleton-to-3D-mesh generator to produce the 3D model, which substantially improves appearance reconstruction, and we add a temporal connection to improve the smoothness of the model. On the other hand, instead of directly using the image in RGB space, we transform the target appearance information into UV space for further pose transformation. In particular, unlike the conventional graphics rendering method, which directly projects visible pixels into UV space, our transformation is guided by each pixel's semantic information. We perform experiments on Human3.6M and HumanEva-I to evaluate the pose generator; both qualitative and quantitative results show that our method outperforms 2D generation-based methods. Additionally, we compare our rendering method with graphics methods on Human3.6M and People-snapshot, and the comparison shows that our rendering method is more robust and effective.
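The "temporal connection" for smoothness can be illustrated by a simple penalty on frame-to-frame vertex velocity. This is a minimal sketch of the general idea; the actual loss weighting and how it enters the generator are assumptions, not taken from the paper.

```python
import numpy as np

def temporal_smoothness(verts_seq):
    """Mean squared frame-to-frame vertex displacement for a (T, N, 3)
    mesh-vertex sequence. Adding such a term to a generator's training
    loss discourages jitter between consecutive 3D meshes."""
    vel = np.diff(verts_seq, axis=0)           # (T-1, N, 3) displacements
    return float(np.mean(np.sum(vel ** 2, axis=-1)))
```

A smoothly drifting sequence scores low under this penalty, while the same sequence with per-frame noise scores high, which is exactly the distinction a temporal regularizer needs to make.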