96 research outputs found

    Influence of Intergenerational Parenting on Gross Motor Skills Among Children Aged 3-6 Years Old

    The period from 3 to 6 years old is critical for the development of children's gross motor skills, which influence their later growth and development. Intergenerational parenting is a common form of family education in China. Studies have shown that grandparents' rough parenting concepts and limited energy reduce the quality of life of the children in their care (Qimeng Jiang, Nan Zhou, 2020). However, no research has focused on the influence of intergenerational parenting on gross motor skills. This study aimed to explore the influence of different intergenerational parenting styles on gross motor skills among children aged 3-6 years old. The participants were 62 children (25 boys and 37 girls) aged 3-6 years old from Liaoning Province, all of whom were under intergenerational parenting. Gross motor skills were assessed using the Test of Gross Motor Development-Third Edition (TGMD-3), which includes locomotor skills and ball skills components. Intergenerational parenting status was divided into parent-dominated and grandparent-dominated intergenerational parenting according to a questionnaire (Lu Ye, 2020). Parenting style included authoritative, authoritarian, and tolerant styles, scored with the grandparent-reported Chinese version of the Parental Authority Questionnaire. Descriptive statistics, the independent t-test, and Pearson correlation were employed, with the significance level set at 0.05. The results showed that participants had lower mean scores in both locomotor skills (Mboy = 12.58 ± 4.42, Mgirl = 13.51 ± 3.03) and ball skills (Mboy = 8.46 ± 2.9, Mgirl = 8.04 ± 3.34) than the Chinese norm. There was no significant difference between parent-dominated and grandparent-dominated intergenerational parenting (Mparent-dominated = 22.90 ± 4.15, Mgrandparent-dominated = 20.48 ± 4.47; t = 1.269, p = 0.209).
Correlation analysis indicated a small association between locomotor skills scores and the authoritative style (r = 0.269, p < 0.05). No significant relationship was found between the other parenting styles and TGMD-3 scores. It is concluded that intergenerational parenting may negatively influence children's gross motor development. Parent-dominated and grandparent-dominated intergenerational parenting may not differ in their effect on children's motor development. The authoritative style of intergenerational parenting has a certain impact on children's gross motor skills, especially their locomotor skills.
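The statistical procedures named in the abstract (independent t-test, Pearson correlation) can be sketched in plain Python. The functions below are the generic textbook formulas, not the study's code, and any data passed to them would be hypothetical.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def independent_t(x, y):
    """Student's t statistic for two independent samples (pooled variance)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
```

In practice one would use a statistics package that also reports the p-value; the formulas above only produce the test statistic and correlation coefficient quoted in the abstract.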

    Online expansion: is it another kind of strategic manufacturer response to a dominant retailer?

    The issues of channel conflict and channel power have received widespread research attention, including Geylani et al.'s (2007) work on channel relations in an asymmetric retail setting. Specifically, these authors suggest that a manufacturer can respond to a dominant retailer's pricing pressure by raising the wholesale price for a weak retailer above that for the dominant retailer while transferring demand to the weak retailer's channel via cooperative advertising. But is online expansion another kind of optimal strategic response by the manufacturer to a dominant retailer? In this paper, we extend this work by adding a direct online selling channel to illustrate the impact of the manufacturer's internet entry on firms' demands, profits, and pricing strategies and on consumer welfare. Our analysis thus includes a condition in which the manufacturer can add an online channel. If such an online channel is opened, the channel-supported network externality will always benefit the manufacturer but hurt the retailers. Consumers, however, will only benefit from the network externality when a dominant retailer is present and will be hurt when both retailers are symmetric. Funding: National Natural Science Foundation of China, Chongqing's Natural Science Foundation, British Academy.
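The kind of manufacturer-retailer pricing game the abstract refers to can be illustrated with a much simpler, stylized model: a single retailer, linear demand, and a manufacturer moving first on the wholesale price. This is only a sketch of the game-theoretic machinery; the paper's asymmetric two-retailer model with an online channel and network externality is considerably richer, and the symbols `a`, `b`, and `cost` here are illustrative placeholders.

```python
# Stylized single-retailer Stackelberg pricing game (illustrative only).
# Demand: q = a - b * p.  The retailer, facing wholesale price w, maximizes
# (p - w) * (a - b * p), giving the best-response price p = (a/b + w) / 2.

def retailer_price(a, b, w):
    """Retailer's profit-maximizing retail price given wholesale price w."""
    return (a / b + w) / 2

def manufacturer_profit(a, b, w, cost=0.0):
    """Manufacturer's profit anticipating the retailer's best response."""
    p = retailer_price(a, b, w)
    q = a - b * p
    return (w - cost) * q

# First-order condition on the manufacturer's profit gives w* = (a/b + cost) / 2.
a, b = 100.0, 2.0
w_star = (a / b + 0.0) / 2
```

Backward induction of this kind, with extra channels and demand asymmetry layered on, is the standard route to the equilibrium comparisons (profits, prices, consumer welfare) that the abstract reports.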

    A convolutional attentional neural network for sentiment classification

    Neural network models with attention mechanisms have shown their efficiency on various tasks. However, there is little research on attention mechanisms for text classification, and existing attention models for text classification lack cognitive intuition and mathematical explanation. In this paper, we propose a new neural network architecture based on the attention model for text classification. In particular, we show mathematically that the convolutional neural network (CNN) is a reasonable model for extracting attention from text sequences. We then propose a novel attention model based on CNN and introduce a new network architecture that combines a recurrent neural network with our CNN-based attention model. Experimental results on five datasets show that our proposed models can accurately capture the salient parts of sentences to improve the performance of text classification.
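The core idea, that a convolution over token embeddings can produce attention scores, can be sketched minimally in NumPy. This is an illustrative toy (random embeddings, a single width-3 "template" kernel), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conv_attention(embeddings, kernel):
    """Score each token position by correlating a 1-D kernel (a 'template')
    with a window of embeddings, then normalize the scores with softmax.
    The weighted sum of embeddings is the attended sentence vector."""
    n, d = embeddings.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.vstack([np.zeros((pad, d)), embeddings, np.zeros((pad, d))])
    scores = np.array([np.sum(padded[i:i + k] * kernel) for i in range(n)])
    weights = softmax(scores)
    context = weights @ embeddings  # attention-weighted sentence representation
    return weights, context

tokens = rng.normal(size=(7, 16))   # 7 tokens, 16-dim embeddings (toy data)
kernel = rng.normal(size=(3, 16))   # width-3 convolution template (toy weights)
w, ctx = conv_attention(tokens, kernel)
```

In a full model the kernel would be learned end-to-end and the context vector fed to a classifier, typically alongside a recurrent encoder as the abstract describes.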

    Point Clouds Are Specialized Images: A Knowledge Transfer Approach for 3D Understanding

    Self-supervised representation learning (SSRL) has gained increasing attention in point cloud understanding for addressing the challenges posed by 3D data scarcity and high annotation costs. This paper presents PCExpert, a novel SSRL approach that reinterprets point clouds as "specialized images". This conceptual shift allows PCExpert to leverage knowledge derived from the large-scale image modality in a more direct and deeper manner, via extensively sharing parameters with a pre-trained image encoder in a multi-way Transformer architecture. The parameter sharing strategy, combined with a novel pretext task for pre-training, i.e., transformation estimation, empowers PCExpert to outperform the state of the art in a variety of tasks, with a remarkable reduction in the number of trainable parameters. Notably, PCExpert's performance under LINEAR fine-tuning (e.g., a 90.02% overall accuracy on ScanObjectNN) already approaches the results obtained with FULL model fine-tuning (92.66%), demonstrating its effective and robust representation capability.
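The LINEAR fine-tuning protocol mentioned above has a simple shape: the pre-trained encoder is frozen and only a linear classifier is trained on its features. The sketch below illustrates that protocol with a stand-in "encoder" (a fixed random projection) and a ridge-regression head; none of it is PCExpert's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

n, pts, dim = 40, 32, 8
W_FIXED = rng.normal(size=(pts * 3, dim))   # frozen "encoder" parameters (stand-in)

def frozen_encoder(points):
    """Stand-in for a frozen pre-trained encoder: any fixed feature map
    works for illustrating linear probing; here a fixed linear projection."""
    return points.reshape(points.shape[0], -1) @ W_FIXED

def linear_probe(features, labels, n_classes, lam=1e-2):
    """Fit only a linear head (ridge regression to one-hot targets);
    the encoder's weights are never updated."""
    Y = np.eye(n_classes)[labels]
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + lam * np.eye(d), features.T @ Y)

clouds = rng.normal(size=(n, pts, 3))       # toy "point clouds"
labels = rng.integers(0, 4, size=n)
feats = frozen_encoder(clouds)
W_head = linear_probe(feats, labels, n_classes=4)
preds = (feats @ W_head).argmax(axis=1)
```

The closer such a probe gets to full fine-tuning accuracy, the stronger the frozen representation, which is the comparison the abstract's 90.02% vs. 92.66% numbers make.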

    Convolution-based neural attention with applications to sentiment classification

    The neural attention mechanism has achieved many successes in various natural language processing tasks. However, existing neural attention models based on a densely connected network are only loosely related to the attention mechanism found in psychology and neuroscience. Motivated by the finding in neuroscience that humans possess a template-searching attention mechanism, we propose to use the convolution operation to simulate attention and give a mathematical explanation of our neural attention model. We then introduce a new network architecture, which combines a recurrent neural network with our convolution-based attention model and further stacks an attention-based neural model to build a hierarchical sentiment classification model. The experimental results show that our proposed models can capture salient parts of the text to improve the performance of sentiment classification at both the sentence level and the document level.

    The effect of the gravitational constant variation on the propagation of gravitational waves

    Since the first detection of gravitational waves, they have been used to investigate various fundamental problems, including the variation of physical constants. Regarding the gravitational constant, previous works focused on the effect of the gravitational constant variation on gravitational wave generation. In this paper, we investigate the effect of the gravitational constant variation on gravitational wave propagation. The Maxwell-like equation that describes the propagation of gravitational waves is extended to account for situations where the gravitational constant varies. Based on this equation, we find that the amplitude of gravitational waves will be corrected. Consequently, the distance to the gravitational wave source estimated without considering such a correction may be biased. Applying our correction to the well-known binary neutron star coalescence event GW170817, we obtain a constraint on the variation of the gravitational constant. Relating our result to the Yukawa deviation of gravity, we obtain for the first time a constraint on the Yukawa parameters at the 10 Mpc scale. This scale corresponds to a graviton mass $m_g \sim 10^{-31}$ eV.
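The distance-bias argument can be written schematically. Suppose propagation effects rescale the observed amplitude by some factor $(1+\delta)$, where $\delta$ is a placeholder for the paper's $G$-dependent correction (not its actual expression). Since the luminosity distance is inferred from the amplitude via $h \propto 1/d_L$, the inferred distance is biased by the same factor:

```latex
h_{\mathrm{obs}} = (1+\delta)\, h_{\mathrm{GR}},
\qquad
h \propto \frac{1}{d_L}
\;\Longrightarrow\;
d_L^{\mathrm{inferred}} = \frac{d_L^{\mathrm{true}}}{1+\delta}.
```

Comparing the gravitational-wave distance with an independent (e.g., electromagnetic) distance to the same source, as is possible for GW170817, then bounds $\delta$ and hence the underlying variation of $G$.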

    VoxGenesis: Unsupervised Discovery of Latent Speaker Manifold for Speech Synthesis

    Achieving nuanced and accurate emulation of the human voice has been a longstanding goal in artificial intelligence. Although significant progress has been made in recent years, mainstream speech synthesis models still rely on supervised speaker modeling and explicit reference utterances. However, there are many aspects of the human voice, such as emotion, intonation, and speaking style, for which it is hard to obtain accurate labels. In this paper, we propose VoxGenesis, a novel unsupervised speech synthesis framework that can discover a latent speaker manifold and meaningful voice editing directions without supervision. VoxGenesis is conceptually simple. Instead of mapping speech features to waveforms deterministically, VoxGenesis transforms a Gaussian distribution into speech distributions conditioned and aligned by semantic tokens. This forces the model to learn a speaker distribution disentangled from the semantic content. During inference, sampling from the Gaussian distribution enables the creation of novel speakers with distinct characteristics. More importantly, exploration of the latent space uncovers human-interpretable directions associated with specific speaker characteristics such as gender attributes, pitch, tone, and emotion, allowing for voice editing by manipulating the latent codes along these identified directions. We conduct extensive experiments to evaluate the proposed VoxGenesis using both subjective and objective metrics, finding that it produces significantly more diverse and realistic speakers with distinct characteristics than previous approaches. We also show that latent space manipulation produces consistent and human-identifiable effects that are not detrimental to speech quality, which was not possible with previous approaches. Audio samples of VoxGenesis can be found at: \url{https://bit.ly/VoxGenesis}
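The sampling-and-editing interface the abstract describes reduces to two small operations on a latent code: draw a speaker embedding from the Gaussian prior, and translate it along an interpretable direction. The sketch below shows only that interface; the direction here is a random stand-in, whereas in the actual system such directions are discovered from the learned latent space.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 64

# Hypothetical "pitch" direction; in the real system this would be
# discovered in the learned latent space, not drawn at random.
pitch_direction = rng.normal(size=latent_dim)
pitch_direction /= np.linalg.norm(pitch_direction)

def sample_speaker():
    """Draw a novel speaker embedding from the Gaussian prior N(0, I)."""
    return rng.normal(size=latent_dim)

def edit(z, direction, alpha):
    """Move a speaker latent along an interpretable direction by step alpha."""
    return z + alpha * direction

z = sample_speaker()                  # a novel speaker
z_edited = edit(z, pitch_direction, alpha=2.0)   # same speaker, shifted attribute
```

The synthesis network (not shown) would then decode `z` or `z_edited`, conditioned on semantic tokens, into a waveform.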

    4K-DMDNet: diffraction model-driven network for 4K computer-generated holography

    Deep learning offers a novel opportunity to achieve both high-quality and high-speed computer-generated holography (CGH). Current data-driven deep learning algorithms face the challenge that labeled training datasets limit training performance and generalization. Model-driven deep learning introduces the diffraction model into the neural network, eliminating the need for a labeled training dataset, and has been extensively applied to hologram generation. However, existing model-driven deep learning algorithms suffer from insufficient constraints. In this study, we propose a model-driven neural network capable of high-fidelity 4K computer-generated hologram generation, called the 4K Diffraction Model-driven Network (4K-DMDNet). The constraint on the reconstructed images in the frequency domain is strengthened, and a network structure combining the residual method and the sub-pixel convolution method is built, which effectively enhances the fitting ability of the network for inverse problems. The generalization of the 4K-DMDNet is demonstrated with binary, grayscale, and 3D images. High-quality full-color optical reconstructions of the 4K holograms have been achieved at wavelengths of 450 nm, 520 nm, and 638 nm.
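The sub-pixel convolution method mentioned above ends in a fixed rearrangement step, often called pixel shuffle: a low-resolution tensor with r² times the channels is rearranged into a high-resolution tensor. A minimal NumPy sketch of just that rearrangement (not the network itself):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel upsampling: rearrange a (C*r^2, H, W) array into (C, H*r, W*r).
    Each group of r^2 channels supplies the r x r sub-pixels of one output pixel."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(2 * 4, 3, 3)
y = pixel_shuffle(x, r=2)            # (2, 6, 6)
```

Because the upsampling is a pure reindexing, all learning happens in the preceding convolutions, which is what makes sub-pixel convolution efficient at 4K output resolutions.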

    Network Pruning Spaces

    Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop. This work focuses on filter pruning, which enables accelerated inference with any off-the-shelf deep learning library and hardware. We propose the concept of \emph{network pruning spaces} that parametrize populations of subnetwork architectures. Based on this concept, we explore the structural aspects of subnetworks that result in minimal loss of accuracy in different pruning regimes and arrive at a series of observations by comparing subnetwork distributions. We conjecture through empirical studies that there exists an optimal FLOPs-to-parameter-bucket ratio, related to the design of the original network, in a given pruning regime. Statistically, the structure of a winning subnetwork guarantees an approximately optimal ratio in this regime. Building on these conjectures, we further refine the initial pruning space to reduce the cost of searching for a good subnetwork architecture. Our experimental results on ImageNet show that the subnetwork we found is superior to those from state-of-the-art pruning methods under comparable FLOPs.
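The FLOPs-to-parameter ratio at the center of the conjecture is easy to compute for a convolutional configuration, since filter pruning changes a layer's channel counts. The sketch below uses a hypothetical three-layer network and counts multiply-accumulates as FLOPs; the layer shapes are illustrative, not taken from the paper.

```python
def conv_stats(c_in, c_out, k, h, w):
    """Parameters and multiply-accumulate FLOPs of one k x k conv layer
    producing a c_out x h x w output."""
    params = c_out * c_in * k * k
    flops = params * h * w          # each weight is applied at every output location
    return params, flops

def network_ratio(layers):
    """FLOPs-to-parameter ratio of a (possibly pruned) layer configuration."""
    p = sum(conv_stats(*layer)[0] for layer in layers)
    f = sum(conv_stats(*layer)[1] for layer in layers)
    return f / p

# Hypothetical net: (c_in, c_out, kernel, out_h, out_w) per layer,
# unpruned vs. the middle layer pruned to half its filters.
base = [(3, 64, 3, 32, 32), (64, 128, 3, 16, 16), (128, 256, 3, 8, 8)]
pruned = [(3, 64, 3, 32, 32), (64, 64, 3, 16, 16), (64, 256, 3, 8, 8)]
```

Because early layers run at higher spatial resolution, pruning different layers shifts this ratio differently even at equal parameter savings, which is why subnetworks in the same FLOPs regime can have very different structures.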