73 research outputs found

    A Comprehensive Approach to Investigating the Social Dimension in European Higher Education Systems—EUROSTUDENT and the PL4SD Country Reviews

    Additional file 4: Table S4. The Akaike information criterion (AIC) for different models compared to REML-PED. The log likelihoods of the different models were used to calculate the AIC following [35]. The AIC for "REML-PED" was scaled to zero for each trait, and the AIC for each other model is expressed as its difference from the "REML-PED" AIC. "GREML-MS" is the relative residual variance and DRP variance calculated using the GREML-MS method with partitioning of imputed sequence variants into MAF groups. "REML-GRM" is the relative residual variance and DRP variance calculated by fitting 50 k SNPs with the REML-GRM model implemented in GCTA. "REML-PED" is the relative residual variance and DRP variance calculated by fitting pedigree relationships with the REML-PED model implemented in DMU. "REML-PEDGRM" is the relative residual variance and DRP variance calculated by fitting both 50 k SNPs and pedigree relationships with the REML-PEDGRM model implemented in DMU. Results for models that did not converge for a trait are not presented.
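    The scaling described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the parameter counts and log likelihoods below are hypothetical placeholders, and AIC is computed as 2k − 2·lnL, then expressed relative to the REML-PED baseline.

```python
# Illustrative sketch of Table S4's AIC comparison: AIC = 2k - 2*lnL,
# with each model's AIC expressed as the difference from REML-PED,
# whose AIC is scaled to zero. All numbers below are hypothetical.

# (number of estimated parameters k, log likelihood lnL) per model
models = {
    "REML-PED":    (2, -1052.3),
    "REML-GRM":    (2, -1048.7),
    "REML-PEDGRM": (3, -1047.9),
    "GREML-MS":    (5, -1045.1),
}

aic = {name: 2 * k - 2 * lnl for name, (k, lnl) in models.items()}
baseline = aic["REML-PED"]
delta_aic = {name: a - baseline for name, a in aic.items()}

for name, d in delta_aic.items():
    print(f"{name}: dAIC = {d:+.1f}")
```

    A lower (more negative) difference indicates a better-fitting model relative to the pedigree-only baseline.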

    Data_Sheet_1_Understanding of facial features in face perception: insights from deep convolutional neural networks.PDF

    Introduction: Face recognition has been a longstanding subject of interest in cognitive neuroscience and computer vision research. One key focus has been to understand the relative importance of different facial features in identifying individuals. Previous studies in humans have demonstrated the crucial role of eyebrows in face recognition, potentially even surpassing the importance of the eyes. However, eyebrows are not only vital for face recognition but also play a significant role in recognizing facial expressions and intentions, which might occur simultaneously and influence the face recognition process.

    Methods: To address these challenges, our current study aimed to leverage the power of deep convolutional neural networks (DCNNs), an artificial face recognition system, which can be specifically tailored for face recognition tasks. In this study, we investigated the relative importance of various facial features in face recognition by selectively blocking feature information from the input to the DCNN. Additionally, we conducted experiments in which we systematically blurred the information related to eyebrows to varying degrees.

    Results: Our findings aligned with previous human research, revealing that eyebrows are the most critical feature for face recognition, followed by eyes, mouth, and nose, in that order. The results demonstrated that the presence of eyebrows was more crucial than their specific high-frequency details, such as edges and textures, compared to other facial features, where the details also played a significant role. Furthermore, our results revealed that, unlike other facial features, the activation map indicated that the significance of eyebrow areas could not be readily adjusted to compensate for the absence of eyebrow information. This finding explains why masking eyebrows led to more significant deficits in face recognition performance. Additionally, we observed a synergistic relationship among facial features, providing evidence for holistic processing of faces within the DCNN.

    Discussion: Overall, our study sheds light on the underlying mechanisms of face recognition and underscores the potential of using DCNNs as valuable tools for further exploration in this field.
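    The two manipulations the abstract describes, blocking a feature entirely versus removing only its high-frequency detail, can be sketched as image-array operations applied before the input reaches a DCNN. This is a hedged illustration, not the paper's code: the bounding-box coordinates are hypothetical, and real studies would locate regions from facial landmarks.

```python
# Minimal sketch of feature occlusion vs. blurring for a DCNN input.
# Not the paper's implementation; the eyebrow box below is hypothetical.
import numpy as np

def block_feature(img, box):
    """Remove a feature entirely by filling its box with the image mean."""
    y0, y1, x0, x1 = box
    out = img.copy()
    out[y0:y1, x0:x1] = img.mean()
    return out

def blur_feature(img, box, k=7):
    """Keep the feature present but suppress its high-frequency detail
    with a simple k-by-k box blur over the region."""
    y0, y1, x0, x1 = box
    out = img.copy()
    region = img[y0:y1, x0:x1]
    pad = k // 2
    padded = np.pad(region, pad, mode="edge")
    blurred = np.zeros_like(region)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + region.shape[0], dx:dx + region.shape[1]]
    out[y0:y1, x0:x1] = blurred / (k * k)
    return out

face = np.random.rand(112, 112).astype(np.float32)   # stand-in face image
eyebrow_box = (28, 40, 20, 92)                       # hypothetical region
masked = block_feature(face, eyebrow_box)
softened = blur_feature(face, eyebrow_box)
```

    Comparing the DCNN's recognition accuracy on `masked` versus `softened` inputs is what lets the study separate the contribution of a feature's presence from that of its fine detail.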

    Sport education and lifelong learning

    Pd(II)-catalyzed C–H sulfonylation of azobenzenes with arylsulfonyl chlorides has been developed. The sulfonylazobenzenes were obtained in moderate to excellent yields across 28 examples. This protocol features high efficiency, wide functional group tolerance, and atom economy.

    The description of class labels of SEU.

    Learning powerful discriminative features is key for machine fault diagnosis. Most existing methods based on convolutional neural networks (CNNs) have achieved promising results. However, they primarily focus on global features derived from sample signals and fail to explicitly mine relationships between signals. In contrast, graph convolutional networks (GCNs) can efficiently mine data relationships by taking graph data with topological structure as input, making them highly effective for feature representation in non-Euclidean space. In this article, to exploit the advantages of both CNNs and GCNs, we propose a graph attentional convolutional neural network (GACNN) for effective intelligent fault diagnosis, which comprises two subnetworks, a fully convolutional network and a GCN, to extract multilevel feature information, and uses an Efficient Channel Attention (ECA) mechanism to reduce information loss. Extensive experiments on three datasets show that our framework improves the representation ability of features and fault diagnosis performance, and achieves competitive accuracy against other approaches. The results also show that GACNN can achieve superior performance even in a strong background-noise environment.
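    The ECA mechanism the abstract mentions can be sketched compactly: global average pooling per channel, a small 1-D convolution across neighboring channels, a sigmoid gate, and channel-wise rescaling. This is a minimal NumPy illustration under that general description, not the authors' implementation; the kernel weights are untrained placeholders.

```python
# Minimal sketch of Efficient Channel Attention (ECA); illustrative only,
# with untrained (uniform) 1-D convolution weights.
import numpy as np

def eca(x, k=3):
    """x: feature map of shape (C, H, W); k: odd 1-D conv kernel size."""
    c = x.shape[0]
    kernel = np.full(k, 1.0 / k)            # placeholder weights
    squeezed = x.mean(axis=(1, 2))          # global average pooling -> (C,)
    pad = k // 2
    padded = np.pad(squeezed, pad, mode="edge")
    conv = np.array([padded[i:i + k] @ kernel for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))      # sigmoid over channel scores
    return x * gate[:, None, None]          # rescale each channel

features = np.random.rand(64, 16, 16)       # stand-in CNN feature map
out = eca(features)
```

    Because the gate is computed from a local 1-D convolution over channels rather than a fully connected bottleneck, ECA adds very few parameters, which is why it is a popular drop-in attention block.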

    The architecture of the attention block.


    The diagnostic result of MFPT.


    Example of graph construction using complete graph.


    The diagnostic result of SEU.


    The diagnostic result of CWRU.


    Setting of the models of the bearing dataset.
