32 research outputs found

    Multivariate Relations Aggregation Learning in Social Networks

    Multivariate relations are common in many types of networks, such as biological, social, transportation, and academic networks. Because of triadic closure and the tendency of nodes to form groups, the multivariate relationships in social networks are especially complex and rich, so identifying and exploiting multivariate relationship information is particularly important for graph learning tasks on social networks. Existing graph learning methods rely on a neighborhood information diffusion mechanism, which often omits part or even all of the multivariate relationship information and ultimately hurts both the accuracy and the execution efficiency of the task. To address these challenges, this paper proposes the multivariate relationship aggregation learning (MORE) method, which effectively captures multivariate relationship information in the network. By aggregating node attribute features and structural features, MORE achieves higher accuracy and faster convergence. We conducted experiments on one citation network and five social networks. The results show that the MORE model is more accurate than the GCN (Graph Convolutional Network) model on node classification tasks and significantly reduces time cost. Comment: 11 pages, 6 figures
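    The abstract does not specify MORE's architecture, but the core idea it states, aggregating node attribute features with structural features before propagation, can be sketched as follows. The degree feature and the GCN-style normalized propagation step are illustrative stand-ins, not the paper's actual design:

    ```python
    import numpy as np

    # Toy undirected 4-node graph.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    # Node attribute features (4 nodes, 3 attributes each).
    X = np.random.default_rng(0).normal(size=(4, 3))

    # Structural features: plain node degrees here, as a stand-in for
    # the richer multivariate-relationship statistics MORE extracts.
    degree = A.sum(axis=1, keepdims=True)

    # Aggregate attribute and structural features by concatenation,
    # then apply one normalized-adjacency propagation step (GCN-style).
    H = np.concatenate([X, degree], axis=1)
    A_hat = A + np.eye(4)                                 # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    H_agg = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H

    print(H_agg.shape)  # (4, 4): 3 attribute dims + 1 structural dim
    ```

    The point of the sketch is only that structural signals enter the feature matrix before message passing, so they are not lost to the diffusion mechanism.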

    Machine Learning Education for Artists, Musicians, and Other Creative Practitioners

    This article aims to lay a foundation for the research and practice of machine learning education for creative practitioners. It begins by arguing that it is important to teach machine learning to creative practitioners and to conduct research about this teaching, drawing on related work in creative machine learning, creative computing education, and machine learning education. It then draws on research about design processes in engineering and creative practice to motivate a set of learning objectives for students who wish to design new creative artifacts with machine learning. The article then draws on education research and knowledge of creative computing practices to propose a set of teaching strategies that can support creative computing students in achieving these objectives. Explanations of these strategies are accompanied by concrete descriptions of how they have been employed to develop new lectures and activities, and to design new experiential learning and scaffolding technologies, for some of the first courses in the world focused on teaching machine learning to creative practitioners. The article subsequently draws on data collected from these courses (an online course as well as undergraduate and master's-level courses taught at a university) to begin to understand how this curriculum supported student learning, to understand learners' challenges and mistakes, and to inform future teaching and research.

    Integrated model, batch, and domain parallelism in training neural networks

    We propose a new integrated method of exploiting model, batch, and domain parallelism for the training of deep neural networks (DNNs) on large distributed-memory computers using minibatch stochastic gradient descent (SGD). Our goal is to find an efficient parallelization strategy for a fixed batch size using P processes. Our method is inspired by communication-avoiding algorithms in numerical linear algebra. We view the P processes as logically divided into a Pr × Pc grid, where the Pr dimension is implicitly responsible for model/domain parallelism and the Pc dimension is implicitly responsible for batch parallelism. In practice, the integrated matrix-based parallel algorithm encapsulates these types of parallelism automatically. We analyze the communication complexity and analytically demonstrate that the lowest communication costs are often achieved neither with pure model nor with pure data parallelism. We also show how the domain-parallel approach can help extend the theoretical scaling limit of the typical batch-parallel method.
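    The claim that neither pure model nor pure data parallelism minimizes communication can be illustrated with a toy cost model. The formulas and sizes below are assumptions for illustration only, not the paper's actual analysis: model parallelism is charged for exchanging activations within each group of Pr processes, and batch parallelism for all-reducing the (sharded) gradients across Pc groups.

    ```python
    # Illustrative only: a toy communication-cost model over Pr x Pc grids.
    P = 64           # total processes
    W = 1 << 20      # model parameter count (assumed)
    act = 1 << 16    # activation size per sample (assumed)
    B = 256          # global minibatch size (assumed)

    def cost(pr, pc):
        local_batch = B // pc
        # Model parallelism: exchange activations within each Pr group.
        model_cost = local_batch * act * (pr - 1) / pr
        # Batch parallelism: all-reduce the parameter shard across Pc.
        batch_cost = (W / pr) * 2 * (pc - 1) / pc
        return model_cost + batch_cost

    grids = [(pr, P // pr) for pr in (1, 2, 4, 8, 16, 32, 64)]
    for pr, pc in grids:
        print(f"Pr={pr:2d} Pc={pc:2d} cost={cost(pr, pc):,.0f}")
    best = min(grids, key=lambda g: cost(*g))
    print("cheapest grid:", best)
    ```

    Under these assumed sizes the minimum falls at an interior grid rather than at the pure-model (Pr = P) or pure-data (Pc = P) extremes, which is the qualitative behavior the abstract describes.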

    Spatial-Temporal Neural Networks for Action Recognition

    Part 13: Human & Computer Interaction - Sound - Video - Processing. Action recognition is an important yet challenging problem in many applications. Recently, neural network and deep learning approaches have been widely applied to action recognition and have yielded impressive results. In this paper, we present a spatial-temporal neural network model to recognize human actions in videos. The network is composed of two connected structures. A two-stream-based network extracts appearance and optical-flow features from video frames, characterizing the spatial information of human actions in videos. A group of LSTM structures following the spatial network describes the temporal information of human actions. We test our model on two public datasets, and the experimental results show that our method improves action recognition accuracy compared to the baseline methods.
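    The two-stage structure the abstract describes (per-frame two-stream features, then recurrent temporal aggregation) can be sketched as below. Everything here is a hypothetical stand-in: the random projections replace the learned appearance and optical-flow CNNs, and a simple tanh recurrence replaces the LSTM stack:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for the two-stream CNN: in the paper these are learned
    # appearance and optical-flow networks; here, random projections.
    W_app = rng.normal(size=(64, 32))    # appearance stream (assumed dims)
    W_flow = rng.normal(size=(64, 32))   # optical-flow stream

    def frame_features(rgb, flow):
        # Concatenate the two streams' features for one frame.
        return np.concatenate([rgb @ W_app, flow @ W_flow])

    # A tiny recurrent unit (simple RNN, not a full LSTM) aggregating
    # per-frame features over time into one video-level state.
    W_h = rng.normal(size=(16, 16)) * 0.1
    W_x = rng.normal(size=(64, 16)) * 0.1

    def recognize(frames):
        h = np.zeros(16)
        for rgb, flow in frames:
            x = frame_features(rgb, flow)
            h = np.tanh(h @ W_h + x @ W_x)   # temporal update
        return h  # would feed a softmax classifier in the full model

    video = [(rng.normal(size=64), rng.normal(size=64)) for _ in range(10)]
    state = recognize(video)
    print(state.shape)  # (16,)
    ```

    The design point is the separation of concerns: the spatial network sees one frame at a time, and only the recurrent stage carries information across frames.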