
    Effects of Limiting the Number of Ball Touches on Physical and Technical Performance of the Junior Football Players during Small-sided Game

    PURPOSE We aimed to examine the effects of limiting the number of ball touches on the physical and technical performance of junior football players during small-sided games (SSGs), which are widely used to improve football-specific physical and technical performance. METHODS Nineteen middle-school football players participated in the study and took a pretest so that their physical and technical skills could be evaluated before the main experiment. To balance the teams in the SSG according to the players' levels of physical fitness and skill, we ranked the players from highest to lowest total score and assigned them to teams in an ABBA order. The ten players who obtained the highest scores participated in the SSG once a week for 5 weeks under a limit on the number of ball touches (one, two, three, four, or free touches); players could only play with the set number of touches. Each SSG consisted of 4-min sets with 4-min breaks after each set on a pitch with a goal. RESULTS As the number of allowed touches increased, the players' total distance and average speed increased, and the proportion of distance covered by running (over 13 km/h), but not by walking or jogging, also increased. Regarding technical factors, as the number of allowed touches increased, the number of passes decreased, whereas the rates of dribbles and defensive tackles increased. CONCLUSIONS As the number of allowed ball touches increased during the SSG, the young players, unlike professional players, covered a greater distance at a higher speed, and the frequencies of the skills used most often during the SSG, such as passing and dribbling, also changed accordingly.
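    As a small illustration of the ABBA team-balancing rule described above, the sketch below assigns a ranked list of players to two teams; the player labels are hypothetical and only the ordering rule is taken from the abstract.

```python
# Illustrative sketch of ABBA team balancing from ranked pretest scores.
# Player names are hypothetical; only the ABBA ordering rule comes from the abstract.

def abba_split(ranked_players):
    """Assign players ranked best-to-worst to two teams in ABBA order."""
    team_a, team_b = [], []
    pattern = [team_a, team_b, team_b, team_a]  # A, B, B, A, repeated
    for i, player in enumerate(ranked_players):
        pattern[i % 4].append(player)
    return team_a, team_b

# Ten hypothetical players already sorted by total pretest score (highest first).
players = [f"P{i}" for i in range(1, 11)]
team_a, team_b = abba_split(players)
print("Team A:", team_a)  # P1, P4, P5, P8, P9
print("Team B:", team_b)  # P2, P3, P6, P7, P10
```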

    CGC: Contrastive Graph Clustering for Community Detection and Tracking

    Given entities and their interactions in web data, which may have occurred at different times, how can we find communities of entities and track their evolution? In this paper, we approach this important task from a graph clustering perspective. Recently, state-of-the-art clustering performance in various domains has been achieved by deep clustering methods. In particular, deep graph clustering (DGC) methods have successfully extended deep clustering to graph-structured data by learning node representations and cluster assignments in a joint optimization framework. Despite some differences in modeling choices (e.g., encoder architectures), existing DGC methods are mainly based on autoencoders and use the same clustering objective with relatively minor adaptations. Also, while many real-world graphs are dynamic, previous DGC methods considered only static graphs. In this work, we develop CGC, a novel end-to-end framework for graph clustering, which fundamentally differs from existing methods. CGC learns node embeddings and cluster assignments in a contrastive graph learning framework, where positive and negative samples are carefully selected in a multi-level scheme such that they reflect hierarchical community structure and network homophily. We also extend CGC to time-evolving data, where temporal graph clustering is performed in an incremental learning fashion, with the ability to detect change points. Extensive evaluation on real-world graphs demonstrates that the proposed CGC consistently outperforms existing methods. Comment: TheWebConf 2022 Research Track
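    A hedged sketch of the kind of objective such a contrastive framework builds on: an InfoNCE-style loss over node embeddings, where a neighbor from the same community acts as the positive sample. This is a generic illustration, not the actual CGC objective or its multi-level sampling scheme; the embeddings and the temperature are arbitrary.

```python
# InfoNCE-style contrastive loss for one anchor node: pull the positive close,
# push the negatives away. Not the CGC objective itself, just the general idea.
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 16))   # hypothetical node embeddings
# Treat node 1 as the positive for node 0 (e.g., a neighbor in the same community)
# and the remaining nodes as negatives.
loss = info_nce(z[0], z[1], [z[i] for i in range(2, 6)])
print(f"contrastive loss for node 0: {loss:.4f}")
```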

    Fairness-Aware Graph Neural Networks: A Survey

    Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance on many fundamental learning tasks. Despite this success, GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism at the heart of the large class of GNN models. In this article, we examine and categorize techniques for improving the fairness of GNNs. Previous work on fair GNN models and techniques is discussed in terms of whether it focuses on improving fairness during a preprocessing step, during training, or in a post-processing phase. Furthermore, we discuss how such techniques can be combined whenever appropriate, and highlight their advantages and the intuition behind them. We also introduce an intuitive taxonomy of fairness evaluation metrics, including graph-level, neighborhood-level, embedding-level, and prediction-level fairness metrics. In addition, graph datasets that are useful for benchmarking the fairness of GNN models are summarized succinctly. Finally, we highlight key open problems and challenges that remain to be addressed.
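    As an example of what a prediction-level fairness metric can look like, the sketch below computes the statistical parity difference over binary node predictions; the predictions and sensitive groups are synthetic, and the survey's own metric taxonomy is considerably broader.

```python
# Statistical parity difference: |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|,
# a common prediction-level fairness metric. Data below is synthetic.
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    p0 = y_pred[sensitive == 0].mean()
    p1 = y_pred[sensitive == 1].mean()
    return abs(p0 - p1)

y_hat = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical node-level predictions
s     = [0, 0, 0, 0, 1, 1, 1, 1]   # hypothetical sensitive group per node
print(statistical_parity_difference(y_hat, s))  # 0.5
```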

    Leveraging Graph Diffusion Models for Network Refinement Tasks

    Most real-world networks are noisy and incomplete samples from an unknown target distribution. Refining them by correcting corruptions or inferring unobserved regions typically improves downstream performance. Inspired by the impressive generative capabilities that have been used to correct corruptions in images, and by the similarity between image "in-painting" and filling in missing nodes and edges conditioned on the observed graph, we propose a novel graph generative framework, SGDM, which is based on subgraph diffusion. Our framework not only improves the scalability and fidelity of graph diffusion models, but also leverages the reverse process to perform novel conditional generation tasks. In particular, through extensive empirical analysis and a set of novel metrics, we demonstrate that our proposed model effectively supports the following refinement tasks for partially observable networks: T1: denoising extraneous subgraphs, T2: expanding existing subgraphs, and T3: performing "style" transfer by regenerating a particular subgraph to match the characteristics of a different node or subgraph. Comment: Work in Progress. 21 pages, 7 figures
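    To make the "in-painting" framing concrete, the toy snippet below hides a few edges of a small graph and scores candidate edges conditioned on the observed remainder. The common-neighbor heuristic is only a naive stand-in for the subgraph diffusion model proposed in the paper, and the hidden edges are arbitrary.

```python
# Toy edge "in-painting": hide edges, then rank candidate missing edges by how many
# neighbors the endpoints share in the observed graph. Not the SGDM method.
import itertools
import networkx as nx

g = nx.karate_club_graph()            # stand-in for an observed network
hidden = [(0, 1), (0, 2), (2, 3)]     # pretend these edges were never observed
observed = g.copy()
observed.remove_edges_from(hidden)

scores = {
    (u, v): len(list(nx.common_neighbors(observed, u, v)))
    for u, v in itertools.combinations(observed.nodes, 2)
    if not observed.has_edge(u, v)
}
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("top candidate edges to fill in:", top)
```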

    Function Analysis of the Euclidean Distance between Probability Distributions

    Minimization of the Euclidean distance between the output distribution and a set of Dirac delta functions, used as a performance criterion, is known to match the distribution of the system output to the delta functions. In analyzing the algorithm developed from that criterion and recursive gradient estimation, this paper reveals that the minimization of the cost function involves two gradients with different functions: one forces the output samples to spread out, and the other compels the output samples to move close to the symbol points. To investigate the two functions, each gradient is controlled separately through individual normalization with its related input. From the analysis and experimental results, it is verified that one gradient accelerates the initial convergence by spreading the output samples, while the other lowers the minimum mean squared error (MSE) by pulling the error samples close together.
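    To make the two roles concrete, the Euclidean-distance criterion can be written out in its usual information-theoretic-learning form; the Parzen estimate $f_Y$, the symbol points $d_m$, and the labeling of the terms below are standard notation rather than expressions quoted from the paper.

$$
ED \;=\; \int \big(f_Y(y) - f_D(y)\big)^2 \, dy
\;=\; \underbrace{\int f_Y^2(y)\,dy}_{\text{spreads the output samples}}
\;-\; 2\,\underbrace{\int f_Y(y)\,f_D(y)\,dy}_{\text{attracts samples to the symbol points}}
\;+\; \int f_D^2(y)\,dy ,
$$

    where $f_Y$ is a kernel (Parzen) estimate of the output density and $f_D(y) = \tfrac{1}{M}\sum_{m=1}^{M}\delta(y - d_m)$ is the target distribution built from the $M$ symbol points; the last term does not depend on the system weights. Differentiating the first term yields the gradient that spreads the output samples (aiding initial convergence), while differentiating the cross term yields the gradient that pulls them toward the symbol points (lowering the steady-state MSE).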

    Blind Signal Processing Algorithms Based on Recursive Gradient Estimation

    Blind algorithms based on the Euclidean distance (ED) between the output distribution function and a set of Dirac delta functions carry a heavy computational burden because of the double summation over the sample size and the symbol points. In this paper, a recursive approach to the estimation of the ED and its gradient is proposed to reduce the computational complexity for an efficient implementation of the algorithm. The ED of the algorithm is composed of information potentials (IPs), and the IPs at the next iteration can be calculated recursively from the currently available IPs. Using the recursively estimated IPs, the gradient for the next weight update can likewise be estimated recursively from the present gradient. With this recursive approach, the computational complexity of the gradient calculation is greatly reduced. The simulation results show that the proposed gradient estimation method achieves this significantly reduced computational complexity while keeping the same performance as the block-processing method.
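    A minimal sketch of the recursive idea, assuming a sliding window of length N and a Gaussian kernel of size sigma: the unnormalized IP of the new window is obtained from the previous one by dropping the kernel terms of the outgoing sample and adding those of the incoming sample, an O(N) update instead of the O(N^2) block computation. This is a generic recursion, not the paper's exact update equations.

```python
# Recursive (sliding-window) information-potential estimate with a Gaussian kernel.
# Window length N and kernel size sigma are illustrative choices.
import numpy as np

def gauss(u, sigma=1.0):
    return np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def ip_direct(window, sigma=1.0):
    """O(N^2) block estimate: IP = (1/N^2) * sum_i sum_j G(y_i - y_j)."""
    y = np.asarray(window)
    return gauss(y[:, None] - y[None, :], sigma).sum() / len(y)**2

rng = np.random.default_rng(1)
y = rng.normal(size=200)               # hypothetical equalizer output samples
N, sigma = 32, 1.0

window = list(y[:N])
s = ip_direct(window, sigma) * N**2    # unnormalized IP of the first window
for k in range(N, len(y)):
    old, new = window[0], y[k]
    rest = np.asarray(window[1:])      # samples shared by the old and new windows
    # O(N) update: drop the pairs involving `old`, add the pairs involving `new`
    # (the two G(0) self-terms cancel).
    s += 2 * (gauss(rest - new, sigma).sum() - gauss(rest - old, sigma).sum())
    window = window[1:] + [new]

print("recursive IP:", s / N**2)
print("direct IP   :", ip_direct(window, sigma))   # should match
```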

    A Study on the Interrelations of Decision-Making Factors of Information System (IS) Upgrades for Sustainable Business Using Interpretive Structural Modeling and MICMAC Analysis

    An information system (IS) upgrade is an essential way to enhance the competitiveness of an organization. Specifically, the decision-making process surrounding IS upgrades is one of the most important contributors to an organization's competitiveness with regard to business sustainability. Previous research on IS upgrade decisions has focused on implementing a more efficient decision-making system by determining when IS upgrades should be performed based on cost, from the perspective of both users and experts. However, if the decision on an IS upgrade is delayed or not made accurately due to the constraints of a specific business environment, such as job, position, or cost, an organization can lose its business competitiveness. In this context, the present study identifies the main factors involved in the decision-making process surrounding IS upgrades, and analyzes the interrelations among these factors in an organization with regard to users, managers, and experts. The interpretive structural modeling (ISM) method is used as an analytical tool to analyze the characteristics and interrelations of the factors based on a real system model called the User-Centered Training System (UCTS). Based on the results, the present study provides deeper insight into the decision-making factors and directional models, and allows for more efficient management of the IS upgrade decision-making problem caused by differences in the business environment between each layer (i.e., users, managers, and experts). Specifically, according to our results, users are more likely to think about the positive effects and benefits an upgrade could have on their own work, rather than about organizational benefits. By contrast, managers reason that IS upgrades should have a positive impact on the overall organizational goals and benefits. Finally, experts think that an IS upgrade should benefit both the organization and users. Taken together, the results of the present study are meaningful in that they clearly show the interrelationships between the decision-making factors on each of these levels.
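    The core computation behind ISM and MICMAC can be sketched as follows: take an initial binary relation matrix over the factors, form the final reachability matrix by transitive closure, and read off each factor's driving and dependence power. The four factors and their relations below are hypothetical, not the decision-making factors identified in the study.

```python
# ISM reachability matrix via transitive closure, plus MICMAC driving/dependence power.
import numpy as np

# Initial reachability matrix: A[i][j] = 1 if factor i influences factor j (self-loops included).
A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
], dtype=int)

# Transitive closure (Warshall): if i reaches k and k reaches j, then i reaches j.
R = A.copy()
n = len(R)
for k in range(n):
    for i in range(n):
        for j in range(n):
            R[i, j] = R[i, j] or (R[i, k] and R[k, j])

driving = R.sum(axis=1)      # MICMAC driving power: how many factors each one reaches
dependence = R.sum(axis=0)   # MICMAC dependence power: how many factors reach each one
for f in range(n):
    print(f"factor F{f+1}: driving={driving[f]}, dependence={dependence[f]}")
```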

    Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and of the algorithm's robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, which is estimated recursively to reduce the computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and a faster convergence speed than the original MEE algorithm. Under the same convergence speed, its steady-state MSE improvement is above 3 dB.
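    A minimal sketch of what step-size normalization with a recursively estimated power typically looks like, assuming a forgetting factor $\beta$, a small regularizer $\varepsilon$, and an error-entropy-based cost estimate $\hat{V}(k)$; these symbols are illustrative notation, not the paper's exact expressions.

$$
P(k) = \beta\, P(k-1) + (1-\beta)\,\lVert \mathbf{x}(k) \rVert^{2},
\qquad
\mathbf{w}(k+1) = \mathbf{w}(k) - \frac{\mu_0}{\varepsilon + P(k)}\, \nabla_{\mathbf{w}} \hat{V}(k),
$$

    so the effective step size shrinks when the recursively tracked power $P(k)$ is large and grows when it is small, which is the usual mechanism by which normalization speeds up convergence without raising the steady-state MSE.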

    Optimizing the Multistage University Admission Decision Process

    The admission decision process is an important operational management problem for many universities, although admission control processes may differ among them. In this paper, we focus on the problem at the Korea Advanced Institute of Science and Technology (KAIST). We assume that individual applications are evaluated and ranked based on paper evaluations and (optional) interview results. We use the term university admission decision to mean determining the number of admission offers that will meet the target number of enrollments. The major complexity of an admission decision comes from the enrollment uncertainty of admitted applicants. In the method proposed in this paper, we use logistic regression on past data to estimate the enrollment probability of each applicant. We then model the admission decision problem as a Markov decision process, from which we formulate optimal decision making. The proposed method outperformed human experts in meeting the enrollment target on the validation data from 2014 and 2015, and KAIST successfully used it for its admission decisions in the 2016 academic year.
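    A simplified sketch of the two ingredients described above: logistic regression on past data to estimate each applicant's enrollment probability, and a rule for choosing how many offers to make so that expected enrollment meets a target. The features, data, and the greedy one-shot rule are hypothetical; the paper formulates the offer decision as a Markov decision process rather than this expectation heuristic.

```python
# Enrollment-probability estimation + a greedy offer count, as a stand-in for the MDP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical past applicants: two features (e.g., evaluation score, interview score)
# and whether each admitted applicant actually enrolled.
X_past = rng.normal(size=(500, 2))
y_past = (rng.random(500) < 1 / (1 + np.exp(-(0.8 * X_past[:, 0] - 0.5)))).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# Current applicant pool, already ranked; estimate enrollment probabilities.
X_now = rng.normal(size=(300, 2))
p_enroll = model.predict_proba(X_now)[:, 1]

# Greedy rule: extend offers down the ranked list until expected enrollment hits the target.
target = 80
cum_expected = np.cumsum(p_enroll)
n_offers = min(int(np.searchsorted(cum_expected, target)) + 1, len(p_enroll))
print(f"offers: {n_offers}, expected enrollment: {cum_expected[n_offers - 1]:.1f}")
```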