    Groundbreaking Ceremony Invocation

    Invocation from the groundbreaking ceremony for Governors State University on June 12, 1971

    An Analysis of Material Support of Terrorism and Violent Plots: Scale and Success

    Following the attacks of September 11, 2001, material support of terrorism charges have served as a cornerstone in the U.S. Government’s fight against terrorism. However, empirical research examining the use of material support charges is lacking. The primary focus of this study is to determine whether material support charges are related to increases in terrorist attack success and scale. Using the American Terrorism Study (ATS), 177 post-9/11 Islamic Extremist-linked court cases including material support charges and 140 terrorist incidents were coded and analyzed using chi-square, logistic regression, and linear regression models. Results revealed that material support charges are related to decreases in the likelihood of incident success, due to the presence of human intelligence sources, while increasing the potential or actual scale of incidents through the number of participants. In conclusion, material support of terrorism remains a highly controversial charge that is often used when human intelligence sources are present in an investigation, but it is not related to increases in incident success.
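    The logistic-regression step described above can be sketched in a few lines. This is a hedged illustration on synthetic data, not the ATS dataset or the authors' code: the variable names (informant_present, n_participants) and the coefficient signs are assumptions chosen to mirror the abstract's stated findings.

    ```python
    import numpy as np

    # Hypothetical sketch: logistic regression via gradient ascent on
    # synthetic incident data (names and effects are illustrative only).
    rng = np.random.default_rng(0)
    n = 200
    informant_present = rng.integers(0, 2, n)   # human intelligence source
    n_participants = rng.integers(1, 10, n)     # proxy for incident scale

    # Synthetic outcome: informants lower success odds (assumed effect).
    logits = 0.5 - 1.5 * informant_present + 0.2 * n_participants
    success = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

    X = np.column_stack([np.ones(n), informant_present, n_participants])
    beta = np.zeros(3)
    for _ in range(5000):                       # plain gradient ascent
        p = 1 / (1 + np.exp(-X @ beta))
        beta += 0.01 * X.T @ (success - p) / n

    print(beta)  # fitted [intercept, informant, participants] coefficients
    ```

    In the synthetic setup the fitted informant coefficient comes out negative, matching the direction of the relationship the study reports.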

    An Adaptive Total Variation Algorithm for Computing the Balanced Cut of a Graph

    We propose an adaptive version of the total variation algorithm proposed in [3] for computing the balanced cut of a graph. The algorithm from [3] used a sequence of inner total variation minimizations to guarantee descent of the balanced cut energy as well as convergence of the algorithm. In practice the total variation minimization step is never solved exactly. Instead, an accuracy parameter is specified and the total variation minimization terminates once this level of accuracy is reached. The choice of this parameter can vastly impact both the computational time of the overall algorithm and the accuracy of the result. Moreover, since the total variation minimization step is not solved exactly, the algorithm is not guaranteed to be monotonic. In the present work we introduce a new adaptive stopping condition for the total variation minimization that guarantees monotonicity. This results in an algorithm that is actually monotonic in practice and is also significantly faster than previous, non-adaptive algorithms.
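    The adaptive-stopping idea can be sketched generically: run the inner minimization to a tolerance, and tighten that tolerance whenever the outer energy would increase. This is a toy illustration on a smooth stand-in objective, not the paper's algorithm; the function names and the specific tolerance schedule are assumptions.

    ```python
    import numpy as np

    def inner_minimize(grad, x, tol, step=0.1, max_iter=10000):
        """Gradient descent stopped once the gradient norm falls below tol
        (stands in for the inexact inner total variation minimization)."""
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= tol:
                break
            x = x - step * g
        return x

    def adaptive_descent(energy, grad, x0, tol=1.0, n_outer=20):
        """Outer loop: accept an inner result only if the energy does not
        increase; otherwise tighten tol and redo the inner step."""
        x, e = x0, energy(x0)
        for _ in range(n_outer):
            x_new = inner_minimize(grad, x, tol)
            e_new = energy(x_new)
            while e_new > e and tol > 1e-10:   # adapt: tighten tolerance
                tol *= 0.1
                x_new = inner_minimize(grad, x, tol)
                e_new = energy(x_new)
            if e_new <= e:
                x, e = x_new, e_new
        return x, e

    # Toy smooth energy standing in for the balanced-cut objective.
    energy = lambda x: float(np.sum((x - 1.0) ** 2))
    grad = lambda x: 2.0 * (x - 1.0)
    x, e = adaptive_descent(energy, grad, np.zeros(5))
    ```

    The accept-or-tighten loop is what makes the sequence of energies non-increasing by construction, which is the monotonicity property the abstract highlights.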

    Multiclass Total Variation Clustering

    Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield unimpressive results for multiclass clustering tasks. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. The results greatly outperform previous total variation algorithms and compare well with state-of-the-art NMF approaches.
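    The multiclass total variation objective can be made concrete on a tiny graph. Below is a hedged sketch (not the paper's formulation or code) of one standard way to write the multiclass TV of a cluster-assignment matrix F, where a low value means the cluster boundaries cut few or light edges:

    ```python
    import numpy as np

    def multiclass_tv(W, F):
        """Multiclass total variation of assignment matrix F on graph W:
        0.5 * sum_ij w_ij * ||F_i - F_j||_1 (illustrative definition)."""
        n = W.shape[0]
        return 0.5 * sum(W[i, j] * np.abs(F[i] - F[j]).sum()
                         for i in range(n) for j in range(n))

    # Two triangles (vertices 0-2 and 3-5) joined by one weak edge.
    W = np.zeros((6, 6))
    for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
        W[a, b] = W[b, a] = 1.0
    W[2, 3] = W[3, 2] = 0.1   # weak bridge between the triangles

    good = np.array([[1, 0]] * 3 + [[0, 1]] * 3, dtype=float)  # cuts bridge
    bad = np.array([[1, 0], [0, 1]] * 3, dtype=float)  # cuts inside triangles

    print(multiclass_tv(W, good), multiclass_tv(W, bad))
    ```

    The partition that only cuts the weak bridge has a much lower TV value than the alternating assignment, which is the behavior a TV-minimizing clustering algorithm exploits.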

    Exploring the Benefits of Teams in Multiagent Learning

    For problems requiring cooperation, many multiagent systems implement solutions among either individual agents or across an entire population towards a common goal. Multiagent teams are primarily studied when in conflict; however, organizational psychology (OP) highlights the benefits of teams among human populations for learning how to coordinate and cooperate. In this paper, we propose a new model of multiagent teams for reinforcement learning (RL) agents inspired by OP and early work on teams in artificial intelligence. We validate our model using complex social dilemmas that are popular in recent multiagent RL and find that agents divided into teams develop cooperative pro-social policies despite incentives to not cooperate. Furthermore, agents are better able to coordinate and learn emergent roles within their teams and achieve higher rewards compared to when the interests of all agents are aligned.
    Comment: 10 pages, 6 figures, published at IJCAI 2022. arXiv admin note: text overlap with arXiv:2204.0747
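    One common way to give RL agents team-level incentives, sketched below as an assumption rather than the paper's exact formulation, is to replace each agent's individual reward with its team's mean reward, so teammates share incentives while teams remain distinct:

    ```python
    def team_rewards(individual, teams):
        """Replace each agent's reward with its team's mean reward
        (an illustrative team-incentive scheme, assumed for this sketch)."""
        out = {}
        for team in teams:
            mean_r = sum(individual[a] for a in team) / len(team)
            for agent in team:
                out[agent] = mean_r
        return out

    # Four agents in two teams: rewards are pooled within each team.
    r = team_rewards({"a": 1.0, "b": 0.0, "c": 3.0, "d": 1.0},
                     [["a", "b"], ["c", "d"]])
    print(r)
    ```

    Under such a scheme an agent benefits from actions that help its teammates, which is one mechanism by which team structure can produce the pro-social policies the abstract describes.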

    Towards a Better Understanding of Learning with Multiagent Teams

    While it has long been recognized that a team of individual learning agents can be greater than the sum of its parts, recent work has shown that larger teams are not necessarily more effective than smaller ones. In this paper, we study why and under which conditions certain team structures promote effective learning for a population of individual learning agents. We show that, depending on the environment, some team structures help agents learn to specialize into specific roles, resulting in more favorable global results. However, large teams create credit assignment challenges that reduce coordination, leading to large teams performing poorly compared to smaller ones. We support our conclusions with both theoretical analysis and empirical results.
    Comment: 15 pages, 11 figures, published at the International Joint Conference on Artificial Intelligence (IJCAI) in 202
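    The credit-assignment intuition behind the large-team result admits a one-line toy calculation: if the learning signal is a shared team-mean reward (an assumption for this sketch, not necessarily the paper's model), then switching one agent's own contribution moves that signal by only 1/teamsize, so individual credit is diluted as teams grow.

    ```python
    def own_influence(team_size, others):
        """Change in the shared team-mean reward when one agent switches
        its own contribution from 0 to 1 (toy credit-assignment measure)."""
        with_action = (1.0 + sum(others)) / team_size
        without_action = (0.0 + sum(others)) / team_size
        return with_action - without_action

    small = own_influence(2, [0.5])          # team of 2
    large = own_influence(20, [0.5] * 19)    # team of 20
    print(small, large)
    ```

    The influence of an agent's own action on its learning signal shrinks from 0.5 in a team of two to 0.05 in a team of twenty, illustrating why larger teams can make coordination harder to learn.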

    Convergence and Energy Landscape for Cheeger Cut Clustering

    This paper provides both theoretical and algorithmic results for the l1-relaxation of the Cheeger cut problem. The l2-relaxation, known as spectral clustering, only loosely relates to the Cheeger cut; however, it is convex and leads to a simple optimization problem. The l1-relaxation, in contrast, is non-convex but is provably equivalent to the original problem. The l1-relaxation therefore trades convexity for exactness, yielding improved clustering results at the cost of a more challenging optimization. The first challenge is understanding convergence of algorithms. This paper provides the first complete proof of convergence for algorithms that minimize the l1-relaxation. The second challenge entails comprehending the l1-energy landscape, i.e. the set of possible points to which an algorithm might converge. We show that l1-algorithms can get trapped in local minima that are not globally optimal and we provide a classification theorem to interpret these local minima. This classification gives meaning to these suboptimal solutions and helps to explain, in terms of graph structure, when the l1-relaxation provides the solution of the original Cheeger cut problem.
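    The underlying combinatorial objective can be shown on a tiny graph. The sketch below brute-forces the Cheeger cut ratio (cut weight over the size of the smaller side) over all bipartitions; it illustrates the quantity the relaxations approximate, not the paper's l1-minimization algorithm:

    ```python
    from itertools import combinations

    import numpy as np

    def cheeger_ratio(W, S):
        """Cut weight between S and its complement, divided by the size of
        the smaller side (one standard form of the Cheeger cut value)."""
        n = W.shape[0]
        Sc = [v for v in range(n) if v not in S]
        cut = sum(W[i, j] for i in S for j in Sc)
        return cut / min(len(S), len(Sc))

    # Two triangles (vertices 0-2 and 3-5) joined by one weak edge.
    W = np.zeros((6, 6))
    for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
        W[a, b] = W[b, a] = 1.0
    W[2, 3] = W[3, 2] = 0.1

    # Brute force over all proper subsets: feasible only for tiny graphs,
    # which is exactly why relaxations of this objective are needed.
    best = min((frozenset(S) for k in range(1, 6)
                for S in combinations(range(6), k)),
               key=lambda S: cheeger_ratio(W, S))
    print(best, cheeger_ratio(W, best))
    ```

    The optimal cut separates the two triangles along the weak bridge; the l1-relaxation discussed in the abstract is provably equivalent to this combinatorial problem, while the l2 (spectral) relaxation only approximates it.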