
    Partitioning networks into cliques: a randomized heuristic approach

    In the context of community detection in social networks, the term community can be interpreted in the strict sense that everybody within the community should know each other. We consider the corresponding community detection problem: we search for a partitioning of a network into the minimum number of non-overlapping cliques, such that the cliques cover all vertices. This problem is called the clique covering problem (CCP) and is one of the classical NP-hard problems. For CCP, we propose a randomized heuristic approach. To construct a high-quality solution to CCP, we present an iterated greedy (IG) algorithm. IG can also be combined with a heuristic that determines how far the algorithm is from the optimum in the worst case; randomized local search (RLS) for maximum independent set was proposed to find such a bound. The experimental results of IG and the bounds obtained by RLS indicate that IG is a very suitable technique for solving CCP in real-world graphs. In addition, we summarize our basic rigorous results, which were developed for the analysis of IG and for understanding its behavior on several relevant graph classes.
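    The abstract does not spell out the greedy construction itself, so the following is only a minimal sketch of one plausible greedy clique-covering pass, written against networkx and with all names and the random shuffling chosen for illustration; the paper's IG algorithm is more involved (it repeatedly destroys part of a cover and rebuilds it), and this shows only the basic constructive step.

```python
# A minimal sketch of one greedy clique-covering pass, assuming an
# undirected networkx graph G.  Function names and the randomized vertex
# order are illustrative assumptions, not the paper's exact procedure.
import random
import networkx as nx

def greedy_clique_cover(G, seed=None):
    """Partition the vertices of G into cliques with a single greedy pass."""
    rng = random.Random(seed)
    order = list(G.nodes())
    rng.shuffle(order)                      # randomized vertex order
    covered = set()
    cliques = []
    for v in order:
        if v in covered:
            continue
        clique = {v}
        # try to grow the clique with still-uncovered neighbours of v
        candidates = [u for u in G.neighbors(v) if u not in covered]
        rng.shuffle(candidates)
        for u in candidates:
            if all(G.has_edge(u, w) for w in clique):
                clique.add(u)
        covered |= clique
        cliques.append(clique)
    return cliques

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(50, 0.3, seed=1)
    cover = greedy_clique_cover(G, seed=1)
    print(len(cover), "cliques")
```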

    Construction of near-optimal vertex clique covering for real-world networks

    We propose a method based on combining a constructive and a bounding heuristic to solve the vertex clique covering problem (CCP), where the aim is to partition the vertices of a graph into the smallest number of classes that induce cliques. Searching for a solution to CCP is strongly motivated by the analysis of social and other real-world networks, by applications in graph mining, and by the fact that CCP is one of the classical NP-hard problems. Combining the constructive and the bounding heuristic helped us not only to find high-quality clique coverings but also to determine that, in the domain of real-world networks, many of the obtained solutions are optimal, while the rest are near-optimal. In addition, the method has polynomial time complexity and shows much promise for practical use. Experimental results are presented for a fairly representative benchmark of real-world data. Our test graphs include extracts of web-based social networks, including some very large ones, several well-known graphs from network science, as well as coappearance networks of literary works' characters from the DIMACS graph coloring benchmark. We also present results for synthetic pseudorandom graphs structured according to the Erdős–Rényi model and Leighton's model.
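    A standard way such a bounding heuristic can certify optimality is via an independent set: vertices of an independent set must all lie in different cliques, so an independent set of size k proves that at least k cliques are needed, and a cover of exactly k cliques is then optimal. The sketch below illustrates this certificate with a simple greedy independent-set routine; the paper's actual bounding heuristic is an assumption here and may be different.

```python
# A minimal sketch of the optimality certificate, assuming the lower bound
# comes from a (greedy) independent set.  G is an undirected networkx graph;
# clique_cover is any list of vertex sets that partitions G into cliques.

def greedy_independent_set(G):
    """Greedy maximal independent set, picking a low-degree vertex each step."""
    H = G.copy()
    independent = set()
    while H.number_of_nodes() > 0:
        v = min(H.nodes(), key=H.degree)     # low-degree vertices first
        independent.add(v)
        H.remove_nodes_from(list(H.neighbors(v)) + [v])
    return independent

def certify(G, clique_cover):
    lower = len(greedy_independent_set(G))   # lower bound on the cover size
    upper = len(clique_cover)                # size of the constructed cover
    return lower, upper, lower == upper      # equal => the cover is optimal
```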

    Best-first heuristic search for multicore machines

    To harness modern multicore processors, it is imperative to develop parallel versions of fundamental algorithms. In this paper, we compare different approaches to parallel best-first search in a shared-memory setting. We present a new method, PBNF, that uses abstraction to partition the state space and to detect duplicate states without requiring frequent locking. PBNF allows speculative expansions when necessary to keep threads busy. We identify and fix potential livelock conditions in our approach, proving its correctness using temporal logic. Our approach is general, allowing it to extend easily to suboptimal and anytime heuristic search. In an empirical comparison on STRIPS planning, grid pathfinding, and sliding tile puzzle problems using 8-core machines, we show that A*, weighted A*, and Anytime weighted A* implemented using PBNF yield faster search than improved versions of previous parallel search proposals.
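    PBNF itself (abstraction-based partitioning, speculative expansion, livelock avoidance) is beyond a short sketch, but the serial weighted A* it parallelizes, on the grid-pathfinding domain mentioned in the experiments, looks roughly like the following. The grid encoding, Manhattan heuristic, and weight value are assumptions chosen for illustration, not the paper's setup.

```python
# A minimal serial weighted A* sketch on 4-connected grid pathfinding,
# the kind of baseline that PBNF parallelizes.
import heapq

def weighted_astar(grid, start, goal, w=2.0):
    """grid[y][x] == 1 means blocked; states are (x, y); returns a path or None."""
    def h(s):                                  # Manhattan-distance heuristic
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

    open_list = [(w * h(start), 0, start, None)]
    parents, g_cost = {}, {start: 0}
    while open_list:
        _, g, s, parent = heapq.heappop(open_list)
        if g > g_cost.get(s, float("inf")):
            continue                           # stale queue entry, skip it
        parents[s] = parent
        if s == goal:                          # reconstruct the path
            path = []
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1]
        x, y = s
        for sx, sy in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= sy < len(grid) and 0 <= sx < len(grid[0]) and not grid[sy][sx]:
                ng = g + 1
                if ng < g_cost.get((sx, sy), float("inf")):
                    g_cost[(sx, sy)] = ng
                    heapq.heappush(open_list, (ng + w * h((sx, sy)), ng, (sx, sy), s))
    return None

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(weighted_astar(grid, (0, 0), (0, 2)))
```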

    Reducing Dueling Bandits to Cardinal Bandits

    We present algorithms for reducing the Dueling Bandits problem to the conventional (stochastic) Multi-Armed Bandits problem. The Dueling Bandits problem is an online model of learning with ordinal feedback of the form "A is preferred to B" (as opposed to cardinal feedback like "A has value 2.5"), giving it wide applicability in learning from implicit user feedback and from revealed and stated preferences. In contrast to existing algorithms for the Dueling Bandits problem, our reductions -- named \Doubler, \MultiSbm and \DoubleSbm -- provide a generic schema for translating the extensive body of known results about conventional Multi-Armed Bandit algorithms to the Dueling Bandits setting. For \Doubler and \MultiSbm we prove regret upper bounds in both finite and infinite settings, and we conjecture about the performance of \DoubleSbm, which empirically outperforms the other two as well as previous algorithms in our experiments. In addition, we provide the first almost-optimal regret bound in terms of second-order terms, such as the differences between the values of the arms.
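    The reductions treat a conventional multi-armed bandit algorithm as a black box that is fed the binary outcomes of duels as rewards. As a hedged illustration of what that black box might look like, here is a minimal UCB1 sketch; UCB1 is only one possible choice, and the class name and interface are assumptions rather than anything specified in the paper.

```python
# A minimal sketch of a conventional multi-armed bandit algorithm (UCB1)
# of the kind the dueling-bandit reductions could plug in as a black box.
import math

class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms          # running mean reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm                    # play every arm once first
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        """Feed back the reward (e.g. 1 if the chosen arm won its duel, else 0)."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```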

    Computational methods for finding long simple cycles in complex networks

    Detection of long simple cycles in real-world complex networks finds many applications in layout algorithms, information flow modelling, as well as in bioinformatics. In this paper, we propose two computational methods for finding long cycles in real-world networks. The first method is an exact approach based on our own integer linear programming formulation of the problem and a data mining pipeline; this pipeline ensures that the problem is solved as a sequence of integer linear programs. The second method is a multi-start local search heuristic, which combines an initial construction of a long cycle using depth-first search with four different perturbation operators. Our experimental results are presented for social network samples, graphs studied in the network science field, graphs from the DIMACS series, and protein-protein interaction networks. These results show that our formulation leads to a significantly more efficient exact approach than a previous formulation. For 14 out of 22 networks, we have found the optimal solutions. The potential of heuristics for this problem is also demonstrated, especially in the context of large-scale problem instances.
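    The abstract only names the depth-first construction step, so the following is a hedged sketch of one way such a seed cycle could be built: follow unvisited neighbours in random order and remember the longest prefix of the walk that closes back to the start vertex. The networkx usage and all names are illustrative assumptions; the paper's heuristic then improves such a cycle with its perturbation operators, which are not shown here.

```python
# A minimal sketch of constructing a long simple cycle with a randomized
# depth-first walk; only the seed-construction step, not the full heuristic.
import random
import networkx as nx

def random_dfs_cycle(G, start, rng=random):
    """Return the longest simple cycle (as a vertex list) found by one walk."""
    path, on_path = [start], {start}
    best = []
    v = start
    while True:
        if len(path) >= 3 and G.has_edge(v, start) and len(path) > len(best):
            best = path[:]                      # current path closes into a cycle
        candidates = [u for u in G.neighbors(v) if u not in on_path]
        if not candidates:
            return best
        v = rng.choice(candidates)
        path.append(v)
        on_path.add(v)

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(100, 0.05, seed=3)
    cycle = max((random_dfs_cycle(G, v) for v in G.nodes()), key=len)
    print("cycle length:", len(cycle))
```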

    Satisficing in multi-armed bandit problems

    Satisficing is a relaxation of maximizing and allows for less risky decision making in the face of uncertainty. We propose two sets of satisficing objectives for the multi-armed bandit problem, where the objective is to achieve reward-based decision-making performance above a given threshold. We show that these new problems are equivalent to various standard multi-armed bandit problems with maximizing objectives and use the equivalence to find bounds on performance. The different objectives can result in qualitatively different behavior; for example, agents explore their options continually in one case and only a finite number of times in another. For the case of Gaussian rewards, we show an additional equivalence between the two sets of satisficing objectives that allows algorithms developed for one set to be applied to the other. We then develop variants of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and show that these modified UCL algorithms achieve efficient satisficing performance. (To appear in IEEE Transactions on Automatic Control.)
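    To make the satisficing idea concrete, here is a minimal sketch of a thresholded upper-credible-limit selection rule for Gaussian rewards: if some arm's upper credible limit clears the satisfaction threshold M, any such arm is treated as good enough; otherwise the rule falls back to the arm with the largest limit. The exact form of the credible limit and the way the paper's UCL variants handle the threshold are assumptions here, not the authors' precise algorithm.

```python
# A minimal sketch of a satisficing, UCL-style arm-selection rule for
# Gaussian rewards with a satisfaction threshold M.
import math
from statistics import NormalDist

def ucl(mean, n, t, sigma=1.0):
    """A simple upper credible limit after n pulls at decision time t."""
    q = NormalDist().inv_cdf(1.0 - 1.0 / max(t, 2))   # shrinking tail probability
    return mean + sigma / math.sqrt(max(n, 1)) * q

def satisficing_pick(means, counts, t, M, sigma=1.0):
    limits = [ucl(m, n, t, sigma) for m, n in zip(means, counts)]
    satisfying = [a for a, u in enumerate(limits) if u >= M]
    if satisfying:                         # some arm plausibly clears the threshold
        return min(satisfying, key=lambda a: counts[a])
    return max(range(len(limits)), key=lambda a: limits[a])
```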

    Heuristic search under time and cost bounds

    Intelligence is difficult to formally define, but one of its hallmarks is the ability to find a solution to a novel problem. It therefore makes good sense that heuristic search is a foundational topic in artificial intelligence. In this context, search refers to the process of finding a solution to a problem by considering a large, possibly infinite, set of potential plans of action. Heuristic refers to a rule of thumb or a guiding, if not always accurate, principle. Heuristic search describes a family of techniques that consider members of the set of potential plans of action in turn, as determined by the heuristic, until a suitable solution to the problem is discovered. This work is concerned primarily with suboptimal heuristic search algorithms. These algorithms are not inherently flawed, but they are suboptimal in the sense that the plans they return may be more expensive than a least-cost, or optimal, plan for the problem. While suboptimal heuristic search algorithms may not return least-cost solutions, they are often far faster than their optimal counterparts, making them more attractive for many applications. The thesis of this dissertation is that the performance of suboptimal search algorithms can be improved by taking advantage of information that, while widely available, has been overlooked. In particular, we will see how estimates of the length of a plan, estimates of plan cost that do not err on the side of caution, and measurements of the accuracy of our estimators can be used to improve the performance of suboptimal heuristic search algorithms.
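    One simple way to use the "measurements of the accuracy of our estimators" mentioned above is to track the heuristic's observed one-step error during search and debias the cost estimate with it, ordering nodes by an inflated estimate h_hat = h + error * d, where d is the estimated number of remaining steps. The sketch below illustrates only that mechanism; the specific correction used in the dissertation may differ, and the class and method names are assumptions.

```python
# A minimal sketch of correcting an admissible-but-low heuristic h with its
# observed single-step error, producing an inadmissible estimate h_hat.

class HeuristicErrorModel:
    def __init__(self):
        self.total_error = 0.0
        self.samples = 0

    def observe(self, h_parent, edge_cost, h_best_child):
        """Record how much h failed to account for one step along the best child."""
        self.total_error += max(0.0, (edge_cost + h_best_child) - h_parent)
        self.samples += 1

    def h_hat(self, h, d):
        """Debiased (inadmissible) cost estimate: h plus the average error per step."""
        eps = self.total_error / self.samples if self.samples else 0.0
        return h + eps * d
```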

    The Behavioral Paradox: Why Investor Irrationality Calls for Lighter and Simpler Financial Regulation

    It is widely believed that behavioral economics justifies more intrusive regulation of financial markets, because people are not fully rational and need to be protected from their quirks. This Article challenges that belief. Firstly, insofar as people can be helped to make better choices, that goal can usually be achieved through light-touch regulations. Secondly, faulty perceptions about markets seem to be best corrected through market-based solutions. Thirdly, increasing regulation does not seem to solve problems caused by lack of market discipline, pricing inefficiencies, and financial innovation; better results may be achieved with freer markets and simpler rules. Fourthly, regulatory rule makers are themselves subject to imperfect rationality, which tends to reduce the quality of regulatory intervention. Finally, regulatory complexity exacerbates the harmful effects of bounded rationality, whereas simple and stable rules give rise to positive learning effects.