
    A maximal clique based multiobjective evolutionary algorithm for overlapping community detection

    Detecting community structure has become an important technique for studying complex networks. Although many community detection algorithms have been proposed, most of them focus on separated communities, where each node can belong to only one community. However, in many real-world networks, communities often overlap with each other. Developing overlapping community detection algorithms thus becomes necessary. Along this avenue, this paper proposes a maximal clique based multiobjective evolutionary algorithm for overlapping community detection. In this algorithm, a new representation scheme based on the introduced maximal-clique graph is presented. Since the maximal-clique graph is defined by using a set of maximal cliques of the original graph as nodes, and two maximal cliques are allowed to share nodes of the original graph, overlap is an intrinsic property of the maximal-clique graph. Owing to this property, the new representation scheme allows multiobjective evolutionary algorithms to handle the overlapping community detection problem in a way similar to that of separated community detection, so that the optimization problems are simplified. As a result, the proposed algorithm can detect overlapping community structure with higher partition accuracy and lower computational cost than existing algorithms. Experiments on both synthetic and real-world networks validate the effectiveness and efficiency of the proposed algorithm.
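    The representation idea can be illustrated with a toy construction: enumerate the maximal cliques of the original graph, treat each clique as a node of the maximal-clique graph, and connect two cliques whenever they share original nodes, so overlap is built into the representation. The following is a minimal pure-Python sketch (Bron-Kerbosch enumeration without pivoting; the function names and the toy graph are ours, not the paper's, and the paper's actual construction may weight or filter these edges):

```python
from itertools import combinations

def maximal_cliques(adj):
    """Enumerate maximal cliques with the Bron-Kerbosch algorithm.

    adj maps each node to the set of its neighbours."""
    cliques = []
    def bk(R, P, X):
        if not P and not X:
            cliques.append(frozenset(R))  # R cannot be extended: maximal
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return cliques

def maximal_clique_graph(adj):
    """Build the maximal-clique graph: one node per maximal clique,
    an edge between any two cliques that share an original node."""
    cliques = maximal_cliques(adj)
    edges = {(i, j) for i, j in combinations(range(len(cliques)), 2)
             if cliques[i] & cliques[j]}
    return cliques, edges

# Toy graph: two triangles sharing node 2, so node 2 is an overlap node.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
cliques, edges = maximal_clique_graph(adj)
```

    On this toy graph the two triangles {0, 1, 2} and {2, 3, 4} become the two nodes of the maximal-clique graph, joined by one edge because they share node 2; a partition of the clique graph then induces an overlapping cover of the original graph.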

    The stability and breakup of nations: a quantitative analysis

    This paper presents a model of nations where agents vote on the optimal level of public spending. Larger nations benefit from increasing returns in the provision of public goods, but bear the costs of greater cultural heterogeneity. This tradeoff induces agents' preferences over different geographical configurations, thus determining the likelihood of secessions or unions. After calibrating the model to Europe, we identify the regions prone to secession and the countries most likely to merge. As a test of the theory, we show that the model can account for the breakup of Yugoslavia and the dynamics of its disintegration. We also provide empirical support for the use of genetic distances as a proxy for cultural heterogeneity. Financial aid from the Spanish Ministry of Science (ECO2008-01300) and the Fundación BBVA 3-04X is gratefully acknowledged.

    Dependence of the superconducting critical temperature on the number of layers in homologous series of high-Tc cuprates

    We study a model of n-layer high-temperature cuprates of homologous series like HgBa_2Ca_(n-1)Cu_nO_(2n+2+\delta) to explain the dependence of the critical temperature Tc(n) on the number n of Cu-O planes in the elementary cell. Focusing on the description of the high-temperature superconducting system in terms of collective phase variables, we have considered a semi-microscopic anisotropic three-dimensional vector XY model of stacked copper-oxide layers with adjustable parameters representing the microscopic in-plane and out-of-plane phase stiffnesses. The model captures the layered composition along the c-axis of homologous series and goes beyond the phenomenological Lawrence-Doniach model for layered superconductors. Implementing the spherical closure relation for the vector variables, we have solved the phase XY model exactly with the help of the transfer-matrix method and calculated Tc(n) for arbitrary block size n, elucidating the role of the c-axis anisotropy and its influence on the critical temperature. Furthermore, we accommodate inhomogeneous charge distribution among the planes, characterized by a charge imbalance coefficient R that is a function of the number of layers n. By making a physically justified assumption regarding the doping dependence of the microscopic phase stiffnesses, we have calculated the values of the parameter R as a function of block size n, in good agreement with nuclear magnetic resonance data on the carrier distribution in multilayered high-Tc cuprates. Comment: 15 pages, 10 figures. Submitted to Physical Review
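    The type of model described, an anisotropic layered XY model with separate in-plane and inter-plane couplings, is generically of the form below (a sketch in our own notation, not necessarily the paper's exact parametrization; J_parallel and J_perp stand for the adjustable in-plane and out-of-plane phase stiffnesses):

```latex
H \;=\; -\,J_{\parallel} \sum_{n}\sum_{\langle i,j\rangle}
        \cos\!\left(\theta_{n,i}-\theta_{n,j}\right)
        \;-\; J_{\perp} \sum_{n}\sum_{i}
        \cos\!\left(\theta_{n,i}-\theta_{n+1,i}\right)
```

    Here n indexes the copper-oxide layers, \langle i,j\rangle runs over in-plane nearest neighbours, and the spherical closure relation mentioned in the abstract replaces the fixed-length constraint on the planar spins by an averaged one, which is what makes the exact transfer-matrix solution tractable.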

    Free energy and surface tension of arbitrarily large Mackay icosahedral clusters

    We present a model for predicting the free energy of arbitrarily large Mackay icosahedral clusters. Van der Waals clusters are experimentally observed to be particularly stable at magic numbers corresponding to these structures. Explicit calculations of the vibrational states were used to determine the spectrum of fundamental frequencies for smaller clusters (~561 atoms). Combining these predictions with correlations for the moment of inertia and for the minimum potential energy of large clusters leads to free energies of arbitrarily large clusters. The free energies are used to predict the chemical potential and surface tension as a function of size and temperature. This connects macroscopic properties to the microscopic atomic parameters.

    A Parallel Monte Carlo Code for Simulating Collisional N-body Systems

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N~10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and introducing a parallel random number generation scheme, as well as a parallel sorting algorithm, required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce, along with our choice of decomposition scheme, minimize communication costs and ensure optimal distribution of data and workload among the processing units. The implementation uses the Message Passing Interface (MPI) library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse, with the number of stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. We also observe good total energy conservation, within less than 0.04%, throughout all simulations. We analyze the performance of the code and demonstrate near-linear scaling of the runtime with the number of processors, up to 64 processors for N=10^5, 128 for N=10^6, and 256 for N=10^7. The runtime reaches saturation with the addition of more processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60x, 100x, and 220x, respectively. Comment: 53 pages, 13 figures, accepted for publication in ApJ Supplement
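    The role of sorting in this scheme can be illustrated with a toy sketch: in a Hénon-style Monte Carlo step, stars are ranked by radial position and neighbours in that ranking are paired for two-body encounters, which is why a scalable parallel sort is central to the code. The sketch below is serial and purely illustrative (names and the pairing convention are ours; the production code distributes the sorted array across MPI ranks):

```python
import random

def henon_pairs(radii):
    """Sort stars by radial position and pair adjacent stars in the
    sorted order, as in Henon-style Monte Carlo relaxation steps.
    Serial illustration only: the parallel code performs this sort
    and pairing in a distributed fashion."""
    order = sorted(range(len(radii)), key=lambda i: radii[i])
    # Pair neighbours in radial order: (1st, 2nd), (3rd, 4th), ...
    return [(order[k], order[k + 1]) for k in range(0, len(order) - 1, 2)]

random.seed(42)
radii = [random.random() for _ in range(10)]  # toy radial positions
pairs = henon_pairs(radii)
```

    Each pair couples stars that are close in radius, so the sampled encounters respect the local density, and every star appears in at most one pair per step.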

    A Novel Deep Learning Framework for Internal Gross Target Volume Definition from 4D Computed Tomography of Lung Cancer Patients

    In this paper, we study the reliability of a novel deep learning framework for internal gross target volume (IGTV) delineation from four-dimensional computed tomography (4DCT), applied to patients with lung cancer treated by Stereotactic Body Radiation Therapy (SBRT). 77 patients who underwent SBRT followed by 4DCT scans were included in a retrospective study. The IGTV_DL was delineated using a novel deep machine learning algorithm with a linear exhaustive optimal combination framework; for comparison, three other IGTVs based on common methods were also delineated. We compared the relative volume difference (RVD), matching index (MI), and encompassment index (EI) for the above IGTVs. Then, multiple-parameter regression analysis assessed tumor volume and motion range as clinical factors influencing the MI variation. Experimental results demonstrated that the deep learning algorithm with the linear exhaustive optimal combination framework has a higher probability of achieving an optimal MI than other currently widely used methods. For patients who received simple breathing training to keep the respiratory frequency at 10 BPM, the four-phase combination of 0%, 30%, 50% and 90% can be considered a potential candidate for an optimal combination to synthesize the IGTV at all respiration amplitudes.
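    The exhaustive-combination idea can be sketched as follows: score every k-phase union of per-phase target masks against a reference volume and keep the best one. This is a toy sketch under our own assumptions (voxels as index sets, a Jaccard-style overlap score standing in for the MI; the paper's exact MI definition and data may differ):

```python
from itertools import combinations

def matching_index(pred, ref):
    # Overlap ratio |pred & ref| / |pred | ref| (a Jaccard-style score;
    # the paper's exact MI definition may differ).
    return len(pred & ref) / len(pred | ref)

def best_phase_combination(phase_masks, reference, k):
    """Exhaustively score every k-phase union against a reference
    volume and return the highest-scoring combination."""
    def union(combo):
        return set().union(*(phase_masks[p] for p in combo))
    best = max(combinations(sorted(phase_masks), k),
               key=lambda combo: matching_index(union(combo), reference))
    return best, matching_index(union(best), reference)

# Toy 1-D "volumes": voxel index sets per breathing phase (hypothetical data).
phase_masks = {0: {1, 2, 3}, 30: {2, 3, 4}, 50: {3, 4, 5}, 90: {1, 5, 6}}
reference = {1, 2, 3, 4, 5, 6}
combo, mi = best_phase_combination(phase_masks, reference, 3)
```

    With ten phases and small k the search space is tiny (at most a few hundred combinations), which is what makes the exhaustive strategy practical here.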

    Determining the optimal redistribution

    The classical redistribution problem aims at optimally scheduling communications when moving from an initial data distribution D_ini to a target distribution D_tar where each processor P_i will host a subset P(i) of data items. However, modern computing platforms are equipped with powerful interconnection switches, and the cost of a given communication is (almost) independent of the locations of its sender and receiver. This leads to generalizing the redistribution problem as follows: find the optimal permutation \sigma of processors such that P_i will host the set P(\sigma(i)), and for which the cost of the redistribution is minimal. This report studies the complexity of this generalized problem. We provide optimal algorithms and evaluate their gain over classical redistribution through simulations. We also show the NP-hardness of the problem of finding the optimal data partition and processor permutation (defined by new subsets P(\sigma(i))) that minimize the cost of the redistribution followed by a simple computation kernel.
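    The permutation-search part of the problem can be illustrated on a toy instance: under a simple volume-based cost model (count the items a processor must receive), choosing \sigma amounts to matching target subsets to processors so that as much data as possible stays in place. The sketch below brute-forces \sigma (cost model, names, and data are ours, illustrative only; for realistic sizes this matching would be solved with an assignment algorithm such as the Hungarian method rather than by enumeration):

```python
from itertools import permutations

def redistribution_cost(hosted, targets, sigma):
    """Volume-based cost: number of data items each processor i must
    receive in order to host targets[sigma[i]] (a simple cost model;
    the report studies richer ones)."""
    return sum(len(targets[sigma[i]] - hosted[i]) for i in range(len(hosted)))

def best_permutation(hosted, targets):
    """Brute-force the permutation sigma with minimal redistribution
    cost (fine for a toy instance; exponential in general)."""
    n = len(hosted)
    return min(permutations(range(n)),
               key=lambda sigma: redistribution_cost(hosted, targets, sigma))

# Toy instance: 3 processors, data items 0..5 (hypothetical layout).
hosted  = [{0, 1}, {2, 3}, {4, 5}]
targets = [{2, 3}, {4, 5}, {0, 1}]
sigma = best_permutation(hosted, targets)
```

    On this instance the identity permutation would move all six items, whereas relabeling the targets lets every processor keep its current data, for a redistribution cost of zero, which is exactly the gain the generalized problem is after.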

    Stability of Nations and Genetic Diversity 

    This paper presents a model of nations where culturally heterogeneous agents vote on the optimal level of public spending. Larger nations benefit from increasing returns in the provision of public goods, but bear the costs of greater cultural heterogeneity. This tradeoff induces agents' preferences over different geographical configurations, thus determining the likelihood of secession or unification. We provide empirical support for choosing genetic distances as a proxy for cultural heterogeneity and, using data on genetic distances, we examine the stability of the current map of Europe. We then identify the regions prone to secession and the countries that are more likely to merge. Furthermore, we estimate the welfare gains from European Union membership.
    Keywords: nation formation, genetic diversity, cultural heterogeneity, secession, unification, European Union
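    The size-versus-heterogeneity tradeoff at the heart of the model can be made concrete with a stylized payoff: utility rises with nation size through increasing returns in public-good provision and falls with the average cultural (genetic) distance among citizens. The sketch below is a toy of our own making, not the paper's calibrated model (functional form and parameters are purely illustrative):

```python
def region_utility(size, avg_distance, scale=1.0, hetero_cost=1.0):
    """Stylized per-region payoff: increasing returns to size in
    public-good provision (concave sqrt term) minus a cost growing
    with the average cultural distance among citizens.
    Illustrative functional form, not the paper's."""
    return scale * size ** 0.5 - hetero_cost * avg_distance

# A union is preferred when its joint payoff beats separate provision.
separate = region_utility(1.0, 0.0) + region_utility(1.0, 0.0)
union = 2 * region_utility(2.0, 0.3)  # doubled size, nonzero distance
```

    In this toy, merging pays off when the scale gain from doubling size exceeds the heterogeneity cost induced by the cultural distance between the two populations; calibrating such a tradeoff with measured genetic distances is what lets the paper rank regions by secession risk.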