
    Self-organisation in ant-based peer-to-peer systems

    Peer-to-peer systems are a highly decentralised form of distributed computing, which has advantages of robustness and redundancy over more centralised systems. When the peer-to-peer system has a stable and static population of nodes, variations and bursts in traffic levels cause momentary congestion in the system, which has to be dealt with by routing policies implemented within the peer-to-peer system in order to maintain efficient and effective routes. Peer-to-peer systems, however, are dynamic in nature: they exhibit churn, i.e. nodes enter and leave the system during their use. This dynamic nature makes it difficult to identify consistent routing policies that ensure a reasonable proportion of traffic in the system is routed successfully to its destination. Studies have shown that churn in peer-to-peer systems is difficult to model and characterise, and further, is difficult to manage.

    The task of creating and maintaining efficient routes and network topologies in dynamic environments, such as those described above, is one of dynamic optimisation. Complex adaptive systems such as ant colony optimisation and genetic algorithms have been shown to display adaptive properties in dynamic environments. Although complex adaptive systems have been applied to a small number of dynamic optimisation problems, their application to dynamic optimisation in general, and to routing in dynamic environments in particular, is new. Further, the problem characteristics and conditions under which these algorithms perform well, and the reasons for doing so, are not yet fully understood. The assessment of how good complex adaptive systems are at creating solutions to the dynamic routing optimisation problem described above depends on the metrics used to make the measurements.

    A contribution of this thesis is the development of a theoretical framework within which we can analyse the behaviours and responses of any peer-to-peer system. We do this by considering a peer-to-peer system to be a graph-generating algorithm, which has input parameters and outputs that can be measured using topological metrics and statistics that characterise the traffic through the network. Specifically, we consider the behaviour of an ant-based peer-to-peer system, and we have designed and implemented an ant-based peer-to-peer simulator to enable this.

    Recently, methods for characterising graphs by their scaling properties have been developed, and a small number of distinct categories of graphs have been identified (such as random graphs, lattices, small-world graphs, and scale-free graphs). These graph characterisation methods have also enabled the creation of new metrics for measuring properties of graphs belonging to the different categories. We use these graph characterisation techniques and the associated metrics to implement a systematic approach to the analysis of the behaviour of our ant peer-to-peer system. We present the results of a number of simulation runs of our system initiated with a range of values of key parameters. The resulting networks are then analysed both in terms of traffic statistics and in terms of topological metrics.

    Three sets of experiments have been designed and conducted using the simulator created during this project. The first set, equilibrium experiments, considers the behaviour of the system when the number of operational nodes and the demand placed on the system are both constant. The second set considers the changes that occur when there are bursts in traffic levels or in the demand placed on the system. The final set considers the effect of churn in the system, where nodes enter and leave the system during its operation. In crafting the experiments we have been able to identify many of the major control parameters of the ant-based peer-to-peer system.

    A further contribution of this thesis is the set of experimental results showing that under conditions of network congestion the ant peer-to-peer system becomes very brittle. This brittleness is characterised by small average path lengths, a low proportion of ants successfully getting through to their destination node, and a low average degree of the nodes in the network. It is made worse when nodes fail and when the demand applied to the system changes abruptly.

    A further contribution of this thesis is a method of ranking the topology of a network with respect to a target topology. This method can be used as the basis for topological control (i.e. the distributed self-assembly of network topologies within a peer-to-peer system that have desired topological properties) and for assessing how best to modify a topology in order to move it closer to the desired (or reference) topology. We use this method when measuring the outcome of our experiments to determine how far the resulting graph is from a random graph. In principle the method could be used to measure the distance of the graph of the peer-to-peer network from any reference topology (e.g. a lattice or a tree).

    A final contribution of this thesis is the definition of a distributed routing policy which uses a measure of confidence that nodes in the system are in an operational state when making onward routing decisions. The method of implementing this routing algorithm within the ant peer-to-peer system has been specified, although it has not been implemented within this thesis. It is conjectured that the algorithm would improve the performance of the ant peer-to-peer system under conditions of churn.

    The main question this thesis addresses is how the behaviour of the ant-based peer-to-peer system can best be measured using a simulation-based approach, and how these measurables can be used to control and optimise the performance of the ant-based peer-to-peer system in conditions of equilibrium and non-equilibrium (specifically, varying levels of bursts in traffic demand and varying rates of nodes entering and leaving the peer-to-peer system).
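
    To make the flavour of such an ant-based routing policy concrete, the following is a minimal, hypothetical sketch (not the thesis's implementation) of a per-node routing table in which pheromone deposited by successful ants is combined with a confidence score that a neighbour is still operational, in the spirit of the final contribution above; the class name, decay rates and weights are illustrative assumptions.

```python
import random
from collections import defaultdict

class AntRoutingTable:
    """Per-node table: pheromone[destination][neighbour] plus a confidence score
    that each neighbour is still operational (all defaults are illustrative)."""

    def __init__(self, evaporation=0.05, deposit=1.0):
        self.pheromone = defaultdict(lambda: defaultdict(lambda: 1.0))
        self.confidence = defaultdict(lambda: 1.0)  # decays when a neighbour stops answering
        self.evaporation = evaporation
        self.deposit = deposit

    def choose_next_hop(self, destination, neighbours):
        """Sample a neighbour with probability proportional to pheromone * confidence."""
        if not neighbours:
            return None
        weights = [self.pheromone[destination][n] * self.confidence[n] for n in neighbours]
        r = random.uniform(0, sum(weights))
        acc = 0.0
        for n, w in zip(neighbours, weights):
            acc += w
            if r <= acc:
                return n
        return neighbours[-1]

    def reinforce(self, destination, neighbour, path_length):
        """A returning ant deposits pheromone inversely proportional to its path length."""
        self.pheromone[destination][neighbour] += self.deposit / max(path_length, 1)
        self.confidence[neighbour] = min(1.0, self.confidence[neighbour] + 0.1)

    def evaporate(self):
        """Periodic evaporation keeps the table responsive to churn and traffic bursts."""
        for dest in self.pheromone:
            for n in self.pheromone[dest]:
                self.pheromone[dest][n] *= 1.0 - self.evaporation

    def neighbour_timeout(self, neighbour):
        """Halve confidence in a neighbour that failed to acknowledge an ant."""
        self.confidence[neighbour] *= 0.5

if __name__ == "__main__":
    table = AntRoutingTable()
    table.reinforce("node42", "peerA", path_length=3)  # a successful ant came back via peerA
    table.neighbour_timeout("peerB")                   # peerB appears to have churned away
    table.evaporate()
    print(table.choose_next_hop("node42", ["peerA", "peerB"]))
```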

    Multi-Agent Fitness Functions For Evolutionary Architecture

    The dynamics of crowd movements are self-organising and often involve complex pattern formations. Although computational models have recently been developed, it is unclear how well their underlying methods capture local dynamics and longer-range aspects, such as evacuation. A major part of this thesis is devoted to an investigation of current methods and, where required, the development of alternatives. The main purpose is to utilise realistic models of pedestrian crowds in the design of fitness functions for an evolutionary approach to architectural design. We critically review the state of the art in pedestrian and evacuation dynamics. The concept of 'Multi-Agent System' embraces a number of approaches, which together encompass important local and longer-range aspects. Early investigations focus on methods (cellular automata and attractor fields) designed to capture these respective levels. The assumption that pattern formations in crowds result from local processes is reflected in two-dimensional cellular automata models, where mathematical rules operate in local neighbourhoods. We investigate an established cellular automaton and show that lane-formation patterns are stable only in a low-valued density range. Above this range, such patterns suddenly randomise. By identifying and then constraining the source of this randomness, we are only able to achieve a small degree of improvement. Moreover, when we try to integrate the model with attractor fields, no useful behaviour is achieved, and much of the randomness persists. Investigations indicate that the unwanted randomness is associated with 2-lattice phase transitions, where local dynamics are invaded by giant-component clusters during the onset of lattice percolation. Through this in-depth investigation, the general limits of cellular automata are ascertained: these methods are not designed with lattice percolation properties in mind, and the resulting models depend, often critically, on arbitrarily chosen neighbourhoods. We embark on the development of new and more flexible methodologies. Rather than treating local and global dynamics as separate entities, we combine them. Our methods are responsive to percolation and are designed around the following principles: 1) inclusive search provides an optimal path between a pedestrian origin and destination; 2) dynamic boundaries protect search and are based on percolation probabilities, calculated from local density regimes, so that more robust dynamics are achieved while longer-range behaviours are also specified; 3) network-level dynamics further relax the constraints of lattice percolation and allow a wider range of pedestrian interactions. Having defined our methods, we demonstrate their usefulness by applying them to lane-formation and evacuation scenarios. Results reproduce the general patterns found in real crowds. We then turn to evolution. This preliminary work is intended to motivate future research in the field of Evolutionary Architecture. We develop a genotype-phenotype mapping, which produces complex architectures, and demonstrate the use of a crowd-flow model in a phenotype-fitness mapping. We discuss results from evolutionary simulations, which suggest that obstacles may have some beneficial effect on crowd evacuation. We conclude with a summary, a discussion of methodological limitations, and suggestions for future research.
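
    As an illustration of the kind of local update rule such cellular-automaton crowd models rely on, the following is a minimal sketch (not the model investigated in the thesis) of a two-dimensional lattice with two pedestrian groups walking in opposite directions; the neighbourhood, update order and density are made-up assumptions.

```python
import random

# Two pedestrian groups (RIGHT- and LEFT-moving) on a periodic lattice; at low
# densities, same-direction walkers tend to line up into proto-lanes.
WIDTH, HEIGHT, STEPS, DENSITY = 40, 15, 100, 0.2
EMPTY, RIGHT, LEFT = 0, 1, 2

def init_grid():
    grid = [[EMPTY] * WIDTH for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if random.random() < DENSITY:
                grid[y][x] = random.choice([RIGHT, LEFT])
    return grid

def step(grid):
    new = [row[:] for row in grid]
    cells = [(y, x) for y in range(HEIGHT) for x in range(WIDTH) if grid[y][x] != EMPTY]
    random.shuffle(cells)  # random sequential update avoids movement conflicts
    for y, x in cells:
        kind = new[y][x]
        dx = 1 if kind == RIGHT else -1
        # Preferred move is straight ahead; sidestepping when blocked is what
        # lets lanes of same-direction walkers emerge at low densities.
        candidates = [(y, (x + dx) % WIDTH), ((y + 1) % HEIGHT, x), ((y - 1) % HEIGHT, x)]
        for ny, nx in candidates:
            if new[ny][nx] == EMPTY:
                new[ny][nx], new[y][x] = kind, EMPTY
                break
    return new

if __name__ == "__main__":
    g = init_grid()
    for _ in range(STEPS):
        g = step(g)
    # Crude diagnostic: count rows dominated by a single walking direction (proto-lanes).
    lanes = sum(1 for row in g
                if abs(row.count(RIGHT) - row.count(LEFT))
                > 0.6 * max(1, row.count(RIGHT) + row.count(LEFT)))
    print(f"rows dominated by one direction: {lanes}/{HEIGHT}")
```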

    Reading the news through its structure: new hybrid connectivity based approaches

    In this thesis a solution to the problem of identifying the structure of news published by online newspapers is presented. This problem requires new approaches and algorithms that are capable of dealing with the massive number of online publications in existence (and that will grow in the future). The fact that news documents present a high degree of interconnection makes this an interesting and hard problem to solve. The identification of the structure of the news is accomplished both by descriptive methods that expose the dimensionality of the relations between different news items, and by clustering the news into topic groups. To achieve this analysis, this integrated whole was studied using different perspectives and approaches. In the identification of news clusters and structure, and after a preparatory data-collection phase in which several online newspapers from different parts of the globe were collected, two newspapers were chosen in particular: the Portuguese daily newspaper Público and the British newspaper The Guardian. The choice of newspapers in different languages reflects the aim of finding analysis strategies that are independent of prior knowledge about these systems. In the first case, it was shown how information theory (namely variation of information) combined with adaptive networks was able to identify topic clusters in the news published by the Portuguese online newspaper Público. In the second case, the structure of news published by the British newspaper The Guardian is revealed through the construction of time series of news clustered by a k-means process. After this approach, an unsupervised algorithm was developed that filters out irrelevant news published online by taking into consideration the connectivity of the news labels entered by the journalists. This novel hybrid technique is based on Q-analysis for the construction of the filtered network, followed by a clustering technique to identify the topical clusters. Presently this work uses a modularity-optimisation clustering technique, but this step is general enough that other hybrid approaches can be used without losing generality. A novel second-order swarm intelligence algorithm based on Ant Colony Systems was developed for the travelling salesman problem; it is consistently better than the traditional benchmarks. This algorithm is used to construct Hamiltonian paths over the published news, using the eccentricity of the different documents as a measure of distance. This approach allows for easy navigation between published stories that is dependent on the connectivity of the underlying structure. The results presented in this work show the importance of treating topic detection in large corpora as a multitude of relations and connectivities that are not in a static state. They also influence the way of looking at multi-dimensional ensembles, by showing that the inclusion of higher-dimensional connectivities gives better results for solving a particular problem, as was the case in the clustering of the news published online.
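
    As a concrete, simplified illustration of the clustering step described above, the following sketch groups a handful of made-up headlines into topic clusters using TF-IDF vectors and k-means with scikit-learn; it is not the thesis's pipeline, and the sample data and cluster count are assumptions.

```python
# Minimal sketch: cluster short news texts into topic groups with TF-IDF + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

headlines = [
    "Government announces new budget measures",
    "Parliament debates budget amid protests",
    "Champions League final ends in penalty shootout",
    "Local team wins league title after dramatic match",
    "New study links diet to heart disease risk",
    "Researchers report breakthrough in cancer treatment",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(headlines)          # sparse document-term matrix

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for cluster in sorted(set(labels)):
    print(f"cluster {cluster}:")
    for headline, label in zip(headlines, labels):
        if label == cluster:
            print("  -", headline)
```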

    Development of a R package to facilitate the learning of clustering techniques

    This project explores the development of a tool, in the form of an R package, to ease the process of learning clustering techniques, how they work and what their pros and cons are. The tool should provide implementations of several clustering techniques, together with explanations, so that students can become familiar with the characteristics of each algorithm by testing them against different datasets while deepening their understanding through the explanations. Additionally, these explanations should adapt to the input data, making the tool suitable not only for self-regulated learning but for teaching too.
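
    The sketch below illustrates, in Python rather than R, the kind of data-adaptive explanation such a teaching tool aims to provide: it runs k-means and prints a short commentary that depends on the clustering it obtained. The thresholds, wording and function name are illustrative assumptions, not the package's API.

```python
# Minimal sketch of a "self-explaining" clustering step for learners.
import numpy as np
from sklearn.cluster import KMeans

def explained_kmeans(data: np.ndarray, k: int) -> None:
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
    sizes = np.bincount(model.labels_, minlength=k)
    print(f"k-means partitioned {len(data)} points into {k} clusters "
          f"of sizes {sizes.tolist()} (inertia={model.inertia_:.2f}).")
    # Adaptive explanation: warn the learner when cluster sizes are very unbalanced,
    # a common sign that k is poorly chosen or the data is not roughly spherical.
    if sizes.max() > 3 * max(1, sizes.min()):
        print("Note: cluster sizes are unbalanced; try another k or a density-based method.")
    else:
        print("Cluster sizes are fairly balanced, which suits k-means' spherical assumption.")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blob_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
    blob_b = rng.normal(loc=4.0, scale=0.5, size=(50, 2))
    explained_kmeans(np.vstack([blob_a, blob_b]), k=2)
```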

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology at which to install server infrastructure; and 2) determines the amount of both network and server capacity needed to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD versus FID are not meaningful.
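
    A minimal sketch of the site-selection and dimensioning idea, expressed as an integer linear program with the PuLP library, is given below; it covers only the failure-free case (the paper's resilience and relocation constraints are omitted), and the demands, candidate sites and costs are made-up assumptions.

```python
# Minimal failure-free dimensioning sketch with PuLP (pip install pulp).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

demands = {"A": 40, "B": 25, "C": 35}   # demand originating at each node (units of jobs)
sites = ["S1", "S2", "S3"]              # candidate server locations
open_cost = 100                         # fixed cost of opening a site
unit_cost = 1                           # cost per unit of server capacity

prob = LpProblem("grid_dimensioning", LpMinimize)
open_site = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
capacity = {s: LpVariable(f"cap_{s}", lowBound=0, cat=LpInteger) for s in sites}
assign = {(d, s): LpVariable(f"x_{d}_{s}", cat=LpBinary) for d in demands for s in sites}

# Objective: total opening cost plus total installed server capacity.
prob += lpSum(open_cost * open_site[s] + unit_cost * capacity[s] for s in sites)

for d in demands:                       # every demand is served by exactly one site
    prob += lpSum(assign[d, s] for s in sites) == 1
for s in sites:                         # capacity covers assigned demand; only open sites usable
    prob += lpSum(demands[d] * assign[d, s] for d in demands) <= capacity[s]
    for d in demands:
        prob += assign[d, s] <= open_site[s]

prob.solve()
print("opened:", [s for s in sites if open_site[s].value() == 1])
print("capacities:", {s: capacity[s].value() for s in sites})
```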

    Application of artificial intelligence techniques for predicting the flyrock, Sungun mine, Iran

    Flyrock is known as one of the main problems in open-pit mining operations. This phenomenon can threaten the safety of mine personnel, equipment and buildings around the mine area. One way to reduce the risk of accidents due to flyrock is to predict it accurately, so that the safe area can be identified and, with proper design of the blast pattern, the amount of flyrock can be greatly reduced. For this purpose, 14 parameters affecting flyrock were selected in this paper, including burden, blasthole diameter, sub-drilling, number of blastholes, spacing, total length and amount of explosives, among other effective parameters. The amount of flyrock in a case study, the Sungun mine, was predicted using linear multivariate regression (LMR) and artificial intelligence algorithms such as the Gray Wolf Optimization algorithm (GWO), the Moth-Flame Optimization algorithm (MFO), the Whale Optimization Algorithm (WOA), the Ant Lion Optimizer (ALO) and the Multi-Verse Optimizer (MVO). Results showed that the intelligent algorithms have better capabilities than the linear regression method, and the MVO method showed the best performance for predicting flyrock. Moreover, the results of the sensitivity analysis show that burden, ANFO, total rock blasted, total length and blasthole diameter are the most significant factors in determining flyrock, respectively, while dynamite has the lowest impact on flyrock generation.
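
    To show what the LMR baseline amounts to, here is a minimal sketch fitting an ordinary-least-squares model to synthetic blasting data; the feature set is only a subset of the 14 parameters mentioned above, and the generated values are not real measurements.

```python
# Minimal linear multivariate regression (LMR) sketch on synthetic blasting data.
import numpy as np

rng = np.random.default_rng(42)
n = 60
# Synthetic blasting parameters: burden (m), hole diameter (mm), spacing (m), ANFO charge (kg)
X = np.column_stack([
    rng.uniform(2, 5, n),
    rng.uniform(75, 150, n),
    rng.uniform(2, 6, n),
    rng.uniform(100, 600, n),
])
# Synthetic flyrock distance (m) with noise, only to exercise the fitting code.
y = 20 - 8 * X[:, 0] + 0.3 * X[:, 1] - 3 * X[:, 2] + 0.15 * X[:, 3] + rng.normal(0, 5, n)

A = np.column_stack([np.ones(n), X])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("intercept and coefficients:", np.round(coef, 3))
print("R^2 on training data:", round(r2, 3))
```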

    Search based software engineering: Trends, techniques and applications

    © ACM, 2012. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version is available from the link below. In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive, automated and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
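
    As a toy illustration of what "search-based" means here (not an example taken from the article), the sketch below applies a simple hill climber to regression test prioritisation, maximising an APFD-style measure of how early faults are detected; the fault matrix and settings are made-up assumptions.

```python
# Minimal SBSE-flavoured sketch: hill climbing over test-execution orders.
import random

# tests x faults: fault_matrix[t][f] is True if test t reveals fault f (made-up data)
fault_matrix = [
    [True, False, False, False],
    [False, True, True, False],
    [False, False, True, False],
    [True, False, False, True],
]

def apfd(order):
    """Average Percentage of Faults Detected: higher means faults are found earlier."""
    n_tests, n_faults = len(order), len(fault_matrix[0])
    first_detect = []
    for f in range(n_faults):
        pos = next(i for i, t in enumerate(order) if fault_matrix[t][f]) + 1
        first_detect.append(pos)
    return 1 - sum(first_detect) / (n_tests * n_faults) + 1 / (2 * n_tests)

def hill_climb(steps=500):
    order = list(range(len(fault_matrix)))
    random.shuffle(order)
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        candidate = order[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if apfd(candidate) >= apfd(order):   # accept non-worsening swaps
            order = candidate
    return order

if __name__ == "__main__":
    best = hill_climb()
    print("test order:", best, "APFD:", round(apfd(best), 3))
```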

    Metaheuristic Design Patterns: New Perspectives for Larger-Scale Search Architectures

    Design patterns capture the essentials of recurring best practice in an abstract form. Their merits are well established in domains as diverse as architecture and software development. They offer significant benefits, not least a common conceptual vocabulary for designers, enabling greater communication of high-level concerns and increased software reuse. Inspired by the success of software design patterns, this chapter seeks to promote the merits of a pattern-based method for the development of metaheuristic search software components. To achieve this, a catalog of patterns is presented, organized into the families of structural, behavioral, methodological and component-based patterns. As an alternative to the increasing specialization associated with individual metaheuristic search components, the authors encourage computer scientists to embrace the 'cross-cutting' benefits of a pattern-based perspective on optimization algorithms. Some ways in which the patterns might form the basis of further larger-scale metaheuristic component design automation are also discussed.
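
    The sketch below illustrates, in Python, the component-based perspective such a catalog advocates: a local-search metaheuristic assembled from interchangeable initialisation, perturbation and acceptance components, so that swapping one component (for example the acceptance rule) yields a different algorithm without touching the rest. The interfaces and the toy problem are illustrative assumptions, not the chapter's patterns.

```python
# Minimal component-based metaheuristic skeleton.
import math
import random
from typing import Callable, List

Solution = List[int]

def hill_climb_accept(current: float, candidate: float, step: int) -> bool:
    return candidate <= current  # accept only non-worsening moves

def annealing_accept(current: float, candidate: float, step: int) -> bool:
    temperature = max(1e-3, 1.0 / (1 + step))
    return candidate <= current or random.random() < math.exp((current - candidate) / temperature)

def search(init: Callable[[], Solution],
           perturb: Callable[[Solution], Solution],
           cost: Callable[[Solution], float],
           accept: Callable[[float, float, int], bool],
           steps: int = 1000) -> Solution:
    best = current = init()
    for step in range(steps):
        candidate = perturb(current)
        if accept(cost(current), cost(candidate), step):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best

if __name__ == "__main__":
    target = [1, 2, 3, 4, 5, 6, 7, 8]

    def init() -> Solution:
        return random.sample(target, len(target))

    def perturb(s: Solution) -> Solution:
        t = s[:]
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
        return t

    def cost(s: Solution) -> float:
        return sum(a != b for a, b in zip(s, target))

    # Swapping the acceptance component changes the algorithm without touching the rest.
    print("hill climbing:", search(init, perturb, cost, hill_climb_accept))
    print("annealing:    ", search(init, perturb, cost, annealing_accept))
```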

    Machine learning into metaheuristics: A survey and taxonomy of data-driven metaheuristics

    In recent years, research on applying machine learning (ML) to design efficient, effective and robust metaheuristics has become increasingly popular. Many of these data-driven metaheuristics have generated high-quality results and represent state-of-the-art optimization algorithms. Although various approaches have been proposed, there is a lack of a comprehensive survey and taxonomy on this research topic. In this paper we investigate the different opportunities for using ML in metaheuristics. We define, in a uniform way, the various synergies that may be achieved. A detailed taxonomy is proposed according to the search component concerned: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to include ideas from ML in metaheuristics. We identify some open research issues in this topic which need further in-depth investigation.
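
    One of the synergies such a taxonomy covers is adaptive operator selection, where an online learning rule chooses which search operator to apply next. The following minimal sketch (an illustration under stated assumptions, not an algorithm from the survey) uses an epsilon-greedy bandit to credit two perturbation operators on a toy permutation problem.

```python
# Minimal sketch of bandit-based adaptive operator selection inside a local search.
import random

def swap(s):
    """Exchange two positions."""
    t = s[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def reinsert(s):
    """Remove one element and reinsert it elsewhere."""
    t = s[:]
    i = random.randrange(len(t))
    x = t.pop(i)
    t.insert(random.randrange(len(t)), x)
    return t

def cost(s):
    """Toy objective: number of out-of-place items in the permutation."""
    return sum(v != i for i, v in enumerate(s))

def adaptive_search(n=20, steps=2000, epsilon=0.1):
    operators = [swap, reinsert]
    reward = [0.0] * len(operators)
    uses = [1] * len(operators)
    current = random.sample(range(n), n)
    for _ in range(steps):
        # epsilon-greedy bandit: mostly exploit the operator with the best average reward
        if random.random() < epsilon:
            k = random.randrange(len(operators))
        else:
            k = max(range(len(operators)), key=lambda i: reward[i] / uses[i])
        candidate = operators[k](current)
        improvement = cost(current) - cost(candidate)
        if improvement >= 0:
            current = candidate
        reward[k] += max(improvement, 0)
        uses[k] += 1
    return current, {op.__name__: round(reward[i] / uses[i], 3) for i, op in enumerate(operators)}

if __name__ == "__main__":
    best, credit = adaptive_search()
    print("final cost:", cost(best), "average operator credit:", credit)
```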

    Planejamento para missões autônomas persistentes cooperativas de longo prazo (Planning for long-term cooperative persistent autonomous missions)

    Advisor: Andre Ricardo Fioravanti. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. A methodology for tackling persistent long-term autonomous missions is presented, along with a general formalization of the problem under simple assumptions. A realization of this methodology is derived which reduces the overall problem to path-construction and combinatorial-optimization subproblems, which are themselves treated with heuristics for feasible-solution computation. Four case studies are proposed and solved with this methodology, showing that it is possible to obtain optimal or acceptably suboptimal continuous paths from a discrete representation, and elucidating some solution properties in these different scenarios, building a basis for future educated choices between the use of exact methods and heuristics.
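
    As a small illustration of the "reduce to path construction plus combinatorial optimisation, then solve heuristically" strategy, the sketch below builds a closed patrol route over made-up mission waypoints with a nearest-neighbour heuristic; the waypoints and the choice of heuristic are illustrative assumptions.

```python
# Minimal sketch: nearest-neighbour tour construction over mission waypoints.
import math

def nearest_neighbour_tour(points):
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    closed = tour + [tour[0]]  # persistent mission: return to start and repeat the loop
    return sum(math.dist(points[a], points[b]) for a, b in zip(closed, closed[1:]))

if __name__ == "__main__":
    waypoints = [(0, 0), (2, 1), (5, 0), (6, 4), (3, 5), (1, 4)]
    tour = nearest_neighbour_tour(waypoints)
    print("visit order:", tour, "cycle length:", round(tour_length(waypoints, tour), 2))
```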