79 research outputs found

    Stigmergy in Web 2.0: a model for site dynamics

    Building Web 2.0 sites does not necessarily ensure the success of the site. We aim to better understand what improves the success of a site by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics using models that describe the associated social networking behaviour. Some of these models attempt to show how the volume of users enables self-organisation and self-contextualisation of content. One model describing coordinated environments is stigmergy, a term originally coined to describe coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, particularly given that user freedom of choice is restricted by the functionality the Web site provides. This will aid us in building better collaborative Web sites and improving their collaborative processes.
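
    As a minimal illustration of the stigmergic dynamic the paper draws on, the following Python sketch models a site where each page carries a "trace" deposited by visits; visitors preferentially select high-trace pages while traces evaporate over time. All parameter values and names are illustrative assumptions, not the paper's model.

        import random

        DECAY = 0.99     # assumed per-step trace evaporation rate
        DEPOSIT = 1.0    # assumed trace left by a single visit

        def step(traces, n_visitors=100):
            """One round: visitors pick pages in proportion to accumulated trace."""
            for _ in range(n_visitors):
                total = sum(traces.values())
                r = random.uniform(0, total)
                acc = 0.0
                for page, t in traces.items():
                    acc += t
                    if acc >= r:
                        traces[page] += DEPOSIT   # the visit itself reinforces the trace
                        break
            for page in traces:
                traces[page] *= DECAY             # evaporation keeps rankings adaptive
            return traces

        traces = {f"page{i}": 1.0 for i in range(5)}
        for _ in range(50):
            traces = step(traces)
        print(max(traces, key=traces.get))        # the page that self-organised to the top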

    On the role of stigmergy in cognition

    Cognition in animals is produced by the self-organized activity of mutually entrained body and brain. Given that stigmergy plays a major role in the self-organization of societies, we identify stigmergic behavior in cognitive systems as a common mechanism ranging from brain activity to social systems. We analyze natural societies and artificial systems that exploit stigmergy to produce cognition. Several authors have identified the importance of stigmergy in the behavior and cognition of social systems. However, the perspective of stigmergy playing a central role in brain activity is, to the best of our knowledge, novel. We present several pieces of evidence for such processes in the brain and discuss their importance in the formation of cognition. With this, we aim to motivate further research on stigmergy as a relevant component of intelligent systems.

    BittyBuzz: A Swarm Robotics Runtime for Tiny Systems

    Swarm robotics is an emerging field of research which is increasingly attracting attention thanks to the advances in robotics and its potential applications. However, despite the enthusiasm surrounding this area of research, software development for swarm robotics is still a tedious task. That is partly due to the lack of dedicated solutions, in particular for low-cost systems that are produced in large numbers and can have important resource constraints. To address this issue, we introduce BittyBuzz, a novel runtime platform: it allows Buzz, a domain-specific language, to run on microcontrollers while maintaining dynamic memory management. BittyBuzz is designed to fit in flash memory as small as 32 kB (with usable space for scripts) and to work with as little as 2 kB of RAM. In this work, we introduce the BittyBuzz implementation, its differences from the original Buzz virtual machine, and its advantages for swarm robotics systems. We show that BittyBuzz integrates with three robotic platforms with a minimal memory footprint and conduct experiments to measure its computational performance. Results show that BittyBuzz can be effectively used to implement common swarm behaviors on microcontroller-based systems.
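
    The 2 kB RAM figure is the defining constraint here: every object the VM allocates must live in a small fixed arena. The sketch below is an illustrative first-fit allocator over such an arena, written in Python for readability; it is not BittyBuzz's actual memory manager, and all names and sizes beyond the 2 kB figure are assumptions.

        RAM_SIZE = 2048  # bytes, matching the 2 kB figure quoted above

        class TinyHeap:
            def __init__(self, size=RAM_SIZE):
                # Free list of (offset, length) blocks; starts as one big block.
                self.free = [(0, size)]

            def alloc(self, n):
                """First-fit allocation; returns an offset or None if full."""
                for i, (off, length) in enumerate(self.free):
                    if length >= n:
                        if length == n:
                            self.free.pop(i)
                        else:
                            self.free[i] = (off + n, length - n)
                        return off
                return None  # on a real MCU this would trigger GC or fail

            def free_block(self, off, n):
                """Return a block and coalesce adjacent free neighbours."""
                self.free.append((off, n))
                self.free.sort()
                merged = [self.free[0]]
                for o, l in self.free[1:]:
                    po, pl = merged[-1]
                    if po + pl == o:
                        merged[-1] = (po, pl + l)
                    else:
                        merged.append((o, l))
                self.free = merged

        heap = TinyHeap()
        a = heap.alloc(256)
        b = heap.alloc(512)
        heap.free_block(a, 256)
        print(heap.free)  # fragmentation is visible and must stay bounded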

    Human Stigmergic Problem Solving

    Chapter 6 in Cultural-Historical Perspectives on Collective Intelligence. In the era of digital communication, collective problem solving is increasingly important. Large groups can now resolve issues together in completely different ways, which has transformed the arts, sciences, business, education, technology, and medicine. Collective intelligence is something we share with animals and is different from machine learning and artificial intelligence. To design and utilize human collective intelligence, we must understand how its problem-solving mechanisms work. From democracy in ancient Athens, through the invention of the printing press, to COVID-19, this book analyzes how humans developed the ability to find solutions together. This wide-ranging, thought-provoking book is a game-changer for those working strategically with collective problem solving within organizations and using a variety of innovative methods. It sheds light on how humans work effectively alongside machines to confront challenges more urgent than any humanity has faced before. This title is also available as Open Access on Cambridge Core. Chapter 6 presents human stigmergic problem solving as a distinct “solution-centered” subtype of CI, with biological antecedents in the trail laying and nest building of ants. Stigmergy describes how many individual agents are able to coordinate collective action solely by leaving information in a shared environment. In this type of collective problem solving, a version of a solution will already exist, either partially or completely. The problem-solving process is therefore a response that changes the existing version of a solution: rating it, as with an online video; re-estimating it, as through a prediction market; adapting it, as with an open textbook; or completing it, as with a Wikipedia article. In human qualitative stigmergy, a preliminary part of a solution is stored in the system or medium, and individuals then respond to the unfinishedness of the solution in different ways. If many versions of a solution already exist, human quantitative stigmergy can also be used to rate the best solutions. In the online setting, solutions are continuously compared with each other. These stored solutions also solve many different problems at various points in time.
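
    A minimal sketch of the quantitative-stigmergy mechanism described above: many individuals leave ratings on existing solution versions in a shared medium, and a ranking emerges from the aggregated traces without any coordinator. The solution identifiers, scores, and threshold below are hypothetical.

        from collections import defaultdict

        ratings = defaultdict(list)   # solution id -> ratings left by individuals

        def rate(solution_id, score):
            """One individual leaves a trace (a rating) in the shared medium."""
            ratings[solution_id].append(score)

        def best_solutions(k=3, min_ratings=2):
            """Ranking emerges from aggregated traces, not from any coordinator."""
            scored = [(sum(r) / len(r), sid)
                      for sid, r in ratings.items() if len(r) >= min_ratings]
            return [sid for _, sid in sorted(scored, reverse=True)[:k]]

        for sid, score in [("v1", 3), ("v2", 5), ("v2", 4), ("v1", 2), ("v3", 5)]:
            rate(sid, score)
        print(best_solutions())  # ['v2', 'v1'] -- v3 lacks enough ratings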

    Opportunities and challenges of geospatial analysis for promoting urban livability in the era of big data and machine learning

    Urban systems involve a multitude of closely intertwined components, which are more measurable than before due to new sensors, data collection, and spatio-temporal analysis methods. Turning these data into knowledge to facilitate planning efforts in addressing current challenges of complex urban systems requires advanced interdisciplinary analysis methods, such as urban informatics or urban data science. Yet, by applying a purely data-driven approach, it is too easy to get lost in the ‘forest’ of data, and to miss the ‘trees’ of successful, livable cities that are the ultimate aim of urban planning. This paper assesses how geospatial data and urban analysis, using a mixed-methods approach, can help to better understand urban dynamics and human behavior, and how they can assist planning efforts to improve livability. Based on a review of state-of-the-art research, the paper goes a step further and also addresses the potential as well as the limitations of new data sources in urban analytics, to give a better overview of the whole ‘forest’ of these new data sources and analysis methods. The main discussion revolves around the reliability of using big data from social media platforms or sensors, and how information can be extracted from massive amounts of data through novel analysis methods, such as machine learning, for better-informed decision making aimed at improving urban livability.
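
    As one concrete example of the kind of analysis the paper surveys, the sketch below density-clusters geotagged social-media posts into activity hotspots with scikit-learn's DBSCAN. The coordinates are made up, and eps and min_samples are assumptions that would need tuning against real data.

        import numpy as np
        from sklearn.cluster import DBSCAN

        points_deg = np.array([    # (lat, lon) of hypothetical geotagged posts
            [48.2082, 16.3738], [48.2085, 16.3741], [48.2079, 16.3735],
            [48.1951, 16.3483], [48.1953, 16.3480], [48.2500, 16.4500],
        ])

        EARTH_RADIUS_M = 6371000.0
        eps_m = 100.0              # assumed hotspot radius: 100 m

        db = DBSCAN(
            eps=eps_m / EARTH_RADIUS_M,   # haversine distances are in radians
            min_samples=2,
            metric="haversine",
            algorithm="ball_tree",
        ).fit(np.radians(points_deg))

        print(db.labels_)  # [0 0 0 1 1 -1]: two hotspots, one noise point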

    On the Design of Social Media for Learning

    This paper presents two conceptual models that we have developed for understanding ways that social media can support learning. One model relates to the “social” aspect of social media, describing the different ways that people can learn with and from each other, in one or more of three social forms: groups, networks and sets. The other model relates to the “media” side of social media, describing how technologies are constructed and the roles that people play in creating and enacting them, treating them in terms of softness and hardness. The two models are complementary: neither provides a complete picture but, in combination, they help to explain how and why different uses of social media may succeed or fail and, as importantly, are intended to help us design learning activities that make the most effective use of the technologies. We offer some suggestions as to how media used to support different social forms can be softened and hardened for different kinds of learning applications.

    Developing a Framework for Stigmergic Human Collaboration with Technology Tools: Cases in Emergency Response

    Information and Communications Technologies (ICTs), particularly social media and geographic information systems (GIS), have become a transformational force in emergency response. Social media enables ad hoc collaboration, providing timely, useful information dissemination and sharing, and helping to overcome limitations of time and place. Geographic information systems increase the level of situation awareness, serving geospatial data through interactive maps, animations, and computer-generated imagery derived from sophisticated global remote sensing systems. Digital workspaces bring these technologies together and contribute to meeting ad hoc and formal emergency response challenges through their affordances of situation awareness and mass collaboration. Distributed ICTs that enable ad hoc emergency response via digital workspaces have arguably made traditional top-down system deployments less relevant in certain situations, including emergency response (Merrill, 2009; Heylighen, 2007a, b). Heylighen (2014, 2007a, b) theorizes that human cognitive stigmergy explains some self-organizing characteristics of ad hoc systems. Elliott (2007) identifies cognitive stigmergy as a factor in mass collaborations supported by digital workspaces. Stigmergy, a term from biology, refers to the phenomenon of self-organizing systems whose agents coordinate via perceived changes in the environment rather than direct communication. In the present research, ad hoc emergency response is examined through the lens of human cognitive stigmergy. The basic assertion is that ICTs and stigmergy together make possible highly effective ad hoc collaborations in circumstances where more typical collaborative methods break down. The research is organized into three essays: an in-depth analysis of the development and deployment of the Ushahidi emergency response software platform; a comparison of the ICTs used during Hurricanes Katrina and Sandy; and a process model developed from the case studies and the relevant academic literature.

    Reading the news through its structure: new hybrid connectivity based approaches

    In this thesis a solution to the problem of identifying the structure of news published by online newspapers is presented. This problem requires new approaches and algorithms that are capable of dealing with the massive number of online publications in existence (a number that will only grow in the future). The fact that news documents present a high degree of interconnection makes this an interesting and hard problem to solve. The identification of the structure of the news is accomplished both by descriptive methods that expose the dimensionality of the relations between different news items, and by clustering the news into topic groups. To achieve this, the news system was studied as an integrated whole, from different perspectives and approaches. After a preparatory data-collection phase, in which several online newspapers from different parts of the globe were collected, two newspapers were chosen in particular: the Portuguese daily newspaper Público and the British newspaper The Guardian. The choice of newspapers in different languages reflects the aim of finding analysis strategies that are independent of prior knowledge about these systems. In the first case, it was shown how information theory (namely variation of information) combined with adaptive networks was able to identify topic clusters in the news published by the Portuguese online newspaper Público. In the second case, the structure of news published by the British newspaper The Guardian is revealed through the construction of time series of news clustered by a k-means process. Following this approach, an unsupervised algorithm was developed that filters out irrelevant news published online by taking into consideration the connectivity of the news labels entered by the journalists. This novel hybrid technique is based on Q-analysis for the construction of the filtered network, followed by a clustering technique to identify the topical clusters. Presently this work uses a modularity-optimisation clustering technique, but this step is general enough that other hybrid approaches can be used without losing generality. A novel second-order swarm intelligence algorithm based on Ant Colony Systems was developed for the travelling salesman problem; it consistently outperforms the traditional benchmarks. This algorithm is used to construct Hamiltonian paths over the published news, using the eccentricity of the different documents as a measure of distance. This approach allows for easy navigation between published stories that depends on the connectivity of the underlying structure. The results presented in this work show the importance of treating topic detection in large corpora as a multitude of relations and connectivities that are not static. They also change the way multi-dimensional ensembles are viewed, by showing that the inclusion of high-dimensional connectivities gives better results when solving a particular problem, as was the case in the clustering of news published online.
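    For the information-theoretic step mentioned above, here is a minimal sketch of variation of information, VI(X, Y) = H(X) + H(Y) - 2 I(X; Y), the distance used to compare clusterings of news; the label vectors below are hypothetical.

        import math
        from collections import Counter

        def variation_of_information(labels_a, labels_b):
            n = len(labels_a)
            pa = Counter(labels_a)            # cluster sizes in partition A
            pb = Counter(labels_b)            # cluster sizes in partition B
            joint = Counter(zip(labels_a, labels_b))
            h_a = -sum(c / n * math.log(c / n) for c in pa.values())
            h_b = -sum(c / n * math.log(c / n) for c in pb.values())
            mi = sum(c / n * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
                     for (a, b), c in joint.items())
            return h_a + h_b - 2 * mi         # 0 iff the partitions coincide

        # Two clusterings of six hypothetical news items:
        print(variation_of_information([0, 0, 1, 1, 2, 2],
                                       [0, 0, 1, 1, 1, 2]))  # small but nonzero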

    Coordination by Reassignment in the Firefox Community

    According to the so-called mirroring hypothesis, the structure of an organization tends to replicate the technical dependencies among the different components of the product (or service) that the organization is developing. An explanation for this phenomenon is that socio-technical alignment, which can be measured by the congruence of technical dependencies and human relations (Cataldo et al., 2008), leads to more efficient coordination. In this context, we suggest that a key organizational capability, especially in fast-changing environments, is to quickly reorganize in response to new opportunities or simply in order to solve problems more efficiently. To back up our suggestion, we study the dynamics of congruence between task dependencies and expert attention within the Firefox project, as reported in the Bugzilla bug tracking system. We identify in this database several networks of interrelated problems, known as bug report networks (Sandusky et al., 2004). We show that the ability to reassign bugs to other developers within each bug report network does indeed correlate positively with the average level of congruence achieved on each bug report network. Furthermore, when bug report networks are grouped according to common experts, we find preliminary evidence that the relationship between congruence and assignments could differ from one group to another.
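
    A simplified sketch in the spirit of the congruence measure of Cataldo et al. (2008) cited above: coordination requirements are derived from task assignments and technical dependencies, then compared against actual coordination. The matrices are small hypothetical examples, not Firefox data.

        import numpy as np

        T_A = np.array([[1, 0, 0],      # developers x tasks: who works on what
                        [0, 1, 0],
                        [0, 0, 1]])
        T_D = np.array([[0, 1, 0],      # tasks x tasks: technical dependencies
                        [1, 0, 1],
                        [0, 1, 0]])
        C_A = np.array([[0, 1, 0],      # developers x developers: who actually talks
                        [1, 0, 0],
                        [0, 0, 0]])

        # Coordination requirements: developer pairs whose tasks depend on each other.
        C_R = (T_A @ T_D @ T_A.T) > 0
        np.fill_diagonal(C_R, False)    # ignore self-coordination

        required = C_R.sum()
        satisfied = (C_R & (C_A > 0)).sum()
        congruence = satisfied / required if required else 1.0
        print(congruence)  # 0.5: devs 1-2 coordinate as required, devs 2-3 do not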

    Self-management for large-scale distributed systems

    Autonomic computing aims at making computing systems self-managing by using autonomic managers, in order to reduce the obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems, motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottlenecks, and scale. We present results of our research on addressing these challenges. Niche implements the autonomic computing architecture proposed by IBM in a fully decentralized way. Niche supports a network-transparent view of the system architecture, simplifying the design of distributed self-management, and provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application, defining design steps that include the partitioning of management functions and the orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a management element using finite-state-machine replication with a reconfigurable replica set; our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels. The store enables a trade-off between high availability and data consistency. Using majorities avoids the potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost, and we describe the steps in designing such a controller. We continue by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
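
    As an illustration of the combined control scheme described above (a sketch, not ElastMan itself): a feedforward term sizes the store from the measured workload, and a proportional feedback term corrects residual latency error against the SLO. All model parameters below are assumptions for the example.

        REQS_PER_NODE = 1000.0   # assumed throughput one node sustains (req/s)
        TARGET_LATENCY = 0.050   # assumed SLO: 50 ms latency
        K_P = 40.0               # assumed proportional feedback gain (nodes per second of error)

        def plan_capacity(workload_reqs_s, measured_latency_s):
            # Feedforward: capacity the workload model says we need right now.
            feedforward = workload_reqs_s / REQS_PER_NODE
            # Feedback: push capacity up when measured latency exceeds the SLO,
            # down when we are comfortably below it.
            error = measured_latency_s - TARGET_LATENCY
            feedback = K_P * error
            desired = feedforward + feedback
            # Never scale below one node; round since nodes are discrete.
            return max(1, round(desired))

        # Spike: workload jumps to 5200 req/s and latency climbs to 80 ms.
        print(plan_capacity(5200, 0.080))  # -> 6 nodes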