
    Separating Agent-Functioning and Inter-Agent Coordination by Activated Modules: The DECOMAS Architecture

    The embedding of self-organizing inter-agent processes in distributed software applications enables the decentralized coordination of system elements, based solely on concerted, localized interactions. Separating and encapsulating the activities that are conceptually related to coordination is a crucial concern for systematic development practices, as it prepares coordination processes for reuse and systematic integration in software systems. Here, we discuss a programming model based on the externalization of process prescriptions and their embedding in Multi-Agent Systems (MAS). One fundamental design concern for a corresponding execution middleware is the minimally invasive augmentation of the activities that affect coordination. This design challenge is approached by the activation of agent modules: modules become software elements that reason about and modify their host agent. We discuss and formalize this extension within the context of a generic coordination architecture and exemplify the proposed programming model with the decentralized management of (web) service infrastructures.
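
    A minimal sketch of what an "activated module" attached to a host agent might look like, assuming a simple Python agent with an attach/step life cycle; the class names (HostAgent, LoadBalancingModule) and the interaction scheme are illustrative assumptions, not the DECOMAS middleware API.

```python
# Illustrative sketch only: an activated module is attached to a host agent,
# observes its local state, and adjusts it to realize a coordination process
# through purely local interactions. Names and interfaces are assumptions,
# not the DECOMAS API.

class HostAgent:
    def __init__(self, name):
        self.name = name
        self.load = 0.0          # local state the module reasons about
        self.modules = []        # activated coordination modules

    def attach(self, module):
        self.modules.append(module)
        module.on_attach(self)

    def step(self, neighbours):
        # Each activated module may inspect and modify its host every step.
        for module in self.modules:
            module.act(self, neighbours)


class LoadBalancingModule:
    """Externalized coordination logic: offload work to less-loaded neighbours."""

    def on_attach(self, host):
        self.host = host

    def act(self, host, neighbours):
        idle = [n for n in neighbours if n.load < host.load - 0.2]
        if idle:
            target = min(idle, key=lambda n: n.load)
            # Modify the host and one neighbour through a localized interaction.
            shifted = 0.1
            host.load -= shifted
            target.load += shifted


if __name__ == "__main__":
    agents = [HostAgent(f"a{i}") for i in range(3)]
    agents[0].load = 0.9
    for a in agents:
        a.attach(LoadBalancingModule())
    for _ in range(5):
        for a in agents:
            a.step([n for n in agents if n is not a])
    print({a.name: round(a.load, 2) for a in agents})
```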

    OpenKnowledge at work: exploring centralized and decentralized information gathering in emergency contexts

    Real-world experience teaches us that efficient crisis-response coordination is crucial for managing emergencies; ICT infrastructures can support the people involved in such contexts by enabling effective ways of interacting, and they should also provide innovative means of communication and information management. At present, centralized architectures are mostly used for this purpose; however, alternative infrastructures based on the use of distributed information sources are currently being explored, studied and analyzed. This paper investigates the capability of a novel approach (developed within the European project OpenKnowledge) to support centralized as well as decentralized architectures for information gathering. For this purpose we developed an agent-based e-Response simulation environment, fully integrated with the OpenKnowledge infrastructure, through which existing emergency plans are modelled and simulated. Preliminary results show that OpenKnowledge can support the two aforementioned architectures and, under ideal assumptions, achieves comparable performance in both cases.

    Mobile phone-based healthcare delivery in a Sami area: Reflections on technology and culture

    This paper analyses the redesign of psychiatric services for children and adolescents in a Sami area in the county of Finnmark in Norway. The project included the introduction of a new technology in support of a decentralized model for healthcare service delivery. We focus specifically on the role of culture in the development and implementation of a mobile phone application during the pilot phase of the project. In our analysis we draw on information infrastructure theory. We are particularly interested in the concept of generativity and critically assess its role in the analysis of technology in a culturally diverse context.

    DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling

    The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of resources they use. This capability is called auto-scaling, and its main purpose is to automatically adjust the scale of the system running the application so as to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud-provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our simulations, which are based on real service traces, show that our approach is capable of (i) keeping the overall utilization of all the instantiated cloud resources in a target range and (ii) maintaining service response times close to the ones obtained using optimal centralized auto-scaling approaches.
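
    A minimal sketch of the kind of decentralized probabilistic decision DEPAS describes, assuming each node compares its locally observed utilization with a target range and then adds or removes capacity with a probability proportional to the gap; the thresholds, the probability formula and the function names are illustrative assumptions, not the published algorithm.

```python
# Illustrative sketch of a decentralized probabilistic auto-scaling decision.
# Each node acts only on locally observable information; the thresholds and
# probability formula below are assumptions for demonstration, not the exact
# DEPAS rules.
import random

TARGET_LOW, TARGET_HIGH = 0.5, 0.8   # desired utilization range


def autoscale_decision(local_utilization: float, known_nodes: int) -> str:
    """Return 'scale_out', 'scale_in', or 'none' for this node in this round."""
    if local_utilization > TARGET_HIGH:
        # Over-utilized: add capacity with probability proportional to the excess,
        # so that only a fraction of the nodes scale out in any given round.
        p_add = min(1.0, (local_utilization - TARGET_HIGH) / TARGET_HIGH)
        if random.random() < p_add:
            return "scale_out"
    elif local_utilization < TARGET_LOW and known_nodes > 1:
        # Under-utilized: remove this instance with probability proportional
        # to the slack below the target range.
        p_remove = min(1.0, (TARGET_LOW - local_utilization) / TARGET_LOW)
        if random.random() < p_remove:
            return "scale_in"
    return "none"


if __name__ == "__main__":
    for u in (0.95, 0.65, 0.2):
        print(u, autoscale_decision(u, known_nodes=10))
```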

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges, in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that fulfils its requirements. However, every big data application has different data characteristics, and thus the corresponding data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
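
    To make the data-model comparison concrete, here is a minimal sketch of how one and the same record could be represented under three of the four models discussed (document-oriented, key-value, and wide-column); the layouts are purely illustrative and do not correspond to the API of any specific NoSQL product.

```python
# Illustrative only: one user record expressed under three NoSQL data models.
# The structures are generic sketches, not the API of any particular store.

record_id = "user:42"

# Document-oriented: the record is a self-contained document, queryable by field.
document = {
    "_id": record_id,
    "name": "Ada",
    "orders": [{"sku": "B-17", "qty": 2}, {"sku": "C-03", "qty": 1}],
}

# Key-value: the value is opaque; fast lookup by key, no server-side field queries.
key_value = {record_id: b'{"name":"Ada","orders":[...]}'}

# Wide-column: a row key with column families; columns may vary per row.
wide_column = {
    record_id: {
        "profile": {"name": "Ada"},
        "orders": {"B-17": 2, "C-03": 1},
    }
}

print(document["orders"][0]["sku"], wide_column[record_id]["orders"]["B-17"])
```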

    Mandate-driven networking eco-system: a paradigm shift in end-to-end communications

    The wireless industry is driven by key stakeholders that follow a holistic "one-system-fits-all" approach, which moves the network functionality needed to meet stringent End-to-End (E2E) communication requirements towards the core and cloud infrastructures. This trend limits the ability of smaller and new players to bring in novel solutions. To meet these E2E requirements, tenants and end-users need to be active players who contribute their own needs and innovations. Driving E2E communication from one specific community, not only in terms of quality of service (QoS) but also of overall carbon footprint and spectrum efficiency, may lead to undesirable simplifications, and a higher level of abstraction of other network segments may lead to sub-optimal operation. Based on this, the paper presents a paradigm shift that will enlarge the role of wireless innovation at academia, Small and Medium-sized Enterprises (SMEs), industries and start-ups, while taking decentralized mandate-driven intelligence in E2E communications into account.

    Is the computing continuum already here?

    The computing continuum, a novel paradigm that extends beyond the current silos of cloud and edge computing, can enable the seamless and dynamic deployment of applications across diverse infrastructures. By utilizing the cloud-native features and scalability of Kubernetes, this concept promotes deployment transparency, communication transparency, and resource availability transparency. Key features of this paradigm include intent-driven policies, a decentralized architecture, multi-ownership, and a fluid topology. Integral to the computing continuum are the building blocks of dynamic discovery and peering, hierarchical resource continuum, resource and service reflection, network continuum, and storage and data continuum. The implementation of these principles allows organizations to foster an efficient, dynamic, and seamless computing environment, thereby facilitating the deployment of complex distributed applications across varying infrastructures.
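
    A minimal sketch of how intent-driven placement over discovered, peered clusters could look in principle; the data structures, field names and matching rule are assumptions made for illustration and are not the API of Kubernetes or of any specific continuum framework.

```python
# Illustrative sketch: clusters discovered through peering advertise resources,
# and a simple policy matches a deployment intent against them. All names and
# fields are assumptions for demonstration only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PeeredCluster:
    name: str
    location: str          # e.g. "edge" or "cloud"
    free_cpu: float        # cores advertised via resource reflection
    rtt_ms: float          # measured latency to this peer


@dataclass
class Intent:
    cpu: float
    max_rtt_ms: float
    preferred_location: Optional[str] = None


def place(intent: Intent, peers: List[PeeredCluster]) -> Optional[PeeredCluster]:
    """Pick a peer that satisfies the intent, preferring location, then latency."""
    candidates = [p for p in peers
                  if p.free_cpu >= intent.cpu and p.rtt_ms <= intent.max_rtt_ms]
    if intent.preferred_location:
        preferred = [p for p in candidates if p.location == intent.preferred_location]
        candidates = preferred or candidates
    return min(candidates, key=lambda p: p.rtt_ms) if candidates else None


if __name__ == "__main__":
    peers = [
        PeeredCluster("edge-a", "edge", free_cpu=2.0, rtt_ms=5),
        PeeredCluster("cloud-1", "cloud", free_cpu=64.0, rtt_ms=40),
    ]
    print(place(Intent(cpu=1.0, max_rtt_ms=20, preferred_location="edge"), peers))
```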