
    Towards Jacamo-rest: A Resource-Oriented Abstraction for Managing Multi-Agent Systems

    The Multi-Agent Oriented Programming (MAOP) paradigm provides abstractions to model and implement entities such as agents, as well as their organisations and environments. In recent years, researchers have started to explore the integration of MAOP and the resource-oriented web architecture (REST). This paper further advances this line of research by presenting ongoing work on jacamo-rest, a resource-oriented web-based abstraction for the multi-agent programming platform JaCaMo. Jacamo-rest takes Multi-Agent System (MAS) interoperability to a new level, enabling a MAS not only to interact with services and applications of the World Wide Web but also to have its specifications managed and updated by other applications. To add a developer interface to JaCaMo that is suitable for the Web, we provide a novel conceptual perspective on the management of MAOP specification entities as web resources. We tested jacamo-rest by using it as the middleware of a programming interface application that provides modern software engineering facilities, such as continuous deployment and iterative software development, for MAS. Comment: 11 pages, 5 figures. Accepted for presentation at the 14th Workshop-School on Agents, Environments, and Applications (WESAAC 2020).
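    The resource-oriented perspective means that agents and other MAOP entities become addressable web resources that any HTTP client can inspect and modify. As a rough sketch of that idea (the endpoint paths and payload fields below are illustrative assumptions, not the actual jacamo-rest API):

```python
# Hypothetical sketch of managing MAS specification entities as web resources.
# Endpoint paths and payload fields are assumptions for illustration only.
import requests

BASE = "http://localhost:8080"  # assumed jacamo-rest host

# Create an agent resource from a (hypothetical) specification payload.
spec = {"name": "assistant", "beliefs": ["ready"],
        "plans": ["+!greet <- .print(\"hello\")."]}
resp = requests.post(f"{BASE}/agents", json=spec)
resp.raise_for_status()

# Read the agent back and update its specification in place, the way
# any external web application could manage the running MAS.
agent = requests.get(f"{BASE}/agents/assistant").json()
agent["beliefs"].append("updated")
requests.put(f"{BASE}/agents/assistant", json=agent)
```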

    Extending an open source enterprise service bus for cloud data access support

    In recent years, Cloud computing has become popular among IT organizations aiming to reduce their operational costs. Applications can be designed to run on the Cloud and utilize its technologies, or can be partially or totally migrated to the Cloud. An application's architecture contains three layers: presentation, business logic, and data. The presentation layer provides a user-friendly interface and acts as an intermediary between the user and the application logic. The business logic layer separates the business logic from the underlying layers of the application. The Data Layer (DL) abstracts the underlying database storage system from the business layer and is responsible for storing the application's data. The DL is divided into two sublayers: the Data Access Layer (DAL) and the Database Layer (DBL). The former provides the abstraction of the database operations to the business layer, while the latter is responsible for data persistence and manipulation. When migrating an application to the Cloud, each application layer can be hosted using a different Cloud deployment model. Possible Cloud deployment models are: Private Cloud, Public Cloud, Community Cloud, and Hybrid Cloud. In this diploma thesis we focus on the database layer, which is one of the most expensive layers to build and maintain in an IT infrastructure. Application data is typically moved to the Cloud because of, e.g., Cloud bursting, data analysis, or backup and archiving. Currently, there is little support and guidance on how to enable appropriate data access to the Cloud. In this diploma thesis we extend an Open Source Enterprise Service Bus to provide support for enabling transparent data access in the Cloud. After researching the different protocols used by Cloud providers to manage and store data, we design and implement the components needed in the Enterprise Service Bus to provide users transparent access to their data previously migrated to the Cloud.
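    The transparency goal can be pictured as a routing decision hidden behind the DAL interface. The following minimal sketch (class and method names are illustrative assumptions, not the thesis's components) shows the idea of an ESB-style router that sends queries either to the local DBL or to a migrated cloud store:

```python
# Sketch of transparent data access: the business layer talks to one DAL
# interface, while a router decides whether a query goes to the local
# database or to a store already migrated to the Cloud.
from abc import ABC, abstractmethod

class DataAccessLayer(ABC):
    @abstractmethod
    def query(self, sql: str) -> list:
        ...

class LocalDatabase(DataAccessLayer):
    def query(self, sql: str) -> list:
        print(f"local DB <- {sql}")
        return []

class CloudDatabase(DataAccessLayer):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # e.g. a cloud provider's data service
    def query(self, sql: str) -> list:
        print(f"{self.endpoint} <- {sql}")
        return []

class RoutingDal(DataAccessLayer):
    """Plays the role of the ESB component: routes by table location."""
    def __init__(self, local, cloud, migrated_tables: set):
        self.local, self.cloud, self.migrated = local, cloud, migrated_tables
    def query(self, sql: str) -> list:
        target = self.cloud if any(t in sql for t in self.migrated) else self.local
        return target.query(sql)

dal = RoutingDal(LocalDatabase(), CloudDatabase("https://cloud.example/data"), {"orders"})
dal.query("SELECT * FROM orders")  # transparently served from the cloud
dal.query("SELECT * FROM users")   # still served locally
```

    The business layer only ever sees the DataAccessLayer interface, which is what makes the migration transparent to it.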

    Designing and prototyping WebRTC and IMS integration using open source tools

    WebRTC, or Web Real-Time Communications, is a collection of web standards that detail the mechanisms, architectures and protocols that work together to deliver real-time multimedia services to the web browser. It represents a significant shift from the historical approach of using browser plugins, which, over time, have proven cumbersome and problematic. Furthermore, it adopts various Internet standards in areas such as identity management, peer-to-peer connectivity, data exchange and media encoding, to provide a system that is truly open and interoperable. Given that WebRTC enables the delivery of multimedia content to any Internet Protocol (IP)-enabled device capable of hosting a web browser, this technology could potentially be used and deployed on millions of smartphones, tablets and personal computers worldwide. This service and device convergence remains an important goal of telecommunication network operators, who seek to enable it through a converged network based on the IP Multimedia Subsystem (IMS). IMS is an IP-based subsystem that sits at the core of a modern telecommunication network and acts as the main routing substrate for media services and applications such as those that WebRTC realises. The combination of WebRTC and IMS represents an attractive coupling, and as such, a sustained investigation can help to answer important questions about the technical challenges involved in their integration and the merits of the various design alternatives that present themselves. This thesis is the result of such an investigation and culminates in the presentation of a detailed architectural model that is validated with a prototypical implementation in an open source testbed. The model is built on six requirements which emerge from an analysis of the literature, including previous interventions in IMS networks and a key technical report on design alternatives. Furthermore, this thesis argues that the client architecture requires support for web-oriented signalling, identity and call handling techniques, opening the potential for IMS networks to natively support these techniques as operator networks continue to grow and develop. The proposed model advocates the use of SIP over WebSockets for signalling and DTLS-SRTP for media to enable one-to-one communication, and can be extended through additional functions, resulting in a modular architecture. The model was implemented using open source tools which were assembled to create an experimental network testbed, and tests were conducted demonstrating successful cross-domain communications under various conditions. The thesis has a strong focus on enabling ordinary software developers to assemble such a prototypical network, and aims to enable experimentation in application use cases for integrated environments.
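    The signalling choice the model advocates, SIP carried over WebSockets (RFC 7118), can be illustrated in a few lines. The sketch below sends an initial SIP REGISTER over a WebSocket as a WebRTC client would towards an IMS edge; the server URI, identities and tags are placeholder assumptions:

```python
# Minimal sketch of SIP-over-WebSockets signalling (RFC 7118).
import asyncio
import websockets  # pip install websockets

REGISTER = (
    "REGISTER sip:ims.example.net SIP/2.0\r\n"
    "Via: SIP/2.0/WSS client.invalid;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "To: <sip:alice@ims.example.net>\r\n"
    "From: <sip:alice@ims.example.net>;tag=49583\r\n"
    "Call-ID: 843817637684230@client.invalid\r\n"
    "CSeq: 1 REGISTER\r\n"
    "Contact: <sip:alice@client.invalid;transport=ws>\r\n"
    "Content-Length: 0\r\n\r\n"
)

async def register():
    # RFC 7118 mandates the "sip" WebSocket subprotocol.
    async with websockets.connect("wss://ims.example.net/ws",
                                  subprotocols=["sip"]) as ws:
        await ws.send(REGISTER)
        reply = await ws.recv()  # expect a 401 challenge or 200 OK
        print(reply.splitlines()[0])

asyncio.run(register())
```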

    Plataforma de gestão M2M

    Master's in Computer and Telematics Engineering. The Internet of Things is still a fast-growing area and topic of interest. New solutions and implementations keep emerging, both as service-oriented solutions and as device-oriented solutions with M2M communications, thereby promoting the creation of new business models. As a natural evolution, the possibility arose to abstract sensor management from service creation, allowing sensor providers to delegate sensor management and focus on content creation through services. However, this delegation brings new concerns regarding access control. This dissertation therefore proposes a possible solution to this problem, enclosed in a service-oriented platform interconnected with an ETSI M2M solution, promoting interoperability between sensors and allowing great elasticity in service creation.
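    The access-control concern can be made concrete with a small sketch (not the dissertation's implementation): each sensor resource records which services its owner has delegated access to, and the platform checks that record before serving a request:

```python
# Illustrative delegation-based access control for sensor resources.
from dataclasses import dataclass, field

@dataclass
class SensorResource:
    sensor_id: str
    owner: str
    # services the owner has delegated read access to
    allowed_readers: set = field(default_factory=set)

class AccessController:
    def __init__(self):
        self.resources: dict[str, SensorResource] = {}

    def register(self, resource: SensorResource) -> None:
        self.resources[resource.sensor_id] = resource

    def grant(self, sensor_id: str, service: str) -> None:
        self.resources[sensor_id].allowed_readers.add(service)

    def can_read(self, sensor_id: str, service: str) -> bool:
        r = self.resources.get(sensor_id)
        return r is not None and (service == r.owner or service in r.allowed_readers)

ac = AccessController()
ac.register(SensorResource("temp-01", owner="sensor-provider"))
ac.grant("temp-01", "weather-service")
assert ac.can_read("temp-01", "weather-service")
assert not ac.can_read("temp-01", "unknown-service")
```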

    High-Performance Near-Time Processing of Bulk Data

    Enterprise systems such as customer-billing systems or financial transaction systems are required to process large volumes of data in a fixed period of time. These systems are increasingly required to also provide near-time processing of data to support new service offerings. Common systems for data processing are optimized either for high maximum throughput or for low latency. This thesis proposes the concept of an adaptive middleware, a new approach for designing systems for bulk data processing. The adaptive middleware is able to adapt its processing type fluently between batch processing and single-event processing. By using message aggregation, message routing, and a closed feedback loop to adjust the data granularity at runtime, the system is able to minimize the end-to-end latency for different load scenarios. The relationship between end-to-end latency and throughput of batch- and message-based systems is formally analyzed, and a performance evaluation of both processing types has been conducted. Additionally, the impact of message aggregation on throughput and latency is investigated. The proposed middleware concept has been implemented in a research prototype and evaluated. The results of the evaluation show that the concept is viable and is able to optimize the end-to-end latency of a system. The design, implementation, and operation of an adaptive system for bulk data processing differ from common approaches to implementing enterprise systems. A conceptual framework has therefore been developed to guide the development of adaptive software for bulk data processing. It defines the required roles and their skills, the necessary tasks and their relationships, the artifacts that are created and required by different tasks, the tools needed to perform the tasks, and the processes that describe the order of tasks.
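    The core mechanism, a closed feedback loop that moves the aggregation granularity between single-event and batch processing, can be sketched in a few lines. The thresholds and adjustment rule below are illustrative assumptions, not the controller from the thesis:

```python
# Toy adaptive aggregator: grows batches for throughput while under a
# latency target, shrinks towards single-event processing when over it.
import time

class AdaptiveAggregator:
    def __init__(self, target_latency_s: float, max_batch: int = 1024):
        self.target = target_latency_s
        self.max_batch = max_batch
        self.batch_size = 1  # start in single-event mode
        self.buffer = []

    def submit(self, event, process_batch):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            start = time.monotonic()
            process_batch(self.buffer)
            latency = time.monotonic() - start
            self.buffer = []
            self._adjust(latency)

    def _adjust(self, observed_latency: float):
        # Closed feedback loop over the observed processing latency.
        if observed_latency < self.target / 2:
            self.batch_size = min(self.batch_size * 2, self.max_batch)
        elif observed_latency > self.target:
            self.batch_size = max(self.batch_size // 2, 1)

agg = AdaptiveAggregator(target_latency_s=0.05)
for i in range(10_000):
    agg.submit(i, lambda batch: sum(batch))  # stand-in for real processing
```

    A real controller would react to end-to-end latency including queuing delay rather than processing time alone, but the adaptation principle is the same.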

    Methods and Tools for Management of Distributed Event Processing Applications

    Capturing and processing events from cyber-physical systems enables users to be continuously informed about performance data and emerging problems (situational awareness) or to optimise maintenance processes based on asset condition (condition-based maintenance). Owing to the volume and frequency of the data, as well as the requirement for near-real-time analysis, such scenarios demand suitable technologies. Under the name Event Processing, technologies have established themselves that are able to process data streams in real time and to detect complex event patterns based on spatial, temporal, or causal relationships. At the same time, the systems available in this field today are still characterised by the high technical complexity of the underlying declarative languages, which leads to slow development cycles for real-time applications because of the technical expertise required. Yet precisely these applications frequently exhibit high dynamics with respect to changing requirements of the situations to be detected, as well as to the syntax and semantics of the underlying sensor data. The primary contribution of this thesis enables domain users, through abstraction from technical details, to independently create, modify, and execute distributed real-time applications in the form of so-called real-time processing pipelines. The contributions of this thesis can be summarised as follows: 1. A methodology for developing real-time applications that accounts for extensibility as well as accessibility for domain users. 2. Models for semantically describing the characteristics of event producers, event processing units, and event consumers. 3. A system for executing processing pipelines consisting of geographically distributed event processing units. 4. A software artifact for graphically modelling processing pipelines and executing them automatically. The contributions are presented, applied, and evaluated in several scenarios from the manufacturing and logistics domains.
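    A processing pipeline in this spirit can be sketched as a chain of event producers, processing elements, and consumers assembled from declarative descriptions. The example below is a simplified stand-in for the semantic models and tooling the thesis proposes:

```python
# Minimal event processing pipeline: producer -> filter -> consumer,
# assembled from a list of stages as graphical tooling might do it.
from typing import Callable, Iterable, Iterator

def producer() -> Iterator[dict]:
    for i in range(5):
        yield {"machine": "press-1", "temperature": 60 + i * 5}

def overheat_filter(events: Iterable[dict]) -> Iterator[dict]:
    # Processing element: detect a simple situation (temperature too high).
    for e in events:
        if e["temperature"] > 70:
            yield {**e, "situation": "overheating"}

def consumer(events: Iterable[dict]) -> None:
    for e in events:
        print("alert:", e)

# Pipeline assembly from a declarative stage list.
stages: list[Callable] = [overheat_filter]
stream: Iterable[dict] = producer()
for stage in stages:
    stream = stage(stream)
consumer(stream)
```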

    Automatic Generation of Distributed Runtime Infrastructure for Internet of Things

    Ph.D. Thesis. The Internet of Things (IoT) represents a network of connected devices that are able to cooperate and interact with each other in order to reach a particular goal. To attain this, the devices are equipped with identifying, sensing, networking and processing capabilities. Cloud computing, on the other hand, is the delivery of on-demand computing services – from applications, to storage, to processing power – typically over the internet. Clouds bring a number of advantages to distributed computing because of their highly available pool of virtualised computing resources. Due to the large number of connected devices, real-world IoT use cases may generate overwhelmingly large amounts of data. This prompts the use of cloud resources for processing, storage and analysis of the data. Therefore, a typical IoT system comprises a front-end (devices that collect and transmit data) and a back-end – typically distributed Data Stream Management Systems (DSMSs) deployed on cloud infrastructure for data processing and analysis. Increasingly, new IoT devices are being manufactured to provide a limited execution environment on top of their data sensing and transmitting capabilities. This consequently demands a change in the way data is processed in a typical IoT-cloud setup. The traditional, centralised cloud-based data processing model – where IoT devices are used only for data collection – does not provide efficient utilisation of all available resources, and the fundamental requirements of real-time data processing, such as short response time, may not always be met. This prompts a new processing model based on decentralising the data processing tasks. The new decentralised architectural pattern allows some parts of the data streaming computation to be executed directly on edge devices – closer to where the data is collected. Extending the processing capabilities to the IoT devices increases the robustness of applications and reduces the communication overhead between different components of an IoT system. However, this new pattern poses new challenges in the development, deployment and management of IoT applications. Firstly, there exists a large resource gap between the two parts of a typical IoT system (i.e. clouds and IoT devices), prompting a new approach to IoT application deployment and management. Secondly, the new decentralised approach necessitates the deployment of DSMSs on distributed clusters of heterogeneous nodes, resulting in unpredictable runtime performance and complex fault characteristics. Lastly, the environment where DSMSs are deployed is very dynamic due to user or device mobility, workload variation, and resource availability. In this thesis we present solutions to address the aforementioned challenges. We investigate how a high-level description of a data streaming computation can be used to automatically generate a distributed runtime infrastructure for the Internet of Things. Subsequently, we develop a deployment and management system capable of distributing the different operators of a data streaming computation onto different IoT gateway devices and cloud infrastructure. To address the other challenges, we propose a non-intrusive approach for performance evaluation of DSMSs and present a protocol and a set of algorithms for dynamic migration of stateful data stream operators. To improve our migration approach, we provide an optimisation technique that minimises application downtime and improves the accuracy of the data stream computation.
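    A common shape for such a migration protocol is pause, snapshot, transfer, replay, resume. The sketch below illustrates that general pattern for a small stateful operator; it is not the thesis's actual protocol or its optimisation technique:

```python
# Pause-and-migrate sketch for a stateful stream operator.
import pickle
from collections import Counter

class CountingOperator:
    """Stateful operator: counts events per key."""
    def __init__(self):
        self.paused = False
        self.pending = []   # events buffered while paused
        self.state = Counter()

    def on_event(self, key: str):
        if self.paused:
            self.pending.append(key)  # buffer instead of processing
        else:
            self.state[key] += 1

    def snapshot(self) -> bytes:
        self.paused = True
        return pickle.dumps(self.state)

def migrate(old: CountingOperator) -> CountingOperator:
    blob = old.snapshot()           # 1. pause + snapshot on the old node
    new = CountingOperator()        # 2. start operator on the target node
    new.state = pickle.loads(blob)  # 3. restore the transferred state
    for key in old.pending:         # 4. replay events buffered meanwhile
        new.on_event(key)
    return new                      # 5. reroute the stream to the new node

op = CountingOperator()
for k in ["a", "b", "a"]:
    op.on_event(k)
op2 = migrate(op)
assert op2.state["a"] == 2
```

    The buffering step is what bounds the downtime: events arriving during the transfer are replayed on the new node rather than dropped.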

    InDEx – Industrial Data Excellence

    InDEx, the Industrial Data Excellence program, was created to investigate what industrial data can be collected, shared, and utilized for new intelligent services in high-performing, reliable, and secure ways, and how to accomplish that in practice in the Finnish manufacturing industry.

    InDEx produced several insights into data in an industrial environment: collecting data, sharing data in the value chain and in the factory environment, and utilizing and manipulating data with artificial intelligence. Data will play an important role in the industrial context in the future, but data sources and utilization mechanisms are more diverse than in cases related to consumer data. Experiences in the InDEx cases showed that there is great potential in data utilization.

    Currently, successful business cases built on data sharing are either company-internal or utilize an existing value chain. The data market has not yet matured, and third-party offerings based on public and private data sources are rare. In this program, we tried out a framework that aimed to share data between organizations securely and in a controlled manner. We also worked to improve the contractual framework needed to support new business based on shared data, and we conducted a study of applicable business models. Based on this, we searched for new data-based opportunities within the project consortium. The vision of data as a tradeable good, or of sharing data with external partners, has yet to come true, but we believe that we have taken steps in the right direction.

    The program started in fall 2019 and ended in April 2022. The program faced restrictions caused by COVID-19, which affected the intensity of the work during 2020 and 2021, and the program was extended by one year. Because of meeting restrictions, InDEx collaboration was realized through online meetings. We learned to work and collaborate using digital tools and environments. Despite these hindrances, and thanks to Business Finland's flexibility, the extension made it possible for most of the planned goals to be achieved.

    This report gives insights into the outcomes of the companies' work within the InDEx program. DIMECC InDEx is the first finalized program by the members of the Finnish Advanced Manufacturing Network (FAMN, www.famn.fi).