
    Contributions to Edge Computing

    Efforts related to the Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine-to-Machine (M2M) technologies, the Industrial Internet, and Smart Cities aim to improve society through the coordination of distributed devices and the analysis of the resulting data. By the year 2020 there will be an estimated 50 billion network-connected devices globally and 43 trillion gigabytes of electronic data. Current practices of moving data directly from end-devices to remote and potentially distant cloud computing services will not be sufficient to manage future device and data growth. Edge Computing is the migration of computational functionality to the sources of data generation. The importance of edge computing increases with the size and complexity of devices and the resulting data. In addition, the coordination of global edge-to-edge communications, shared resources, high-level application scheduling, monitoring, measurement, and Quality of Service (QoS) enforcement will be critical to address the rapid growth of connected devices and associated data. We present a new distributed agent-based framework designed to address the challenges of edge computing. This actor-model framework implementation is designed to manage large numbers of geographically distributed services, composed of heterogeneous resources and communication protocols, in support of low-latency real-time streaming applications. As part of this framework, an application description language was developed and implemented. Using the application description language, a number of high-order management modules were implemented, including solutions for resource and workload comparison, performance observation, scheduling, and provisioning. A number of hypothetical and real-world use cases are described to support the framework implementation.
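    To illustrate the kind of information such an application description language might capture, a minimal sketch follows; the field names, node capacities, and the first-fit placement rule are hypothetical assumptions for illustration, not the framework's actual language or scheduler.

```python
# Hypothetical sketch of an edge application description and a naive
# placement rule; field names and logic are illustrative assumptions,
# not the framework's actual description language.
app_description = {
    "name": "camera-analytics",
    "pipeline": [
        {"service": "frame-capture", "requires": {"cpu": 1, "mem_mb": 256}},
        {"service": "object-detect", "requires": {"cpu": 4, "mem_mb": 2048}},
    ],
}

edge_nodes = [
    {"name": "gateway-1", "cpu": 2, "mem_mb": 1024},
    {"name": "micro-dc-1", "cpu": 8, "mem_mb": 8192},
]

def place(stage, nodes):
    """Pick the first node whose remaining capacity covers the stage's needs."""
    need = stage["requires"]
    for node in nodes:
        if node["cpu"] >= need["cpu"] and node["mem_mb"] >= need["mem_mb"]:
            node["cpu"] -= need["cpu"]
            node["mem_mb"] -= need["mem_mb"]
            return node["name"]
    return None

for stage in app_description["pipeline"]:
    print(stage["service"], "->", place(stage, edge_nodes))
```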

    The Cooperative Defense Overlay Network: A Collaborative Automated Threat Information Sharing Framework for a Safer Internet

    With the ever-growing proliferation of hardware and software-based computer security exploits and the increasing power and prominence of distributed attacks, network and system administrators are often forced to make a difficult decision: expend tremendous resources on defense against sophisticated and continually evolving attacks from an increasingly dangerous Internet with varying levels of success; or expend fewer resources on defending against common attacks on "low-hanging fruit," hoping to avoid the less common but incredibly devastating zero-day worm or botnet attack. Home networks and small organizations are usually forced to choose the latter option and in so doing are left vulnerable to all but the simplest of attacks. While automated tools exist for sharing information about network-based attacks, this sharing is typically limited to administrators of large networks and dedicated security-conscious users, to the exclusion of smaller organizations and novice home users. In this thesis we propose a framework for a cooperative defense overlay network (CODON) in which participants with varying technical abilities and resources can contribute to the security and health of the Internet via automated crowdsourcing, rapid information sharing, and the principle of collateral defense.

    Managing Event-Driven Applications in Heterogeneous Fog Infrastructures

    The steady increase in digitalization propelled by the Internet of Things (IoT) has led to a deluge of generated data at an unprecedented pace. The promise of data-driven decision-making has thereby become a major innovation driver in a myriad of industries. Based on the widely used event processing paradigm, event-driven applications make it possible to analyze data in the form of event streams in order to extract relevant information in a timely manner. Most recently, graphical flow-based approaches in no-code event processing systems have been introduced to significantly lower technological entry barriers. This empowers non-technical citizen technologists to create event-driven applications comprised of multiple interconnected event-driven processing services. Still, today's event-driven applications are focused on centralized cloud deployments that come with inevitable drawbacks, especially in the context of IoT scenarios that require fast results, are limited by the available bandwidth, or are bound by privacy and security regulations. Despite recent advances in the area of fog computing, which mitigate these shortcomings by extending the cloud and moving certain processing closer to the event source, these approaches are hardly established in existing systems. Inherent fog computing characteristics, especially the heterogeneity of resources, alongside novel application management demands, particularly the aspects of geo-distribution and dynamic adaptation, pose challenges that are currently insufficiently addressed and hinder the transition to a next generation of no-code event processing systems. The contributions of this thesis enable citizen technologists to manage event-driven applications in heterogeneous fog infrastructures along the application life cycle. To this end, an approach for holistic application management is proposed which abstracts citizen technologists from the underlying technicalities. This allows present event processing systems to evolve and advances the democratization of event-driven application management in fog computing. The individual contributions of this thesis are summarized as follows: 1. A model, manifested in a geo-distributed system architecture, to semantically describe characteristics specific to node resources, event-driven applications, and their management in order to blend application-centric and infrastructure-centric realms. 2. Concepts for the geo-distributed deployment and operation of event-driven applications alongside strategies for flexible event stream management. 3. A methodology to support the evolution of event-driven applications, including methods to dynamically reconfigure, migrate, and offload individual event-driven processing services at run-time. The contributions are introduced, applied, and evaluated along two scenarios from the manufacturing and logistics domains.
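    A minimal sketch of what an event-driven application built from interconnected processing services looks like is given below; the service names, the threshold value, and the in-process wiring are illustrative assumptions, not the thesis's system.

```python
# Minimal sketch of an event-driven pipeline composed of interconnected
# processing services (source -> filter -> sink), in the spirit of
# flow-based event processing. Services and threshold are assumptions.
def temperature_source(readings):
    """Event source: emits raw sensor events."""
    for value in readings:
        yield {"sensor": "temp-1", "value": value}

def threshold_filter(events, limit=30.0):
    """Processing service: forwards only events above a limit."""
    for event in events:
        if event["value"] > limit:
            yield event

def alert_sink(events):
    """Event consumer: acts on the filtered stream."""
    for event in events:
        print("ALERT:", event)

alert_sink(threshold_filter(temperature_source([21.5, 34.2, 29.9, 41.0])))
```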

    Cloud Computing

    In recent years, Cloud Computing has become a very popular and interesting subject in the field of science and technology. Research efforts in Cloud Computing have led to a number of applications that bring convenience to daily life. Cloud Computing not only provides solutions at the enterprise level but is also suitable for organizing a centralized database that is accessible from every corner of the world. It is said that in 10 to 15 years, when all enterprises have adopted Cloud Computing, the in-house data center will no longer be a consideration. The aim of this Master’s thesis, “Cloud Computing: Server Configuration and Software Implementation for the Data Collection with Wireless Sensor Nodes”, was to integrate a Wireless Sensor Network with Cloud Computing in such a way that the data received from the sensor nodes can be accessed from anywhere in the world. To accomplish this task, a Wireless Sensor Network was deployed to measure environmental conditions such as temperature and light, along with the sensors’ battery information, and the measured values were sent to a web server from where the data can be accessed. The project also includes the software implementation to collect the sensors’ measurements and a Graphical User Interface (GUI) application which reads the values from the sensor network and stores them in the database.
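    A minimal sketch of the collection path described above follows; the endpoint URL, payload fields, and table layout are hypothetical assumptions rather than the thesis's actual server configuration.

```python
# Hedged sketch: one sensor reading (temperature, light, battery) is
# forwarded to a web server and stored in a local database a GUI could
# query later. URL and schema are illustrative assumptions.
import sqlite3
import requests

reading = {"node": "sensor-01", "temperature": 22.4, "light": 310, "battery": 87}

# Forward the measurement to a (hypothetical) collection endpoint.
requests.post("http://example.org/api/readings", json=reading, timeout=5)

# Store the same measurement locally for later retrieval.
conn = sqlite3.connect("readings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings "
    "(node TEXT, temperature REAL, light INTEGER, battery INTEGER)"
)
conn.execute(
    "INSERT INTO readings VALUES (?, ?, ?, ?)",
    (reading["node"], reading["temperature"], reading["light"], reading["battery"]),
)
conn.commit()
conn.close()
```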

    Methods and Tools for Management of Distributed Event Processing Applications

    Capturing and processing events from cyber-physical systems gives users the ability to be continuously informed about performance data and emerging problems (situational awareness) or to optimize maintenance processes based on equipment condition (condition-based maintenance). Due to the volume and frequency of the data, as well as the requirement for near-real-time evaluation, such scenarios demand suitable technologies. Under the name Event Processing, technologies have become established that are able to process data streams in real time and to detect complex event patterns based on spatial, temporal, or causal relationships. At the same time, the systems available in this area today are still characterized by the high technical complexity of the underlying declarative languages, which leads to slow development cycles for real-time applications because of the technical expertise required. Yet precisely these applications often exhibit high dynamics with respect to changing requirements of the situations to be detected, as well as of the underlying sensor data with regard to their syntax and semantics. The primary contribution of this work enables domain experts, through the abstraction of technical details, to independently create, edit, and execute distributed real-time applications in the form of so-called real-time processing pipelines. The contributions of this work can be summarized as follows: 1. A methodology for developing real-time applications that accounts for extensibility and accessibility for domain experts. 2. Models for the semantic description of the characteristics of event producers, event processing units, and event consumers. 3. A system for executing processing pipelines consisting of geographically distributed event processing units. 4. A software artifact for the graphical modeling of processing pipelines and their automated execution. The contributions are introduced, applied, and evaluated in several scenarios from the manufacturing and logistics domains.
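    To give a flavour of what semantic descriptions of event producers, processing units, and consumers might contain, a small sketch follows; the attributes and the naive schema-matching rule are assumptions for illustration, not the models defined in the thesis.

```python
# Illustrative sketch of descriptions for event producers, processing
# units and consumers, plus a naive compatibility check between them.
from dataclasses import dataclass

@dataclass
class EventProducer:
    name: str
    event_schema: dict          # e.g. {"rms": "float", "ts": "datetime"}
    protocol: str = "mqtt"

@dataclass
class ProcessingUnit:
    name: str
    consumes: dict              # required input schema
    produces: dict              # output schema
    location: str = "edge"      # placement hint

def compatible(upstream_schema: dict, required_schema: dict) -> bool:
    """Naive matching rule: every required field must be provided with the same type."""
    return all(upstream_schema.get(k) == t for k, t in required_schema.items())

producer = EventProducer("vibration-sensor", {"rms": "float", "ts": "datetime"})
unit = ProcessingUnit("peak-detector", {"rms": "float"}, {"peak": "bool"})
print(compatible(producer.event_schema, unit.consumes))  # True
```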

    Data semantic enrichment for complex event processing over IoT Data Streams

    This thesis generalizes techniques for processing IoT data streams, semantically enriching data with contextual information, and performing complex event processing in IoT applications. A case study on ECG anomaly detection and signal classification was conducted to validate the knowledge foundation.
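    A hedged sketch of the two steps named above (enrichment followed by a complex-event rule) is shown below; the context table, field names, and heart-rate threshold are illustrative assumptions, not the thesis's actual knowledge base.

```python
# Sketch: a raw ECG event is semantically enriched with patient context,
# then a simple rule flags an anomaly. All values are assumptions.
patient_context = {"ecg-42": {"patient_id": "P-7", "age": 63, "ward": "cardiology"}}

def enrich(raw_event):
    """Attach contextual information keyed by the originating device."""
    context = patient_context.get(raw_event["device"], {})
    return {**raw_event, **context}

def anomaly_rule(event, max_hr=120):
    """Toy complex-event rule: heart rate above a fixed limit."""
    return event["heart_rate"] > max_hr

raw = {"device": "ecg-42", "heart_rate": 131}
event = enrich(raw)
if anomaly_rule(event):
    print("anomaly for patient", event["patient_id"])
```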

    Edge and cluster computing as enabling infrastructure for Internet of Medical Things

    The continuous adoption of fitness and medical smart sensors is boosting the development of the Internet of Medical Things (IoMT), reshaping and revolutionizing healthcare. This digital transformation is paving the way to new forms of care based on the real-time analysis of huge amounts of data produced by sensors, which is seen as a basis for improving clinical efficiency and helping to save lives. A medical sensor typically produces several KBs of data per second, so the collection and analysis of these data can be approached with Big Data technologies. The aim of this paper is to present and evaluate a hybrid architecture for real-time anomaly detection from data streams coming from sensors attached to patients. The architecture includes an edge computing data staging platform based on the Raspberry Pi 3 for data logging, data transformation into RDF triples, and data streaming towards a computing cluster running Apache Kafka for collecting RDF streams, Apache Flink for running a parallel version of the Hierarchical Temporal Memory algorithm, and Cassandra for data storage. The different layers of the architecture have been evaluated in terms of both CPU performance and memory usage using the REALDISP dataset.
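    A minimal sketch of the edge staging step (sensor reading to RDF triples, then published to Kafka) follows; the namespace, topic name, broker address, and field names are assumptions for illustration, and a running Kafka broker is assumed.

```python
# Hedged sketch: express one observation as RDF (N-Triples) and publish it
# to a Kafka topic. Namespace, topic and broker address are assumptions.
from rdflib import Graph, Literal, Namespace
from kafka import KafkaProducer

EX = Namespace("http://example.org/iomt/")

def reading_to_rdf(sensor_id: str, heart_rate: float) -> str:
    """Serialize a single observation as N-Triples text."""
    g = Graph()
    subject = EX[f"observation/{sensor_id}"]
    g.add((subject, EX.sensor, Literal(sensor_id)))
    g.add((subject, EX.heartRate, Literal(heart_rate)))
    data = g.serialize(format="nt")
    return data if isinstance(data, str) else data.decode("utf-8")

producer = KafkaProducer(bootstrap_servers="localhost:9092")
triples = reading_to_rdf("ecg-01", 78.0)
producer.send("rdf-observations", value=triples.encode("utf-8"))
producer.flush()
```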

    Protection of Information and Communications in Distributed Systems and Microservices

    Distributed systems have been a topic of discussion since the 1980s, but the adoption of microservices has raised the number of system components considerably. With more decentralised distributed systems, new ways to handle authentication, authorisation and accounting (AAA) are needed, as well as ways to allow components to communicate between themselves securely. New standards and technologies have been created to deal with these new requirements, and many of them have already found their way into the most used systems and services globally. After covering AAA and separate access control models, we continue with ways to secure communications between two connecting parties, using Transport Layer Security (TLS) and other more specialised methods such as the Google-originated Secure Production Identity Framework for Everyone (SPIFFE). We also discuss X.509 certificates for ensuring identities. Next, both older time-tested and newer distributed AAA technologies are presented. After this, we look into communication between distributed components with both synchronous and asynchronous communication mechanisms, as well as into the publish/subscribe communication model that has grown popular with the rise of streaming platforms. This thesis also explores possibilities in securing communications between distributed endpoints and ways to handle AAA in a distributed context. This is showcased in a new software component that handles authentication through a separate identity endpoint using the OpenID Connect authentication protocol and stores identity in a cryptographically signed, JSON-formatted JSON Web Token, allowing stateless session handling as the token can be validated by checking its signature. This enables fast and scalable session management and identity handling for any distributed system.
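    A minimal sketch of the stateless session idea described above is given below; the claims, the symmetric demo key, and the HS256 algorithm are assumptions for the sketch, whereas a real deployment would validate tokens against the OpenID Connect provider's published keys.

```python
# Sketch of stateless session handling with a signed JWT: any service
# instance can validate the token by checking its signature, no shared
# session store needed. Key, claims and algorithm are assumptions.
import jwt  # PyJWT
from datetime import datetime, timedelta, timezone

SECRET = "demo-signing-key"  # assumption: symmetric key for the sketch only

def issue_token(subject: str) -> str:
    """Issued after OpenID Connect authentication has succeeded."""
    claims = {
        "sub": subject,
        "exp": datetime.now(timezone.utc) + timedelta(minutes=30),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def validate(token: str) -> dict:
    """Signature and expiry check; raises if the token is invalid."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("alice")
print(validate(token)["sub"])  # alice
```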

    Distributed Handler Architecture

    Thesis (PhD) - Indiana University, Computer Sciences, 2007. Over the last couple of decades, distributed systems have demonstrated an architectural evolution based on models including client/server, multi-tier, distributed objects, messaging and peer-to-peer. One recent evolutionary step is Service Oriented Architecture (SOA), whose goal is to achieve loose coupling among interacting software applications for scalability and interoperability. The SOA model is embodied in Web Services, which provide software platforms to build applications as services and to create seamless and loosely-coupled interactions. Web Services utilize supportive functionalities such as security, reliability, monitoring, logging and so forth. These functionalities are typically provisioned as handlers, which incrementally add new capabilities to the services by building an execution chain. Even though handlers are very important to a service, the way they are utilized is crucial to attaining the potential benefits. Every attempt to support a service with an additional functionality increases the chance of an overwhelmingly crowded chain, which makes the Web Service bloated. Moreover, a handler may become a bottleneck because of its comparably higher processing time. In this dissertation, we present the Distributed Handler Architecture (DHArch) to provide an efficient, scalable and modular architecture to manage the execution of the handlers. The system distributes the handlers by utilizing a Message Oriented Middleware and orchestrates their execution in an efficient fashion. We also present an empirical evaluation of the system to demonstrate the suitability of this architecture to cope with the issues that exist in conventional Web Service handler structures.
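    To illustrate the handler-chain idea, a small in-process sketch follows; the handler names are assumptions, and an in-memory queue with a worker thread merely stands in for the message-oriented middleware over which DHArch would actually dispatch handlers.

```python
# Illustrative sketch of a Web Service handler chain; the queue/worker
# pair stands in for remote handler nodes behind a message-oriented
# middleware. Handler names and message shape are assumptions.
import queue
import threading

def security_handler(msg):
    msg["authenticated"] = True
    return msg

def logging_handler(msg):
    print("log:", msg["body"])
    return msg

work = queue.Queue()

def worker():
    """Stand-in for a remote handler node consuming from the middleware."""
    while True:
        handler, msg, done = work.get()
        done.put(handler(msg))
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

message = {"body": "invoke service X"}
for handler in (security_handler, logging_handler):
    done = queue.Queue()
    work.put((handler, message, done))
    message = done.get()          # wait for this handler's result
print(message)
```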

    Collaborative Intrusion Detection in Federated Cloud Environments

    Moving services to the Cloud is a trend that has steadily gained popularity over recent years, with a constant increase in the sophistication and complexity of such services. Today, critical infrastructure operators are considering moving their services and data to the Cloud. Infrastructure vendors will inevitably take advantage of the benefits Cloud Computing has to offer. As Cloud Computing grows in popularity, new models are deployed to exploit its full capacity even further, one of which is the deployment of Cloud federations. A Cloud federation is an association among different Cloud Service Providers (CSPs) with the goal of sharing resources and data. In providing a larger-scale and higher-performance infrastructure, federation enables on-demand provisioning of complex services. In this paper we convey our contribution to this area by outlining a robust collaborative intrusion detection methodology for a federated Cloud environment. For collaborative intrusion detection we use the Dempster-Shafer theory of evidence to fuse the beliefs provided by the monitoring entities, taking the final decision regarding a possible attack. Protecting the federated Cloud against cyber attacks is a vital concern, due to the potential for significant economic consequences.
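    As a small worked example of the fusion step named above, the sketch below applies Dempster's rule of combination to two hypothetical monitors reporting over the frame {attack, normal}; the mass values are illustrative assumptions, not figures from the paper.

```python
# Worked example of Dempster's rule of combination for two monitors.
# Mass assignments below are illustrative assumptions.
from itertools import product

A, N = frozenset({"attack"}), frozenset({"normal"})
THETA = A | N  # total ignorance

m1 = {A: 0.6, N: 0.1, THETA: 0.3}   # belief reported by monitor 1
m2 = {A: 0.5, N: 0.2, THETA: 0.3}   # belief reported by monitor 2

def combine(m1, m2):
    """Intersect focal elements, then renormalize by the conflict mass."""
    fused, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

fused = combine(m1, m2)
# Fused belief leans strongly towards "attack" (~0.76 here).
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```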