
    Analysis of On Demand and Table Driven Routing Protocol for Fire Fighter Application

    In ad hoc communication, nodes are randomly distributed in a region and move arbitrarily. In this paper, we propose an Analysis of On-demand and Table-driven routing protocols for Fire Fighter applications (AOTFF). The reactive protocols AODV and AOMDV and the proactive protocol DSDV are compared in terms of Packet Delivery Fraction (PDF) and simulation time. A firefighter model is built on top of the routing protocols to cover the maximum area by keeping track of paths that have already been used. It is observed that the reactive protocols perform better than the proactive protocol.
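    As a rough illustration of the comparison metric, the sketch below computes the Packet Delivery Fraction for each protocol from send/receive counts; the counts are invented placeholders, not results reported in the paper.

        # Minimal sketch: Packet Delivery Fraction (PDF) per routing protocol.
        # The packet counts are invented placeholders, not results from the paper.

        def packet_delivery_fraction(received: int, sent: int) -> float:
            """PDF = received data packets / sent data packets, as a percentage."""
            return 100.0 * received / sent if sent else 0.0

        # Hypothetical trace summaries: protocol -> (sent, received).
        traces = {
            "AODV":  (10_000, 9_420),
            "AOMDV": (10_000, 9_610),
            "DSDV":  (10_000, 8_150),
        }

        for protocol, (sent, received) in traces.items():
            print(f"{protocol:6s} PDF = {packet_delivery_fraction(received, sent):.1f}%")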

    Softwarization of Large-Scale IoT-based Disasters Management Systems

    The Internet of Things (IoT) enables objects to interact and cooperate with each other to reach common objectives. It is very useful in large-scale disaster management systems, where humans are likely to fail when they attempt to perform search and rescue operations in high-risk sites. IoT can indeed play a critical role in all phases of large-scale disasters (i.e., preparedness, relief, and recovery). Network softwarization aims at designing, architecting, deploying, and managing network components primarily based on software programmability properties. It relies on key technologies such as cloud computing, Network Functions Virtualization (NFV), and Software Defined Networking (SDN); the key benefits are agility and cost efficiency. This thesis proposes softwarization approaches to tackle the key challenges of large-scale IoT-based disaster management systems. A first challenge is the dynamic formation of an optimal coalition of IoT devices for the tasks at hand; meeting this challenge is critical for cost efficiency. A second challenge is interoperability: IoT environments remain highly heterogeneous, yet the IoT devices need to interact. A third challenge is Quality of Service (QoS): disaster management applications are known to be very QoS sensitive, especially when it comes to delay. To tackle the first challenge, we propose a cloud-based architecture that enables the formation of efficient coalitions of IoT devices for search and rescue tasks. The proposed architecture enables the publication and discovery of IoT devices belonging to different cloud providers, and it comes with a coalition formation algorithm. For the second challenge, we propose an NFV- and SDN-based architecture for on-the-fly IoT gateway provisioning: the gateway functions are provisioned as Virtual Network Functions (VNFs) that are chained on the fly in the IoT domain using SDN. For the third challenge, we rely on fog computing to meet the QoS requirements and propose algorithms that provision IoT application components in hybrid NFV-based clouds/fogs. Both stationary and mobile fog nodes are considered. In the case of mobile fog nodes, a Tabu Search-based heuristic is proposed; it finds a near-optimal solution, and we show numerically that it is faster than the Integer Linear Programming (ILP) solution by several orders of magnitude.
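    The abstract mentions a Tabu Search-based heuristic for placing IoT application components on hybrid cloud/fog nodes. The sketch below is a generic Tabu Search skeleton for such a placement problem; the cost model (per-node delays and capacities), the move definition, and all numbers are illustrative assumptions, not the thesis's actual formulation.

        import random

        # Hypothetical nodes: (name, per-component delay in ms, capacity). Values are made up.
        NODES = [("cloud", 50, 10), ("fog-1", 5, 2), ("fog-2", 8, 2)]
        COMPONENTS = ["sensing", "filtering", "analytics", "alerting"]

        def cost(placement):
            """Total delay plus a large penalty for exceeding a node's capacity."""
            total = sum(NODES[n][1] for n in placement)
            for idx, (_, _, cap) in enumerate(NODES):
                total += 1000 * max(0, placement.count(idx) - cap)
            return total

        def neighbours(placement):
            """Moves: reassign one component to a different node."""
            for c in range(len(placement)):
                for n in range(len(NODES)):
                    if n != placement[c]:
                        move = placement[:]
                        move[c] = n
                        yield (c, n), move

        def tabu_search(iterations=200, tenure=5):
            current = [random.randrange(len(NODES)) for _ in COMPONENTS]
            best, best_cost = current[:], cost(current)
            tabu = {}  # move -> iteration until which it stays forbidden
            for it in range(iterations):
                candidates = [(cost(p), move, p) for move, p in neighbours(current)
                              if tabu.get(move, -1) < it or cost(p) < best_cost]  # aspiration
                if not candidates:
                    break
                c, move, current = min(candidates, key=lambda x: x[0])
                tabu[move] = it + tenure
                if c < best_cost:
                    best, best_cost = current[:], c
            return best, best_cost

        placement, total = tabu_search()
        print({COMPONENTS[i]: NODES[n][0] for i, n in enumerate(placement)}, total)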

    A comprehensive survey on Fog Computing: State-of-the-art and research challenges

    Cloud computing, with its three key facets (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) and its inherent advantages (e.g., elasticity and scalability), still faces several challenges. The distance between the cloud and the end devices can be an issue for latency-sensitive applications such as disaster management and content delivery applications. Service Level Agreements (SLAs) may also impose processing at locations where the cloud provider does not have data centers. Fog computing is a novel paradigm to address such issues. It enables provisioning resources and services outside the cloud, at the edge of the network, closer to end devices, or, eventually, at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement: it enables processing at the edge while still offering the possibility to interact with the cloud. This paper presents a comprehensive survey on fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria. We cover both the architectures and the algorithms that make up fog systems. Challenges and research directions are also introduced. In addition, the lessons learned are reviewed and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as the tactile Internet.
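    As a toy illustration of the cloud/fog complementarity the survey describes, the sketch below sends latency-critical work to a nearby fog node and everything else to the cloud; the latency figures and the cost preference are invented assumptions.

        # Toy sketch: route work to fog or cloud depending on the delay bound.
        # Latency estimates and the "cloud is cheaper" assumption are invented.

        SITES = {"fog": 10, "cloud": 80}  # assumed round-trip latencies in ms

        def choose_site(delay_bound_ms: float) -> str:
            """Pick a site that meets the delay bound, preferring the (assumed cheaper) cloud."""
            feasible = [s for s, rtt in SITES.items() if rtt <= delay_bound_ms]
            if "cloud" in feasible:
                return "cloud"
            return feasible[0] if feasible else "fog"

        print(choose_site(20))   # -> fog   (tight bound, e.g. disaster alerting)
        print(choose_site(200))  # -> cloud (loose bound, e.g. batch analytics)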

    The Internet of Things and The Web of Things

    The Internet of Things is creating a new world, a quantifiable and measurable world, where people and businesses can manage their assets in better-informed ways and can make more timely and better-informed decisions about what they want or need to do. This new connected world brings with it fundamental changes to society and to consumers. This special issue of ERCIM News thus focuses on various relevant aspects of the Internet of Things and the Web of Things.

    Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems

    The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a "system effect" while coordinating and adapting to context or environmental change. Understanding and building systems exhibiting collective intelligence and autonomic capabilities represents a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-time research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially on the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt, and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour based on field calculi. Second, we conceive of a "dynamic collective computation" abstraction, also called aggregate process, formalised by an extension to the field calculus and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. Fourth, we apply and evaluate aggregate computing techniques in edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
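    The Self-organising Coordination Regions pattern partitions a network of devices into regions around elected leaders. The sketch below is a rough, centralised approximation of the region-formation step (each node joins the region of its hop-nearest leader); the real pattern is expressed as decentralised field computations in ScaFi, and the topology and leader choice here are invented for illustration.

        from collections import deque

        # Invented topology (adjacency list of device ids) and arbitrarily chosen leaders.
        GRAPH = {
            0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4],
            4: [3, 5], 5: [4, 6], 6: [5, 7], 7: [6],
        }
        LEADERS = [0, 6]

        def form_regions(graph, leaders):
            """Multi-source BFS: each node is labelled with its hop-nearest leader."""
            region = {leader: leader for leader in leaders}
            frontier = deque(leaders)
            while frontier:
                node = frontier.popleft()
                for neighbour in graph[node]:
                    if neighbour not in region:
                        region[neighbour] = region[node]
                        frontier.append(neighbour)
            return region

        print(form_regions(GRAPH, LEADERS))
        # e.g. {0: 0, 6: 6, 1: 0, 2: 0, 5: 6, 7: 6, 3: 0, 4: 6}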

    Simplifying the use of event-based systems with context mediation and declarative descriptions

    Current trends such as the proliferation of sensors and the Internet of Things lead to Cyber-Physical Systems (CPSs). In these systems, many different components communicate by exchanging events. While events provide a convenient abstraction for handling the high load these systems generate, CPSs are very complex and require expert computer scientists to handle them correctly. We realized that one of the primary reasons for this inherent complexity is that events do not carry context. We analyzed the context of events and found that it has two dimensions: context about the data of an event and context about the event itself. Context about the data includes assumptions, such as the system of measurement units or the structure of the encoded information, that are required to correctly understand the event. Context about the event itself is data that supplements the information carried by the event; for example, an event might carry positional data, and the additional information could then be the identifier of the room belonging to this position. Context about the data helps bridge the heterogeneity that CPSs possess. Event producers and consumers may have different assumptions about the data and thus interpret events in different ways. To overcome this gap, we developed the ACTrESS middleware. ACTrESS provides a model to encode interpretation assumptions in an interpretation context. Clients can thus make their assumptions explicit and send them to the middleware, which is then able to mediate between different contexts by transforming events. Through analysis of the provided contexts, ACTrESS can generate transformers, which are dynamically loaded into the system, so it does not need to rely on costly operations like reflection. To prove this, we conducted a performance study, which shows that in a content-based publish/subscribe system the overhead introduced by ACTrESS’ transformations is too small to be measurable. Because events do not carry contextual information, expert computer scientists are required to describe situations that are made up of multiple events. The fact that CPSs promise to transform our everyday life (e.g., smart homes) makes this problem even more severe, in that most of the target users cannot use CPSs. In this thesis, we developed a declarative language to easily describe situations and a desired reaction, and we provide a mechanism to translate this high-level description into executable code. The key idea is that events are contextualized, i.e., our middleware enriches each event with the missing contextual information based on the situation description. The enriched events are then correlated and combined automatically, to ultimately be able to decide whether the described situation is fulfilled or not. By generating small computational units, we achieve good parallelization and are able to scale up and down elegantly, which makes our approach particularly suitable for modern cloud architectures. We conducted a usability analysis and a performance study. The usability analysis shows that our approach significantly simplifies the definition of reactive behavior in CPSs. The performance study shows that the achieved automatic distribution and parallelization incur only a small performance cost compared to highly optimized systems such as Esper.
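    As a toy illustration of the context-mediation idea (not the actual ACTrESS API), the sketch below builds a transformer from two declared interpretation contexts, a producer that reports distances in metres and a consumer that expects feet; the context structure, event fields, and conversion table are invented.

        # Toy sketch of context mediation between an event producer and a consumer.
        # Not the ACTrESS API: contexts, event fields, and conversion rules are invented.

        CONVERSIONS = {("metre", "foot"): lambda v: v * 3.28084,
                       ("foot", "metre"): lambda v: v / 3.28084}

        def make_transformer(producer_ctx: dict, consumer_ctx: dict):
            """Build the transformer once from the two contexts, so every later event
            is converted by a plain function call rather than by reflection."""
            src, dst = producer_ctx["length_unit"], consumer_ctx["length_unit"]
            convert = CONVERSIONS.get((src, dst), lambda v: v)
            def transform(event: dict) -> dict:
                out = dict(event)
                out["distance"] = convert(out["distance"])
                return out
            return transform

        transform = make_transformer({"length_unit": "metre"}, {"length_unit": "foot"})
        print(transform({"sensor": "door-7", "distance": 2.0}))  # distance is about 6.56 feet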