133 research outputs found

    Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges

    If the last decade viewed computational services as a utility, this decade has transformed computation into a commodity. Computation is now progressively and seamlessly integrated into physical networks, enabling cyber-physical systems (CPS) and the Internet of Things (IoT) to meet their latency requirements. Similar to the concepts of "platform as a service" and "software as a service", both cloudlets and fog computing have found their own use cases. Edge devices (which we call end or user devices for disambiguation) play the role of personal computers, dedicated to a user and to a set of correlated applications. In this new scenario, the boundaries between the network node, the sensor, and the actuator are blurring, driven primarily by the computation power of IoT nodes such as single-board computers and smartphones. The large volume of data generated in this type of network needs clever, scalable, and possibly decentralized computing solutions that can scale independently as required. Any node can be seen as part of a graph, with the capacity to serve as a computing node, a network router, or both. Complex applications can be distributed over this graph, or network of nodes, to improve overall performance, for example the amount of data processed over time. In this paper, we identify this new computing paradigm, which we call Social Dispersed Computing, analyzing key themes in it, including a new outlook on its relation to agent-based applications. We architect this new paradigm by providing supportive application examples, including next-generation electrical energy distribution networks, next-generation mobility services for transportation, and applications for distributed analysis and identification of non-recurring traffic congestion in cities. The paper analyzes the existing computing paradigms (e.g., cloud, fog, edge, mobile edge, social), resolving the ambiguity of their definitions, and discusses the relevant foundational software technologies, the remaining challenges, and research opportunities.
    Garcia Valls, MS.; Dubey, A.; Botti, V. (2018). Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges. Journal of Systems Architecture. 91:83-102. https://doi.org/10.1016/j.sysarc.2018.05.007
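    A toy illustration of the graph view described above: nodes that can compute, route, or both, with an application's tasks spread greedily over the computing nodes. The data structure and the least-loaded placement policy are illustrative assumptions, not the paper's method (Python):

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            can_compute: bool
            can_route: bool
            capacity: int                         # how many tasks the node can host
            neighbours: list[str] = field(default_factory=list)

        def place_tasks(tasks: list[str], graph: dict[str, Node]) -> dict[str, str]:
            """Assign each task to a computing node with spare capacity (least-loaded first)."""
            placement: dict[str, str] = {}
            load = {name: 0 for name in graph}
            for task in tasks:
                candidates = [n for n in graph.values()
                              if n.can_compute and load[n.name] < n.capacity]
                if not candidates:
                    raise RuntimeError("no spare computing capacity")
                target = min(candidates, key=lambda n: load[n.name])
                placement[task] = target.name
                load[target.name] += 1
            return placement

        graph = {
            "phone": Node("phone", True, False, 1, ["router"]),
            "router": Node("router", False, True, 0, ["phone", "sbc"]),
            "sbc": Node("sbc", True, True, 2, ["router"]),
        }
        print(place_tasks(["sense", "filter", "aggregate"], graph))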

    Edge Computing for Extreme Reliability and Scalability

    The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of this data at a central cloud server is inefficient, and in some cases infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of the data (e.g., on gateways and on edge micro-servers) not only reduces the heavy workload of the central cloud, but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.

    Energy-aware and adaptive fog storage mechanism with data replication ruled by spatio-temporal content popularity

    Data traffic demand is increasing at a very fast pace in edge networking environments, with strict requirements on latency and throughput. To fulfil these requirements, among others, this paper proposes a fog storage system that incorporates mobile nodes as content providers. This fog storage system has a hybrid design: it not only brings data closer to edge consumers but, as a novelty, also incorporates other relevant functional aspects into the system, namely user data demand, energy consumption, and node distance. The decision whether to replicate data is based on an original edge service managed by an adaptive distance metric for node clustering. The adaptive distance is evaluated from several important system parameters, such as the distance from the consumer to the data storage location, the spatio-temporal data popularity, and the autonomy of each battery-powered node. Testbed results show that this flexible cluster-based proposal offers more responsive data access to consumers, reduces core traffic, and depletes the available battery energy of edge nodes in a fair way.
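    A minimal sketch of how such an adaptive distance metric might combine the parameters the abstract lists (consumer distance, content popularity, battery autonomy). The weights, field names, and threshold are illustrative assumptions, not the paper's actual formula (Python):

        from dataclasses import dataclass

        @dataclass
        class Node:
            node_id: str
            distance_to_consumer: float   # e.g. hop count
            battery_level: float          # 0.0 (empty) .. 1.0 (full)

        def adaptive_distance(node: Node, popularity: float,
                              w_dist: float = 0.5, w_pop: float = 0.3,
                              w_batt: float = 0.2) -> float:
            """Lower score = better candidate to hold a replica."""
            # Popular content and well-charged nodes reduce the effective distance.
            return (w_dist * node.distance_to_consumer
                    - w_pop * popularity
                    - w_batt * node.battery_level)

        def should_replicate(node: Node, popularity: float, threshold: float = 1.0) -> bool:
            return adaptive_distance(node, popularity) < threshold

        edge_node = Node("n1", distance_to_consumer=2.0, battery_level=0.8)
        print(should_replicate(edge_node, popularity=0.9))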

    Architectural model for Collaboration in The Internet of Things : a Fog Computing based approach

    Through sensors, actuators, and other Internet-connected devices, applications and services are becoming able to perceive and act on the real world. Seamlessly integrating people and devices is no longer a futuristic idea. Converging the physical world with the human-made realm into one network is a present and promising approach called the Internet of Things (IoT). A closer look at the phenomenon of IoT reveals many problems. Current trends focus on Cloud-centric approaches to deal with the heterogeneity and the scale of this network. The blessing of Cloud computing becomes, however, a burden for latency-sensitive applications, which require processing and storage mechanisms in their proximity to meet low-latency, location-awareness, and context-awareness requirements, in addition to mobility support and high geographical distribution. Fog computing is a new concept that extends the Cloud paradigm to the edge of the Internet of Things by providing communication, computing, and access-management support. This research project foresees and is driven by the promising opportunities of the concept behind Fog computing. In this thesis, we leverage this new concept by delivering a Collaboration Architecture for Fog computing. This architecture constitutes a reference model for designing and implementing Fog platforms. It offers a level of abstraction that makes development and deployment at the Fog nodes easier and more efficient. Moreover, it provides a nest where IoT-connected objects can interact and collaborate. To this end, we introduce expressive mechanisms to define and abstract objects, data analytics, and services. To equip Fog nodes with dynamic services and service-based collaboration, we propose the concept of Operation: a formal way to dynamically generate new services through mechanisms such as aggregation, composition, and transformation. Finally, we deliver a comprehensive study and a collaboration-oriented access control model for the proposed architecture.
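    A minimal sketch of the Operation idea: deriving new services from existing ones by aggregation, composition, and transformation. The Python names and signatures are illustrative assumptions, not the thesis' API:

        from typing import Callable, Iterable

        Service = Callable[[dict], dict]   # a service maps an input record to an output record

        def compose(*services: Service) -> Service:
            """Chain services: the output of one feeds the next."""
            def composed(data: dict) -> dict:
                for svc in services:
                    data = svc(data)
                return data
            return composed

        def aggregate(services: Iterable[Service]) -> Service:
            """Run several services on the same input and merge their outputs."""
            def aggregated(data: dict) -> dict:
                merged: dict = {}
                for svc in services:
                    merged.update(svc(data))
                return merged
            return aggregated

        def transform(service: Service, post: Callable[[dict], dict]) -> Service:
            """Wrap a service with a post-processing step."""
            return compose(service, post)

        # Example: two tiny IoT services combined into a new one at a fog node.
        temperature = lambda d: {"temp_c": d["raw_temp"] / 10}
        humidity = lambda d: {"humidity": d["raw_hum"]}
        climate = aggregate([temperature, humidity])
        print(climate({"raw_temp": 215, "raw_hum": 40}))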

    MEC vs MCC: performance analysis of real-time applications

    Numerous applications, such as Augmented Reality (AR), Virtual Reality (VR), and real-time online gaming, are resource-intensive and are consequently pushing the computational requirements and energy demands of mobile devices beyond their capabilities. Although the mobile cloud architecture offers practical and functional platforms, these emerging applications present several challenges regarding latency, energy consumption, context awareness, and privacy. Mobile Edge Computing (MEC) is a new, resourceful, intermediary technology that addresses the performance hurdles faced by Mobile Cloud Computing (MCC) by bringing computing and storage closer to the network edge. This work introduces the MEC architecture and some edge computing implementations. It presents the reference architecture of the cloudlet technology and provides a comparison with the architecture model that is under standardization by ETSI. MEC can offload intensive tasks from applications to enhance computation, responsiveness, and battery life of mobile devices. The objective of this work is to study and evaluate the performance of the MEC and MCC architectures for provisioning the offloading of intensive tasks from compute-intensive applications. Test scenarios were set up with use cases of this kind of application for both MEC and MCC implementations. The test results support the conclusion that MEC presents better performance than cloud computing regarding latency and user quality of experience. Moreover, the results quantify the effective benefit of the MEC approach.

    Towards a Cognitive Compute Continuum: An Architecture for Ad-Hoc Self-Managed Swarms

    In this paper we introduce our vision of a Cognitive Computing Continuum to address the shift of IT service provisioning towards a distributed, opportunistic, self-managed collaboration between heterogeneous devices outside the traditional data-center boundaries. The focal point of this continuum is cognitive devices, which have to make decisions autonomously, using their on-board computation and storage capacity, based on information sensed from their environment. Such devices are moving and cannot rely on fixed infrastructure elements; instead, they realise on-the-fly networking and thus frequently join and leave temporal swarms. All this creates novel demands on the underlying architecture and resource management, which must bridge the gap from edge to cloud environments while keeping the QoS parameters within required bounds. The paper presents an initial architecture and a resource management framework for the implementation of this type of IT service provisioning.
    Comment: 8 pages, CCGrid 2021 Cloud2Things Workshop

    CoSMiC: A hierarchical cloudlet-based storage architecture for mobile clouds

    Storage capacity is a constraint for current mobile devices. Mobile Cloud Computing (MCC) was developed to augment device capabilities, letting mobile users store and access large datasets in the cloud through wireless networks. However, given the limitations of network bandwidth, latency, and device battery life, new solutions are needed to extend the usage of mobile devices. This paper presents a novel design and implementation of a hierarchical cloud storage system for mobile devices based on multiple I/O caching layers. The solution relies on Memcached as a cache system, preserving its powerful capabilities such as performance, scalability, and quick, portable deployment, and aims to reduce the I/O latency of current mobile cloud solutions. It consists of a user-level library and extended Memcached back-ends, and is hierarchical in that Memcached-based I/O cache servers are deployed across the entire I/O infrastructure datapath. Our experimental results demonstrate that CoSMiC can significantly reduce the round-trip latency compared with a 3G connection, even in the presence of low cache hit ratios and when using a multi-level cache hierarchy.
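    A minimal sketch of a multi-level cache lookup of the kind described: each tier is consulted from the device outwards, and hits are promoted towards the edge. This is an illustrative assumption, not the CoSMiC implementation or its Memcached integration (Python):

        from typing import Optional

        class CacheTier:
            def __init__(self, name: str):
                self.name = name
                self._store: dict[str, bytes] = {}

            def get(self, key: str) -> Optional[bytes]:
                return self._store.get(key)

            def set(self, key: str, value: bytes) -> None:
                self._store[key] = value

        class HierarchicalCache:
            def __init__(self, tiers: list[CacheTier]):
                self.tiers = tiers  # ordered from closest (device) to farthest (cloud)

            def get(self, key: str) -> Optional[bytes]:
                for i, tier in enumerate(self.tiers):
                    value = tier.get(key)
                    if value is not None:
                        # Promote the value to all closer tiers to cut future latency.
                        for closer in self.tiers[:i]:
                            closer.set(key, value)
                        return value
                return None

        cache = HierarchicalCache([CacheTier("device"), CacheTier("cloudlet"), CacheTier("cloud")])
        cache.tiers[-1].set("photo.jpg", b"...")
        print(cache.get("photo.jpg"))  # found in the cloud tier, then cached closer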

    Multisite adaptive computation offloading for mobile cloud applications

    The sheer number of mobile devices and their fast adaptability have contributed to the proliferation of modern advanced mobile applications. These applications are latency-critical and demand high availability. They also often require intensive computation resources and incur excessive energy consumption, while a mobile device has limited computation and energy capacity because of its physical size constraints. The heterogeneous mobile cloud environment consists of different computing resources: remote cloud servers in faraway data centres, cloudlets whose goal is to bring the cloud closer to the users, and nearby mobile devices that can be used to offload mobile tasks. Heterogeneity across mobile devices and the different sites includes software, hardware, and technology variations. Resource-constrained mobile devices can leverage this shared resource environment to offload their intensive tasks, conserving battery life and improving overall application performance. However, in such a loosely coupled network dominated by mobile devices, new challenges and problems arise: how to seamlessly leverage mobile devices alongside all the offloading sites, how to simplify deploying the runtime environment that serves offloading requests from mobile devices, how to identify which parts of the mobile application to offload, how to decide whether to offload them, and how to select the most suitable candidate offloading site, among others. To overcome these challenges, this research work contributes the design and implementation of MAMoC, a loosely coupled end-to-end mobile computation offloading framework. Mobile applications can be adapted to the client library of the framework, while the server components are deployed to the offloading sites to serve offloading requests. The evaluation of the offloading decision engine demonstrates the viability of the proposed solution for managing seamless and transparent offloading in distributed and dynamic mobile cloud environments. All the implemented components of this work are publicly available at the following URL: https://github.com/mamoc-repo
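    A minimal sketch of the kind of decision such an offloading engine makes: estimate completion time (transfer plus processing) for the local device and each candidate site, then pick the fastest. The cost model, names, and numbers are illustrative assumptions, not MAMoC's actual decision engine (Python):

        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            cycles_per_sec: float   # processing speed
            bandwidth_bps: float    # uplink bandwidth to the site (0 for local execution)
            rtt_sec: float          # network round-trip time (0 for local execution)

        def completion_time(task_cycles: float, payload_bits: float, site: Site) -> float:
            transfer = 0.0 if site.bandwidth_bps == 0 else payload_bits / site.bandwidth_bps + site.rtt_sec
            return transfer + task_cycles / site.cycles_per_sec

        def choose_site(task_cycles: float, payload_bits: float, sites: list[Site]) -> Site:
            return min(sites, key=lambda s: completion_time(task_cycles, payload_bits, s))

        local = Site("local", cycles_per_sec=1e9, bandwidth_bps=0, rtt_sec=0)
        cloudlet = Site("cloudlet", cycles_per_sec=8e9, bandwidth_bps=50e6, rtt_sec=0.01)
        cloud = Site("cloud", cycles_per_sec=32e9, bandwidth_bps=20e6, rtt_sec=0.08)
        best = choose_site(task_cycles=4e9, payload_bits=8e6, sites=[local, cloudlet, cloud])
        print(best.name)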

    Do we all really know what a fog node is? Current trends towards an open definition

    Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure, and how applications can make use of this infrastructure, there is no common picture of what fog computing, and particularly a fog node as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud” located at the edge of the network and implemented through a variety of edge devices interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case, or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts and the lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on the core functionalities of a fog node as well as on the accompanying opportunities and challenges towards their practical realization in the near future.