
    A Fog-based Distributed Look-up Service for Intelligent Transportation Systems

    Future intelligent transportation systems and applications are expected to benefit greatly from integration with a cloud computing infrastructure for service reliability and efficiency. More recently, fog computing has been proposed as a new computing paradigm that supports low-latency and location-aware services by moving the execution of application logic to devices at the edge of the network, in proximity to the physical systems, e.g. in the roadside infrastructure or directly in the connected vehicles. Such a distributed runtime environment can support low-latency communication with sensors and actuators, enabling functions such as real-time monitoring and remote control, e.g. remote telemetry of public transport vehicles or remote control in emergency situations. These applications will require support for some basic functionalities from the runtime. Among them, discovery of sensors and actuators will be a significant challenge, considering the large variety of sensors and actuators and their mobility. In this paper, a discovery service specifically tailored for fog computing platforms with mobile nodes is proposed. Instead of adopting a centralized approach, we propose an approach based on a distributed hash table implemented by the fog nodes, exploiting their storage and computation capabilities. The proposed approach supports multiple attributes and range queries by design. A prototype of the proposed service has been implemented and evaluated experimentally.
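To illustrate the general idea of a DHT-based discovery service run by fog nodes, here is a minimal Python sketch of a consistent-hashing ring in which each fog node stores the descriptors of the sensors/actuators that hash onto its arc. All names (`FogDHT`, the node and sensor identifiers) are hypothetical, and the sketch deliberately omits the multi-attribute range-query support that the paper proposes.

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    """Map a string key onto a 32-bit identifier ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % (2 ** 32)

class FogDHT:
    """Toy consistent-hashing DHT: each fog node owns the arc of the
    ring ending at its own identifier and stores the keys in it."""

    def __init__(self, node_names):
        # Sorted (node_id, name) pairs define the ring.
        self.nodes = sorted((ring_hash(n), n) for n in node_names)
        self.store = {name: {} for name in node_names}

    def _owner(self, key: str) -> str:
        """First node clockwise from the key's position on the ring."""
        ids = [nid for nid, _ in self.nodes]
        idx = bisect_right(ids, ring_hash(key)) % len(self.nodes)
        return self.nodes[idx][1]

    def put(self, key: str, descriptor: dict) -> None:
        self.store[self._owner(key)][key] = descriptor

    def get(self, key: str):
        return self.store[self._owner(key)].get(key)

# Registering and discovering a (hypothetical) vehicle-mounted sensor:
dht = FogDHT(["fog-a", "fog-b", "fog-c"])
dht.put("bus-42/gps", {"type": "gps", "lat": 45.07, "lon": 7.69})
print(dht.get("bus-42/gps"))
```

Because ownership is derived purely from hashing, any fog node can route a lookup without a central registry, which is the property the abstract's decentralized design relies on.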

    Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges

    If the last decade viewed computational services as a utility, then surely this decade has transformed computation into a commodity. Computation is now progressively integrated into physical networks in a seamless way that enables cyber-physical systems (CPS) and the Internet of Things (IoT) to meet their latency requirements. Similar to the concepts of "platform as a service" and "software as a service", both cloudlets and fog computing have found their own use cases. Edge devices (which we call end or user devices for disambiguation) play the role of personal computers, dedicated to a user and to a set of correlated applications. In this new scenario, the boundaries between the network node, the sensor, and the actuator are blurring, driven primarily by the computation power of IoT nodes such as single-board computers and smartphones. The larger volumes of data generated in this type of network call for clever, scalable, and possibly decentralized computing solutions that can scale independently as required. Any node can be seen as part of a graph, with the capacity to serve as a computing node, a network router node, or both. Complex applications can then be distributed over this graph or network of nodes to improve overall performance, e.g. the amount of data processed over time. In this paper, we identify this new computing paradigm, which we call Social Dispersed Computing, analyzing key themes in it, including a new outlook on its relation to agent-based applications. We illustrate this new paradigm with supportive application examples that include next-generation electrical energy distribution networks, next-generation mobility services for transportation, and applications for distributed analysis and identification of non-recurring traffic congestion in cities.
The paper analyzes the existing computing paradigms (e.g., cloud, fog, edge, mobile edge, social, etc.), resolving the ambiguity of their definitions, and analyzes and discusses the relevant foundational software technologies, the remaining challenges, and research opportunities.

Garcia Valls, MS.; Dubey, A.; Botti, V. (2018). Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges. Journal of Systems Architecture. 91:83-102. https://doi.org/10.1016/j.sysarc.2018.05.007
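The abstract's view of every node as part of a graph that can compute, route, or both can be sketched in a few lines of Python. The node names, capability flags, and the greedy least-loaded placement rule below are all hypothetical illustrations, not the paper's actual architecture.

```python
class Node:
    """A dispersed-computing node that may compute, route, or both."""

    def __init__(self, name, compute=False, router=False):
        self.name, self.compute, self.router = name, compute, router
        self.load = 0.0  # accumulated task cost

def place_tasks(nodes, tasks):
    """Greedily assign each task to the least-loaded compute-capable
    node; routers without compute capability only forward traffic."""
    workers = [n for n in nodes if n.compute]
    placement = {}
    for task, cost in tasks.items():
        target = min(workers, key=lambda n: n.load)
        target.load += cost
        placement[task] = target.name
    return placement

# A tiny (hypothetical) overlay: a phone that computes and routes,
# a pure gateway/router, and a single-board computer.
overlay = [Node("phone", compute=True, router=True),
           Node("gateway", router=True),
           Node("sbc", compute=True)]
print(place_tasks(overlay, {"detect": 3, "aggregate": 1, "report": 1}))
```

The point of the sketch is only that placement decisions can be made over the whole graph rather than at a single cloud endpoint, which is the decentralization the abstract argues for.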

    Optimized Deep Learning Schemes for Secured Resource Allocation and Task Scheduling in Cloud Computing - A Survey

    Scheduling involves allocating shared resources over time so that tasks can be completed within a predetermined time frame. In Task Scheduling (TS) and Resource Allocation (RA), the term is applied to tasks and resources, respectively. Scheduling is widely used in Cloud Computing (CC), computer science, and operations management. Effective scheduling ensures that systems operate efficiently, decisions are made effectively, resources are used well, costs are kept to a minimum, and productivity is increased. High energy consumption, low CPU utilization, long execution times, and low robustness are the most frequent problems in TS and RA in CC. This survey discusses RA and TS approaches based on deep learning (DL) and machine learning (ML), and examines the methods employed by DL-based RA and TS in CC. The advantages, disadvantages, and merits of each approach are also explored. The work's primary contribution is an analysis and assessment of DL-based RA and TS methodologies that pinpoints open problems in cloud computing.
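For context on the TS problem the survey addresses, here is a minimal Python sketch of a classic greedy list-scheduling heuristic (longest task first onto the VM that would finish it earliest). This is a textbook baseline, not one of the DL-based methods the survey covers, and the task sizes and VM speeds are hypothetical.

```python
def schedule(tasks, vm_speeds):
    """Assign tasks (name -> size) to VMs (index -> speed) so as to
    greedily minimize each task's finish time; returns the assignment
    and the resulting makespan."""
    ready = [0.0] * len(vm_speeds)  # time at which each VM becomes free
    assignment = {}
    # Longest-processing-time-first ordering improves the greedy bound.
    for name, size in sorted(tasks.items(), key=lambda kv: -kv[1]):
        vm = min(range(len(vm_speeds)),
                 key=lambda i: ready[i] + size / vm_speeds[i])
        ready[vm] += size / vm_speeds[vm]
        assignment[name] = vm
    return assignment, max(ready)

# Two VMs, the second twice as fast; the large task goes to the fast VM.
asg, makespan = schedule({"a": 4, "b": 2}, [1.0, 2.0])
print(asg, makespan)  # → {'a': 1, 'b': 0} 2.0
```

Learned schedulers of the kind surveyed here aim to beat such heuristics on the metrics the abstract lists: energy, CPU utilization, execution time, and robustness.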