
    SymbioCity: Smart Cities for Smarter Networks

    The "Smart City" (SC) concept revolves around the idea of embodying cutting-edge ICT solutions in the very fabric of future cities, in order to offer new and better services to citizens while lowering the city management costs, both in monetary, social, and environmental terms. In this framework, communication technologies are perceived as subservient to the SC services, providing the means to collect and process the data needed to make the services function. In this paper, we propose a new vision in which technology and SC services are designed to take advantage of each other in a symbiotic manner. According to this new paradigm, which we call "SymbioCity", SC services can indeed be exploited to improve the performance of the same communication systems that provide them with data. Suggestive examples of this symbiotic ecosystem are discussed in the paper. The dissertation is then substantiated in a proof-of-concept case study, where we show how the traffic monitoring service provided by the London Smart City initiative can be used to predict the density of users in a certain zone and optimize the cellular service in that area.Comment: 14 pages, submitted for publication to ETT Transactions on Emerging Telecommunications Technologie

    Green Symbiotic Cloud Communications: Virtualized Transport Layer and Cognitive Decision Function

    The evolution of the concept of cloud communications has placed a growing emphasis on virtual and abstract environments for the flow of information, structuring it in the likeness of a natural cloud. The Green Symbiotic Cloud Communications (GSCC) paradigm built on this concept enables the concurrent use of multiple communication mediums, creating a first-of-its-kind communication cloud. This paper defines the GSCC architecture in terms of a virtualized transport layer, virtualized network ports, and an abstracted Internet protocol scheme. We further address the issue of formulating a cognitive decision function based on utility theory, which allows a GSCC-enabled device to intelligently distribute its bandwidth requirement among the available communication mediums. Considering the multiple criteria associated with different networks, we formulate an optimization problem that solves this resource allocation problem for a single user. We then address the multi-user scenario and formulate and solve the resulting multi-objective optimization problem using the goal attainment technique. Results in single- and multi-user scenarios demonstrate that utilizing multiple mediums under the GSCC paradigm, coupled with our proposed decision function, improves the functionality of the communication cloud. The proposed architecture is dynamic and evolving, embedding greenness by efficiently utilizing the available resources as and when required. The achieved throughput increases linearly with the number of virtual links. Experimental results for both real-time and static data carried through the proposed scheme are documented. The augmented paradigm enhances quality of service, linearly increases throughput, and increases the overall security of communications.
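    The abstract's decision function is formulated with utility theory and solved via goal attainment; as a hedged stand-in, the sketch below splits a single device's bandwidth demand across several mediums by maximizing a weighted logarithmic utility with SciPy's SLSQP solver. The weights, capacities, and the log utility itself are assumptions for illustration, not the authors' formulation.

```python
# Hedged sketch of a utility-based split of one device's bandwidth demand across
# the available mediums, in the spirit of the cognitive decision function. The
# log utility and all numbers are illustrative assumptions; the paper itself uses
# a goal attainment formulation for the multi-user case.
import numpy as np
from scipy.optimize import minimize

weights    = np.array([1.0, 0.7, 0.4])     # preference per medium (assumed)
capacities = np.array([20.0, 10.0, 5.0])   # Mbps available on each medium (assumed)
demand     = 25.0                          # total Mbps the device requests

def neg_utility(x):
    # Diminishing-returns utility of each allocation, summed and negated for minimize()
    return -np.sum(weights * np.log1p(x))

constraints = [{"type": "eq", "fun": lambda x: np.sum(x) - demand}]
bounds = [(0.0, c) for c in capacities]
x0 = capacities * (demand / capacities.sum())   # feasible proportional start

res = minimize(neg_utility, x0, method="SLSQP", bounds=bounds, constraints=constraints)
print("allocation per medium (Mbps):", np.round(res.x, 2))
```

    The concave utility rewards spreading load across mediums while the weights bias the split toward preferred links; a goal attainment solver would instead drive each user's utility toward a per-user goal vector.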

    Automated Dynamic Resource Provisioning and Monitoring in Virtualized Large-Scale Datacenter

    Infrastructure as a Service (IaaS) is a pay-as-you-go cloud provisioning model that outsources physical servers, guest virtual machine (VM) instances, storage resources, and networking connections on demand. This article reports the design and development of our proposed symbiotic-simulation-based system to support the automated management of an IaaS-based distributed virtualized datacenter. To make the ideas work in practice, we have implemented an OpenStack-based open-source cloud computing platform. A smart benchmarking application, the Cloud Rapid Experimentation and Analysis Tool (CBTool), is utilized to gauge the resource allocation potential of our test cloud system. The real-time benchmarking metrics of the cloud are fed to a distributed multi-agent intelligence middleware layer. To optimally control the dynamic operation of the prototype datacenter, we predefine custom policies for VM provisioning and application performance profiling within the versatile cloud modeling and simulation toolkit CloudSim. Both tools used in our prototype implementation can scale up to thousands of VMs; therefore, our devised mechanism is highly scalable and can flexibly be extended to the large scale. The autonomic characteristics of the agents aid in streamlining the symbiosis between the simulation system and the IaaS cloud in a closed feedback control loop. The practical worth and applicability of the multi-agent technology lie in its inherent scalability, which allows it to be implemented efficiently within complex cloud computing environments. To demonstrate the efficacy of our approach, we have deployed a lightweight representative scenario for monitoring and provisioning virtual machines within the test bed. Experimental results indicate a notable improvement in the resource provisioning profile of the virtualized datacenter when our proposed strategy is incorporated.
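    A minimal sketch of the closed feedback control loop described above: monitored utilization metrics drive a threshold-based VM provisioning policy. The monitor_utilization stub stands in for CBTool-style metrics, and the thresholds and scaling steps are assumptions; a real integration would go through the OpenStack APIs and CloudSim policies named in the abstract.

```python
# Hedged sketch of a monitoring-and-provisioning feedback loop. The metric source
# and the provisioning "actions" are placeholders; thresholds are assumptions.
import random
import time

def monitor_utilization():
    """Placeholder for benchmarking metrics: average CPU utilization across VMs."""
    return random.uniform(0.2, 0.95)

def control_loop(vm_count=4, scale_out=0.80, scale_in=0.30, cycles=5):
    for _ in range(cycles):
        util = monitor_utilization()
        if util > scale_out:
            vm_count += 1                      # policy: provision a VM under high load
        elif util < scale_in and vm_count > 1:
            vm_count -= 1                      # policy: release a VM when mostly idle
        print(f"utilization={util:.2f} -> vm_count={vm_count}")
        time.sleep(0.1)                        # stand-in for the monitoring interval

control_loop()
```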

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenges have not been recognized, and its own methodology has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the science paradigm? What are the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing. Comment: 59 pages

    Computation Offloading and Scheduling in Edge-Fog Cloud Computing

    Resource allocation and task scheduling in the Cloud environment face many challenges, such as time delay, energy consumption, and security. Moreover, executing the computation tasks of mobile applications on mobile devices (MDs) requires a lot of resources, so these tasks can be offloaded to the Cloud. But the Cloud is far from the MDs and brings challenges such as high delay and power consumption. Edge computing, with processing near the Internet of Things (IoT) devices, has been able to reduce delay to some extent, but it suffers from its distance from the Cloud. Fog computing (FC), positioned between the sensors and the Cloud, increases speed and reduces energy consumption; thus, FC is suitable for IoT applications. In this article, we review resource allocation and task scheduling methods in Cloud, Edge and Fog environments, including traditional, heuristic, and meta-heuristic approaches. We also categorize the research related to task offloading in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), and Mobile Fog Computing (MFC). Our categorization criteria include the issue addressed, the proposed strategy, objectives, framework, and test environment.
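    For readers new to the surveyed area, the sketch below shows the basic cost comparison that most MCC/MEC/MFC offloading decisions build on: estimate the latency and device energy of executing a task locally versus transmitting it to an edge or cloud server, then offload if the weighted remote cost is lower. All parameter values and the simple weighting are illustrative assumptions, not taken from any specific surveyed paper.

```python
# Hedged sketch of a basic local-vs-remote offloading decision. CPU frequencies,
# powers, bandwidth, and the cost weighting are illustrative assumptions.
def local_cost(cycles, f_local_hz, power_w):
    t = cycles / f_local_hz
    return t, t * power_w                      # (latency in s, device energy in J)

def offload_cost(cycles, data_bits, bandwidth_bps, tx_power_w, f_server_hz):
    t_tx = data_bits / bandwidth_bps           # time to upload the task input
    t = t_tx + cycles / f_server_hz            # upload plus remote execution time
    return t, t_tx * tx_power_w                # device only spends energy transmitting

def should_offload(cycles, data_bits, w_time=0.5, w_energy=0.5):
    tl, el = local_cost(cycles, f_local_hz=1e9, power_w=0.9)
    to, eo = offload_cost(cycles, data_bits, bandwidth_bps=10e6,
                          tx_power_w=0.3, f_server_hz=10e9)
    return w_time * to + w_energy * eo < w_time * tl + w_energy * el

print(should_offload(cycles=2e9, data_bits=4e6))   # True when offloading is cheaper
```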