
    Design and Development of Techniques to Ensure Integrity in Fog Computing Based Databases

    The advancement of information technology in the coming years will bring significant changes to the way sensitive data is processed, while the volume of data generated worldwide continues to grow rapidly. Technologies such as cloud computing, fog computing, and the Internet of Things (IoT) offer business service providers and consumers opportunities to obtain effective and efficient services and to enhance their experiences; increased availability and higher-quality services via real-time data processing add value to everyday life and improve its quality. As promising as these technological innovations are, however, they are prone to security issues such as threats to data integrity and data consistency. There is the possibility that systems may be infiltrated by malicious transactions and, as a result, data could be corrupted, which is a cause for concern. Once an attacker damages a set of data items, the damage can spread through the database: when valid transactions read corrupted data, they update other data items based on the values read. Given the sensitive nature of important data and the critical need for real-time access to support decision-making, any damage done by a malicious transaction and spread by valid transactions must be corrected immediately and accurately. In this research, we develop three novel models for employing fog computing technology in critical systems such as healthcare, intelligent government systems and critical infrastructure systems. In the first model, we present two sub-models for using fog computing in healthcare: an architecture using fog modules with heterogeneous data, and another using fog modules with homogeneous data. We propose a unique approach for each module to assess the damage caused by malicious transactions, so that the original data may be recovered and affected transactions identified for future investigation. In the second model, we introduce a model that uses fog computing in smart cities to manage utility service companies and consumer data, together with a novel technique to assess damage to the data caused by an attack, so that the original data can be recovered and the database returned to the consistent state it was in before the attack occurred. The last model focuses on designing a novel technique for an intelligent government system that uses fog computing technology to control and manage data. This part of the research proposes algorithms that sustain the integrity of system data in the event of a cyberattack, whether the system is attacked through malicious transactions or through modification of fog node data. A transaction-dependency graph is used in this model to observe and monitor the activities of every transaction; once an intrusion detection system detects malicious activity, the system promptly identifies all affected transactions. Finally, we conducted a simulation study to demonstrate the applicability and efficacy of the proposed models. The evaluation showed the models to be practicable and effective.
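    The damage-assessment step described in this abstract amounts to a reachability computation over the transaction-dependency graph: starting from the transactions flagged by the intrusion detection system, every transaction that directly or transitively read data written by an affected transaction must itself be marked as affected so its writes can be recovered. The sketch below illustrates that idea only; the function and variable names (find_affected, dependencies) are illustrative assumptions, not identifiers from the dissertation.

```python
from collections import deque

def find_affected(dependencies, malicious):
    """Return every transaction affected by the malicious set.

    dependencies maps a transaction id to the set of transaction ids
    that read data it wrote (its read-from successors).
    """
    affected = set(malicious)
    queue = deque(malicious)
    while queue:
        txn = queue.popleft()
        for successor in dependencies.get(txn, ()):
            if successor not in affected:
                affected.add(successor)   # damage spreads via the read
                queue.append(successor)
    return affected

# Example: T2 read data written by the malicious T1, and T3 read from T2,
# so both must be undone/redone even though only T1 was malicious.
deps = {"T1": {"T2"}, "T2": {"T3"}, "T4": set()}
print(sorted(find_affected(deps, {"T1"})))   # ['T1', 'T2', 'T3']
```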

    A System dynamics approach to data center capacity planning - A case study

    This thesis is an empirical study in which the System Dynamics methodology is applied to help the Chief Technical Officer of a Norwegian IT company, operating in the cloud computing industry, plan for future data center capacity. Put simply, cloud computing is the provisioning of centralized IT services and infrastructure to businesses in an on-demand, reliable, and inexpensive fashion, which is why it is sometimes loosely referred to as 'computing as a utility'. The client's main interest in this project is to gain an analysis tool that can help estimate the point in time at which the capacity limit of the company's data center in Oslo will be reached. This is a critical question for the business, since setting up a new data center has a lead time of around one year, and it is essential to start planning for such an effort well beforehand. In this thesis, a System Dynamics model is built for this purpose, with its structure based on empirical knowledge elicited from the client of the project. Rigorous testing is applied to build confidence in the reliability and usefulness of the model, and the model structure successfully replicates the historical behavior of important variables in the system. The established robustness of the model qualifies it for policy and scenario testing. A few examples of such tests are carried out and documented in this report, including tests addressing the central question of when the data center's capacity limit will be reached. The model can eventually become the basis of a management flight simulator that the client could use to try out different policies and see their consequences before implementing them in the real world. This project has been carried out with two overarching purposes, one professional and one academic. The professional goal, as already mentioned, is to help the client with medium-term capacity planning. The academic aspiration of the thesis is to establish the usefulness of the System Dynamics methodology in the fields of data center planning and cloud computing. To the best of the author's knowledge, no previous System Dynamics work has been carried out in this area. Yet, being dominated by aging chains, co-flows, accumulations, delays, and feedback, data center management is demonstrated in this thesis to be a promising area for applying System Dynamics.
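    At its core, a capacity-planning model of this kind tracks a stock of used capacity that accumulates demand over time; stepping the stock forward until it crosses the physical limit yields the estimated exhaustion date. The following is a deliberately simplified sketch of that stock-and-flow idea, not the thesis's actual model; the growth rate, capacity figures and time step are made-up illustration values.

```python
def months_until_capacity_limit(used, limit, net_inflow, growth_rate, dt=1.0):
    """Integrate a single capacity stock forward in time (Euler method).

    used        -- currently used capacity (e.g. racks or kW)
    limit       -- physical capacity of the data center
    net_inflow  -- capacity consumed per month right now
    growth_rate -- fractional monthly growth of the inflow (demand feedback)
    """
    months = 0.0
    while used < limit:
        used += net_inflow * dt               # stock accumulates the flow
        net_inflow *= 1 + growth_rate * dt    # demand grows each step
        months += dt
        if months > 600:                      # guard against runaway loops
            return None
    return months

# Illustrative numbers only: 60% utilised, 2 units/month intake, 3% growth.
print(months_until_capacity_limit(used=600, limit=1000,
                                  net_inflow=2.0, growth_rate=0.03))
```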

    Enforcing reputation constraints on business process workflows

    The problem of trust in determining the flow of execution of business processes has been at the centre of research interest over the last decade, as business processes have become a de facto model of Internet-based commerce, particularly with the increasing popularity of Cloud computing. One of the main measures of trust is reputation, where the quality of services as provided to their clients can be used as the main factor in calculating service and service-provider reputation values. The work presented here contributes to solving this problem by defining a model for the calculation of service reputation levels in a BPEL-based business workflow. These reputation levels are then used to control the execution of the workflow, based on service-level agreement constraints provided by the users of the workflow. The main contribution of the paper is to first present a formal meaning for BPEL processes constrained by reputation requirements from the users, and then to demonstrate that these requirements can be enforced using a reference architecture, illustrated with a case scenario from the domain of distributed map processing. Finally, the paper discusses possible threats that can be launched against such an architecture.
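    In outline, the enforcement idea is that each candidate service accumulates client feedback, a reputation value is computed from it, and the workflow engine only binds a service into the next activity if that value satisfies the user-supplied constraint. The sketch below is one possible reading of that gating logic, assuming an exponentially weighted rating scheme of our own choosing; the service names and functions are hypothetical, not taken from the paper.

```python
def reputation(ratings, decay=0.9):
    """Exponentially weighted reputation: recent client ratings count more.

    ratings -- list of quality scores in [0, 1], oldest first.
    """
    if not ratings:
        return 0.0
    score, weight_sum, weight = 0.0, 0.0, 1.0
    for rating in reversed(ratings):      # newest rating gets weight 1.0
        score += weight * rating
        weight_sum += weight
        weight *= decay
    return score / weight_sum

def select_service(candidates, min_reputation):
    """Pick the best candidate whose reputation meets the SLA constraint."""
    eligible = [(reputation(r), name) for name, r in candidates.items()
                if reputation(r) >= min_reputation]
    return max(eligible)[1] if eligible else None

services = {"mapTiler-A": [0.9, 0.95, 0.8], "mapTiler-B": [0.5, 0.6, 0.55]}
print(select_service(services, min_reputation=0.75))   # mapTiler-A
```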

    Cloudbus Toolkit for Market-Oriented Cloud Computing

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research. Comment: 21 pages, 6 figures, 2 tables, Conference paper
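    One concrete way to picture SLA-oriented, market-based resource management is an allocator that ranks pending requests by the revenue they offer plus the penalty avoided by honouring their SLA, and admits them while capacity lasts. The sketch below is a generic illustration of that idea only; it is not Aneka or CloudSim code, and the Request fields and pricing values are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    cores: int        # resources requested
    revenue: float    # what the customer pays
    penalty: float    # cost of violating the SLA if the request is refused

def allocate(requests, free_cores):
    """Greedy market-oriented admission: favour high value-at-risk requests."""
    accepted = []
    # Rank by (revenue + avoided penalty) per core requested.
    for req in sorted(requests,
                      key=lambda r: (r.revenue + r.penalty) / r.cores,
                      reverse=True):
        if req.cores <= free_cores:
            accepted.append(req.name)
            free_cores -= req.cores
    return accepted

reqs = [Request("batch-render", 8, 40.0, 5.0),
        Request("web-tier", 2, 15.0, 20.0),
        Request("analytics", 4, 10.0, 2.0)]
print(allocate(reqs, free_cores=10))   # ['web-tier', 'batch-render']
```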

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services and achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of the users consuming their services, so load coordination must happen automatically and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demand. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper
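    The load-coordination decision at the heart of such a federation can be pictured as a broker that, for each batch of requests, selects the member data center minimising a cost combining expected response time (which grows with utilisation) and price. This is a hedged sketch of that selection step only, not the InterCloud brokering protocol; the cost weights, latency model and site figures are invented for illustration.

```python
def pick_datacenter(datacenters, weight_time=1.0, weight_cost=0.5):
    """Choose the federation member with the lowest combined score.

    datacenters -- dict mapping a site name to a tuple of
                   (utilisation in [0, 1), base latency in ms, price/request).
    """
    def score(entry):
        utilisation, base_latency, price = entry
        # Simple queueing-style inflation: response time blows up near saturation.
        expected_time = base_latency / max(1e-6, 1.0 - utilisation)
        return weight_time * expected_time + weight_cost * price

    return min(datacenters, key=lambda name: score(datacenters[name]))

sites = {"eu-west": (0.85, 20.0, 0.10),
         "us-east": (0.40, 60.0, 0.08),
         "ap-south": (0.95, 15.0, 0.05)}
print(pick_datacenter(sites))   # us-east: lightly loaded despite higher latency
```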