4 research outputs found

    Modeling a dynamic data replication strategy to increase system availability in cloud computing environments

    Failures are the norm rather than the exception in cloud computing environments. To improve system availability, replicating popular data to multiple suitable locations is an advisable choice, as users can then access the data from a nearby site. This is not the case, however, for static schemes that keep a fixed number of replica copies at fixed locations. Deciding a reasonable number of replicas and the right locations for them has become a challenge in cloud computing. In this paper, a dynamic data replication strategy is put forward, together with a brief survey of replication strategies suitable for distributed computing environments. It includes: 1) analyzing and modeling the relationship between system availability and the number of replicas; 2) evaluating and identifying popular data and triggering a replication operation when its popularity passes a dynamic threshold; 3) calculating a suitable number of copies to meet a reasonable system byte effective rate requirement and placing the replicas among data nodes in a balanced way; 4) designing the dynamic data replication algorithm for a cloud. Experimental results demonstrate the efficiency and effectiveness of the improvements the proposed strategy brings to a cloud system.
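
    The abstract does not give the concrete availability model or threshold rule, so the following is a minimal sketch only. It assumes the common independent-failure model, where a file with n replicas on nodes of availability p is available with probability 1 - (1 - p)^n, and the function names (min_replicas_for, should_replicate) are hypothetical.

        import math

        def min_replicas_for(target_availability: float, node_availability: float) -> int:
            """Smallest replica count n with 1 - (1 - p)**n >= target.

            Assumes replicas fail independently; solving for n gives
            n >= log(1 - target) / log(1 - p).
            """
            if not (0 < node_availability < 1 and 0 < target_availability < 1):
                raise ValueError("availabilities must lie strictly between 0 and 1")
            n = math.log(1 - target_availability) / math.log(1 - node_availability)
            return max(1, math.ceil(n))

        def should_replicate(access_count: float, mean_access_count: float,
                             factor: float = 1.5) -> bool:
            """Dynamic-threshold trigger (assumed form): replicate a file once its
            access count exceeds a multiple of the current mean access count."""
            return access_count > factor * mean_access_count

        # Example: a 99.9% availability target on nodes that are up 95% of the
        # time needs ceil(log(0.001) / log(0.05)) = 3 balanced replicas.
        print(min_replicas_for(0.999, 0.95))  # -> 3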

    Secure Data Management and Transmission Infrastructure for the Future Smart Grid

    The power grid has played a crucial role since its inception in the Industrial Age. It has evolved from a wide-area network supplying energy to multiple incorporated areas into the largest cyber-physical system, and its security and reliability are crucial to any country’s economy and stability [1]. With the emergence of new technologies and the growing pressure of global warming, the aging power grid can no longer meet the requirements of modern industry, which has led to the proposal of the ‘smart grid’. In a smart grid, both electricity and control information travel over a massively distributed power network, so it is essential that the grid deliver real-time data through its communication network. Using smart meters, the advanced metering infrastructure (AMI) can measure energy consumption, monitor loads, collect data, and forward information to collectors. The smart grid is an intelligent network that draws on technologies not only from power engineering but also from information, telecommunications, and control. Its best-known architecture is the three-layer structure, which divides the grid into three layers, each with its own duty; together they provide a grid that monitors and optimizes the operation of all functional units, from power generation to the end customers [2]. To enhance the security of the future smart grid, deploying a highly secure data transmission scheme on critical nodes is an effective and practical approach. A critical node is a communication node in a cyber-physical network that can be developed to meet certain requirements; it carries firewalls and intrusion detection capability, which makes it useful in a time-critical network system such as the future smart grid. Deploying such a scheme can be tricky under different network topologies. A simple and general way is to install it on every node, making every node in the network critical, but this costs time, energy, and money and is clearly not the best option. We therefore propose a multi-objective evolutionary algorithm for finding the critical nodes. Optimal planning of where to embed such a scheme in a large power grid can ensure that every power station and substation operates safely and that anomalies are detected in time, offering a reliable way to meet growing security challenges. The evolutionary framework reaches good solutions without computing gradients of the objective functions, while a decomposition scheme helps explore solutions evenly across the decision space. Furthermore, constraint-handling techniques can place critical nodes at optimal locations, enhancing system security even under constraints of limited resources and hardware. Experimental results validate the efficiency and applicability of the proposed approach, and there is good reason to believe the new algorithm holds promise for real-world multi-objective optimization problems drawn from the power grid security domain. In this thesis, a cloud-based information infrastructure is proposed to deal with the big data storage and computation problems of the future smart grid; its challenges and limitations are addressed; and a new secure data management and transmission strategy for the increasing security challenges of the future smart grid is presented as well.
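
    The abstract names decomposition, gradient-free evolutionary search, and constraint handling without giving the algorithm itself; the sketch below is an assumed MOEA/D-style loop with two illustrative objectives (deployment cost and residual risk, both to keep low) and a hypothetical bitmask encoding of which communication nodes become critical nodes.

        import random

        def evaluate(mask, costs, risks):
            """Two objectives to minimize: total deployment cost of the chosen
            critical nodes and the residual risk on nodes left unprotected."""
            cost = sum(c for c, m in zip(costs, mask) if m)
            uncovered = sum(r for r, m in zip(risks, mask) if not m)
            return cost, uncovered

        def moead_sketch(n_nodes, costs, risks, pop=20, gens=200, budget=None):
            """Decomposition: subproblem i minimizes the weighted sum
            w*cost + (1 - w)*uncovered for an evenly spread weight w."""
            weights = [i / (pop - 1) for i in range(pop)]
            population = [[random.random() < 0.5 for _ in range(n_nodes)]
                          for _ in range(pop)]

            def scalar(objectives, w):
                cost, uncovered = objectives
                # Constraint handling by penalty: reject over-budget deployments.
                penalty = 0 if budget is None or cost <= budget else 1e9
                return w * cost + (1 - w) * uncovered + penalty

            for _ in range(gens):
                for i, w in enumerate(weights):
                    # Recombine with a neighbouring subproblem, then flip one bit;
                    # no gradients of the objectives are ever needed.
                    j = min(pop - 1, max(0, i + random.choice((-1, 1))))
                    child = [a if random.random() < 0.5 else b
                             for a, b in zip(population[i], population[j])]
                    k = random.randrange(n_nodes)
                    child[k] = not child[k]
                    if scalar(evaluate(child, costs, risks), w) < \
                       scalar(evaluate(population[i], costs, risks), w):
                        population[i] = child
            return population  # one trade-off solution per weight vector

        # Toy run: 12 nodes with random deployment costs and risk values.
        random.seed(1)
        costs = [random.randint(1, 9) for _ in range(12)]
        risks = [random.randint(1, 9) for _ in range(12)]
        solutions = moead_sketch(12, costs, risks, budget=25)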

    Data Replication and Its Alignment with Fault Management in the Cloud Environment

    Nowadays, exponential data growth has become one of the major challenges worldwide. It can cause a series of negative impacts such as network overloading, high system complexity, and inadequate data security. Cloud computing was developed as a novel paradigm that alleviates massive data processing challenges through its on-demand services and distributed architecture. Data replication has been proposed to distribute the data access load strategically across multiple cloud data centres by creating copies of the data at each of them. A replica-enabled cloud environment not only achieves lower response times, higher data availability, and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is still required to handle faults once they have occurred, so data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment. In this thesis, a data replication and fault management framework is proposed to establish decentralised, overarching management of the cloud environment. Three data replication strategies are first proposed based on this framework. A replica creation strategy is proposed to reduce the total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. In addition, a cloud-map-oriented, cost-efficiency-driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. Local and remote data relationships are further analysed by introducing two novel data dependency types according to data location: Within-DataCentre Data Dependency and Between-DataCentre Data Dependency. Furthermore, a network-performance-based replica selection strategy is proposed to avoid potential network overloading and, at the same time, to increase the number of concurrently running instances.
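
    The abstract gives the decision inputs (access frequency, the two dependency types, cost) but not the cost model itself; a minimal sketch under an assumed benefit-per-gigabyte score, with hypothetical names such as FileStats and replication_benefit, could rank replica creation candidates like this:

        from dataclasses import dataclass

        @dataclass
        class FileStats:
            name: str
            access_freq: float   # accesses per hour
            within_dc_deps: int  # Within-DataCentre dependencies
            between_dc_deps: int # Between-DataCentre dependencies
            size_gb: float

        def replication_benefit(f: FileStats, w_freq: float = 1.0,
                                w_within: float = 0.5, w_between: float = 2.0) -> float:
            """Hypothetical score: frequently accessed files with many
            Between-DataCentre dependencies gain most from a local replica,
            while the cost grows with the bytes that must be copied."""
            benefit = (w_freq * f.access_freq
                       + w_within * f.within_dc_deps
                       + w_between * f.between_dc_deps)
            return benefit / f.size_gb  # benefit per gigabyte replicated

        def pick_candidates(files, budget_gb: float):
            """Greedy creation decision: replicate the best benefit-per-GB
            files until the storage budget is exhausted."""
            chosen = []
            for f in sorted(files, key=replication_benefit, reverse=True):
                if f.size_gb <= budget_gb:
                    chosen.append(f)
                    budget_gb -= f.size_gb
            return chosen

        files = [FileStats("logs", 120.0, 3, 1, 40.0),
                 FileStats("index", 300.0, 1, 6, 10.0),
                 FileStats("archive", 2.0, 0, 0, 200.0)]
        for f in pick_candidates(files, budget_gb=60.0):
            print(f.name)  # "index", then "logs"; "archive" exceeds the budget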

    Smartcells : a Bio-Cloud theory towards intelligent cloud computing system

    Cloud computing is the future of web technologies and a goal for all web companies. It reinforces some old concepts for building highly scalable Internet architectures and introduces new concepts that entirely change the way applications are built and deployed. In recent years, some technology companies have adopted the cloud computing strategy, predicting that cloud computing would be the solution to Web problems such as availability. However, organizations find it almost impossible to launch the cloud idea without adopting earlier approaches such as the service-oriented paradigm. As a result of this dependency, web service problems are transferred into the cloud. Indeed, availability in the current cloud is too expensive because of service replication; some cloud services face performance problems; the majority of these services are weak with regard to security; and cloud services are discovered at random, making it difficult to select the best ones precisely among those spontaneously fabricated in an ocean of services. Moreover, it is impossible to validate cloud services, especially before runtime. Finally, according to the W3C standards, cloud services are not yet internationalized. The envisioned Web is a smart service model, yet it lacks intelligence and autonomy, which is why adopting the service-oriented model was not an ideal decision. To minimize the consequences of cloud problems and achieve more benefits, each cloud company builds its own cloud platform. Cloud vendors currently face a major problem that can be summarized as the “Cloud Platform Battle”, which will cost billions of dollars in the absence of an agreement on a standard cloud platform. Why is intelligent collaboration not applied between distributed clouds to achieve better cloud computing results? The appropriate approach is to restructure the foundations of the cloud model to resolve its issues. Multiple intelligent techniques may be used to develop advanced intelligent cloud systems. Classical examples of distributed intelligent systems include the human body, social insect colonies, flocks of vertebrates, multi-agent systems, transportation systems, multi-robot systems, and wireless sensor networks. The intelligent system imitated here is the human body, in which billions of cells work together to achieve accurate results. Inspired by the bioinformatics strategy, which uses technology to investigate biological facts (like our genes), this thesis proposes a novel Bio-Cloud strategy that imitates biological facts (like the brain and genes) to solve cloud computing issues. Based on the Bio-Cloud strategy, I have developed through this thesis project the “SmartCells” framework as a smart solution to cloud problems. The SmartCells framework covers: 1) cloud problems inherited from the service paradigm (such as issues of service reusability and security); 2) the intelligence insufficiency problem in cloud computing systems. SmartCells depends on collaboration between smart components (Cells) that take advantage of the variety of already-built web service components to produce an intelligent cloud system.