425 research outputs found
CLOSURE: A cloud scientific workflow scheduling algorithm based on attack-defense game model
The multi-tenant coexistence service mode exposes cloud-based scientific workflows to intrusion risks. To address this problem, we propose a CLoud scientific wOrkflow SchedUling algoRithm based on attack-defensE game model (CLOSURE). In the algorithm, attacks based on different operating system vulnerabilities are regarded as different “attack” strategies, and different operating system distributions in the virtual machine cluster executing the workflows are regarded as different “defense” strategies. The information available to the attacker and the defender is asymmetric: the defender cannot obtain information about the attacker’s strategies, while the attacker can acquire information about the defender’s strategies through a network scan. We therefore propose to dynamically switch the defense strategies during workflow execution, which weakens the effect of network scans and transforms the workflow security problem into an attack-defense game. The probability distribution of the optimal mixed defense strategy can then be obtained by calculating the Nash equilibrium of the attack-defense game model, and diverse VMs are provisioned for workflow execution according to this distribution. Furthermore, a task-VM mapping algorithm based on dynamic Heterogeneous Earliest Finish Time (HEFT) is presented to accelerate defense-strategy switching and improve workflow efficiency. Experiments conducted in both simulated and real environments demonstrate that, compared with other algorithms, the proposed algorithm reduces the attacker’s benefit by around 15.23% and decreases the time cost of the algorithm by around 7.86%.
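The game-theoretic core of this abstract can be illustrated with a minimal 2x2 zero-sum game. The payoff matrix below is invented for illustration and is not the paper's model; it merely shows how a mixed defense strategy, computed from the Nash equilibrium, lowers the attacker's expected benefit below any pure defense.

```python
# Minimal sketch of the idea behind CLOSURE (illustrative payoffs, not the
# paper's model): rows are attack strategies (exploit OS-A or OS-B
# vulnerabilities), columns are defense strategies (provision VMs running
# OS-A or OS-B). Entries are the attacker's benefit.

# benefit[i][j]: attacker's payoff when attack i meets defense j
benefit = [[10.0, 1.0],   # attack on OS-A pays off only if OS-A is deployed
           [2.0,  8.0]]   # attack on OS-B pays off only if OS-B is deployed

def defender_mixed_strategy(a):
    """Closed-form Nash equilibrium of a 2x2 zero-sum game without a saddle
    point: the defender's probability q of playing column 0 equalizes the
    attacker's expected payoff across both attack rows."""
    denom = a[0][0] - a[0][1] - a[1][0] + a[1][1]
    q = (a[1][1] - a[0][1]) / denom
    value = q * a[0][0] + (1 - q) * a[0][1]  # attacker's expected benefit
    return q, value

q, value = defender_mixed_strategy(benefit)
print(f"P(defend with OS-A) = {q:.4f}, attacker's expected benefit = {value:.2f}")
# Any pure defense would yield the attacker max(10, 2) = 10 or max(1, 8) = 8;
# the mixed strategy cuts the attacker's expected benefit to 5.2.
```

In the paper, the defender samples OS distributions from such a mixed strategy and switches them during execution, which is what blunts the attacker's network scan.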
Security and Energy-aware Collaborative Task Offloading in D2D communication
Device-to-device (D2D) communication establishes direct links among mobile devices (MDs) to reduce communication delay and increase network capacity over the underlying wireless networks. Existing D2D schemes for task offloading focus on system throughput, energy consumption, and delay without considering data security. This paper proposes Security and Energy-aware Collaborative Task Offloading for D2D communication (Sec2D). Specifically, we first build a novel security model, in terms of the number of CPU cores, CPU frequency, and data size, for measuring the security workload on heterogeneous MDs. We then formulate the collaborative task offloading problem that minimizes the time-average delay and energy consumption of MDs while ensuring data security. To meet this goal, the Lyapunov optimization framework is applied to implement online decision-making. Two solutions with different time complexities, a greedy approach and an optimal approach, are proposed to solve the resulting mixed-integer linear programming (MILP) problem. Theoretical proofs demonstrate that Sec2D achieves an [O(1/V), O(V)] energy-delay tradeoff. Simulation results show that Sec2D guarantees both data security and system stability in the collaborative D2D communication environment.
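The [O(1/V), O(V)] tradeoff comes from the Lyapunov drift-plus-penalty method, which can be sketched generically. The toy below is not the Sec2D algorithm and all parameters are invented; it only shows the mechanism: each slot, a device weighs the energy penalty (scaled by V) against the queue backlog, so a larger V saves energy at the cost of a longer queue (i.e., delay).

```python
# Generic drift-plus-penalty sketch of the Lyapunov framework Sec2D builds
# on (not the paper's algorithm; all parameters are illustrative). Each
# time slot, the device either idles or serves its task queue; serving
# costs energy. The decision minimizes V*energy - Q*service, trading the
# penalty (energy) against queue drift (delay).

def drift_plus_penalty(slots=50, V=2.0, arrival=1.0, service=2.0, power=3.0):
    Q = 0.0                 # task-queue backlog, a proxy for delay
    history = []
    for _ in range(slots):
        # serve iff the backlog reduction outweighs the weighted energy cost
        serve = Q * service > V * power
        b = service if serve else 0.0
        Q = max(Q + arrival - b, 0.0)
        history.append(Q)
    return history

backlog = drift_plus_penalty()
print(f"max backlog = {max(backlog)}")  # bounded; grows roughly linearly in V
```

With these numbers the device only serves once the backlog exceeds V*power/service = 3 tasks, so the queue oscillates between 3 and 4 rather than growing without bound, which is the stability half of the tradeoff.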
Towards mobile cloud computing with single sign-on access
This is a post-peer-review, pre-copyedit version of an article published in Journal of Grid Computing. The final authenticated version is available online at: http://dx.doi.org/10.1007/s10723-017-9413-3
The low computing power of mobile devices impedes the development of mobile applications with a heavy computing load. Mobile Cloud Computing (MCC) has emerged as the solution to this by connecting mobile devices with the “infinite” computing power of the Cloud. As mobile devices typically communicate over untrusted networks, it becomes necessary to secure the communications to avoid breaches of privacy-sensitive data. This paper presents work on implementing MCC applications with secure communications. For that purpose, we built on COMPSs-Mobile, a redesigned implementation of the COMP Superscalar (COMPSs) framework aimed at MCC platforms. COMPSs-Mobile automatically exploits the parallelism inherent in an application and orchestrates its execution on a loosely-coupled distributed environment. To avoid vendor lock-in, this extension leverages the Generic Security Services Application Program Interface (GSSAPI) (RFC 2743) as a generic way to access security services that provide communications with authentication, secrecy, and integrity. Besides, GSSAPI allows applications to take advantage of more advanced features, such as Federated Identity or Single Sign-On, which the underlying security framework may provide. To validate the practicality of the proposal, we use Kerberos as the security services provider to implement SSO; however, applications do not authenticate themselves, and users are required to obtain and place the credentials beforehand. To evaluate the performance, we conducted tests running an application on a smartphone that offloads tasks to a private cloud.
Our results show that the overhead of securing the communications is acceptable. This work has been supported by the Spanish Government (contracts TIN2012-34557, TIN2015-65316-P and grants BES-2013-067167, EEBB-I-15-09808 of the Research Training Program and SEV-2011-00067 of the Severo Ochoa Program), by the Generalitat de Catalunya (contract 2014-SGR-1051), and by the European Commission (ASCETiC project, FP7-ICT-2013.1.2 contract 610874). The second author was partially supported by the European Commission's Horizon 2020 programme under grant agreement 653965 (AARC).
Immersive interconnected virtual and augmented reality : a 5G and IoT perspective
Despite remarkable advances, current augmented and virtual reality (AR/VR) applications are a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, remains a distant dream. The great barrier standing between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomforts. Bringing AR/VR to the next level to enable immersive interconnected AR/VR will require significant advances towards 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges of enabling a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each of them, we delve into the current state of the art and the challenges that need to be addressed before the dream of remote AR/VR interaction can become reality.
Advances in Grid Computing
This book approaches grid computing with a perspective on the latest achievements in the field, providing insight into current research trends and advances, and presenting a wide range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms, and quantum encryption are considered in order to explain two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing concerning architecture and development, and includes a diverse range of applications for grid computing, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.
Deep Data Locality on Apache Hadoop
The amount of data being collected in areas such as social media, networks, scientific instruments, mobile devices, and sensors is growing continuously, and the technology to process it is also advancing rapidly. One of the fundamental technologies for processing big data is Apache Hadoop, which has been adopted by many commercial products, such as InfoSphere by IBM or Spark by Cloudera. MapReduce on Hadoop has been widely used in many data science applications. As a dominant big data processing platform, the performance of MapReduce on the Hadoop system has a significant impact on big data processing capability across multiple industries. Most research on improving the speed of big data analysis has focused on Hadoop modules such as Hadoop Common, the Hadoop Distributed File System (HDFS), Hadoop Yet Another Resource Negotiator (YARN), and Hadoop MapReduce. In this research, we focused on data locality in HDFS to improve the performance of MapReduce. To reduce the amount of data transfer, MapReduce has long exploited data locality. However, even though the majority of the processing cost occurs in the later stages, data locality has been exploited only in the early stages, which we call Shallow Data Locality (SDL). As a result, the benefit of data locality has not been fully realized. We have explored a new concept called Deep Data Locality (DDL), where the data is pre-arranged to maximize locality in the later stages. Specifically, we introduce two implementation methods of DDL: block-based DDL and key-based DDL.
In block-based DDL, the data blocks are pre-arranged to reduce block copying time in two ways. First, Rack-Local Map (RLM) blocks are eliminated. Under the conventional default block placement policy (DBPP), data blocks are randomly placed on any available slave nodes, which requires copying RLM blocks across nodes within a rack. In block-based DDL, blocks are placed so as to avoid RLMs, reducing block copy time. Second, block-based DDL concentrates the blocks on a smaller number of nodes, reducing the data transfer time among them. We analyzed the block distribution status with the customer review data from TripAdvisor and measured the performance with the TeraSort benchmark. Our test results show that the execution times of Map and Shuffle improved by up to 25% and 31%, respectively.
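The effect of block placement on rack-local copies can be sketched with a toy cluster. The layout, node names, and placement functions below are invented for illustration and are not the dissertation's actual placement algorithm; they only contrast a default-like spread against a concentrated placement.

```python
# Toy sketch of the block-based DDL idea (invented cluster, not the actual
# algorithm): under round-robin placement, blocks land on every rack, so
# map tasks scheduled on one rack must copy the remote blocks (rack-local
# maps, RLM). Concentrating blocks on the nodes that will run the maps
# removes those copies.

nodes = {"n1": "rack0", "n2": "rack0", "n3": "rack1", "n4": "rack1"}
blocks = [f"b{i}" for i in range(8)]

def round_robin(blocks, nodes):
    """Default-like placement: spread blocks evenly over all nodes."""
    names = list(nodes)
    return {b: names[i % len(names)] for i, b in enumerate(blocks)}

def concentrated(blocks, nodes, target_rack="rack0"):
    """DDL-like placement: keep all blocks on the rack running the maps."""
    names = [n for n, r in nodes.items() if r == target_rack]
    return {b: names[i % len(names)] for i, b in enumerate(blocks)}

def rlm_copies(placement, nodes, map_rack="rack0"):
    """Blocks stored outside the rack running the maps need a network copy."""
    return sum(1 for node in placement.values() if nodes[node] != map_rack)

print(rlm_copies(round_robin(blocks, nodes), nodes))   # half the blocks are remote
print(rlm_copies(concentrated(blocks, nodes), nodes))  # no remote copies
```

The gap between the two counts is the block-copy traffic that block-based DDL eliminates.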
In key-based DDL, the input data is divided into several blocks and stored in HDFS before going into the Map stage. In contrast to conventional blocks, which contain random keys, each of our blocks holds a unique key. This requires pre-sorting of the key-value pairs, which can be done during the ETL process. It eliminates some data movement in the Map, Shuffle, and Reduce stages and thereby improves performance. In our experiments, MapReduce with key-based DDL performed 21.9% faster than default MapReduce and 13.3% faster than MapReduce with block-based DDL. Additionally, key-based DDL can be combined with other methods to further improve performance: when key-based DDL and block-based DDL are combined, Hadoop performance improves by 34.4%.
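The key-based idea can be illustrated with a small key-value dataset. The "shuffle cost" metric below is my own crude proxy (for each key, the number of extra blocks it is spread across), not the dissertation's measurement; it just shows why pre-sorting by key during ETL shrinks cross-block data movement.

```python
# Toy illustration of key-based DDL (assumed details, not the actual
# implementation): pre-sorting key-value pairs so each HDFS block holds a
# single key means each reducer's input already sits in few blocks,
# shrinking shuffle traffic.

def make_blocks(pairs, block_size):
    """Split a list of (key, value) pairs into fixed-size blocks."""
    return [pairs[i:i + block_size] for i in range(0, len(pairs), block_size)]

def shuffle_cost(blocks):
    """For each key, count the extra blocks it is spread across."""
    spread = {}
    for idx, block in enumerate(blocks):
        for key, _ in block:
            spread.setdefault(key, set()).add(idx)
    return sum(len(s) - 1 for s in spread.values())

pairs = [(k, v) for v in range(4) for k in "abc"]   # interleaved a, b, c, ...
random_like = make_blocks(pairs, 4)                 # conventional layout
key_based = make_blocks(sorted(pairs), 4)           # pre-sorted by key

print(shuffle_cost(random_like), shuffle_cost(key_based))
```

With interleaved keys every block contains all three keys, so every reducer must pull from every block; after the key-based pre-sort each block holds one key and the proxy cost drops to zero.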
In this research, we also developed MapReduce workflow models based on a novel computational model, along with a numerical simulator that integrates these models. The model faithfully predicts Hadoop performance under various conditions.
Multi-criteria decision support for energy-efficient IoT edge computing offloading
Computation offloading is one of the primary technological enablers of the Internet of Things (IoT), as it helps address individual devices’ resource restrictions (e.g. processing and memory). In the past, offloading would always utilise remote cloud infrastructures, but the increasing size of IoT data traffic and the real-time response requirements of modern and future IoT applications have led to the adoption of the edge computing paradigm, where the data is processed at the edge of the network, closer to the IoT devices. The decision as to whether cloud or edge resources will be utilised is typically taken at the design stage, based on the type of the IoT device.
Yet, the conditions that determine the optimality of this decision, such as the arrival rate, nature and sizes of the tasks, and crucially the real-time conditions of the networks involved, keep changing. At the same time, the energy consumption of IoT devices is usually a key requirement, which is affected primarily by the time it takes to complete tasks, whether for the actual computation or for offloading them through the network.
This thesis presents a dynamic computation offloading mechanism, which improves the performance (i.e. in terms of response time) and energy consumption of IoT devices in a decentralised and autonomous manner. We initially propose the Multi-critEria DecIsion support meChanism for IoT offloading (MEDICI), which runs independently on an IoT device, enabling it to make offloading decisions dynamically, based on multiple criteria, such as the state of the IoT, edge or cloud devices and the conditions of the network connecting them. It provides mathematical models of the expected time and energy costs for the different options of offloading a task (i.e. to the edge, to the cloud, or to the IoT device itself). To evaluate its effectiveness, we provide simulation results, obtained by extending the EdgeCloudSim simulator, comparing it against previous families of approaches used in the literature. Our simulations on four different types of IoT applications show that allowing customisation and dynamic offloading decision support can drastically improve the response time of time-critical and small-size applications, such as IoT cyber intrusion detection, and the energy consumption not only of the individual IoT devices but also of the system as a whole.
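The kind of per-option cost model described above can be sketched as follows. All speeds, bandwidths, power figures, and weights here are invented placeholders, not the thesis's actual models; the point is only the decision structure: estimate time and energy for each option, then pick the minimum weighted cost.

```python
# Hedged sketch of a multi-criteria offloading cost model in the spirit of
# MEDICI (illustrative formulas and numbers, not the thesis's models).
# Each option gets an expected completion time and device-side energy; the
# device chooses the option with the lowest weighted cost.

def expected_cost(cycles, data_bytes, option, w_time=0.5, w_energy=0.5):
    if option == "local":
        t = cycles / 1e9                      # assumed local CPU at 1 GHz
        e = 2.0 * t                           # assumed 2 W active power
    else:
        # assumed (bandwidth B/s, server CPU Hz) per remote option
        bw, f = {"edge": (1e7, 4e9), "cloud": (2e6, 8e9)}[option]
        t_tx, t_exec = data_bytes / bw, cycles / f
        t = t_tx + t_exec
        e = 1.0 * t_tx + 0.1 * t_exec         # 1 W transmit, 0.1 W idle wait
    return w_time * t + w_energy * e

def best_option(cycles, data_bytes):
    return min(("local", "edge", "cloud"),
               key=lambda o: expected_cost(cycles, data_bytes, o))

print(best_option(1e9, 2e6))   # compute-heavy task, modest input data
print(best_option(1e8, 1e8))   # data-heavy task: transfer dominates
```

Under these assumptions a compute-heavy task goes to the nearby edge (fast link, faster CPU), while a data-heavy task stays local because the transfer time dominates both remote options.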
Furthermore, we present an enhancement of our MEDICI mechanism, the ProbeLess Multi-critEria DecIsion support meChanism for IoT offloading (PL-MEDICI), which enables MEDICI to operate in real IoT environments without the need for probing or having pre-defined parameters in order to estimate or model the network conditions or the computation capabilities of the different devices involved. This is the first probeless dynamic and decentralised offloading decision support mechanism for IoT environments. The probeless property is achieved by combining lightweight statistical techniques with the concept of age of knowledge (AoK) to allow us to have accurate enough information to use for our estimations.
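One way to combine passive measurements with an age-of-knowledge notion is an age-weighted average. This is my own reconstruction of the idea, not the thesis's statistics: measurements piggybacked on past offloads are kept with timestamps, and their influence decays with age, so stale knowledge fades out without any active probe traffic.

```python
# Hedged sketch of probeless, AoK-weighted estimation (assumed scheme, not
# PL-MEDICI's actual statistics): the device estimates a network rate from
# timestamped past observations, halving each sample's weight every
# `half_life` seconds of age.

def aok_estimate(samples, now, half_life=10.0):
    """samples: list of (timestamp, observed_rate). Returns an age-weighted
    mean where a sample's weight halves every `half_life` seconds."""
    num = den = 0.0
    for t, rate in samples:
        w = 0.5 ** ((now - t) / half_life)
        num += w * rate
        den += w
    return num / den

# old observations saw 10 MB/s; the freshest one saw 2 MB/s (congestion)
samples = [(0.0, 10.0), (5.0, 10.0), (20.0, 2.0)]
print(aok_estimate(samples, now=20.0))
```

The fresh congested sample pulls the estimate well below the unweighted mean, which is the behaviour wanted from an AoK scheme: recent knowledge dominates without discarding history outright.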
We provide experimental results performed in a real IoT testbed with three real IoT applications, showcasing that PL-MEDICI outperforms existing techniques in terms of both response time and energy consumption.
Finally, in order to further evaluate our PL-MEDICI mechanism, we formulate a mixed-integer linear program optimisation problem that provides the theoretical optimal centralised solution to our problem. This is used to compare our PL-MEDICI against the theoretical optimum, given the same estimated input. Our results showed that our offloading mechanism is close to the obtained optimal solution in terms of both the response time and energy consumption.
3rd EGEE User Forum
We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of its contents and a summary of the main conclusions from the Forum on the chapter topic. The first chapter gathers all the plenary session keynote addresses, followed by a sequence of chapters covering the application-flavoured sessions. These are followed by chapters with a Computer Science and Grid Technology flavour. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of Science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.