
    Resource Allocation in Networking and Computing Systems: A Security and Dependability Perspective

    In recent years, there has been a trend to integrate networking and computing systems, whose management is becoming increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to effectively maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that must be considered in future computing and networking systems in order to provide advanced services, such as mission-critical applications. This paper provides a comprehensive survey of the existing literature that considers security and dependability for resource allocation in computing and networking systems. Current research works are categorized by the type of resources allocated for different technologies, scenarios, issues, attributes, and solutions. The paper presents the research on resource allocation that includes security and dependability, both singly and jointly, and discusses future research directions on resource allocation. It shows that only a few works consider security or dependability, even singly, in resource allocation for future computing and networking systems, and it highlights the importance of considering security and dependability jointly and the need for intelligent, adaptive, and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.

    Market driven elastic secure infrastructure

    In today’s Data Centers, a combination of factors leads to the static allocation of physical servers and switches into dedicated clusters, such that it is difficult to add or remove hardware from these clusters for short periods of time. This silofication of the hardware leads to inefficient use of clusters. This dissertation proposes a novel architecture for improving the efficiency of clusters by enabling them to add or remove bare-metal servers for short periods of time. We demonstrate, by implementing a working prototype of the architecture, that such silos can be broken and that it is possible to share servers between clusters that are managed by different tools, have different security requirements, and are operated by tenants of the Data Center who may not trust each other.

    Physical servers and switches in a Data Center are grouped for a combination of reasons. They are used for different purposes (staging, production, research, etc.); host applications required for servicing specific workloads (HPC, Cloud, Big Data, etc.); and/or are configured to meet stringent security and compliance requirements. Additionally, the provisioning systems and tools used to manage these clusters, such as OpenStack Ironic, MaaS, and Foreman, take control of the servers, making it difficult to add or remove hardware from their control. Moreover, these clusters are typically stood up with sufficient capacity to meet anticipated peak workload. This leads to inefficient usage of the clusters: they are under-utilized during off-peak hours, and when demand exceeds capacity the clusters suffer from degraded quality of service (QoS) or may violate service level objectives (SLOs). Although today’s clouds offer huge benefits in terms of on-demand elasticity, economies of scale, and a pay-as-you-go model, many organizations are reluctant to move their workloads to the cloud. Organizations that (i) need total control of their hardware, (ii) have custom deployment practices, (iii) need to meet stringent security and compliance requirements, or (iv) do not want to pay the high costs incurred from running workloads in the cloud prefer to own their hardware and host it in a data center. This includes a large section of the economy, including financial companies, medical institutions, and government agencies that continue to host their own clusters outside of the public cloud. Considering that all the clusters may not undergo peak demand at the same time provides an opportunity to improve the efficiency of clusters by sharing resources between them.

    The dissertation describes the design and implementation of the Market Driven Elastic Secure Infrastructure (MESI) as an alternative to the public cloud and as an architecture for the lowest layer of the public cloud to improve its efficiency. It allows mutually non-trusting physically deployed services to share the physical servers of a data center efficiently. The approach proposed here is to build a system composed of a set of services, each fulfilling a specific functionality. A tenant of MESI has to trust only a minimal functionality of the tenant that offers the hardware resources; the rest of the services can be deployed by each tenant themselves. MESI is based on the idea of enabling tenants to share hardware they own with tenants they may not trust and between clusters with different security requirements. The architecture gives tenants control and freedom of choice over whether to deploy and manage these services themselves or use them from a trusted third party. MESI services fit into three layers that build on each other to provide: (1) Elastic Infrastructure, (2) Elastic Secure Infrastructure, and (3) Market-driven Elastic Secure Infrastructure.

    (1) The Hardware Isolation Layer (HIL), the bottommost layer of MESI, is designed for moving nodes between the multiple tools and schedulers used for managing the clusters. HIL controls the layer-2 switches and bare-metal servers such that tenants can elastically adjust the size of their clusters in response to the changing demand of the workload. It enables the movement of nodes between clusters with minimal to no modifications to the tools and workflows used for managing these clusters. (2) The Elastic Secure Infrastructure (ESI) builds on HIL to enable sharing of servers between clusters with different security requirements and between mutually non-trusting tenants of the Data Center. ESI enables the borrowing tenant to minimize its trust in the node provider and to take control of the trade-offs between cost, performance, and security. This enables sharing of nodes between tenants that are not only part of the same organization but may be tenants of different organizations in a co-located Data Center. (3) The Bare-metal Marketplace is an incentive-based system that uses economic principles of the marketplace to encourage tenants to share their servers with others, not just when they do not need them but also when others need them more. It gives tenants the ability to define their own cluster objectives and sharing constraints, and the freedom to decide the number of nodes they wish to share with others.

    MESI is evaluated using prototype implementations at each layer of the architecture. (i) The HIL prototype, implemented in only 3000 lines of code (LOC), is able to support many provisioning tools and schedulers with little to no modification, adds no overhead to the performance of the clusters, and is in active production use at the MOC managing over 150 servers and 11 switches. (ii) The ESI prototype builds on the HIL prototype and adds to it an attestation service, a provisioning service, and a deterministically built open-source firmware. Results demonstrate that it is possible to build a cluster that is secure, elastic, and fairly quick to set up; the tenant requires only minimal trust in the provider for the availability of the node. (iii) The MESI prototype demonstrates the feasibility of a one-of-a-kind multi-provider marketplace for trading bare-metal servers in which the providers also use the nodes. It uses agents that trade bare-metal servers in a marketplace to meet the requirements of their clusters. The evaluation shows that all the clusters benefit from participating in the marketplace: compared to operating as silos, individual clusters see a 50% improvement in the total work done, up to a 75% reduction in queue waiting times, and up to a 60% improvement in the aggregate utilization of the testbed.

    This dissertation makes the following contributions: (i) it defines the MESI architecture, which allows mutually non-trusting tenants of the data center to share resources between clusters with different security requirements; (ii) it demonstrates that it is possible to design a service that breaks the silos of static cluster allocation yet has a small Trusted Computing Base (TCB) and no overhead to the performance of the clusters; (iii) it provides a unique architecture that puts the tenant in control of its own security and minimizes the trust needed in the provider for sharing nodes; (iv) it delivers a working prototype of a multi-provider marketplace for bare-metal servers, a first proof-of-concept demonstrating that it is possible to trade real bare-metal nodes at practical time scales, such that moving nodes between clusters is fast enough to get useful work done; and (v) finally, results show that it is possible to encourage even mutually non-trusting tenants to share their nodes with each other without any central authority making allocation decisions. Many smart, dedicated engineers and researchers have contributed to this work over the years. I jointly led the efforts to design the HIL and ESI layers, and led the design and implementation of the bare-metal marketplace and the overall MESI architecture.
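    The node-movement mechanism at the heart of HIL lends itself to a short illustration. Below is a minimal Python sketch of the idea, assuming VLAN-based layer-2 isolation; all class, method, and node names here are hypothetical and do not reflect the actual HIL API.

```python
# Minimal sketch of the node-movement idea behind a HIL-style isolation
# layer. All names (Switch, IsolationLayer, move_node, ...) are
# hypothetical; VLAN reassignment stands in for real switch control.

class Switch:
    """Abstracts a layer-2 switch: each bare-metal server connects to one port."""
    def __init__(self):
        self.port_vlan = {}          # port -> VLAN id

    def set_vlan(self, port, vlan):
        self.port_vlan[port] = vlan  # in practice: a driver call to the switch


class IsolationLayer:
    """Tracks which bare-metal node belongs to which tenant cluster."""
    def __init__(self, switch):
        self.switch = switch
        self.cluster_vlan = {}       # cluster name -> VLAN id
        self.node_port = {}          # node name -> switch port
        self.allocation = {}         # node name -> cluster name

    def register_cluster(self, cluster, vlan):
        self.cluster_vlan[cluster] = vlan

    def register_node(self, node, port):
        self.node_port[node] = port

    def move_node(self, node, cluster):
        """Detach a node from its current cluster and attach it to another.

        Because isolation is enforced at layer 2, the provisioning tools
        managing each cluster need little or no modification."""
        self.switch.set_vlan(self.node_port[node], self.cluster_vlan[cluster])
        self.allocation[node] = cluster


hil = IsolationLayer(Switch())
hil.register_cluster("hpc", vlan=100)
hil.register_cluster("cloud", vlan=200)
hil.register_node("server-42", port=7)
hil.move_node("server-42", "hpc")    # borrow the node during peak demand
hil.move_node("server-42", "cloud")  # return it when demand subsides
```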

    Modelling and Optimizing Supply Chain Integrated Production Scheduling Problems

    Globalization and advanced information technologies (e.g., the Internet of Things) have considerably impacted supply chains (SCs), persistently forcing original equipment manufacturers (OEMs) to switch production strategies from make-to-stock (MTS) to make-to-order (MTO) to survive the competition. Generally, an OEM follows the MTS strategy for products with steady demand. In contrast, the MTO strategy operates under a pull system with irregular demand, in which received customer orders are scheduled and launched into production. Compared to MTS, the primary challenges of MTO are ensuring timely delivery at the lowest possible cost, satisfying demands for high customization, and guaranteeing the availability of raw materials throughout the production process. These challenges are growing substantially as industrial production becomes more flexible, diversified, and customized. Besides, making production scheduling decisions independently of the other stages of these SCs often yields sub-optimal results, creating substantial challenges to fulfilling demand timely and cost-effectively. Since adequately managing these challenges asynchronously is difficult, constructing optimization models that integrate SC decisions, such as customer requirements, the supply portfolio (supplier selection and order allocation), delivery batching decisions, and the inventory portfolio (inventory replenishment, consumption, and availability), with shop-floor scheduling under deterministic and dynamic environments is essential to fulfilling customer expectations at the least possible cost. These optimization models are computationally intractable. Consequently, designing algorithms that schedule or reschedule promptly is also highly challenging for these time-sensitive, operationally integrated optimization models. Thus, this thesis focuses on modelling and optimizing SC-integrated production scheduling problems, named SC scheduling problems (SCSPs).

    The objective of optimizing job shop scheduling problems (JSSPs) is to ensure that the requisite resources are accessible when required and that their utilization is maximally efficient. Although numerous algorithms have been devised, they can become computationally exorbitant and yield sub-optimal outcomes, rendering production systems inefficient. This can have a variety of causes, such as an imbalance in population quality over generations, recurrent generation and evaluation of identical schedules, and permitting an under-performing method to conduct the evolutionary process. Consequently, this study designs two methods, a sequential approach (Chapter 2) and a multi-method approach (Chapter 3), to address these issues and obtain competitive optimal or near-optimal solutions for JSSPs in a single-objective setting. The devised algorithms optimize the workflow for each job through accurate mapping between related resources, generating better results than existing algorithms.

    Production scheduling cannot be accomplished precisely without simultaneously considering supply and delivery decisions and customer requirements. Thus, a few recent studies have operationally integrated SCs to accurately predict process insights for executing, monitoring, and controlling planned production. However, these studies are limited to simple shop-floor configurations and offer little flexibility to address MTO-based SC challenges. Thus, this study formulates a bi-objective optimization model that integrates the supply portfolio into a flexible job shop scheduling environment with a customer-imposed delivery window to cost-effectively meet customized and on-time delivery requirements (Chapter 4). Compared to the job shop, which is limited to sequence flexibility only, the flexible job shop is advantageous for its capacity to provide increased scheduling flexibility (both process and sequence flexibility). To optimize the model, the performance of the multi-objective particle swarm optimization algorithm has been enhanced; the results provide decision-makers with an increased degree of flexibility, offering a larger number of Pareto solutions, more varied and consistent frontiers, and reasonable runtimes for MTO-based SCs.

    Environmental sustainability is in the spotlight owing to increasing environmental awareness and follow-up regulations. Consequently, the related factors strongly regulate the supply portfolio for sustainable development, which remained unexplored in the SCSP because those criteria are primarily qualitative (e.g., green production, green product design, corporate social responsibility, and waste disposal systems). Their absence may lead to an unacceptable supply portfolio. Thus, this study overcomes the problem by integrating VIKORSORT into the proposed solution methodology of the extended SCSP. In addition, forming delivery batches of heterogeneous customer orders is challenging, as one order can cause another to be delayed. Therefore, the previous optimization model is extended by integrating supply, manufacturing, and delivery batching decisions and concurrently optimizing them in response to heterogeneous customer requirements with time-window constraints, considering both economic and environmental sustainability for the supply portfolio (Chapter 5). Since the proposed optimization model is an extension of the flexible job shop, it is non-deterministic polynomial-time (NP)-hard and cannot be solved by conventional optimization techniques, particularly for larger instances. Therefore, a reinforcement learning-based hyper-heuristic (HH) has been designed, in which four solution-updating heuristics are intelligently guided to deliver the best possible results compared to existing algorithms. The optimization model furnishes a set of comprehensive schedules that integrate the supply portfolio, the production portfolio (work-center/machine assignment and customer order sequencing), and batching decisions, providing numerous meaningful managerial insights and operational flexibility prior to the execution phase.

    Recently, SCs have been experiencing unprecedented, massive disruptions caused by abrupt outbreaks, making it difficult for OEMs to recover from a disrupted demand-supply equilibrium. Hence, this study proposes a multi-portfolio (supply, production, and inventory portfolios) approach for a proactive-reactive scheme, which addresses the SCSP with complex multi-level products while simultaneously including unpredictably dynamic supply, demand, and shop-floor disruptions (Chapter 6). The study considers fabrication and assembly in a multi-level product structure. To effectively address this time-sensitive model based on real-time data, a Q-learning-based multi-operator differential evolution algorithm within an HH framework has been designed to address disruptive events and generate a timely rescheduling plan.
    The numerical results and analyses demonstrate the proposed model's capability to effectively address single and multiple disruptions, providing significant managerial insights and ensuring SC resilience.
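    The reinforcement learning-based hyper-heuristics described above share one core loop: a learning agent selects which low-level heuristic to apply next and is rewarded by the improvement it achieves. Below is a minimal Python sketch of that loop, assuming a single-state Q-learning agent and a toy single-machine sequencing problem; the operators, reward scheme, and problem data are illustrative stand-ins, not the thesis's actual heuristics.

```python
# Illustrative Q-learning-based hyper-heuristic: the agent learns which
# of several (toy) perturbation operators to apply next, rewarded by the
# cost improvement each application achieves.
import random

def swap(seq):      # low-level heuristic 1: swap two positions
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def reinsert(seq):  # low-level heuristic 2: move one element elsewhere
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def hyper_heuristic(cost, initial, operators, iters=2000,
                    alpha=0.1, gamma=0.9, eps=0.2):
    q = [0.0] * len(operators)           # one state: a Q-value per operator
    best, best_cost = initial, cost(initial)
    current, current_cost = best, best_cost
    for _ in range(iters):
        # epsilon-greedy operator selection
        k = (random.randrange(len(operators)) if random.random() < eps
             else max(range(len(operators)), key=q.__getitem__))
        candidate = operators[k](current)
        c = cost(candidate)
        reward = current_cost - c        # positive if the operator improved
        q[k] += alpha * (reward + gamma * max(q) - q[k])
        if c <= current_cost:
            current, current_cost = candidate, c
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

# Toy problem: sequence five jobs to minimize total weighted completion time.
durations = [4, 2, 7, 3, 5]
weights = [1, 5, 2, 4, 3]

def cost(order):
    t, total = 0, 0
    for j in order:
        t += durations[j]
        total += weights[j] * t
    return total

best, c = hyper_heuristic(cost, list(range(5)), [swap, reinsert])
print(best, c)
```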

    Edge Computing for Internet of Things

    The Internet of Things (IoT) is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and promise to optimise travel, reduce energy usage, and improve quality of life. With the prevalence of IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. It allows computation to be performed locally, reducing the volume of data that must be transmitted to remote data centres and Cloud storage, and it allows decisions to be made locally without having to wait for Cloud servers to respond.
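    A minimal Python sketch of this local-processing idea follows, assuming a simple threshold alert and a fixed-size aggregation window; the send_to_cloud stub and parameter values are illustrative, not from any particular edge framework.

```python
# Sketch of edge-side data reduction: raw sensor readings are processed
# locally; only compact window summaries (and local alerts) leave the edge.
from statistics import mean

WINDOW = 60              # samples aggregated per upload (assumed)
ALERT_THRESHOLD = 75.0   # illustrative local decision rule

def send_to_cloud(summary):          # stand-in for an MQTT/HTTP upload
    print("uploading:", summary)

def edge_loop(readings):
    window = []
    for value in readings:
        if value > ALERT_THRESHOLD:  # decide locally, no cloud round-trip
            print("local alert:", value)
        window.append(value)
        if len(window) == WINDOW:
            send_to_cloud({"min": min(window),
                           "max": max(window),
                           "mean": mean(window),
                           "n": len(window)})
            window.clear()           # 60 raw samples reduced to one record

# Simulated sensor stream: 180 readings become 3 uploads plus a few alerts.
edge_loop([70.0 + (i % 13) for i in range(180)])
```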

    Measuring knowledge sharing processes through social network analysis within construction organisations

    The construction industry is a knowledge-intensive and information-dependent industry. Organisations risk losing valuable knowledge when employees leave. Therefore, construction organisations need to nurture opportunities to disseminate knowledge by strengthening knowledge-sharing networks. This study aimed to evaluate the formal and informal knowledge-sharing methods in social networks within Australian construction organisations and to identify how knowledge sharing could be improved. Data were collected from two estimating teams in two case studies. The data, collected through semi-structured interviews, were analysed using UCINET, a Social Network Analysis (SNA) tool, and SNA measures. The findings revealed that one case study consisted of influencers, while the other demonstrated an optimal knowledge-sharing structure in both formal and informal knowledge-sharing methods. Social networks can vary with the organisation as well as with individuals' behaviour. Identifying networks with specific issues and taking steps to strengthen them will enable organisations to achieve optimal knowledge-sharing processes. This research offers good knowledge-sharing practices for construction organisations to optimise their knowledge-sharing processes.
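    The study's UCINET analysis rests on standard SNA measures. As an open-source illustration (not the study's actual tooling or data), the Python sketch below uses networkx to compute two such measures, in-degree and betweenness centrality, on an invented knowledge-sharing network.

```python
# Spotting potential knowledge "influencers" with standard SNA measures.
# Names and edges are invented for the example; UCINET computes the same
# measures in the study itself.
import networkx as nx

# Directed edges: who asks whom for estimating knowledge.
edges = [("ana", "ben"), ("carl", "ben"), ("dee", "ben"),
         ("ben", "eve"), ("eve", "ana"), ("dee", "carl")]
g = nx.DiGraph(edges)

in_degree = nx.in_degree_centrality(g)      # how often a person is consulted
betweenness = nx.betweenness_centrality(g)  # who brokers between subgroups

for person in g:
    print(f"{person}: consulted={in_degree[person]:.2f}, "
          f"broker={betweenness[person]:.2f}")

# A node scoring high on both measures is an "influencer"; losing such an
# employee risks fragmenting the knowledge-sharing network.
```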

    Efficiency and Sustainability of the Distributed Renewable Hybrid Power Systems Based on the Energy Internet, Blockchain Technology and Smart Contracts

    The climate changes that are visible today are a challenge for the global research community. In this context, renewable energy sources, fuel cell systems, and other energy-generating sources must be optimally combined and connected to the grid system using advanced energy transaction methods. This book presents the latest solutions for implementing fuel cells and renewable energy in mobile and stationary applications, such as hybrid and microgrid power systems based on the energy internet, blockchain technology, and smart contracts; we hope they are of interest to readers working in these fields.

    EG-ICE 2021 Workshop on Intelligent Computing in Engineering

    The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolution: supporting multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, searching multi-dimensional solution spaces, accommodating uncertainty, incorporating specialist domain knowledge, interpreting sensor data, and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer go both ways.

    Expanding the Horizons of Manufacturing: Towards Wide Integration, Smart Systems and Tools

    This research topic aims at enterprise-wide modeling and optimization (EWMO) through the development and application of integrated modeling, simulation and optimization methodologies, and computer-aided tools for reliable and sustainable improvement opportunities within the entire manufacturing network (raw materials, production plants, distribution, retailers, and customers) and its components. This integrated approach incorporates information from the local primary control and supervisory modules into the scheduling/planning formulation. It thereby makes it possible to react dynamically, at the appropriate decision-making level, to incidents that occur in the network components, requiring fewer resources, emitting less waste, and allowing better responsiveness to changing market requirements and operational variations, which reduces cost, waste, energy consumption, and environmental impact while increasing benefits. More recently, the integration of new technologies, such as semantic models within formal knowledge models, has allowed domain, human, and expert knowledge to be captured and utilized for comprehensive intelligent management. In addition, the development of advanced technologies and tools, such as cyber-physical systems, the Internet of Things, the Industrial Internet of Things, Artificial Intelligence, Big Data, Cloud Computing, and Blockchain, has drawn the attention of manufacturing enterprises toward intelligent manufacturing systems.

    PLATFORM-DRIVEN CROWDSOURCED MANUFACTURING FOR MANUFACTURING AS A SERVICE

    Platform-driven crowdsourced manufacturing is an emerging manufacturing paradigm that instantiates the adoption of the open business model in the context of achieving Manufacturing-as-a-Service (MaaS). It has attracted attention from both industry and academia as a powerful way of searching extensively for manufacturing solutions in the smart manufacturing era. This work examines the origin and evolution of the open business model and highlights the trend towards platform-driven crowdsourced manufacturing as a solution for MaaS. Platform-driven crowdsourced manufacturing provides a full value-capturing, value-creation, and value-delivery approach, fulfilled through cooperation among manufacturers, open innovators, and platforms. A platform-driven crowdsourced manufacturing workflow is proposed to organize these three decision agents by specifying their domains and interactions, following a functional, behavioral, and structural mapping model. A MaaS reference model is proposed to outline the critical functions and their inter-relationships, and a series of quantitative, qualitative, and computational solutions are developed to fulfil the outlined functions. The case studies demonstrate the proposed methodologies and can pave the way towards a service-oriented product fulfillment process. This dissertation initially proposes a manufacturing theory and decision models that integrate manufacturer crowds through a cyber platform. It presents an elementary conceptual framework based on stakeholder analysis, including a dichotomy analysis of industrial applicability, decision-agent identification, a workflow, and a holistic framework of platform-driven crowdsourced manufacturing. The three stakeholders require three essential service fields, and their cooperation requires an information service system as a kernel. These essential functions include contract evaluation services for open innovators, task execution services for manufacturers, and management services for platforms. This research tackles these challenges to provide a technology implementation roadmap and transition guidebook for industries moving towards crowdsourcing.