12,871 research outputs found

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology that is fundamentally transforming how computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience by providing resources as services over the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of applying Evolutionary Computation (EC) algorithms to this problem is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive and systematic survey of state-of-the-art approaches is presented. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
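
    To make the EC-based scheduling idea concrete, here is a minimal illustrative sketch (not taken from the survey): a simple genetic algorithm that assigns tasks to virtual machines to minimize makespan. The task sizes and VM speeds are hypothetical placeholders.

```python
import random

# Hypothetical workload: task sizes (units of work) and VM speeds (work/time).
TASK_SIZES = [4, 7, 3, 9, 5, 6, 2, 8]
VM_SPEEDS = [1.0, 2.0, 1.5]

def makespan(assignment):
    """Finish time of the busiest VM for a task -> VM assignment."""
    loads = [0.0] * len(VM_SPEEDS)
    for size, vm in zip(TASK_SIZES, assignment):
        loads[vm] += size / VM_SPEEDS[vm]
    return max(loads)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    # Each individual is a list mapping task index -> VM index.
    pop = [[random.randrange(len(VM_SPEEDS)) for _ in TASK_SIZES]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                 # lower makespan is fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_SIZES))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:          # random reassignment
                child[random.randrange(len(child))] = random.randrange(len(VM_SPEEDS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("best assignment:", best, "makespan:", round(makespan(best), 2))
```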

    Search based software engineering: Trends, techniques and applications

    © ACM, 2012. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version is available from the link below. In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
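
    As an illustration of the kind of search-based optimization the article surveys (not an example from the article itself), the sketch below applies a simple hill climber to test-suite minimization, a classic SBSE problem. The coverage data and test costs are hypothetical placeholders.

```python
import random

# Hypothetical test suite: tests[i] = (execution cost, requirements it covers).
TESTS = [(3, {1, 2}), (2, {2, 3}), (4, {1, 4}), (1, {3}), (5, {1, 2, 3, 4})]
REQUIREMENTS = {1, 2, 3, 4}

def fitness(selection):
    """Penalize uncovered requirements heavily, then minimize total cost (lower is better)."""
    covered = set().union(*(TESTS[i][1] for i in selection)) if selection else set()
    cost = sum(TESTS[i][0] for i in selection)
    return len(REQUIREMENTS - covered) * 100 + cost

def hill_climb(iterations=200):
    current = set(range(len(TESTS)))          # start by selecting every test
    for _ in range(iterations):
        i = random.randrange(len(TESTS))
        neighbour = current ^ {i}             # flip one test in or out
        if fitness(neighbour) <= fitness(current):
            current = neighbour
    return current

best = hill_climb()
print("selected tests:", sorted(best), "fitness:", fitness(best))
```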

    A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing

    Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that may be located in different datacenters of the cloud environment, resulting in serious data transmission delays. Edge computing reduces these transmission delays and supports fixed storage of a scientific workflow's private datasets, but its storage capacity is a bottleneck. It is a challenge to combine the advantages of edge computing and cloud computing to rationalize the data placement of a scientific workflow and optimize the data transmission time across datacenters. Traditional data placement strategies maintain load balancing across a given number of datacenters, which results in long data transmission times. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) is proposed to optimize the data transmission time when placing data for a scientific workflow. The approach considers the characteristics of data placement in a combined edge and cloud computing environment, as well as the factors affecting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm are adopted to avoid the premature convergence of traditional particle swarm optimization, which enhances the diversity of population evolution and effectively reduces the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution in a combined edge and cloud computing environment.
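
    The following is a minimal sketch of the general idea behind GA-DPSO, not the authors' implementation: each particle encodes a dataset-to-datacenter placement, and the position update applies crossover with the personal and global bests plus mutation, as described in the abstract. All sizes, bandwidths, and capacities are hypothetical placeholders.

```python
import random

# Hypothetical scenario: dataset sizes (GB), the datacenter consuming each
# dataset, per-datacenter link bandwidth (GB/s), and storage capacity (GB).
SIZES = [10, 25, 8, 15, 30]
TASK_DC = [0, 1, 0, 2, 1]
BANDWIDTH = [1.0, 0.5, 0.8]
CAPACITY = [40, 60, 30]
N_DC = len(BANDWIDTH)

def transfer_time(placement):
    """Total transfer time for datasets stored away from their consumer; capacity violations are penalized."""
    time = sum(SIZES[d] / BANDWIDTH[TASK_DC[d]]
               for d, dc in enumerate(placement) if dc != TASK_DC[d])
    used = [0] * N_DC
    for d, dc in enumerate(placement):
        used[dc] += SIZES[d]
    penalty = sum(max(0, used[dc] - CAPACITY[dc]) for dc in range(N_DC))
    return time + 1000 * penalty

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def ga_dpso(n_particles=20, iterations=100, mutation_rate=0.1):
    particles = [[random.randrange(N_DC) for _ in SIZES] for _ in range(n_particles)]
    pbest = list(particles)
    gbest = min(particles, key=transfer_time)
    for _ in range(iterations):
        for i, p in enumerate(particles):
            p = crossover(p, pbest[i])              # pull toward personal best
            p = crossover(p, gbest)                 # pull toward global best
            if random.random() < mutation_rate:     # GA mutation keeps diversity
                p[random.randrange(len(p))] = random.randrange(N_DC)
            particles[i] = p
            if transfer_time(p) < transfer_time(pbest[i]):
                pbest[i] = p
        gbest = min(pbest, key=transfer_time)
    return gbest

best = ga_dpso()
print("placement:", best, "transfer time:", round(transfer_time(best), 2))
```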

    Applying autonomy to distributed satellite systems: Trends, challenges, and future prospects

    While monolithic satellite missions still offer significant advantages in terms of accuracy and operations, novel distributed architectures promise improved flexibility, responsiveness, and adaptability to structural and functional changes. Large satellite swarms, opportunistic satellite networks, and heterogeneous constellations that hybridize small-spacecraft nodes with high-performance satellites are becoming feasible and advantageous alternatives, and they require new operation paradigms that enhance their autonomy. While autonomy is a notion that is gaining acceptance in monolithic satellite missions, it can also be deemed an integral characteristic of Distributed Satellite Systems (DSS). In this context, this paper focuses on the motivations for system-level autonomy in DSS and justifies its need as an enabler of system qualities. Autonomy is also presented as a necessary feature for delivering new distributed Earth observation functions (which require coordination and collaboration mechanisms) and for enabling novel structural functions (e.g., opportunistic coalitions, exchange of resources, or in-orbit data services). Mission Planning and Scheduling (MPS) frameworks are then presented as a key component for implementing autonomous operations in satellite missions. An exhaustive knowledge classification explores the design aspects of MPS for DSS and conceptually groups them into: components and organizational paradigms; problem modeling and representation; optimization techniques and metaheuristics; execution and runtime characteristics; and the notions of tasks, resources, and constraints. The paper concludes by proposing future strands of work devoted to studying the trade-offs of autonomy in large-scale, highly dynamic, and heterogeneous networks through frameworks that consider some of the limitations of small-spacecraft technologies.
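
    As an illustration of the MPS concepts surveyed above (not an example from the paper), the sketch below shows a greedy baseline that assigns observation tasks to the satellites that can see them, subject to a per-satellite energy budget. The task priorities, visibility sets, and budgets are hypothetical placeholders.

```python
# Hypothetical mission: (task id, priority, energy cost, satellites that can observe it).
TASKS = [
    ("t1", 5, 3, {"satA", "satB"}),
    ("t2", 8, 4, {"satB"}),
    ("t3", 2, 2, {"satA", "satC"}),
    ("t4", 6, 5, {"satC"}),
]
ENERGY_BUDGET = {"satA": 5, "satB": 6, "satC": 5}

def greedy_schedule(tasks, budgets):
    """Assign highest-priority tasks first to any visible satellite with enough energy left."""
    remaining = dict(budgets)
    plan = {}
    for task_id, _priority, cost, visible in sorted(tasks, key=lambda t: -t[1]):
        for sat in sorted(visible):
            if remaining[sat] >= cost:
                plan[task_id] = sat
                remaining[sat] -= cost
                break
    return plan

print(greedy_schedule(TASKS, ENERGY_BUDGET))
```

    A real MPS framework would replace this greedy pass with the optimization techniques and metaheuristics the classification discusses, and would add runtime replanning when visibility or resources change.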

    Collaborative signal and information processing for target detection with heterogeneous sensor networks

    In this paper, an approach for target detection and acquisition with heterogeneous sensor networks through strategic resource allocation and coordination is presented. Based on sensor management and collaborative signal and information processing, low-capacity, low-cost sensors are strategically deployed to guide and cue scarce high-performance sensors in the network to improve data quality, so that the mission is ultimately completed more efficiently and at lower cost. We focus on the problem of designing such a network system, in which issues of resource selection and allocation, system behaviour and capacity, target behaviour and patterns, the environment, and multiple constraints such as cost must be addressed simultaneously. Simulation results offer significant insight into sensor selection and network operation, and demonstrate the substantial benefits of guided search in an application of hunting down and capturing hostile vehicles on the battlefield.
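
    The sketch below is only an illustration of the cueing idea described above, not the paper's method: low-cost sensors produce noisy proximity scores over a surveillance grid, their scores are fused per cell, and the single high-performance sensor is cued to the cell with the strongest cumulative evidence. The grid, sensor positions, and noise model are hypothetical placeholders.

```python
import random

random.seed(1)
GRID = 10                                            # 10 x 10 surveillance grid
TARGET = (7, 3)                                      # true target cell (unknown to the sensors)
CHEAP_SENSORS = [(2, 2), (7, 4), (8, 8), (1, 7)]     # fixed low-cost sensor positions

def cheap_reading(sensor, cell):
    """Noisy proximity score: higher near the target, with a little random clutter."""
    dist = abs(sensor[0] - cell[0]) + abs(sensor[1] - cell[1])
    signal = 1.0 if cell == TARGET else 0.0
    return signal / (1 + dist) + random.uniform(0, 0.05)

def cue_high_performance_sensor():
    """Fuse cheap-sensor scores per cell and cue the expensive sensor to the peak."""
    evidence = {}
    for x in range(GRID):
        for y in range(GRID):
            cell = (x, y)
            evidence[cell] = sum(cheap_reading(s, cell) for s in CHEAP_SENSORS)
    return max(evidence, key=evidence.get)

cued_cell = cue_high_performance_sensor()
print("high-performance sensor cued to:", cued_cell, "true target:", TARGET)
```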