Cloud computing resource scheduling and a survey of its evolutionary approaches
A disruptive technology that is fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience, delivering resources as services via the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms for it is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive and systematic survey of state-of-the-art approaches is presented. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
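To make the EC-based scheduling idea above concrete, here is a minimal sketch, not taken from any surveyed paper, of a genetic algorithm that evolves a task-to-VM assignment to minimize makespan; the task lengths, VM speeds, and GA parameters are invented for illustration.

```python
import random

# Hypothetical example data: abstract "lengths" per task and relative VM speeds.
TASK_LENGTHS = [40, 25, 60, 35, 50, 20]
VM_SPEEDS = [1.0, 1.5, 2.0]

def makespan(assignment):
    """Finish time of the busiest VM under a given task->VM mapping."""
    loads = [0.0] * len(VM_SPEEDS)
    for task, vm in enumerate(assignment):
        loads[vm] += TASK_LENGTHS[task] / VM_SPEEDS[vm]
    return max(loads)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    # Each individual is a list mapping task index -> VM index.
    pop = [[random.randrange(len(VM_SPEEDS)) for _ in TASK_LENGTHS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                      # lower makespan = fitter
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_LENGTHS))
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < mutation_rate:     # mutate one gene
                child[random.randrange(len(child))] = random.randrange(len(VM_SPEEDS))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("best assignment:", best, "makespan:", round(makespan(best), 2))
```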
A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing
The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varied specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and in turn lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed through load balancing strategies, a problem that has been proved NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure clouds, focusing especially on load balancing. A detailed classification targeting load balancing algorithms for VM placement in cloud data centers is proposed, and the surveyed algorithms are classified accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight for potential future enhancements.
Comment: 22 pages, 4 figures, 4 tables, in press
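As a rough illustration of the load-balancing flavour of VM placement discussed above, the following sketch, which is not an algorithm from the survey, greedily places each VM on the physical machine whose CPU/memory utilization remains most balanced afterwards; all capacities and demands are hypothetical.

```python
# Hypothetical PM capacities and VM demands (cores, GB of memory).
PMS = [{"cpu": 16, "mem": 64}, {"cpu": 16, "mem": 64}, {"cpu": 32, "mem": 128}]
VMS = [{"cpu": 4, "mem": 8}, {"cpu": 8, "mem": 16}, {"cpu": 2, "mem": 32},
       {"cpu": 6, "mem": 12}]

used = [{"cpu": 0, "mem": 0} for _ in PMS]

def imbalance_after(pm_idx, vm):
    """Spread between CPU and memory utilisation if vm lands on this PM."""
    cpu_u = (used[pm_idx]["cpu"] + vm["cpu"]) / PMS[pm_idx]["cpu"]
    mem_u = (used[pm_idx]["mem"] + vm["mem"]) / PMS[pm_idx]["mem"]
    if cpu_u > 1.0 or mem_u > 1.0:
        return float("inf")          # placement would overload the PM
    return abs(cpu_u - mem_u)

placement = []
for vm in VMS:
    # Greedy choice: the PM whose resource utilisation stays most balanced.
    best_pm = min(range(len(PMS)), key=lambda i: imbalance_after(i, vm))
    used[best_pm]["cpu"] += vm["cpu"]
    used[best_pm]["mem"] += vm["mem"]
    placement.append(best_pm)

print("VM -> PM placement:", placement)
```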
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques covering model-based approaches, `programmed' AI, and machine learning paradigms is reviewed. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other, increasing detection rates and lowering false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
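The hybrid pattern described in this report, a programmed rule base for known misuses combined with a learned model of normal behaviour, can be sketched as follows; the event fields, rules, and thresholds are purely illustrative assumptions, not taken from the report.

```python
import statistics

# Programmed component: tiny rule base for known misuse signatures (toy rules).
KNOWN_MISUSE_RULES = [
    lambda e: e["failed_logins"] >= 5,                            # brute-force pattern
    lambda e: e["dest_port"] in {23, 2323} and e["calls"] > 100,  # scan pattern
]

# Learned component: model "normal" call volume from made-up training data.
normal_calls = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]
mu = statistics.mean(normal_calls)
sigma = statistics.stdev(normal_calls)

def classify(event, z_threshold=3.0):
    # Rules catch known misuses but miss anything not programmed (false negatives).
    if any(rule(event) for rule in KNOWN_MISUSE_RULES):
        return "known misuse"
    # Anomaly score catches departures from normal, at the cost of false positives.
    z = abs(event["calls"] - mu) / sigma
    return "possible misuse (anomalous)" if z > z_threshold else "normal"

print(classify({"failed_logins": 7, "dest_port": 22, "calls": 10}))
print(classify({"failed_logins": 0, "dest_port": 80, "calls": 95}))
print(classify({"failed_logins": 1, "dest_port": 443, "calls": 13}))
```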
Handbook of Computational Intelligence in Manufacturing and Production Management
Artificial intelligence (AI) is simply a way of enabling a computer or machine to think intelligently, as human beings do. Since human intelligence is a complex abstraction, scientists have only recently begun to understand how people think, to make certain assumptions about that process, and to apply these assumptions in designing AI programs. AI is a vast discipline that covers reasoning, machine learning, planning, intelligent search, and perception building. Traditional AI had limitations in meeting the increasing demand for search, optimization, and machine learning in large biological and commercial database information systems and in managing factory automation for industries such as power, automobile, aerospace, and chemical plants. The drawbacks of classical AI became more pronounced after the successive failures of the decade-long Japanese project on fifth-generation computing machines. These limitations of traditional AI gave rise to the development of new computational methods for various engineering and management problems. As a result, these computational techniques emerged as a new discipline called computational intelligence (CI).
Zero-day Network Intrusion Detection using Machine Learning Approach
Zero-day network attacks are a growing global cybersecurity concern. Hackers exploit vulnerabilities in network systems, making network traffic analysis crucial in detecting and mitigating unauthorized attacks. However, inadequate and ineffective network traffic analysis can lead to prolonged network compromises. To address this, machine learning-based zero-day network intrusion detection systems (ZDNIDS) rely on monitoring and collecting relevant information from network traffic data. Given the voluminous nature of network traffic data and the many attributes that characterize it, selecting pertinent features is essential for optimal ZDNIDS performance. Unfortunately, current machine learning models used in this field are inefficient at detecting zero-day network attacks, resulting in high false alarm rates and overall performance degradation. To overcome these limitations, this paper introduces a novel approach combining the anomaly-based extended isolation forest algorithm with the BAT algorithm and Nevergrad. The proposed model was evaluated on 5G network traffic, showcasing its effectiveness in efficiently detecting both known and unknown attacks and thereby reducing false alarms compared to existing systems. This advancement contributes to improved internet security.
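As a hedged illustration of the anomaly-scoring idea, the sketch below uses scikit-learn's standard IsolationForest as a stand-in for the paper's extended isolation forest and omits the BAT/Nevergrad tuning step; the synthetic "flow feature" data is invented.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic flows: columns stand in for features such as bytes/s, entropy, packets.
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[500, 0.2, 50], scale=[100, 0.05, 10], size=(1000, 3))
attack_flows = rng.normal(loc=[5000, 0.9, 400], scale=[500, 0.05, 50], size=(20, 3))

# Train on (mostly) benign traffic; anomalies are flows the forest isolates quickly.
model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
model.fit(normal_flows)

preds = model.predict(attack_flows)         # -1 = anomaly, 1 = normal
print("flagged as anomalous:", int((preds == -1).sum()), "of", len(attack_flows))
```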
Ant colony optimization algorithm for load balancing in grid computing
Managing resources in a grid computing system is complicated due to the distributed and heterogeneous nature of the resources. This research proposes an enhancement of the ant colony optimization algorithm that caters for dynamic scheduling and load balancing in the grid computing system. The proposed algorithm is known as the enhanced ant colony optimization (EACO) algorithm. It consists of three new mechanisms that organize the work of an ant colony: an initial pheromone value mechanism, a resource selection mechanism, and a pheromone update mechanism. The resource allocation problem is modelled as a graph on which the ants deposit their pheromone. This graph consists of four types of vertices, namely job, requirement, resource, and capacity, which are used in constructing the grid resource management element. The proposed EACO algorithm takes into consideration the capacity of resources and the characteristics of jobs in determining the best resource to process a job. EACO selects resources based on the pheromone value on each resource, which is recorded in matrix form. The initial pheromone value of each resource for each job is calculated based on the estimated transmission time and execution time of the given job. Resources with a high pheromone value are selected to process the submitted jobs. A global pheromone update is performed after the jobs complete in order to reduce the pheromone value of the used resources. A simulation environment was developed in Java to test the performance of the proposed EACO algorithm against other ant-based algorithms in terms of resource utilization. Experimental results show that EACO produced better grid resource management solutions.
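The pheromone bookkeeping described above can be sketched roughly as follows; the paper's simulator was written in Java, and this Python sketch with its formulas for initial pheromone, resource selection, and global update is an assumption for illustration only.

```python
# Made-up job sizes (instructions) and resource capacities (speed, bandwidth).
JOB_SIZES = {"j1": 200.0, "j2": 350.0}
RESOURCES = {"r1": {"speed": 2.0, "bandwidth": 10.0},
             "r2": {"speed": 1.0, "bandwidth": 20.0}}
EVAPORATION = 0.3

def initial_pheromone(job, res):
    # Initial pheromone from estimated execution time plus transmission time:
    # a faster estimate yields more pheromone (assumed formula).
    est_exec = JOB_SIZES[job] / RESOURCES[res]["speed"]
    est_trans = JOB_SIZES[job] / RESOURCES[res]["bandwidth"]
    return 1.0 / (est_exec + est_trans)

# Pheromone matrix keyed by (job, resource).
pheromone = {(j, r): initial_pheromone(j, r) for j in JOB_SIZES for r in RESOURCES}

def schedule(job):
    # Resource selection: pick the resource with the highest pheromone for this job.
    return max(RESOURCES, key=lambda r: pheromone[(job, r)])

def global_update(job, res):
    # After the job completes, reduce pheromone on the chosen resource so later
    # jobs spread across less recently used resources.
    pheromone[(job, res)] *= (1.0 - EVAPORATION)

for job in JOB_SIZES:
    res = schedule(job)
    print(job, "->", res)
    global_update(job, res)
```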