
    Neural, Genetic, And Neurogenetic Approaches For Solving The 0-1 Multidimensional Knapsack Problem

    The multidimensional knapsack problem (MDKP) is a well-studied problem in the decision sciences. The problem's NP-hard nature prevents the successful application of exact procedures such as branch and bound, implicit enumeration, and dynamic programming to larger instances. As a result, various approximate solution approaches, such as relaxation, heuristic, and metaheuristic approaches, have been developed and applied effectively to this problem. In this study, we propose a Neural approach, a Genetic Algorithms approach, and a Neurogenetic approach, which is a hybrid of the two. The Neural approach is essentially a problem-space-based non-deterministic local-search algorithm. In the Genetic Algorithms approach, we propose a new way of generating the initial population. In the Neurogenetic approach, we show that the Neural and Genetic iterations, when interleaved appropriately, complement each other and provide better solutions than either approach alone: within the overall search, the Genetic approach provides diversification while the Neural approach provides intensification. We demonstrate the effectiveness of the proposed approaches through an empirical study on several sets of benchmark problems commonly used in the literature.
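
The abstract gives no pseudocode, so the following is only a minimal sketch of the genetic component for the 0-1 MDKP, not the authors' Neurogenetic method; the names `profits`, `weights`, and `capacities` and all parameter values are illustrative assumptions.

```python
import random

# Minimal GA sketch for the 0-1 MDKP (illustrative only). profits[j]: value
# of item j; weights[i][j]: use of resource i by item j; capacities[i]:
# budget of resource i.

def feasible(x, weights, capacities):
    return all(sum(w[j] * x[j] for j in range(len(x))) <= c
               for w, c in zip(weights, capacities))

def fitness(x, profits, weights, capacities):
    # Infeasible chromosomes score zero (a simple death-penalty scheme).
    if not feasible(x, weights, capacities):
        return 0
    return sum(p * b for p, b in zip(profits, x))

def ga_mdkp(profits, weights, capacities, pop=30, gens=200, pm=0.02):
    n = len(profits)
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population,
                        key=lambda x: fitness(x, profits, weights, capacities),
                        reverse=True)
        parents = ranked[:pop // 2]           # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pm else g for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=lambda x: fitness(x, profits, weights, capacities))
```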

    RSCCGA: Resource Scheduling for Cloud Computing by Genetic Algorithm

    Cloud computing, also known as on-line computing, is a kind of Internet-based computing that provides shared processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in third-party data centers. It relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. Scheduling is an important issue in the management of cloud resources because, given the volume of requests a data center receives, manual scheduling is impractical. Scheduling algorithms therefore play an important role in cloud computing: the goal of scheduling is to reduce response times and improve resource utilization. The computing resources, whether software or hardware, are virtualized and allocated as services from providers to users, and can be allocated dynamically according to the requirements and preferences of consumers. Traditional system-centric resource management architectures cannot handle the resource assignment task or dynamically allocate the available resources in a cloud computing environment. This paper proposes a resource scheduling model for cloud computing based on a genetic algorithm. Experiments show that the proposed method outperforms other methods.
    Keywords: Cloud Computing, Resource Management, Scheduling, Bandwidth Consumption, Waiting Time, Genetic Algorithm
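
The paper's chromosome encoding and fitness function are not given in the abstract; a common GA formulation for cloud scheduling maps each task to a virtual machine and minimizes makespan, sketched below under those assumptions (the names `task_len` and `vm_speed` are hypothetical).

```python
import random

# Hypothetical GA encoding for task-to-VM scheduling (not the paper's exact
# RSCCGA model). chromosome[t] = index of the VM that runs task t; shorter
# makespan means a better schedule.

def makespan(chrom, task_len, vm_speed):
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(chrom):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def evolve(task_len, vm_speed, pop=40, gens=300, pm=0.05):
    n, m = len(task_len), len(vm_speed)
    population = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: makespan(c, task_len, vm_speed))
        survivors = population[:pop // 2]     # keep the shorter schedules
        offspring = []
        while len(offspring) < pop - len(survivors):
            a, b = random.sample(survivors, 2)
            child = [random.choice(genes) for genes in zip(a, b)]  # uniform crossover
            child = [random.randrange(m) if random.random() < pm else g
                     for g in child]
            offspring.append(child)
        population = survivors + offspring
    return min(population, key=lambda c: makespan(c, task_len, vm_speed))
```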

    Fuzzy logic-based algorithm resource scheduling for improving the reliability of cloud computing

    Cloud computing is an important infrastructure for distributed systems, with the main objective of reducing the use of resources. In a cloud environment, thousands of resources may be available to run each task, so manual allocation of resources to tasks by the user is infeasible. Accurate scheduling of system resources results in their optimal use as well as an increase in the reliability of cloud computing. This study designs a system based on fuzzy logic and introduces an efficient and precise algorithm for scheduling resources to improve the reliability of cloud computing. The waiting and turnaround times of the proposed method were compared with those of previous works: in the proposed method, the waiting time is 26.99 and the turnaround time is 82.99. According to these results, the proposed method outperforms the other methods in terms of waiting time, turnaround time, and accuracy.
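
As a rough illustration of how a fuzzy scheduler can rank tasks, here is a minimal Mamdani-style sketch; the paper's actual rule base, membership functions, and input scales are not stated in the abstract, so everything below is an assumption.

```python
# Minimal fuzzy-inference sketch (illustrative; not the paper's system).

def tri(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def task_priority(cpu_load, queue_len):
    # Degrees of membership for the inputs (assumed 0-100 scales).
    load_low, load_high = tri(cpu_load, -1, 0, 60), tri(cpu_load, 40, 100, 101)
    q_short, q_long = tri(queue_len, -1, 0, 50), tri(queue_len, 30, 100, 101)

    # Rule strengths via min (AND); each rule points at a priority level.
    rules = [
        (min(load_low, q_short), 90),   # light node, short queue -> high priority
        (min(load_high, q_long), 10),   # busy node, long queue  -> low priority
        (max(min(load_low, q_long), min(load_high, q_short)), 50),  # mixed -> medium
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 50.0   # weighted-average defuzzification

print(task_priority(cpu_load=10, queue_len=5))   # lightly loaded -> high priority
```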

    Improving scalability of large-scale distributed Spiking Neural Network simulations on High Performance Computing systems using novel architecture-aware streaming hypergraph partitioning

    After theory and experimentation, modelling and simulation is regarded as the third pillar of science, helping scientists further their understanding of complex systems. In recent years there has been a growing scientific focus on computational neuroscience as a means to understand the brain and its functions, with large international projects (the Human Brain Project, the Brain Activity Map, MindScope, and the China Brain Project) aiming to further our knowledge of high-level cognitive functions. They are a testament to the enormous interest, difficulty, and importance of solving the mysteries of the brain. Spiking Neural Network (SNN) simulations are widely used in the domain to facilitate experimentation. Scaling SNN simulations to large networks usually results in a more-than-linear increase in computational complexity, and the computing resources required for brain-scale simulation far surpass the capabilities of today's personal computers. If those demands are to be met, distributed computation models must be adopted, since improvements in individual processor speed have slowed due to physical limits on heat dissipation. This is a significant change that requires careful management of the workload at many levels: partitioning of work, communication and workload balancing, efficient inter-process communication, and efficient use of available memory. If large-scale neuronal network models are to run successfully, simulators must consider these factors and offer viable solutions to the challenges they pose.
    Large-scale SNN simulations exhibit most of the issues seen in general HPC systems running large distributed computations. Commonly used workload distribution algorithms (round-robin, random, and manual allocation) do not take into account connectivity locality, which is natural in biological networks; ignoring it can increase communication requirements when distributing the simulation across multiple computing nodes. State-of-the-art SNN simulations use dense communication collectives to distribute spike data, since the common method of point-to-point communication in distributed computation is through dense patterns. Sparse communication collectives have been suggested to incur lower overheads when the application's communication pattern is sparse. In this work we characterise the bottlenecks of communication-bound SNN simulations and identify communication balance and sparsity as the main contributors to scalability. We propose hypergraph partitioning to distribute neurons across computing nodes so as to minimise communication (increasing sparsity); a hypergraph is a generalisation of a graph in which a (hyper)edge can link two or more vertices at once. Coupled with a novel use of a sparse-aware communication collective, computational efficiency increases by up to 40.8 percentage points and simulation time is reduced by up to 73%, compared with the common round-robin allocation in neuronal simulators.
    HPC systems have, by design, highly hierarchical communication network links, with qualitative differences in communication speed and latency between computing nodes. This can create a mismatch between the distributed simulation's communication patterns and the physical capabilities of the hardware. If large distributed simulations are to take full advantage of these systems, the communication properties of the HPC system need to be taken into consideration when allocating workload, routing frequent, heavy communication through fast network links. Strategies that consider the heterogeneous physical communication capabilities are called architecture-aware. After demonstrating that hypergraph partitioning leads to more efficient workload allocation in SNN simulations, this thesis proposes a novel sequential hypergraph partitioning algorithm that incorporates network bandwidth via profiling. This leads to a significant reduction in execution time (up to 14x speedup in synthetic benchmark simulations compared with architecture-agnostic partitioners).
    The motivating context of this work is large-scale brain simulation; however, in the era of social media, large graphs and hypergraphs are increasingly relevant in many other scientific applications. A common feature of such graphs is that they are too big for a single machine to handle, in terms of both performance and memory requirements. State-of-the-art multilevel partitioners have been shown to struggle to scale to large graphs in distributed memory, not only because they take a long time to run, but also because they require full knowledge of the graph (not possible for dynamic graphs) and must fit the graph entirely in memory (not possible for very large graphs). To address those limitations we propose a parallel implementation of our architecture-aware streaming hypergraph partitioning algorithm (HyperPRAW) to model distributed applications. Results demonstrate that HyperPRAW produces consistent speedups over previous streaming approaches that only consider hyperedge overlap (up to 5.2x) and, compared with a multilevel global partitioner on dense hypergraphs (those with high average cardinality), produces workload allocations that speed up runtime in a synthetic simulation benchmark (up to 4.3x). HyperPRAW has the potential to scale to very large hypergraphs, as it only requires local information to make allocation decisions and has an order-of-magnitude smaller memory footprint than global partitioners.
    The combined contributions of this thesis lead to a novel, parallel, scalable, streaming hypergraph partitioning algorithm (HyperPRAW) that can help scale large distributed simulations on HPC systems. HyperPRAW tackles three of the main scalability challenges: it produces highly balanced distributed computation and communication, minimising idle time between computing nodes; it reduces communication overhead by placing frequently communicating simulation elements close to each other (where communication cost is minimal); and it provides a solution with a reasonable memory footprint that allows tackling larger problems than state-of-the-art alternatives such as global multilevel partitioning.
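
HyperPRAW itself is not reproduced here; the following is a generic streaming-partitioning sketch of the core idea the thesis describes: place each arriving vertex in the part holding most of its hyperedge neighbours, discounted by a balance penalty and an optional per-part communication cost obtained from bandwidth profiling. All names and the scoring formula are illustrative assumptions.

```python
# Generic streaming hypergraph partitioning sketch (a simplification of the
# HyperPRAW idea, not the thesis algorithm). Vertices arrive one at a time
# and are placed greedily using only local information.

def stream_partition(vertices, hyperedges_of, members_of, k,
                     part_cost=None, alpha=1.0):
    """hyperedges_of[v] -> hyperedge ids of v; members_of[e] -> vertices in e."""
    part_of = {}                       # vertex -> assigned part
    load = [0] * k
    cost = part_cost or [1.0] * k      # uniform network cost by default
    for v in vertices:
        # Count already-placed hyperedge neighbours of v in each part.
        affinity = [0.0] * k
        for e in hyperedges_of[v]:
            for u in members_of[e]:
                if u in part_of:
                    affinity[part_of[u]] += 1.0
        avg = sum(load) / k
        # Score: neighbour affinity, scaled down by communication cost and
        # penalised for parts that are already above-average loaded.
        scores = [affinity[p] / cost[p] - alpha * max(0, load[p] - avg)
                  for p in range(k)]
        best = max(range(k), key=lambda p: scores[p])
        part_of[v] = best
        load[best] += 1
    return part_of
```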

    Design of a Collaborative Architectural Model for Pervasive Computing at the Edge of Mobile Networks

    Advances in peer-to-peer and wireless communication technologies have increasingly enabled the integration of mobile and pervasive devices into distributed systems and computing architectures under the Internet of Things paradigm. These devices are subject to continuous technological development and tend to be further miniaturized generation after generation, to the point of being treated as de facto devices. The fruit of this progress is the emergence of collaborative mobile and pervasive computing, notably integrated into the architectural models of the Internet of Things. The most important benefit of this form of computing is the ease of connecting a large number of pervasive and portable devices on the move across the different networks available. Despite continual advances, mobile and pervasive intelligent systems (networks, devices, software, and connection technologies) still suffer from various limitations at several levels, such as maintaining connectivity, computing power, data storage capacity, communication throughput, the lifetime of power sources, and the efficiency of processing large tasks in terms of partitioning, scheduling, and load balancing. The accelerated technological development of the equipment and devices in these mobile models is always accompanied by their intensive use. Given this reality, more effort is required both in their structural design, hardware and software alike, and in the way they are managed. This involves improving, on the one hand, the architecture of these models and their communication technologies and, on the other, the scheduling and load-balancing algorithms so that work is carried out efficiently on their devices. Our goal is to make these pervasive models more autonomous, intelligent, and collaborative by strengthening the capabilities of their devices, their connectivity technologies, and the applications that perform their tasks. To this end, we established an autonomous, pervasive, and collaborative architectural model deployed at the edge of networks. This model relies on various modern connection technologies such as wireless and peer-to-peer radio communication, and on the technologies offered by Pycom's LoPy4 such as LoRa, BLE, Wi-Fi, Radio Wi-Fi, and Bluetooth. Integrating these technologies makes it possible to maintain communication continuity in diverse environments, even the most severe. Within this model, we designed and evaluated a load-balancing and scheduling algorithm that strengthens and improves its efficiency and quality of service (QoS) in different environments. The evaluation of this architectural model shows benefits such as improved connectivity and more efficient task execution.
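
The abstract does not detail the load-balancing algorithm; as a hedged illustration of dispatching work across heterogeneous edge devices with multiple radio links, consider the following sketch (the `EdgeNode` type, the node names, and the least-loaded policy are assumptions, not the thesis's design).

```python
from dataclasses import dataclass, field

# Illustrative least-loaded task dispatch for a cluster of edge devices.
# Each node advertises its current load and which radio links (LoRa, BLE,
# Wi-Fi, ...) are currently up.

@dataclass
class EdgeNode:
    name: str
    capacity: float                           # work units the node can hold
    load: float = 0.0
    links: set = field(default_factory=set)   # e.g. {"wifi", "lora"}

def dispatch(task_cost, nodes, required_link=None):
    """Send the task to the reachable node with the most free capacity."""
    candidates = [n for n in nodes
                  if (required_link is None or required_link in n.links)
                  and n.capacity - n.load >= task_cost]
    if not candidates:
        return None                    # no node can take the task right now
    best = max(candidates, key=lambda n: n.capacity - n.load)
    best.load += task_cost
    return best.name

nodes = [EdgeNode("lopy-a", 10, links={"wifi", "lora"}),
         EdgeNode("lopy-b", 8, links={"ble", "lora"})]
print(dispatch(3.0, nodes, required_link="lora"))   # -> "lopy-a"
```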

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single-node fixed charge network flow set with constant capacities. This set arises as a relaxation of more general mixed-integer sets, such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
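
The abstract does not state the formulation; for orientation, a standard textbook form of the single-node fixed charge flow set with a constant capacity c is shown below (the paper's variant may differ).

```latex
% Standard form of the single-node fixed charge flow set with constant
% capacities (illustrative; the paper studies a variant of this set).
\[
X \;=\; \Bigl\{ (x, y) \in \mathbb{R}^{n}_{+} \times \{0,1\}^{n} \;:\;
        \sum_{j \in N} x_j \le b, \quad
        x_j \le c\, y_j \;\; \forall j \in N \Bigr\}
\]
```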

    Evolutionary Computation 2020

    Intelligent optimization uses mechanisms from computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.

    Optimal control in dynamic agri-food supply chains

    Doctor of Philosophy. Department of Industrial & Manufacturing Systems Engineering. Ashesh K. Sinha.
    The supply chains for agriculture and food (agri-food) related products face several challenges due to uncertainties and dynamic behaviors related to fluctuations in demand, uncontrollable environmental factors, sensitive quality concerns, and profitability within a low-margin industry. This research develops data-driven stochastic models and methods for solving important problems in agri-food supply chains. Agri-food supply chains are well known to be dynamic and stochastic, yet most current models are simplified deterministic ones. Instead, this research develops stochastic models and optimization methods that integrate ideas and techniques from machine learning, Big Data mining, and deep reinforcement learning to improve supply chain performance and reduce food loss amidst many sources of uncertainty. Thanks to advances in computational capability and the availability of data in recent decades, it is now possible to create more detailed models that better reflect true supply chain dynamics and complexity. This research first introduces a new generalized stochastic model for representing the dynamics of complex agri-food supply chains that optimizes profitability while ensuring the quality of the end product. A specific focus is placed on tracking the attained quality level throughout the steps of the supply chain, since this property largely determines whether the materials at the current step can be used in different potential final products, and whether the final products are acceptable to the consumer. These models must be able to be built from existing data to capture the supply chain's complexity and quantify uncertain outcomes; this research accomplishes that by integrating data mining techniques with the models to determine the supply chain dynamics. Since deriving these dynamic behaviors from historical data can become computationally challenging, a novel approach that leverages Big Data mining tools and techniques is introduced to speed up running times without compromising model complexity or requiring additional assumptions. Lastly, this research analyzes how traditional techniques perform versus approximation methods on agri-food supply chain models with rolling horizons and product degradation. This is demonstrated through a mix-and-blend problem, a common operation in agri-food supply chains. A new neural network architecture called OR-Net is introduced as an efficient mechanism for modeling and solving sequential integer programs, such as the mix-and-blend problem, using deep reinforcement learning. OR-Net is designed specifically to exploit the orthogonal relationships that exist between an integer program's coefficients. Numerical experiments evaluate the performance of OR-Net against stage-wise optimization and other approximation methods.
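
OR-Net is not described in enough detail in the abstract to reproduce; as a small, self-contained stand-in for the kind of mix-and-blend decision the dissertation studies, here is a toy single-period blending LP (all data values are invented for illustration, and the dissertation's sequential integer program is far richer than this).

```python
import numpy as np
from scipy.optimize import linprog

# Toy mix-and-blend LP: choose tonnes x_i of each ingredient to make 100 t
# of blend at minimum cost while blended protein content stays >= 12%.

cost    = np.array([180.0, 220.0, 300.0])   # $/tonne of each ingredient
protein = np.array([0.08, 0.13, 0.20])      # protein fraction per ingredient
demand  = 100.0                             # tonnes of blend required

# Equality: total mass equals demand. Inequality: protein floor, rewritten
# as -protein @ x <= -0.12 * demand to match linprog's A_ub x <= b_ub form.
res = linprog(c=cost,
              A_ub=[-protein], b_ub=[-0.12 * demand],
              A_eq=[np.ones(3)], b_eq=[demand],
              bounds=[(0, None)] * 3)
print(res.x, res.fun)   # optimal tonnes per ingredient and total cost
```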

    Aeronautical engineering: A continuing bibliography with indexes (supplement 233)

    This bibliography lists 637 reports, articles, and other documents introduced into the NASA scientific and technical information system in November 1988. Subject coverage includes: design, construction, and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.