31 research outputs found
Route design for improving the efficiency of cosmetics transportation: a case study
Abstract: This research concerned the design of vehicle routing for cosmetic products, with the objective of minimizing the total transportation distance. The problem is a single-depot transportation routing problem at a case-study cosmetic company that distributes products to 20 cosmetic dealers in Bangkok and nearby areas. We proposed an effective transportation route by treating the problem as a symmetric travelling salesman problem and applying the simulated annealing algorithm to enhance the efficiency of the vehicle routing. Two algorithms, the nearest neighbor heuristic and the simulated annealing algorithm, are compared. The results show that the simulated annealing algorithm outperforms the current method by approximately 7.81%. Keywords: travelling salesman problem, nearest neighbor heuristic, simulated annealing, metaheuristic
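As a concrete illustration of the two methods this abstract compares, here is a minimal Python sketch of the nearest neighbor heuristic and a simulated annealing improvement pass on a symmetric TSP instance. The 20 random points, the 2-opt move, and the schedule parameters are illustrative assumptions, not the study's actual data or settings.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbor(dist, start=0):
    """Greedy construction: always visit the closest unvisited city next."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def simulated_annealing(dist, tour, t0=100.0, cooling=0.995, iters=20000, seed=1):
    """Improve a starting tour with 2-opt moves under a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(tour)
    cur, cur_len = tour[:], tour_length(tour, dist)
    best, best_len = cur[:], cur_len
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]  # 2-opt: reverse a segment
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        # Accept improvements always; accept worsening moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = cur[:], cur_len
        t *= cooling
    return best, best_len

# Hypothetical instance: 20 random points standing in for the 20 dealers.
rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(20)]
dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]

nn_tour = nearest_neighbor(dist)
nn_len = tour_length(nn_tour, dist)
sa_tour, sa_len = simulated_annealing(dist, nn_tour)
```

Because the annealing run starts from the nearest-neighbor tour and keeps the best tour seen, its result can only match or shorten the greedy route; the study reports an improvement of about 7.81% on its real instance.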
Parallel Computers and Complex Systems
We present an overview of the state of the art and future trends in high performance parallel and distributed computing, and discuss techniques for using such computers in the simulation of complex problems in computational science. The use of high performance parallel computers can help improve our understanding of complex systems, and the converse is also true --- we can apply techniques used for the study of complex systems to improve our understanding of parallel computing. We consider parallel computing as the mapping of one complex system --- typically a model of the world --- into another complex system --- the parallel computer. We study static, dynamic, spatial and temporal properties of both the complex systems and the map between them. The result is a better understanding of which computer architectures are good for which problems, and of software structure, automatic partitioning of data, and the performance of parallel machines.
Study on the Impact of the NS in the Performance of Meta-Heuristics in the TSP
Meta-heuristics have been applied for a long time to the Travelling Salesman Problem (TSP), but information is still lacking on the parameters that give the best performance. This paper examines the impact of the Simulated Annealing (SA) and Discrete Artificial Bee Colony (DABC) parameters in the TSP. One special consideration of this paper is how the Neighborhood Structure (NS) interacts with the other parameters and impacts the performance of the meta-heuristics. NS performance has been the topic of much research, with NSs proposed for the best-known problems, which seems to imply that the NS influences the performance of meta-heuristics more than other parameters. Moreover, a comparative analysis of distinct meta-heuristics is carried out to demonstrate a non-proportional increase in the performance of the NS. This work is supported by FEDER Funds through the "Programa Operacional Factores de Competitividade - COMPETE" program and by National Funds through FCT "Fundação para a Ciência e a Tecnologia" under the projects FCOMP-01-0124-FEDER-PEst-OE/EEI/UI0760/2011, PEst-OE/EEI/UI0760/2014, and PEst2015-2020.
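To make the notion of a neighborhood structure concrete, the sketch below (an illustration, not the paper's experimental setup) defines three standard TSP moves, swap, segment reversal, and insertion, and runs the same first-improvement local search under each, so that the NS is the only variable that changes. The 15-city instance and iteration budget are arbitrary assumptions.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Three common TSP neighborhood structures: each maps a tour to a neighboring tour.
def swap_ns(tour, i, j):
    """Exchange the cities at positions i and j."""
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def reversal_ns(tour, i, j):
    """2-opt style move: reverse the segment between positions i and j."""
    i, j = min(i, j), max(i, j)
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def insertion_ns(tour, i, j):
    """Remove the city at position i and reinsert it at position j."""
    t = tour[:]
    city = t.pop(i)
    t.insert(j, city)
    return t

def hill_climb(dist, ns, iters=4000, seed=7):
    """First-improvement local search under a given NS, so the NSs can be compared."""
    rng = random.Random(seed)
    n = len(dist)
    cur = list(range(n))
    rng.shuffle(cur)
    cur_len = tour_length(cur, dist)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        cand = ns(cur, i, j)
        cand_len = tour_length(cand, dist)
        if cand_len < cur_len:
            cur, cur_len = cand, cand_len
    return cur_len

rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(15)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
results = {ns.__name__: hill_climb(dist, ns) for ns in (swap_ns, reversal_ns, insertion_ns)}
```

Holding the search procedure fixed while varying only the move operator is the simplest way to isolate the NS effect the paper studies; on most Euclidean instances the reversal (2-opt) neighborhood tends to dominate, which is consistent with the paper's claim that the NS matters more than other parameters.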
Solving the dynamic traveling salesman problem using a genetic algorithm with trajectory prediction: an application to fish aggregating devices
The paper addresses the synergies from combining a heuristic method with a predictive technique to solve the Dynamic Traveling Salesman Problem (DTSP). In particular, we build a genetic algorithm that feeds on Newton's motion equation to show how route optimization can be improved when targets are constantly moving. Our empirical evidence stems from the recovery of fish aggregating devices (FADs) by tuna vessels. Based on historical real data provided by GPS buoys attached to the FADs, we first estimate their trajectories to feed a genetic algorithm that searches for the best route considering their future locations. Our solution, which we name Genetic Algorithm based on Trajectory Prediction (GATP), shows that the distance traveled is significantly shorter than with other commonly used methods. European Regional Development Fund | Ref. 10SEC300036PR; Ministerio de Economía y Competitividad | Ref. ECO2013-45706
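The idea of coupling trajectory prediction with a genetic algorithm can be sketched as follows. This is a minimal illustration, not GATP itself: linear extrapolation stands in for the paper's Newton-equation prediction, and a plain order-crossover GA with swap mutation searches routes over the predicted positions. The drifting points, prediction horizon, and GA parameters are all illustrative assumptions.

```python
import math
import random

def predict(pos, vel, t):
    """Linear extrapolation of a drifting target; a simple stand-in for Newton's motion equation."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def route_length(route, pts):
    """Length of the closed route visiting the (predicted) points in order."""
    return sum(math.dist(pts[route[i]], pts[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [c for c in p2 if c not in child]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def ga_tsp(pts, pop_size=40, gens=150, seed=3):
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda r: route_length(r, pts))
    best_len = route_length(best, pts)
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)  # binary tournament selection, twice
            p1 = a if route_length(a, pts) < route_length(b, pts) else b
            a, b = rng.sample(pop, 2)
            p2 = a if route_length(a, pts) < route_length(b, pts) else b
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:     # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new.append(child)
        pop = new
        cand = min(pop, key=lambda r: route_length(r, pts))
        if route_length(cand, pts) < best_len:
            best, best_len = cand, route_length(cand, pts)
    return best, best_len

# Hypothetical drifting FADs: position plus constant velocity, predicted 24 h ahead.
rng = random.Random(0)
fads = [((rng.random() * 10, rng.random() * 10),
         (rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1))) for _ in range(12)]
predicted = [predict(p, v, t=24.0) for p, v in fads]
route, length = ga_tsp(predicted)
```

Optimizing over predicted rather than last-known positions is the core of the paper's argument: when targets drift, a route that is optimal for yesterday's buoy fixes may be far from optimal by the time the vessel arrives.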
An adaptive hybrid genetic-annealing approach for solving the map problem on belief networks
Genetic algorithms (GAs) and simulated annealing (SA) are two important search methods that have been used successfully in solving difficult problems such as combinatorial optimization problems. Genetic algorithms are capable of wide exploration of the search space, while simulated annealing is capable of fine-tuning a good solution. Combining both techniques may achieve the benefits of both and improve the quality of the solutions obtained. Several attempts have been made to hybridize GAs and SA. One such attempt was to augment a standard GA with simulated annealing as a genetic operator. SA in that case acted as a directed or intelligent mutation operator, as opposed to the random, undirected mutation operator of GAs. Although this technique showed some advantages over a GA used alone, one problem was to find fixed global annealing parameters that work for all solutions and all stages of the search process. Failing to find optimum annealing parameters affects the quality of the solution obtained and may degrade performance. In this research, we try to overcome this weakness by introducing an adaptive hybrid GA-SA algorithm, in which simulated annealing acts as a special case of mutation. However, the annealing operator used in this technique is adaptive in the sense that the annealing parameters are evolved and optimized according to the requirements of the search process. Adaptation is expected to help guide the search towards optimum solutions with minimum effort of parameter optimization. The algorithm is tested on an important NP-hard problem, the MAP (Maximum a Posteriori) assignment problem on BBNs (Bayesian Belief Networks). The algorithm is also augmented with some problem-specific information used to design a new GA crossover operator.
The results obtained from testing the algorithm on several BBN graphs with large numbers of nodes and different network structures indicate that the adaptive hybrid algorithm improves solution quality over that obtained by GA used alone and GA augmented with standard non-adaptive simulated annealing. Its effect, however, is more profound for problems with large numbers of nodes, which are difficult for GA alone to solve.
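The mechanism described, SA acting as a mutation operator whose annealing parameters are themselves evolved, can be sketched as below. This is only an illustration of the adaptation scheme: a trivial one-max bitstring objective stands in for the MAP score on a belief network, and the parameter ranges and perturbation sizes are invented for the example.

```python
import math
import random

def onemax(bits):
    """Toy objective standing in for the MAP score on a Bayesian belief network."""
    return sum(bits)

def sa_mutate(bits, t0, alpha, steps, rng):
    """Simulated annealing used as a mutation operator, with per-individual parameters."""
    cur, cur_f, t = bits[:], onemax(bits), t0
    for _ in range(steps):
        i = rng.randrange(len(cur))
        cand = cur[:]
        cand[i] ^= 1  # bit-flip move
        cand_f = onemax(cand)
        # Accept improvements always; accept worsening moves with Boltzmann probability.
        if cand_f >= cur_f or rng.random() < math.exp((cand_f - cur_f) / max(t, 1e-9)):
            cur, cur_f = cand, cand_f
        t *= alpha
    return cur

def adaptive_ga_sa(n_bits=40, pop_size=20, gens=30, seed=4):
    rng = random.Random(seed)
    # Each individual = (solution, t0, alpha): the annealing genes evolve too,
    # instead of using one fixed global schedule for all solutions.
    pop = [([rng.randrange(2) for _ in range(n_bits)],
            rng.uniform(0.5, 2.0), rng.uniform(0.90, 0.99)) for _ in range(pop_size)]
    initial_best = max(onemax(ind[0]) for ind in pop)
    for _ in range(gens):
        pop.sort(key=lambda ind: onemax(ind[0]), reverse=True)
        parents = pop[:pop_size // 2]  # elitist truncation selection
        children = []
        for bits, t0, alpha in parents:
            # Log-normal perturbation of the annealing parameters (self-adaptation).
            c_t0 = t0 * math.exp(rng.gauss(0, 0.2))
            c_alpha = min(0.999, max(0.5, alpha * math.exp(rng.gauss(0, 0.05))))
            children.append((sa_mutate(bits, c_t0, c_alpha, steps=30, rng=rng),
                             c_t0, c_alpha))
        pop = parents + children
    return initial_best, max(onemax(ind[0]) for ind in pop)

start_best, final_best = adaptive_ga_sa()
```

Because schedules that produce good offspring are inherited and perturbed while poor ones are selected away, the annealing parameters track the needs of the search, which is the weakness of the fixed-global-parameter hybrid that the abstract identifies.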
Technology Directions for the 21st Century, volume 1
For several decades, semiconductor device density and performance have been doubling about every 18 months (Moore's Law). With present photolithography techniques, this rate can continue for only about another 10 years. Continued improvement will need to rely on newer technologies. Transition from the current micron range for transistor size to the nanometer range will permit Moore's Law to operate well beyond 10 years. The technologies that will enable this extension include: single-electron transistors; quantum well devices; spin transistors; and nanotechnology and molecular engineering. Continuation of Moore's Law will rely on huge capital investments for manufacture as well as on new technologies. Much will depend on the fortunes of Intel, the premier chip manufacturer, which, in turn, depend on the development of mass-market applications and volume sales for chips of higher and higher density. The technology drivers are seen by different forecasters to include video/multimedia applications, digital signal processing, and business automation. Moore's Law will affect NASA in the areas of communications and space technology by reducing size and power requirements for data processing and data fusion functions to be performed onboard spacecraft. In addition, NASA will have the opportunity to be a pioneering contributor to nanotechnology research without incurring huge expenses.
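The compounding behind the doubling claim is easy to check: doubling every 18 months multiplies density by 2^(12y/18) after y years, roughly a 100-fold increase over the ten further years the text projects. A one-line sketch (the function name is ours, for illustration):

```python
def moores_multiplier(years, doubling_months=18):
    """Density multiplier after `years`, doubling every `doubling_months` months."""
    return 2 ** (12 * years / doubling_months)

decade = moores_multiplier(10)  # ~101x over ten years at the 18-month rate
```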
Technology Directions for the 21st Century
The Office of Space Communications (OSC) is tasked by NASA to conduct a planning process to meet NASA's science mission and other communications and data processing requirements. A set of technology trend studies was undertaken by Science Applications International Corporation (SAIC) for OSC to identify quantitative data that can be used to predict performance of electronic equipment in the future to assist in the planning process. Only commercially available, off-the-shelf technology was included. For each technology area considered, the current state of the technology is discussed, future applications that could benefit from use of the technology are identified, and likely future developments of the technology are described. The impact of each technology area on NASA operations is presented together with a discussion of the feasibility and risk associated with its development. An approximate timeline is given for the next 15 to 25 years to indicate the anticipated evolution of capabilities within each of the technology areas considered. This volume contains four chapters: one each on technology trends for database systems, computer software, neural and fuzzy systems, and artificial intelligence. The principal study results are summarized at the beginning of each chapter.
OptPlatform: metaheuristic optimisation framework for solving complex real-world problems
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. We optimise daily, whether that is planning a round trip that visits the most attractions within a given holiday budget or just taking a train instead of driving a car in a rush hour. Many problems, just like these, are solved by individuals as part of our daily schedule, and they are effortless and straightforward. If we now scale that to many individuals with many different schedules, like a school timetable, we get to a point where it is just not feasible or practical to solve by hand. In such instances, optimisation methods are used to obtain an optimal solution. In this thesis, a practical approach to optimisation has been taken by developing an optimisation platform with all the necessary tools to be used by practitioners who are not necessarily familiar with the subject of optimisation. First, a high-performance metaheuristic optimisation framework (MOF) called OptPlatform is implemented, and its versatility and performance are evaluated across multiple benchmarks and real-world optimisation problems. Results show that, compared to competing MOFs, OptPlatform outperforms in both solution quality and computation time. Second, the most suitable hardware platform for OptPlatform is determined by an in-depth analysis of Ant Colony Optimisation scaling across CPU, GPU and enterprise Xeon Phi. Contrary to the common benchmark problems used in the literature, the supply chain problem solved could not scale on GPUs. Third, a variety of metaheuristics are implemented in OptPlatform. These include a new metaheuristic based on the Imperialist Competitive Algorithm (ICA), called ICA with Independence and Constrained Assimilation (ICAwICA). The ICAwICA was compared against two different types of benchmark problems, and results show the versatile application of the algorithm, matching and in some cases outperforming the custom-tuned approaches.
Finally, essential MOF features like automatic algorithm selection and tuning, lacking in existing frameworks, are implemented in OptPlatform. Two novel approaches are proposed and compared to existing methods. Results indicate the superiority of the implemented tuning algorithms within a constrained tuning-budget environment.
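The kind of framework the thesis describes, pluggable metaheuristics behind one interface plus automatic algorithm selection, might look like the sketch below. The names (`Metaheuristic`, `auto_select`) and the racing-style selection on a small trial budget are invented for illustration and are not OptPlatform's actual API.

```python
import random
from abc import ABC, abstractmethod

class Metaheuristic(ABC):
    """Pluggable solver interface: concrete algorithms register under a name."""
    @abstractmethod
    def solve(self, objective, dim, budget, rng):
        """Minimize `objective` over `dim` variables within `budget` evaluations."""

class RandomSearch(Metaheuristic):
    def solve(self, objective, dim, budget, rng):
        best = [rng.uniform(-5, 5) for _ in range(dim)]
        best_f = objective(best)
        for _ in range(budget - 1):
            cand = [rng.uniform(-5, 5) for _ in range(dim)]
            f = objective(cand)
            if f < best_f:
                best, best_f = cand, f
        return best, best_f

class HillClimber(Metaheuristic):
    def solve(self, objective, dim, budget, rng):
        cur = [rng.uniform(-5, 5) for _ in range(dim)]
        cur_f = objective(cur)
        for _ in range(budget - 1):
            cand = [x + rng.gauss(0, 0.5) for x in cur]
            f = objective(cand)
            if f < cur_f:
                cur, cur_f = cand, f
        return cur, cur_f

def auto_select(algorithms, objective, dim, trial_budget=200, seed=5):
    """Racing-style automatic selection: run each algorithm on a small
    trial budget and return the name of the one with the best result."""
    rng = random.Random(seed)
    results = {name: alg.solve(objective, dim, trial_budget, rng)[1]
               for name, alg in algorithms.items()}
    return min(results, key=results.get)

sphere = lambda x: sum(v * v for v in x)  # standard benchmark objective
algos = {"random": RandomSearch(), "hillclimb": HillClimber()}
winner = auto_select(algos, sphere, dim=5)
```

Keeping selection and tuning behind the framework interface is what lets practitioners who are "not necessarily familiar with the subject of optimisation" use such a platform: the user supplies only the objective, and the framework spends a bounded budget choosing and configuring the solver.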
Parallel and Distributed Computing
The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (graphics processing units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing.