    Designing substitution boxes based on chaotic map and globalized firefly algorithm

    Cipher strength depends mainly on a robust structure and the well-designed interaction of the components in its framework. A significant component of a cipher system, with a strong influence on its overall strength, is the substitution box, or S-box. The S-box is a vital component of the cipher system because it is directly responsible for providing resistance against certain known and potential cryptanalytic attacks. Research in this area has therefore grown since the late 1980s, yet several issues remain in the design and analysis of S-boxes for cryptographic purposes, so it is not surprising that the design of suitable S-boxes attracts considerable attention in the cryptography community. Nonlinearity, bijectivity, the strict avalanche criterion, the bit independence criterion, differential probability, and linear probability are the major cryptographic characteristics required of a strong S-box. Different cryptographic systems require these security properties at different levels. Because S-boxes can exhibit a given combination of cryptographic properties to differing degrees, designing a cryptographically strong S-box often requires a trade-off between these properties when optimizing their values. Many S-box designs have been proposed in the literature to date, and researchers have advocated metaheuristic-based S-box design. Although helpful, no single metaheuristic can claim dominance over its counterparts; for this reason, the search for new metaheuristic-based S-box generation methods remains a useful endeavour. This thesis aims to provide a new design for 8 × 8 S-boxes based on firefly algorithm (FA) optimization. The FA is a recently developed metaheuristic algorithm inspired by fireflies and their flashing behaviour. In this context, the proposed approach uses a new design for retrieving strong S-boxes based on the standard firefly algorithm (SFA). Three variations of the FA are proposed with the aim of improving the S-boxes generated by the SFA. The first variation, the chaotic firefly algorithm (CFA), is initialized using a discrete chaotic map so that the search starts from good positions. The second variation, the globalized firefly algorithm (GFA), employs random movement based on the best firefly using chaotic maps; if a firefly is brighter than its counterparts, it does not conduct any search. The third variation, the globalized firefly algorithm with chaos (CGFA), combines the CFA initialization with the GFA. The results are compared with previous S-boxes based on optimization algorithms. Overall, the experimental outcomes and the analysis of the generated S-boxes in terms of nonlinearity, the bit independence criterion, the strict avalanche criterion, and differential probability indicate that the proposed method satisfies most of the criteria for a robust S-box without compromising any of the required measures of a secure S-box.
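    As a minimal illustration of two of the criteria above (a sketch under stated assumptions, not the thesis's implementation), the following checks bijectivity and computes the nonlinearity of an 8 × 8 S-box from the Walsh-Hadamard spectra of its nonzero component Boolean functions; the randomly shuffled S-box is only a stand-in for one produced by the firefly search.

```python
import random

def walsh_hadamard(signs):
    # Fast Walsh-Hadamard transform over a +/-1 sequence of length 2^n.
    t = list(signs)
    h = 1
    while h < len(t):
        for i in range(0, len(t), h * 2):
            for j in range(i, i + h):
                t[j], t[j + h] = t[j] + t[j + h], t[j] - t[j + h]
        h *= 2
    return t

def nonlinearity(sbox, n=8):
    # NL(S) = min over nonzero masks c of 2^(n-1) - max|W_fc| / 2,
    # where f_c(x) = parity(c & S(x)) is a component Boolean function.
    nl = 2 ** (n - 1)
    for c in range(1, 2 ** n):
        f = [(-1) ** bin(c & y).count("1") for y in sbox]
        spectrum = walsh_hadamard(f)
        nl = min(nl, 2 ** (n - 1) - max(abs(w) for w in spectrum) // 2)
    return nl

# Stand-in S-box: a random permutation of 0..255 (hence bijective).
sbox = list(range(256))
random.shuffle(sbox)
print("bijective:", sorted(sbox) == list(range(256)))
print("nonlinearity:", nonlinearity(sbox))
```

    In a firefly-style search, a value like this (possibly combined with avalanche and differential measures) would serve as the brightness that candidate S-boxes compete on.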

    Spatial prediction of groundwater spring potential mapping based on an adaptive neuro-fuzzy inference system and metaheuristic optimization

    Groundwater is one of the most valuable natural resources in the world (Jha et al., 2007). However, it is not an unlimited resource; therefore understanding groundwater potential is crucial to ensure its sustainable use. The aim of the current study is to propose and verify new artificial intelligence methods for the spatial prediction of groundwater spring potential mapping at the Koohdasht–Nourabad plain, Lorestan province, Iran. These methods are new hybrids of an adaptive neuro-fuzzy inference system (ANFIS) and five metaheuristic algorithms, namely invasive weed optimization (IWO), differential evolution (DE), firefly algorithm (FA), particle swarm optimization (PSO), and the bees algorithm (BA). A total of 2463 spring locations were identified and collected, and then divided randomly into two subsets: 70% (1725 locations) were used for training the models and the remaining 30% (738 spring locations) were utilized for evaluating the models. A total of 13 groundwater conditioning factors were prepared for modeling, namely the slope degree, slope aspect, altitude, plan curvature, stream power index (SPI), topographic wetness index (TWI), terrain roughness index (TRI), distance from fault, distance from river, land use/land cover, rainfall, soil order, and lithology. In the next step, the step-wise assessment ratio analysis (SWARA) method was applied to quantify the degree of relevance of these groundwater conditioning factors. The global performance of the derived models was assessed using the area under the curve (AUC). In addition, the Friedman and Wilcoxon signed-rank tests were carried out to check and confirm the best model to use in this study. The results showed that all models have a high prediction performance; however, the ANFIS–DE model has the highest prediction capability (AUC = 0.875), followed by the ANFIS–IWO model, the ANFIS–FA model (0.873), the ANFIS–PSO model (0.865), and the ANFIS–BA model (0.839). The results of this research can be useful for decision makers responsible for the sustainable management of groundwater resources.
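    The evaluation protocol described above is easy to sketch. The snippet below uses synthetic stand-in data rather than the Koohdasht–Nourabad inventory, and reproduces the random 70/30 split and the AUC scoring step; `scores` is a placeholder for the susceptibility values a trained ANFIS hybrid would emit.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2463, 13))      # 13 conditioning factors per location
y = rng.integers(0, 2, size=2463)    # 1 = spring present, 0 = absent

# Random 70/30 split, mirroring the 1725/738 partition in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)

# `scores` stands in for the continuous susceptibility values that a
# trained ANFIS-metaheuristic hybrid would predict for X_test.
scores = rng.random(size=len(y_test))
print("AUC:", roc_auc_score(y_test, scores))
```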

    Probabilistic and artificial intelligence modelling of drought and agricultural crop yield in Pakistan

    Pakistan is a drought-prone, agricultural nation with hydro-meteorological imbalances that increase the scarcity of water resources, constraining water availability and posing major risks to the agricultural sector and to food security. Rainfall and drought are therefore critical considerations for both hydrological and agricultural applications. The aim of this doctoral thesis is to advance new knowledge in designing hybridized probabilistic and artificial intelligence forecast models for rainfall, drought and crop yield within the agricultural hubs of Pakistan. The choice of these study regions is a strategic decision to focus on precision agriculture, given the impact of rainfall and drought events on agricultural crops and on the socioeconomic activities of Pakistan. The outcomes of this PhD contribute to the efficient modelling of seasonal rainfall, drought and crop yield, assisting farmers and other stakeholders in making more strategic decisions for better management of climate risk in agriculture-reliant nations.

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment and facility layout planning. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
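    As a toy illustration of that guiding mechanism (a generic sketch, not tied to any paper in the series), simulated annealing occasionally accepts a worse neighbour, which lets it escape the local optima where a greedy low-level heuristic would stall:

```python
import math
import random

def anneal(cost, neighbour, x, t0=1.0, cooling=0.995, steps=5000):
    best, t = x, t0
    for _ in range(steps):
        y = neighbour(x)
        delta = cost(y) - cost(x)
        # Always accept improvements; accept worse moves with a
        # temperature-dependent probability to escape local optima.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

# Toy usage: a 1-D multimodal cost with many local minima near x = 0.
cost = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print("found x =", anneal(cost, step, x=5.0))
```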

    Improved Spiral Dynamics and Artificial Bee Colony Algorithms with Application to Engineering Problems


    Forecasting methods in energy planning models

    Energy planning models (EPMs) play an indispensable role in policy formulation and energy sector development. The forecasting of energy demand and supply is at the heart of an EPM. Different forecasting methods, from statistical to machine learning approaches, have been applied in the past. The selection of a forecasting method is mostly based on data availability and the objectives of the tool and planning exercise. We present a systematic and critical review of the forecasting methods used in 483 EPMs. The methods were analyzed for forecasting accuracy; applicability for temporal and spatial predictions; and relevance to planning and policy objectives. Fifty different forecasting methods were identified. The artificial neural network (ANN) is the most widely used method, applied in 40% of the reviewed EPMs. The other popular methods, in descending order, are: support vector machine (SVM), autoregressive integrated moving average (ARIMA), fuzzy logic (FL), linear regression (LR), genetic algorithm (GA), particle swarm optimization (PSO), grey prediction (GM) and autoregressive moving average (ARMA). In terms of accuracy, computational intelligence (CI) methods demonstrate better performance than statistical ones, in particular for parameters with greater variability in the source data; moreover, hybrid methods yield better accuracy than stand-alone ones. Statistical methods are useful only for short- and medium-range forecasting, while CI methods are suitable for all temporal ranges (short, medium and long). In terms of objective, most EPMs focus on energy demand and load forecasting. In terms of geographical coverage, the highest number of EPMs were developed for China; collectively, however, more models have been established for developed countries than for developing ones. The findings will benefit researchers and professionals in gaining an appreciation of the available forecasting methods and enable them to select the method(s) appropriate to their needs.
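    The hybrid pattern the review found most accurate can be sketched in a few lines. The example below, on synthetic data with illustrative model orders, fits an ARIMA model for the linear structure and trains a small neural network to correct its residuals; all names and settings here are assumptions, not drawn from any reviewed EPM.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(400)
# Synthetic monthly demand: trend + seasonality + noise.
y = 10 + 0.02 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 400)
train, test = y[:360], y[360:]

# Statistical stage: ARIMA captures the linear structure.
arima = ARIMA(train, order=(2, 0, 2)).fit()
resid = arima.resid

# CI stage: a small ANN learns to predict the next residual
# from the previous 12 residuals.
lags = 12
X = np.array([resid[i:i + lags] for i in range(len(resid) - lags)])
z = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, z)

base = arima.forecast(steps=1)
correction = ann.predict(resid[-lags:].reshape(1, -1))
print("hybrid one-step forecast:", base[0] + correction[0])
print("actual:", test[0])
```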

    Optimization of bio-inspired algorithms on heterogeneous CPU-GPU systems.

    The scientific challenges of the 21st century require the processing and analysis of a vast amount of information in what is known as the Big Data era. Future advances in different sectors of society, such as medicine, engineering or efficient energy production, to mention just a few examples, depend on the continued growth of the computational power of modern computers. However, this computational growth, traditionally guided by the well-known "Moore's Law", has been compromised in recent decades, mainly by the physical limitations of silicon. Computer architects have developed numerous contributions (multicore, manycore, heterogeneity, dark silicon, etc.) to try to mitigate this computational slowdown, leaving in the background other factors that are fundamental to problem solving, such as programmability, reliability and precision. Software development, however, has followed the opposite path, where ease of programming through abstraction models, automatic code debugging to avoid unwanted effects, and moving into production are key to the economic viability and efficiency of the digital business sector. This route often compromises the performance of the applications themselves, a consequence that is wholly unacceptable in a scientific context. The starting hypothesis of this doctoral thesis is to reduce the distance between the hardware and software fields in order to contribute to solving the scientific challenges of the 21st century. Hardware development is marked by the consolidation of processors oriented towards massive data parallelism, mainly GPUs (Graphics Processing Units) and vector processors, which are combined to build heterogeneous processors or computers (HSA). In particular, we focus on the use of GPUs to accelerate scientific applications. GPUs have positioned themselves as one of the most promising platforms for implementing algorithms that simulate complex scientific problems. From their origin, the trajectory and history of graphics cards was shaped by the video game industry, reaching very high levels of popularity as more realism was achieved in that area. An important milestone occurred in 2006, when NVIDIA (the leading graphics card manufacturer) carved out a place in high-performance computing and in research with the development of CUDA (Compute Unified Device Architecture). This architecture makes it possible to use the GPU for developing scientific applications in a versatile way. Despite the importance of the GPU, the improvement that can be obtained by using it together with the CPU is noteworthy, which leads us to the heterogeneous systems named in the title of this work. It is in heterogeneous CPU-GPU environments that performance reaches its peak: GPUs do not merely support researchers' scientific computing on their own; it is in a heterogeneous system combining different types of processors that the highest performance can be achieved. In such an environment the processors do not compete with one another; on the contrary, each architecture specializes in the part where it can best exploit its capabilities.
The highest performance is reached in heterogeneous clusters, where multiple interconnected nodes may differ not only in their CPU-GPU architectures but also in the computational capabilities within those architectures. With this type of scenario in mind, new challenges arise in making the chosen candidate software run as efficiently as possible and obtain the best possible results. These new platforms require a redesign of the software to take full advantage of the available computational resources. Existing algorithms must therefore be redesigned and optimized for the contributions in this field to be relevant, and algorithms must be found that, by their very nature, are candidates for optimal execution on such high-performance platforms. Here we find a family of algorithms called bio-inspired algorithms, which use collective intelligence as the core of their problem solving. It is precisely this collective intelligence that makes them perfect candidates for implementation on these platforms under the new parallel computing paradigm, since solutions can be built from individuals that, through some form of communication, are able to jointly construct a common solution. This thesis focuses especially on one of these bio-inspired algorithms, which falls under the term metaheuristics within the Soft Computing paradigm: Ant Colony Optimization (ACO). The algorithm is contextualized, studied and analysed; its most critical parts are identified and redesigned in search of optimization and parallelization, while maintaining or improving the quality of its solutions. The possible alternatives are then implemented and tested on several high-performance platforms. The knowledge acquired in the preceding theoretical and practical study is applied to real cases, more specifically to protein folding. In this work, we bring together new high-performance hardware platforms and the software redesign and implementation of a bio-inspired algorithm applied to a scientific problem of great complexity, namely protein folding. When implementing a solution to a real problem, a prior study is needed to allow an in-depth understanding of the problem, since anyone new to the field will encounter new terminology and issues; in this case, amino acids, molecules and simulation models that are unfamiliar to individuals without a biomedical background.
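    As a minimal, CPU-only sketch of the ACO loop discussed above (illustrative, not the thesis's CUDA implementation), the following solves a small random travelling salesman instance; each ant builds its tour independently, which is exactly the structure that maps well onto massive GPU parallelism.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]       # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):               # ants are independent: this
            tour = [random.randrange(n)]      # loop is the natural target
            unvisited = set(range(n)) - {tour[0]}   # for GPU offloading
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                nxt = random.choices(cand, weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        for row in tau:                       # evaporation
            for j in range(n):
                row[j] *= 1 - rho
        for tour in tours:                    # pheromone deposit
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# Toy usage: 12 random points in the unit square, Euclidean distances.
pts = [(random.random(), random.random()) for _ in range(12)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 or 1e-9
         for bx, by in pts] for ax, ay in pts]
print(aco_tsp(dist))
```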

    Development of a multi-objective optimization algorithm based on Lichtenberg figures

    This doctoral dissertation presents the most important concepts of multi-objective optimization and a systematic review of the most cited articles on this subject in mechanical engineering in recent years. The state of the art shows a trend towards the use of metaheuristics and of a posteriori decision-making techniques to solve engineering problems. This fact increases the demand for algorithms, which compete to deliver the most accurate answers at the lowest possible computational cost. In this context, a new hybrid multi-objective metaheuristic inspired by lightning and Lichtenberg figures is proposed. The Multi-objective Lichtenberg Algorithm (MOLA) is tested using complex test functions and explicitly constrained engineering problems, and compared with other metaheuristics. MOLA outperformed the most used algorithms in the literature: NSGA-II, MOPSO, MOEA/D, MOGWO, and MOGOA. After this initial validation, it was applied to two complex problems that cannot be evaluated analytically. The first was a design case: the multi-objective optimization of CFRP isogrid tubes using the finite element method. The optimizations were carried out using two methodologies: i) a metamodel, and ii) finite element model updating. The latter proved to be the better methodology, finding solutions that reduced the mass by at least 45.69%, the instability coefficient by 18.4%, and the Tsai-Wu failure index by 61.76%, while increasing the natural frequency by at least 52.57%. In the second application, MOLA was modified internally and associated with feature selection techniques to become the Multi-objective Sensor Selection and Placement Optimization based on the Lichtenberg Algorithm (MOSSPOLA), a new Sensor Placement Optimization (SPO) algorithm that maximizes the acquired modal response and minimizes the number of sensors for any structure. Although this is a structural health monitoring principle, it had not been done before. MOSSPOLA was applied to a real helicopter's main rotor blade using the 7 best-known metrics in SPO. Pareto fronts and sensor configurations were generated and compared for the first time. Better sensor distributions were associated with higher hypervolume, and the algorithm found a sensor configuration for each sensor number and metric, including one with 100% accuracy in identifying delamination considering triaxial modal displacements, a minimum number of sensors, and noise for all blade sections.
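    A small sketch of the Pareto-dominance test underlying the a posteriori approach above (generic, independent of MOLA itself): the optimizer returns an entire non-dominated front, and the decision maker chooses from it afterwards.

```python
def dominates(a, b):
    # a dominates b (minimization) if it is no worse in every objective
    # and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy usage with two objectives, e.g. (mass, instability coefficient).
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(candidates))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```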

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles accepted for publication in the Special Issue "Computational Optimizations for Machine Learning" of the MDPI journal Mathematics, covering a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various classes of machine learning, such as supervised, unsupervised and reinforcement learning, deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, K-means clustering, Q-learning, temporal difference, deep adversarial networks and more. It is hoped that the book will be interesting and useful both to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, and to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.