ONE-DIMENSIONAL CUTTING STOCK PROBLEM THAT MINIMIZES THE NUMBER OF DIFFERENT PATTERNS
The cutting stock problem (CSP) is the problem of cutting stock objects into several smaller pieces to fulfill existing demand while leaving a minimum of unused material. Besides minimizing waste, CSP sometimes carries an additional objective: minimizing the number of different cutting patterns, because each pattern incurs a setup cost. This study shows a way to obtain a minimum number of different patterns in the CSP. An example problem is modeled as a linear program and then solved with a column generation algorithm using the Lingo 18.0 software.
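As a toy illustration of the two competing objectives (this is not the paper's Lingo model; the stock length, piece lengths, and demands below are invented), one can enumerate the maximal cutting patterns of a small instance and search exhaustively, minimizing first the number of stock objects used and then the number of distinct patterns:

```python
from itertools import product

STOCK = 10          # hypothetical stock length
SIZES = [3, 4, 5]   # hypothetical piece lengths
DEMAND = [4, 3, 2]  # hypothetical demand per piece length

def maximal_patterns(stock, sizes):
    """All feasible cutting patterns to which no further piece can be added."""
    bounds = [stock // s for s in sizes]
    pats = []
    for combo in product(*(range(b + 1) for b in bounds)):
        used = sum(c * s for c, s in zip(combo, sizes))
        if used <= stock and all(used + s > stock for s in sizes):
            pats.append(combo)
    return pats

def solve(stock, sizes, demand, max_runs=4):
    """Exhaustive search: minimize (stock objects used, distinct patterns)."""
    pats = maximal_patterns(stock, sizes)
    best = None
    for runs in product(range(max_runs + 1), repeat=len(pats)):
        supply = [sum(r * p[i] for r, p in zip(runs, pats))
                  for i in range(len(sizes))]
        if all(s >= d for s, d in zip(supply, demand)):
            key = (sum(runs), sum(1 for r in runs if r))  # objects, setups
            if best is None or key < best[0]:
                best = (key, runs)
    return best

key, runs = solve(STOCK, SIZES, DEMAND)
print(key)  # → (4, 2): four stock objects cut using only two distinct patterns
```

Real instances make this brute force impossible, which is why the paper resorts to column generation: patterns are priced and added to the linear program on demand instead of being enumerated up front.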
Intelligent conceptual mould layout design system (ICMLDS) : innovation report
Family Mould Cavity Runner Layout Design (FMCRLD) is the most demanding and
critical task in the early Conceptual Mould Layout Design (CMLD) phase.
The traditional experience-dependent manual FMCRLD workflow results in long design
lead times, non-optimal designs and costly errors. However, no previous research,
existing commercial software packages or patented technologies can support
FMCRLD automation and optimisation. The nature of FMCRLD is non-repetitive
and generative. The complexity of FMCRLD optimisation involves solving a
complex two-level combinatorial layout design optimisation problem. This research
first developed the Intelligent Conceptual Mould Layout Design System (ICMLDS)
prototype based on an innovative nature-inspired evolutionary approach
for FMCRLD automation and optimisation using Genetic Algorithm (GA) and Shape
Grammar (SG). The ICMLDS prototype has been proven to be a powerful
intelligent design tool as well as an interactive design-training tool that can encourage
and accelerate mould designers’ design alternative exploration, exploitation and
optimisation for better design in less time. This previously unavailable capability
enables the supporting company not only to innovate the existing traditional mould
making business but also to explore new business opportunities in the high-value
low-volume market (such as telecommunications, consumer electronics and medical
devices) for high-precision injection moulding parts. Moreover, the
innovation of this research also provides a deeper insight into the art of evolutionary
design and expands research opportunities in the evolutionary design approach into a
wide variety of new application areas including hot runner layout design, ejector
layout design, cooling layout design and architectural space layout design.
Inferring Best Strategies from the Aggregation of Information from Multiple Agents: The Cultural Approach
Although learning in MAS is described as a collective experience, most of the time its modeling draws solely or mostly on the results of the interaction between the agents.
This contrasts sharply with our everyday experience, where learning relies, to a great extent, on a large stock of already codified knowledge rather than on direct interaction among the agents.
If over the course of human history this reliance on already codified knowledge has had significant importance, especially since the invention of writing, during the last decade the size and availability of this stock have increased notably because of the Internet.
Moreover, since its early days humanity has endowed itself with institutions and organizations devoted to codifying, preserving and diffusing knowledge.
Cultural Algorithms are one of the few cases where the modeling of this process, although in a limited way, has been attempted.
However, even in this case, the modeling lacks some of the characteristics that have made this process so successful in human populations, notably its frugality (learning only from a rather small subset of the population), a discussion of its dynamics in terms of hypothesis generation and falsification, and the relationship between adaptation and discovery.
A deep understanding of this process of collective learning, in all its aspects of generalization and re-adoption of this collective, distilled knowledge, together with its diffusion, is a key element in understanding how human communities function and how a mixed community of humans and electronic agents could effectively learn. This is more important now than ever, because the process has become global and available to large populations, and has also greatly increased its speed.
This research aims to help fill this gap, elucidating the frugality of the mechanism while mapping it onto a framework characterized by a variable level of complexity of knowledge.
It also seeks to understand the macro dynamics resulting from the micro mechanisms and strategies chosen by the agents.
Nevertheless, like any exercise based on modeling, ours portrays a stylized description of reality that misses important points and significant aspects of real behavior. In this case, while we focus on individual learning and on the process of generalization and later re-use of these generalizations, learning from other agents is notably absent. We believe, however, that this choice makes our model easier to understand and makes it easier to expose the causal relationships emerging from our simulation exercises, without sacrificing any significant result.
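A minimal sketch may clarify the frugality point the abstract raises about Cultural Algorithms (this is not the authors' model; the objective function, population sizes, and interval-based belief space below are invented for illustration): the belief space, the "codified knowledge", is updated only from a small elite subset of the population, yet it biases the variation of every individual:

```python
import random

random.seed(1)

def fitness(x):
    # stand-in objective (invented): peak at the hidden optimum 0.7 per dimension
    return -sum((xi - 0.7) ** 2 for xi in x)

DIM, POP, ELITE, GENS = 3, 30, 3, 40

pop = [[random.uniform(-2, 2) for _ in range(DIM)] for _ in range(POP)]
belief = [(-2.0, 2.0)] * DIM   # belief space: a plausible interval per dimension

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:ELITE]
    # acceptance/update: the belief space learns from the small elite only --
    # the frugality discussed above
    belief = [(min(e[d] for e in elite), max(e[d] for e in elite))
              for d in range(DIM)]
    # influence: new candidates are drawn inside the slightly widened beliefs
    pop = elite + [[random.uniform(lo - 0.1, hi + 0.1) for lo, hi in belief]
                   for _ in range(POP - ELITE)]

best = max(pop, key=fitness)
```

The key feature is that the belief space is rebuilt each generation from only ELITE individuals, yet it steers all of the POP - ELITE new candidates, so codified knowledge, not direct interaction, carries most of the learning.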
PREDICTING COMPLEX PHENOTYPE-GENOTYPE RELATIONSHIPS IN GRASSES: A SYSTEMS GENETICS APPROACH
It is becoming increasingly urgent to identify and understand the mechanisms underlying complex traits. Expected increases in the human population coupled with climate change make this especially urgent for grasses in the Poaceae family, because these serve as major staples of human and livestock diets worldwide. In particular, Oryza sativa (rice), Triticum spp. (wheat), Zea mays (maize), and Saccharum spp. (sugarcane) are among the top agricultural commodities. Molecular marker tools such as linkage-based Quantitative Trait Loci (QTL) mapping, Genome-Wide Association Studies (GWAS), Multiple Marker Assisted Selection (MMAS), and Genomic Selection (GS) techniques offer promise for understanding the mechanisms behind complex traits and for improving breeding programs. These methods have shown some success. Often, however, they can identify neither the causal genes underlying traits nor the biological context in which those genes function. To improve our understanding of complex traits, as well as to improve breeding techniques, additional tools are needed to augment existing methods. This work proposes a knowledge-independent systems-genetics paradigm that integrates results from genetic studies such as QTL mapping, GWAS and mutational insertion lines such as Tos17 with gene co-expression networks for grasses, in particular for rice. The techniques described herein attempt to overcome the bias of limited human knowledge by relying solely on the underlying signals within the data to capture a holistic representation of gene interactions for a species. Through integration of gene co-expression networks with genetic signals, modules of genes with a potential effect on a given trait can be identified, and the biological function of those interacting genes can be determined.
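The co-expression step of such a pipeline can be sketched as follows (a toy version, not the authors' software: Pearson correlation with an arbitrary hard threshold, applied to invented expression profiles; production networks typically use many samples and soft thresholding). Genes become nodes, and an edge is added when two expression profiles are strongly correlated:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def coexpression_edges(expr, threshold=0.9):
    """Edges between genes whose profiles correlate above the threshold."""
    genes = list(expr)
    return [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(pearson(expr[g], expr[h])) >= threshold]

# invented expression profiles across five conditions
expr = {
    "geneA": [1.0, 2.1, 3.0, 3.9, 5.2],
    "geneB": [2.0, 4.2, 6.1, 7.8, 10.1],  # tracks geneA -> co-expressed
    "geneC": [5.0, 1.2, 4.8, 0.9, 5.1],   # unrelated profile
}
print(coexpression_edges(expr))  # → [('geneA', 'geneB')]
```

Modules of densely connected genes in such a network are then intersected with QTL or GWAS intervals to nominate candidate genes and their biological context.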
Aggregate planning in forest harvesting: a mathematical programming model and solution
In this study we propose and solve a mixed-integer programming model for tactical planning of the forest harvest. The model considers product substitution, stand differentiation and bucking yards. The instances used comprise up to 60 stands, 25 stockyards, 10 customers, 8 planning periods and 20 bucking rules, leading to up to 260,000 variables (4,800 of them integer) and 10,000 constraints, and were solved with the Cplex software. In all cases the optimum was obtained, and we verified that the greater the number of bucking rules, the greater the benefit achieved.
Evolutionary Computation
This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which is drawn from philosophy, specifically the philosophy of praxis and dialectics. Interesting applications in bioinformatics are also presented, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.
Evolutionary design of deep neural networks
For three decades, neuroevolution has applied evolutionary computation to the optimization of
the topology of artificial neural networks, with most works focusing on very simple architectures.
However, times have changed, and nowadays convolutional neural networks are the industry and
academia standard for solving a variety of problems, many of which remained unsolved before the
advent of this kind of network.
Convolutional neural networks involve complex topologies, and the manual design of these
topologies for solving a problem at hand is expensive and inefficient. In this thesis, our aim is to
use neuroevolution in order to evolve the architecture of convolutional neural networks.
To do so, we have decided to try two different techniques: genetic algorithms and grammatical
evolution. We have implemented a niching scheme for preserving the genetic diversity, in order
to ease the construction of ensembles of neural networks. These techniques have been validated
against the MNIST database for handwritten digit recognition, achieving a test error rate of 0.28%,
and the OPPORTUNITY data set for human activity recognition, attaining an F1 score of 0.9275.
Both results have proven very competitive when compared with the state of the art. Also, in all
cases, ensembles have proven to perform better than individual models.
Later, the topologies learned for MNIST were tested on EMNIST, a database recently introduced
in 2017, which includes more samples and a set of letters for character recognition. Results have
shown that the topologies optimized for MNIST perform well on EMNIST, proving that architectures
can be reused across domains with similar characteristics.
In summary, neuroevolution is an effective approach for automatically designing topologies for
convolutional neural networks. However, it remains a largely unexplored field due to hardware
limitations. Current advances, however, should constitute the fuel that empowers the emergence of
this field, and further research should start as of today.
This Ph.D. dissertation has been partially supported by the Spanish Ministry of Education, Culture and Sports under an FPU fellowship with identifier FPU13/03917. The research stay has been partially co-funded by the same Ministry under an FPU short-stay grant with identifier EST15/00260.
Official Doctoral Programme in Computer Science and Technology. President: María Araceli Sanchís de Miguel. Secretary: Francisco Javier Segovia Pérez. Member: Simon Luca
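The loop described in the abstract can be sketched as follows (a schematic only: the genome encoding, mutation rates, and the stand-in fitness below are invented; in the thesis, fitness would come from training and validating each decoded convolutional network, and the niching scheme has its own distance measure):

```python
import random

random.seed(0)

FILTERS = [8, 16, 32, 64]   # hypothetical choices per convolutional layer
KERNELS = [3, 5]

def random_genome():
    """A genome is a list of (filters, kernel_size) layer descriptors."""
    return [(random.choice(FILTERS), random.choice(KERNELS))
            for _ in range(random.randint(1, 5))]

def fitness(genome):
    # stand-in objective: a real run would train the decoded CNN and return
    # validation accuracy; here we simply reward depth 3 and small layers
    return 1.0 / (1 + abs(len(genome) - 3)) - 0.001 * sum(f for f, _ in genome)

def distance(a, b):
    # crude genotypic distance: depth difference plus mismatched layers
    return abs(len(a) - len(b)) + sum(x != y for x, y in zip(a, b))

def shared_fitness(g, population, radius=2):
    # fitness sharing (a niching scheme): genomes in crowded niches are
    # penalized, preserving the diversity needed for ensemble building
    niche = sum(1 - distance(g, h) / radius
                for h in population if distance(g, h) < radius)
    return fitness(g) / max(niche, 1.0)

def mutate(genome):
    g = [(random.choice(FILTERS), k) if random.random() < 0.3 else (f, k)
         for f, k in genome]
    if random.random() < 0.2 and len(g) < 5:
        g.append((random.choice(FILTERS), random.choice(KERNELS)))
    elif random.random() < 0.2 and len(g) > 1:
        g.pop()
    return g

pop = [random_genome() for _ in range(20)]
for _ in range(30):
    pop = sorted(pop, key=lambda g: shared_fitness(g, pop), reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
```

Because fitness sharing divides raw fitness among similar genomes, crowded regions of the search space are penalized, and the surviving population contains structurally different networks, the property the thesis exploits to build ensembles.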
Optimal seismic retrofitting of existing RC frames through soft-computing approaches
2016 - 2017. This Ph.D. thesis proposes a Soft-Computing approach capable of supporting engineering judgement in the selection and
design of the cheapest solution for the seismic retrofitting of existing RC framed structures. Chapter 1 points out the need for
strengthening existing buildings as one of the main ways of decreasing the economic and life losses that are direct
consequences of earthquake disasters. Moreover, it proposes a wide, but not-exhaustive, list of the most frequently
observed deficiencies contributing to the vulnerability of concrete buildings. Chapter 2 collects the state of practice on
seismic analysis methods for assessing the safety of existing buildings within the framework of performance-based
design. The most common approaches for modeling material plasticity in non-linear frame analysis are
also reviewed. Chapter 3 presents a wide state of practice on the retrofitting strategies, intended as preventive measures
aimed at mitigating the effect of a future earthquake by a) decreasing the seismic hazard demands; b) improving the
dynamic characteristics of the existing building. The chapter also presents a list of retrofitting systems,
intended as technical interventions commonly classified into local interventions (also known as “member-level”
techniques) and global interventions (also called “structure-level” techniques), which might be used in synergistic
combination to achieve the adopted strategy. In particular, the available approaches and the common criteria,
respectively, for selecting an optimal retrofit strategy and an optimal system are discussed. Chapter 4 highlights the
usefulness of Soft-Computing methods as efficient tools for providing an “objective” answer in reasonable time for
complex situations governed by approximation and imprecision. In particular, Chapter 4 collects the applications found
in the scientific literature for Fuzzy Logic, Artificial Neural Network and Evolutionary Computing in the fields of
structural and earthquake engineering with a taxonomic classification of the problems in modeling, simulation and
optimization. Chapter 5 “translates” the search for the cheapest retrofitting system into a constrained optimization
problem. To this end, the chapter includes a formulation of a novel procedure that assembles a numerical model for
seismic assessment of framed structures within a Soft-Computing-driven optimization algorithm capable of minimizing
the objective function defined as the total initial cost of intervention. The main components required to assemble the
procedure are described in the chapter: the optimization algorithm (Genetic Algorithm); the simulation framework
(OpenSees); and the software environment (Matlab). Chapter 6 describes, step by step, the flow chart of the proposed
procedure, focusing on the main implementation aspects and working details, ranging from a clever initialization of
the population of candidate solutions up to a proposal of tuning procedure for the genetic parameters. Chapter 7
discusses numerical examples, where the Soft-Computing procedure is applied to the model of multi-storey RC frames
obtained through simulated design. A total of fifteen “scenarios” are studied in order to assess the procedure's “robustness” to
changes in the input data. Finally, Chapter 8, on the basis of the observed outcomes, summarizes the capabilities of the
proposed procedure, while highlighting its “limitations” at the current state of development. Some possible modifications
are discussed to enhance its efficiency and completeness. [edited by author]