169 research outputs found
Graph Planarization Problem Optimization Based on Triple-Valued Gravitational Search Algorithm
This article presents a triple-valued gravitational search algorithm (TGSA) to tackle the graph planarization problem (GPP). GPP is one of the most important tasks in graph theory and has been proved NP-hard. To solve it, TGSA uses a triple-valued encoding scheme and models the search space as a triangular hypercube, based quantitatively on the well-known single-row routing representation. The agents in TGSA, whose interactions are driven by the law of gravity, move gradually toward the globally optimal position. The position-updating rule for each agent is based on two indices: a velocity index, which is a function of the agent's current velocity, and a population index, based on the cumulative information in the whole population. To verify the performance of the algorithm, 21 benchmark instances are tested. Experimental results indicate that TGSA can solve the GPP by finding a maximum planar subgraph and embedding the resulting edges into a plane simultaneously. Compared with traditional algorithms, a novelty of TGSA is that it can find multiple optimal solutions for the GPP. Comparative results also demonstrate that TGSA outperforms traditional meta-heuristics in solution quality within reasonable computational time. © 2013 Institute of Electrical Engineers of Japan
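The gravity-driven position update described in the abstract can be sketched as follows. This is a generic real-valued gravitational search step for minimization, not the paper's triple-valued variant: the function name `gsa_step`, the mass normalization, and the gravitational constant `g` are illustrative assumptions, and TGSA's population index is not reproduced.

```python
import numpy as np

def gsa_step(positions, velocities, fitness, g, rng):
    """One generic gravitational-search update (minimization).

    Illustrative sketch only: TGSA's triple-valued encoding and its
    population index are not reproduced here.
    """
    # Map fitness to masses: the best (lowest-fitness) agent gets the largest mass.
    worst, best = fitness.max(), fitness.min()
    m = (worst - fitness) / (worst - best + 1e-12)
    masses = m / (m.sum() + 1e-12)

    # Gravity-like pull of every other agent, weighted by its mass.
    acc = np.zeros_like(positions)
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            dist = np.linalg.norm(diff) + 1e-12
            acc[i] += rng.random() * g * masses[j] * diff / dist

    # Velocity index: a random fraction of the current velocity plus the pull.
    velocities = rng.random(positions.shape) * velocities + acc
    return positions + velocities, velocities
```

Heavier (fitter) agents pull the population toward themselves, while the random factors keep the search stochastic.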
Graph Theoretical Modelling of Electrical Distribution Grids
This thesis deals with the applications of graph theory to the electrical distribution networks that transmit electricity from the generators that produce it to the consumers that use it. Specifically, we establish the substation and bus network as graph theoretical models for this major piece of electrical infrastructure. We also generate substation and bus networks for a wide range of existing data from both synthetic and real grids and show several properties of these graphs, such as density, degeneracy, and planarity. We also motivate future research into the definition of a graph family containing bus and substation networks and the classification of that family as having polynomial expansion.
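The three graph properties named above (density, degeneracy, planarity) can be computed with off-the-shelf tools. The sketch below uses `networkx` on a toy five-edge graph standing in for a bus network; the graph itself is invented for illustration and is not data from the thesis.

```python
import networkx as nx

# Toy "bus network": nodes stand in for buses, edges for lines/transformers.
# Purely illustrative -- not taken from the thesis data.
g = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])

density = nx.density(g)  # |E| / C(|V|, 2)
# Degeneracy = max core number: smallest k such that every subgraph
# has a vertex of degree <= k.
degeneracy = max(nx.core_number(g).values())
is_planar, _embedding = nx.check_planarity(g)

print(density, degeneracy, is_planar)
```

`check_planarity` also returns a combinatorial embedding when the graph is planar, which is useful for the drawing side of such studies.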
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most are NP-hard, i.e. no polynomial-time algorithm is known for them (and none exists unless P = NP). In practice, this means an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find “quickly” (in reasonable run-times), with “high” probability, provably “good” solutions (low error from the true optimum). In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged; these combine heuristics in higher-level frameworks aimed at exploring the search space efficiently and effectively. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces that mainly determine the behavior of a metaheuristic, intensification and diversification, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
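As a concrete illustration of intensification and diversification, the sketch below implements simulated annealing, one classical metaheuristic: temperature-dependent acceptance of worse moves provides diversification, and the cooling schedule gradually shifts the search toward intensification. The function, its parameters, and the toy objective are illustrative assumptions, not from the report.

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Minimal simulated annealing for minimization (illustrative sketch).

    Accepting worse moves at high temperature = diversification;
    cooling the temperature = a gradual shift toward intensification.
    """
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improvements; accept worse moves with prob e^(-delta/t).
        if fy <= fx or rng.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # cool down: less diversification over time
    return best, fbest

# Toy usage: minimize (v - 3)^2 over the integers via +/-1 moves.
x, fx = anneal(lambda v: (v - 3) ** 2,
               lambda v, rng: v + rng.choice([-1, 1]),
               x0=20)
```

The single `cooling` parameter makes the intensification/diversification trade-off explicit, which is why annealing is a common first example in metaheuristics surveys.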
Modeling, Simulation, And Optimization Of Diamond Disc Pad Conditioning In Chemical Mechanical Polishing
Microfabrication (originally based on structuring the surface of silicon) remains the basic manufacturing technology of the semiconductor industry. The semiconductor business model has been driven by “Moore’s Law,” which predicts that the number of transistors the industry would be able to place on a computer chip would double every two years.
A Study on Improved Swarm Intelligence Optimization Based on Immunological and Evolutionary Algorithms
University of Toyama · doctoral dissertation 富理工博甲第175号 · 楊玉 · 2020/3/24
A nonmonotone GRASP
A greedy randomized adaptive search procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Repeated applications of the construction procedure yield different starting solutions for the local search, and the best overall solution is kept as the result. The GRASP local search applies iterative improvement until a locally optimal solution is found: during this phase, starting from the current solution, an improving neighbor solution is accepted and becomes the new current solution. In this paper, we propose a variant of the GRASP framework that uses a new “nonmonotone” strategy to explore the neighborhood of the current solution. We formally state the convergence of the nonmonotone local search to a locally optimal solution and illustrate the effectiveness of the resulting Nonmonotone GRASP on three classical hard combinatorial optimization problems: the maximum cut problem (MAX-CUT), the weighted maximum satisfiability problem (MAX-SAT), and the quadratic assignment problem (QAP).
Theoretical Analysis of Single Molecule Spectroscopy Lineshapes of Conjugated Polymers
Conjugated polymers (CPs) exhibit a wide range of highly tunable optical properties. A quantitative and detailed understanding of the nature of the excitons responsible for such rich optical behavior has significant implications for better utilization of CPs in more efficient plastic solar cells and other novel optoelectronic devices. In general, samples of CPs are plagued with substantial inhomogeneous broadening due to various sources of disorder. Single molecule emission spectroscopy (SMES) offers a unique opportunity to investigate the energetics and dynamics of excitons and their interactions with phonon modes. The major subject of the present thesis is to analyze and understand room-temperature SMES lineshapes for a particular CP, poly(2,5-di-(2'-ethylhexyloxy)-1,4-phenylenevinylene) (DEH-PPV). A minimal quantum mechanical model of a two-level system coupled to a Brownian oscillator bath is utilized. The main objective is to identify the set of model parameters best fitting the SMES lineshape for each of about 200 samples of DEH-PPV, from which new insight into the nature of exciton-bath coupling can be gained. This project also entails developing a reliable computational methodology for quantum mechanical modeling of spectral lineshapes in general. Well-known optimization techniques such as gradient descent, genetic algorithms, and heuristic searches have been tested, employing an error measure between theoretical and experimental lineshapes to guide the optimization. However, all of these tend to produce theoretical lineshapes qualitatively different from the experimental ones. This is attributed to the ruggedness of the parameter space and the inadequacy of the error measure. On the other hand, when the original parameter space was dynamically reduced to a 2-parameter space through feature searching, with visualization of the search-space paths using directed acyclic graphs (DAGs), the qualitative nature of the fitting improved significantly.
For a more satisfactory fit, it is shown that the inclusion of an additional energetic disorder is essential, representing the effect of quasi-static disorder accumulated during the SMES of each polymer. Various technical details, ambiguous issues, and implications of the present work are discussed.
Unfolding RNA 3D structures for secondary structure prediction benchmarking
Ribonucleic acids (RNA) adopt complex three-dimensional structures which are stabilized by the formation of base pairs, also known as the secondary (2D) structure. Predicting where and how many of these interactions occur has been the focus of many computational methods called 2D structure prediction algorithms. These methods disregard
some interactions, which makes it difficult to know how well a 2D structure represents
an RNA structure, especially when many base pairs are ignored.
MC-Unfold was created to remove interactions violating the assumptions used by prediction methods. This process, named unfolding, extends previous planarization and
pseudoknot removal methods. To evaluate how well computational methods can predict
experimental structures, a set of 321 RNA monomers corresponding to more than 4223
experimental structures was acquired. These structures were mostly determined using
nuclear magnetic resonance and X-ray crystallography. MC-Unfold was used to remove
interactions the prediction algorithms were not expected to predict. These structures
were then compared with the structures predicted.
MC-Unfold performed very well on the test set it was given. In less than five minutes,
96% of the 227 structures could be exhaustively unfolded. The few remaining structures
are very large and could not be unfolded in reasonable time. MC-Unfold is therefore a
practical alternative to the current methods.
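The pseudoknot removal that unfolding extends can be sketched as follows: two base pairs cross (form a pseudoknot) when their sequence intervals interleave, and a crossing-free subset of pairs can be kept greedily. This toy sketch illustrates only the concept; MC-Unfold's actual unfolding handles more interaction types, and the greedy order used here is an assumption, not the tool's algorithm.

```python
def crossing(p, q):
    """Base pairs (i, j) and (k, l) cross iff i < k < j < l (after ordering)."""
    (i, j), (k, l) = sorted([p, q])
    return i < k < j < l

def remove_pseudoknots(pairs):
    """Keep a crossing-free subset of base pairs (greedy, longest span first).

    Illustrative only: this is not MC-Unfold, and the greedy choice is
    not guaranteed to keep the maximum number of pairs.
    """
    kept = []
    for p in sorted(pairs, key=lambda ij: ij[0] - ij[1]):  # longest span first
        if all(not crossing(p, q) for q in kept):
            kept.append(p)
    return sorted(kept)
```

The returned pair set is nested or disjoint throughout, which is exactly the topology most 2D prediction methods restrict themselves to.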
As for the evaluation of prediction methods, MC-Unfold demonstrated that the computational methods do recover experimental structures, especially for small molecules. However,
when considering large or pseudoknotted molecules, the results are not so encouraging.
As a consequence, 2D structure prediction methods should be used with caution, especially for large structures.