2,469 research outputs found

    Improving Local Search for Minimum Weighted Connected Dominating Set Problem by Inner-Layer Local Search

    The minimum weighted connected dominating set (MWCDS) problem is an important variant of connected dominating set problems with wide applications, especially in heterogeneous networks and gene regulatory networks. In this paper, we develop a nested local search algorithm called NestedLS for solving MWCDS on classic benchmarks and massive graphs. Within this local search framework, we propose two novel ideas that exploit previous search information. First, we design a restart-based smoothing mechanism as a diversification method to escape from local optima. Second, we propose a novel inner-layer local search method to enlarge the candidate removal set, which can be modelled as an optimized version of the spanning tree problem. Moreover, the inner-layer local search method is a general method for maintaining the connectivity constraint when dealing with massive graphs. Experimental results show that NestedLS outperforms state-of-the-art meta-heuristic algorithms on most instances.
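    As a point of reference for the abstract above, the sketch below shows the standard baseline check that such local search frameworks build on: a vertex can only leave a connected dominating set if it is not an articulation point of the induced subgraph and its removal keeps the set dominating. This is not the paper's spanning-tree-based inner-layer method, only the plain check whose candidate set that method enlarges; the graph G and current solution D are hypothetical inputs.

```python
import networkx as nx

def removal_candidates(G, D):
    """Vertices of the connected dominating set D (a Python set) that can
    be removed while keeping D connected and dominating in graph G."""
    cut = set(nx.articulation_points(G.subgraph(D)))  # removing these disconnects D
    candidates = set()
    for v in D - cut:
        rest = D - {v}
        # v and all of its neighbours must remain dominated by the rest of D.
        if all(u in rest or any(w in rest for w in G[u]) for u in [v] + list(G[v])):
            candidates.add(v)
    return candidates
```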

    The Weighted Independent Domination Problem: ILP Model and Algorithmic Approaches

    This work deals with the so-called weighted independent domination problem, which is an NP-hard combinatorial optimization problem in graphs. In contrast to previous work, this paper considers the problem from a non-theoretical perspective. The first contribution consists in the development of three integer linear programming models. Second, two greedy heuristics are proposed. Finally, the last contribution is a population-based iterated greedy metaheuristic which is applied in two different ways: (1) the metaheuristic is applied directly to each problem instance, and (2) the metaheuristic is applied at each iteration of a higher-level framework, known as construct, merge, solve & adapt, to sub-instances of the tackled problem instances. The results of the considered algorithmic approaches show that integer linear programming approaches can only compete with the developed metaheuristics in the context of graphs with up to 100 nodes. When larger graphs are concerned, the application of the population-based iterated greedy algorithm within the higher-level framework generally works best. The experimental evaluation considers graphs of different types, sizes, densities, and ways of generating the node and edge weights.
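    To make the first contribution concrete, here is a minimal ILP sketch in the spirit of the models described above, written against the PuLP modelling interface. It is a deliberately simplified variant that minimizes vertex weights only; the paper's three models additionally account for the edge weights that enter the full weighted independent domination objective, so this is an illustration, not a reproduction of those models.

```python
import pulp

def wid_ilp(vertices, edges, weight):
    """vertices: list of nodes; edges: list of (u, v) pairs;
    weight: dict node -> vertex weight (assumed input format)."""
    prob = pulp.LpProblem("weighted_independent_domination", pulp.LpMinimize)
    x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in vertices}
    prob += pulp.lpSum(weight[v] * x[v] for v in vertices)  # total selected weight
    neigh = {v: set() for v in vertices}
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
        prob += x[u] + x[v] <= 1          # independence on every edge
    for v in vertices:
        # domination: v is selected or has a selected neighbour
        prob += x[v] + pulp.lpSum(x[u] for u in neigh[v]) >= 1
    prob.solve()
    return [v for v in vertices if x[v].value() == 1]
```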

    Bayesian Network Approximation from Local Structures

    This work focuses on the problem of Bayesian network structure learning. Two main areas of this field are discussed here.

    The first area is theoretical. We consider some aspects of the hardness of Bayesian network structure learning. In particular, we prove that the problem of finding a Bayesian network structure with a minimal number of edges encoding the joint probability distribution of a given dataset is NP-hard. This result can be viewed as a significantly different perspective on the NP-hardness of Bayesian network structure learning than the standard one. The most notable results in this area so far focus mainly on a specific characterization of the problem, where the aim is to find a Bayesian network structure maximizing some given probabilistic criterion. Such criteria arise from quite advanced statistical considerations, and their interpretation may not be intuitive, especially for people unfamiliar with the Bayesian network domain. In contrast, the criterion proposed here, for which NP-hardness is proved, requires no advanced knowledge and is easy to understand.

    The second area concerns concrete algorithms. We focus on one of the most interesting branches in the history of Bayesian network structure learning methods, which has led to very significant solutions: local Bayesian network structure learning methods, whose main aim is first to gather information describing local properties of the constructed network, and then to use this information appropriately to construct the whole network structure. The algorithm at the root of this branch is built on an important local characterization of Bayesian networks, the so-called Markov blankets. The Markov blanket of a given attribute consists of those other attributes which, in the probabilistic sense, correspond to the set of its causes that is maximal in strength and minimal in size. The first algorithm of this branch rests on one important observation: subject to appropriate assumptions, the optimal Bayesian network structure can be determined by examining relations between attributes only within the Markov blankets. For datasets derived from appropriately sparse distributions, where the Markov blanket of each attribute has a size bounded by a common constant, this procedure yields a Bayesian network structure learning approach that scales well in time. The local learning branch has since evolved mainly towards reducing the gathered local information into even smaller and more reliably learned patterns, a reduction that grew out of parallel progress in the field of Markov blanket approximation.

    The main result of this dissertation is a Bayesian network structure learning procedure that can be placed in the branch of local learning methods and that in fact forks from its root. The fundamental idea is to aggregate the local knowledge learned over the Markov blankets not in the form of dependencies derived within these blankets, as the root method does, but in the form of local Bayesian networks. The user can thereby exert considerable influence on the character of this local knowledge by choosing a Bayesian network structure learning method suited to his needs for learning the local structures.
    The merging approach of local structures into a global one is justified theoretically and evaluated empirically, showing its ability to enhance even very advanced Bayesian network structure learning algorithms when they are applied locally within the proposed scheme.
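    A rough sketch of the aggregation idea described above follows. It assumes each attribute's Markov blanket is already known and that `learn_local_structure` is a placeholder for whichever off-the-shelf Bayesian network structure learner the user prefers; the simple union-based merging rule is an illustration only, since the dissertation justifies a more careful merging procedure.

```python
def merge_local_structures(attributes, markov_blanket, learn_local_structure):
    """attributes: iterable of attribute names;
    markov_blanket: dict attribute -> set of attributes;
    learn_local_structure: callable mapping a set of attributes to a set of
    directed edges (u, v) learned on the data restricted to that set."""
    global_edges = set()
    for a in attributes:
        scope = {a} | markov_blanket[a]        # local view around attribute a
        local_edges = learn_local_structure(scope)
        # Keep only edges incident to the focus attribute, so that each edge
        # is contributed by the local network centred on it (a simplification
        # of the dissertation's merging rule).
        global_edges |= {(u, v) for (u, v) in local_edges if a in (u, v)}
    return global_edges
```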

    Seventh Biennial Report: June 2003 – March 2005


    Big data clustering: Data preprocessing, variable selection, and dimension reduction

    [No abstract available]

    Multi-Objective Optimization for Speed and Stability of a Sony Aibo Gait

    Locomotion is a fundamental facet of mobile robotics that many higher-level aspects rely on. However, this is not a simple problem for legged robots with many degrees of freedom. For this reason, machine learning techniques have been applied to the domain. Although impressive results have been achieved, there remains a fundamental problem with using most machine learning methods: the learning algorithms usually require a large dataset, which is prohibitively hard to collect on an actual robot. Further, learning in simulation has had limited success transitioning to the real world. Also, many learning algorithms optimize for a single fitness function, neglecting many of the effects on other parts of the system. As part of the RoboCup 4-legged league, many researchers have worked on increasing the walking/gait speed of Sony AIBO robots. Recently, the effort has shifted from developing a quick gait to developing a gait that also provides a stable sensing platform. However, to date, optimization of both velocity and camera stability has only occurred using a single fitness function that incorporates the two objectives with a weighting that defines the desired tradeoff between them. The true nature of this tradeoff is not understood because the Pareto front has never been charted, so this a priori decision is uninformed. This project applies the Nondominated Sorting Genetic Algorithm-II (NSGA-II) to find a Pareto set of fast, stable gait parameters. This allows a user to select the best tradeoff between balance and speed for a given application. Three fitness functions are defined: one speed measure and two stability measures. A plot of evolved gaits shows a Pareto front indicating that speed and stability are indeed conflicting goals. Interestingly, the results also show that tradeoffs exist between different measures of stability.
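    Since the abstract's central point is that the Pareto front between speed and stability had never been charted, a minimal, self-contained sketch of the non-dominated filtering behind NSGA-II's first front may help. The three-objective tuples below are made-up gait evaluations of the assumed form (speed, stability_1, stability_2), all to be maximized.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical gait evaluations: faster gaits tend to score lower on stability.
gaits = [(300, 0.2, 0.3), (250, 0.6, 0.5), (180, 0.9, 0.8), (240, 0.5, 0.4)]
print(pareto_front(gaits))  # (240, 0.5, 0.4) drops out: dominated by (250, 0.6, 0.5)
```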

    Development of a project complexity assessment method for energy megaprojects

    Megaprojects are characterised by their large-scale capital expenditure, long duration and significant levels of technical and process complexity. Empirical data show that megaprojects in the energy sector experience alarming rates of failure, such as cost overruns, delays in completion and production shortfalls. One of the main causes of failure is their high level of complexity and the absence of effective tools to assess and manage it. Project complexity has received increasing attention in recent years, both in academia and in industry. However, there is still a lack of consensus on a clear definition of ‘project complexity’ or a comprehensive list of complexity indicators, specifically for energy megaprojects. Furthermore, there is no widely accepted assessment method for measuring project complexity in a quantitative manner. This study is carried out in response to these problems. First, it develops a taxonomy of project complexity indicators on the basis of a comprehensive review and synthesis of the existing literature. It includes 51 internal and external Project Complexity Indicators (PCIs) in a logical hierarchical structure; these indicators specify the aspects that need to be measured when assessing project complexity. Second, weights for all indicators are established through an integrated Delphi-AHP method, with the participation of 20 international experts. Finally, the study specifies Numerical Scoring Criteria (NSCs) for all indicators based on a synthesis of existing knowledge about megaprojects. The criteria specify the scoring thresholds, on a 1-5 scale, for each indicator. These three components constitute a new Project Complexity Assessment (PCA) method, which is implemented as a spreadsheet PCA tool. The developed tool allows a project team to assess and score their project on each of the PCIs against the defined criteria. It then calculates two separate complexity indices for internal and external factors; the results indicate the complexity level of the project. Complexity profiles are also produced to illustrate the complexity scores of different categories of PCIs. The PCA method is tested on an energy megaproject case study. The results demonstrate not only that the tool can help a project team understand the complexity of their project, but also that it can help the team develop appropriate complexity management strategies by comparing the assessment results of different projects.
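    The index calculation described above is a weighted aggregation, and a toy version is easy to state. The sketch below uses hypothetical indicator names, scores and weights; the actual method covers 51 PCIs, derives the weights via Delphi-AHP, and reports separate internal and external indices.

```python
def complexity_index(scores, weights):
    """scores: dict PCI -> assessment on the 1-5 scale;
    weights: dict PCI -> AHP weight (assumed to sum to 1 per group)."""
    return sum(weights[pci] * scores[pci] for pci in scores)

# Made-up internal indicators and weights, for illustration only.
internal_scores = {"technical_novelty": 4, "stakeholder_count": 3}
internal_weights = {"technical_novelty": 0.6, "stakeholder_count": 0.4}
print(complexity_index(internal_scores, internal_weights))  # 0.6*4 + 0.4*3 = 3.6
```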

    Algorithms for the weighted independent domination problem

    The weighted independent domination problem is an NP-hard combinatorial optimization problem in graphs. This problem has so far been tackled in the literature only by integer linear programming approaches, by greedy heuristics, and by different versions of a population-based iterated greedy algorithm. In this project, we first improve on the existing greedy heuristics. This is done by implementing the rollout versions of these heuristics and by testing them in a multistart framework in which they are applied in a probabilistic way. Second, we implement three versions of a biased random key genetic algorithm. The difference between these versions lies in the way in which individuals are decoded into feasible solutions to the problem. Moreover, we study the rollout versions of the corresponding decoders. Our results show that the developed algorithms can compete with the state of the art on the group of rather small-scale problem instances. However, with growing problem instance size, our algorithms cannot quite match the results of the current state-of-the-art algorithm. Nevertheless, our algorithms can potentially be improved in several different ways, which we explain in detail. Therefore, we believe that our algorithms should be studied further in future work.
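    Because the three biased random key genetic algorithm versions differ precisely in their decoders, a small sketch of one plausible decoding step may be useful. The greedy rule below (visit vertices in random-key order and keep every vertex not adjacent to one already chosen) is a hypothetical example, not one of the project's three decoders; it relies on the fact that a maximal independent set is always dominating.

```python
import random

def decode(keys, vertices, neigh):
    """keys: dict vertex -> random key in [0, 1], lower = considered earlier;
    neigh: dict vertex -> set of neighbours."""
    chosen, blocked = set(), set()
    for v in sorted(vertices, key=lambda u: keys[u]):
        if v not in blocked:              # independence w.r.t. chosen vertices
            chosen.add(v)
            blocked |= neigh[v] | {v}
    return chosen                         # maximal independent, hence dominating

# One random-key individual on a toy 4-cycle.
vertices = [0, 1, 2, 3]
neigh = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
keys = {v: random.random() for v in vertices}
print(decode(keys, vertices, neigh))
```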