208 research outputs found

    A Lagrangian reconstruction of a class of local search methods

    by Choi Mo Fung Kenneth. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 105-112). Abstract also in Chinese.
    Contents:
    Chapter 1: Introduction
      1.1 Constraint Satisfaction Problems
      1.2 Constraint Satisfaction Techniques
      1.3 Motivation of the Research
      1.4 Overview of the Thesis
    Chapter 2: Related Work
      2.1 Min-conflicts Heuristic
      2.2 GSAT
      2.3 Breakout Method
      2.4 GENET
      2.5 E-GENET
      2.6 DLM
      2.7 Simulated Annealing
      2.8 Genetic Algorithms
      2.9 Tabu Search
      2.10 Integer Programming
    Chapter 3: Background
      3.1 GENET
        3.1.1 Network Architecture
        3.1.2 Convergence Procedure
      3.2 Classical Optimization
        3.2.1 Optimization Problems
        3.2.2 The Lagrange Multiplier Method
        3.2.3 Saddle Point of Lagrangian Function
    Chapter 4: Binary CSP's as Zero-One Integer Constrained Minimization Problems
      4.1 From CSP to SAT
      4.2 From SAT to Zero-One Integer Constrained Minimization
    Chapter 5: A Continuous Lagrangian Approach for Solving Binary CSP's
      5.1 From Integer Problems to Real Problems
      5.2 The Lagrange Multiplier Method
      5.3 Experiment
    Chapter 6: A Discrete Lagrangian Approach for Solving Binary CSP's
      6.1 The Discrete Lagrange Multiplier Method
      6.2 Parameters of CSVC
        6.2.1 Objective Function
        6.2.2 Discrete Gradient Operator
        6.2.3 Integer Variables Initialization
        6.2.4 Lagrange Multipliers Initialization
        6.2.5 Condition for Updating Lagrange Multipliers
      6.3 A Lagrangian Reconstruction of GENET
      6.4 Experiments
        6.4.1 Evaluation of LSDL(genet)
        6.4.2 Evaluation of Various Parameters
        6.4.3 Evaluation of LSDL(max)
      6.5 Extension of LSDL
        6.5.1 Arc Consistency
        6.5.2 Lazy Arc Consistency
        6.5.3 Experiments
    Chapter 7: Extending LSDL for General CSP's: Initial Results
      7.1 General CSP's as Integer Constrained Minimization Problems
        7.1.1 Formulation
        7.1.2 Incompatibility Functions
      7.2 The Discrete Lagrange Multiplier Method
      7.3 A Comparison between the Binary and the General Formulation
      7.4 Experiments
        7.4.1 The N-queens Problems
        7.4.2 The Graph-coloring Problems
        7.4.3 The Car-Sequencing Problems
      7.5 Inadequacy of the Formulation
        7.5.1 Insufficiency of the Incompatibility Functions
        7.5.2 Dynamic Illegal Constraint
        7.5.3 Experiments
    Chapter 8: Concluding Remarks
      8.1 Contributions
      8.2 Discussions
      8.3 Future Work
    Bibliography
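
    The heart of Chapters 5-7 is the (discrete) Lagrange multiplier method applied to constraint satisfaction: minimize the number of violated constraints N(x) through the Lagrangian L(x, lam) = N(x) + sum_i lam_i * g_i(x), descending on x and raising multipliers at local minima. The Python sketch below is a hedged, generic illustration of that family of methods, not the thesis's LSDL code; the move operator, the multiplier schedule, and all names are assumptions.

        import random

        def dlm_search(variables, domains, violated, max_steps=100_000):
            """Discrete Lagrangian search sketch. `violated(x)` returns the ids
            of the constraints broken by assignment x (a dict name -> value)."""
            x = {v: random.choice(domains[v]) for v in variables}
            lam = {}                                  # one multiplier per constraint

            def L(a):                                 # discrete Lagrangian value
                return sum(1 + lam.get(c, 0) for c in violated(a))

            for _ in range(max_steps):
                broken = violated(x)
                if not broken:
                    return x                          # all constraints satisfied
                best, best_val = x, L(x)
                for v in variables:                   # one-variable discrete moves
                    for d in domains[v]:
                        cand = {**x, v: d}
                        if L(cand) < best_val:
                            best, best_val = cand, L(cand)
                if best is x:                         # local minimum of L: raise the
                    for c in broken:                  # penalty on violated constraints
                        lam[c] = lam.get(c, 0) + 1
                else:
                    x = best
            return None                               # no solution within the budget

    The multiplier updates play the same role as the connection-weight learning in GENET, which is the correspondence the thesis's title alludes to.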

    Towards Implicit Parallel Programming for Systems

    Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, for example a cache that reduces disk I/O. Most prevalent parallel programming models do not support a notion of state and require the programmer to synchronize state access manually, i.e., outside the reach of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and forces the programmer to optimize the program by hand. In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state, together with a compiler that can optimize the program for parallel execution. We define the notion of a stateful function along with its composition and control structures. An example implementation of a highly scalable server shows that stateful functions integrate smoothly with existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime format of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O, and allows for non-blocking live updates.
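
    As a rough sketch of the idea only (the dissertation's actual language, API, and runtime are not shown here, and every name below is hypothetical), a stateful function can be thought of as private state fused with a transformation, with a dataflow runtime, rather than the programmer, keeping state access single-threaded:

        from concurrent.futures import ThreadPoolExecutor
        from queue import Queue

        class Cache:
            """A 'stateful function': private state fused with a transformation."""
            def __init__(self, load):
                self._store = {}                      # state owned by this node alone
                self._load = load
            def __call__(self, key):
                if key not in self._store:
                    self._store[key] = self._load(key)  # e.g. a disk read
                return self._store[key]

        def pipeline(stages, items):
            """Run each stage on its own worker; stages talk only through queues,
            so no stage's state is ever touched by two threads at once. Threads
            stand in here for a real dataflow runtime."""
            queues = [Queue() for _ in range(len(stages) + 1)]
            def worker(stage, qin, qout):
                while (item := qin.get()) is not None:  # None marks end-of-stream
                    qout.put(stage(item))
                qout.put(None)
            with ThreadPoolExecutor(max_workers=len(stages)) as pool:
                for stage, qin, qout in zip(stages, queues, queues[1:]):
                    pool.submit(worker, stage, qin, qout)
                for item in items:
                    queues[0].put(item)
                queues[0].put(None)
                results = []
                while (out := queues[-1].get()) is not None:
                    results.append(out)
            return results

        # Hypothetical usage: a cached loader followed by a pure transformation.
        fetch = Cache(lambda k: k.upper())            # stand-in for disk I/O
        print(pipeline([fetch, lambda s: s + "!"], ["a", "b", "a"]))

    Because each stage owns its state and communicates only through queues, the stages can be pipelined across cores without any user-visible locking, loosely mirroring the abstract's point that state synchronization should live below the programming model rather than in the programmer's hands.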

    A programming logic based on type theory

    A Rotating Aperture Mask for Small Telescopes

    Observing the dynamic interaction between stars and their close stellar neighbors is key to establishing the stars’ orbits, masses, and other properties. Our ability to visually discriminate nearby stars is limited by the power of our telescopes, posing a challenge to astronomers at small observatories that contribute to binary star surveys. Masks placed at the telescope aperture promise to augment the resolving power of telescopes of all sizes, but many of these masks must be manually and repetitively reoriented about the optical axis to achieve their full benefits. This paper introduces a design concept for a mask rotation mechanism that can be adapted to telescopes of different types and proportions, focusing on an implementation for a Celestron C11 Schmidt–Cassegrain optical tube assembly. Mask concepts were first evaluated using diffraction simulation programs, then manufactured, and finally tested on close double stars using a C11. An electronic rotation mechanism was designed, produced, and evaluated. Results show that applying a properly shaped and oriented mask to a C11 enhances contrast in images of double star systems relative to images captured with the unmasked telescope, and that the rotation mechanism accurately and repeatably places masks at target orientations with minimal manual effort. Detail drawings of the mask rotation mechanism and code for the software interface are included.
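
    Because the mask concepts were screened with diffraction simulation programs before any hardware was made, it is worth noting how compact such a simulation can be. The Python sketch below is a generic illustration under the standard Fraunhofer approximation, in which the point spread function is the squared magnitude of the Fourier transform of the pupil function; the mask shape, grid size, and all names are our assumptions, not the paper's code.

        import numpy as np

        def barred_mask(n=512, bars=6, angle_deg=0.0):
            """Circular pupil crossed by opaque parallel bars at angle_deg.
            Illustrative stand-in for any rotatable aperture mask."""
            y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
            pupil = (x**2 + y**2) <= 1.0
            t = np.deg2rad(angle_deg)
            u = x * np.cos(t) + y * np.sin(t)         # coordinate across the bars
            open_stripes = (np.floor((u + 1.0) * bars) % 2).astype(bool)
            return pupil & open_stripes

        def psf(aperture):
            """Fraunhofer far field: PSF = |FFT(pupil)|^2, peak-normalized."""
            field = np.fft.fftshift(np.fft.fft2(aperture))
            p = np.abs(field) ** 2
            return p / p.max()

        mask = barred_mask(angle_deg=30.0)
        pattern = psf(mask)   # inspect diffraction spikes vs. mask orientation

    Sweeping angle_deg and watching the diffraction spikes rotate shows directly why such masks must be reoriented about the optical axis before their benefits land where the companion star lies.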

    Fine-Grained Workflow Interoperability in Life Sciences

    Recent decades have witnessed an exponential increase in available biological data due to advances in key technologies for the life sciences. Specialized computing resources and scripting skills are now required to deliver results in a timely fashion: desktop computers or monolithic approaches can keep pace with neither the growth of available biological data nor the complexity of analysis techniques. Workflows offer an accessible way to counter this trend by facilitating the parallelization and distribution of computations. Given their structured and repeatable nature, workflows also provide a transparent process that satisfies the strict reproducibility standards required by the scientific method.
    One of the goals of our work is to assist researchers in accessing computing resources without the need for programming or scripting skills. To this end, we created a toolset able to integrate any command line tool into workflow systems. Out of the box, our toolset supports two widely used workflow systems, and its modular design allows for seamless additions to support further workflow engines. Recognizing the importance of early and robust workflow design, we also extended a well-established, desktop-based analytics platform that contains more than two thousand tasks (each being a building block for a workflow), allows easy development of new tasks, and is able to integrate external command line tools. We developed a converter plug-in that offers a user-friendly mechanism to execute workflows on distributed high-performance computing resources, an exercise that would otherwise require technical skills typically not associated with the average life scientist's profile. Our converter extension generates virtually identical versions of the same workflows, which can then be executed on more capable computing resources. That is, we not only leverage the capacity of distributed high-performance resources and the convenience of a workflow engine designed for personal computers, but also circumvent the computing limitations of personal computers and the steep learning curve associated with creating workflows for distributed environments. Our converter extension has immediate applications for researchers, and we showcase our results by means of three use cases relevant to life scientists: structural bioinformatics, immunoinformatics, and metabolomics.
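
    The core trick of integrating "any command line tool" into a workflow system can be made concrete in a few lines. The Python sketch below is an illustration under assumptions, not the toolset's real mechanism: the wrapper, the argument-template scheme, and the example blastp invocation are all ours.

        import shlex
        import subprocess

        def make_task(executable, arg_template):
            """Lift a command line tool into a callable workflow node; the node
            is fully described by an executable plus an argument template."""
            def task(**params):
                cmd = [executable] + shlex.split(arg_template.format(**params))
                done = subprocess.run(cmd, capture_output=True, text=True, check=True)
                return done.stdout            # downstream nodes consume stdout
            return task

        # Hypothetical example: one declarative description of a BLAST search,
        # reusable as a building block by any engine that can call Python.
        blast = make_task("blastp", "-query {query} -db {db} -outfmt 6")
        # hits = blast(query="protein.fasta", db="swissprot")

    Because the tool is captured as data (executable plus template) rather than engine-specific glue code, the same description can be emitted in the native format of each supported workflow system, which is what makes multi-engine support a matter of adding a new backend.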