7 research outputs found

    Approaches to grid-based SAT solving

    Get PDF
    In this work we develop techniques for using distributed computing resources to efficiently solve instances of the propositional satisfiability problem (SAT). The computing resources considered in this work are assumed to be geographically distributed and connected by a non-dedicated network; such systems are typically referred to as computational grid environments. The time a modern SAT solver consumes while solving an instance varies according to a random distribution. Unlike many other methods for distributed SAT solving, this work identifies that random distribution as a valuable resource for reducing solving time. Methods that exploit randomness in the run times of a search algorithm, such as the ones discussed in this work, are examples of multi-search. The main contribution of this work is in developing and analyzing the multi-search approach to SAT solving and demonstrating its efficiency in several experiments. For the purpose of the analysis, the work introduces a grid simulation model which captures several properties of a grid environment that are not observed in more traditional parallel computing systems. The work develops two algorithmic frameworks for multi-search in SAT. The first, SDSAT, uses properties of the distribution of the solving time to reduce the expected time required to solve an instance. Based on the analysis of SDSAT, the work proposes an algorithm for efficiently using a large number of computing resources simultaneously to solve collections of SAT instances. The analysis of SDSAT also motivates the second algorithmic framework, CL-SDSAT. This framework is used to efficiently solve many industrial SAT instances by carefully combining information learned in the distributed SAT solvers. All methods described in the work are directly applicable in a wide range of grid environments and can be used together with virtually unmodified state-of-the-art SAT solvers. The methods are experimentally verified using standard benchmark SAT instances in a production-level grid environment. The experiments show that, using the relatively simple methods developed in the work, SAT instances which cannot be solved efficiently in sequential settings can now be solved in a grid environment.
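    A minimal sketch of the multi-search idea described in this abstract, not the SDSAT implementation: several independently seeded runs of a randomized solver are launched in parallel and the first answer wins; when run times follow a heavy-tailed distribution, the expected minimum of several runs can be far below the expected time of a single run. The solver stand-in, its Pareto-distributed run time, and all function names are assumptions made for illustration.

        import multiprocessing as mp
        import random
        import time

        def solve_with_seed(seed, results):
            """Stand-in for one randomized SAT solver run; its run time is drawn from
            a heavy-tailed (Pareto) distribution to mimic the behaviour discussed above."""
            rng = random.Random(seed)
            runtime = rng.paretovariate(1.1)
            time.sleep(min(runtime, 30.0))          # cap so the sketch always terminates
            results.put({"seed": seed, "simulated_runtime": runtime, "answer": "SAT"})

        def multi_search(num_runs=8):
            """Launch num_runs independent runs and return the first result reported."""
            results = mp.Queue()
            runs = [mp.Process(target=solve_with_seed, args=(seed, results))
                    for seed in range(num_runs)]
            for run in runs:
                run.start()
            first = results.get()                   # block until the fastest run reports
            for run in runs:                        # stop the remaining, slower runs
                run.terminate()
                run.join()
            return first

        if __name__ == "__main__":
            print(multi_search())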

    Customizable Feature based Design Pattern Recognition Integrating Multiple Techniques

    Get PDF
    Recovering design information from legacy applications is a complex, expensive, quite challenging, and time-consuming task, due to the ever-increasing complexity of software and the advent of modern technologies. With the growing demand for maintenance of legacy systems that can cope with the latest technologies and new business requirements, the reuse of artifacts from existing legacy applications in new developments has become very important and vital for the software industry. Because the architecture of legacy systems evolves constantly, their documentation is often incomplete, inconsistent, and obsolete and does not provide enough information about the structure of these systems. In most cases, the source code is the only reliable source of information for recovering artifacts from legacy systems. Extracting design artifacts from the source code of existing legacy systems supports program comprehension, maintenance, code refactoring, reverse engineering, redocumentation, and reengineering methodologies. The objective of the approach used in this thesis is to recover design information from legacy code, with a particular focus on the recovery of design patterns. Design patterns are key artifacts for recovering design decisions from legacy source code. Patterns have been extensively tested in different applications, and reusing them yields quality software at reduced cost and development time. Different techniques, methodologies, and tools have been used in the past to recover patterns from legacy applications. Each technique recovers patterns with different precision and recall rates, because the same pattern can be specified and implemented in different ways. The approach used in this thesis is based on customizable and reusable feature types which use static and dynamic parameters to define variant pattern definitions. Each feature type allows the user to select among multiple search techniques (SQL queries, regular expressions, and source code parsers) which are used to match features of patterns with source code artifacts. The technique focuses on detecting variants of different design patterns by using static, dynamic, and semantic analysis techniques. The integrated use of SQL queries, source code parsers, regular expressions, and annotations improves precision and recall for pattern extraction from different legacy systems. The approach introduces new semantics for annotations in the source code of legacy applications, which reduce the search space and time for detecting patterns. The prototypical implementation of the approach, called UDDPRT, is used to recognize different design patterns in the source code of multiple languages (Java, C/C++, C#). The prototype is flexible and customizable, so that even a novice user can change the SQL queries and regular expressions to detect implementation variants of design patterns. The approach significantly improves the precision and recall of pattern extraction, as shown by experiments on a number of open source systems taken as baselines for comparison.
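    A minimal sketch of the feature-type idea described above, not the UDDPRT implementation: each pattern feature is paired with a selectable search technique. Only the regular-expression technique is shown; SQL queries and a source-code parser would be plugged in behind the same interface. The class names, regexes, and the Singleton example are assumptions chosen for illustration.

        import re
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class FeatureType:
            name: str
            search: Callable[[str], bool]   # selected technique: source text -> matched?

        def regex_feature(name: str, pattern: str) -> FeatureType:
            """Build a feature type that uses the regular-expression search technique."""
            compiled = re.compile(pattern, re.MULTILINE)
            return FeatureType(name, lambda src: compiled.search(src) is not None)

        # Two features that together characterise one Singleton variant in Java code.
        singleton_features = [
            regex_feature("private constructor", r"private\s+\w+\s*\(\s*\)"),
            regex_feature("static instance accessor", r"public\s+static\s+\w+\s+getInstance\s*\("),
        ]

        java_source = """
        public class Registry {
            private static Registry instance;
            private Registry() {}
            public static Registry getInstance() {
                if (instance == null) instance = new Registry();
                return instance;
            }
        }
        """

        matched = [f.name for f in singleton_features if f.search(java_source)]
        print("matched features:", matched)
        print("Singleton candidate:", len(matched) == len(singleton_features))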

    Example-based model refactoring using heuristic search

    Get PDF
    Software maintenance is considered the most expensive activity in software systems development: more than 80% of the resources are devoted to it. During maintenance activities, software models are very rarely taken into account. The evolution of these models and the transformations that manipulate them are at the heart of model-driven engineering (MDE). However, like source code, models change and tend to become increasingly complex. These changes generally have a negative impact on the quality of the models and thereby damage the software. In this context, refactoring is the most widely used technique for maintaining an adequate quality of these models. The refactoring process is usually done in two steps: the detection of the elements of the model to correct (design defects), then the correction of these elements. In this thesis, we propose two main contributions related to the detection and correction of defects in class diagrams. The first contribution aims to automate design defect detection. We propose to adapt genetic algorithms (e.g., genetic programming) to detect parts of the model that may correspond to design defects. The second contribution concerns the automation of the correction of these design defects. We propose to adapt three heuristic methods to suggest refactorings: 1. a single-objective optimization method based on structural similarities between a given model (i.e., the model to be refactored) and a set of example models (i.e., models that have undergone some refactorings); 2. an interactive single-objective optimization method based on structural similarity and the opinion of the designer; and 3. a multi-objective optimization method that maximizes both the structural and semantic similarities between the model under study and the models in the set of examples. All the proposed methods were implemented and evaluated on models generated from existing open-source projects, and the obtained results confirm their efficiency.
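    A minimal sketch of the example-based, single-objective search idea only, not the thesis implementation: a class model is reduced to a set of (class, attribute) pairs, structural similarity to the example base is a Jaccard score, and a greedy heuristic search accepts "move attribute" refactorings that increase that score. The toy models and the move operator are assumptions invented for illustration.

        import random

        def similarity(model, example):
            """Jaccard similarity between two models given as sets of (class, attribute) pairs."""
            return len(model & example) / len(model | example)

        def best_example_score(model, examples):
            """Structural similarity to the closest model in the example base."""
            return max(similarity(model, ex) for ex in examples)

        def move_attribute(model, rng):
            """One candidate refactoring: reassign a randomly chosen attribute to another class."""
            model = set(model)
            cls, attr = rng.choice(sorted(model))
            classes = sorted({c for c, _ in model})
            model.discard((cls, attr))
            model.add((rng.choice(classes), attr))
            return frozenset(model)

        def refactor(model, examples, steps=200, seed=0):
            """Greedy heuristic search: keep candidate refactorings that raise the score."""
            rng = random.Random(seed)
            best, best_score = frozenset(model), best_example_score(model, examples)
            for _ in range(steps):
                candidate = move_attribute(best, rng)
                score = best_example_score(candidate, examples)
                if score > best_score:
                    best, best_score = candidate, score
            return best, best_score

        # Toy input: "customerName" sits in the wrong class compared with the example model.
        smelly = {("Order", "customerName"), ("Order", "total"), ("Customer", "address")}
        examples = [frozenset({("Customer", "customerName"), ("Order", "total"),
                               ("Customer", "address")})]
        print(refactor(smelly, examples))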

    Reliable massively parallel symbolic computing : fault tolerance for a distributed Haskell

    Get PDF
    As the number of cores in manycore systems grows exponentially, the number of failures is also predicted to grow exponentially. Hence massively parallel computations must be able to tolerate faults. Moreover, new approaches to language design and system architecture are needed to address the resilience of massively parallel heterogeneous architectures. Symbolic computation has underpinned key advances in Mathematics and Computer Science, for example in number theory, cryptography, and coding theory. Computer algebra software systems facilitate symbolic mathematics. Developing these at scale has its own distinctive set of challenges, as symbolic algorithms tend to employ complex irregular data and control structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel High Performance Computing platforms. A key element of SymGridParII is a domain-specific language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for scalable distributed-memory parallelism, and employs work stealing to load-balance dynamically generated irregular task sizes. To investigate providing scalable fault-tolerant symbolic computation we design, implement and evaluate a reliable version of HdpH, HdpH-RS. Its reliable scheduler detects and handles faults, using task replication as a key recovery strategy. The scheduler supports load balancing with a fault-tolerant work stealing protocol. The reliable scheduler is invoked with two fault tolerance primitives for implicit and explicit work placement, and 10 fault-tolerant parallel skeletons that encapsulate common parallel programming patterns. The user is oblivious to many failures; they are instead handled by the scheduler. An operational semantics describes small-step reductions on states. A simple abstract machine for scheduling transitions and task evaluation is presented. It defines the semantics of supervised futures, and the transition rules for recovering tasks in the presence of failure. The transition rules are demonstrated with a fault-free execution and three executions that recover from faults. The fault-tolerant work stealing protocol has been abstracted into a Promela model, and the SPIN model checker is used to exhaustively search this model's state space to validate a key resiliency property of the protocol: an initially empty supervised future on the supervisor node will eventually be full in the presence of all possible combinations of failures. The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision overheads are consistently low, scaling up to 1400 cores. Low recovery overheads are observed in the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey mechanism has been developed for stress-testing resiliency with random failure combinations. All unit tests pass in the presence of random failure, terminating with the expected results.
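    A minimal sketch of the supervised-future idea described above, written in Python rather than Haskell and not the HdpH-RS scheduler: the supervisor records where a task was placed and, if the chosen worker dies before the future is filled, replicates the task on the next healthy worker. The Worker and SupervisedFuture names, the timeout-based failure detection, and the simulated dead node are assumptions made for illustration.

        import threading

        class SupervisedFuture:
            """A write-once future whose supervisor can re-place its task on failure."""
            def __init__(self, task):
                self.task = task
                self._value = None
                self._filled = threading.Event()

            def fill(self, value):
                self._value = value
                self._filled.set()

            def done(self, timeout=None):
                return self._filled.wait(timeout)

            def result(self):
                self._filled.wait()
                return self._value

        class Worker:
            def __init__(self, name, alive=True):
                self.name, self.alive = name, alive

            def run(self, future):
                if not self.alive:          # simulated node failure: the future stays empty
                    return
                future.fill((self.name, future.task()))

        def supervise(task, workers, detect_timeout=0.5):
            """Place the task on a worker; replicate it elsewhere if the future stays empty."""
            future = SupervisedFuture(task)
            for worker in workers:
                threading.Thread(target=worker.run, args=(future,)).start()
                if future.done(timeout=detect_timeout):
                    return future.result()
                # The placement is treated as lost; replicate the task on the next worker.
            raise RuntimeError("all replicas failed")

        print(supervise(lambda: 6 * 7, [Worker("node1", alive=False), Worker("node2")]))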

    Eight Biennial Report: April 2005 – March 2007

    No full text

    Conceitos e ferramentas de autoria para representação da visão estratégica baseada em padrões e busca em jogos heurísticos adversaristas

    Get PDF
    Advisor: Marcos Castilho. Co-advisor: Bruno Muller Junior. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 11/02/2019. Includes references: p. 96-102. Area of concentration: Computer Science.
    Abstract: This thesis presents new concepts and software tools to support knowledge formalization at the level of strategic vision. In Chess, for example, it is the strategic vision that allows chess players to identify when two different game positions have the same characteristics, that is, the same pattern. The formal authoring of Chess patterns is presented as a key element underlying the human style of playing and as a meta-cognitive task through which learners and grandmasters reflect on patterns stored in their long-term memory. Multiple external representation concepts are used in the design of the environment. Few earlier works were found in the scientific literature on applying knowledge at the level of strategic vision to game engines. An ontology for specifying patterns is defined, and an environment and a language for formalizing knowledge about patterns are presented. Experiments to verify the expressive power of the language were carried out, with promising results. The concepts addressed here can be used in other fields such as medicine and administrative sciences. Keywords: artificial intelligence, AI, pattern recognition, chess, educational games, knowledge representation, knowledge formalization, strategic vision.
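    A minimal sketch of what "the same pattern in two different positions" can mean, not the ontology or authoring language developed in the thesis: a pattern is a set of (square, piece) requirements, and a position matches when it satisfies all of them. The positions, the fianchetto pattern, and the board encoding are assumptions chosen for illustration.

        # A position maps occupied squares to pieces (White pieces in upper case).
        position_a = {"g1": "K", "g2": "B", "g3": "P", "f2": "P", "d4": "P", "c6": "n"}
        position_b = {"g1": "K", "g2": "B", "g3": "P", "f2": "P", "e4": "P", "b8": "n"}

        # Strategic pattern: a fianchettoed king-side bishop sheltering the White king.
        fianchetto_pattern = {("g2", "B"), ("g3", "P"), ("g1", "K")}

        def matches(position, pattern):
            """True when the position satisfies every requirement of the pattern."""
            return all(position.get(square) == piece for square, piece in pattern)

        for name, pos in [("position_a", position_a), ("position_b", position_b)]:
            print(name, "shows the fianchetto pattern:", matches(pos, fianchetto_pattern))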