330 research outputs found

    The Strategy Challenge in SMT Solving

    Get PDF
    Abstract. High-performance SMT solvers contain many tightly integrated, hand-crafted heuristic combinations of algorithmic proof methods. While these heuristic combinations tend to be highly tuned for known classes of problems, they may easily perform badly on classes of problems not anticipated by solver developers. This issue is becoming increasingly pressing as SMT solvers begin to gain the attention of practitioners in diverse areas of science and engineering. We present a challenge to the SMT community: to develop methods through which users can exert strategic control over core heuristic aspects of SMT solvers. We present evidence that the adaptation of ideas of strategy prevalent both within the Argonne and LCF theorem proving paradigms can go a long way towards realizing this goal. Prologue: Bill McCune, Kindness and Strategy, by Grant Passmore. I would like to tell a short story about Bill, of how I met him, and one way his work and kindness impacted my life.
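    As a concrete illustration of the kind of user-level strategic control the paper calls for, the sketch below composes solver tactics through Z3's Python bindings. The particular tactic names, the timeout, and the example formula are illustrative choices, not taken from the paper.

```python
# Minimal sketch of user-level strategy control in an SMT solver,
# using Z3's tactic combinators via the z3 Python bindings.
# Tactic names, timeout, and formula are illustrative only.
from z3 import Ints, And, Tactic, Then, OrElse, TryFor

x, y = Ints('x y')
goal_formula = And(x > 0, y > 0, x + 2 * y == 7)

# Compose a strategy: simplify first, then try a linear-arithmetic
# tactic with a time budget, falling back to the general SMT tactic.
strategy = Then(
    Tactic('simplify'),
    OrElse(TryFor(Tactic('qflia'), 1000),  # give QF_LIA 1000 ms
           Tactic('smt')))

solver = strategy.solver()   # turn the composed tactic into a solver
solver.add(goal_formula)
print(solver.check())        # expected: sat
print(solver.model())
```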

    Conversational artificial intelligence in the AEC industry: A review of present status, challenges and opportunities

    Get PDF
    The idea of developing a system that can converse with and understand human languages has been around since the 1200s. With the advancement of artificial intelligence (AI), Conversational AI came of age in 2010 with the launch of Apple’s Siri. Conversational AI systems leverage Natural Language Processing (NLP) to understand and converse with humans via speech and text. These systems have been deployed in sectors such as aviation, tourism, and healthcare. However, the application of Conversational AI in the architecture, engineering and construction (AEC) industry is lagging, and little is known about the state of research on it. This study therefore presents a systematic review of Conversational AI in the AEC industry to provide insight into current developments, complemented by a Focus Group Discussion conducted to highlight challenges and validate areas of opportunity. The findings reveal that Conversational AI applications hold immense benefits for the AEC industry but are currently underexplored. The major challenges behind this under-exploration are highlighted and discussed for intervention. Lastly, opportunities and future research directions for Conversational AI that would improve the productivity and efficiency of the industry are projected and validated. This study presents the status quo of a fast-emerging research area and serves as the first such attempt in the AEC field. Its findings provide insights into this new field that should benefit researchers and stakeholders in the AEC industry.

    Mathematical models of cellular decisions: investigating immune response and apoptosis

    No full text
    The main objective of this thesis is to develop and analyze mathematical models of cellular decisions. This work focuses on understanding the mechanisms involved in specific cellular processes such as immune response in the vascular system, and those involved in apoptosis, or programmed cellular death. A series of simple ordinary differential equation (ODE) models is constructed describing the macrophage response to hemoglobin:haptoglobin (Hb:Hp) complexes that may be present in vascular inflammation. The models propose a positive feedback loop between the CD163 macrophage receptor and the anti-inflammatory cytokine interleukin-10 (IL-10), and bifurcation analysis predicted the existence of a cellular phenotypic switch, which was experimentally verified. Moreover, these models are extended to include the intracellular mediator heme oxygenase-1 (HO-1). Analysis of the proposed models finds a positive feedback mechanism between IL-10 and HO-1. This model also predicts the cellular response to heme and IL-10 stimuli. For the apoptotic (cell suicide) system, a modularized model is constructed encompassing the extrinsic and intrinsic signaling pathways. Model reduction is performed by abstracting the dynamics of complexes (oligomers) at steady state. This simplified model is analyzed, revealing different kinetic properties between type I and type II cells, and the reduced models verify these results. The second model of apoptosis proposes a novel mechanism of apoptosis activation through receptor-ligand clustering, yielding robust bistability and hysteresis. Using techniques from algebraic geometry, a model selection criterion is provided for choosing between the proposed and existing models as experimental data becomes available to verify the mechanism. The models developed throughout this thesis reveal important and relevant mechanisms specific to cellular response; specifically, interactions necessary for an organism to maintain homeostasis are identified. This work enables a deeper understanding of the biological interactions and dynamics of vascular inflammation and apoptosis. The results of these models provide predictions which may motivate further experimental work and theoretical study.
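    For flavour, the sketch below simulates a generic two-variable ODE with a mutual positive feedback loop of the kind described above (a receptor upregulated by a cytokine, and a cytokine induced by receptor signalling). The equations and parameter values are hypothetical and only illustrate how such a loop can yield a bistable switch; they are not the thesis's CD163/IL-10 model.

```python
# Illustrative two-variable ODE with a positive feedback loop, in the
# spirit of the receptor/cytokine models described above.
# Equations and parameters are hypothetical, not the thesis's model.
import numpy as np
from scipy.integrate import solve_ivp

def feedback_model(t, z, k1=1.0, k2=1.0, K=0.5, n=4, d1=1.0, d2=1.0, s=0.05):
    r, c = z  # r: receptor level (e.g. CD163), c: cytokine level (e.g. IL-10)
    drdt = s + k1 * c**n / (K**n + c**n) - d1 * r   # cytokine upregulates receptor
    dcdt = k2 * r**n / (K**n + r**n) - d2 * c       # receptor signalling induces cytokine
    return [drdt, dcdt]

# Two initial conditions that settle into different steady states (bistability).
for z0 in ([0.1, 0.1], [1.0, 1.0]):
    sol = solve_ivp(feedback_model, (0, 50), z0)
    print(z0, '->', sol.y[:, -1].round(3))
```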

    Model Transformation Languages with Modular Information Hiding

    Get PDF
    Model transformations, together with models, form the principal artifacts in model-driven software development. Industrial practitioners report that transformations on larger models quickly become large and complex themselves. To alleviate the resulting maintenance effort, this thesis presents a modularity concept with explicit interfaces, complemented by software visualization and clustering techniques. All three approaches are tailored to the specific needs of the transformation domain.

    Identification of microservices from monolithic applications through topic modelling

    Get PDF
    Master's dissertation in Informatics Engineering. Microservices have emerged as one of the most popular architectural patterns in recent years, given the increased need to scale, grow, and add flexibility to software projects, accompanied by the growth of cloud computing and DevOps. Many software applications are being submitted to a process of migration from their monolithic architecture to a more modular, scalable, and flexible architecture of microservices. This process is slow and, depending on the project's complexity, may take months or even years to complete. This dissertation proposes a new approach to microservice identification that resorts to topic modelling in order to identify services according to domain terms. This approach, in combination with clustering techniques, produces a set of services based on the original software. The proposed methodology is implemented as an open-source tool for exploration of monolithic architectures and identification of microservices. An extensive quantitative analysis using state-of-the-art metrics on independence of functionality and modularity of services was conducted on 200 open-source projects collected from GitHub. Cohesion metrics at the message and domain level showed medians of roughly 0.6. Interfaces per service exhibited a median of 1.5 with a compact interquartile range. Structural and conceptual modularity revealed medians of 0.2 and 0.4 respectively. Further analysis to understand whether the methodology works better for smaller or larger projects revealed overall stability and similar performance across metrics. These first results are positive, demonstrating beneficial identification of services given the overall metric results.
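    As a rough illustration of the pipeline described above (topic modelling over domain terms followed by clustering into candidate services), the sketch below uses scikit-learn's LDA and k-means. The toy per-class "documents", the number of topics and clusters, and all names are invented for illustration and do not reproduce the dissertation's tool.

```python
# Sketch of the general idea: extract domain-term topics from code-level
# "documents" (e.g. one document per class), then cluster classes into
# candidate services. Toy input and parameters are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

class_documents = {
    "OrderController":  "order checkout cart payment invoice",
    "PaymentService":   "payment invoice charge refund gateway",
    "CatalogService":   "product catalog price stock category",
    "InventoryManager": "stock warehouse product inventory category",
}

vectorizer = CountVectorizer()
term_matrix = vectorizer.fit_transform(class_documents.values())

# Represent each class by its topic distribution over domain terms.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_distributions = lda.fit_transform(term_matrix)

# Cluster classes with similar topic distributions into candidate services.
clusters = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(topic_distributions)
for name, service in zip(class_documents, clusters):
    print(f"{name} -> candidate service {service}")
```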

    Natural language software registry (second edition)

    Get PDF

    Optimizing and Incrementalizing Higher-order Collection Queries by AST Transformation

    Get PDF
    In modern programming languages, queries on in-memory collections are often more expensive than needed. While database queries can be readily optimized, it is often not trivial to use them to express collection queries which employ nested data and first-class functions, as enabled by functional programming languages. Collection queries can be optimized and incrementalized by hand, but this reduces modularity, and is often too error-prone to be feasible or to enable maintenance of resulting programs.
To free programmers from such burdens, in this thesis we study how to optimize and incrementalize such collection queries. Resulting programs are expressed in the same core language, so that they can be subjected to other standard optimizations. To enable optimizing collection queries which occur inside programs, we develop a staged variant of the Scala collection API that reifies queries as ASTs. On top of this interface, we adapt domain-specific optimizations from the fields of programming languages and databases; among others, we rewrite queries to use indexes chosen by programmers. Thanks to the use of indexes we show significant speedups in our experimental evaluation, with an average of 12x and a maximum of 12800x. To incrementalize higher-order programs by program transformation, we extend finite differencing [Paige and Koenig, 1982; Blakeley et al., 1986; Gupta and Mumick, 1999] and develop the first approach to incrementalization by program transformation for higher-order programs. Base programs are transformed to derivatives, programs that transform input changes to output changes. We prove that our incrementalization approach is correct: we develop the theory underlying incrementalization for simply-typed and untyped λ-calculus, and discuss extensions to System F. Derivatives often need to reuse results produced by base programs: to enable such reuse, we extend work by Liu and Teitelbaum [1995] to higher-order programs, and develop and prove correct a program transformation converting higher-order programs to cache-transfer style. For efficient incrementalization, it is necessary to choose and incrementalize by hand appropriate primitive operations. We incrementalize a significant subset of collection operations and perform case studies, showing order-of-magnitude speedups both in practice and in asymptotic complexity.
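    To make the derivative idea concrete, here is a small hand-written analogy in Python: a base query over a collection and a derivative that maps an input change (added and removed elements) to an output change, so a cached result can be updated without recomputation. The thesis derives such derivatives automatically for a typed core language; the query and all names below are invented for illustration.

```python
# Hand-written illustration of incrementalization: a base query and a
# derivative that turns an input change into an output change, so the
# cached result can be updated without recomputing from scratch.
# (The thesis derives such derivatives automatically; this analogy and
# all names here are invented for illustration.)

def base_query(xs):
    """Total of the squares of all even elements."""
    return sum(x * x for x in xs if x % 2 == 0)

def derivative(change):
    """Map an input change (added, removed elements) to an output change."""
    added, removed = change
    return (sum(x * x for x in added if x % 2 == 0)
            - sum(x * x for x in removed if x % 2 == 0))

xs = [1, 2, 3, 4]
cache = base_query(xs)                 # 2*2 + 4*4 = 20

change = ([6], [2])                    # add 6, remove 2
cache += derivative(change)            # update in O(|change|) instead of O(|xs|)
new_xs = [1, 3, 4, 6]

assert cache == base_query(new_xs)     # 4*4 + 6*6 = 52
print(cache)
```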