
    Using Inhabitation in Bounded Combinatory Logic with Intersection Types for Composition Synthesis

    We describe ongoing work on a framework for automatic composition synthesis from a repository of software components. This work is based on combinatory logic with intersection types. The idea is that components are modeled as typed combinators, and an algorithm for inhabitation (is there a combinatory term e with type τ relative to an environment Γ?) can be used to synthesize compositions. Here, Γ represents the repository in the form of typed combinators, τ specifies the synthesis goal, and e is the synthesized program. We illustrate our approach by examples, including an application to synthesis from GUI components.
    Comment: In Proceedings ITRS 2012, arXiv:1307.784
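    To make the inhabitation question concrete, here is a minimal sketch of inhabitation as bounded proof search, under strong simplifying assumptions: plain arrow types only (no intersection types or subtyping) and a fixed search depth. The repository entries (readFile, parseCSV, input) are hypothetical, not from the paper.

```python
# Minimal sketch: inhabitation as bounded proof search over a repository
# Gamma of typed combinators. Simplifying assumption: plain arrow types
# only (no intersection types), represented as ("->", A, B).

def inhabit(gamma, goal, depth=4):
    """Return a composition term of type `goal`, or None if none is found."""
    if depth == 0:
        return None
    for name, ty in gamma.items():
        # Peel arguments off  A1 -> ... -> An -> goal.
        args, result = [], ty
        while result != goal and isinstance(result, tuple) and result[0] == "->":
            args.append(result[1])
            result = result[2]
        if result != goal:
            continue
        # The combinator fits if every argument type is itself inhabited.
        arg_terms = [inhabit(gamma, a, depth - 1) for a in args]
        if all(t is not None for t in arg_terms):
            term = name
            for t in arg_terms:
                term = (term, t)  # left-nested application
            return term
    return None

# Hypothetical repository: synthesize a Table from a Path via Text.
gamma = {
    "readFile": ("->", "Path", "Text"),
    "parseCSV": ("->", "Text", "Table"),
    "input":    "Path",
}
print(inhabit(gamma, "Table"))  # ('parseCSV', ('readFile', 'input'))
```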

    A theoretical introduction to “Combinatory SYBR®Green qPCR Screening”, a matrix-based approach for the detection of materials derived from genetically modified plants

    The detection of genetically modified (GM) materials in food and feed products is a complex multi-step analytical process involving screening, identification, and often quantification of the genetically modified organisms (GMOs) present in a sample. “Combinatory SYBR®Green qPCR Screening” (CoSYPS) is a matrix-based approach for determining the presence of GM plant materials in products. The CoSYPS decision-support system (DSS) interprets the analytical results of SYBR®Green qPCR analysis based on four values: the Ct and Tm values and the LOD and LOQ (limits of detection and quantification) for each method. A theoretical explanation of the different concepts applied in CoSYPS analysis is given (GMO Universe, “prime number tracing”, matrix/combinatory approach) and documented using the RoundUp Ready soy GTS 40-3-2 as an example. By applying a limited set of SYBR®Green qPCR methods and a newly developed “prime number”-based algorithm, the subset of GMOs that could account for the signals observed in a sample can be determined. Together, these analyses provide guidance for semi-quantitative estimation of GMO presence in a food and feed product.
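    A small sketch of the prime-number tracing idea as we read it from the abstract: each screening target gets a unique prime, a GMO event is encoded as the product of the primes of the targets it contains, and an event is compatible with a screening result iff its code divides the product of the detected targets' primes. The prime assignments and the second event are illustrative, not the published CoSYPS values; GTS 40-3-2 carrying p35S, tNOS and CP4 EPSPS matches the paper's example.

```python
# Illustrative "prime number tracing" (hypothetical prime assignments).
# A GMO's code divides the signal product iff all of its elements were
# detected in the sample.

PRIMES = {"p35S": 2, "tNOS": 3, "CP4-EPSPS": 5, "CryIAb": 7, "lectin": 11}

GMO_CODES = {
    # RoundUp Ready soy GTS 40-3-2 carries p35S, tNOS and CP4 EPSPS.
    "GTS40-3-2": 2 * 3 * 5,
    "Bt-maize-X": 2 * 7,   # hypothetical event for contrast
}

def compatible_gmos(detected_targets):
    signal = 1
    for t in detected_targets:
        signal *= PRIMES[t]
    return [g for g, code in GMO_CODES.items() if signal % code == 0]

# A soy sample positive for p35S, tNOS, CP4-EPSPS and the soy endogene
# lectin is consistent with GTS 40-3-2 but not with the Bt maize event.
print(compatible_gmos({"p35S", "tNOS", "CP4-EPSPS", "lectin"}))
# ['GTS40-3-2']
```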

    Interactive 3D visualization for theoretical Virtual Observatories

    Virtual Observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of datasets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization, e.g. mock imaging in 2D or volume rendering in 3D. We analyze the current state of 3D visualization for large theoretical astronomical datasets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally, we showcase a lightweight client-server visualization tool for particle-based datasets that allows quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
    Comment: 10 pages, 13 figures. Accepted for publication in Monthly Notices of the Royal Astronomical Society.
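    The server-side half of "quantitative visualization via data filtering" might look like the following minimal sketch: the server holds the full particle dataset and returns only the particles matching a client-supplied property range, keeping the payload small enough for interactive rendering. The API and mock dataset are assumptions for illustration, not the tool described in the paper.

```python
# Minimal sketch of server-side particle filtering for a client-server
# visualization tool (hypothetical API and mock data).

import numpy as np

# Mock particle dataset: positions plus one scalar property (e.g. mass).
rng = np.random.default_rng(0)
particles = {
    "pos":  rng.uniform(0.0, 100.0, size=(100_000, 3)),   # x, y, z
    "mass": rng.lognormal(mean=10.0, sigma=1.0, size=100_000),
}

def filter_particles(data, prop, lo, hi):
    """Return positions of particles whose `prop` lies in [lo, hi]."""
    mask = (data[prop] >= lo) & (data[prop] <= hi)
    return data["pos"][mask]

# Client request: only the most massive 1% of particles, so the browser
# has few enough points to render interactively in 3D.
heavy = filter_particles(particles, "mass",
                         np.quantile(particles["mass"], 0.99), np.inf)
print(heavy.shape)  # roughly (1000, 3)
```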

    User support for software development technologies

    The adoption of software development technologies is very closely related to the topic of user support. This is especially true in early phases, when users are familiar neither with the modification and build processes of the software to be developed nor with the technology used for its development. This work introduces an approach to improving the usability of software development technologies, represented here by the Combinatory Logic Synthesizer (CL)S Framework. (CL)S is based on a type inhabitation algorithm for combinatory logic with intersection types and aims to automatically create software components from a domain-specific repository. The framework yields a complete enumeration of all inhabitants, computed in the form of tree grammars. Unfortunately, the underlying type system allows only limited application of domain-specific knowledge. To compensate for this limitation, this work provides a framework whose main aspects are debugging intersection type specifications and filtering inhabitation results using domain-specific constraints. The aim of the debugger is to make potentially incomplete or erroneous input specifications, as well as the decisions of the inhabitation algorithm, understandable for those who are not experts in type theory. The combination of tree grammars and graph theory forms the foundation of a clear representation of the computed results that informs users about the search process of the algorithm. The graphical representations are based on hypergraphs that illustrate the inhabitation in a step-wise fashion. Within the scope of this work, three filtering algorithms were implemented and investigated. The filtering algorithm integrated into the framework for user support and used for restricting inhabitation results is practically feasible and represents a clear improvement over existing approaches. It is based on modifying the tree grammars produced by the (CL)S Framework. Additionally, the usability of the (CL)S Framework is supported by eight perspectives included in a web-based integrated development environment (IDE) that provides detailed graphical and textual information about the synthesis.
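    The following sketch illustrates what filtering inhabitation results by modifying a tree grammar could look like in miniature: a grammar maps each nonterminal (a type) to rules (combinator, argument nonterminals); filtering removes rules that violate a domain constraint and then iteratively prunes rules whose arguments can no longer produce any finite tree. The grammar and constraint are hypothetical, far simpler than the grammars (CL)S actually emits.

```python
# Sketch: constraint-based filtering on a tree grammar (hypothetical,
# simplified). Removing rules can make nonterminals unproductive, so we
# prune to a fixpoint.

grammar = {
    "Table": {("parseCSV", ("Text",)), ("parseJSON", ("Text",))},
    "Text":  {("readFile", ("Path",)), ("httpGet", ("Url",))},
    "Path":  {("input", ())},
    "Url":   set(),          # no rule: Url is not inhabited
}

def filter_grammar(grammar, forbidden):
    g = {nt: {r for r in rules if r[0] not in forbidden}
         for nt, rules in grammar.items()}
    changed = True
    while changed:
        productive = {nt for nt, rules in g.items() if rules}
        pruned = {nt: {(c, args) for c, args in rules
                       if all(a in productive for a in args)}
                  for nt, rules in g.items()}
        changed = pruned != g
        g = pruned
    return g

# Domain-specific constraint: the composition must not use network access.
print(filter_grammar(grammar, forbidden={"httpGet"}))
```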

    ChIP-Array: Combinatory analysis of ChIP-seq/chip and microarray gene expression data to discover direct/indirect targets of a transcription factor

    Chromatin immunoprecipitation (ChIP) coupled with high-throughput techniques (ChIP-X), such as next-generation sequencing (ChIP-seq) and microarray (ChIP-chip), has been successfully used to map active transcription factor binding sites (TFBS) of a transcription factor (TF). The targeted genes can be activated or suppressed by the TF, or can be unresponsive to it. Microarray technology has been used to measure the actual expression changes of thousands of genes under the perturbation of a TF, but is unable to determine whether the affected genes are direct or indirect targets of the TF. Furthermore, both ChIP-X and microarray methods produce a large number of false positives. Combining microarray expression profiling with ChIP-X data allows more effective TFBS analysis for studying the function of a TF. However, current web servers provide tools to analyze either ChIP-X or expression data, but not both. Here, we present ChIP-Array, a web server that integrates ChIP-X and expression data from human, mouse, yeast, fruit fly and Arabidopsis. The server assists biologists in detecting direct and indirect target genes regulated by a TF of interest and aids in the functional characterization of the TF. ChIP-Array is available at http://jjwanglab.hku.hk/ChIP-Array, with free access to academic users.
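    In toy form, the combinatory idea reduces to set logic: genes both bound in ChIP-X and differentially expressed under TF perturbation are candidate direct targets, while responsive-but-unbound genes are candidate indirect targets, and intersecting the two noisy evidence sources trims the false positives each produces alone. The gene names below are invented placeholders; the real server additionally uses regulatory-network information.

```python
# Illustrative set logic only (hypothetical gene lists).

chip_bound = {"GENE1", "GENE2", "GENE3", "GENE7"}   # TFBS from ChIP-seq/chip
diff_expr  = {"GENE2", "GENE3", "GENE4", "GENE5"}   # responsive on microarray

direct             = chip_bound & diff_expr   # bound and responsive
indirect           = diff_expr - chip_bound   # responsive but not bound
unresponsive_bound = chip_bound - diff_expr   # bound but unresponsive

print(sorted(direct))    # ['GENE2', 'GENE3']
print(sorted(indirect))  # ['GENE4', 'GENE5']
```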

    Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality

    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with provocative implementations by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation for randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It uses another original and attractive idea, connected both to classification using compression and to Kolmogorov complexity from a conceptual point of view. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, pointing out their fundamental principles and the underlying duality. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of the axiomatic set theory ZF. This leads us to show how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
    Comment: 43 pages
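    Classification by compression, in the style of Cilibrasi and Vitanyi, replaces the uncomputable Kolmogorov complexity K with a real compressor C in the Normalized Compression Distance, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch with zlib as the stand-in compressor (the example strings are arbitrary):

```python
# Normalized Compression Distance with zlib as the compressor C.

import zlib

def c(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar texts compress well together, so their distance is smaller.
a = b"the quick brown fox jumps over the lazy dog" * 20
b = b"the quick brown fox leaps over the lazy dog" * 20
z = b"colorless green ideas sleep furiously" * 20
print(f"similar: {ncd(a, b):.3f}   dissimilar: {ncd(a, z):.3f}")
# The similar pair typically scores noticeably lower.
```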

    A study of the use of natural language processing for conversational agents

    Language is a mark of humanity and consciousness, and conversation (or dialogue) is one of the most fundamental forms of communication we learn as children. One way to make a computer more attractive for interaction with users is therefore through natural language. Among the systems with some degree of language capability, the Eliza chatterbot is probably the first with a focus on dialogue. To make the interaction more interesting and useful to the user there are other approaches besides chatterbots, such as conversational agents. These agents generally have, to some degree, properties such as: a body (with cognitive states, including beliefs, desires, and intentions or objectives); interactive embodiment in the real or virtual world (including perception of events, communication, and the ability to manipulate the world and communicate with others); and human-like behavior (including affective abilities). Agents of this type have been called by several terms, including animated agents or embodied conversational agents (ECA). A dialogue system has six basic components. (1) The speech recognition component is responsible for translating the user's speech into text. (2) The natural language understanding component produces a semantic representation suitable for dialogues, usually using grammars and ontologies. (3) The task manager chooses the concepts to be expressed to the user. (4) The natural language generation component defines how to express these concepts in words. (5) The dialogue manager controls the structure of the dialogue. (6) The synthesizer is responsible for translating the agent's answer into speech. However, there is no consensus about the resources necessary for developing conversational agents or the difficulties involved (especially for resource-poor languages). This work focuses on the influence of the natural language components (dialogue understanding and management) and analyses, in particular, the use of parsing systems as part of developing conversational agents with more flexible language capabilities. It analyses what kinds of parsing resources contribute to conversational agents and discusses how to develop them targeting Portuguese, a resource-poor language. To do so, we analyse approaches to natural language understanding, identify parsing approaches that offer good performance, and, based on these, develop a prototype to evaluate the impact of using a parser in a conversational agent.
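    A skeleton of the six-component pipeline just described, with toy stand-ins for each stage (real systems use ASR/TTS engines, grammar- and ontology-based NLU, and a stateful dialogue policy; everything below is a hypothetical stub):

```python
# Toy six-component dialogue pipeline (all stubs are hypothetical).

def speech_recognition(audio):          # (1) speech -> text
    return audio                        # stub: assume text arrives directly

def nlu(text):                          # (2) text -> semantic frame
    intent = "greet" if "hello" in text.lower() else "unknown"
    return {"intent": intent, "text": text}

def dialogue_manager(frame, state):     # (5) track dialogue structure
    state.append(frame["intent"])
    return state

def task_manager(frame, state):         # (3) choose concepts to express
    return {"act": "greet_back"} if frame["intent"] == "greet" \
        else {"act": "clarify"}

def nlg(concepts):                      # (4) concepts -> words
    return {"greet_back": "Hello! How can I help?",
            "clarify": "Sorry, could you rephrase that?"}[concepts["act"]]

def synthesizer(text):                  # (6) text -> speech
    return text                         # stub: print instead of speaking

state = []
frame = nlu(speech_recognition("Hello there"))
state = dialogue_manager(frame, state)
print(synthesizer(nlg(task_manager(frame, state))))  # Hello! How can I help?
```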

    The involutions-as-principal-types / application-as-unification analogy

    In 2005, S. Abramsky introduced various universal models of computation based on Affine Combinatory Logic, consisting of partial involutions over a suitable formal language of moves, in order to discuss reversible computation in a game-theoretic setting. We investigate Abramsky's models from the point of view of the model theory of λ-calculus, focusing on the purely linear and affine fragments of Abramsky's combinatory algebras. Our approach stems from realizing a structural analogy, not previously pointed out in the literature, between the partial involution interpreting a combinator and the principal type of that term with respect to a simple-types discipline for λ-calculus. This analogy allows the somewhat awkward linear application of involutions arising from the Geometry of Interaction (GoI) to be explained as unification between principal types. Our approach immediately provides an answer to the open problem, raised by Abramsky, of characterising those finitely describable partial involutions which are denotations of combinators in the purely affine fragment. We also prove that the (purely) linear combinatory algebra of partial involutions is a (purely) linear λ-algebra, albeit not a combinatory model, while the (purely) affine combinatory algebra is not. In order to check the complex equations involved in the definition of an affine λ-algebra, we implement in Erlang the compilation of λ-terms as involutions, and their execution.
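    The application-as-unification idea can be seen already at the level of simple types: applying f to x unifies type(f) with type(x) → β for a fresh variable β, and the result type is β under the resulting substitution. A self-contained sketch (simple types and first-order unification only; the paper's actual setting is principal types of linear and affine combinators matched against partial involutions from GoI):

```python
# Application as unification over simple types ("var", name) / ("->", A, B).

import itertools

_counter = itertools.count()

def subst(s, t):
    if t[0] == "var":
        return subst(s, s[t[1]]) if t[1] in s else t
    return ("->", subst(s, t[1]), subst(s, t[2]))

def occurs(v, t, s):
    t = subst(s, t)
    return t[1] == v if t[0] == "var" else occurs(v, t[1], s) or occurs(v, t[2], s)

def unify(a, b, s):
    a, b = subst(s, a), subst(s, b)
    if a == b:
        return s
    if a[0] == "var":
        if occurs(a[1], b, s):
            raise TypeError("occurs check failed")
        return {**s, a[1]: b}
    if b[0] == "var":
        return unify(b, a, s)
    return unify(a[2], b[2], unify(a[1], b[1], s))

def rename(t, tag):  # rename variables apart before each application
    return ("var", t[1] + tag) if t[0] == "var" else \
        ("->", rename(t[1], tag), rename(t[2], tag))

def apply_types(f_ty, x_ty):
    n = next(_counter)
    f_ty, x_ty = rename(f_ty, f"_f{n}"), rename(x_ty, f"_x{n}")
    beta = ("var", f"_b{n}")
    return subst(unify(f_ty, ("->", x_ty, beta), {}), beta)

def arrow(*ts):  # right-nested: arrow(a, b, c) is a -> (b -> c)
    return ts[0] if len(ts) == 1 else ("->", ts[0], arrow(*ts[1:]))

a, b, c = ("var", "a"), ("var", "b"), ("var", "c")
I = arrow(a, a)                                   # I : a -> a
B = arrow(arrow(b, c), arrow(a, b), arrow(a, c))  # B : (b->c)->(a->b)->a->c
print(apply_types(B, I))  # (a -> b) -> a -> b, up to renaming of variables
```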