
    Session Kotlin: A hybrid session type embedding in Kotlin

    Concurrency and distribution have become essential for building modern applications. However, developing and maintaining these apps is not an easy task. Communication errors are a common source of problems: unexpected messages cause runtime errors, and mutual dependencies lead to deadlocks. To address these issues, developers can define communication protocols that detail the structure and order of the transmitted messages, but maintaining protocol fidelity can be complex if carried out manually. Session types formalize this concept by materializing the communication protocol as a type that can be enforced by the language's type system. In this thesis we present the first embedding of session types in Kotlin: we propose a Domain-Specific Language (DSL) for multiparty session types that lets developers write safe concurrent applications, with built-in validation and code generation integrated into the language's framework.
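    To give a flavour of the idea, the sketch below shows how a multiparty protocol can be captured as a plain Kotlin value by a type-safe builder. It is a self-contained toy written for this summary; the builder names (globalProtocol, send, choice, branch) are illustrative assumptions, not the DSL actually proposed in the thesis.

        // Toy protocol DSL: Client sends a query, Server answers with either
        // an Int result ("Ok") or an error message ("Err").
        enum class Role { Client, Server }

        sealed interface Global
        data class Send(val payload: String, val from: Role, val to: Role) : Global
        data class Choice(val at: Role, val branches: Map<String, List<Global>>) : Global

        class ProtocolBuilder {
            val actions = mutableListOf<Global>()
            inline fun <reified T> send(from: Role, to: Role) {
                actions += Send(T::class.simpleName ?: "?", from, to)
            }
            fun choice(at: Role, build: ChoiceBuilder.() -> Unit) {
                actions += Choice(at, ChoiceBuilder().apply(build).branches)
            }
        }

        class ChoiceBuilder {
            val branches = mutableMapOf<String, List<Global>>()
            fun branch(label: String, build: ProtocolBuilder.() -> Unit) {
                branches[label] = ProtocolBuilder().apply(build).actions
            }
        }

        fun globalProtocol(name: String, build: ProtocolBuilder.() -> Unit): List<Global> =
            ProtocolBuilder().apply(build).actions

        val query = globalProtocol("Query") {
            send<String>(from = Role.Client, to = Role.Server)
            choice(at = Role.Server) {
                branch("Ok") { send<Int>(from = Role.Server, to = Role.Client) }
                branch("Err") { send<String>(from = Role.Server, to = Role.Client) }
            }
        }

    A protocol value of this kind is the sort of artifact that built-in validation and per-role code generation can then operate on.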

    Safe Stream-Based Programming with Refinement Types

    In stream-based programming, data sources are abstracted as a stream of values that can be manipulated via callback functions. Stream-based programming is exploding in popularity, as it provides a powerful and expressive paradigm for handling asynchronous data sources in interactive software. However, high-level stream abstractions can also make it difficult for developers to reason about control- and data-flow relationships in their programs. This is particularly impactful when asynchronous stream-based code interacts with thread-limited features such as UI frameworks that restrict UI access to a single thread, since the threading behavior of streaming constructs is often non-intuitive and insufficiently documented. In this paper, we present a type-based approach that can statically prove the thread-safety of UI accesses in stream-based software. Our key insight is that the fluent APIs of stream-processing frameworks enable the tracking of threads via type refinement, making it possible to reason automatically about what thread a piece of code runs on -- a difficult problem in general. We implement the system as an annotation-based Java typechecker for Android programs built upon the popular ReactiveX framework and evaluate its efficacy by annotating and analyzing 8 open-source apps, where we find 33 instances of unsafe UI access while incurring an annotation burden of only one annotation per 186 source lines of code. We also report on our experience applying the typechecker to two much larger apps from the Uber Technologies Inc. codebase, where it currently runs on every code change and blocks changes that introduce potential threading bugs.
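    To picture the property being verified: in RxJava's fluent API, scheduler-changing operators such as observeOn determine the thread on which downstream lambdas run, which is exactly the information a refinement of the pipeline's type can track. The sketch below is a hand-written Kotlin/RxJava 3 illustration of the safe pattern, not code or annotations from the paper itself (whose checker targets Java and ReactiveX-based Android apps).

        import io.reactivex.rxjava3.android.schedulers.AndroidSchedulers
        import io.reactivex.rxjava3.core.Observable
        import io.reactivex.rxjava3.schedulers.Schedulers

        // render is assumed to touch UI views, so it must run on the main thread.
        fun bindProfile(render: (String) -> Unit) {
            Observable.fromCallable { loadProfile() }          // background work
                .subscribeOn(Schedulers.io())                  // ...on an io() thread
                .map { profile -> profile.trim() }             // still background
                .observeOn(AndroidSchedulers.mainThread())     // hop to the UI thread
                .subscribe { text -> render(text) }            // UI access is safe here
        }

        fun loadProfile(): String = "  profile from network  " // stand-in for real I/O

    A typechecker that refines the pipeline's type at each observeOn can reject a variant of this code in which the observeOn call is missing, without ever executing it.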

    Automatic Refactoring for Renamed Clones in Test Code

    Unit testing plays an essential role in software development and maintenance, especially in Test-Driven Development. Conventional unit tests, which have no input parameters, often exercise similar scenarios with small variations to achieve acceptable coverage, which often results in duplicated code in test suites. Test code duplication hinders comprehension of test cases and maintenance of test suites. Test refactoring is a potential tool for developers to use to control technical debt arising due to test cloning. In this thesis, we present a novel tool, JTestParametrizer, for automatically refactoring method-scope renamed clones in test suites. We propose three levels of refactoring to parameterize type, data, and behaviour differences in clone pairs. Our technique works at the Abstract Syntax Tree level by extracting a parameterized template utility method and instantiating it with appropriate parameter values. We applied our technique to 5 open-source Java benchmark projects and conducted an empirical study on our results. Our technique examined 14,431 test methods in our benchmark projects and identified 415 renamed clone pairs as effective candidates for refactoring. On average, 65% of the effective candidates (268 clone pairs) in our test suites are refactorable using our technique. All of the refactored test methods are compilable, and 94% of them pass when executed as tests. We believe that our proposed refactorings generally improve code conciseness, reduce the amount of duplication, and make test suites easier to maintain and extend.
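    As a flavour of the transformation (a hand-written Kotlin/JUnit 5 analogue, not output of JTestParametrizer, which operates on Java ASTs), two renamed clones that differ only in their literals collapse into one parameterized template:

        import org.junit.jupiter.api.Assertions.assertEquals
        import org.junit.jupiter.params.ParameterizedTest
        import org.junit.jupiter.params.provider.CsvSource

        fun add(a: Int, b: Int) = a + b              // trivial unit under test

        class AddTest {
            // Before (sketch): two clones differing only in data values.
            //   @Test fun addsPositives() { assertEquals(5, add(2, 3)) }
            //   @Test fun addsNegatives() { assertEquals(-5, add(-2, -3)) }

            // After: the data differences become parameters of one template method.
            @ParameterizedTest
            @CsvSource("2, 3, 5", "-2, -3, -5")
            fun addsBothSigns(a: Int, b: Int, expected: Int) {
                assertEquals(expected, add(a, b))
            }
        }

    Type and behaviour differences need more than literal parameters (e.g. passing a lambda or a type token into the template), which is why the tool distinguishes three levels of refactoring.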

    EOLANG and φ-calculus

    Object-oriented programming (OOP) is one of the most popular paradigms used for building software systems. However, despite its industrial and academic popularity, OOP is still missing a formal apparatus similar to λ-calculus, which functional programming is based on. There were a number of attempts to formalize OOP, but none of them managed to cover all the features available in modern OO programming languages, such as C++ or Java. We have made yet another attempt and created φ-calculus. We also created EOLANG (also called EO), an experimental programming language based on φ-calculus.

    Test Quality Assurance for E2E Web Test Suites: Parallelization of Dependent Test Suites and Test Flakiness Prevention

    Web applications support a wide range of activities today, from e-commerce to health management, and ensuring their quality is a fundamental task. Nevertheless, testing these systems is hard because of their dynamic and asynchronous nature and their heterogeneity. Quality assurance of Web applications is usually performed through testing, carried out at different levels of abstraction. At the End-to-end (E2E) level, test scripts interact with the application through the web browser, as a human user would do. This kind of testing is usually time consuming, and its execution time can be reduced by running the test suite in parallel. However, the presence of dependencies in the test suite can make test parallelization difficult. Best practices prescribe that test scripts in a test suite should be independent (i.e., they should not assume that the system under test is already in an expected state), but this is not always done in practice: dependent tests are a serious problem that affects end-to-end web test suites. Moreover, test dependencies are a problem because they enforce an execution order for the test suite, preventing the use of techniques like test selection, test prioritization, and test parallelization. Another issue that affects E2E Web test suites is test flakiness: a test script is called flaky when it may non-deterministically pass or fail on the same version of the Application Under Test. Test flakiness is usually caused by multiple factors that can be very hard to determine: the most common causes are improper waiting for asynchronous operations, violated test-order dependencies, and concurrency problems (e.g. race conditions, deadlocks, atomicity violations). Test flakiness affects E2E test execution in general, but it can have a greater impact in the presence of dependencies, since 1) if a test script fails due to flakiness, other test scripts that depend on it will probably fail as well, and 2) most dependency-detection approaches and tools rely on multiple executions of test schedules in different orders, which requires execution results to be deterministic: if test scripts can pass or fail non-deterministically, those dependency-detection tools cannot work. This thesis proposes to improve quality assurance for E2E Web test suites in two directions: 1) enabling the parallel execution of dependent E2E Web test suites in an optimized, efficient way, and 2) preventing flakiness by automated refactoring of E2E Web test suites so that they adopt proper waiting strategies for page elements. For the first research direction we propose STILE (teST suIte paralLElizer), a tool-based approach that allows parallel execution of E2E Web test suites. Our approach generates a set of test schedules that respects two important constraints: 1) every schedule respects existing test dependencies, and 2) all test scripts in the test suite are executed at least once across the generated schedules. For the second research direction we propose SleepReplacer, a tool-based approach to automatically refactor E2E Web test suites in order to prevent flakiness. Both tool-based approaches have been fully implemented in functioning and publicly available tools, and empirically validated on different test suites.
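    The flakiness-prevention refactoring is easy to picture with Selenium (a hand-written Kotlin sketch of the pattern; SleepReplacer's actual transformations may differ): a fixed sleep, sometimes too short and sometimes wastefully long, is replaced by an explicit wait on the condition the test actually needs.

        import java.time.Duration
        import org.openqa.selenium.By
        import org.openqa.selenium.WebDriver
        import org.openqa.selenium.support.ui.ExpectedConditions
        import org.openqa.selenium.support.ui.WebDriverWait

        // Flaky pattern: guess how long the page needs, then hope for the best.
        fun clickCheckoutFlaky(driver: WebDriver) {
            Thread.sleep(3000)                       // race-prone fixed delay
            driver.findElement(By.id("checkout")).click()
        }

        // Refactored pattern: block exactly until the element is clickable,
        // with an explicit upper bound after which the test fails loudly.
        fun clickCheckout(driver: WebDriver) {
            WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(By.id("checkout")))
                .click()
        }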

    Ontology Services for Knowledge Organization Systems

    Ontologies and other knowledge organization systems, such as controlled vocabularies, can be used to enhance the findability of information. By describing the contents of documents using a shared, harmonized terminology, information systems can provide efficient search and browsing functionalities for the contents. Explicit descriptive metadata aims to solve some of the prevailing issues in full text search in many search engines, including the processing of synonyms and homonyms. The use of ontologies as domain models enables the machine-processability of contents, semantic reasoning, information integration, and other intelligent ways of processing the data. The utilization of knowledge organization systems in content indexing and information retrieval can be facilitated by providing automated tools for their efficient use. This thesis studies and presents novel methods and systems for publishing and using knowledge organization systems as ontology services. The research is conducted by designing and evaluating prototype systems that support the use of ontologies in real-life use cases. The research follows the principles of the design science and action research methodologies. The presented ONKI system provides user interface components and application programming interfaces that can be integrated into external applications to enable ontology-based workflows. The features of the system are based on analyzing the needs of the main user groups of ontologies. The common functionalities identified in ontology-based workflows include concept search, browsing, and selection. The thesis presents the Linked Open Ontology cloud approach for managing and publishing a set of interlinked ontologies in an ontology service. The system enables the users to use multiple ontologies as a single, interoperable, cross-domain representation instead of individual ontologies. For facilitating the simultaneous use of ontologies published in different ontology repositories, the Normalized Ontology Repository approach is presented. As a use case of managing and publishing a semantically rich knowledge organization system as an ontology, the thesis presents the Taxon Meta-Ontology model for biological nomenclatures and classifications. The model supports the representation of changes and differing opinions of taxonomic concepts. The ONKI system and the ontologies developed using the methods presented in this thesis have been provided as a living lab service http://onki.fi, which has been run since 2008. The service provides tools and support for the users of ontologies, including content indexers, information searchers, ontology developers, and application developers.
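    The integration model is straightforward to sketch: an external application calls the ontology service's concept-search API and offers the returned concepts as annotation or query suggestions. The Kotlin snippet below illustrates this pattern against a hypothetical JSON endpoint; the URL and parameters are invented for illustration and do not describe the actual onki.fi API.

        import java.net.URI
        import java.net.URLEncoder
        import java.net.http.HttpClient
        import java.net.http.HttpRequest
        import java.net.http.HttpResponse

        // Hypothetical concept-search call: returns JSON (label, concept URI)
        // hits that a UI could show as autocomplete suggestions for indexing.
        fun searchConcepts(query: String, vocabulary: String): String {
            val q = URLEncoder.encode(query, Charsets.UTF_8)
            val uri = URI.create(
                "https://example.org/ontology-service/search?vocab=$vocabulary&query=$q")
            val request = HttpRequest.newBuilder(uri).GET().build()
            return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body()                              // JSON parsing left to the caller
        }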

    Refined electrophysiological recording and processing of neural signals from the retina and ascending visual pathways

    The purpose of this thesis was the development of refined methods for recording and processing of neural signals of the retina and ascending visual pathways. The first chapter briefly describes the fundamentals of the human visual system and the basics of functional testing of the retina and the visual pathways. The second and third chapters are dedicated to the processing of visual electrophysiological data using the newly developed software ERG Explorer, and present a proposal for an open and standardized data format, ElVisML, for future-proof storage of visual electrophysiological data. The fourth chapter describes the development and application of two novel electrodes: first, a contact lens electrode for the recording of electrical potentials of the ciliary muscle during accommodation, and second, the marble electrode, which is made of a super-absorbent polymer and allows for a preparation-free recording of visual evoked potentials. Results obtained in studies using both electrodes are presented. The fifth and last chapter of the thesis presents the results of four studies within the field of visual electrophysiology. The first study examines the ophthalmological assessment of cannabis-induced perception disorder using electrophysiological methods. The second study presents a refined method for the objective assessment of visual acuity using visual evoked potentials and, to that end, introduces a refined stimulus paradigm and a novel method for the analysis of the sweep VEP. The third study presents the results of a newly developed stimulus design for full-field electrophysiology, which makes it possible to record previously non-recordable electroretinograms. The last study relates the spatial frequency of a visual stimulus to the amplitudes of visual evoked potentials, in comparison with the BOLD response obtained using functional near-infrared spectroscopy and functional magnetic resonance imaging.

    Projectional Editing of Software Product Lines – The PEoPL approach


    Code generation for RESTful APIs in HEADREST

    Master's thesis, Engenharia Informática (Engenharia de Software), Universidade de Lisboa, Faculdade de Ciências, 2018. Web services with APIs that adhere to the REST architectural style, known as RESTful web services, have become popular. These services follow a client-server style, with stateless interactions based on standard HTTP verbs. In an effort to formally specify the interaction between clients and providers of RESTful services, various interface definition languages (IDLs) have been proposed. However, for the most part, they limit themselves to the syntactic level of the interfaces and the description of the data structures and interaction points. The HEADREST language was developed as an IDL that addresses these limitations, supporting the description of RESTful APIs also at the semantic level. Through the use of types and assertions we can not only define the structure of the data transmitted but also relate output with input and the state of the server. One of the main advantages of having formal descriptions of RESTful APIs is the ability to generate a lot of boilerplate code for both clients and servers. This work addresses the problem of code generation for RESTful APIs described in HEADREST and investigates how the existing code generation techniques for the syntactical aspects of RESTful APIs can be extended to take into account also the behavioural properties that can be described in HEADREST. Given that HEADREST adopts many concepts from the Open API Specification (OAS), this work capitalised on the code generation tools available for OAS and encompassed the development of a prototypical implementation of a code generator for clients and servers from HEADREST specifications.
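    To make the generated boilerplate concrete: from a description of, say, a GET /users/{id} operation, a client generator emits a typed stub so application code never assembles requests or checks invariants by hand. The Kotlin sketch below shows the shape such a stub might take, with a spec assertion turned into runtime checks; it is an invented illustration, not actual output of the HEADREST generator.

        import java.net.URI
        import java.net.http.HttpClient
        import java.net.http.HttpRequest
        import java.net.http.HttpResponse

        data class User(val id: String, val name: String)   // hypothetical model type

        class UsersClient(
            private val baseUrl: String,
            private val http: HttpClient = HttpClient.newHttpClient()
        ) {
            // Stub for GET /users/{id}; the spec's assertion "returned id equals
            // the requested id" is approximated here by runtime checks.
            fun getUser(id: String): User {
                require(id.isNotBlank()) { "precondition: id must be non-empty" }
                val request = HttpRequest
                    .newBuilder(URI.create("$baseUrl/users/$id")).GET().build()
                val response = http.send(request, HttpResponse.BodyHandlers.ofString())
                check(response.statusCode() == 200) { "spec: success status is 200" }
                val user = parseUser(response.body())
                check(user.id == id) { "postcondition: response id matches request" }
                return user
            }

            // Naive field extraction keeps the sketch dependency-free; a real
            // generator would emit proper JSON (de)serialization code.
            private fun parseUser(json: String): User {
                fun field(name: String) =
                    Regex("\"$name\"\\s*:\\s*\"([^\"]*)\"")
                        .find(json)?.groupValues?.get(1) ?: ""
                return User(field("id"), field("name"))
            }
        }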