
    The bigwig Project

    We present the results of the <bigwig> project, which aims to design and implement a high-level domain-specific language for programming interactive Web services. A fundamental aspect of the development of the World Wide Web during the last decade is the gradual change from static to dynamic generation of Web pages. Generating Web pages dynamically in dialogue with the client has the advantage of providing up-to-date and tailor-made information. The development of systems for constructing such dynamic Web services has emerged as a whole new research area. The <bigwig> language is designed by analyzing its application domain and identifying fundamental aspects of Web services, inspired by problems and solutions in existing Web service development languages. The core of the design consists of a session-centered service model together with a flexible template-based mechanism for dynamic Web page construction. Using specialized program analyses, certain Web-specific properties are verified at compile-time, for instance that only valid HTML 4.01 is ever shown to the clients. In addition, the design provides high-level solutions to form field validation, caching of dynamic pages, and temporal-logic based concurrency control, and it proposes syntax macros for making highly domain-specific languages. The language is implemented via widely available Web technologies, such as Apache on the server side and JavaScript and Java Applets on the client side. We conclude with experience and evaluation of the project.
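    The template-based page construction summarized above can be sketched as a minimal Python model: templates contain named gaps that are "plugged" with strings or other templates. The class, method names, and gap notation here are illustrative only, not the actual <bigwig> syntax or API.

    ```python
    import re

    class Template:
        """Toy model of a page template with named gaps, written <[name]>."""

        def __init__(self, text):
            self.text = text

        def plug(self, gap, value):
            # Plugging another template splices its body into the gap,
            # so pages can be composed from smaller fragments.
            body = value.text if isinstance(value, Template) else str(value)
            return Template(self.text.replace(f"<[{gap}]>", body))

        def render(self):
            # A rendered page must have no gaps left, mirroring the idea
            # that only complete documents are shown to the client.
            if re.search(r"<\[\w+\]>", self.text):
                raise ValueError("unplugged gap remains")
            return self.text

    page = Template("<html><body><[contents]></body></html>")
    greeting = Template("<h1>Hello, <[who]>!</h1>")
    result = page.plug("contents", greeting).plug("who", "client").render()
    # result == "<html><body><h1>Hello, client!</h1></body></html>"
    ```

    The compile-time analyses mentioned in the abstract would, in effect, prove for all executions that such rendered documents validate, rather than checking at run time as this sketch does.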

    The bigwig Project

    We present the results of the <bigwig> project, which aims to design and implement a high-level domain-specific language for programming interactive Web services. The World Wide Web has undergone dramatic development since its invention ten years ago. A fundamental aspect is the change from static to dynamic generation of Web pages. Generating Web pages dynamically in dialogue with the client has the advantage of providing up-to-date and tailor-made information. The development of systems for constructing such dynamic Web services has emerged as a whole new research area. The language is designed by analyzing its application domain and identifying fundamental aspects of Web services. Each aspect is handled by a nearly independent sublanguage, and the entire collection is integrated into a common core language. The <bigwig> compiler uses the available Web technologies as target languages, making <bigwig> available on almost any combination of browser and server, without relying on plug-ins or server modules.

    High-throughput molecular assays for inclusion in personalised oncology trials – State-of-the-art and beyond

    In the last decades, the development of high-throughput molecular assays has revolutionised cancer diagnostics, paving the way for the concept of personalised cancer medicine. This progress has been driven by the introduction of such technologies through biomarker-driven oncology trials. In this review, strengths and limitations of various state-of-the-art sequencing technologies, including gene panel sequencing (DNA and RNA), whole-exome/whole-genome sequencing and whole-transcriptome sequencing, are explored, focusing on their ability to identify clinically relevant biomarkers with diagnostic, prognostic and/or predictive impact. This includes the need to assess complex biomarkers, for example microsatellite instability, tumour mutation burden and homologous recombination deficiency, to identify patients suitable for specific therapies, including immunotherapy. Furthermore, the crucial role of biomarker analysis and multidisciplinary molecular tumour boards in selecting patients for trial inclusion is discussed in relation to various trial concepts, including drug repurposing. Recognising that today's exploratory techniques will evolve into tomorrow's routine diagnostics and clinical study inclusion assays, the importance of emerging technologies for multimodal diagnostics, such as proteomics and in vivo drug sensitivity testing, is also discussed. In addition, key regulatory aspects and the importance of patient engagement in all phases of a clinical trial are described. Finally, we propose a set of recommendations for consideration when planning a new precision cancer medicine trial.


    Analyzing Ambiguity of Context-Free Grammars

    It has been known since 1962 that the ambiguity problem for context-free grammars is undecidable. Ambiguity in context-free grammars is a recurring problem in language design and parser generation, as well as in applications where grammars are used as models of real-world physical structures. We observe that there is a simple linguistic characterization of the grammar ambiguity problem, and we show how to exploit this by presenting an ambiguity analysis framework based on conservative language approximations. As a concrete example, we propose a technique based on local regular approximations and grammar unfoldings. We evaluate the analysis using grammars that occur in RNA analysis in bioinformatics, and we demonstrate that it is sufficiently precise and efficient to be practically useful.
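    For illustration, ambiguity can be witnessed on a toy grammar by counting parse trees of a single string (a hand check, not the paper's conservative analysis): the grammar E -> E "+" E | "a" derives "a+a+a" with two distinct trees, so it is ambiguous.

    ```python
    from functools import lru_cache

    TOKENS = tuple("a+a+a")  # the string whose parses we count

    @lru_cache(maxsize=None)
    def trees(i, j):
        """Number of parse trees for E over TOKENS[i:j]."""
        if j - i == 1:
            return 1 if TOKENS[i] == "a" else 0
        total = 0
        # Try every "+" that could be the top-level operator in E -> E + E.
        for k in range(i + 1, j - 1):
            if TOKENS[k] == "+":
                total += trees(i, k) * trees(k + 1, j)
        return total

    count = trees(0, len(TOKENS))
    # count == 2: the trees (a+a)+a and a+(a+a), so the grammar is ambiguous.
    ```

    A count greater than one for any string proves ambiguity, but no finite amount of such testing proves unambiguity, which is why a conservative static approximation like the paper's is needed.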

    Dual Syntax for XML Languages

    XML is successful as a machine-processable data interchange format, but it is often too verbose for human use. For this reason, many XML languages permit an alternative, more legible non-XML syntax. XSLT stylesheets are often used to convert from the XML syntax to the alternative syntax; however, such transformations are not reversible since no general tool exists to automatically parse the alternative syntax back into XML. We present XSugar, which makes it possible to manage dual syntax for XML languages. An XSugar specification is built around a context-free grammar that unifies the two syntaxes of a language. Given such a specification, the XSugar tool can translate from alternative syntax to XML and vice versa. Moreover, the tool statically checks that the transformations are reversible and that all XML documents generated from the alternative syntax are valid according to a given XML schema.
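    The dual-syntax idea can be illustrated with a hand-written, reversible pair of translations. This Python sketch is only an analogy: XSugar derives both directions from a single grammar, whereas here the two functions are written by hand, and the line-based record format is made up for the example.

    ```python
    import xml.etree.ElementTree as ET

    def to_xml(text):
        """Translate 'name;email' lines into an XML student list."""
        root = ET.Element("students")
        for line in text.strip().splitlines():
            name, email = line.split(";")
            s = ET.SubElement(root, "student")
            ET.SubElement(s, "name").text = name
            ET.SubElement(s, "email").text = email
        return ET.tostring(root, encoding="unicode")

    def from_xml(xml_text):
        """Translate the XML student list back to 'name;email' lines."""
        root = ET.fromstring(xml_text)
        lines = [f"{s.findtext('name')};{s.findtext('email')}"
                 for s in root.findall("student")]
        return "\n".join(lines)

    alt = "Alice;alice@example.org\nBob;bob@example.org"
    assert from_xml(to_xml(alt)) == alt  # the round trip is the identity
    ```

    XSugar's contribution is checking this round-trip property statically, for all documents the grammar generates, rather than testing it on particular inputs as above.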

    Static Validation of Dynamically Generated HTML

    We describe a static analysis of <bigwig> programs that efficiently decides if all dynamically computed XHTML documents presented to the client will validate according to the official DTD. We employ two data-flow analyses to construct a graph summarizing the possible documents. This graph is subsequently analyzed to determine validity of those documents. By evaluating the technique on a number of realistic benchmarks, we demonstrate that it is sufficiently fast and precise to be practically useful.
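    As a toy illustration of the property being checked (far simpler than the paper's summary-graph analysis, which covers the possibly infinite set of producible documents without enumeration), one can enumerate every document a tiny template program can produce and verify that each nests its tags properly. The template and choices below are invented for the example.

    ```python
    import re

    # A page template with one gap and the finite set of fragments
    # the (hypothetical) program might plug into it.
    page = "<html><body>{X}</body></html>"
    choices_for_X = ["<p>welcome</p>", "<ul><li>item</li></ul>"]

    def well_nested(doc):
        """Check that open/close tags match, using a stack."""
        stack = []
        for closing, tag in re.findall(r"<(/?)(\w+)>", doc):
            if closing == "":
                stack.append(tag)
            elif not stack or stack.pop() != tag:
                return False
        return not stack  # no unclosed tags remain

    all_valid = all(well_nested(page.replace("{X}", c)) for c in choices_for_X)
    # all_valid is True: every producible document nests correctly.
    ```

    Real DTD validation also constrains which elements may appear where; the paper's analysis checks those content-model rules against the summary graph instead of against concrete documents.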