2,009 research outputs found

    Can Component/Service-Based Systems Be Proved Correct?

    Component-oriented and service-oriented approaches have gained strong enthusiasm in industry and academia, with particular interest in service-oriented approaches. A component is a software entity with given functionalities, made available by a provider and used to build other applications into which it is integrated. The service concept and its use in web-based application development have had a huge impact on reuse practices. Accordingly, a considerable share of software architectures is affected: these architectures are moving towards service-oriented architectures. Applications therefore (re)use services that are available elsewhere, and many applications interact, without knowing each other, through services made available via service servers and their published interfaces and functionalities. Industry proposes languages, technologies and standards through various consortia. More academic work has also been undertaken concerning the semantics and formalisation of component- and service-based systems. We consider both streams of work here in order to raise research concerns that will help in building quality software. Are there new challenging problems with respect to service-based software construction? Moreover, what are the links to, and the advances over, distributed systems?
    Comment: 16 pages

    An Introduction to Programming for Bioscientists: A Python-based Primer

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in the biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a 'variable', the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.
    Comment: 65 pages total, including 45 pages text, 3 figures, 4 tables, numerous exercises, and 19 pages of Supporting Information; currently in press at PLOS Computational Biology
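    The Hamming-distance computation mentioned above is small enough to sketch here. The following minimal Python function (without the primer's GUI layer; the function name is our own) counts the mismatched positions between two equal-length DNA sequences:

```python
def hamming_distance(seq1: str, seq2: str) -> int:
    """Count the positions at which two equal-length DNA sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("Hamming distance requires sequences of equal length")
    return sum(a != b for a, b in zip(seq1, seq2))

print(hamming_distance("GATTACA", "GACTATA"))  # prints 2
```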

    Optimal algorithms for selecting top-k combinations of attributes : theory and applications

    Traditional top-k algorithms, e.g., TA and NRA, have been successfully applied in many areas such as information retrieval, data mining and databases. They are designed to discover k objects, e.g., top-k restaurants, with the highest overall scores aggregated from different attributes, e.g., price and location. However, new emerging applications like query recommendation require providing the best combinations of attributes, instead of objects. A straightforward extension of the existing top-k algorithms is prohibitively expensive for answering top-k combinations, because it must enumerate all possible combinations, whose number is exponential in the number of attributes. In this article, we formalize a novel type of top-k query, called top-k,m, which aims to find the top-k combinations of attributes based on the overall scores of the top-m objects within each combination, where m is the number of objects forming a combination. We propose a family of efficient top-k,m algorithms with different data access methods, i.e., sorted accesses and random accesses, and different query certainties, i.e., exact and approximate query processing. Theoretically, we prove that our algorithms are instance optimal and analyze the bound on the depth of accesses. We further develop optimizations for efficient query evaluation that reduce the computational and memory costs and the number of accesses. We provide a case study of real applications of top-k,m queries for an online biomedical search engine. Finally, we perform comprehensive experiments to demonstrate the scalability and efficiency of top-k,m algorithms on multiple real-life datasets.
    Peer reviewed
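    To make the query semantics concrete, here is a deliberately naive Python baseline (our own illustration, not the article's algorithm): it enumerates every combination, aggregates each object's score across the chosen attribute values, and ranks combinations by the sum of their top-m object scores. This is exactly the exponential enumeration that the article's instance-optimal algorithms are designed to avoid.

```python
from itertools import product
import heapq

def topkm_naive(scores, k, m):
    """Naive top-k,m. scores maps attribute -> {value -> {object: score}}.
    Returns the k combinations (one value per attribute) whose top-m
    aggregated object scores sum highest. Exponential in #attributes."""
    attrs = list(scores)
    ranked = []
    for combo in product(*(scores[a] for a in attrs)):
        agg = {}  # aggregate each object's score over the chosen values
        for attr, value in zip(attrs, combo):
            for obj, s in scores[attr][value].items():
                agg[obj] = agg.get(obj, 0.0) + s
        combo_score = sum(heapq.nlargest(m, agg.values()))
        ranked.append((combo_score, dict(zip(attrs, combo))))
    return heapq.nlargest(k, ranked, key=lambda pair: pair[0])
```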

    Transforming OCL to PVS: Using Theorem Proving Support for Analysing Model Constraints

    The Unified Modelling Language (UML) is a de facto standard language for describing software systems. UML models are often supplemented with Object Constraint Language (OCL) constraints, to capture detailed properties of components and systems. Sophisticated tools exist for analysing UML models, e.g., to check that well-formedness rules have been satisfied. Tools are also becoming available to analyse and reason about OCL constraints. Previous work has analysed OCL constraints by translating them to formal languages and then analysing the translated constraints with tools such as theorem provers. This project contributes a transformation from OCL to the specification language of the Prototype Verification System (PVS). PVS can be used to analyse and reason about translated OCL constraints. A particular novelty of this project is that it carries out the transformation of OCL to PVS by using model transformation, as exemplified by the OMG's Model-Driven Architecture. The project implements and automates the model transformations from OCL to PVS using the Epsilon Transformation Language (ETL) and tests the results using the Epsilon Comparison Language (ECL).
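    The project's transformation is written in ETL and covers the full OCL expression language; as a purely illustrative analogue, the toy Python rewriter below (the function, the rewrite rules, and the Account invariant are all hypothetical) conveys the general shape of mapping an OCL invariant onto a PVS boolean definition:

```python
import re

# Toy rewrite rules from OCL surface syntax to PVS-like syntax.
RULES = [
    (r"self\.(\w+)", r"\1(self)"),  # attribute access becomes application
    (r"\bimplies\b", "IMPLIES"),    # OCL connectives to PVS keywords
    (r"\band\b", "AND"),
    (r"\bor\b", "OR"),
    (r"<>", "/="),                  # OCL inequality to PVS inequality
]

def ocl_invariant_to_pvs(context: str, name: str, body: str) -> str:
    """Render a simple OCL invariant as a PVS boolean definition."""
    for pattern, repl in RULES:
        body = re.sub(pattern, repl, body)
    return f"{name}(self: {context}): bool = {body}"

print(ocl_invariant_to_pvs("Account", "balance_nonneg", "self.balance >= 0"))
# -> balance_nonneg(self: Account): bool = balance(self) >= 0
```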

    Schema Independent Relational Learning

    Learning novel concepts and relations from relational databases is an important problem with many applications in database systems and machine learning. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same data set may be represented under different schemas for various reasons, such as efficiency, data quality, and usability. Unfortunately, the output of current relational learning algorithms tends to vary quite substantially over the choice of schema, in terms of both learning accuracy and efficiency. This variation complicates their off-the-shelf application. In this paper, we introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de)composition schema transformations. We study both sample-based learning algorithms, which learn from sets of labeled examples, and query-based algorithms, which learn by asking queries to an oracle. We prove that current relational learning algorithms are generally not schema independent. For query-based learning algorithms, we show that the (de)composition transformations influence their query complexity. We propose Castor, a sample-based relational learning algorithm that achieves schema independence by leveraging data dependencies. We support the theoretical results with an empirical study that demonstrates the schema dependence/independence of several algorithms on existing benchmark and real-world datasets under (de)compositions.
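    (De)composition transformations split a relation on a key, or join such fragments back together, without losing information; a schema-independent learner should therefore produce equivalent definitions over either representation. A minimal Python illustration (the relation and attribute names are hypothetical, not from the paper's benchmarks):

```python
# Composed schema: one relation holds id, title, and year.
movie = {
    ("m1", "Alien", 1979),
    ("m2", "Blade Runner", 1982),
}

# Decomposed schema: the same facts, split on the key 'id'.
movie_title = {("m1", "Alien"), ("m2", "Blade Runner")}
movie_year = {("m1", 1979), ("m2", 1982)}

def recompose(titles, years):
    """Joining the fragments on the key recovers the original relation."""
    year_of = dict(years)
    return {(mid, title, year_of[mid]) for mid, title in titles}

assert recompose(movie_title, movie_year) == movie
```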

    Modeling and Analysis of Probabilistic Real-time Systems through Integrating Event-B and Probabilistic Model Checking

    Event-B is a formal method used in the development of safety-critical systems. However, these systems may involve uncertainty and must also meet real-time requirements, which makes their modeling and analysis a challenging task. Existing work on extending Event-B with probability and time has not addressed both in a single framework. Moreover, it has focused mostly on extending the language itself rather than on integrating the extended Event-B with verification. In this paper, we represent both probability and time in the Event-B language, and we show how such a representation can be automatically translated into Probabilistic Timed Automata (PTA) described in the language of the probabilistic model checker PRISM. This translation allows us to analyze probabilistic, as well as time-bounded probabilistic, reachability properties of probabilistic real-time systems through the Probabilistic Timed CTL (PTCTL) logic.
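    The reachability queries at the heart of this analysis can be sketched without PRISM or timed automata. The Python snippet below (the states, probabilities, and chain are hypothetical) runs plain value iteration to compute the probability of eventually reaching a goal state in a small discrete-time Markov chain, the untimed core of what a probabilistic reachability query asks:

```python
def reach_probability(transitions, goal, iterations=1000):
    """Value iteration for P(eventually reach `goal`) in a Markov chain.
    transitions: state -> list of (successor, probability) pairs."""
    p = {s: 0.0 for s in transitions}
    p[goal] = 1.0  # the goal state is reached with certainty
    for _ in range(iterations):
        for s in transitions:
            if s == goal:
                continue
            p[s] = sum(prob * p.get(t, 0.0) for t, prob in transitions[s])
    return p

# A system that fails from 'init' with probability 0.1, otherwise retries;
# 'retry' succeeds with probability 0.9 or falls back to 'init'.
chain = {
    "init": [("error", 0.1), ("retry", 0.9)],
    "retry": [("done", 0.9), ("init", 0.1)],
    "error": [("error", 1.0)],
}
print(round(reach_probability(chain, "done")["init"], 4))  # ~0.8901
```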