
    Static techniques for reducing memory usage in the C implementation of Whiley programs

    Languages that use call-by-value semantics, such as Whiley, can make program verification easier, but efficient implementation becomes harder due to the overhead of copying and garbage collection. This paper describes how a mixture of static analysis and runtime monitoring can be used to eliminate unnecessary copying and to deallocate memory without garbage collection. We show that this allows Whiley programs to be translated into efficient C implementations.
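    As a hedged illustration of the general idea only (not the paper's actual analysis or the Whiley compiler's implementation), the TypeScript sketch below tracks at run time whether a value-semantics array is shared and copies only when it is; when a static analysis can prove a value unshared, the update can safely happen in place. All names here are invented for the example.

```typescript
// Hypothetical copy-on-write sketch with a runtime "shared" flag.
// NOT the Whiley compiler's scheme; it only illustrates avoiding copies
// when a value-semantics array is known to be unshared.

class ValueArray {
  private shared = false;               // runtime monitor: is this buffer aliased?
  constructor(private data: number[]) {}

  // Called when the analysis cannot prove this is the last use of the value.
  alias(): ValueArray {
    this.shared = true;                 // both handles now see a shared buffer
    return this;
  }

  // Update with value semantics: copy only if the buffer might be shared.
  set(i: number, v: number): ValueArray {
    if (this.shared) {
      const copy = new ValueArray([...this.data]);  // defensive copy
      copy.data[i] = v;
      return copy;
    }
    this.data[i] = v;                   // proven unique: update in place, no copy
    return this;
  }

  get(i: number): number { return this.data[i]; }
}

// Usage: the aliased handle forces a copy; an unshared value would not.
const a = new ValueArray([1, 2, 3]);
const b = a.alias();                    // a and b conceptually denote the same value
const a2 = a.set(0, 99);                // copies, because a is shared with b
console.log(b.get(0), a2.get(0));       // 1 99
```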

    Types to the rescue: verification of REST APIs Consumer Code

    Master's thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2019. Software architectures are fundamental to the development of reliable, scalable and easily maintainable software. With the creation and growth of the internet came the need for software standards that allow information to be exchanged in this new environment. Among the standards that emerged, the SOAP protocol and the REST architecture stand out as the most widely used. Over the last decades, and due to the enormous growth of the World Wide Web, the REST architecture has established itself as the most important and the most used by the community. REST (Representational State Transfer) takes advantage of the features of the HTTP protocol to describe the messages exchanged between clients and servers. Data in the REST architecture is represented by resources, which are identified by a unique identifier (e.g. a URI) and may have several representations (in several formats), which are the concrete data of a resource. Interaction with resources is done using the HTTP methods: get to obtain a resource, post to add a new resource, put to update a resource and delete to remove a resource, among others, these being the main ones for CRUD applications. RESTful applications, that is, applications that provide their services through the REST architecture, must specify their services clearly so that their clients can use them without errors. To that end there are several REST API specification languages, such as the Open API Specification or API Blueprint, in which the operations provided by a service can be described formally, including the format of each operation's requests and the corresponding responses. However, these languages are limited in the formal conditions that can be placed on request parameters and in the impact those parameters have on the format and content of the response. For this reason a new specification language for REST applications, HeadREST, was introduced, adding the expressiveness needed to cover the gaps of the other languages. This expressiveness comes from the use of refinement types, which restrict the values of a given type, and from an operation that checks whether a given expression belongs to a given type. In HeadREST, each operation is specified using one or more assertions. Each assertion consists of an HTTP method, the operation's URI template, a pre-condition that defines the conditions under which the operation is accepted, and a post-condition that establishes the results of the operation if the pre-condition is fulfilled. These conditions make it possible to express the data sent in requests and received in responses, as well as the state of the set of resources before and after the REST request. Because refinement types are used, the subtyping relation cannot be resolved syntactically when validating a HeadREST specification. A semantic approach is therefore needed: the subtyping relation is translated into first-order logic formulas, and an SMT solver is then used to solve the formulas and, consequently, decide the subtyping relation. On the other hand, it is also important to guarantee that calls to REST APIs comply with their specifications.
    Common programming languages cannot guarantee that calls to a REST service conform to the service's specification, namely whether the URL of the call is valid and whether the request and response are well formed with respect to the values exchanged. A client therefore only finds out at run time whether its calls are well formed. There are few solutions for static analysis of this kind of call (RESType is a rare example), and they tend to be limited and to depend on a single specification language. Moreover, REST service clients tend to be developed mostly in JavaScript, which has weak static analysis, further aggravating the problem. As a first step towards solving this problem we developed the SafeScript language, a subset of JavaScript equipped with a strong type system. The type system is very expressive thanks to the addition of refinement types and of an operator that checks whether an expression belongs to a type. SafeScript features flow typing, that is, the type of an expression depends on its location in the control flow of the program. As in HeadREST, type validation cannot be performed by a simple syntactic analysis. In this case, however, the language is imperative and has flow typing, so the same direct translation to an SMT solver is not trivial. Type validation is therefore done by translating SafeScript code into the Boogie intermediate language, where the necessary validations are expressed as assertions; Boogie internally uses the Z3 SMT solver to discharge the assertions semantically. Thanks to this semantic validation, the SafeScript compiler can statically detect several common runtime errors, such as division by zero or out-of-bounds array accesses, that cannot be detected by similar languages such as TypeScript. SafeScript compiles to JavaScript so that it can be used together with it. Thanks to its expressive type system, the SafeScript program validator is also a static verifier: it can prove that a program satisfies a given specification, described using refinement types. This work highlighted the proving capability of the SafeScript validator, in particular by solving some of the challenges proposed by the Verification Benchmarks Challenge. On top of SafeScript we developed the SafeRESTScript extension, which adds REST requests to the SafeScript syntax and validates them statically against a HeadREST specification. For each REST call two main validations are performed. First, it is checked whether the URL is a valid address of the service for the request's HTTP method, that is, whether the specification contains some triple with the request's method and URL pair. Then, using the translation of the imported HeadREST specification into Boogie, it is checked whether the REST calls satisfy the specification's triples, namely that whenever the pre-conditions hold the post-conditions must also hold. For example, if a post-condition, whose corresponding pre-condition is true for a given call, asserts that the response body contains an object with an id field, then an access to this field in the response body is validated.
    As an illustrative example of the language's capabilities, this work developed a SafeRESTScript client for the REST API of the well-known GitHub repository service. Both languages have a compiler and an editor available as a plug-in for the Eclipse IDE, in addition to a command-line version. Both languages still have several limitations, so much work lies ahead. SafeScript and SafeRESTScript do not, however, aspire to be production languages; they aim to contribute to improving static program analysis and to show that it is possible to support the reliable development of client code for REST services.
    REST is the architectural style most used on the web to exchange data. RESTful applications must be well documented so that clients can use their services without doubts or errors. There are several specification languages for describing REST APIs, e.g. the Open API Specification, but they lack the expressiveness to describe the exchanged data. The HeadREST specification language was introduced to address this gap: it contains an expressive type system that allows the request and response formats of a service endpoint to be described rigorously. On the other hand, it is also important to ensure that REST calls in client code meet the service specification. This challenge is all the more important given that most REST clients are written in JavaScript, a weakly typed language. To address this problem we first developed SafeScript, a subset of JavaScript equipped with a strong type system. SafeScript has an expressive type system thanks to refinement types and to an operator that checks whether an expression belongs to a type. A semantic subtyping analysis is necessary; type validation is done by translating the code into the Boogie intermediate language, which uses the Z3 SMT solver for the semantic evaluation. SafeScript compiles directly to JavaScript. SafeRESTScript is an extension of SafeScript that adds REST calls, making it a client-side language for consuming REST services. It uses HeadREST specifications to verify REST calls: whether the URL of the call is a valid endpoint and whether the data exchanged match the pre- and post-conditions declared in the specification. With these new languages we do not intend to deliver production languages, but to show that it is possible to contribute better verification and correctness to an area where software reliability is weak.
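    To make the kind of obligation concrete, here is a hedged TypeScript sketch of the underlying idea only: the endpoint, the field names (owner, repo, id, full_name) and the wrapper function are illustrative assumptions, not HeadREST or SafeRESTScript syntax, and SafeRESTScript discharges these checks statically via Boogie/Z3 rather than at run time as done here.

```typescript
// Hedged approximation: an endpoint is specified by a pre-condition on the
// request and a post-condition relating request and response, and every call
// is checked against that triple.

interface EndpointSpec<Req, Res> {
  method: "GET" | "POST" | "PUT" | "DELETE";
  urlTemplate: string;                       // e.g. "/repos/{owner}/{repo}"
  pre: (req: Req) => boolean;                // when the call is legal
  post: (req: Req, res: Res) => boolean;     // what the service promises
}

// Hypothetical spec fragment: fetching a repository must return an object
// whose id is a positive integer when the owner/repo names are non-empty.
interface RepoRequest { owner: string; repo: string; }
interface RepoResponse { id: number; full_name: string; }

const getRepoSpec: EndpointSpec<RepoRequest, RepoResponse> = {
  method: "GET",
  urlTemplate: "/repos/{owner}/{repo}",
  pre: (req) => req.owner.length > 0 && req.repo.length > 0,
  post: (req, res) =>
    Number.isInteger(res.id) && res.id > 0 &&
    res.full_name === `${req.owner}/${req.repo}`,
};

// Runtime stand-in for what the static verifier proves once and for all.
async function checkedCall(
  spec: EndpointSpec<RepoRequest, RepoResponse>,
  req: RepoRequest,
  doCall: (req: RepoRequest) => Promise<RepoResponse>
): Promise<RepoResponse> {
  if (!spec.pre(req)) throw new Error("pre-condition violated: call rejected");
  const res = await doCall(req);
  if (!spec.post(req, res)) throw new Error("post-condition violated by response");
  return res;                                // safe to access res.id afterwards
}
```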

    Efficient system auditing for real-time systems

    Auditing is a powerful tool that provides machine operators with mechanisms to observe, and glean insights from, generic computing systems. The information obtained by auditing systems can be used to detect and explain suspicious activity, from fault/error diagnosis to intrusion detection and forensics after security incidents. While such mechanisms would be beneficial for Real-Time Systems (RTS), existing audit frameworks are rarely designed for this domain. If audit mechanisms are not carefully integrated into real-time operating systems, they can negatively impact the temporal constraints of RTS. In this paper, we demonstrate how to apply commodity audit frameworks to real-time systems. We design novel kernel-based reduction techniques that leverage the periodic, repetitive nature of real-time (RT) applications to aggressively reduce the costs/overheads of system-level auditing, viz. Linux Audit (a popular open source audit framework). This is coupled with a rigorous analysis to understand the conflicts between the temporal requirements of RT applications and the audit subsystem. Our approach, Ellipsis, generates succinct descriptions of RT application behaviour and retains a lossless record of process activity, enabling analysis/detection of unexpected activity while meeting temporal constraints. Our evaluation of Ellipsis, using ArduPilot (an open-source autopilot application suite) and synthetically generated tasksets, demonstrates up to a 93% reduction in audit event generation.
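    The following TypeScript sketch is only a user-space illustration of the reduction idea, not Ellipsis's kernel implementation: the expected event sequence of a periodic task (the template, event strings and function names are assumptions) is collapsed into a repeat count, while any deviating event is kept verbatim so the record stays lossless.

```typescript
// Hypothetical audit-log reduction: replace each identical repetition of a
// periodic job's template with a summary record, log deviations in full.

type Event = string;                       // e.g. "read(fd=3)", "write(fd=4)"

function reduceAuditLog(template: Event[], log: Event[]): Event[] {
  const out: Event[] = [];
  let i = 0;
  let repeats = 0;

  const matchesTemplate = (start: number) =>
    template.every((e, k) => log[start + k] === e);

  while (i < log.length) {
    if (matchesTemplate(i)) {
      repeats += 1;                        // one full, expected job instance
      i += template.length;
    } else {
      if (repeats > 0) {
        out.push(`TEMPLATE x${repeats}`);  // lossless summary of repetitions
        repeats = 0;
      }
      out.push(log[i]);                    // unexpected event kept verbatim
      i += 1;
    }
  }
  if (repeats > 0) out.push(`TEMPLATE x${repeats}`);
  return out;
}

// Example: three clean periods, then one anomalous open() call.
const template = ["read(fd=3)", "compute", "write(fd=4)"];
const log = [...template, ...template, ...template, "open(/etc/passwd)"];
console.log(reduceAuditLog(template, log));
// [ 'TEMPLATE x3', 'open(/etc/passwd)' ]
```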

    An assessment of health hazards associated with the use of water mist systems as a cooling intervention in Australia

    Water mist systems (WMS) installed and used for cooling ambient temperatures in public places fall within the category of premise plumbing. Premise plumbing refers to the water distribution networks that lie downstream of the water meter and within buildings. The colonization of premise plumbing by opportunistic premise plumbing pathogens (OPPPs) such as Legionella pneumophila, Pseudomonas aeruginosa, Mycobacterium avium, Acanthamoeba and Naegleria fowleri is emerging as a challenge for public health and water quality management. In contrast to other premise plumbing features, such as showers and domestic taps, that have been implicated in various waterborne infections, the health risks associated with WMS are not well understood. The primary aim of this thesis research was to advance understanding of the health risks associated with OPPPs in WMS used to cool ambient air temperatures for thermal comfort. A literature review was the foundation (1st study) of this research and aimed to characterise the state of knowledge about the health risks of OPPPs in WMS. The 2nd study, a questionnaire survey of 10 WMS owners and 22 Environmental Health Officers (EHOs), was conducted as formative research to understand and describe the characteristic features of WMS, as well as the knowledge, perspectives and practices of the operators and regulatory authorities that manage these systems within the context of public health legislation. Additionally, this formative research informed the research methodology for the 3rd study, a microbial investigation of 10 WMS, from which 30 each of bioaerosol (air), biofilm (surface) and water samples were collected, giving a total of ninety samples (N=90). Microbial samples were analysed by both culture-based (growth media) and culture-independent (polymerase chain reaction, PCR) methods to quantify and identify the presence of 5 representative OPPPs: L. pneumophila, P. aeruginosa, M. avium and free-living amoebae (FLA) including Acanthamoeba and N. fowleri. Data on the water profile parameters of water temperature, water pH and concentrations of free residual chlorine, total dissolved solids (TDS) and total organic carbon (TOC) were statistically analysed to determine their impact on the colonisation of OPPPs in WMS. This research identified a critical knowledge gap regarding the health risk of OPPPs in WMS (Study 1), as well as low levels of awareness amongst WMS managers and EHOs regarding the health risks associated with WMS, including non-existent or ad hoc maintenance regimes and a lack of training and education about the systems (Study 2). Furthermore, Study 3 demonstrated colonisation of WMS by public health pathogens of concern, including P. aeruginosa (49%), L. pneumophila serogroup (Sg) 2-14 (18%), L. pneumophila Sg 1 (6%) and Acanthamoeba (< 3%). On the positive side, neither M. avium nor N. fowleri were detected in the WMS investigated. Free residual chlorine was negatively correlated with all OPPPs except Acanthamoeba, indicating that this may be a critical variable in the safe operation of WMS. L. pneumophila Sg 2-14 and Sg 1 were strongly correlated with both TDS and TOC concentrations, indicating that these indicators could be used as a warning for at-risk systems, as could elevated water temperature, which was positively correlated with P. aeruginosa. This research indicates that WMS present a potential health risk due to colonisation by L. pneumophila Sg 2-14, L. pneumophila Sg 1, P. aeruginosa and Acanthamoeba, and should be regulated under public health legislation for microbial contamination of air handling and water systems. Consideration should be given to reviewing existing public health legislation to capture WMS used as a cooling intervention, with consideration of other emerging OPPPs besides Legionella spp., as well as to developing codes/guidelines for the auditing and operation of WMS and to upskilling and training operators and EHOs on the associated health risks. This research has practical applications in public health, as well as for commercial WMS cleaning and maintenance businesses and other industries that use WMS for air cooling and dust suppression.

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security and software evolution.

    Constructive tool design for formal languages : from semantics to executing models

    Embedded, distributed, real-time, electronic systems are becoming more and more dominant in our lives. Hidden in cars, televisions, mp3-players, mobile phones and other appliances, these hardware/software systems influence our daily activities. Their design can be a huge effort and has to be carried out by engineers in a limited amount of time. Computer-aided modelling and design automation shorten the design cycle of these systems, enabling companies to deliver their products sooner than their competitors. The design process is divided into different levels of abstraction, starting with a vague product idea (abstract) and ending up with a concrete description ready for implementation. Recently, research has started to focus on the system level, a promising new level at which product design could start. This dissertation develops a constructive approach to building tools for system-level design/description/modelling/specification languages, and shows the applicability of this method to the system-level language POOSL (Parallel Object-Oriented Specification Language). The formal semantics of this language is redefined and partly redeveloped, adding probabilistic features, real-time, inheritance, concurrency within processes, dynamic ports and atomic (indivisible) expressions, making the language suitable for performance analysis/modelling. The semantics is two-layered, using a probabilistic denotational semantics to state the meaning of POOSL's data layer, and a probabilistic structural operational semantics for the process layer and architecture layer. The constructive approach has yielded the system-level simulation tool rotalumis, capable of executing large industrial designs, as demonstrated by two successful case studies: an ATM-packet switch (in conjunction with IBM Research at Zürich) and a packet routing switch for the Internet (in association with Alcatel/Bell at Antwerp). The more generally applicable optimisations of the execution engine (rotalumis) and the decisions taken in its design are discussed in full detail. Prototyping, where the system-level model functions as a part of the prototype implementation of the designed product, is supported by rotalumis-rt, a real-time variant of the execution engine. The viability of prototyping is shown by a case study of a learning infrared remote control, partially realised in hardware and completed with a system-level model.
    Keywords: formal languages / formal specification / modelling languages / system-level design / embedded systems / real-time systems / performance analysis / discrete event simulation / probabilistic process algebra / design automation / prototyping / simulation tool
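    As a hedged, generic illustration of the discrete-event execution that such an engine performs (not rotalumis's actual design; the class and function names are invented), a minimal simulation kernel keeps a time-ordered queue of scheduled actions and repeatedly executes the earliest one:

```typescript
// Minimal, generic discrete-event simulation kernel, for illustration only.
// A real engine (probabilistic choice, process/data layers, real-time variant)
// is far more involved.

interface ScheduledEvent {
  time: number;                 // simulated time at which the action fires
  action: () => void;
}

class Simulator {
  private queue: ScheduledEvent[] = [];
  now = 0;

  schedule(delay: number, action: () => void): void {
    this.queue.push({ time: this.now + delay, action });
  }

  run(until: number): void {
    while (this.queue.length > 0) {
      // pick the earliest pending event (a real engine would use a priority queue)
      this.queue.sort((a, b) => a.time - b.time);
      const ev = this.queue.shift()!;
      if (ev.time > until) break;
      this.now = ev.time;       // advance simulated time, then execute
      ev.action();
    }
  }
}

// Example: a "packet source" process emitting every 2 time units.
const sim = new Simulator();
function emit(n: number): void {
  console.log(`t=${sim.now}: packet ${n} sent`);
  sim.schedule(2, () => emit(n + 1));
}
sim.schedule(0, () => emit(0));
sim.run(6);   // emits packets at t = 0, 2, 4, 6
```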

    Sustainability in design: now! Challenges and opportunities for design research, education and practice in the XXI century

    Copyright © 2010 Greenleaf Publications. LeNS project funded by the Asia Link Programme, EuropeAid, European Commission.

    A framework for supporting knowledge representation – an ontological based approach

    Dissertation submitted for the degree of Master in Electrical and Computer Engineering. The World Wide Web has had a tremendous impact on society and business in just a few years by making information instantly available. During this transition from physical to electronic means of information transport, the content and encoding of information has remained natural language and is only identified by its URL. Today, this is perhaps the most significant obstacle to streamlining business processes via the web. In order for processes to execute without human intervention, knowledge sources, such as documents, must become more machine-understandable and must contain other information besides their main contents and URLs. The Semantic Web is a vision of a future web of machine-understandable data. On a machine-understandable web, it will be possible for programs to easily determine what knowledge sources are about. This work introduces a conceptual framework, and its implementation, to support the classification and discovery of knowledge sources, guided by the above vision, where such sources' information is structured and represented through a mathematical vector that semantically pinpoints the relevance of those knowledge sources within the domain of interest of each user. The presented work also addresses the enrichment of such knowledge representations, using the statistical relevance of keywords based on the classical vector space model and extending it with ontological support, by using concepts and semantic relations contained in a domain-specific ontology to enrich knowledge sources' semantic vectors. Semantic vectors are compared against each other in order to obtain the similarity between them and to better support end users with knowledge source retrieval capabilities.
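    A hedged sketch of the general approach, not the dissertation's exact model: keyword-weight vectors are enriched with ontologically related concepts (the tiny ontology, weights and names below are invented for illustration) and then compared by cosine similarity.

```typescript
// Hypothetical ontology-enriched vector space model in miniature.

type SemanticVector = Map<string, number>;

// Invented domain ontology fragment: concept -> related concepts.
const ontology: Record<string, string[]> = {
  "ontology": ["semantic web", "knowledge representation"],
  "vector": ["vector space model"],
};

// Propagate part of each keyword's weight to its related concepts.
function enrich(vec: SemanticVector, relatedWeight = 0.5): SemanticVector {
  const out = new Map(vec);
  for (const [term, w] of vec) {
    for (const related of ontology[term] ?? []) {
      out.set(related, (out.get(related) ?? 0) + relatedWeight * w);
    }
  }
  return out;
}

// Classical cosine similarity between two sparse vectors.
function cosine(a: SemanticVector, b: SemanticVector): number {
  let dot = 0, na = 0, nb = 0;
  for (const [t, w] of a) { dot += w * (b.get(t) ?? 0); na += w * w; }
  for (const [, w] of b) { nb += w * w; }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// A source mentioning "ontology" now also scores against a query about the
// "semantic web", even though that exact keyword never occurs in it.
const doc = enrich(new Map([["ontology", 3], ["vector", 1]]));
const query = new Map([["semantic web", 1]]);
console.log(cosine(doc, query).toFixed(3));   // non-zero similarity
```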

    Necessity Specifications for Robustness

    Robust modules guarantee to do only what they are supposed to do - even in the presence of untrusted, malicious clients, and considering not just the direct behaviour of individual methods, but also the emergent behaviour arising from calls to more than one method. Necessity is a language for specifying robustness, based on novel necessity operators that capture temporal implication, and a proof logic that derives explicit robustness specifications from functional specifications. Soundness and an exemplar proof are mechanised in Coq.
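    As a hedged illustration of what a temporal-implication ("only if") property over call traces can mean, the TypeScript sketch below checks one such property dynamically; it is not Necessity's syntax or proof logic, and the method names are invented.

```typescript
// Hypothetical robustness-style trace property: a protected effect may occur
// only if some authorising call happened earlier, whatever the client did.

type Call = { method: string; args?: unknown };

// "effect only-if cause": every occurrence of the effect is preceded by the cause.
function onlyIf(trace: Call[], effect: string, cause: string): boolean {
  let authorised = false;
  for (const call of trace) {
    if (call.method === cause) authorised = true;
    if (call.method === effect && !authorised) return false;
  }
  return true;
}

// Example: the balance may change only if authenticate() was called before.
const good: Call[] = [{ method: "authenticate" }, { method: "withdraw" }];
const bad: Call[] = [{ method: "withdraw" }];
console.log(onlyIf(good, "withdraw", "authenticate")); // true
console.log(onlyIf(bad, "withdraw", "authenticate"));  // false
```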