453 research outputs found

    Declarative Specification of Intraprocedural Control-flow and Dataflow Analysis

    Get PDF
    Static program analysis plays a crucial role in ensuring the quality and security of software by detecting bugs and potential security vulnerabilities in the code. Declarative paradigms for dataflow analysis have become increasingly popular in recent years, owing to their enhanced expressivity and modularity: they allow a higher-level programming approach and make development easier and more efficient. The aim of this thesis is to explore the design and implementation of control-flow and dataflow analyses using the declarative Reference Attribute Grammars formalism. Specifically, we focus on constructing analyses directly on the source code rather than on an intermediate representation. The main result of this thesis is our language-agnostic framework, IntraCFG, which enables efficient and effective dataflow analysis by allowing the construction of precise, source-level control-flow graphs. The framework superimposes control-flow graphs on top of the abstract syntax tree of the program. The effectiveness of IntraCFG is demonstrated through two case studies, IntraJ and IntraTeal, which showcase its potential and flexibility in diverse contexts such as bug detection and education. IntraJ supports the Java programming language, while IntraTeal is a tool for teaching program analysis with an educational language, Teal. IntraJ has proven to be faster than, and as precise as, well-known industrial tools. The combination of precision, performance, and on-demand evaluation in IntraJ leads to low latency when querying analysis results, making it suitable for use in interactive tools. Preliminary experiments have also been conducted to demonstrate how IntraJ can support interactive bug detection and fixing. Additionally, this thesis presents JFeature, a tool for automatically extracting and summarising the features of a Java corpus, including the use of different Java features (e.g., lambda expressions) across Java versions. JFeature gives researchers and developers a deeper understanding of the characteristics of corpora, enabling them to identify suitable benchmarks for evaluating their tools and methodologies.
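    As a rough illustration of the idea (not IntraCFG's actual API), the following Python sketch superimposes successor edges on AST nodes and runs a small liveness analysis as a fixpoint directly over those source-level nodes; all class and attribute names are invented for the example.

        from dataclasses import dataclass, field

        @dataclass(eq=False)
        class Node:
            succ: set = field(default_factory=set, repr=False)   # superimposed CFG edges
            def defs(self): return set()
            def uses(self): return set()

        @dataclass(eq=False)
        class Assign(Node):
            target: str = ""
            operands: tuple = ()
            def defs(self): return {self.target}
            def uses(self): return set(self.operands)

        def liveness(nodes):
            # backward may-analysis: live_in(n) = uses(n) | (live_out(n) - defs(n))
            live_in = {n: set() for n in nodes}
            changed = True
            while changed:
                changed = False
                for n in nodes:
                    out = set().union(*(live_in[s] for s in n.succ))
                    new_in = n.uses() | (out - n.defs())
                    if new_in != live_in[n]:
                        live_in[n], changed = new_in, True
            return live_in

        # x = 1; y = x; return y  -- statements linked by superimposed successor edges
        a = Assign(target="x")
        b = Assign(target="y", operands=("x",))
        c = Assign(target="_ret", operands=("y",))
        a.succ, b.succ = {b}, {c}
        for n, live in liveness([a, b, c]).items():
            print(n.target, "live-in:", live)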

    Software Architecture in Practice: Challenges and Opportunities

    Full text link
    Software architecture has been an active research field for nearly four decades, in which previous studies have made significant progress, such as creating methods and techniques and building tools to support software architecture practice. Despite these efforts, we have little understanding of how practitioners perform software architecture related activities and what challenges they face. Through interviews with 32 practitioners from 21 organizations across three continents, we identified the challenges practitioners face in software architecture practice during software development and maintenance. We report on common software architecture activities at the requirements, design, construction and testing, and maintenance stages, as well as the corresponding challenges. Our study uncovers that most of these challenges center around management, documentation, tooling and process, and it collects recommendations to address these challenges. Comment: Preprint of Full Research Paper, the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE '23).

    The Role of a Microservice Architecture on cybersecurity and operational resilience in critical systems

    Get PDF
    Critical systems are characterized by their high degree of intolerance to threats, in other words, by their high level of required resilience: depending on the context in which the system operates, the slightest failure could cause significant damage, whether economic or in terms of loss of reputation, information, infrastructure, the environment, or human life. The security of such systems is traditionally associated with monolithic legacy infrastructures and data centers, which translates into ever greater challenges for evolution and protection. In the current context of rapid transformation, where the variety of threats to systems is consistently increasing, this dissertation carries out a compatibility study of the microservice architecture, which is characterized by resilience, scalability, modifiability and technological heterogeneity, and by its flexibility in structural adaptations and in rapidly evolving, highly complex settings, making it well suited to agile environments. It also explores what response artificial intelligence, more specifically machine learning, can provide in a context of security and monitorability when combined with a simple banking system that adopts a microservice architecture.

    Measuring the impact of COVID-19 on hospital care pathways

    Get PDF
    Care pathways in hospitals around the world saw significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can help hospital management measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach to patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions to hospital care pathways. We found that during the pandemic, both A&E and maternity pathways showed measurable reductions in mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in monthly mean length of stay or conformance across the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
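    A minimal sketch of the kind of measurement involved (not the study's actual pipeline), assuming a hypothetical event log with case_id/activity/timestamp columns and XES logs for two periods; it computes mean length of stay with pandas and replays one period against a normative model discovered with pm4py.

        import pandas as pd
        import pm4py

        # hypothetical event log: one row per event (case id, activity, timestamp)
        events = pd.read_csv("ae_events.csv", parse_dates=["timestamp"])
        los_hours = (events.groupby("case_id")["timestamp"]
                           .agg(lambda ts: ts.max() - ts.min())
                           .dt.total_seconds() / 3600.0)
        print("mean length of stay (hours):", los_hours.mean())

        # normative model from a baseline (pre-pandemic) log, then replay the
        # pandemic-period log against it to see how many traces still conform
        baseline = pm4py.read_xes("baseline_period.xes")
        pandemic = pm4py.read_xes("pandemic_period.xes")
        net, im, fm = pm4py.discover_petri_net_inductive(baseline)
        fitness = pm4py.fitness_token_based_replay(pandemic, net, im, fm)
        print(fitness)  # aggregate fitness / share of conforming traces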

    Security and Privacy for Modern Wireless Communication Systems

    Get PDF
    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretical foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communications networks.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Get PDF
    Several scholars have designed tools for the automatic scansion of poetry in many languages, but none of them deals with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance showed 97% accuracy and a 99% weighted average for precision, recall and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
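    For illustration, here is a minimal PyTorch sketch of a BiLSTM sequence tagger of the kind described, with placeholder vocabulary size, label set and hyperparameters rather than the thesis's actual configuration.

        import torch
        import torch.nn as nn

        class BiLSTMScansion(nn.Module):
            def __init__(self, vocab_size, num_labels, emb_dim=64, hidden=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
                self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
                self.out = nn.Linear(2 * hidden, num_labels)      # 2*hidden: both directions

            def forward(self, tokens):                 # tokens: (batch, seq_len) int ids
                h, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, 2*hidden)
                return self.out(h)                     # per-position label logits

        model = BiLSTMScansion(vocab_size=100, num_labels=6)
        x = torch.randint(1, 100, (2, 12))             # two dummy "verses", 12 units each
        y = torch.randint(0, 6, (2, 12))               # dummy metrical labels
        loss = nn.CrossEntropyLoss()(model(x).reshape(-1, 6), y.reshape(-1))
        loss.backward()                                # one supervised training step (optimizer omitted)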

    A conceptual framework for uncertainty in software systems and its application to software architectures

    Get PDF
    The development and operation of a software system involve many aspects, including processes, artefacts, infrastructure and environments. Most of these aspects are vulnerable to uncertainty. Thus, the identification, representation and management of uncertainty in software systems is important and will be of interest to many stakeholders in software systems. The hypothesis of this work is that such consideration would benefit from an underlying conceptual framework that allows stakeholders to characterise, analyse and mitigate uncertainties. This PhD thesis proposes a framework that provides a generic foundation for the systematic and explicit consideration of uncertainty in software systems by consolidating and extending existing approaches to dealing with uncertainty, which are typically tailored to specific domains or artefacts. The thesis applies the framework to software architectures, which are fundamental in determining the structure, behaviour and qualities of software systems and are thus well suited to serve as an exemplar artefact. The framework is evaluated using the software architectures of case studies from three different domains. The contributions of the research to the study of uncertainty in software systems include: a literature review of approaches to managing uncertainty in software architecture; a review of existing work on uncertainty frameworks related to software systems; a conceptual framework for uncertainty in software systems; a conceptualisation of the workbench infrastructure as a basis for building a workbench of tools for representing uncertainty as part of software architecture descriptions; and an evaluation of the uncertainty framework using three software architecture case studies.
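    Purely as an illustration (not the thesis's framework), the sketch below shows the kind of structured record a workbench could attach to an architecture description so that an uncertainty is characterised explicitly rather than left implicit; all field and element names are invented for the example.

        from dataclasses import dataclass, field

        @dataclass
        class Uncertainty:
            element: str                 # architecture element it concerns
            description: str             # what exactly is uncertain
            source: str                  # e.g. "environment", "requirements", "infrastructure"
            level: str                   # qualitative level, e.g. "recognised" or "quantified"
            impact: str                  # qualities affected, e.g. "availability, latency"
            mitigations: list = field(default_factory=list)

        u = Uncertainty(
            element="PaymentService",
            description="Peak load of the downstream payment provider is unknown",
            source="environment",
            level="recognised",
            impact="availability, latency",
            mitigations=["load testing", "circuit breaker"],
        )
        print(u)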

    FuzzTheREST - Intelligent Automated Blackbox RESTful API Fuzzer

    Get PDF
    In recent years, the pervasive influence of technology has become deeply intertwined with human life, impacting diverse fields. This relationship has evolved into a dependency, with software systems playing a pivotal role and therefore requiring a high level of trust. Today, a substantial portion of software is accessed through Application Programming Interfaces, particularly web APIs, which predominantly adhere to the Representational State Transfer architecture. However, this architectural choice introduces a wide range of potential vulnerabilities, which are exposed and accessible at the network level. The significance of software testing becomes evident when considering the widespread use of software in daily tasks that affect personal safety and security, making the identification and assessment of faulty software of paramount importance. In this thesis, FuzzTheREST, a black-box RESTful API fuzz-testing framework, is introduced with the primary aim of addressing the challenges associated with understanding the context of each system under test and conducting comprehensive automated testing using diverse inputs. Operating from a black-box perspective, this fuzzer leverages Reinforcement Learning to efficiently uncover vulnerabilities in RESTful APIs by optimizing input values and combinations, relying on mutation methods for input exploration. The system's value is further enhanced by providing the user with a thoroughly documented vulnerability discovery process. The proposal stands out for its emphasis on explainability and its application of RL to learn the context of each API, eliminating the need for source code knowledge and expediting the testing process. The developed solution adheres to software engineering best practices and incorporates a novel Reinforcement Learning algorithm, comprising a customized environment for API fuzz testing and a multi-table Q-learning agent. The quality and applicability of the tool are assessed on two case studies, involving the Petstore API and an Emotion Detection module that was part of the CyberFactory#1 European research project. The results demonstrate the tool's effectiveness in discovering vulnerabilities, with 7 distinct vulnerabilities found, and the agent's ability to learn different API contexts from API responses while maintaining reasonable code coverage levels.
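    As a generic sketch of the underlying idea (not FuzzTheREST's implementation), the following keeps one Q-table per API parameter, chooses mutation operators epsilon-greedily, and derives the reward from the HTTP response; the endpoint, parameters, reward values and mutation operators are all hypothetical.

        import random
        import requests

        ENDPOINT = "http://localhost:8080/pet"      # hypothetical system under test
        MUTATIONS = ["stringify", "boundary_int", "null_value", "long_string"]

        def mutate(value, op):
            # toy mutation operators; a real fuzzer would have many more
            return {"stringify": str(value), "boundary_int": 2**31 - 1,
                    "null_value": None, "long_string": "A" * 10_000}[op]

        # one Q-table per request parameter, actions are mutation operators
        q_tables = {param: {op: 0.0 for op in MUTATIONS} for param in ("id", "name")}
        alpha, epsilon = 0.5, 0.2

        for episode in range(100):
            payload, chosen = {"id": 1, "name": "doggie"}, {}
            for param, table in q_tables.items():
                op = (random.choice(MUTATIONS) if random.random() < epsilon
                      else max(table, key=table.get))           # epsilon-greedy choice
                chosen[param] = op
                payload[param] = mutate(payload[param], op)
            status = requests.post(ENDPOINT, json=payload, timeout=5).status_code
            reward = 10.0 if status >= 500 else (1.0 if status >= 400 else -1.0)  # 5xx suggests a fault
            for param, op in chosen.items():                    # stateless tabular Q-update
                q_tables[param][op] += alpha * (reward - q_tables[param][op])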

    Understanding the Code of Life: Holistic Conceptual Modeling of the Genome

    Full text link
    Over the last few decades, advances in sequencing technology have produced significant amounts of genomic data, which has revolutionised our understanding of biology. However, the amount of data generated has far exceeded our ability to interpret it. Deciphering the code of life is a grand challenge. Despite our progress, our understanding of it remains minimal, and we are just beginning to uncover its full potential, for instance in areas such as precision medicine or pharmacogenomics. The main objective of this thesis is to advance our understanding of life by proposing a holistic, model-based approach consisting of three artifacts: i) a conceptual schema of the genome, ii) a method for its application in the real world, and iii) the use of foundational ontologies to represent domain knowledge in a more unambiguous and explicit way. The first two contributions have been validated by implementing genome information systems based on conceptual models. The third contribution has been validated by empirical experiments assessing whether using foundational ontologies leads to a better understanding of the genomic domain. The artifacts generated offer significant benefits. First, more efficient data management processes were produced, leading to better knowledge extraction processes. Second, a better understanding and communication of the domain was achieved.
    The fruitful discussions and the results derived from the projects INNEST2021/57, MICIN/AEI/10.13039/501100011033, PID2021-123824OB-I00, CIPROM/2021/023 and PDC2021-121243-I00 have contributed greatly to the final quality of this thesis.
    García Simón, A. (2022). Understanding the Code of Life: Holistic Conceptual Modeling of the Genome [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19143
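    Purely as an illustration (not the thesis's conceptual schema), a small sketch of the kind of entities and relations such a schema makes explicit, rendered as Python dataclasses with placeholder values.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Variant:
            identifier: str          # e.g. an rsID
            position: int            # coordinate on the chromosome
            reference: str           # reference allele
            alternative: str         # observed alternative allele

        @dataclass
        class Gene:
            symbol: str
            chromosome: str
            start: int
            end: int
            variants: List[Variant] = field(default_factory=list)

        # placeholder values, purely for illustration
        gene = Gene(symbol="GENE1", chromosome="13", start=1_000, end=5_000)
        gene.variants.append(Variant("rs0000001", 2_500, "A", "G"))
        print(gene.symbol, "has", len(gene.variants), "annotated variant(s)")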