41 research outputs found

    Vulnerability detection in device drivers

    Doctoral thesis (Tese de doutoramento), Informática (Ciência da Computação), Universidade de Lisboa, Faculdade de Ciências, 2017.
    The constant evolution of electronics allows new equipment/devices to be made available on the market regularly, which has led to a situation where common operating systems (OS) include many device drivers (DD) produced by very diverse manufacturers. Experience has shown that the development of DD is error prone, as the majority of OS crashes can be attributed to flaws in their implementation. This thesis addresses the challenge of designing methodologies and tools to facilitate the detection of flaws in DD, contributing to a decrease in the errors in this kind of software, their impact on OS stability, and the security threats they cause. This is especially relevant because it can help developers improve the quality of drivers during their implementation or when they are integrated into a system. The thesis work started by assessing how DD flaws can impact the correct execution of the Windows OS. The approach employed a statistical analysis to obtain the list of kernel functions most used by the DD, and then automatically generated synthetic drivers that introduce parameter errors when calling a kernel function, thus mimicking a faulty interaction. The experimental results showed that most targeted functions were ineffective at defending against incorrect parameters. A sizeable number of crashes and a small number of hangs were observed, suggesting poor error containment by these OS functions. Then, we produced an architecture and a tool that support the automatic injection of network attacks against mobile equipment (e.g., phones), with the objective of finding security flaws (or vulnerabilities) in Wi-Fi drivers. These DD were selected because they are easily accessible to an external adversary, who simply needs to create malicious traffic to exploit them, so flaws in their implementation can have an important impact. Experiments with the tool uncovered a previously unknown vulnerability that causes OS hangs when a specific value is assigned to the TIM element in the Beacon frame. The experiments also revealed a potential implementation problem in the TCP/IP stack related to the use of disassociation frames while the target device was associated and authenticated with a Wi-Fi access point. Next, we developed a tool capable of registering and instrumenting the interactions between a DD and the OS. The solution uses a wrapper DD around the binary of the driver under test, enabling full control over the function calls and parameters involved in the OS-DD interface. This tool can support very diverse testing operations, including logging system activity and reverse engineering driver behaviour. Experiments performed with the tool recorded insights into the behaviour of the interactions between the DD and the OS, including parameter values and return values. Results also showed the ability to identify bugs in drivers by executing tests based on the knowledge obtained from the driver's dynamics. Our final contribution is a methodology and framework for the discovery of errors and vulnerabilities in Windows DD through the execution of the drivers in a fully emulated environment. This approach is capable of testing drivers without requiring access to the associated hardware or the DD source code, and has granular control over each machine instruction.
    Experiments performed with off-the-shelf DD confirmed a high dependency on the correctness of the parameters passed by the OS, identified the precise location and cause of memory leaks, and revealed the existence of dormant and vulnerable code.
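    As an aside (not code from the thesis), the beacon-based attack idea can be sketched with scapy. The exact TIM value that triggered the reported hang is not disclosed in the abstract, so the element body below is a placeholder, and the MAC address and interface name are invented for illustration.

```python
# Sketch only: craft and replay a beacon whose TIM element (ID 5) carries
# an anomalous body. Placeholder bytes, made-up MAC, monitor-mode iface.
from scapy.all import RadioTap, Dot11, Dot11Beacon, Dot11Elt, sendp

ap_mac = "02:00:00:00:00:01"  # hypothetical access-point address
frame = (
    RadioTap()
    / Dot11(type=0, subtype=8,              # management frame / beacon
            addr1="ff:ff:ff:ff:ff:ff",      # broadcast destination
            addr2=ap_mac, addr3=ap_mac)
    / Dot11Beacon(cap="ESS")
    / Dot11Elt(ID="SSID", info=b"test-net")
    / Dot11Elt(ID=5, info=b"\x00\x01\x00\xff")  # ID 5 = TIM; mutated body
)

# Requires a wireless interface in monitor mode (e.g. "wlan0mon").
sendp(frame, iface="wlan0mon", inter=0.1, count=100)
```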

    Performance et fiabilité des protocoles de tolérance aux fautes

    In the modern era of on-demand ubiquitous computing, where applications and services are deployed in well-provisioned, well-managed infrastructures administered by large cloud providers such as Amazon, Google, Microsoft and Oracle, the performance and dependability of these systems have become primary objectives. Cloud computing has made Quality-of-Service (QoS) factors such as availability, reliability, liveness, safety and security essential to the complete definition of a system. Indeed, computing systems must be resilient in the presence of failures and attacks to prevent their inaccessibility, which can lead to expensive maintenance costs and loss of business. With the growing number of components in cloud systems, faults occur more commonly, resulting in frequent cloud outages and failures to guarantee QoS. Cloud providers have seen episodic incidents of arbitrary (i.e., Byzantine) faults in which systems exhibit unpredictable behaviour, including incorrect responses to client requests, corrupt messages, intentionally delayed messages, and violations of request ordering. This has led researchers to study Byzantine Fault Tolerance (BFT) extensively and to propose numerous protocols and software prototypes. These BFT solutions not only provide consistent and available services despite arbitrary failures, they also intend to reduce the cost and performance overhead incurred by the underlying systems. However, BFT prototypes have been evaluated in ad-hoc settings, considering either ideal conditions or very limited faulty scenarios. This has failed to convince practitioners to adopt BFT protocols in distributed systems: some question the applicability of expensive and complex BFT to tolerating arbitrary faults, while others are skeptical of the adequacy of BFT techniques. This thesis addresses precisely this problem and presents a comprehensive benchmarking environment that eases the setup of execution scenarios to analyze and compare the effectiveness and robustness of existing BFT proposals. Specifically, the contributions of this dissertation are as follows. First, we introduce a generic architecture for benchmarking distributed protocols. This architecture comprises reusable components for building benchmarks for the performance and dependability analysis of distributed protocols. It allows workloads and faultloads to be defined and injected, and it produces performance, dependability, and low-level system and network statistics. Furthermore, the thesis presents the benefits of a general architecture. Second, we present BFT-Bench, the first BFT benchmark for analyzing and comparing representative BFT protocols under identical scenarios. BFT-Bench allows end users to evaluate different BFT implementations under user-defined faulty behaviours and varying workloads. It automatically deploys these BFT protocols in a distributed setting and can monitor and report on performance and dependability aspects.
    In our results, we empirically compare some existing state-of-the-art BFT protocols under various workloads and fault scenarios with BFT-Bench, demonstrating its effectiveness in practice. Overall, this thesis aims to make BFT benchmarking easy for developers and end users of BFT protocols to adopt. The BFT-Bench framework intends to help users perform efficient comparisons of competing BFT implementations and to incorporate effective solutions to the loopholes detected in BFT prototypes. Furthermore, this dissertation strengthens the belief in the need for BFT techniques to ensure the correct and continued progress of distributed systems during critical fault occurrences.
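    To make the workload/faultload idea concrete, here is a hypothetical harness in the spirit of BFT-Bench; none of the names below come from the actual BFT-Bench API, and the system under test is a stub.

```python
# Hypothetical benchmark harness: define a workload and a faultload,
# inject faults on schedule, and report throughput/latency statistics.
import random
import statistics
import time

workload = {"clients": 10, "request_size": 1024, "duration_s": 5}
faultload = [  # faults sorted by trigger time
    {"at_s": 2.0, "replica": 1, "fault": "crash"},
    {"at_s": 3.0, "replica": 2, "fault": "delay", "ms": 500},
]

def run_benchmark(send_request, inject_fault):
    latencies, start, pending = [], time.time(), list(faultload)
    while time.time() - start < workload["duration_s"]:
        elapsed = time.time() - start
        while pending and pending[0]["at_s"] <= elapsed:
            inject_fault(pending.pop(0))      # fire scheduled fault
        t0 = time.time()
        send_request(random.randbytes(workload["request_size"]))
        latencies.append(time.time() - t0)
    return {"throughput_rps": len(latencies) / workload["duration_s"],
            "mean_latency_s": statistics.mean(latencies)}

# Stub system under test; a real harness would call into a BFT replica.
print(run_benchmark(lambda req: time.sleep(0.001),
                    lambda f: print("injecting", f)))
```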

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risks to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (an acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and most important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases.

    Hardware-Assisted Dependable Systems

    Unpredictable hardware faults and software bugs lead to application crashes, incorrect computations, unavailability of internet services, data losses, malfunctioning components, and consequently financial losses or even the death of people. In particular, faults in microprocessors (CPUs) and memory corruption bugs are among the major unresolved issues of today. CPU faults may result in benign crashes and, more problematically, in silent data corruptions that can lead to catastrophic consequences, silently propagating from component to component and finally shutting down the whole system. Similarly, memory corruption bugs (memory-safety vulnerabilities) may result in a benign application crash but may also be exploited by a malicious hacker to gain control over the system or leak confidential data. Both classes of errors are notoriously hard to detect and tolerate. The usual mitigation strategy is to apply ad-hoc local patches: checksums to protect specific computations against hardware faults, and bug fixes to protect programs against known vulnerabilities. This strategy is unsatisfactory since it is prone to errors, requires significant manual effort, and protects only against anticipated faults. At the other extreme, Byzantine Fault Tolerance solutions defend against all kinds of hardware and software errors, but are prohibitively expensive in terms of resources and performance overhead. In this thesis, we examine and propose five techniques to protect against hardware CPU faults and software memory-corruption bugs. All these techniques are hardware-assisted: they use recent advancements in CPU designs and modern CPU extensions. Three of them target hardware CPU faults and rely on specific CPU features: ∆-encoding efficiently utilizes the instruction-level parallelism of modern CPUs, Elzar re-purposes Intel AVX extensions, and HAFT builds on Intel TSX instructions. The remaining two target software bugs: SGXBounds detects vulnerabilities inside Intel SGX enclaves, and “MPX Explained” analyzes the recent Intel MPX extension for protecting against buffer overflow bugs. Our techniques achieve three goals: transparency, practicality, and efficiency. All our systems are implemented as compiler passes that transparently harden unmodified applications against hardware faults and software bugs. They are practical since they rely on commodity CPUs and require no specialized hardware or operating-system support. Finally, they are efficient because they use hardware assistance in the form of CPU extensions to lower the performance overhead.
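    As background (with an illustrative constant, not code from the thesis), ∆-encoding builds on AN codes, where a value n is stored as A*n so that a random bit flip almost surely breaks divisibility by A; ∆-encoding additionally duplicates execution with two constants, which the sketch below omits.

```python
# AN-code sketch (illustrative constant): value n stored as A*n; a bit
# flip in the encoded word is detected because it breaks divisibility.
A = 58659  # an AN-code constant of the kind used in the literature

def encode(n: int) -> int:
    return n * A

def decode(enc: int) -> int:
    if enc % A != 0:
        raise RuntimeError("fault detected: AN-code check failed")
    return enc // A

a, b = encode(7), encode(35)
s = a + b                    # addition works directly on encoded values
assert decode(s) == 42

try:
    decode(s ^ (1 << 13))    # simulate a single bit flip in memory
except RuntimeError as err:
    print(err)
```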

    Caractérisation de la sûreté de fonctionnement des systèmes d'exploitation en présence de pilotes défaillants

    Device drivers now form an essential part of operating systems. Several studies show that they are frequently at the origin of operating-system malfunctions. In this dissertation, we present a method for evaluating the robustness of kernels in the face of abnormal behaviour by device drivers. To this end, after analysing and specifying the characteristics of the exchanges between drivers and the kernel (the DPI, Driver Programming Interface), we propose an original fault-injection technique based on corrupting the parameters of the functions manipulated at this interface. We define different approaches for analysing and interpreting the observed results in order to obtain objective dependability measures. These measures make it possible to take different points of view into account so as to meet the real needs of the user. Finally, we illustrate and validate the applicability of this method by putting it into practice in an experimental environment under Linux. The proposed method contributes to characterising the dependability of kernels with respect to failures of system drivers. The impact of the results is twofold: a) it lets the developer of such software identify potential weaknesses affecting the dependability of the system; b) it helps an integrator choose the component best suited to their needs.
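    A minimal sketch of the parameter-corruption idea under an assumed generic fault model (bit flips, boundary values); the wrapped function is a stand-in, since the real technique intercepts kernel functions at the DPI rather than Python calls.

```python
# Generic parameter-corruption wrapper: one mutated parameter per call.
import random

def bit_flip(v: int) -> int:
    return v ^ (1 << random.randrange(32))

CORRUPTIONS = [bit_flip, lambda v: 0, lambda v: -1, lambda v: 2**31 - 1]

def corrupting_wrapper(func, param_index, corruption):
    def wrapper(*args):
        args = list(args)
        args[param_index] = corruption(args[param_index])  # inject fault
        return func(*args)
    return wrapper

# Stand-in for a kernel service routine called by a driver.
def alloc_buffer(size: int, flags: int) -> str:
    if size <= 0 or size > 2**20:
        return "rejected"       # a robust kernel validates parameters
    return "allocated"

for corruption in CORRUPTIONS:
    faulty = corrupting_wrapper(alloc_buffer, 0, corruption)
    print(faulty(4096, 0))
```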

    Efficient fault-injection-based assessment of software-implemented hardware fault tolerance

    With continuously shrinking semiconductor structure sizes and lower supply voltages, the per-device susceptibility to transient and permanent hardware faults is on the rise. A class of countermeasures with growing popularity is Software-Implemented Hardware Fault Tolerance (SIHFT), which avoids expensive hardware mechanisms and can be applied application-specifically. However, SIHFT can, against intuition, cause more harm than good, because its overhead in execution time and memory space also increases the figurative “attack surface” of the system; it turns out that application-specific configuration of SIHFT is in fact a necessity rather than just an advantage. Consequently, target programs need to be analyzed for particularly critical spots to harden. SIHFT-hardened programs need to be measured and compared throughout all development phases of the program to observe reliability improvements or deteriorations over time. Additionally, SIHFT implementations need to be tested. The contributions of this dissertation focus on fault injection (FI) as an assessment technique satisfying all these requirements: analysis, measurement and comparison, and test. I describe the design and implementation of an FI tool, named Fail*, that overcomes several shortcomings in the state of the art and enables research on the general drawbacks of simulation-based FI. As demonstrated in four case studies in the context of SIHFT research, Fail* provides novel fine-grained analysis techniques that exploit the newly gained possibility of analyzing FI results from complete fault-space exploration. These analysis techniques aid SIHFT design decisions at the level of program modules, functions, variables, source-code lines, or single machine instructions. Based on the experience from the case studies, I address the problem of the large computational effort that accompanies exhaustive fault-space exploration from two different angles: firstly, I develop a heuristic fault-space pruning technique that allows the total FI-experiment count to be traded freely for result accuracy, while still providing information on all possible fault-space coordinates. Secondly, I speed up individual TAP-based FI experiments by improving the fast-forwarding operation by several orders of magnitude for most workloads. Finally, I dissect current practices in the FI-based evaluation of SIHFT-hardened programs, identify three widespread pitfalls in the result interpretation, and advance the state of the art by defining a novel comparison metric.
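    One established pruning heuristic of this kind is def/use pruning; whether Fail* uses exactly this variant is not stated in the abstract, so the sketch below only illustrates the principle on a made-up memory-access trace.

```python
# Def/use pruning on a made-up (cycle, op, addr) memory-access trace:
# a flip anywhere between a write and the next read of a cell behaves
# like a flip just before the read, so one experiment covers the span.
from collections import defaultdict

trace = [(1, "W", 0x10), (4, "R", 0x10), (6, "R", 0x10),
         (2, "W", 0x20), (9, "R", 0x20)]

def prune(trace):
    by_addr = defaultdict(list)
    for cycle, op, addr in sorted(trace):
        by_addr[addr].append((cycle, op))
    experiments = []              # (addr, inject_cycle, covered_cycles)
    for addr, ops in by_addr.items():
        last = None
        for cycle, op in ops:
            if op == "R" and last is not None:
                experiments.append((addr, cycle, cycle - last))
            last = cycle
    return experiments

for addr, inject_at, weight in prune(trace):
    print(f"addr {addr:#x}: inject at cycle {inject_at}, "
          f"stands for {weight} per-cycle experiments")
```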

    JVM-based Techniques for Improving Java Observability

    Observability measures the support of computer systems for accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, consequently compromising their utility in accommodating today’s increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program executions. Existing solutions of this kind, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies in existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful by having access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for the dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
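    A rough Python analogue of the three capabilities listed above (custom events, precise trigger locations, runtime toggling); MTF itself lives inside a Java VM, so this is illustrative only and all names are invented.

```python
# Invented names throughout: a tiny line-level event framework showing
# custom events, precise locations, and runtime enable/disable.
import sys

events = {}  # (filename, lineno) -> event record

def define_event(name, filename, lineno, handler):
    events[(filename, lineno)] = {"name": name, "on": True,
                                  "handler": handler}

def set_enabled(name, on):
    for ev in events.values():       # adaptive runtime toggling
        if ev["name"] == name:
            ev["on"] = on

def tracer(frame, kind, arg):
    if kind == "line":
        ev = events.get((frame.f_code.co_filename, frame.f_lineno))
        if ev and ev["on"]:
            ev["handler"](frame)     # handler sees full frame context
    return tracer                    # keep receiving per-line events

sys.settrace(tracer)                 # activate instrumentation
```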

    Diagnóstico de fallos y optimización de la planificación en un marco de e-mantenimiento.

    324 p. The main objective is to demonstrate the potential for improvement that techniques and methodologies related to prescriptive analytics can provide in industrial-maintenance applications. The developed technologies can be grouped into three areas:
    - E-maintenance, fundamentally related to the development of collaborative and intelligent platforms that allow the integration of new sensors, communication systems, standards and protocols, concepts, storage and analysis methods, etc., which continually expand our range of possibilities and enable steady improvement in the optimisation of assets and processes and in the interoperability between systems.
    - Bayesian Networks (BNs), which, together with other information-gathering methodologies used in engineering, make it possible to automate the tasks of fault diagnosis and prediction.
    - The optimisation of maintenance strategies, by means of fault simulations and cost-effectiveness analyses that support decision-making when selecting a maintenance strategy suited to the asset. In addition, through the use of optimisation algorithms we improve maintenance scheduling, reducing the time and cost of carrying out the tasks across a fleet of assets.
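    As a toy illustration of the BN-based diagnosis idea mentioned above (all probabilities invented), a single fault cause and a single observable symptom already show how a posterior fault probability is obtained by Bayes' rule:

```python
# Invented numbers: posterior fault probability from one alarm symptom.
p_fault = 0.02              # prior probability of component failure
p_alarm_given_fault = 0.95  # sensor sensitivity
p_alarm_given_ok = 0.10     # false-alarm rate

p_alarm = (p_alarm_given_fault * p_fault
           + p_alarm_given_ok * (1 - p_fault))
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm
print(f"P(fault | alarm) = {p_fault_given_alarm:.3f}")  # ~0.162
```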

    Software Performance Assessment at Architectural Level: A Methodology and its Application

    Software architectures are a valuable tool for evaluating the qualitative and quantitative properties of systems in their early development phases. Getting the design right is critical to ensuring the quality of those properties. Wrong early decisions can imply considerable and costly changes in the future, affecting many properties of the system, such as its performance, security, reliability or maintainability. From the software performance point of view, Software Performance Engineering (SPE) is a mature and commonly accepted research discipline that proposes model-based evaluation in the early phases of the software development life cycle. One problem in this research field is that the methodologies proposed so far neither offer an interpretation of the results obtained during performance analysis nor use those results to propose alternatives for improving the software architecture itself. To date, this interpretation and improvement has required the experience and expertise of software engineers, especially experts in performance engineering. Moreover, despite the large number of proposals for evaluating the performance of software systems, very few of these theoretical studies are subsequently applied to real software systems. The objective of this thesis is to present a methodology for assessing architectural decisions to improve software systems from the performance point of view. The methodology uses the Unified Modeling Language (UML) to represent software architectures, and formal methods, specifically Petri nets, as the performance model. The assessment, based on patterns and antipatterns, attempts to detect the main problems affecting system performance and proposes possible improvements. As a first step, we study and analyse the performance results of different architectural styles. Next, we systematise the previously obtained knowledge to propose a methodology, and we verify its applicability by assessing a real case study: an interoperability architecture for adapting interfaces to people with disabilities according to their capabilities and preferences. Finally, a performance evaluation tool is presented as a by-product of the software life cycle itself.
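    To illustrate the kind of performance model involved (not the thesis's own models), a two-transition stochastic Petri net (requests arrive at one rate; a single server transition fires at another) behaves like an M/M/1 queue, whose mean response time a short simulation can estimate; all rates are invented.

```python
# Invented rates: simulate the net and compare with the analytic value.
import random

random.seed(1)
ARRIVAL_RATE, SERVICE_RATE, N = 5.0, 8.0, 100_000

clock = server_free_at = total_response = 0.0
for _ in range(N):
    clock += random.expovariate(ARRIVAL_RATE)   # arrival transition fires
    start = max(clock, server_free_at)          # wait for the server token
    server_free_at = start + random.expovariate(SERVICE_RATE)
    total_response += server_free_at - clock

print(f"simulated mean response time: {total_response / N:.3f} s")
print(f"M/M/1 theory: {1 / (SERVICE_RATE - ARRIVAL_RATE):.3f} s")
```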