
    PALPAS - PAsswordLess PAssword Synchronization

    Tools that synchronize passwords across several user devices typically store the encrypted passwords in a central online database. For encryption, a low-entropy, password-based key is used. Such a database may be subject to unauthorized access, which can lead to the disclosure of all passwords through an offline brute-force attack. In this paper, we present PALPAS, a secure and user-friendly tool that synchronizes passwords between user devices without storing information about them centrally. The idea of PALPAS is to generate a password from a high-entropy secret shared by all devices and a random salt value for each service. Only the salt values are stored on a server, not the secret. The salt enables the user devices to generate the same password but is statistically independent of the password. In order for PALPAS to generate passwords according to different password policies, we also present a mechanism that automatically retrieves and processes the password requirements of services. PALPAS users need to memorize only a single password, and setting up PALPAS on a further device requires only a one-time transfer of a small amount of static data. Comment: An extended abstract of this work appears in the proceedings of ARES 201
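
    The derivation described above can be illustrated with a short sketch. This is not the PALPAS implementation; the key-derivation construction (HMAC-SHA-256), the alphabet, and the fixed length are assumptions standing in for the policy-driven mechanism the paper describes.

```python
import hashlib
import hmac
import os
import string

# Toy policy: 16 alphanumeric characters (PALPAS fetches real policies per service).
ALPHABET = string.ascii_letters + string.digits

def derive_password(secret: bytes, salt: bytes, length: int = 16) -> str:
    """Derive a service password from the device-shared secret and a per-service salt.

    The high-entropy secret never leaves the user's devices; only the salt is
    stored on the server, and on its own it is independent of the password.
    """
    chars = []
    counter = 0
    while len(chars) < length:
        block = hmac.new(secret, salt + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        chars.extend(ALPHABET[b % len(ALPHABET)] for b in block)
        counter += 1
    return "".join(chars[:length])

secret = os.urandom(32)   # shared once between devices during setup
salt = os.urandom(32)     # generated per service, synchronized via the server
assert derive_password(secret, salt) == derive_password(secret, salt)
```

    Only the salt would be synchronized through the server; every device holding the shared secret recomputes the same password locally.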

    Security and trust in cloud computing and IoT through applying obfuscation, diversification, and trusted computing technologies

    Cloud computing and the Internet of Things (IoT) are widely deployed and commonly used technologies nowadays. The advanced services offered by cloud computing have made it a highly demanded technology. Enterprises and businesses rely more and more on the cloud to deliver services to their customers. The prevalent use of the cloud means that more data is stored outside the organization's premises, which raises concerns about the security and privacy of the stored and processed data. This highlights the significance of effective security practices for securing the cloud infrastructure. The number of IoT devices is growing rapidly, and the technology is being employed in a wide range of sectors, including smart healthcare, industrial automation, and smart environments. These devices collect and exchange a great deal of information, some of which may contain critical and personal data of their users. Hence, it is highly important to protect the data collected and shared over the network; nevertheless, studies indicate that attacks on these devices are increasing, while a high percentage of IoT devices lack proper security measures to protect the devices, the data, and the privacy of the users. In this dissertation, we study the security of cloud computing and IoT and propose software-based security approaches, supported by hardware-based technologies, to provide robust measures for enhancing the security of these environments. To achieve this goal, we use obfuscation and diversification as the software security techniques. Code obfuscation protects the software from malicious reverse engineering, and diversification mitigates the risk of large-scale exploits. We study trusted computing and Trusted Execution Environments (TEE) as the hardware-based security solutions. The Trusted Platform Module (TPM) provides security and trust through a hardware root of trust and assures the integrity of a platform. We also study Intel SGX, a TEE solution that guarantees the integrity and confidentiality of the code and data loaded into its protected container, the enclave. More precisely, through obfuscation and diversification of the operating systems and APIs of IoT devices, we secure them at the application level, and through obfuscation and diversification of the communication protocols, we protect the communication of data between them at the network level. To secure cloud computing, we employ obfuscation and diversification techniques on the cloud computing software at the client side. For an enhanced level of security, we employ the hardware-based security solutions TPM and SGX. These solutions, in addition to security, ensure layered trust from the hardware up to the application. As the result of this PhD research, this dissertation addresses a number of security risks targeting IoT and cloud computing through the delivered publications and presents a brief outlook on future research directions.
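
    As a rough illustration of the interface diversification idea mentioned above (a toy sketch, not the tooling developed in the dissertation), each device build can derive unique aliases for internal API symbols, so that an exploit written against one build's interface does not transfer unchanged to another. The symbol names and key handling here are assumptions.

```python
import hashlib
import hmac

# Hypothetical internal API of an IoT firmware image.
INTERNAL_API = ["read_sensor", "send_frame", "update_firmware"]

def diversified_name(build_key: bytes, symbol: str) -> str:
    """Derive a per-build alias for an internal symbol."""
    tag = hmac.new(build_key, symbol.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{symbol}_{tag}"

def diversify(build_key: bytes) -> dict:
    """Map the original API onto build-specific names, applied at build or deployment time."""
    return {sym: diversified_name(build_key, sym) for sym in INTERNAL_API}

# Two builds expose different symbol tables, so malware that calls
# `update_firmware` by its well-known name on build A fails on build B.
print(diversify(b"device-build-A"))
print(diversify(b"device-build-B"))
```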

    Chronic disease care in primary health care facilities in rural South African settings

    A THESIS Submitted to the School of Public Health, Faculty of Health Sciences, University of the Witwatersrand, in fulfilment of the requirements for the degree of Doctor of Philosophy, Johannesburg, South Africa, 2016. Background: South Africa has a dual high burden of HIV and non-communicable diseases (NCDs). In response to the dual burden of these chronic diseases, the National Department of Health (NDoH) introduced a pilot of the Integrated Chronic Disease Management (ICDM) model in June 2011 in selected Primary Health Care (PHC) facilities, one of the first such efforts by an African Ministry of Health. The main aim of the ICDM model is to leverage the successes of the innovative HIV treatment programme for NCDs in order to improve the quality of chronic disease care and the health outcomes of adult chronic disease patients. Since the initiation of the ICDM model, little has been known about the quality of chronic care under the model or its effectiveness in improving the health outcomes of chronic disease patients. Objectives: To describe the chronic disease profile and predictors of healthcare utilisation (HCU) in a rural population in a South African municipality; and to assess the quality of care and the effectiveness of the ICDM model in improving health outcomes of chronic disease patients receiving treatment in PHC facilities. Methods: The NDoH pilot study was conducted in selected health facilities in the Bushbuckridge municipality, Mpumalanga province, northeast South Africa, where part of the population has been continuously monitored by the Agincourt Health and Socio-Demographic Surveillance System (HDSS) since 1992. Two main studies were conducted to address the two research objectives. The first study was a situation analysis to describe the chronic disease profile and predictors of healthcare utilisation in the population monitored by the Agincourt HDSS. The second study evaluated the quality of care in the ICDM model as implemented and assessed the effectiveness of the model in improving the health outcomes of patients receiving treatment in PHC facilities. This second study had three components: (1) a qualitative and (2) a quantitative evaluation of the quality of care in the ICDM model; and (3) a quantitative assessment of the effectiveness of the ICDM model in improving patients' health outcomes. The two main studies have been categorised into three broad thematic areas: chronic disease profile and predictors of healthcare utilisation; quality of care in the ICDM model; and changes in patients' health outcomes attributable to the ICDM model. In the first study, a cross-sectional survey to measure healthcare utilisation targeted 7,870 adults aged 50 years and over permanently residing in the area monitored by the Agincourt HDSS in 2010, the year before the ICDM model was introduced. Secondary data on healthcare utilisation (dependent variable) and on socio-demographic variables drawn from the HDSS, receipt of social grants, and type of medical aid (independent variables) were analysed. Predictors of HCU were determined by binary logistic regression adjusted for socio-demographic variables. The quantitative component of the second study was a cross-sectional survey conducted in 2013 in the seven PHC facilities implementing the ICDM model in the Agincourt sub-district (henceforth referred to as the ICDM pilot facilities) to better understand the quality of care in the ICDM model.
Avedis Donabedian's theory of the relationships between structure, process, and outcome (SPO) constructs was used to evaluate quality of care in the ICDM model, exploring unidirectional, mediation, and reciprocal pathways. Four hundred and thirty-five (435) proportionately sampled patients ≥ 18 years and the seven operational managers of the PHC facilities responded to an adapted satisfaction questionnaire with measures reflecting structure (e.g. equipment), process (e.g. examination) and outcome (e.g. waiting time) constructs. Seventeen dimensions of care in the ICDM model were evaluated from the perspectives of patients and providers. Eight of these 17 dimensions of care are the priority areas of the HIV treatment programme used as leverage for improving quality of care in the ICDM model: supply of critical medicines, hospital referral, defaulter tracing, prepacking of medicines, clinic appointments, reducing patient waiting time, and coherence of integrated chronic disease care (a one-stop clinic meeting most of patients' needs). A structural equation model was fitted to operationalise Donabedian's theory using patients' satisfaction scores. The qualitative component of the second study was a case study of the seven ICDM pilot facilities conducted in 2013 to gain in-depth perspectives of healthcare providers and users regarding quality of care in the ICDM model. Of the 435 patients receiving treatment in the pilot facilities, 56 were purposively selected for focus group discussions. In-depth interviews were conducted with the seven operational managers of the pilot facilities and with the health manager of the Bushbuckridge municipality. Qualitative data were analysed with MAXQDA 2 software to identify the 17 a priori dimensions of care and emerging themes. In addition to the emerging themes, codes generated in the qualitative analysis were underpinned by Avedis Donabedian's SPO theoretical framework. A controlled interrupted time-series study was conducted from 2011 to 2013 for the 435 patients who participated in the cross-sectional study in the ICDM pilot facilities and for 443 patients proportionately recruited from five PHC facilities not implementing the ICDM model (comparison PHC facilities in the surrounding area outside the Agincourt HDSS). Health outcome data for each patient were retrieved from facility records at 30 time points (months) during the study period. We performed autoregressive moving average (ARMA) statistical modelling to account for the autocorrelation inherent in time-series data. The effect of the ICDM model on the control of BP and CD4 count (350 cells/mm3) was assessed by controlled segmented linear regression analysis.
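
    The controlled segmented (interrupted time-series) regression just described can be sketched as follows. The data are simulated and the column names, intervention month, and effect sizes are illustrative only; the actual analysis additionally used ARMA terms to account for autocorrelation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(30)                      # 30 monthly time points per facility group

def simulate(group, level, trend, post_trend_change):
    post = (months >= 12).astype(int)       # illustrative start month of the intervention
    y = (level + trend * months
         + post_trend_change * post * (months - 12)
         + rng.normal(0, 0.5, months.size))
    return pd.DataFrame({"y": y, "time": months, "post": post,
                         "time_after": post * (months - 12), "group": group})

df = pd.concat([simulate(1, 10, -0.05, 0.04),   # pilot (ICDM) facilities
                simulate(0, 10, -0.05, 0.00)])  # comparison facilities

# Controlled segmented regression: the group interaction terms estimate the change
# in level and slope attributable to the intervention relative to the controls.
fit = smf.ols("y ~ time + post + time_after + group"
              " + group:post + group:time_after", data=df).fit()
print(fit.params)
```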
Results: Seventy-five percent (75%) of the 7,870 eligible adults aged 50+ responded to the healthcare utilisation survey in the first study. All 5,795 responders reported health problems, of whom 96% used healthcare, predominantly at public health facilities (82%). Reported health problems were: chronic non-communicable diseases (41% - e.g. hypertension), acute conditions (27% - e.g. flu), other conditions (26% - e.g. musculoskeletal pain), chronic communicable diseases (3% - e.g. HIV and TB) and injuries (3%). Chronic communicable (OR = 5.91, 95% CI: 1.44, 24.32) and non-communicable (OR = 2.85, 95% CI: 1.96, 4.14) diseases were the main predictors of healthcare utilisation. Out of the 17 dimensions of care assessed in the quantitative component of the quality of care study, operational managers reported dissatisfaction with patient waiting time, while patients reported dissatisfaction with the appointment system, defaulter tracing of patients, and waiting time. The mediation pathway fitted the data perfectly (coefficient of determination = 1.00). The structural equation modelling showed that structure correlated with process (0.40) and outcome (0.75). Given structure, process correlated with outcome (0.88). Patients' perception of availability of equipment, supply of critical medicines, and accessibility of care (structure construct) had a direct influence on the ability of nurses to attend to their needs and to be professional and friendly (process construct). Patients also perceived that these process dimensions directly influenced the coherence of care provided, the competence of the nurses, and patients' confidence in the nurses (outcome construct). These structure-related dimensions of care also directly influenced outcome-related dimensions of care without the mediating effect of process factors. In the qualitative study, manager and patient narratives showed inadequacies in structure (malfunctioning blood pressure machines and staff shortage), process (irregular prepacking of drugs), and outcome (long waiting times). Patients reported anti-hypertension drug stock-outs; sub-optimal defaulter tracing; rigid clinic appointments; HIV-related stigma in the community resulting from defaulter-tracing activities; and government nurses' involvement in commercial activities in the consulting rooms during office hours. Managers reported simultaneous treatment of chronic diseases by traditional healers in the community and thought there was reduced HIV stigma because HIV and NCD patients attended the same clinic. In the controlled interrupted time-series study, the ARMA model showed that the pilot facilities had a 5.7% (coef = 0.057; 95% CI: 0.056, 0.058; P < 0.001) and 1.0% (coef = 0.010; 95% CI: 0.003, 0.016; P = 0.002) greater likelihood than the comparison facilities of controlling patients' CD4 counts and BP, respectively. In the segmented analysis, the decreasing probabilities of controlling CD4 counts and BP observed in the pilot facilities before the implementation of the ICDM model were reduced by 0.23% (coef = -0.0023; 95% CI: -0.0026, -0.0021; P < 0.001) and 1.5% (coef = -0.015; 95% CI: -0.016, -0.014; P < 0.001), respectively. Conclusions: HIV and NCDs were the main health problems and predictors of HCU in the population. This suggests that public healthcare services for chronic diseases are a priority among older people in this rural setting. Poor quality of care was reported in five of the eight priority areas used as leverage for the control of NCDs (referral, defaulter tracing, prepacking of medicines, clinic appointments, and waiting time); hence the need to strengthen services in these areas. Application of the ICDM model appeared effective in attenuating the declining trend in the control of patients' CD4 counts and blood pressure. The suboptimal BP control observed in this study may have been due to poor quality of care in the identified priority areas of the ICDM model and to unintended consequences of the ICDM model such as work overload, staff shortage, malfunctioning BP machines, anti-hypertension drug stock-outs, and HIV-related stigma in the community.
Hence, the HIV programme should be more extensively leveraged to improve the quality of hypertension treatment in order to achieve optimal BP control in the nationwide implementation of the ICDM model in PHC facilities in South Africa and, potentially, other LMICs. MT201

    Unconventional gas: potential energy market impacts in the European Union

    In the interest of effective policymaking, this report seeks to clarify certain controversies and identify key gaps in the evidence base relating to unconventional gas. The scope of this report is restricted to the economic impact of unconventional gas on energy markets. As such, it principally addresses such issues as the energy mix, energy prices, supplies, consumption, and trade flows. Whilst this study touches on coal bed methane and tight gas, its predominant focus is on shale gas, which the evidence at this time suggests will be the form of unconventional gas with the most growth potential in the short to medium term. This report considers the prospects for the indigenous production of shale gas within the EU-27 Member States. It evaluates the available evidence on resource size, extractive technology, resource access and market access. This report also considers the implications for the EU of large-scale unconventional gas production in other parts of the world. This acknowledges the fact that many changes in the dynamics of energy supply can only be understood in the broader global context. It also acknowledges that the EU is a major importer of energy, and that it is therefore heavily affected by developments in global energy markets that are largely out of its control. JRC.F.3-Energy security

    Automated Verification of Exam, Cash, Reputation, and Routing Protocols

    Security is a crucial requirement in applications based on information and communication technology, especially when an open network such as the Internet is used. To ensure security in such applications, cryptographic protocols have been used. However, the design of security protocols is notoriously difficult and error-prone. Several flaws have been found in protocols that were claimed secure. Hence, cryptographic protocols must be verified before they are used. One approach to verifying cryptographic protocols is the use of formal methods, which have achieved many results in recent years. Formal methods focus on the analysis of protocol specifications modeled using, e.g., dedicated logics or process algebras. Formal methods can find flaws or prove that a protocol is secure under the "perfect cryptographic assumption" with respect to given security properties. However, they abstract away from implementation errors and side-channel attacks. In order to detect such errors and attacks, runtime verification can be used to analyze executions of systems or protocols. Moreover, runtime verification can help in cases where formal procedures take exponential time or suffer from termination problems. In this thesis, we contribute to the verification of cryptographic protocols with an emphasis on formal verification and automation. Firstly, we study exam protocols. We propose formal definitions for several authentication and privacy properties in the Applied Pi-Calculus. We also provide abstract definitions of verifiability properties. We analyze all these properties automatically using ProVerif on multiple case studies, and identify several flaws. Moreover, we propose several monitors to check exam requirements at runtime. These monitors are validated by analyzing executions of a real exam using the Java-based tool MARQ. Secondly, we propose a formal framework to verify the security properties of non-transferable electronic cash protocols. We define client privacy and forgery-related properties. Again, we illustrate our model by analyzing three case studies using ProVerif, and confirm several known attacks. Thirdly, we propose formal definitions of authentication, privacy, and verifiability properties of electronic reputation protocols. We discuss the proposed definitions, with the help of ProVerif, on a simple reputation protocol. Finally, we obtain a reduction result for verifying route validity of ad-hoc routing protocols in the presence of multiple independent attackers that do not share their knowledge.
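
    The properties above are analyzed symbolically with ProVerif in the Applied Pi-Calculus. As a rough, language-agnostic illustration of the underlying Dolev-Yao style reasoning (not ProVerif itself), the sketch below closes an attacker's knowledge under projection and symmetric decryption and checks whether a secret becomes derivable; the term encoding and the example trace are assumptions.

```python
# Symbolic terms: atoms are strings; ("pair", a, b) and ("senc", m, k) are constructors.

def close(knowledge):
    """Close attacker knowledge under projection and symmetric decryption (Dolev-Yao style)."""
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(know):
            if isinstance(t, tuple) and t[0] == "pair":
                new = {t[1], t[2]} - know
            elif isinstance(t, tuple) and t[0] == "senc" and t[2] in know:
                new = {t[1]} - know
            else:
                new = set()
            if new:
                know |= new
                changed = True
    return know

# Messages observed on the public channel: the secret encrypted under k, and
# (by a protocol flaw) the key k itself sent in the clear.
trace = {("senc", "secret", "k"), "k"}
print("secret" in close(trace))   # True: secrecy of `secret` is violated
```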

    Performance assessment of real-time data management on wireless sensor networks

    Technological advances in recent years have allowed the maturity of Wireless Sensor Networks (WSNs), which aim at performing environmental monitoring and data collection. This sort of network is composed of hundreds, thousands, or possibly even millions of tiny smart computers known as wireless sensor nodes, which may be battery powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU), and some memory. However, due to their small size and the requirement of low-cost nodes, sensor node resources such as processing power, storage, and especially energy are very limited. Once the sensors perform their measurements of the environment, the problem of storing and querying the data arises. In fact, the sensors have restricted storage capacity, and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external storage or local storage. External storage, called the warehousing approach, is a centralized system in which the data gathered by the sensors are periodically sent to a central database server where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases. The data is stored in a central database server and in the devices themselves, enabling one to query both. WSNs are used in a wide variety of applications, which may perform certain operations on collected sensor data. For certain applications, however, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. Yet the environment changes constantly and the data is collected at discrete moments in time. As such, the collected data has a temporal validity, and as time advances it becomes less accurate, until it no longer reflects the state of the environment. Thus, such applications (for example, industrial automation, aviation, and sensor networks) must query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, the design of efficient real-time data management solutions is necessary to deal with both time constraints and energy consumption. This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying in WSNs and on efficient real-time data management solutions for WSNs. First, the main specifications of real-time data management are identified and the real-time data management solutions for WSNs available in the literature are presented. Secondly, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. In fact, many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, instead of performing warehousing. In addition, this approach can provide quasi real-time query processing, because the most current data will be retrieved from the network. Thirdly, based on these two studies and considering the complexity of developing, testing, and debugging this kind of complex system, a model for a simulation framework for real-time database management on WSNs that uses the distributed approach, together with its implementation, is proposed.
This will help to explore various real-time database techniques for WSNs before deployment, saving money and time. Moreover, the proposed model can be improved by adding the simulation of protocols or by integrating part of this simulator into another available simulator. To validate the model, a case study considering real-time constraints as well as energy constraints is discussed. Fourth, a new architecture that combines statistical modeling techniques with the distributed approach, together with a query processing algorithm to optimize real-time user query processing, is proposed. This combination allows a query processing algorithm based on admission control that uses the error tolerance and the probabilistic confidence interval as admission parameters. Experiments based on real-world data sets as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing to save more energy while keeping latency low. Fundação para a Ciência e a Tecnologia
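
    A minimal sketch of the kind of admission control described above: a query carries an error tolerance and a required confidence, and it is answered from a local probabilistic model whenever the model is precise enough, otherwise it is forwarded to the sensor network at a higher energy cost. The model, names, and thresholds are illustrative, not those of the proposed architecture.

```python
from dataclasses import dataclass

@dataclass
class Query:
    sensor_id: str
    error_tolerance: float   # maximum acceptable error, in the unit of the reading
    confidence: float        # required confidence level, e.g. 0.95

@dataclass
class Model:
    mean: float              # model estimate of the current value
    std: float               # model uncertainty (standard deviation)

    def half_interval(self, confidence: float) -> float:
        # Normal-approximation half-width of the confidence interval.
        z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
        return z * self.std

def answer(query: Query, model: Model, read_from_network) -> float:
    """Admission control: answer from the model if it is precise enough, else query the sensors."""
    if model.half_interval(query.confidence) <= query.error_tolerance:
        return model.mean                          # cheap, quasi real-time answer
    return read_from_network(query.sensor_id)      # accurate but energy-expensive

# Illustrative use: the model is tight enough for a +/-0.5 degree, 95% confidence query.
query = Query("node-17", error_tolerance=0.5, confidence=0.95)
model = Model(mean=22.3, std=0.2)
print(answer(query, model, read_from_network=lambda sid: 22.4))
```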

    Analysis and detection of security vulnerabilities in contemporary software

    Contemporary application systems are implemented using an assortment of high-level programming languages, software frameworks, and third-party components. While this may help to lower development time and cost, the result is a complex system of interoperating parts whose behavior is difficult to fully and properly comprehend. This difficulty of comprehension often manifests itself in the form of program coding errors that are not directly related to security requirements but can have an impact on the security of the system. The thesis of this dissertation is that many security vulnerabilities in contemporary software may be attributed to unintended behavior due to unexpected execution paths resulting from the accidental misuse of software components. Unlike many typical programmer errors, such as missed boundary checks or missing user input validation, these software bugs are not easy to detect and avoid. While typical secure coding best practices, such as code reviews and dynamic and static analysis, offer little protection against such vulnerabilities, we argue that runtime verification of software execution against a specified expected behavior can help to identify unexpected behavior in the software. The dissertation explores how building software systems using components may lead to the emergence of unexpected software behavior that results in security vulnerabilities. The thesis is supported by a study of the evolution of a popular software product over a period of twelve years. While anomaly detection techniques could be applied to verify software behavior at runtime, there are several practical challenges in using them in large-scale contemporary software. A model of expected application execution paths and a methodology that can be used to build it during the software development cycle are proposed. The dissertation explores its effectiveness in detecting exploits of vulnerabilities enabled by software errors in a popular enterprise software product.
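
    The runtime-verification idea argued for above can be illustrated with a toy monitor (a sketch under assumed names, not the dissertation's methodology): the model of expected execution paths is a set of allowed call-to-call transitions recorded during development, and any observed transition outside the model is flagged as unexpected behavior.

```python
# Expected execution paths, built during development and testing, encoded as a set
# of allowed (caller, callee) transitions. All names here are illustrative.
EXPECTED = {
    ("handle_request", "parse_input"),
    ("parse_input", "validate"),
    ("validate", "render_page"),
}

def monitor(trace):
    """Yield every transition observed at runtime that the model does not allow."""
    for caller, callee in zip(trace, trace[1:]):
        if (caller, callee) not in EXPECTED:
            yield (caller, callee)

# A benign run stays inside the model; the second run takes an unexpected path
# (a component misuse that skips validation) and is flagged for investigation.
print(list(monitor(["handle_request", "parse_input", "validate", "render_page"])))
print(list(monitor(["handle_request", "parse_input", "render_page"])))
```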