16 research outputs found

    Cryptographic Dysfunctionality: A Survey on User Perceptions of Digital Certificates


    Cyber Security Politics

    This book examines new and challenging political aspects of cyber security, presenting it as an issue defined by socio-technological uncertainty and political fragmentation. Structured along two broad themes and providing empirical examples of how socio-technical changes and political responses interact, the first part of the book looks at the current use of cyberspace in conflictual settings, while the second focuses on political responses by state and non-state actors in an environment defined by uncertainties. Within this, it highlights four key debates that encapsulate the complexities and paradoxes of cyber security politics from a Western perspective: how much political influence states can achieve via cyber operations and what contextual factors condition the (limited) strategic utility of such operations; the role of emerging digital technologies and how the dynamics of the tech innovation process reinforce the fragmentation of the governance space; how states attempt to uphold stability in cyberspace and, more generally, in their strategic relations; and how the shared responsibility of state, economy, and society for cyber security continues to be renegotiated in an increasingly trans-sectoral and transnational governance space. This book will be of much interest to students of cyber security, global governance, technology studies, and international relations.

    Is Internet privacy dead? Recovering Internet privacy in an increasingly surveillant society.

    Surveillance on the Internet is a new battleground that attracts attention from all walks of life in our society. Since the 2013 Snowden revelations, the practice of Internet surveillance has become common knowledge. This research critically examines whether or not Internet privacy is dead, with a specific focus on the technical aspects of the Internet, in order to show how technology is used both to enhance and to invade privacy; this focus sets it apart from the existing literature in the field. Three jurisdictions are chosen as case studies: the US and the UK, as western jurisdictions with different legal systems, and China, which has extensive surveillance and limited Internet privacy. The research explores the meaning of privacy in the information society, and the ways in which Internet privacy is treated in the three chosen jurisdictions are critically analysed and discussed. The findings reveal that Internet privacy is being eroded in both the US and the UK, and it is hard to be optimistic about the future in the light of the 2013 Snowden revelations and ongoing changes to legislation, particularly the Investigatory Powers Bill in the UK. Through an examination of the evolution of the Internet in China and its nascent and evolving laws relating to data protection and privacy, the findings demonstrate that China exercises a great deal of control over its Internet and has implemented technical measures of surveillance, effectively meaning that Internet privacy in China is dead. Most importantly, the examination of these three jurisdictions provides strong evidence that these nation states are not so different when it comes to the invasion of Internet privacy. Despite this, there is still hope, and the research concludes by examining possible ways to prevent the demise of Internet privacy.

    A New Approach for Predicting Security Vulnerability Severity in Attack Prone Software Using Architecture and Repository Mined Change Metrics

    Billions of dollars are lost every year to successful cyber attacks that are fundamentally enabled by software vulnerabilities. Modern cyber attacks increasingly threaten individuals, organizations, and governments, causing service disruption, inconvenience, and costly incident response. Given that such attacks are primarily enabled by software vulnerabilities, this work examines the efficacy of using change metrics, along with architectural burst and maintainability metrics, to predict which modules and files should be analyzed or tested further to excise vulnerabilities prior to release. The problem addressed by this research is the residual vulnerability problem: vulnerabilities that evade detection and persist in released software. Many modern software projects exceed a million lines of code and are composed of reused components of varying maturity. The sheer size of modern software, along with the reuse of existing open source modules, complicates the questions of where to look, and in what order to look, for residual vulnerabilities. Traditional code complexity metrics, along with newer frequency-based churn metrics (mined from software repository change history), are selected specifically for their relevance to the residual vulnerability problem. We compare the performance of these complexity and churn metrics to architectural-level change burst metrics, automatically mined from the git repositories of the Mozilla Firefox Web Browser, the Apache HTTP Web Server, and the MySQL Database Server, for the purpose of predicting attack-prone files and modules. We offer new empirical data quantifying the relationship between our selected metrics and the severity of vulnerable files and modules. Severity is measured using the Base Score Metric of the Common Vulnerability Scoring System (CVSS), taken from applicable Common Vulnerabilities and Exposures (CVE) entries in the NIST National Vulnerability Database and associated with vulnerable files and modules via automated and semi-automated techniques. Our results show that architectural-level change burst metrics can perform well in situations where more traditional complexity metrics fail as reliable estimators of vulnerability severity. In particular, results from our experiments on the Apache HTTP Web Server indicate that architectural-level change burst metrics correlate highly with the severity of known-vulnerable modules, and do so using information directly available from the version control repository change-set (i.e., commit) history.
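
    As a rough illustration of the kind of churn mining this abstract describes (a minimal sketch, not the authors' actual pipeline), the code below counts per-file churn from a git repository's commit history and rank-correlates it with CVSS base scores. The repository path and the severity_by_file mapping are assumed inputs; in the study, severity would come from NVD/CVE cross-referencing.

        # Minimal sketch: mine per-file churn from git history and correlate
        # it with CVSS base scores. Not the authors' pipeline; illustrative only.
        import subprocess
        from collections import defaultdict
        from scipy.stats import spearmanr  # pip install scipy

        def churn_by_file(repo_path):
            """Count commits touching each file and total lines added+deleted."""
            log = subprocess.run(
                ["git", "-C", repo_path, "log", "--numstat", "--format="],
                capture_output=True, text=True, check=True).stdout
            commits, lines = defaultdict(int), defaultdict(int)
            for row in log.splitlines():
                parts = row.split("\t")
                # numstat rows look like "added<TAB>deleted<TAB>path";
                # binary files report "-" and are skipped by the isdigit check.
                if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                    added, deleted, path = int(parts[0]), int(parts[1]), parts[2]
                    commits[path] += 1
                    lines[path] += added + deleted
            return commits, lines

        # severity_by_file is a hypothetical mapping from file path to the CVSS
        # base score of the CVE(s) associated with that file (e.g. built by
        # cross-referencing NVD entries, as the abstract describes).
        def churn_severity_correlation(repo_path, severity_by_file):
            _, lines = churn_by_file(repo_path)
            files = [f for f in severity_by_file if f in lines]
            rho, p = spearmanr([lines[f] for f in files],
                               [severity_by_file[f] for f in files])
            return rho, p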

    Security analyses for detecting deserialisation vulnerabilities : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand

    An important task in software security is to identify potential vulnerabilities. Attackers exploit security vulnerabilities in systems to obtain confidential information, to breach system integrity, and to make systems unavailable to legitimate users. In recent years, particularly in 2012, there has been a rise in reported Java vulnerabilities. One type of vulnerability involves (de)serialisation, a commonly used feature for storing objects or data structures in an external format and restoring them. In 2015, a deserialisation vulnerability was reported in Apache Commons Collections, a popular Java library, which affected numerous Java applications. Another major deserialisation-related vulnerability, affecting 55% of Android devices, was also reported in 2015. Both of these vulnerabilities allowed arbitrary code execution on vulnerable systems by malicious users, a serious risk, and prompted the Java community to issue patches fixing serialisation-related vulnerabilities in both the Java Development Kit and libraries. Despite attention to coding guidelines and defensive strategies, deserialisation remains a risky feature and a potential weakness in object-oriented applications; deserialisation-related vulnerabilities (both denial-of-service and remote code execution) continue to be reported for Java applications. Further, deserialisation is a case of parsing, where external data is parsed from its external representation into a program's internal data structures; hence, similar vulnerabilities can be present in parsers for file formats and serialisation languages. The problem is, given a software package, to detect either injection or denial-of-service vulnerabilities and to propose strategies to prevent attacks that exploit them. The research reported in this thesis casts the detection of deserialisation-related vulnerabilities as a program analysis task. The goal is to automatically discover this class of vulnerabilities using program analysis techniques, and to experimentally evaluate the efficiency and effectiveness of the proposed methods on real-world software. We use multiple techniques to detect reachability to sensitive methods, and taint analysis to detect whether untrusted user input can result in security violations. Challenges in using program analysis for detecting deserialisation vulnerabilities include addressing soundness issues in analysing dynamic features in Java (e.g., native code). Another hurdle is that available techniques mostly target the analysis of applications rather than library code. In this thesis, we develop techniques to address soundness issues related to analysing Java code that uses serialisation, and we adapt dynamic techniques such as fuzzing to address precision issues in the results of our analysis. We also use the results from our analysis to study libraries in other languages, checking whether they are vulnerable to deserialisation-type attacks, and we discuss mitigation measures engineers can take to protect their software against such vulnerabilities. In our experiments, we show that we can find unreported vulnerabilities in Java code, and that these vulnerabilities are also present in widely used serialisers for popular languages such as JavaScript, PHP and Rust. In our study, we discovered previously unknown denial-of-service security bugs in applications and libraries that parse external data formats such as YAML, PDF and SVG.
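
    For readers unfamiliar with this vulnerability class, a minimal Python illustration follows (the thesis itself targets Java and serialisers in other languages): PyYAML's unsafe loader will construct arbitrary objects from tagged input, while safe_load rejects such tags. The payload string is a contrived example.

        # Minimal illustration of why deserialising untrusted input is
        # dangerous: PyYAML's unsafe loader constructs arbitrary Python
        # objects from tagged input. Illustrative only; not from the thesis.
        import yaml  # pip install pyyaml

        payload = "!!python/object/apply:os.system ['echo pwned']"

        # Unsafe: yaml.load with an unrestricted loader would execute the
        # constructor encoded in the document -- here, a shell command.
        # yaml.load(payload, Loader=yaml.UnsafeLoader)   # do NOT do this

        # Safe: safe_load only builds plain scalars, lists, and dicts, and
        # rejects application-specific tags with a ConstructorError.
        try:
            yaml.safe_load(payload)
        except yaml.constructor.ConstructorError as err:
            print("rejected untrusted tag:", err)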

    Eesti humanitaar- ja loodusteaduslikud kogud : seisund, kasutamine, andmebaasid [Estonian humanities and natural science collections: status, use, databases]

    The national programme "Humanities and Natural Science Collections" was approved for the years 2004–2008 on 24 December 2003 by Government of the Republic Order No. 865-k and placed under the administration of the Ministry of Education and Research. A programme steering committee was established (26.01.2004) and set to work immediately; an expert council followed somewhat later (9.03.2006), whose work yielded the first interim summary of the programme's results, in the form of a conference (16.11.2006) that attracted great interest. This volume, too, can be regarded as a report on the programme's results. Judging by the table of contents, the reader should get a fairly complete overview of the current state of the collections; whether it is also exhaustive may become clear a little later. In any case, this summary provides a suitable basis for setting further goals. To prepare the above-mentioned Government Order, in other words to draft the national programme, the Ministry of Education and Research formed a commission in January 2003, which within a few months had to gain an overview of the humanities and natural science collections held mainly by the universities, but also by other institutions, of their condition, and of the measures needed to improve the situation. An abridged version of that overview became the text of the national programme, which was published in Riigi Teataja and which can serve as a good baseline for assessing the progress achieved with the programme's support. Naturally, every keeper of collections has a far more detailed picture of the changes affecting their own collections than such a comparison would allow, but the comparison still supports a more general and comprehensive view. As a participant in the process, I can say, even without going deeper into the materials, that thanks to the national programme (funded at quite an adequate level in recent years), and especially to the enthusiastic work of the people who care for the collections, the condition of Estonia's humanities and natural science collections has improved substantially over the past five years in securing storage conditions, in preservation, and in modernising access. On that basis, I am inclined to think that the continuous modernisation of work with the collections should now come to the fore. Hopefully this volume will provide important impulses toward that end. Dimitri Kaljo, Academician, Chairman of the Steering Committee. The publication of this work was funded by the Ministry of Education and Research's national programme "Estonian Humanities and Natural Science Collections" (2004–2008).

    Toward Effective Knowledge Discovery in Social Media Streams

    The last few decades have seen unprecedented growth in the amount of new data. New computing and communications resources, such as cloud data platforms and mobile devices, have enabled individuals to contribute new ideas, share points of view, and exchange newsworthy bits with each other at a previously unfathomable rate. While there are many ways a modern person can communicate digitally with others, social media outlets such as Twitter or Facebook have occupied much of the focus of interpersonal social networking in recent years. The millions of pieces of content published on social media sites have been both a blessing and a curse for those trying to make sense of the discourse. On one hand, the sheer amount of easily available, real-time, contextually relevant content has been a cause of much excitement in academia and industry. On the other hand, the amount of new, diverse content continuously published on social sites makes it difficult for researchers and industry participants to grasp effectively. The goal of this thesis is therefore to discover a set of approaches and techniques that would help data miners quickly develop intuitions about the happenings in the social media space. To that aim, I concentrate on effectively visualizing social media streams as hierarchical structures, as such structures have been shown to be useful in human sense making. Ph.D., Information Studies -- Drexel University, 201
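
    As a toy sketch of the kind of hierarchy such visualization starts from (invented for illustration; the thesis's actual method is not shown here), the snippet below groups a stream of posts by hashtag and then by co-occurring term, yielding a nested counting structure that a treemap or sunburst chart could render.

        # Toy sketch: build a two-level hashtag -> term hierarchy from a
        # stream of posts. Posts and grouping rule are invented examples.
        from collections import Counter, defaultdict

        posts = [
            "#security breach at vendor", "#security patch released",
            "#privacy new policy", "#privacy policy debate",
        ]

        hierarchy = defaultdict(Counter)
        for post in posts:
            words = post.split()
            tags = [w for w in words if w.startswith("#")]
            terms = [w for w in words if not w.startswith("#")]
            for tag in tags:
                hierarchy[tag].update(terms)  # count terms under each hashtag

        for tag, terms in hierarchy.items():
            print(tag, dict(terms))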

    On the malware detection problem : challenges and novel approaches

    Advisor: André Ricardo Abed Grégio. Co-advisor: Paulo Lício de Geus. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defended in Curitiba. Includes references. Area of concentration: Computer Science. Abstract: Malware is a major threat to most current computer systems, causing image damage and financial losses to individuals and corporations, and thus requiring the development of detection solutions that prevent malware from causing harm and allow computers to be used safely. Many initiatives and solutions to detect malware have been proposed over time, from anti-viruses (AVs) to sandboxes, but effective and efficient malware detection remains an open problem. In this work, I therefore examine some malware detection challenges, pitfalls, and consequences, to contribute towards increasing the capabilities of malware detection systems. More specifically, I propose a new approach for conducting malware research experiments in a practical but still scientific manner, and I leverage this approach to investigate four issues: (i) the need to understand context to allow proper detection of localized threats; (ii) the need to develop better metrics for evaluating AV solutions; (iii) the feasibility of leveraging hardware-software collaboration for efficient AV implementation; and (iv) the need to predict future threats to allow faster incident response.
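
    To make issue (ii) concrete, a minimal sketch of scoring AV products over a labelled sample set follows. The verdicts, ground-truth labels, and metric choice (detection rate and false positive rate) are illustrative assumptions, not the thesis's proposed metrics.

        # Minimal sketch: evaluate AV products against ground truth.
        # Verdicts and labels are invented; real metrics would come from
        # scan results over a malware/goodware corpus.
        verdicts = {  # sample_id -> {av_name: flagged_as_malicious?}
            "s1": {"av_a": True,  "av_b": True},
            "s2": {"av_a": True,  "av_b": False},
            "s3": {"av_a": False, "av_b": False},
        }
        is_malware = {"s1": True, "s2": True, "s3": False}  # ground truth

        def detection_and_fp_rate(av):
            """Return (true positive rate, false positive rate) for one AV."""
            tp = sum(verdicts[s][av] and is_malware[s] for s in verdicts)
            fp = sum(verdicts[s][av] and not is_malware[s] for s in verdicts)
            malicious = sum(is_malware.values())
            benign = len(is_malware) - malicious
            return tp / malicious, (fp / benign if benign else 0.0)

        for av in ("av_a", "av_b"):
            print(av, detection_and_fp_rate(av))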