4,724 research outputs found

    Optimization of the ROCA (CVE-2017-15361) Attack

    In 2017, Czech researchers found the vulnerability CVE-2017-15361 (the ROCA attack) in Infineon's proprietary RSA key generation algorithm. They found that a 2048-bit RSA key generated by this algorithm can be factored in only 140.8 CPU-years in the worst case. The algorithm turned out to have been used to generate the keys of 750 000 Estonian ID-cards. In this thesis, we implemented the ROCA attack and, based on the properties observed from the keys generated by the affected smartcards, found further optimizations which reduce the cost of the original attack from 140.8 CPU-years to 35.2 CPU-years for 90% of the keys and 70.4 CPU-years for the remaining 10%. As an additional contribution, we provide a parallelized version of the attack that can be executed on an HPC cluster.
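    The quoted figures imply an average attack cost across the key population; a quick back-of-the-envelope check, assuming the 90%/10% split stated in the abstract:

    ```python
    # Expected CPU-years to factor one vulnerable key under the
    # optimized attack, given the split reported in the abstract.
    worst_case_original = 140.8          # CPU-years, original ROCA attack
    cost_fast, share_fast = 35.2, 0.90   # optimized cost for 90% of keys
    cost_slow, share_slow = 70.4, 0.10   # optimized cost for the rest

    expected = cost_fast * share_fast + cost_slow * share_slow
    speedup = worst_case_original / expected

    print(f"expected cost: {expected:.2f} CPU-years")   # 38.72
    print(f"average speedup: {speedup:.2f}x")           # 3.64x
    ```

    So on average the optimized attack is roughly 3.6 times cheaper than the original worst case.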

    From the web of bibliographic data to the web of bibliographic meaning: structuring, interlinking and validating ontologies on the semantic web

    Bibliographic data sets have revealed good levels of technical interoperability observing the principles and good practices of linked data. However, they have a low level of quality from the semantic point of view, due to many factors: lack of a common conceptual framework for a diversity of standards often used together, reduced number of links between the ontologies underlying data sets, proliferation of heterogeneous vocabularies, underuse of semantic mechanisms in data structures, "ontology hijacking" (Feeney et al., 2018), point-to-point mappings, as well as limitations of semantic web languages for the requirements of bibliographic data interoperability. After reviewing such issues, a research direction is proposed to overcome the misalignments found by means of a reference model and a superontology, using Shapes Constraint Language (SHACL) to solve current limitations of RDF languages.

    What makes students satisfied? A discussion and analysis of the UK’s national student survey

    This paper analyses data from the National Student Survey, determining which groups of students expressed the greatest levels of satisfaction. We find students registered on clinical degrees and those studying humanities to be the most satisfied, with those in general engineering and media studies the least. We also find contentment to be higher among part-time students, and significantly higher among students at Russell Group and post-1992 universities. We further investigate the sub-areas that drive overall student satisfaction, finding teaching and course organisation to be the most important aspects, with resources and assessment and feedback far less relevant. We then develop a multi-attribute measure of satisfaction which we argue produces a more accurate and more stable reflection of overall student satisfaction than one based on a single question.
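    A multi-attribute measure of the kind described is typically a weighted combination of sub-area scores; a minimal sketch, where the weights and scores below are invented for illustration (the paper's actual weighting scheme is not given in the abstract):

    ```python
    # Hypothetical sketch: combine sub-area scores into one satisfaction
    # index, weighting the aspects the paper finds most important
    # (teaching, course organisation) more heavily. Values are illustrative.
    weights = {
        "teaching": 0.40,
        "organisation": 0.30,
        "assessment_feedback": 0.15,
        "resources": 0.15,
    }
    scores = {  # invented sub-area scores on a 1-5 Likert scale
        "teaching": 4.2,
        "organisation": 3.9,
        "assessment_feedback": 3.5,
        "resources": 4.0,
    }

    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    index = sum(weights[k] * scores[k] for k in weights)
    print(f"multi-attribute satisfaction index: {index:.3f}")  # 3.975
    ```

    Averaging over several weighted attributes is what makes such an index more stable than a single overall-satisfaction question: noise in any one sub-area is damped by its weight.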

    On Using Active Learning and Self-Training when Mining Performance Discussions on Stack Overflow

    Abundant data is the key to successful machine learning. However, supervised learning requires annotated data that are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to the examples that bring the most value to a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples about which a classifier is the most certain. We report our experiences of using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing performance of software components. We show that the training examples deemed most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training.
    Comment: Preprint of paper accepted for the Proc. of the 21st International Conference on Evaluation and Assessment in Software Engineering, 201
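    The combination described (uncertainty-based active learning plus confidence-based self-training) can be sketched with a toy one-dimensional classifier. This is an illustrative mock-up, not the paper's SVM pipeline; the threshold model and pool values are invented:

    ```python
    # Toy illustration of pool-based active learning + self-training.
    # The "classifier" is a decision threshold on a scalar feature, and
    # confidence is the distance from that threshold.

    def confidence(x, threshold):
        """Higher value = classifier is more certain about x."""
        return abs(x - threshold)

    def active_learning_query(pool, threshold):
        """AL: pick the example the classifier is LEAST certain about."""
        return min(pool, key=lambda x: confidence(x, threshold))

    def self_training_pick(pool, threshold):
        """Self-training: pick the example it is MOST certain about."""
        return max(pool, key=lambda x: confidence(x, threshold))

    pool = [0.1, 0.45, 0.52, 0.95]
    threshold = 0.5

    to_annotate = active_learning_query(pool, threshold)   # hardest example
    to_auto_label = self_training_pick(pool, threshold)    # easiest example

    print(to_annotate)    # 0.52 (closest to the boundary -> send to a human)
    print(to_auto_label)  # 0.95 (farthest from the boundary -> pseudo-label)
    ```

    The sketch also makes the paper's observation intuitive: AL deliberately queries the examples nearest the decision boundary, which is exactly why those examples are also the hardest ones for human annotators to agree on.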

    Evaluating Physician Compare: Benefits and Challenges of Scorecards for Individual Physicians


    She reads, he reads: gender differences and learning through self-help books

    Despite considerable scholarly attention given to self-help literature, there has been a lack of research about the experience of self-help reading. In this article, we explore gender differences in self-help reading. We argue that men and women read self-help books for different reasons and with different levels of engagement, and that they experience different outcomes from reading. We provide evidence from in-depth interviews with 89 women and 45 men. Women are more likely to seek out books of their own volition, to engage in learning strategies beyond reading, and to take action as a result of reading. Men are more likely to read books relating to careers, while women are more likely to read books about interpersonal relationships. We argue that these gender differences reflect profound political-economic and cultural changes, and that such changes also help explain the gendered evolution of adult, continuing, and higher education in recent decades.

    The NEST software development infrastructure

    Software development in the Computational Sciences has reached a critical level of complexity in recent years. This "complexity bottleneck" occurs both for the programming languages and technologies that are used during development and for the infrastructure which is needed to sustain the development of large-scale software projects and keep the code base manageable [1]. As development shifts from specialized, solution-tailored in-house code (often written by a single developer or only a few) towards more general software packages written by larger teams of programmers, it becomes inevitable to use professional software engineering tools in the realm of scientific software development as well. In addition, the move to collaboration-based large-scale projects (e.g. BrainScaleS) also means a larger user base, which depends and relies on the quality and correctness of the code.
    In this contribution, we present the tools and infrastructure that have been introduced over the years to support the development of NEST, a simulator for large networks of spiking neurons [2]. In particular, we show our use of:
    • version control systems
    • bug tracking software
    • web-based wiki and blog engines
    • frameworks for carrying out unit tests
    • systems for continuous integration
    References:
    [1] Gregory Wilson (2006). Where's the Real Bottleneck in Scientific Computing? American Scientist, 94(1): 5-6, doi:10.1511/2006.1.5.
    [2] Marc-Oliver Gewaltig and Markus Diesmann (2007). NEST (Neural Simulation Tool), Scholarpedia, 2(4): 1430.
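    For the unit-testing item in the list above, a minimal example of the style of test such frameworks run against scientific code. The function, values, and tolerance are invented for illustration; this is not NEST's actual API:

    ```python
    # Hypothetical unit test for a small piece of scientific code, in the
    # spirit of the testing frameworks mentioned in the abstract.
    import math
    import unittest

    def membrane_decay(v0, tau, t):
        """Exponential decay of a membrane potential toward 0 mV."""
        return v0 * math.exp(-t / tau)

    class TestMembraneDecay(unittest.TestCase):
        def test_initial_value(self):
            # At t = 0 the potential must equal the starting value.
            self.assertEqual(membrane_decay(-65.0, 10.0, 0.0), -65.0)

        def test_one_time_constant(self):
            # After one time constant the potential decays by a factor of e.
            self.assertAlmostEqual(
                membrane_decay(-65.0, 10.0, 10.0), -65.0 / math.e, places=9
            )
    ```

    Tests like these would be collected by the framework (e.g. `python -m unittest`) and executed automatically by the continuous-integration system on every change.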

    Closing the Disparities Gap in Healthcare Quality With Performance Measurement and Public Reporting

    Provides an overview of widening disparities in healthcare quality by race/ethnicity, socioeconomic status, and insurance. Discusses efforts to close the gap, including reporting quality measures and pay-for-performance, as well as challenges in data collection.