
    Cyber-crime Science = Crime Science + Information Security

    Cyber-crime Science is an emerging area of study that aims to prevent cyber-crime by combining security protection techniques from Information Security with the empirical research methods used in Crime Science. Information Security research has developed techniques for protecting the confidentiality, integrity, and availability of information assets, but is less strong on the empirical study of the effectiveness of these techniques. Crime Science studies the effect of crime prevention techniques empirically in the real world and proposes improvements to these techniques on that basis. Combining both approaches, Cyber-crime Science transfers and further develops Information Security techniques to prevent cyber-crime and empirically studies their effectiveness in the real world. In this paper we review the main contributions of Crime Science to date, illustrate its application to a typical Information Security problem, namely phishing, explore the interdisciplinary structure of Cyber-crime Science, and present an agenda for research in Cyber-crime Science in the form of a set of suggested research questions.

    Cognitive-support code review tools: improved efficiency of change-based code review by guiding and assisting reviewers

    Code reviews, i.e., systematic manual checks of program source code by other developers, have been an integral part of the quality assurance canon in software engineering since their formalization by Michael Fagan in the 1970s. Computer-aided tools supporting the review process have existed for decades and are now widely used in software development practice. Despite this long history and widespread use, current tools hardly go beyond simple automation of routine tasks. The core objective of this thesis is to systematically develop options for improved tool support for code reviews and to evaluate them in the interplay of research and practice.

    The starting point is a comprehensive analysis of the state of research and practice. Interview and survey data collected in this thesis show that review processes in practice are now largely change-based, i.e., based on checking the changes resulting from the iterative-incremental evolution of software. This is true not only for open source projects and large technology companies, as shown in previous research, but across the industry. Despite the common change-based core process, there are various differences in the details of the review processes. The thesis identifies possible factors influencing these differences; important ones appear to be the process variants supported and promoted by the review tool in use. In contrast, the tool in use has little influence on the fundamental decision to use regular code reviews; the interview and survey data suggest that this decision depends more on cultural factors. Overall, the analysis of the state of research and practice shows that there is potential for developing better code review tools, and that this potential is associated with the opportunity to increase efficiency in software development.

    The present thesis argues that the most promising approach to better review support is reducing the reviewer's cognitive load when reviewing large code changes. Results of a controlled experiment support this reasoning. The thesis explores various possibilities for cognitive support, two of them in detail: guiding the reviewer by identifying and presenting a good order in which to read the code changes under review, and assisting the reviewer through automatic identification of change parts that are irrelevant for review (a toy illustration of the latter follows below). In both cases, empirical data is used both to generate and to test hypotheses. To demonstrate the practical suitability of the techniques, they are also used in a partner company in regular development practice. This evaluation of the cognitive support techniques in practice required a review tool suitable both for use in the partner company and as a platform for review research. As no such tool was available, the code review tool "CoRT" was developed; here, too, the work combined an analysis of the state of research, scientific studies to support design decisions, and evaluation in practical use.

    Overall, the results of this thesis can be roughly divided into three blocks. Researchers and practitioners working on improving review tools receive an empirically and theoretically sound catalog of requirements for cognitive-support review tools, available explicitly in the form of essential requirements and possible forms of realization, and implicitly in the form of the tool "CoRT". The second block consists of contributions to the fundamentals of review research, ranging from the comprehensive analysis of review processes in practice to the analysis of the impact of cognitive abilities (specifically, working memory capacity) on review performance. The third block comprises innovative methodological approaches developed within this thesis, e.g., the use of process simulation for the development of heuristics for development teams and new approaches in repository and data mining.
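    The second assistance technique mentioned in the abstract, automatically flagging change parts that are irrelevant for review, can be illustrated with a small Python sketch. This is only a toy approximation under our own assumptions (treating whitespace-only and comment-only edits as review-irrelevant); it is not the empirically derived classification built into "CoRT".

import re

def normalize(line: str) -> str:
    """Drop trailing '//' comments and all whitespace (illustrative heuristic)."""
    return re.sub(r"\s+", "", re.sub(r"//.*$", "", line))

def is_probably_irrelevant(old_lines, new_lines) -> bool:
    """Heuristically flag a change part as irrelevant for review.

    The change part is flagged when the old and new code are identical once
    whitespace and trailing '//' comments are ignored, i.e. the edit is
    purely cosmetic. This is a toy rule, not the classifier used in CoRT.
    """
    normalized_old = [normalize(l) for l in old_lines if normalize(l)]
    normalized_new = [normalize(l) for l in new_lines if normalize(l)]
    return normalized_old == normalized_new

# A formatting-only hunk is flagged as irrelevant, a behavioral change is not.
print(is_probably_irrelevant(["x=1;  // init"], ["x = 1;  // init"]))  # True
print(is_probably_irrelevant(["x = 1;"], ["x = 2;"]))                  # False

    In a real review tool such rules would be derived and validated empirically, as the thesis describes; the sketch only shows where such a classifier would sit in the review workflow.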

    Safety management of a complex R and D ground operating system

    A perspective on safety program management was developed for a complex R&D operating system, such as the NASA-Lewis Research Center. Using a systems approach, hazardous operations are subjected to third-party reviews by designated-area safety committees and are maintained under safety permit controls. To ensure personnel alertness, emergency containment forces and employees are trained in dry-run emergency simulation exercises. The keys to real safety effectiveness are top management support and visibility of residual risks.

    Systemic risk in the financial sector: a review and synthesis

    In a financial crisis, an initial shock gets amplified while it propagates to other financial intermediaries, ultimately disrupting the financial sector. We review the literature on such amplification mechanisms, which create externalities from risk taking. We distinguish between two classes of mechanisms: contagion within the financial sector and pro-cyclical connection between the financial sector and the real economy. Regulation can diminish systemic risk by reducing these externalities. However, regulation of systemic risk faces several problems. First, systemic risk and its costs are difficult to quantify. Second, banks have strong incentives to evade regulation meant to reduce systemic risk. Third, regulators are prone to forbearance. Finally, the inability of governments to commit not to bail out systemic institutions creates moral hazard and reduces the market’s incentive to price systemic risk. Strengthening market discipline can play an important role in addressing these problems, because it reduces the scope for regulatory forbearance, does not rely on complex information requirements, and is difficult to manipulate.
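    The contagion mechanism can be made concrete with a toy simulation (our own illustration, not a model taken from the paper): banks hold claims on one another, and when a shock to one bank's outside assets exhausts its equity buffer, the shortfall is passed on to its creditors in proportion to their exposures, potentially triggering further failures.

def propagate_losses(equity, exposures, initial_loss, rounds=10):
    """Toy interbank contagion.

    equity:       list of capital buffers, one per bank
    exposures:    exposures[i][j] is the claim of bank i on bank j
    initial_loss: (bank index, loss on that bank's outside assets)

    When a bank's buffer is exhausted, the remaining shortfall is passed on
    to its creditors pro rata to their claims on it. This repeats until no
    new bank fails or the round limit is reached.
    """
    equity = list(equity)
    bank, loss = initial_loss
    equity[bank] -= loss
    for _ in range(rounds):
        failed = [j for j, e in enumerate(equity) if e < 0]
        if not failed:
            break
        for j in failed:
            shortfall = -equity[j]
            equity[j] = 0.0
            total_claims = sum(exposures[i][j] for i in range(len(equity)))
            if total_claims == 0:
                continue
            for i in range(len(equity)):
                # Creditors write down their claims in proportion to the shortfall.
                equity[i] -= shortfall * exposures[i][j] / total_claims
    return equity

# Three banks; bank 0's outside-asset loss exceeds its 2.0 buffer, and the
# remaining 4.0 shortfall is split between its two creditors (banks 1 and 2).
print(propagate_losses(equity=[2.0, 3.0, 3.0],
                       exposures=[[0.0, 0.0, 0.0],
                                  [4.0, 0.0, 0.0],
                                  [4.0, 0.0, 0.0]],
                       initial_loss=(0, 6.0)))   # -> [0.0, 1.0, 1.0]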

    Having options alters the attractiveness of familiar versus novel faces: sex differences and similarities

    Although online dating allows us to access a wider pool of romantic partners, choice could induce an ‘assessment mindset’, orienting us toward ‘optimal’ or alternative partners and undermining our willingness to commit or remain committed to someone. Contextual changes in judgements of facial attractiveness can shed light on this issue. We directly test this proposal by activating a context where participants imagine choosing between items in picture slideshows (dates or equally attractive desserts), observing its effects on attraction to i) faces on second viewing and ii) novel versus familiar identities. Single women, relative to single men, were less attracted to the same face on second viewing (Experiments 2 and 4), with this sex difference only observed after imagining not ‘matching’ with any romantic dates in our slideshow (i.e., low choice, Experiment 4). No equivalent sex differences were observed in the absence of experimental choice slideshows (Experiment 3), and these effects (Experiment 2) were not moderated by slideshow content (romantic dates or desserts) or choice set size (five versus fifteen items). Following slideshows, novel faces were more attractive than familiar faces (Experiment 1), with this effect stronger in men than in women (Experiment 2), and stronger across both sexes after imagining ‘matching’ with desired romantic dates (i.e., high choice, Experiment 4). Our findings suggest that familiarity does not necessarily ‘breed liking’ when we have the autonomy to choose, revealing lower-order socio-cognitive mechanisms that could underpin online interactions, such as when browsing profiles and deciding how to allocate effort to different users.

    An Introduction to Programming for Bioscientists: A Python-based Primer

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in the biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a 'variable', the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.
    Comment: 65 pages total, including 45 pages text, 3 figures, 4 tables, numerous exercises, and 19 pages of Supporting Information; currently in press at PLOS Computational Biology.
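    The primer's closing project, computing the Hamming distance between two DNA sequences, reduces to a few lines of Python; the sketch below is a minimal illustration of the underlying computation and omits the graphical user interface described in the abstract.

def hamming_distance(seq1: str, seq2: str) -> int:
    """Number of positions at which two equal-length DNA sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("Hamming distance is defined only for sequences of equal length")
    return sum(1 for a, b in zip(seq1, seq2) if a != b)

print(hamming_distance("GATTACA", "GACTATA"))  # 2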

    The expectations trap hypothesis

    The authors examine the inflation take-off of the early 1970s in terms of the expectations trap hypothesis, according to which fear of violating the public’s inflation expectations pushed the Fed into producing high inflation. This interpretation is compared with the Phillips curve hypothesis, according to which the Fed produced high inflation as the unfortunate byproduct of a conscious decision to jump-start a weak economy. Which hypothesis is more plausible has important implications for what should be done to prevent future inflation flare-ups.
    Inflation (Finance); Phillips curve; Economic conditions - United States

    Economics of Policy Errors and Learning in the European Union. Bruges European Economic Policy (BEEP) Briefing 31/2013

    Policy errors occur regularly in EU Member States. Learning from these errors can be beneficial. This paper explains how the European Union can facilitate this learning. At present, much attention is given to “best practices”. But learning from mistakes is also valuable. The paper develops the concept of “avoidable error” and examines evidence from infringement proceedings and special reports of the European Court of Auditors which indicates that Member States do indeed commit avoidable errors. The paper considers how Member States may take measures not to repeat avoidable or predictable errors and makes appropriate proposals.

    The expectations trap hypothesis

    This article explores a hypothesis about the take-off in inflation in the early 1970s. According to the expectations trap hypothesis, the Fed was driven to high money growth by a fear of violating the expectations of high inflation that existed at the time. The authors argue that this hypothesis is more compelling than the Phillips curve hypothesis, according to which the Fed produced the high inflation as an unfortunate by-product of a conscious decision to jump-start a weak economy.
    Inflation (Finance); Phillips curve
