
    Climate Science: Is it currently designed to answer questions?

    For a variety of inter-related cultural, organizational, and political reasons, progress in climate science and the actual solution of scientific problems in this field have moved at a much slower rate than would normally be possible. Not all these factors are unique to climate science, but the heavy influence of politics has served to amplify the role of the other factors. Such factors as the change in the scientific paradigm from a dialectic opposition between theory and observation to an emphasis on simulation and observational programs, the inordinate growth of administration in universities and the consequent increase in importance of grant overhead, and the hierarchical nature of formal scientific organizations are considered. This paper will deal with the origin of the cultural changes and with specific examples of the operation and interaction of these factors. In particular, we will show how political bodies act to control scientific institutions, how scientists adjust both data and even theory to accommodate politically correct positions, and how opposition to these positions is disposed of. Comment: 36 pages, no figures. v2: footnotes 16, 19, 20 added, footnote 17 changed, typos corrected. v3: description of John Holdren corrected, expanded discussion of I=PAT formula, typos corrected. v4: The reference to Deming (2005) added in v3 stated that a 1995 email in question was from Jonathan Overpeck. In fact, Deming had left the sender of the email unnamed. The revision v4 now omits the identification of Overpeck. However, the revision v4 now includes a more recent and verifiable reference to a 2005 email.

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Privacy policies are verbose, difficult to understand, take too long to read, and may be the least-read items on most websites even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users to attempt to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice and choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depend, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning, and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users. The findings show areas of common understanding across all groups for certain data collection and deletion practices, but also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understandings of some attributes of described data practices mean that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts and disagreement between experts and the other groups reflect that ambiguous wording in typical privacy policies undermines the ability of privacy policies to effectively convey notice of data practices to the general public. The results of this research will, consequently, have significant policy implications for the construction of the notice and choice framework and for the US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And, where websites are not effectively conveying privacy policies to consumers in a way that a “reasonable person” could, in fact, understand the policies, “notice and choice” fails as a framework. Such a failure has broad international implications since websites extend their reach beyond the United States.
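
    The abstract reports discrepancies in interpretation within and between the expert, knowledgeable, and non-expert groups but does not name the statistic used to quantify them. Purely as an illustrative sketch (not the paper's actual analysis), the code below computes Fleiss' kappa, one standard measure of agreement among multiple raters giving categorical answers such as yes / no / unclear; the counts used are invented.

    # Hypothetical sketch: Fleiss' kappa for inter-rater agreement on
    # categorical answers to privacy-policy questions. Not taken from the
    # paper; the example counts below are invented for illustration.

    def fleiss_kappa(ratings):
        """ratings[i][j] = number of raters who gave item i the j-th category."""
        N = len(ratings)                  # number of items (questions asked)
        n = sum(ratings[0])               # raters per item (assumed constant)
        k = len(ratings[0])               # number of answer categories
        p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
        P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
        P_bar = sum(P_i) / N              # mean observed agreement per item
        P_e = sum(p * p for p in p_j)     # agreement expected by chance
        return (P_bar - P_e) / (1 - P_e)

    # Toy data: 3 questions, 10 raters each, categories = (yes, no, unclear)
    print(fleiss_kappa([[8, 1, 1], [5, 4, 1], [2, 6, 2]]))

    Values near 1 indicate near-perfect agreement and values near 0 indicate agreement at roughly chance level, so a low kappa within or between groups would be one way to exhibit the kind of discordant interpretation the study describes.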

    WAR: Webserver for aligning structural RNAs

    We present an easy-to-use webserver that makes it possible to simultaneously run a number of state-of-the-art methods for multiple alignment and secondary structure prediction of noncoding RNA sequences, without having to download, install, and run each program locally. The results of all the programs are presented on a webpage and can easily be downloaded for further analysis. Additional measures are calculated for each program to make it easier to judge the individual predictions, and a consensus prediction taking all the programs into account is also calculated. The website is free and open to all users, and there is no login requirement. The webserver can be found at: http://genome.ku.dk/resources/war
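
    The abstract states that a consensus prediction over all the programs is calculated but does not describe how. As an illustrative sketch only (not the WAR webserver's actual procedure), the following combines several dot-bracket predictions for the same sequence by majority voting on individual base pairs.

    # Hypothetical sketch: majority-vote consensus over secondary-structure
    # predictions in dot-bracket notation. The real consensus computed by WAR
    # is not specified in the abstract; this only illustrates the idea of
    # combining several predictions for the same RNA sequence.

    def pairs_from_dotbracket(structure):
        """Return the set of base pairs (i, j) encoded by a dot-bracket string."""
        stack, pairs = [], set()
        for i, ch in enumerate(structure):
            if ch == '(':
                stack.append(i)
            elif ch == ')':
                pairs.add((stack.pop(), i))
        return pairs

    def consensus_dotbracket(predictions, threshold=0.5):
        """Keep base pairs proposed by more than `threshold` of the predictors."""
        n = len(predictions[0])
        counts = {}
        for s in predictions:
            for bp in pairs_from_dotbracket(s):
                counts[bp] = counts.get(bp, 0) + 1
        keep = [bp for bp, c in counts.items() if c / len(predictions) > threshold]
        out = ['.'] * n
        # A full implementation would also resolve conflicting or crossing pairs.
        for i, j in sorted(keep):
            out[i], out[j] = '(', ')'
        return ''.join(out)

    print(consensus_dotbracket(["((....))", "((...).)", "((....))"]))

    A real combiner would typically also weight each program by a confidence or reliability score, which is the kind of input the per-program measures mentioned above could provide.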

    Machine code and metaphysics: a perspective on software engineering

    A major, but too-little-considered problem for Software Engineering (SE) is a lack of consensus concerning Computer Science (CS) and how this relates to developing unpredictable computing technology. We consider some implications for SE of computer systems' differing scientific bases, exemplified by the International Standards Organisation's Open Systems Interconnection (ISO-OSI) layered architectural model. An architectural view allows comparison of computing technology components, facilitating a view of computing as a continuum. For example, at one layer of computer architecture, components written in Turing-complete machine language can be seen as deterministic and consistent with a theoretical paradigm of CS. At another layer, components (applications) closer to the human sphere have been seen as non-deterministic and inconsistent with theoretical CS. We compare the unpredictable development of computing technology against the cyclic legacy of technological advance and scientific discovery, and suggest that SE indicates an enabling cycle, discernible in previous scientific revolution(s), is stalled or possibly hidden. The CS consequence of divorcing technological advance from scientific consensus is particularly concerning. For example, human/computing events could be seen as unpredictable virtual phenomena that somehow extend the ontology of CS. Our approach challenges practical and philosophical boundaries by investigating whether applying scientific method (SM) resolves any SE/Science dichotomy.

    Prediction of secondary structures for large RNA molecules

    The prediction of correct secondary structures of large RNAs is one of the unsolved challenges of computational molecular biology. Among the major obstacles is the fact that accurate calculations scale as O(n⁴), so the computational requirements become prohibitive as the length increases. We present a new parallel, multicore, and scalable program called GTfold, which is one to two orders of magnitude faster than the de facto standard programs mfold and RNAfold for folding large RNA viral sequences and achieves comparable accuracy of prediction. We analyze the algorithm's concurrency and describe the parallelism for a shared-memory environment such as a symmetric multiprocessor or multicore chip. We are seeing a paradigm shift to multicore chips, and parallelism must be explicitly addressed to continue gaining performance with each new generation of systems. We provide a rigorous proof of correctness of an optimized algorithm for internal loop calculations, the internal loop speedup algorithm (ILSA), which reduces the time complexity of internal loop computations from O(n⁴) to O(n³), and show that exact algorithms such as ILSA can be executed with our method in an affordable amount of time. The proof gives insight into solving these kinds of combinatorial problems. We have documented detailed pseudocode of the algorithm for predicting minimum free energy secondary structures, which provides a base for implementing future algorithmic improvements and an improved thermodynamic model in GTfold. GTfold is written in C/C++ and freely available as open source from our website. M.S. thesis. Committee Chair: Bader, David; Committee Co-Chair: Heitsch, Christine; Committee Members: Harvey, Stephen; Vuduc, Richard.
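
    As a simplified illustration of the cubic dynamic programming that such folding programs build on, the sketch below implements the classic Nussinov base-pair-maximization recurrence. It is not GTfold's algorithm: GTfold, mfold, and RNAfold evaluate a full thermodynamic model (hairpin, stack, internal-loop, and multiloop energies) rather than simply counting pairs, and the ILSA result concerns the internal-loop term of that model.

    # Minimal sketch (not GTfold): Nussinov-style base-pair maximization,
    # a simplified stand-in for the O(n^3) dynamic programming that underlies
    # minimum free energy secondary structure prediction.

    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def nussinov_max_pairs(seq, min_loop=3):
        """Maximum number of nested base pairs, with a minimum hairpin loop size."""
        n = len(seq)
        dp = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):                  # span = j - i
            for i in range(n - span):
                j = i + span
                best = dp[i + 1][j]                          # i left unpaired
                if (seq[i], seq[j]) in PAIRS:
                    best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
                for k in range(i + 1, j):                    # bifurcation: O(n) per cell
                    best = max(best, dp[i][k] + dp[k + 1][j])
                dp[i][j] = best
        return dp[0][n - 1]

    print(nussinov_max_pairs("GGGAAAUCC"))  # small toy sequence

    Cells of the table with the same subsequence length are mutually independent, which is the general kind of concurrency a shared-memory multicore implementation can exploit; in the full thermodynamic model, the ILSA optimization described above is what brings the internal-loop term down from O(n⁴) to O(n³).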

    Toward a document evaluation methodology: What does research tell us about the validity and reliability of evaluation methods?

    Although the usefulness of evaluating documents has become generally accepted among communication professionals, the supporting research that puts evaluation practices empirically to the test is only beginning to emerge. This article presents an overview of the available research on troubleshooting evaluation methods. Four lines of research are distinguished, concerning the validity of evaluation methods, sample composition, sample size, and the implementation of evaluation results during revision.