653 research outputs found

    Power spectral measurements of clear-air turbulence to long wavelengths for altitudes up to 14,000 meters

    Measurements of three components of clear-air atmospheric turbulence were made with an airplane incorporating a special instrumentation system to provide accurate data resolution to wavelengths of approximately 12,500 m (40,000 ft). Flight samplings covered an altitude range from approximately 500 to 14,000 m (1500 to 46,000 ft) in various meteorological conditions. Individual autocorrelation functions and power spectra for the three turbulence components from 43 data runs, taken primarily from mountain wave and jet stream encounters, are presented. The flight location (Eastern or Western United States), date, time, run length, intensity level (standard deviation), and values of statistical degrees of freedom for each run are provided in tabular form. The data presented should provide adequate information for detailed meteorological correlations. Some time histories that contain predominant low-frequency wave motion are also presented.
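
    For orientation only, the sketch below is a generic power-spectrum estimate, not the paper's instrumentation or data pipeline: it applies Welch's method to a synthetic gust record and converts frequency to wavelength through an assumed true airspeed. The sample rate, airspeed, and record length are placeholder values.

        # Illustrative sketch: PSD of a synthetic turbulence component via Welch's
        # method; all parameters are assumptions, not values from the study.
        import numpy as np
        from scipy.signal import welch

        fs = 20.0                  # sample rate in Hz (assumed)
        true_airspeed = 200.0      # true airspeed in m/s (assumed)
        rng = np.random.default_rng(0)
        gust = rng.standard_normal(60_000)    # stand-in for a measured gust record

        # Longer segments resolve longer wavelengths but average fewer segments,
        # i.e. fewer statistical degrees of freedom per spectral estimate.
        freqs, psd = welch(gust, fs=fs, nperseg=8192)

        # Taylor's frozen-turbulence hypothesis: wavelength = airspeed / frequency.
        wavelengths = true_airspeed / freqs[1:]
        print(wavelengths.max(), psd[1:].max())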

    How do design and evaluation interrelate in HCI research?

    Presented at DIS 2006, the 6th ACM Conference on Designing Interactive Systems, University Park, PA. DOI: http://dx.doi.org/10.1145/1142405.1142421. Human-Computer Interaction (HCI) is defined by the Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI) as “a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of the major phenomena surrounding them” [18]. Within HCI, some authors focus primarily on designing for usability, while others focus primarily on evaluating usability. The relationship between these communities is not well understood. We use author co-citation analysis, multivariate techniques, and visualization tools to explore the relationships between these communities. The results of the analysis revealed seven clusters that could be identified as Design Theory and Complexity, Design Rationale, Cognitive Theories and Models, Cognitive Engineering, Computer-Supported Cooperative Work (CSCW), Participatory Design, and User-Centered Design.
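
    To make the method concrete, a minimal sketch of author co-citation analysis follows. The reference lists and author names are toy data, not the study's dataset, and hierarchical clustering stands in for the multivariate techniques the authors used.

        # Toy author co-citation analysis: count how often pairs of authors are
        # cited together, then cluster authors on the resulting similarity matrix.
        import numpy as np
        from itertools import combinations
        from scipy.cluster.hierarchy import linkage, fcluster

        # Each citing paper contributes the set of authors it references (toy data).
        reference_lists = [
            {"Norman", "Nielsen", "Card"},
            {"Norman", "Card", "Carroll"},
            {"Nielsen", "Carroll"},
            {"Card", "Carroll", "Norman"},
        ]

        authors = sorted(set().union(*reference_lists))
        index = {a: i for i, a in enumerate(authors)}
        cocite = np.zeros((len(authors), len(authors)))

        # Two authors are co-cited whenever one paper cites both of them.
        for refs in reference_lists:
            for a, b in combinations(sorted(refs), 2):
                cocite[index[a], index[b]] += 1
                cocite[index[b], index[a]] += 1

        # Turn co-citation counts into distances and cluster hierarchically.
        distance = cocite.max() - cocite
        np.fill_diagonal(distance, 0.0)
        condensed = distance[np.triu_indices(len(authors), k=1)]
        labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
        print(dict(zip(authors, labels)))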

    Theoretical studies of the historical development of the accounting discipline: a review and evidence

    Many existing studies of the development of accounting thought have either been atheoretical or have adopted Kuhn's model of scientific growth. The limitations of this 35-year-old model are discussed. Four different general neo-Kuhnian models of scholarly knowledge development are reviewed and compared with reference to an analytical matrix. The models are found to be mutually consistent, with each focusing on a different aspect of development. A composite model is proposed. Based on a hand-crafted database, author co-citation analysis is used to map empirically the entire literature structure of the accounting discipline during two consecutive time periods, 1972–81 and 1982–90. The changing structure of the accounting literature is interpreted using the proposed composite model of scholarly knowledge development.

    Metacognition as Evidence for Evidentialism

    Metacognition is the monitoring and controlling of cognitive processes. I examine the role of metacognition in ‘ordinary retrieval cases’, cases in which it is intuitive that via recollection the subject has a justified belief. Drawing on psychological research on metacognition, I argue that evidentialism has a unique, accurate prediction in each ordinary retrieval case: the subject has evidence for the proposition she justifiedly believes. But, I argue, process reliabilism has no unique, accurate predictions in these cases. I conclude that ordinary retrieval cases better support evidentialism than process reliabilism. This conclusion challenges several common assumptions. One is that non-evidentialism alone allows for a naturalized epistemology, i.e., an epistemology that is fully in accordance with scientific research and methodology. Another is that process reliabilism fares much better than evidentialism in the epistemology of memory.

    Whither Evidentialist Reliabilism?

    Evidentialism and Reliabilism are two of the main contemporary theories of epistemic justification. Some authors have thought that the theories are not incompatible with each other, and that a hybrid theory which incorporates elements of both should be taken into account. More recently, other authors have argued that the resulting theory is well-placed to deal with fine-grained doxastic attitudes (credences). In this paper I review the reasons for adopting this kind of hybrid theory, paying attention to the case of credences and the notion of probability involved in their treatment. I argue that the notion of probability in question can only be an epistemic (or evidential) kind of probability. I conclude that the resulting theory will be incompatible with Reliabilism in one important respect: it cannot deliver on the reductivist promise of Reliabilism. I also argue that attention to the justification of basic beliefs reveals limitations in the Evidentialist framework as well. The theory that results from the right combination of Evidentialism and Reliabilism, therefore, is neither Evidentialist nor Reliabilist.

    Madelung Disease: MR Findings

    Summary: Two cases of Madelung disease (benign symmetrical lipomatosis) are presented. The MR findings in this striking condition are demonstrated. Short-repetition-time/short-echo-time sequences nicely show the relationship of the cervical lipomatous accumulations to the airway and major neurovascular structures in the carotid spaces. Fat-suppression techniques add no additional information in the radiologic evaluation of these patients.

    Guidelines of the American Society of Mammalogists for the use of wild mammals in research

    Guidelines for use of wild mammal species are updated from the American Society of Mammalogists (ASM) 2007 publication. These revised guidelines cover current professional techniques and regulations involving mammals used in research and teaching. They incorporate additional resources, summaries of procedures, and reporting requirements not contained in earlier publications. Included are details on marking, housing, trapping, and collecting mammals. It is recommended that institutional animal care and use committees (IACUCs), regulatory agencies, and investigators use these guidelines as a resource for protocols involving wild mammals. These guidelines were prepared and approved by the ASM, working with experienced professional veterinarians and IACUCs, whose collective expertise provides a broad and comprehensive understanding of the biology of nondomesticated mammals in their natural environments. The most current version of these guidelines and any subsequent modifications are available at the ASM Animal Care and Use Committee page of the ASM Web site (http://mammalsociety.org/committees/index.asp).

    Testing bibliometric indicators by their prediction of scientists' promotions

    We have developed a method to obtain robust quantitative bibliometric indicators for several thousand scientists. This allows us to study the dependence of bibliometric indicators (such as number of publications, number of citations, Hirsch index...) on the age, position, etc. of CNRS scientists. Our data suggest that the normalized h index (h divided by the career length) is not constant for scientists with the same productivity but different ages. We also compare the predictions of several bibliometric indicators on the promotions of about 600 CNRS researchers. Contrary to previous publications, our study encompasses most disciplines, and shows that no single indicator is the best predictor for all disciplines. Overall, however, the Hirsch index h provides the least bad correlations, followed by the number of papers published. It is important to realize, however, that even h recovers only half of the actual promotions. Neither the number of citations nor the mean number of citations per paper is a good predictor of promotion.
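
    For reference, a minimal sketch of the two indicators discussed above follows; the citation counts and career length are hypothetical.

        # Hirsch index h and the normalized h index (h divided by career length).
        def h_index(citations):
            """h = largest h such that at least h papers have >= h citations each."""
            ranked = sorted(citations, reverse=True)
            h = 0
            for rank, c in enumerate(ranked, start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        citations = [25, 18, 12, 9, 7, 4, 4, 2, 1, 0]   # assumed publication record
        career_years = 12                               # assumed career length
        h = h_index(citations)
        print(h, h / career_years)                      # prints 5 and ~0.42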

    The DLV System for Knowledge Representation and Reasoning

    This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to Δ^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system, which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report on thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.
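
    For a flavor of the kernel language, the sketch below (not taken from the paper) encodes graph 3-colorability in function-free disjunctive datalog with a weak constraint and hands it to a locally installed DLV solver. The graph instance, the temporary-file handling, and the availability of a dlv executable on the PATH are all assumptions.

        # Minimal disjunctive-datalog sketch with a weak constraint, run through
        # an assumed local `dlv` binary; the program and instance are illustrative.
        import subprocess
        import tempfile

        program = """
        % Facts: a small example graph (hypothetical instance).
        node(1). node(2). node(3).
        edge(1,2). edge(2,3). edge(1,3).

        % Disjunctive rule: guess one of three colors for every node.
        col(X,r) v col(X,g) v col(X,b) :- node(X).

        % Strong constraint: adjacent nodes must not share a color.
        :- edge(X,Y), col(X,C), col(Y,C).

        % Weak constraint: prefer answer sets using color r as little as possible.
        :~ col(X,r). [1:1]
        """

        with tempfile.NamedTemporaryFile("w", suffix=".dl", delete=False) as f:
            f.write(program)
            path = f.name

        # DLV's output lists the (best) answer sets of the program.
        result = subprocess.run(["dlv", path], capture_output=True, text=True)
        print(result.stdout)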