
    Paradigmatic Explanations: Strauss's Dangerous Idea

    David Friedrich Strauss is best known for his mythical interpretation of the Gospel narratives. He opposed both the supernaturalists (who regarded the Gospel stories as reliable) and the rationalists (who offered natural explanations of purportedly supernatural events). His mythical interpretation suggests that many of the stories about Jesus were woven out of pre-existing messianic beliefs and expectations. Picking up this suggestion, I argue that the Gospel writers thought paradigmatically rather than historically. A paradigmatic explanation assimilates the event-to-be-explained to what is thought to be a prototypical instance of divine action. It differs from a historical or scientific explanation insofar as it does not specify the conditions under which it should be applied. It is, therefore, a wonderfully flexible way to understand the present in the light of the past.

    What is wrong with intelligent design?

    While a great deal of abuse has been directed at intelligent design theory (ID), its starting point is a fact about biological organisms that cries out for explanation, namely “specified complexity” (SC). Advocates of ID deploy three kinds of argument from specified complexity to the existence of a designer: an eliminative argument, an inductive argument, and an inference to the best explanation. Only the first of these merits the abuse directed at it; the other two arguments are worthy of respect. If they fail, it is only because we have a better explanation of SC, namely Darwin’s theory of evolution by natural selection.

    In defense of naturalism

    History and the modern sciences are characterized by what is sometimes called a “methodological naturalism” that disregards talk of divine agency. Some religious thinkers argue that this reflects a dogmatic materialism: a non-negotiable and a priori commitment to a materialist metaphysics. In response to this charge, I make a sharp distinction between procedural requirements and metaphysical commitments. The procedural requirement of history and the sciences—that proposed explanations appeal to publicly accessible bodies of evidence—is non-negotiable, but has no metaphysical implications. The metaphysical commitment is naturalistic, but is both a posteriori and provisional, arising from the fact that for more than 400 years no proposed theistic explanation has been shown capable of meeting the procedural requirement. I argue that there is nothing to prevent religious thinkers from seeking to overturn this metaphysically naturalistic stance. But in order to do so they would need to show that their proposed theistic explanations are the best available explanations of a range of phenomena. Until this has been done, the metaphysical naturalism of history and the sciences remains defensible.

    Using multi-item psychometric scales for research and practice in human resource management

    Questionnaires are a widely used research method in human resource management (HRM), and multi-item psychometric scales are the most widely used measures in questionnaires. These scales each have multiple items to measure a construct in a reliable and valid manner. However, using this method effectively involves complex procedures that are frequently misunderstood or unknown. Although there are existing methodological texts addressing this topic, few are exhaustive and they often omit essential practical information. The current article therefore aims to provide a detailed and comprehensive guide to the use of multi-item psychometric scales for HRM research and practice, including their structure, development, use, administration, and data preparation.
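    As a concrete illustration of the reliability assessment such a guide covers, here is a minimal sketch of Cronbach's alpha, the most common internal-consistency statistic for a multi-item scale. The function name and the toy Likert-scale data are hypothetical, not taken from the article:

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Toy data: 5 respondents answering a 3-item scale on a 1-5 Likert format.
    scores = np.array([
        [4, 5, 4],
        [2, 2, 3],
        [5, 4, 5],
        [3, 3, 3],
        [1, 2, 1],
    ])
    alpha = cronbach_alpha(scores)  # ≈ 0.95 here: items vary together closely
    ```

    Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the appropriate threshold depends on the research context.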

    Animal welfare aspects in respect of the slaughter or killing of pregnant livestock animals (cattle, pigs, sheep, goats, horses)


    Standards and Practices for Forecasting

    One hundred and thirty-nine principles are used to summarize knowledge about forecasting. They cover formulating a problem, obtaining information about it, selecting and applying methods, evaluating methods, and using forecasts. Each principle is described along with its purpose, the conditions under which it is relevant, and the strength and sources of evidence. A checklist of principles is provided to assist in auditing the forecasting process. An audit can help one to find ways to improve the forecasting process and to avoid legal liability for poor forecasting.

    Association of respiratory symptoms and lung function with occupation in the multinational Burden of Obstructive Lung Disease (BOLD) study

    Background Chronic obstructive pulmonary disease has been associated with exposures in the workplace. We aimed to assess the association of respiratory symptoms and lung function with occupation in the Burden of Obstructive Lung Disease study. Methods We analysed cross-sectional data from 28 823 adults (≥40 years) in 34 countries. We considered 11 occupations and grouped them by likelihood of exposure to organic dusts, inorganic dusts and fumes. The association of chronic cough, chronic phlegm, wheeze, dyspnoea, forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1)/FVC with occupation was assessed, per study site, using multivariable regression. These estimates were then meta-analysed. Sensitivity analyses explored differences between sexes and gross national income. Results Overall, working in settings with potentially high exposure to dusts or fumes was associated with respiratory symptoms but not lung function differences. The most common occupation was farming. Compared to people not working in any of the 11 considered occupations, those who were farmers for ≥20 years were more likely to have chronic cough (OR 1.52, 95% CI 1.19–1.94), wheeze (OR 1.37, 95% CI 1.16–1.63) and dyspnoea (OR 1.83, 95% CI 1.53–2.20), but not lower FVC (β=0.02 L, 95% CI −0.02–0.06 L) or lower FEV1/FVC (β=0.04%, 95% CI −0.49–0.58%). Some findings differed by sex and gross national income. Conclusion At a population level, the occupational exposures considered in this study do not appear to be major determinants of differences in lung function, although they are associated with more respiratory symptoms. Because not all work settings were included in this study, respiratory surveillance should still be encouraged among workers in high-risk dusty and fume-exposed jobs, especially in low- and middle-income countries.
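    The abstract describes fitting regressions per study site and then meta-analysing the site-level estimates. As a hedged illustration of that pooling step (not the study's actual code or data, and ignoring the random-effects refinements a full analysis would likely use), here is a minimal fixed-effect inverse-variance pool of hypothetical per-site log odds ratios:

    ```python
    import numpy as np

    def pool_log_odds(log_ors, ses):
        """Fixed-effect inverse-variance pooling of per-site log odds ratios."""
        log_ors = np.asarray(log_ors, dtype=float)
        w = 1.0 / np.asarray(ses, dtype=float) ** 2   # weight = 1 / variance
        pooled = (w * log_ors).sum() / w.sum()        # precision-weighted mean
        pooled_se = np.sqrt(1.0 / w.sum())
        lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
        # Exponentiate to return results on the odds-ratio scale.
        return np.exp(pooled), (np.exp(lo), np.exp(hi))

    # Hypothetical per-site estimates: log OR for a symptom, with standard errors.
    site_log_ors = [0.45, 0.30, 0.55]
    site_ses = [0.20, 0.15, 0.25]
    or_pooled, (ci_lo, ci_hi) = pool_log_odds(site_log_ors, site_ses)
    ```

    The pooled OR sits between the site estimates, weighted toward the most precise site; a confidence interval excluding 1 would indicate an association, as with the cough and dyspnoea results reported above.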

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome for moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks and were trained using the same predictors. To assess generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Similar to regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
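    The validation metrics named in the abstract — AUC for discrimination, calibration slope and intercept for calibration — can be computed from any model's predicted probabilities. The sketch below shows the standard logistic-recalibration approach (slope from regressing the outcome on the prediction log-odds; intercept with the slope fixed at 1 via an offset); it is an illustration on synthetic data, not the study's code:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    def validation_metrics(y, p):
        """AUC plus calibration slope and intercept for predicted probabilities."""
        y = np.asarray(y, dtype=float)
        p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)
        lp = np.log(p / (1 - p))  # log-odds (linear predictor) of the predictions

        # Calibration slope: logistic regression of outcome on log-odds (ideal = 1).
        slope = sm.GLM(y, sm.add_constant(lp),
                       family=sm.families.Binomial()).fit().params[1]

        # Calibration intercept: slope fixed at 1 via an offset (ideal = 0).
        intercept = sm.GLM(y, np.ones_like(y),
                           family=sm.families.Binomial(), offset=lp).fit().params[0]

        return roc_auc_score(y, p), slope, intercept

    # Synthetic, perfectly calibrated predictions for demonstration.
    rng = np.random.default_rng(0)
    p = rng.uniform(size=2000)
    y = rng.binomial(1, p)
    auc, slope, intercept = validation_metrics(y, p)
    ```

    On external validation, a slope below 1 signals overfitting (predictions too extreme) and a nonzero intercept signals miscalibration-in-the-large; the abstract's finding is that these varied more between cohorts than between algorithms.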