
    Effective Field Theories and the Role of Consistency in Theory Choice

    Promoting a theory with a finite number of terms into an effective field theory with an infinite number of terms worsens simplicity, predictability, falsifiability, and other attributes often favored in theory choice. However, the importance of these attributes pales in comparison with that of consistency, both observational and mathematical, which makes the effective theory superior to its simpler truncated version with finitely many terms, whether that theory is renormalizable (e.g., the Standard Model of particle physics) or nonrenormalizable (e.g., gravity). Some implications for the Large Hadron Collider and beyond are discussed, including comments on how directly acknowledging the preeminence of consistency can affect future theory work.
    Comment: 17 pages; lecture delivered at the physics and philosophy conference "The Epistemology of the Large Hadron Collider", Wuppertal University, January 201

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers over the last several decades. Reviewing and getting an overview of the entire state of the art and state of the practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to benefit the readers (both practitioners and researchers) in preparing, measuring, and improving software testability. Method: To address the above need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring testability and improving testability are the most frequently addressed topics in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results could help practitioners measure and improve software testability in their projects.
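    The survey's most-cited testability factors, controllability and observability, plus the "adding assertions" improvement it lists, can be illustrated with a small sketch. This is not taken from the surveyed papers; the class and all names below are invented for illustration: a timer whose clock dependency is injected (controllable in tests) and whose start time is exposed (observable), guarded by an added assertion.

```python
import time


class FixedClock:
    """Test double: makes time fully controllable from a test."""
    def __init__(self, t):
        self.t = t

    def now(self):
        return self.t


class SystemClock:
    """Production clock; hard to control, hence hard to test against."""
    def now(self):
        return time.time()


class SessionTimer:
    def __init__(self, clock, timeout):
        # Added assertion: makes an implicit precondition checkable.
        assert timeout > 0, "timeout must be positive"
        self.clock = clock              # injected dependency -> controllability
        self.timeout = timeout
        self.started_at = clock.now()   # exposed state -> observability

    def expired(self):
        return self.clock.now() - self.started_at > self.timeout


# In a test, the injected FixedClock makes the behaviour deterministic:
clock = FixedClock(100.0)
timer = SessionTimer(clock, timeout=30)
clock.t = 131.0                          # advance "time" by hand
print(timer.expired())                   # True: 131 - 100 > 30
```

Had SessionTimer called time.time() directly, a test would have to sleep or patch globals; injecting the clock is the controllability improvement the survey describes.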

    An empirical study of architecting for continuous delivery and deployment

    Recently, many software organizations have been adopting Continuous Delivery and Continuous Deployment (CD) practices to develop and deliver quality software more frequently and reliably. Whilst an increasing amount of the literature covers different aspects of CD, little is known about the role of software architecture in CD and how an application should be (re-)architected to enable and support CD. We have conducted a mixed-methods empirical study that collected data through in-depth, semi-structured interviews with 21 industrial practitioners from 19 organizations, and a survey of 91 professional software practitioners. Based on a systematic and rigorous analysis of the gathered qualitative and quantitative data, we present a conceptual framework to support the process of (re-)architecting for CD. We provide evidence-based insights about practicing CD within monolithic systems and characterize the principle of "small and independent deployment units" as an alternative to monoliths. Our framework supplements the architecting process in a CD context by introducing the quality attributes (e.g., resilience) that require more attention and demonstrating the strategies (e.g., prioritizing operations concerns) used to design operations-friendly architectures. We discuss the key insights (e.g., monoliths and CD are not intrinsically oxymoronic) gained from our study and draw implications for research and practice.
    Comment: To appear in Empirical Software Engineering

    An empirical investigation into branch coverage for C programs using CUTE and AUSTIN

    Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic-execution tool CUTE and a search-based tool, AUSTIN, on five non-trivial open-source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge, and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied 'as is', with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied 'out of the box' in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches that have hitherto been developed entirely independently. (C) 2010 Elsevier Inc. All rights reserved.
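    The search-based approach embodied by AUSTIN can be sketched in miniature. The toy below is not AUSTIN's actual algorithm (and the tools above target C programs); it is a hedged illustration of the core idea: treat "cover this branch" as a numeric optimization problem, define a branch distance that is zero when the branch is taken, and let a simple hill climber minimize it. The function under test and all names are invented.

```python
def under_test(x):
    # A branch that random input generation would almost never hit.
    if x == 4273:
        return "target"
    return "other"


def branch_distance(x, target=4273):
    """0 when the target branch is taken; larger means further away."""
    return abs(x - target)


def hill_climb(start=0, max_steps=100_000):
    """Greedily move toward whichever neighbour reduces the branch distance."""
    x = start
    while max_steps > 0 and branch_distance(x) > 0:
        x = min((x - 1, x + 1), key=branch_distance)
        max_steps -= 1
    return x


test_input = hill_climb()
print(test_input, under_test(test_input))  # 4273 target
```

A dynamic symbolic-execution tool like CUTE would instead solve the path constraint x == 4273 directly with a constraint solver; the hybridization the paper suggests combines such solving with search when constraints fall outside the solver's reach.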

    What’s so bad about scientism?

    In their attempt to defend philosophy from accusations of uselessness made by prominent scientists, such as Stephen Hawking, some philosophers respond with the charge of 'scientism.' This charge makes endorsing a scientistic stance a mistake by definition. For this reason, it begs the question against these critics of philosophy, or anyone who is inclined to endorse a scientistic stance, and turns the scientism debate into a verbal dispute. In this paper, I propose a different definition of scientism, and thus a new way of looking at the scientism debate. Those philosophers who seek to defend philosophy against accusations of uselessness would do philosophy a much better service, I submit, if they were to engage with the definition of scientism put forth in this paper, rather than simply make it analytic that scientism is a mistake.

    A family resemblance approach to the nature of science for science education

    Although there is universal consensus, both in the science education literature and in the science standards documents, that students should learn not only the content of science but also its nature, there is little agreement about what that nature is. This has led many science educators to adopt what is sometimes called "the consensus view" of the nature of science (NOS), whose goal is to teach students only those characteristics of science on which there is wide consensus. This is an attractive view, but it has some shortcomings and weaknesses. In this article we present and defend an alternative approach based on the notion of family resemblance. We argue that the family resemblance approach is superior to the consensus view in several ways, which we discuss in some detail.

    An Analysis of the Demarcation Problem in Philosophy of Science and Its Application to Homeopathy

    This paper presents a preliminary analysis of homeopathy from the perspective of the demarcation problem in the philosophy of science. In this context, Popper's, Kuhn's, and Feyerabend's solutions to the problem are given in turn, and their criteria are applied to homeopathy, aiming to shed some light on the controversy over its scientific status. The paper then examines homeopathy under the lens of these demarcation criteria and concludes that homeopathy is regarded as science by Feyerabend but as pseudoscience by Popper and Kuhn. By offering adequate tools for analyzing the foundations, structure, and implications of homeopathy, the demarcation issue can help to clarify this medical controversy. The main argument of this article is that no final verdict can be given on homeopathy, whose scientific status changes depending on which philosopher's criteria are applied.

    No alternative to proliferation

    We reflect on the nature, role, and limits of non-empirical theory assessment in fundamental physics, focusing in particular on quantum gravity. We argue for the usefulness and, to some extent, necessity of non-empirical theory assessment, but also examine its dangers critically. We conclude that the principle of proliferation of theories is not only at the very root of theory assessment but all the more necessary when experimental tests are scarce, and also that, in the same situation, it represents the only medicine against the degeneration of scientific research programmes.
    Comment: 15 pages; contribution to the volume "Why trust a theory?", edited by R. Dardashti, R. Dawid, and K. Thebault, to be published by Cambridge University Press