2,942 research outputs found

    The ghosts I called I can't get rid of now: The Keynes-Tinbergen-Friedman-Phillips Critique of Keynesian Macroeconometrics

    Get PDF
    This chapter offers a fresh perspective on the much publicised dispute between those followers of Keynes who presented econometric evidence in favour of a Phillips curve trade-off, and those monetarists who presented counter econometric evidence. Contrary to common perceptions, the collapse of the Keynesian Phillips curve was a vindication of a common critique of macroeconometric practices, which was jointly authored by John Maynard Keynes, Jan Tinbergen, Milton Friedman and A.W.H. 'Bill' Phillips. This analysis is informed by the usual sources, plus two sources which had been thought to be no longer in existence (Phillips' private papers and the London School of Economics (LSE) Methodology, Measurement and Testing (M2T) Staff Seminar records), plus two essays by Keynes (1938a, 1938b) which have been overlooked in this context. ISBN: 033373045

    Can UK passenger vehicles be designed to meet 2020 emissions targets? A novel methodology to forecast fuel consumption with uncertainty analysis

    Get PDF
    Vehicle manufacturers are required to reduce their European sales-weighted emissions to 95 g CO2/km by 2020, with the aim of reducing on-road fleet fuel consumption. Nevertheless, current fuel consumption models are not suited to the European market and are unable to account for uncertainties when used to forecast passenger vehicle energy use. Therefore, a new methodology is detailed herein to quantify new car fleet fuel consumption based on vehicle design metrics. The New European Driving Cycle (NEDC) is shown to underestimate on-road fuel consumption in Spark Ignition (SI) and Compression Ignition (CI) vehicles by an average of 16% and 13%, respectively. A Bayesian fuel consumption model attributes these discrepancies to differences in rolling, frictional and aerodynamic resistances. Using projected inputs for engine size, vehicle mass, and compression ratio, the likely average 2020 on-road fuel consumption was estimated to be 7.6 L/100 km for SI and 6.4 L/100 km for CI vehicles. These compared to NEDC-based estimates of 5.34 L/100 km (SI) and 4.28 L/100 km (CI), both of which exceeded mandatory 2020 fuel-equivalent emissions standards by 30.2% and 18.9%, respectively. The results highlight the need for more stringent technological developments for manufacturers to ensure adherence to targets, and for more accurate measurement techniques that account for discrepancies between standardised and on-road fuel consumption. NEDC data measurements were supplied by CAP Consulting. The authors are also grateful to the Energy Efficient Cities Initiative and the EPSRC (EP/F034350/1) for funding this work. This is the author accepted manuscript. The final version is available via Elsevier at http://dx.doi.org/10.1016/j.apenergy.2015.03.04
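    As a rough illustration of the kind of uncertainty propagation such a forecast involves, the minimal Python sketch below pushes hypothetical posterior draws for a toy fuel-consumption regression through projected design inputs; the coefficients, inputs, and model form are illustrative assumptions, not the article's Bayesian model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical posterior draws for a toy fuel-consumption regression
    # (L/100 km); the article's Bayesian model, which works through rolling,
    # frictional and aerodynamic resistances, is richer than this.
    n_draws = 10_000
    intercept   = rng.normal(1.0, 0.5, n_draws)
    beta_engine = rng.normal(1.8, 0.2, n_draws)     # per litre of displacement
    beta_mass   = rng.normal(2.5, 0.3, n_draws)     # per tonne of kerb mass
    beta_cr     = rng.normal(-0.15, 0.05, n_draws)  # per unit of compression ratio

    # Projected 2020 fleet-average design inputs (illustrative values only).
    engine_litres, mass_tonnes, compression_ratio = 1.4, 1.3, 10.5

    fc = (intercept
          + beta_engine * engine_litres
          + beta_mass * mass_tonnes
          + beta_cr * compression_ratio)
    lo, med, hi = np.percentile(fc, [2.5, 50, 97.5])
    print(f"Forecast on-road consumption: {med:.1f} L/100 km "
          f"(95% interval {lo:.1f}-{hi:.1f})")
    ```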

    Interesting results - but are they valid?

    Get PDF
    QCA's grasp on causation is often questioned from a probabilistic, experimental understanding of validity. QCA results, however, rely on logical and set-theoretic inferences. Is a difference in languages enough to justify separate validity yardsticks? And what ensures that QCA delivers valid results? A review of exemplary quantitative and qualitative yardsticks shows that the traditions share validity concerns, yet give them different content. The article argues that this difference is legitimized by the particular assumptions about causation that inform their research processes. It therefore clarifies QCA's causal ontology, identifies its particular threats to validity, and evaluates the strategies in use to prevent or tackle them, also adding a new one to address over-specified hypotheses. In this, the nomothetic yardstick proves to be a fertile framework, yet hardly a proper guideline for solutions.
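    For readers unfamiliar with the set-theoretic inferences at issue, the following minimal sketch shows the standard fuzzy-set consistency and coverage measures for a sufficiency claim ("X is sufficient for Y"); the membership scores are invented and the functions are generic textbook QCA formulas, not material from the article.

    ```python
    def consistency(x, y):
        """Consistency of 'X is sufficient for Y' for fuzzy-set memberships:
        sum of min(x_i, y_i) over the sum of x_i."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

    def coverage(x, y):
        """Share of the outcome's membership accounted for by the condition."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

    # Invented fuzzy-set membership scores for five cases.
    condition = [0.9, 0.8, 0.6, 0.2, 0.1]
    outcome   = [1.0, 0.7, 0.7, 0.4, 0.3]
    print(f"consistency = {consistency(condition, outcome):.2f}, "
          f"coverage = {coverage(condition, outcome):.2f}")
    ```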

    Finding the needles in the haystack: Generating legal test inputs for object-oriented programs

    Get PDF
    A test input for an object-oriented program typically consists of a sequence of method calls that use the API defined by the program under test. Generating legal test inputs can be challenging because, for some programs, the set of legal method sequences is much smaller than the set of all possible sequences; without a formal specification of legal sequences, an input generator is bound to produce mostly illegal sequences. We propose a scalable technique that combines dynamic analysis with random testing to help an input generator create legal test inputs without a formal specification, even for programs in which most sequences are illegal. The technique uses an example execution of the program to infer a model of legal call sequences, and uses the model to guide a random input generator towards legal but behaviorally-diverse sequences. We have implemented our technique for Java, in a tool called Palulu, and evaluated its effectiveness in creating legal inputs for real programs. Our experimental results indicate that the technique is effective and scalable. Our preliminary evaluation indicates that the technique can quickly generate legal sequences for complex inputs: in a case study, Palulu created legal test inputs in seconds for a set of complex classes, for which it took an expert thirty minutes to generate a single legal input.
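    The toy Python sketch below illustrates the general idea of inferring a call-sequence model from example traces and using it to guide random generation; it is not Palulu (which works on Java programs with a richer model), and all trace and method names are invented.

    ```python
    import random
    from collections import defaultdict

    def build_model(traces):
        """Infer which call may legally follow which from example traces."""
        successors = defaultdict(set)
        for trace in traces:
            for a, b in zip(trace, trace[1:]):
                successors[a].add(b)
        return {call: sorted(nxt) for call, nxt in successors.items()}

    def generate(model, start, max_len=6, seed=0):
        """Randomly walk the inferred model to produce a plausible call sequence."""
        rng = random.Random(seed)
        seq = [start]
        while len(seq) < max_len and model.get(seq[-1]):
            seq.append(rng.choice(model[seq[-1]]))
        return seq

    example_traces = [
        ["Connection()", "open", "query", "close"],
        ["Connection()", "open", "query", "query", "close"],
    ]
    model = build_model(example_traces)
    print(generate(model, "Connection()"))
    ```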

    PRES: Toward Scalable Memory-Based Dynamic Graph Neural Networks

    Full text link
    Memory-based Dynamic Graph Neural Networks (MDGNNs) are a family of dynamic graph neural networks that leverage a memory module to extract, distill, and memorize long-term temporal dependencies, leading to superior performance compared to memory-less counterparts. However, training MDGNNs faces the challenge of handling entangled temporal and structural dependencies, requiring sequential and chronological processing of data sequences to capture accurate temporal patterns. During batch training, temporal data points within the same batch are processed in parallel, and their temporal dependencies are neglected. This issue, referred to as temporal discontinuity, restricts the effective temporal batch size, limiting data parallelism and reducing the flexibility of MDGNNs in industrial applications. This paper studies the efficient training of MDGNNs at scale, focusing on temporal discontinuity when training MDGNNs with large temporal batch sizes. We first conduct a theoretical study of the impact of temporal batch size on the convergence of MDGNN training. Based on this analysis, we propose PRES, an iterative prediction-correction scheme combined with a memory coherence learning objective, to mitigate the effect of temporal discontinuity, enabling MDGNNs to be trained with significantly larger temporal batches without sacrificing generalization performance. Experimental results demonstrate that our approach enables up to a 4x larger temporal batch (3.4x speed-up) during MDGNN training.
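    As a purely illustrative sketch of a prediction-correction update, the NumPy snippet below advances a stale node memory with a stand-in drift predictor before reconciling it with the messages observed in the batch; the predictor, gain, and shapes are assumptions for illustration only and do not reproduce the PRES algorithm or its memory-coherence objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy prediction-correction update for one node's memory when several of
    # its events fall into a single temporal batch. Every name, shape, and
    # constant here is illustrative.
    dim = 8
    memory = rng.normal(size=dim)            # memory as of the previous batch
    messages = rng.normal(size=(5, dim))     # 5 events for this node in the batch
    drift = np.full(dim, 0.05)               # stand-in for a learned drift predictor

    # Predict: extrapolate the stale memory as if earlier in-batch events had
    # already been applied, instead of reusing it unchanged for every event.
    steps = np.arange(1, len(messages) + 1)[:, None]
    predicted = memory + drift * steps

    # Correct: blend each prediction with the message actually observed.
    gain = 0.3
    corrected = predicted + gain * (messages - predicted)

    # The batch-level memory update keeps the last corrected state.
    memory = corrected[-1]
    print(memory.round(3))
    ```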

    Hit 'em where it hurts: Measuring and testing the impact of economic nonviolent strategies on democratization

    Get PDF
    The literature on nonviolent political action has found that nonviolence far outpaces violence when it comes to winning political conflicts. Yet which actions nonviolent movements may perform to achieve success has rarely been studied. I argue that strategies which aim to limit the state's economic capacity are likely to be effective, and test whether such economic strategies are predictive of democratization. I build upon both recent and classic nonviolence and democratization literature to craft a theoretical narrative of why I expect economic nonviolent strategies to be effective. I then craft a measurement model for economic strategies using a novel combination of the Nonviolent and Violent Campaigns and Outcomes 3.0 dataset and Bayesian item response theory methods. Using the resulting latent variable of economic strategies as an independent variable, I test whether it is predictive of transitions to democracy using Bayesian logistic regression. I find that nonviolent political campaigns that use economic strategies are more likely to cause a transition to democracy than those which do not. My findings are relevant to the nonviolence and democratization literature as well as to practitioners of nonviolent action, and fill an important research gap in an innovative way. Master's thesis (SAMPOL350, MASV-SAP).
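    A minimal sketch of the Bayesian logistic regression step, using synthetic data and a hand-rolled Metropolis sampler rather than the thesis's actual NAVCO 3.0-based estimates; all variable names, priors, and values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for the latent economic-strategy score and the
    # democratization outcome (the thesis derives the score from an IRT model
    # fitted to NAVCO 3.0 data; these values are invented).
    x = rng.normal(size=200)
    true_logit = -1.0 + 0.8 * x
    y = (rng.random(200) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

    def log_posterior(params):
        """Log posterior for (intercept, slope) with Normal(0, 2.5) priors."""
        a, b = params
        logit = a + b * x
        log_lik = np.sum(y * logit - np.log1p(np.exp(logit)))
        log_prior = -0.5 * (a ** 2 + b ** 2) / 2.5 ** 2
        return log_lik + log_prior

    # Minimal random-walk Metropolis sampler.
    current = np.zeros(2)
    current_lp = log_posterior(current)
    samples = []
    for _ in range(5000):
        proposal = current + rng.normal(scale=0.1, size=2)
        proposal_lp = log_posterior(proposal)
        if np.log(rng.random()) < proposal_lp - current_lp:
            current, current_lp = proposal, proposal_lp
        samples.append(current.copy())

    slope_draws = np.array(samples)[1000:, 1]
    print(f"Posterior mean effect of economic strategies: {slope_draws.mean():.2f}")
    ```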

    Neutral Theory, Biased World

    Get PDF
    The ecologist today finds scarce ground safe from controversy. Decisions must be made about what combination of data, goals, methods, and theories offers the foundations and tools needed to construct and defend their research. When push comes to shove, ecologists often turn to philosophy to justify why it is their approach that is scientific. Karl Popper's image of science as bold conjectures and heroic refutations is routinely enlisted to justify testing hypotheses over merely confirming them. One of the most controversial theories in contemporary science is the Neutral Theory of Ecology. Its chief developer and proponent, Stephen Hubbell, presents the neutral theory as a bold conjecture that has so far escaped refutation. Critics of the neutral theory claim that it already stands refuted, despite what the dogmatic neutralists say. We see the controversy through a Popperian lens. But Popper's is an impoverished philosophy of science that distorts contemporary ecology. The controversy surrounding the neutral theory actually rests on a methodological fault. There is a strong but messy historical link between the concepts of being neutral and being null in biology, and Hubbell perpetuates this when he claims that the neutral theory supplies the appropriate null for testing alternative theories. What method is being followed here? There are three contenders: null hypothesis testing tests for whether there is a pattern to be explained; null modeling tests for whether a process is causally relevant to a pattern; baseline modeling apportions relative responsibility to multiple processes each relevant to a pattern. Whether the neutral theory supplies an appropriate "null" depends upon whether null hypothesis testing, null modeling, or baseline modeling is intended. These methods prescribe distinct inference patterns. If they are null hypothesis testing or null modeling, the neutralists' reasoning is invalid. If they are baseline modeling, the justification of a crucial assumption remains opaque. Either way, the neutral-null connection is being exploited rhetorically to privilege the neutral theory over its rivals. Clarifying the reasoning immunizes us against the rhetoric and foregrounds the underlying virtues of the neutralist approach to ecology. The Popperian lens distorts theoretical development as dogmatism. Lakatos's view of science as the development of research programmes clarifies the epistemology of the neutral theory. Focusing philosophical attention on the neutralist research programme illuminates (1) the synchronic uses of the neutral theory to make predictions and give descriptions and explanations; (2) its diachronic development in response to theoretical innovation and confrontation with data; (3) its complex relationships to alternative theories. For example, baseline modeling is now seen to be its primary explanatory heuristic. The justification for baseline modeling with the neutral theory, previously hidden from view, is seen in the logic of the neutralist research programme.

    Automated program transformation through proof transformation

    Get PDF

    Inference Belief and Interpretation in Science

    Get PDF
    This monograph explores the deeply cognitive roots of the human scientific quest. The process of making scientific inferences is continuous with the day-to-day inferential activity of individuals, and is predominantly inductive in nature. Inductive inference, which is fallible, exploratory, and open-ended, is of essential relevance in our incessant efforts at making sense of a complex and uncertain world around us, and covers a vast range of cognitive activities, among which scientific exploration constitutes the pinnacle. Inductive inference has a personal aspect to it, being rooted in the cognitive unconscious of individuals, which has recently been found to be of paramount importance in a wide range of complex cognitive processes. One other major aspect of the process of inference making, including the making of scientific inferences, is the role of a vast web of beliefs lodged in the human mind, as well as of a huge repertoire of heuristics, which constitute an important component of 'unconscious intelligence'. Finally, human cognitive activity is dependent in large measure on emotions and affects that operate mostly at an unconscious level. Of special importance in scientific inferential activity is the process of hypothesis making, which is examined in this book, along with the above aspects of inductive inference, at considerable depth. The book focuses on the inadequacy of the viewpoint of naive realism in understanding the context-dependence of scientific theories, where a cumulative progress towards an ultimate truth about Nature appears to be too simplistic a generalization. It poses a critique of the commonly perceived image of science as the last word in logic and objectivity, the latter in the double sense of being independent of individual psychological propensities and, at the same time, approaching a correct understanding of the workings of a mind-independent nature. Adopting the naturalist point of view, it examines the essential tension between the cognitive endeavors of individuals and scientific communities, immersed in belief systems and cultures, on the one hand, and the engagement with a mind-independent reality on the other. In the end, science emerges as an interpretation of nature, which is perceived by us only contextually, as successively emerging cross-sections of a limited scope and extent. Successive waves of theory building in science appear as episodic and kaleidoscopic changes in perspective as certain in-built borders are crossed, rather than as a cumulative progress towards some ultimate truth. Based on current literature, I aim to set up, in the form of a plausible hypothesis, a framework for understanding the mechanisms underlying inductive inference in general and abduction in particular.

    Physical activity and on-task behaviour in adolescent classrooms of a Further Education college

    Get PDF
    Teachers commonly report that high levels of off-task behaviour hinder learning in their classrooms. Previous research in school children under ~12 years of age has demonstrated that physical activity (PA) interventions may decrease off-task behaviour. The current thesis aimed to extend the literature to UK Further Education College classrooms of 16-19-year-old learners via a mixed-methods design of observations and student interviews. 111 college sport and drama students were observed for on-task behaviour via momentary time-sampling (70 male and 41 female, age 17.1 ± 0.8 years). In a cross-over design, observations occurred in classroom lessons immediately before and after either a PA-based lesson in a sports hall/drama studio or a seated classroom lesson. Mean on-task behaviour was higher only in the lesson after a PA-based lesson (p < 0.001). Individual-level analysis, however, highlighted that a quarter of students saw no change or a decrease in on-task behaviour after the PA-based lesson. To further explore these quantitative outcomes, 36 students were questioned on their perceptions of on-task behaviour before and after PA via semi-structured interviews, with responses analysed via thematic analysis (20 male and 16 female, age 17.2 ± 0.6 years). Surprisingly, the most common factors students mentioned in the interviews for variations in on-task behaviour were not directly related to PA: for example, coursework deadlines, time-of-day variations and differences in classroom delivery. Themes students directly linked to the PA-based lessons centred on feelings of fatigue, energisation and recovery. Several students specified that fatigue could help their ability to be on-task, while other students implied that insufficient recovery and/or cool-down opportunities prior to subsequent lessons hindered on-task behaviour. These findings have implications for practice, principally providing empirical evidence that PA in UK FE colleges can improve classroom on-task behaviour but likewise is influenced by a range of other variables that PA may not always mitigate. These factors should be considered alongside PA interventions by teachers and academic planners for optimum on-task classrooms.