
    Firm-specific capital, nominal rigidities, and the Taylor principle

    In the presence of firm-specific capital the Taylor principle can generate multiple equilibria. Sveen and Weinke (2005b) obtain that result in the context of a Calvo-style sticky price model. One potential criticism is that the degree of price stickiness needed for our theoretical result to be relevant from a practical point of view lies at the high end of available empirical estimates. In the present paper we show that if nominal wages are not fully flexible (which is an uncontroversial empirical fact), then the Taylor principle already fails for some minor degree of price stickiness. We use our model to explain the consequences of both nominal rigidities for the desirability of alternative interest rate rules. Keywords: Nominal Rigidities, Aggregate Investment, Monetary Policy.
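
    For readers unfamiliar with the terminology, the Taylor principle can be stated compactly. The rule and coefficients below are a generic textbook formulation, not the specific rule calibrated in the paper:

```latex
% A standard Taylor-type interest rate rule (illustrative, not the paper's exact specification):
% i_t is the nominal policy rate, r^* the natural real rate, \pi_t inflation, \pi^* its target,
% y_t an output-gap measure.
\[
  i_t = r^* + \pi_t + \phi_\pi \,(\pi_t - \pi^*) + \phi_y \, y_t, \qquad \phi_\pi > 0 .
\]
% The Taylor principle requires the nominal rate to rise more than one-for-one with inflation,
% i.e. \phi_\pi > 0 in this parameterisation (a total inflation coefficient above one).
% The paper asks when satisfying this condition still fails to pin down a unique equilibrium,
% namely under firm-specific capital combined with sticky prices and sticky wages.
```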

    Two-level architecture for rule-based business process management

    One of the main challenges in Business Process Management (BPM) systems is the need to adapt business rules in real time. A serious obstacle is the lack of adaptable formal models for managing dynamic business rules. This is due to the inadequacy of existing models' ability to describe rule components, meta-rules, relationships and logical dependencies. To overcome this drawback, this paper presents a two-level rule-based approach to control BPM systems. The model accounts for the logical representation of rule components and their relationships in process-based systems, as well as a method for incremental indexing of the business rules. The incremental indexing mechanism is described as an approach to control process execution and to adapt business rules in real time based on rule propagation. The model therefore provides a basis for an efficient and adaptable solution for managing business rule changes.
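
    The abstract does not include an implementation, but the core idea of an incrementally maintained rule index with dependency propagation can be sketched as follows. The class and method names (Rule, RuleIndex, affectedBy) are illustrative assumptions, not the authors' API:

```java
import java.util.*;

// Minimal sketch of an incrementally indexed rule store with dependency propagation.
// All names are hypothetical; the paper's model is richer (meta-rules, two levels).
final class Rule {
    final String id;
    final Set<String> dependsOn;   // ids of rules this rule logically depends on
    Rule(String id, Set<String> dependsOn) { this.id = id; this.dependsOn = dependsOn; }
}

final class RuleIndex {
    private final Map<String, Rule> rules = new HashMap<>();
    private final Map<String, Set<String>> dependents = new HashMap<>(); // reverse edges

    // Incremental insertion: only the new rule's edges are indexed, nothing is rebuilt.
    void add(Rule r) {
        rules.put(r.id, r);
        for (String dep : r.dependsOn) {
            dependents.computeIfAbsent(dep, k -> new HashSet<>()).add(r.id);
        }
    }

    // Propagation: when a rule changes at run time, return every rule that must be re-evaluated.
    Set<String> affectedBy(String changedRuleId) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(dependents.getOrDefault(changedRuleId, Set.of()));
        while (!queue.isEmpty()) {
            String id = queue.poll();
            if (result.add(id)) {
                queue.addAll(dependents.getOrDefault(id, Set.of()));
            }
        }
        return result;
    }
}
```

    A process engine could call affectedBy after a rule update and re-evaluate only the returned rules, which is the kind of real-time adaptation the abstract describes.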

    At home in Fanon: Queer romance and mixed solidarities in contemporary African fiction

    Throughout the recent iterations of student activism that have gripped South African universities, Frantz Fanon has been continuously disinterred. But the figure of Fanon often remains both abstract and plural within its articulations - interpretations of his body of work performing sometimes only partial allegiances to the whole. This means that centralising a Fanon within political discourse stands to reproduce the losses implicated in his mythification, rather than to recover new critical imports in his work. In other words, the simplification of Fanonist rhetoric fails to deal with the "un-political" dimensions of Fanon. As such, the more troubling of Fanon's works, namely Black Skin, White Masks (1952), is often left un-interrogated, while The Wretched of the Earth (1961) is read like a manifesto for purposive change. Black Skin, White Masks, it seems, is deemed "not radical enough" because of what appears to be a problematic preoccupation with 'love and understanding.' In the following intervention, I argue that what makes this centrality of 'love and understanding' so unpalatable to radical activists is a misappropriation of Fanon's formulation of desire. This is in part, I believe, a flaw of Fanon's own making: he sets up the dynamic of racialised desire within cisgender, heteronormative models for potential interracial relationships - "The Woman of Colour and the White Man" and "The Man of Colour and the White Woman." Hence, I consider what queering these relationships does to the way in which we read the political dimensions of Black Skin, White Masks, and whether or not this frees the allegory of revolutionary solidarity from the generic teleology of the heteronormative romance. The object of this thesis is to elucidate what possibilities for political solidarity are generated through the queered dynamic of interracial love, explored in the literature of the contemporary African diaspora. New African writers take seriously what Fanon recognised as "The Pitfalls of National Consciousness," by emptying out the category of the nation and engaging with the intersections of a trans-national, trans-gender and trans-racial politics. To demonstrate the ways in which a queer analysis of interracial romance might reimagine a raced identity politics, I analyse novels produced by members of the contemporary African diaspora, whose works deal with mixed race identity. Through my reading of Helen Oyeyemi's The Icarus Girl (2005) and Boy, Snow, Bird (2014), Yewande Omotoso's Bom Boy (2011), and Chris Abani's The Secret History of Las Vegas (2014), I hope to demonstrate that contemporary African literature is concerned with the formation of an identity that estranges the category of blackness from itself through its entanglement with a queer identity politics.

    Three pitfalls in Java performance evaluation

    The Java programming language has seen remarkable growth over the last decade. This is partially due to the infrastructure required to run Java applications on general purpose microprocessors: a Java virtual machine (VM). The VM ensures that Java applications are portable across different hardware platforms, because it shelters the applications from the underlying system. Hence the motto: write once, run (almost) anywhere. Java applications are compiled to an intermediate form, called bytecode, and consist of a number of so-called class files. The virtual machine takes care of class loading, interpreting or compiling the bytecode to the native code of the underlying hardware platform, thread scheduling, garbage collection, etc. As such, during the execution of a Java application, the VM regularly intervenes to take care of housekeeping tasks and to optimise the application as it is executing. Furthermore, the specific implementation details of most virtual machines insert non-deterministic behaviour, not into the semantic part of the execution, but rather into the lower level execution. For example, to bring a Java application up to competitive speed with classical compiled programs written in languages such as C, the virtual machine needs to optimise Java bytecode. To limit the execution overhead, most virtual machines use a time sampling mechanism to determine the hot methods in the application. This introduces non-determinism, as over several runs, the methods are not always optimised at the same moment, nor is the set of optimised methods always the same. Other factors that introduce non-determinism are thread scheduling, garbage collection, etc. It is readily seen that performance analysis of Java applications is not as simple as it seems at first, and warrants closer inspection. In this dissertation we are mainly interested in the behaviour of Java applications and their performance. In the course of this work, we uncovered three major pitfalls that were not taken into account by researchers when analysing Java performance prior to this work. We will briefly summarise the main achievements presented in this dissertation. The first pitfall we present involves the interaction between the virtual machine, the application and the input to the application. The performance of short running applications is shown to be mainly determined by the virtual machine. For longer running applications, this influence decreases, but remains tangible. We use statistical analysis, such as principal components analysis and cluster analysis (K-means and hierarchical clustering), to demonstrate and clarify the pitfall. By means of a large number of performance characteristics measured using hardware performance counters, five virtual machines and fourteen benchmarks with both a small and a large input size, we demonstrate that short running workloads are primarily clustered by virtual machine. Even for long running applications from the SPECjvm98 benchmark suite, the virtual machine still exerts a large influence on the observed behaviour at the microarchitectural level. This work has shown the need for both larger and longer running benchmarks than were available prior to it – this was (partially) met by the introduction of the DaCapo benchmark suite – as well as for careful consideration when setting up an experiment, to avoid measuring the virtual machine rather than the benchmark.
Prior to this work, people quite often used simulation with short running applications (to save time) to explore Java performance. The second pitfall we uncover involves the analysis of performance numbers. During a survey of 50 papers published at premier conferences, such as OOPSLA, PLDI, CGO, ISMM and VEE, over the past seven years, we found that a variety of approaches are used, both for experimental design – for example, the input size, virtual machines, heap sizes, etc. – and, even more importantly, for data analysis – for example, using a best-out-of-3 performance number. New techniques are pitted against existing work using these prevalent approaches, and conclusions regarding their success in beating the prior state of the art are based upon them. Given the fact that the execution of Java applications usually involves non-determinism in the virtual machine – for example, when determining which methods to optimise – it should come as no surprise that the lack of statistical rigour in these prevalent approaches leads to misleading or even incorrect conclusions. By this we mean that the conclusions are either not representative of what actually happens, or even contradict reality, as modelled in a statistical manner. To circumvent this pitfall, we propose a rigorous statistical approach that uses confidence intervals to both report and compare performance numbers. We also claim that sufficient experiments should be conducted to get a reliable performance measure. The non-determinism caused by the timer-based optimisation component in a virtual machine can be eliminated using so-called replay compilation. This technique records a compilation plan during a first execution or profiling run of the application. During a second execution, the application is iterated twice: once to compile and optimise all methods found in the compilation plan, and a second time to perform the actual measurement. It turns out, however, that the current practice of using either a single plan – corresponding to the best performing profiling run – or a combined plan choosing the methods that were optimised in, say, more than half the profiling runs, is no match for using multiple plans. The variability observed in the plans themselves is too large to be captured by either of the current practices. Consequently, using multiple plans is definitely the better option. Moreover, this allows using a matched-pair approach in the data analysis, which results in tighter confidence intervals for the mean performance number. The third pitfall we examine is the usage of global performance numbers when tuning either an application or a virtual machine. We show that Java applications exhibit phase behaviour at the method level. This means that instances of the same method show more similarity to each other, behaviour-wise, than to instances of other methods. A phase can then be identified as a set of sub-trees of the dynamic call-tree, with each sub-tree headed by the same method. We present a two-step algorithm that correlates hardware performance counter data in step 2 with the phases determined in step 1. The information obtained can be applied to show the programmer which methods perform worse than average, for example with respect to the number of cache misses they incur. In the dissertation, we pay particular attention to statistical rigour. For each pitfall, we use statistics to demonstrate its presence. Hopefully this work will encourage other researchers to use more rigour in their work as well.
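
    The rigorous approach the dissertation advocates boils down to reporting a confidence interval for the mean over multiple VM invocations rather than a single "best" number. The sketch below is a simplified illustration under an assumed normal approximation; it is not the dissertation's actual harness or exact procedure:

```java
import java.util.Arrays;

// Report a confidence interval for mean execution time instead of a single "best" run.
// The 1.96 critical value assumes enough runs for the normal approximation (roughly 30+);
// for small samples a Student-t critical value should be used. Purely illustrative.
public final class ConfidenceInterval {
    public static double[] ci95(double[] runTimesMillis) {
        int n = runTimesMillis.length;
        double mean = Arrays.stream(runTimesMillis).average().getAsDouble();
        double variance = Arrays.stream(runTimesMillis)
                                .map(t -> (t - mean) * (t - mean))
                                .sum() / (n - 1);               // sample variance
        double halfWidth = 1.96 * Math.sqrt(variance / n);      // 95% level, normal approx.
        return new double[] { mean - halfWidth, mean + halfWidth };
    }

    public static void main(String[] args) {
        // Hypothetical timings; a real experiment would use many more VM invocations.
        double[] baseline  = { 412, 398, 405, 420, 401, 399, 415, 408, 403, 411 };
        double[] optimised = { 389, 392, 401, 385, 390, 395, 388, 396, 387, 391 };
        System.out.println("baseline  95% CI: " + Arrays.toString(ci95(baseline)));
        System.out.println("optimised 95% CI: " + Arrays.toString(ci95(optimised)));
        // If the two intervals overlap, the data do not support claiming a speedup at this
        // confidence level; non-overlap suggests a statistically visible difference.
    }
}
```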

    Chaos, Sunspots, and Automatic Stabilizers

    We study a one-sector growth model which is standard except for the presence of an externality in the production function. The set of competitive equilibria is large. It includes constant equilibria, sunspot equilibria, cyclical and chaotic equilibria, and equilibria with deterministic or stochastic regime switching. The efficient allocation is characterized by constant employment and a constant growth rate. We identify an income tax-subsidy schedule that supports the efficient allocation as the unique equilibrium outcome. That schedule has two properties: (i) it specifies the tax rate to be an increasing function of aggregate employment, and (ii) earnings are subsidized when aggregate employment is at its efficient level. The first feature eliminates inefficient, fluctuating equilibria, while the second induces agents to internalize the externality.
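
    As a concrete illustration of properties (i) and (ii), consider a linear schedule of the kind below. The functional form and symbols are shorthand for exposition, not the schedule derived in the paper:

```latex
% Illustrative linear tax schedule (not the paper's exact formula):
% \tau(N) is the tax rate as a function of aggregate employment N,
% N^* is the efficient employment level, with \gamma > 0 and \tau_0 < 0.
\[
  \tau(N) = \tau_0 + \gamma \,(N - N^*), \qquad \gamma > 0,\ \tau_0 < 0 .
\]
% Property (i): \tau'(N) = \gamma > 0, so the tax rate rises with aggregate employment,
% which removes the payoff to coordinating on fluctuating (sunspot, cyclical, chaotic) equilibria.
% Property (ii): \tau(N^*) = \tau_0 < 0, i.e. earnings are subsidised when employment is at its
% efficient level, which induces agents to internalise the production externality.
```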

    SNOMED CT standard ontology based on the ontology for general medical science

    Background: Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT. Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS). Results: The SCTO is implemented in OWL 2, to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2400 axioms, and 1555 annotations. It is publicly available through the bioportal at http://bioportal.bioontology.org/ontologies/SCTO/. Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
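
    The abstract states that SCTO terms are defined against OGMS upper-level classes in OWL 2. A minimal sketch of what such an alignment axiom looks like programmatically, using the OWL API, is shown below. The ontology IRI and class IRIs are placeholders, not the published SCTO or OGMS identifiers:

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

// Sketch: assert that an SCT-derived class is a subclass of an OGMS upper-level class.
// IRIs below are placeholders; consult the published SCTO/OGMS files for the real ones.
public final class SctoAlignmentSketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology scto = manager.createOntology(IRI.create("http://example.org/scto"));

        // Hypothetical SCT "clinical finding" class and a hypothetical OGMS upper-level class.
        OWLClass clinicalFinding = factory.getOWLClass(IRI.create("http://example.org/scto#ClinicalFinding"));
        OWLClass ogmsUpperClass = factory.getOWLClass(IRI.create("http://example.org/ogms#ClinicalAbnormality"));

        // The alignment axiom: ClinicalFinding SubClassOf the chosen OGMS class.
        OWLSubClassOfAxiom axiom = factory.getOWLSubClassOfAxiom(clinicalFinding, ogmsUpperClass);
        manager.addAxiom(scto, axiom);

        System.out.println("SCTO sketch now holds " + scto.getAxiomCount() + " axiom(s).");
    }
}
```

    An OWL 2 reasoner such as HermiT or ELK can then be run over the resulting ontology for the automatic inference and consistency checking mentioned in the abstract.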

    Architecture and Implementation of a Trust Model for Pervasive Applications

    Collaborative effort to share resources is a significant feature of pervasive computing environments. To achieve secure service discovery and sharing, and to distinguish between malevolent and benevolent entities, trust models must be defined. It is critical to estimate a device's initial trust value because of the transient nature of pervasive smart spaces; however, most of the prior research work on trust models for pervasive applications used the notion of constant initial trust assignment. In this paper, we design and implement a trust model called DIRT. We categorize services in different security levels and, depending on the service requester's context information, we calculate the initial trust value. Our trust value is assigned for each device and for each service. Our overall trust estimation for a service depends on the recommendations of the neighbouring devices, inference from other service-trust values for that device, and direct trust experience. We provide an extensive survey of related work, and we demonstrate the distinguishing features of our proposed model with respect to the existing models. We implement a healthcare-monitoring application and a location-based service prototype over DIRT. We also provide a performance analysis of the model with respect to some of its important characteristics tested in various scenarios.
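
    The abstract describes the overall trust estimate for a service as a combination of direct experience, neighbour recommendations, and inference from other service-trust values on the same device. The weighted combination below is a generic illustration with assumed weights; it is not DIRT's published formula:

```java
// Generic sketch of combining trust evidence per (device, service) pair.
// Weights and the clamping to [0, 1] are assumptions, not DIRT's actual parameters.
public final class TrustSketch {
    // Relative importance of each evidence source (chosen to sum to 1 here).
    private static final double W_DIRECT = 0.5;
    private static final double W_RECOMMENDATION = 0.3;
    private static final double W_INFERENCE = 0.2;

    /**
     * @param direct          trust from the device's own past interactions, in [0, 1]
     * @param recommendations trust values reported by neighbouring devices, each in [0, 1]
     * @param inferred        trust inferred from other services on the same device, in [0, 1]
     */
    public static double combine(double direct, double[] recommendations, double inferred) {
        double rec = 0.0;
        for (double r : recommendations) rec += r;
        if (recommendations.length > 0) rec /= recommendations.length;   // simple average
        double trust = W_DIRECT * direct + W_RECOMMENDATION * rec + W_INFERENCE * inferred;
        return Math.min(1.0, Math.max(0.0, trust));                      // keep in [0, 1]
    }

    public static void main(String[] args) {
        double t = combine(0.8, new double[] { 0.6, 0.9, 0.7 }, 0.75);
        System.out.printf("combined trust = %.3f%n", t);                 // prints 0.770
    }
}
```

    In DIRT, the initial value of the direct-experience component additionally depends on the requester's context and the security level of the requested service, which is the part a fixed-weight sketch like this does not capture.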

    Potential consequences of linear approximation in economics

    Econometric models ; Monetary policy ; Equilibrium (Economics)