
    Modelling Taylor Rule Uncertainty

    In recent years, the way monetary policy is conducted, and in particular the role of so-called monetary policy rules, has attracted widespread attention. The conventional approach in the literature consists in estimating reaction functions for a monetary authority (the Federal Reserve, in most cases) in which a nominal interest rate, directly or indirectly controlled by that monetary authority, is adjusted in response to deviations of inflation (current or expected) from target and of output from potential. These reaction functions, usually called Taylor rules after John Taylor's seminal 1993 paper, match a number of normative principles set forth in the literature for optimal monetary policy. This is a good reason for the growing prominence of Taylor rule estimations in debates about the current and prospective monetary policy stance. However, they are usually presented as point estimates for the interest rate, giving a sense of accuracy that can be misleading. Typically, no emphasis is placed on the risks of those estimates and, at least to a certain extent, the reader is encouraged to concentrate on an apparently precise central projection, ignoring the wide degree of uncertainty and the operational difficulties surrounding the estimates. As in any forecasting exercise, there is uncertainty regarding both the estimated parameters and the way the explanatory variables evolve over the forecasting horizon. Our work presents a methodology to estimate a probability density function for the interest rate resulting from the application of a Taylor rule (the Taylor interest rate), acknowledging that not only the explanatory variables but also the parameters of the rule are random variables.
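    The idea of treating both the rule's parameters and its explanatory variables as random variables can be sketched with a simple Monte Carlo simulation of the classic Taylor rule i = r* + π + a(π − π*) + b·gap. All distributions and coefficient values below are purely illustrative assumptions, not estimates from the paper:

```python
import random

def taylor_rate(r_star, pi, pi_target, gap, a, b):
    """Taylor (1993) rule: nominal rate responds to inflation and output gaps."""
    return r_star + pi + a * (pi - pi_target) + b * gap

def simulate(n=100_000, seed=42):
    """Draw parameters AND explanatory variables at random, yielding a
    distribution for the Taylor interest rate rather than a point estimate."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n):
        a = rng.gauss(0.5, 0.1)    # inflation response coefficient (assumed)
        b = rng.gauss(0.5, 0.1)    # output-gap response coefficient (assumed)
        pi = rng.gauss(2.5, 0.5)   # inflation, % (assumed distribution)
        gap = rng.gauss(0.0, 1.0)  # output gap, % (assumed distribution)
        rates.append(taylor_rate(2.0, pi, 2.0, gap, a, b))
    rates.sort()
    # median and a 90% interval summarise the estimated density
    return rates[n // 2], rates[int(0.05 * n)], rates[int(0.95 * n)]

median, p5, p95 = simulate()
```

    Reporting the interval (p5, p95) alongside the median makes the uncertainty around the central projection explicit, which is the abstract's core point.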

    Conjugated linoleic acid reduces permeability and fluidity of adipose plasma membranes from obese Zucker rats

    NOTICE: this is the author’s version of a work that was accepted for publication in Biochemical and Biophysical Research Communications. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Biochemical and Biophysical Research Communications. July 2010; 398 (2): 199-204. Conjugated linoleic acid (CLA) is a dietary fatty acid frequently used as a body fat reducing agent whose effects upon cell membranes and cellular function remain unknown. Obese Zucker rats were fed atherogenic diets containing saturated fats of vegetable or animal origin with or without 1% CLA, as a mixture of cis(c)9,trans(t)11 and t10,c12 isomers. Plasma membrane vesicles obtained from visceral adipose tissue were used to assess the effectiveness of dietary fat and CLA membrane incorporation and its outcome on fluidity and permeability to water and glycerol. A significant decrease in adipose membrane fluidity was correlated with the changes observed in permeability, which seem to be caused by the incorporation of the t10,c12 CLA isomer into membrane phospholipids. These results indicate that CLA supplementation in obese Zucker rats fed saturated and cholesterol rich diets reduces the fluidity and permeability of adipose membranes, therefore not supporting CLA as a body fat reducing agent through membrane fluidification in obese fat consumers.

    Using a hardware coprocessor for message scheduling in fieldbus-based distributed systems

    “Copyright © [2001] IEEE. Reprinted from 8th IEEE International Conference on Electronics, Circuits and Systems. ISBN:0-7803-7057-0. This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.” Fieldbus based distributed embedded systems used in real-time applications tend to be inflexible in what concerns changing operational parameters on-line. Recent techniques such as the planning scheduler can avoid this problem but do not show adequate responsiveness for automatic negotiation of parameter values. In this paper the use of ASIC based coprocessors for message scheduling is proposed to solve the problem. Such coprocessors can be used in the arbiter nodes of systems based on widely used producer-consumer fieldbuses like WorldFIP and CAN. A prototype built with a Xilinx FPGA is presented. First performance results are shown and analyzed. They demonstrate that the device is able to achieve the expected performance and also point to the possibility of evolution to an almost dynamic scheduling approach.
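    The kind of plan a planning scheduler produces for periodic fieldbus traffic can be illustrated with a toy software sketch: a static table mapping each elementary bus cycle to the messages transmitted in it. This is a simplified rate-monotonic-style planner for illustration only, not the ASIC algorithm described in the paper, and the message names and periods are invented:

```python
def build_plan(messages, plan_length):
    """Build a static transmission plan for periodic fieldbus messages.
    messages: list of (name, period) with the period in elementary cycles.
    Returns a list mapping each elementary cycle to the messages sent in it."""
    plan = [[] for _ in range(plan_length)]
    # schedule shortest-period (highest-rate) messages first
    for name, period in sorted(messages, key=lambda m: m[1]):
        for cycle in range(0, plan_length, period):
            plan[cycle].append(name)
    return plan

# hypothetical process variables: one sampled every 2 cycles, one every 4
plan = build_plan([("pressure", 2), ("temperature", 4)], plan_length=8)
```

    Recomputing such a table on-line for new parameter values is exactly the work the paper offloads to an FPGA/ASIC coprocessor in the arbiter node.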

    HEP-Frame: A software engineered framework to aid the development and efficient multicore execution of scientific code

    This communication presents an evolutionary software prototype of a user-centered Highly Efficient Pipelined Framework, HEP-Frame, to aid the development of sustainable parallel scientific code with a flexible pipeline structure. HEP-Frame is the result of a tight collaboration between computational scientists and software engineers: it aims to improve scientists' coding productivity, ensuring an efficient parallel execution on a wide set of multicore systems, with both HPC and HTC techniques. The current prototype complies with the requirements of an actual scientific code, includes desirable sustainability features and supports at compile time additional plugin interfaces for other scientific fields. The porting and development productivity was assessed and preliminary efficiency results are promising. This work was supported by FCT (Fundação para a Ciência e Tecnologia) within Project Scope (UID/CEC/00319/2013), by LIP (Laboratório de Instrumentação e Física Experimental de Partículas) and by Project Search-ON2 (NORTE-07-0162-FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework, through the European Regional Development Fund.
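    The flexible pipeline-with-plugins structure the abstract describes can be sketched in a few lines: stages are registered as plugins, each stage processes an event and may filter it out. This is a minimal illustrative stand-in, not HEP-Frame's actual API; the stage names and event fields are invented:

```python
class Pipeline:
    """Minimal pipelined-analysis sketch: each registered stage processes an
    event dict and may filter it out by returning None."""
    def __init__(self):
        self.stages = []

    def stage(self, fn):
        """Register a processing stage (plugin-style, via decorator)."""
        self.stages.append(fn)
        return fn

    def run(self, events):
        results = []
        for ev in events:
            for fn in self.stages:
                ev = fn(ev)
                if ev is None:      # event filtered out: stop its pipeline
                    break
            else:
                results.append(ev)  # event survived every stage
        return results

pipe = Pipeline()

@pipe.stage
def select(ev):                     # hypothetical selection cut
    return ev if ev["energy"] > 10 else None

@pipe.stage
def calibrate(ev):                  # hypothetical calibration step
    ev["energy"] *= 2
    return ev

out = pipe.run([{"energy": 5}, {"energy": 20}])
```

    In a real framework each stage would run in parallel across events on a multicore system; the decorator-based registration mirrors the compile-time plugin interfaces mentioned in the abstract.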

    Tuning pipelined scientific data analyses for efficient multicore execution

    Scientific data analyses often apply a pipelined sequence of computational tasks to independent datasets. Each task in the pipeline captures and processes a dataset element, may be dependent on other tasks in the pipeline, may have a different computational complexity and may be filtered out from progressing in the pipeline. The goal of this work is to develop an efficient scheduler that automatically (i) manages a parallel data reading and an adequate data structure creation, (ii) adaptively defines the most efficient order of pipeline execution of the tasks, considering their inter-dependence and both the filtering out rate and the computational weight, and (iii) manages the parallel execution of the computational tasks in a multicore system, applied to the same or to different dataset elements. A real case study data analysis application from High Energy Physics (HEP) was used to validate the efficiency of this scheduler. Preliminary results show an impressive performance improvement of the pipeline tuning when compared to the original sequential HEP code (up to a 35x speedup in a dual 12-core system), and also show significant performance speedups over conventional parallelization approaches of this case study application (up to 10x faster in the same system). Project Search-ON2 (NORTE-07-0162-FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework, through the European Regional Development Fund.
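    Point (ii), ordering tasks by filtering rate and computational weight, can be illustrated with the standard predicate-ordering heuristic for independent filters: run a task earlier the smaller its cost/(1 − pass_rate) ratio, so cheap, highly selective cuts discard events before expensive ones run. This is a static sketch under that assumption; the paper's scheduler additionally adapts at runtime and honours inter-task dependences. All task names and numbers are invented:

```python
def order_tasks(tasks):
    """Order independent filtering tasks to minimise expected pipeline cost.
    Each task is (name, cost, pass_rate): rank ascending by cost/(1-pass_rate)."""
    return sorted(tasks, key=lambda t: t[1] / (1.0 - t[2])
                  if t[2] < 1.0 else float("inf"))

def expected_cost(tasks):
    """Expected per-event cost of running the tasks in the given order:
    each task's cost is paid only by events surviving the previous cuts."""
    total, surviving = 0.0, 1.0
    for _name, cost, pass_rate in tasks:
        total += surviving * cost
        surviving *= pass_rate
    return total

# hypothetical cuts: same selectivity, very different computational weight
tasks = [("heavy_cut", 10.0, 0.5), ("cheap_cut", 1.0, 0.5)]
ordered = order_tasks(tasks)
```

    Here reordering drops the expected per-event cost from 10.5 (heavy cut first) to 6.0 (cheap cut first), the same effect the scheduler exploits across a whole HEP analysis pipeline.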

    Removing inefficiencies from scientific code : the study of the Higgs boson couplings to top quarks

    Published in "Computational science and its applications – ICCSA 2014 : proceedings", Series: Lecture notes in computer science, vol. 8582. This paper presents a set of methods and techniques to remove inefficiencies in a data analysis application used in searches by the ATLAS Experiment at the Large Hadron Collider. Profiling scientific code helped to pinpoint design and runtime inefficiencies, the former due to coding and data structure design. The data analysis code used by groups doing searches in the ATLAS Experiment helped to clearly identify some of these inefficiencies and to give suggestions on how to prevent and overcome those common situations in scientific code, improving the efficient use of available computational resources in a parallel homogeneous platform. This work is funded by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2014, by LIP (Laboratório de Instrumentação e Física Experimental de Partículas), and the SeARCH cluster (REEQ/443/EEI/2005).

    On requirements engineering for reactive systems: a formal methodology

    This paper introduces a rigorous methodology for the requirements specification of systems that react to external stimuli and consequently evolve through different operational modes, providing, in each of them, different functionalities. The proposed methodology proceeds in three stages, enriching a simple state machine with local algebraic specifications. It resorts to an expressive variant of hybrid logic which is later translated into first-order logic to allow for ample automatic tool support.
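    The core idea, a state machine whose states are operational modes, each offering its own functionality, can be made concrete with a plain-Python stand-in. This is only an informal illustration of the modelling concept; the paper's actual methodology uses hybrid logic and algebraic specifications, and the modes, stimuli and operations below are invented:

```python
class ReactiveSystem:
    """Mode-based reactive system: transitions fire on external stimuli,
    and each operational mode offers a different set of operations."""
    def __init__(self):
        self.mode = "standby"
        # (current mode, stimulus) -> next mode
        self.transitions = {("standby", "start"): "running",
                            ("running", "fault"): "degraded",
                            ("degraded", "reset"): "standby"}
        # functionality available in each mode (local specification stand-in)
        self.operations = {"standby": ["configure"],
                           "running": ["process", "report"],
                           "degraded": ["report"]}

    def react(self, stimulus):
        """Evolve to the next mode; unknown stimuli leave the mode unchanged."""
        self.mode = self.transitions.get((self.mode, stimulus), self.mode)

    def available(self):
        return self.operations[self.mode]

machine = ReactiveSystem()
machine.react("start")
```

    In the paper's setting, each mode's operation set would be captured by a local algebraic specification, and properties of the whole machine would be stated in hybrid logic and discharged by first-order provers.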