
    Convergence and Cointegration

    This paper provides a new, unified, and flexible framework to measure and characterize convergence in prices. We formally define this notion and propose a model to represent a wide range of transition paths that converge to a common steady state. Our framework enables the econometric measurement of such transitional behaviors and the development of testing procedures. Specifically, we derive a statistical test to determine whether convergence exists and, if so, of which type: catching-up or steady-state. Applying this methodology to historic wheat prices yields a novel explanation of the convergence processes experienced during the 19th century.
    Keywords: price convergence, cointegration, law of one price.
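
    As an illustration of the kind of long-run price relationship the paper builds on, the sketch below runs a standard Engle-Granger cointegration test on two simulated regional price series using statsmodels. The paper's own convergence test (distinguishing catching-up from steady-state convergence) is not reproduced here, and the data are synthetic.

```python
# Illustrative sketch: Engle-Granger cointegration test between two
# simulated regional wheat price series. This only shows the standard
# cointegration check such convergence frameworks build on.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)

# Two log-price series sharing a common stochastic trend,
# so they are cointegrated by construction.
n = 500
common_trend = np.cumsum(rng.normal(0, 1, n))   # random walk
p_region_a = common_trend + rng.normal(0, 0.5, n)
p_region_b = 0.9 * common_trend + rng.normal(0, 0.5, n)

# Null hypothesis: no cointegration. A small p-value suggests the two
# price series share a long-run equilibrium (law of one price).
t_stat, p_value, crit_values = coint(p_region_a, p_region_b)
print(f"EG t-statistic: {t_stat:.3f}, p-value: {p_value:.4f}")
```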

    Production values as program quality signals in Spanish linear TV: A comparison of two periods

    Technology disruption, digitalization and media convergence have triggered a profound crisis in the television industry. In this context, quality is an essential strategic element for success, especially as consumers have learned through their experience with VOD, becoming more demanding and less loyal customers. Has the importance of quality signals changed with the emergence of new online alternatives? And has viewers' quality perception changed as well? Our research explores four production values (the host, content, the set, and technical quality) as TV program quality signals and their effect on the quality perception of entertainment programs of Spanish broadcasters. We compare two years, 2012 and 2016, a period during which the Spanish television market changed due to the appearance of OTT services. Using t-tests and regression models, we establish that the importance of quality signals varied over this period, with content proving more important and the set less so in 2016 as compared with 2012. Additionally, the results show that in 2016 the quality perception of linear TV entertainment programs depended more on subjective elements, such as liking and satisfaction, than on objective elements, unlike in 2012. Finally, our findings are discussed, and some managerial implications and future research directions are suggested.
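
    The described analysis relies on t-tests across the two survey waves and regressions of perceived quality on the four production values. A minimal sketch of both steps on simulated data follows; all variable names and scales are placeholders, not the study's actual instrument.

```python
# Illustrative sketch (simulated data): compare the rated importance of
# one quality signal across two survey waves with a t-test, then regress
# perceived quality on the four production values.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 300

# Simulated survey responses on 1-7 scales (placeholder names).
wave_2012 = rng.normal(5.2, 1.0, n)   # importance of "the set", 2012
wave_2016 = rng.normal(4.7, 1.0, n)   # importance of "the set", 2016
t_stat, p_value = stats.ttest_ind(wave_2012, wave_2016)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# OLS: perceived quality ~ host + content + set + technical quality.
X = pd.DataFrame({
    "host": rng.normal(5, 1, n),
    "content": rng.normal(5, 1, n),
    "set": rng.normal(5, 1, n),
    "technical": rng.normal(5, 1, n),
})
y = (0.2 * X["host"] + 0.5 * X["content"] + 0.1 * X["set"]
     + 0.2 * X["technical"] + rng.normal(0, 0.8, n))
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params.round(3))
```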

    How to measure inflation volatility: a note

    This paper proposes a statistical model and a conceptual framework to estimate inflation volatility assuming rational inattention, where the decay in the level of attention reflects the arrival of news in the market. We estimate trend inflation and the conditional inflation volatility for Germany, Spain, the euro area and the United States using monthly data from January 2002 to March 2022, and test whether inflation was equal to or below 2% in this period in these regions. We decompose inflation volatility into positive and negative surprise components and characterise the different inflation volatility scenarios during the Great Financial Crisis, the Sovereign Debt Crisis and the post-COVID period. Our volatility measure outperforms a GARCH(1,1) model and the rolling standard deviation of inflation in one-step-ahead volatility forecasts, both in-sample and out-of-sample. The methodology proposed in this article is appropriate for estimating the conditional volatility of macro-financial variables. We recommend the inclusion of this measure in inflation dynamics monitoring and forecasting exercises.
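
    For reference, here is a minimal sketch of the two baseline volatility measures the paper benchmarks against, applied to a simulated monthly inflation series. The GARCH(1,1) fit uses the third-party arch package, and the paper's own rational-inattention measure is not reproduced.

```python
# Illustrative sketch of the two baseline volatility measures: a rolling
# standard deviation and a GARCH(1,1) fit (via the `arch` package).
# The inflation series here is simulated.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(2)

# Simulated monthly year-on-year inflation, Jan 2002 - Mar 2022.
dates = pd.date_range("2002-01", "2022-03", freq="MS")
inflation = pd.Series(2.0 + np.cumsum(rng.normal(0, 0.15, len(dates))),
                      index=dates)

# Baseline 1: 24-month rolling standard deviation.
rolling_vol = inflation.rolling(window=24).std()

# Baseline 2: GARCH(1,1) on first differences of inflation.
d_infl = inflation.diff().dropna()
garch = arch_model(d_infl, vol="GARCH", p=1, q=1, mean="Constant")
res = garch.fit(disp="off")
garch_vol = res.conditional_volatility

print(rolling_vol.tail(3))
print(garch_vol.tail(3))
```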

    On the Functional Test of Special Function Units in GPUs

    The usage of Graphics Processing Units (GPUs) has extended from graphics applications to other domains where their high computational power is exploited (e.g., to implement Artificial Intelligence algorithms). These complex applications usually require highly intensive computations based on floating-point transcendental functions. GPUs may efficiently compute these functions in hardware using ad hoc Special Function Units (SFUs). However, a permanent fault in such units could be very critical (e.g., in safety-critical automotive applications). Thus, test methodologies for SFUs are strictly required to achieve the target reliability and safety levels. In this work, we present a functional test method based on a Software-Based Self-Test (SBST) approach targeting the SFUs in GPUs. This method exploits different approaches to build a test program and applies several optimization strategies that exploit the GPU parallelism to speed up the test procedure and reduce the required memory. The effectiveness of this methodology was proven by resorting to an open-source GPU model (FlexGripPlus) compatible with NVIDIA GPUs. The experimental results show that the proposed technique achieves a fault coverage of 90.75% and a testable fault coverage of up to 94.26%, reducing the required memory and test duration with respect to the pseudorandom strategies proposed by other authors.
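
    The sketch below illustrates, in plain Python, the signature-based checking idea that underlies SBST test programs: known stimuli are pushed through the function under test and the outputs are compressed into a signature compared against a fault-free golden value. On a real GPU this would run as a CUDA kernel exercising the SFU's hardware transcendentals; math.sin merely stands in here.

```python
# Conceptual sketch of signature-based self-test: feed a stimulus set
# through the function under test, compress the outputs into a
# signature, and compare against a golden signature.
import math
import struct

def signature(values):
    """XOR-compress float32 bit patterns into one 32-bit signature."""
    sig = 0
    for v in values:
        sig ^= struct.unpack("<I", struct.pack("<f", v))[0]
    return sig

# Stimulus set chosen to exercise the unit's input range.
stimuli = [i * 0.01 for i in range(-512, 512)]
outputs = [math.sin(x) for x in stimuli]   # function under test

golden_sig = signature(outputs)   # recorded once on fault-free hardware
test_sig = signature(outputs)     # recomputed on the device under test
print("PASS" if test_sig == golden_sig else "FAIL: fault detected")
```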

    Exploring Hardware Fault Impacts on Different Real Number Representations of the Structural Resilience of TCUs in GPUs

    The most recent generations of graphics processing units (GPUs) boost the execution of convolutional operations required by machine learning applications by resorting to specialized and efficient in-chip accelerators (Tensor Core Units, or TCUs) that operate on matrix multiplication tiles. Unfortunately, modern cutting-edge semiconductor technologies are increasingly prone to hardware defects, and the trend to highly stress TCUs during the execution of safety-critical and high-performance computing (HPC) applications increases the likelihood of TCUs producing different kinds of failures. In fact, the intrinsic resiliency of arithmetic units to hardware faults plays a crucial role in safety-critical applications using GPUs (e.g., in automotive, space, and autonomous robotics). Recently, new arithmetic formats have been proposed, particularly those suited to neural network execution. However, the reliability characterization of TCUs supporting different arithmetic formats was still lacking. In this work, we quantitatively assessed the impact of hardware faults in TCU structures while employing two distinct formats (floating-point and posit) and using two different configurations (16 and 32 bits) to represent real numbers. For the experimental evaluation, we resorted to an architectural description of a TCU core (PyOpenTCU) and performed 120 fault simulation campaigns, injecting around 200,000 faults per campaign and requiring around 32 days of computation. Our results demonstrate that TCUs using the posit format are less affected by faults than those using floating-point (by up to three orders of magnitude for 16 bits and up to twenty orders of magnitude for 32 bits). We also identified the most sensitive fault locations (i.e., those that produce the largest errors), thus paving the way to adopting smart hardening solutions.
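
    To give a flavor of the kind of campaign described, here is a minimal sketch of a single-bit fault-injection experiment on a 16-bit floating-point tile multiplication. The actual study injects faults into an architectural TCU model (PyOpenTCU) and also covers posit arithmetic, which NumPy does not provide.

```python
# Minimal sketch: flip single bits in one operand of a float16 4x4
# tile multiplication and measure the resulting output error.
import numpy as np

rng = np.random.default_rng(3)

def flip_bit(x_f16, bit):
    """Flip one bit of a float16 scalar and return the faulty value."""
    raw = np.float16(x_f16).view(np.uint16)
    return (raw ^ np.uint16(1 << bit)).view(np.float16)

A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
golden = A @ B

# Inject a fault into one operand element, one bit position at a time.
for bit in range(16):
    A_faulty = A.copy()
    A_faulty[0, 0] = flip_bit(A[0, 0], bit)
    err = np.abs((A_faulty @ B) - golden).max()
    print(f"bit {bit:2d}: max abs output error = {float(err):.4g}")
```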

    Experimental Study for the Stripping of PTFE Coatings on Al-Mg Substrates Using Dry Abrasive Materials

    Polytetrafluoroethylene (PTFE) coatings are used in many applications and processing industries. With use, they wear out, lose their properties and must be replaced with new coatings when the cost of the component justifies it. There are different stripping techniques, but almost all of them are very difficult and require strict environmental controls. Approaching the process with efficient and more sustainable techniques is a challenge. In the present work, we have studied the stripping of PTFE coatings by dry abrasive blasting (one step) as an alternative to carbonization plus sandblasting procedures (two steps). For this purpose, different types of abrasives were selected: brown corundum, white corundum, glass microspheres, plastic particles, and walnut shell. The tests were performed at pressures from 0.4 to 0.6 MPa on PTFE-coated aluminium substrates of EN AW-5182 H111 alloy. Stripping rates, surface roughness, and substrate hardness were studied. Scanning electron microscopy (SEM) images of sandblasted specimens were also obtained. All abrasives improved mechanical and surface properties in the one-step process compared with the two-step process. Plastic particles and glass microspheres are the most appropriate abrasives for the one-step process, which increases the hardness and the Ra roughness of the substrate. Corundum abrasives enable the highest stripping rates.

    Inverse Layer Dependence of Friction on Chemically Doped MoS2

    We present the results of atomic-force-microscopy-based friction measurements on Re-doped molybdenum disulfide (MoS2). In stark contrast to the seemingly universal observation of decreasing friction with increasing number of layers on two-dimensional (2D) materials, friction on Re-doped MoS2 exhibits an anomalous, i.e., inverse, dependence on the number of layers. Raman spectroscopy measurements revealed signatures of Re intercalation, leading to a decoupling between neighboring MoS2 layers and enhanced electron-phonon interactions, thus resulting in increasing friction with increasing number of layers: a new paradigm in the mechanics of 2D materials.

    A Multi-level Approach to Evaluate the Impact of GPU Permanent Faults on CNN's Reliability

    Graphics processing units (GPUs) are widely used to accelerate Artificial Intelligence applications, such as those based on Convolutional Neural Networks (CNNs). Since in some domains in which CNNs are heavily employed (e.g., automotive and robotics) the expected lifetime of GPUs is over ten years, it is of paramount importance to study the impact of permanent faults (e.g., due to aging). Crucially, while the impact of transient faults on GPUs running CNNs has been widely studied, an accurate evaluation of the impact of permanent faults is still lacking. Performing this evaluation is challenging due to the complexity of GPU devices and of the software implementing a CNN. In this work, we propose a methodology that combines the accuracy of gate-level fault simulation with the speed and flexibility of software fault injection to evaluate the effects of permanent hardware faults affecting a GPU. First, we profile the low-level GPU instructions executed during the CNN inference. Then, using extensive gate-level fault injection campaigns, we provide an accurate analysis of the effects of permanent faults on the internal modules executing the targeted instructions. Finally, we propagate these effects using fast software-based fault injection. The method allows, for the first time, estimating the percentage of permanent faults that lead the CNN to produce wrong results (i.e., to change the outcome of its computation). The method, which allows accuracy to be flexibly traded off against the required computational effort, is demonstrated using LeNet running on an Ampere NVIDIA GPU as a case study. It reduces the computational effort of the evaluation by several orders of magnitude with respect to plain gate- and RTL-level fault simulation.
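
    As a rough illustration of the final, software-based stage, the sketch below injects bit-flip corruptions into the weights of a tiny random dense network and counts how many injections change the top-1 class. The gate-level characterization and the actual LeNet/GPU setup are not reproduced; the network and fault model are placeholders.

```python
# Hedged sketch of software-based fault injection: corrupt one weight
# per run, re-run inference, and count runs whose top-1 class changes.
import numpy as np

rng = np.random.default_rng(4)

# Tiny random two-layer dense "network" standing in for a real CNN.
W1 = rng.standard_normal((64, 32)).astype(np.float32)
W2 = rng.standard_normal((32, 10)).astype(np.float32)

def infer(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)   # ReLU layer
    return np.argmax(h @ w2)      # top-1 class

def flip_bit(value, bit):
    """Flip one bit of a float32 scalar and return the faulty value."""
    raw = np.float32(value).view(np.uint32)
    return (raw ^ np.uint32(1 << bit)).view(np.float32)

x = rng.standard_normal(64).astype(np.float32)
golden = infer(x, W1, W2)

wrong = 0
trials = 200
for _ in range(trials):
    w1_faulty = W1.copy()
    i, j = rng.integers(64), rng.integers(32)
    w1_faulty[i, j] = flip_bit(W1[i, j], int(rng.integers(32)))
    if infer(x, w1_faulty, W2) != golden:
        wrong += 1

print(f"{wrong}/{trials} injections changed the top-1 class")
```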