    THE PROBLEM OF ESTIMATING CAUSAL RELATIONS BY REGRESSING ACCOUNTING (SEMI) IDENTITIES

    Inferences about the coefficient values of a model estimated with a linear regression cannot be made when both the dependent and the independent variable are part of an accounting (semi-)identity. The coefficients no longer indicate a causal relation, since they must adapt to satisfy the identity. A good example is the investment-cash flow sensitivity model.
    Keywords: investment-cash flow sensitivities, accounting identities, accounting semi-identities
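    To see why an identity mechanically pins down the coefficient, consider a minimal simulation (an illustrative sketch, not from the paper; the identity I = CF + N, with N a hypothetical external-financing term, is assumed for concreteness):

```python
import random

random.seed(0)

n = 10_000
# Hypothetical accounting identity: investment I = cash flow CF + external financing N.
cf = [random.gauss(10, 2) for _ in range(n)]
ext = [random.gauss(0, 1) for _ in range(n)]   # drawn independently of CF
inv = [c + e for c, e in zip(cf, ext)]         # identity holds by construction

# OLS slope of a simple regression of I on CF.
mean_cf = sum(cf) / n
mean_inv = sum(inv) / n
cov = sum((c - mean_cf) * (i - mean_inv) for c, i in zip(cf, inv)) / n
var = sum((c - mean_cf) ** 2 for c in cf) / n
beta = cov / var

# Because I and CF are tied by the identity, beta is forced towards
# 1 + Cov(N, CF)/Var(CF), which is ~1 here, whatever the causal story is.
print(round(beta, 2))
```

    The estimated slope says nothing about causality: it is fixed by the construction of the variables, which is the paper's point.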

    Small shops for sale! The effects of big-box openings on grocery stores

    This paper evaluates the effects of big-box openings on the closure of grocery stores at the municipality level. To estimate these effects, I use a discontinuity in commercial regulation in Spain as the source of exogenous variation for the period 2003 to 2011. More specifically, this regulation, which varies by region, establishes entry barriers for big-box stores in municipalities of fewer than 10,000 inhabitants. I first test whether there is a discontinuity in the number of big-box openings when crossing the population threshold from regulated to non-regulated areas. This first stage shows that non-regulated municipalities recorded 0.3 more big-box openings than regulated ones. I then use this discontinuity as an instrument to examine the effects of these openings on the number of grocery stores. The results show that, four years after a big-box opening, between 20 and 30% of the grocery stores in the municipality have disappeared. However, although a big-box opening is a serious threat to grocery stores, the results also indicate that the city centre's activity does not suffer, since the vacated commercial premises are taken over by new small retail stores. Additionally, when examining by typology, conventional big-boxes (those selling well-known brands) seem to compete more with grocery stores than discount big-boxes (those selling their own, lower-priced brands) do, and the former are therefore more instrumental in forcing them to close down.
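    The two-stage logic above can be sketched as a simple Wald/IV estimator on simulated data (a hypothetical illustration with assumed effect sizes and noise, not the paper's dataset or estimation code):

```python
import random

random.seed(1)

# Hypothetical municipalities around the 10,000-inhabitant threshold.
munis = []
for _ in range(5000):
    pop = random.randint(8000, 12000)
    above = pop >= 10000                              # non-regulated side
    # First stage: 0.3 extra big-box openings above the threshold (assumed).
    openings = (0.3 if above else 0.0) + max(0.0, random.gauss(0.1, 0.2))
    # Outcome: each opening closes ~2.5 of 10 grocery stores (assumed effect).
    stores = 10 - 2.5 * openings + random.gauss(0, 1)
    munis.append((above, openings, stores))

# Wald/IV estimate: (jump in outcome) / (jump in openings) at the threshold.
hi = [m for m in munis if m[0]]
lo = [m for m in munis if not m[0]]
first_stage = sum(m[1] for m in hi) / len(hi) - sum(m[1] for m in lo) / len(lo)
reduced_form = sum(m[2] for m in hi) / len(hi) - sum(m[2] for m in lo) / len(lo)
effect_per_opening = reduced_form / first_stage
print(round(first_stage, 2), round(effect_per_opening, 1))
```

    The ratio recovers the assumed per-opening effect because the threshold shifts openings but is otherwise unrelated to store closures, which is the identifying assumption of the design.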

    Comparing MapReduce and pipeline implementations for counting triangles

    A common way to define a parallel solution for a computational problem is to apply the Divide and Conquer paradigm so that each processor acts on its own data and processors are scheduled in a parallel fashion. MapReduce is a programming model that follows this paradigm and allows efficient solutions to be defined, both by decomposing a problem into steps on subsets of the input data and by combining the results of each step to produce the final results. Although it is used to implement a wide variety of computational problems, MapReduce performance can be negatively affected whenever the replication factor grows or the size of the input is larger than the resources available at each processor. In this paper we show an alternative approach to implementing the Divide and Conquer paradigm, named pipeline. The main features of the pipeline approach are illustrated with a parallel implementation of the well-known problem of counting triangles in a graph. This problem is especially interesting when the input graph does not fit in memory or is dynamically generated. To evaluate the properties of the pipeline, a dynamic pipeline of processes and an ad hoc version of MapReduce are implemented in the language Go, exploiting its ability to deal with channels and spawned processes. An empirical evaluation is conducted on graphs of different sizes and densities. The observed results suggest that the pipeline allows for an efficient solution to the problem of counting triangles in a graph, particularly in dense and large graphs, drastically reducing the execution time with respect to the MapReduce implementation.
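    As a point of reference, the sequential core of the triangle-counting problem can be sketched as follows (a minimal single-process Python baseline, not the paper's Go pipeline or MapReduce implementations):

```python
from itertools import combinations

def count_triangles(edges):
    """Count each triangle exactly once by intersecting the neighbour
    sets of an edge's endpoints and keeping only ordered closures."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u in adj:
        for v in adj[u]:
            if u < v:
                # a common neighbour w with w > v closes triangle u < v < w
                count += sum(1 for w in adj[u] & adj[v] if w > v)
    return count

# The complete graph K4 contains 4 triangles.
k4_edges = list(combinations(range(4), 2))
print(count_triangles(k4_edges))  # 4
```

    Both the MapReduce and the pipeline approaches parallelize exactly this set-intersection work, partitioning edges or streaming them through stages instead of holding the whole adjacency structure in one memory space.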

    Dynamic Pipeline: an adaptive solution for big data

    The Dynamic Pipeline is a concurrent programming pattern amenable to parallelization. Furthermore, the number of processing units used in the parallelization is adjusted to the size of the problem, and each processing unit uses a reduced memory footprint. Contrary to other approaches, the Dynamic Pipeline can be seen as a generalization of the (parallel) Divide and Conquer schema, where systems can be reconfigured depending on the particular instance of the problem to be solved. We claim that the Dynamic Pipeline is useful for dealing with Big Data related problems. In particular, we have designed and implemented algorithms for computing graph parameters such as the number of triangles, connected components, and maximal cliques, among others. Currently, we are focused on designing and implementing an efficient algorithm to evaluate conjunctive queries.
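    The idea of growing the pipeline with the problem instance can be illustrated with the classic sieve example (a Python-generator analogue of channel-connected processes; the paper's implementations target graph problems in Go, so this is only a sketch of the pattern):

```python
def integers(start, end):
    yield from range(start, end + 1)

def filter_stage(prime, upstream):
    # one "process" per discovered prime, spawned dynamically
    for n in upstream:
        if n % prime:
            yield n

def dynamic_sieve(limit):
    """Dynamic pipeline: a new filter stage is appended each time a
    value survives to the end of the pipeline, so the pipeline's length
    adapts to the particular problem instance."""
    stream = integers(2, limit)
    primes = []
    while True:
        try:
            p = next(stream)
        except StopIteration:
            return primes
        primes.append(p)
        stream = filter_stage(p, stream)

print(dynamic_sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

    Each stage holds only its own small state (here, one prime), mirroring the reduced per-unit memory footprint claimed for the pattern.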

    Mismatch distance term compensation in centroid configurations with nonzero-area devices

    This paper presents an analytical approach to distance term compensation in mismatch models of integrated devices. Firstly, the conditions that minimize parameter mismatch are examined under the assumption of zero-area devices. The analytical developments are illustrated using centroid configurations. Then, deviations from the previous approach due to nonzero device areas are studied and evaluated.
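    The zero-area intuition can be illustrated numerically: in a cross-coupled common-centroid placement, the centroids of the matched devices coincide, so the linear distance term of a parameter gradient cancels (a hypothetical 2x2 example, not taken from the paper):

```python
def centroid(cells):
    xs = [x for x, y in cells]
    ys = [y for x, y in cells]
    return (sum(xs) / len(cells), sum(ys) / len(cells))

# Cross-coupled 2x2 placement of unit devices (hypothetical layout):
#   A B
#   B A
A = [(0, 0), (1, 1)]
B = [(1, 0), (0, 1)]

# Coinciding centroids cancel the linear (distance) term of a parameter
# gradient for ideal zero-area devices; nonzero areas reintroduce residuals,
# which is the deviation the paper analyzes.
print(centroid(A) == centroid(B))  # True
```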

    A basic building block approach to CMOS design of analog neuro/fuzzy systems

    This paper outlines a systematic approach to designing fuzzy inference systems using analog integrated circuits in standard CMOS VLSI technologies. The proposed circuit building blocks are arranged in a layered neuro/fuzzy architecture composed of five layers: fuzzification, T-norm, normalization, consequent, and output. Inference is performed using Takagi and Sugeno's (1989) IF-THEN rules, particularly those whose output contains only a constant term: a singleton. A simple CMOS circuit with a tunable bell-like transfer characteristic is used for fuzzification. The inputs to this circuit are voltages, while the outputs are currents. The circuit blocks proposed for the remaining layers operate in the current-mode domain. Innovative circuits are proposed for the T-norm and normalization layers; the other two layers use current mirrors and KCL. All the proposed circuits emphasize simplicity at the circuit level, a prerequisite for increasing system-level complexity and operation speed. A 3-input, 4-rule controller has been designed for demonstration purposes in a 1.6 µm CMOS single-poly, double-metal technology. We include measurements from prototypes of the membership function block and detailed HSPICE simulations of the whole controller. These results indicate an operation speed in the range of 5 MFLIPS (million fuzzy logic inferences per second) with systematic errors below 1%.
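    The five-layer inference flow can be sketched in software as a zero-order Takagi-Sugeno system with bell-shaped membership functions, a product T-norm, and singleton consequents (an illustrative model with made-up rule parameters, not the controller's actual design):

```python
def bell(x, center, width):
    # bell-shaped membership, analogous to the tunable CMOS fuzzifier
    return 1.0 / (1.0 + ((x - center) / width) ** 2)

def ts_singleton(x1, x2, rules):
    """Zero-order Takagi-Sugeno inference: fuzzify, T-norm (product),
    normalize, then sum singleton consequents, mirroring the five-layer
    fuzzification / T-norm / normalization / consequent / output stack."""
    weights = [bell(x1, c1, w1) * bell(x2, c2, w2)
               for (c1, w1, c2, w2, _) in rules]
    total = sum(weights)
    return sum(w * s for w, (_, _, _, _, s) in zip(weights, rules)) / total

# Four hypothetical rules: (center1, width1, center2, width2, singleton).
rules = [(0, 1, 0, 1, 0.0),
         (0, 1, 1, 1, 0.5),
         (1, 1, 0, 1, 0.5),
         (1, 1, 1, 1, 1.0)]

print(round(ts_singleton(0.0, 0.0, rules), 3))  # 0.333
```

    In the chip, the normalization and the final weighted sum are done with current division, current mirrors, and KCL rather than arithmetic, but the computed function is the same.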

    Method based on life cycle assessment and TOPSIS to integrate environmental award criteria into green public procurement

    Green public procurement (GPP) aims to integrate environmental criteria into public tenders as an instrument to develop and encourage the production and consumption of sustainable products and services. The inclusion of award criteria in GPP is a key factor in its success. To this end, a new method for assessing environmental award criteria in GPP processes is introduced in this study, providing easy and effective communication of the environmental benefits of the products and services purchased. The method is intended for use by public authorities and companies. Its main novelty lies in its ability to evaluate the achievement of each award criterion during the GPP process using a simplified life cycle assessment methodology and a further reduction of the environmental indicators into one score using TOPSIS. The method is applied to the public procurement of urban furniture as a case study.
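    The TOPSIS aggregation step can be sketched as follows (a generic TOPSIS implementation with hypothetical criteria, weights, and tender data, not the study's actual indicators):

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS: normalize each column, apply weights, find the ideal and
    anti-ideal points, and score alternatives by relative closeness."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical tenders scored on (cost [lower is better], CO2 [lower], durability [higher]).
matrix = [[100, 50, 8],
          [120, 30, 9],
          [ 90, 70, 6]]
scores = topsis(matrix, weights=[0.4, 0.4, 0.2], benefit=[False, False, True])
print(max(range(3), key=scores.__getitem__))  # 1
```

    Each alternative's score lies in [0, 1]; the single figure per tender is what makes the environmental comparison easy to communicate in the award phase.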