
    Predatory Pricing Policy under EC and US Law

    Get PDF
    Predatory pricing poses a dilemma that has perplexed and intrigued the competition community for many years. It is one of the most discussed topics in antitrust economics, as the critical issue is to meld economic insights with sound legal rules. Despite the energy devoted to the subject by many distinguished observers from the economic and legal professions, and their attempts to find proper rules that can be applied by competition policy authorities, little agreement has emerged. Predatory business behaviour takes various forms, e.g. non-price predation. This paper, however, deals with the particularly significant category of predatory conduct that could be called the "traditional" model of predatory pricing. The discussion is further based on the consensus in modern economics that predatory pricing can be a successful and therefore rational business strategy.
    The basic concept of predatory pricing can roughly be described as follows. When a company is accused of predatory pricing, it is accused of pricing at levels that are unreasonably low, whether because the prices are below some measure of cost or because they otherwise generate an inadequate return. So far, there seems to be nothing wrong with low pricing, since low prices are apparently beneficial for the customer and are in fact usually the result and aim of a free market and healthy competition; low prices are the hallmark of competition. On the other hand, history and economic theory teach that predatory pricing can be an instrument of abuse. The predator offers its goods or services at unrealistically low prices in order to achieve a longer-term objective. The predatory company may be attempting to deter a rival's entry into the market or to drive it out of the market, so that the predator attains a monopoly position and is then able to recoup the losses from its below-cost selling period, along with making even greater profits by keeping prices at a high level. This subsequently turns the apparent benefit of the former lower price into its opposite, hurting the customer, the rival and thus competition as such through the unfair practice.
    However, even though this basic theory seems straightforward, the crux of determining predatory pricing lies in the detail. The difficulty of assessing predatory pricing is rooted in the interplay of the economic elements and the legal aspects, hence the need to merge economic insights with practically workable rules. The critical issue for antitrust analysis is to distinguish, in a practical manner, predatory conduct from merely healthy competition. Tests on how to draw the thin line between unlawful conduct and healthy competition are disputed in the academic debate in mind-numbing detail. The only basic agreement among the wide-ranging suggested approaches appears to be that scrutinizing a company's conduct requires careful examination and factual inquiry, guided by a sound legal rule and a thorough economic analysis. However, disagreement is vast concerning what constitutes a proper and workable rule. Views range from the position that predatory pricing occurs rather seldom and that attempts to restrict it harm more than they help, to detailed economic analysis tests which seem to overload the courts' ability to work efficiently. The concept of predatory pricing has thus been a familiar one for many years.
    It was not until the last two decades, however, that a new literature in economics and law emerged which re-examines the logic of the predatory pricing strategy more generally, involving strategic, game-theoretic analyses of imperfectly competitive behaviour in contrast to the more standard economic logic embodied in the Chicago school of thought, along with a deeper understanding of imperfect information between competitors. In the United States, predatory pricing has been of concern at least since the perceived activities of J. D. Rockefeller's Standard Oil Company helped to give birth to the Clayton Act in 1914. In the EC, however, it was not until the late 1980s that the ECJ and the Commission had the chance to deal with predatory pricing cases, most prominently the renowned AKZO case. Therefore one may perceive that EC competition law can draw from the US experience. This is not least illustrated by the fact that, for the most part, theories on predatory pricing have been developed by legal and economic scholars in the US. Courts and competition authorities subsequently had the chance to investigate predatory pricing claims and develop their own tests, incorporating the theories which emerged in the academic debate over the last 20 years. In the EC, the Commission and the ECJ have in recent times decided predatory pricing cases involving market-dominating companies such as AKZO, Tetra Pak and Irish Sugar. The US Supreme Court, on the other hand, set new standards for identifying the issue in its landmark Brooke Group decision. These decisions were both criticized and welcomed by the competition community, clarifying the approaches to predatory pricing but at the same time leaving several problems unsolved. The aim of this paper is to illustrate the problem of predatory pricing with a view to both sides of the Atlantic and to analyse the different approaches put forward in the scholarly legal and economic debate and their utilization by the competition authorities. To achieve this, the phenomenon of predatory pricing will first be described in a general manner, followed by a look at the economics behind a predatory business strategy. Then the main theories on how to assess predatory pricing will be scrutinised. These theories cover a wide array of approaches, recognising that on the one hand predatory pricing can be an abuse and on the other hand that price reductions are the hallmark of competition. After that, the legal provisions in the EU and US under which predatory pricing is dealt with will be explained. Subsequently, the leading decisions on the topic by the ECJ, the Commission and the US Supreme Court are examined and the different approaches of the competition authorities are compared, illustrating the difficulties they face when dealing with predatory pricing. Concluding the discussion, the paper identifies the main elements of a workable theory by scrutinising the way the competition authorities have incorporated the academic debate in their decisions and how they were able to work with these approaches.
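    As an illustration of the cost-based screens surveyed in this literature, the following is a minimal sketch loosely modelled on the AKZO-style price/cost comparison; the thresholds, the intent flag and all numbers are simplifications introduced here, not the paper's own test.

        # Rough sketch of an AKZO-style cost-based screen (illustrative only).
        # Real assessments also weigh intent evidence, recoupment prospects and market structure.
        def cost_based_screen(price, avg_variable_cost, avg_total_cost, exclusionary_plan=False):
            """Classify a dominant firm's price under a simplified AVC/ATC comparison."""
            if price < avg_variable_cost:
                return "presumptively predatory (below average variable cost)"
            if price < avg_total_cost and exclusionary_plan:
                return "potentially predatory (between AVC and ATC, with an exclusionary plan)"
            return "no predation indicated by this screen"

        print(cost_based_screen(price=8.0, avg_variable_cost=10.0, avg_total_cost=14.0))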

    Elastic contact to nearly incompressible coatings -- Stiffness enhancement and elastic pile-up

    Full text link
    We have recently proposed an efficient computation method for the frictionless linear elastic axisymmetric contact of coated bodies [A. Perriot and E. Barthel, J. Mat. Res. 19 (2004) 600]. Here we give a brief description of the approach. We also discuss implications of the results for the instrumented indentation data analysis of coated materials. Emphasis is laid on incompressible or nearly incompressible materials (Poisson ratio ν > 0.4): we show that the contact stiffness rises much more steeply with contact radius than for more compressible materials, and significant elastic pile-up is evidenced. In addition, the dependence of the penetration upon contact radius increasingly deviates from the homogeneous reference case as the Poisson ratio increases. As a result, this algorithm may be helpful in instrumented indentation data analysis on soft and nearly incompressible layers.
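    For orientation, the homogeneous reference case against which the coated results are compared follows the classical frictionless axisymmetric relation S = 2 a E*; the brief sketch below, with illustrative material parameters, only reproduces this baseline and not the authors' coated-contact algorithm.

        # Homogeneous half-space baseline: contact stiffness S = 2 * a * E*,
        # with reduced modulus E* = E / (1 - nu^2). Values are illustrative.
        def reduced_modulus(E, nu):
            return E / (1.0 - nu**2)

        def contact_stiffness_homogeneous(a, E, nu):
            """Stiffness of a frictionless axisymmetric contact of radius a on a homogeneous half-space."""
            return 2.0 * a * reduced_modulus(E, nu)

        # e.g. a nearly incompressible polymer (nu = 0.45) probed at a 1 µm contact radius
        print(contact_stiffness_homogeneous(a=1e-6, E=3e9, nu=0.45))   # stiffness in N/m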

    Los números

    Get PDF
    128 pages. Number theory occupies a peculiar and distinguished place among the various branches of mathematics. That its main object is the study of something as well known and familiar as the integers, their properties and their relations, explains the interest it has always aroused among many people who, even without the appropriate mathematical training, are fascinated by its problems, so easy to state and yet sometimes so difficult to solve. This book does not pretend to be, by any means, a treatise on number theory, but only a vehicle that allows the reader to stroll through some of its most accessible places. It is a kind of tourist guide for amateur arithmeticians and for all those who are curious about the properties of numbers and appreciate the art of linking together the ideas involved in all mathematical reasoning. Peer reviewed

    Automatic Processing and Cross Section Analysis of Topology Optimization Results

    Get PDF
    Finite element-based topology optimizations have become a vital tool in the design of lightweight-optimized components and structures. Used in the early stages of the product development process, they can help identify load paths and shape the fundamental design of a product. However, the interpretation of topology optimization results can be a challenging and time-consuming task. To properly analyze the topologies, knowledge of finite element computations as well as of computer-aided engineering and design is necessary. This multi-disciplinary engineering expertise can be expensive and may not be feasible for small and medium-sized businesses. Automating this vital step can therefore greatly promote the utilization of topology optimizations across product development processes. We propose a two-fold process for automatically processing density-based topology optimization results for future CAD design proposals: extracting a wireframe and deriving cross sections. The wireframe extraction supports mixed-element models featuring two-dimensional and three-dimensional finite elements and optional mirror symmetry consideration. The cross section extraction is currently available for three-dimensional finite elements. The wireframe extraction is fundamentally based on voxelization and skeletonization of a finite element- and density-based topology optimization. The cross section extraction makes use of the element density as well as the stress distributions from the topology optimization result. At first, load conditions are identified for each beam of the previously extracted wireframe and an appropriate beam profile is chosen from a set of profiles (currently circle and rectangle, both filled or hollow, and I-beams). Then, using a least-squares loss optimization and image processing-based shape averaging, the geometric dimensions of each selected beam profile are determined. In the end, a model is available which can be converted into a parametric CAD model for further design work.
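    A minimal sketch of the wireframe extraction idea (voxelize the density field, threshold it, skeletonize the solid phase); the density threshold, the toy design space and the use of scikit-image are assumptions made here, not the implementation described above.

        # Sketch: wireframe voxels from a voxelized density field via skeletonization.
        import numpy as np
        from skimage.morphology import skeletonize   # assumes a scikit-image version with 3-D support

        def extract_wireframe_voxels(density, threshold=0.3):
            """Binarize the density field and reduce the solid phase to a one-voxel-wide skeleton."""
            solid = density >= threshold        # voxels the optimizer kept as material
            skeleton = skeletonize(solid)       # medial-axis-like skeleton of the solid phase
            return np.argwhere(skeleton)        # indices of voxels lying on the wireframe

        # toy density field: a single straight bar in an otherwise empty design space
        rho = np.zeros((40, 20, 20))
        rho[:, 8:12, 8:12] = 1.0
        print(extract_wireframe_voxels(rho).shape)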

    Intensive language training enhances brain plasticity in chronic aphasia

    Get PDF
    BACKGROUND: Focal clusters of slow wave activity in the delta frequency range (1–4 Hz), as measured by magnetoencephalography (MEG), are usually located in the vicinity of structural damage in the brain. Such oscillations are usually considered pathological and indicative of areas incapable of normal functioning owing to deafferentation from relevant input sources. In the present study we investigated the change in Delta Dipole Density in 28 patients with chronic aphasia (>12 months post onset) following cerebrovascular stroke of the left hemisphere, before and after intensive speech and language therapy (3 hours/day over 2 weeks). RESULTS: Neuropsychologically assessed language functions improved significantly after training. Perilesional delta activity decreased after therapy in 16 of the 28 patients, while an increase was evident in 12 patients. The magnitude of change of delta activity in these areas correlated with the amount of change in language functions as measured by standardized language tests. CONCLUSIONS: These results emphasize the significance of perilesional areas in the rehabilitation of aphasia even years after the stroke, and might reflect reorganisation of the language network that provides the basis for improved language functions after intensive training.
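    A brief sketch of the kind of correlation reported above, relating per-patient change in perilesional delta activity to change in language scores; the arrays are illustrative placeholders, not study data.

        # Pearson correlation between change in delta activity and change in language scores (toy data).
        import numpy as np
        from scipy.stats import pearsonr

        delta_change = np.array([-0.8, -0.5, 0.2, -1.1, 0.4, -0.3])     # post minus pre dipole density
        language_change = np.array([12.0, 7.0, -1.0, 15.0, -3.0, 5.0])  # post minus pre test score

        r, p = pearsonr(delta_change, language_change)
        print(f"r = {r:.2f}, p = {p:.3f}")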

    Optimal boundary control of a system of semilinear parabolic equations

    Get PDF
    In this work, boundary control problems governed by a system of semilinear parabolic PDEs with pointwise control constraints are considered. This class of problems is related to applications in chemical catalysis. After discussing existence and uniqueness of solutions of the state equation with both linear and nonlinear boundary conditions, the existence of an optimal solution is shown. Necessary and sufficient optimality conditions are derived and applied in the numerical examples that conclude the paper.
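    A minimal prototype of the problem class described above, written for a single state equation for brevity (the paper treats a system); the specific cost functional, nonlinearities and bounds here are illustrative assumptions, not the paper's model.

        \min_{u}\; J(y,u) \;=\; \tfrac{1}{2}\,\|y(\cdot,T) - y_d\|_{L^2(\Omega)}^2 \;+\; \tfrac{\alpha}{2}\,\|u\|_{L^2(\Sigma)}^2
        \quad\text{subject to}\quad
        \partial_t y - \Delta y + f(y) = 0 \ \text{in } \Omega\times(0,T),\qquad
        \partial_\nu y = b(x,t,y,u) \ \text{on } \Sigma = \Gamma\times(0,T),\qquad
        y(\cdot,0) = y_0 \ \text{in } \Omega,\qquad
        u_a \le u \le u_b \ \text{a.e. on } \Sigma .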

    Automated Derivation of CAD Designs from Topology Optimization Results

    Get PDF
    Topology optimizations are gaining traction in many development processes, especially in designing large structural components such as railway car bodies. This paper proposes an automatic process to derive design proposals from SIMP topology optimization results in early product development stages. The topologies are interpreted as truss-like structures in order to be modelled using beam elements in FE or CAD applications. In a first step, a wireframe is extracted by using a voxelization approach with subsequent skeletonization and junction identification using flood-fill algorithms. The wireframe can already be used as a basis for a parametric CAD design, since it follows the topology optimization result closely. In order to extrude beam sections along the wireframe axes, cross sections are extracted from the topology optimization result. This is done by first matching the geometric outline of a topology branch to basic shapes such as rectangles. These shapes are then refined by identifying the load case of each branch, which is achieved by evaluating the stress distributions in a branch of the topology optimization. The consolidated result is a model consisting of beam elements which can further be processed in an FE or CAD application. The process enables a faster design exploration in the early product development process with the goal of achieving lightweight designs.
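    A small sketch of the profile-dimensioning step described above: a least-squares fit of a rectangular beam's width and height to target section properties of a branch; the target values, the solid-rectangle profile and the use of SciPy are assumptions made here for illustration.

        # Fit rectangle dimensions (b, h) so that area and second moments match branch targets.
        import numpy as np
        from scipy.optimize import least_squares

        def rect_properties(dims):
            """Area and principal second moments of area of a solid rectangle b x h."""
            b, h = dims
            return np.array([b * h, b * h**3 / 12.0, h * b**3 / 12.0])

        def fit_rectangle(target, start=(10.0, 10.0)):
            """Least-squares fit of (b, h) to target (A, Iy, Iz)."""
            res = least_squares(lambda d: rect_properties(d) - target, x0=start, bounds=(1e-3, np.inf))
            return res.x

        # toy targets estimated from a topology branch: A [mm^2], Iy [mm^4], Iz [mm^4]
        target = np.array([600.0, 45000.0, 20000.0])
        print(fit_rectangle(target))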

    3C 220.3: a radio galaxy lensing a submillimeter galaxy

    Get PDF
    Herschel Space Observatory photometry and extensive multiwavelength follow-up have revealed that the powerful radio galaxy 3C 220.3 at z=0.685 acts as a gravitational lens for a background submillimeter galaxy (SMG) at z=2.221. At an observed wavelength of 1 mm, the SMG is lensed into three distinct images. In the observed near infrared, these images are connected by an arc of 1.8" radius forming an Einstein half-ring centered near the radio galaxy. In visible light, only the arc is apparent. 3C 220.3 is the only known instance of strong galaxy-scale lensing by a powerful radio galaxy not located in a galaxy cluster, and it therefore offers the potential to probe the dark matter content of the radio galaxy host. Lens modeling rejects a single lens, but two lenses centered on the radio galaxy host A and a companion B, separated by 1.5", provide a fit consistent with all data and reveal faint candidates for the predicted fourth and fifth images. The model does not require an extended common dark matter halo, consistent with the absence of extended bright X-ray emission in our Chandra image. The projected dark matter fractions within the Einstein radii of A (1.02") and B (0.61") are about 0.4 +/- 0.3 and 0.55 +/- 0.3. The mass-to-i-band-light ratios of A and B, M/L ~ 8 +/- 4 Msun/Lsun, appear comparable to those of radio-quiet lensing galaxies at the same redshift in the CASTLES, LSD, and SL2S samples. The lensed SMG is extremely bright, with observed f(250 µm) = 440 mJy, owing to a magnification factor µ ~ 10. The SMG spectrum shows luminous, narrow C IV 154.9 nm emission, revealing that the SMG houses a hidden quasar in addition to a violent starburst. Multicolor image reconstruction of the SMG indicates a bipolar morphology of the emitted ultraviolet (UV) light, suggestive of cones through which UV light escapes a dust-enshrouded nucleus.
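    As a rough check on the quoted Einstein radii, the projected mass inside an Einstein radius follows the standard relation M(θ_E) = θ_E² c² D_l D_s / (4 G D_ls); the sketch below uses an assumed flat ΛCDM cosmology (H0 = 70, Ωm = 0.3), which is not taken from the paper.

        # Projected mass within the Einstein radius of lens component A (illustrative cosmology).
        import astropy.units as u
        from astropy.constants import c, G
        from astropy.cosmology import FlatLambdaCDM

        cosmo = FlatLambdaCDM(H0=70, Om0=0.3)           # assumed cosmology
        z_lens, z_src = 0.685, 2.221                    # redshifts quoted in the abstract
        theta_E = (1.02 * u.arcsec).to(u.rad).value     # Einstein radius of component A, in radians

        D_l = cosmo.angular_diameter_distance(z_lens)
        D_s = cosmo.angular_diameter_distance(z_src)
        D_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)

        M_E = (theta_E**2 * c**2 / (4 * G)) * D_l * D_s / D_ls
        print(M_E.to(u.Msun))    # projected mass within the Einstein radius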