    Relative Factor Price Changes and Equity Prices

    This paper suggests that the decline in equity prices, and thus in Tobin's average q, during the 1970s may be attributable to changes in expected relative factor prices. More specifically, q is shown to be a negative function of the extent to which current relative factor price expectations differ from those when capital was put in place. Because relative factor prices became more volatile after 1967, the observed decline in average q, and thus in stock prices, can be explained by the "relative price" hypothesis.

    Initial explorations of ARM processors for scientific computing

    Power efficiency is becoming an ever more important metric for both high-performance and high-throughput computing. Over the course of the next decade, flops/watt is expected to be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular, we report the results from our work with a current-generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software.
    Comment: Submitted to proceedings of the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013), Beijing. arXiv admin note: text overlap with arXiv:1311.100

    Designing Computing System Architecture and Models for the HL-LHC era

    This paper describes a programme to study the computing model in CMS after the next long shutdown near the end of the decade.
    Comment: Submitted to proceedings of the 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa, Japan

    Optimizing CMS build infrastructure via Apache Mesos

    The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster, and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
    Comment: Submitted to proceedings of the 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa, Japan

    HEP C++ Meets reality

    In 2007 the CMS experiment first reported some initial findings on the impedance mismatch between HEP's use of C++ and the current generation of compilers and CPUs. Since then we have continued our analysis of the CMS experiment code base, including the external packages we use. We have found that large amounts of C++ code have been written largely ignoring the physical reality of the resulting machine code and run-time execution costs, including and especially software developed by experts. We report on a wide range of issues affecting typical high energy physics code, in the form of coding pattern - impact - lesson - improvement.