
    Land and stock bubbles, crashes and exit strategies in Japan circa 1990 and in 2013

    We study the land and stock markets in Japan circa 1990 and in 2013. While the run-up of the Nikkei stock average in the late 1980s and its -48% crash in 1990 are generally recognized as a financial market bubble and crash, an even bigger bubble and crash occurred in the land market. The crash in the Nikkei, which started on the first trading day of 1990, was predictable in April 1989 using the bond-stock earnings yield model, which signaled that a crash was coming but not when. We show that it was possible to use a change point detection model based solely on price movements for profitable exits of long positions both circa 1990 and in 2013.
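
    The bond-stock earnings yield model referenced above compares the long bond yield with the equity market's earnings yield (the inverse of the price-earnings ratio) and flags danger when the spread grows unusually large. A minimal sketch of that signal in Python; the threshold and inputs are illustrative assumptions, not the paper's calibration:

    ```python
    def bseyd_signal(bond_yield, price_earnings_ratio, threshold=0.03):
        """Bond-stock earnings yield differential (BSEYD) crash warning.

        Returns True when the long bond yield exceeds the equity earnings
        yield (1 / P-E) by more than `threshold` (illustrative value).
        """
        earnings_yield = 1.0 / price_earnings_ratio
        return (bond_yield - earnings_yield) > threshold

    # Illustrative (not historical) inputs: a 5.5% bond yield against a
    # market trading at 60 times earnings gives a large positive spread.
    print(bseyd_signal(bond_yield=0.055, price_earnings_ratio=60.0))  # True
    ```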

    Performance Measurements of Supercomputing and Cloud Storage Solutions

    Increasing amounts of data from varied sources, particularly in the fields of machine learning and graph analytics, are causing storage requirements to grow rapidly. A variety of technologies exist for storing and sharing these data, ranging from parallel file systems used by supercomputers to distributed block storage systems found in clouds. Relatively few comparative measurements exist to inform decisions about which storage systems are best suited for particular tasks. This work provides these measurements for two of the most popular storage technologies: Lustre and Amazon S3. Lustre is an open-source, high performance, parallel file system used by many of the largest supercomputers in the world. Amazon's Simple Storage Service, or S3, is part of the Amazon Web Services offering and provides a scalable, distributed option to store and retrieve data from anywhere on the Internet. Parallel processing is essential for achieving high performance on modern storage systems. The performance tests used span the gamut of parallel I/O scenarios, ranging from single-client, single-node Amazon S3 and Lustre performance to a large-scale, multi-client test designed to demonstrate the capabilities of a modern storage appliance under heavy load. These results show that, when parallel I/O is used correctly (i.e., many simultaneous read or write processes), full network bandwidth performance is achievable, ranging from 10 gigabits/s over a 10 GigE S3 connection to 0.35 terabits/s using Lustre on a 1200-port 10 GigE switch. These results demonstrate that S3 is well-suited to sharing vast quantities of data over the Internet, while Lustre is well-suited to processing large quantities of data locally. Comment: 5 pages, 4 figures, to appear in IEEE HPEC 201
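
    The paper's central point is that full network bandwidth is reached only when many clients read or write at once. A minimal sketch of that access pattern using boto3 and a process pool; the bucket name, object keys, and process count are placeholders, not the benchmark configuration from the paper:

    ```python
    import time
    from multiprocessing import Pool

    import boto3

    BUCKET = "example-benchmark-bucket"            # placeholder bucket name
    KEYS = [f"object-{i:04d}" for i in range(64)]  # placeholder object keys

    def fetch(key):
        # Each worker creates its own client; boto3 clients are not
        # safely shared across processes.
        s3 = boto3.client("s3")
        return len(s3.get_object(Bucket=BUCKET, Key=key)["Body"].read())

    if __name__ == "__main__":
        start = time.time()
        with Pool(processes=16) as pool:           # many simultaneous readers
            nbytes = sum(pool.map(fetch, KEYS))
        elapsed = time.time() - start
        print(f"{nbytes * 8 / elapsed / 1e9:.2f} Gb/s aggregate read rate")
    ```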

    Enabling On-Demand Database Computing with MIT SuperCloud Database Management System

    The MIT SuperCloud database management system allows for rapid creation and flexible execution of a variety of the latest scientific databases, including Apache Accumulo and SciDB. It is designed to permit these databases to run on a High Performance Computing Cluster (HPCC) platform as seamlessly as any other HPCC job. It ensures the seamless migration of the databases to the resources assigned by the HPCC scheduler and centralized storage of the database files when not running. It also permits snapshotting of databases to allow researchers to experiment and push the limits of the technology without concerns for data or productivity loss if the database becomes unstable. Comment: 6 pages; accepted to IEEE High Performance Extreme Computing (HPEC) conference 2015. arXiv admin note: text overlap with arXiv:1406.492
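
    The workflow described, migrating database files to scheduler-assigned nodes, running the database there, and returning the files (or a snapshot) to central storage afterwards, can be sketched in a few lines of Python. Every name here (the paths, the db_server command, the snapshot naming) is a hypothetical illustration of that pattern, not the SuperCloud implementation:

    ```python
    import shutil
    import subprocess
    import time
    from pathlib import Path

    CENTRAL = Path("/central/databases/mydb")  # hypothetical central storage path
    LOCAL = Path("/local/scratch/mydb")        # hypothetical node-local scratch path

    def start_database():
        """Migrate the database files to the assigned node and launch the server."""
        shutil.copytree(CENTRAL, LOCAL)
        # "db_server" stands in for whichever database binary the job runs.
        return subprocess.Popen(["db_server", "--data-dir", str(LOCAL)])

    def stop_database(proc, snapshot=False):
        """Stop the server and return the files (or a snapshot) to central storage."""
        proc.terminate()
        proc.wait()
        dest = CENTRAL.with_name(f"mydb-snapshot-{int(time.time())}") if snapshot else CENTRAL
        if dest.exists():
            shutil.rmtree(dest)
        shutil.copytree(LOCAL, dest)
    ```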

    Lustre, Hadoop, Accumulo

    Data processing systems impose multiple views on data as it is processed by the system. These views include spreadsheets, databases, matrices, and graphs. There are a wide variety of technologies that can be used to store and process data through these different steps. The Lustre parallel file system, the Hadoop distributed file system, and the Accumulo database are all designed to address the largest and the most challenging data storage problems. There have been many ad-hoc comparisons of these technologies. This paper describes the foundational principles of each technology, provides simple models for assessing their capabilities, and compares the various technologies on a hypothetical common cluster. These comparisons indicate that Lustre provides 2x more storage capacity, is less likely to lose data during 3 simultaneous drive failures, and provides higher bandwidth on general purpose workloads. Hadoop can provide 4x greater read bandwidth on special purpose workloads. Accumulo provides 10,000x lower latency on random lookups than either Lustre or Hadoop, but Accumulo's bulk bandwidth is 10x less. Significant recent work has been done to enable mix-and-match solutions that allow Lustre, Hadoop, and Accumulo to be combined in different ways. Comment: 6 pages; accepted to IEEE High Performance Extreme Computing conference, Waltham, MA, 201
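
    The storage-capacity comparison follows from simple overhead models: Lustre typically stores data in RAID6-style stripes (for example, 8 data disks plus 2 parity disks), while Hadoop's HDFS defaults to three full replicas of every block. A back-of-the-envelope sketch with made-up disk counts and sizes, not the paper's hypothetical cluster:

    ```python
    RAW_DISKS = 1200   # made-up cluster size, not the paper's configuration
    DISK_TB = 8        # made-up drive capacity in terabytes

    raw_tb = RAW_DISKS * DISK_TB

    # Lustre-style RAID6: each 10-disk stripe holds 8 disks' worth of data.
    lustre_usable = raw_tb * 8 / 10

    # Hadoop-style triple replication: every block is stored three times.
    hadoop_usable = raw_tb / 3

    print(f"RAID6 usable capacity:        {lustre_usable:.0f} TB")
    print(f"3x replication usable:        {hadoop_usable:.0f} TB")
    print(f"capacity ratio (Lustre/HDFS): {lustre_usable / hadoop_usable:.1f}x")
    ```

    With these illustrative inputs the ratio comes out around 2.4x, the same order as the roughly 2x capacity advantage cited above.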

    Air Quality Monitoring and On-Site Computer System for Livestock and Poultry Environment Studies

    This article reviews the development of agricultural air quality (AAQ) research on livestock and poultry environments, summarizes various measurement and control devices and the requirements of data acquisition and control (DAC) for comprehensive AAQ studies, and introduces a new system to meet DAC and other requirements. The first experimental AAQ study was reported in 1953. Remarkable progress has been achieved in this research field during the past decades. Studies on livestock and poultry environments expanded from indoor air quality to include pollutant emissions and the subsequent health, environmental, and ecological impacts beyond the farm boundaries. The pollutants of interest included gases, particulate matter (PM), odor, volatile organic compounds (VOC), endotoxins, and microorganisms. During this period, the research projects, scales, and boundaries continued to expand significantly. Studies ranged from surveys and short-term measurements to national and international collaborative projects. While much research is still conducted in laboratories and experimental facilities, a growing number of investigations have been carried out in commercial livestock and poultry farms. The development of analytical instruments and computer technologies has facilitated significant changes in the methodologies used in this field. The quantity of data obtained in a single project during AAQ research has increased exponentially, from several gas concentration samples to 2.4 billion data points. The number of measurement variables has also increased from a few to more than 300 at a single monitoring site. A variety of instruments and sensors have been used for on-line, real-time, continuous, and year-round measurements to determine baseline pollutant emissions and test mitigation technologies. New measurement strategies have been developed for multi-point sampling. These advancements in AAQ research have necessitated up-to-date systems to not only acquire data and control sampling locations, but also monitor experimental operation, communicate with researchers, and process post-acquisition signals and post-measurement data. An on-site computer system (OSCS), consisting of DAC hardware, a personal computer, and on-site AAQ research software, is needed to meet these requirements. While various AAQ studies involved similar objectives, implementation of the OSCS often varied considerably among projects. Individually developed OSCSs were usually project-specific, and their development was expensive and time-consuming. A new OSCS, with custom-developed software AirDAC, written in LabVIEW, was developed with novel and user-friendly features for wide-ranging AAQ research projects. It reduced system development and operational cost, increased measurement reliability and work efficiency, and enhanced quality assurance and quality control in AAQ studies.
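
    The multi-point sampling strategy mentioned above, in which one set of analyzers is shared across many sampling locations under DAC control, can be illustrated with a short sketch. The select_location and read_analyzer functions are hypothetical stand-ins for instrument I/O, and the location count and dwell time are arbitrary choices, not values from the systems described:

    ```python
    import csv
    import time
    from datetime import datetime

    LOCATIONS = range(1, 13)  # hypothetical: 12 sampling points share one analyzer
    DWELL_S = 120             # arbitrary dwell time per location, in seconds

    def select_location(loc):
        """Hypothetical stand-in for switching the sampling manifold valve."""
        print(f"sampling location {loc}")

    def read_analyzer():
        """Hypothetical stand-in for reading a gas concentration (ppm)."""
        return 0.0

    with open("aaq_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for loc in LOCATIONS:
            select_location(loc)
            time.sleep(DWELL_S)  # let the sample line flush before logging
            writer.writerow([datetime.now().isoformat(), loc, read_analyzer()])
    ```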

    Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers

    For decades, the use of HPC systems was limited to those in the physical sciences who had mastered their domain in conjunction with a deep understanding of HPC architectures and algorithms. During these same decades, consumer computing device advances produced tablets and smartphones that allow millions of children to interactively develop and share code projects across the globe. As the HPC community faces the challenges associated with guiding researchers from disciplines using high productivity interactive tools to effective use of HPC systems, it seems appropriate to revisit the assumptions surrounding the necessary skills required for access to large computational systems. For over a decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high performance computing by seamlessly integrating familiar high productivity tools to provide users with an increased number of design turns, rapid prototyping capability, and faster time to insight. In this paper, we discuss the lessons learned while supporting interactive, on-demand high performance computing from the perspectives of the users and of the team supporting the users and the system. Building on these lessons, we present an overview of current needs and the technical solutions we are building to lower the barrier to entry for new users from the humanities, social, and biological sciences. Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance Computing (WIHPC) 2018 held in conjunction with ISC High Performance 2018 in Frankfurt, Germany

    Measuring the Impact of Spectre and Meltdown

    The Spectre and Meltdown flaws in modern microprocessors represent a new class of attacks that have been difficult to mitigate. The mitigations that have been proposed have known performance impacts. The reported magnitude of these impacts varies depending on the industry sector and expected workload characteristics. In this paper, we measure the performance impact on several workloads relevant to HPC systems. We show that the impact can be significant on both synthetic and realistic workloads. We also show that the performance penalties are difficult to avoid even in dedicated systems where security is a lesser concern.
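
    Mitigation overhead concentrates on work that crosses the user/kernel boundary, so a quick way to probe it is to time a syscall-heavy loop against a compute-bound loop and compare the ratio on systems with mitigations enabled versus disabled. A minimal sketch of such a probe; the iteration counts are arbitrary and this is not the benchmark suite used in the paper:

    ```python
    import os
    import time

    def time_it(fn, n):
        start = time.perf_counter()
        fn(n)
        return time.perf_counter() - start

    def syscall_heavy(n):
        # Each getpid() call crosses into the kernel, the path most
        # affected by mitigations such as KPTI.
        for _ in range(n):
            os.getpid()

    def compute_heavy(n):
        # Pure user-space arithmetic; mitigations should barely matter here.
        total = 0
        for i in range(n):
            total += i * i
        return total

    sys_t = time_it(syscall_heavy, 2_000_000)
    cpu_t = time_it(compute_heavy, 2_000_000)
    print(f"syscall loop: {sys_t:.3f}s  compute loop: {cpu_t:.3f}s  ratio: {sys_t / cpu_t:.2f}")
    ```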

    Large Scale Application of Vibration Sensors for Fan Monitoring at Commercial Layer Hen Houses

    Continuously monitoring the operation of each individual fan can significantly improve the measurement quality of aerial pollutant emissions from animal buildings that have a large number of fans. Monitoring fan operation by detecting fan vibration is a relatively new technique. A low-cost electronic vibration sensor was developed and commercialized; however, its large-scale application had not yet been evaluated. This paper presents long-term performance results for this vibration sensor at two large commercial layer houses. Vibration sensors were installed on 164 fans of 130 cm diameter to continuously monitor fan on/off status for two years. The performance of the vibration sensors was compared with that of fan rotational speed (FRS) sensors. The vibration sensors exhibited quick response and high sensitivity to fan operation and therefore satisfied the general requirements of air quality research. The study proved that detecting fan vibration is an effective method for monitoring the on/off status of a large number of single-speed fans. The vibration sensor itself was $2 more expensive than a magnetic proximity FRS sensor, but the overall cost including installation and data acquisition hardware was $77 less expensive than the FRS sensor. A total of nine vibration sensors failed during the study, and the failure rate was related to the product batch. A few sensors also exhibited unsteady sensitivity. As a new product, the quality of the sensor should be improved to make it more reliable and acceptable.
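
    Inferring on/off status from a vibration signal typically reduces to thresholding the signal's short-term amplitude with some debouncing so brief spikes are not counted as state changes. A minimal sketch under that assumption; the threshold and debounce window are illustrative values, not ones from the study:

    ```python
    def fan_on_off(rms_samples, threshold=0.5, debounce=3):
        """Classify each RMS vibration sample as fan ON (True) or OFF (False).

        A state change is accepted only after `debounce` consecutive samples
        agree, so short vibration spikes (e.g., a slammed door) are ignored.
        The threshold and debounce values are illustrative, not from the study.
        """
        states, current, run = [], False, 0
        for x in rms_samples:
            candidate = x > threshold
            run = run + 1 if candidate != current else 0
            if run >= debounce:
                current, run = candidate, 0
            states.append(current)
        return states

    print(fan_on_off([0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.2, 0.1, 0.1, 0.1]))
    ```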