
    Economic Value Added

    Economic Value Added (EVA), when applied properly, affects every department and decision in a company. The EVA equation, together with the adjustments that must be made to current accounting practices, forms the basis for an understanding of EVA. The success of EVA is demonstrated by comparing companies that have implemented it to varying degrees with companies that have not. Once the argument for the overall superiority of EVA is established, traditional performance measures and current accounting practices are evaluated. Then, the importance of creating value within corporations becomes apparent. Finally, a detailed account of the implementation process that took place several years ago at Harsco argues in favor of all companies adopting EVA.
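    The underlying equation is the standard one: EVA equals net operating profit after taxes (NOPAT) minus a capital charge, i.e., the weighted average cost of capital (WACC) applied to invested capital. A minimal Python sketch of that relationship follows; the figures are invented for illustration, and the accounting adjustments the abstract refers to are not modeled here.

```python
def economic_value_added(nopat, invested_capital, wacc):
    """EVA = NOPAT - (WACC x invested capital).

    A positive result means the business earned more in the period
    than its capital cost, i.e., it created value.
    """
    capital_charge = wacc * invested_capital
    return nopat - capital_charge

# Hypothetical numbers for illustration only.
nopat = 120.0             # net operating profit after taxes ($M)
invested_capital = 900.0  # debt + equity employed ($M)
wacc = 0.10               # weighted average cost of capital

print(economic_value_added(nopat, invested_capital, wacc))  # 30.0 -> value created
```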

    The 2008 Terrestrial Vegetation of Biscayne National Park FL, USA Derived From Aerial Photography, NDVI, and LiDAR

    Established as a national park in 1980, Biscayne National Park (BISC) comprises an area of nearly 700 km², most of which is under water. The terrestrial portions of BISC include a coastal strip on the south Florida mainland and a set of Key Largo limestone barrier islands that parallel the mainland several kilometers offshore and define the eastern rim of Biscayne Bay. The upland vegetation component of BISC is embedded within an extensive coastal wetland network, including an archipelago of 42 mangrove-dominated islands with extensive areas of tropical hardwood forest, or hammock. Several databases and vegetation maps describe these terrestrial communities; however, these sources are, for the most part, outdated, incomplete, incompatible, and/or inaccurate. For example, the current vegetation map of BISC (Welch et al. 1999) is nearly 10 years old and represents the condition of the park shortly after Hurricane Andrew (August 24, 1992). As a result, a new terrestrial vegetation map was commissioned by the National Park Service Inventory and Monitoring Program, South Florida / Caribbean Network.
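    The NDVI named in the title is the standard normalized difference of near-infrared and red reflectance, NDVI = (NIR − Red) / (NIR + Red). A minimal NumPy sketch, assuming the two bands have already been extracted from the imagery as arrays; the sample values here are hypothetical.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense green vegetation; values near 0 or
    below indicate bare ground, rock, or water. eps guards against
    division by zero over dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 reflectance bands standing in for aerial imagery.
nir = np.array([[0.50, 0.40], [0.05, 0.30]])
red = np.array([[0.10, 0.08], [0.04, 0.25]])
print(ndvi(nir, red))  # high values = likely vegetation; low = water/bare
```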

    Performance Measurements of Supercomputing and Cloud Storage Solutions

    Increasing amounts of data from varied sources, particularly in the fields of machine learning and graph analytics, are causing storage requirements to grow rapidly. A variety of technologies exist for storing and sharing these data, ranging from parallel file systems used by supercomputers to distributed block storage systems found in clouds. Relatively few comparative measurements exist to inform decisions about which storage systems are best suited for particular tasks. This work provides these measurements for two of the most popular storage technologies: Lustre and Amazon S3. Lustre is an open-source, high-performance parallel file system used by many of the largest supercomputers in the world. Amazon's Simple Storage Service, or S3, is part of the Amazon Web Services offering and provides a scalable, distributed option to store and retrieve data from anywhere on the Internet. Parallel processing is essential for achieving high performance on modern storage systems. The performance tests used span the gamut of parallel I/O scenarios, ranging from single-client, single-node Amazon S3 and Lustre performance to a large-scale, multi-client test designed to demonstrate the capabilities of a modern storage appliance under heavy load. These results show that, when parallel I/O is used correctly (i.e., many simultaneous read or write processes), full network bandwidth performance is achievable, ranging from 10 gigabits/s over a 10 GigE S3 connection to 0.35 terabits/s using Lustre on a 1200-port 10 GigE switch. These results demonstrate that S3 is well suited to sharing vast quantities of data over the Internet, while Lustre is well suited to processing large quantities of data locally.
    Comment: 5 pages, 4 figures, to appear in IEEE HPEC 201
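    The abstract's key operational point is that a single I/O stream leaves bandwidth unused, and that many simultaneous readers or writers are needed to approach the network limit. Below is a minimal sketch of that pattern against S3 using boto3 and a thread pool; the bucket and object names are hypothetical, and this illustrates the access pattern rather than the paper's benchmark harness.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import boto3  # pip install boto3

BUCKET = "example-benchmark-bucket"               # hypothetical bucket
KEYS = [f"shard-{i:04d}.bin" for i in range(64)]  # hypothetical objects

s3 = boto3.client("s3")  # botocore clients are safe to share across threads

def fetch(key):
    """Read one object fully and return its size in bytes."""
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"]
    return len(body.read())

start = time.perf_counter()
# Many concurrent readers: per-request latency overlaps, so aggregate
# throughput approaches the network limit rather than one stream's rate.
with ThreadPoolExecutor(max_workers=32) as pool:
    total_bytes = sum(pool.map(fetch, KEYS))
elapsed = time.perf_counter() - start

print(f"{total_bytes / elapsed / 1e9 * 8:.2f} gigabits/s aggregate")
```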

    Orthogonal weighted linear L1 and L∞ approximation and applications

    Let S = {s_1, s_2, ..., s_n} be a set of sites in E^d, where every site s_i has a positive real weight ω_i. This paper gives algorithms to find weighted orthogonal L∞ and L1 approximating hyperplanes for S. The algorithm for the weighted orthogonal L1 approximation is shown to require O(n^d) worst-case time and O(n) space for d ≥ 2. The algorithm for the weighted orthogonal L∞ approximation is shown to require O(n log n) worst-case time and O(n) space for d = 2, and O(n^(⌊d/2⌋+1)) worst-case time and O(n^⌊(d+1)/2⌋) space for d > 2. In the latter case, the expected time complexity may be reduced to O(n^⌊(d+1)/2⌋). The L∞ approximation algorithm can be modified to solve the problem of finding the width of a set of n points in E^d, and the problem of finding a stabbing hyperplane for a set of n hyperspheres in E^d with varying radii. The time and space complexities of the width and stabbing algorithms are the same as those of the L∞ approximation algorithm.
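    To make the objectives concrete for d = 2: for a line with unit normal (cos θ, sin θ) and offset c, the weighted orthogonal distance of site s_i = (x_i, y_i) is ω_i |x_i cos θ + y_i sin θ + c|; the L1 problem minimizes the sum of these distances and the L∞ problem the maximum. The following naive grid-search sketch only illustrates the two objectives; it is not the paper's algorithm and carries no complexity guarantees.

```python
import numpy as np

def weighted_distances(points, weights, theta, c):
    """Weighted orthogonal distances from points to the line
    x*cos(theta) + y*sin(theta) + c = 0 (unit normal)."""
    normal = np.array([np.cos(theta), np.sin(theta)])
    return weights * np.abs(points @ normal + c)

def brute_force_fit(points, weights, norm, n_theta=360, n_c=200):
    """Coarse grid search over (theta, c); norm is 'L1' or 'Linf'."""
    span = np.abs(points).max() * 2
    best_cost, best_line = np.inf, None
    for theta in np.linspace(0.0, np.pi, n_theta):
        for c in np.linspace(-span, span, n_c):
            d = weighted_distances(points, weights, theta, c)
            cost = d.sum() if norm == "L1" else d.max()
            if cost < best_cost:
                best_cost, best_line = cost, (theta, c)
    return best_cost, best_line

# Sites roughly on the line y = x, with unit weights.
pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 2.9]])
w = np.ones(len(pts))
print(brute_force_fit(pts, w, "L1"))
print(brute_force_fit(pts, w, "Linf"))
```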

    Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers

    For decades, the use of HPC systems was limited to those in the physical sciences who had mastered their domains in conjunction with a deep understanding of HPC architectures and algorithms. During these same decades, consumer computing device advances produced tablets and smartphones that allow millions of children to interactively develop and share code projects across the globe. As the HPC community faces the challenges associated with guiding researchers from disciplines that use high-productivity interactive tools toward effective use of HPC systems, it seems appropriate to revisit the assumptions surrounding the skills required for access to large computational systems. For over a decade, MIT Lincoln Laboratory has supported interactive, on-demand high performance computing by seamlessly integrating familiar high-productivity tools to provide users with an increased number of design turns, rapid prototyping capability, and faster time to insight. In this paper, we discuss the lessons learned while supporting interactive, on-demand high performance computing from the perspectives of the users and of the team supporting the users and the system. Building on these lessons, we present an overview of current needs and the technical solutions we are building to lower the barrier to entry for new users from the humanities and the social and biological sciences.
    Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance Computing (WIHPC) 2018, held in conjunction with ISC High Performance 2018 in Frankfurt, Germany

    Measuring the Impact of Spectre and Meltdown

    The Spectre and Meltdown flaws in modern microprocessors represent a new class of attacks that have proven difficult to mitigate. The mitigations that have been proposed have known performance impacts, and the reported magnitude of these impacts varies depending on the industry sector and expected workload characteristics. In this paper, we measure the performance impact on several workloads relevant to HPC systems. We show that the impact can be significant on both synthetic and realistic workloads. We also show that the performance penalties are difficult to avoid even in dedicated systems where security is a lesser concern.
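    One reason the penalties are hard to avoid is that the Meltdown mitigation (kernel page-table isolation) taxes every user/kernel transition, so syscall-heavy workloads slow down even when application code is unchanged. Below is a minimal sketch of the kind of microbenchmark that exposes this, timing a cheap system call in a tight loop; comparing runs across boots with mitigations enabled and disabled (e.g., via the Linux mitigations=off boot parameter) is an assumed methodology for illustration, not the paper's harness.

```python
import os
import time

N = 200_000  # arbitrary iteration count

def time_syscalls(n=N):
    """Time n cheap system calls (stat on '/'); each call crosses the
    user/kernel boundary, which page-table isolation makes costlier."""
    start = time.perf_counter()
    for _ in range(n):
        os.stat("/")
    elapsed = time.perf_counter() - start
    return elapsed / n * 1e9  # nanoseconds per call

if __name__ == "__main__":
    # Run once per boot configuration and compare the per-call cost.
    print(f"{time_syscalls():.0f} ns per stat() call")
```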