Economic Value Added
Economic Value Added (EVA), when applied properly in a company, affects every department and decision. The EVA equation, together with the adjustments that must be made to current accounting practices, forms the basis for understanding EVA. The success of EVA is demonstrated by comparing companies that have implemented it to varying degrees with companies that have not. Once the argument for the overall superiority of EVA is made, traditional performance measures and current accounting practices are evaluated, and the importance of creating value within corporations becomes apparent. Finally, a detailed example of the implementation process that took place several years ago at Harsco argues in favor of all companies adopting EVA.
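The abstract refers to "the equation for EVA" without stating it; the standard formulation is EVA = NOPAT − WACC × invested capital. A minimal sketch, with illustrative figures that are not from the Harsco case discussed above:

```python
# Standard EVA formula: EVA = NOPAT - WACC * invested capital.
# All numbers below are made up for illustration.

def economic_value_added(nopat, wacc, invested_capital):
    """After-tax operating profit minus the dollar cost of the capital employed."""
    return nopat - wacc * invested_capital

# Example: $120M NOPAT, 10% weighted average cost of capital, $900M invested.
eva = economic_value_added(120e6, 0.10, 900e6)
print(f"EVA = ${eva:,.0f}")  # a positive EVA means value was created
```

A positive result means the business earned more than its cost of capital; the accounting adjustments the abstract mentions (e.g., to NOPAT and capital) determine the inputs.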
Using Nanoparticle X-ray Spectroscopy to Probe the Formation of Reactive Chemical Gradients in Diffusion-Limited Aerosols.
For aerosol particles that exist in highly viscous, diffusion-limited states, steep chemical gradients are expected to form during photochemical aging in the atmosphere. Under these conditions, species at the aerosol surface are transformed more rapidly than molecules residing in the particle interior. To examine the formation and evolution of chemical gradients at aerosol interfaces, the heterogeneous reaction of hydroxyl radicals (OH) on ∼200 nm particles of pure squalane (a branched, liquid hydrocarbon), octacosane (a linear, solid hydrocarbon), and binary mixtures of the two is used to understand how diffusion limitations and phase separation affect particle reactivity. Aerosol mass spectrometry is used to measure the effective heterogeneous OH uptake coefficient (γeff) and oxidation kinetics in the bulk, which are compared with the elemental composition of the surface obtained using X-ray photoemission. When diffusion rates are fast relative to the reaction frequency, as is the case for squalane and low-viscosity squalane-octacosane mixtures, the reaction is efficient (γeff ∼ 0.3) and limited only by the arrival of OH at the interface. However, when diffusion rates are slower than reaction rates, as in pure octacosane and higher-viscosity squalane-octacosane mixtures, the heterogeneous reaction occurs in a mixing-limited regime and is ∼10× slower (γeff ∼ 0.03). In contrast, carbon and oxygen K-edge X-ray absorption measurements show that the octacosane interface is oxidized much more rapidly than that of pure squalane particles. The O/C ratio of the surface (estimated to be the top 6-8 nm of the interface) is measured to change with rate constants of (3.0 ± 0.9) × 10⁻¹³ and (8.6 ± 1.2) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹ for squalane and octacosane particles, respectively.
The differences in surface oxidation rates are analyzed using a previously published reaction-diffusion model, which suggests that a 1-2 nm highly oxidized crust forms on octacosane particles, whereas in pure squalane the reaction products are homogeneously mixed within the aerosol. This work illustrates how diffusion limitations can form particles with highly oxidized surfaces even at relatively low oxidant exposures, which in turn is expected to influence their microphysics in the atmosphere.
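The second-order surface rate constants quoted above imply a pseudo-first-order oxidation timescale once an ambient OH concentration is assumed. A back-of-the-envelope sketch, where the rate constants are from the abstract but the OH concentration is an assumed typical daytime value, not a number from the study:

```python
# Pseudo-first-order e-folding time for surface oxidation: tau = 1 / (k * [OH]).
# Rate constants k are from the abstract; OH_CONC is an assumed typical
# ambient daytime average, NOT a value reported by the study.

OH_CONC = 1.5e6  # molecule cm^-3 (assumed)

def efold_time_days(k):
    """e-folding time in days for loss at pseudo-first-order rate k * [OH]."""
    return 1.0 / (k * OH_CONC) / 86400.0

k_squalane = 3.0e-13    # cm^3 molecule^-1 s^-1 (surface, from abstract)
k_octacosane = 8.6e-13  # cm^3 molecule^-1 s^-1 (surface, from abstract)

for name, k in [("squalane", k_squalane), ("octacosane", k_octacosane)]:
    print(f"{name}: surface e-fold time ~ {efold_time_days(k):.0f} days")
```

Under this assumption the octacosane surface turns over in roughly a week of aging while squalane takes several weeks, consistent with the crust-formation picture described above.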
The 2008 Terrestrial Vegetation of Biscayne National Park FL, USA Derived From Aerial Photography, NDVI, and LiDAR
Established as a National Park in 1980, Biscayne National Park (BISC) comprises an area of nearly 700 km², most of which is under water. The terrestrial portions of BISC include a coastal strip on the south Florida mainland and a set of Key Largo limestone barrier islands which parallel the mainland several kilometers offshore and define the eastern rim of Biscayne Bay. The upland vegetation component of BISC is embedded within an extensive coastal wetland network, including an archipelago of 42 mangrove-dominated islands with extensive areas of tropical hardwood forests, or hammocks. Several databases and vegetation maps describe these terrestrial communities. However, these sources are, for the most part, outdated, incomplete, incompatible, and/or inaccurate. For example, the current vegetation map of BISC (Welch et al. 1999) is nearly 10 years old and represents the conditions of Biscayne National Park shortly after Hurricane Andrew (August 24, 1992). As a result, a new terrestrial vegetation map was commissioned by the National Park Service Inventory and Monitoring Program, South Florida / Caribbean Network.
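The map described above was derived in part from NDVI. The index itself is standard, NDVI = (NIR − Red) / (NIR + Red); a minimal sketch on made-up reflectance values (not data from the BISC survey):

```python
# Standard NDVI: (NIR - Red) / (NIR + Red); values near +1 indicate dense
# vegetation, values near 0 or below indicate bare ground or water.
# Reflectance values below are illustrative, not from the BISC imagery.
import numpy as np

def ndvi(nir, red):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Two illustrative pixels: hardwood hammock (high NIR) vs. open water (low NIR).
print(ndvi([0.50, 0.05], [0.08, 0.04]))
```

Thresholding such values is one common way aerial photography, NDVI, and LiDAR layers get fused into vegetation classes.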
Performance Measurements of Supercomputing and Cloud Storage Solutions
Increasing amounts of data from varied sources, particularly in the fields of
machine learning and graph analytics, are causing storage requirements to grow
rapidly. A variety of technologies exist for storing and sharing these data,
ranging from parallel file systems used by supercomputers to distributed block
storage systems found in clouds. Relatively few comparative measurements exist
to inform decisions about which storage systems are best suited for particular
tasks. This work provides these measurements for two of the most popular
storage technologies: Lustre and Amazon S3. Lustre is an open-source, high
performance, parallel file system used by many of the largest supercomputers in
the world. Amazon's Simple Storage Service, or S3, is part of the Amazon Web
Services offering, and offers a scalable, distributed option to store and
retrieve data from anywhere on the Internet. Parallel processing is essential
for achieving high performance on modern storage systems. The performance tests
used span the gamut of parallel I/O scenarios, ranging from single-client,
single-node Amazon S3 and Lustre performance to a large-scale, multi-client
test designed to demonstrate the capabilities of a modern storage appliance
under heavy load. These results show that, when parallel I/O is used correctly
(i.e., many simultaneous read or write processes), full network bandwidth
performance is achievable and ranged from 10 gigabits/s over a 10 GigE S3
connection to 0.35 terabits/s using Lustre on a 1200 port 10 GigE switch. These
results demonstrate that S3 is well-suited to sharing vast quantities of data
over the Internet, while Lustre is well-suited to processing large quantities
of data locally.
Comment: 5 pages, 4 figures, to appear in IEEE HPEC 201
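The abstract's key point is that full bandwidth requires many simultaneous read or write processes. A minimal sketch of that pattern, using local files as stand-ins; in a real deployment an S3 client (e.g., boto3) or a Lustre mount would replace the local reads:

```python
# "Many simultaneous readers" pattern from the abstract, sketched with local
# dummy files standing in for S3 objects or Lustre stripes.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_chunk(path):
    """Read one object/stripe and return the byte count."""
    with open(path, "rb") as f:
        return len(f.read())

# Create a few dummy files standing in for objects or file stripes.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(8):
    p = os.path.join(tmpdir, f"part-{i}.bin")
    with open(p, "wb") as f:
        f.write(os.urandom(1 << 16))  # 64 KiB each
    paths.append(p)

# Issue all reads concurrently; aggregate throughput scales with the number of
# parallel streams until the network or storage backend saturates.
with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(read_chunk, paths))
print(f"read {total} bytes across {len(paths)} parallel streams")
```

A single-stream client rarely saturates a 10 GigE link; the benchmarks above reach line rate precisely because every client runs many such streams at once.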
Orthogonal weighted linear L1 and L∞ approximation and applications
Abstract: Let S = {s_1, s_2, ..., s_n} be a set of sites in E^d, where every site s_i has a positive real weight ω_i. This paper gives algorithms to find weighted orthogonal L∞ and L1 approximating hyperplanes for S. The algorithm for the weighted orthogonal L1 approximation is shown to require O(n^d) worst-case time and O(n) space for d ≥ 2. The algorithm for the weighted orthogonal L∞ approximation is shown to require O(n log n) worst-case time and O(n) space for d = 2, and O(n^(⌊d/2⌋+1)) worst-case time and O(n^⌊(d+1)/2⌋) space for d > 2. In the latter case, the expected time complexity may be reduced to O(n^⌊(d+1)/2⌋). The L∞ approximation algorithm can be modified to solve the problem of finding the width of a set of n points in E^d, and the problem of finding a stabbing hyperplane for a set of n hyperspheres in E^d with varying radii. The time and space complexities of the width and stabbing algorithms are seen to be the same as those of the L∞ approximation algorithm.
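To make the two objectives concrete: for a hyperplane H = {x : a·x = b}, the weighted L1 cost is the sum of ω_i times the orthogonal distance of s_i to H, and the weighted L∞ cost is the maximum such term. A brute-force evaluation of both costs for one candidate line in E^2 (this sketches only the cost functions, not the paper's algorithms):

```python
# Weighted orthogonal L1 and Linf costs of one candidate hyperplane {x: a.x = b}.
# This evaluates the objectives only; the paper's algorithms find the optimal
# hyperplane, which this sketch does not attempt.
import math

def weighted_costs(sites, weights, a, b):
    """Return (L1 cost, Linf cost) over weighted orthogonal distances.
    dist_i = w_i * |a . s_i - b| / ||a||."""
    norm = math.hypot(*a)
    dists = [w * abs(sum(ai * xi for ai, xi in zip(a, s)) - b) / norm
             for s, w in zip(sites, weights)]
    return sum(dists), max(dists)

# Three weighted sites in the plane, tested against the line y = 0.
sites = [(0.0, 0.10), (1.0, -0.20), (2.0, 0.15)]
weights = [1.0, 2.0, 1.0]
l1, linf = weighted_costs(sites, weights, a=(0.0, 1.0), b=0.0)
print(l1, linf)
```

The L1 fit minimizes the total weighted deviation and tolerates outliers; the L∞ fit minimizes the single worst weighted deviation, which is why it connects to the width and stabbing problems mentioned above.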
Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers
For decades, the use of HPC systems was limited to those in the physical
sciences who had mastered their domain in conjunction with a deep understanding
of HPC architectures and algorithms. During these same decades, consumer
computing device advances produced tablets and smartphones that allow millions
of children to interactively develop and share code projects across the globe.
As the HPC community faces the challenges associated with guiding researchers
from disciplines using high productivity interactive tools to effective use of
HPC systems, it seems appropriate to revisit the assumptions surrounding the
necessary skills required for access to large computational systems. For over a
decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high
performance computing by seamlessly integrating familiar high productivity
tools to provide users with an increased number of design turns, rapid
prototyping capability, and faster time to insight. In this paper, we discuss
the lessons learned while supporting interactive, on-demand high performance
computing from the perspectives of the users and the team supporting the users
and the system. Building on these lessons, we present an overview of current
needs and the technical solutions we are building to lower the barrier to entry
for new users from the humanities, social, and biological sciences.
Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance
Computing (WIHPC) 2018 held in conjunction with ISC High Performance 2018 in
Frankfurt, Germany
Measuring the Impact of Spectre and Meltdown
The Spectre and Meltdown flaws in modern microprocessors represent a new
class of attacks that have been difficult to mitigate. The mitigations that
have been proposed have known performance impacts. The reported magnitude of
these impacts varies depending on the industry sector and expected workload
characteristics. In this paper, we measure the performance impact on several
workloads relevant to HPC systems. We show that the impact can be significant
on both synthetic and realistic workloads. We also show that the performance
penalties are difficult to avoid even in dedicated systems where security is a
lesser concern.