Coupling Memory and Computation for Locality Management
We articulate the need for managing (data) locality automatically rather than leaving it to the programmer, especially in parallel programming systems. To this end, we propose techniques for tightly coupling the computation (including the thread scheduler) and the memory manager so that data and computation can be positioned closely in hardware. Such tight coupling of computation and memory management is in sharp contrast with the prevailing practice of considering each in isolation. For example, memory-management techniques usually abstract the computation as an unknown "mutator", which is treated as a "black box". As an example of the approach, in this paper we consider a specific class of parallel computations, nested-parallel computations. Such computations dynamically create a nesting of parallel tasks. We propose a method for organizing memory as a tree of heaps reflecting the structure of the nesting. More specifically, our approach creates a heap for a task if it is separately scheduled on a processor. This allows us to couple garbage collection with the structure of the computation and the way in which it is dynamically scheduled on the processors. This coupling enables taking advantage of locality in the program by mapping it to the locality of the hardware. For example, for improved locality, a heap can be garbage collected immediately after its task finishes, when its contents are likely still in cache.
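The abstract describes the idea only in prose; the following is a minimal sketch of the heap-per-scheduled-task organization it outlines, not the authors' implementation. The names `Heap`, `Task`, `fork`, and `collect` are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): heaps form a tree that
# mirrors the nesting of parallel tasks. A task gets its own heap only when
# the scheduler places it on a processor by itself; otherwise it allocates
# into its parent's heap.

class Heap:
    def __init__(self, parent=None):
        self.parent = parent      # enclosing heap in the tree
        self.children = []        # heaps of separately scheduled subtasks
        self.objects = []         # objects allocated by the owning task
        if parent is not None:
            parent.children.append(self)

    def collect(self):
        # Collect this heap as soon as its task finishes, while its
        # contents are likely still in cache (the locality argument).
        self.objects.clear()
        if self.parent is not None:
            self.parent.children.remove(self)


class Task:
    def __init__(self, heap):
        self.heap = heap

    def alloc(self, obj):
        self.heap.objects.append(obj)
        return obj

    def fork(self, separately_scheduled):
        # A child task gets a fresh heap only if the scheduler runs it on
        # its own processor; otherwise it shares the parent's heap.
        child_heap = Heap(self.heap) if separately_scheduled else self.heap
        return Task(child_heap)
```

The point of the sketch is that the heap tree is built by the scheduler's decisions rather than fixed up front, which is what lets collection follow the shape and timing of the computation.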
Hierarchical Memory Management for Parallel Programs
An important feature of functional programs is that they are parallel by default. Implementing an efficient parallel functional language, however, is a major challenge, in part because the high rate of allocation and freeing associated with functional programs requires an efficient and scalable memory manager. In this paper, we present a technique for parallel memory management for strict functional languages with nested parallelism. At the highest level of abstraction, the approach consists of a technique to organize memory as a hierarchy of heaps, and an algorithm for performing automatic memory reclamation by taking advantage of a disentanglement property of parallel functional programs. More specifically, the idea is to assign to each parallel task its own heap in memory and organize the heaps in a hierarchy/tree that mirrors the hierarchy of tasks. We present a nested-parallel calculus that specifies hierarchical heaps and prove in this calculus a disentanglement property, which prohibits a task from accessing objects allocated by another task that might execute in parallel. Leveraging the disentanglement property, we present a garbage collection technique that can operate on any subtree in the memory hierarchy concurrently while other tasks (and/or other collections) proceed in parallel. We prove the safety of this collector by formalizing it in the context of our parallel calculus. In addition, we describe how the proposed techniques can be implemented on modern shared-memory machines and present a prototype implementation as an extension to MLton, a high-performance compiler for the Standard ML language. Finally, we evaluate the performance of this implementation on a number of parallel benchmarks.
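As an illustrative sketch only, reusing the hypothetical `Heap`/`Task` shapes from the sketch above rather than the MLton extension's actual data structures, the disentanglement property can be read as an invariant on which heaps a task may reach: its own heap and heaps on the path to the root, but never the heap of a sibling that may be running in parallel. That invariant is what makes collecting an isolated subtree safe while unrelated tasks keep running.

```python
# Illustrative sketch of the disentanglement invariant and subtree collection
# (hypothetical names, not the MLton extension's API).

def ancestors(heap):
    """Yield a heap and every heap on the path from it to the root."""
    while heap is not None:
        yield heap
        heap = heap.parent

def may_access(task, target_heap):
    # Disentanglement: a task may only dereference objects in its own heap
    # or in an ancestor heap, never in a concurrently executing sibling's.
    return any(h is target_heap for h in ancestors(task.heap))

def collect_subtree(root_heap, reachable):
    # Collect every heap in the subtree rooted at root_heap. Disentanglement
    # guarantees no concurrently running task outside the subtree points into
    # it, so this can run while other tasks (or other collections) proceed.
    # 'reachable' is assumed to be the set of object ids found live by a
    # tracing pass over the subtree's roots (the tracing itself is elided).
    stack = [root_heap]
    while stack:
        h = stack.pop()
        stack.extend(h.children)
        h.objects = [o for o in h.objects if id(o) in reachable]
```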
Experimental investigation of different geometries of fixed oscillating water column devices
Published in Renewable Energy (Elsevier, © 2016 Elsevier Ltd. All rights reserved). Article link: http://dx.doi.org/10.1016/j.renene.2016.11.061
Distributed, Robust Auto-Scaling Policies for Power Management in Compute Intensive Server Farms
Server farms today often over-provision resources to handle peak demand, resulting in an excessive waste of power. Ideally, server farm capacity should be dynamically adjusted based on the incoming demand. However, the unpredictable and time-varying nature of customer demands makes it very difficult to efficiently scale capacity in server farms. The problem is further exacerbated by the large setup time needed to increase capacity, which can adversely impact response times and consume additional power. In this paper, we present the design and implementation of a class of Distributed and Robust Auto-Scaling policies (DRAS policies) for power management in compute intensive server farms. Results indicate that the DRAS policies dynamically adjust server farm capacity without requiring any prediction of the future load or any feedback control. Implementation results on a 21-server test-bed show that the DRAS policies provide near-optimal response time while lowering power consumption by about 30% compared to static provisioning policies that employ a fixed number of servers.
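The abstract does not spell out the policy rule, so the following is only a rough, hypothetical sketch of the general shape of such distributed, prediction-free auto-scaling: each server acts on local information, powering itself off after idling past a timeout, while an arriving job that finds no idle server triggers the setup of one more. The parameter names (`IDLE_TIMEOUT`, `SETUP_TIME`) and state machine are assumptions, not the DRAS policies' actual specification.

```python
# Rough, hypothetical sketch of a distributed, prediction-free auto-scaling
# rule in the spirit the abstract describes (NOT the DRAS specification).

IDLE_TIMEOUT = 10.0   # seconds a server idles before powering itself off
SETUP_TIME = 60.0     # seconds a powered-off server needs to come back up

class Server:
    OFF, SETUP, IDLE, BUSY = range(4)

    def __init__(self):
        self.state = Server.OFF
        self.timer = 0.0  # time remaining in SETUP, or time spent IDLE

    def tick(self, dt):
        # Purely local rule: no prediction of future load, no feedback loop.
        if self.state == Server.SETUP:
            self.timer -= dt
            if self.timer <= 0:
                self.state, self.timer = Server.IDLE, 0.0
        elif self.state == Server.IDLE:
            self.timer += dt
            if self.timer >= IDLE_TIMEOUT:
                self.state = Server.OFF   # power off after idling too long

def dispatch(job, servers):
    # Route to an idle server if one exists; otherwise start setting one up.
    for s in servers:
        if s.state == Server.IDLE:
            s.state, s.timer = Server.BUSY, 0.0
            return s
    for s in servers:
        if s.state == Server.OFF:
            s.state, s.timer = Server.SETUP, SETUP_TIME
            break
    return None  # job waits in a queue until a server becomes available
```

The design intent captured here is that capacity tracks demand automatically: setups are triggered by actual arrivals and shutdowns by actual idleness, with the timeout guarding against paying the setup cost repeatedly under bursty load.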
Competitive Behavior-Based Price Discrimination for Software Upgrades
The introduction of product upgrades in a competitive environment is commonly observed in the software industry. When introducing a new product, a software vendor may employ behavior-based price discrimination (BBPD) by offering a discount over its market price to entice existing customers of the competitor. This type of pricing is referred to as competitive upgrade discount pricing and is possible because the vendor can use proof of purchase of a competitor's product as credible evidence to offer the discount. At the same time, the competitor may offer a discount to its own previous customers in order to induce them to buy its upgrade. We formulate a game-theoretic model involving an incumbent and entrant where both firms can offer discounts to existing customers of the incumbent. Although several equilibrium possibilities exist, we establish that an equilibrium with competitive upgrade discount pricing is observed only for a unique market structure and a corresponding unique set of prices. In this equilibrium, instead of leveraging its first mover advantage, the incumbent cedes market share to the entrant. Furthermore, the profits of both the incumbent and the entrant decrease with switching costs. This implies that the use of BBPD has product design implications because firms may influence the switching costs between their products by making appropriate compatibility decisions. In addition, lower switching costs result in reduced consumer surplus. Hence, a social planner may want to increase switching costs. The resulting policy implications are different from those prevalent in other industries such as mobile telecommunications, where regulators reduced switching costs by enforcing number portability.
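The abstract gives no closed-form expressions, so the following is only a toy illustration of the switching-cost trade-off it discusses: an incumbent's existing customer compares the entrant's discounted price plus the switching cost against the incumbent's discounted upgrade price. All numbers, names, and the decision rule are hypothetical and not the paper's model.

```python
# Toy illustration (hypothetical numbers, not the paper's model) of the
# switching-cost trade-off behind competitive upgrade discount pricing.

def switches(incumbent_upgrade_price, entrant_price, entrant_discount,
             switching_cost, valuation_gap=0.0):
    """Return True if the incumbent's existing customer buys from the entrant.

    valuation_gap > 0 means the customer values the entrant's product more.
    """
    cost_of_staying = incumbent_upgrade_price
    cost_of_switching = (entrant_price - entrant_discount) + switching_cost
    return cost_of_switching - valuation_gap < cost_of_staying

# A higher switching cost keeps the customer with the incumbent, which is why
# compatibility (and hence switching-cost) choices have strategic weight.
print(switches(40, 50, 15, switching_cost=2))    # True: switching is cheaper
print(switches(40, 50, 15, switching_cost=10))   # False: switching cost too high
```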