
    Online Multistage Subset Maximization Problems

    Numerous combinatorial optimization problems (knapsack, maximum-weight matching, etc.) can be expressed as subset maximization problems: One is given a ground set N={1,...,n}, a collection F subseteq 2^N of subsets thereof such that the empty set is in F, and an objective (profit) function p: F -> R_+. The task is to choose a set S in F that maximizes p(S). We consider the multistage version (Eisenstat et al., Gupta et al., both ICALP 2014) of such problems: the profit function p_t (and possibly the set of feasible solutions F_t) may change over time. Since in many applications changing the solution is costly, the task becomes to find a sequence of solutions that optimizes the trade-off between good per-time solutions and stable solutions, taking into account an additional similarity bonus. As the similarity measure for two consecutive solutions, we consider either the size of the intersection of the two solutions or the difference between n and the Hamming distance of the two characteristic vectors. We study multistage subset maximization problems in the online setting, that is, p_t (along with possibly F_t) arrives one by one and, upon such an arrival, the online algorithm has to output the corresponding solution without knowledge of the future. We develop general techniques for online multistage subset maximization and thereby characterize those models (given by the type of data evolution and the type of similarity measure) that admit a constant-competitive online algorithm. When no constant competitive ratio is possible, we employ lookahead to circumvent this issue. When a constant competitive ratio is possible, we provide almost matching lower and upper bounds on the best achievable one.
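
    One natural way to write the resulting multistage objective (a sketch; the exact weighting of the per-time profits against the similarity bonus is an assumption here and may be normalised differently in the paper's models) is

    \[
      \max_{S_1 \in F_1, \ldots, S_T \in F_T} \; \sum_{t=1}^{T} p_t(S_t) \;+\; \sum_{t=1}^{T-1} \operatorname{sim}(S_t, S_{t+1}),
    \]

    where \(\operatorname{sim}(S, S') = |S \cap S'|\) (intersection similarity) or \(\operatorname{sim}(S, S') = n - d_H(\chi_S, \chi_{S'})\) (Hamming similarity), with \(\chi_S \in \{0,1\}^n\) the characteristic vector of S.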

    Comparison of different approaches to multistage lot sizing with uncertain demand

    We study a new variant of the classical lot sizing problem with uncertain demand in which neither the planning horizon nor the demands are known exactly. This situation arises in practice when customer demands arriving over time are confirmed only late in the transportation process. In terms of planning, this setting necessitates a rolling horizon procedure in which the overall multistage problem is decomposed into a series of coupled snapshot problems under uncertainty. Depending on the available data and the risk disposition, different approaches from online optimization, stochastic programming, and robust optimization are viable to model and solve the snapshot problems. We evaluate the impact of the selected methodology on the overall solution quality using a methodology-agnostic framework for multistage decision-making under uncertainty. We provide computational results on lot sizing within a rolling horizon regarding different types of uncertainty, solution approaches, and the value of available information about upcoming demands.
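
    A minimal sketch of such a rolling-horizon loop, assuming a hypothetical snapshot-solver interface (the solver plugged in could be an online, stochastic-programming, or robust model, as compared in the paper; the names and the toy demo below are illustrative assumptions, not the paper's code):

```python
# Minimal rolling-horizon sketch (hypothetical interface, not the paper's implementation).
# At each period we solve a snapshot problem over the currently visible window,
# commit only the first decision, then roll forward as new demand data arrives.
from typing import Callable, Sequence

def rolling_horizon(
    demands: Sequence[float],                        # realised demands, revealed one period at a time
    forecast: Callable[[int, int], list],            # forecast(t, window) -> demand estimates for the window
    solve_snapshot: Callable[[float, list], float],  # snapshot model: (inventory, forecast) -> order quantity
    window: int = 3,
    start_inventory: float = 0.0,
) -> list:
    inventory = start_inventory
    orders = []
    for t, demand in enumerate(demands):
        visible = forecast(t, window)            # only near-term information is available at time t
        q = solve_snapshot(inventory, visible)   # online / stochastic / robust model plugs in here
        orders.append(q)
        inventory = max(0.0, inventory + q - demand)  # lost sales if demand exceeds stock
    return orders

if __name__ == "__main__":
    # Toy demo with a naive order-up-to snapshot "solver" (purely illustrative).
    print(rolling_horizon(
        demands=[8, 12, 9, 15, 7],
        forecast=lambda t, w: [10.0] * w,                          # flat point forecast
        solve_snapshot=lambda inv, f: max(0.0, f[0] - inv),        # order up to next-period forecast
    ))
```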

    Designing energy-efficient sub-threshold logic circuits using equalization and non-volatile memory circuits using memristors

    The very large scale integration (VLSI) community has relied on aggressive complementary metal-oxide semiconductor (CMOS) technology scaling to meet the ever-increasing performance requirements of computing systems. However, as we enter the nanoscale regime, prevalent process variation effects degrade CMOS device reliability. Hence, it is increasingly important to explore emerging technologies that are compatible with the conventional CMOS process for designing highly dense memory and logic circuits. Memristor technology is being explored as a potential candidate for designing non-volatile memory arrays and logic circuits with high density, low latency, and small energy consumption. In this thesis, we present the detailed functionality of multi-bit 1-Transistor 1-memRistor (1T1R) cell-based memory arrays. We present performance and energy models for an individual 1T1R memory cell and for the memory array as a whole. We consider TiO2- and HfOx-based memristors; for these technologies, the energy and performance values computed using our models differ from HSPICE simulations by less than 10%. Using a performance-driven design approach, the energy-optimized TiO2-based RRAM array consumes the least write energy (4.06 pJ/bit) and read energy (188 fJ/bit) when storing 3 bits/cell for 100 nsec write and 1 nsec read access times. Similarly, the HfOx-based RRAM array consumes the least write energy (365 fJ/bit) and read energy (173 fJ/bit) when storing 3 bits/cell for 1 nsec write and 200 nsec read access times. On the logic side, we investigate the use of equalization techniques to improve the energy efficiency of digital sequential logic circuits in the sub-threshold regime. We first propose the use of a variable-threshold feedback equalizer circuit with combinational logic blocks to mitigate timing errors in digital logic designed in the sub-threshold regime. This mitigation of timing errors can be leveraged to reduce the dominant leakage energy by scaling the supply voltage or decreasing the propagation delay. At a fixed supply voltage, we can decrease the propagation delay of the critical path in a combinational logic block using equalizer circuits and, correspondingly, decrease the leakage energy consumption. For an 8-bit carry lookahead adder designed in the UMC 130 nm process, the operating frequency can be increased by 22.87% (on average) while reducing the leakage energy by 22.6% (on average) in the sub-threshold regime. Overall, the feedback equalization technique provides up to 35.4% lower energy-delay product compared to conventional non-equalized logic. We also propose a tunable adaptive feedback equalizer circuit that can be used with sequential digital logic to mitigate process variation effects and reduce the dominant leakage energy component in sub-threshold digital logic circuits. For a 64-bit adder designed in a 130 nm process, our proposed approach can reduce the normalized variation of the critical path delay from 16.1% to 11.4% while reducing the energy-delay product by 25.83% at the minimum-energy supply voltage. In addition, we present detailed energy-performance models of the adaptive feedback equalizer circuit. This work serves as a foundation for the design of robust, energy-efficient digital logic circuits in the sub-threshold regime.
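
    As a rough, back-of-envelope illustration of how the quoted per-bit figures scale to array level (assuming energy grows linearly with the number of bits accessed; the array size below is hypothetical and not from the thesis):

```python
# Back-of-envelope array energy estimate from the per-bit figures quoted above.
# Assumes energy scales linearly with bits accessed; the array size is hypothetical.
TIO2_WRITE_PJ_PER_BIT = 4.06    # TiO2 RRAM, 3 bits/cell, 100 nsec write (from the abstract)
TIO2_READ_FJ_PER_BIT = 188.0    # TiO2 RRAM, 3 bits/cell, 1 nsec read (from the abstract)

def array_energy_nj(cells: int, bits_per_cell: int = 3) -> tuple:
    """Energy (write_nJ, read_nJ) to access every cell once in a hypothetical array."""
    bits = cells * bits_per_cell
    write_nj = bits * TIO2_WRITE_PJ_PER_BIT * 1e-3   # pJ -> nJ
    read_nj = bits * TIO2_READ_FJ_PER_BIT * 1e-6     # fJ -> nJ
    return write_nj, read_nj

print(array_energy_nj(1024))   # e.g. a hypothetical 1024-cell (3 kb) array
```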

    PRODUCTION SEQUENCING AND STABILITY ANALYSIS OF A JUST-IN-TIME SYSTEM WITH SEQUENCE DEPENDENT SETUPS

    Just-In-Time (JIT) production systems are a popular research area, but real-world issues such as sequence-dependent setups are often overlooked. This research investigates an approach for determining stability and an approach for mixed-product sequencing in production systems with sequence-dependent setups and buffer thresholds that signal replenishment of a given buffer. Production systems in this research operate under JIT pull production principles, producing only when demand exists and idling when it does not. In the first approach, an iterative method is presented to determine stability for a multi-product production system that operates with replenishment signals and may have sequence-dependent setups. In this method, a network of nodes representing machine states and arcs representing buffer inventory levels is used to find a stable trajectory for the production system via an iterative procedure. The method determines suitable buffer levels that ensure a trajectory originating from any point within a buffer region will always map to a point contained in another buffer region for all future mappings. This iterative method was implemented using an algorithm that calculates the buffer inventory regions for all arcs in a given arc-node network. The algorithm showed favorable results for two- and three-product systems in which sequence-dependent setups may exist. In the second approach, a product sequencing algorithm determines a product sequence based on system parameters – setup times, buffer levels, usage rates, production rates, etc. The algorithm selects a product by evaluating the goodness of each product that has reached its replenishment threshold at the current time. The algorithm also incorporates a lookahead function that calculates goodness over some time interval into the future. The lookahead function considers all branches of the tree of potential sequences to prevent the sequence from travelling down a dead-end branch in which the system would be unable to avoid a depleted buffer. The sequencing algorithm allows the user to weight the five terms of the goodness equations (current and lookahead) to control the behavior of the sequence.
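
    A compact sketch of the product-selection step with bounded lookahead (the goodness function, its weights, and the state transition are hypothetical stand-ins for the thesis's buffer levels, usage rates, production rates, and setup times):

```python
# Sketch of weighted-goodness product selection with a bounded lookahead.
# `goodness`, `simulate`, and the state representation are hypothetical stand-ins.
def choose_next_product(state, candidates, goodness, simulate, depth=2):
    """Pick the candidate with the best immediate-plus-lookahead goodness.

    state:      current buffer levels / machine state (opaque here)
    candidates: products whose buffers have hit their replenishment threshold
    goodness:   goodness(state, product) -> float (weighted sum of terms)
    simulate:   simulate(state, product) -> next state, or None if a buffer would deplete
    """
    def lookahead(s, d):
        if d == 0:
            return 0.0
        scores = []
        for p in candidates:
            nxt = simulate(s, p)
            if nxt is None:                   # dead-end branch: a buffer would run dry
                continue
            scores.append(goodness(s, p) + lookahead(nxt, d - 1))
        return max(scores, default=float("-inf"))  # -inf if every branch dead-ends

    best, best_score = None, float("-inf")
    for p in candidates:
        nxt = simulate(state, p)
        if nxt is None:
            continue
        score = goodness(state, p) + lookahead(nxt, depth - 1)
        if score > best_score:
            best, best_score = p, score
    return best
```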

    NOVEL GROUND BOUNCE NOISE REDUCTION WITH ENHANCED POWER AND AREA EFFICIENCY FOR LOW POWER PORTABLE APPLICATION

    As technology scales into the nanometer regime, ground bounce noise and heat dissipation immunity are becoming metrics of comparable importance to leakage current, active power, delay, and area in the analysis and design of complex arithmetic logic circuits. In this paper, low-leakage 1-bit PFAL full adder cells based on adiabatic logic are proposed for mobile applications, offering low ground bounce noise and low heat dissipation. The simulations are performed using the DSCH and Microwind software.

    Dynamic Stochastic Inventory Management in E-Grocery Retailing: The Value of Probabilistic Information

    Inventory management optimisation in a multi-period setting with dependent demand periods requires the determination of replenishment order quantities in a dynamic stochastic environment. Retailers face uncertainty in both demand and supply for each demand period. In grocery retailing, perishable goods without best-before dates further amplify the degree of uncertainty due to stochastic spoilage. Assuming a lead time of multiple days, the inventory at the beginning of each demand period is determined jointly by the realisations of these stochastic variables. While existing contributions in the literature focus on the role of single components only, we propose to integrate all of them into a joint framework, explicitly modelling demand, supply shortages, and spoilage using suitable probability distributions learned from historical data. As the resulting optimisation problem is analytically intractable in general, we use a stochastic lookahead policy incorporating Monte Carlo techniques to fully propagate the associated uncertainties and derive replenishment order quantities. We develop a general inventory management framework and analyse the benefit of modelling each source of uncertainty with an appropriate probability distribution. Additionally, we conduct a sensitivity analysis with respect to the location and dispersion of these distributions. We illustrate the practical feasibility of our framework in a case study on data from a European e-grocery retailer. Our findings illustrate the importance of properly modelling stochastic variables using suitable probability distributions for a cost-effective inventory management process.
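
    A stylised sketch of a single Monte Carlo lookahead decision under these three sources of uncertainty (the samplers, candidate order quantities, and cost parameters below are illustrative assumptions, not the paper's calibration):

```python
# Stylised Monte Carlo lookahead for one replenishment decision.
# The samplers and cost parameters stand in for the distributions learned
# from historical data in the paper's framework; they are not its calibration.
import random

def expected_cost(order_qty, inventory, sample_demand, sample_delivered, sample_spoiled,
                  n_samples=1000, shortage_cost=5.0, holding_cost=1.0):
    """Estimate the expected cost of an order quantity by propagating all three uncertainties."""
    total = 0.0
    for _ in range(n_samples):
        delivered = sample_delivered(order_qty)                      # supply shortage: delivered <= ordered
        on_hand = inventory + delivered - sample_spoiled(inventory)  # stochastic spoilage of existing stock
        demand = sample_demand()
        shortage = max(0.0, demand - on_hand)
        leftover = max(0.0, on_hand - demand)
        total += shortage_cost * shortage + holding_cost * leftover
    return total / n_samples

def best_order(inventory, candidate_qtys, **samplers):
    """Pick the candidate order quantity with the lowest estimated expected cost."""
    return min(candidate_qtys, key=lambda q: expected_cost(q, inventory, **samplers))

if __name__ == "__main__":
    # Example with illustrative (made-up) distributions.
    random.seed(0)
    q = best_order(
        inventory=20,
        candidate_qtys=range(0, 101, 5),
        sample_demand=lambda: random.gauss(60, 15),
        sample_delivered=lambda o: o * random.uniform(0.8, 1.0),     # up to 20% supply shortfall
        sample_spoiled=lambda inv: inv * random.uniform(0.0, 0.1),   # up to 10% spoilage
    )
    print("chosen order quantity:", q)
```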

    Ubik: efficient cache sharing with strict QoS for latency-critical workloads

    Chip-multiprocessors (CMPs) must often execute workload mixes with different performance requirements. On one hand, user-facing, latency-critical applications (e.g., web search) need low tail (i.e., worst-case) latencies, often in the millisecond range, and have inherently low utilization. On the other hand, compute-intensive batch applications (e.g., MapReduce) only need high long-term average performance. In current CMPs, latency-critical and batch applications cannot run concurrently due to interference on shared resources. Unfortunately, prior work on quality of service (QoS) in CMPs has focused on guaranteeing average performance, not tail latency. In this work, we analyze several latency-critical workloads, and show that guaranteeing average performance is insufficient to maintain low tail latency, because microarchitectural resources with state, such as caches or cores, exert inertia on instantaneous workload performance. Last-level caches impart the highest inertia, as workloads take tens of milliseconds to warm them up. When left unmanaged, or when managed with conventional QoS frameworks, shared last-level caches degrade tail latency significantly. Instead, we propose Ubik, a dynamic partitioning technique that predicts and exploits the transient behavior of latency-critical workloads to maintain their tail latency while maximizing the cache space available to batch applications. Using extensive simulations, we show that, while conventional QoS frameworks degrade tail latency by up to 2.3x, Ubik simultaneously maintains the tail latency of latency-critical workloads and significantly improves the performance of batch applications.

    United States. Defense Advanced Research Projects Agency (Power Efficiency Revolution For Embedded Computing Technologies, Contract HR0011-13-2-0005); National Science Foundation (U.S.) (Grant CCF-1318384).
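
    A highly simplified illustration of the partitioning trade-off the abstract describes (a hypothetical feedback heuristic, not Ubik's transient-aware prediction mechanism): give the latency-critical workload enough cache ways to keep its tail latency within target, and hand the rest to batch jobs.

```python
# Hypothetical way-partitioning heuristic between a latency-critical (LC) workload
# and batch workloads. This is NOT Ubik's algorithm; Ubik predicts cache transients,
# whereas this sketch only reacts to the measured tail latency of the last interval.
def partition_ways(total_ways, lc_tail_latency_us, tail_target_us, lc_ways_now):
    """Return (lc_ways, batch_ways) for the next interval."""
    if lc_tail_latency_us > tail_target_us:
        lc_ways = min(total_ways, lc_ways_now + 1)   # grow the LC partition until the target is met
    else:
        lc_ways = max(1, lc_ways_now - 1)            # shrink cautiously to free space for batch work
    return lc_ways, total_ways - lc_ways
```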

    Approximate logic circuits: Theory and applications

    CMOS technology scaling, the process of shrinking transistor dimensions based on Moore's law, has been the thrust behind increasingly powerful integrated circuits for over half a century. As dimensions are scaled to a few tens of nanometers, process and environmental variations can significantly alter transistor characteristics, degrading reliability and reducing the performance gains of CMOS designs with technology scaling. Although design solutions proposed in recent years to improve the reliability of CMOS designs are power-efficient, the performance penalty associated with these solutions further reduces the performance gains from technology scaling, and hence they are not well suited for high-performance designs. This thesis proposes approximate logic circuits as a new logic synthesis paradigm for reliable, high-performance computing systems. Given a specification, an approximate logic circuit is functionally equivalent to the given specification for a "significant" portion of the input space, but has smaller delay and power than a circuit implementation of the original specification. The contributions of this thesis include (i) a general theory of approximation and efficient algorithms for automated synthesis of approximations for unrestricted random logic circuits, (ii) logic design solutions based on approximate circuits to improve the reliability of designs with negligible performance penalty, and (iii) efficient decomposition algorithms based on approximate circuits to improve the performance of designs during logic synthesis. The thesis concludes with other potential applications of approximate circuits and identifies open problems in logic decomposition and approximate circuit synthesis.
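
    A toy illustration of the approximation notion (a hypothetical example, not one of the thesis's benchmarks or algorithms): replace the exact carry-out of a 2-bit adder with a cheaper function and measure on what fraction of the input space the two agree.

```python
# Toy illustration of an approximate logic function: the exact carry-out of a
# 2-bit adder versus a cheaper approximation, with the fraction of the input
# space on which they agree. Hypothetical example, not taken from the thesis.
from itertools import product

def exact_carry(a1, a0, b1, b0):
    s = (a1 * 2 + a0) + (b1 * 2 + b0)
    return int(s >= 4)                 # carry-out of a 2-bit addition

def approx_carry(a1, a0, b1, b0):
    return a1 & b1                     # cheaper circuit: ignores the carry from the low bits

agree = sum(exact_carry(*x) == approx_carry(*x) for x in product((0, 1), repeat=4))
print(f"agreement: {agree}/16 input combinations")   # 14/16 = 87.5% of the input space
```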