
    Toward Sustainability, High Density and Short Response Time by Live-cube Storage Systems

    This paper studies random storage in a live-cube storage system in which loads are stored multi-deep. Although such storage systems are still rare, they are increasingly used, for example in automated car parking systems. Each load is individually accessible and, as long as an open slot is available next to it, can be moved by a shuttle in the x- and y-directions toward a lift on every level of the system, comparable to Sam Loyd’s sliding puzzles. A lift moves the loads across levels in the z-direction. We derive the expected travel time of a random load from its storage location to the input/output point and optimize the system dimensions by minimizing this expected travel time.
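
    As an illustration of the kind of travel-time expectation studied here (not the paper's derivation), the following Python sketch estimates the expected travel time by Monte Carlo under simple assumptions: the I/O point sits at the origin, the shuttle covers the x- and y-distances sequentially at unit speed, the lift covers the z-distance, and the puzzle moves needed to open a path are ignored.

```python
import random

def expected_travel_time(L, W, H, v_xy=1.0, v_z=1.0, n=200_000, seed=42):
    """Monte Carlo estimate of the expected travel time from a uniformly
    random storage location to the I/O point at the origin (0, 0, 0).

    Illustrative assumptions (not the paper's model): the shuttle covers the
    x- and y-distances sequentially at speed v_xy, the lift covers the
    z-distance at speed v_z, and puzzle moves to open a path are ignored.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, y, z = rng.uniform(0, L), rng.uniform(0, W), rng.uniform(0, H)
        total += (x + y) / v_xy + z / v_z
    return total / n

# Compare two system shapes holding the storage volume constant.
print(expected_travel_time(10, 10, 10))   # cube-shaped system
print(expected_travel_time(20, 10, 5))    # flatter system, same volume
```

    Comparing a cube-shaped system with a flatter one of equal volume already hints at how the choice of dimensions drives the expected travel time.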

    A Comparative Evaluation of Snow Depth and Snow Water Equivalent Using Empirical Algorithms and Multivariate Regressions

    Space-borne passive microwave (PM) radiometers provide an opportunity to estimate snow water equivalent (SWE) and snow depth (SD) at both regional and global scales. This study employs empirical algorithms and multivariate regressions (MRs) using Special Sensor Microwave Imager (SSM/I) brightness temperature (TB) to achieve an accurate assessment of SD and SWE that is well suited to the study area of interest. The SSM/I data consist of Pathfinder Daily EASE-Grid TB supplied by the National Snow and Ice Data Center (NSIDC). For the present study, satellite-based data were gathered from 1992 through 2015 in two versions (v1: 09 July 1987 to 29 April 2009; v2: 14 December 2006 onward). The results indicate that a stepwise multivariate nonlinear regression (MNLR) outperformed the other methods (r = 0.41 and 0.344 for SD and SWE, respectively). However, the correlation between ground-based and satellite-derived data remains fairly unsatisfactory, owing to the sparse ground-based data and the omission of other parameters (snow density, moisture, etc.).
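
    The abstract does not give the regression form; as a hedged sketch of the stepwise idea, the Python example below performs forward stepwise selection over brightness-temperature channels, their differences, and squared differences on synthetic data (the channel names, candidate terms, and synthetic relationship are assumptions, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for SSM/I brightness-temperature channels (hypothetical;
# the real study uses Pathfinder EASE-Grid TB channels).
n = 500
tb = rng.normal(240.0, 15.0, size=(n, 4))
snow_depth = 2.0 * (tb[:, 1] - tb[:, 3]) + 0.01 * (tb[:, 0] - tb[:, 2]) ** 2 + rng.normal(0, 5, n)

# Candidate regressors: raw channels, pairwise differences, and squared
# differences, giving the regression a simple nonlinear (polynomial) flavour.
features = {f"tb{i}": tb[:, i] for i in range(4)}
for i in range(4):
    for j in range(i + 1, 4):
        features[f"d{i}{j}"] = tb[:, i] - tb[:, j]
        features[f"d{i}{j}_sq"] = (tb[:, i] - tb[:, j]) ** 2

def fit_r(cols):
    """Least-squares fit on the chosen columns; returns the correlation r
    between fitted and observed snow depth."""
    X = np.column_stack([features[c] for c in cols] + [np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, snow_depth, rcond=None)
    return np.corrcoef(X @ beta, snow_depth)[0, 1]

# Forward stepwise selection: greedily add the regressor that most improves r.
selected, best_r = [], 0.0
remaining = set(features)
while remaining:
    cand, r = max(((c, fit_r(selected + [c])) for c in remaining), key=lambda t: t[1])
    if r - best_r < 0.01:   # stop when the gain is negligible
        break
    selected.append(cand)
    remaining.remove(cand)
    best_r = r

print(selected, round(best_r, 3))
```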

    Response time analysis of a live-cube compact storage system with two storage classes

    We study a next generation of storage systems: live-cube compact storage systems. These systems are becoming increasingly popular due to their small physical and environmental footprint paired with a large storage space. At each level of a live-cube system, multiple shuttles take care of the movement of unit loads in the x and y directions. When multiple empty locations are available, the shuttles can cooperate to create a virtual aisle for the retrieval of a desired unit load. A lift takes care of the movement across different levels in the z-direction. Two-class-based storage, in which high-turnover unit loads are stored at storage locations closer to the Input/Output point, can result in a short response time. We study two-class-based storage for a live-cube system and derive closed-form formulas for the expected retrieval time. Although the system needs to be decomposed into several cases and sub-cases, we eventually obtain simple-to-use closed-form formulas to evaluate the performance of systems with any configuration and first zone boundary. The continuous-space closed-form formulas are shown to be very close to the results obtained for discrete-space live-cube systems. The numerical results show that, for the instances tested, two-class-based storage can reduce the average response time of a live-cube system by up to 55% compared with random storage.
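
    The closed-form formulas themselves are not reproduced in this abstract; the simulation sketch below merely illustrates why two-class-based storage shortens response times, under the illustrative assumptions of a discrete grid, additive unit-speed shuttle-plus-lift travel from the I/O point at the origin, and an 80/20 demand split for the first zone.

```python
import random

def avg_response_time(dims, zone_frac, demand_frac_A, n=100_000, seed=1):
    """Monte Carlo estimate of the mean single-command retrieval time in a
    discrete grid of storage locations with the I/O point at (0, 0, 0).

    Illustrative assumptions (not the paper's closed-form model): travel time
    is x + y (shuttle) plus z (lift) at unit speed, class A occupies the
    zone_frac closest locations and receives demand_frac_A of all retrievals.
    """
    L, W, H = dims
    locs = sorted(((x + y + z, (x, y, z)) for x in range(L) for y in range(W) for z in range(H)))
    cut = int(zone_frac * len(locs))
    zone_a = [t for t, _ in locs[:cut]]
    zone_b = [t for t, _ in locs[cut:]]
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        zone = zone_a if rng.random() < demand_frac_A else zone_b
        total += rng.choice(zone)
    return total / n

dims = (10, 10, 10)
random_storage = avg_response_time(dims, zone_frac=1.0, demand_frac_A=1.0)
two_class = avg_response_time(dims, zone_frac=0.2, demand_frac_A=0.8)  # 80/20 demand skew
print(f"random: {random_storage:.2f}, two-class: {two_class:.2f}")
```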

    Optimal two-class-based storage in a live-cube compact storage system

    Live-cube compact storage systems realize high storage space utilization and high throughput, due to full automation and independent movements of unit loads in three-dimensional space. Applying an optimal two-class-based storage policy, in which high-turnover products are stored at locations closer to the Input/Output point, significantly reduces the response time. Live-cube systems are used in various sectors, such as warehouses and distribution centers, parking systems, and container yards. The system stores unit loads, such as pallets, cars, or containers, multi-deep at multiple levels of storage grids. Each unit load is located on its own shuttle. Shuttles move unit loads at each level in the x and y directions, with a lift taking care of the movement in the z-direction. Moving a requested unit load to the lift location is comparable to solving Sam Loyd’s puzzle, in which 15 numbered tiles slide in a 4 × 4 grid. With multiple empty locations, however, a virtual aisle can be created to shorten the retrieval time for a requested unit load. In this article, we optimize the dimensions and zone boundary of a two-class live-cube compact storage system, leading to a minimum response time. We propose a mixed-integer nonlinear model that consists of 36 sub-cases, each representing a specific configuration and first zone boundary. Properties of the optimal system are used to simplify the model without losing optimality. The overall optimal solutions are then obtained by solving the remaining sub-cases. Although the solution procedure is tedious, we eventually obtain two sets of closed-form expressions for the optimal system dimensions and first zone boundary for any desired system size. In addition, we propose an algorithm to obtain the optimal first zone boundary for situations where the optimal system dimensions cannot be achieved. To test the effect of the optimal system dimensions and first zone boundary on the performance of a two-class-based live-cube system, we perform a sensitivity analysis by varying the ABC curve, system size, first zone size, and shape factor. The results show that for most cases optimal two-class-based storage outperforms random storage, with up to 45% shorter expected retrieval times.
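
    The paper's 36-sub-case MINLP and closed-form expressions are not reproduced here; as a rough stand-in, the sketch below brute-forces the system dimensions (at a fixed capacity) and the first zone boundary under the same illustrative additive-travel assumptions as above, with an assumed 80/20 ABC curve and the zone boundary expressed as a fraction of the closest locations.

```python
from itertools import product

def expected_time(dims, zone_frac, p_a):
    """Exact expectation (under illustrative assumptions) of the retrieval
    time in a discrete grid: travel time x + y + z from the location to the
    I/O point at the origin, class A uniform over the zone_frac closest
    locations with request share p_a, class B uniform over the rest."""
    L, W, H = dims
    times = sorted(x + y + z for x, y, z in product(range(L), range(W), range(H)))
    cut = max(1, int(zone_frac * len(times)))
    mean_a = sum(times[:cut]) / cut
    mean_b = sum(times[cut:]) / (len(times) - cut) if cut < len(times) else 0.0
    return p_a * mean_a + (1 - p_a) * mean_b

# Brute-force search over system shape (fixed capacity of 1000 slots) and
# first zone boundary for an assumed 80/20 ABC curve.
best = min(
    (expected_time((L, W, 1000 // (L * W)), f / 10, 0.8), (L, W, 1000 // (L * W)), f / 10)
    for L in range(2, 21) for W in range(2, 21) if 1000 % (L * W) == 0
    for f in range(1, 10)
)
print(best)   # (expected time, best dims, best zone fraction)
```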

    Modelling load retrievals in Puzzle-Based Storage systems

    Puzzle-based storage systems are a new type of automated storage system that allows unit loads (e.g. cars, pallets, boxes) to be stored in a rack on a very small footprint while keeping every load individually accessible. They resemble the famous 15 sliding-tile puzzle. Current models for such systems study retrieving loads one at a time; however, much time can be saved by considering multiple retrieval loads simultaneously. We develop an optimal method for retrieving two loads and heuristics for three or more loads. Optimal retrieval paths are constructed for multiple-load retrieval, which consists of first moving the loads to an intermediary ‘joining location’. We find that, compared to individual retrieval, optimal dual-load retrieval saves 17% of move time on average, and the savings from the heuristic are almost the same. For three loads, savings are 23% on average. A limitation of our method is that it is valid only for systems with a very high space utilisation, i.e. only one empty location.
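
    To make the puzzle analogy concrete, the sketch below computes the minimum number of moves to retrieve a single load when only one empty location exists, using breadth-first search over (target cell, empty cell) states, since all other loads are interchangeable; the joining-location logic for dual and triple retrievals is the paper's contribution and is not reproduced here.

```python
from collections import deque

def min_moves(grid_dims, target, empty, io=(0, 0)):
    """Minimum number of unit-load moves to bring the target load to the I/O
    cell in a fully occupied grid with a single empty cell (all other loads
    are interchangeable, so the state is just (target cell, empty cell)).

    Illustrative single-load sketch, not the paper's multi-load method.
    """
    R, C = grid_dims
    start = (target, empty)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (tgt, emp), d = queue.popleft()
        if tgt == io:
            return d
        # Any of the four neighbours of the empty cell may slide into it.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (emp[0] + dr, emp[1] + dc)
            if 0 <= nb[0] < R and 0 <= nb[1] < C:
                new_tgt = emp if nb == tgt else tgt   # target slides into the empty cell
                state = (new_tgt, nb)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, d + 1))
    return None

print(min_moves((4, 4), target=(3, 3), empty=(0, 1)))   # retrieve from the far corner
```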

    The impact of integrated cluster-based storage allocation on parts-to-picker warehouse performance

    Order picking is one of the most demanding activities in many warehouses in terms of capital and labor. In parts-to-picker systems, automated vehicles or cranes bring the parts to a human picker. The storage assignment policy, i.e. the assignment of products to storage locations, influences order picking efficiency. Commonly used storage assignment policies, such as full-turnover-based and class-based storage, consider only the frequency at which each product has been requested and ignore information on the frequency at which products are ordered jointly, known as product affinity. Warehouses can use product affinity to make informed decisions and assign multiple correlated products to the same inventory “pod” to reduce retrieval time. Existing affinity-based assignments sequentially cluster products with high affinity and assign the clusters to storage locations. We propose an integrated cluster allocation (ICA) policy to minimize the retrieval time of parts-to-picker systems based on both product turnover and affinity obtained from historical customer orders. We formulate a mathematical model that can solve small instances and develop a greedy construction heuristic for large instances. The ICA storage policy can reduce total retrieval time by up to 40%.
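
    The ICA model and heuristic are not given in the abstract; the sketch below only illustrates the ingredients: pairwise affinity counted from historical orders and a simple greedy pod construction seeded by turnover (the order data, pod size, and greedy rule are assumptions for illustration, not the paper's heuristic).

```python
from collections import Counter
from itertools import combinations

# Hypothetical historical orders (each order lists the products picked together).
orders = [
    ["A", "B", "C"], ["A", "B"], ["C", "D"], ["A", "C"],
    ["E", "F"], ["E", "F", "A"], ["B", "C"], ["D", "F"],
]

turnover = Counter(p for order in orders for p in order)
affinity = Counter()
for order in orders:
    for p, q in combinations(sorted(set(order)), 2):
        affinity[(p, q)] += 1

def pair_affinity(p, q):
    """Number of historical orders containing both p and q."""
    return affinity[tuple(sorted((p, q)))]

# Greedy pod construction: seed each pod with the highest-turnover unassigned
# product, then fill it with the products most strongly tied to the pod so far.
pod_size = 3
unassigned = set(turnover)
pods = []
while unassigned:
    seed = max(unassigned, key=lambda p: turnover[p])
    pod = [seed]
    unassigned.remove(seed)
    while len(pod) < pod_size and unassigned:
        nxt = max(unassigned, key=lambda p: sum(pair_affinity(p, q) for q in pod))
        pod.append(nxt)
        unassigned.remove(nxt)
    pods.append(pod)

# Pods built first would then be assigned to the storage locations closest to the pickers.
print(pods)
```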

    Optimizing make-to-stock policies through a robust lot-sizing model

    In this paper we consider a practical lot-sizing problem faced by an industrial company. The company plans production for a set of products following a Make-To-Order policy. When the productive capacity is not fully used, the remaining capacity is devoted to products whose orders are typically well below the established minimum production level. For these products the company follows a Make-To-Stock (MTS) policy, since part of the production is meant to fulfill future estimated orders. This yields a particular lot-sizing problem in which one must decide which products to produce and the corresponding batch sizes. Such lot-sizing problems typically face uncertain demands, which we address here through the lens of robust optimization. First we provide a mixed-integer formulation assuming the future demands are deterministic, and we tighten the model with valid inequalities. Then, in order to account for demand uncertainty, we propose a robust approach in which demands are assumed to belong to given intervals and the number of deviations from the nominal estimated values is limited. As the number of products can be large and some instances may not be solved to optimality, we propose two heuristics. Computational tests are conducted on a set of instances generated from real data provided by our industrial partner. The proposed heuristics are fast and provide good-quality solutions for the tested instances. Moreover, since they are based on the mathematical model and use simple strategies to reduce the instance size, these heuristics could be extended to solve other multi-item lot-sizing problems with uncertain demands.
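
    The paper's formulation is not reproduced in the abstract; a generic capacitated multi-item lot-sizing sketch with a minimum production level and, in the robust version, a budget on the number of demand deviations could look as follows (all symbols and constraints are illustrative assumptions, not the paper's model).

```latex
% Illustrative sketch: x = production, s = inventory, y = setup indicator,
% d = demand, \ell = minimum production level, C = capacity, M = a large bound.
\begin{align*}
\min\; & \sum_{i=1}^{N}\sum_{t=1}^{T}\bigl(f_i\,y_{i,t} + h_i\,s_{i,t}\bigr) \\
\text{s.t.}\; & s_{i,t-1} + x_{i,t} = d_{i,t} + s_{i,t} & &\forall i,\,t \quad\text{(inventory balance)}\\
& \ell_i\,y_{i,t} \le x_{i,t} \le M\,y_{i,t} & &\forall i,\,t \quad\text{(minimum batch if produced)}\\
& \textstyle\sum_i x_{i,t} \le C_t & &\forall t \quad\text{(capacity)}\\
& x_{i,t},\,s_{i,t} \ge 0,\qquad y_{i,t}\in\{0,1\}.
\end{align*}
% Robust variant (budgeted uncertainty): each demand lies in
% [\bar d_{i,t} - \hat d_{i,t},\, \bar d_{i,t} + \hat d_{i,t}] and at most
% \Gamma of the demands may deviate from their nominal values.
```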

    Understanding and Improving Patient Flow in Outpatient Clinics and Emergency Departments

    Improving patient flow is a critical aspect of quality management in emergency departments and other healthcare settings. By improving the flow of patients in healthcare facilities, we can decrease wait times and boost patient and staff satisfaction. Many patients face physical pain and suffering while waiting for treatment in healthcare facilities, and long wait times may also result in treatable illnesses and injuries becoming chronic conditions. This dissertation includes three main chapters, corresponding to three essays on understanding and improving patient flow in outpatient clinics and emergency departments. In some outpatient clinics, lab tests must be completed before the clinic appointment, as doctors need the test results when seeing a patient. Achieving this tight coordination of a patient's testing and the subsequent doctor's appointment may be difficult in a facility where many physicians share the same testing resources. The second chapter presents a mixed-integer programming (MIP)-based approach to reduce the likelihood of a patient not completing testing in time for the clinic appointment. In the third chapter, we focus on improving patient flow in emergency departments by looking at the physician scheduling problem; we show that the scheduling of physicians has a direct impact on patient waiting times. Chapter 4 presents a new emergency department crowding measure based on patient volume and patient mix, and we assess the relevance and significance of the proposed measure.
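
    The abstract states only that the proposed crowding measure is based on patient volume and patient mix; the snippet below is a purely hypothetical illustration of what a volume-and-mix weighted score could look like, and is not the dissertation's measure.

```python
# Hypothetical illustration only: each patient contributes a weight by acuity
# level (ESI 1 = most resource-intensive), and the score is the weighted
# census normalised by the number of staffed treatment spaces.
acuity_weight = {1: 3.0, 2: 2.0, 3: 1.0, 4: 0.5, 5: 0.25}

def crowding_score(patients_by_acuity, staffed_beds):
    """patients_by_acuity: dict mapping ESI level -> current patient count."""
    weighted_census = sum(acuity_weight[a] * n for a, n in patients_by_acuity.items())
    return weighted_census / staffed_beds

print(crowding_score({1: 1, 2: 4, 3: 10, 4: 6, 5: 2}, staffed_beds=20))
```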

    An Improved Stochastic Generation Approach for Assessing the Vulnerability of Water Resource Systems under Changing Streamflow Conditions

    Water-related disasters such as floods and droughts highlight the urgent need to secure water resource systems for human and ecosystem uses. Increasing anthropogenic interventions, along with climate variability and change, have exacerbated the intensity and frequency of such water-related events, and this will continue in the future. Such pressures introduce substantial and unprecedented vulnerability to water resource management. Understanding the extent of potential vulnerabilities, however, is not trivial, due to the uncertainty in current top-down impact assessments. To address current limitations, bottom-up frameworks have been proposed in the past decade as alternatives to top-down, scenario-led vulnerability assessments. The core idea behind bottom-up schemes is to analyze potential impacts directly as a function of potential changes in streamflow conditions through a systematic stress-testing scheme. To make such stress tests reliable, systematic methodologies are needed to synthesize streamflow, and other hydroclimatic variables, beyond the historical observations. Despite ongoing advances in stochastic streamflow generation under stationary conditions – with which the vulnerability assessment can be performed – little attention has been given to advancing perturbation algorithms for altering streamflow characteristics under nonstationary conditions; in fact, only a few approaches incorporate climate-related proxies into streamflow generation. This thesis aims to shed light on some limitations of bottom-up approaches and to propose an improved stochastic streamflow generation framework for impact assessment in water resource systems under changing streamflow conditions. This is achieved by: (1) identifying uncertainties in current stochastic streamflow generation approaches, as well as how and why these uncertainties matter to bottom-up impact assessment; (2) providing a guideline on the choice of the optimal scheme(s) for stochastic generation of streamflow series at various temporal and spatial scales; (3) proposing a methodology to incorporate the effect of large-scale climate indices in stochastic streamflow generation; (4) identifying the types of changes in the streamflow regime through a systematic and globally relevant approach; and (5) proposing a generic algorithm to shift a wide range of characteristics in streamflow time series and to enable transient, nonstationary flow generation. This research results in an improved stochastic streamflow generation scheme capable of generating scenarios of change under nonstationary conditions. The skill of the proposed algorithm is assessed over multiple natural streams, showing good performance in representing the plausible changes required for the vulnerability assessment of water resource systems.
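
    The thesis's perturbation algorithm is not detailed in the abstract; the sketch below shows the general flavour of stress-test streamflow generation, using a lag-1 autoregressive lognormal monthly generator whose median and spread are multiplicatively shifted (the climatology, shift factors, and model form are illustrative assumptions).

```python
import numpy as np

def generate_streamflow(n_years, monthly_median, monthly_log_std, lag1_r,
                        median_shift=1.0, std_shift=1.0, seed=7):
    """Lag-1 autoregressive (Thomas-Fiering style) monthly streamflow generator
    in log space, with multiplicative perturbation of the monthly median and
    log-space spread to build stress-test scenarios.

    Illustrative sketch only: the thesis proposes a more general algorithm for
    shifting a wide range of streamflow characteristics under nonstationarity.
    """
    rng = np.random.default_rng(seed)
    mu = np.log(np.asarray(monthly_median, dtype=float) * median_shift)
    sigma = np.asarray(monthly_log_std, dtype=float) * std_shift
    flows = np.empty(n_years * 12)
    z_prev = rng.standard_normal()
    for t in range(flows.size):
        m = t % 12
        z = lag1_r * z_prev + np.sqrt(1.0 - lag1_r ** 2) * rng.standard_normal()
        flows[t] = np.exp(mu[m] + sigma[m] * z)   # lognormal monthly flow
        z_prev = z
    return flows

# Hypothetical monthly medians (m^3/s) and log-space standard deviations.
median_flow = [30, 28, 45, 80, 120, 90, 50, 35, 30, 32, 34, 31]
log_std = [0.3] * 12

baseline = generate_streamflow(50, median_flow, log_std, lag1_r=0.6)
drier = generate_streamflow(50, median_flow, log_std, lag1_r=0.6,
                            median_shift=0.8, std_shift=1.2, seed=8)
print(baseline.mean(), drier.mean())   # stress scenario: lower median, higher variability
```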