
    Modeling and Communicating Flexibility in Smart Grids Using Artificial Neural Networks as Surrogate Models

    Increasing shares of renewable energies and the transition towards electric vehicles pose major challenges to the energy system. In order to tackle these challenges in an economically sensible way, the flexibility of distributed energy resources (DERs), such as battery energy storage systems, combined heat and power plants, and heat pumps, needs to be exploited. Modeling and communicating this flexibility is a fundamental step towards achieving control over DERs. The literature proposes and makes use of many different approaches, not only for the exploitation itself but also in terms of models. As a first step, this thesis presents an extensive literature review and a general framework for classifying exploitation approaches and the communicated models. Often, the employed models apply only to specific types of DERs, or the models are so abstract that they neglect constraints and only roughly outline the true flexibility. Surrogate models, which are learned from data, can serve as generic DER models and may potentially be trained in a fully automated process. In this thesis, the idea of encoding the flexibility of DERs into artificial neural networks (ANNs) is systematically investigated. Based on the presented framework, a set of ANN-based surrogate modeling approaches is derived and outlined, some of which are applicable only to specific use cases. In order to establish a baseline for the approximation quality, one of the most versatile of the identified approaches is evaluated to assess how well it approximates a set of reference models; if this versatile model captures the flexibility well, a more specific model can be expected to do so even better. The results show that simple DERs are approximated very closely, and that for more complex DERs and combinations of multiple DERs a high approximation quality can be achieved by introducing buffers. Additionally, the investigated approach has been tested in scheduling tasks for multiple different DERs, showing that it is indeed possible to use ANN-based surrogates for the flexibility of DERs to derive load schedules. Finally, the computational complexity of utilizing the different approaches for controlling DERs is compared.
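    As a hedged illustration of the core idea only (not the thesis's actual architectures, reference models, or training setup): a feed-forward ANN can learn the flexibility of a simple DER as a feasibility classifier over candidate power schedules. All names and parameters below are invented toy values.

```python
# Minimal sketch: learn a toy battery's flexibility -- "is this 24-step
# power schedule feasible?" -- with a feed-forward ANN surrogate.
import numpy as np
from sklearn.neural_network import MLPClassifier

CAPACITY_KWH, P_MAX_KW, STEPS = 10.0, 5.0, 24  # assumed toy battery limits

def feasible(schedule, soc0=5.0):
    """Reference model: a schedule (kW per 1 h step, charging positive) is
    feasible if the state of charge stays within [0, CAPACITY_KWH]."""
    soc = soc0 + np.cumsum(schedule)
    return np.all((soc >= 0.0) & (soc <= CAPACITY_KWH))

rng = np.random.default_rng(0)
X = rng.uniform(-P_MAX_KW, P_MAX_KW, size=(20000, STEPS))
y = np.fromiter((feasible(s) for s in X), dtype=int)

# The surrogate replaces the reference model once trained.
surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
surrogate.fit(X[:16000], y[:16000])
print("approximation accuracy:", surrogate.score(X[16000:], y[16000:]))
```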

    Reducing the tied up capital through investigation of production postponement and inventory

    Delimitations: The company has several product families, but only one is investigated in this project. There are two main warehouses at the production site, one for raw material and one for finished goods; only the latter was studied. Capital can also be tied up elsewhere in the company, e.g. in products being processed in a machine, but this was not examined either. The inventories were investigated by questioning their size and location. The suggested changes might require capacity or technical adjustments in production; the feasibility of implementing those changes in practice is not discussed in depth in this report.

    Methodology: First, a literature study was carried out to find usable analytical tools for the inventory review and to understand the possibilities and limitations of a postponement strategy. Company knowledge was obtained through interviews, observations, and the information systems. A simulation model was created to investigate whether intermediate storage was possible at different steps of the production.

    Analysis: The main finding of the first part of the analysis was the high level of the finished goods inventory. The level far exceeded the level set by Hyde themselves (safety stock + forecast), resulting in a high cost of tied-up capital. A review was also carried out to examine whether the intended inventory levels would have been sufficient to avoid stock-outs in 2012. It showed two stock-out occasions during the year on which it would not have been possible to deliver to the customer on time; owing to the high actual inventory levels, this caused no problem in reality. Four postponement scenarios were investigated, in which all or a few product groups were placed in an intermediate storage. Only one scenario, placing product group YB before the mixing step, was feasible; the other scenarios resulted in production times longer than the maximum time to shipping, even when extensive investments in production capacity were accounted for.

    Results and conclusion: The purpose of the master thesis has been fulfilled; ways to reduce the tied-up capital have been recommended through general inventory reduction (871 kSEK/year) and, in part, the use of a postponement strategy. The most efficient way to reduce inventory levels is a more flexible production planning that enables smaller batches. Flexible manufacturing is also required to make a postponement strategy possible. Postponement of some products could be beneficial, but the savings are limited; postponement is therefore recommended only if no additional investments are required.
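    For orientation, a minimal arithmetic sketch of how the cost of tied-up capital is typically estimated (the figures below are hypothetical placeholders, not Hyde's actual numbers):

```python
# Illustrative only: annual cost of tied-up capital is commonly taken as
# average inventory value times a holding-cost rate; the saving potential is
# the gap to the target level (safety stock + forecast) times the same rate.
avg_inventory_value_ksek = 9500.0   # hypothetical average finished-goods value
target_value_ksek = 5000.0          # hypothetical safety stock + forecast level
holding_rate = 0.15                 # assumed capital + storage cost per year

excess = avg_inventory_value_ksek - target_value_ksek
print(f"annual cost of tied-up capital: "
      f"{avg_inventory_value_ksek * holding_rate:.0f} kSEK/year")
print(f"potential saving from closing the gap: "
      f"{excess * holding_rate:.0f} kSEK/year")
```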

    MODEL-BASED DESIGN AND IMPLEMENTATION OF DEEP WAVEFORM ANALYSIS SYSTEMS

    Analysis of signals of relatively long duration, an area referred to as deep waveform analysis, is of increasing importance in instrumentation systems for wireless communications. For example, jitter measurement of deep waveforms must be performed during design and manufacturing tests for complex communications circuitry or equipment. As requirements for bit error rate performance become more stringent and data volumes increase, it becomes increasingly important to perform deep waveform analysis computations on long, or even temporally unbounded, waveforms. Real-time response and limited hardware resources challenge the design of deep waveform analysis systems. Previous methods required storage of, and computation across, all samples of the waveform at once; as the amount of data in the waveform grows, and especially if the waveform is unbounded, storing the waveform in its entirety becomes impractical. The need to satisfy stringent real-time constraints, handle large volumes of data at high sample rates, and operate on resource-constrained platforms results in challenging problems in the development of advanced systems for deep waveform analysis. In this thesis, we have developed new design methodologies and design optimization methods to address these problems. The contributions of the thesis are geared toward handling large, possibly unbounded, signal data sets, and providing novel trade-offs among measurement accuracy, memory constraints, and real-time performance. Motivated by performance bottlenecks that we observed in our experimentation with deep waveform analysis, we have also developed a new model of computation for representing signal processing applications in a way that improves the efficiency of data communication between computational modules. The main contributions of this thesis are summarized as follows.

    (1) Design methodology for deep waveform analysis systems. We have developed a new design methodology for deep waveform analysis under limited resources. The methodology builds on the formalisms of dataflow-based design and implementation of signal processing systems. Our proposed methodology is shown to significantly advance the prior state of the art in jitter measurement system design, and it forms an important foundation for the later contributions presented in the thesis. The approach is demonstrated through extensive experiments using actual measured data. Through its incorporation of high-level dataflow principles, it is suitable for efficient mapping to a variety of platforms, including multicore processors and graphics processing unit (GPU) devices for high-performance signal processing.

    (2) Design optimization for gapless deep waveform analysis. We have developed novel models and design optimization methods for addressing the real-time processing challenges of gapless deep waveform applications. A gapless signal processing application is characterized by one or more continuous streams of input data, where the data must be processed reliably without dropping any of the input samples. The strict real-time requirements of gapless deep waveform applications can be very challenging when input data rates are high, processing requirements are intensive, or the target platform is significantly resource-constrained. The methods developed in this part of the thesis focus on optimizing the throughput of deep waveform analysis subject to the on-board memory constraints of a given data acquisition system interface, processor memory constraints, and the constraint of gapless processing.

    (3) Passive-active flow graphs for dataflow-based implementation. We introduce a new model of computation called passive-active flow graphs (PAFGs), which complements conventional dataflow-based application representations. We have developed PAFGs to address important bottlenecks in dataflow graph implementation associated with communication between computational modules (dataflow graph vertices). We demonstrate the use of PAFGs as an intermediate representation for refining dataflow graphs into efficient implementations, develop the formal underpinnings of the PAFG model of computation, and introduce systematic transformation techniques for deriving and optimizing PAFG representations.
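    A minimal sketch of the bounded-memory principle behind gapless processing (not the thesis's dataflow tooling or its jitter algorithms): process an unbounded sample stream block by block, keeping only constant-size running state instead of the whole waveform. The merge formula is the standard Chan/Welford combination of partial means and variances.

```python
# Gapless, bounded-memory stream analysis: every block is consumed exactly
# once, and only O(1) state (count, mean, M2) survives between blocks.
import numpy as np

def blocks(rng, block_size=4096):
    """Toy unbounded source, standing in for a data acquisition interface."""
    while True:
        yield rng.normal(0.0, 1e-12, block_size)  # synthetic jitter samples

def gapless_stats(source, n_blocks):
    count, mean, m2 = 0, 0.0, 0.0
    for _, block in zip(range(n_blocks), source):
        n, bmean, bvar = block.size, block.mean(), block.var()
        delta, total = bmean - mean, count + n
        mean += delta * n / total                       # merge partial mean
        m2 += bvar * n + delta**2 * count * n / total   # merge partial M2
        count = total
    return mean, np.sqrt(m2 / count)                    # RMS estimate

mean, rms = gapless_stats(blocks(np.random.default_rng(1)), n_blocks=1000)
print(f"processed {1000 * 4096} samples in constant memory; rms = {rms:.3e}")
```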

    Enabling 5G Edge Native Applications


    Material handling optimization in warehousing operations

    Tableau d’honneur de la Faculté des études supérieures et postdoctorales, 2018-2019. Distribution and warehousing activities are important pillars of an effective supply chain: they regulate the flow of materials and synchronize all actors in the network. A distribution center (DC) acts as a decoupling point between supply, production and demand. Distribution includes a wide range of activities that ensure demand is satisfied, from the reception and storage of finished or semi-finished products to order preparation and delivery. Distribution was long seen as an operation with little or no added value; this has changed, and DC operations are now perceived as critical levers for improvement. They are responsible for satisfying an evolving market that demands ever faster and more reliable delivery times, exact orders and highly customized products, which has led to increased research interest in operations management focused on warehousing. For several years we have witnessed strong advances in warehousing and order picking operations. Order picking is the process of retrieving items from their storage locations in order to assemble customer orders. This problem has long been solved as a variant of the travelling salesman problem, in which the order picker moves through the aisles of the warehouse. However, modern warehouses carry more and more product families with special characteristics that make conventional methods inadequate or inefficient. The first part of this thesis by articles presents two practical and challenging material handling problems arising in order picking within DCs. Among the many research axes in warehousing operations, we concentrate our efforts on the order picking problem and on the repositioning of products within the picking area. The order picking problem has been intensively studied in the literature over recent decades. Our research widens the spectrum of this problem by including a set of characteristics associated with the physical facilities of the picking area, such as narrow aisles, and with the products themselves (weight, volume, category, fragility, etc.); a perspective more closely tied to the reality of operations is thus used in our algorithm development. The order picking workload is strongly influenced by the positioning of the products. The position of products within the picking area is determined by a storage assignment strategy, and many such strategies use product sales information to facilitate access to the most popular items. In today’s competitive environment, the profitable lifetime of a product can be relatively short, and promotions may be run to push particular products onto the market; the positioning provided by yesterday’s strategy is therefore unlikely to remain optimal today. Several studies measure the impact of a good reassignment of products on picking operations, but they compare picking performance only under the previous and the new configurations. The literature clearly shows that reassignment brings efficiency benefits; however, the moves required to go from one configuration to the other can themselves be a very demanding activity. This constitutes the second part of this thesis, which presents interesting advances on the repositioning of products within the picking area. We introduce the repositioning problem in a form that, to the best of our knowledge, has been little studied: the reassignment problem. More specifically, we study the workload required to move from one configuration to the next. This thesis is structured as follows. The introduction presents the characteristics and missions of a distribution system. Chapter 1 provides an overview of the literature on the main functions of a DC and emphasizes order picking and the decisions that affect this operation. Chapter 2 is devoted to the study of an order picking problem in narrow-aisle facilities with constraining material handling equipment. In Chapter 3, we study an order picking problem in which product characteristics strongly constrain the picking routes. Chapter 4 presents a variant of the reassignment problem together with a strong, novel formulation to solve it. The conclusion follows and summarizes the main contributions of this thesis. Key words: order picking, warehousing, routing problems, exact and heuristic algorithms, product reassignment, material handling.
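    As a hedged illustration of the routing view of order picking (a simple stand-in for the thesis's exact and heuristic algorithms, which are aisle-aware and product-constrained), a nearest-neighbour tour over pick locations with rectilinear distances:

```python
# Greedy pick-route heuristic: from the current position, always walk to the
# closest remaining pick location (Manhattan distance, as in a grid layout).
def nearest_neighbour_route(depot, picks):
    route, pos, remaining = [], depot, list(picks)
    while remaining:
        nxt = min(remaining,
                  key=lambda p: abs(p[0] - pos[0]) + abs(p[1] - pos[1]))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

# (x, y) coordinates of pick locations in a toy warehouse grid
print(nearest_neighbour_route((0, 0), [(4, 2), (1, 5), (3, 1), (0, 4)]))
```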

    System design for periodic data production management

    This research project introduces a new type of information system, the periodic data production management system, and proposes several innovative system design concepts for this application area. Periodic data production systems are common in the information industry. They process large quantities of data in order to produce statistical reports at predefined intervals. The workflow of such a system is typically distributed worldwide and consists of several semi-computerized production steps which transform data packages. For example, market research companies apply these systems in order to sell marketing information over specified timelines. A lack of concepts for IT-aided management in this area has been identified. This thesis clearly defines the complex requirements of periodic data production management systems. It is shown that these systems can be defined as IT support for planning, monitoring and controlling periodic data production processes. Their significant advantage is that the information industry is enabled to increase production performance and to ease (and speed up) the identification of production progress as well as of the achievable optimisation potential, in order to pursue rationalisation goals. In addition, this thesis provides solutions for the generic problem of how to introduce such a management system on top of an unchangeable periodic data production system. Two promising system designs for periodic data production management are derived, analysed and compared in order to gain knowledge about concepts appropriate to this application area. Production planning systems are the metaphor model used for the closely coupled approach; the metaphor model for the loosely coupled approach is project management. The latter approach is prototyped as an application in the market research industry and used as a case study. The evaluation results are real-world experiences which demonstrate the extraordinary efficiency of systems based on the loosely coupled approach; a scenario-based evaluation accurately demonstrates the many improvements achievable with this approach. The main results are that production planning and process quality can be vitally improved. Finally, among other propositions, it is suggested that future work concentrate on the development of product lines for periodic data production management systems in order to increase their reuse.
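    A hypothetical sketch of the loosely coupled, project-management-style view (all names and structures below are ours, not the prototype's): each periodic production cycle is treated as a small project of dependent steps whose progress can be monitored on top of the unchanged production system.

```python
# Monitoring layer over an unchangeable production system: model each cycle
# as dependent steps and compute which steps are currently ready to run.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    depends_on: list = field(default_factory=list)
    done: bool = False

def ready_steps(steps):
    """Steps whose prerequisites are complete -- the progress view a
    production manager would monitor for each cycle."""
    by_name = {s.name: s for s in steps}
    return [s for s in steps
            if not s.done and all(by_name[d].done for d in s.depends_on)]

cycle = [Step("receive data"), Step("validate", ["receive data"]),
         Step("aggregate", ["validate"]), Step("report", ["aggregate"])]
cycle[0].done = True
print([s.name for s in ready_steps(cycle)])  # -> ['validate']
```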

    Density-Aware Linear Algebra in a Column-Oriented In-Memory Database System

    Linear algebra operations appear in nearly every application in advanced analytics, machine learning, and various science domains. To this day, many data analysts and scientists tend to use statistics software packages or hand-crafted solutions for their analysis. In the era of data deluge, however, external statistics packages and custom analysis programs that often run on single workstations are incapable of keeping up with the vast increase in data volume and size. In particular, there is an increasing demand from scientists for large-scale data manipulation, orchestration, and advanced data management capabilities, which are among the key features of a mature relational database management system (DBMS). With the rise of main-memory database systems, it has now become feasible to also consider applications that build on linear algebra. This thesis presents a deep integration of linear algebra functionality into an in-memory column-oriented database system. In particular, this work shows that it has become feasible to execute linear algebra queries on large data sets directly in a DBMS-integrated engine (LAPEG), without the need to transfer data and without being restricted by hard disk latencies. From the various application examples cited in this work, we deduce a number of requirements that are relevant for a database system that includes linear algebra functionality: besides the deep integration of matrices and numerical algorithms, these include optimization of expressions, transparent matrix handling, scalability and data parallelism, and data manipulation capabilities. These requirements are addressed by our linear algebra engine. In particular, the core contributions of this thesis are the following. First, we show that the columnar storage layer of an in-memory DBMS permits an easy adoption of efficient sparse matrix data types and algorithms. Furthermore, we show that the execution of linear algebra expressions significantly benefits from several techniques inspired by database technology; in a novel way, we implemented several of these optimization strategies in LAPEG’s optimizer (SpMachO), which uses an advanced density estimation method (SpProdest) to predict the density of intermediate result matrices. Moreover, we present an adaptive matrix data type, AT Matrix, that obviates the need for scientists to select appropriate matrix representations; its tiled substructure is exploited by our matrix multiplication to saturate the sockets of a multicore main-memory platform, reaching a speed-up of up to 6x compared to alternative approaches. Finally, a major part of this thesis is devoted to data manipulation: we propose a matrix manipulation API and present different mutable matrix types to enable fast inserts and deletes. We conclude that our linear algebra engine is well suited to processing dynamic, large matrix workloads in an optimized way. In particular, the DBMS-integrated LAPEG fills the linear algebra gap and makes columnar in-memory DBMSs attractive as efficient, scalable ad-hoc analysis platforms for scientists.
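    As a hedged illustration of why density prediction matters for intermediate results (the abstract does not disclose SpProdest's actual method; this is the standard independence-based estimate): if nonzeros are spread uniformly at random, the expected density of C = A·B with inner dimension k is 1 - (1 - d_A·d_B)^k, which an optimizer can use to pick sparse or dense representations for intermediates.

```python
# Predict the density of a sparse matrix product and compare to reality.
import numpy as np
import scipy.sparse as sp

def predicted_density(d_a, d_b, k):
    """P(c_ij != 0) ~= 1 - (1 - d_a * d_b)**k under uniform-random nonzeros."""
    return 1.0 - (1.0 - d_a * d_b) ** k

m = k = n = 1000
d_a, d_b = 0.01, 0.02
A = sp.random(m, k, density=d_a, format="csr", random_state=0)
B = sp.random(k, n, density=d_b, format="csr", random_state=1)
C = A @ B

print("predicted density:", predicted_density(d_a, d_b, k))  # ~0.181
print("actual density:   ", C.nnz / (m * n))
```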

    Helmholtz Portfolio Theme Large-Scale Data Management and Analysis (LSDMA)

    The Helmholtz Association funded the "Large-Scale Data Management and Analysis" (LSDMA) portfolio theme from 2012 to 2016. Four Helmholtz centres, six universities and one further research institution in Germany joined forces to enable data-intensive science by optimising data life cycles in selected scientific communities. In the Data Life Cycle Labs, data experts performed joint R&D together with the scientific communities, while the Data Services Integration Team focused on generic solutions applied by several communities.