
    Robotized sorting systems: Large-scale scheduling under real-time conditions with limited lookahead

    A major drawback of most automated warehousing solutions is that fixedly installed hardware makes them inflexible and hardly scalable. In recent years, numerous robotized warehousing solutions have been introduced that adapt more readily to varying capacity situations. In this paper, we consider robotized sorting systems in which autonomous mobile robots load individual pieces of stock keeping units (SKUs) at a loading station, drive to the collection points temporarily associated with the orders demanding the pieces, and autonomously release them, e.g., by tilting a tray mounted on top of each robot. In these systems, a huge number of products approaches the loading station with an interarrival time of very few seconds, so that we face a very challenging scheduling environment in which the following operational decisions must be taken in real time. First, since pieces of the same SKU are interchangeable among orders demanding that SKU, we have to assign pieces to suitable orders. Furthermore, each order has to be temporarily assigned to a collection point. Finally, we have to match robots and transport jobs, in which pieces have to be delivered from the loading station to the selected collection points. These interdependent decisions become even more involved since we (typically) do not possess complete knowledge of the arrival sequence but merely have a restricted lookahead of the next approaching products. In this paper, we show that even in such a fierce environment, sophisticated optimization, based on a novel two-step multiple-scenario approach applied under real-time conditions, can be a serviceable tool to significantly improve the sortation throughput.
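    The interlocking real-time decisions, pieces to orders and orders to collection points, can be illustrated with a toy greedy heuristic. This is a minimal sketch with invented data structures, not the paper's two-step multiple-scenario optimization:

```python
from collections import deque

def assign_pieces(arrivals, orders, collection_points):
    """Greedily match arriving pieces to orders and orders to collection points.

    arrivals:          iterable of SKU ids in arrival order (the limited lookahead).
    orders:            dict order_id -> {sku: remaining demand} (mutated in place).
    collection_points: list of point ids; an order occupies one until fulfilled.
    Returns a list of (sku, order_id, point_id) transport jobs.
    """
    free_points = deque(collection_points)
    active = {}  # order_id -> occupied collection point
    jobs = []
    for sku in arrivals:
        # Pick any open order that still demands this SKU.
        for oid, demand in orders.items():
            if demand.get(sku, 0) > 0:
                if oid not in active:
                    if not free_points:
                        continue  # no collection point free: try another order
                    active[oid] = free_points.popleft()
                demand[sku] -= 1
                jobs.append((sku, oid, active[oid]))
                # Release the collection point once the order is complete.
                if all(v == 0 for v in demand.values()):
                    free_points.append(active.pop(oid))
                break
    return jobs
```

    A real implementation would score candidate orders across lookahead scenarios instead of taking the first feasible match.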

    Design of a grid operation management method for medium- and low-voltage grids considering bidirectional vertical power flow

    Global trends show a worldwide growing energy demand and increasingly scarce fossil resources. To guarantee a secure and reliable electrical energy supply in the future, sustainable electrical energy generation is becoming ever more important. This challenge is being met with a steadily rising share of renewable energies. As a consequence, particularly in Germany, the shutdown of conventional power plants is shifting a growing share of electricity generation into the medium- and low-voltage levels. The resulting growing importance of the distribution grids for the energy supply requires a restructuring of grid operation management. One possible solution is the operation management method designed in this thesis, whose focus, owing to temporarily required reverse feed-in into the high-voltage level, lies on the bidirectional vertical power flow between the 20 kV and 110 kV voltage levels. Among other things, concepts for the control of storage units and electrical generation plants are presented. Numerical case studies demonstrated that the designed operation management method can realize predefined power flows between the 20 kV and 110 kV voltage levels without creating grid-critical states.

    Vertical grid operation: an approach to coordinating grid operation instances across grid levels

    The increasing shutdown and decommissioning of conventional power plants in Germany results in a deficit of controllable generation. Compensating for this deficit requires methods that exploit new operational degrees of freedom, such as active grid users in the distribution grids. The dissertation addresses the question of how exactly these new operational degrees of freedom in the distribution grid levels can be made usable without their activation having a negative effect on their connecting grids. To this end, "Vertical Grid Operation" and its novel coordination interface between grid operators, the PQu(t)-Capability, which describes the admissible solution space of all possible operating points at a grid interconnection point, are proposed. Furthermore, a method for computing feeder-specific operating-point adjustments is designed. The suitability of the approach and its methods is validated by means of numerical case studies. In Germany, more and more power plants are being shut down or even decommissioned. Consequently, there is a loss of controllable power generation. System operation therefore requires new degrees of freedom, such as controllable loads, decentralized storage units, or electric vehicles located at the DSOs' levels. This thesis investigates how these new operational degrees of freedom of the distribution system can be utilized without a negative impact on the connecting grids. This requires active coordination between system operators of different grid levels. Hence, three possible system operation approaches are discussed, of which the proposed Vertical System Operation approach is preferred. The realization of this concept requires a new kind of coordination interface between the system operators. Therefore, the PQu(t)-Capability is introduced; it is not limited to vertical coordination and can also be used for the horizontal connection of power systems.
In addition, this work presents a method to determine feeder-specific operating-point adjustments. All proposed methods consider the dedicated capability curve of each unit, the PQu(t)-Capability of possible subordinated grids and, depending on the grid level, the N-1 criterion. The permissibility of the proposed approach and its methods is validated in numerical case studies. For this purpose, different grids and voltage levels are analyzed: existing reference grids are chosen and adapted to the requirements for simulating the vertical coordination. Besides the distribution and dimensioning of the loads, RES, and the potentials of operational degrees of freedom, possible subordinated grids have to be considered. It is shown that the Vertical System Operation approach is capable of coordinating system operators of different voltage levels. Furthermore, it enables the utilization of the distribution levels' operational degrees of freedom while avoiding violations of system security.
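    The PQu(t)-Capability described above is a time-varying admissible region of operating points at a grid interconnection point. As a strongly simplified, hypothetical illustration (the thesis's capability is a general solution space, not a box), one can model it as rectangular P/Q limits per time step and check a requested setpoint against them:

```python
def feasible_setpoint(t, p_mw, q_mvar, capability):
    """Check a requested (P, Q) operating point against a toy capability region.

    capability: dict mapping time step t -> (p_min, p_max, q_min, q_max)
                in MW / Mvar. Rectangular limits are an illustrative
                simplification of the PQu(t)-Capability's solution space.
    """
    p_min, p_max, q_min, q_max = capability[t]
    return p_min <= p_mw <= p_max and q_min <= q_mvar <= q_max
```

    A coordinating operator at the superordinate level would query such a region from each subordinate grid before dispatching a vertical power-flow setpoint.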

    A unifying perspective on protocol mediation: interoperability in the Future Internet

    Given the highly dynamic and extremely heterogeneous software systems composing the Future Internet, automatically achieving interoperability between software components, without modifying them, is more than simply desirable; it is quickly becoming a necessity. Although much work has been carried out on interoperability, existing solutions have not fully succeeded in keeping pace with the increasing complexity and heterogeneity of modern software, or in meeting the demands of runtime support. On the one hand, solutions at the application layer target higher automation and loose coupling through the synthesis of intermediary entities, mediators, that compensate for the differences between the interfaces of components and coordinate their behaviours, while assuming the use of the same middleware solution. On the other hand, solutions to interoperability across heterogeneous middleware technologies do not reconcile the differences between components at the application layer. In this paper we propose a unified approach for achieving interoperability between heterogeneous software components with compatible functionalities across the application and middleware layers. First, we provide a solution to automatically generate cross-layer parsers and composers that abstract network messages into a uniform representation independent of the middleware used. Second, these generated parsers and composers are integrated within a mediation framework to support the deployment of the mediators synthesised at the application layer. More specifically, the generated parser analyses the network messages received from one component and transforms them into a representation that can be understood by the application-level mediator. Then, the application-level mediator performs the necessary data conversion and behavioural coordination.
Finally, the composer transforms the representation produced by the application-level mediator into network messages that can be sent to the other component. The resulting unified mediation framework reconciles the differences between software components from the application layer down to the middleware layer. We validate our approach through a case study in the area of conference management.
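    The parse, mediate, compose chain can be sketched as follows. The wire format, field names, and unit conversion here are invented for illustration; in the proposed approach the parsers and composers are generated automatically rather than hand-written:

```python
import json

def parse(raw: bytes) -> dict:
    """Abstract a network message into a uniform, middleware-independent dict.
    JSON stands in for whatever middleware encoding the real parser handles."""
    return json.loads(raw.decode("utf-8"))

def mediate(msg: dict) -> dict:
    """Application-level mediation: convert data between the two components'
    vocabularies (here, a hypothetical Celsius-to-Fahrenheit rename)."""
    return {"temperatureF": msg["temp_c"] * 9 / 5 + 32}

def compose(msg: dict) -> bytes:
    """Re-serialize the mediated representation for the target component."""
    return json.dumps(msg).encode("utf-8")

def forward(raw: bytes) -> bytes:
    """End-to-end pipeline: parse -> mediate -> compose."""
    return compose(mediate(parse(raw)))
```

    Behavioural coordination (reordering or merging messages) would live in `mediate` as well; only the stateless data-conversion case is shown.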

    Study of the Effect of Mold Corner Shape on the Initial Solidification Behavior of Molten Steel Using Mold Simulator

    The chamfered mold with a typical corner shape (angle of 45 deg between the chamfered face and the hot face) was applied in the mold simulator study in this paper, and the results were compared with previous results from a well-developed right-angle mold simulator system. The results suggest that the designed chamfered structure increases the thermal resistance and weakens the two-dimensional heat transfer around the mold corner, promoting homogeneity of the mold surface temperatures and heat fluxes. In addition, the chamfered structure can decrease the fluctuation of the steel level and the liquid slag flow around the meniscus at the mold corner. The cooling intensities at different longitudinal sections of the shell are close to each other due to the similar time-average solidification factors, which are 2.392 mm/s^1/2 (section A-A: chamfered center), 2.372 mm/s^1/2 (section B-B: 135 deg corner), and 2.380 mm/s^1/2 (section D-D: face). For the same oscillation mark (OM), the heights of the OM roots at different positions (profile L1 (face), profile L2 (135 deg corner), and profile L3 (chamfered center)) are very close to each other. The average height difference (HD) between two OM roots is 0.22 mm for L1 and L2, and 0.38 mm for L2 and L3. Finally, with the help of metallographic examination, the shapes of different hooks are also discussed.
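    The solidification factors quoted above follow the standard parabolic shell-growth law S = k * sqrt(t). A short sketch using the reported k values (the 25 s dwell time is an arbitrary example, not a figure from the paper):

```python
import math

def shell_thickness(k_mm_per_sqrt_s, t_s):
    """Parabolic solidification law S = k * sqrt(t): shell thickness in mm
    after t seconds, given a solidification factor k in mm/s^1/2."""
    return k_mm_per_sqrt_s * math.sqrt(t_s)

# Time-average solidification factors reported for the three sections.
factors = {
    "A-A chamfered center": 2.392,
    "B-B 135 deg corner":   2.372,
    "D-D face":             2.380,
}

# Example: shell thickness after an illustrative 25 s of solidification.
thickness_at_25s = {sec: shell_thickness(k, 25.0) for sec, k in factors.items()}
```

    The near-identical k values translate into shell thicknesses within about 0.1 mm of each other, which is the paper's point about uniform cooling intensity.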

    [Stammbuch Andreas Schwerdfeger] / Andreas Schwerdfeger

    [Stammbuch Andreas Schwerdfeger] / Andreas Schwerdfeger: binding (1); description / entry by the owner (16); register of all names contained herein (17); entries fol. 1-19 (21); entries fol. 22-40 (50); entries fol. 41-60 (81); entries fol. 61-74 (101

    Sediment feeding to reduce bed degradation of the Rhine river near the German/Dutch border

    The river Rhine is one of the most important rivers in Europe: a 1320 km long lifeline for about 50 million people from 9 different countries who live in its 185,300 km2 catchment area. Their economy is heavily dependent on the opportunities the Rhine provides. The river has a long history of human intervention dating back to the Roman era. Nowadays, the development of the river bed in the reach between Emmerich (Germany, RKm 849), Pannerdensche Kop (Netherlands, RKm 867.5) and the upper Waal is characterized by general degradation, reflected in a lowering of both water and bed levels of about 2 to 2.5 cm/year. This degradation may be due to the combination of several major human interventions during the last two centuries, such as land use changes, dredging works, river training works, navigation, and subsidence due to coal mining. The related drop in water levels will reduce the water depth for navigation, lower the adjacent groundwater table, affect the water level at intakes, and reduce the navigation depths at locks. Changes in the water distribution at the Rhine bifurcation (Pannerdensche Kop) might also occur, and increased maintenance costs could be required for structures such as groynes, bridges, and embankments. This thesis explores one of the possible solutions to counteract this bed degradation in the reach near the German-Dutch border, as considered by the Public Works Department of The Netherlands: feeding sediment to the river to compensate for the degradation. Due to the strong gradation of the bed of the Rhine, a 1D graded-sediment model based on an active-layer approach was used to study the effects of the proposed solution.
The main goal of this thesis was to deal with graded-sediment behaviour via state-of-the-art graded-sediment models, and to study the impact of feeding different sediment mixtures to the river to control degradation. The results show that artificial bed-load supply appears to be a sustainable and flexible solution, and that there is some freedom in the choice of the mixture to be fed to the river. Interpretation of the modeling results was complicated by a phenomenon of "self-development" of the model, which ultimately affects the quantification of changes in composition. This "self-development" is due to the fact that it is not possible to schematize the essential characteristics of the river in the studied reach (a strong reduction in slope and particle size, in combination with strongly graded sediment) in a model that is in equilibrium before feeding starts. It was also observed that the thickness of the transport layer in the active-layer model is very important in relation to the celerity of the morphological process and the progress of the changes in bed composition caused by the combination of feeding and the "self-development". In spite of these complications, it could be shown that sediment feeding is a very attractive solution, both flexible and economically feasible. Still, it would be worthwhile to compare feeding further with other possible measures, such as the implementation of fixed layers in the Rhine. Based on the results of this study, it is recommended that a medium-term feeding project be started in parallel with field and other studies for two reasons: (i) to monitor the impact on bed degradation and bed composition and, when needed, to adjust the feeding strategy, and (ii) to feed and improve the currently existing graded-sediment models.
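    The morphological backbone of such a 1D model is the Exner sediment-balance equation, (1 - p) * d(eta)/dt = -d(q_s)/dx. A bare-bones single-fraction sketch of one explicit update step is shown below; the thesis used a graded-sediment active-layer model with multiple size fractions, which this illustration omits, and all variable names are chosen for the example:

```python
def exner_update(eta, q_s, dx, dt, porosity=0.4):
    """One explicit step of the 1-D Exner equation
        (1 - p) * d(eta)/dt = -d(q_s)/dx
    using an upwind difference for the flux gradient.

    eta: bed levels per node [m], q_s: bed-load flux per node [m^2/s],
    dx:  node spacing [m], dt: time step [s], porosity p of the bed.
    Returns the new bed levels; the upstream node (index 0) is kept fixed,
    which is where a sediment-feeding boundary condition would act.
    """
    new = eta[:]
    for i in range(1, len(eta)):
        dqdx = (q_s[i] - q_s[i - 1]) / dx
        new[i] = eta[i] - dt * dqdx / (1.0 - porosity)
    return new
```

    Feeding enters through the upstream flux: raising q_s at the boundary makes the flux gradient less negative downstream, which counteracts the degradation.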

    Context-aware scanning and determinism-preserving grammar composition, in theory and practice

    University of Minnesota Ph.D. dissertation. July 2010. Major: Computer Science. Advisor: Eric Van Wyk. 1 computer file (PDF); xx, 296 pages, appendix A. This thesis documents several new developments in the theory of parsing, as well as the practical value of their implementation in the Copper parser generator. The most widely used apparatus for syntactic analysis of programming languages consists of a scanner based on deterministic finite automata, built from a set of regular expressions called the lexical syntax, and a separate parser, operating on the output of this scanner, built from a context-free grammar and utilizing the LALR(1) algorithm. While the LALR(1) algorithm has the advantage of guaranteeing a non-ambiguous parse, and keeping the scanner and parser separate makes the compilation process cleaner and more modular, it is also a brittle approach. The class of grammars that can be parsed with an LALR(1) parser is not closed under grammar composition, and small changes to an LALR(1) grammar can remove it from the LALR(1) class. Also, the separation of scanner and parser prevents any organized use of parser context to resolve ambiguities in the lexical syntax. One area in which both of these drawbacks pose a particular problem is the parsing of embedded and extensible languages. In particular, it forms one of the last major impediments to the development of an extensible compiler in which language extensions are imported and composed by the end user (programmer) in a manner analogous to the way libraries are presently imported. This is due not only to the LALR(1) grammar class not being closed under composition, but also to the very real possibility that the lexical syntax of two different extensions will clash, making it impossible to construct a scanner without extensive manual resolution of ambiguities, if at all.
This thesis presents three innovations that are a large step towards eliminating parsing as an Achilles’ heel in this application. Firstly, it describes a new algorithm of scanning called context-aware scanning, in which the scanner at each scan is made aware of what sorts of tokens are recognized as valid by the parser at that point. By allowing the use of parser context in the scanner to disambiguate, context-aware scanning makes the specification of embedded languages much simpler — instead of specifying a scanner that must explicitly switch “modes” to scan on the different embedded languages, one simply compiles a context-aware scanner from a single lexical specification, which has implicit “modes” to scan properly on each embedded language. Similarly, in an extensible language, this puts a degree of separation between the lexical syntax of different language extensions, so that any clashes of this sort will not be an issue. Secondly, the thesis describes an analysis that operates on grammar extensions of a certain form, and can recognize their membership in a class of extensions that can be composed with each other and still produce a deterministic parser—enabling end-users to compose extensions written by different language developers with this guarantee of determinism. The analysis is made practical by context-aware scanning, which ensures a lack of lexical issues to go along with the lack of syntactic nondeterminism. It is this analysis — the first of its kind — that is the largest step toward realizing the sort of extensible compiler described above, as the writer of each extension can test it independently using the analysis and thus resolve any lexical or syntactic issues with the extension before the end user ever sees it. 
    Thirdly, the thesis describes a corollary of this analysis, which allows extensions that have passed the analysis to be distributed in parse-table form and then composed on the fly by end users, with the same guarantee of determinism. Besides expediting the operation of composition, this also enables the use of the analysis in situations where the writer of a language or language extension does not want to release its grammar to the public. Finally, the thesis discusses how these approaches have been implemented and made practical in Copper, including runtime tests and implementations and analyses of real-world grammars and extensions.
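    The core idea of context-aware scanning can be sketched in a few lines: the parser passes the scanner the set of token types valid in its current state, and the scanner matches only those, so the same lexeme can be tokenized differently in different parser states. The token names and patterns below are invented for illustration, not taken from Copper:

```python
import re

# Hypothetical lexical syntax: "table" is a keyword in an embedded SQL-like
# extension but an ordinary identifier in the host language.
TOKEN_PATTERNS = {
    "KW_TABLE": r"table\b",   # keyword, only valid inside the extension
    "IDENT":    r"[a-z]+",    # plain identifier in the host language
}

def scan(text, pos, valid_tokens):
    """Return (token_name, lexeme) for the longest match among the token
    types the parser deems valid at this point (earlier entries in
    valid_tokens win ties), or None if nothing matches."""
    best = None
    for name in valid_tokens:
        m = re.match(TOKEN_PATTERNS[name], text[pos:])
        if m and (best is None or len(m.group()) > len(best[1])):
            best = (name, m.group())
    return best
```

    Because the valid-token set changes with the parser state, the scanner effectively has implicit "modes" without the lexical specification ever declaring them, which is what keeps composed extensions from clashing lexically.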

    A description of the city of Vienna from the time of Emperor Charles VI

    by Prof. Dr. Josef Schwerdfege