567 research outputs found

    How to escape contagion in the interest rate trap


    Linear Programming for a Cutting Problem in the Wood Processing Industry – A Case Study

    In this paper the authors present a case study from the wood-processing industry. It focuses on a cutting process in which material from stock is cut down in order to provide the items required by the customers in the desired qualities, sizes, and quantities. In particular, two aspects make this cutting process special. Firstly, the cutting process is strongly interdependent with a preceding handling process, which, consequently, cannot be planned independently. Secondly, if the trim loss is of a certain minimum size, it can be returned into stock and used as input to subsequent cutting processes. In order to reduce the cost of the cutting process, a decision support tool has been developed which incorporates a linear programming model as a central feature. The model is described in detail, and experience from the application of the tool is reported.
    Keywords: one-dimensional cutting, linear programming, wood-processing industry
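The abstract does not reproduce the paper's model, so as a minimal, hypothetical sketch of the kind of one-dimensional cutting LP involved (the stock length, item lengths, demands, and pattern set below are invented for illustration; the authors' actual model additionally handles qualities and reusable trim loss):

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: stock pieces of length 100 are cut into items of
# lengths 45, 36, and 31 to satisfy the demands below.
demands = np.array([40, 30, 30])
patterns = np.array([  # columns: cutting patterns, rows: item types
    [2, 1, 1, 0, 0, 0],   # e.g. column 0 cuts two 45-length items
    [0, 1, 0, 2, 0, 1],
    [0, 0, 1, 0, 3, 2],
])

cost = np.ones(patterns.shape[1])  # minimize the number of stock pieces used
# Demand covering: patterns @ x >= demands, written as -A x <= -b for linprog.
res = linprog(cost, A_ub=-patterns, b_ub=-demands)
print(res.x, res.fun)
```

The LP relaxation yields fractional pattern counts; in practice such a model is combined with rounding or column generation to obtain integral cutting plans.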

    Magnetic Domain Structure of La0.7Sr0.3MnO3 thin-films probed at variable temperature with Scanning Electron Microscopy with Polarization Analysis

    The domain configuration of 50 nm thick La0.7Sr0.3MnO3 films has been directly investigated using scanning electron microscopy with polarization analysis (SEMPA), with magnetic contrast obtained without the need for prior surface preparation. The large-scale domain structure reflects a primarily four-fold anisotropy with a small uniaxial component, consistent with magneto-optic Kerr effect measurements. We also determine the domain transition profile and find it to be in agreement with previous estimates of the domain wall width in this material. The temperature dependence of the image contrast is investigated and compared to superconducting quantum interference device magnetometry data. A faster decrease in the SEMPA contrast is revealed, which can be explained by the technique's extreme surface sensitivity, allowing us to selectively probe the surface spin polarization, which due to the double exchange mechanism exhibits a distinctly different temperature dependence than the bulk magnetization.

    Computation of eigenvectors of block tridiagonal matrices based on twisted factorizations

    Computing the eigenvalues and eigenvectors of a band or block tridiagonal matrix is an important aspect of various applications in Scientific Computing. Most existing algorithms for computing eigenvectors of a band matrix rely on a prior tridiagonalization of the matrix. While the eigenvalues and eigenvectors of tridiagonal matrices can be computed very efficiently, the preceding tridiagonalization process can be relatively costly. Moreover, many eigensolvers require additional measures to ensure the orthogonality of the computed eigenvectors, which constitutes a significant computational expense. In this thesis we explore a new method for computing eigenvectors of block tridiagonal matrices based on twisted factorizations. We describe the basic principles of an algorithm for computing block twisted factorizations of block tridiagonal matrices. We also show some interesting properties of these twisted factorizations and investigate the relation of the block where the factorizations meet to an eigenvector of the block tridiagonal matrix. This relation can be exploited to compute the eigenvector very efficiently. Contrary to most conventional techniques, our algorithm for the determination of eigenvectors does not require a reduction of the matrix to tridiagonal form, and attempts to compute a good eigenvector approximation with only a single step of inverse iteration. This idea is based on finding a starting vector for inverse iteration which minimizes the residual of the resulting eigenpair. One of the main contributions of this thesis is the investigation and evaluation of different strategies for the selection of a suitable starting vector. Furthermore, we present experimental data on the accuracy, orthogonality, and runtime behavior of an implementation of the new algorithm, and compare these results with existing methods. Our results show that the new algorithm returns eigenvectors with very low residuals while being more efficient in terms of computational cost for large matrices and/or small bandwidths. Due to its structure and inherent parallelization potential, the new algorithm is also well suited for exploiting modern and future hardware, which is characterized by a high degree of concurrency.
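The core idea of the abstract, namely choosing, among candidate starting vectors, the one whose single inverse-iteration step minimizes the eigenpair residual, can be sketched as follows. This is only an illustration: a dense shifted solve stands in for the thesis's block twisted factorizations, the test matrix is invented, and only canonical basis vectors are tried as starting vectors.

```python
import numpy as np

# Invented 12x12 symmetric block tridiagonal test matrix (4 blocks of size 3)
rng = np.random.default_rng(0)
n, b = 12, 3
A = np.zeros((n, n))
for i in range(0, n, b):
    D = rng.standard_normal((b, b))
    A[i:i+b, i:i+b] = D + D.T            # symmetric diagonal block
    if i + b < n:
        B = rng.standard_normal((b, b))
        A[i:i+b, i+b:i+2*b] = B          # off-diagonal block
        A[i+b:i+2*b, i:i+b] = B.T

lam = np.linalg.eigvalsh(A)[0]           # target eigenvalue (smallest)
shifted = A - (lam + 1e-8) * np.eye(n)   # tiny offset keeps the solve well posed

# Try each canonical basis vector e_k as the starting vector and keep the one
# whose single inverse-iteration step yields the smallest residual ||Av - lam v||.
best_v, best_res = None, np.inf
for k in range(n):
    v = np.linalg.solve(shifted, np.eye(n)[k])
    v /= np.linalg.norm(v)
    res = np.linalg.norm(A @ v - lam * v)
    if res < best_res:
        best_res, best_v = res, v
print(best_res)
```

The point of the thesis's starting-vector strategies is precisely to avoid this brute-force loop and the dense solve, exploiting the twisted-factorization structure instead.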

    Methodological studies concerning free energy simulations

    The determination of free energy differences is fundamental to the study of several processes such as the binding of drugs to proteins, the paths of enzymatic reactions, or the solubility of chemical compounds. By employing molecular dynamics simulations, free energy calculations are capable of computing such free energy differences with high accuracy. However, this accuracy comes at excessive computational cost, often requiring days or weeks to obtain converged results. Thus, considerable effort still has to be invested in the optimization of such techniques. The first half of this dissertation focuses on the application of Bennett's Acceptance Ratio method (BAR) to problems where standard methods to compute free energy differences are not feasible, highlighting the unique versatility of BAR. Furthermore, we demonstrate how to extend BAR in order to make use of non-Boltzmann probability distributions in biased simulations. We refer to this method as Non-Boltzmann Bennett (NBB). The NBB method is illustrated by several examples that demonstrate how a creative choice of the biased state can also improve the efficiency of free energy simulations. The second half is concerned with the application of BAR and NBB to the study of hydration free energies. Especially in protein folding or ligand binding, (de)solvation penalties can contribute considerably to the free energy difference. Unfortunately, hydration free energies of amino acids cannot be measured experimentally; thus, approximations based on side chain analog data are used instead. However, the assumption that side chain analogs are representative of full amino acids has never been thoroughly tested. We therefore computed both relative and absolute solvation free energies of amino acids and side chain analogs, showing that the results can deviate considerably due to two effects: solvent exclusion and self-solvation. While the former accounts for the reduction of solute-solvent interactions due to steric occlusion, the latter arises from interactions between the backbone and the polar functional groups of the side chains. Since several techniques in computational chemistry do not account for self-solvation, this finding has severe consequences. We illustrate this for several implicit solvent models and briefly discuss the implications of our results for the field of protein science.
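Bennett's acceptance ratio itself reduces to a one-dimensional root-finding problem. The following sketch, with synthetic Gaussian work data satisfying the Crooks relation (the numbers are invented, not taken from the dissertation), recovers a known free energy difference:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
dF_true, sigma, n = 2.0, 1.0, 50_000   # free energy difference in units of kT
# Crooks-consistent synthetic work values (beta = 1): forward works are
# Gaussian around dF + sigma^2/2, reverse works around -dF + sigma^2/2.
w_F = rng.normal(dF_true + sigma**2 / 2, sigma, n)
w_R = rng.normal(-dF_true + sigma**2 / 2, sigma, n)

fermi = lambda x: 1.0 / (1.0 + np.exp(x))

def bar_residual(dF):
    # Bennett's self-consistent condition for equal sample sizes:
    # sum_i f(w_F_i - dF) = sum_j f(w_R_j + dF)
    return fermi(w_F - dF).sum() - fermi(w_R + dF).sum()

dF_bar = brentq(bar_residual, -10.0, 20.0)  # bracket chosen generously
print(dF_bar)
```

The estimate converges to dF_true as the sample size grows; with poor phase-space overlap (large sigma) the variance of the estimator deteriorates, which is one motivation for biased-state constructions such as NBB.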

    Examining trade-offs between social, psychological, and energy potential of urban form

    Urban planners are often challenged with the task of developing design solutions which must meet multiple, and often contradictory, criteria. In this paper, we investigated the trade-offs between the social, psychological, and energy potential of the fundamental elements of urban form: the street network and the building massing. Since formal methods to evaluate urban form from the psychological and social point of view are not readily available, we developed a methodological framework to quantify these criteria as the first contribution of this paper. To evaluate the psychological potential, we conducted a three-tiered empirical study, starting from real-world environments and then abstracting them to virtual environments. In each context, the implicit (physiological) response and explicit (subjective) response of pedestrians were measured. To quantify the social potential, we developed a street-network centrality-based measure of social accessibility. For the energy potential, we created an energy model to analyze the impact of pure geometric form on the energy demand of the building stock. The second contribution of this work is a method to identify distinct clusters of urban form and, for each, explore the trade-offs between the selected design criteria. We applied this method to two case studies, identifying nine types of urban form and their respective potential trade-offs, which are directly applicable for the assessment of strategic decisions regarding urban form during the early planning stages.
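The abstract does not specify which centrality measure underlies the social-accessibility score; as one minimal, hypothetical example of a street-network centrality computation, here is closeness centrality on an invented 3x3 grid of intersections:

```python
from collections import deque

def closeness(adj):
    """Closeness centrality on an unweighted graph given as an adjacency dict."""
    out = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                      # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        # (reachable nodes - 1) / sum of shortest-path distances
        out[s] = (len(dist) - 1) / sum(dist.values()) if len(dist) > 1 else 0.0
    return out

# Invented 3x3 street grid: intersections as nodes, street segments as edges.
grid = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < 3 and 0 <= j + dj < 3]
        for i in range(3) for j in range(3)}
cent = closeness(grid)
print(max(cent, key=cent.get))
```

As expected, the central intersection scores highest; on a real street network the same computation would run over the city's intersection graph.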

    Backcasting and a new way of command in computational design: Proceedings

    Analysis and simulation methods are commonly used only to evaluate finished designs and to prove their quality, yet the real potential of such methods lies in leading or controlling a design process from the very beginning. We therefore introduce a design method that moves away from a “what-if” forecasting philosophy and increases the focus on backcasting approaches. We use the power of computation by combining sophisticated methods to generate designs with analysis methods, closing the gap between analysis and synthesis of designs. For the development of future-oriented computational design support we need to be aware of the human designer's role: a productive combination of the excellence of human cognition with the power of modern computing technology is needed. We call this approach “cognitive design computing”. The computational part aims to mimic the way a designer's brain works by combining state-of-the-art optimization and machine learning approaches with available simulation methods. The cognition part respects the complex nature of design problems through the provision of models for human-computer interaction, meaning that a design problem is distributed between computer and designer. In the context of the conference slogan “back to command”, we ask how we may imagine the command over a cognitive design computing system. We expect that designers will need to cede control of some parts of the design process to machines, but in exchange they will gain a new, powerful command over complex computing processes. This means that designers have to explore the potential of their role as commanders of partially automated design processes. In this contribution we describe an approach for the development of a future cognitive design computing system with a focus on urban design issues.
    The aim of this system is to enable an urban planner to treat a planning problem as a backcasting problem by defining what performance a design solution should achieve and to automatically query or generate a set of best possible solutions. This kind of computational planning process offers proof that the design meets the original, explicitly defined requirements. A key way in which digital tools can support designers is by generating design proposals. Evolutionary multi-criteria optimization methods allow us to explore a multi-dimensional design space and provide a basis for the designer to evaluate contradictory requirements: a task urban planners face frequently. We also reflect on why designers will hand more and more control to machines. To this end, we take first steps towards learning, by means of machine learning methods, how designers use computational design support systems in combination with manual design strategies to deal with urban design problems. By observing how designers work, it is possible to derive more complex artificial solution strategies that can help computers make better suggestions in the future.
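The evolutionary multi-criteria machinery is not spelled out in the abstract; its core building block, the Pareto-dominance filter that keeps only non-dominated design candidates, can be sketched as follows (the candidate objective vectors are invented, with both objectives minimized):

```python
def pareto_front(solutions):
    """Keep design candidates that no other candidate dominates
    (dominates = at least as good on every objective, better on one)."""
    front = []
    for s in solutions:
        dominated = any(
            all(o <= c for o, c in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Invented candidates scored on two conflicting criteria (lower is better),
# e.g. energy demand vs. loss of social accessibility.
candidates = [(3.0, 5.0), (2.0, 6.0), (4.0, 4.0), (3.5, 5.5)]
front = pareto_front(candidates)
print(front)
```

Evolutionary algorithms such as NSGA-II wrap a filter of this kind in selection and variation operators; the designer then chooses among the surviving trade-off solutions rather than receiving a single "optimum".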

    An Emission-free Modular Vehicle Concept for Inner Urban Transportation in Near Future Megacities (Urban MoVe-T)

    As the demands for fast and sustainable transportation in megacities are growing, new transportation concepts with fully electric vehicles for the distribution of goods are needed. Not only is the “last mile” important for the delivery of small goods; the distribution from the big logistics centres in the outskirts to the inner city also has a big influence on the volume of traffic. A new concept for a zero-emission vehicle, the “Urban MoVe-T” (Urban Modular Vehicle for Transportation), has been developed within the Institute of Vehicle Concepts to meet these new requirements. It consists of two parts: the “Ultra Mobility Tractor” (UMT) and different “Intelligent Versatile Modules” (IVMs). The concept of the one-axle, two-wheeled UMT is comparable to the well-known Segway© technology, but includes an additional driver cabin and an integrated safety cell for the driver. Moreover, the UMT has an innovative wheel-integrated bearing and chassis suspension system. This suspension system includes an innovative electric motor coupling, which keeps the unsprung masses low while keeping the unit compact. The vehicle's tractor module gives the driver high manoeuvrability, which is important for delivery in historic city centres like Vienna's. Coupled with the smallest Intelligent Versatile Module, the whole transportation vehicle has a total length of 2.5 m and a width of 1.7 m, but is capable of carrying two euro-pallets with a payload of 1 t each. The vehicle has the highest payload-to-vehicle-volume ratio to date. Its compact dimensions allow the new concept to be integrated into existing infrastructure such as the commuter railway system: it can be parked transverse to the direction of travel of the train and can later leave the train without changing its orientation. The IVMs used for carrying the load all have 360° turnable wheels for manoeuvring on the spot, meeting the requirements for use with the UMT. As all the IVMs have their own electric drive system, the load modules are able to drive independently when used on restricted sites such as distribution centres in the outskirts of the megacities. A bigger version of the IVM is able to carry six euro-pallets and measures 3.5 m in length and 2.5 m in width; in total, the tractor/load-module combination has a total length as well as a turning radius of less than 5 m, which is an unchallenged value for a vehicle in the transportation sector. The big IVMs can also be loaded autonomously onto the train for fast and effective transportation from the distribution centre into the city centre, where the UMTs can fetch the IVMs and bring them to their destinations. Thus a whole new, effective distribution system can be achieved, with a significant reduction of the overall traffic on the streets of big cities.

    Hochgeschwindigkeitszüge - Zug der Zukunft (High-Speed Trains - The Train of the Future)

    How can rail transport be made even safer, more efficient, and more environmentally friendly? What must the trains of tomorrow look like? These are the questions pursued by the rail transport researchers at DLR. In the 'Next Generation Train' project, DLR scientists are working to make the next generation of high-speed trains fast, safe, comfortable, and environmentally compatible.