24 research outputs found
Challenges Of Production Planning And Control For Powder Bed Fusion Of Metal With Laser Beam: A Perspective From The Industry
Due to technological advances, the Additive Manufacturing (AM) technology Powder Bed Fusion of Metal with Laser Beam (PBF-LB/M) is in widespread industrial use. PBF-LB/M offers the flexibility to generate different geometries in one build job independent of tools. Therefore, exploiting tool-dependent economies of scale is not required for efficient manufacturing of various complex geometries in small quantities. However, PBF-LB/M production lines are capital intensive and include post-processing steps. Thus, high utilization and low work in process must be ensured to minimize costs, but reaching high utilization contradicts minimizing work in process and throughput time. In production planning and control (PPC), the trade-off between those production logistics key performance indicators (KPIs) is optimized. The flexibility of PBF-LB/M to manufacture various geometries in one build job comes with challenges for PPC. In this work, those challenges are analysed to derive implications for improvement, based on interviews with experts from the industry. Results show a need for PBF-LB/M-specific PPC. The need increases with a company's degree of technological control of PBF-LB/M and with the volume of its product program. Unlike for Conventional Manufacturing (CM), nesting and scheduling cannot be addressed separately in PPC for PBF-LB/M. Thus, the optimization of production logistics KPIs is more complex due to more degrees of freedom. Combined with the typically shorter planning horizon for AM, this requires automated optimization software tools for combined nesting and scheduling. Currently, PPC that considers AM characteristics does not adequately address the CM steps in post-processing, even though they cause a large proportion of effort and time. Furthermore, a high degree of automation in parallel with heterogeneous manual tasks requires a small number of workers trained in a variety of skills
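To make the coupling between nesting and scheduling concrete, the following is a minimal sketch, not taken from the paper: parts are nested greedily into build jobs under a platform-area limit, and the jobs are then scheduled back to back on one machine. All part data, the platform size, and the time model are illustrative assumptions.

```python
# Minimal sketch of combined nesting and scheduling for PBF-LB/M.
# All part data, platform size, and time constants are hypothetical.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    area: float    # projected platform area in cm^2
    height: float  # build height in mm
    due: float     # due date in hours from now

PLATFORM_AREA = 625.0  # cm^2, assumed 250 x 250 mm platform
T_PER_MM = 0.05        # h of recoating per mm of job height (assumed)
T_PER_CM2 = 0.02       # h of exposure per cm^2 of nested area (assumed)

def job_duration(job):
    # Recoating time scales with the tallest part (the layer count is
    # shared by all parts in the job), exposure time with the total
    # nested area: this is what couples nesting to the schedule.
    return (T_PER_MM * max(p.height for p in job)
            + T_PER_CM2 * sum(p.area for p in job))

def nest_and_schedule(parts):
    """Earliest-due-date nesting into area-feasible jobs, then FIFO scheduling."""
    jobs, current, used = [], [], 0.0
    for p in sorted(parts, key=lambda p: p.due):
        if current and used + p.area > PLATFORM_AREA:
            jobs.append(current)
            current, used = [], 0.0
        current.append(p)
        used += p.area
    if current:
        jobs.append(current)
    t, lateness = 0.0, {}
    for job in jobs:
        t += job_duration(job)  # one machine, jobs run back to back
        for p in job:
            lateness[p.name] = t - p.due
    return jobs, lateness

parts = [Part("A", 200, 40, 24), Part("B", 300, 10, 24),
         Part("C", 250, 80, 48), Part("D", 150, 15, 30)]
jobs, lateness = nest_and_schedule(parts)
```

Even this toy version shows the extra degrees of freedom the abstract refers to: moving one tall part between jobs changes both job durations and hence every downstream completion time, which is why nesting and scheduling cannot be optimized separately.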
High-performance carbon fibers prepared by continuous stabilization and carbonization of electron beam-irradiated textile grade polyacrylonitrile fibers
The manufacturing of high-performance carbon fibers (CFs) from low-cost textile grade poly(acrylonitrile) (PAN) homo- and copolymers using continuous electron beam (EB) irradiation, stabilization, and carbonization on a kilogram scale is reported. The resulting CFs have tensile strengths of up to 3.1 ± 0.6 GPa and Young's moduli of up to 212 ± 9 GPa, exceeding standard grade CFs such as Toray T300. Additionally, the Weibull strength and modulus, the microstructure, and the morphology of these CFs are determined.
Lignin/poly(vinylpyrrolidone) multifilament fibers dry-spun from water as carbon fiber precursors
The preparation of lignin-based carbon fibers by dry spinning from aqueous solution followed by stabilization and continuous carbonization to endless yarns is reported. The influence of carbonization temperature and draw ratio on the morphology and mechanical properties of the final carbon fibers is investigated by single-fiber testing, wide-angle X-ray scattering, scanning electron microscopy, and Raman spectroscopy. A draw ratio of 5% (1.05) with a carbonization temperature of 1400 °C leads to the best mechanical properties. The resulting multifilament carbon fibers have an average diameter between 10 and 12 ÎŒm, an average tensile strength of 1.30 ± 0.32 GPa, a Young's modulus of 101 ± 18 GPa, and an elongation at break of 1.31 ± 0.23%. The maximum Weibull strength (σ0) is 1.04 GPa with a Weibull modulus (m) of 5.1. The use of a water-soluble system is economically advantageous; also, unlike melt-spun lignin fibers, the dry-spun precursor fibers can be thermally converted without any additional crosslinking step.
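Since both carbon fiber abstracts report Weibull parameters, here is a minimal sketch of how the Weibull strength σ0 and modulus m are commonly estimated from single-fiber tensile tests via the linearized empirical CDF. The sample strengths are hypothetical, and the median-rank probability estimator is one of several common choices.

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate Weibull modulus m and scale sigma0 from single-fiber
    tensile strengths via the linearized empirical CDF:
    ln(-ln(1 - F)) = m * ln(sigma) - m * ln(sigma0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    F = (np.arange(1, n + 1) - 0.5) / n   # median-rank estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)            # slope m, intercept c = -m*ln(sigma0)
    sigma0 = np.exp(-c / m)
    return m, sigma0

# Hypothetical single-fiber strengths in GPa, for illustration only:
m, sigma0 = weibull_fit([0.9, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6])
```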
Overview of the MOSAiC expedition: Physical oceanography
Arctic Ocean properties and processes are highly relevant to the regional and global coupled climate system, yet still scarcely observed, especially in winter. Team OCEAN conducted a full year of physical oceanography observations as part of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC), a drift with the Arctic sea ice from October 2019 to September 2020. An international team designed and implemented the program to characterize the Arctic Ocean system in unprecedented detail, from the seafloor to the air-sea ice-ocean interface, from sub-mesoscales to pan-Arctic. The oceanographic measurements were coordinated with the other teams to explore the ocean physics and linkages to the climate and ecosystem. This paper introduces the major components of the physical oceanography program and complements the other team overviews of the MOSAiC observational program. Team OCEAN's sampling strategy was designed around hydrographic ship-, ice- and autonomous platform-based measurements to improve the understanding of regional circulation and mixing processes. Measurements were carried out both routinely, with a regular schedule, and in response to storms or opening leads. Here we present along-drift time series of hydrographic properties, allowing insights into the seasonal and regional evolution of the water column from winter in the Laptev Sea to early summer in Fram Strait: freshening of the surface, deepening of the mixed layer, increase in temperature and salinity of the Atlantic Water. We also highlight the presence of Canada Basin deep water intrusions and a surface meltwater layer in leads. MOSAiC was most likely the most comprehensive program ever conducted over the ice-covered Arctic Ocean. While data analysis and interpretation are ongoing, the acquired datasets will support a wide range of physical oceanography and multi-disciplinary research. They will provide a significant foundation for assessing and advancing modeling capabilities in the Arctic Ocean
A Comparison of Soft-In/Soft-Out Algorithms for Turbo-Detection
In turbo-detection the "turbo principle" is applied to joint equalization and decoding. The performance of a turbo scheme strongly depends on the quality of the soft values passed between the soft-in/soft-out decoders. In this paper we describe the differences between optimum and suboptimum soft-in/soft-out algorithms for equalization and decoding and compare them in a turbo-detection scheme with respect to complexity and performance for perfect and mismatched channel estimation. Furthermore, a possibility to improve the soft values of suboptimum algorithms, which tend to be too optimistic, is mentioned. I. INTRODUCTION AND PRINCIPLE OF TURBO-DETECTION The "turbo principle", which was first applied to parallel concatenated convolutional codes ("turbo codes") in [1], can be applied to many detection and decoding problems. The idea of turbo codes is to build a strong code by concatenation of simple component codes so that decoding can be performed in steps using algorithms of managea..
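The remark that suboptimum soft values "tend to be too optimistic" refers to approximations such as max-log processing; the sketch below, which is not from the paper, shows the dropped correction term and the common remedy of damping extrinsic LLRs by a fixed factor. The scale factor value is an assumption and in practice is tuned per scheme.

```python
import numpy as np

def jacobian_log(a, b):
    # Exact log-domain addition used by optimum (log-)MAP processing:
    # log(e^a + e^b) = max(a, b) + log(1 + e^{-|a-b|})
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def max_log(a, b):
    # Suboptimum max-log approximation: the correction term is dropped,
    # so the resulting soft values are systematically too large in
    # magnitude, i.e. "too optimistic".
    return np.maximum(a, b)

# A common remedy in a turbo loop: damp the extrinsic output of the
# suboptimum stage before feeding it back as a priori information.
EXTRINSIC_SCALE = 0.7  # assumed value; typically tuned per scheme and SNR

def damp_extrinsic(llr_extrinsic):
    return EXTRINSIC_SCALE * np.asarray(llr_extrinsic, dtype=float)
```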
Iterative Equalization and Decoding for the GSM-System
In iterative equalization and decoding ("turbo-detection") the "turbo principle" is used for detection of coded data transmitted over a frequency-selective channel. Due to the burst and interleaver structure, the turbo-detection scheme cannot be applied to GSM without modifications. We present some possibilities to adapt turbo-detection to TDMA systems with interblock interleaving like GSM and show that the turbo principle also works very well if extrinsic information is not available for all bits. Furthermore, we show that turbo-detection is a suitable method to improve the bit error rate not only of protected bits but also of bits transmitted uncoded in a system with unequal error protection, e.g. the class 2 bits in the GSM speech channel. 1 Introduction The so-called "turbo" principle, first used in [1] for iterative decoding of parallel concatenated codes, can be used in a wide variety of receiver detection and decoding tasks. The basic idea is to use a maximum a posteriori (MA..
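A minimal sketch, under assumptions rather than the paper's actual implementation, of the point that the turbo principle still works when extrinsic information is unavailable for some bits: positions whose host blocks have not yet been decoded simply receive a neutral a priori LLR of zero.

```python
import numpy as np

def apriori_from_partial_extrinsic(llr_extrinsic, available_mask):
    """Build a priori LLRs for the next equalization pass when, due to
    interblock interleaving, only some bit positions already have
    extrinsic information from decoded blocks. Unknown positions get
    LLR 0, i.e. 'no prior knowledge'."""
    llr_extrinsic = np.asarray(llr_extrinsic, dtype=float)
    mask = np.asarray(available_mask, dtype=bool)
    llr = np.zeros_like(llr_extrinsic)
    llr[mask] = llr_extrinsic[mask]
    return llr

# Example: extrinsic values exist for bit positions 0, 1 and 4 only.
L = apriori_from_partial_extrinsic([1.2, -0.4, 0.0, 0.0, 2.1],
                                   [True, True, False, False, True])
```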
Sediment budget of the Laptev Sea, sediment core PS51/092-12, PM9499, PM9462 (Table 2)
This article presents a mass balance calculation of the sediment sources and sinks of the Laptev Sea. Sediment input into three regional sectors calculated on the basis of fluvial sediment discharge and coastal erosion sediment supply is compared with sediment output as estimated from sedimentation rates of well-dated marine sediment cores and data on sediment export to the central Arctic Ocean by sea ice and through bottom currents.
Within the uncertainties of the calculations, input and output are very well balanced. The calculation reveals that the sediment budget of the Laptev Sea is mainly controlled by fluvial and coastal sediment input. The major fraction of the material is simply deposited on the Laptev Sea shelf. However, for the western Laptev Sea, where sedimentation rates are low due to the absence of large rivers, export by sea ice is the main output factor
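The mass balance described in both sediment-budget entries reduces to simple bookkeeping. The sketch below only names the terms given in the abstract (fluvial and coastal input versus shelf deposition, sea-ice export, and bottom-current export); all flux values must be supplied by the user, none are taken from the article.

```python
def sediment_balance(fluvial_input, coastal_input,
                     shelf_deposition, seaice_export, bottom_current_export):
    """Return (total input, total output, residual) for one shelf sector.
    All fluxes in the same unit, e.g. 10^6 t/yr; values are user-supplied,
    not from the article."""
    total_in = fluvial_input + coastal_input
    total_out = shelf_deposition + seaice_export + bottom_current_export
    return total_in, total_out, total_in - total_out
```

A residual that is small compared with the calculation's uncertainties corresponds to the abstract's finding that input and output are very well balanced.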
An estimation of the sediment budget in the Laptev Sea during the last 5,000 years
This article presents a mass balance calculation of the sediment sources and sinks of the Laptev Sea. Sediment input into three regional sectors calculated on the basis of fluvial sediment discharge and coastal erosion sediment supply is compared with sediment output as estimated from sedimentation rates of well-dated marine sediment cores and data on sediment export to the central Arctic Ocean by sea ice and through bottom currents.
Within the uncertainties of the calculations, input and output are very well balanced. The calculation reveals that the sediment budget of the Laptev Sea is mainly controlled by fluvial and coastal sediment input. The major fraction of the material is simply deposited on the Laptev Sea shelf. However, for the western Laptev Sea, where sedimentation rates are low due to the absence of large rivers, export by sea ice is the main output factor
Reduced-Complexity Optimization of Distributed Quantization Using the Information Bottleneck Principle
This paper addresses the optimization of distributed compression in a sensor network. Direct communication among the sensors is not possible, so noisy measurements of a single relevant signal have to be locally compressed in order to meet the rate constraints of the communication links to a common receiver. This scenario is widely known as the Chief Executive Officer (CEO) problem and represents a long-standing problem in information theory. In recent years significant progress has been achieved, and the rate region has been completely characterized for specific distributions of the involved processes and distortion measures. While algorithmic solutions of the CEO problem are known in principle, their practical implementation quickly becomes challenging for complexity reasons. In this contribution, an efficient greedy algorithm to determine feasible solutions of the CEO problem is derived using the information bottleneck (IB) approach. Following the Wyner-Ziv coding principle, the quantizers are successively designed using already optimized quantizer mappings as side-information. However, processing this side-information in the optimization algorithm becomes a major bottleneck because the memory complexity grows exponentially with the number of sensors. Therefore, a sequential compression scheme leading to a compact representation of the side-information and ensuring moderate memory requirements even for larger networks is introduced. This internal compression is again optimized by means of the IB method. Numerical results demonstrate that the overall loss in terms of relevant mutual information can be made sufficiently small even with a significant compression of the side-information. The performance is compared to separately optimized quantizers and a centralized quantization. Moreover, the influence of the optimization order for asymmetric scenarios is discussed
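As a toy illustration of the information bottleneck idea the paper builds on, maximizing relevant mutual information under a coarse quantization, the sketch below exhaustively places contiguous quantizer boundaries for a single scalar sensor. It is far simpler than the paper's greedy multi-sensor design with compressed side-information, and the restriction to contiguous boundaries is an assumption that holds for typical noisy-observation models. The joint pmf is hypothetical.

```python
import numpy as np
from itertools import combinations

def mutual_information(p_xz):
    # I(X;Z) in bits for a joint pmf given as a 2-D array p(x, z).
    px = p_xz.sum(axis=1, keepdims=True)
    pz = p_xz.sum(axis=0, keepdims=True)
    mask = p_xz > 0
    return float(np.sum(p_xz[mask] * np.log2(p_xz[mask] / (px * pz)[mask])))

def best_scalar_quantizer(p_xy, num_levels):
    """Exhaustive search over contiguous boundary placements of a
    deterministic quantizer z = q(y) maximizing the relevant
    information I(X;Z)."""
    n_y = p_xy.shape[1]
    best_mi, best_edges = -1.0, None
    for bounds in combinations(range(1, n_y), num_levels - 1):
        edges = (0,) + bounds + (n_y,)
        # Merge the y-columns that fall into the same quantizer cell.
        p_xz = np.stack([p_xy[:, a:b].sum(axis=1)
                         for a, b in zip(edges, edges[1:])], axis=1)
        mi = mutual_information(p_xz)
        if mi > best_mi:
            best_mi, best_edges = mi, edges
    return best_mi, best_edges

# Hypothetical joint pmf of a binary relevant X and an 8-level noisy Y:
p_y_given_x = np.array([[.30, .25, .18, .12, .08, .04, .02, .01],
                        [.01, .02, .04, .08, .12, .18, .25, .30]])
p_xy = 0.5 * p_y_given_x  # uniform prior on X
mi, edges = best_scalar_quantizer(p_xy, num_levels=3)
```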
Sea level evolution of Laptev and East Siberian Sea - evidence from geological data and glacial isostatic adjustment
The Laptev and East Siberian Seas are extended shallow shelf seas that partly fell dry during glacial times, when the global mean sea level (GMSL) was about 120 m below its present value. At the same time, tectonic activity is present, which is evident in uplifted marine terraces of the New Siberian Islands. The marine terraces may be identified and mapped in historical airborne photographs and recent radar imagery.
To improve the environmental history of this region, a reconstruction of the sea level and shoreline migration is necessary, based on modelling the glacial isostatic adjustment (GIA) including levering. GIA describes the deformational response of the solid earth to the glacially related water-mass redistribution, whereas levering describes only the deformational response of the solid earth to the varying ocean load.
For these shallow seas, we expect a deviation from the GMSL between +10 and +30 m from levering alone and, due to the vicinity of the Pleistocene ice sheets, a further correction on the order of +10 m. These mechanisms therefore reduce the GMSL drop between 10 and 30% at the Last Glacial Maximum and markedly influence the subsequent evolution of sea level. The variability is dominated by the rheological earth structure considered in the modelling. As the limited knowledge of the rheological earth structure hinders realistic predictions of GIA for this region, we will first discuss the variability of the sea level history due to GIA over the last 20,000 years. Then, we will constrain the model-dependent variability by considering geological proxies of sea level change for this region. Analyses of Laptev Sea sediment cores will reveal a detailed chronology of changing water masses linked to sea level rise
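The quoted percentage range follows from simple arithmetic on the numbers stated in the abstract; a minimal check, using only those values:

```python
# Values taken from the abstract; the percentage computation is a
# straightforward ratio of local deviation to the GMSL drop.
GMSL_DROP = 120.0             # m, GMSL below present at the Last Glacial Maximum
LEVERING = (10.0, 30.0)       # m, deviation from GMSL by levering alone
ICE_SHEET_CORRECTION = 10.0   # m, further correction near the Pleistocene ice sheets

for lev in LEVERING:
    print(f"levering {lev:+.0f} m -> reduces the {GMSL_DROP:.0f} m drop by "
          f"{100 * lev / GMSL_DROP:.0f}%")
# Levering alone gives roughly 8-25%; adding the ~+10 m ice-sheet term
# brings the combined deviation to about +20 to +40 m (~17-33%), in line
# with the 10-30% range quoted in the abstract.
```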